So the single place we can buy this (https://www.arrow.com/en/products/945-14070-0080-000/nvidia?...) is already showing no stock, and it's not clear this will even ship given the current customs and tariffs situation. I must say, after months of waiting on the 'almost ready to ship' DGX Spark (with multiple partners, no less), I'm getting strong announce-ware vibes from this one.
My naive first reaction was that a unit like that would consume way too much power to be practical on a robot, but then I remembered how many calories our own brains need vs the rest of our body (Google says about 20% of the body's total energy needs).
Looks like power consumption for the Thor T5000 is between 30W and 140W. The Unitree G1 (https://www.unitree.com/g1) has a 9Ah battery that lasts 2hrs under normal operation. Assuming an operating voltage of 48V (13S pack), that's a ~432Wh battery, which implies the robot's actuator and sensor power draw is ~216W.
Assuming the Thor's average draw lands in the middle of that range (85W), it would consume ~28% of the robot's total power budget. This doesn't account for the robot having to carry the additional weight of the compute unit, though. Can't say whether that's good or bad, just interesting that it's in the same ballpark.
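Sanity-checking that arithmetic (the 48V pack voltage and the 85W midpoint are assumptions, as noted above):

```python
# Back-of-the-envelope: what share of a Unitree G1's power budget
# would a Jetson Thor T5000 consume? The 48V nominal voltage and
# the 85W midpoint are assumptions from the comment above.
battery_wh = 9 * 48            # 9 Ah at 48 V nominal -> 432 Wh
robot_watts = battery_wh / 2   # drained over 2 hours -> 216 W
thor_watts = (30 + 140) / 2    # midpoint of Thor's 30-140 W range -> 85 W

share = thor_watts / (robot_watts + thor_watts)
print(f"Robot draw: {robot_watts:.0f} W, Thor share: {share:.0%}")  # ~28%
```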
worldsayshi · 1h ago
I tried to look up human wattage as a comparison and I'm very surprised that it lands in the same ballpark: around 145W as a daily average, and around 440W as an approximate hourly average during exercise.
I thought current gen robots would be an order of magnitude less efficient. Maybe I'm misunderstanding something.
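For reference, the conversion from dietary calories to average watts; the 3,000 kcal/day intake here is an assumed figure, chosen because it lands on the ~145W number quoted above:

```python
# Convert a daily calorie intake to an average power draw in watts.
# 1 dietary kcal = 4184 J; one day = 86,400 s.
kcal_per_day = 3000  # assumed intake that matches the ~145 W figure
watts = kcal_per_day * 4184 / 86_400
print(f"{kcal_per_day} kcal/day ~= {watts:.0f} W average")  # ~145 W
```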
BobbyJo · 1h ago
Electric motors are very energy-efficient. I believe they're actually far more efficient than muscle on a per-joint-movement basis, and the rough parity between us and them is largely due to inefficient locomotion.
Where we excel is energy storage. Far less weight, far higher density.
lm28469 · 56m ago
We do a whole lot of things a robot doesn't have to do, like filtering blood, digesting, keeping warm.
worldsayshi · 33m ago
Body maintenance.
LtdJorge · 34m ago
Each hardware component of such a robot can do a few things. Our body parts do orders of magnitude more, including growth and regeneration.
xattt · 48m ago
Can self-driving cars be framed as robots?
An electric car would have no issue sustaining this level of power; a gas-powered car doubly so.
AlotOfReading · 34m ago
Autonomous vehicles are indeed robots, but they have power constraints (that Thor can reasonably fit within). Most industrial robots aren't meaningfully power constrained though.
It was a bit of a culture shock the first time I worked with industrial robots, given how much power constraints had shaped the design of the systems I'd worked on before.
ilaksh · 16m ago
The GMKtec EVO-X2 AI Mini PC (AMD Ryzen AI Max+ 395) seems pretty similar and is only $2,000.
Would be interested to see head-to-head benchmarks, including power usage, between those mini PCs and the Nvidia Thor.
kjhughes · 1h ago
Here's NVIDIA's blog post on this:
NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI: https://blogs.nvidia.com/blog/jetson-thor-physical-ai-edge/
> CEO Jensen Huang has said robotics is the company’s largest growth opportunity outside of artificial intelligence
> The Jetson Thor chips are equipped with 128GB of memory, which is essential for big AI models.
Just put it into a robot and run some unhinged model on it; that should be fun.
bigfishrunning · 1h ago
The models that run on robots do things like "where is the road" or "is this package damaged"; people will run LLMs on this thing, but that's not its primary bread and butter.
ACCount37 · 29m ago
The future of advanced robotics likely requires LLM-scale models. With more bias towards vision and locomotion than the usual LLM, of course.
pradn · 35m ago
There's already this hilarious bot (https://www.instagram.com/rizzbot_official/). It's able to use people's outfits to woo them, or insult them. It's pretty good!
Edge compute has not been won yet. There's no entrenched CUDA-style ecosystem for it.
Someone other than Nvidia, please pay attention to this market.
Robots can't deal with the latency of calls back to the data center. Vision, navigation, 6DOF, articulation all must happen in real time.
This will absolutely be a huge market in time. Robots, autonomous cars, any sort of real time, on-prem, hardware type application.
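A rough sketch of why the round trip doesn't fit; the loop rates and the 40ms RTT below are assumed typical values, not measurements:

```python
# Latency-budget illustration: control-loop cycle time vs. a typical
# cloud round trip. All figures here are assumed typical values.
loop_hz = {"balance/locomotion": 500, "vision servoing": 60, "navigation": 10}
cloud_rtt_ms = 40  # assumed round trip to a data center

for task, hz in loop_hz.items():
    budget_ms = 1000 / hz
    verdict = "fits" if cloud_rtt_ms < budget_ms else "blows"
    print(f"{task}: {budget_ms:.1f} ms budget; {cloud_rtt_ms} ms RTT {verdict} it")
```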
nickfromseattle · 1h ago
What factors favor local GPUs over cloud inference? Is connectivity the dividing line, or are there other variables that influence the choice?
Anduril submersibles probably need local processing, but does my laundry/dishes robot need local processing? Or machines in factories? Or delivery drones?
michaelt · 1h ago
Any sort of continuous video processing, especially low-latency.
Imagine you were tracking items on video at a self-service checkout. Sure, you could compress the video down to 15 Mbps or so and send it to the cloud. But now, a store with 20 self-checkouts needs 300 Mbps of upload bandwidth. That's one more problem making it harder for Wal-Mart to buy and roll out your product.
Also, if you know you need an NVIDIA L4 dedicated to you 24/7 for a year, a g6.xlarge will cost $7,000/year on-demand or $4,300/year reserved [1] while you can buy the card for $2,500.
Of course, for many other use cases the cloud is the right choice: if you only need a fraction of a GPU, or you need a monster GPU only a tiny fraction of the time, or you need an enormous LLM that demands water cooling and easily tolerates latency.
[1] https://instances.vantage.sh/aws/ec2/g6.xlarge?currency=USD&...
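For the break-even on those quoted figures (a rough sketch that ignores power, hosting, and ops costs):

```python
# Compare the quoted AWS g6.xlarge prices against buying an L4 outright.
# Ignores power, hosting, and ops costs on the buy side.
on_demand_per_year = 7000   # USD, quoted on-demand figure
reserved_per_year = 4300    # USD, quoted reserved figure
card_price = 2500           # USD, quoted L4 price

print(f"vs on-demand, the card pays for itself in {card_price / on_demand_per_year * 12:.1f} months")
print(f"vs reserved,  the card pays for itself in {card_price / reserved_per_year * 12:.1f} months")
```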
Simple example: security cameras that only use bandwidth when they've detected something. The cost of live-streaming 20 cameras over 5G is very high; the cost of sending text messages with still images when a person is detected is reasonable.
pyrale · 1h ago
Why the hell would a dishwasher need to be connected, or smart for that matter?
I just want clean dishes/clothes, not to be upsold into some stupid shit that fails when it can’t ping google.com or gets bricked when the company closes.
I would pay a premium for certified mindless products.
sidewndr46 · 1h ago
This is anecdotal; I don't have any direct physical or written evidence to support it. But I talked to someone in the industry over a decade ago, when "run it on a GPU" was just heating up. It's drones. Not DJI ones; military ones with surveillance gear and weapons.
bigfishrunning · 1h ago
Mining, remote construction, remote power station inspection, battlefields. There are many, many places where a stable network connection can't be taken for granted.
newsclues · 1h ago
I want local processing for my local data. That includes my photos, documents and surveillance camera feeds.
exe34 · 46m ago
it depends if the plates were expensive.
probablydan · 1h ago
Can these be used for local inference on large models? I'm assuming the 128GB of memory is system memory, not GPU VRAM.
bigyabai · 52m ago
Yes, but it is substantially cheaper and usually faster to buy a Jetson Orin chip or build an x86 homelab.
shrubble · 51m ago
The Strix Halo 395 (or whatever it's called) is $2k with 128GB, but I think with less performance.
jauntywundrkind · 57m ago
Wow: notably a more advanced CPU than DGX GB200! 14 Neoverse V3AE cores, whereas Grace Hopper is 72x Neoverse V2. Comparing versus big GB100: 2560/96 CUDA/Tensor cores here vs big Blackwell's 18432/576 cores.
> Compared to NVIDIA Jetson AGX Orin, it provides up to 7.5x higher AI compute and 3.5x better energy efficiency.
I could really use a table of all the various options Nvidia has! Jetson AGX Orin (2023) seems to start at ~$1700 for a 32GB system, with 204GB/s bandwidth, 1792 Ampere CUDA cores, 56 Tensor cores, & 8 A78AE ARM cores, 200 TOPS "AI performance", 15-45W. A slightly bigger model with 2048/64/12 cores and 275 TOPS at 15-60W is also available. https://en.wikipedia.org/wiki/Nvidia_Jetson#Performance
Now Jetson T5000 is 2070 TFLOPS (but FP4 and sparse! Still ~double-ish after accounting for that). 2560-core Blackwell GPU, 96 Tensor cores, 14 Neoverse V3AE cores, 273GB/s, 128GB. 4x 25GbE is a neat new addition. 40-130W. There's also a lower-spec T4000.
Seems like a pretty in-line leap, at 2x the price!
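Sanity-checking NVIDIA's 7.5x / 3.5x claims against the spec numbers above; treating Orin TOPS and Thor sparse-FP4 TFLOPS as directly comparable is a generous assumption:

```python
# Check NVIDIA's "7.5x compute, 3.5x efficiency" claim from the specs
# quoted above. Comparing Orin's 275 TOPS against Thor's 2070 sparse
# FP4 TFLOPS head-to-head is a generous assumption.
orin_tops, orin_watts = 275, 60      # bigger AGX Orin model, max power
thor_tops, thor_watts = 2070, 130    # Thor T5000, max power

print(f"Compute: {thor_tops / orin_tops:.1f}x")                           # ~7.5x
print(f"Perf/W:  {(thor_tops / thor_watts) / (orin_tops / orin_watts):.1f}x")  # ~3.5x
```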
Looks like a physically pretty big unit. Big enough that, watching the intro video of robots opening up the package, I had to scratch my head and wonder: where are they going to fit their new brain? But man, the breakdown diagram: it's, unsurprisingly, half heatsink.
jsight · 1h ago
This sounds very similar to the DGX Spark, which still hasn't shipped AFAIK.
wmf · 2h ago
Orin was pretty expensive at $2,000; now Thor is significantly more.
AlotOfReading · 2h ago
Thor is a pretty big jump in power and the current prices are a bargain compared to what else is out there if you need the capabilities. I wish there was a competitive alternative, because Nvidia is horrible to work with.
mikepurvis · 42m ago
And now everyone's $2000 Orins will be stuck forever on Ubuntu 24.04 just like the Xaviers were abandoned on 20.04 and the TX1/2 on 18.04.
Nothing like explaining to your ML engineers that they can only use Python 3.6 because your deployed hardware is abandoned by the vendor.
CamperBob2 · 2h ago
128 GB for $3,499 doesn't sound bad at all.
vjk800 · 1h ago
What exactly does this chip do?
wmf · 1h ago
It has ARM CPU cores and an Nvidia GPU so it can do whatever you want but it's optimized for AI video analysis. Great for factory robots or self-driving cars.
neom · 2h ago
AGX Thor + TensorRT + SDXL-Turbo (or SD 1.5 LCM) + ControlNet (depth/canny) + ROS 2 + Isaac ROS + CUDA zero-copy camera feeds = fun!!
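For a sense of the shape of that stack, a minimal rclpy sketch; the node name, topic name, and the inference stub are all hypothetical, and a real Jetson pipeline would use Isaac ROS's hardware-accelerated nodes with zero-copy transport:

```python
# Minimal ROS 2 sketch of the camera -> inference loop described above.
# Names are hypothetical; a real Jetson pipeline would use Isaac ROS
# nodes and zero-copy (type-adapted) camera transport.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class InferenceNode(Node):
    def __init__(self):
        super().__init__("inference_node")
        self.sub = self.create_subscription(
            Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        # Placeholder: hand the frame to a TensorRT engine here
        # (e.g., a compiled SDXL-Turbo + ControlNet pipeline).
        self.get_logger().info(f"frame {msg.width}x{msg.height}")

def main():
    rclpy.init()
    rclpy.spin(InferenceNode())
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```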
a_t48 · 2h ago
You don’t need Thor nor ROS for this, but it can certainly help.
sabareesh · 2h ago
Looks very similar to DGX Spark.
justincormack · 1h ago
It is, with some small differences. It's also rumoured to become a laptop chipset in the future, although probably at lower power.
lousken · 48m ago
cost needs to be reduced by 90% to be viable
AlotOfReading · 41m ago
Serious question, what comparable hardware can you buy for 10% of the cost?
lousken · 12m ago
I meant that for a dev kit it's fine, but it's not viable for anything beyond that. It shouldn't cost 100x an RPi if you're going to use it as part of a robot.
croes · 1h ago
The shovel seller's new shovels.
throwawayoldie · 1h ago
If I were Jensen Huang, the first thing I'd do...well, the _first_ thing I'd do is ditch the silly leather jacket and dress like an adult. But the first thing I'd do with Nvidia is make sure the company's product line is well diversified for the coming AI winter.
cheema33 · 24m ago
> the _first_ thing I'd do is ditch the silly leather jacket and dress like an adult
Given what he has accomplished, he has more than earned the right to wear a leather jacket if he wants to. People didn't complain about Steve Jobs wearing a turtleneck for the same reason. When you have accomplished as much as these guys, then you can dish out fashion advice and maybe someone will listen.
throwawayoldie · 7m ago
Aren't you noble, standing up for the rights of the poor oppressed billionaires.
metalliqaz · 1h ago
First thing I'd do is keep making more expensive shovels until gold miners stop buying them.