Nvidia's new 'robot brain' goes on sale for $3,499
90 points by tiahura on 8/25/2025, 4:19:36 PM | 82 comments | cnbc.com
https://www.arrow.com/en/products/945-14070-0080-000/nvidia?...
Looks like power consumption for the Thor T5000 is between 30W-140W. The Unitree G1 (https://www.unitree.com/g1) has a 9Ah battery that lasts 2hrs under normal operation. Assuming an operating voltage of 48V (13s battery), that implies the robot's actuator and sensor power usage is ~216W.
Assuming average power usage is somewhere in the middle (85W), a Thor unit would consume ~28% of the robot's total power needs. This doesn't account for the fact that the robot would have to carry around the additional weight of the compute unit, though. Can't say if that's good or bad, just interesting to see that it's in the same ballpark.
An electric car would have no issue sustaining this level of power; a gas-powered car doubly so.
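For anyone who wants to poke at it, here's that estimate as a quick Python sketch. The 48V (13s) pack voltage and the 85W midpoint are the assumptions stated above, not measured figures:

  # Back-of-envelope power budget from the numbers above.
  battery_wh = 9 * 48                # Unitree G1: 9Ah at an assumed 48V -> 432Wh
  robot_w = battery_wh / 2           # 2h quoted runtime -> ~216W actuators/sensors
  thor_w = (30 + 140) / 2            # midpoint of Thor T5000's 30-140W range -> 85W
  share = thor_w / (robot_w + thor_w)
  print(f"Thor's share of total draw: {share:.0%}")  # ~28%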
It was a bit of a culture shock the first time I worked with industrial robots, given how much power constraints had shaped the design of the previous systems I'd worked on.
I thought current gen robots would be an order of magnitude less efficient. Maybe I'm misunderstanding something.
Where we excel is energy storage. Far less weight, far higher density.
2000 kilocalorie converts to 8.3 megajoules. This should be the amount of energy consumed per day.
8.3 megajoules / 24 hours is 96 watts. This should be the average rate of energy expenditure.
96 watts * 20% is 19 watts. This should be the portion your brain uses out of that average.
96 watts * 20% * 24 hours is ~461 watthours. This should be the amount of energy your brain uses in a day.
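The same chain as a sketch, using the rounded 8.3 MJ figure from above:

  # Human energy budget, spelled out.
  daily_j = 8.3e6                  # 2000 kcal/day -> ~8.3 MJ
  avg_w = daily_j / (24 * 3600)    # ~96W average rate of expenditure
  brain_w = 0.20 * avg_w           # ~19W for the brain
  brain_wh = brain_w * 24          # ~461Wh/day for the brain
  print(f"{avg_w:.0f}W avg, {brain_w:.0f}W brain, {brain_wh:.0f}Wh/day")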
This is why I've never found "AI" to be particularly competitive with human beings. The level of energy efficiency our brains operate at is amazing. Our electrical and computer engineering is several orders of magnitude behind the achievements of nature and biology.
That's very rough back-of-the-envelope math, though.
A robot could be useful even when permanently plugged into the grid.
NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI
https://blogs.nvidia.com/blog/jetson-thor-physical-ai-edge/
Anduril submersibles probably need local processing, but does my laundry/dishes robot need local processing? Or machines in factories? Or delivery drones?
Imagine you were tracking items on video at a self-service checkout. Sure, you could compress the video down to 15 Mbps or so and send it to the cloud. But now, a store with 20 self-checkouts needs 300 Mbps of upload bandwidth. That's one more problem making it harder for Wal-Mart to buy and roll out your product.
Also, if you know you need an NVIDIA L4 dedicated to you 24/7 for a year, a g6.xlarge will cost $7,000/year on-demand or $4,300/year reserved [1] while you can buy the card for $2,500.
Of course, for many other use cases the cloud is a fine choice: if you only need a fraction of a GPU, if you only need a monster GPU a tiny fraction of the time, or if you run an enormous LLM that demands water cooling and easily tolerates latency.
[1] https://instances.vantage.sh/aws/ec2/g6.xlarge?currency=USD&...
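As a first-order payback sketch (ignoring power, hosting, and residual value):

  # Months until buying the L4 beats renting a g6.xlarge, per the figures above.
  card = 2500
  for label, per_year in [("on-demand", 7000), ("reserved", 4300)]:
      months = card / (per_year / 12)
      print(f"{label}: card pays for itself in ~{months:.1f} months")
  # on-demand: ~4.3 months; reserved: ~7.0 months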
Simple example: security cameras that only use bandwidth when they've detected something. The cost of live-streaming 20 cameras over 5G is very high. The cost of sending text messages with still images when you see a person is reasonable.
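To put rough numbers on the gap: the per-camera bitrate, event rate, and still size below are made-up assumptions for illustration, not figures from the comment:

  # Continuous streaming vs event-triggered stills, assumed values.
  cams = 20
  stream_mbps = 2.0                      # assumed per-camera bitrate
  events_day, still_mb = 50, 0.2         # assumed detections/day, JPEG size
  stream_gb = cams * stream_mbps / 8 * 86400 / 1000  # ~432 GB/day
  stills_gb = cams * events_day * still_mb / 1000    # ~0.2 GB/day
  print(f"streaming: ~{stream_gb:.0f} GB/day, stills: ~{stills_gb:.1f} GB/day")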
I just want clean dishes/clothes, not to be upsold into some stupid shit that fails when it can’t ping google.com or gets bricked when the company closes.
I would pay a premium for certified mindless products.
> The Jetson Thor chips are equipped with 128GB of memory, which is essential for big AI models.
Just put it into a robot and run some unhinged model on it, that should be fun.
https://www.instagram.com/rizzbot_official/
Does "robotics outside of AI" imply they want to get into making actual robots (beyond the GPU "brains")?
Edge compute has not yet been won. There is no CUDA-like ecosystem for it yet.
Someone other than Nvidia, please pay attention to this market.
Robots can't deal with the latency of calls back to the data center. Vision, navigation, 6DOF, articulation all must happen in real time.
This will absolutely be a huge market in time. Robots, autonomous cars, any sort of real time, on-prem, hardware type application.
And just like with CUDA, no one else currently has the resources to go there, because at the moment you have to invest heavily without any return.
> Compared to NVIDIA Jetson AGX Orin, it provides up to 7.5x higher AI compute and 3.5x better energy efficiency.
I could really use a table of all the various options Nvidia has! Jetson AGX Orin (2023) seems to start at ~$1700 for a 32GB system, with 204GB/s bandwidth, 1792 Ampere CUDA cores, 56 Tensor cores, & 8 Cortex-A78AE ARM cores, 200 TOPS "AI Performance", at 15-45W. A slightly bigger model with 2048/64/12 cores & 275 TOPS at 15-60W is also available. https://en.wikipedia.org/wiki/Nvidia_Jetson#Performance
Now Jetson T5000 is 2070 TFLOPS (but FP4 - Sparse! Still ~double-ish), with 2560 Blackwell CUDA cores, 96 Tensor cores, & 14 Neoverse V3AE CPU cores, 273GB/s bandwidth, and 128GB of memory. 4x 25GbE is a neat new addition. 40-130W. There's also a lower-spec T4000.
Seems like a pretty in-line generational leap at 2x the price!
Looks like a physically pretty big unit. Big enough that, watching the intro video of robots opening up the package, I scratched my head wondering: where are they going to fit their new brain? But man, the breakdown diagram: it's, unsurprisingly, half heatsink.
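For what it's worth, the quoted 7.5x does line up with those spec-sheet numbers, though the precisions differ (Orin's TOPS vs Thor's sparse-FP4 TFLOPS), so this is the marketing comparison rather than like-for-like:

  # Sanity check of the generational claims from the figures above.
  orin_tops, thor_tflops = 275, 2070   # top Orin config vs Thor T5000
  orin_bw, thor_bw = 204, 273          # memory bandwidth, GB/s
  print(f"compute: {thor_tflops / orin_tops:.1f}x")  # ~7.5x
  print(f"bandwidth: {thor_bw / orin_bw:.2f}x")      # ~1.34x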
While the Cortex-X925, the successor of Cortex-X4, has better absolute performance, it has much worse performance per die area. Therefore, for a CPU where the best multi-threaded performance is desired, Neoverse V3AE/Neoverse V3/Cortex-X4 remains the best CPU core designed by the Arm company.
This year's Arm core announcements have been delayed and it is not clear how the future Cortex-A930 and Cortex-X930 will compare with the currently existing Cortex-X4, Cortex-A725 and Cortex-X925.
Would be interested to see head to head benchmarks including power usage between those mini PCs and the Nvidia Thor.
When using larger number formats and/or dense matrices, the advantage of Thor diminishes considerably.
Also, the 50 TOPS figure is only from the low-power NPU. When the computation is also distributed over the GPU and CPU you get much more. So for a balanced comparison one has to divide the Thor value by 4 or more and multiply the Ryzen value by a factor that might be around 3 or even more.
The Ryzen CPU is significantly better, and its GPU is about the same size but has a much higher clock frequency, so it should also be faster. So for anything except AI inference, a Ryzen Max at half the price will offer much more bang for the buck.
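Writing that adjustment out (the divide-by-4 for Thor and the roughly-3x for the Ryzen are the rough factors suggested above, not measured values):

  # Normalizing the headline numbers with the parent's rough factors.
  thor = 2070 / 4    # sparse FP4 headline scaled toward a denser format, ~518
  ryzen = 50 * 3     # NPU-only TOPS scaled to include GPU + CPU, ~150
  print(f"adjusted ratio: {thor / ryzen:.1f}x")  # ~3.5x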
Nothing like explaining to your ML engineers that they can only use Python 3.6 on an EOL operating system because you deployed a bunch of hardware shortly before the vendor released a new shiny thing and abruptly lost interest in supporting everything that came before.
And yes, TX2 was launched in 2017, but Nvidia continued shipping them until the end of 2024, so it's absurd they never got updated software: https://forums.developer.nvidia.com/t/jetson-tx2-lifecycle-e...
AMD has managed to blunder multiple opportunities to launch something into this space and earn the trust of developers. And no, NUC form factor APU machines are not the answer— both for power/heat concerns and the software integration story being an incomplete patchwork.
Therefore it sounds quite bad. Like Orin before it, Thor is severely overpriced.
It is worthwhile only for those who absolutely need some of its features that are not available elsewhere, like automotive certification, or not needing additional boards when interfacing with a large number of video cameras.
It appears to be an embedded DGX Spark, at the end of the day.
The CPU of DGX Spark has better single-threaded performance, while that of Thor has better multi-threaded performance per die area and per power consumption.
If you're buying a durable good like a warehouse robot or household-chores robot that costs as much as a car, this doesn't seem like that high a starting point for the market segment to me.
Anyone who can use an RPi (or one of many other SoCs in that class) should absolutely consider them, but that's not the market this is competing in. RPis are more comparable to the Jetson Nano line, which had sub-$100 dev kits. Slightly above that are the Orin-based Tegras like the SoC in the Switch 2, which are still clearly viable.
Given what he has accomplished, he has more than earned the right to wear a leather jacket if he wants to. People didn't complain about Steve Jobs wearing a turtleneck for the same reason. When you have accomplished as much as these guys, then you can dish out fashion advice and maybe someone will listen.
> “So I called Issey [Miyake] and asked him to design a vest for Apple,” Jobs recalled. “I came back with some samples and told everyone it would be great if we would all wear these vests. Oh man, did I get booed off the stage. Everybody hated the idea.”
> “So I asked Issey to make me some of his black turtlenecks that I liked, and he made me like a hundred of them.” Jobs noticed my surprise when he told this story, so he gestured to them stacked up in the closet. “That’s what I wear,” he said. “I have enough to last for the rest of my life.”
I wonder if Issey thought the turtlenecks might be the Apple uniform, and that's why he made lots.