Hierarchical Reasoning Model outperforms LLMs at reasoning tasks
2 points by geox 1 comment 8/27/2025, 1:00:37 PM (livescience.com)
https://arcprize.org/blog/hrm-analysis
...we made some surprising findings that call into question the prevailing narrative around HRM:
1. The "hierarchical" architecture had minimal performance impact when compared to a similarly sized transformer.
2. However, the relatively under-documented "outer loop" refinement process drove substantial performance, especially at training time.
3. Cross-task transfer learning has limited benefits; most of the performance comes from memorizing solutions to the specific tasks used at evaluation time.
4. Pre-training task augmentation is critical, though only 300 augmentations are needed (not 1K augmentations as reported in the paper). Inference-time task augmentation had limited impact.
Findings 2 & 3 suggest that the paper's approach is fundamentally similar to Liao and Gu's "ARC-AGI without pretraining".
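To make finding 2 concrete, here is a minimal sketch of what an "outer loop" refinement process with per-step training losses (deep supervision) can look like. The function names, the `prev` conditioning argument, and the step count are illustrative assumptions, not the paper's actual code:

```python
# Hypothetical sketch: the model repeatedly re-predicts from its own
# previous output (the "outer loop"), and at training time every
# refinement step contributes a loss term (deep supervision).

def refine(model, x, num_outer_steps=3):
    """Outer refinement loop at inference time."""
    y = model(x, prev=None)           # initial prediction
    for _ in range(num_outer_steps - 1):
        y = model(x, prev=y)          # re-predict, conditioned on last output
    return y

def training_losses(model, loss_fn, x, target, num_outer_steps=3):
    """Deep supervision: collect one loss per outer step."""
    losses = []
    y = None
    for _ in range(num_outer_steps):
        y = model(x, prev=y)
        losses.append(loss_fn(y, target))
    return losses  # sum or average these before backpropagating
```

The point of supervising every step, rather than only the final one, is that intermediate refinements receive a direct training signal, which is what the analysis identifies as the main driver of performance.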