A conservative vision for AI alignment
7 points by flypunk on 8/28/2025, 5:32:59 PM | lesswrong.com
> Not suffering for its own sake, or trauma, but the kind of tension that arises from limits, from saying no, from sustaining hard-won compromises. In a reductionist frame, every source of pain looks like a bug to fix, any imposed limit is oppression, and any disappointment is something to try to avoid. But from a holistic view, pain often marks the edge of a meaningful boundary. The child who’s told they can’t stay out late may feel hurt. But the boundary says: someone cares enough to enforce limits. Someone thinks you are part of a system worth preserving. To be unfairly simplistic, values can’t be preserved without allowing this pain, because limits are painful.
Good lord how much meaningless slop can you spew onto one page.
The Twilight Zone episode "A Nice Place to Visit"* is about a man who gets whatever wish he desires. Initially he's overjoyed, but after a month he becomes numb and miserable: with no conflict, he has no purpose (it turns out, he's in hell). In reality, a superintelligence that could grant anything could grant more: it could make people not "feel" numb and purposeless even though they have everything. But what would they "feel", would they be conscious, would they be "human"?
This is something that the article sort of addresses: perhaps there's something inherent to conflict and struggle. Also, that people often ask for things that make them sad in the long run: e.g. children asking to eat junk food and stay up late, forming bad habits that hurt them later in life. A near-godlike superintelligence could solve most modern problems (e.g. maintain people's health and sleep/wake states regardless of what/when they eat/sleep), but would those fixes create future problems it can't solve? Basically, giving people whatever they want (the article's definition of "liberalism", which has become a term with many common definitions) has consequences.
Sure, taking this reasoning too far lets you justify any suffering (because "suffering is necessary") and any restriction (because "allowing it would make you unhappy in the long run"). But I think even most liberals can acknowledge it's a fair consideration: at least to prevent the Twilight Zone scenario or the loss of humanity, or because solving problems too fast, without thinking through and accommodating the solution, can create larger, unsolvable problems later. See: LLMs making people stupid, promoting delusions, increasing the wealth gap, and polluting social discourse even more than now.
My stupid opinion: that's an impossible question, but it's also one we don't need to solve. What we have right now is AI that's far from superintelligent, and lots of problems, including the ones I described above. I think what we should do, and the only thing we can do right now, is keep solving problems; we should try to solve them in ways that create the smallest second-order problems, but only avoid solving a problem if every solution is likely to create a larger second-order one.
My politics lean towards "live and let live" largely because it's practical. Restraints based on "moral" and "holistic" principles do benefit some people in the long run, but whenever they're applied on a large scale, they hurt more people. That's because one person only knows better than somebody else what's good for them if they're significantly more competent (in whatever category is at stake), and if they really understand that person's values and emotions (especially what makes them happy or sad).
* https://en.wikipedia.org/wiki/A_Nice_Place_to_Visit