GPT-5 is a joke. Will it matter?
33 points by dbalatero | 36 comments | 8/13/2025, 2:30:30 AM | bloodinthemachine.com ↗
The reason true singularity-level ASI will never happen isn't that the technology cannot support it (though, to be clear: it can't). It's because no one actually wants it. The capital markets will take care of the rest.
Plenty of people want something capable of doing tasks well beyond what GPT-5 can, performed as capably as by a proficient human.
If you can do that more cheaply, faster, or with greater availability than said skilled human, there is definitely a market for it.
As much as we hate on Meta, open models are the answer.
ASI isn't about what people want or not. It's an AGI that is able to self-improve at a rapid rate. We haven't built something that can self-improve continuously yet; it's not a question of whether people want it. Personally, I do want ASI.
Except it seems like it's a worse Google, therapist, life coach, and programming mate, with the personality of someone who spends all their time trying to solve the Riemann hypothesis.
But the rest I agree with.
2) Maybe I'm biased because I'm using GPT-5 Pro for my coding, but so far it's been quite good. Normal thinking mode isn't substantially better than o3 IMO, but that's a limitation of data/search.
Like, I don’t care how much Sam Altman hyped up GPT-5, or how many dumb people fell for it. I figured it was most likely GPT-5 would be a good improvement over GPT-4, but not a huge improvement, because, idk, that’s mostly how these things go? The product and price are fine, the hype is annoying but you can just ignore that.
If you feel it’s hard to ignore it, then stop going on LinkedIn.
All I want to know is which of these things are useful, what they’re useful for, and what’s the best way to use them in terms of price/performance/privacy tradeoffs.
I don't know if it has improved at all lately, but for a while it seemed like every startup I could find was the same variation on "healthcare note-taking AI".
Speed.
ChatGPT 5 Thinking is So. Much. Slower. than o4-mini and o4-mini-high. Like between 5 and 10 times slower. Am I the only one experiencing this? I understand they were "mini" models, but those were the current-gen thinking models available to Pro. Is GPT 5 Thinking supposed to be beefier and more effective? Because the output feels no better.
> As is true with a good many tech companies, especially the giants, in the AI age, OpenAI’s products are no longer primarily aimed at consumers but at investors
https://brokk.ai/power-rankings
Also, that gpt-5-nano can't handle the work that gpt-4o-mini could (and is somehow slower?) is also surprising and bad, but they'd really painted themselves into a corner with the version numbers.
We are investing literally hundreds of billions into something that is looking more and more likely to flop than to succeed.
What scares me the most is that we are being steered into a sunk-cost fallacy. Industry will continue to claim it is just around the corner, more and more infrastructure will be built, and even groundwater is being rationed in certain places because AI datacenters apparently deserve priority.
Are we being forced into a situation where we are too invested in this to come face to face with the fact that it doesn't work and it makes everything worse?
What is this capacity being built for? It no longer makes any sense.
It's just like the dot-com bubble: everyone was pumped that the Internet was going to take over the world. And even though the bubble popped, the Internet did eventually take over the world.
Open source and last years models will be good enough for 99% of people. Very few are going to pay billions to use the absolute best model there is.
I don't think there's the kind of systemic risk you had in, say, 2008, is there? But I do think there is likely to be a "correction", to put it lightly.
For example, I want to use an LLM system to track every promise a politician makes ever and see if he or she actually did it. No one is going to give me $1 billion to do this. But I think it would enhance society.
I don't need an AGI to do this idea. I think current LLMs are already more than good enough. But LLM prices are still way too high to do this cost-effectively. When inference is relatively as cheap as serving up a web page, LLMs will be ubiquitous.
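To make the "cheap as a web page" bar concrete, here's a back-of-the-envelope sketch. Every number in it is an assumption for illustration (a rough CDN-like cost per static page view and made-up per-token prices for a small model), not a quote from any provider:

```python
# All prices below are assumptions for illustration, not real quotes.
PAGE_COST = 0.000001          # assume ~$1 per million static page views
PRICE_PER_1M_INPUT = 0.15     # assumed $ per 1M input tokens, small model
PRICE_PER_1M_OUTPUT = 0.60    # assumed $ per 1M output tokens

def llm_call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one LLM request at the assumed token prices."""
    return (input_tokens * PRICE_PER_1M_INPUT +
            output_tokens * PRICE_PER_1M_OUTPUT) / 1_000_000

# e.g. classifying one political statement: ~500 tokens in, ~100 out
cost = llm_call_cost(500, 100)
print(f"LLM call: ${cost:.6f}  vs  page view: ${PAGE_COST:.6f}")
print(f"ratio: {cost / PAGE_COST:.0f}x")
```

Under these assumed prices a single small classification call is still a couple of orders of magnitude more expensive than serving a page, which is roughly the gap the comment is pointing at.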
I'm just imagining in 2030 when there is an absolutely massive secondary market for used GPUs.
Yes, some people are concerned; see for example a recent Hank Green YouTube video:
https://www.youtube.com/watch?v=VZMFp-mEWoM
I'm probably more concerned than he is, in that a large market correction would be bad, but even scarier are the growing (in number and power) techno-religious folks who really think we are on the cusp of creating an electronic god, and who are willing to trash what's left of the planet in an attempt to birth it, trusting it will magically save us.
Yes, but don't worry, they're getting close to the government so they'll get bailed out.
> What is this capacity being built for? It no longer makes any sense.
Give more power to Silicon Valley billionaires.