I'm not sure the comparison is apples to apples, but this article claims the current AI investment boom pales compared to the railroad investment boom in the 19th century.
> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!
Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century. This means the ongoing AI CapEx boom has plenty of room to run before it reaches parity with the railroad boom of that bygone era.
tripletao · 3h ago
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.
Has anyone found the source for that 20%? Here's a paper I found:
> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.
The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.
I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:
> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.
That would be more believable, but the comparison with AI spending in a single year would not be meaningful.
jefftk · 2h ago
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.
When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)
eru · 1h ago
Yes, that was a problem back then, and is also a problem today, but in different ways.
First, GDP still doesn't count you making your own meals. Second, when eg free Wikipedia replaces paid-for encyclopedias, this makes society better off but technically decreases GDP.
However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.
jefftk · 58m ago
This has always been an issue with GDP, but it's a much larger issue the farther back you go.
While GDP correlates reasonably well, imagine, very roughly, that measured GDP grew 3% annually while the overall economy grew at 2%. The correlation would still be good, but if we speculate that 80% of the economy is counted in GDP today, then only about 10% would have been counted 200 years ago.
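Very roughly, the compounding can be checked in a couple of lines (the 3%/2%/80% figures are the thought experiment's assumptions, not data):

```python
# If measured GDP grows 1 percentage point faster than the whole economy,
# the measured share shrinks as you walk backwards in time.
measured_share_today = 0.80
share_200y_ago = measured_share_today * (1.02 / 1.03) ** 200
print(f"{share_200y_ago:.0%}")  # prints roughly 11%
```

So the comment's ~10% figure is in the right ballpark under those assumptions.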
onlyrealcuzzo · 2h ago
I don't know if the economy could ever be accurately reduced to "good" or "bad".
What's good for one class is often bad for another.
Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
For some people that's great. For others, not so great.
Maybe some economies are great for everyone, but this is definitely not one of those.
This economy is great for some people and bad for others.
fc417fc802 · 2h ago
> Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
In today's US? Debatable, but on the whole probably not.
In a hypothetical country with sane health care and social safety net policies? Yes, that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs, making it a much more straightforward collective optimization problem.
eru · 1h ago
The US spends more per capita on their social safety net than almost all other countries, including France and the UK.
The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.
For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.
(The data is for 2022.)
Though I realise you asked for sane policies. I can't comment on that.
I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.
lossolo · 1h ago
It’s not just how much you spend on healthcare, but what that spending actually delivers. How much does an emergency room visit cost in the U.S. compared to the UK or France? How do prescription drug prices in the U.S. compare to those in the EU? When you look at what Americans pay relative to outcomes, the U.S. has one of the most inefficient healthcare systems among OECD countries.
eru · 59m ago
If you want to see an efficient healthcare system in a rich country, have a look at Singapore. They spend far less than eg the UK.
bugglebeetle · 1h ago
> The US spends more per capita on their social safety net than almost all other countries, including France and the UK.
To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!
Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…
marcus_holmes · 1h ago
Agree completely. The idea that an increasing GDP or stock market is always good has taken a beating recently. Mostly because it seems that the beneficiaries of that number increase are the same few who already have more than enough, and everyone else continues to decline.
We need new metrics.
eru · 1h ago
What's a class?
decimalenough · 6h ago
There is obvious utility to railroads, especially in a world with no cars.
The net utility of AI is far more debatable.
rockemsockem · 5h ago
I'm continually amazed to find takes like this. Can you explain how you don't find clear utility, at the personal level, from LLMs?
I am being 100% genuine here, I struggle to understand how the most useful things I've ever encountered are thought of this way and would like to better understand your perspective.
tikhonj · 2h ago
Can't speak for anyone else, but for me, AI/LLMs have been firmly in the "nice but forgettable" camp. Like, sometimes it's marginally more convenient to use an LLM than to do a proper web search or to figure out how to write some code—but that's a small time saving at best, it's less of a net impact than Stack Overflow was.
I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning which, ultimately, is an O(1) cost for O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...
So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.
On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.
And that's living in the Bay Area, where the trains aren't great. BART, Caltrain, and Amtrak disappearing would have an orders-of-magnitude larger effect on my life than if LLMs stopped working.
And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.
roncesvalles · 12m ago
As a dev, I find that the personal utility of LLMs is still very limited.
Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.
Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.
You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.
Other elephants in the room:
- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?
- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.
The emperor has no clothes.
bluefirebrand · 2h ago
> Can you explain how you don't find clear utility, at the personal level, from LLMs?
Sure. They don't meaningfully improve anything in my life personally.
They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either
rockemsockem · 1h ago
Have you even tried using them though? Like in earnest? Or do you see yourself as a conscientious objector of sorts?
bluefirebrand · 42m ago
I have tried using them frequently. I've tried many things for years now, and while I am impressed I'm not impressed enough to replace any substantial part of my workflow with them
At this point I am somewhat of a conscientious objector though
Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"
decimalenough · 2h ago
I actually do get clear utility, with major caveats, namely that I only ask things where the answer is both well known and verifiable.
I still do 10-20x regular Kagi searches for every LLM search, which seems about right in terms of the utility I'm personally getting out of this.
shusaku · 3m ago
The concerning thing is that AI contrarianism is becoming left-wing coded. Imagine you're fighting a war and one side decides "guns are overhyped, let's stick with swords". While there is a lot of hype about AI, even the pessimistic take has to admit it's a game-changing tech. If it isn't doing anything useful for you, that's because you need to get off your butt and start building tools on top of it.
agent_turtle · 5h ago
There was a study recently showing that not only did devs overestimate the time saved using AI, but they were net negative compared to the control group.
Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
> Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
To be clear, they are surmising that GenAI is already having a productivity gain.
agent_turtle · 4h ago
The article you gave is derived from a poll, not a study.
As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o
It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25-33% productivity gains do match the gains shown by at least three other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.
foolswisdom · 45m ago
It's worth noting that the METR paper that found decreased productivity also found that many of the developers thought the work was being sped up.
rockemsockem · 4h ago
I'm not talking about time saving. AI seems to speed up my searching a bit since I can get results quicker without having to find the right query then find a site that actually answers my question, but that's minor, as nice as it is.
I use AI in my personal life to learn about things I never would have otherwise, because it makes the cost of finding basic knowledge essentially zero: diet-improvement ideas based on a few quick questions about gut function, recently learning how to gauge tsunami severity, and tons of other things. Once you have a few fundamental terms and phrases for a new topic, it's easy to validate the information with some quick googling too.
How much have you actually tried using LLMs, and did you use normal chat or some big, grand, complex tool? I mostly just use chat and prefer to enter my code artisanally.
lisbbb · 1h ago
How much of that is junk knowledge, though? I mean, sure, I love looking up obscure information, particularly about cosmology and astronomy, but in reality, it's not making me better or smarter, it's just kind of "science junk food." It feels good, though. I feel smarter. I don't think I am, though, because the things I really need to work on about myself are getting pushed aside.
flkiwi · 2h ago
This is kind of how I use it:
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable
It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.
fzeroracer · 3h ago
Why not just look up the information directly instead of asking a machine that you can never truly validate?
If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.
If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.
rockemsockem · 2h ago
See my previous statement
> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
You're practically saying that looking at an index in the back of a book is a meaningless step.
It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.
Edit:
Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.
fzeroracer · 2h ago
Again, why would you not just use Wikipedia as your index? I'm asking why you would use the index that lies and hallucinates to you instead of another perfectly good index elsewhere.
You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?
rockemsockem · 2h ago
Because the middleman is faster and practically never lies/hallucinates for simple queries, and the middleman can handle vague queries that Google and Wikipedia cannot.
The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.
lisbbb · 59m ago
A lot of formerly useful search tools, particularly Google, are just trash now, absolute trash.
ares623 · 2h ago
Is the "here and there" tasks that were previously so little value that they are always stuck in the backlog? i.e. the parts where it helps have very little value in the first place.
kazinator · 2h ago
Gemini wasted my time today assuring me that if I want a git bundle that only has the top N commits, yet is cleanly clone-able, I can just make a --depth N clone of the original repo and do a git bundle create ... --all.
Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.
So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)
Obviously, I wasn't in the "the right mindset" today.
This mindset is one of two things:
- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.
- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to, in effect, create a query that zeroes in on the right nugget of information that made an appearance in the training data.
It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically do most of the work, while it looks as if it solved the problem.
I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.
The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.
Let's try something else:
Q: "What modes of C major are their own reflection?"
A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."
Very nice sounding and grammatical, but gapingly wrong on every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections: Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
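The corrected mapping is easy to verify mechanically. A small sketch (mine, not output from either model) that reverses each mode's step pattern, with W = 2 semitones and H = 1:

```python
# Interval patterns of the seven modes of C major, in semitones.
MODES = {
    "Ionian":     [2, 2, 1, 2, 2, 2, 1],
    "Dorian":     [2, 1, 2, 2, 2, 1, 2],
    "Phrygian":   [1, 2, 2, 2, 1, 2, 2],
    "Lydian":     [2, 2, 2, 1, 2, 2, 1],
    "Mixolydian": [2, 2, 1, 2, 2, 1, 2],
    "Aeolian":    [2, 1, 2, 2, 1, 2, 2],
    "Locrian":    [1, 2, 2, 1, 2, 2, 2],
}

def reflection(name):
    """Return the mode whose interval pattern is `name`'s pattern reversed."""
    reversed_pattern = MODES[name][::-1]
    return next(m for m, p in MODES.items() if p == reversed_pattern)

for mode in MODES:
    print(mode, "->", reflection(mode))
```

Running it shows Dorian mapping to itself and the pairs Ionian/Phrygian, Lydian/Locrian, and Mixolydian/Aeolian.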
I once loaded a NotebookLM with materials about George Russell's concepts of the Lydian Chromatic and tonal gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.
AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.
lisbbb · 56m ago
Thank you! This is what I've been trying to tell people about LLMs. They don't hold up. They're like those Western movie set towns that look normal from the front, but when you walk around behind them, you see it is all just scaffolding with false fronts.
ishyaboibro · 40m ago
what model did you ask? here's the exact reply I received from Claude Sonnet, which appears to be exactly the answer you were expecting:
"Among the seven modes of C major, only Dorian is its own reflection.
Understanding Mode Reflections
When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:
Ionian: W-W-H-W-W-W-H
Dorian: W-H-W-W-W-H-W
Phrygian: H-W-W-W-H-W-W
Lydian: W-W-W-H-W-W-H
Mixolydian: W-W-H-W-W-H-W
Aeolian: W-H-W-W-H-W-W
Locrian: H-W-W-H-W-W-W
The Palindromic Nature of Dorian
Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.
Mirror Pairs Among the Other Modes
The remaining modes form mirror pairs with each other:
Ionian-Phrygian: Mirror pair
Lydian-Locrian: Mirror pair
Mixolydian-Aeolian: Mirror pair
For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.
This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes"
kazinator · 32m ago
> what model did you ask?
Are you hoping to disprove my point by cherry picking the AI that gets the answer?
I used Gemini 2.5 Flash.
Where can I get an exact list of stuff that Gemini 2.5 Flash does not know that Claude Sonnet does, and vice versa?
Then before deciding to consult with AI, I can consult the list?
harimau777 · 2h ago
I think there's a strong argument to be made that the negatives of having to wade through AI slop outweigh the benefits that AI may provide. I also suspect that AI could contribute to the enshittification of society; e.g., AI therapy being substituted for real therapy, AI products displacing industrial design, etc.
fc417fc802 · 2h ago
> e.g. AI therapy being substituted for real therapy, AI products displacing industrial design, etc.
That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.
Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.
There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)
rockemsockem · 2h ago
What is this AI slop that you're wading through and where is it?
Spam emails are not any worse for being verbose; if I don't recognize the sender, I send it straight to spam. The volume seems to be the same.
You don't want an AI therapist? Go get a normal therapist.
I have not heard of any AI product displacing industrial design, but if anything it'll make it easier to make/design stuff if/when it gets there.
Like are these real things you are personally experiencing?
eru · 1h ago
> There is obvious utility to railroads, especially in a world with no cars.
> The net utility of AI is far more debatable.
As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?
In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.
gruez · 6h ago
>The net utility of AI is far more debatable.
I'm sure if you asked the luddites the utility of mechanized textile production you'd get a negative response as well.
decimalenough · 6h ago
Railroads move people and cargo quickly and cheaply from point A to point B. Mechanized textile production made clothing, a huge sink of time and resources before the industrial age, affordable to everybody.
What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).
osigurdson · 1h ago
I don't have that negative of a take but agree to some extent. The internet, mobile, AI have all been useful but not in the same way as earlier advancements like electricity, cars, aircraft and even basic appliances. Outside of things that you can do on screens, most people live exactly the same way as they did in the 70s and 80s. For instance, it still takes 30-45 minutes to clean up after dinner - using the same kind of appliances that people used 50 years ago. The same goes for washing clothes, sorting socks and other boring things that even fairly rich people still do. Basically, the things people dreamed about in the 50s - more wealth, more leisure time, robots and flying cars really were the right dream.
azeirah · 5h ago
For learning with self-study it has been amazing.
gamblor956 · 5h ago
Until you dive deeper and discover that most of what the AI agents provided you was completely wrong...
There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.
simonw · 3h ago
"Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."
Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.
If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.
As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.
Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.
JimDabell · 2h ago
> > "Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."
> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.
Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.
Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.
Gud · 5h ago
That has not been the case for me. I use LLMs to study German, so far it’s been an excellent teacher.
I also use them to help me write code, which it does pretty well.
rockemsockem · 4h ago
I almost always validate what I get back from LLMs and it's usually right. Even when it isn't it still usually gets me closer to my goal (e.g maybe some UX has changed where a setting I'm looking for in an app has changed, etc).
IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.
Btw, if you think ChatGPT is only slightly better than what we had a decade ago, then I do not believe you have used any chatbots at all, either 10 years ago or recently, because that's actually a completely insane take.
simonw · 3h ago
> So many people are using the "slightly better chatbots" every single day.
> This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year.
fc417fc802 · 2h ago
At a minimum, presumably once it arrives it will provide the consumer custom software solutions which are clearly a huge sink of time and resources (prior to the AI age).
You're looking at the prototype while complaining about an end product that isn't here yet.
no_wizard · 2h ago
Luddites weren’t anti technology at all[0] in fact they were quite adept at using technology. It was a labor movement that fought for worker rights in the face of new technologies.
Luddites weren't at a point where every industry sees individual capital formation/demand for labor trend towards zero over time.
Prices are ratios in the currency between factors and producers.
What do you suppose happens when the factors can't buy anything because there is nothing they can trade? Slavery has quite a lot of historic parallels with the trend towards this. Producers stop producing when they can make no profit.
You have a deflationary (chaotic) spiral towards socio-economic collapse, under the burden of debt/money-printing (as production risk). There are limits to systems, and when such limits are exceeded; great destruction occurs.
Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.
harimau777 · 2h ago
I mean, for them it probably was.
bgwalter · 5h ago
The mechanical loom produced a tangible good. That kind of automation was supposed to free people from menial work. Now they are trying to replace interesting work with human supervised slop, which is a stolen derivative work in the first place.
The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.
Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.
shadowgovt · 6h ago
With this generation of AI, it's too early to tell whether it's the next railroad, the next textile machine, or the next way to lock your exclusive ownership of an ugly JPG of a multicolored ape into a globally-referenceable, immutable datastore backed by a blockchain.
skybrian · 3h ago
Computing is fairly general-purpose, so I suspect that the data centers at least will be used for something. Reusing so many GPUs might be harder, but not as bad as ASICs. There are a lot of other calculations they could do.
kazinator · 2h ago
> Reusing so many GPU's might be harder
It could have some unexciting applications like, oh, modeling climate change and other scientific simulations.
blibble · 2h ago
a data centre is a big warehouse
the vast expense is on the GPU silicon, which is essentially useless for compute other than parallel floating point operations
when the bubble pops, the "investment" will be a very expensive total waste of perfectly good sand
fc417fc802 · 2h ago
Most scientific HPC workloads are designed to utilize GPU equipped nodes. If AI completely flops scientific modeling will see huge benefits. It's a win-win (except for the investors I guess).
BobaFloutist · 2h ago
Maybe they can use it all to mine crypto.
peab · 5h ago
the goal of the major AI labs is to create AGI. The net utility of AGI is at least on the level of electricity, or the steam engine. It's debatable whether or not they'll achieve that, but if you actually look at what the goal is, the investment makes sense.
jcgrillo · 4h ago
what? crashing the economy for a psychotic sci-fi delusion "makes sense"? how?
rockemsockem · 3h ago
How exactly is AI crashing the economy....? Do you walk around with these beliefs every day?
jcgrillo · 1h ago
when bubbles burst crashes follow. this is a colossal bubble. i do walk around with that belief every day, because every day that passes is yet another day when this overblown AI hype bullshit fails to deliver the goods.
tharmas · 4h ago
Isn't the US economy far more varied than it was in the 19th century? More dense? And therefore wouldn't it be more difficult for one industry to dominate the US economy today than it was in the 19th century?
jus3sixty · 41m ago
The article's comparison to the 19th century railroad boom is pretty spot on for how big it all feels, but maybe not so much for what actually happened.
Back then, the money poured into building real stuff like actual railroads and factories and making tangible products.
That kind of investment really grew the value of companies and was more about creating actual economic value than just making shareholders rich super fast.
johncole · 23m ago
While some of the investment in railroads (and canals before it, and shipping before that) was going into physical assets of economic value, there were widespread instances of speculation, land "rights" fraud, and straight up fraud without any economic value added.
0cf8612b2e1e · 7h ago
Over the last six months, capital expenditures on AI—counting just information processing equipment and software, by the way—added more to the growth of the US economy than all consumer spending combined. You can just pull any of those quotes out—spending on IT for AI is so big it might be making up for economic losses from the tariffs, serving as a private sector stimulus program.
Wow.
gruez · 6h ago
It's not as bad as the alarmist phrasing would suggest. Consider a toy example: suppose consumer spending was $100 and grew by $1, but AI spending was $10 and grew by $1.5. Then you could rightly claim that "AI added more to the growth of the US economy than all consumer spending combined"[1]. But it's not as if the economy consists mostly of AI, or that the economy would collapse if AI spending stopped. It just means AI is a major contributor to the economy's growth right now. It's not even certain that the AI bubble popping would lead to all of that growth evaporating. Much of the AI boom involves infrastructure build-out for data centers. That capacity could be reallocated to building houses if data centers are no longer needed.
[1] Things get even spicier if consumer growth was zero. Then what would the comparison be? That AI added infinitely more to growth than consumer spending? What if it was negative? All this shows how ridiculous the framing is.
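The arithmetic in the toy example is easy to check (all numbers are the hypothetical ones from the comment, not real data):

```python
# Toy numbers from the comment above (hypothetical):
consumer_spending = 100.0   # sector size last period
consumer_growth = 1.0       # grew by $1
ai_spending = 10.0          # sector size last period
ai_growth = 1.5             # grew by $1.5

total_economy = consumer_spending + ai_spending
total_growth = consumer_growth + ai_growth

# AI's share of *growth* vs its share of the *economy*
ai_share_of_growth = ai_growth / total_growth        # 1.5 / 2.5 = 0.6
ai_share_of_economy = ai_spending / total_economy    # 10 / 110 ~= 0.09

# The headline claim holds even though AI is under a tenth of the economy
assert ai_growth > consumer_growth
assert ai_share_of_economy < 0.1
```

So "contributed more to growth" is compatible with being a small slice of total activity, which is the point the comment is making.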
agent_turtle · 5h ago
[flagged]
dang · 3h ago
Please don't cross into personal attack. We ban accounts that do that.
Have you heard of the disagreement hierarchy? You're somewhere between 1 and 3 right now, so I'm not even going to bother to engage with you further until you bring up more substantive points and cool it with the personal attacks.
One of the major reasons there’s such a shortage of homes in the US is the extensive permit process required. Pivoting from data centers to home construction is not a straightforward process.
Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on. A resilient economy has multiple growth areas; an unstable one has one or two.
While you could certainly argue that we may already be in rough shape even without the bubble popping, it would undoubtedly get worse for the reasons I listed above.
gruez · 3h ago
>One of the major reasons there’s such a shortage of homes in the US is the extensive permit process required. Pivoting from data centers to home construction is not a straightforward process.
Right, I'm not suggesting that all of the datacenter construction will seamlessly switch over to building homes, just that some of the labor/materials freed up would be reallocated to other sorts of construction. That could be homes, Amazon distribution centers, or grid connections for renewable power projects.
>A resilient economy has multiple growth areas; an unstable one has one or two.
>[...] it would undoubtedly get worse for the reasons I listed above,
No disagreement there. My point is that if AI somehow evaporated, the hit to GDP would be less than $10 (total size of the sector in the toy example above), because the resources would be allocated to do something else, rather than sitting idle entirely.
>Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on.
That's a fair point, although to be fair the federal government got pretty good at stimulus after the GFC and COVID, so any credit crunch would likely be short-lived.
raincole · 1h ago
> growth
Is the keyword here. US consumers have been spending so much already that of course that sector doesn't have much room to grow.
lisbbb · 53m ago
That's bad because you just know at some point the bell is getting rung and then the bubble bursts. It was the same thing with office space in the late 1990s--they overbuilt like crazy predicting huge demand that never appeared and then the dot-com bubble burst and that was that.
troyastorino · 6h ago
I've seen this quote in a couple places and it's misleading.
So, non-seasonally adjusted consumer spending is flat. In that sense, yes, anything where spend increased contributed more to GDP growth than consumer spending.
If you look at seasonally-adjusted rates, consumer spending has grown ~$400 billion, which likely outstrips total AI CapEx in that time period, let alone its growth. (To be fair, the WSJ graph only shows the spending from Meta, Google, Microsoft, and Amazon. But it also says that Apple, Nvidia, and Tesla combined "only" spent $6.7 billion in Q2 2025 vs the $96 billion from the other four. So it's hard to believe that spend coming from elsewhere is contributing a ton.)
If you click through to the tweet that is the source for the WSJ article where the original quote comes from (https://x.com/RenMacLLC/status/1950544075989377196) it's very unclear what it's showing... it only shows percentage change, and it doesn't even show anything about consumer spending.
So, at best this quote is very misleadingly worded. It also seems possible that the original source was wrong.
bravetraveler · 7h ago
Tepidly socially-acceptable welfare
electrondood · 6h ago
For context though, consumer spending has contracted significantly.
intended · 7h ago
Yes, wow. When I heard that data point I was floored.
rglover · 6h ago
> There could be a crash that exceeds the dot com bust, at a time when the political situation through which such a crash would be navigated would be nightmarish.
If the general theme of this article is right (that it's a bubble soon to burst), I'm less concerned about the political environment and more concerned about the insane levels of debt.
If AI is indeed the thing propping up the economy, when that busts, unless there are some seriously unpopular moves made (Volcker level interest rates, another bailout leading to higher taxes, etc), then we're heading towards another depression. Likely one that makes the first look like a sideshow.
The only thing preventing that from coming true IMO is dollar hegemony (and keeping the world convinced that the world's super power having $37T of debt and growing is totally normal if you'd just accept MMT).
margalabargala · 6h ago
> Likely one that makes the first look like a sideshow.
The first Great Depression was pretty darn bad, I'm not at all convinced that this hypothetical one would be worse.
agent_turtle · 5h ago
Some of the variables that made the Great Depression what it was included very high tariff rates and lack of quality federal oversight.
Today, we have the highest tariffs since right before the Great Depression, with the added bonus of economic uncertainty because our current tariff rates change on a near daily basis.
Add in meme stocks, AI bubble, crypto, attacks on the Federal Reserve’s independence, and a decreasing trust in federal economic data, and you can make the case that things could get pretty ugly.
margalabargala · 4h ago
Sure, you can make the case that things could get pretty ugly. You could even make the case that things could get about as bad as the Great Depression.
But for things to be much worse than the Great Depression, I think is an extraordinary claim. I see the ingredients for a Great Depression-scale event, but not for a much-worse-than-Great-Depression event.
Gabriel_Martin · 2h ago
MMT is just a description of the monetary reality we're in. If everything changed, the new reality would be MMT.
Hikikomori · 6h ago
Ai bubble, economy in the trash already, inflation from tariffs. Dollar might get real cheap when big holders start selling stocks and exchanging it, nobody wants to be left holding their bag, and they have a lot of dollars.
Which is their (Thiel, project2025, etc) plan, federal land will be sold for cheap.
decimalenough · 6h ago
Selling stocks for what? If the dollar is going down the toilet, the last thing you want to have is piles of rapidly evaporating cash.
marcusestes · 6h ago
Never totally discount _deflationary_ scenarios.
heathrow83829 · 2h ago
based on my understanding of what all the financial pros are saying: they'll never let that happen. they'll inflate away to the moon before they allow for a deflationary bust. that's why everyone's in equities in the first place. it's almost insured, at this point.
Animats · 6h ago
"Over the last six months, capital expenditures on AI—counting just information processing equipment and software, by the way—added more to the growth of the US economy than all consumer spending combined."
If this isn't the Singularity, there's going to be a big crash. What we have now is semi-useful, but too limited. It has to get a lot better to justify multiple companies with US $4 trillion valuations. Total US consumer spending is about $16 trillion / yr.
Remember the Metaverse/VR/AR boom? Facebook/Meta did somehow lose upwards of US$20 billion on that. That was tiny compared to the AI boom.
brotchie · 2h ago
Look at the induced demand due to Claude code. I mean, they wildly underestimated average token usage by users. There's high willingness to pay. There's literally not enough inference infra available.
I was working on crypto during the NFT mania, and THAT felt like a bubble at the time. I'd spend my days writing smart contracts and related infra, but I was doing a genuine wallet transaction at most once a week, and that was on speculation, not work.
My adoption rate of AI has been rapid, not for toy tasks, but for meaningful complex work. Easily send 50 prompts per day to various AI tools, use LLM-driven auto-complete continuously, etc.
That's where AI is different from the dot-com bubble (not enough folks were materially transacting on the web at the time), or the crypto mania (speculation, not utility).
Could I use a smarter model today? Yes, I would love that and use the hell out of it.
Could I use a model with 10x the tokens/second today? Yes, I would use it immediately and get substantial gains from a faster iteration cycle.
sothatsit · 2h ago
Claude Code was the tipping point for me from "that's neat" to "wow, that's really useful". Suddenly, paying $200/month for an AI service made sense. Before that, I didn't want to pay $20/month for access to Claude, as I already had my $20/month subscription to ChatGPT.
I have to imagine that other professions are going to see similar inflection points at some point. When they do, as seen with Claude Code, demand can increase very rapidly.
* Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
* Surveys are showing that most people are only using AI for a fraction of the time at work, and still reporting significant productivity benefits, even with current models.
The AGI/ASI hype is a distraction, potentially only relevant to the frontier model labs. Even if all model development froze today, there is tremendous untapped demand to be met.
The Metaverse/VR/AR boom was never really a boom, with only two big companies (Meta, Apple) plowing any "real" money into it. Similarly with crypto, another thing AI is unjustifiably compared to. I think the comparison sticks because people were trying to make those happen.
With the AI boom, however, the largest companies, major governments and VCs are all investing feverishly because it is already happening and they want in on it.
Animats · 4h ago
> Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
Are they constrained on resources for training, or resources for serving users using pre-trained LLMs? The first use case is R&D, the second is revenue. The ratio of hardware costs for those areas would be good to know.
keeda · 2h ago
Good question. I don't believe they break out their workloads into training versus inference; in fact they don't break out any numbers in any useful detail. But anecdotally, the public clouds did seem to be most GPU-constrained whenever Sam Altman was making the rounds asking for trillions in infra for training.
However, my understanding is that the same GPUs can be used for both training and inference (potentially in different configurations?) so there is a lot of elasticity there.
That said, for the public clouds like Azure, AWS and GCP, training is also a source of revenue because other labs pay them to train their models. This is where accusations of funny money shell games come into play because these companies often themselves invest in those labs.
rockemsockem · 5h ago
Tbf I think most would say that the VR/AR boom is still ongoing, just with less glitz.
Edit: agree on the metaverse as implemented/demoed not being much, but that's literally one application
throwmeaway222 · 7h ago
- Microsoft’s AI-fueled $4 trillion valuation
As someone in an AI company right now - Almost every company we work with is using Azure wrapped OpenAI. We're not sure why, but that is the case.
guidedlight · 6h ago
It’s because most companies already have a lot of confidence with Microsoft contracts, and are generally very comfortable storing and processing highly sensitive data on Microsoft’s SaaS platforms. It’s a significant advantage.
Also Microsoft Azure hosts its own OpenAI models. It isn’t a proxy for OpenAI.
ElevenLathe · 6h ago
MS salespeople presumably already have weekly or monthly meetings with all the people with check-cutting authority, and OpenAI doesn't. They're already an approved vendor, and what's more the Azure bill is already really really big, so a few more AI charges barely register.
It's the same reason you would use RDS at an AWS shop, even if you really like CloudSQL better.
This is the main reason the big cloud vendors are so well-positioned to suck up basically any surplus from any industry even vaguely shaped like a b2b SaaS.
edaemon · 2h ago
Lots of AI things are features masquerading as products. Microsoft already has the products, so they just have to add the AI features. Customers can either start using a new and incomplete product just for one new feature, or they can stick with the mature Microsoft suite of products they're already using and get that same feature.
hnuser123456 · 6h ago
Nobody gets fired for choosing Microsoft
chung8123 · 6h ago
All of their files are likely on a Microsoft store already too.
throw0101d · 2h ago
From a few weeks ago, see "Honey, AI Capex is Eating the Economy" / "AI capex is so big that it's affecting economic statistics" (365 points by throw0101c, 18 days ago, 355 comments):
If AI gets us into orbit ( https://news.ycombinator.com/item?id=44800051#44804687 ) or revitalizes nuclear, I'm fine with those things. It's true that AI usage can scale with availability better than most things but that's not a path to world domination.
GianFabien · 4h ago
There's only two reasons to buy stocks:
1) for future cashflows (aka dividends) derived from net profits.
2) to on-sell to somebody willing to pay even more.
When option (2) is no longer feasible, the bubble pops and (1) resets the prices to some multiple of dividends. Economics 101.
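Reason (1) is just the present value of future dividends. A minimal sketch of the standard dividend-discount (Gordon growth) formula, with made-up numbers, shows how "(1) resets the price":

```python
def dividend_discount_price(next_dividend, discount_rate, dividend_growth):
    """Gordon growth model: price as the present value of a perpetually
    growing dividend stream. Only valid when discount_rate > dividend_growth."""
    assert discount_rate > dividend_growth
    return next_dividend / (discount_rate - dividend_growth)

# Hypothetical: $5/share next-year dividend, 8% required return, 2% growth
price = dividend_discount_price(5.0, 0.08, 0.02)
print(price)  # 5 / 0.06 = 83.33...
```

This is a textbook simplification, not a claim about any particular stock; the point is just that once "sell to a greater fool" stops working, some cashflow-based formula like this anchors the price.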
heathrow83829 · 2h ago
yes, but there will always be a #2 with QE being normalized now.
Nition · 3h ago
Just like land :)
lenerdenator · 7h ago
And that's why there's a desire to make interest rates lower: cheap money is good for propping up bubbles.
Now, it does that at the expense of the average person, but it will definitely prop up the bubble just long enough for the next election cycle to hit.
bluecalm · 6h ago
I am curious, why do you think lower interest rates are bad for an average person?
mason_mpls · 6h ago
We want interest rates as close to zero as possible. However, they're also the only reliable tool available to stop inflation.
You're implying that the country exercising financial responsibility to control inflation isn't good.
Not using interest rates to control inflation caused the stagflation crisis of the '70s, which ended when Volcker set rates to 20%.
nr378 · 5h ago
Interest rates set the exchange rate between future cashflows (i.e. assets) and cash today. Lower interest rates mean higher asset values, higher interest rates mean lower asset values. Higher asset values generally disproportionately benefit those that own assets (wealthy people) over those that don't (average people).
Of course, this is just one way that interest rates affect the economy, and it's important to bear in mind that lower interest rates can also stimulate investment which help to create jobs for average people as well.
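The rate/asset-value relationship described above can be sketched with a toy present-value calculation (numbers are hypothetical, for illustration only):

```python
def present_value(cashflow, rate, years):
    # Discount a single future cashflow back to today at the given rate
    return cashflow / (1 + rate) ** years

# The same $100 cashflow ten years out, valued at two different rates:
low_rate_value = present_value(100, 0.01, 10)   # ~90.53
high_rate_value = present_value(100, 0.05, 10)  # ~61.39

# Cutting rates from 5% to 1% inflates the asset's value by ~47%
print(low_rate_value / high_rate_value)
```

Same future cashflow, very different price today, which is why asset owners benefit disproportionately when rates fall.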
tharmas · 4h ago
> it's important to bear in mind that lower interest rates can also stimulate investment which help to create jobs for average people as well.
Precisely! Yet the big problem in the Anglosphere is that most of that money has been invested in asset accumulation, namely housing, causing a massive housing crisis in these countries.
verdverm · 6h ago
more money in the economy drives inflation, which largely affects those with less disposable income
This is why in a hot economy we raise rates, and in a cold economy we lower them
(oversimplification, but it is a commonly provided explanation)
tharmas · 4h ago
>more money in the economy drives inflation
Not necessarily. Sure, if that money is chasing fixed assets like housing, but if it was invested into production of things to consume, it's not necessarily inflation-inducing, is it? For example, if that money went into expanding the electricity grid and production of electric cars, the pool of goods to be consumed is expanding, so there is less likelihood of inflation.
verdverm · 3h ago
> if that money was invested into production of things to consume its not necessarily inflation inducing is it
People are paid salaries to work at these production facilities, which means they have more money to spend, and competition drives people to be willing to spend more to get the outputs. Not all outputs will be scaled; those that aren't experience inflation, like food and housing today.
micromacrofoot · 5h ago
Low interest rates make borrowing cheap, so companies flood money into real estate and stocks, inflating prices. This also drives up costs for regular people, fuels risky lending (remember subprime mortgages?), and when the bubble bursts... guess who gets hit the hardest when companies start scaling back and lenders come calling?
decimalenough · 6h ago
You really think the AI bubble can be sustained for another three years?
dylan604 · 6h ago
15 months. Mid-terms are next November. After that, legacy cannot be changed by election. If POTUS loses control of either/both chambers, he might have some 'splanin to do. If POTUS keeps control and/or makes further gains, there might not be an election in 3 years.
tick_tock_tick · 5h ago
> he might have some 'splanin to do
About what? Like seriously, what would they even do other than try to lame-duck him?
The big issue is that Dem approval ratings are even lower than Trump's, so how the hell are they going to gain any seats?
dylan604 · 4h ago
Gerrymandering helps. Just look at Texas
chasd00 · 2h ago
And California
Hikikomori · 6h ago
Gerrymandering in Texas and elsewhere they might stay in power, if they do it's unlikely to change. Basically speed running a fascist takeover.
smackeyacky · 5h ago
It's not really a speed run.
The seeds were planted after Nixon resigned and it was decided to re-shape the media landscape and move the overton window rightwards in the 1970s, dismantling social democracy across the west and leading to a gradual reversal of the norms of governance in the US (see Newt Gingrich).
It's been gradual, slow and methodical. It has definitely accelerated but in retrospect the intent was there from the very beginning.
mathiaspoint · 5h ago
The way most of you define "fascism", America has always been fascist, with a brief perturbation where we tried democracy and some communism.
If you see it that way this is just a reversion to the mean.
smackeyacky · 4h ago
True. We have collectively forgotten segregation was a thing in the US. Perhaps it has always been a right wing country that flirts with fascism.
dylan604 · 3h ago
The Constitution was clearly written for rich, land-owning white men first and foremost, with everyone else left out or counted only in fractions. They added some checks and balances as a hand-wavy way of trying to stay away from autocracy, but they kind of made them toothless. I'd guess they just didn't have the imagination that people would willingly allow someone to steer back towards autocracy when they had fought so hard to leave it.
fzeroracer · 2h ago
It's been an unfortunate truth that the US has long been a country that's flirted with fascism. Ultimately, Thaddeus Stevens was right in his conviction that after the civil war the southern states should've been completely crushed and the land given to the freedmen.
tharmas · 4h ago
Excellent post.
You could say that was when things reverted back to "normal". The FDR social reconstruction and post-WW2 economic boom were the exception, the anomaly. But the Scandinavian countries seem to be doing alright. Sure, they have some big problems (Sweden in particular), but daily life for the majority in those countries appears to be better than for a lot of people in the Anglosphere.
skinnymuch · 2h ago
A difference also is neoliberalism ramping up in that time period of the 80s. The concept of privatizing anything and everything, and bullshit like "public-private partnership", is fairly recent.
dylan604 · 5h ago
Interesting to see if California follows suit. Governor Newsom has his eye on the 2028 prize it seems. If the Dems do not wake up and start playing the same game the GOP is playing, they will never win. Taking the higher ground is such a nice concept, but it's also what losers say to feel good about not winning. Meanwhile, those willing to break/bend/change rules to ensure they continue to win will, well, continue to win.
SpicyLemonZest · 1h ago
I think it's important to remember how California got here. In the 2000 redistricting, the state legislature agreed to conduct an extreme bipartisan gerrymander, drawing every seat to be as safe as possible so that no incumbent could get voted out without losing a primary. This was widely understood to be a conspiracy of politicians against democratic accountability, and thus voters decided (with the support of many advocacy orgs and every major newspaper in the state) to put an end to it.
That's not the redistricting Newsom wants for 2028, and I tend to agree that Dems have to play the game right now, but I'd really like to see them present some sort of story for why it's not going to happen again.
tick_tock_tick · 5h ago
Honestly, it's not really bubbling like we expected. Revenues are growing way too fast; income from AI investment is coming back to these companies way sooner than anyone thought possible. At this rate we have another couple of 20+% years in the stock market before there's anything left of a "bubble".
Nvidia, the poster child of this "bubble", has been getting effectively cheaper every day.
icedchai · 6h ago
Possibly. For comparison, how long did the dot-com bubble last? From roughly 1995 to early 2000.
thrance · 7h ago
Trump and his administration harassing the Fed and Powell over interest rates is like a swarm of locusts salivating at ripened wheat fields. They want a quick feast at the expense of everything and everyone else, including themselves over the long term.
dylan604 · 6h ago
Trump knows that the next POTUS can just reverse his decisions much like he's done in both of his at bats. Only thing is there is no next at bat for Trump (without major changes that would be quite devastating), so he's got to get them in now. The sooner the better to take as much advantage of being in control.
pessimizer · 6h ago
The left is almost completely unanimous in their support for lowering interest rates, and have been screaming about it for years, since the first moment they started being raised again. And for the same reasons that Trump wants it, except without the negative connotations for some reason.
Recently, I've heard many left wingers, as a response to Trump's tariffs, start 1) railing about taxes being too high, and that tariffs are taxes so they're bad, and 2) saying that the US trade deficit is actually wonderful because it gives us all this free money for nothing.
I know all of these are opposite positions to every one of the central views of the left of 30 years ago, but politics is a video game now. Lefties are going out of their way to repeat the old progressive refrain:
> "The way that Trump is doing it is all wrong, is a sign of mental instability, is cunning psychopathic genius and will resurrect Russia's Third Reich, but in a twisted way he has blundered into something resembling a point..."
"...the Fed shouldn't be independent and they should lower interest rates now."
mason_mpls · 6h ago
I have not heard a single left wing pundit demand interest rates go down
rockemsockem · 5h ago
Elizabeth Warren has gone on several talk shows insisting interest rates should be lowered. If you look at video from the last time Powell was being questioned by Congress there were many other Democratic congress-people asking him why he wouldn't lower rates.
Personally I trust Jerome Powell more than any other part of the government at the moment. The man is made of steel.
mason_mpls · 5h ago
Jerome Powell belongs on Mt Rushmore if you ask me
no_wizard · 2h ago
He's entirely too cozy with big banks. He's one of their biggest advocates when it comes to policy. I think Elizabeth Warren had a point here[0]
Coziness with banks can certainly be an issue. I don't know the specifics, and that article is paywalled for me, but it sounds very believable.
That doesn't really change what I said regarding interest rates though.
skinnymuch · 2h ago
People are upset at the tariffs as taxes because they hurt poorer people more. That's how it works when everyone pays the same amount of tax.
thrance · 3h ago
Who cares? Even if it were true, why is your first reflex to point the finger at progressives when they're absolutely irrelevant to the current government?
akomtu · 13m ago
More like the corpos are really excited about the post-human AI future, so they are pouring truckloads of money into it, and this raises the GDP number. The well-being of average folks is in decline.
poopiokaka · 37m ago
You lost me at 404 wanna be event. 13 viewers looking ass
xg15 · 5h ago
So what will happen to all those massive data centers when the bubble bursts? Back to crypto?
GianFabien · 4h ago
After the dot-bomb of 2000, the market got flooded with Cisco and Sun gear for pennies on the dollar. Lots of post-2000 startups got their gear from those auctions and were able to massively extend their runway. The same could happen again.
morpheos137 · 58m ago
Imagine, the world's biggest economy propped up by hopes and dreams. Has anyone successfully monetized "AI" at a scale that generates a reasonable return on investment?
BigglesB · 5h ago
I also wonder the extent to which "pseudo-black-box AI" is potentially driving some of these crazy valuations, due to it actually being used in a lot of algorithmic trading itself... seems like a prevalence of over-corrected models, all expecting "line go up" from recent historical data, would be the perfect way to cook up a really "big beautiful bubble", so to speak...
kogasa240p · 6h ago
Surprised the SVB collapse wasn't mentioned, the LLM boom gained a huge amount of steam right after that happened.
asdev · 6h ago
look at the S&P 500 chart from when ChatGPT came out. We were just on our way to flushing out the excess COVID money, and then the AI narrative saved the market. The AI narrative, plus inflation that is definitely way higher than reported, is propping up this market.
diogenescynic · 1h ago
They've been saying the same thing about whatever the trend of the moment is for years. Before this it was Magnificent 7 and before that it was FANG, and before that it was something else. Isn't this just sort of fundamental to how the economy works?
johng · 6h ago
There has to be give and take to this as well. The AI increase is going to cost jobs. I see it in my work flow and our company. We used to pay artists to do artwork and editors to post content. Now we use AI to generate the artwork and AI to write the content. It's verified by a human, but it's still done by AI and saves a ton of time and money.
These are jobs that normally would have gone to a human and now go to AI. We haven't paid a cent for AI mind you -- it's all on the ChatGPT free tier or using this tool for the graphics: https://labs.google/fx/tools/image-fx
I could be wrong, but I think we are at the start of a major bloodbath as far as employment goes.... in tech mostly but also in anything that can be replaced by AI?
I'm worried. Does this mean there will be a boom in needing people for tradeskills and stuff? I honestly don't know what to think about the prospects moving forward.
micromacrofoot · 6h ago
This is going to be an absolute disaster, the government is afraid of regulating AI because it's so embedded in our economy now too
jcgrillo · 4h ago
I think we're already starting to see the cracks with OpenAI drastically tightening their belt across various cloud services. Depends how long it takes to set in, but seems like it could be starting this quarter.
m0llusk · 2h ago
Interesting piece, but the idea that this guy understands how oligarchs think seems way off. Jack Welch took General Electric from a global leader to a sad bag holder and he and his fans cheered progress with every positive quarterly report.
gamblor956 · 5h ago
This is backwards.
The AI bubble is so big that it's draining useful investment from the rest of the economy. Hundreds of thousands of people are getting fired so billionaires can try to add a few more zeros to their bank account.
The best investment we can make would be to send the billionaires and AI researchers to an island somewhere and not let them leave until they develop an AI that's actually useful. In the meanwhile, the rest of us get to live productive lives.
I don't think the comment is saying AI was able to replace the work people were doing; rather, people are getting fired and their salaries are being redirected into funding AI development.
add-sub-mul-div · 4h ago
Discounting the evidence of it being explicitly cited as a reason for layoffs, and that its purpose to business is to replace human labor, there's no evidence that it's replacing human labor. Got it.
rockemsockem · 2h ago
Citing AI for layoffs is great cover for "we over hired during Covid".
There probably are a few nuts out there that actually fired people to be replaced with AI; I feel like that won't go well for them.
There really is no evidence.
no_wizard · 2h ago
There's a strong sentiment bubbling up that AI-driven layoffs are going to happen or are already happening [0].
I'll say it's okay to be reserved on this, since we won't know until after the fact, but give it 6-12 months and then we'll know for sure. Until then, I see no reason not to believe there is a culture forming in boardrooms around AI that is driving closed-door conversations about reducing headcount specifically to replace people with AI.
https://wccftech.com/ai-capex-might-equal-2-percent-of-us-gd...
> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!
Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century. This means that the ongoing AI CapEx boom has lots of legroom to run before it reaches parity with the rail road boom of that bygone era.
Has anyone found the source for that 20%? Here's a paper I found:
> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.
https://economics.wm.edu/wp/cwm_wp153.pdf
The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.
I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:
> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.
https://www-users.cse.umn.edu/~odlyzko/doc/mania18.pdf
That would be more believable, but the comparison with AI spending in a single year would not be meaningful.
When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)
First, GDP still doesn't count you making your own meals. Second, when, e.g., free Wikipedia replaces paid-for encyclopedias, this makes society better off but technically decreases GDP.
However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.
While GDP correlates reasonably well, imagine, very roughly, that measured GDP grew at 3% annually while the overall economy grew at 2%. The correlation would still be good, but if we speculate that 80% of the economy is counted in GDP today, then only about 10% would have been counted 200 years ago.
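To make that speculation concrete, here's a quick back-of-the-envelope sketch (the 3%/2% growth rates and the 80% coverage figure are illustrative assumptions, not historical data):

```python
# If measured GDP grows 3%/yr while the whole economy (market +
# non-market activity) grows 2%/yr, the measured share of the economy
# shrinks as you extrapolate back in time.
measured_growth = 1.03
economy_growth = 1.02
share_today = 0.80   # speculative: 80% of activity counted in GDP now
years_back = 200

# Each year back, the measured share divides by (1.03 / 1.02).
share_then = share_today / (measured_growth / economy_growth) ** years_back
print(f"Implied share counted 200 years ago: {share_then:.0%}")  # roughly 11%
```

So under those toy assumptions, railroad spending measured against 19th-century GDP would look several times larger relative to the measured economy than relative to the whole economy.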
What's good for one class is often bad for another.
Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
For some people that's great. For others, not so great.
Maybe some economies are great for everyone, but this is definitely not one of those.
This economy is great for some people and bad for others.
In today's US? Debatable, but on the whole probably not.
In a hypothetical country with sane health care and social safety net policies? Yes that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs making it a much more straightforward collective optimization problem.
The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.
For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.
(The data is for 2022.)
Though I realise you asked for sane policies. I can't comment on that.
I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.
To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!
Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…
We need new metrics.
The net utility of AI is far more debatable.
I am being 100% genuine here: I struggle to understand how the most useful things I've ever encountered are thought of this way, and I would like to better understand your perspective.
I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning which, ultimately, is an O(1) cost for O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...
So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.
On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.
And that's living in the Bay Area, where the trains aren't great. BART, Caltrain, and Amtrak disappearing would have an orders-of-magnitude larger effect on my life than if LLMs stopped working.
And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.
Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.
Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.
You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.
Other elephants in the room:
- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?
- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.
The emperor has no clothes.
Sure. They don't meaningfully improve anything in my life personally.
They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either
At this point I am somewhat of a conscientious objector though
Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"
I still do 10-20x regular Kagi searches for every LLM search, which seems about right in terms of the utility I'm personally getting out of this.
Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
> Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
To be clear, they are surmising that GenAI is already having a productivity gain.
As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o
It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25%-33% productivity gains do match the gains shown by at least three other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.
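For a sense of how a 33% per-hour gain squares with a ~1.1% aggregate figure, here's my own simplified back-of-the-envelope (not the papers' actual methodology): the aggregate gain roughly equals the per-hour gain times the share of all work hours where genAI is used.

```python
# Simplified model: aggregate_gain = hours_share * per_hour_gain,
# so the quoted figures imply a genAI share of total work hours.
per_hour_gain = 0.33    # productivity boost during genAI-assisted hours
aggregate_gain = 0.011  # the 1.1% economy-wide estimate quoted above

# Implied share of all work hours spent using genAI:
hours_share = aggregate_gain / per_hour_gain
print(f"Implied genAI share of work hours: {hours_share:.1%}")  # about 3.3%
```

That is, even a large per-hour gain translates into a small aggregate number while usage covers only a few percent of total work hours.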
I use AI in my personal life to learn about things I never would have otherwise, because it makes the cost of finding basic knowledge essentially zero: diet-improvement ideas based on several quick questions about gut function, recently learning how to gauge tsunami severity, and tons of other things. Once you have several fundamental terms and phrases for a new topic, it's easy to validate the information with some quick googling too.
How much have you actually tried using LLMs, and did you just use normal chat or some big grand complex tool? I mostly just use chat and prefer to enter my code artisanally.
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable
It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.
If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.
If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.
> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
You're practically saying that looking at an index in the back of a book is a meaningless step.
It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.
Edit:
Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.
You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?
The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.
Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.
So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)
Obviously, I wasn't in the "the right mindset" today.
This mindset is one of two things:
- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.
- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to, in effect, create a query which zeroes in on the right nugget of information that made an appearance in the training data.
It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically do most of the work, while it looks as if it solved the problem.
I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.
The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.
Let's try something else:
Q: "What modes of C major are their own reflection?"
A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."
Very nice sounding and grammatical, but gapingly wrong on every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections. Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
I once loaded a NotebookLM with materials about George Russell's concept of the Lydian Chromatic and tonal gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.
AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.
"Among the seven modes of C major, only Dorian is its own reflection.
Understanding Mode Reflections: When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:
Ionian: W-W-H-W-W-W-H
Dorian: W-H-W-W-W-H-W
Phrygian: H-W-W-W-H-W-W
Lydian: W-W-W-H-W-W-H
Mixolydian: W-W-H-W-W-H-W
Aeolian: W-H-W-W-H-W-W
Locrian: H-W-W-H-W-W-W
The Palindromic Nature of Dorian: Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.
Mirror Pairs Among the Other Modes: The remaining modes form mirror pairs with each other:
Ionian-Phrygian: Mirror pair
Lydian-Locrian: Mirror pair
Mixolydian-Aeolian: Mirror pair
For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.
This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes"
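The interval patterns quoted above make this easy to check mechanically. A short Python sketch using those same patterns (W = whole step, H = half step):

```python
# Interval patterns of the seven modes of C major, as listed above.
# Reflecting a mode reverses its interval pattern, so a mode is its
# own reflection exactly when its pattern is a palindrome.
modes = {
    "Ionian":     "WWHWWWH",
    "Dorian":     "WHWWWHW",
    "Phrygian":   "HWWWHWW",
    "Lydian":     "WWWHWWH",
    "Mixolydian": "WWHWWHW",
    "Aeolian":    "WHWWHWW",
    "Locrian":    "HWWHWWW",
}

# Invert the mapping so a reversed pattern can be looked up by name.
by_pattern = {pattern: name for name, pattern in modes.items()}

for name, pattern in modes.items():
    mirror = by_pattern[pattern[::-1]]
    tag = " (its own reflection)" if mirror == name else ""
    print(f"{name} -> {mirror}{tag}")
```

Running it confirms the corrected pairing: Dorian alone maps to itself, with Ionian-Phrygian, Lydian-Locrian, and Mixolydian-Aeolian as the mirror pairs.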
Are you hoping to disprove my point by cherry picking the AI that gets the answer?
I used Gemini 2.5 Flash.
Where can I get an exact list of stuff that Gemini 2.5 Flash does not know that Claude Sonnet does, and vice versa?
Then before deciding to consult with AI, I can consult the list?
That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.
Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.
There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)
Spam emails are not any worse for being verbose; if I don't recognize the sender, I send it straight to spam. The volume seems to be the same.
You don't want an AI therapist? Go get a normal therapist.
I have not heard of any AI product displacing industrial design, but if anything it'll make it easier to make/design stuff if/when it gets there.
Like are these real things you are personally experiencing?
> The net utility of AI is far more debatable.
As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?
In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.
I'm sure if you asked the Luddites the utility of mechanized textile production you'd get a negative response as well.
What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).
There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.
Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.
If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.
As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.
Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.
> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.
Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.
Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.
I also use them to help me write code, which it does pretty well.
IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.
Btw, if you think ChatGPT is only slightly better than what we had a decade ago, then I do not believe that you have used any chatbots at all, either 10 years ago or recently, because that's actually a completely insane take.
To back that up, here's a rare update on stats from OpenAI: https://x.com/nickaturley/status/1952385556664520875
> This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year.
You're looking at the prototype while complaining about an end product that isn't here yet.
[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...
Luddites weren't at a point where every industry sees individual capital formation/demand for labor trend towards zero over time.
Prices are ratios in the currency between factors and producers.
What do you suppose happens when the factors can't buy anything because there is nothing they can trade? Slavery has quite a lot of historic parallels with the trend towards this. Producers stop producing when they can make no profit.
You have a deflationary (chaotic) spiral towards socio-economic collapse, under the burden of debt/money-printing (as production risk). There are limits to systems, and when such limits are exceeded, great destruction occurs.
Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.
The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.
Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.
It could have some unexciting applications like, oh, modeling climate change and other scientific simulations.
the vast expense is on the GPU silicon, which is essentially useless for compute other than parallel floating point operations
when the bubble pops, the "investment" will be a very expensive total waste of perfectly good sand
Back then, the money poured into building real stuff like actual railroads and factories and making tangible products.
That kind of investment really grew the value of companies and was more about creating actual economic value than just making shareholders rich super fast.
[1] Things get even spicier if consumer growth was zero. Then what would the comparison be? That AI added infinitely more to growth than consumer spending? What if it was negative? All this shows how ridiculous the framing is.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
Have you heard of the disagreement hierarchy? You're somewhere between 1 and 3 right now, so I'm not even going to bother to engage with you further until you bring up more substantive points and cool it with the personal attacks.
https://paulgraham.com/disagree.html
Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on. A resilient economy has multiple growth areas; an unstable one has one or two.
While you could certainly argue that we may already be in rough shape even without the bubble popping, it would undoubtedly get worse for the reasons I listed above.
Right, I'm not suggesting that all of the datacenter construction will seamlessly switch over to building homes, just that some of the labor/materials freed up would be allocated to other sorts of construction. That could be homes, Amazon distribution centers, or grid connections for renewable power projects.
>A resilient economy has multiple growth areas; an unstable one has one or two.
>[...] it would undoubtedly get worse for the reasons I listed above,
No disagreement there. My point is that if AI somehow evaporated, the hit to GDP would be less than $10 (total size of the sector in the toy example above), because the resources would be allocated to do something else, rather than sitting idle entirely.
>Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on.
That's a fair point, although, to be fair, the federal government got pretty good at stimulus after the GFC and COVID, so any credit crunch would likely be short-lived.
Is the keyword here. US consumers have been spending so much so of course that sector doesn't have that much room to grow.
Using non-seasonally adjusted St. Louis FRED data (https://fred.stlouisfed.org/series/NA000349Q), and the AI CapEx spending for Meta, Alphabet, Microsoft, and Amazon from the WSJ article (https://www.wsj.com/tech/ai/silicon-valley-ai-infrastructure...):
-------------------------------------------------
Q4 2024 consumer spending: ~$5.2 trillion
Q4 2024 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q1 2025 consumer spending: ~$5 trillion
Q1 2025 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q2 2025 consumer spending: ~$5.2 trillion
Q2 2025 AI CapEx spending: ~$100 billion
-------------------------------------------------
So, non-seasonally adjusted consumer spending is flat. In that sense, yes, anything where spend increased contributed more to GDP growth than consumer spending.
If you look at seasonally-adjusted rates, consumer spending has grown ~$400 billion, which likely outstrips total AI CapEx in that time period, let alone its growth. (To be fair, the WSJ graph only shows the spending from Meta, Google, Microsoft, and Amazon. But it also says that Apple, Nvidia, and Tesla combined "only" spent $6.7 billion in Q2 2025 vs the $96 billion from the other four, so it's hard to believe that spend coming from elsewhere is contributing a ton.)
If you click through to the tweet that is the source for the WSJ article where the original quote comes from (https://x.com/RenMacLLC/status/1950544075989377196), it's very unclear what it's showing: it only shows percentage change, and it doesn't even show anything about consumer spending.
So, at best this quote is very misleadingly worded. It also seems possible that the original source was wrong.
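Putting the rough figures from the thread side by side (the ~$400 billion seasonally-adjusted growth and the big-four CapEx numbers quoted above; all values approximate):

```python
# Rough quarterly figures from the thread, in $ billions.
consumer_growth_sa = 400   # seasonally-adjusted consumer spending growth
ai_capex_q1 = 75           # big-four AI CapEx, Q1 2025
ai_capex_q2 = 100          # big-four AI CapEx, Q2 2025
ai_capex_growth = ai_capex_q2 - ai_capex_q1  # growth in AI CapEx: $25B

print(f"Consumer spending growth: ${consumer_growth_sa}B")
print(f"Total big-four AI CapEx:  ${ai_capex_q1 + ai_capex_q2}B")
print(f"AI CapEx growth:          ${ai_capex_growth}B")
```

On these numbers, consumer spending growth alone exceeds the big four's total AI CapEx over the period, and dwarfs the CapEx growth, which is what makes the "AI contributed more to growth than all consumer spending" framing hard to square.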
If the general theme of this article is right (that it's a bubble soon to burst), I'm less concerned about the political environment and more concerned about the insane levels of debt.
If AI is indeed the thing propping up the economy, when that busts, unless there are some seriously unpopular moves made (Volcker level interest rates, another bailout leading to higher taxes, etc), then we're heading towards another depression. Likely one that makes the first look like a sideshow.
The only thing preventing that from coming true IMO is dollar hegemony (and keeping the world convinced that the world's super power having $37T of debt and growing is totally normal if you'd just accept MMT).
The first Great Depression was pretty darn bad, I'm not at all convinced that this hypothetical one would be worse.
Today, we have the highest tariffs since right before the Great Depression, with the added bonus of economic uncertainty because our current tariff rates change on a near daily basis.
Add in meme stocks, AI bubble, crypto, attacks on the Federal Reserve’s independence, and a decreasing trust in federal economic data, and you can make the case that things could get pretty ugly.
But for things to be much worse than the Great Depression, I think is an extraordinary claim. I see the ingredients for a Great Depression-scale event, but not for a much-worse-than-Great-Depression event.
Which is their (Thiel, project2025, etc) plan, federal land will be sold for cheap.
If this isn't the Singularity, there's going to be a big crash. What we have now is semi-useful, but too limited. It has to get a lot better to justify multiple companies with US $4 trillion valuations. Total US consumer spending is about $16 trillion / yr.
Remember the Metaverse/VR/AR boom? Facebook/Meta did somehow lose upwards of US$20 billion on that. That was tiny compared to the AI boom.
I was working on crypto during the NFT mania, and THAT felt like a bubble at the time. I'd spend my days writing smart contracts and related infra, but I was doing a genuine wallet transaction at most once a week, and that was on speculation, not work.
My adoption rate of AI has been rapid, not for toy tasks, but for meaningful complex work. Easily send 50 prompts per day to various AI tools, use LLM-driven auto-complete continuously, etc.
That's where AI is different from the dot-com bubble (not enough folks materially transacting on the web at the time) or the crypto mania (speculation, not utility).
Could I use a smarter model today? Yes, I would love that and use the hell out of it. Could I use a model with 10x the tokens/second today? Yes, I would use it immediately and get substantial gains from a faster iteration cycle.
I have to imagine that other professions are going to see similar inflection points at some point. When they do, as seen with Claude Code, demand can increase very rapidly.
* Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
* Surveys are showing that most people are only using AI for a fraction of the time at work, and still reporting significant productivity benefits, even with current models.
The AGI/ASI hype is a distraction, potentially only relevant to the frontier model labs. Even if all model development froze today, there is tremendous untapped demand to be met.
The Metaverse/VR/AR boom was never a boom, with only 2 big companies (Meta, Apple) plowing any "real" money into it. Similarly with crypto, another thing that AI is unjustifiably compared to. I think because people were trying to make it happen.
With the AI boom, however, the largest companies, major governments and VCs are all investing feverishly because it is already happening and they want in on it.
Are they constrained on resources for training, or resources for serving users using pre-trained LLMs? The first use case is R&D, the second is revenue. The ratio of hardware costs for those areas would be good to know.
However, my understanding is that the same GPUs can be used for both training and inference (potentially in different configurations?) so there is a lot of elasticity there.
That said, for the public clouds like Azure, AWS and GCP, training is also a source of revenue because other labs pay them to train their models. This is where accusations of funny money shell games come into play because these companies often themselves invest in those labs.
Edit: agree on the metaverse as implemented/demoed not being much, but that's literally one application
As someone in an AI company right now - Almost every company we work with is using Azure wrapped OpenAI. We're not sure why, but that is the case.
Also, Microsoft Azure hosts its own instances of OpenAI models; it isn't a proxy for OpenAI.
It's the same reason you would use RDS at an AWS shop, even if you really like CloudSQL better.
This is the main reason the big cloud vendors are so well-positioned to suck up basically any surplus from any industry even vaguely shaped like a b2b SaaS.
* https://paulkedrosky.com/honey-ai-capex-ate-the-economy/
* https://news.ycombinator.com/item?id=44609130
Now, it does that at the expense of the average person, but it will definitely prop up the bubble just long enough for the next election cycle to hit.
You're implying that the country exercising financial responsibility to control inflation isn't good.
Not using interest rates to control inflation caused the stagflation crisis of the 70s, which ended only when Volcker set rates to 20%.
Of course, this is just one way that interest rates affect the economy, and it's important to bear in mind that lower interest rates can also stimulate investment, which helps create jobs for average people as well.
Precisely! Yet the big problem in the Anglosphere is that most of that money has been invested in asset accumulation, namely housing, causing a massive housing crisis in these countries.
This is why in a hot economy we raise rates, and in a cold economy we lower them
(oversimplification, but it is a commonly provided explanation)
Not necessarily. Sure, if that money is chasing fixed assets like housing, but if it were invested into production of things to consume, it's not necessarily inflation-inducing, is it? For example, if that money went into expanding the electricity grid and the production of electric cars, the pool of goods to be consumed would expand, so there is less likelihood of inflation.
People are paid salaries to work at these production facilities, which means they have more money to spend, and competition drives people to be willing to spend more to get the outputs. Not all outputs will be scaled; those that aren't experience inflation, as with food and housing today.
About what? Like, seriously, what would they even do other than try to lame-duck him?
The big issue is that Dem approval ratings are even lower than Trump's, so how the hell are they going to gain any seats?
The seeds were planted after Nixon resigned, when it was decided to re-shape the media landscape and move the Overton window rightwards in the 1970s, dismantling social democracy across the West and leading to a gradual reversal of the norms of governance in the US (see Newt Gingrich).
It's been gradual, slow and methodical. It has definitely accelerated but in retrospect the intent was there from the very beginning.
If you see it that way this is just a reversion to the mean.
You could say that was when things reverted back to "normal". The FDR social reconstruction and the post-WW2 economic boom were the exception, an anomaly. But the Scandinavian countries seem to be doing alright. Sure, they have some big problems (Sweden in particular), but daily life for the majority in those countries appears to be better than for a lot of people in the Anglosphere.
That's not the redistricting Newsom wants for 2028, and I tend to agree that Dems have to play the game right now, but I'd really like to see them present some sort of story for why it's not going to happen again.
Nvidia the poster-child of this "bubble" has been getting effectively cheaper every day.
Recently, I've heard many left wingers, as a response to Trump's tariffs, start 1) railing about taxes being too high, and that tariffs are taxes so they're bad, and 2) saying that the US trade deficit is actually wonderful because it gives us all this free money for nothing.
I know all of these are opposite positions to every one of the central views of the left of 30 years ago, but politics is a video game now. Lefties are going out of their way to repeat the old progressive refrain:
> "The way that Trump is doing it is all wrong, is a sign of mental instability, is cunning psychopathic genius and will resurrect Russia's Third Reich, but in a twisted way he has blundered into something resembling a point..."
"...the Fed shouldn't be independent and they should lower interest rates now."
Personally I trust Jerome Powell more than any other part of the government at the moment. The man is made of steel.
[0]: https://www.bloomberg.com/news/articles/2024-07-03/senator-w...
That doesn't really change what I said regarding interest rates though.
These are jobs that normally would have gone to a human and now go to AI. We haven't paid a cent for AI, mind you -- it's all on the ChatGPT free tier or using this tool for the graphics: https://labs.google/fx/tools/image-fx
I could be wrong, but I think we are at the start of a major bloodbath as far as employment goes... in tech mostly, but also in anything that can be replaced by AI.
I'm worried. Does this mean there will be a boom in demand for people in the trades and similar fields? I honestly don't know what to think about the prospects moving forward.
The AI bubble is so big that it's draining useful investment from the rest of the economy. Hundreds of thousands of people are getting fired so billionaires can try to add a few more zeros to their bank account.
The best investment we can make would be to send the billionaires and AI researchers to an island somewhere and not let them leave until they develop an AI that's actually useful. In the meanwhile, the rest of us get to live productive lives.
There probably are a few nuts out there that actually fired people to replace them with AI; I feel like that won't go well for them.
There really is no evidence.
I'll say it's okay to be reserved on this, since we won't know until after the fact, but give it 6-12 months and then we'll know for sure. Until then, I see no reason not to believe there is a culture forming in boardrooms around AI that is driving closed-door conversations about reducing headcount specifically to replace it with AI.
[0]: https://gizmodo.com/the-end-of-work-as-we-know-it-2000635294