I have started to notice this as well over the past several months. In fact, I would say it is orders of magnitude larger than the crypto bubble, and when it bursts it will be significantly more impactful. Right now, everything is propped up on the premise that the hyperscalers will grow EPS proportionately to their investment, and that ROA is being priced into their share prices as a best-case scenario (hope). Maybe it's not ROA at all; maybe it's simply FOMO. We keep citing this 'AI race' as if there is some end objective to 'win', drawing parallels to the nuclear arms race, which only resulted in massive wasted CAPEX in decaying nukes sitting in unused bunkers since the Cold War. (Not to mention it isn't even clear that arms race played a direct factor in the war besides depleting resources.) Everyone is happy right now: execs get stock bonuses, investors get returns, vendors get contracts. Once this is questioned, and history demonstrates it undoubtedly will be, it will cause a cascading collapse. History is a mathematical truth.
ubercore · 6h ago
The usable amount of AI, no matter how optimistic you are about current progression, has to be below the projections the entire AI economy is living on right now.
Matthyze · 5h ago
A considerable group of people think AGI or even ASI is right around the corner
conartist6 · 4h ago
I've never gotten a straight answer as to whether AGI is a good thing for humanity or the economy.
Real AGI would be alive and would be capable of art and music and suffering and community, of course. So there would really be no more need for humans except to shovel the coal (or the bodies of other humans) into the furnaces that power the truly valuable members of society, the superintelligent AIs, which all other aspects of our society would be structured towards serving.
Real AGI might realistically decide to go to war with us, if we've learned anything from current LLMs and their penchant for blackmail.
soiltype · 50m ago
That's all been thought of, yeah.
No, AGI isn't a good thing. We should expect it to go badly, because there are so many ways it could be catastrophic. Bad outcomes might even be the default without intervention. We have virtually no idea how to drive good outcomes of AGI.
AGI isn't being pursued because it will be good; it's being pursued because it is believed to be more-or-less inevitable, and everyone wants to be the one holding the reins, for the best odds of survival and/or being crowned god-emperor (this is pretty obviously Sam Altman's angle, for example).
caleb-allen · 4h ago
Sure, just as a considerable group of alchemists believed the recipe for gold was right around the corner
lm28469 · 4h ago
Remove the people with horses in the race and your "considerable" group becomes much much smaller.
ponector · 4h ago
Coincidence, it is the same group trying to sell "AI" startup/services.
Modern AI is not an intelligence. I wonder what crap they will end up calling AGI.
fuzzfactor · 47m ago
If it was that close I think they would have made well over half their money back by now . . .
freejazz · 1h ago
Don't a considerable group of people think the rapture is right around the corner?
You've got to figure that the mega-capitalists have so much more discretionary accumulated resources relative to government than ever.
Other than government, is there anybody else who can loosen the purse strings a little bit and have it not act as a temporary stimulant as long as it lasts?
Whether they wish it would last, or even provide any benefit to the average person, is another matter; it seems like there are plenty who wouldn't wish more prosperity on anyone who doesn't already have it :\
The only real way for long-term growth would be to plant seeds rather than dispense mere artificial stimulants.
Unless AI makes the general public way more money than the capitalists have spent, it wouldn't be worth any increase whatsoever in the cost of things like energy or hardware. Even non-AI software could become unaffordable if labor costs go up enough to keep top people from being poached by AI companies flush with cash.
I bet even the real estate near the data centers gets more unaffordable, while at the same time clocking a win for the local economy due to the increased cash flow and tax revenue. Except all that additional cash is flowing out of people's pockets, not in :\
etempleton · 1h ago
It is a bubble. It will collapse. The only thing that might cushion the collapse a bit is that most of the capital is from large tech companies that can absorb the fallout and pivot to the next thing. The hardware and infrastructure should be able to be leveraged for other things.
packetlost · 31m ago
Oracle really does seem poised to be the winner here.
etempleton · 20m ago
It was a shrewd move by Oracle.
spaceman_2020 · 5h ago
At the moment, every AI service is dealing with capacity issues. Demand is much bigger than supply.
As long as that remains true, don't see how this bubble will be popped
serial_dev · 5h ago
I’m sure someone has numbers but I do wonder how many of their users pay and whether that covers the free users. It can still be a bubble even with high demand, if you are burning money to serve those users, because you hope one day you will rule the galaxy.
I don’t really have a strong preference, so I just use any service where I’m currently not rate limited. There are many of them and I don’t see much difference between them for day to day use. My company pays for Cursor but I burned through my monthly quota in a day working on a proof of concept that mirrored their SDK in a different language. Was it nice that I could develop a proof of concept? Yes. Would I pay 500 dollars for it from my own pocket? No, I don’t think so.
It’s like those extremely cheap food and grocery delivery apps, they made their food cheap, no delivery fees for a while… of course everyone was using it. Then, they started to run out of VC money, they had to raise prices, then suddenly nobody used them anymore and they went bankrupt. There was demand, but only because of the suppressed prices fueled by VC money.
JimDabell · 4h ago
> I do wonder how many of their users pay and whether that covers the free users.
It doesn’t cover the free users, but that’s normal startup strategy. They are using investor cash to grow as quickly as possible. This involves providing as much free service as they can afford so that some of those free users convert to paid customers. At any point they can decide not to give so much free service away to rebalance in favour of profitability over growth. The cost of inference is plummeting, so it’s getting cheaper to service those free users all the time.
> It’s like those extremely cheap food and grocery delivery apps, they made their food cheap, no delivery fees for a while… […] they started to run out of VC money, they had to raise prices
That’s not the same situation because that makes the product more expensive for the customers, which will hit sales. This isn’t the same as cutting back on a free tier, because in that situation you’re not doing anything to harm your customers’ service.
digitcatphd · 4h ago
Respectfully, starting one's argument with a factually invalid statement is not a good way to argue against a bubble. If by 'every' you mean the foundation model providers, that is not 100% of the 'AI service' market, and even then, I would argue that a lot of this demand will need to be answered by companies measuring ROI once the FOMO and unreasonable expectations settle. My primary argument is that this ROI is driven by speculation rather than empirical measurement.
I have spoken with many companies, and nearly all of them, when speaking about AI, have gotten to the point where they don't even make any sense. A common theme is 'we need AI', but nobody can articulate why, and in fact they get defensive when questioned. It is almost perfectly parallel to the 'we need blockchain' or 'we need a mobile app' arguments. That isn't to say those are not useful technologies, but rapid rise, steep decline, then gradual rise is a recurring theme in tech.
disqard · 1h ago
James Mickens named-and-shamed this viewpoint, which he called "Technological Manifest Destiny".
Others have observed and pointed out his prescience before:
> As long as that remains true, don't see how this bubble will be popped
That's what everybody was saying in February 2000.
izacus · 3h ago
Are those services actually operating with profit to benefit from this demand? Or are they serving that demand while taking losses and showing unrealistically inflated metrics for demand?
conartist6 · 4h ago
Well we'll keep scaling to meet the "demand".
Teachers are demanding not to do the work that is teaching. Lawyers are demanding not to do the work of lawyering. Engineers don't want to do coding and leaders don't want to steer the ship anymore unless it's towards AI.
All the "value" is bullshit. Either AGI arrives, all jobs are over, and it's eternal orgy time, or at some point the lazy AI-using losers will get fired and everyone else will go back to doing work like usual.
DontchaKnowit · 2h ago
This kind of black and white thinking never pans out. The truth is always nuanced.
Eternal orgy time is not possible, will never happen. And if AI is useful, which it is, it will never be abandoned. Somewhere in the middle is the real prognosis
fuzzfactor · 35m ago
I'm not so sure about the middle. In many more things than ever, groups tend to cluster closer to the extremes.
It may "balance" around the middle, but expect it to be noticeably different from now, even without much of an actual middle.
Or the middle could end up being just another group, maybe or maybe not the most prominent one.
georgeplusplus · 4h ago
I can already see the campaign message for the Donald Trump of 2125:
Make teachers great again.
xoac · 3h ago
Make center for kids who can’t read good and wanna learn how to do other things good too.
lm28469 · 4h ago
I'd love to see the stats, though: how much capacity is used for slop vs. how much for actual productive tasks. If half of the capacity is used by bots on social media and by scammers, it doesn't mean much for the economy.
conartist6 · 4h ago
Probably more than half is used to try to undercut good work with bad work
georgeplusplus · 4h ago
What percentage of that capacity is being put towards useful things, and what towards entertainment? The product's main selling point is making us more efficient at our jobs, and if it's primarily being used as entertainment, which is the category everyone I know who uses an LLM besides programmers falls into, then I'd say the expected profit from them is a bubble.
crinkly · 4h ago
User retention is a more important metric. Everyone is silent there. That is directly tied to MRR, long term viability and ROI for investment. If those were positive I'd expect them to be crowing about it, but they aren't.
Capacity just means there is currently more demand than supply, and a number of negative factors might be driving that: users with no ROI (free users), too-rapid growth, poor efficiency, etc.
ponector · 4h ago
At least we will have a huge number of data centers if this bubble bursts. Insane compute overcapacity, as well as in chip manufacturing.
IsTom · 4h ago
I worry about a correction in chip manufacturing after this. Will the fab industry get even more concentrated? What if the remaining fabs scale down and set us back a decade or two in cutting-edge chip research? Will GPUs become extraordinarily expensive?
diegocg · 4h ago
Reminds me of all the optic fiber infrastructure that was built during the dot com bubble
ponector · 3h ago
Yes, this bubble is much better than crypto/NFTs. Same as the railroad rush mentioned above: CAPEX was burned, but lots of stuff was built and remained after the burst.
bamboozled · 1h ago
That's why every layoff is now "because of AI"; it's the perfect cover for "the forecast isn't looking great". No no, we're about to 10x!
j45 · 1h ago
One thing that's different about AI is that a greater percentage of it is applicable in the real world today than in most bubbles, which are more hype and speculation, mostly from non-technical people who are attention farming.
That's separate from the question of funding in the space, some of which is going to be misplaced; the only question is how much.
If anything, this might be a realer version of the dot-com boom.
begueradj · 3h ago
"History is a mathematical truth"
No, history is a web of lies written by the winners, just like your daily news.
jackcosgrove · 19h ago
I'm not sure the comparison is apples to apples, but this article claims the current AI investment boom pales compared to the railroad investment boom in the 19th century.
> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century. This means that the ongoing AI CapEx boom has lots of legroom to run before it reaches parity with the rail road boom of that bygone era.
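For what it's worth, the quoted percentage checks out; a quick sketch (the ~$312 billion direct CapEx figure is inferred here from the quoted $624 billion and the 2x multiplier, not stated in the excerpt):

```python
# Back-of-the-envelope check of the quoted AI CapEx figures.
imputed_capex = 312e9  # assumed direct AI CapEx (USD), inferred from the article
multiplier = 2.0       # Kedrosky's GDP-impact multiplier
us_gdp = 30e12         # estimated US GDP (USD)

gdp_impact = imputed_capex * multiplier
share = gdp_impact / us_gdp * 100
print(f"GDP impact: ${gdp_impact / 1e9:.0f}B = {share:.2f}% of GDP")
# prints: GDP impact: $624B = 2.08% of GDP
```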
tripletao · 16h ago
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.
Has anyone found the source for that 20%? Here's a paper I found:
> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.
The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.
I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:
> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.
That would be more believable, but the comparison with AI spending in a single year would not be meaningful.
theologic · 8h ago
By the way, it's always nice when somebody actually tries to double-check somebody else's research, especially when you hear numbers that just sound crazy. Another factoid: GDP (or GNP) for all practical purposes wasn't rigorously computed by the government until about 1944. I believe a large part of our viewpoint on what happened in the 1800s is primarily based on census data. But obviously, if you're trying to measure a 7-year event using a census that happens every 10 years, there are going to be a lot of gaps in the whisker chart.
potato3732842 · 5h ago
Is 20% on railroad actually crazy? We spend 20% on make-work for the healthcare industry.
In a majority agrarian economy where a lot of output doesn't go toward GDP (e.g. milking your own damn cow to feed milk to your own damn family won't show up) I would expect "new hotness" booms to look bigger than they actually are.
Onewildgamer · 4h ago
So we're much closer to the per-year spend the US saw during the railroad construction era.
At this rate, I hope we get some useful, public, and reasonably priced infrastructure out of this spending in about 5-8 years, just like the railroads.
jefftk · 15h ago
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.
When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)
eru · 14h ago
Yes, that was a problem back then, and is also a problem today, but in different ways.
First, GDP still doesn't count you making your own meals. Second, when e.g. free Wikipedia replaces paid-for encyclopedias, it makes society better off but technically decreases GDP.
However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.
jefftk · 13h ago
This has always been an issue with GDP, but it's a much larger issue the farther back you go.
While the correlation is reasonably good, imagine very roughly what it would be like if measured GDP growth averaged 3% annually while the overall economy grew at 2%. If we speculate that 80% of the economy is counted in GDP today, then only about 10% would have been counted 200 years ago.
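That compounding gap can be sketched in a few lines (using the hypothetical 3%/2% growth rates and 80% coverage from the comment; it lands near the ~10% figure):

```python
# If measured GDP grows 3%/yr while the whole economy grows 2%/yr,
# the share of the economy that GDP captures shrinks as you go back in time.
measured_growth = 1.03
true_growth = 1.02
coverage_today = 0.80  # hypothetical: 80% of the economy counted in GDP now
years_back = 200

# Coverage N years ago = today's coverage scaled by the growth-rate gap.
coverage_then = coverage_today * (true_growth / measured_growth) ** years_back
print(f"Estimated GDP coverage {years_back} years ago: {coverage_then:.0%}")
```

The exact answer is about 11%, i.e. "roughly 10%" as the comment says.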
epicureanideal · 11h ago
It would be great if there was a "GDP + non-transactional economy" metric. Does one exist, or is there a relatively straightforward way to construct one?
danlitt · 9h ago
I don't think we have a way to reliably estimate the value of non-transactional goods, because by definition nobody gives them a price.
chii · 7h ago
but an estimate could be had if you use an imputed price of similar goods/services that _are_ transactional? So the problem reduces down to counting these events - perhaps a survey and such could be used to estimate their frequency etc?
eru · 7h ago
Why are you so pessimistic? Just because something is hard and you get big error bars doesn't mean we can't do it at all.
If you wanted to, you could look at eg black market prices for kidneys to get an estimate for how much your kidney is worth. Or, less macabre, you can look at how much you'd have to pay a gardener to mow your lawn to see what the labour of your son is worth.
pjc50 · 7h ago
I guess "GDP doesn't count you doing your own thinking" is going to become a problem.
onlyrealcuzzo · 15h ago
I don't know if the economy could ever be accurately reduced to "good" or "bad".
What's good for one class is often bad for another.
Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
For some people that's great. For others, not so great.
Maybe some economies are great for everyone, but this is definitely not one of those: it's great for some people and bad for others.
fc417fc802 · 15h ago
> Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
In today's US? Debatable, but on the whole probably not.
In a hypothetical country with sane health care and social safety net policies? Yes that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs making it a much more straightforward collective optimization problem.
eru · 13h ago
The US spends more per capita on their social safety net than almost all other countries, including France and the UK.
The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.
For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.
(The data is for 2022.)
Though I realise you asked for sane policies. I can't comment on that.
I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.
esseph · 10h ago
You're looking at costs, not outcomes. Our outcomes aren't in alignment with our costs.
(Too many people getting their metaphorical pound of flesh, and bad incentives.)
fc417fc802 · 10h ago
This. My intent was to refer to outcomes. My hypothetical country was one where being unemployed might lose you various luxuries but would still see you with guaranteed food on the table and a roof over your head. Under such conditions there's no need to consider a rise in the unemployment metric to be a major downside except for the inevitable ballooning cost to the tax base.
rjsw · 5h ago
The UK tried that experiment in the 80s [1], it didn't work well.
> Though I realise you asked for sane policies. I can't comment on that.
I don't think you are disagreeing with them.
bugglebeetle · 13h ago
> The US spends more per capita on their social safety net than almost all other countries, including France and the UK.
To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!
Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…
dash2 · 10h ago
It's easier to see your own society's faults. The NHS also has waste, most obviously the deadweight loss caused by queuing. I know someone who went back to her own country to get treated. Not remarkable, except that country was Ukraine.
bugglebeetle · 9h ago
Yes, because the UK’s two dominant (and right wing) parties have been actively sabotaging it for years, chasing after a despicable dream of homegrown middlemen and fraudsters, envious as they are of the unchecked criminality of their friends from across the pond. Quelle surprise, things have gotten worse.
dash2 · 8h ago
Spending grew about 8% a year under New Labour on average, which doesn't seem like sabotage to me.
eru · 7h ago
Also if both dominant political parties are supposedly so against the NHS, why don't they just abolish it?
HPsquared · 4h ago
They need to be sneaky. Same with a lot of other unpopular policies which nevertheless (somehow...) have support from "both sides".
tremon · 2h ago
They need the public to (nominally) assent to it first, otherwise it'd be suicide. They're using the republican playbook: overburden the sector with tasks and regulations while underfunding it, and allow for private competition that is not subject to the same regulatory burden. Then in a decade or so, you can claim that the "free market" works better and the public won't kick up too much of a fuss.
lossolo · 13h ago
It’s not just how much you spend on healthcare, but what that spending actually delivers. How much does an emergency room visit cost in the U.S. compared to the UK or France? How do prescription drug prices in the U.S. compare to those in the EU? When you look at what Americans pay relative to outcomes, the U.S. has one of the most inefficient healthcare systems among OECD countries.
eru · 13h ago
If you want to see an efficient healthcare system in a rich country, have a look at Singapore. They spend far less than eg the UK.
incone123 · 8h ago
You imply the UK is a rich country...
eru · 7h ago
Yes?
onlyrealcuzzo · 4h ago
> In a hypothetical country with sane health care and social safety net policies? Yes that would be hugely beneficial.
I think you're forgetting the Soviet Union, which looked great on paper until it turned out that it wasn't actually great...
Real GDP can go up, and it doesn't HAVE to mean you are producing more of anything valuable; it can, in fact, mean that you're not producing enough of what you need and a bunch of what you don't need.
A very simple way to view this is: currently x% of GDP is waste. If real GDP goes up 4% but the percentage of waste goes from 1% to 8%, you are clearly doing worse.
This is a simplified version of what happened in the Soviet Union.
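The arithmetic behind that example (the 4%, 1%, and 8% figures are the hypothetical numbers above):

```python
# Headline real GDP rises 4%, but the waste share rises from 1% to 8%:
# the useful (non-waste) output can shrink even as GDP grows.
gdp_before, waste_before = 1.00, 0.01
gdp_after, waste_after = 1.04, 0.08

useful_before = gdp_before * (1 - waste_before)  # 0.99
useful_after = gdp_after * (1 - waste_after)     # 1.04 * 0.92 = 0.9568
print(useful_after < useful_before)  # prints: True, useful output fell
```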
marcus_holmes · 14h ago
Agree completely. The idea that an increasing GDP or stock market is always good has taken a beating recently. Mostly because it seems that the beneficiaries of that number increase are the same few who already have more than enough, and everyone else continues to decline.
We need new metrics.
eru · 14h ago
What's a class?
decimalenough · 18h ago
There is obvious utility to railroads, especially in a world with no cars.
The net utility of AI is far more debatable.
Falkon1313 · 7h ago
It's more than that even. AI may have plenty of utility. But does the massive capex on GPUs that will all be obsolete in a couple years?
You can still run a train on those old tracks. And it'll be competitive. Sure you could build all new tracks, but that's a lot more expensive and difficult. So they'll need to be a whole lot better to beat the established network.
But GPUs? And with how much tech has changed in the last decade or two and might in the next?
We saw cryptocurrency mining go from CPU to GPU to FPGA to ASICs in just a few years.
We can't yet tell where this fad is going. But there's fair reason to believe that, even if AI has tons of utility, the current economics of it might be problematic.
HPsquared · 4h ago
Yeah you'd think the rapid obsolescence of the hardware would dampen P/E ratios a little. It has to pay for itself quickly.
P/E is, after all, given in the implied unit of "years". (Same as other ratios like debt/GDP).
rockemsockem · 18h ago
I'm continually amazed to find takes like this. Can you explain how you don't find clear utility, at the personal level, from LLMs?
I am being 100% genuine here, I struggle to understand how the most useful things I've ever encountered are thought of this way and would like to better understand your perspective.
tikhonj · 14h ago
Can't speak for anyone else, but for me, AI/LLMs have been firmly in the "nice but forgettable" camp. Like, sometimes it's marginally more convenient to use an LLM than to do a proper web search or to figure out how to write some code—but that's a small time saving at best, it's less of a net impact than Stack Overflow was.
I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning which, ultimately, is an O(1) cost for O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...
So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.
On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.
And that's living in the Bay Area where the trains aren't great. Bart, Caltrain and Amtrak disappearing would have an orders of magnitude larger effect on my life than if LLMs stopped working.
And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.
knowitnone2 · 10h ago
No matter how good or fast you are, you will never beat the LLM. What you're saying is akin to claiming your mental math is faster than a calculator, and I'm willing to bet it's not. LLMs are not perfect and will require intervention and fixing, but if they can get you 90% of the way there, that's pretty good. In the coming years, you'll find your peers performing much faster than you (assuming you program for a living) and you will have no choice. But you do you.
tikhonj · 10h ago
Fun story: when I interned at Jane Street, they gave out worksheets full of put-call parity calculations to do in your head because, when you're trading, being able to do that sort of calculation at a glance is far faster and more fluid than using a calculator or computer.
So for some professionals, mental math really is faster.
Make of that what you will.
WD-42 · 10h ago
LLMs do not work the same way calculators do, not even close.
globular-toast · 8h ago
Beat an LLM at what? Lines of code per minute? Certainly not. But that's not my job. If anything I try to minimise the amount of code I output. On a good day my line count will be negative.
Mathematicians are not calculators. Programmers are not typists.
haganomy · 9h ago
So now programmers add value when they write more code faster? Curious how this was anathema but is now clear evidence of LLM-driven coding superiority.
The math that isn't mathing is even more basic, though. This is a Concorde situation all over again. Yes, supersonic passenger jets were amazing, and they did reach production. But the economics were not there.
Yes, GPU farms deliver some conveniences that are real. But after 1.6 trillion dollars, it's not at all clear that they are a net gain.
bluefirebrand · 14h ago
> Can you explain how you don't find clear utility, at the personal level, from LLMs?
Sure. They don't meaningfully improve anything in my life personally.
They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either
knowitnone2 · 10h ago
So you never read the summary at the top of Google search results to get the answer? Because it provides the answer to most of my searches. "They don't improve my work experience": that's fair, but perhaps you haven't really given it a try? "They don't improve the quality of my online interactions": but how do you know? LLMs are being used to create websites, generate logos, images, memes, art, videos, stories. You've already been entertained by them and not even known it. "I don't think they improve the quality of the society I live in either": That's a feeling, not a fact.
AngryData · 7h ago
I never do, because I still don't trust its answers without also seeing a secondary source to confirm them. The first result or two is already correct 99% of the time, and often has source citations telling me how that conclusion or information was reached, which matters if I'm dealing with a potential edge case.
whatarethembits · 1h ago
I generally recognise the utility of AI, but on this particular point it has been a net negative, if I add up the time I wasted believing a summarised answer, getting some length further into a task, only to find the answer was wrong, and then having to backtrack and redo all that work.
GJim · 1h ago
> so you never read the summary at the top of Google search results to get the answer
No.
Because I cannot trust it. (Especially when it gives no attributions).
bluefirebrand · 10h ago
> so you never read the summary at the top of Google search results to get the answer because it provides the answer to most of my searches
Unfortunately yes I do, because it is placed in a way to immediately hijack my attention
Most of the time it is just regurgitating the text of the first link anyways, so I don't think it saves a substantial amount of time or effort. I would genuinely turn it off if they let me
> That's a feeling, not a fact
So? I'm allowed to navigate my life by how I feel
ryao · 9h ago
If you find it annoying, why not configure a custom blocking rule in an adblocker to remove it?
bluefirebrand · 12m ago
Good suggestion, thanks. I might just do that
incone123 · 8h ago
I've read some complete nonsense in those summaries. I use LLMs for other things but I don't find this application useful because I would need to trust it, and I don't.
rockemsockem · 14h ago
Have you even tried using them though? Like in earnest? Or do you see yourself as a conscientious objector of sorts?
ruszki · 11h ago
This whole topic reminds me of the arguments for vi and fast typing. I was always baffled, because in the 25 years I've been coding, typing was never such a huge chunk of my time that it would matter.
I have the same feeling with AI.
It clearly cannot produce the quality of code, architecture, and features which I require from myself. I also want to understand what's written, not say "it works, it's fine <inserting dog with coffee image here>", and not copy-paste a terrible StackOverflow answer which doesn't need half of its code, and which clearly nobody who answered sat down and tried to understand.
Of course, not everybody wants these things, and I've seen several people who were fine with not understanding what they were doing, even before AI. Now they are happy AI users. But it's clear to me that it's not beneficial in terms of salary, promotion, or political power.
So what’s left is that it types faster… but that was never an issue.
It can be better, however. There was a first case, just about a month ago, when one of them could answer a problem better than anything else I knew or could find via Kagi/Google. But generally speaking it's not there at all. Yet.
bluefirebrand · 13h ago
I have tried using them frequently. I've tried many things for years now, and while I am impressed I'm not impressed enough to replace any substantial part of my workflow with them
At this point I am somewhat of a conscientious objector though
Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"
ehnto · 6h ago
Much work is not "text in, text out", and much work is not digital at all. But even in the digital world, inserting an LLM into a workflow is just not always that useful.
In fact much automation, code or otherwise, benefits from or even requires explicit, concise rules.
It is far quicker for me to already know, and write, an SQL statement, than it is to explain what I need to an LLM.
It is also quite difficult to get LLMs into a lot of processes, and I think big enterprises are going to really struggle with this. I would absolutely love AI to manage some Windows servers that are in my care, but they are three VMs deep in a remote desktop stack that gets me into a DMZ/intranet. There's no interface, and how would an LLM help anyway? What I need is concise, discrete automations, not a chat bot interface to instruct every day.
To be clear I do try to use AI most days, I have Claude and I am a software developer so ideally it could be very helpful, but I have far less use for it than say people in the strategy or marketing departments for example. I do a lot of things, but not really all that much writing.
roncesvalles · 12h ago
As a dev, I find that the personal utility of LLMs is still very limited.
Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.
Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.
You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.
Other elephants in the room:
- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?
- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.
The emperor has no clothes.
ryao · 9h ago
> Are LLMs enabling something that was impossible before?
I would say yes when the LLM is combined with function calling to allow it to do web searches and read web pages. It was previously impossible for me to research a subject within 5 minutes when it required doing several searches and reviewing dozens of search results (not just reading the list entries, but reading the actual HTML pages). I simply cannot read that fast. An LLM with function calling can do this.
The other day, I asked it to check the Linux kernel sources to tell me which TCP connection states for a closing connection would not return an error to send() with MSG_NOSIGNAL. It not only gave me the answer, but made citations that I could use to verify the answer. This happened in less than 2 minutes. Very few developers could find the answer that fast, unless they happen to already know it. I doubt very many know it offhand.
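To illustrate the flag in question for anyone unfamiliar with it, here is a minimal sketch of the userspace behavior (not the kernel-state answer itself): a Unix socketpair stands in for a TCP connection whose peer has gone away, and the Linux-specific MSG_NOSIGNAL flag asks the kernel for an EPIPE error instead of a SIGPIPE signal.

```python
import errno
import socket

# A Unix socketpair stands in for a TCP connection whose peer has closed.
a, b = socket.socketpair()
b.close()  # the peer goes away

try:
    # MSG_NOSIGNAL makes send() fail with EPIPE rather than raising SIGPIPE.
    # (Python happens to ignore SIGPIPE by default; in C this flag is the
    # difference between an error return and the process being killed.)
    a.send(b"hello", socket.MSG_NOSIGNAL)
    print("send succeeded")
except OSError as e:
    print("send failed with", errno.errorcode[e.errno])  # EPIPE
finally:
    a.close()
```

Which TCP close states still accept the send without an error is exactly the part that required digging through the kernel sources.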
Beyond that, I am better informed than I have ever been since I have been offloading previously manual research to LLMs to do for me, allowing me to ask questions that I previously would not ask due to the amount of time it took to do the background research. What previously would be a rabbit hole that took hours can be done in minutes with minimal mental effort on my part. Note that I am careful to ask for citations so I can verify what the LLM says. Most of the time, the citations vouch for what the LLM said, but there are some instances where the LLM will provide citations that do not.
knowitnone2 · 10h ago
do cars enable something that was impossible before? bikes? shoes? clothing? Your answer would be No.
danlitt · 8h ago
Yes, obviously. Commuting between cities would be an example.
roncesvalles · 10h ago
If your implication is that LLM-assisted coding to non-LLM-assisted coding is like motorcar to horse buggy, that is just not the case.
ryao · 9h ago
I think he was referring to the ability to go from A to B within a certain amount of time. There is a threshold at which it is possible for a car, yet impossible for a horse and buggy.
That said, I recently saw a colleague use an LLM to make a non-trivial UI for Electron in HTML/CSS/JS, despite knowing nothing about any of those technologies, in less time than it would have taken me to do it. We had been in the process of devising a set of requirements; he fed his version of them into the LLM, did some back and forth with it, showed me the result, got feedback, fed my feedback back into the LLM and got a good solution. I had suggested that he make a mockup (a drawing in kolourpaint, for example) for further discussion, but he surprised me by using an LLM to make a functional prototype in place of the mockup. It was a huge time saver.
roncesvalles · 6h ago
The issue is that the 'B' is not very consequential.
Consider something like Shopify - someone with zero knowledge of programming can wow you with an incredible ecommerce site built through Shopify. It's probably like a 1000x efficiency improvement versus building one from scratch (or even using the popular lowcode tools of the era like Magento and Drupal). But it won't help you build Amazon.com, or even Nike.com. It won't even get you part of the way there.
And LLMs, while more general/expressive than Shopify, are inferior to Shopify at doing what Shopify does i.e. you're still better off using Shopify instead of trying to vibe-code an e-commerce website. I would say the same line of thinking extends to general software engineering.
satyrun · 4h ago
Or you are just not creative at all and not making anything that interesting yourself.
agent_turtle · 18h ago
There was a study recently that showed not only that devs overestimated the time saved using AI, but that they were net negative compared to the control group.
Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
> Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
To be clear, they are surmising that GenAI is already having a productivity gain.
agent_turtle · 17h ago
The article you gave is derived from a poll, not a study.
As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o
It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25%-33% productivity gains do match the gains shown by at least 3 other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.
foolswisdom · 13h ago
It's worth noting that the METR paper that found decreased productivity also found that many of the developers thought the work was being sped up.
rockemsockem · 16h ago
I'm not talking about time saving. AI seems to speed up my searching a bit since I can get results quicker without having to find the right query then find a site that actually answers my question, but that's minor, as nice as it is.
I use AI in my personal life to learn about things I never would have without it because it makes the cost of finding any basic knowledge basically 0. Diet improvement ideas based on several quick questions about gut functioning, etc, recently learning how to gauge tsunami severity, and tons of other things. Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
How much have you actually tried using LLMs and did you just use normal chat or some big grand complex tool? I mostly just use chat and prefer to enter my code in artisanally.
lisbbb · 13h ago
How much of that is junk knowledge, though? I mean, sure, I love looking up obscure information, particularly about cosmology and astronomy, but in reality, it's not making me better or smarter, it's just kind of "science junk food." It feels good, though. I feel smarter. I don't think I am, though, because the things I really need to work on about myself are getting pushed aside.
flkiwi · 15h ago
This is kind of how I use it:
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable
It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.
agent_turtle · 10h ago
OpenAI is currently being valued in the hundreds of billions. That's an insane number for creating a product that "speeds up searching a bit".
fzeroracer · 15h ago
Why not just look up the information directly instead of asking a machine that you can never truly validate?
If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.
If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.
rockemsockem · 15h ago
See my previous statement
> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
You're practically saying that looking at an index in the back of a book is a meaningless step.
It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.
Edit:
Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.
fzeroracer · 15h ago
Again, why would you just not use Wikipedia as your index? I'm saying why would you use the index that lies and hallucinates to you instead of another perfectly good index elsewhere.
You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?
rockemsockem · 15h ago
Because the middleman is faster and practically never lies/hallucinates for simple queries, the middleman can handle vague queries that Google and Wikipedia cannot.
The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.
fzeroracer · 9h ago
> Because the middleman is faster and practically never lies/hallucinates for simple queries
How do you KNOW it doesn't lie/hallucinate? In order to know that, you have to verify what it says. And in order to verify what it says, you need to check other outside sources, like Wikipedia. So what I'm saying is: Why bother wasting time with the middle man? 'Vague queries' can be distilled into simple keyword searches: If I want to know what a 'Tsunami' is I can simply just plug that keyword into a Wikipedia search and skim through the page or ctrl-f for the information I want instantly.
If you assume that it doesn't lie/hallucinate because it was right on previous requests then you fall into the exact trap that blows your foot off eventually, because sometimes it can and will hallucinate over even benign things.
lisbbb · 13h ago
A lot of formerly useful search tools, particularly Google, are just trash now, absolute trash.
ares623 · 15h ago
Are the "here and there" tasks ones that previously had so little value that they were always stuck in the backlog? i.e. do the parts where it helps have very little value in the first place?
decimalenough · 15h ago
I actually do get clear utility, with major caveats, namely that I only ask things where the answer is both well known and verifiable.
I still do 10-20x regular Kagi searches for every LLM search, which seems about right in terms of the utility I'm personally getting out of this.
satyrun · 4h ago
IMO it's a combination of already being a really great programmer and then being either not all that intellectually curious, or so well read and so intellectually curious that LLMs are a step down from being a voracious reader of books and papers.
For me, LLMs are also the most useful thing ever, but I was a C student in all my classes. My programming is a joke. I have always been intellectually curious but I am quite lazy. I have always had tons of ideas to explore though, and LLMs let me explore ideas that I either wouldn't be able to otherwise or would be too lazy to bother with.
danlitt · 8h ago
Like other commenters here, when I try to use them to help with my work, they don't. It's that simple. I have tried AI coding assistants, and they just guess incorrectly. If I know the answer, they generally give me the same answer. If I don't know the answer, they give me gibberish that ends up wasting time. I would love to look over the shoulder of an AI booster who had a really good interaction, because it's hard for me to believe they didn't just already know what they were looking for.
kazinator · 15h ago
Gemini wasted my time today, assuring me that if I want a git bundle that only has the top N commits, yet is cleanly clone-able, I can just make a --depth N clone of the original repo and do a git bundle create ... --all.
Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.
So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)
Obviously, I wasn't in the "the right mindset" today.
This mindset is one of two things:
- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.
- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to, in effect, create a query which zeroes in on the right nugget of information that made an appearance in the training data.
It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically you do most of the work, while it looks as if it solved the problem.
I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.
The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.
Let's try something else:
Q: "What modes of C major are their own reflection?"
A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."
Very nice sounding and grammatical, but gapingly wrong on every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections. Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
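These reflection claims are mechanical enough to check with a few lines, treating reflection around the root as reversal of the interval pattern:

```python
# Interval patterns (in semitones) for the seven modes of C major,
# generated by rotating the Ionian pattern.
IONIAN = [2, 2, 1, 2, 2, 2, 1]
NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]
MODES = {NAMES[i]: IONIAN[i:] + IONIAN[:i] for i in range(7)}

# Reflecting a mode around its root reverses its interval pattern;
# find which mode each reversed pattern corresponds to.
for name, pattern in MODES.items():
    mirror = next(n for n, p in MODES.items() if p == pattern[::-1])
    print(f"{name:>10} reflects to {mirror}")
```

Running it shows Dorian as the only self-mirror, with Ionian/Phrygian, Lydian/Locrian, and Mixolydian/Aeolian as the mirror pairs.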
I once loaded a NotebookLM with materials about George Russell's Lydian Chromatic Concept and Tonal Gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.
AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.
lisbbb · 13h ago
Thank you! This is what I've been trying to tell people about LLMs. They don't hold up. They're like those Western movie set towns that look normal from the front, but when you walk around behind them, you see it is all just scaffolding with false fronts.
ishyaboibro · 13h ago
what model did you ask? here's the exact reply I received from Claude Sonnet, which appears to be exactly the answer you were expecting:
"Among the seven modes of C major, only Dorian is its own reflection.
Understanding Mode Reflections
When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:
Ionian: W-W-H-W-W-W-H
Dorian: W-H-W-W-W-H-W
Phrygian: H-W-W-W-H-W-W
Lydian: W-W-W-H-W-W-H
Mixolydian: W-W-H-W-W-H-W
Aeolian: W-H-W-W-H-W-W
Locrian: H-W-W-H-W-W-W
The Palindromic Nature of Dorian
Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.
Mirror Pairs Among the Other Modes
The remaining modes form mirror pairs with each other:
Ionian-Phrygian: Mirror pair
Lydian-Locrian: Mirror pair
Mixolydian-Aeolian: Mirror pair
For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.
This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes"
kazinator · 13h ago
> what model did you ask?
Are you hoping to disprove my point by cherry picking the AI that gets the answer?
I used Gemini 2.5 Flash.
Where can I get an exact list of stuff that Gemini 2.5 Flash does not know that Claude Sonnet does, and vice versa?
Then before deciding to consult with AI, I can consult the list?
simianwords · 12h ago
2.5 flash is particularly cheap and fast, I think 2.5 pro would have got all the answers correct - at least it gets this one correct.
Yokolos · 7h ago
I get a lot of garbage out of 2.5 Pro and Claude Sonnet and ChatGPT. There's always this "this is how you solve it", I take a close look and it's clearly broken, I point it out and it's all "you're right, this is a common issue". Okay, so why do we have to do this song and dance a million times to arrive at the actually correct answer?
kazinator · 11h ago
Why doesn't Flash get it correct, yet comes up with plausible sounding nonsense? That means it is trained on some texts in the area.
What would make 2.5 Pro (or anything else) categorically better would be if it could say "I don't know".
There will be things that Claude 3.7 or Gemini Pro will not know, and the interpolations they come up with will not make sense.
simianwords · 9h ago
Model accuracy goes up as you use heavier models. Accuracy is always preferable and the jump from Flash to Pro is considerable.
You must rely on your own internal model in your head to verify the answers it gives.
On hallucination: it is a problem but again, it reduces as you use heavier models.
Macha · 3h ago
> You must rely on your own internal model in your head to verify the answers it gives
This is what significantly reduces the utility, if it can only be trusted to answer things I know the answer to, why would I ask it anything?
simianwords · 3h ago
It's the same reason I find it useful to read comments on Reddit and ask people for their advice and opinions.
Gemini 2.5 Flash is meant for things that have a higher tolerance for mistakes as long as the costs are low and responses are quick. Claude Sonnet is similar, although the trade off it makes between mistake tolerance and cost/speed is more in favor of fewer mistakes.
Lately, I have been using Grok 4 and I have had very good results from it.
iusewindows · 11h ago
Today I read a stupid Hackernews comment about how AI is useless. Therefore Hackernews is stupid. Oh, I need a filtered list of which comments to read?
Do you build computers by ordering random parts off Alibaba and complaining when they are deficient? You are complaining that you need to RTFM for a piece of high tech?
kazinator · 10h ago
> Oh, I need a filtered list of which comments to read?
If they are about something you're not sure about, and you're making decisions based on them ... maybe it would actually help, so yes?
> Do you build computers by ordering random parts off Alibaba and complaining when they are deficient?
We build computers using parts which are carefully documented by data sheets, which tell you exactly for what ranges of parameters their operation is defined and in what ways. (temperatures, voltages, currents, frequencies, loads, timings, typical circuits, circuit board layouts, programming details ...)
harimau777 · 15h ago
I think that there's a strong argument to be made that the negatives of having to wade through AI slop outweigh the benefits that AI may provide. I also suspect that AI could contribute to the enshittification of society; e.g. AI therapy being substituted for real therapy, AI products displacing industrial design, etc.
fc417fc802 · 14h ago
> e.g. AI therapy being substituted for real therapy, AI products displacing industrial design, etc.
That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.
Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.
There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)
rockemsockem · 15h ago
What is this AI slop that you're wading through and where is it?
Spam emails are not any worse for being verbose, I don't recognize the sender, I send it straight to spam. The volume seems to be the same.
You don't want an AI therapist? Go get a normal therapist.
I have not heard of any AI product displacing industrial design, but if anything it'll make it easier to make/design stuff if/when it gets there.
Like are these real things you are personally experiencing?
shusaku · 12h ago
The concerning thing is that AI contrarianism is being left wing coded. Imagine you’re fighting a war and one side decides “guns are overhyped, let’s stick with swords”. While there is a lot of hype about AI, even the pessimistic take has to admit it’s a game changing tech. If it isn’t doing anything useful for you, that’s because you need to get off your butt and start building tools on top of it.
Especially people on the left need to realize how important their vision is to the future of AI. Right now you can see the current US admin having zero concern for AI safety or carbon use. If you keep your head in the sand saying "bubble!", that's no problem. But if this is here to stay then you need to get involved.
mopsi · 10h ago
Last night, I asked an LLM to produce an /etc/fstab entry for connecting to a network share with specific options. I was too lazy to look up the options from the manual. It gave me the options separated by semicolons, which is invalid because the config file requires commas as separators.
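For reference, this is the shape such an entry takes (a CIFS share as an example; the server, share, mount point, and credentials file here are placeholders). The mount options in the fourth field are comma-separated, like everything else in fstab:

```
# /etc/fstab: device  mountpoint  type  options  dump  pass
//server/share  /mnt/share  cifs  credentials=/etc/samba/creds,uid=1000,gid=1000,_netdev  0  0
```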
I honestly don't see technology that stumbles over trivial problems like these as something that will replace my job, or any job that is not already automatable within ten thousand lines of Python, anytime soon. The gap between hype and actual capabilities is insane. The more I've tried to apply LLMs to real problems, the more disillusioned I've become. There is nothing, absolutely nothing, no matter how small the task, that I can trust LLMs to do correctly.
ryao · 8h ago
Which one? There are huge variations between LLMs. Was this a frontier thinking model with tool use? Did you ask it review online references before presenting an answer?
gruez · 18h ago
>The net utility of AI is far more debatable.
I'm sure if you asked the luddites the utility of mechanized textile production you'd get a negative response as well.
decimalenough · 18h ago
Railroads move people and cargo quickly and cheaply from point A to point B. Mechanized textile production made clothing, a huge sink of time and resources before the industrial age, affordable to everybody.
What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).
azeirah · 18h ago
For learning with self-study it has been amazing.
gamblor956 · 18h ago
Until you dive deeper and discover that most of what the AI agents provided you was completely wrong...
There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.
simonw · 15h ago
"Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."
Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.
If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.
As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.
Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.
JimDabell · 15h ago
> > "Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."
> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.
Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.
Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.
Gud · 18h ago
That has not been the case for me. I use LLMs to study German, so far it’s been an excellent teacher.
I also use them to help me write code, which it does pretty well.
rockemsockem · 16h ago
I almost always validate what I get back from LLMs and it's usually right. Even when it isn't it still usually gets me closer to my goal (e.g maybe some UX has changed where a setting I'm looking for in an app has changed, etc).
IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.
Btw if you only think ChatGPT is slightly better than what we had a decade ago then I do not believe that you have used any chat bots at all, either 10 years ago or recently, because that's actually a completely insane take.
simonw · 15h ago
> So many people are using the "slightly better chatbots" every single day.
> This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year.
fc417fc802 · 14h ago
At a minimum, presumably once it arrives it will provide the consumer custom software solutions which are clearly a huge sink of time and resources (prior to the AI age).
You're looking at the prototype while complaining about an end product that isn't here yet.
osigurdson · 14h ago
I don't have that negative of a take but agree to some extent. The internet, mobile, AI have all been useful but not in the same way as earlier advancements like electricity, cars, aircraft and even basic appliances. Outside of things that you can do on screens, most people live exactly the same way as they did in the 70s and 80s. For instance, it still takes 30-45 minutes to clean up after dinner - using the same kind of appliances that people used 50 years ago. The same goes for washing clothes, sorting socks and other boring things that even fairly rich people still do. Basically, the things people dreamed about in the 50s - more wealth, more leisure time, robots and flying cars really were the right dream.
AngryData · 7h ago
The luddites were often the ones that built the mechanized looms. They had nothing against mechanized looms; they had everything against the business owners using their workers' talents and knowledge to build an entire operation, only to later undercut their wages and/or replace them with lesser-paid unskilled workers and reduce the quality of life of their entire community.
Getting people to associate the luddites as anti-technology zealots rather than pro-labor organization is one of the most successful pieces of propaganda in history.
Macha · 3h ago
People will also use "look, society was fine afterwards" as proof the luddites were wrong, but if you look at the fact that the growth of industrial-revolution cities was driven by importing more people from the countryside than died of disease, it's not clear at all that they were wrong about its impact on their society, even if it worked out alright for us in the aftermath.
gruez · 1h ago
>The luddites were often the ones that built the mechanized looms.
Source? Skimming the Wikipedia article, it definitely sounds like most were former skilled textile workers who were upset that they were replaced with unskilled workers operating the new machines.
> They had nothing against mechanized looms, they had everything against the business owners using their workers talents and knowledge to build an entire operation only to later undercut their wages and/or replace them with lesser paid unskilled workers and reduce the quality of life of their entire community.
Sounds a lot like the anti-AI sentiment today, e.g. "I'm not against AI, I'm just against it being used by evil corporations so they don't have to hire human workers". The "AI slop" argument also resembles the luddites objecting to the new machines on the grounds of quality (also from Wikipedia), although to be fair that was only a passing mention.
GJim · 1h ago
> Getting people to associate the luddites as anti-technology zealots
Interestingly....
..... the fact that luddites also called for unemployment compensation and retraining for workers displaced by the new machinery probably makes them among the most forward-thinking and progressive people of the 1800s.
no_wizard · 14h ago
Luddites weren't anti-technology at all[0]; in fact they were quite adept at using technology. It was a labor movement that fought for worker rights in the face of new technologies.
They laughed at Einstein, but they also laughed at Bozo the Clown.
This sort of “other people were wrong once, so you might be too” comment is really pointless.
trod1234 · 17h ago
Apples to oranges.
Luddites weren't at a point where every industry sees individual capital formation/demand for labor trend towards zero over time.
Prices are ratios in the currency between factors and producers.
What do you suppose happens when the factors can't buy anything because there is nothing they can trade? Slavery has quite a lot of historic parallels with the trend towards this. Producers stop producing when they can make no profit.
You have a deflationary (chaotic) spiral towards socio-economic collapse, under the burden of debt/money-printing (as production risk). There are limits to systems, and when such limits are exceeded, great destruction occurs.
Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.
shadowgovt · 18h ago
With this generation of AI, it's too early to tell whether it's the next railroad, the next textile machine, or the next way to lock your exclusive ownership of an ugly JPG of a multicolored ape into a globally-referenceable, immutable datastore backed by a blockchain.
bgwalter · 18h ago
The mechanical loom produced a tangible good. That kind of automation was supposed to free people from menial work. Now they are trying to replace interesting work with human-supervised slop, which is a stolen derivative work in the first place.
The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.
Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.
harimau777 · 15h ago
I mean, for them it probably was.
skybrian · 15h ago
Computing is fairly general-purpose, so I suspect that the data centers at least will be used for something. Reusing so many GPUs might be harder, but not as bad as ASICs. There are a lot of other calculations they could do.
blibble · 15h ago
a data centre is a big warehouse
the vast expense is on the GPU silicon, which is essentially useless for compute other than parallel floating point operations
when the bubble pops, the "investment" will be a very expensive total waste of perfectly good sand
skybrian · 11h ago
I don't think we're too worried about wasting sand, though? What are the major costs of producing a GPU? Which of those are we worried about wasting?
I'm not going to do the homework for a Hacker News comment, but here are a few guesses:
I suspect that a lot of it is TSMC's capex for building new fabs. But since the fabs are already built, they could run them for longer. (Possibly producing different chips.)
Meanwhile, carbon emissions due to electricity use by data centers can't be taken back.
But also, much of an investment bubble popping wouldn't be about wasting resources. It would be investors' anticipated profits turning out to be a mirage - that is, investors feel poorer, but nothing material was lost.
fc417fc802 · 14h ago
Most scientific HPC workloads are designed to utilize GPU-equipped nodes. If AI completely flops, scientific modeling will see huge benefits. It's a win-win (except for the investors, I guess).
BobaFloutist · 15h ago
Maybe they can use it all to mine crypto.
haganomy · 8h ago
If I had to hazard a guess as to why China and the US are building so many GPU farms under the guise of "AI supremacy", I'd say it's to support state sponsored hacking.
kazinator · 15h ago
> Reusing so many GPU's might be harder
It could have some unexciting applications like, oh, modeling climate change and other scientific simulations.
pjc50 · 7h ago
The utility of housing is not at all ambiguous, but yet we still had a destructive boom-crash in debt-financed housing.
peab · 17h ago
the goal of the major AI labs is to create AGI. The net utility of AGI is at least on the level of electricity, or the steam engine. It's debatable whether or not they'll achieve that, but if you actually look at what the goal is, the investment makes sense.
Fomite · 10h ago
'It's debatable whether or not they'll achieve that, but if you actually look at what the goal is, the investment makes sense.'
The first clause of that sentence negates the second.
The investment only makes sense if the probability of success × the payoff of that goal > the investment.
If I don't think the major AI labs will succeed, then it's not justified.
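A minimal sketch of that expected-value condition, with entirely hypothetical numbers (say, billions of dollars; none of these figures come from the thread):

```python
def investment_is_justified(p_success: float, payoff: float, investment: float) -> bool:
    """The bet only makes sense if the expected payoff exceeds the stake."""
    return p_success * payoff > investment

# Hypothetical: $300B invested chasing a $10T payoff.
# At a 5% chance of success, expected payoff is 500 > 300, so it's justified.
print(investment_is_justified(0.05, 10_000, 300))  # True
# At a 2% chance, expected payoff is only 200 < 300.
print(investment_is_justified(0.02, 10_000, 300))  # False
```

The whole disagreement in the thread reduces to which `p_success` you believe.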
danlitt · 8h ago
AGI is not even a well-defined goal, let alone one that can be reasonably expected from the current tranche of investment. By your logic, any investment makes sense - this is not investment at all, it is a gambling addiction.
jcgrillo · 17h ago
what? crashing the economy for a psychotic sci-fi delusion "makes sense"? how?
rockemsockem · 16h ago
How exactly is AI crashing the economy....? Do you walk around with these beliefs every day?
jcgrillo · 13h ago
when bubbles burst crashes follow. this is a colossal bubble. i do walk around with that belief every day, because every day that passes is yet another day when this overblown AI hype bullshit fails to deliver the goods.
eru · 13h ago
> There is obvious utility to railroads, especially in a world with no cars.
> The net utility of AI is far more debatable.
As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?
In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.
Ekaros · 7h ago
I wonder about the actual effectiveness of spending on railroads vs AI... Even if the railroads were somewhat wasteful, didn't the investment spread much wider? At least geographically it must have, as there were workers who moved around and needed services. That is, it was mostly spent in the economy, and thus had an actual chance to trickle down.
Whereas with AI, who actually gets the investment? Nvidia? TSMC? Are the people employed ones who would have been employed anyway? Do they actually spend much more? Any Nvidia profits likely go straight back into the market, propping it up even higher.
And how much has the use of LLMs actually increased productivity?
esseph · 10h ago
Another interesting claim I have come across is that AI investment is now larger than consumer spending:
Put a comment on this below, but the claim is highly misleading... consumer spending is ~$5 trillion, AI investment is ~$100 billion. The graph is looking at something like contribution to GDP growth (not contribution to GDP), but even that is misleading, b/c if you don't adjust for seasonality, H1 consumer spending is almost always lower than H2 consumer spending of the previous year (because Q4 always has a higher level of consumer spending).
To clarify, AI investment has contributed more to GDP GROWTH than consumer spending.
So they are talking about changes, not levels.
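The changes-vs-levels distinction can be made concrete with a toy example (all numbers made up, in billions of dollars):

```python
# A small category can dominate GDP *growth* while staying tiny
# as a share of the GDP *level*.
consumer_prev, consumer_now = 5_000, 5_040  # huge level, slow growth
ai_prev, ai_now = 60, 160                   # tiny level, fast growth

consumer_delta = consumer_now - consumer_prev  # +40
ai_delta = ai_now - ai_prev                    # +100

# AI "contributes more to growth" in this period...
print(ai_delta > consumer_delta)   # True
# ...while still being only ~3% of consumer spending's level.
print(round(ai_now / consumer_now, 3))
```

Headlines comparing the two deltas are technically true but easy to misread as a comparison of the levels.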
gorgoiler · 8h ago
One way to think about it is what if we’d done it the other way around? If we’d had AI first at 20% GDP investment levels, would the subsequent railroad boom have been an order of magnitude smaller at 2% GDP?
For me, that’s enough of a thought experiment — as implausible as it might be to have AI in 1901 — to be skeptical that the difference is simply that the first tech step-change was a pre-war uplift to build the post-war US success story, and the latter builds on it.
fuzzfactor · 1h ago
>not sure the comparison is apples to apples
More like apples to octopus.
People should keep in mind that there was no such thing as GDP before the 1980s.
All of that has been back-calculated, and the further back you go, the more ridiculous it gets.
The excuses sounded plausible at the time, but the switch killed two birds with one stone:
less rapid increase in government benefits, which had become based on GNP to cope with inflation, and further obscuring the ongoing poor economic performance of the 1980s onward compared to how it was before 1970, numerically.
The people who were numerically smart before that, and saw what things were like first hand, were not fooled so easily.
Even using GDP back in the 1980s when it first came out, you couldn't get a good picture of the 1960s, which were not that much earlier.
Don't make me laugh trying for the 1860s :)
tharmas · 17h ago
Isn't the US economy far more varied than it was in the 19th century? More dense? And therefore, wouldn't it be more difficult for one industry to dominate the US economy today than it was in the 19th century?
antman · 9h ago
Although we know that there is no empirical evidence for trickle-down economics, a worst-case scenario was that some of the profit would be allocated to the cost of labor, and through great economic expansion, regardless of rising inequality and some reskilling, innovation and its effect on the growth of the economy was at least somewhat positive for everybody.
This will not be the case anymore. There is no labor restructuring to be made; the lists of future safe jobs are humorous, to say the least. Companies' difficulty in finding skilled labor at sustainable wages has been highlighted as a key blocker for growth. The economy will rise by AI removing this blocker. A rise of the economy due to AI invalidates the old models and the spurious trickle-down correlations. A rise of the economy through AI directly enables the most extreme inequality, and no reflexes or economic experience exist to manage it.
There have been many theories of revolutions: social, financial, ideological, and others. I will not comment on those, but I will make a practical observation: it boils down to the ratio of controllers to controlled. AI enables an extremely minimal number of controllers through AI management of the flow of information, and later a large number of drones can keep everyone at bay. Cheaply, so good for the economy.
ryao · 8h ago
> there is no empirical evidence for trickle down economy
I usually avoid responding to remarks like this because they risk forays into politics, which I avoid, but the temptation to ask was too great here. What do you consider computers, cellphones, air conditioners, flat screen TVs and refrigerators to be? The first ones had outrageous prices that only the exorbitantly wealthy could afford. Now almost everyone in the US has them. They seem to have trickled down to me.
usrbinbash · 7h ago
> What do you consider computers, cellphones, air conditioners, flat screen TVs and refrigerators to be?
Products people buy with the money they earn. Not things that fall down from the tables of the ultra rich.
Their affordability comes from the economies of scale. If I can sell 100000 units of something as opposed to 100 units, the cost-per-unit goes down. Again, nothing to do with anything "trickling down".
ryao · 7h ago
R&D was required not only to create initial versions, but also to increase scale. If the money had not been there for all of that, how would the affordable versions exist today?
rTX5CMRXIfFG · 5h ago
The money for R&D exists because capital markets exist, not because of "trickle-down economics". Capital markets exist by pooling the savings of even poor and middle-class households. You can argue that the vast majority of savings used to fuel tech and innovation come from the upper classes, but then where's your trickle-down economics there?
ryao · 5h ago
Your question was answered above:
> What do you consider computers, cellphones, air conditioners, flat screen TVs and refrigerators to be? The first ones had outrageous prices that only the exorbitantly wealthy could afford. Now almost everyone in the US has them. They seem to have trickled down to me.
rTX5CMRXIfFG · 4h ago
Eh, whatever. That’s not a direct answer explaining what and where exactly is trickle-down economics in that phenomenon. At best, you’re just arguing off a fallacy: that because B happened after A, then A must have necessarily caused B.
ryao · 3h ago
I do not think that fits “If the money had not been there for all of that, how would the affordable versions exist today?”, but let’s agree to disagree.
That said, I see numerous things that exist solely because those with money funded R&D. Your capital markets theory for how the R&D was funded makes no sense because banks will not give loans for R&D. If any R&D funds came from capital markets, it was by using existing property as collateral. Funds for R&D typically come from profitable businesses and venture capitalists. Howard Hughes for example, obtained substantial funds for R&D from the Hughes Tool Company.
Just to name how the R&D for some things was funded:
- Microwave oven: Developed by Raytheon, using profits from work for the US military
- PC: Developed by IBM using profits from selling business equipment.
- Cellular phone: Developed by Motorola using profits from selling radio components.
- Air conditioner: Developed by Willis Carrier at Buffalo Forge Company using profits from the sale of blacksmith forges.
- Flat panel TV: Developed by Epson using profits from printers.
The capital markets are nowhere to be seen. I am at a startup where hardware is developed. Not a single cent that went into R&D or the business as a whole came from capital markets. My understanding is that the money came from an angel investor and income from early adopters. A hardware patent that had given people the idea for the business came from research in academia; how that was funded is unknown to me, although I would not be surprised if it had been funded through an NSF grant. The business has been run on a shoestring budget and could grow much quicker with an injection of funding, yet the capital markets will not touch it.
blensor · 7h ago
That is literally what patents were invented for: give the entity that puts the resources into creating something new some protection, so it can recoup that investment.
ryao · 6h ago
That does not answer the question of how the affordable versions would exist if the money to create them was not there in the first place. You cannot recoup what never existed.
Also, not all patents are monetizable.
blensor · 6h ago
I don't understand your point then. The original product exists because someone used their own or their investors money and made a bet on an idea.
Then they hope they can sell it at a profit.
Products becoming cheaper is a result of the processes getting more optimized ( on the production side and the supply side ) which is a function of the desire to increase the profit on a product.
Without any other player in the market this means the profit a company makes on that product increases over time.
With other players in that market that underprice your product it means that you have to reinvest parts of your profit into making the product cheaper ( or better ) for the consumer.
IncreasePosts · 4h ago
Is the idea that the person with $1M in 1900 had the ability to direct it towards their idea for air conditioning, whereas if the same amount of money were disbursed among 100,000 people, they would just moderately increase their consumption and we would end up right where we started?
bayindirh · 5h ago
> R&D was required not only to create initial versions, but also to increase scale.
Not to increase scale, but to reduce the cost of the device while maintaining 99% of the previous version, IOW, enshittification of the product.
> how would the affordable versions exist today?
Not all "affordability" comes from the producer of the said stuff. Many things are made from commodity materials, and producers of these commodity materials want to increase their profits, hence trying to produce "cheaper" versions of them, not for the customers, but for themselves.
Affordability comes from this cost reduction, again enshittification. Only a few companies I see produce lower priced versions of their past items which also surpasses them in functionality and quality.
e.g. I have Sony WH-CH510 wireless headphones, which have way higher resolution than some wired headphones paired with decent-ish amps. This is because Sony is an audiovisual company and takes pride in what they do. On the other end of the spectrum are tons of other brands which don't sell for much cheaper but deliver way worse sound quality and feature sets, not because they can't do it as well as Sony, but because they want a small piece of the said market and to earn some free money, basically.
ryao · 4h ago
Can you honestly tell me that modern cellphones are worse than the original cell phones?
As for your wireless headphones, if you compare them to early wireless headphones, you should find that prices have decreased, while quality has increased.
bayindirh · 3h ago
I used phones similar to this (a Nokia 2110 to be precise), BTW.
I can argue, from some aspects, yes. Given that you provide the infrastructure for these devices, they'll work exactly as designed, even today. On the other hand, a modern smartphone has a way shorter life span. OLED screens die, batteries swell, electronics degrade.
Ni-Cad batteries, while finicky and toxic, are much longer lasting than Li-ion and Li-Poly batteries. If we want to talk Li-Poly batteries, my old Sony power bank (advertising 1000 recharge cycles with a proprietary Sony battery tech) is keeping its promise, capacity, and shape 11 years after its stamped manufacturing date.
Can you give me an example of another battery/power pack which is built today and can continue operating for 11 years without degrading?
As electronics shrink, the number of atoms per gate decreases, and this also reduces the life of the things. My 35 y/o amplifier works pretty well, even today, but modern processors visibly degrade. A processor degrading to a limit of losing performance and stability was unthinkable a decade ago.
> you will find that prices have decreased, while quality has increased.
This is not primarily driven by the desire to create better products. First, the cheaper and worse ones come; later, somebody decides to use the design headroom to improve things and puts on a way higher price tag.
Today, in most cases, speakers' quality has not improved, but the signal processing done by the DSP makes them appear to sound better. This is cheaper, and OK for most people. IOW, enshittification, again. Psychoacoustics is what makes this possible, not better-sounding drivers.
The last car I rented has a "sound focus mode" under its DSP settings. If you're the only one in the car, you can set it to focus to driver, and it "moves" the speakers around you. Otherwise, you select "everyone", and it "improves" sound stage. Digital (black) magic. In either case, that car does not sound better than my 25 year old car, made by the same manufacturer.
You want genuinely better sounding drivers, you'll pay top dollar in most cases.
ryao · 2h ago
> Can you give me an example of another battery/power pack which is built today and can continue operating for 11 years without degrading?
I have LiFePo4 batteries from K2 Energy that will be 13 years old in a few months. They were designed as replacements for SLA batteries. Just the other day, I had put two of them into a UPS that needed a battery replacement. They had outlived the UPS units where I had them previously.
I have heard of Nickel Iron batteries around 100 years old that still work, although the only current modern manufacturers are in China. The last US manufacturer went out of business in 2023.
> You want genuinely better sounding drivers, you'll pay top dollar in most cases.
I do not doubt that, but if the signal processing improves things, I would consider that to be a quality improvement.
bayindirh · 2h ago
> The last US manufacturer went out of business in 2023.
Interesting, but they are manufactured not more, but way less, as you can see. So, quality doesn't drive the market. Monies do.
> I do not doubt that, but if the signal processing improves things, I would consider that to be a quality improvement.
Depends on the "improvement" you are looking for. If you are a casual listener hunting for an enjoyable pair while at a run or gym, you can argue that's an improvement.
But if you're looking for resolution increases, they're not there. I occasionally put one of my favorite albums on, get a tea, and listen to that album for the sake of listening to it. It's sadly not possible on all gear I have. You don't need to pay $1MM, but you need to select the parts correctly. You still need a good class AB or an exceptional class D amplifier to get good sound from a good pair of speakers.
This "apparent" improvement which is not there drives me nuts actually. Yes, we're better from some aspects (you can get hooked to feeds instead of drugs and get the same harm for free), but don't get distracted, the aim is to make numbers and line go up.
ryao · 2h ago
> Interesting, but they are not manufactured more, but way less, as you can see. So, quality doesn't drive the market. Monies do.
They were always really expensive, heavy and had low energy density (both by weight and by volume). Power density was lower than lead acid batteries. Furthermore, they would cause a hydrolysis reaction in their electrolyte, consuming water and producing a mix of oxygen and hydrogen gas, which could cause explosions if not properly vented. This required periodic addition of water to the electrolyte. They also had issues operating at lower temperatures.
They were only higher quality if you looked at longevity and nothing else. I had long thought about getting them for home energy storage, but I decided against them in favor of waiting for LiFePo4 based solutions to mature.
By the way, I did a bit more digging. It turns out that US production of NiFe batteries ended before 2023, as the company that was supposed to make them had outsourced production to China.
> They were always really expensive, heavy and had low energy density (both by weight and by volume).
Sorry, I misread your comment. I thought you were talking about LiFePo4 production ending in 2023, not NiFe.
I know that NiFe batteries are not suitable (or possible to be precise) to be miniaturized. :)
I still wish the market did research on longevity as much as on charge speed and capacity, but it seems companies are happy to have batteries with shorter and shorter life spans to keep up with their version of the razor-and-blades model.
Also, this is why regulation is necessary in some areas.
cheema33 · 6h ago
My understanding of trickle-down economics, which could be incorrect, is that it is a policy that advocates the government giving money to the rich, through tax breaks and other means, the idea being that the rich would then spend that money in ways that allow the benefits to trickle down to the poor.
squidbeak · 6h ago
This is correct. The idea is that money spent by the wealthy enjoying themselves or making themselves wealthier through investments eventually reaches the poorest - in some form.
Quite why we've persuaded ourselves we need to do this through a remote & deaf middleman is anyone's guess, when governments we elect could just direct money through policies we can all argue about and nudge in our own small ways.
Are those entirely separate things? Hardware development is expensive. Having the money to develop these things and iterate on them enabled them to begin as luxuries for wealthy people and evolve into things the rest of us can have.
neom · 7h ago
You're right to a degree, in that they are somewhat coupled systems; the real economy operates as a blend of many economic theories, hence it's hard to model and all that. Nevertheless, in traditional economic theory they are analyzed separately. The connection exists, but it shouldn't be seen as validating trickle-down as economic policy.
conductr · 7h ago
I would argue that technology diffusion would occur even without trickle down economics/tax policies. Even if they were taxed more heavily, there would still be people wealthy enough to buy the v1.0 flat screen, computer, DVD player, etc because wealth is still unevenly distributed and there are still some richer people in the population.
Trickle down economics is supposed to make poorer people more wealthy. Not suppress their wage growth while offering a greater selection of affordable gadgets.
ryao · 7h ago
Would it have happened to the same extent? Also, describing these things as gadgets understates the extent to which they are beneficial, given that a gadget refers to a novelty by definition.
Among the many things that have become affordable for every day people because money had been present to fund the R&D are air conditioners, refrigerators, microwave ovens, dish washers, washing machines, clothes dryers, etcetera. When I was born in the 80s, my parents had only a refrigerator (and maybe a microwave oven). They could not afford more. Now they have all of these things.
conductr · 1h ago
I could ask the same of you. Do these things only exist because of trickle down? Do you have proof they wouldn’t have been invented and commercialized without it?
I don't expect either of us to be able to answer the questions posed. Nobody in the 80s was asking for any of these inventions. People were living their lives happily ignorant of a better future. For that reason, most of these things do amount to just gadgets. They have shaped our lives in a dramatic way and had huge commercial success by solving huge problems or increasing convenience, but they are still nonessential. That's the way I'm using the term; I don't really care what Webster has to say about it, tbh, as I'm perhaps being dramatic precisely to highlight this point.
The continuation of R&D isn’t even a trickle down policy. If you’re a big manufacturer of CRT televisions, it’s in your interest to continue inventing better technology in that space just to remain competitive. If you’re really good at it, there’s a good chance you can steal market share. It’s good old fashioned business as usual in a competitive industry. I don’t see how they relate to one another. Not to mention that many things are invented in a garage somewhere and capital is infused later. Would this only happen if the rich uncles of the world benefited from economic policies aimed at making them rich? I think it would still find a way in most cases, good ideas typically always find a way. I don’t think a majority of gadgets can be linked to something like “brought to you by trickle down economics”.
wat10000 · 5h ago
“Trickle down” is about making the masses wealthier in general, not just making shiny new toys for them. It’s easy for the HN crowd to think that a cool new computer equates to wealth, but that’s not what most people consider it to be. Does cutting taxes for the rich allow the common person to buy better food, pay their mortgage off earlier, send their kids to better schools? That’s the question you need to ask about “trickle down,” not how big our TVs would be.
ryao · 5h ago
If it results in businesses like Aldi, then yes. Aldi not only pays above market rates, but charges below market prices for quality food.
Honestly, I have to say that I am relatively happy with the things that I have these days because of obscenely wealthy people’s investments. I have a heat pump air conditioner that would have been unthinkable when I was a child. I have food from Aldi and Lidl, whose prices relative to the competition also would have been unthinkable when I was a child. I have an electric car and solar panels, which were in the realm of fantasy when I was a child. Solar panels and electric cars existed, but solar panels were obscenely expensive and electric cars were considered a joke when I was young. I have a gigabit fiber internet connection at $64.99 per month, such internet connections were only available to the obscenely rich when I was a child. I am not sure if I would have any of these things if the money had not been there to fund them. I really do feel like things have trickled down to me.
wat10000 · 4h ago
What’s the connection between wealthy people getting wealthier and businesses like Aldi?
I like electric cars and solar panels and gigabit fiber as much as the next person, but they aren’t wealth.
If you shop there, you are enriching its owners. That is not a bad thing. The more money they have, the better they make things for people, so it is a win-win.
Note that Aldi is technically two companies since the family that founded it had some internal disagreement and split the company into two, but they are both privately owned.
That said, if wealthy people had not made investments, I would not have an electric car, solar panels or gigabit fiber. The solar panels also improve property values, so it very much is a form of wealth, although not a liquid one. Electric cars similarly are things that you can sell (although they are depreciating assets), so saying that they are not wealth is not quite correct. The internet connection is not wealth in a traditional sense, but it enables me to work remotely, so it more than pays for itself.
guywithahat · 58m ago
Trickle down is a bit of a nonsensical term; it's called supply-side economics, and it's a well-studied, proven way to strengthen the economy. It's how Reagan ended stagflation, and it's generally one of the first things governments turn to when the economy is struggling.
Thanks. TL;DR: it doesn't work, says science, according to the Wikipedia article.
ryao · 7h ago
A sibling comment linked investopedia.com, which had a very different take on the matter. The TLDR there was:
> The trickle-down theory includes commonly debated policies associated with supply-side economics.
blensor · 7h ago
I'd say prices of products come down due to competition, not due to the companies getting more money outside of the regular supply/demand relationship.
Let's assume you have a monopoly on something with a guarantee that no one else can sell the same product in your market. Then there is no direct incentive to make the product cheaper, even if you can produce it for cheaper.
Adding more money on top of it that is supposed to trickle down in some way will not make that product cheaper, unless there is an incentive for that company to do so.
The real world is of course more complicated. Say you have two companies that get the incentives, and one of them uses them to make the product cheaper; that will "trickle down" as a price decrease, because the other company needs to follow suit to stay competitive. But this again is driven by the market, not the incentives, and would have happened without them just as well.
ryao · 6h ago
You seem to describe commodities, rather than new technologies that are not commodities. New technologies start so far out on the supply demand curve that an order of magnitude decrease in price can expand the market by orders of magnitude.
The first cellular phone in modern currency cost something like $15,000. At that price, the market for it would be orders of magnitude below the present cellular phone market size. Lower the price 1 to 2 orders of magnitude and we have the present cellular phone market, which is so much larger than what it would have been with the cellular phone at $15,000.
Interestingly, the cellular phone market also seems to be in a period where competition is driving prices upward through market segmentation. This is the opposite of what you described competition as doing. Your remark that the real world is more complicated could not be more true.
blensor · 5h ago
Prices going up is only true if you also let the specs go up
If you fix the specs and progress time then the prices go down considerably
Take the first iPhone, which was $499 ($776.29 adjusted for inflation), and try to find a currently built phone with similar specs. I couldn't find any that go down that far, but the cheapest one I could find was the ZTE Blade L9 (which still has higher specs overall); even then we are looking at over a 90% price reduction.
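That inflation adjustment is just a CPI ratio. A minimal sketch, using approximate CPI-U annual averages (the exact figures depend on the series and end year you pick, so treat these index values as illustrative):

```python
# Adjust the 2007 iPhone launch price for inflation using a CPI ratio.
# CPI values below are approximate CPI-U annual averages; swap in the
# exact series and year you care about.
CPI_2007 = 207.3   # approximate 2007 annual average
CPI_2024 = 322.5   # approximate 2024 annual average

def adjust_for_inflation(price, cpi_then, cpi_now):
    """Convert a historical price into today's dollars."""
    return price * cpi_now / cpi_then

launch_price_2007 = 499.00
today = adjust_for_inflation(launch_price_2007, CPI_2007, CPI_2024)
print(f"${launch_price_2007:.2f} in 2007 is about ${today:.2f} today")
```

With these index values the result lands within pennies of the $776.29 figure quoted above.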
Macha · 3h ago
Computers of course were invented by a state controlled war economy, pretty much the opposite of trickle down.
Permeation of technology due to early adopters paying high costs leading to lower costs is not what trickle down generally means. Being an early adopter of cellphones, AC, flat screen TVs or computers required the wealth level of your average accountant of that era - it didn't require being a millionaire.
the_other · 5h ago
And yet the wealth gap has only widened over the period between their invention and distribution.
I ask hyperbolically: are they economic enablers or financial traps?
(My hunch is that fridges are net-enablers, but TVs are net-traps. I say this as someone with a TV habit I would like to kick.)
PicassoCTs · 7h ago
The end result of state investment into large research projects during the cold war?
0cf8612b2e1e · 20h ago
Over the last six months, capital expenditures on AI—counting just information processing equipment and software, by the way—added more to the growth of the US economy than all consumer spending combined. You can just pull any of those quotes out—spending on IT for AI is so big it might be making up for economic losses from the tariffs, serving as a private sector stimulus program.
Wow.
gruez · 18h ago
It's not as bad as the alarmist phrasing would suggest. Consider a toy example: suppose consumer spending was $100 and grew by $1, but AI spending was $10 and grew by $1.5, then you can rightly claim that "AI added more to the grow of the US economy than all consumer spending combined"[1]. But it's not as if the economy consists mostly of AI, or that if AI spending stopped the economy will collapse. It just means AI is a major contributor to the economy's growth right now. It's not even certain that the AI bubble popping would lead to all of that growth evaporating. Much of the AI boom involves infrastructure build out for data centers. That can be reallocated to building houses if datacenters are no longer needed.
[1] Things get even spicier if consumer growth was zero. Then what would the comparison be? That AI added infinitely more to growth than consumer spending? What if it was negative? All this shows how ridiculous the framing is.
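The toy numbers above can be made concrete (all figures are invented for the illustration, as in the comment):

```python
# Toy illustration: a small sector can "add more to growth" than a huge one
# if the huge one is nearly flat. All numbers are invented for the example.
consumer_before, consumer_after = 100.0, 101.0   # grew by $1
ai_before, ai_after = 10.0, 11.5                 # grew by $1.5

consumer_growth = consumer_after - consumer_before
ai_growth = ai_after - ai_before
ai_share = ai_after / (ai_after + consumer_after)

print(f"Consumer contribution to growth: ${consumer_growth:.2f}")
print(f"AI contribution to growth: ${ai_growth:.2f}")
print(f"AI share of the total economy: {ai_share:.1%}")
# AI "adds more to growth" while still being only ~10% of the economy.
```

Which is exactly the point: the headline statistic is about the derivative, not the level.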
agent_turtle · 17h ago
[flagged]
dang · 15h ago
Please don't cross into personal attack. We ban accounts that do that.
Have you heard of the disagreement hierarchy? You're somewhere between 1 and 3 right now, so I'm not even going to bother to engage with you further until you bring up more substantive points and cool it with the personal attacks.
One of the major reasons there’s such a shortage of homes in the US is the extensive permit process required. Pivoting from data centers to home construction is not a straightforward process.
Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on. A resilient economy has multiple growth areas; an unstable one has one or two.
While you could certainly argue that we may already be in rough shape even without the bubble popping, it would undoubtedly get worse for the reasons I listed above.
gruez · 16h ago
>One of the major reasons there’s such a shortage of homes in the US is the extensive permit process required. Pivoting from data centers to home construction is not a straightforward process.
Right, I'm not suggesting that all of the datacenter construction will seamlessly switch over to building homes, just that some of the labor/materials freed up would be reallocated to other sorts of construction. That could be homes, Amazon distribution centers, or grid connections for renewable power projects.
>A resilient economy has multiple growth areas; an unstable one has one or two.
>[...] it would undoubtedly get worse for the reasons I listed above,
No disagreement there. My point is that if AI somehow evaporated, the hit to GDP would be less than $10 (total size of the sector in the toy example above), because the resources would be allocated to do something else, rather than sitting idle entirely.
>Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on.
That's a fair point, although the federal government has gotten pretty good at stimulus since the GFC and COVID, so any credit crunch would likely be short-lived.
raincole · 14h ago
> growth
Is the keyword here. US consumers already spend so much that, of course, that sector doesn't have much room to grow.
troyastorino · 18h ago
I've seen this quote in a couple places and it's misleading.
So, non-seasonally adjusted consumer spending is flat. In that sense, yes, anything where spend increased contributed more to GDP growth than consumer spending.
If you look at seasonally-adjusted rates, consumer spending has grown ~$400 billion, which likely outstrips total AI CapEx in that time period, let alone its growth. (To be fair, the WSJ graph only shows the spending from Meta, Google, Microsoft, and Amazon. But it also says that Apple, Nvidia, and Tesla combined "only" spent $6.7 billion in Q2 2025 vs the $96 billion from the other four. So it's hard to believe that spend coming from elsewhere is contributing a ton.)
If you click through to the tweet that is the source for the WSJ article where the original quote comes from (https://x.com/RenMacLLC/status/1950544075989377196), it's very unclear what it's showing... it only shows percentage change, and it doesn't show anything about consumer spending.
So, at best this quote is very misleadingly worded. It also seems possible that the original source was wrong.
bravetraveler · 19h ago
Tepidly socially-acceptable welfare
lisbbb · 13h ago
That's bad, because you just know at some point the bell gets rung and the bubble bursts. It was the same thing with office space in the late 1990s: they overbuilt like crazy predicting huge demand that never appeared, then the dot-com bubble burst and that was that.
intended · 19h ago
Yes, wow. When I heard that data point I was floored.
electrondood · 19h ago
For context though, consumer spending has contracted significantly.
Animats · 19h ago
"Over the last six months, capital expenditures on AI—counting just information processing equipment and software, by the way—added more to the growth of the US economy than all consumer spending combined."
If this isn't the Singularity, there's going to be a big crash. What we have now is semi-useful, but too limited. It has to get a lot better to justify multiple companies with US $4 trillion valuations. Total US consumer spending is about $16 trillion / yr.
Remember the Metaverse/VR/AR boom? Facebook/Meta did somehow lose upwards of US$20 billion on that. That was tiny compared to the AI boom.
brotchie · 15h ago
Look at the induced demand due to Claude code. I mean, they wildly underestimated average token usage by users. There's high willingness to pay. There's literally not enough inference infra available.
I was working on crypto during the NFT mania, and THAT felt like a bubble at the time. I'd spend my days writing smart contracts and related infra, but I was doing a genuine wallet transaction at most once a week, and that was on speculation, not work.
My adoption rate of AI has been rapid, not for toy tasks, but for meaningful complex work. Easily send 50 prompts per day to various AI tools, use LLM-driven auto-complete continuously, etc.
That's where AI is different from the dot-com bubble (not enough folks were materially transacting on the web at the time), or the crypto mania (speculation, not utility).
Could I use a smarter model today? Yes, I would love that and use the hell out of it.
Could I use a model with 10x the tokens/second today? Yes, I would use it immediately and get substantial gains from a faster iteration cycle.
sudohalt · 9h ago
A bubble isn't related to whether something is useful or not, it's about speculation and detachment from reality. AI being extremely useful and being a bubble aren't mutually exclusive. It can be the case that everyone finds it useful but at the same time the valuations and investments aren't realized.
shoo · 7h ago
Yep, there's a big difference between a company and its stock. Even if the company is great, an investment such as a stock can never be good or bad without reference to the price you need to pay for it. A famous example from the dot-com era is Cisco. Great company, but buying Cisco stock at its March 2000 peak was a bad investment -- it was "priced to perfection" and the stock price today, over 25 years later, is lower than the dot com era price.
kergonath · 7h ago
> A bubble isn't related to whether something is useful or not, it's about speculation and detachment from reality.
See the dotcom bubble in the early 2000s for a perfect example. The Web is still useful, but the bubble bursting was painful.
sothatsit · 14h ago
Claude Code was the tipping point for me from "that's neat" to "wow, that's really useful". Suddenly, paying $200/month for an AI service made sense. Before that, I didn't want to pay $20/month for access to Claude, as I already had my $20/month subscription to ChatGPT.
I have to imagine that other professions are going to see similar inflection points at some point. When they do, as seen with Claude Code, demand can increase very rapidly.
* Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
* Surveys are showing that most people are only using AI for a fraction of the time at work, and still reporting significant productivity benefits, even with current models.
The AGI/ASI hype is a distraction, potentially only relevant to the frontier model labs. Even if all model development froze today, there is tremendous untapped demand to be met.
The Metaverse/VR/AR boom was never a boom, with only 2 big companies (Meta, Apple) plowing any "real" money into it. Similarly with crypto, another thing that AI is unjustifiably compared to. I think because people were trying to make it happen.
With the AI boom, however, the largest companies, major governments and VCs are all investing feverishly because it is already happening and they want in on it.
Animats · 17h ago
> Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
Are they constrained on resources for training, or resources for serving users using pre-trained LLMs? The first use case is R&D, the second is revenue. The ratio of hardware costs for those areas would be good to know.
keeda · 15h ago
Good question. I don't believe they break out their workloads into training versus inference; in fact, they don't break out any numbers in useful detail. But anecdotally, the public clouds did seem to be most GPU-constrained whenever Sam Altman was making the rounds asking for trillions in infra for training.
However, my understanding is that the same GPUs can be used for both training and inference (potentially in different configurations?) so there is a lot of elasticity there.
That said, for the public clouds like Azure, AWS and GCP, training is also a source of revenue because other labs pay them to train their models. This is where accusations of funny money shell games come into play because these companies often themselves invest in those labs.
lisbbb · 12h ago
Everything I have worked on as a fullstack developer for multiple large companies over the past 25 years tells me that AI isn't just going to replace a bunch of workers. The complexity of those places is crazy and it takes teamwork to keep them running. Just look what happens internally over a long holiday weekend at most big companies, they are often just barely meeting their uptime guarantees.
I was recently at a big, three-letter pharmacy company and I can't be specific, but just let me say this: They're always on the edge of having the main websites going down for this or that reason. It's a constant battle.
How is adding more AI complexity going to help any of that when they don't even have a competent enough workforce to manage the complexity as it is today?
You mention VR--that's another huge flop. I got my son a VR headset for Christmas in like 2022. It was cool, but he couldn't use it long or he got nauseous. I was like "okay, this is problematic." I really liked it in some ways, but sitting around with that goofy thing on your head wasn't a strong selling point at all. It just wasn't.
If AI can't start doing things with accuracy and cleverness, then it's not useful.
cheema33 · 6h ago
> If AI can't start doing things with accuracy and cleverness, then it's not useful.
Humans are not always accurate or clever. But we still consider them useful and employ them.
cheevly · 11h ago
You have it so backwards. The complexity of those places is exactly why AI will replace it.
827a · 11h ago
So, to give a tactile example that helped me recently: We have a frontend web application that was having some issues with a specific feature. This feature makes a complex chain of a maybe dozen API requests when a resource is created, conditionally based on certain things, and there's a similar process that happens when editing this resource. But, there was a difference in behavior between the creating and editing routes, when a user expected that the behavior would be the same.
This is crusty, horrible, old, complex code. Nothing is in one place. The entire editing experience was copy-pasted from the create resource experience (not even reusable components; literally copy-pasted). As the principal on the team, with the best understanding of anyone about it, even my understanding was basically just "yeah I think these ten or so things should happen in both cases because that's how the last guy explained it to me and it vibes with how I've seen it behave when I use it".
I asked Cursor (Opus Max) something along the lines of: Compare and contrast the differences in how the application behaves when creating this resource versus updating it. Focus on the API calls it's making. It responded in short order with a great summary, and without really being specifically prompted to generate this insight it ended the message by saying: It looks like editing this resource doesn't make the API call to send a notification to affected users, even though the text on the page suggests that it should, and it does when creating the resource.
I suspect I could have just said "fix it" and it could have handled it. But, as with anything, as you say: Its more complicated than that. Because while we imply we want the app to do this, its a human's job (not the AI's) to read into what's happening here: The user was confused because they expected the app to do this, but do they actually want the app to do this? Or were they just confused because text on the page (which was probably just copy-pasted from the create resource flow) implied that it would?
So instead I say: Summarize this finding into a couple sentences I can send to the affected customer to get his take on it. Well, that's bread and butter for even AIs three years ago, so off it goes. The current behavior is correct; we just need to update the language to manage expectations better. AI could also do that, but it's faster for me to just click the hyperlink in Claude's output, jump right to the file, and make the update.
Opus Max is expensive. According to Cursor's dashboard, this back-and-forth cost ~$1.50. But let's say it would have taken me just an hour to arrive at the same insight it did (in a fifth of the time): that's easily over $100. That's a net win for the business, and it's a net win for me, because I now understand the code better than I did before and was able to focus my time on the components of the problem that humans are good at.
rockemsockem · 18h ago
Tbf I think most would say that the VR/AR boom is still ongoing, just with less glitz.
Edit: agree on the metaverse as implemented/demoed not being much, but that's literally one application
Macha · 2h ago
Honestly, VR/AR is a small gaming-peripheral business, like joysticks and third-party controllers. And there are companies that make that their thing and make money from it, but it was never going to be profitable enough to be the thing a company the size of Facebook pivots to, which I could see as a consumer in the space both before Facebook got in and after.
Don't get me wrong, VRChat and Beat Saber are neat, and all the money thrown at the space advanced the tech at a much faster rate than it would have organically in the same time (or potentially ever). But you can look at Horizon's attempt to be "VRChat but a larger, more profitable business" to see how the things you would need to do to monetise it to that level will lose you the audience you want to monetise.
827a · 11h ago
I honestly disagree (mostly). Sure, we might see some adjustments to valuations to better account for the expected profit margins; those might have been overblown. But if you had access to any dashboard inside these companies ([1]) all you'd see is numbers going up and to the right. Every day is a mad struggle to find capacity to serve people who want what they're selling.
The average response to that is "its just fake demand from other businesses also trying to make AI work". Then why are the same trends all but certainly happening at Cursor, for Claude Code, Midjourney, entities that generally serve customers outside of the fake money bubble? Talk to anyone under the age of 21 and ask them when they used Chat last. McDonalds wants to deploy Gemini in 43,000 US locations to help "enhance" employees (and you know they won't stop there) [2]. Students use it to cheat at school, while their professors use it to grade their generated papers. Developers on /r/ClaudeAI are funding triple $200/mo claude max subscriptions and swapping between them because the limits aren't high enough.
You can not like the world this technology is hurtling us toward, but you need to separate that from the recognition that this is real, everyone wants it, and today it's the worst it'll ever be, and people still really want it. This isn't like the metaverse.
> There could be a crash that exceeds the dot com bust, at a time when the political situation through which such a crash would be navigated would be nightmarish.
If the general theme of this article is right (that it's a bubble soon to burst), I'm less concerned about the political environment and more concerned about the insane levels of debt.
If AI is indeed the thing propping up the economy, when that busts, unless there are some seriously unpopular moves made (Volcker level interest rates, another bailout leading to higher taxes, etc), then we're heading towards another depression. Likely one that makes the first look like a sideshow.
The only thing preventing that from coming true IMO is dollar hegemony (and keeping the world convinced that the world's super power having $37T of debt and growing is totally normal if you'd just accept MMT).
margalabargala · 18h ago
> Likely one that makes the first look like a sideshow.
The first Great Depression was pretty darn bad, I'm not at all convinced that this hypothetical one would be worse.
agent_turtle · 17h ago
Some of the variables that made the Great Depression what it was included very high tariff rates and lack of quality federal oversight.
Today, we have the highest tariffs since right before the Great Depression, with the added bonus of economic uncertainty because our current tariff rates change on a near daily basis.
Add in meme stocks, AI bubble, crypto, attacks on the Federal Reserve’s independence, and a decreasing trust in federal economic data, and you can make the case that things could get pretty ugly.
margalabargala · 16h ago
Sure, you can make the case that things could get pretty ugly. You could even make the case that things could get about as bad as the Great Depression.
But for things to be much worse than the Great Depression, I think is an extraordinary claim. I see the ingredients for a Great Depression-scale event, but not for a much-worse-than-Great-Depression event.
BLKNSLVR · 11h ago
How much worse could it be if the President is likely to fire the individual holding the position responsible for announcing "it's official, this is a recession"? And so on in that head-in-the-sand direction for as long as his loyalists are willing and able to defend the President's proclamations of fake news?
How long will the foot stay on the accelerator after (almost literally) everyone else knows we might be in a bit of strife here?
If the US can put off the depression for the next three years, then it has a much better chance of working its way out gracefully.
ronald_raygun · 11h ago
Throw some nukes and a war over Taiwan into the mix?
BLKNSLVR · 11h ago
I'm currently reading The Mandibles[0], which is feeling increasingly inevitably prophetic.
MMT is just a description of the monetary reality we're in. If everything changed, the new reality would be MMT.
Hikikomori · 18h ago
AI bubble, economy in the trash already, inflation from tariffs. The dollar might get real cheap when big holders start selling stocks and exchanging it; nobody wants to be left holding the bag, and they have a lot of dollars.
Which is their (Thiel, project2025, etc) plan, federal land will be sold for cheap.
decimalenough · 18h ago
Selling stocks for what? If the dollar is going down the toilet, the last thing you want to have is piles of rapidly evaporating cash.
piva00 · 8h ago
For other currencies, sell stocks, get USD, sell USD to buy currencies appreciating over the USD.
It's already happening, past 6 months USD has been losing value against EUR, CHF, GBP, even BRL and almost flat against the JPY which was losing a ton of value the past years.
marcusestes · 18h ago
Never totally discount _deflationary_ scenarios.
heathrow83829 · 15h ago
based on my understanding of what all the financial pros are saying: they'll never let that happen. they'll inflate away to the moon before they allow for a deflationary bust. that's why everyone's in equities in the first place. it's almost insured, at this point.
sriram_malhar · 4m ago
The latest ponzi.
throwmeaway222 · 19h ago
- Microsoft’s AI-fueled $4 trillion valuation
As someone in an AI company right now - Almost every company we work with is using Azure wrapped OpenAI. We're not sure why, but that is the case.
guidedlight · 19h ago
It’s because most companies already have a lot of confidence with Microsoft contracts, and are generally very comfortable storing and processing highly sensitive data on Microsoft’s SaaS platforms. It’s a significant advantage.
Also Microsoft Azure hosts its own OpenAI models. It isn’t a proxy for OpenAI.
ElevenLathe · 19h ago
MS salespeople presumably already have weekly or monthly meetings with all the people with check-cutting authority, and OpenAI doesn't. They're already an approved vendor, and what's more the Azure bill is already really really big, so a few more AI charges barely register.
It's the same reason you would use RDS at an AWS shop, even if you really like CloudSQL better.
This is the main reason the big cloud vendors are so well-positioned to suck up basically any surplus from any industry even vaguely shaped like a b2b SaaS.
hnuser123456 · 19h ago
Nobody gets fired for choosing Microsoft
edaemon · 14h ago
Lots of AI things are features masquerading as products. Microsoft already has the products, so they just have to add the AI features. Customers can either start using a new and incomplete product just for one new feature, or they can stick with the mature Microsoft suite of products they're already using and get that same feature.
2d8a875f-39a2-4 · 8h ago
As others have kind of pointed out, using "outside our DC" processing for corporate data is a non-starter for many companies.
These companies are left to choose between self-hosting models, or a vendor like MS who will rent them "their own AI running in their own Azure subscription", cut off from the outside world.
chung8123 · 18h ago
All of their files are likely on a Microsoft store already too.
highfrequency · 5h ago
> These are not railroads—we aren’t building century-long infrastructure. AI datacenters are short-lived, asset-intensive facilities riding declining-cost technology curves, requiring frequent hardware replacement to preserve margins.
Neglects the most important benefit of large semiconductor spending: we are riding the Learning Curve up Moore's Law. We are not much better at building railroads today than we were in 1950. We are way better at building computers today. The GPUs may depreciate but the knowledge of how to build, connect, and use them does not - that knowledge compounds over time. Where else do you see decades of exponential efficiency improvements?
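The learning-curve point can be sketched with Wright's law: unit cost falls by a fixed fraction with each doubling of cumulative production. The 20% learning rate below is a commonly cited illustrative figure, not a claim about GPUs specifically:

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, progress_ratio=0.8):
    """Unit cost after producing `cumulative_units`, where each doubling of
    cumulative production multiplies cost by `progress_ratio`
    (0.8 means a 20% cost drop per doubling)."""
    b = math.log2(progress_ratio)  # negative exponent on cumulative output
    return first_unit_cost * cumulative_units ** b

# One doubling: cost falls exactly to 80% of the first unit.
print(wrights_law_cost(100.0, 2))          # ~80.0
# After a million cumulative units, the unit cost is ~1% of the original.
print(wrights_law_cost(100.0, 1_000_000))  # ~1.17
```

This is the sense in which the knowledge compounds even as individual GPUs depreciate: the cost curve is a function of cumulative production, which only moves one way.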
missingdays · 4h ago
"We are not much better at building railroads today than we were in 1950." - in the US - maybe. In other countries - I doubt that's the case
nowittyusername · 11h ago
Correction... Nvidia is propping up the economy. It's like 24% of the tech sector and is the only source of GPUs for most companies. This is really, really bad. Talk about all eggs in one basket. If that company were to take a shit, the domino effect would cripple the whole sector and have unimaginable ramifications for the US economy.
jus3sixty · 13h ago
The article's comparison to the 19th century railroad boom is pretty spot on for how big it all feels, but maybe not so much for what actually happened.
Back then, the money poured into building real stuff like actual railroads and factories and making tangible products.
That kind of investment really grew the value of companies and was more about creating actual economic value than just making shareholders rich super fast.
johncole · 12h ago
While some of the investment in railroads (and canals before it, and shipping before that) was going into physical assets of economic value, there were widespread instances of speculation, land "rights" fraud, and straight up fraud without any economic value added.
dehrmann · 9h ago
> Back then, the money poured into building real stuff
Its limitations are well-documented, but cutting-edge AI right now is very much "real stuff."
satyrun · 4h ago
You have no idea what you are talking about.
The amount of speculation and fraud from that time period would make even the biggest shitcoin fraud blush.
Try a biography of Jay Gould if you want more information.
mrbluecoat · 1h ago
9 of today's top 30 homepage articles are about AI so I'd say it's also propping up HN.
throw0101d · 14h ago
From a few weeks ago, see "Honey, AI Capex is Eating the Economy" / "AI capex is so big that it's affecting economic statistics" (365 points by throw0101c, 18 days ago, 355 comments):
I'm genuinely scared of what the crash will do to society. As much as I loathe AI boosterism, I'm starting to think my personal desire for schadenfreude may not outweigh the fear that I and everyone else I care about will get swept under the tsunami of the bubble burst.
rapsey · 9h ago
If you are worried about a stock market crash, the current AI boom is absolutely nothing compared to the dotcom boom.
Also because so many companies are staying private, a crash in private markets is relatively irrelevant for the overall economy.
bad_username · 9h ago
The Buffett indicator is higher than in 2000, and the Shiller P/E ratio is getting close.
rapsey · 8h ago
What these things fail to take into account is the growth in the amount of money going into the stock market.
Simplifying economic activity down to a single short formula leaves out a lot of important parameters. These indicators tend to hold some truth at the time they are invented and then break at some point after, because tax, regulation, and technology changes alter the money flows in the economy.
Like the yield curve inversions and the Sahm rule and so on.
GianFabien · 16h ago
There are only two reasons to buy stocks:
1) for future cashflows (aka dividends) derived from net profits.
2) to on-sell to somebody willing to pay even more.
When option (2) is no longer feasible, the bubble pops and (1) resets the prices to some multiple of dividends. Economics 101.
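The "multiple of dividends" reset in (1) is essentially the dividend discount model. A minimal Gordon-growth sketch, with invented numbers:

```python
def gordon_growth_price(next_dividend, discount_rate, growth_rate):
    """Fair price when buying purely for future cashflows:
    P = D1 / (r - g). Only valid when r > g."""
    assert discount_rate > growth_rate, "model requires r > g"
    return next_dividend / (discount_rate - growth_rate)

# Invented numbers: $5 dividend next year, 8% required return, 2% growth.
price = gordon_growth_price(5.0, 0.08, 0.02)
print(f"fair price: ${price:.2f}")  # ~$83.33, i.e. a ~16.7x dividend multiple
```

When speculative demand (reason 2) disappears, this kind of cashflow arithmetic is what prices gravitate back toward.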
dehrmann · 8h ago
> ...resets the prices to some multiple of dividends
But wouldn't you want to pay more for a company that has a history of revenue and income growth than one in a declining industry? And you have to look at assets on the company's books; you're not just buying a company, you're buying a share of what it owns. What if it has no income, but you think there's a 10% chance it'll be printing money in 5 years?
That's why prices won't naively reset to a multiple of ~~dividends~~ income (see the dividend irrelevance theory) across the board. Someone will always put a company's income in context.
heathrow83829 · 14h ago
yes, but there will always be a #2 with QE being normalized now.
Nition · 15h ago
Just like land :)
DebtDeflation · 4h ago
AI isn't propping up the economy. The BELIEF that AI will replace almost all labor (employees) is what's propping up the economy. Or rather, propping up the stock market. I'm not sure what kind of economy we're going to have when 99% of the income flows to 0.001% of the population.
ckocagil · 1h ago
One only needs to look at the massive island complexes with underground bunkers the billionaires are building for themselves.
lenerdenator · 19h ago
And that's why there's a desire to make interest rates lower: cheap money is good for propping up bubbles.
Now, it does that at the expense of the average person, but it will definitely prop up the bubble just long enough for the next election cycle to hit.
highfrequency · 6h ago
Careful not to automatically slip into a zero sum mindset - it is possible that low interest rates benefit everyone even if the benefits disproportionately go to one class over another. (Example: lowering interest rates after the Financial Crisis - certainly good for banks, but lowering unemployment is critical for normal people too).
In the same way that UBI would disproportionately benefit poor people, but considered with its downstream effects could benefit rich people too.
bluecalm · 18h ago
I am curious, why do you think lower interest rates are bad for an average person?
lenerdenator · 1h ago
Generally speaking, the lower the prime interest rate, the lower the returns on "safer" investments like certificates of deposit, sometimes below the rate of inflation. As a bank, why would you borrow from local depositors when you could borrow from the central banking system and pay less in interest?
People like my parents, who are both 65, could just park their money at a local bank and have an FDIC-insured savings instrument that roughly tracks inflation and helps invest in the local economy. They don't have to worry about cokeheads in lower Manhattan making bets that endanger their retirements like they have numerous times.
If they do that with lower interest rates, they're more likely to lose money instead of preserving it or slightly increasing it. Which, of course, gives the cokeheads more money to gamble with.
nr378 · 17h ago
Interest rates set the exchange rate between future cashflows (i.e. assets) and cash today. Lower interest rates mean higher asset values, higher interest rates mean lower asset values. Higher asset values generally disproportionately benefit those that own assets (wealthy people) over those that don't (average people).
Of course, this is just one way that interest rates affect the economy, and it's important to bear in mind that lower interest rates can also stimulate investment which help to create jobs for average people as well.
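The rates-to-asset-values mechanism described above is ordinary discounting: the present value of a fixed future cashflow falls as the discount rate rises. A minimal sketch:

```python
def present_value(cashflow: float, rate: float, years: int) -> float:
    """Discount a single future cashflow back to today."""
    return cashflow / (1 + rate) ** years

# The same $100 ten years out is worth much more when rates are low:
pv_low = present_value(100.0, 0.01, 10)   # roughly $90.5 at 1% rates
pv_high = present_value(100.0, 0.05, 10)  # roughly $61.4 at 5% rates
print(pv_low, pv_high)
```

Scale this over every future cashflow an asset produces and you get the "lower rates mean higher asset values" effect, which accrues mostly to whoever already holds the assets.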
tharmas · 16h ago
> it's important to bear in mind that lower interest rates can also stimulate investment which help to create jobs for average people as well.
Precisely! Yet the big problem in the Anglosphere is that most of that money has been invested in asset accumulation, namely housing, causing a massive housing crisis in these countries.
mason_mpls · 18h ago
We want interest rates as close to zero as possible. However they’re also the only reliable tool available to stop inflation.
You're implying that the country exercising fiscal responsibility to control inflation isn't good.
Not using interest rates to control inflation caused the stagflation crisis of the 70s, and ended when Volcker set rates to 20%.
verdverm · 18h ago
more money in the economy drives inflation, which largely affects those with less disposable income
This is why in a hot economy we raise rates, and in a cold economy we lower them
(oversimplification, but it is a commonly provided explanation)
827a · 10h ago
The issue with this theory in the post-internet economy: it's only true if that money is spent chasing a limited amount of scarce goods and services. But the majority of the US economy today is spent on goods and services that are no longer scarce (more accurately, whose unit costs are so low that they might as well be unlimited). We are in a very different world than the one Volcker presided over, and this is the core reason why: the economists who invented the central bank interest rate lever could never have foreseen a world so supply-unconstrained.
Another way to look at this: Low interest rates can induce demand and drive inflation. But they also control the rates when financing supply-side production; so they can also ramp up supply to meet increased demand.
1. Not all goods and services are like this, obviously. Real estate is the big one that low interest rates will continue to inflate. We need legislative-side solutions to this, ideally focused at the state and local levels.
2. None of this applies if you have an economy culturally resistant to consumerism, like Japan. Everything flips on its head and things get weird. But that's not the US.
tharmas · 16h ago
>more money in the economy drives inflation
Not necessarily. Sure, if that money is chasing fixed assets like housing, but if it is invested into producing things to consume, it's not necessarily inflation-inducing, is it? For example, if that money went into expanding the electricity grid and the production of electric cars, the pool of goods to be consumed is expanding, so there is less likelihood of inflation.
verdverm · 16h ago
> if that money was invested into production of things to consume its not necessarily inflation inducing is it
People are paid salaries to work at these production facilities, which means they have more money to spend, and the competition drives people to be willing to spend more to get the outputs. Not all outputs will be scaled, those that aren't experience inflation, like food and housing today
micromacrofoot · 17h ago
Low interest rates make borrowing cheap, so companies flood money into real estate and stocks, inflating prices. This also drives up costs for regular people, fuels risky lending (remember subprime mortgages?), and when the bubble bursts... guess who gets hit the hardest when companies start scaling back and lenders come calling?
hdgvhicv · 9h ago
Stocks increased more from 1985 to 2005 with higher rates than from 2005 to 2025.
You really think the AI bubble can be sustained for another three years?
dylan604 · 19h ago
15 months. Mid-terms are next November. After that, legacy cannot be changed by election. If POTUS loses control of either/both chambers, he might have some 'splanin to do. If POTUS keeps control and/or makes further gains, there might not be an election in 3 years.
tick_tock_tick · 18h ago
> he might have some 'splanin to do
About what? Like, seriously, what would they even do other than try to lame-duck him?
The big issue is that Dem approval ratings are even lower than Trump's, so how the hell are they going to gain any seats?
dylan604 · 16h ago
Gerrymandering helps. Just look at Texas
chasd00 · 14h ago
And California
Hikikomori · 18h ago
With gerrymandering in Texas and elsewhere they might stay in power, and if they do, it's unlikely to change. Basically speed-running a fascist takeover.
smackeyacky · 18h ago
It's not really a speed run.
The seeds were planted after Nixon resigned, when it was decided to re-shape the media landscape and move the Overton window rightwards in the 1970s, dismantling social democracy across the West and leading to a gradual reversal of the norms of governance in the US (see Newt Gingrich).
It's been gradual, slow and methodical. It has definitely accelerated but in retrospect the intent was there from the very beginning.
tharmas · 16h ago
Excellent post.
You could say that was when things reverted back to "normal". The FDR social reconstruction and the post-WW2 economic boom were the exception, the anomaly. But the Scandinavian countries seem to be doing alright. Sure, they have some sizable problems (Sweden in particular), but daily life for the majority in those countries appears to be better than for a lot of people in the Anglosphere.
skinnymuch · 15h ago
A difference also is neoliberalism ramping up in that time period of the 80s. The concept of privatizing anything and everything and bullshit like “private public partnership” are fairly recent.
mathiaspoint · 18h ago
The way most of you define "fascism" America has always been fascist with a brief perturbation where we tried Democracy and some Communism.
If you see it that way this is just a reversion to the mean.
smackeyacky · 16h ago
True. We have collectively forgotten segregation was a thing in the US. Perhaps it has always been a right wing country that flirts with fascism.
dylan604 · 16h ago
The Constitution was clearly written with rich, land-owning white men foremost in mind, with everyone else left out or counted only in fractions. They added some checks and balances as a hand-wavy attempt to stay away from autocracy, but they kind of made them toothless. I'd guess they just didn't have the imagination that people would willingly allow someone to steer back toward autocracy, since they were fighting so hard to leave it.
mathiaspoint · 6h ago
Every time you claim to go after the "rich" you just go after normal people. I think everyone has figured that out.
fzeroracer · 15h ago
It's been an unfortunate truth that the US has long been a country that's flirted with fascism. Ultimately, Thaddeus Stevens was right in his conviction that after the civil war the southern states should've been completely crushed and the land given to the freedmen.
dylan604 · 18h ago
Interesting to see if California follows suit. Governor Newsom has his eye on the 2028 prize it seems. If the Dems do not wake up and start playing the same game the GOP is playing, they will never win. Taking the higher ground is such a nice concept, but it's also what losers say to feel good about not winning. Meanwhile, those willing to break/bend/change rules to ensure they continue to win will, well, continue to win.
lenerdenator · 30m ago
> Governor Newsom has his eye on the 2028 prize it seems
This makes me feel dread. I just don't see him dragging moderates in the middle of the country to the polls, or getting people in the leftist part of the Democratic Party to not "but but but" their way out of voting against fascism again.
Oh well.
SpicyLemonZest · 14h ago
I think it's important to remember how California got here. In the 2000 redistricting, the state legislature agreed to conduct an extreme bipartisan gerrymander, drawing every seat to be as safe as possible so that no incumbent could get voted out without losing a primary. This was widely understood to be a conspiracy of politicians against democratic accountability, and thus voters decided (with the support of many advocacy orgs and every major newspaper in the state) to put an end to it.
That's not the redistricting Newsom wants for 2028, and I tend to agree that Dems have to play the game right now, but I'd really like to see them present some sort of story for why it's not going to happen again.
tick_tock_tick · 18h ago
Honestly, it's not really bubbling like we expected: revenues are growing way too fast, and income from AI investment is coming back to these companies way sooner than anyone thought possible. At this rate we'd need another couple of 20+% years in the stock market for there to be anything left of a "bubble".
Nvidia the poster-child of this "bubble" has been getting effectively cheaper every day.
icedchai · 19h ago
Possibly. For comparison, how long did the dot-com bubble last? From roughly 1995 to early 2000.
thrance · 19h ago
Trump and his administration harassing the Fed and Powell over interest rates is like a swarm of locusts salivating at ripened wheat fields. They want a quick feast at the expense of everything and everyone else, including themselves over the long term.
dylan604 · 19h ago
Trump knows that the next POTUS can just reverse his decisions much like he's done in both of his at bats. Only thing is there is no next at bat for Trump (without major changes that would be quite devastating), so he's got to get them in now. The sooner the better to take as much advantage of being in control.
pessimizer · 18h ago
The left is almost completely unanimous in their support for lowering interest rates, and have been screaming about it for years, since the first moment they started being raised again. And for the same reasons that Trump wants it, except without the negative connotations for some reason.
Recently, I've heard many left wingers, as a response to Trump's tariffs, start 1) railing about taxes being too high, and that tariffs are taxes so they're bad, and 2) saying that the US trade deficit is actually wonderful because it gives us all this free money for nothing.
I know all of these are opposite positions to every one of the central views of the left of 30 years ago, but politics is a video game now. Lefties are going out of their way to repeat the old progressive refrain:
> "The way that Trump is doing it is all wrong, is a sign of mental instability, is cunning psychopathic genius and will resurrect Russia's Third Reich, but in a twisted way he has blundered into something resembling a point..."
"...the Fed shouldn't be independent and they should lower interest rates now."
mason_mpls · 18h ago
I have not heard a single left wing pundit demand interest rates go down
rockemsockem · 18h ago
Elizabeth Warren has gone on several talk shows insisting interest rates should be lowered. If you look at video from the last time Powell was being questioned by Congress there were many other Democratic congress-people asking him why he wouldn't lower rates.
Personally I trust Jerome Powell more than any other part of the government at the moment. The man is made of steel.
mason_mpls · 17h ago
Jerome Powell belongs on Mt Rushmore if you ask me
no_wizard · 14h ago
He's entirely too cozy with big banks. He's one of their biggest advocates when it comes to policy. I think Elizabeth Warren had a point here[0]
Coziness with banks can certainly be an issue. I don't know the specifics and that article is paywalled for me, but it sounds very believable.
That doesn't really change what I said regarding interest rates though.
skinnymuch · 15h ago
People are upset at tariffs as taxes because they hurt poorer people more. That's how it works when everyone pays the same amount of tax.
thrance · 15h ago
Who cares? Even if it were true, why is your first reflex to point the finger at progressives when they're absolutely irrelevant to the current government?
HocusLocus · 14h ago
If AI gets us into orbit ( https://news.ycombinator.com/item?id=44800051#44804687 ) or revitalizes nuclear, I'm fine with those things. It's true that AI usage can scale with availability better than most things but that's not a path to world domination.
BriggyDwiggs42 · 9h ago
Is the linked post referring to literal orbit? Hell will freeze over before orbital datacenters make sense (assuming no antigrav etc gets invented tomorrow).
zingababba · 57m ago
AI and GLP-1 Receptor Agonists.
xg15 · 18h ago
So what will happen to all those massive data centers when the bubble bursts? Back to crypto?
GianFabien · 16h ago
After the dot bomb of 2000, the market got flooded with Cisco and Sun gear for pennies on the dollar. Lots of post-2000 startups got their gear from those auctions and were able to massively extend their runway. Same could happen again.
dboreham · 11h ago
Aeron chairs too.
asdev · 19h ago
look at the S&P 500 chart when ChatGPT came out. We were just on our way to flushing out the Covid excess money and then the AI narrative saved the market. AI narrative + inflation that is definitely way more than reported is propping up this market.
hdgvhicv · 8h ago
It amazes me when people say figures are wrong, but fail to provide alternative figures which widely differ from published ones.
Often it comes down to arguing the “basket of goods” is wrong rather than the individual components, or perhaps that there are wider rates in specific areas.
kogasa240p · 18h ago
Surprised the SVB collapse wasn't mentioned, the LLM boom gained a huge amount of steam right after that happened.
BigglesB · 18h ago
I also wonder to what extent "pseudo-black-box AI" is driving some of these crazy valuations, given that it's actually used in a lot of algorithmic trading itself... a prevalence of over-corrected models, all expecting "line go up" from recent historical data, seems like the perfect way to cook up a really "big beautiful bubble", so to speak...
georgeplusplus · 4h ago
It seems AI, or AGI, has become the dividing line between technology that is needed and technology that is merely nice to have.
I can’t think of one reason anyone really wants this right now. I prefer to deal with a human in 99% of my interactions.
IAmGraydon · 11h ago
Seems like the title should be “AI hype is propping up US company valuations.”
morpheos137 · 13h ago
Imagine, the world's biggest economy propped up by hopes and dreams. Has anyone successfully monetized "AI" at a scale that generates a reasonable return on investment?
PicassoCTs · 7h ago
So if the whole of the old economic elite goes fully catatonic, hiding in server farms from the desert of the real whispering "singularity" while rocking back and forth- will there spring up a new economic elite that is willing to deal with realities as they are?
gamblor956 · 18h ago
This is backwards.
The AI bubble is so big that it's draining useful investment from the rest of the economy. Hundreds of thousands of people are getting fired so billionaires can try to add a few more zeros to their bank account.
The best investment we can make would be to send the billionaires and AI researchers to an island somewhere and not let them leave until they develop an AI that's actually useful. In the meanwhile, the rest of us get to live productive lives.
I don't think the comment is saying AI was able to replace the work people were doing, but people are getting fired and their salaries are being redirected into funding AI development.
add-sub-mul-div · 16h ago
Discounting the evidence of it being explicitly cited as a reason for layoffs, and that its purpose to business is to replace human labor, there's no evidence that it's replacing human labor. Got it.
int_19h · 9h ago
In the case of Microsoft layoffs, that is how it is sold to the public, but the reality according to my former colleagues is that fewer people tasked with the same amount of overall work just end up grinding more. But the charade must be sustained, and so now "how much do you use AI" is one of the performance metrics pushed from the top. Nobody wants to be in the next layoff wave so everybody finds ways to meet those metrics, which then Satya goes and parades to the investors.
(I am an AI optimist, by the by. But that is not one of its success stories.)
rockemsockem · 15h ago
Citing AI for layoffs is great cover for "we over-hired during Covid".
There probably are a few nuts out there who actually fired people to replace them with AI; I feel like that won't go well for them.
There really is no evidence.
no_wizard · 14h ago
There's strong sentiment bubbling up that AI-driven layoffs are going to happen or are already happening[0].
I'll say it's okay to be reserved on this, since we won't know until after the fact, but give it 6-12 months and then we'll know for sure. Until then, I see no reason not to believe there's a culture forming in boardrooms around AI that is driving closed-door conversations about reducing headcount specifically to replace it with AI.
yes and no. AI may be propping up some tech companies who make items in the AI space but the number of jobs lost due to AI is pretty overwhelming. I spent the last week using LLMs to build an app using libraries I am unfamiliar with. It was amazing - to the point where I won't have to write a line of code anymore. These developers probably have some money saved up but that'll dry up. Plus the new grads are now competing against people with some experience. Tariffs + inflation + a presidential grifter = bad times.
jaimex2 · 10h ago
It's all fun and games till the LLM can't fix the issue it created. It can regurgitate and mash up all the blog tutorials that exist online but eventually it'll come across a new problem and it'll be sweet out of luck.
Vibe coding is great for Shanty town software and the aftermath from storms is equally entertaining to watch.
imiric · 7h ago
It is terrifying whenever a vibe coder publishes an app or service that gains users. It's celebrated and seen as an example of what AI tools can accomplish. Until the inevitable security "breach" happens a few weeks later because they left the front door open.
The scary thing is that these tools are now part of the toolset of experienced developers as well. So those same issues can and will happen to apps and services that used to take security seriously.
It's depressing witnessing the average level of quality in the software industry go down. This is the same scenario that caused mass consumer dissatisfaction in 1983 and 2000, yet here we are again.
poopiokaka · 13h ago
You lost me at 404 wanna be event. 13 viewers looking ass
neuroelectron · 11h ago
As designed
tharmas · 12h ago
Here's a comprehensive explanation of the coming employment apocalypse as a result of AI:
More like the corpos are really excited about the post-human AI future, so they are pouring truckloads of money on it and this raises the GDP number. The well-being of the average folk is in decline.
BriggyDwiggs42 · 9h ago
I think the corpos are excited for a new source of meaningful revenue growth.
diogenescynic · 14h ago
They've been saying the same thing about whatever the trend of the moment is for years. Before this it was Magnificent 7 and before that it was FANG, and before that it was something else. Isn't this just sort of fundamental to how the economy works?
johng · 18h ago
There has to be give and take to this as well. The AI increase is going to cost jobs. I see it in my work flow and our company. We used to pay artists to do artwork and editors to post content. Now we use AI to generate the artwork and AI to write the content. It's verified by a human, but it's still done by AI and saves a ton of time and money.
These are jobs that normally would have gone to a human and now go to AI. We haven't paid a cent for AI mind you -- it's all on the ChatGPT free tier or using this tool for the graphics: https://labs.google/fx/tools/image-fx
I could be wrong, but I think we are at the start of a major bloodbath as far as employment goes.... in tech mostly but also in anything that can be replaced by AI?
I'm worried. Does this mean there will be a boom in needing people for tradeskills and stuff? I honestly don't know what to think about the prospects moving forward.
micromacrofoot · 19h ago
This is going to be an absolute disaster, the government is afraid of regulating AI because it's so embedded in our economy now too
jcgrillo · 17h ago
I think we're already starting to see the cracks with OpenAI drastically tightening their belt across various cloud services. Depends how long it takes to set in, but seems like it could be starting this quarter.
m0llusk · 14h ago
Interesting piece, but the idea that this guy understands how oligarchs think seems way off. Jack Welch took General Electric from a global leader to a sad bag holder and he and his fans cheered progress with every positive quarterly report.
AGI isn't being pursued because it will be good, it's being pursued because it is believed to be more-or-less inevitable, and everyone wants to be the one holding the reins for the best odds of survival and/or being crowned god-emperor (this is pretty obviously sam altman's angle for example)
Modern AI is not an intelligence. Wonder what crap they are calling AGI.
https://www.pewresearch.org/short-reads/2022/12/08/about-fou...
Other than government, is there anybody else who can loosen the purse strings a little bit and have it not act as a temporary stimulant as long as it lasts?
Whether they wish it would last or even provide any benefit to the average person, seems like there are plenty who wouldn't wish more prosperity on anyone who doesn't already have it :\
The only real way for long-term growth would be to plant seeds rather than dispense mere artificial stimulants.
Unless AI makes the general public way more money than capitalists have spent, it wouldn't be worth any increase in cost whatsoever for things like energy or hardware. Even non-AI software could become unaffordable if labor costs go up enough to keep top people from being poached by AI companies flush with cash.
I bet even the real estate near the data centers gets more unaffordable, at the same time clocking a win for the local economy due to the increased cash flow and tax revenue. Except all that additional cash is flowing out of peoples' pockets not in :\
As long as that remains true, don't see how this bubble will be popped
I don’t really have a strong preference, so I just use any service where I’m currently not rate limited. There are many of them and I don’t see much difference between them for day to day use. My company pays for Cursor but I burned through my monthly quota in a day working on a proof of concept that mirrored their SDK in a different language. Was it nice that I could develop a proof of concept? Yes. Would I pay 500 dollars for it from my own pocket? No, I don’t think so.
It’s like those extremely cheap food and grocery delivery apps, they made their food cheap, no delivery fees for a while… of course everyone was using it. Then, they started to run out of VC money, they had to raise prices, then suddenly nobody used them anymore and they went bankrupt. There was demand, but only because of the suppressed prices fueled by VC money.
It doesn’t cover the free users, but that’s normal startup strategy. They are using investor cash to grow as quickly as possible. This involves providing as much free service as they can afford so that some of those free users convert to paid customers. At any point they can decide not to give so much free service away to rebalance in favour of profitability over growth. The cost of inference is plummeting, so it’s getting cheaper to service those free users all the time.
> It’s like those extremely cheap food and grocery delivery apps, they made their food cheap, no delivery fees for a while… […] they started to run out of VC money, they had to raise prices
That’s not the same situation because that makes the product more expensive for the customers, which will hit sales. This isn’t the same as cutting back on a free tier, because in that situation you’re not doing anything to harm your customers’ service.
I have spoken with many companies and nearly all of them, when speaking about AI, have gotten to the point where they don't even make any sense. A common theme is "we need AI", but nobody can articulate why, and in fact they get defensive when questioned. It is almost perfectly parallel to the "we need blockchain" argument, or "we need a mobile app". That isn't to say those are not useful technologies, but the rapid rise, steep decline, then gradual rise is a recurring theme in tech.
Others have observed and pointed out his prescience before:
https://news.ycombinator.com/item?id=22364699
That's what everybody was saying in February 2000.
Teachers are demanding not to do the work that is teaching. Lawyers are demanding not to do the work of lawyering. Engineers don't want to do coding and leaders don't want to steer the ship anymore unless it's towards AI.
Alllll the "value" is bullshit. Either AGI arrives and all jobs are over and it's eternal orgy time, or at some point the lazy ai-using losers will get fired and everyone else will go back to doing work like usual
Eternal orgy time is not possible, will never happen. And if AI is useful, which it is, it will never be abandoned. Somewhere in the middle is the real prognosis
It may "balance" around the middle, and I expect it to be noticeably different from now, even without much of an actual middle.
Or the middle could end up being just another group, maybe or maybe not the most prominent one.
Make teachers great again.
Capacity constraints just mean there is currently more demand than supply, and a number of negative factors might be driving that: users with no ROI (free users), too-rapid growth, poor efficiency, etc.
This statement is unrelated to the funding in the space, which is not going to be misplaced; it's only a question of how much.
If anything, this might be a realer version of a dot com boom.
No, history is a web of lies written by the winners, just like your daily news.
https://wccftech.com/ai-capex-might-equal-2-percent-of-us-gd...
> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!
Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century. This means that the ongoing AI CapEx boom has lots of legroom to run before it reaches parity with the rail road boom of that bygone era.
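The quoted arithmetic is easy to check: the article takes an imputed AI CapEx level, applies Kedrosky's 2x multiplier to get $624 billion, and divides by a $30 trillion GDP estimate (all figures from the quote):

```python
ai_capex = 0.312   # imputed AI CapEx, trillions of USD (implied by the quote)
multiplier = 2.0   # Kedrosky's assumed GDP multiplier
us_gdp = 30.0      # estimated US GDP, trillions of USD

impact = ai_capex * multiplier   # $0.624 trillion = $624 billion
share = impact / us_gdp * 100    # percent of GDP
print(f"${impact * 1000:.0f}B, {share:.2f}% of GDP")
```

The numbers are internally consistent; the open question below is whether the "~20% for railroads" comparison point is itself real.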
Has anyone found the source for that 20%? Here's a paper I found:
> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.
https://economics.wm.edu/wp/cwm_wp153.pdf
The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.
I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:
> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.
https://www-users.cse.umn.edu/~odlyzko/doc/mania18.pdf
That would be more believable, but the comparison with AI spending in a single year would not be meaningful.
In a majority agrarian economy where a lot of output doesn't go toward GDP (e.g. milking your own damn cow to feed milk to your own damn family won't show up) I would expect "new hotness" booms to look bigger than they actually are.
At this rate, I hope we get some useful, public, and reasonably priced infrastructure out of this spending in about 5-8 years, just like with the railroads.
When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)
First, GDP still doesn't count you making your own meals. Second, when e.g. free Wikipedia replaces paid-for encyclopedias, this makes society better off but technically decreases GDP.
However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.
While GDP correlates reasonably well, imagine very roughly what it would be like if GDP growth averaged 3% annually while the overall economy grew at 2%. While correlation would be good, if we speculate that 80% of the economy is counted in GDP today, then only 10% would have been counted 200 years ago.
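That back-of-the-envelope claim holds up: if measured GDP compounds one point faster than the whole economy, the measured share drifts up by a factor of 1.03/1.02 each year, so an 80% share today implies only around 10% two centuries ago. A quick check under those assumed rates:

```python
gdp_growth = 1.03       # assumed annual growth of measured GDP
economy_growth = 1.02   # assumed annual growth of the whole economy
share_today = 0.80      # assumed fraction of the economy GDP captures today

# Work backwards 200 years by dividing out the relative drift each year.
share_then = share_today / (gdp_growth / economy_growth) ** 200
print(f"{share_then:.0%}")  # roughly 11%
```

So under these illustrative rates, railroad spending measured against *recorded* GDP would look several times larger than it was relative to total economic activity.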
If you wanted to, you could look at, e.g., black market prices for kidneys to get an estimate of how much your kidney is worth. Or, less macabre, you could look at how much you'd have to pay a gardener to mow your lawn to see what your son's labour is worth.
What's good for one class is often bad for another.
Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
For some people that's great. For others, not so great.
Maybe some economies are great for everyone, but this is definitely not one of those.
This economy is great for some people and bad for others.
In today's US? Debatable, but on the whole probably not.
In a hypothetical country with sane health care and social safety net policies? Yes that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs making it a much more straightforward collective optimization problem.
The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.
For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.
(The data is for 2022.)
Though I realise you asked for sane policies. I can't comment on that.
I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.
(Too many people getting their metaphorical pound of flesh, and bad incentives.)
I don't think you are disagreeing with them.
To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!
Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…
I think you're forgetting the Soviet Union, which looked great on paper until it turned out that it wasn't actually great...
Real GDP can go up, and it doesn't HAVE to mean you are producing more of anything valuable, and can - in fact - mean that you're not producing enough of what you need, and a bunch of what you don't need.
A very simple way to view this is: currently x% of GDP is waste. If Real GDP goes up 4% but the percentage of waste goes from 1% to 8% - you are clearly doing worse.
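A sketch of that example, treating useful output as GDP times one minus the waste share:

```python
# Useful output = GDP x (1 - waste share).
useful_before = 1.00 * (1 - 0.01)   # baseline GDP, 1% waste
useful_after = 1.04 * (1 - 0.08)    # GDP up 4%, waste now 8%
print(useful_before, useful_after)  # 0.99 vs ~0.957: useful output fell
```

Headline real GDP is up 4%, yet the output anyone actually wanted shrank by about 3%.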
This is a reduction of what happened in the Soviet Union.
We need new metrics.
The net utility of AI is far more debatable.
You can still run a train on those old tracks. And it'll be competitive. Sure you could build all new tracks, but that's a lot more expensive and difficult. So they'll need to be a whole lot better to beat the established network.
But GPUs? And with how much tech has changed in the last decade or two and might in the next?
We saw cryptocurrency mining go from CPU to GPU to FPGA to ASICs in just a few years.
We can't yet tell where this fad is going. But there's fair reason to believe that, even if AI has tons of utility, the current economics of it might be problematic.
P/E is, after all, given in the implied unit of "years". (Same as other ratios like debt/GDP).
I am being 100% genuine here, I struggle to understand how the most useful things I've ever encountered are thought of this way and would like to better understand your perspective.
I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning which, ultimately, is an O(1) cost for O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...
So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.
On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.
And that's living in the Bay Area, where the trains aren't great. BART, Caltrain, and Amtrak disappearing would have an orders-of-magnitude larger effect on my life than if LLMs stopped working.
And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.
So for some professionals, mental math really is faster.
Make of that what you will.
Mathematicians are not calculators. Programmers are not typists.
The math that isn't mathing is even more basic tho. This is a Concorde situation all over again. Yes, supersonic passenger jets would be amazing. And they did reach production. But the economics were not there.
Yeah, using GPU farms delivers some conveniences that are real. But after 1.6 trillion dollars it's not clear at all that they are a net gain.
Sure. They don't meaningfully improve anything in my life personally.
They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either
No.
Because I cannot trust it. (Especially when it gives no attributions).
Unfortunately yes I do, because it is placed in a way to immediately hijack my attention
Most of the time it is just regurgitating the text of the first link anyways, so I don't think it saves a substantial amount of time or effort. I would genuinely turn it off if they let me
> That's a feeling, not a fact
So? I'm allowed to navigate my life by how I feel
I have the same feeling with AI.
It clearly cannot produce the quality of code, architecture, features which I require from myself. And I also want to understand what’s written, and not saying “it works, it’s fine <inserting dog with coffee image here>”, and not copy-pasting a terrible StackOverflow answer which doesn’t need half of the code in reality, and clearly nobody who answered sat down and tried to understand it.
Of course, not everybody wants these, and I've seen several people who were fine with not understanding what they were doing, even before AI. Now they are happy AI users. But it is clear to me that it's not beneficial salary-, promotion-, or political-power-wise.
So what’s left is that it types faster… but that was never an issue.
It can be better, however. About a month ago I hit the first case where one of them answered a problem better than anything I knew or could find via Kagi/Google. But generally speaking it's not there at all. Yet.
At this point I am somewhat of a conscientious objector though
Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"
In fact much automation, code or otherwise, benefits from or even requires explicit, concise rules.
It is far quicker for me to already know, and write, an SQL statement, than it is to explain what I need to an LLM.
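For instance (a hypothetical schema, just to illustrate the point), the SQL below says in five short clauses what would take a paragraph of careful English to specify unambiguously to an LLM:

```python
import sqlite3

# Hypothetical example: "each customer's total spend this year, highest
# first, but only customers with more than one order" -- the query states
# the rule explicitly and concisely.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, total REAL, placed_at TEXT);
    INSERT INTO orders VALUES
        ('alice', 120.0, '2025-01-10'),
        ('alice',  80.0, '2025-03-02'),
        ('bob',    50.0, '2025-02-20');
""")
rows = conn.execute("""
    SELECT customer, SUM(total) AS spend
    FROM orders
    WHERE placed_at >= '2025-01-01'
    GROUP BY customer
    HAVING COUNT(*) > 1
    ORDER BY spend DESC
""").fetchall()
print(rows)  # [('alice', 200.0)]
```

If you already know the statement, typing it is the whole job; explaining it in prose, then verifying the LLM's version, is strictly more work.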
It is also quite difficult to get LLMs into a lot of processes, and I think big enterprises are going to really struggle with this. I would absolutely love AI to manage some Windows servers that are in my care, but they are three VMs deep in a remote desktop stack that gets me into a DMZ/intranet. There's no interface, and how would an LLM help anyway? What I need is concise, discrete automations, not a chatbot interface to instruct every day.
To be clear I do try to use AI most days, I have Claude and I am a software developer so ideally it could be very helpful, but I have far less use for it than say people in the strategy or marketing departments for example. I do a lot of things, but not really all that much writing.
Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.
Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.
You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.
Other elephants in the room:
- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?
- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.
The emperor has no clothes.
I would say yes when the LLM is combined with function calling to allow it to do web searches and read web pages. It was previously impossible for me to research a subject within 5 minutes when it required doing several searches and reviewing dozens of search results (not just reading the list entries, but reading the actual HTML pages). I simply cannot read that fast. An LLM with function calling can do this.
The other day, I asked it to check the Linux kernel sources to tell me which TCP connection states for a closing connection would not return an error to send() with MSG_NOSIGNAL. It not only gave me the answer, but made citations that I could use to verify the answer. This happened in less than 2 minutes. Very few developers could find the answer that fast, unless they happen to already know it. I doubt very many know it offhand.
Beyond that, I am better informed than I have ever been since I have been offloading previously manual research to LLMs to do for me, allowing me to ask questions that I previously would not ask due to the amount of time it took to do the background research. What previously would be a rabbit hole that took hours can be done in minutes with minimal mental effort on my part. Note that I am careful to ask for citations so I can verify what the LLM says. Most of the time, the citations vouch for what the LLM said, but there are some instances where the LLM will provide citations that do not.
That said, I recently saw a colleague use an LLM to make a non-trivial UI for Electron in HTML/CSS/JS, despite knowing nothing about any of those technologies, in less time than it would have taken me to do it. We had been in the process of devising a set of requirements; he fed his version of them into the LLM, did some back and forth with it, showed me the result, got feedback, fed my feedback back into the LLM, and got a good solution. I had suggested that he make a mockup (a drawing in KolourPaint, for example) for further discussion, but he surprised me by using an LLM to make a functional prototype in place of the mockup. It was a huge time saver.
Consider something like Shopify - someone with zero knowledge of programming can wow you with an incredible ecommerce site built through Shopify. It's probably like a 1000x efficiency improvement versus building one from scratch (or even using the popular lowcode tools of the era like Magento and Drupal). But it won't help you build Amazon.com, or even Nike.com. It won't even get you part of the way there.
And LLMs, while more general/expressive than Shopify, are inferior to Shopify at doing what Shopify does i.e. you're still better off using Shopify instead of trying to vibe-code an e-commerce website. I would say the same line of thinking extends to general software engineering.
Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
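As a back-of-the-envelope decomposition of those two numbers (the share-of-hours figure is my assumption, picked to make the numbers line up; the paper's actual framework is more involved):

```python
# Rough decomposition: aggregate gain ~= share of all work hours spent
# using genAI x per-hour productivity gain while using it.
per_hour_gain = 0.33    # per-hour gain reported while using genAI
share_of_hours = 0.033  # ASSUMED share of total work hours (illustrative)
aggregate_gain = share_of_hours * per_hour_gain
print(f"{aggregate_gain:.1%}")  # ~1.1%
```

In other words, a large per-hour gain can coexist with a small aggregate number simply because only a few percent of all work hours involve genAI at all.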
To be clear, they are surmising that GenAI is already having a productivity gain.
As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o
It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25% - 33% productivity gains do match the gains shown by at least 3 other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.
I use AI in my personal life to learn about things I never would have without it, because it makes the cost of finding any basic knowledge basically zero: diet-improvement ideas based on several quick questions about gut functioning, recently learning how to gauge tsunami severity, and tons of other things. Once you have several fundamental terms and phrases for a new topic, it's easy to then validate the information with some quick googling too.
How much have you actually tried using LLMs and did you just use normal chat or some big grand complex tool? I mostly just use chat and prefer to enter my code in artisanally.
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable
It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.
If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.
If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.
> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
You're practically saying that looking at an index in the back of a book is a meaningless step.
It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.
Edit:
Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.
You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?
The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.
How do you KNOW it doesn't lie/hallucinate? In order to know that, you have to verify what it says. And in order to verify what it says, you need to check other outside sources, like Wikipedia. So what I'm saying is: Why bother wasting time with the middle man? 'Vague queries' can be distilled into simple keyword searches: If I want to know what a 'Tsunami' is I can simply just plug that keyword into a Wikipedia search and skim through the page or ctrl-f for the information I want instantly.
If you assume that it doesn't lie/hallucinate because it was right on previous requests then you fall into the exact trap that blows your foot off eventually, because sometimes it can and will hallucinate over even benign things.
I still do 10-20x regular Kagi searches for every LLM search, which seems about right in terms of the utility I'm personally getting out of this.
For me, LLMs are also the most useful thing ever, but I was a C student in all my classes. My programming is a joke. I have always been intellectually curious but quite lazy. I have always had tons of ideas to explore, though, and LLMs let me explore ideas that I either wouldn't be able to otherwise or would be too lazy to bother with.
Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.
So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)
Obviously, I wasn't in the "the right mindset" today.
This mindset is one of two things:
- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.
- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to, in effect, create a query which zeroes in on the right nugget of information that made an appearance in the training data.
It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically you do most of the work, while it looks as if it solved the problem.
I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.
The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.
Let's try something else:
Q: "What modes of C major are their own reflection?"
A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."
Very nice sounding and grammatical, but gapingly wrong on every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections. Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
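That correction is easy to verify mechanically, e.g. by reversing each mode's interval pattern (W = 2 semitones, H = 1; the patterns are just rotations of the major scale):

```python
# Check which modes of the major scale are their own reflection by
# reversing each mode's interval pattern (W=2 semitones, H=1).
major = [2, 2, 1, 2, 2, 2, 1]  # Ionian
names = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]

# Mode i is the major pattern rotated to start on scale degree i.
modes = {names[i]: major[i:] + major[:i] for i in range(7)}

for name, pattern in modes.items():
    mirror = pattern[::-1]
    partner = next(n for n, p in modes.items() if p == mirror)
    tag = "self-reflecting" if partner == name else f"mirrors {partner}"
    print(f"{name}: {tag}")
```

Running this confirms Dorian is the only palindromic mode, with the pairs Ionian/Phrygian, Lydian/Locrian, and Mixolydian/Aeolian mirroring each other.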
I once loaded a NotebookLM with materials about George Russell's concept of the Lydian Chromatic, and Tonal Gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.
AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.
"Among the seven modes of C major, only Dorian is its own reflection.
Understanding Mode Reflections

When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:
Ionian: W-W-H-W-W-W-H
Dorian: W-H-W-W-W-H-W
Phrygian: H-W-W-W-H-W-W
Lydian: W-W-W-H-W-W-H
Mixolydian: W-W-H-W-W-H-W
Aeolian: W-H-W-W-H-W-W
Locrian: H-W-W-H-W-W-W
The Palindromic Nature of Dorian

Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.
Mirror Pairs Among the Other Modes

The remaining modes form mirror pairs with each other:
Ionian-Phrygian: Mirror pair
Lydian-Locrian: Mirror pair
Mixolydian-Aeolian: Mirror pair
For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.
This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes"
Are you hoping to disprove my point by cherry picking the AI that gets the answer?
I used Gemini 2.5 Flash.
Where can I get an exact list of stuff that Gemini 2.5 Flash does not know that Claude Sonnet does, and vice versa?
Then before deciding to consult with AI, I can consult the list?
What would make 2.5 Pro (or anything else) categorically better would be if it could say "I don't know".
There will be things that Claude 3.7 or Gemini Pro will not know, and the interpolations they come up with will not make sense.
You must rely on your own internal model in your head to verify the answers it gives.
On hallucination: it is a problem but again, it reduces as you use heavier models.
This is what significantly reduces the utility, if it can only be trusted to answer things I know the answer to, why would I ask it anything?
I have written about it here: https://news.ycombinator.com/item?id=44712300
Lately, I have been using Grok 4 and I have had very good results from it.
Do you build computers by ordering random parts off Alibaba and complaining when they are deficient? You are complaining that you need to RTFM for a piece of high tech?
If they are about something you're not sure about, and you're making decisions based on them ... maybe it would actually help, so yes?
> Do you build computers by ordering random parts off Alibaba and complaining when they are deficient?
We build computers using parts which are carefully documented by data sheets, which tell you exactly for what ranges of parameters their operation is defined and in what ways. (temperatures, voltages, currents, frequencies, loads, timings, typical circuits, circuit board layouts, programming details ...)
That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.
Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.
There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)
Spam emails are not any worse for being verbose, I don't recognize the sender, I send it straight to spam. The volume seems to be the same.
You don't want an AI therapist? Go get a normal therapist.
I have not heard of any AI product displacing industrial design, but if anything it'll make it easier to make/design stuff if/when it gets there.
Like are these real things you are personally experiencing?
Especially people on the left need to realize how important their vision is to the future of AI. Right now you can see the current US admin having zero concern for AI safety or carbon use. If you keep your head in the sand saying "bubble!" that's no problem. But if this is here to stay then you need to get involved.
I honestly don't see technology that stumbles over trivial problems like these as something that will replace my job, or any job that is not already automatable within ten thousand lines of Python, anytime soon. The gap between hype and actual capabilities is insane. The more I've tried to apply LLMs to real problems, the more disillusioned I've become. There is nothing, absolutely nothing, no matter how small the task, that I can trust LLMs to do correctly.
I'm sure if you asked the luddites the utility of mechanized textile production you'd get a negative response as well.
What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).
There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.
Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.
If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.
As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.
Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.
> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.
Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.
Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.
I also use them to help me write code, which it does pretty well.
IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.
Btw if you only think ChatGPT is slightly better than what we had a decade ago, then I do not believe you have used any chatbots at all, either 10 years ago or recently, because that's actually a completely insane take.
To back that up, here's a rare update on stats from OpenAI: https://x.com/nickaturley/status/1952385556664520875
> This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year.
You're looking at the prototype while complaining about an end product that isn't here yet.
Getting people to associate the luddites as anti-technology zealots rather than pro-labor organization is one of the most successful pieces of propaganda in history.
Source? Skimming the wikipedia article it definitely sounds like most were made up of former skilled textile workers that were upset they were replaced with unskilled workers operating the new machines.
> They had nothing against mechanized looms, they had everything against the business owners using their workers talents and knowledge to build an entire operation only to later undercut their wages and/or replace them with lesser paid unskilled workers and reduce the quality of life of their entire community.
Sounds a lot like the anti-AI sentiment today, e.g. "I'm not against AI, I'm just against it being used by evil corporations so they don't have to hire human workers". The "AI slop" argument also resembles luddites objecting to the new machines on grounds of "quality" (also from Wikipedia), although to be fair that was only a passing mention.
Interestingly, the fact that luddites also called for unemployment compensation and retraining for workers displaced by the new machinery probably makes them amongst the most forward-thinking and progressive people of the 1800's.
[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...
This sort of “other people were wrong once, so you might be too” comment is really pointless.
Luddites weren't at a point where every industry sees individual capital formation/demand for labor trend towards zero over time.
Prices are ratios in the currency between factors and producers.
What do you suppose happens when the factors can't buy anything because there is nothing they can trade? Slavery has quite a lot of historic parallels with the trend towards this. Producers stop producing when they can make no profit.
You have a deflationary (chaotic) spiral towards socio-economic collapse, under the burden of debt/money-printing (as production risk). There are limits to systems, and when such limits are exceeded, great destruction occurs.
Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.
The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.
Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.
the vast expense is on the GPU silicon, which is essentially useless for compute other than parallel floating point operations
when the bubble pops, the "investment" will be a very expensive total waste of perfectly good sand
I'm not going to do the homework for a Hacker News comment, but here are a few guesses:
I suspect that a lot of it is TSMC's capex for building new fabs. But since the fabs are already built, they could run them for longer. (Possibly producing different chips.)
Meanwhile, carbon emissions due to electricity use by data centers can't be taken back.
But also, much of an investment bubble popping wouldn't be about wasting resources. It would be investors' anticipated profits turning out to be a mirage - that is, investors feel poorer, but nothing material was lost.
It could have some unexciting applications like, oh, modeling climate change and other scientific simulations.
The first clause of that sentence negates the second.
The investment only makes sense if the expectation of success * the payoff of that goal > the investment.
If I don't think the major AI labs will succeed, then it's not justified.
> The net utility of AI is far more debatable.
As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?
In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.
Whereas with AI, who actually gets the investment? Nvidia? TSMC? Are the people employed ones who would have been employed anyway? Do they actually spend much more? Any Nvidia profits likely go right back into the market, propping it up even higher.
How much efficiency from use of LLMs have actually increased proctiveness?
https://sherwood.news/markets/the-ai-spending-boom-is-eating...
(comment below: https://news.ycombinator.com/item?id=44804528 )
So they are talking about changes not levels.
For me, that’s enough of a thought experiment — as implausible as it might be to have AI in 1901 — to be skeptical that the difference is simply that the first tech step-change was a pre-war uplift to build the post-war US success story, and the latter builds on it.
More like apples to octopus.
People should keep in mind that there was no such thing as a GDP before the 1980's.
All that has been back-calculated, and the further back you go the more ridiculous it gets.
The excuses sounded plausible at the time, but the switch killed two birds with one stone:
It slowed the growth of government benefits, which had become tied to GNP to cope with inflation, and it further obscured how poor economic performance from the 1980s onward was compared to the pre-1970 numbers.
The people who were numerically smart before that and saw what things were like first hand were not fooled so easily.
Even using GDP back in the 1980's when it first came out, you couldn't get a good picture of the 1960's which were not that much earlier.
Don't make me laugh trying for the 1860's :)
This will not be the case anymore. There is no labor restructuring to be made; the lists of future "safe jobs" are humorous to say the least. Companies have had difficulty finding skilled labor at sustainable wages, and that has been highlighted as a key blocker for growth. The economy will rise as AI removes this blocker. A rise of the economy due to AI invalidates old models and trickle-down spurious correlations. A rise of the economy through AI directly enables the most extreme inequality, and no reflexes or economics experience exist to manage it.
There have been many theories of revolutions: social, financial, ideological, and others. I will not comment on those, but I will make a practical observation: it boils down to the ratio of controllers to controlled. AI also enables an extremely minimal number of controllers through AI management of the flow of information, and later a large number of drones can keep everyone at bay. Cheaply, so good for the economy.
I usually avoid responding to remarks like this because they risk forays into politics, which I avoid, but the temptation to ask was too great here. What do you consider computers, cellphones, air conditioners, flat screen TVs and refrigerators to be? The first ones had outrageous prices that only the exorbitantly wealthy could afford. Now almost everyone in the US has them. They seem to have trickled down to me.
Products people buy with the money they earn. Not things that fall down from the tables of the ultra rich.
Their affordability comes from the economies of scale. If I can sell 100000 units of something as opposed to 100 units, the cost-per-unit goes down. Again, nothing to do with anything "trickling down".
> What do you consider computers, cellphones, air conditioners, flat screen TVs and refrigerators to be? The first ones had outrageous prices that only the exorbitantly wealthy could afford. Now almost everyone in the US has them. They seem to have trickled down to me.
That said, I see numerous things that exist solely because those with money funded R&D. Your capital markets theory for how the R&D was funded makes no sense because banks will not give loans for R&D. If any R&D funds came from capital markets, it was by using existing property as collateral. Funds for R&D typically come from profitable businesses and venture capitalists. Howard Hughes for example, obtained substantial funds for R&D from the Hughes Tool Company.
Just to name how the R&D for some things was funded:
- Microwave oven: Developed by Raytheon, using profits from work for the US military
- PC: Developed by IBM using profits from selling business equipment.
- Cellular phone: Developed by Motorola using profits from selling radio components.
- Air conditioner: Developed by Willis Carrier at Buffalo Forge Company using profits from the sale of blacksmith forges.
- Flat panel TV: Developed by Epson using profits from printers.
The capital markets are nowhere to be seen. I am at a startup where hardware is developed. Not a single cent that went into R&D or the business as a whole came from capital markets. My understanding is that the money came from an angel investor and income from early adopters. A hardware patent that had given people the idea for the business came from research in academia, and how that was funded is unknown to me, although I would not be surprised if it had been funded through a NSF grant. The business has been run on a shoe string budget and could grow much quicker with an injection of funding, yet the capital markets will not touch it.
Also, not all patents are monetizable.
Then they hope they can sell it at a profit.
Products becoming cheaper is a result of the processes getting more optimized ( on the production side and the supply side ) which is a function of the desire to increase the profit on a product.
Without any other player in the market this means the profit a company makes on that product increases over time.
With other players in that market that underprice your product it means that you have to reinvest parts of your profit into making the product cheaper ( or better ) for the consumer.
Not to increase scale, but to reduce the cost of the device while maintaining 99% of the previous version, IOW, enshittification of the product.
> how would the affordable versions exist today?
Not all "affordability" comes from the producer of the said stuff. Many things are made from commodity materials, and producers of these commodity materials want to increase their profits, hence trying to produce "cheaper" versions of them, not for the customers, but for themselves.
Affordability comes from this cost reduction, again enshittification. Only a few companies I see produce lower priced versions of their past items which also surpasses them in functionality and quality.
e.g. I have Sony WH-CH510 wireless headphones, which have way higher resolution than some wired headphones paired with decent-ish amps; this is because Sony is an audiovisual company and takes pride in what it does. On the other end of the spectrum are tons of other brands which don't sell for much cheaper but have way worse sound quality and feature sets, not because they can't do it as well as Sony, but because they want a small piece of the said market and some easy money, basically.
https://cdn.britannica.com/93/172793-050-33278C86/Cell-phone...
As for your wireless headphones, if you compare them to early wireless headphones, you should find that prices have decreased, while quality has increased.
I can argue, from some aspects, yes. Given that you provide the infrastructure for these devices, they'll work exactly as they were designed to. On the other hand, a modern smartphone has a way shorter life span. OLED screens die, batteries swell, electronics degrade.
Ni-Cad batteries, while finicky and toxic, are much longer lasting than Li-ion and Li-Poly batteries. If we want to talk Li-Poly batteries, my old Sony power bank (advertising 1,000 recharge cycles with proprietary Sony battery tech) is keeping its promise, capacity and shape, 11 years after its stamped manufacturing date.
Can you give me an example of another battery/power pack which is built today and can continue operating for 11 years without degrading?
As electronics shrink, the number of atoms per gate decreases, and this also reduces the life of the things. My 35 y/o amplifier works pretty well, even today, but modern processors visibly degrade. A processor degrading to a limit of losing performance and stability was unthinkable a decade ago.
> you will find that prices have decreased, while quality has increased.
This is not primarily driven by the desire to create better products. First, cheaper and worse ones come, and somebody decides to use the design headroom to improve things later on, and put a way higher price tag.
Today, in most cases, speakers' quality has not improved, but the signal processed by DSP makes them appear sound better. This is cheaper, and OK for most people. IOW, enshittification, again. Psychoacoustics is what makes this possible, not better sounding drivers.
The last car I rented has a "sound focus mode" under its DSP settings. If you're the only one in the car, you can set it to focus to driver, and it "moves" the speakers around you. Otherwise, you select "everyone", and it "improves" sound stage. Digital (black) magic. In either case, that car does not sound better than my 25 year old car, made by the same manufacturer.
You want genuinely better sounding drivers, you'll pay top dollar in most cases.
I have LiFePo4 batteries from K2 Energy that will be 13 years old in a few months. They were designed as replacements for SLA batteries. Just the other day, I had put two of them into a UPS that needed a battery replacement. They had outlived the UPS units where I had them previously.
I have heard of Nickel Iron batteries around 100 years old that still work, although the only current modern manufacturers are in China. The last US manufacturer went out of business in 2023.
> You want genuinely better sounding drivers, you'll pay top dollar in most cases.
I do not doubt that, but if the signal processing improves things, I would consider that to be a quality improvement.
Interesting, but they are manufactured not more, but way less, as you can see. So quality doesn't drive the market. Money does.
> I do not doubt that, but if the signal processing improves things, I would consider that to be a quality improvement.
Depends on the "improvement" you are looking for. If you are a casual listener hunting for an enjoyable pair while at a run or gym, you can argue that's an improvement.
But if you're looking for resolution increases, they're not there. I occasionally put one of my favorite albums on, get a tea, and listen to that album for the sake of listening to it. It's sadly not possible on all gear I have. You don't need to pay $1MM, but you need to select the parts correctly. You still need a good class AB or an exceptional class D amplifier to get good sound from a good pair of speakers.
This "apparent" improvement which is not there drives me nuts actually. Yes, we're better from some aspects (you can get hooked to feeds instead of drugs and get the same harm for free), but don't get distracted, the aim is to make numbers and line go up.
They were always really expensive, heavy and had low energy density (both by weight and by volume). Power density was lower than lead acid batteries. Furthermore, they would cause a hydrolysis reaction in their electrolyte, consuming water and producing a mix of oxygen and hydrogen gas, which could cause explosions if not properly vented. This required periodic addition of water to the electrolyte. They also had issues operating at lower temperatures.
They were only higher quality if you looked at longevity and nothing else. I had long thought about getting them for home energy storage, but I decided against them in favor of waiting for LiFePo4 based solutions to mature.
By the way, I did a bit more digging. It turns out that US production of NiFe batteries ended before 2023, as the company that was supposed to make them had outsourced production to China:
https://www.terravolt.net/iron-edison
Sorry, I misread your comment. I thought you were talking about LiFePo4 production ending in 2023, not NiFe.
I know that NiFe batteries are not suitable (or possible to be precise) to be miniaturized. :)
I still wish the market did as much research on longevity as on charge speed and capacity, but it seems companies are happy to have batteries with shorter and shorter life spans to keep up with their version of the razor-and-blades model.
Also, this is why regulation is necessary in some areas.
Quite why we've persuaded ourselves we need to do this through a remote & deaf middleman is anyone's guess, when governments we elect could just direct money through policies we can all argue about and nudge in our own small ways.
You're talking about this: https://ideas.repec.org/p/wrk/warwec/270.html
:)
Trickle down economics is supposed to make poorer people more wealthy. Not suppress their wage growth while offering a greater selection of affordable gadgets.
https://www.merriam-webster.com/dictionary/gadget
Among the many things that have become affordable for every day people because money had been present to fund the R&D are air conditioners, refrigerators, microwave ovens, dish washers, washing machines, clothes dryers, etcetera. When I was born in the 80s, my parents had only a refrigerator (and maybe a microwave oven). They could not afford more. Now they have all of these things.
I don’t expect either of us to be able the answer the questions posed. Nobody in the 80s was asking for any of these inventions. People were living their lives happily ignorant to a better future. For that reason, most of these things do amount to just gadgets. They have shaped our lives in a dramatic way and had huge commercial success by solving huge problems or increasing conveniences, but they are still nonessential. That’s the way I’m using the term, don’t really care what Webster has to say about it tbh as I’m perhaps being dramatic precisely to highlight this point.
The continuation of R&D isn’t even a trickle down policy. If you’re a big manufacturer of CRT televisions, it’s in your interest to continue inventing better technology in that space just to remain competitive. If you’re really good at it, there’s a good chance you can steal market share. It’s good old fashioned business as usual in a competitive industry. I don’t see how they relate to one another. Not to mention that many things are invented in a garage somewhere and capital is infused later. Would this only happen if the rich uncles of the world benefited from economic policies aimed at making them rich? I think it would still find a way in most cases, good ideas typically always find a way. I don’t think a majority of gadgets can be linked to something like “brought to you by trickle down economics”.
Honestly, I have to say that I am relatively happy with the things that I have these days because of obscenely wealthy people’s investments. I have a heat pump air conditioner that would have been unthinkable when I was a child. I have food from Aldi and Lidl, whose prices relative to the competition also would have been unthinkable when I was a child. I have an electric car and solar panels, which were in the realm of fantasy when I was a child. Solar panels and electric cars existed, but solar panels were obscenely expensive and electric cars were considered a joke when I was young. I have a gigabit fiber internet connection at $64.99 per month, such internet connections were only available to the obscenely rich when I was a child. I am not sure if I would have any of these things if the money had not been there to fund them. I really do feel like things have trickled down to me.
I like electric cars and solar panels and gigabit fiber as much as the next person, but they aren’t wealth.
https://en.wikipedia.org/wiki/Aldi
If you shop there, you are enriching its owners. That is not a bad thing. The more money they have, the better they make things for people, so it is a win-win.
Note that Aldi is technically two companies since the family that founded it had some internal disagreement and split the company into two, but they are both privately owned.
That said, if wealthy people had not made investments, I would not have an electric car, solar panels or gigabit fiber. The solar panels also improve property values, so it very much is a form of wealth, although not a liquid one. Electric cars similarly are things that you can sell (although they are depreciating assets), so saying that they are not wealth is not quite correct. The internet connection is not wealth in a traditional sense, but it enables me to work remotely, so it more than pays for itself.
> The trickle-down theory includes commonly debated policies associated with supply-side economics.
Let's assume you have a monopoly on something with a guarantee that no one else can sell the same product in your market. Then there is no direct incentive to make the product cheaper, even if you can produce it for cheaper. Adding more money on top of it that is supposed to trickle down in some way will not make that product cheaper, unless there is an incentive for that company to do so.
The real world is of course more complicated, let's say you have two companies that get the incentives and one of them is using it to make the product cheaper, then that will "trickle down" as a price decrease because the other company need to follow suit to stay competitive. But this again is driven by the market and not the incentives and would have happened without them just as well.
The first cellular phone in modern currency cost something like $15,000. At that price, the market for it would be orders of magnitude below the present cellular phone market size. Lower the price 1 to 2 orders of magnitude and we have the present cellular phone market, which is so much larger than what it would have been with the cellular phone at $15,000.
Interestingly, the cellular phone market also seems to be in a period where competition is driving prices upward through market segmentation. This is the opposite of what you described competition as doing. Your remark that the real world is more complicated could not be more true.
If you fix the specs and progress time then the prices go down considerably
Take the first iPhone, which was $499 ($776.29 if adjusted for inflation), and try to find a currently built phone with similar specs. I couldn't find any that go down that far, but the cheapest one I could find was the ZTE Blade L9 (which still has higher specs overall); then we are looking at over a 90% price reduction.
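The inflation adjustment quoted above is just a CPI deflator. A minimal sketch, where the CPI-U index values are approximate (mid-2007 vs. mid-2025) and are my assumption, not figures from the comment:

```python
# Approximate CPI-U index levels (assumed for illustration).
CPI_2007 = 207.3
CPI_2025 = 322.1

def adjust_for_inflation(price, cpi_then, cpi_now):
    """Scale a historical price into today's dollars."""
    return price * cpi_now / cpi_then

# The $499 launch iPhone lands in the high $700s in 2025 dollars,
# close to the $776.29 quoted above.
print(round(adjust_for_inflation(499, CPI_2007, CPI_2025), 2))
```

The exact figure depends on which months' index values you pick, which is why the result here only roughly matches the quoted number.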
Permeation of technology due to early adopters paying high costs leading to lower costs is not what trickle down generally means. Being an early adopter of cellphones, AC, flat screen TVs or computers required the wealth level of your average accountant of that era - it didn't require being a millionaire.
I ask hyperbolically: are they economic enablers or financial traps?
(My hunch is that fridges are net-enablers, but TVs are net-traps. I say this as someone with a TV habit I would like to kick.)
[1] Things get even spicier if consumer growth was zero. Then what would the comparison be? That AI added infinitely more to growth than consumer spending? What if it was negative? All this shows how ridiculous the framing is.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
Have you heard of the disagreement hierarchy? You're somewhere between 1 and 3 right now, so I'm not even going to bother to engage with you further until you bring up more substantive points and cool it with the personal attacks.
https://paulgraham.com/disagree.html
Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on. A resilient economy has multiple growth areas; an unstable one has one or two.
While you could certainly argue that we may already be in rough shape even without the bubble popping, it would undoubtedly get worse for the reasons I listed above,
Right, I'm not suggesting that all of the datacenter construction will seamlessly switch over to building homes, just that some of the labor/materials freed would be allocated to other sorts of construction. That could be homes, Amazon distribution centers, or grid connections for renewable power projects.
>A resilient economy has multiple growth areas; an unstable one has one or two.
>[...] it would undoubtedly get worse for the reasons I listed above,
No disagreement there. My point is that if AI somehow evaporated, the hit to GDP would be less than $10 (total size of the sector in the toy example above), because the resources would be allocated to do something else, rather than sitting idle entirely.
>Regarding the economics, the reason it’s a big deal that AI is powering growth numbers is because if the bubble pops, jobs go poof and stock prices with it as everyone tries to salvage their positions. While we still create jobs, on net we’ll be losing them. This has many secondary and tertiary effects, such as less money in the economy, less consumer confidence, less investment, fewer businesses causing fewer jobs, and so on.
That's a fair point, although the federal government got pretty good at stimulus after the GFC and COVID, so any credit crunch would likely be short-lived.
Is the keyword here. US consumers have been spending so much so of course that sector doesn't have that much room to grow.
Using non-seasonally adjusted St. Louis FRED data (https://fred.stlouisfed.org/series/NA000349Q), and the AI CapEx spending for Meta, Alphabet, Microsoft, and Amazon from the WSJ article (https://www.wsj.com/tech/ai/silicon-valley-ai-infrastructure...):
-------------------------------------------------
Q4 2024 consumer spending: ~$5.2 trillion
Q4 2024 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q1 2025 consumer spending: ~$5 trillion
Q1 2025 AI CapEx spending: ~$75 billion
-------------------------------------------------
Q2 2025 consumer spending: ~$5.2 trillion
Q2 2025 AI CapEx spending: ~$100 billion
-------------------------------------------------
So, non-seasonally adjusted consumer spending is flat. In that sense, yes, anything where spend increased contributed more to GDP growth than consumer spending.
If you look at seasonally-adjusted rates, consumer spending has grown ~$400 billion, which might outstrip total AI CapEx in that time period, let alone its growth. (To be fair, the WSJ graph only shows the spending from Meta, Google, Microsoft, and Amazon. But it also says that Apple, Nvidia, and Tesla combined "only" spent $6.7 billion in Q2 2025 vs the $96 billion from the other four. So it's hard to believe that spend coming from elsewhere is contributing a ton.)
If you click through to the tweet that is the source for the WSJ article where the original quote comes from (https://x.com/RenMacLLC/status/1950544075989377196), it's very unclear what it's showing: it only shows percentage change, and it doesn't show anything about consumer spending.
So, at best this quote is very misleadingly worded. It also seems possible that the original source was wrong.
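The changes-versus-levels point above reduces to simple differencing. A back-of-the-envelope sketch using the rounded figures quoted in this comment (all in $billions; AI capex covers Meta, Alphabet, Microsoft, and Amazon only):

```python
# Rounded quarterly figures quoted above, in $billions.
consumer_nsa = [5200, 5000, 5200]  # Q4, Q1, Q2 NSA levels
ai_capex = [75, 75, 100]           # same quarters

# Contribution-to-growth compares changes between quarters, not levels.
d_consumer = consumer_nsa[-1] - consumer_nsa[0]  # flat over the window
d_ai = ai_capex[-1] - ai_capex[0]

print(d_consumer, d_ai)  # 0 25
```

With consumer spending flat on an NSA basis, any sector whose spend rose at all "contributed more to growth," which is exactly why the original framing is misleading.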
If this isn't the Singularity, there's going to be a big crash. What we have now is semi-useful, but too limited. It has to get a lot better to justify multiple companies with US $4 trillion valuations. Total US consumer spending is about $16 trillion / yr.
Remember the Metaverse/VR/AR boom? Facebook/Meta did somehow lose upwards of US$20 billion on that. That was tiny compared to the AI boom.
I was working on crypto during the NFT mania, and THAT felt like a bubble at the time. I'd spend my days writing smart contracts and related infra, but I was doing a genuine wallet transaction at most once a week, and that was on speculation, not work.
My adoption rate of AI has been rapid, not for toy tasks, but for meaningful complex work. Easily send 50 prompts per day to various AI tools, use LLM-driven auto-complete continuously, etc.
That's where AI is different from the dot-com bubble (not enough folks were materially transacting on the web at the time), or the crypto mania (speculation, not utility).
Could I use a smarter model today? Yes, I would love that and use the hell out of it. Could I use a model with 10x the tokens/second today? Yes, I would use it immediately and get substantial gains from a faster iteration cycle.
See the dotcom bubble in the early 2000s for a perfect example. The Web is still useful, but the bubble bursting was painful.
I have to imagine that other professions are going to see similar inflection points at some point. When they do, as seen with Claude Code, demand can increase very rapidly.
* Even with all this infra buildout all the hyperscalers are constantly capacity constrained, especially for GPUs.
* Surveys are showing that most people are only using AI for a fraction of the time at work, and still reporting significant productivity benefits, even with current models.
The AGI/ASI hype is a distraction, potentially only relevant to the frontier model labs. Even if all model development froze today, there is tremendous untapped demand to be met.
The Metaverse/VR/AR boom was never a boom, with only 2 big companies (Meta, Apple) plowing any "real" money into it. Similarly with crypto, another thing that AI is unjustifiably compared to. I think because people were trying to make it happen.
With the AI boom, however, the largest companies, major governments and VCs are all investing feverishly because it is already happening and they want in on it.
Are they constrained on resources for training, or resources for serving users using pre-trained LLMs? The first use case is R&D, the second is revenue. The ratio of hardware costs for those areas would be good to know.
However, my understanding is that the same GPUs can be used for both training and inference (potentially in different configurations?) so there is a lot of elasticity there.
That said, for the public clouds like Azure, AWS and GCP, training is also a source of revenue because other labs pay them to train their models. This is where accusations of funny money shell games come into play because these companies often themselves invest in those labs.
I was recently at a big, three-letter pharmacy company, and I can't be specific, but just let me say this: they're always on the edge of having the main websites go down for this or that reason. It's a constant battle.
How is adding more AI complexity going to help any of that when they don't even have a competent enough workforce to manage the complexity as it is today?
You mention VR--that's another huge flop. I got my son a VR headset for Christmas in like 2022. It was cool, but he couldn't use it long or he got nauseous. I was like "okay, this is problematic." I really liked it in some ways, but sitting around with that goofy thing on your head wasn't a strong selling point at all. It just wasn't.
If AI can't start doing things with accuracy and cleverness, then it's not useful.
Humans are not always accurate or clever. But we still consider them useful and employ them.
This is crusty, horrible, old, complex code. Nothing is in one place. The entire editing experience was copy-pasted from the create resource experience (not even reusable components; literally copy-pasted). As the principal on the team, with the best understanding of anyone about it, even my understanding was basically just "yeah I think these ten or so things should happen in both cases because that's how the last guy explained it to me and it vibes with how I've seen it behave when I use it".
I asked Cursor (Opus Max) something along the lines of: Compare and contrast the differences in how the application behaves when creating this resource versus updating it. Focus on the API calls it's making. It responded in short order with a great summary, and without really being specifically prompted to generate this insight it ended the message by saying: It looks like editing this resource doesn't make the API call to send a notification to affected users, even though the text on the page suggests that it should and it does when creating the resource.
I suspect I could have just said "fix it" and it could have handled it. But, as with anything, as you say: Its more complicated than that. Because while we imply we want the app to do this, its a human's job (not the AI's) to read into what's happening here: The user was confused because they expected the app to do this, but do they actually want the app to do this? Or were they just confused because text on the page (which was probably just copy-pasted from the create resource flow) implied that it would?
So instead I say: Summarize this finding into a couple sentences I can send to the affected customer to get his take on it. Well, that's bread and butter for even AIs three years ago, so off it goes. The current behavior is correct; we just need to update the language to manage expectations better. AI could also do that, but it's faster for me to just click the hyperlink in Claude's output, jump right to the file, and make the update.
Opus Max is expensive. According to Cursor's dashboard, this back-and-forth cost ~$1.50. But let's say it would have taken me just an hour to arrive at the same insight it did (in a fifth the time): that's easily over $100. That's a net win for the business, and it's a net win for me, because I now understand the code better than I did before and I was able to focus my time on the parts of the problem that humans are good at.
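The cost comparison in that anecdote is a simple break-even check. A minimal sketch, where the hourly rate is an assumed fully-loaded engineering cost, not a figure from the comment:

```python
def net_savings(tool_cost, hours_saved, hourly_rate):
    """Tool spend is a net win when it costs less than the time it replaces."""
    return hours_saved * hourly_rate - tool_cost

# ~$1.50 of Opus Max vs. most of an hour of (assumed $125/hr) engineer time.
print(net_savings(tool_cost=1.50, hours_saved=0.8, hourly_rate=125.0))
```

The asymmetry is large enough that the conclusion holds across a wide range of assumed rates.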
Edit: agree on the metaverse as implemented/demoed not being much, but that's literally one application
Don't get me wrong, VRChat and Beat Saber are neat, and all the money thrown at the space got the tech advanced at a much faster rate than it would have organically have done I'm the same time (or potentially ever). But you can see Horizon's attempt to be "VRChat but a larger more profitable business" to see how the things you would need to do to monetise it to that level will lose you the audience that you want to monetise.
The average response to that is "it's just fake demand from other businesses also trying to make AI work". Then why are the same trends all but certainly happening at Cursor, for Claude Code, Midjourney, entities that generally serve customers outside of the fake-money bubble? Talk to anyone under the age of 21 and ask them when they last used ChatGPT. McDonald's wants to deploy Gemini in 43,000 US locations to help "enhance" employees (and you know they won't stop there) [2]. Students use it to cheat at school, while their professors use it to grade their generated papers. Developers on /r/ClaudeAI are paying for three $200/mo Claude Max subscriptions and swapping between them because the limits aren't high enough.
You can not like the world that this technology is hurtling us toward, but you need to separate that from the recognition that this is real, everyone wants this, today its the worst it'll ever be, and people still really want it. This isn't like the metaverse.
[1] https://openrouter.ai/rankings
[2] https://nypost.com/2025/03/06/lifestyle/mcdonalds-to-employ-...
If the general theme of this article is right (that it's a bubble soon to burst), I'm less concerned about the political environment and more concerned about the insane levels of debt.
If AI is indeed the thing propping up the economy, when that busts, unless there are some seriously unpopular moves made (Volcker level interest rates, another bailout leading to higher taxes, etc), then we're heading towards another depression. Likely one that makes the first look like a sideshow.
The only thing preventing that from coming true IMO is dollar hegemony (and keeping the world convinced that the world's super power having $37T of debt and growing is totally normal if you'd just accept MMT).
The first Great Depression was pretty darn bad, I'm not at all convinced that this hypothetical one would be worse.
Today, we have the highest tariffs since right before the Great Depression, with the added bonus of economic uncertainty because our current tariff rates change on a near daily basis.
Add in meme stocks, AI bubble, crypto, attacks on the Federal Reserve’s independence, and a decreasing trust in federal economic data, and you can make the case that things could get pretty ugly.
But for things to be much worse than the Great Depression, I think is an extraordinary claim. I see the ingredients for a Great Depression-scale event, but not for a much-worse-than-Great-Depression event.
How long will the foot stay on the accelerator after (almost literally) everyone else knows we might be in a bit of strife here?
If the US can put off the depression for the next three years then it has a much better chance of working its way out gracefully.
[0]: https://en.wikipedia.org/wiki/The_Mandibles
Which is their (Thiel, project2025, etc) plan, federal land will be sold for cheap.
It's already happening: over the past 6 months the USD has been losing value against the EUR, CHF, GBP, even the BRL, and is almost flat against the JPY, which itself had been losing a ton of value over the past few years.
As someone in an AI company right now - Almost every company we work with is using Azure wrapped OpenAI. We're not sure why, but that is the case.
Also Microsoft Azure hosts its own OpenAI models. It isn’t a proxy for OpenAI.
It's the same reason you would use RDS at an AWS shop, even if you really like CloudSQL better.
This is the main reason the big cloud vendors are so well-positioned to suck up basically any surplus from any industry even vaguely shaped like a b2b SaaS.
These companies are left to choose between self-hosting models, or a vendor like MS who will rent them "their own AI running in their own Azure subscription", cut off from the outside world.
Neglects the most important benefit of large semiconductor spending: we are riding the Learning Curve up Moore's Law. We are not much better at building railroads today than we were in 1950. We are way better at building computers today. The GPUs may depreciate but the knowledge of how to build, connect, and use them does not - that knowledge compounds over time. Where else do you see decades of exponential efficiency improvements?
Back then, the money poured into building real stuff like actual railroads and factories and making tangible products.
That kind of investment really grew the value of companies and was more about creating actual economic value than just making shareholders rich super fast.
Its limitations are well-documented, but cutting-edge AI right now is very much "real stuff."
The amount of speculation and fraud from this time period would make even the biggest shitcoin fraud blush.
Try a biography of Jay Gould if you want more information.
* https://paulkedrosky.com/honey-ai-capex-ate-the-economy/
* https://news.ycombinator.com/item?id=44609130
Also because so many companies are staying private, a crash in private markets is relatively irrelevant for the overall economy.
Simplifying an economic activity down to a single short formula leaves out a lot of important parameters. These rules tend to hold some truth at the time they are invented and often break soon after, because tax, regulation, and technology changes shift how money flows through the economy.
Like the yield curve inversions and the Sahm rule and so on.
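The Sahm rule is a good example of such a single short formula: it flags a recession when the three-month moving average of the unemployment rate rises 0.5 percentage points or more above its low over the preceding twelve months. A minimal sketch (the unemployment series in the example is made up for illustration):

```python
def sahm_signal(unemployment, lookback=12, threshold=0.5):
    """Return a per-month list of booleans: True where the 3-month
    average unemployment rate exceeds the low of that average over
    the prior `lookback` months by at least `threshold` points."""
    three_mo = []   # running 3-month moving averages
    signals = []
    for i in range(len(unemployment)):
        lo = max(0, i - 2)
        avg = sum(unemployment[lo:i + 1]) / (i + 1 - lo)
        three_mo.append(avg)
        start = max(0, i - lookback)
        prior_low = min(three_mo[start:i + 1])
        signals.append(avg - prior_low >= threshold)
    return signals

# Made-up series: a year of flat 4.0% unemployment, then a quick rise.
signals = sahm_signal([4.0] * 12 + [4.2, 4.5, 4.8])
```

The point stands either way: the rule has matched past US recessions well, but nothing in the formula guarantees it survives a structural change in the labor market.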
But wouldn't you want to pay more for a company that has a history of revenue and income growth than one in a declining industry? And you have to look at assets on the company's books; you're not just buying a company, you're buying a share of what it owns. What if it has no income, but you think there's a 10% chance it'll be printing money in 5 years?
That's why prices won't naively reset to a multiple of ~~dividends~~ income (see the dividend irrelevance theory) across the board. Someone will always put a company's income in context.
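The 10%-chance case above is just a discounted expected-value calculation; a toy sketch with made-up numbers (the 10% probability, $10B payoff, and 8% discount rate are all illustrative assumptions):

```python
def expected_value(p_success, payoff_if_success, discount_rate, years):
    """Discounted expected value of a binary bet: a `p_success` chance
    of `payoff_if_success` in `years` years, and zero otherwise."""
    return p_success * payoff_if_success / (1 + discount_rate) ** years

# Made-up numbers: 10% chance the company is printing money (say,
# worth $10B) in 5 years, discounted at 8% per year.
ev = expected_value(0.10, 10e9, 0.08, 5)  # roughly $680M today
```

So even a company with zero current income can rationally trade well above a naive income multiple, which is exactly why prices don't reset uniformly.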
Now, it does that at the expense of the average person, but it will definitely prop up the bubble just long enough for the next election cycle to hit.
In the same way that UBI would disproportionately benefit poor people, but considered with its downstream effects could benefit rich people too.
People like my parents, who are both 65, could just park their money at a local bank and have an FDIC-insured savings instrument that roughly tracks inflation and helps invest in the local economy. They don't have to worry about cokeheads in lower Manhattan making bets that endanger their retirements like they have numerous times.
If they do that with lower interest rates, they're more likely to lose money instead of preserving it or slightly increasing it. Which, of course, gives the cokeheads more money to gamble with.
Of course, this is just one way that interest rates affect the economy, and it's important to bear in mind that lower interest rates can also stimulate investment which help to create jobs for average people as well.
Precisely! Yet the big problem in the Anglosphere is that most of that money has been invested in asset accumulation, namely housing, causing a massive housing crisis in these countries.
You're implying that a country exerting financial responsibility to control inflation isn't good.
Not using interest rates to control inflation caused the stagflation crisis of the 70s, and ended when Volcker set rates to 20%.
This is why in a hot economy we raise rates, and in a cool economy we lower them
(oversimplification, but it is a commonly provided explanation)
Another way to look at this: Low interest rates can induce demand and drive inflation. But they also control the rates when financing supply-side production; so they can also ramp up supply to meet increased demand.
1. Not all goods and services are like this, obviously. Real estate is the big one that low interest rates will continue to inflate. We need legislative-side solutions to this, ideally focused at the state and local levels.
2. None of this applies if you have an economy culturally resistant to consumerism, like Japan. Everything flips on its head and things get weird. But that's not the US.
Not necessarily. Sure, if that money is chasing fixed assets like housing, but if it was invested into production of things to consume, it's not necessarily inflation-inducing, is it? For example, if that money went into expanding the electricity grid and production of electric cars, the pool of goods to be consumed is expanding, so there is less likelihood of inflation.
People are paid salaries to work at these production facilities, which means they have more money to spend, and competition drives people to be willing to pay more for the outputs. Not all outputs will be scaled; those that aren't experience inflation, like food and housing today.
https://ofdollarsanddata.com/sp500-calculator/
This means stocks will return less in a low-rates environment unless there is a lot of additional growth.
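One way to see that claim is through the Gordon growth model rearranged for returns: expected long-run return is dividend yield plus growth (r = D/P + g). If low rates bid prices up while dividends and growth stay the same, the yield term shrinks and so does the expected return. A sketch with made-up numbers:

```python
def expected_return(price, dividend, growth):
    """Gordon growth model rearranged: expected long-run return
    is the dividend yield plus the growth rate, r = D/P + g."""
    return dividend / price + growth

# Same dividend and growth, but low rates have bid the price up 50%:
base = expected_return(100.0, 3.0, 0.04)  # 3% yield + 4% growth = 7%
rich = expected_return(150.0, 3.0, 0.04)  # 2% yield + 4% growth = 6%
```

Higher prices today mechanically mean lower returns tomorrow, unless the growth term rises to compensate.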
> Low interest rates make borrowing cheap, so companies flood money into real estate and stocks
also https://en.wikipedia.org/wiki/List_of_recessions_in_the_Unit...
About what? Seriously, what would they even do other than try to lame-duck him?
The big issue is that Dem approval ratings are even lower than Trump's, so how the hell are they going to gain any seats?
The seeds were planted after Nixon resigned and it was decided to re-shape the media landscape and move the overton window rightwards in the 1970s, dismantling social democracy across the west and leading to a gradual reversal of the norms of governance in the US (see Newt Gingrich).
It's been gradual, slow and methodical. It has definitely accelerated but in retrospect the intent was there from the very beginning.
You could say that was when things reverted back to "normal". The FDR social reconstruction and post WW2 economic boom were the exception, anomaly. But the Scandinavian countries seem to be doing alright. Sure, they have some big size problems (Sweden in particular) but daily life for the majority in those countries appears to be better than a lot of people in the Anglosphere.
If you see it that way this is just a reversion to the mean.
This makes me feel dread. I just don't see him dragging moderates in the middle of the country to the polls, or getting people in the leftist part of the Democratic Party to not "but but but" their way out of voting against fascism again.
Oh well.
That's not the redistricting Newsom wants for 2028, and I tend to agree that Dems have to play the game right now, but I'd really like to see them present some sort of story for why it's not going to happen again.
Nvidia, the poster child of this "bubble", has been getting effectively cheaper every day.
Recently, I've heard many left wingers, as a response to Trump's tariffs, start 1) railing about taxes being too high, and that tariffs are taxes so they're bad, and 2) saying that the US trade deficit is actually wonderful because it gives us all this free money for nothing.
I know all of these are opposite positions to every one of the central views of the left of 30 years ago, but politics is a video game now. Lefties are going out of their way to repeat the old progressive refrain:
> "The way that Trump is doing it is all wrong, is a sign of mental instability, is cunning psychopathic genius and will resurrect Russia's Third Reich, but in a twisted way he has blundered into something resembling a point..."
"...the Fed shouldn't be independent and they should lower interest rates now."
Personally I trust Jerome Powell more than any other part of the government at the moment. The man is made of steel.
[0]: https://www.bloomberg.com/news/articles/2024-07-03/senator-w...
That doesn't really change what I said regarding interest rates though.
Often it comes down to arguing the “basket of goods” is wrong rather than the individual components, or perhaps that there are wider rates in specific areas.
I can’t think of one reason anyone really wants this right now. I prefer to deal with a human in 99% of my interactions.
The AI bubble is so big that it's draining useful investment from the rest of the economy. Hundreds of thousands of people are getting fired so billionaires can try to add a few more zeros to their bank account.
The best investment we can make would be to send the billionaires and AI researchers to an island somewhere and not let them leave until they develop an AI that's actually useful. In the meanwhile, the rest of us get to live productive lives.
(I am an AI optimist, by the by. But that is not one of its success stories.)
There probably are a few nuts out there who actually fired people to replace them with AI; I feel like that won't go well for them.
There really is no evidence.
I'll say it's okay to be reserved on this, since we won't know until after the fact, but give it 6-12 months and we'll know for sure. Until then, I see no reason not to believe a culture is forming in boardrooms around AI that is driving closed-door conversations about reducing headcount specifically to be replaced by AI.
[0]: https://gizmodo.com/the-end-of-work-as-we-know-it-2000635294
Vibe coding is great for Shanty town software and the aftermath from storms is equally entertaining to watch.
The scary thing is that these tools are now part of the toolset of experienced developers as well. So those same issues can and will happen to apps and services that used to take security seriously.
It's depressing witnessing the average level of quality in the software industry go down. This is the same scenario that caused mass consumer dissatisfaction in 1983 and 2000, yet here we are again.
Post-Labor Economics Lecture 01 - "Better, Faster, Cheaper, Safer" (2025 update) https://www.youtube.com/watch?v=UzJ_HZ9qw14
These are jobs that normally would have gone to a human and now go to AI. We haven't paid a cent for AI mind you -- it's all on the ChatGPT free tier or using this tool for the graphics: https://labs.google/fx/tools/image-fx
I could be wrong, but I think we are at the start of a major employment bloodbath, in tech mostly but also in anything that can be replaced by AI.
I'm worried. Does this mean there will be a boom in demand for trade skills and the like? I honestly don't know what to think about the prospects moving forward.