In 2025, venture capital can't pretend everything is fine any more

268 namanyayg 237 5/11/2025, 2:02:12 PM pivot-to-ai.com ↗

Comments (237)

A_D_E_P_T · 23h ago
> Here is the state of venture capital in early 2025: Venture capital is moribund except AI. AI is moribund except OpenAI. OpenAI is a weird scam that wants to burn money so fast it summons AI God. Nobody can cash out.

The interesting thing, to me, is how speculative OpenAI's bet is.

IIRC it was 2019 when I tinkered with the versions of GPT 2.0 that had web interfaces, and they were interesting toys. Then I started using ChatGPT at its launch, around Dec 2022, and that was a profound paradigm shift. It showed real emergent behavior and was capable of very interesting things.

2019 - 2022 was three years. No hype, no trillions of dollars invested, but tremendous progress.

Now, there has been progress in the past ~three years in synthetic benchmarks, but the feeling with ChatGPT 4.5 today is still the same as it was with GPT-3/GPT-4 in 2022. 4.5/o3 doesn't seem hugely more intelligent than 3.0 -- it hallucinates less, and it's capable of running web searches and doing directed research -- but it's no paradigm shift. If things keep progressing the way they're going, we'll get better interfaces and more tools, but it's far from clear that superintelligence (more-than-human insight, skill, and inventiveness) is even possible with LLMs.

ctoth · 21h ago
Here's a thing you can do right now, today.

- Go down to your local Ray-Ban store.

- Ask to play with a pear of the Meta glasses.

- Activate "Live AI mode"

- Have a real time video conversation with an AI which can see what you see, translate between languages, read text, recognize objects, and interact with the real world.

Contrary to your (potentially misremembered?) history, nothing at all like this was possible in 2019. I remember finetuning an early GPT-2 (before they even released the 2B model!) on a large corpus of Star Wars novels and being impressed that it would mention "Luke" when I ran the produced model! Now I wear it on my head and read restaurant menus with it. Use it to find my Uber (what kind of car is that?) Today I am building my raised garden beds out back and reading the various soil amendments I purchased, talking about how much bloodmeal to put over the hugelkultur layer, having it do math, and generally having a pair of eyeballs. I'm blind. The amount of utility I get out of these things is ... very hard to overstate.

If this is "moribund," sign me up for more decay.

bolobo · 20h ago
> - Have a real time video conversation with an AI which can see what you see, translate between languages, read text, recognize objects, and interact with the real world.

Maybe it's me having an extremely low imagination, but that stuff has existed for a while in the shape of Google Lens and the various vision flavors of LLMs, and I must have used them... 3 times in years, and not once did I think "Gosh I wish I could just ask a question aloud about this building while walking down the street and wait for the answer". Either it's important enough that I want to see the Wikipedia page straight from Google Maps and read the whole lot, or it's not.

> an AI which can read text, recognize objects, and interact with the real world.

I can already do that pretty well with my eyeballs, and I don't need to worry about hallucinations, privacy, bad phone signal or my bad English accent. I get that it is certainly an amazing tool for people with vision impairments, but that is not the market Meta/OpenAI are aiming for and forcefully trying to shove it into.

So yes, mayyybe if I am in a foreign country I could see a use but I usually want to get _away_ from technology on vacation. So I really don't see the point, but it seems that they believe I am the target audience?

ctoth · 20h ago
> I can already do that pretty well with my eyeballs, and I don't need to worry about hallucinations

I see. Perhaps your eyeballs missed the part where I said I'm blind?

The entire purpose of my comment was to push back against this idea that AI is stuck in 2022. It's weird and nonsensical and seems disingenuous, especially when I say "here are things I can do now that I couldn't do before" and the general response is "but I don't need to do those things!"

pinkmuffinere · 20h ago
I think they did miss that, but to be fair you probably should have opened with that. It’s great that AI is enabling new use cases for blind/partially-sighted people! It’s encouraging to hear your perspective. At the same time, I’m sure you can imagine that the “killer app” for a blind person might seem less useful to a fully-sighted person. Imo there are still useful aspects, and you raise good examples, but the “second pair of eyes” aspect in particular is low-value for me.
c22 · 20h ago
I don't think they missed it. They addressed it in:

>>> I get that it is certainly an amazing tool for people with vision impairments, but that is not the market Meta/OpenAI are aiming for and forcefully trying to shove it into.

pinkmuffinere · 19h ago
Oh you’re absolutely correct! My mistake
ryanbrunner · 20h ago
That is an amazing use of the technology, and for sure something that should result in a pretty successful company - but not enough to bet the entire VC ecosystem on.

I think anyone saying AI has no use is being willfully ignorant, but like every hype cycle before it since mobile (the last big paradigm shift), IMO it's going to result in a few useful applications and not the paradigm shift promised.

jvanderbot · 20h ago
Let's be calm. It is true that the technology has improved and certainly has new and improving uses. But we started out talking about business.

I think a charitable reading of this thread is simply: AI as a large technology leap is still developing a business case that can pay for all the hype. Not to mention its operating cost.

jollofricepeas · 19h ago
So what’s…

been the impact of OpenAI and meta glasses / headsets on the blind community at large?

Based on your statements it seems that the real value of AI is increasing the participation rate of visually impaired people in the global workforce.

If Elon or Sam can convince governments and insurance companies to pay for AI-powered glasses as a healthcare necessity, maybe there's a pathway forward for AI and the VC class after all.

…maybe that's the real game plan for Marc Andreessen, Kanye, Elon and the others.

They're not really Nazis, just early adopters choosing the "innovative freedom" promised by Emperor Palpatine and the Sith over the slow march of the Senators of the Republic.

Sorry, I’ve been watching too much Andor.

kikimora · 20h ago
You describe new ways of feeding information into the model and new ways the model presents outputs. Nothing has radically changed in how the model transforms inputs into outputs.
lisper · 19h ago
> Ask to play with a pear [sic] of the Meta glasses.

Ironically, this typo is very likely a result of AI dictation making a mistake. There are a lot of common misspellings in English, like "their" and "there", but I've never seen a human confuse "pair" and "pear".

So yeah, there are cool demos you can do that you couldn't five years ago. But whether any of those cool demos actually translate into something useful in day-to-day life where the benefits outweigh the costs and risks is far from clear.

ctoth · 19h ago
Actually, dictation would be more likely to get this right! This was a typical human failure on my part, vastly more attributable to me reading with speech rather than Braille.
lisper · 19h ago
OK, well, I stand corrected. Still, a dictation transcription failure was plausible.
globnomulous · 20h ago
Am I mistaken in thinking that much of what you're describing would be considered computer vision, and that computer vision was already largely capable of these things in 2019 and before? I vividly remember a live demonstration of an on-device AR-and-object-recognition program at the 2014 Facebook developer conference.
msteffen · 20h ago
For a long time I held a sentiment similar to the parent comment, and then my brother sat me down, took out his phone, put chatGPT into conversation mode, and chatted with it for about five minutes. That was the second time I was truly amazed by chatGPT (after my first conversation, where I got it to tell me a fair bit about how Postgres works). Its ability to hold a natural, context-aware conversation has gotten really amazing.

I somewhat agree with the op that I don't think I'm much closer to hiring chatGPT for a real job in 2025 than I was in 2022, but also with you that there has been meaningful progress. And in particular, products that are transformative for disabled people are usually big improvements to the status quo for abled people too (OXO Good Grips being the classic example: transformative for people with arthritis, and generally just better for everybody else).

fennecbutt · 11h ago
I use it instead of Google searching now but even I double and triple check the hell out of it. It just bullshits the fuck out of semi complex or unique questions.

Like "can a ps2 game use the ps1 hardware" it gave a noncommital, hallucinated answer. Then when asked to list sources it "searched the Internet" where all the links were from searches like "reddit ps2" etc.

nine_k · 20h ago
ChatGPT and its kin are already being hired massively as first-line customer support. Voice synthesis and recognition are really good now, too, so it's both online chat bots and phone support bots.
snackernews · 7h ago
Does it satisfy customer support requests at a much higher rate than previous generations?

Every time I’ve encountered an AI first-line support agent I still find myself looking for the quickest escalation path to a real human just like before.

dingnuts · 19h ago
> after my first conversation, where I got it to tell me a fair bit about how Postgres works

See, I always start with conversations about things I already know about, and they bullshit me enough that I'm wary of Gell-Mann Amnesia when asking them about things I don't know about. They output a lot of things that seem plausible but the way they blend fact and fiction with no regard for the truth keeps me extremely distrusting of them.

That is to say: after your conversation, did you ask for citations and go read the primary sources? Because if you did not, the model likely misled you about Postgres in subtle ways.

42lux · 19h ago
And we had these toys, with higher latency, in '22 with GPT-3. The better tooling and integration hides the dramatically slower pace of innovation in base models, all while throwing mountains of compute at it.
api · 21h ago
One thing that’s fascinating to me is that these straight out of sci-fi things are novelties or demos, and don’t seem all that popular. Most people just aren’t interested.

It’s the “boring” stuff that’s interesting: automating drudgery work, a better way to research, etc.

I’ve been predicting for years that glasses — whether AR or VR — are and will remain niche. I don’t think most people want them.

afavour · 20h ago
> Most people just aren’t interested.

Which makes sense, really. Buying an expensive pair of glasses so that I can point at an object and say “what is that” is a cool parlour trick but I can imagine very few scenarios where I’ve wished I had that functionality.

Realtime translation… absolutely a feature I’d use. For a week a year, max. IMO the killer, everyday application just isn’t there yet. I’m still not sure what it would be.

glhaynes · 19h ago
And those are also all things that a smartphone can do (often a little worse, admittedly). Which you'll have with you anyway because of all the things it does much better than glasses can and likely ever could.
Bjartr · 20h ago
These statements were true of the Internet too. Until they weren't. It'll be a slow burn, and then one day you turn around and notice it's everywhere.

Even post dot-com bust there was still this generally shared understanding that you probably shouldn't meet people you met on the Internet and you shouldn't get in strangers' cars. Today we use the Internet to summon strangers to us precisely so we can get in their cars, and it's completely banal to do so.

These shifts take time and in the early-adopter stage the applications look half-baked, or solutions looking for a problem. That's because most of them are. The few that aren't will accrete over time until there's a solid base.

asadotzler · 20h ago
That was not my experience. I was meeting web chat friends IRL in 1995 and none of us were the least bit afraid. IRC and the web were everything Snapchat is today, and not considered any stranger than AOL IM from 5 years before that, which was hardly different from the BBSes that came before the subscription services.

So, lots of people were meeting online in 1995. 5 years later couples who met on the internet were all over the place. When I moved to Silicon Valley in 2000, "I met my wife online" was hardly even a conversation starter. (I did meet my wife at The State of Insanity web chat boards in 1995.)

Also, from 1998-2008, the world invested in the internet precisely what it has now invested in the last decade of LLM AI, and by the end of that build-out the internet was already adding over a trillion dollars annually to the global economy. AI, with that same spending over about the same amount of time, is still an economic black hole, and no one has any answers that would satisfy anyone other than VCs about how it's all going to pay for itself.

The internet, and particularly email, IM and the web, were going strong in the 90s and by the mid-2000s were absolutely dominant. This is a re-writing of history to suggest that AI deserves more runway to burn hundreds of billions more on things that will have none of the staying power of the trillion dollars in internet build-out between '98 and '08.

These things don't take time. The whole broadband internet build-out took about a decade and about as much money as AI has gotten in this latest bubble. At least when the dot-com bubble burst, we had infrastructure with staying power and real societal value, unlike AI, which will leave us with next to nothing when its bubble pops.

bandrami · 15h ago
This is what I keep coming back to. If this is actually supercharging e.g. software developers, why aren't we absolutely awash in better software?
nathanappere · 6h ago
Because most software doesn't need to be good, it just needs to hit the "kind of does the job well enough" line. It improves velocity for that use case.
bandrami · 6h ago
OK, but if this 12Xes devs why isn't there 12 times the crappy software there was 5 years ago? I'm not seeing a massive increase in the amount of available software. Shouldn't there be dozens of new word processors? Hundreds of new MIDI sequencers? Thousands of text editors?

If this is actually making software easier to make where is the software?

saltcured · 19h ago
From your "even post dotcom bust..." I wonder if you are too young to know the time before that? It wasn't always that way. In my college days before the dotcom boom really took off, people could meet in real life after first contact on USENET, which I'd say was the social media of the time.

You are referencing a later period where regular folks were piling onto the internet and giving each other advice. And maybe that advice was geared towards novices and/or children, coming from other novices, and steeped in a lot of urban legend fears of the unknown. Even the academic types started to embrace these ideas, lending credence to the idea that their magical garden was now turning into hunting grounds.

But, imagine yourself in the early 90s, mapping your way (on paper, of course) to a trailer home in Silicon Valley or a secluded dirt road in the Santa Cruz mountains. You've never been there before, and your goal is to meet people you've only ever seen as text on a screen. You assume they are real people but, technically, it could all be sock puppets.

I did those things! It wasn't a horror movie. Instead it led to light socializing and drinking.

globnomulous · 19h ago
> These statements were true of the Internet too. Until it wasn't. It'll be a slow burn and then one day you turn around and notice it's everywhere.

Maybe. What I remember is that the niche parts of the Internet remained niche. Everybody seemed excited about email, kids were excited about instant messaging, etc, and fundamentally that never changed; people have continued to use the Internet for communication. And the dotcom boom was a massive, stupid frenzy fueled by investors who had no idea what they were doing or what they were buying, and just wanted to make sure they didn't miss the next Microsoft, but the bubble was grounded in the accurate prediction that the web would become an essential part of business and an enormous money maker.

Does all of this sound right? Please correct me if I'm wrong. I find myself in the unexpected, uncomfortable position of wildly contradicting and undermining what I thought I understood: that people are terrible at predicting the future of technology.

Broadly speaking, in the 80s and 90s, the outlines of the future of technology were obvious and unmistakable: computers would become faster and more connected; they'd take over or supplement a widening range of tasks and parts of our lives.

But it also can be hard to remember the way you understood the world in the past, or the way things were, because your understanding of the present overwrites it. Remember when VR was going to be huge? Remember when 3d chat was going to be the next big thing? Remember when TV was going to be hugely important in the classroom? Remember when social media was going to make our lives better?

People are making predictions regarding AI (or "AI") -- that it will do Everything Everywhere Very Soon, that it will lead to a 10x increase in worker output -- that strike me as absolutely, obviously wrong, even risible.

danaris · 5m ago
> I find myself in the unexpected, uncomfortable position of wildly contradicting and undermining what I thought I understood: that people are terrible at predicting the future of technology.

By and large, I think you're still right. But by the late '90s and early 2000s, the internet wasn't "the future of technology". It was already the present of technology.

Anyone paying attention could already see by 1994 or 1995 that the Internet was a) a big deal, and b) growing massively. The key at that point was not figuring out that the technology was going to be huge; it was figuring out which companies using that technology would be huge—ie, predicting with more granularity the future of the technology that was the World Wide Web.

And investors failed at that. Hard. That's what caused the dot-com bust.

fennecbutt · 11h ago
People were resistant to cell phones, too. To computers, to the Internet, to online articles and news replacing newspapers and TV programs.

This is why it sucks that Glass was shot down when that technology will soar in a few decades.

People need to be more forward-looking and open to change imo. Not being so affects society in very negative ways. Imagine if people had planned for social media when it was first becoming a thing. We'd have safety nets, and an actual fucking plan.

amazingamazing · 20h ago
Hasn’t Google lens been doing that for almost a decade now?
acchow · 20h ago
Like how Google Translate has been translating between languages for a decade+ now...?

There's been a stepwise jump in the capabilities of AI that's changed "products" from mostly fun to actually useful

amazingamazing · 19h ago
Are you implying that google translate was not useful to anyone prior to chat gpt3?
insane_dreamer · 19h ago
> - Have a real time video conversation with an AI which can see what you see, translate between languages, read text, recognize objects, and interact with the real world.

This certainly provides benefit to those with limited vision, which is great. But that is a very small segment of consumers. Besides those, how many other people do you know who are actually _using these glasses_ in the real world?

Google Glass came out 10 years ago.

benatkin · 21h ago
The Facebook glasses aren't as relevant as the LLMs doing many thinking tasks that people get paid to do. A lot of it is as a productivity multiplier, and I'm not saying it's all doom and gloom, but it's transformative.
nativeit · 20h ago
But why should anyone in the general public get excited about a productivity multiplier? If history is any indication (and it appears to be repeating itself with worrying frequency), then all of that benefit will be hoarded by the upper crusts of society while labor is further marginalized. At this point, even as a technology enthusiast and longtime evangelist, I find myself sympathizing with the luddites.
bandrami · 14h ago
If it's a productivity multiplier, where is the output of that productivity? Is there a bunch of new software I can download? A bunch of books I can read? What's all this alleged new productivity actually producing besides Ghibli memes?
surgical_fire · 22h ago
This is sort of why I say that the hype is ultimately detrimental to the healthy development of tech.

Generative AI was a sort of paradigm shift, and can be developed into interesting tools that boost human productivity. But those things take time, sometimes decades to reach maturity.

That is not good for the get rich quick machine of Venture Capital and Hustle Culture, where quick exits require a bunch of bag holders.

You gotta have suckers, and for that Gen AI cannot be an "interesting technology" with good potential. It needs to be "the future", that will disrupt everything and everyone.

Nauxuron · 22h ago
> 4.5/o3 doesn't seem hugely more intelligent than 3.0 -- it hallucinates less [...]

This is not entirely true, or at least the trend is not necessarily less hallucination. See section 3.3 in the OpenAI o3 and o4-mini System Card[1], which shows that o3 and o4-mini both hallucinate more than o1. See also [2] for more data on hallucinations.

[1]: https://openai.com/index/o3-o4-mini-system-card/

[2]: https://github.com/vectara/hallucination-leaderboard/

brundolf · 21h ago
Agreed. It's become a pretty useful tool for individual people to use for individual tasks. But to hyper-scale to a point where it justifies crazy valuations - even short of super-intelligence - it has to scale beyond users. It has to no longer require human oversight from one step to the next.

I think that's the key threshold all these companies have been running up against, and crossing it would be the paradigm shift we keep hearing about. But they've been trying for years, and haven't done it yet, and seem to be plateauing

And then in OpenAI's case specifically- this tech has become commoditized really quickly. They have several direct competitors, including a few that are open, and their only real moat is their brand and Sam's fundraising ability. Their UX is one of the best right now, but that isn't really a moat

HDThoreaun · 28m ago
> it has to scale beyond users.

What? MSFT is worth 3.25 trillion without scaling beyond any users. All these AI companies need is for everyone to pay them $100 a year for a subscription and they have more than justified their valuation. Same strategy Microsoft uses

wouldbecouldbe · 22h ago
But that's normally how innovation goes. There is a big jump, and then a lot of work at the margins.

People are expecting it to get exponentially better, but these kinds of innovations follow more of an inverse power law.

asadotzler · 19h ago
The global broadband network that is the physical internet took about 10 years and $1T to build, mostly between 1998 and 2008 and it made the internet massively better every year of the build-out. That's also precisely as much time and money as has been put into this generative AI bubble.

The internet was adding a trillion dollars to the global economy by 2008, the end of that rapid expansion, whereas AI is still sucking hundreds of billions a year into a black hole, with no killer use cases that could possibly pay off its investment, much less begin adding trillions to the global economy.

And a decade before the web and internet explosion, PCs were similar, with a massive build out and immediate massive returns.

This excuse making for AI is getting old. It needs to put up or shut up because so far it's a joke compared to real advances like the PC and the Internet, all while being hyped by VC collecting companies as the arrival of a literal God.

pier25 · 22h ago
Most technology doesn't get exponentially better.

We look at CPUs or the transmission of digital data and these seem to have improved exponentially, but they are rather exceptions and are composed of multiple technologies at different stages. Like how we went from internet through phone lines, to dedicated copper lines for data, to optic fiber straight into people's homes.

Eg: look how the efficiency of solar cells has progressed over the last 50 years

https://www.nrel.gov/pv/interactive-cell-efficiency

TuringTest · 22h ago
True, but the religion of the Singularity that fueled this round of financing was premised on this improvement growing exponentially fast, thanks to the support provided by the current version of AI tools. There's no sign this will happen anytime soon.
betterThanTexas · 22h ago
I think you mean "invention". Innovation describes how products change over time and doesn't necessarily imply insight or value-adds. Sometimes it just means the packaging gets updated.
exitb · 22h ago
Which would be fine if the market hadn't already priced in exponential growth.
gcanyon · 22h ago
> ChatGPT 4.5 today is still the same as it was with GPT-3/GPT-4 in 2022. 4.5/o3 doesn't seem hugely more intelligent than 3.0

I think you're misremembering how 3.0 worked. Granted, the slope from 2.0 to 3.0 was very steep, but a ton of progress has happened in the past few years.

averageRoyalty · 22h ago
Agreed, and as someone who's used it for work most days since the 3.0 launch, it's likely way more efficient in outcomes - maybe 30-40%. But as the GP said, there's no paradigm shift. All the new features feel very gimmicky, and OpenAI lost their first mover advantage quite a while ago.
stavros · 14h ago
Have you guys seen Deep Research? I'm completely blown away that I can have the thing make entire trip itineraries for me, something I just could not do before without paying a human lots of money.
hatefulmoron · 9h ago
I've used the deep research features from OpenAI, Grok, and most recently Anthropic. They're super impressive, but I can't shake the feeling that they're mostly giving me a summarized explanation of what the first 2 pages of Google Search is returning.

Also, insofar as they're researching a topic, they're not sufficiently critical. They are highly influenced by what they read, and they tend to take the results they're given more or less at face value.

For instance, I asked about a medical product for my girlfriend. I explicitly asked for a critical look at its efficacy, any studies in the area, etc.. and it seemed like 90% of the sources it considered were from the product's own website. It basically gave me an uncritical reporting of what they said about their own product.

technol0gic · 10h ago
halkias is that you? :P
tossandthrow · 22h ago
I would attribute most of that to alignment.

Ie. better datasets.

drob518 · 22h ago
Yea it hallucinates less, but it still hallucinates a lot. I think we’re proving that intelligence is not just a language model, even an obscenely large one.
asadotzler · 19h ago
It's actually hallucinating more. These are now mature models that no longer scale with training corpus size, and the more they get pushed and prodded to do better with chain of thought and other techniques, the more they hallucinate. It's getting worse because the magic of LLM scaling is over and the techniques we have to make the models better actually make the hallucinations worse.
forgetfreeman · 21h ago
I'm waiting impatiently for someone to connect the obvious dots and roll out an AI assistant intentionally patterned after Hunter S Thompson. The hallucinations are now a feature.
thatguy0900 · 21h ago
I mean, if you look up the data around eyewitness testimony being extremely unreliable even for people who directly witnessed things, it's not out of the realm of possibility that humans regularly hallucinate as well.
IlikeKitties · 22h ago
I must say I do feel the newest GPT versions are vastly better than the old ones. I found the GPT-3 stuff to be an interesting toy, but too often it was too wrong and too stubborn to be useful. I use the 4.0+ versions regularly now.

Just recently I took a screenshot of a Jira burndown chart to have it write a description of the sprint progress for our stakeholders. It did it in one shot from the screenshot and got it right.

pzo · 22h ago
I think it's a simplification to compare progress only at the LLM level.

We've had big progress in AI in the last 2 years, but we have to take into account more than text token generation. We have image generation that is not only super realistic, but where you can just type what you want to modify without learning complicated tools like ComfyUI.

We have text-to-speech and audio-to-audio models that are not only very realistic and fluent in many languages but can also express emotions in speech.

We have video generation that gets more realistic every month while taking less computation.

There is big progress in 3D model generation. Speech-to-text is still improving and is fast enough to run on phones, reducing latency. The next frontier is how AI is applied to robotics. Not to mention areas that aren't sexy to end users, like applications in healthcare.

candiddevmike · 21h ago
All of your examples are just other flavors of token generation.
pzo · 19h ago
I was pointing out that the OP focused only on the lack of improvement in text token generation (since GPT-4.0), but those models have gone multimodal, and not all generative AI is based on tokens; some is based on diffusion models.
lynx97 · 21h ago
I have a similar feeling. While LLMs have given me a new way to do search/questions, it is the byproducts that feel like the actual game changers. For me, it is vision models and pretty impressive STT and TTS. I am blind, so I have my own reasons why Vision and Speech have so many real world applications for me. Sure, LLMs are still the backbone of the applications emerging, but the real progress in terms of use cases is in the fringes.

Heck, I wrote myself my own personal radio moderator in a few hundred lines of shell, later rewritten in Python. It's a simple MPD client: watch for a queued track which has album art, pass the track metadata + picture to the LLM, send the result through a pretty natural-sounding TTS, and queue the resulting sound file before the next track. Suddenly, I had a radio moderator that would narrate album art for me. It gave me a glimpse into a world that wouldn't have been possible before. And while the LLM is basically writing the script, the real magic comes from multimodal and great-sounding TTS.
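
Roughly, the loop looks like this (a minimal sketch, assuming python-mpd2 for the MPD side; describe_track() and synthesize() are placeholder names standing in for the multimodal LLM call and the TTS engine, not real APIs):

    # Minimal sketch of the "radio moderator" loop described above.
    # Assumptions: python-mpd2 for the MPD client; describe_track() and
    # synthesize() are placeholders, not any particular vendor API.
    import time
    from mpd import MPDClient  # pip install python-mpd2

    def describe_track(metadata: dict, album_art: bytes | None) -> str:
        """Placeholder: ask a multimodal LLM for a short spoken-style intro."""
        raise NotImplementedError

    def synthesize(text: str) -> str:
        """Placeholder: run TTS and return the path of the generated audio file."""
        raise NotImplementedError

    client = MPDClient()
    client.connect("localhost", 6600)
    announced = set()
    while True:
        playlist = client.playlistinfo()
        pos = int(client.status().get("song", 0))
        if pos + 1 < len(playlist):                 # look at the next queued track
            nxt = playlist[pos + 1]
            if nxt["file"] not in announced:
                try:
                    art = client.readpicture(nxt["file"]).get("binary")
                except Exception:
                    art = None
                intro_audio = synthesize(describe_track(nxt, art))
                client.addid(intro_audio, pos + 1)  # queue narration before the track
                announced.add(nxt["file"])
        time.sleep(5)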

Much potential for really cool looking/sounding PoCs. However, what worries me is that there is not much progress on (to me) obvious shortcomings. For instance, OpenAI TTS really can't speak numbers correctly. Digits maybe, but once you hand it something like "2025" the chance is high it will have pronunciation problems. In the first months, this felt bad but temporarily acceptable. A year later, it feels hilariously sad that nothing has been done to address such a simple yet important issue. You know that something bad is going on when you start to consider expanding numbers to written-out form before passing the message to the TTS. My girlfriend keeps joking that since LLMs, we now have computers that totally cannot compute correctly. And she has a point. Sure, hand the LLM a tool to do calculations, and the situation improves somewhat. But the problem seems to be fundamental, as the TTS issues show.
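
That written-out-numbers workaround is about this much code (a sketch using the num2words package; year-style readings of things like "2025" would still need extra care, and the actual TTS call is left out):

    # Sketch of the workaround: expand digits to words before handing the
    # text to the TTS engine. Uses num2words (pip install num2words); numbers
    # are expanded as plain cardinals here, which is still wrong for years.
    import re
    from num2words import num2words

    def expand_numbers(text: str, lang: str = "en") -> str:
        return re.sub(r"\d+", lambda m: num2words(int(m.group()), lang=lang), text)

    print(expand_numbers("Track 7 of 12, released in 2025."))
    # e.g. "Track seven of twelve, released in two thousand and twenty-five."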

Vision models have so many applications for me... However, some of them turn out to be actually unusable in practice. That becomes clear when you use a vision model to read the values off a blood pressure sensor. Take three photos, and you get three slightly different values. Not obviously made up stuff, but numbers that could be. 145/90, 147/93, 142/97. Well, the range might be clear, but actually, you can never be sure. Great for scene and art descriptions, since hallucinations almost fall through the cracks. But I would never use it to read any kind of data, neither OCR'd text nor, gasp, numbers! You can never know if you have been lied to. But still, some of the byproducts of LLMs feel like a real revolution. The moment you realize why whisper is named like that. When you test it on your laptop, and realize that it just transcribed the YouTube video you were rather silently running in the background. Some of this stuff feels like a big jump.

parpfish · 20h ago
I’m kind of disappointed that the generative AI hype has overshadowed how many non-generative tasks are basically “solved”, especially in vision.

Human-level object recognition can easily be trained up for custom use cases. Image segmentation is amazing. I can take a photo of a document and it's accurately OCR'd. 10-15 years ago that would have been unfathomable.

I think current LLMs would give AI a much better reputation if they focused on non-generative applications. Sentiment analysis, translation, named entity extraction, etc.: these were all problems that data folks have been wrestling with that could very well be seen as "solved" and a big win for AI that businesses would be able to confidently integrate into their workflows. But instead the industry went the generative route, and we have to deal with hallucinations and slop.

lynx97 · 20h ago
Ahh, I wanted to list translation as another "byproduct". That totally feels like solved now.

However, while OCR done by vision models feels neat, I personally don't feel like it changed anything for me. I have been using KNFB Reader and later Seeing AI, and both have sufficiently solved the "OCR a document you just photographed" use case for me. They even aid the picture-taking process by letting me know that a particular edge of the document is not visible.

Besides, I still don't fully understand the actual potential for hallucinations when doing OCR through vision models. I have a feeling there are a number of corner cases which will lead to hallucinations. The tendency to fill in things that might fit but aren't there is rather concerning. I'm talking about spelling errors and numerical data.

nopinsight · 21h ago
> 4.5/o3 doesn't seem hugely more intelligent than 3.0

I disagree with 3.0, but perhaps that feels true for 4.0 or even 3.5 for some queries.

The reason is that when LLMs are asked questions whose answers can be interpolated or retrieved from their training data, they will likely use widely accepted human knowledge or patterns to compose their responses. (This is a simplification of how LLMs work, just to illustrate the key point here.) This knowledge has been refined and has evolved through decades of human experiments and experiences.

Domain experts of varying intelligence will likely come up with similar replies on these largely routine questions as well.

The difference shows up when you pose a query that demands deep reasoning or requires expertise in multiple fields. Then, frontier reasoning models like o3 can sometimes form creative solutions that are not textbook answers.

I strongly suspect that Reinforcement Learning with feedback from high-quality simulations or real environments will be key for these models' capabilities to surpass those of human experts.

Superhuman milestones, equivalent to those achieved by AlphaGo and AlphaZero between 2016 and 2018, might be reached in several fields over the coming years. This will likely happen first in fields with rapid feedback loops and highly accurate simulators, e.g. math problem solving (as opposed to novel mathematical research), coding (as opposed to product innovation).

nine_k · 20h ago
LLMs are in the limelight, but I won't dismiss the notable progress in areas like vision and visual parsing, and also image synthesis and transformation. Robotic taxis are finally running down city streets, and drive on par with humans, or better. You can give the machine a rough sketch, and get a well-made picture. You can show the machine a few photos, and get a very reasonable 3D model, even in the form of reasonable meshes. Etc.
asadotzler · 19h ago
All of that was progressing well before LLMs and will be afterwards too because it has more practical value even if less "OMG WOW" factor.
jsnell · 21h ago
> The interesting thing, to me, is how speculative OpenAI's bet is.

It doesn't really matter how speculative the AGI bet is, their consumer AI business by itself is basically guaranteed to drown them in money. The only reason they're making losses at the moment is because they're choosing not to monetize their free tier users with ads, presumably since they don't need to make a profit and can prioritize growth.

But the moment they flip the advertising switch, their traffic will be both highly monetizable and ludicrously high margin.

asadotzler · 19h ago
Not really. The best math I've seen says OAI cannot break even with ads on free tiers without about 5X as many users. That's to break even, not to be wildly profitable as you seem to be suggesting. OAI will need more than ChatGPT, pro and "free" to be anything close to web search for ad revenue.
jsnell · 19h ago
It'd be easier to address said "best math you've seen" if you could provide a link to that analysis... I hope it wasn't by Ed Zitron.

But it seems pretty obvious that the math must be based on some incorrect assumptions. The unit inference costs of high quality LLMs are much lower than the unit costs of serving high quality web search queries. The LLM costs have also been decreasing rapidly -- 1000x in two years seems like a fair estimate -- while search engine costs haven't. (If anything they've been going up.)

And web search is obviously a business that is very profitable with the ad model, despite those higher unit costs.

> OAI will need more than ChatGPT, pro and "free" to be anything close to web search for ad revenue.

I don't understand this part at all. Revenue is proportional to traffic and ad rates. Why would the ad rates be lower for chatbots? Or why would their traffic obviously be lower?

This is also moving the goalposts. Search ad revenue is what, a $250B/year market? OpenAI doesn't need anywhere near that much revenue to be fabulously profitable. A tenth of it would already be more than enough.
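
To make that concrete, a toy back-of-envelope (every number below is a made-up round assumption, purely to show the shape of the argument, not OpenAI's actual figures):

    # Illustrative only: all inputs are assumptions, not reported figures.
    maus = 500e6                       # ~half a billion monthly active users
    queries_per_user_per_day = 5       # assumed
    monetizable_share = 0.2            # assumed fraction with commercial intent
    revenue_per_1k_ad_queries = 20.0   # assumed, in dollars
    cost_per_query = 0.002             # assumed blended inference/serving cost

    queries = maus * queries_per_user_per_day * 365
    revenue = queries * monetizable_share * revenue_per_1k_ad_queries / 1000
    cost = queries * cost_per_query
    print(f"~${revenue/1e9:.1f}B revenue vs ~${cost/1e9:.1f}B serving cost / year")
    # With these toy numbers: roughly $3.7B vs $1.8B.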

insane_dreamer · 19h ago
> But the moment they flip the advertising switch, their traffic will be both highly monetizable and ludicrously high margin.

I think most people will continue to use Google and its Gemini-generated summaries at the top.

jsnell · 19h ago
Maybe, but it doesn't really matter either way.

They already have half a billion MAUs. I'm pretty sure what I said is true already on that user base. It's not contingent on chatbots replacing search as a whole, all that's needed is their existing traffic having a decent proportion of monetizable queries (this seems basically certain) and for them to come up with ad formats that are effective without driving users away.

KerrAvon · 21h ago
Maybe, but:

1. People won't ultimately go download ChatGPT.app or use a website — they're going to be using the functionality through structured services in iOS and Android, and it's necessarily going to be under control of the OS vendors for security/privacy reasons. This doesn't mean Apple and Google own the LLMs — there will be consumer choice for antitrust reasons if nothing else — but the operating system has to be a conduit for access to your data, and also for unified user experience. Which means advertising will be limited.

2. Say it does go the way you think it will — what prevents a real non-profit, open source LLM from taking it away from the commercial players? There really is no moat (other than money, energy, data center space).

insane_dreamer · 19h ago
> it hallucinates less,

does it? An anecdote from yesterday: My wife was asked to do a lit review as an assignment for nursing school. Her professor sent her an example list of papers on the topic, with a brief "relevance" summary for each. My wife asked me for help as she was frustrated she couldn't find any of the referenced papers online (she's not the most adept at technology and figured she was doing something wrong). I took one look at the email from her professor and could tell just by the formatting that it was LLM-generated (which model, I don't know, but obviously a 2025 model). The professor didn't say anything about using an LLM, and my wife didn't suspect that might be the case.

My wife and I did some Google Scholar searches, and _every_ _single_ _one_ of the 5 papers cited did _not_ exist. In 2 of the cases, similar papers did exist, but with different authors or a different title that resembled the fake "citation". The three others did not exist in any form - there were other papers on the same subject, sure, but nothing closely resembling the "citations" either in terms of authorship or title.

Xenoamorphous · 22h ago
> 4.5/o3 doesn't seem hugely more intelligent than 3.0

I wonder what “hugely more intelligent” would look like to you?

Also, in the (rather short) history of computing, what “hugely more X” has happened over the course of a couple of years?

fragmede · 21h ago
I think it's worth mentioning that OpenAI was founded in 2015, so ChatGPT's overnight success in 2022 was 7 years in the making. Looking at that timescale, I'd say Google counts, as do several other companies if we look at inception to successful product.
OJFord · 22h ago
Uh, compute power? Storage?
rafram · 22h ago
I totally disagree. The initial release of ChatGPT felt magical because of how cool it was to be able to hold a natural, fluid conversation with a computer. But it was an extremely stupid computer beneath that veneer of conversational intelligence. Recent models can do a lot of math accurately without "reasoning" or invoking tools; GPT-3 could barely perform basic arithmetic.
spunker540 · 22h ago
I agree with your take, but at the same time, I'm not sure these things should ever do math. I know that they can, but it seems impossible to draw the line between math they can do and math they shouldn't do. A part of me suspects they should always be outsourcing any math to a tool.
rafram · 21h ago
Absolutely, but the ability to perform complex math in one step seems like a good indicator that it isn’t just a convincing conversational autocomplete anymore.
jcgrillo · 20h ago
But is it anything more than a convincing mathematical autocomplete? Or a convincing code autocomplete? Mathematics and code are themselves primarily conversational tools, with side effects.
asadotzler · 19h ago
And the latest models get beat on reliability by a $2.99 drug store slim wallet sized solar powered calculator you could have bought 35 years ago for about that same price.
andrepd · 20h ago
> Recent models can do a lot of math accurately without “reasoning” or invoking tools;

They most certainly cannot

Traubenfuchs · 21h ago
+ One thing I just absolutely can't understand: what's their moat? Their offerings are replaceable by like 5 direct competitors, some of which are open source. Price-wise, it's a race to the bottom, and the bottom is "API pricing vs running open source yourself".

No comments yet

calebkaiser · 21h ago
I understand the spirit in this line of criticism, but I think it's easy to muddle the timelines and feel as if things "aren't moving," when in fact, the pace of research and improvement is great.

For context:

- GPT 2 was released in Feb 2019

- GPT 3 came out roughly 18 months later in 2020. It was a huge jump, but still not "usable" for many things.

- InstructGPT came out roughly 18 months later in early 2022, and was a huge advancement. This is RLHF's big moment.

- About 10 months later, ChatGPT is released at the end of 2022 as a "sibling" to InstructGPT. It's an "open research preview" at this point. This is around the time OpenAI starts referring to certain models as being in the "3.5 family"

- GPT-4 comes out in March 2023, so barely 2 years ago now. Huge jumps in performance, context window size, and it supports images. This is around the time ChatGPT hits 100 million users and is really becoming a reliable, widely adopted tool. This is also the same time that tools like Cursor are hitting the market, though they haven't exploded yet. Models are just now getting "good enough" for these kinds of applications

- GPT-4-Turbo comes out in November 2023, with way larger context windows and lower pricing.

- About 12 months ago, GPT-4o was released, showing slightly increased performance on existing benchmarks over 4, but now with state-of-the-art audio capabilities and support for something like 50 languages.

- 5 months ago, o1 was released. This was a big moment for scaling compute at test time, which is a major current research direction in ML. It shows huge improvements (something like 8x over 4) on some math/reasoning benchmarks. Within months, we got o3 and o4-mini, which substantially improve these scores even further.

- In February of this year, we get 4.5, and then months later, the confusingly named 4.1, which shows improvements over 4o.

So to be clear, in 2019 we had an interesting research project that only a few people could tinker with.

18 months later, we had a better model that you could play with via an API, but was still a toy.

It took more than two years to go from that to ChatGPT, and a few more months (nearly 3 years total) to get to the "useful" version of ChatGPT that really set the world on fire. It took roughly 4 and a half years to go from "novelty text generation" to "useful text generation".

In the 2 years since then, we've gotten multimodal models, a new class of reasoning models, baseline improvement across performance, and more. If anything, there is more fundamental research and wider variety of directions now (the kind of stuff that shifts paradigms) than before.

bigstrat2003 · 21h ago
I don't agree. I think it's still just a toy that can't get anything useful done. It still hallucinates quite a bit, and gives confidently wrong answers all the time. It's impressive that we could make a program that does all those things, but until they get the accuracy to a reasonable level it won't be useful.
NitpickLawyer · 21h ago
> I don't agree. I think it's still just a toy that can't get anything useful done.

Yeah, sorry, but your view is not supported by data. There are billions of $ spent on tokens every year (between oAI, anthropic, goog, etc). This is not a toy anymore. People are using it, one way or another so it's useful to them - to the tune of billions of dollars. How useful it'll turn out to be in the abstract is still up for debate, but it is useful today.

ghaff · 21h ago
Companies are willing to spend money on it today which isn't the same thing. Personally, I probably wouldn't spend a penny on it (OK, maybe a dollar) but really haven't found generative AI to be particularly useful.
NitpickLawyer · 20h ago
Companies usually go for "business gpt" in Azure or whatever oAI & MS are offering. I'm talking about regular users at $20 a pop. Unless all the providers are lying, billions of $ are being spent right now, by regular users. You may not find it useful, but others do.
ghaff · 20h ago
Possibly unless it's just FOMO because Bill at the next desk is using it. I have tried some of the products. Just didn't find them more than casually useful. But I obviously can't speak for everyone's experiences. And different people have different needs.
ripe · 20h ago
Some of that money is these companies trying to find a use for this technology, because their investors don't want them to miss the AI train. Not sure what proportion is seeing any returns right now.

If the product were to disappear tomorrow, I doubt the real economy would notice the loss of 0.05% in productivity.

[1] https://www.technologyreview.com/2025/02/25/1111207/a-nobel-...

bandrami · 20h ago
Have those billions of dollars produced a single viable product?
NitpickLawyer · 19h ago
Is this in jest? Chatgpt in itself is a viable product (based on how many active paying users they have). And then there's code w/ the myriad of implementations. Unless you've been hiding under a rock since 2023, we went from "cute toy, look it even looks like python" to "well, this agent-built, zero intervention, scribbled notes prompt to full-stack application probably has some security bugs, so be careful" real quick.
bandrami · 14h ago
OK, so where are these applications it has built? What’s a good case study I can look at?
calebkaiser · 19h ago
The number of software engineers using IDEs like Cursor is staggering.

Among knowledge workers in general, ChatGPT is used widely for basically any task that requires writing or researching.

This is obviously not going to be true for every single knowledge worker in every single role, and it seems that you don't find it particularly useful, but the volume of paying users is hard to dismiss out of hand.

bandrami · 15h ago
I don't doubt it's widely used, what I don't see is any productivity increase among knowledge workers since its adoption. Are you aware of any signs of that?
threecheese · 20h ago
I spend less time on web search, I suppose. But in trade I need to pay for it, while still providing my behavior data to the surveillance capitalism platforms.

It’s also improved my ability to analyze data, generating graphs and insights. But I still need to run that last mile myself, because I can’t fully trust its output. Same for web search actually, when I have a need to be comprehensive.

bandrami · 19h ago
But has that resulted in you making a commercially viable product?

That's the step that still seems to be missing

jcgrillo · 20h ago
Companies spending money on SaaS means, generally, that a detached executive many rungs away from the actual users was sufficiently wowed by a demo/steak dinner/whatever to sign a deal. It has nothing in general to do with utility. To get an industry to spend billions you need only target a few hundred or a few thousand people.
raincole · 22h ago
Saying today's SOTA models feel the same as GPT-3 did is the hot take of the year.
drdrek · 23h ago
I've got AI fatigue as much as the next guy, but this is overcorrecting.

VC money is in a constant state of FOMO, this is nothing new. Companies dress up as AI, or web3 or web2, or fintech or whatever to more easily attract capital. If 57.9% of dollars went to AI startups this year, it's not because everything is AI; I would bet 25% are just companies that tacked AI onto an unrelated business model, and it's skewing the statistics. I promise you that 10 years from now 57.9% of VC funding is going to be in some other buzzword, and it's not going to be AI.

rco8786 · 22h ago
Can confirm. I’m at a startup in a pretty boring space and we’re having lots of success just by bringing modern software into the industry, but we’re gearing up for a series A and we absolutely must include AI in our pitch deck. So for the last 6-8 months we’ve just been scrambling to find some sort of reasonable use cases for AI in our product, but in reality it’s not a differentiator for us at all.
ta988 · 22h ago
Maybe your sales pitch is that this can't be disrupted by AI yet...
rco8786 · 20h ago
I take it you haven’t been chatting with many VCs lately :-P. No such thing as something that AI won’t disrupt.
ta988 · 20h ago
May it disrupt them too.
n_ary · 17h ago
If you can find nothing else, you can add a little irritating floating chatbot icon to your product (I am making the huge assumption that it has some kind of UI, whether native, web, touch screen or something), which users can converse with to, say, RAG stuff from some panel/tab and answer questions.

I have seen way too many products suddenly go from ProductX to ProductX-AI by simply adding a RAG-powered document conversation popup.
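
The whole pattern is basically this (a toy sketch; embed() and generate() are placeholders for whatever embedding and chat-completion endpoints a product actually wires in):

    # Toy sketch of the bolt-on "chat with your docs" feature: retrieve the
    # most similar chunks, stuff them into a prompt, let the model answer.
    # embed() and generate() are placeholders, not any specific vendor API.
    import numpy as np

    def embed(texts: list[str]) -> np.ndarray:
        raise NotImplementedError  # placeholder: one vector per input text

    def generate(prompt: str) -> str:
        raise NotImplementedError  # placeholder: chat-completion call

    def answer(question: str, chunks: list[str], top_k: int = 3) -> str:
        vecs, q = embed(chunks), embed([question])[0]
        sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
        context = "\n\n".join(chunks[i] for i in np.argsort(sims)[::-1][:top_k])
        return generate(f"Answer from the context below.\n\n{context}\n\nQ: {question}")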

rco8786 · 16h ago
Yea that's the thing we really don't want to do. We've found a couple legitimate use cases, in all honesty. But they're more like marginal-improvement features, not industry-disrupting features.
kevinventullo · 20h ago
Have you tried asking AI how to plausibly include some slides or bullets about AI in the existing pitch?

When you need a dash of convincing b******t, they are excellent generators.

econ · 21h ago
Have it monitor the entire operation in real time. Me and my way with words :)
bo1024 · 22h ago
[flagged]
drdrek · 21h ago
Markets are efficient-ish, which is probably as good as it gets. If your framework can't account for dishonesty, it's not a very useful tool for analyzing human endeavors.
rco8786 · 22h ago
I mean. It’s just reality lol. We need money. We tailor our deck to what investors want to hear.
jeffreygoesto · 22h ago
But why do you want them in your business, knowing you will have to cash them out someday? Why not grow organically and own your business?
intended · 21h ago
There are a lot of conditions for this to be a good idea, but typically you can grow at rate X over time. If you raise enough money to buy someone else, you can grow at maybe 2X.

If you can raise money at a low cost that allows you to grow NOW, as opposed to later? All things being equal, that's the better choice.

NB: This is a toy model, terms and conditions apply.

rco8786 · 20h ago
Because that’s not feasible for our product. It’s the whole reason investors exist in the first place.
hluska · 21h ago
Not all businesses can survive that model.
bigstrat2003 · 21h ago
I would argue that any business which cannot survive that model is a business that should not exist. If you can't grow organically over time with your profits, chances are you aren't making anything actually useful or worthwhile.
msteffen · 13h ago
…first of all, what about every business that isn’t software? Restaurants need to secure a lease and buy equipment. Manufacturing anything requires you to build a factory. Then, is it so far-fetched to imagine a software business that needs to write a bunch of code before it can start selling?
hock_ads_ad_hoc · 19h ago
They have to seek vc funding to grow because one of their competitors will. If they don’t, they will lose the market to the vc backed hyper scaler.
rco8786 · 20h ago
That's pretty silly. Not every business can be started and reach profitability without upfront capital. Even if it could, that doesn't mean it's bad for a company to take investment to scale faster. Economies of scale are a real thing.
mystified5016 · 21h ago
Because that's simply not how business works
lucianbr · 19h ago
SBF or Elizabeth Holmes or Madoff could say the exact same thing. I am convinced they thought the exact same thing.
rco8786 · 16h ago
There's such an enormous divide between tailoring a pitch deck to highlight the things an investor wants to see and outright fraud that I don't even know where to begin with it.
candiddevmike · 20h ago
Sounds like securities fraud
rco8786 · 16h ago
It would be fraud if we were lying about any of it, but we’re not. That’s why we’re trying to find AI use cases instead of just making up some BS.
echelon · 22h ago
It's not dishonest. It's opportunity cost.
hluska · 21h ago
A truly efficient market would be one where all public and private information is incorporated into price. There’s no room to beat the market in an efficient market.

Venture capital is definitely not an efficient market. But I’m not sure what your point is.

throwaway314155 · 22h ago
User describes dishonest ecosystem. Gets upvotes. Another user expresses empathy that they're trapped in said dishonest ecosystem (which is inarguably dishonest)? Downvotes.

Don't change HN.

rco8786 · 19h ago
I did not perceive it as empathy fwiw
hluska · 21h ago
I downvoted that comment because efficiency is a very specific term with a precise meaning in markets.

No comments yet

jbmsf · 21h ago
Ditto.
ghaff · 21h ago
In fairness, as a company, you pretty much have to jump on the next bandwagon a lot of the time, even if you have doubts whether it's there yet or is going to be a flash in the pan.

My story is that OpenStack didn't really work out but, if you were serious about the cloud thing, you sort of had to hop aboard even if, in general, the landscape ended up playing out differently with containers.

esseph · 18h ago
OpenStack probably has more users right this minute than it ever did before.

Broadcom is to thank for that.

ghaff · 18h ago
It’s definitely pretty big in telco, but the original vision was probably a lot bigger than how things actually played out once containers arrived.

And yeah, a sizable part of the momentum probably came from shops moving off VMware but not yet ready for containers, which ironically mirrors how VMware got its own initial momentum from physical servers.

esseph · 13h ago
Enterprise is pivoting now, and has been for about four years.

Many went to Nutanix for virtualization and containerization, but many shops will stay virtualization-only for security and governance reasons for a long time.

ericmcer · 21h ago
Yeah, this article is dumb. It asserts that interest rates will never come back down, and that AI is the last VC-funded tech craze that will ever happen.

There is no way both of those are even remotely true.

oli5679 · 23h ago
It’s really easy to be cynical.

There is a big upside potential for high growth companies taking advantage of technology trends.

Today, Google’s revenue is £263.66 Billion. This is nearly 300x the revenue Google generated in 2003 ($961.9 million). The company went public on August 19, 2004, at $85 per share, valuing the company at $23 billion. After the IPO, Google reported $1.47 billion in revenue for fiscal year 2003, with a profit of $105.6 million.

matthewdgreen · 21h ago
But let’s ask a different question: aside from re-allocating the economy’s marketing and advertising budget into Google (from, presumably, local newspapers and TV before Google existed) how much of that revenue comes from actual tangible new wealth creation?

To put some context on this, 78% of Google’s revenue is advertising. Overall US ad spending has been increasing at about 1.6% per year since 2001, with no obvious indication of an acceleration (beyond some bumps around 2007/8.) So is there actually a success story beyond market capture here? And if all we’re doing is concentrating existing business into new channels, is this something we should be excited about?

econ · 21h ago
You simply pay more for a product to find you. The more overpriced ones find you first. Google's business model is to make it as hard as possible for products to find you while simultaneously pretending to be the go-to place for precisely the opposite. A truly magical accomplishment.

Wealth creation?

pzo · 19h ago
Google created Android, the most popular OS. Sure, maybe Samsung or Nokia would be used instead, but Android definitely helped expand the ad business, the same way Meta/ByteDance expanded the ad business with Instagram/TikTok. Even if ad spending grew 1.6% per year, it's not clear it would have grown as much if Android didn't exist. Also need to take into account the probably reduced cost of advertising: this product just got cheaper. That the ad market grew 50% in 25 years doesn't mean we only have 50% more ads served, just as 50% growth (in $) in the smartphone market doesn't have to mean only 50% more smartphones if they got cheaper.
piva00 · 6h ago
Technically, Andy Rubin and Chris White weren't at Google when creating Android. In usual big tech fashion, Google did a good acquisition but didn't actually "create" Android, they bought it.
nottorp · 21h ago
The antitrust people in various governments should definitely get excited about it :) And they apparently are indeed.
jayd16 · 20h ago
Online commerce is a huge innovation and Google is part of that.
matthewdgreen · 19h ago
But is a Google with 78% market share actually important to this? Could we have a network of companies doing this job, perhaps less efficiently, and the world would be just about as decent?
hluska · 21h ago
This is an interesting topic and I’m not sure it has an answer. In 1995 advertising was really spray and pray. Testing ads was a really difficult proposition, so we saw things like bearer coupons ("mention this ad or bring in this coupon for 20% off!"). The dominant advice out of radio was to play an ad constantly. That advice worked well for traditional media but not necessarily for advertisers, yet it stuck so well that thirty years later, people around my age in my city can all sing the same five advertising jingles.

Google provided a toolkit to test ads and figure out which are most effective. Now, the other side of that argument is that in industry, a massive percentage of qualified people still spray and pray. The advertising industry as a whole is far from data driven.

At one point, there was an argument this was good for the planet. My newspapers are much thinner than they were 30 years ago when I could collect a metre of newsprint a month if I subscribed to the Globe and Mail plus a local. But I don’t think anyone can claim now that data centres are environmental miracles. This has also decimated local journalism to such a point that people are less aware of environmental catastrophes in their own relative backyards.

It’s possible the net effect was positive and advertising is more efficient. It’s more accurate to say advertisers have a toolkit to analyze effectiveness, but many don’t use it or aren’t capable of doing so.

Edit - I’m going to give a very specific example of a radio jingle. If anyone is around forty or older and from a major city in Saskatchewan, they will be able to finish this.

“I said no no no no don’t pay anymore, no GST and no money down.”

matthewdgreen · 19h ago
Maybe another way to say this is: do targeted ads materially advance society? Is there an argument that better ad-targeting has increased GDP, or improved overall economic growth in some other way? Would a less-efficient online advertising system produce dramatically worse outcomes? And did we get more or less back from the older system (TV+local journalism) than we get from Google?
toomuchtodo · 22h ago
Is that because of innovation? Or is it because of the anticompetitive conduct the US government is currently prosecuting Google for? Safari default search engine deal, etc.
bobxmax · 22h ago
"Antitrust" is just lawyer-talk for winning strategies that we later arbitrarily decide is not good for capitalism.

They weren't a bunch of gremlins in a cave conspiring to commit "anti-trust violations" in 2005. They were smart as hell and invested in the right areas.

Microsoft would get hit with the same anti-trust Google is being hit with if Bing and Windows Phone were successful - they're getting away with it because they're terrible.

intended · 21h ago
Antitrust is not lawyer-speak for winning strategies. It’s a specific term, and there’s a time and place to use it. Antitrust is what people, especially programmers, have been calling for ever since these firms got this big.

Inefficient markets are bad for humans and are bad markets. They allocate resources inefficiently. The Google graveyard is (arguably) a case in point.

The reason Khan reached the FTC was that her thesis in law school made the case that Amazon’s actions reduced consumer welfare, a fact that was covered here on HN. This isn’t something a community notices unless it matters to them.

tossandthrow · 22h ago
> They weren't a bunch of gremlins in a cave conspiring to commit "anti-trust violations" in 2005. They were smart as hell and invested in the right areas.

You still killed the man regardless of intention.

wavemode · 20h ago
You seem to possess the (very common) misconception that monopolies are illegal. They are not. Rather, it is illegal to intentionally use one's monopolistic position to make it difficult or impossible for others to compete.
nottorp · 21h ago
> winning strategies

Winning for who? Not for society as a whole, that's certain.

To put it in money terms so even you can understand it, how much time has been wasted globally because Google is peddling ads and spam sites instead of pointing people to useful results?

Is that free? We should subtract it from the GDP calculations if you ask me...

toomuchtodo · 22h ago
> "Antitrust" is just lawyer-talk for winning strategies that we later arbitrarily decide is not good for capitalism.

“I don’t like the law and its application” isn’t an argument.

tqi · 22h ago
These aren't like the laws of physics, there's obviously a lot of post hoc interpretation that happens.
toomuchtodo · 22h ago
Certainly, when a law that wasn’t applied begins to be applied again, I can see certain mental models taking issue with that. Regardless, the law and its historical application and consequences didn’t change, only a politically driven low enforcement period. That was the anomaly.
KerrAvon · 21h ago
The meaning of “antitrust” is very clearly defined. You’re allowed not to like it or to think that laws against it should be struck down, or whatever, but you can’t say it’s something it’s not.
echelon · 21h ago
There are models that are close enough.

If you allow one company to achieve market dominance, it suffocates the ecosystem and stifles evolutionary growth pressures. It's concentrated malinvestment into a local maximum that salts the playing field so thoroughly that escape velocity is unattainable by anyone else.

There are models of this. And historical anecdotes and evidence.

snowstormsun · 23h ago
Don't confuse cynical with realistic, though.
AbstractH24 · 2h ago
It’s literally a matter of perspective
asdf6969 · 21h ago
Google Search, Google Maps, Gmail, YouTube, and Chrome have all been good functional products for over a decade. I genuinely don’t know what they’ve been doing since then other than milking us and getting new customers. Maybe 10% of this growth leads to a real improvement in human lives.
TrapLord_Rhodo · 21h ago
I agree with the article in faith, but i think they've gotten the cause wrong.

The problem is, scaling was ALWAYS the hard part. Below a certain level, you don't have to worry about sharding and replicating databases, moving over to NoSQL, async race conditions, etc. Why bet the house on one business idea when you can have 10 "micro-SaaSes" that are all bootstrapped but might each make $10-20k in MRR?

In the day and age where the average business person has like 20-30 subscriptions for random tools, emails, websites, marketing, email lists, automations, SaaS products, freelancers, etc. it very much lends itself to the micro model.

The 'VC' business model is starting to break down. Just looking around YouTube and Indie Hackers, most of the successful businesses nowadays are bootstrapped, where the founder has some kind of community: they blog, do YouTube, have a Patreon, X, etc. They become the brand and they have no use for VCs. As soon as they launch a new app idea, they have 200k people on Twitter and 150k people on YouTube who will at least give the app a look.

wslh · 20h ago
I agree that today's focus is more on integration than relying on a single "corporate" system. However, I believe a major issue with micro-SaaS in general is security. While even FAANG companies face security challenges, relying on many smaller SaaS providers introduces weak points into your system, and security is a challenging factor for small company budgets.
benoau · 20h ago
This isn't exclusively or particularly solved by VC-funded or giant tech companies - Dropbox once deployed a bad build that accepted any password, Apple accidentally had a blank root password, I'm sure there are many more embarrassing tales like this.

https://techcrunch.com/2011/06/20/dropbox-security-bug-made-...

https://www.macrumors.com/how-to/temporarily-fix-macos-high-...

wslh · 20h ago
Not saying that big companies don't have security issues. Expressing it differently: having multiple heterogeneous dependencies increases the attack surface for supply chain attacks.
esseph · 18h ago
Now imagine you have 16 national industries that you've defined as basically "needing the best security reasonable", and most of those industries only deal with security as much as their insurance companies make them.

It's a nightmare out there.

TrapLord_Rhodo · 17h ago
I don't really understand this comment.

You are exactly the people I am trying to avoid with this model. I'm not trying to make big deals with big companies that can be impacted by security. The micro-SaaS model requires that when I get a client asking those kinds of questions, I run from them and tell them my tool is probably not for them. Any app that requires transferring sensitive data shouldn't be built on the micro-SaaS model.

Micro-SaaS means small, simple tools that may be low-hanging fruit. Sometimes they aren't even micro-SaaS, just random tools that make money for you by being a glorified OpenAI wrapper with a bunch of integrations. Honestly, a lot of the tools I see making money for people are built on Make or Replit. No code even required, but definitely not going after the "we need sensitive info or PII" market.

All payments just go through their respective provider, so there isn't really a risk there either.

enahs-sf · 22h ago
The article somehow misses the mark on how VC firms actually make money apart from carry: it's management fees. Thus, a16z raising a $20B fund at a 3% management fee and 30% carry effectively guarantees them $600M even if the fund goes to zero, and they have many such funds.

Sure, they would prefer to make money through carry, but the management fee is a nice downside protection.
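
For a rough sense of scale, here is a minimal back-of-the-envelope sketch of that fee math (the $20B and 3% figures are from the comment above; the 10-year fund life and the step-down to 2% after year 5 are my assumptions, since actual LPA terms vary widely):

    # Back-of-the-envelope management-fee math for the figures quoted above.
    # Assumed (not from the comment): a 10-year fund life, 3% of committed
    # capital for years 1-5, stepping down to 2% for years 6-10.

    FUND_SIZE = 20_000_000_000      # $20B committed capital
    EARLY_FEE, LATE_FEE = 0.03, 0.02

    annual_early = FUND_SIZE * EARLY_FEE     # $600M/yr during the investment period
    annual_late = FUND_SIZE * LATE_FEE       # $400M/yr afterwards
    lifetime_fees = 5 * annual_early + 5 * annual_late

    print(f"Year-1 fee:    ${annual_early / 1e6:,.0f}M")
    print(f"Lifetime fees: ${lifetime_fees / 1e9:,.1f}B (paid regardless of returns)")
    # Carry (30% of profits) only pays out if the fund actually returns a profit.

Under those assumptions the fees alone run to several billion dollars over the fund's life, which is the "downside protection" the comment describes.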

alephnerd · 22h ago
Funds like A16Z can demand a high management fee because they have shown that they can deliver reasonable exits.

Most funds have management fees in the 1-2% range and a carry at around 20%. VC is a power curve, where a couple of large funds have an outsized impact.

And if a fund or VC (from associate to partner) cannot deliver, your career in the space is basically over.

999900000999 · 21h ago
Is it too late for me to pitch my AI driven startup that just takes a very common application ( task management) and adds some API calls to LLMs ?

Can I apply for YC again and get my annual rejection? So I can cry upper middle class tears.

I really need a business partner to keep me focused on features people actually want.

But my main business friend is focused on much more important things ( raising a new family) now.

Thinking about what's more important right now, making some games I know will make no money.

Creating a B2B startup that will also make no money.

jparishy · 22h ago
IMO the best innovations tend to happen when someone has their hands bound and has no other choice but to figure out a new way or die. Now seems like an incredible time for new models and startup paradigms.

To me I think VCs figured out a way to market a very specific way to build companies and convinced a lot of people it's the only way for 20-ish years. Then there was this sort of shift to selling to enterprise, I think because B2C got harder and easy money was the goal. By then a lot of enterprise decision makers were probably in the networks of the people selling. There's a meme about YC-of-late being mostly companies that sell shovels to each other.

But when you optimize for enterprise, I think you end up losing a lot of diversity of opinion in where the value comes from, which leads to top-heavy companies.

My main issue is that after the ZIRP era I don't believe the money is gone or unavailable. It just seems to be hoarded for some reason. There is astronomical wealth out there that could be used for trying new economic models that compete with the last generation of VCs. But it isn't happening.

Maybe the next era of VC decision makers, the ones who themselves were funded on big bets, just don't have the same appetite for risk? Or maybe the era of "developing your brand" has made them not want to share their success? I'm not sure but it's weird to me.

sonicgear1 · 21h ago
I can't wait for the downfall of AI. It has become like a plague at this point.
AbstractH24 · 2h ago
Sounds like something folks would have said in the lead up to dot com bubble bursting
culebron21 · 20h ago
I've noticed AI evangelists -- those second-tier experts who aren't on every show and YouTube channel, but do consulting and once in a while appear here and there -- jumped on the hype wagon 1.5-2 years ago.

They'd peddle FOMO and promise El Dorado to everyone who joined: "AI will do everything for you, you'll be fired."

Now I see them change the tone. It's like: "c'mon, it's a special tool, you need to use it properly, and give it what it needs."

They sell quickly made courses. Same guys who in '12 would advertise "mobile strategies" consulting (remember that thing here on HN?), then AR with Google Glass in '14, then crypto in '17, then web3, and so on.

shubb · 17h ago
I think there is something like a pyramid scheme of people who sell classes, and people who sell classes about how to sell classes. The top level slowly pull the whole chain to the next fad and then the next. Drop shipping, ghost writing books for kindle unlimited, various flavors of wellness... you can watch them flow into a space and usually destroy it.

I'm curious how centralized the operation is. Individually it's a bunch of hustlers running their own little personal branding operation, but if each of them is in a masterclass, and the masterclass leaders are in a masterclass, has a small group of mega-influencers formed, and who are they?

culebron21 · 14h ago
5 hours later: Klarna changes its AI tune and again recruits humans for customer service https://news.ycombinator.com/item?id=43955374

"of course, AI is just another tool, and has its niche" :D

jmclnx · 23h ago
I guess it is time to do your homework and invest in real tangible products instead of trying to make a quick buck and get out before the company fails.
mistrial9 · 23h ago
I can work for a year and produce something worth a few thousand dollars in profit; meanwhile the financial grind generated that in a portion of a day. Those are the same dollars used to buy a house or save for retirement.

see Thomas Piketty .. this will get worse before it gets better

cadamsdotcom · 17h ago
What’re your thoughts on how it’ll get worse, and how it’ll get better?
tim333 · 13h ago
He seems a bit overly negative on AI, quoting VCs as saying:

>Just as “internet” evolved from buzzword to business backbone, AI is following the same playbook.

>No it isn’t! ... Stop saying dumb things!

Meanwhile the tech is cracking along https://x.com/waitbutwhy/status/1919870578502021257

n_ary · 17h ago
I am curious: if everything is going to OpenAI, how are Anthropic, xAI and the other, lesser-known ones being funded? That said, I suddenly realized that I only know of the things that get heavy media coverage. Other than these few, I know Meta's Llama, Google's Gemini, DeepSeek, Mistral and Qwen.

Aside: I shed a few tears reading the article; since the death of n-gate.com, nothing like it existed until today, when I found pivot-to-ai.com.

vessenes · 21h ago
This is so wrong as to be almost laughable. My VC friends are working double shifts on financing companies that are disrupting nearly every market vertical. There’s almost an infinite greenfield out there right now.

Quick example : a company founded in the last three or four months that provides appointment setting and calendar management for a single healthcare vertical. They are already profitable. There are at least 5,000 market verticals like this in the US alone.

Tech to provide this is going to keep commodifying and that will leave early entrants as wealthy incumbents; journalism telling people to be certain that the opposite will happen is borderline irresponsible and certainly missing the situation full stop.

ffsm8 · 20h ago
Totally off topic, but the term market verticals was new to me, so I googled it... Its definition is really strange for the term used. I expected something about verticality, like how Apple provides its devices with everything coming from them: hardware, software and the shop they're sold in. Or how BYD creates its EVs, with everything coming from their own factories, etc.

But instead, it's about specializing your product to the needs of specific customers / finding a niche and then providing exactly the service they desire. How is that in any way vertical?

Etymology is strange

robocat · 3h ago
The words horizontal and vertical get abused badly in business jargon. Often meaninglessly.
SpicyLemonZest · 19h ago
The structure within a large company that's trying to target it is vertical. You have your normal "horizontal" structure, where engineers try to build a general product, marketing tries to advertise it in the most promising places, and then sales tailors their pitch to whatever prospects they happen to have. Then you might have a vertical with e.g. healthcare sales, healthcare marketing, and engineers for your healthcare-focused product all under common leadership and executing on common goals.
shubb · 16h ago
Right, but isn't this sort of the opposite of the usual VC-funded companies, which were about massive potential scale based on solving a very common problem?

Like, I thought the whole point with VC was to find ideas that could 100x your return, which by definition would be horizontal markets.

tim333 · 15h ago
I'm not sure David Gerard, the author, is very good at this stuff. He's spent the last several years moaning about cryptocurrency in a not very deep way and now seems to have pivoted to moaning about AI and VC without deep knowledge of either. He also pops up as a Wikipedia editor moaning about stuff https://news.ycombinator.com/item?id=40928248

For an alternative take, check out, say, the FT's "AI frenzy leads US venture capital to biggest splurge in three years" https://news.ycombinator.com/item?id=40928248

(for amusement here he is going on about bitcoin being a pump and dump when it was $16 in 2011 https://newstechnica.com/2011/06/18/bitcoin-to-revolutionise...)

chis · 12h ago
Only sane comment here lol. I just don't know what the point of reading rants like this is vs going to a primary source like Pitchbook, which will tell you that venture deals are almost as high as the 2021 craziness right now

https://nvca.org/wp-content/uploads/2025/04/Q1-2025-PitchBoo...

Anyone even close to the space will tell you there are a million tiny AI-adjacent startups getting funded right now. Which if anyone is in one and looking for an early hire engineer please do reach out, I have the background but haven't found anything in my network yet.

moneywoes · 21h ago
I seem to be bad at market research, but as someone who's tried appointment setting for healthcare verticals, all my user interviews indicated that either their EHR (e.g. Cerner, Oracle, Athena) or, if smaller/independent, something like Calendly already worked.

Am I missing anything? What's the differentiation?

JohnMakin · 13h ago
imagination
RainyDayTmrw · 20h ago
Intentional or not, that sounds too uncomfortably close to personal bragging.
spitfire · 19h ago
Didn’t Patrick McKenzie (patio11) do this already with Appointment Reminder?
ldjkfkdsjnv · 19h ago
There's a quiet gold rush going on for people in the know
asdev · 21h ago
the expected value of their AI bets panning out is so high that they're obligated to make those bets
Beltiras · 21h ago
Same as it ever was.
ghaff · 21h ago
Anecdote != data of course but I've had multiple VCs reach out to me looking for funding and one angel investor I talked to complain about the dire situation with exits.
jay-barronville · 20h ago
Another thing I’ve observed for a while now (i.e., since everyone became obsessed with LLMs) is that a lot of founders nowadays—particularly at the pre-seed and seed stages—seem to view raising venture capital as a one-and-done type of thing.

I’m not exactly sure how prevalent this mindset is, but I’ve talked with lots of founders within the past couple of years and I’ve encountered this mindset a lot, which, to me, is a huge contrast to the mindset I encountered, e.g., about ten years ago when I was a young founder and I was raising a small round (mostly from angels)—back then, it seemed like every single founder was chasing venture capital non-stop and usually was already thinking about their next round before even closing their current round.

If this mindset is (or becomes) prevalent, unless a lot of these startups are quickly acquired for large sums, is venture capital, as it currently exists, ready to deal with this shift?

jgalt212 · 22h ago
ZIRP is gone for sure, but most of the $7T the Fed printed during COVID and the $3.5T it printed post-GFC remains (net +$8T). For this reason, I believe the era of VC-driven nonsense is far from over. A simple example is that stonks have recovered almost all of the post-Liberation Day losses. This excess cash / helicopter money for the investor class has to go somewhere.

https://fred.stlouisfed.org/series/WALCL

yhoots · 20h ago
Very funny that people will buy entire domains to spout Reddit-brained anti-tech takes. Same people who said crypto would go to zero and self-driving cars would never work.

Yes, many AI companies will go to zero. This is how every tech bubble works, and innovation happens by people trying stuff and mostly failing. But in the end the survivors will remake the world for the better. Very sad to see this sort of drivel being popular here.


RainyDayTmrw · 19h ago
I'm not sure where this originated. During the software boom of the mid-2010s, someone told me that the vast majority of venture capitalists are lemmings[1]. That idea is as true as it ever was, and perhaps even more so in today's time of AI.

[1] Implying that they blindly follow anything and everything. The origin of this metaphor has since been debunked, but the metaphor itself lives on. https://www.adfg.alaska.gov/index.cfm?adfg=wildlifenews.view...

jimbob45 · 19h ago
I’m to believe cultured meat wouldn’t also hit a VC lottery ticket? Or the right mRNA vaccine?
mapt · 22h ago
> The report somehow fails to mention the bit where the Silicon Valley VC and executive crowd worked their backsides off to elect Trump and several of them sat in the front row at his inauguration. Then they were actually surprised when the leopard ate their faces too.

Many of these people are still running the Trump Administration from the shadows. Elon Musk turns out to just be one example of the billionaire to right wing brainrot lunatic infohazard funnel.

It mentions the Semafor story. Here's Adam Conover covering that - https://youtu.be/3_PKKUFxRyk?feature=shared

etaioinshrdlu · 23h ago
Ignoring politics, which I think makes this article somewhat off-topic, I would argue with many of its points, but just one would be the claim that ZIRP specifically caused inflation and required higher interest rates.

The way I remember it, rates were already raised substantially during the first Trump admin (not due to inflation), lowered again during COVID, and then raised due to inflation, with a likely causative factor in that inflation being high government spending and stimulus.

jeltz · 22h ago
If the issue had been as simple as government spending, we would not have seen inflation in countries that did not have the same government spending the US had; but we did. Maybe it contributed, but I doubt it was the primary cause, since inflation happened all over the world and not just in countries with generous stimulus packages.

Plus a lot of the inflation was caused by increasing energy prices in 2022, totally unrelated to government spending.

etaioinshrdlu · 21h ago
Sure, but I think COVID disruptions broadly are a much better explanation for inflation than the previous decade of ZIRP, which the article asserts.
wombatpm · 22h ago
Rate increases didn’t start until 2022. I was with a startup in the mortgage industry and got to experience that company’s first layoff followed by a layoff at a different startup 10 months later because raising another round of capital was seen as impossible. First action by the Federal Reserve March 16, 2022: Increased to 0.25%–0.50%
etaioinshrdlu · 21h ago
In the chart in the article: https://pivot-to-ai.com/wp-content/uploads/2025/05/zirp-2005...

You can clearly see non-zero interest rates from 2016 to 2020...

wombatpm · 18h ago
Very slow increase to around 2% in an attempt to end QE, dipping to 1.5% before falling off a cliff with COVID. Compare to rates going back to 2000.
toomuchtodo · 22h ago
Broad macro inflation is due to smaller working age populations. The pandemic pulled this forward due to excess deaths and retirements (~3M). It’s also why the labor market has been so resilient (unemployment rate) in the face of so much uncertainty and volatility. There are simply not enough workers.

Egg inflation is Avian flu related, housing is because of a housing shortage caused by building pipeline impairment since the 2008 GFC (which will likely never recover, due to structural demographics and no appetite to allow immigration at the level required for the construction trades).

(~4M Boomers retire a year, ~11k/day, ~2M people 55+ die every year, about half of which are in the labor force; that means ~13k-14k workers leave the labor force every day in the US)

https://www.axios.com/2024/06/27/labor-shortage-workforce-ec...

http://charleshughsmith.blogspot.com/2022/08/are-older-worke...

https://www.stlouisfed.org/timely-topics/retirements-increas...

scarface_74 · 22h ago
You can’t ignore politics when it’s the direct cause of this economic mess
jeffreyrogers · 22h ago
It's a little hard to say for sure because companies can stay private longer and in some cases don't need to go public at all, but it seems that the last wave of huge successes were founded pre-2015 and since then the industry has been looking for the next wave, first with crypto, now with AI, and there are some tentative pushes into manufacturing/defense.

AI actually seems like a great fit for the VC business model, much more so than most SaaS companies are. Successes are likely to make a ton of money and they can't self finance or finance with debt because they need to spend a huge amount of money.

Herring · 23h ago
I think the problem is businesses are tiny little fascist dictatorships. They are always trying to pay less taxes, evade regulations, layoff workers, monopolize, destroy competitors etc. This is their first time ever having to think about the public sphere, or common good, or government, or democracy, or rule of law. They suck at that, it goes against all their training and instincts.
gruez · 22h ago
>I think the problem is businesses are tiny little fascist dictatorships. They are always trying to pay less taxes, evade regulations, layoff workers, monopolize, destroy competitors etc

Are you just using "fascist dictatorships" as a generic label for things you don't like? The things you've listed might be bad, but they're neither dictatorial nor fascist. It's even questionable whether some of them are bad at all. Don't we all try to minimize our tax burden? Is there anyone out there who refuses tax credits because that means "paying less taxes"?

thethethethe · 21h ago
Not a huge fan of calling random things you don't like fascist but op has a point here

> The things you've listed might be bad, but they're neither dictatorial nor fascist.

Uhh, I'm pretty sure that CEOs/executives act very similarly to dictators. Large companies certainly don't act like democracies. Companies often employ many forms of totalitarian control used by fascist dictatorships: mass surveillance (mouse trackers, email auditing, etc.), suppression of speech, suppression of opposition, fear of termination, cult of personality.

The tax stuff is irrelevant imo though

econ · 21h ago
Where are all the good ideas to defeat this formula? If we can't come up with any, why are we using democracy to run countries?
thethethethe · 20h ago
Democratic control of production. See the mondragon corporation for an imperfect but interesting example.

Strong unions are another alternative to totalitarian control of companies. Not ideal, but there are plenty of examples throughout history.

I'm not claiming these alternatives are better or worse, I'm just pointing out that other systems are possible and already exist.

Fwiw, whenever my team has done democratic planning it has always led to bad outcomes

econ · 20h ago
I read the Mondragon corporation works according to the ICA.

https://ica.coop

One member one vote doesn't seem very imaginative.

Compared to a dictator, a focused team effort will have better results, but a set of people who don't care or have an overly limited grasp of the topic won't do well. This probably doesn't matter too much if things are going well.

I fool around with the concept of department-specific voting certificates, with each component of the department written into its own "law" that one can vote yes/no on, or vote to remove. Each cert adds weight to the vote. People writing the rules are elected by the same mechanic. Activating a rule or board member needs 55% "yes", deactivating needs 55% "no", and removing needs 65%.

One can participate in all departments and each certificate comes with a small pay raise.
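
Read one way, the mechanic could look roughly like the sketch below. The base weight of 1 per voter, the extra unit of weight per certificate, and reading "remove" as a 65% weighted "no" vote are all my assumptions; the comment above doesn't pin those details down.

    # A minimal sketch of the certificate-weighted voting described above.
    # Assumptions: each voter counts as 1 plus 1 per certificate held, and
    # "remove" is read as needing a 65% weighted "no" vote.

    def shares(votes):
        """votes: list of (certificates_held, 'yes' | 'no') per participant."""
        yes = sum(1 + certs for certs, choice in votes if choice == "yes")
        no = sum(1 + certs for certs, choice in votes if choice == "no")
        total = yes + no
        return (yes / total, no / total) if total else (0.0, 0.0)

    def decide(votes, currently_active):
        yes_share, no_share = shares(votes)
        if not currently_active and yes_share >= 0.55:
            return "activate"      # 55% weighted "yes" activates a rule/member
        if currently_active and no_share >= 0.65:
            return "remove"        # 65% weighted "no" removes it entirely
        if currently_active and no_share >= 0.55:
            return "deactivate"    # 55% weighted "no" deactivates it
        return "no change"

    # Example: three voters holding 0, 2 and 5 certificates respectively.
    print(decide([(0, "yes"), (2, "yes"), (5, "no")], currently_active=False))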

Herring · 20h ago
Better anti-monopoly enforcement, better worker-rights regulations, better taxation schemes for redistribution, better healthcare etc. Even stuff you wouldn't think about like free college or good Singapore-style public housing reduces economic pressure on workers, which reduces companies' leverage.
econ · 19h ago
Interesting. Yes, employee maintenance costs like healthcare and especially housing hurt the economy magnificently. That said, those things only make the dictatorship model more palatable. I want a system to compete with it and kill it.
Herring · 18h ago
Well ping me if you find it. I think the winds are seriously blowing against you right now: https://americanaffairsjournal.org/2020/08/the-china-models-...
gruez · 21h ago
>Uhh I'm pretty sure that CEOs/executives act very similar to dictators. Large companies certainly don't act like democracies. Companies often employ many forms of totalitarian control used by fascist dictatorships. There's often mass surveillance (mouse trackers, email auditing, etc), suppression of speech, suppression of opposition, fear of termination, cult of personality.

Where does employment/voluntary association end and "fascist dictator" begin? If you're being paid for your time, it's only fair that whoever's paying you can monitor your work and decide what you're doing. I agree that some businesses go beyond this and try to regulate what you do outside of work, but it's a stretch to make a broad claim like "businesses are tiny little fascist dictatorships". That makes as much sense as "governments are tiny little fascist dictatorships", just because some of them are authoritarian.

hackable_sand · 16h ago
> If you're being paid for your time, it's only fair that whoever's paying you can monitor your work and decide what you're doing.

I disagree. It is authoritarian to assume ownership over someone's body. It doesn't matter how much you've paid. You cannot compel someone to labor.

thethethethe · 20h ago
You are taking my counterpoint a little too far.

All I am saying is that there certainly are similarities between the way fascist governments and large corporations operate, not that they are the same thing.

Based on your response, it sounds like you agree that companies often act in an authoritarian manner, its just that you think it is justified in some way.

To be clear, I am not making a value statement here, I am just pointing out similarities between two systems. I don't claim to have better systems for managing corporations. Tbh, I wouldn't want the majority of my coworkers calling the shots, and if I were CEO, I would work to consolidate power.

Herring · 21h ago
You're missing the point. My response was to the article:

> The report somehow fails to mention the bit where the Silicon Valley VC and executive crowd worked their backsides off to elect Trump and several of them sat in the front row at his inauguration. Then they were actually surprised when the leopard ate their faces too.

They vibe with Trump because they have the same training, and they've done very little actual democratic governance. Very little thinking about the common good. You can argue most companies are actually more like benign dictatorships, but that's irrelevant.

To be fair I'm often a fan of markets, but not when the companies are monopolies larger than most nation states, actively increasing inequality and fighting counters like regulation/unions, not to mention affecting elections like fb/musk. In that case it's not voluntary. Wikipedia has an entire section on market failures https://en.wikipedia.org/wiki/Market_failure

lurk2 · 21h ago
> Are you just using "fascist dictatorships" as a generic label for things you don't like?

That has been the pattern for the last 10 years.

ed-209 · 22h ago
Replace "training and instincts" with "incentives" and you may have the rudiments of an argument, though it's not clear to me what that argument is.
epistasis · 22h ago
I don't think that's true at all, business has had to think about the public sphere and how to shape it to their benefit since corporations started.

For a long time Silicon Valley tried to avoid politics, or at least that was the vibe of everyone. But even in the 1980s, when Japan was rising and threatening to outcompete the US semiconductor industry, Captains of Industry who had been strident libertarians went hat in hand to Uncle Sam to tilt the field in favor of their survival.

CalChris · 22h ago
One thing I've learned in my long life is that there ain't no such thing as a libertarian.
Herring · 22h ago
You missed the point. Yes I agree they're more concerned with "how to tilt the field", but that's not what good governance is (should be) about. It's a bit like a kid trying to push the bounds of acceptable behavior, versus a parent trying to figure out how to raise multiple kids well.
mistrial9 · 22h ago
There might be insight there, but it's too broad a brush. As "everyone knows," Silicon Valley started as a dot-mil fab, but California invented the personal computer and sold it. Hey wait, just California?! Yes, mostly. See the West Coast Computer Faire; shout-out to Byte Magazine from New England too.

"avoid politics" was true for some stripes of participants and completely not true for others.. It is true IMHO that consumer gear generated a lot of success and was largely apolitical.

nadir_ishiguro · 22h ago
No, they always very actively did everything they could to get as much money and influence out of whatever political system they're operating under.

The Nazis very happily and intentionally worked together with corporations and the corporations were happy to exploit the free slave labor and lack of competition.

eWeSaYYY · 22h ago
That’s endemic to American capitalism not VCs of modern times.

Wall Street is built on looting the pensions of retirees.

The American economy has always been “already absurdly rich person takes 9 of the widgets worker produced and leaves workers 1 to share.”

In the money printer era this allowed big salaries for some compared to the norm but the rich still took their 9 too.

We wrapped mafia like thug behavior of the pre and post war world into empty semantics.

The only option for unlocking a massive amount of liquidity for the public to stabilize their individual situations is taxing the rich.

atmavatar · 21h ago
I've heard modern US politics described like the following:

A rich man, a blue-collar worker, and an immigrant are sitting around a table. In the center of the table is a plate of a dozen cookies. The rich man takes 11 of them, then leans in and whispers to the blue-collar worker "hey, I think that guy wants your cookie."

eWeSaYYY · 21h ago
The most insane are the science-minded, who understand it's made-up social gossip but role-play sycophants nonetheless.

All our best thinkers, the propaganda goes. Their best idea for the economy is to submit to the status quo.

They live the experience every day that, as knowledge workers with little real-world skill at providing for themselves, it's actually they who are the most easily manipulated and made to kowtow.