> Time to 365B Annual Searches = ChatGPT 5.5x Faster vs. Google
Is this really relevant? Google was formed when there were no gazillion phones to do searches a million times a day.
ChatGPT launched recently, when every stratum of society and every country in the world has double-digit internet penetration.
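To make that concrete, here is a rough back-of-the-envelope normalization (my own ballpark figures, not numbers from the report): express early adoption as a share of the people who were actually online when each product launched.

    # Minimal sketch. The online-population figures are rough public estimates
    # (~150M people online in 1998, ~5B in 2022), not data from the report.
    def share_of_online_population(early_users, online_at_launch):
        return early_users / online_at_launch

    print(share_of_online_population(100e6, 5.0e9))  # ChatGPT: ~2% of the online world in ~2 months
    print(0.02 * 150e6)  # a 1998-era product would have needed only ~3M users for the same share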
npunt · 7h ago
At first glance it's like Hollywood movies announcing they're the best-selling of all time, ignoring inflation. In other words, a ratchet just to get clicks.
However, this is relevant because this is an investor report helping people forecast, and this stat helps calibrate readers' expectations of just how fast a product can scale in this day & age, using a relevant comparison of products in the same category that, when launched, offered the same step change in value.
Also, quantity has a quality of its own.
kumarvvr · 6h ago
My gripe is not with the relevance of the data, it's with the chosen comparison. Comparing Google at the beginning of the internet revolution to ChatGPT now, with billions of internet-enabled devices across the world, is not a fair comparison and does not give any meaningful insight.
spyckie2 · 4h ago
But that's precisely the point, and it does give insight. Google scaled off of existing infrastructure like computers. Computers scaled off of existing infrastructure like electricity.
The point is to compare current era of scaling to the previous era and see how much faster it is.
It's not comparing Google to OpenAI. It's comparing the environment that produced Google to the environment that produced OpenAI.
patcon · 4h ago
With all respect, I'm not convinced what you're saying is wisdom
bee_rider · 4h ago
What does it mean to be fair in this context? The world is different now… it is plausible at least that the best comparison isn’t a fair one.
mcmcmc · 7h ago
Yes? With internet access being more prevalent than ever, it is expected that new product categories will have faster adoption. This demonstrates how much faster that adoption is, using ChatGPT and Google as proxies for their respective product categories.
nhinck2 · 6h ago
Sure, but it doesn't mean anything by itself. How much has the internet grown since Google's inception?
yorwba · 1h ago
How can it not mean anything that the internet has grown a lot since Google's inception?
jonny_eh · 8h ago
Right, needs more context. How many more people have internet access now?
airstrike · 7h ago
This may be my single biggest pet peeve. I think of this problem more broadly every time I see some new movie is the highest grossing movie of all time. No shit, Sherlock. More people, more screens, more movie theaters, inflation... the record is always going to be broken.
I hate this so much I actually ran the numbers and saw that per capita box office revenues have remained generally stable since the 1980s.
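A minimal sketch of the normalization I mean, with placeholder inputs rather than my actual dataset:

    # Deflate the nominal gross by CPI, then divide by population.
    # All argument values below are placeholders you would fill in yourself.
    def real_per_capita(nominal_gross, cpi, base_cpi, population):
        """Box-office gross in base-year dollars, per person."""
        return nominal_gross * (base_cpi / cpi) / population

    # e.g. compare real_per_capita(gross_1984, cpi_1984, cpi_2024, us_pop_1984)
    #      with    real_per_capita(gross_2024, cpi_2024, cpi_2024, us_pop_2024)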
kumarvvr · 7h ago
Exactly. As a VC, isn't it their job to bring out relevant, thought-provoking metrics?
Instead, we have this monstrosity of metrics that make no sense.
seanhunter · 3h ago
No. As a VC, their job is to raise capital from investors and then invest that capital to make a return.
Data scientists bring out relevant, thought-provoking metrics. They work for the people who work for the people who are the target audience here.
antithesizer · 5h ago
Come on. That's like being bugged about a pizza joint describing itself as "world famous"
matkoniecz · 3h ago
Well, I dislike both. And in either case I am going to trust less whoever repeats it uncritically.
silenced_trope · 7h ago
These types of comparisons are always annoying.
I saw another one recently that said something like "ChatGPT has 350 million unique visits per month; if it were a country, it would be the 3rd largest in the world".
Or something along those lines.
creato · 3h ago
I also wonder how much of ChatGPT's usage is to basically cheat on homework. Basically 100% of the users of ChatGPT I know mostly use it to do their homework for them.
Maybe this will turn out to be a valuable user segment, but I'm not sure.
pdevr · 7h ago
Observations and personal opinions, after glancing through the slides:
* The entertainment industry will be transformed drastically. Music and movies will be transformed by AI to such an extent that the next generation will find it hard to believe how the industry operated.
* The moonshot will be biological research and research in general. When a breakthrough happens, it will transform our health for the better in astonishing ways.
* In terms of direct adoption, the urban-rural divide is vast.
* Less democratic countries will have an advantage over the democratic countries in terms of fast execution, unless the latter manage to integrate private-public operations effectively.
* I've liked Mary Meeker's reports since the heyday of TechCrunch. This report has a lot of details that I did not know. Nevertheless, I didn't see a single point that stood out.
psychoslave · 5h ago
> Less democratic countries will have an advantage over the democratic countries in terms of fast execution, unless the latter manage to integrate private-public operations effectively.
This argument is a classic. But on what time frame is it supposed to hold? Is it backed by any empirical data to test the claim?
Less involvement of edge nodes in the decision process also encourages "don't give a shit about suggesting improvements" and "utter whatever lie the system expects to avoid accusations of dissidence".
There are actual pragmatic benefits to democratic systems; it's not only idealistic motivation that argues in their favor.
adventured · 4h ago
There are exceptionally few less democratic countries functional enough to take great advantage of AI's potential, much less execute in some sort of super-fast manner compared to the democratic nations. You can count those less democratic nations on one hand.
It's overwhelmingly the case that affluence and national wealth go hand in hand with greater democracy; there is a tight correlation (and of course there are exceptions). All you need to do is look at the top ~50 nations in terms of GDP per capita or median wealth per adult, then look at the bottom 50.
Less democratic nations will be left even further behind, as the richer democratic nations race ahead as they have been doing for most of the post WW2 era. The richer democratic nations will have the resources to make the required enormous investments. The more malevolent less democratic nations will of course make use of good-enough AI to do malicious things, not much about that will change. Their power position won't fundamentally change however.
> AI User Adoption (ChatGPT as Proxy) = Materially Faster vs. Internet Comparables
This is also interesting to see. ChatGPT currently has about 20 million paying subscribers and 400 million weekly active users.
Did 100 million people use ChatGPT within a few months of launch? Sure, but that was because of massive hype and word-of-mouth viral moments.
Is it comparable to Internet usage, especially on a "years since launch" X axis? Definitely not.
> 59 AI User Adoption (ChatGPT as Proxy) = Materially Faster + Cheaper vs. Other Foundational Technology Products
Another pointless comparison.
chatmasta · 8h ago
At some point I eyeballed a comparison to WhatsApp/Instagram/Snapchat growth, and IIRC although it was within the same order of magnitude, it still didn’t reach the rate of growth of those hypergrowth social apps.
CompoundEyes · 5h ago
I'd like to hear more discussion of AI being applied in ways that are "good enough". There is so much focus on it having to be 100% right or it sucks. There are use cases where it provides a lot of value and doesn't have to be perfect to replace tasks done by an imperfect employee who sometimes misses details too. Audit the output with a human, like Taco Bell (Yum) is doing with AI drive-through orders. Are most of the day-to-day questions a person asks so critical in nature that hallucinations cause any more issues than bad advice from a person, an inaccurate Wikipedia or news article, or mishearing? Tolerance for correctness proportional to the importance of the task, I guess. I wouldn't publish government health policy citing hallucinated research or devise tariff algorithms this way, but I'm cool with my generated pumpkin bars recipe accidentally having a tbsp/tsp error I'd notice while making them.
max_on_hn · 2h ago
I think we see this a lot with software development AI; the tab complete only has to be "good enough" to be worth tweaking. Often a "good enough" first pass from the AI is a few motions on the keyboard away from shippable.
Now with headless agents (like CheepCode[0], the one I built) that connect directly to the same task management apps that we do as human programmers, you can get “good enough” PRs out of a single Linear ticket with no need to touch an IDE. For copy changes and other easy-to-verify tweaks this saves developers a lot of overhead checking out branches, making PRs, etc so they can stay focused on the more interesting/valuable work. At $1/task a “good enough” result is well worth it compared to the cost of human time.
[0] https://cheepcode.com
I think biochemistry might be the biggest beneficiary of "good enough" AI. Much of the expensive and slow work of discovery (like creating a new drug or making sense of some crazy complicated protein structure) is already being speedrun, and while I'm on the conservative side of estimates, we will with mathematical certainty see groundbreaking new drugs and discoveries from the biochemistry field, not in the distant future but within a few years.
For the other stuff like automating white collar jobs, good enough might not suffice due to the intricate dependencies and implicit contracts formed naturally out of human groups.
Creative jobs will be the most impacted by "good enough", depending on the number of features involved. For 2D art it is almost certainly over (unless you add a text feature to it, like manga). You can see that with each added feature, fields like general photography, stock photography, and now product photography are made redundant overnight. For example, the latest Flux image editor negates the need to hire a photo editor, photographer, camera equipment, lighting, or a product artist. Veo 3 is not quite there, but it handles speech in video generation in a way other models did not and is getting closer to replacing videographers. I think 3D modeling is the next frontier following this trend, but it is still quite difficult, as it involves mesh generation/texture/rigging/animation/physics that must also come with shaders and interaction with other 3D models.
Software engineering falls somewhat into the creative field but also shares the complexity of white-collar jobs, which for the same reason will prevent it from being completely automatable with "good enough".
The hallucination issue is less of an issue than people think, and an old trope at this point. The truly challenging enemies of "good enough" AI are "not enough context" and "poor context compression and recall". The problems I listed for white-collar and software engineering jobs are context problems. The compression of contexts cannot be stable as long as the first problem isn't solved, and fast, efficient recall of contexts then cannot take place due to poor compression, and so on.
This is just my observation of how things are progressing. I do feel that we will see something different from LLMs altogether that could solve some of the context issues, but a major misalignment of incentives is what I think would prevent an AGI know-all-see-all type of deal. For example, you might not have any incentive to share all the essential context with the AI because you might become irrelevant and want it to stay in the dark, or you might have a union or some social organization legislate a monopoly for human knowledge/skill workers in a field.
But perhaps THE most difficult problem, even after we solve the context problem, is the inability of the God AGI to be awake or conscious, which is absolutely critical in many real-world applications.
I like to focus more on the very near-term impact of what AI is currently doing in the labs and its effect on humans than on worrying about who will address all of the other problems and when.
Whether we get a UBI-first socialist world order or a continuation of technological feudalism with the poors still using GPTs while the rich sell the energy and chips (software would almost be worthless on its own by then) is the least of my concern.
I'm an optimist and I'm very excited for the very-near and immediate impact of our currently available AI tools doing the "good enough" in very positive ways.
jaynetics · 1h ago
I think some more white-collar jobs might be affected, not just creative ones. There is a substantial number of jobs where the end result needs to be of a certain quality, but all context can be inferred or provided up front, and checking and correcting a result is quicker than producing it manually. Think e.g. law or translations. Translators, proofreaders, and others are already feeling the squeeze.
In other cases, like software development, there is a split between tasks of a narrow scope and those of a wide scope. Creating one-shot pieces of software is kind of a solved issue now. Maintaining some relatively self-contained piece of software might soon turn into a task for single maintainers that review AI PRs. The more the bottleneck is context tracking, as opposed to producing code, the less useful the AI. I am uncertain, however, how the millions of devs in the world are distributed on this continuum.
I am also skeptical about legal protections or unionization, as many of these jobs are quite suited to international competition.
amazingamazing · 6h ago
Did a quick search for profit: nada.
I am genuinely curious if lay people will pay for AI. I know people who spend literally hours daily on YouTube and complain about the ads and don’t want to pay the quarter a day to get rid of em.
Will these people pay $50 a month for gpt? We will see.
jonas21 · 6h ago
ChatGPT already has over 20 million paying subscribers, even in its early state.
But asking if average people will pay for AI is the wrong question. It's like asking if average people will pay for Salesforce or Oracle. Even if consumers don't pay for AI en masse, their employers will as the value proposition is a lot clearer.
amazingamazing · 6h ago
Is it profitable though? I do agree that employers may pay, but it’s unclear to me at least if it is profitable for them either.
jononor · 2h ago
OpenAI is not at all profitable right now, which is their plan. They have from the start been focused on hypergrowth to become one of the very biggest players. The monetization phase comes later, probably still a few years from now. Of course, it remains to be seen whether they are actually able to capture a significant amount of the value when the time comes. There are many other strong players: Google and Microsoft have massive advantages wrt distribution and data collection, Meta as well, and they have existing profitable businesses, so they can afford to give away "AI" for a long time. The cost to serve users right now is quite high; that will likely go down by a factor of 10 over the next 10 years (assuming energy prices don't go haywire).
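(To put "10x cheaper over 10 years" in per-year terms, a quick compounding check, my own arithmetic rather than anything from the report:)

    # A factor-of-10 cost reduction over 10 years compounds to roughly a 20% drop per year.
    annual_factor = 0.1 ** (1 / 10)          # ≈ 0.794
    print(f"{1 - annual_factor:.1%} cheaper per year")  # ≈ 20.6%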
jononor · 2h ago
For knowledge workers in Western companies, 20 USD/month for a ChatGPT-style tool seems like it is likely worth it for employers to front.
I think over time we will see this included in "standard office tools" like email/messaging/videoconf/documents. Which would make Microsoft and Google very well positioned.
But probably everyone will adopt it - so it may not give a competitive advantage to any given company.
vivzkestrel · 6h ago
What he's trying to ask is whether 5 billion people will pay $20 a month for the best AI model out there.
sottol · 6h ago
It's easy to forget that half the world's population lives on a few dollars a day [1], and sparing $20 a month is unrealistic.
Also, access to market leading AI is not going to cost $20/mo when everything is said and done.
[1] https://blogs.worldbank.org/en/developmenttalk/half-global-p...
I am pretty sure the bottom 50% or even 80% of the population does not need AI chat tools that much.
abletonlive · 5h ago
Right, because it's going to be $20 a month for everybody around the world. That's how the world works, right?
czarofvan · 6h ago
I think people who realize it's not about avoiding inconvenience but about a tool that will make life significantly easier will most likely pay.
Just like paying for a smartphone or data or broadband.
fernly · 6h ago
Based only on the headings and titles of the charts on the first few pages? I smell an AI writing. I mean, do these titles sound like an intelligent human wrote them?
"Charts paint thousands of words..."
"Leading USA-Based LLM User" -- what does that even mean? With a value of "800MM" where MM is what units?
"AI Usage + Cost + Loss Growth = Unprecedented" -- how can you add three things that don't appear to be commensurable? Also, what is "Loss Growth" and how does it add to "Usage"?
There are plenty more examples in the charts later. The Overview section, while not so obviously AI-written, has an over-the-top enthusiasm and loose structure that makes me squeamish. I don't trust it, but that's just me I guess.
intended · 4h ago
MM is USD millions, in financial documents.
kumarvvr · 7h ago
> 75 Enterprise AI Adoption = Rising Priority…Yum! Brands – Byte by Yum! (2/25)
This does not pass the smell test.
https://www.yum.com/wps/portal/yumbrands/Yumbrands/news/pres...
Above is the press release. AI comes up 3 times, including in the title. The release talks about an AI-driven platform to streamline operations for franchisees. OK, fine. But how exactly does AI enhance operations at the franchisee level? Especially if all they are providing as part of the SaaS is "Backed by artificial intelligence, Byte by Yum! offers franchisees leading technology capabilities with advantaged economics made possible by the scale of Yum!"
I mean, sure, if at their corporate office they are using data analysis to predict demand and allocate resources or raw materials to improve profitability, OK. But that can be done quite effectively with plain statistical analysis.
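The kind of plain statistical forecast I mean, as a toy sketch (illustrative only, obviously not Yum!'s actual system; the function and data names are made up):

    # Predict demand for an item from a trailing average of the same weekday's
    # sales. Assumes daily_sales[0] falls on a Monday (0=Mon .. 6=Sun).
    from statistics import mean

    def forecast_demand(daily_sales, weekday, weeks=4):
        same_day = [s for i, s in enumerate(daily_sales) if i % 7 == weekday]
        return mean(same_day[-weeks:])

    # e.g. forecast_demand(last_90_days_of_taco_sales, weekday=4)  # next Friday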
Where exactly AI comes into the picture is unclear.
Yet the slides show this as some sort of monumental achievement, especially highlighting "25000 restaurants are using at least 1 product". Sure, they are going to use it, if that is what the franchise owner provides them, probably at additional cost.
Yum! Brands has 61,000 restaurants per their website. It looks like they rolled out a new solution and about a third have been successful in adopting the new platform; others may be in line to do the same. Is this related to AI, or is this related to regular software changes / updates / revamps?
apwell23 · 4h ago
lmao why do restaurants need AI
codr7 · 7h ago
Big surprise, it's VC that wants a piece of the bazillions currently being wasted.
nwlotz · 5h ago
If even the hyperscalers like OpenAI aren't making a profit, when exactly do companies adopting AI start making money? I wonder if we'll start seeing the hyperscalers raise prices and squeeze customers when investors finally start expecting a return, and whether that will put the brakes on adoption as a result.
user32489318 · 4h ago
Depends a lot on investor expectations. If they think the opportunity for growth and market expansion is coming to an end, then they will push for higher returns.
Corporates are always going to talk about the current in-vogue thing.
They talked about Blockchain.
They talked about crypto.
They talked about anything that, if not spoken about, would make investors feel the board was behind the times.
> 72 Enterprise AI Adoption = Rising Priority…Bank of America – Erica Virtual Assistant (6/18)
OK, fine. People are using it. Two important questions: did the users have other choices, over which they chose this? Did the users feel happier than with other methods?
Without this data, this is just feeding the hype train.
w10-1 · 4h ago
Super helpful assembly of lots of data on point - a bit too much to digest quickly. Mary Meeker is great.
Some limitations:
- Unhelpful modernity-scale trend/hype-lines. Everyone knows prospects are big and real.
- No significant coverage of robotics, factory automation? (TAM for physical products is 15X search+streaming+saas)
- No insight? No new categories, surprising predictions, critical technologies identified?
Surprises:
- AI productivity improvements are marginal, esp. relative to the concern over jobs
- The US share of top public companies by market cap increased from ~50% in 1995 to ~85% in 2025. Seems big; or is it an artifact of the demographics of retirement investments? Or is it less significant given growing private capital markets?
What I would like addressed: The AI means of production seem very capital-intensive, even as the marginal cost of consumption is SaaS-scalable (i.e., big producers, small consumers). I have some concern that AI development directions are decided in relatively few companies (which are biased toward SaaS over manufacturing, where consumers are closer to producers in size). This increases the likelihood of a generational whiff (a mistake I suspect China won't make).
As an aside, I wish Elon Musk would pivot xAI out of SaaS AI (and science AI), focusing exclusively on manufacturing robotics -- dogfooding at Tesla, SpaceX and even Boring -- with the simpler autonomy of controlled environments but the hard problem of not custom-building everything every time. They're well positioned, and he could learn some discipline from working with downstream and upstream partners as peers (instead of slavish employees, fan investors, and dull consumers or slow governments as customers). He'd redeem himself as a builder of stuff that builds, so we can make infrastructure for generations to come.
kumarvvr · 8h ago
> 39 AI Developer Growth (Google Ecosystem as Proxy) = +5x to 7MM Developers Y/Y
I am sure a majority of those are from India.
A small industry has formed here imparting "AI" training, very cheaply, online, with courses lasting from a few months to 2 years.
https://www.shiksha.com/online-courses/artificial-intelligen...
The Indian School of Business, one of the highest-ranked business schools in India, is offering leadership courses with a touch of AI for as little as 10 lakh rupees (about 11,000 USD). For a majority of IT folk in India, that is a very reasonable amount for a course from a very reputed institute.
Many IT folk here are scared to the core about their jobs and there has been a mass movement towards AI certificates. While the courses teach the basics, none of them are at a university level. Most of the students are only users of tools though.
Would you call them AI developers? Meh. They get by. Most of the work in India is back-office work anyway, and these AI engineers end up doing data related tasks mostly.
Very few are actually building worthwhile AI stuff.
redwood · 8h ago
Did self driving (assumed Waymo) really exceed rideshare by number of rides per week in SF already?! Wow I need to visit more often
patrickhogan1 · 8h ago
The graph on page 6 makes it look like Waymo has overtaken "Ride Share" in general—but the chart is based on the data from page 302, which only compares Waymo and Lyft.
Waymo is clearly growing fast, but it's not bigger than Uber—yet. Super impressive that it's already surpassed Lyft. From personal experience, it also seems to have driven Uber prices down materially.
Page 6: https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
Page 302: https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
In SF proper. Most revenue is airport rides and they’re excluding that.
thomassmith65 · 8h ago
Did the Meeker report always go on the site immediately like this? I could swear they used to delay them.
aaronbrethorst · 8h ago
They used to be published by Kleiner Perkins, and she's now at BOND—a VC firm she founded (cofounded?)
matkoniecz · 3h ago
"Share of total current users/years in" on slide 3 seems like nonsense metric to me that is far from being interesting or relevant. Yes, we know that there was hype growth for LLM while Google has long history of growth. This chart gives no useful info and pretends to reveal something while just being repeat of "USA-based LLM users" on the same page.
Looks like yet another "numbers going up" report, not going to spend time on reading 300+ slides like that.
Maybe I am unfair, but I have no time to read every single report and I am willing to read only exceptional ones with no (or very limited) marketing slop.
cs702 · 7h ago
340 slides. That's a lot, even for Meeker.
Many of the slides are packed with details.
Who, exactly, has time to read all that stuff?
Only AI models, I'd guess.
matkoniecz · 4h ago
340 slides (not even dense text!) is not much iff content is worth reading.
(yesterday evening I read about 400 pages of text as entertainment, it is not much - though admittedly this specific one failed as entertainment and it was not content worth reading)