Does anybody know of other evidence? If not, then it looks bogus, a case of "il faudrait l'inventer" which got traction by piggybacking on an old-fashioned fraud story.
To sum up: the substantiated claim is boring and the lurid claim is unsubstantiated. When have we ever seen that before? And why did I waste half an hour on this?
There are personal testimonials in the indiandevelopers subreddit from quite a while ago, if those are to be believed.
pyman · 1h ago
The news about BuilderAI using 700 devs instead of AI is false. Here's why.
I've seen a lot of posts coming out of India claiming "we were the AI". So I looked into it to see if Builder AI was lying, or if this was just a case of unpaid developers from India spreading rumours after the company went bust.
Here's what some of the devs are saying:
> "We were the AI. They hired 700 of us to build the apps"
Sounds shocking, but it doesn't hold up.
The problem is, BuilderAI never said development was done using AI. Quite the opposite. Their own website explains that a virtual assistant called "Natasha" assigns a human developer to your project. That developer then customises the code. They even use facial recognition to verify it's the same person doing the work.
> "Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked."
I also checked the Wayback Machine. No changes were made to that site after the scandal. Which means: yes, those 700 developers were probably building apps, but no, they weren't "the AI". Because the company never claimed the apps were built by AI to begin with.
Verdict: FAKE NEWS
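For anyone who wants to repeat that Wayback Machine check, the Internet Archive's public CDX API lists every capture of a page along with a content digest that only changes when the page content changes. A rough Python sketch (the helper function and field choices are my own illustration, not anything from Builder.ai or the commenter):

```python
# Build a query against the Internet Archive's CDX API to list snapshots
# of a page in a date range. Comparing the `digest` column across rows
# shows whether the page content actually changed in that window.
from urllib.parse import urlencode

CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def cdx_query_url(page: str, start: str, end: str) -> str:
    """Return a CDX API URL listing captures of `page` between two dates (YYYYMMDD)."""
    params = {
        "url": page,
        "from": start,
        "to": end,
        "output": "json",
        "fl": "timestamp,digest",  # capture time + content hash per snapshot
    }
    return f"{CDX_ENDPOINT}?{urlencode(params)}"

print(cdx_query_url("builder.ai/how-it-works", "20250101", "20250701"))
```

Fetching that URL (with `curl` or `requests`) returns one row per capture; identical digests before and after the scandal would support the "no changes were made" claim.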
pyman · 1d ago
I couldn't find any reference on the BuilderAI website claiming they use GenAI to build software. So the second claim lacks evidence.
Update: They mention AI to assemble features, not to generate code. So it's impossible to know whether they were actually using ML (traditional AI) to resolve dependencies and pull packages from a repo.
ivape · 1d ago
Speculating: don’t they offer dev services that are supposed to be done by AI? If the dev services were actually delivered by devs, that would be the scam. Now that I’ve said the second part, it does seem lurid, because who the hell is paying for AI-first code deliverables?
—-
Message to HN:
Instead of founding yet another startup, please build the next Tech Vice News and fucking go to the far corners of the tech world like Shane Smith did in North Korea with a camera. I promise to be a founding subscriber at whatever price you set.
Things you’ll need:
1) Credentialed Ivy League grad. Make sure they are sporadic like that WeWork asshole.
2) Ex VC who exudes wealth with every footstep he/she takes
3) The camera
4) And as HBO Silicon Valley suggests, the exact same combination of white guy, Indian guy, Chinese guy to flesh out the rest of the team.
See, I need to know what it’s like working for a scrum master at Tencent, for example, during crunch time. Also, whatever the fuck goes on inside a DeFi company in executive meetings. And of course, find the next Builder.ai, or at least the Microsoft funding round discussions. We’ve yet to even get a camera inside those Arab money meetings where Sam Altman begs for a trillion dollars. We shouldn’t live without such journalism.
pyman · 1d ago
The short answer is no, their website doesn't claim that development is done using AI.
My gut feeling is that a lot of people, including developers, are posting hate messages and spreading fake news because of their fear of AI, which they see as a threat to their jobs.
If you look at their website, builder.ai, they tell customers that their virtual assistant, "Natasha", assigns a developer (I assume from India):
> Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked.
Source: https://www.builder.ai/how-it-works
They also have another page explaining how they use deep learning and transformers for speech-to-text processing. They list a bunch of libraries like MetaPath2Vec, Node2Vec, GraphSage, and Flair:
Source: https://www.builder.ai/under-the-hood
It sounds impressive, but listing libraries doesn't prove they built an actual LLM.
So, the questions that remain unanswered are:
1. Did Craig Saunders, the Head of AI at Builder.ai (and ex-Director of AI at Amazon), ever show investors or clients a working demo of Natasha, or a product roadmap? How do we know Natasha was actually an LLM and not just someone sitting in a call centre in India?
2. Was there a technical team behind Saunders capable of building such a model?
3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?
Having said that, the company went into insolvency because the CEO and CFO misled investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, BuilderAI reportedly engaged in "round-tripping" with VerSe Innovation. This raised red flags for investors, regulators and prosecutors, and led to bankruptcy proceedings.
paxys · 1d ago
> Less than two months ago, Builder.ai admitted to revising down core sales numbers and engaging auditors to inspect its financials for the past two years. This came amidst concerns from former employees who suggested sales performance had been inflated during prior investor briefings.
I was hoping for something interesting, but it is just plain old fashioned accounting fraud.
This is fake news. Builder.ai, like any other dev shop, had clients and was building apps using developers in India, pretty much like Infosys or any other Indian dev shop. Nothing wrong with that.
From what I read online, the real issue was "Natasha", their virtual assistant powered by a dedicated foundation model. They ran out of money before it got anywhere.
profsummergig · 1d ago
This is so obviously fake news that it's a good litmus test of the people who are boosting it.
There's no way that a team of programmers can ever produce code quickly enough to mimic anything close to the response time of a coding LLM.
threeseed · 1d ago
But it’s not just about coding quickly but also correctly.
Coding LLMs haven't solved hallucination, they reach for antiquated libraries and technologies, and they screw up large codebases because of their limited context size.
Given a well-architected component library and set of modules, I would bet that on average I could build a correct website faster.
pyman · 1d ago
I did a bit of research…
Builder.ai didn't tell investors they were competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. They told investors they were building a virtual assistant for customers. This assistant was meant to "talk" to clients, gather requirements and automate parts of the build process. Very different space.
And like I said in another comment, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.
Questions:
1. Did Craig Saunders, the VP of AI (and ex-Amazon), ever show investors or clients any working demo of Natasha? Or a product roadmap?
2. Was there a technical team behind Saunders capable of building such a model?
3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?
daveguy · 1d ago
> creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.
Creating a dedicated pretrained model is a prerequisite of any LLM. What do you mean by "full LLM"?
pyman · 1d ago
Just to clarify: I said "pre-trained foundation model".
LLMs are a type of foundation model, but not all foundation models are LLMs. What Builder.ai was building with Natasha sounded more like a domain-specific assistant, not a general-purpose LLM.
bartread · 1d ago
> This is fake news. Builder.ai, like any other dev shop, had clients and was building apps using developers in India, pretty much like Infosys or any other Indian dev shop. Nothing wrong with that.
Yeeeah... that's a fairly disingenuous take.
The difference between every other offshore dev shop backed by developers in India and Builder.ai is that - and I say this as someone who thinks Infosys is a shit company - Infosys and all those other dev shops are at least up front about how their business works and where and who will be building your app. Whereas Builder.ai spent quite a long time pretending like they had AI doing the work when actually it was a lot of devs in India.
That is deliberately misleading and it is not OK. It's fraudulent. It's literally what Theranos did with their Edison machines that never worked: they claimed they had this wondrous new blood-testing technology, when they were actually running tests on Siemens machines, diluting blood samples, etc. The consequences of Theranos's actions were much more serious (misdiagnoses and, indeed, missed diagnoses for thousands of patients) than apps built by humans rather than AI, but lying and fraud is lying and fraud.
pyman · 1d ago
I don't agree. Even Infosys markets AI as part of their offering, just look at their "AI for Infrastructure" pitch:
https://www.infosys.com/services/cloud-cobalt/offerings/ai-i...
Every big dev shop does this. Overselling tech happens all the time in this space. The line between marketing and misleading isn't always so clear. The difference is Builder.ai pushed the AI angle harder, but that doesn't make it Theranos-level fraud.
aylmao · 1d ago
> The line between marketing and misleading isn't always so clear.
In general I kind of disagree with this. I am not a lawyer, so I don't know all the details, but if you look for it, you should be able to find the line, since it's generally illegal to mislead customers. There's also a whole set of contractual and perhaps even legal obligations when it comes to investors.
For contracts and the law to be enforceable, they need to draw lines as clearly as possible. There's always some amount of detail that's up to interpretation, but companies pay legal counsel to make sure they don't cross these lines.
Now, specifically in this case, I do agree with you. This case doesn't seem to be a legal matter of misleading customers or investors (thus far). Viola Credit did seize $37 million, so IMO there clearly was a violation of contract in all this, but it seems like that had nothing to do with the whole AI overselling.
aprilthird2021 · 1d ago
> Overselling tech happens all the time in this space.
Overselling is fraud and is a crime at a certain point, which they clearly passed otherwise they wouldn't have had their lenders pull back money and leave them bankrupt
pyman · 1d ago
Just to play the devil's advocate: if a software company tells you your data is secure and then someone hacks their server and steals your photos and personal data, did their CEO and marketing department oversell their level of security? Is this fraud as well?
aprilthird2021 · 1d ago
"Your data is secure" is known to never be 100%. But what assessments and technology they say they use for security needs to be followed. And if it's found out that those are lies, then it's fraud.
Kind of like how these guys lied about the volume of sales they had. Textbook fraud. They aren't in trouble for saying "AI is going to be great"
pyman · 1d ago
I agree. But using the "your data is secure" analogy, BuilderAI never actually told customers that development was done using AI, that's something people made up. If you look at their website (builder.ai) they explain that their virtual assistant "Natasha" assigns a developer (I assume from India). That part doesn’t sound like fraud to me, and it's the part everyone seems to be focusing on.
The company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, the Indian founders and accountants reportedly engaged in round-tripping with VerSe Innovation. That raised red flags for investors, regulators, and prosecutors, and led to bankruptcy proceedings.
mistercheph · 1d ago
Arguably, Theranos was also somewhere in a gray area between marketing and fraud.
Everyone in the industry incentivizes and participates in this behavior, but once in a while we grab a few stand-out individuals to scapegoat for all the harm caused by and to the entire group. Make sure you pick someone big and ugly enough to be credibly dangerous to the whole group, but not so dangerous and well connected that you can't be sure everyone around them will scatter when the card flips on them.
It's the same reason groups of individual humans do it: scapegoating is a much lower-resistance path than the horrifying alternatives (self-consciousness, reflection, love).
pyman · 1d ago
Theranos was dealing with people's health. Misdiagnoses, delayed treatments, etc, that's real harm. Imo, comparing that to building web apps isn't the same.
aprilthird2021 · 1d ago
The actual crime the Theranos founder went to jail for was not misdiagnosing people. It was defrauding investors, because she made them believe her machines were doing the tests when really they were being sent out to separate labs.
pyman · 1d ago
Completely different story. With Theranos the investors sued the founders, with Builder AI they didn't. This suggests they knew what was really going on, so it wasn't fraud in their eyes.
aprilthird2021 · 1d ago
It is not a completely different story. The lender yanked back the money they lent because they found out about fraudulent sales numbers. That led to the bankruptcy. It was still the people whose money was in the game who brought the company down in both scenarios because fraud is a big red line for anyone whose money is on the line
pyman · 1d ago
I understand where you're coming from, but we need to stick to the facts. If there are no court cases, we can't imply that fraud was committed. We don't know what kind of agreements were in place, why the money was being transferred, or what the expectations were on both sides.
We also don't know what was discussed in private. For example, it could have been something like: "We want to be part of this investment opportunity, we'll give you $40 million. But if regulators start asking questions, we want the money back."
Without full context or legal findings, everything else is just speculation.
I'm surprised no one is talking about Microsoft's investment in BuilderAI, a total loss. It's unlikely they'll recover much, if anything. So why aren't they suing the CEO and CFO? Maybe some of the issues were handled quietly behind the scenes to avoid public exposure or reputational damage? I don't know.
Retric · 1d ago
Theranos was using the same testing equipment and techniques as any other lab for most of their diagnostic services. Which is how they avoided being instantly exposed when their results ended up being meaningless. “In October 2015, John Carreyrou of The Wall Street Journal reported that Theranos was using traditional blood testing machines instead of the company's Edison devices to run its tests, and that the company's Edison machines might provide inaccurate results.” https://en.wikipedia.org/wiki/Theranos
They did plenty of shady shit including producing poor results, but that’s largely incompetence independent of fraud vs intentionally putting people’s lives on the line.
IMO, the fraud kind of hides the equally important story, where an incompetent 19-year-old college dropout shockingly doesn't know how to effectively set up and manage complex systems.
lotsofpulp · 1d ago
>Arguably, Theranos was also somewhere in a gray area between marketing and fraud.
Theranos was clear fraud. She claimed scientific advances that did not exist.
mistercheph · 1d ago
What about traditional auto manufacturers making claims about solid state battery technology they will achieve in the next decade that they haven't yet?
There are always unsolved engineering and scientific challenges that stand between today and future product, and nothing is guaranteed, but you have to sell investors on the future technology (see: frontier model makers pushing AGI/ASI hype)
Obviously there are differences between Toyota's SS battery claims and Theranos' claims, but it's not a black and white line, it's a spectrum.
aprilthird2021 · 1d ago
Why are so many people here pretending fraud is ambiguous?
Saying "We will have great batteries 10 years from now" is not fraud. It's your belief about the future. Everyone knows no one can predict the future.
Saying "this hydrogen powered truck works, here is a video of it running on the road right now" but the video is edited so you don't see that it's going down hill and the car isn't actually running" that's fraud.
Theranos wasn't in trouble for saying their machines would be great one day. They got in trouble for lying about the current state of things, saying they were performing blood tests on their machines when they were not.
pyman · 15h ago
BuilderAI never actually told customers that development was done using AI, that's something people made up after the company went bust. If you look at their website (builder.ai), they explain that their virtual assistant "Natasha" assigns a developer, and then uses face recognition to verify the identity of the developer.
Take a minute to visit their site and get informed. We live in a time where people form opinions just by reading a headline.
This is so weird. It's not that hard to actually build an app builder. There are multiple open-source repos (Bolt etc.); they could have just paid their "AI engineer" to actually build an AI engineer.
Shameless plug, but we built https://v1.slashml.com in roughly 2 weeks. Granted, it's not as mature, but we don't have billions :)
driverdan · 1d ago
> its not that hard to actually build an app builder
Besides simple one page self-contained apps, yes, it's quite hard. So hard that it's still an unsolved problem.
Not really. Lovable, v0, and Bolt are all multipage. They connect to Supabase for db and auth. Replit can spin up custom dbs on demand, and has a full-fledged IDE.
I did my research before jumping into this space :)
aitchnyu · 1d ago
Which ones prevent anybody with a browser from accessing other users' data? I have been discussing vibe coding and Supabase's Postgres row-level security misconfigurations.
fazkan · 19h ago
replit, from what I know, and lovable to a certain extent.
glutamate · 1d ago
They launched in 2016
throwaway314155 · 1d ago
Should have pivoted faster.
xkcd-sucks · 1d ago
It's plausible they started with a typical software consultancy and its crappy in house app builder scripts, and reformed it as an AI thing in order to inflate its value?
downrightmike · 1d ago
That'd be shameful, and a complete disgrace. It'd be like adding "bitcoin" to your company name or your 10-K filings a few years ago to boost your stock.
mikestew · 1d ago
In case anyone thinks parent is speaking hypothetically:
News to me, buddy. This is perhaps a useless comment, but then I think: articles resurface every now and again, and that's intentional and welcome for those who missed them. This isn't exactly that, of course, but it makes me think it's worth a comment: news is relative. Discussion ensues; it's all good.
macintux · 1d ago
Except that it's contrary to the site FAQ.
> If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.
ricardobeat · 1d ago
So... where did the $450M go? A team of 700 developers in India, employed over eight years, would have cost a fraction of that.
rokob · 1d ago
Why do you think it would be to pay for actual costs? The whole point of running a scam is to spend the money.
antithesizer · 1d ago
I really wish I'd read this before starting my career as a scammer ten years ago.
CSMastermind · 1d ago
How do you figure? $450M / 8 years / 700 developers = $80k / year per developer.
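That division is easy to sanity-check:

```python
# Back-of-the-envelope: total funding spread over 8 years and 700 developers
total_funding = 450_000_000  # USD raised
years = 8
developers = 700

per_dev_per_year = total_funding / years / developers
print(f"${per_dev_per_year:,.0f} per developer per year")  # roughly $80,357
```

Whether ~$80k/year per head is "a fraction" depends on which Indian salary figures you believe, which is exactly what this subthread debates, and it ignores everything that isn't a salary.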
cubano · 1d ago
Typically, scams like this are very top-heavy with the vast majority of the pilfered cash going to a few well-placed "bros" at the top of the company pyramid.
My guess? Most of the cash is socked away in BTC or some such wealth sink just waiting for the individuals to clear their bothersome legal issues.
owebmaster · 1d ago
> My guess? Most of the cash is socked away in BTC
Had they done this years ago, they would be so rich it would be worth keeping builder.ai going just to avoid legal problems.
casion · 1d ago
Average salary for a developer in India is about 1/10th of that.
spamizbad · 1d ago
That hasn't been the case in like 20 years. Engineering salaries are around 40K USD, and they can stretch into six figures at major companies with deep pockets wanting to attract elite talent. The band is pretty wide and largely depends on whether you work in a body-shop consultancy (low end) or a major tech company like Google (high end).
And, like many things in this world, you'll find you get what you pay for.
darth_avocado · 1d ago
Median salary of a reasonable developer is about half of that, and if you are talking about Microsoft, Uber, Google etc., then that’s the salary of a senior dev.
More importantly, we’re all pretending the only cost of building anything is salaries. A company that size could blow a million dollars a month just on AWS, and the AI stuff is waaaay more expensive.
aprilthird2021 · 1d ago
No, it's not
polyaniline · 1d ago
It is
bigfatkitten · 1d ago
Only if they’re all ex FAANG staff/principal.
paxys · 1d ago
They have been operating since 2016. Companies can and have burned through $450M in funding a hell of a lot faster than that.
OpenAI is on track to spend $14 billion this year.
monksy · 1d ago
The chai budget is a completely justifiable expense. (Probably more so than the difference being run away with.)
TrackerFF · 1d ago
These kind of scumbags pocket 90% of the cash.
Wouldn't surprise me if the developers were hired from sweatshop staffing agencies, or just working directly for minimum wage - if that even.
more_corn · 1d ago
[flagged]
dang · 1d ago
Please don't do this here.
pryelluw · 1d ago
$400M!
I get $100M.
Maybe even $200M.
But $400M?
Unforgivable.
nadermx · 1d ago
You figure 700 employees and $400M. Avg cost per hooker can't be more than a few hundred.
So by this math each employee got 1,900-ish hookers. Since I figure male hookers for the female employees were cheaper, we'll round up to 2,000.
That is in fact unforgivable. 1,000 would have been acceptable. 2,000... just excess.
pryelluw · 1d ago
Did you factor in the nose candy?
That estimate seems off. Please crunch the numbers once again. Make sure to factor in inflation.
kridsdale1 · 1d ago
Shit, those benefits are way better than Suicide Bomber.
pyman · 1d ago
Elon Musk spent $6 billion training his model. Sam Altman spent $40 billion. Where did Builder AI's $500 million go? Probably into building a foundation model, not even a full LLM.
1oooqooq · 1d ago
Shhhh. We don't talk about the ongoing scams. Those you keep hyping while you try to sell your SaaS around them.
Really weird considering how much AI is actually available now
wongarsu · 1d ago
If you have an idea for a cool AI startup it's faster to build your first prototype without the actual AI, just faking that part. But if your Actual Indians had 95% accuracy and you can't get an AI to do more than 85% then you are kind of stuck if you raised money and got customers pretending that your Actual Indians are Artificial Intelligence.
TYPE_FASTER · 1d ago
This is the way. Funny how AI could also stand for Actual Intelligence. Or, Artisanal Intelligence? "Now 100% organic handcrafted thoughts, unique for your business problem."
more_corn · 1d ago
Not true. It’s super easy to fine-tune and deploy one of the open models. I should teach a course.
msgodel · 1d ago
The technical aspects of training and tuning are trivial. GP is pointing out that you might not be able to get the model to succeed at the task as often for any number of reasons that you won't know before you actually train one.
Although I guess your point is that it's also cheap to train them, probably cheaper than doing this. But startups are started by social people, not technical people. Stuff like this will always be expensive for social people since they have to pay one of us to do it. YC interviews their CEOs from time to time, it's really clear that's how that works.
mrweasel · 1d ago
Also, it can’t have been fast. Didn’t customers and investors find it weird that Copilot spits out code as fast as you can type, but Builder.ai needed days or weeks to generate your app? Or were these Indian developers just really, really fast?
givemeethekeys · 1d ago
Maybe they use GPT :)
helloplanets · 1d ago
There's this one super secret agentic framework that beats all the benchmarks...
hyperadvanced · 1d ago
Available sure, but cost effective? My guess is that they tried a lot of things to get ChatGPT to work and burnt out of money before it got cheap enough to fit with a reliable business model. Early but not wrong, I guess.
immibis · 1d ago
Almost like it doesn't work as well as they market it as working?
The boring claim is that the company inflated its sales through a round-tripping scheme: https://www.bloomberg.com/news/articles/2025-05-30/builder-a... (https://archive.ph/1oyOw). That's consistent with other recent reporting (e.g. https://news.ycombinator.com/item?id=44080640)
The lurid claim is that the company's AI product was actually "Indians pretending to be bots". From skimming the OP and https://timesofindia.indiatimes.com/technology/tech-news/how..., the only citation seems to be this self-promotional LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7334521... (https://web.archive.org/web/20250602211336/https://www.linke...).
(Thanks to rafram and sva_ for the links in https://news.ycombinator.com/item?id=44172409 and https://news.ycombinator.com/item?id=44175373.)
I've seen a lot of posts coming out of India claiming "we were the AI". So I looked into it to see if Builder AI was lying, or if this was just a case of unpaid developers from India spreading rumours after the company went bust.
Here's what some of the devs are saying:
> "We were the AI. They hired 700 of us to build the apps"
Sounds shocking, but it doesn't hold up.
The problem is, BuilderAI never said development was done using AI. Quite the opposite. Their own website explains that a virtual assistant called "Natasha" assigns a human developer to your project. That developer then customises the code. They even use facial recognition to verify it's the same person doing the work.
> "Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked."
Source: https://www.builder.ai/how-it-works
I also checked the Wayback Machine. No changes were made to that site after the scandal. Which means: yes, those 700 developers were probably building apps, but no, they weren't "the AI". Because the company never claimed the apps were built by AI to begin with.
Verdict: FAKE NEWS
Update: They mention AI to assemble features, not to generate code. So it's impossible to know whether they were actually using ML (traditional AI) to resolve dependencies and pull packages from a repo.
—-
Message to HN:
Instead of founding yet another startup, please build the next Tech Vice News and fucking goto the far corners of the tech world like Shane Smith did with North Korea with a camera. I promise to be a founding subscriber at whatever price you got.
Things you’ll need:
1) Credentialed Ivy League grad. Make sure they are sporadic like that WeWork asshole.
2) Ex VC who exudes wealth with every footstep he/she takes
3) The camera
4) And as HBO Silicon Valley suggests, the exact same combination of white guy, Indian guy, Chinese guy to flesh out the rest of the team.
See, I need to know what’s it like working for a scrum master in Tencent for example during crunch time. Also, whatever the fuck goes on inside a DeFi company in executive meetings. And of course, find the next Builder.ai, or at least the Microsoft funding round discussions. We’ve yet to even get a camera inside those Arab money meetings where Sam Altman begs for a trillion dollars. We shouldn’t live without such journalism.
My gut feeling is that a lot of people, including developers, are posting hate messages and spreading fake news because of their fear of AI, which they see as a threat to their jobs.
If you look at their website, builder.ai, they tell customers that their virtual assistant, "Natasha", assigns a developer (I assume from India):
> Natasha recommends the best suited developer for your app project, who then customises your code on our virtual desktop. We also use facial recognition to check that the developer working on your code is the same one Natasha picked.
Source: https://www.builder.ai/how-it-works
They also have another page explaining how they use deep learning and transformers for speech-to-text processing. They list a bunch of libraries like MetaPath2Vec, Node2Vec, GraphSage, and Flair:
Source: https://www.builder.ai/under-the-hood
It sounds impressive, but listing libraries doesn't prove they built an actual LLM.
So, the questions that remain unanswered are:
1. Did Craig Saunders, the Head of AI at Builder.ai (and ex-Director of AI at Amazon), ever show investors or clients a working demo of Natasha, or a product roadmap? How do we know Natasha was actually an LLM and not just someone sitting in a call centre in India?
2. Was there a technical team behind Saunders capable of building such a model?
3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?
Having said that, the company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, BuilderAI reportedly engaged in "round-tripping" with VerSe Innovation. This raised red flags for investors, regulators and prosecutors, and led to bankruptcy proceedings
I was hoping for something interesting, but it is just plain old fashioned accounting fraud.
No comments yet
From what I read online, the real issue was "Natasha", their virtual assistant powered by a dedicated foundation model. They ran out of money before it got anywhere.
There's no way that a team of programmers can ever produce code quickly enough to mimic anything close to the response time of a coding LLM.
Coding LLMs do not solve the problem of it hallucinating, using antiquated libraries and technologies and screwing up large code bases because of the limited context size.
Given a well architected component library and set of modules I would bet that on average I could build a correct website faster.
Builder.ai didn't tell investors they were competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. They told investors they were building a virtual assistant for customers. This assistant was meant to "talk" to clients, gather requirements and automate parts of the build process. Very different space.
And like I said in another comment, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.
Questions:
1. Did Craig Saunders, the VP of AI (and ex-Amazon), ever show investors or clients any working demo of Natasha? Or a product roadmap?
2. Was there a technical team behind Saunders capable of building such a model?
3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?
Creating a dedicated pretrained model is a prerequisite of any LLM. What do you mean by "full LLM"?
LLMs are a type of foundation model, but not all foundation models are LLMs. What Builder.ai was building with Natasha sounded more like a domain-specific assistant, not a general-purpose LLM.
Yeeeah... that's a fairly disingenuous take.
The difference between Builder.ai and every other offshore dev shop backed by developers in India is that - and I say this as someone who thinks Infosys is a shit company - Infosys and all those other dev shops are at least up front about how their business works and about where, and by whom, your app will be built. Builder.ai, by contrast, spent quite a long time pretending AI was doing the work when it was actually a lot of devs in India.
That is deliberately misleading, and it is not OK. It's fraudulent. It's literally what Theranos did with their Edison machines that never worked: they claimed they had this wondrous new blood-testing technology while actually running tests on Siemens machines, diluting blood samples, and so on. The consequences of Theranos's actions were much more serious (misdiagnoses and, indeed, missed diagnoses for thousands of patients) than apps built by humans rather than AI, but lying and fraud is lying and fraud.
https://www.infosys.com/services/cloud-cobalt/offerings/ai-i...
Every big dev shop does this. Overselling tech happens all the time in this space. The line between marketing and misleading isn't always so clear. The difference is Builder.ai pushed the AI angle harder, but that doesn't make it Theranos-level fraud.
In general, I kind of disagree with this. I am not a lawyer, so I don't know all the details, but if you look for it, you should be able to find the line, since it's generally illegal to mislead customers. There's also a whole set of contractual, and perhaps even legal, obligations when it comes to investors.
For contracts and the law to be enforceable, they need to draw lines as clearly as possible. There are always some details left open to interpretation, but companies pay legal counsel to make sure they don't cross those lines.
Now, specifically in this case, I do agree with you. This case doesn't seem to be a legal matter of misleading customers or investors (thus far). Viola Credit did seize $37 million, so IMO there was clearly a violation of contract somewhere in all this, but it seems like that had nothing to do with the AI overselling.
Overselling becomes fraud, and a crime, at a certain point, and they clearly passed it; otherwise their lenders wouldn't have pulled back their money and left them bankrupt.
Kind of like how these guys lied about the volume of sales they had. Textbook fraud. They aren't in trouble for saying "AI is going to be great"
The company went into insolvency because the CEO and CFO were misleading investors by significantly inflating sales figures through questionable financial practices. According to the Financial Times, the Indian founders and accountants reportedly engaged in round-tripping with VerSe Innovation. That raised red flags for investors, regulators, and prosecutors, and led to bankruptcy proceedings.
Everyone in the industry incentivizes and participates in this behavior, but once in a while we grab a few stand-out individuals to scapegoat for all the harm caused by/to the entire group. Make sure you pick someone big/ugly enough to be a credible stand-in for the whole group, but not so dangerous and well connected that you can't be sure everyone around them will scatter when the card flips on them.
It's the same reason groups of individual humans do it: Scapegoating is a much lower resistance path to follow than the horrifying alternative (self-consciousness, reflection, love)
We also don't know what was discussed in private. For example, it could have been something like: "We want to be part of this investment opportunity, we'll give you $40 million. But if regulators start asking questions, we want the money back."
Without full context or legal findings, everything else is just speculation.
I'm surprised no one is talking about Microsoft's investment in BuilderAI, a total loss. It's unlikely they'll recover much, if anything. So why aren't they suing the CEO and CFO? Maybe some of the issues were handled quietly behind the scenes to avoid public exposure or reputational damage? I don't know.
They did plenty of shady shit including producing poor results, but that’s largely incompetence independent of fraud vs intentionally putting people’s lives on the line.
IMO, the fraud kind of hides the equally important story, where an incompetent 19-year-old college dropout shockingly doesn't know how to effectively set up and manage complex systems.
No comments yet
Theranos was clear fraud. She claimed scientific advances that did not exist.
There are always unsolved engineering and scientific challenges that stand between today and future product, and nothing is guaranteed, but you have to sell investors on the future technology (see: frontier model makers pushing AGI/ASI hype)
Obviously there are differences between Toyota's SS battery claims and Theranos' claims, but it's not a black and white line, it's a spectrum.
Saying "We will have great batteries 10 years from now" is not fraud. It's your belief about the future. Everyone knows no one can predict the future.
Saying "this hydrogen-powered truck works, here is a video of it running on the road right now" when the video is edited so you can't see that it's rolling downhill and not actually running under its own power: that's fraud.
Theranos wasn't in trouble for saying their machines would be great one day. They got in trouble for lying about the current state of things, saying they were performing blood tests on their machines when they were not.
Take a minute to visit their site and get informed. We live in a time where people form opinions just by reading a headline.
No comments yet
Shameless plug, but we built https://v1.slashml.com in roughly 2 weeks. Granted, it's not as mature, but we don't have billions :)
Besides simple one page self-contained apps, yes, it's quite hard. So hard that it's still an unsolved problem.
I did my research before jumping into this space :)
Insider trading charges filed over Long Island Iced Tea’s blockchain ‘pivot’ https://www.cnn.com/2021/07/10/investing/blockchain-long-isl...
> If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.
My guess? Most of the cash is socked away in BTC or some such wealth sink just waiting for the individuals to clear their bothersome legal issues.
Had they done this years ago, they would be so rich it would be worth keeping builder.ai going just to avoid legal problems.
And, like many things in this world, you'll find you'll pay for what you get.
https://www.levels.fyi/t/software-engineer/locations/greater...
But more importantly, we're all pretending the only cost of building anything is salaries. A company that size could blow a million dollars a month just on AWS, and the AI stuff is waaaay more expensive.
OpenAI is on track to spend $14 billion this year.
No comments yet
Wouldn't surprise me if the developers were hired from sweatshop staffing agencies, or just working directly for minimum wage - if that even.
I get $100M. Maybe even $200M.
But $400M?
Unforgivable.
So by this math each employee got 1,900ish hookers. Since I figure male hookers for the female employees were cheaper, we'll round up to 2,000.
That is in fact unforgivable. 1,000 would have been acceptable. 2,000... just excess.
That estimate seems off. Please crunch the numbers once again. Make sure to factor in inflation.
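For what it's worth, the parent's arithmetic roughly checks out. A quick sketch, taking the $400M burn and 700-developer figures from upthread and assuming a purely hypothetical $300 unit price (inflation not included):

```python
burn = 400_000_000   # total burn cited upthread, in USD
staff = 700          # developer headcount cited upthread
unit_price = 300     # hypothetical per-unit price, USD (pure assumption)

per_employee = burn / staff        # each employee's share of the burn, ~$571k
units = per_employee / unit_price  # ~1,905, i.e. the "1,900ish" above

print(round(per_employee), round(units))
```

So the "1,900ish" figure only moves if you dispute the assumed unit price, not the division.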
Microsoft-backed UK tech unicorn Builder.ai collapses into insolvency - https://news.ycombinator.com/item?id=44080640 - May 2025 (136 comments)
No comments yet
Although I guess your point is that it's also cheap to train them, probably cheaper than doing this. But startups are started by social people, not technical people. Stuff like this will always be expensive for social people since they have to pay one of us to do it. YC interviews their CEOs from time to time, it's really clear that's how that works.