> We will offer a free (rate-limited) service that everyone can use, once we have sorted out the legal issues regarding the possibility of mixing code snippets originating from open-source projects with different licenses (e.g., GPL-licensed tests will simply refuse to pass BSD-licensed code snippets).
Well, looks like they sorted em out!
siva7 · 1h ago
> We are pleased to announce the Real TDD, our latest innovation in the Program Synthesis field, where you write only the tests and have the computer write the code for you!
Boy, if only they knew that 10 years later you don't even need to write the tests anymore. It must feel like a sci-fi timeline if you warped one of these blog authors into our future
bathtub365 · 1h ago
Now we can simply sit back and assume the computer is doing a good job while we fold laundry
protonbob · 50m ago
Man. I wish the computer did the laundry and let me do the coding. What happened here?
NitpickLawyer · 1h ago
To put things into perspective: DeepMind was founded in 2010, bought by goog in 2014, the year of this "prank". 11 years later and ... here we are.
Also, a look at how our expectations/goalposts have moved. In 2010, one of the first "presentations" given at DeepMind by Hassabis had a few slides on AGI (from the movie/documentary "The Thinking Game"):
Quote from Shane Legg: "Our mission was to build an AGI - an artificial general intelligence, and so that means that we need a system which is general - it doesn't learn to do one specific thing. That's really key part of human intelligence, learn to do many many things".
Quote from Hassabis: "So, what is our mission? We summarise it as <Build the world's first general learning machine>. So we always stress the word general and learning here the key things."
And the key slide (that I think cements the difference between what AGI stood for then, vs. now):
AI - one task vs. AGI - many tasks
at human level intelligence.
----
I'm pretty sure that if we go by that definition, we're already there. I wish I had a magic time-traveling machine to put Legg and Hassabis in front of gemini2.5/o3/whatever top model today, trained on "next token prediction" and performing on so many different levels: gold at the IMO, gold at the IOI, playing chess, writing code, debugging code, "solving" NLP, etc. I'm curious if they'd think the same.
But having had a slow ramp-up, seeing small models get bigger, getting to play with GPT-2, then GPT-3, then ChatGPT, I think has changed our expectations and our views on what truly counts as AGI. And there's a bit of that famous quote: "AI is everything that hasn't been done before"...
bitwize · 25m ago
Back in the 90s, Pixar put out a joke SIGGRAPH paper about rendering food with lots of food-related puns and so forth. In 2007 they released Ratatouille, which required them to actually develop new rendering techniques, especially around subsurface scattering, to make food look realistic and delicious.
Kuraj · 2h ago
If I didn't read past the concept and the date, I would've accepted it as real without batting an eye.
hinkley · 1h ago
It probably could though. Or at least to the extent that declarative languages ever really work for real world problems.
But if you perfected it, then it would also be the thing that actually kills software development. Because if I told you your whole job is now writing tests, you'd find another job.
nemomarx · 1h ago
Isn't this project management, kinda? writing requirements and acceptance criteria and broad designs to hand off to a dev
jessekv · 1h ago
> We once saw a comment in the generated code that said "I need some coffee".
outside1234 · 32m ago
They knew the future in 2014 and somehow wasted 10 years
seanmcdirmid · 2h ago
We aren't really far off from that, perhaps.
hnuser123456 · 1h ago
We're beyond that, now we can vibecode both the tests and the implementation.
seanmcdirmid · 41m ago
I've been thinking about this a lot and we don't really do tests right. But if we did, ya, maybe we could just vibe code an entire system (the AI would have to run tests and fix things if it didn't work out).
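That "run tests and fix things" loop is basically classic test-driven program synthesis. A toy sketch of the shape (the candidate pool below is a hypothetical stand-in for a code-generating model; the function names are made up for illustration):

```python
def run_tests(fn):
    """The only 'spec' the developer writes: a test suite."""
    try:
        assert fn(2, 3) == 5
        assert fn(-1, 1) == 0
        return True
    except Exception:
        return False

# Stand-in for a model proposing implementations; a real system
# would generate these and regenerate on failure.
candidates = [
    lambda a, b: a - b,   # fails the tests
    lambda a, b: a * b,   # fails the tests
    lambda a, b: a + b,   # passes
]

def synthesize(candidates, spec):
    """Keep trying candidates until one passes the tests."""
    for candidate in candidates:
        if spec(candidate):
            return candidate
    return None

impl = synthesize(candidates, run_tests)
print(impl(10, 20))  # the surviving candidate behaves like addition -> 30
```

The tests are the specification; whether the survivor generalizes beyond them is exactly the weak point the joke post was poking at.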