> Despite being trained on more compute than GPT-3, AlphaGo Zero could only play Go, while GPT-3 could write essays, code, translate languages, and assist with countless other tasks. The main difference was training data.
This is kind of weird and reductive, comparing specialist to generalist models? How good is GPT3’s game of Go?
The post reads as kind of… obvious, old news padding a recruiting post? We know OpenAI started hiring the kind of specialist workers this post mentions, years ago at this point.
rcxdude · 34m ago
Also, the main showcase of the 'zero' models was that they learnt with zero training data: the only input was interacting with the rules of the game (as opposed to learning to mimic human games), which seems to be the kind of approach the article is asking for.
9rx · 55m ago
> This is kind of weird and reductive, comparing specialist to generalist models
It is even weirder when you remember that Google had already released Meena[1], which was trained on natural language...
[1] And BERT before it, but it is less like GPT.
jrimbault · 1h ago
> This meant that while Google was playing games, OpenAI was able to seize the opportunity of a lifetime. What you train on matters.
Very weird reasoning. Without AlphaGo, AlphaZero, there's probably no GPT ? Each were a stepping stone weren't they?
vonneumannstan · 1h ago
>Very weird reasoning. Without AlphaGo, AlphaZero, there's probably no GPT ? Each were a stepping stone weren't they?
Right but wrong. AlphaGo and AlphaZero are built using very different techniques than GPT-type LLMs. Google created Transformers, which leads much more directly to GPTs; RLHF is the other piece, which was basically created inside OpenAI by Paul Christiano.
jimbo808 · 40m ago
Google Brain invented transformers. Granted, none of those people are still at Google. But it was a Google shop that made LLMs broadly useful. OpenAI just took it and ran with it, rushing it to market... acquiring data by any means necessary(!)
msp26 · 42m ago
OpenAI's work on Dota was also very important for funding
phreeza · 1h ago
Transformers/Bert yes, alphago not so much.
rob74 · 1h ago
It's kind of reassuring that the old adage "garbage in, garbage out" still applies in the age of LLMs...