Not sure why this has drawn silence and attacks - whence the animus to Ng? His high-level assessments seem accurate, he's a reasonable champion of AI, and he speaks credibly based on advising many companies. What am I missing? (He does fall on the side of open models (as input factors): is that the threat?)
He argues that the landscape is changing (at least quarterly), and that services are (best) replaceable (often week-to-week) because models change, but that orchestration is harder to replace, and that there are relatively few orchestration platforms.
So: what platforms are available? Are there other HN posts that assess the current state of AI orchestration?
(What's the AI-orchestration acronym? not PAAS but AIOPAAS? AOP? (since aspect-oriented programming is history))
lubujackson · 12h ago
I'm guessing because this is basically an AI for Dummies overview, while half of HN is deep in the weeds with AI already. Nothing wrong with the talk! Except his focus on "do everything" agents already feels a bit stale as the move seems to be going in the direction of limited agents with a much stronger focus on orchestration of tools and context.
davorak · 11h ago
> I'm guessing because this is basically an AI for Dummies
I second this, for the silence at least. I listened to the talk because it was Andrew Ng, and it's good, or at least fun, to listen to talks by famous people, but I did not walk away with any new key insights. Which is fine; most talks are not that.
hakanderyal · 11h ago
From the recent threads, it feels like the other half is totally, willfully ignorant. Hence the responses.
jart · 11h ago
I like Andrew Ng. He's like the Mister Rogers of AI. I always listen when he has something to say.
mnky9800n · 10h ago
And he's been doing it forever, all from the original idea that he could offer a Stanford AI education for free on the Internet, which is why he created Coursera. The dude is cool.
koakuma-chan · 11h ago
Is he affiliated with nghttp?
dmoy · 9h ago
No?
ng*, ng-*, or *-ng is typically "Next Generation" in software nomenclature, or Star Trek (TNG). Alternatively, "ng-" is also from AngularJS.
Ng in Andrew Ng is just his surname, the Cantonese romanization of the Chinese name that is Wu in Mandarin.
No need to add AI to the name, especially if it works. PaaS and IaaS are sufficient.
stego-tech · 12h ago
> So: what platforms are available?
I couldn't tell you, but what I can contribute to that discussion is that orchestration of AI in its current form would focus on one of two approaches: consistent output despite the non-deterministic state of LLMs, or consistent inputs that lean into the non-deterministic state of LLMs. The problem with the former (output) is that you cannot guarantee the output of an AI on a consistent basis, so a lot of the "orchestration" of outputs is largely just brute-forcing tokens until you get an answer within an acceptable range; think the glut of recent "Show HN" stuff where folks built a slop-app by having agents bang rocks together until the code worked.
On the input side of things, orchestration is less about AI itself and more about ensuring your data and tooling are consistently and predictably accessible to the AI, such that the output is similarly predictable or consistent. If you ask an AI what 2+2 is a hundred different ways, you increase the likelihood of hallucinations; on the other hand, ensuring the agent/bot gets the same prompt with the same data formats and same desired outputs every single time makes it more likely that it'll stay on task and not make shit up.
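To make the input side concrete, here's a minimal Python sketch of the kind of thing I mean (the template, the dataclass, and the function names are all hypothetical, not any particular orchestration framework):

    from dataclasses import dataclass

    # One fixed template: the model always sees the task and the data in the
    # same shape, with the output contract spelled out every time.
    PROMPT_TEMPLATE = (
        "You are a bookkeeping assistant.\n"
        "Task: {task}\n"
        "Data (CSV, header row included):\n{csv_rows}\n"
        "Answer with a single number and nothing else."
    )

    @dataclass
    class OrchestratedRequest:
        task: str
        csv_rows: str

    def build_prompt(req: OrchestratedRequest) -> str:
        # Reject malformed input before it ever reaches the model.
        if not req.csv_rows.strip():
            raise ValueError("empty dataset")
        return PROMPT_TEMPLATE.format(task=req.task, csv_rows=req.csv_rows)

    def validate_output(raw: str) -> float:
        # The output contract is enforced just as strictly as the input one.
        return float(raw.strip())

The point isn't this specific template; it's that every request funnels through the same build/validate path, so the model never sees a hundred different phrasings of the same question.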
My engagement with AI has been more on the input side, since that's scalable with existing tooling and skill sets in the marketplace, unlike the output side, which requires niche expertise in deep learning, machine learning, model training and fine-tuning, etc. In other words, one set of skills is cheaper and more plentiful while also having impacts throughout the organization (because everyone benefits from consistent processes and clean datasets), while the other is incredibly expensive and hard to come by, with minimal impact elsewhere unless a profound revolution is achieved.
One thing to note is that Dr. Ng gives the game away at the Q&A portion fairly early on: "In the future, the people who are the most powerful are the people who can make computers do exactly what you want it to do." In that context, the current AI slop is antithetical to what he's pitching. Sure, AI can improve speed on execution, prototyping, and rote processes, but the real power remains in the hands of those who can build with precision instead of brute-force. As we continue to hit barriers in the physical capabilities of modern hardware and wrestle with the effects of climate change and/or poor energy policies, efficiency and precision will gradually become more important than speed - at least that's my thinking.
void-star · 5h ago
Really valid points. I agree with the bits about "expertise in getting the computer to do what you want" being the way of the future, but he also raises really valid points about strong domain knowledge (à la his colleague with extensive art history knowledge being better at Midjourney than him), right after saying it's okay to tell people to just let the LLM write code for them and learn to code that way. I'm having a hard time with the contradictions; maybe it's me. Not meaning to rag on Dr. Ng, just to further the conversation (which is super interesting to me).
EDIT: Rereading this, I realize that what resonates most is that we agree about the antithetical aspects of the talk. I think that's the crux of the issue.
vlovich123 · 6h ago
> The problem with the former (output) is that you cannot guarantee the output of an AI on a consistent basis
Do you mean you cannot guarantee the result of a task request phrased as an arbitrary query? Or something else? I was under the impression that LLMs are very deterministic if you provide a fixed seed for the sampler, fixed model weights, and fixed context. With cloud providers you can't guarantee this because of how they implement inference (batching unrelated requests together, which changes the floating-point math). Granted, you can't guarantee the quality of the result, and changing the seed or context can produce drastically different quality. But maybe you really do mean non-deterministic, in which case I'm curious where that non-determinism would come from.
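For what it's worth, here's a minimal sketch of the deterministic case, assuming the Hugging Face transformers library and a small local model (gpt2 is just a stand-in): with greedy decoding, fixed weights, and identical context, repeated runs on the same hardware produce identical tokens.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tok("What is 2 + 2?", return_tensors="pt")

    with torch.no_grad():
        # Greedy decoding (do_sample=False): no sampling randomness at all.
        out = model.generate(**inputs, do_sample=False, max_new_tokens=16)

    print(tok.decode(out[0], skip_special_tokens=True))

Cloud APIs break this mostly because unrelated requests get batched together, which changes the shapes of the underlying matrix math and therefore the floating-point rounding, not because the model itself is random.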
handfuloflight · 10h ago
This is great thinking, thank you for writing this.
handfuloflight · 12h ago
We've defined agents. Let's now define orchestration.
ramraj07 · 11h ago
Bold claim. I am not convinced anyone's done a good job defining agents, and if they did, 99% of the population has a different interpretation.
handfuloflight · 11h ago
Okay. We've tried to define agents. Now let's try to define orchestration.
lhuser123 · 10h ago
And make it more complicated than K8s
jliptzin · 7h ago
Not possible
vajrabum · 6h ago
The platforms I've seen live on top of Kubernetes, so I'm afraid it is possible: nvidia-docker, all the CUDA libraries and drivers, NCCL, vLLM, ... Large-scale distributed training and inference are complicated beasties, and the orchestration for them is too.
fjjckj · 7h ago
Kkkkk
imranq · 9h ago
My two takeaways are that you build by:
1) Having a precise vision of what you want to achieve
2) Being able to control / steer AI towards that vision
Teams that can do both of these things, especially #1, will move much faster. Even if they are wrong, it's better than vague ideas that get applause but not customers.
void-star · 5h ago
Yes, this! The observation that being specific rather than general about the problems you want to solve makes for a better startup plan is true for all startups ever, not just ones that use LLMs to solve them. Anecdotal/personal startup experience supports this strongly, and I read enough on here to know that I am not alone…
pchristensen · 12h ago
I have had reservations about Ng because of a lot of his past hype, but I thought this talk was extremely practical and tactical. I recommend watching it before passing judgement.
Ng is now a businessman who sells courses. What startup has he built with "AI" himself?
crystal_revenge · 8h ago
A good chunk of Ng's work these days seems to be around AI Fund [0], which, as he explicitly mentions in the first 5 seconds of the video, involves co-founding these startups and being in the weeds with the initial development.
Additionally, he does engage pretty closely with the teams behind the content of his deeplearning.ai lectures and does make sure he has a deep understanding of the products these companies are highlighting.
He certainly is a businessman, but that doesn't exclude the possibility that he remains highly knowledgeable about this space.
dcreater · 8h ago
He's lost credibility in my eyes, given that his courses essentially have a pay-to-play model for startups like LangChain.
crystal_revenge · 7h ago
Except they aren't pay-to-play, unless you consider doing the work for the course the "payment". There's certainly an exchange, since there is a lot of work involved: DLAI provides a team to help design, structure, and polish the course, and the team creating the course does the majority of the work creating the content, but there's no financial exchange.
The DLAI team is also pretty good about ensuring the content covers a topic in general rather than a specific product.
dcreater · 6h ago
The content is a repackaging of previously existing, publicly available notebooks, docs, and YouTube videos. I wouldn't be surprised if the repackaging was done by AI.
raincole · 5h ago
Courses are not academic journals, dude. They're supposed to be teaching you existing knowledge.
crystal_revenge · 5h ago
Again this is not true. I’ve known several people who have made courses for DLAI and they all put substantial time into creating the courses.
hoegarden · 12h ago
Baidu.
bgwalter · 12h ago
The video's description is about building startups through vibe coding, not about using "AI" like self-driving or chatbots in startups.
Additionally, Baidu wasn't a startup when he joined in 2014.
hoegarden · 12h ago
Ng built Baidu's AI department and started their push into various sectors with actual AI system design, so yes, he isn't a failed startup entrepreneur like any vibe startup maker who already wants to stop and give advice.
Maybe you can help me hire a vibe coder with 10 years experience?
bgwalter · 11h ago
He built it without LLMs in 2014 and now he is selling LLMs for coding to the young. That is the entire point of this subthread.
hoegarden · 11h ago
Right.. He's just a giant, not a midget with a step ladder.
But I do question why anyone who played a significant role in the foundation of the current AI generation would teach an obvious new Zuckerberg generation who will apparently think they are the start of everything if they get a style working in the prompt.
If not for 3 people in 2012, I find it highly unlikely a venture like OpenAI could have occurred; and without Ng in particular, I wouldn't be surprised if the field were missing a few technical pieces, as well as the hireable engineers.
reactordev · 12h ago
He doesn’t have to at this point, he just throws money at younger ones that will build it.
I haven't watched the video yet, but the title does sound like quantity over quality.
Why faster and not better with AI?
pinkmuffinere · 1h ago
I think this is an interesting question, and I’d like to genuinely attempt an answer.
I essentially think this is because people prefer to optimize what they can measure.
It is hard to measure the quality of work. People have subjective opinions, the size of opportunities can differ, etc., making quality hard to pin down. It is much easier to measure the time required for each iteration on a concept. Additionally, I think it is generally believed that a project with more iterations tends to have higher quality than a project with fewer, even putting aside the concern about measuring quality itself. Therefore, we set aside the discussion of quality (which we'd really like to improve) and instead make claims about what we can actually measure (time to do something), with the strong implication that this _also_ will tend to increase quality.
I want an Andrew Ng Agent.
(I'll see myself out ...)
Like with actual mortar, brick by brick?