Look at those people shouting that this will be AGI / total disruption, etc. Seems Elon managed one thing: to amass the dumbest folks together. 99.99% MAGA, crypto, and almost Markov-chain-quality comments.
thm · 11h ago
99% of AI influencers are the same people who emailed you pictures as a Word attachment a year ago.
torginus · 11h ago
This is what put me off Claude Code. When I wanted to dig in, I tried to watch a few YouTube videos to get an expert's opinion on it, and 90% of the people who talk about it feel like former crypto shills who, judging from their channel history, seem to have never written a single line of code without AI in their lives.
haneul · 11h ago
As someone who doesn't keep track of the influencer scene at the moment because I am way addicted to building...
You should totally give Claude Code a try. The biggest problem is that it is glaze-optimized, so you have to work at getting it to not treat you like the biggest genius of all time. But when you manage to get into a good flow with it, and your project is very predictably searchable, results start to be quite helpful, even if just to get yourself unstuck when you're in a rut.
reactordev · 11h ago
This. Claude Code was the only one able to grok my 20-year-old C++ codebase so that I could update things deep in its bowels to make it compile, because I had neglected it on a thumb drive for 15 years. I had no mental model of what was going on. Claude built one in a few minutes.
torginus · 2h ago
I will try it. I did use Cursor agents beforehand (with Sonnet/Opus 4), and my problems were that it was slower than doing it myself (that is, me prompting the AI) and not good enough to be left unattended.
jug · 11h ago
It annoys me to see the huge discrepancy between AI content on social media and actual enterprise use. AI is happening; it's absolutely becoming an integral part of many businesses, including our own. But these guys are just doodling in MS Paint and they're flooding the channels.
reactordev · 11h ago
Enterprises are in the same situation as you are. Many of them are posting marketing about AI without actually having AI. They are using OpenAI APIs to say they have AI.
I can count on my hands the number of enterprises that actually have AI models of their own.
bdangubic · 11h ago
Just curious: why does an enterprise have to have its own model? A company can use ____ (someone else's model) and still accomplish amazing AI shit in its products.
garciasn · 10h ago
Because I am not permitted to share my code or client information/data with unapproved third parties; it's a contractual obligation. So, we train our own models to do those things.
I use Claude Code for building products that don’t have these limitations. And fuck is it amazing. Even little things that would have taken days are done in a single line of text.
reactordev · 10h ago
Because data protection and privacy compliance hasn’t caught up yet.
jgalt212 · 9h ago
or given up under the unceasing pressure from the AI madness.
rvz · 10h ago
> Many of them are posting marketing about AI without actually having AI. They are using OpenAI API's to say they have AI.
And somehow these companies are now "AI companies", just like in the 2010s your average food market down the street was a "tech company" or the bakery next to it is now a "blockchain company". This happens all the time with bubbles and mania.
These enterprises appear even more confused about what they actually do as they rebrand themselves, and it's a sign they are desperate for survival.
anonzzzies · 11h ago
Claude Code is good though: no need to watch influencers for that. Or ever.
plemer · 11h ago
I get it, but have you reviewed high-quality sources or actually tried the product?
Association fallacy: “You know who else was a vegetarian? Hitler.”
Your contrary certainty has the same humorously over-confident tone.
perching_aix · 11h ago
Which is how and why these political strategies work so well.
AaronAPU · 5h ago
It’s incredible how few people can see their own reflection.
shiandow · 11h ago
I'll believe in AGI when OpenAI stops paying human developers.
brookst · 11h ago
I don’t see how this follows. Does AGI mean that it is free to operate and has no hardware / power constraints?
The fact that I see people being paid to dig a trench does not make me doubt the existence of trenching machines. It just means that the tool is not always the best choice for every job.
rvz · 10h ago
> Does AGI mean that it is free to operate and has no hardware / power constraints?
It is that, and an autonomous system that can generate $100B in profits (OpenAI and Microsoft's definition of AGI).
So maybe when we see a commercial airplane with no human pilots on board but an LLM piloting the plane with no intervention needed?
Would you board such a plane?
graycat · 8h ago
AGI???? Again, once again, over again, yet again, one more time:
(1) Given triangle ABC, by means of Euclidean construction find point D on line AB and point E on line BC so that the lengths |AD| = |DE| = |EC|.
(2) Given triangle ABC, by means of Euclidean construction inscribe a square so that each corner of the square is on a side of the triangle.
Come ON AGI, let's have some RESULTS that human general intelligence can do -- gee, I solved (1) in the 10th grade.
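(For reference, a minimal sketch of the classical homothety construction for (2), assuming the usual reading where two corners of the square sit on side BC and the ray used below actually meets side AC:)

    % (2) Inscribe a square in triangle ABC: two corners on BC, one on AB, one on AC.
    % 1. Pick any point P' on AB and drop the perpendicular P'Q' to BC.
    % 2. Erect the square P'Q'R'S' with Q'R' along BC, on the side away from B.
    % 3. Draw ray BS'; let S be its intersection with side AC.
    % 4. The homothety centered at B sending S' to S maps line AB to itself and BC to itself,
    %    so it carries the trial square to a square PQRS with P on AB, Q and R on BC, S on AC.
    \[
        h = \mathcal{H}\!\Big(B,\ \tfrac{|BS|}{|BS'|}\Big), \qquad
        h(P'Q'R'S') = PQRS \subset \triangle ABC .
    \]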
threatripper · 11h ago
We have to wait and test it ourselves to see how far it gets in our daily tasks. If the improvement continues like it did in the past, that would be pretty far. Not quite a full researcher position but an average student assistant for sure.
swat535 · 7h ago
Wasn't Sam Altman claiming AGI is just a couple of years away and OpenAI is at the forefront of it?
ImHereToVote · 12h ago
Maybe this won't be. How long do you think it will be before a machine can outdo any human in any given domain? I personally think it will be after they are able to rewrite their own code. You?
kasey_junk · 12h ago
They write their own code now so how long will it be?
Fade_Dance · 11h ago
Seems like this will be one of the areas that will improve with multi-agentic AI, where groups of agents can operate via consensus, check/test outputs, manage from a higher meta level, etc. Not that any of that would be "magic" but the advantages of expanding laterally to that approach seem fairly obvious when it comes to software development.
So in my eyes, I actually think it's probably more about reducing the cost of AI inference by another order of magnitude, at least when it comes to mass-market tools. Existing basic code-generation tools from a single AI are already fairly expensive to run, compute-wise.
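To make the multi-agent point concrete, here is a rough sketch of that kind of generate-and-check loop; run_agent and run_tests are hypothetical stand-ins for whatever model API and test harness is actually in play:

    import random
    from typing import Callable, List, Tuple

    def generate_and_check(task: str,
                           run_agent: Callable[[str], str],
                           run_tests: Callable[[str], float],
                           n_agents: int = 5) -> str:
        """Several agents independently propose a patch; a test harness scores
        each proposal; the highest-scoring one wins. No magic, just redundancy."""
        proposals: List[str] = [run_agent(task) for _ in range(n_agents)]
        scored: List[Tuple[float, str]] = [(run_tests(p), p) for p in proposals]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored[0][1]

    # Toy usage with stubbed-out agent and tests:
    if __name__ == "__main__":
        fake_agent = lambda task: f"candidate patch {random.randint(0, 99)} for: {task}"
        fake_tests = lambda patch: random.random()  # pretend fraction of tests passing
        print(generate_and_check("fix the flaky date parser", fake_agent, fake_tests))

The obvious catch is cost: n_agents proposals means roughly n_agents times the inference spend, which is exactly where dropping inference cost by another order of magnitude comes in.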
ImHereToVote · 11h ago
They "can" write parts of it. But they can't rewire the weights. They are learned not coded.
owebmaster · 12h ago
> I personally think it will be after they are able to rewrite their own code.
My threshold is when it can create a new Google
ImHereToVote · 11h ago
Why not set it earlier than that? Why not when it can start and run its own LLC? I would think that by the time that LLC is bigger than Google, it might already be obvious.
bgwalter · 11h ago
None of the X enthusiasts has even seen a benchmark or used the thing, but we're glad to know that Duke Nukem Forever will be released soon.
chvid · 12h ago
What is this? Guerrilla marketing from a $300B startup?
bawana · 11h ago
Well, I asked ChatGPT if I could run Kimi K2 on a 5800X3D with 64 GB of RAM and a 3090, and it said:
Yes, you absolutely can run Kimi-K2-Instruct on a PC with:
✅ CPU: AMD Ryzen 7 5800X3D
✅ GPU: NVIDIA RTX 3090 (24 GB VRAM)
✅ RAM: 64 GB system memory
This is more than sufficient for both:
Loading and running the full Kimi-K2-Instruct model in FP16 or INT8, and
Quantizing it with weight-only INT8 using Hugging Face Optimum + bitsandbytes.
Kimi K2 has a trillion parameters, and even an 8-bit quant would need on the order of a terabyte of system RAM + VRAM.
This is with the free ChatGPT that us peasants use. I don't have the means to run Grok 4 Heavy, DeepSeek, or Kimi K2 to ask them.
I can't wait to see what accidental wars will start when we put AI in the kill chain.
Bottom line: Your 5800X3D + 64 GB RAM + RTX 3090 will run Kimi K2’s 1.8‑bit build, but response times feel more like a leisurely typewriter than a snappy chatbot. If you want comfortable day‑to‑day use, plan either a RAM upgrade or a second (or bigger) GPU—or just hit the Moonshot API and save some waiting.
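The back-of-the-envelope arithmetic ChatGPT got wrong is only a few lines of Python (weights only, ignoring KV cache and activations; the parameter count is the publicly stated ~1T total for the MoE model):

    # Rough memory check for a ~1-trillion-parameter model on 64 GB RAM + 24 GB VRAM.
    PARAMS = 1.0e12          # Kimi K2: ~1T total parameters (MoE)
    AVAILABLE_GB = 64 + 24   # system RAM + RTX 3090 VRAM

    for bits in (16, 8, 4, 1.8):
        weights_gb = PARAMS * bits / 8 / 1e9   # GB needed for the weights alone
        verdict = "fits" if weights_gb <= AVAILABLE_GB else "does not fit"
        print(f"{bits:>4}-bit: ~{weights_gb:,.0f} GB of weights -> {verdict} in {AVAILABLE_GB} GB")

Even the 1.8-bit build is a couple of hundred GB of weights, so anything that "runs" on this box is paging from disk, which lines up with the leisurely-typewriter bottom line above.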
threatripper · 11h ago
I second this. o3 is pretty spot on, while 4o gave exactly the kind of answer the parent got.
I rarely use 4o anymore for anything. I would rather wait for o3 than quickly get a pile of rubbish.
brookst · 11h ago
4o is great for simple lookup and compute tasks; stuff like “scale this recipe to feed 12” or “what US wineries survived prohibition”.
o3 all the way for anything needing analysis or creative thought.
jug · 11h ago
These cases are probably why OpenAI has stated GPT-4.1 is their last non-reasoning model and GPT-5 will decide whether, and how much, to reason based on the query.
dyl000 · 11h ago
Can't wait to see how mid this is going to be.
m3kw9 · 10h ago
Yeah, this is as big a piece of news as an iPhone 18 being in the pipeline.
pjs_ · 8h ago
Sama clocked this way back. He has used this exact analogy: that new GPT models will feel like incremental new iPhone releases compared to the first iPhone / GPT-3.
ogogmad · 12h ago
In related news, OpenAI and Google have announced that their latest non-public models have received gold in the International Mathematical Olympiad: https://news.ycombinator.com/item?id=44614872
That said, the public models don't even get bronze.
Wow. That's an impressive result, though we definitely need some more details on how it was achieved.
What techniques were used? He references scaling up test-time compute, so I have to assume they threw a boatload of money at this. I've heard talk of running models in parallel and comparing results - if OpenAI ran this 10000 times in parallel and cherry-picked the best one, this is a lot less exciting.
If this is legit, then I really want to know what tools were used and how the model used them.
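For context on what "running it many times and comparing results" usually means, here is a speculative sketch of self-consistency-style majority voting; sample_solution and extract_answer are hypothetical placeholders, and this is not a claim about what OpenAI actually did:

    import random
    from collections import Counter
    from typing import Callable, List

    def self_consistency(problem: str,
                         sample_solution: Callable[[str], str],
                         extract_answer: Callable[[str], str],
                         n_samples: int = 64) -> str:
        """Sample n_samples independent solutions, reduce each to a final answer,
        and return the majority answer. Cherry-picking against a known answer key
        would be a very different (and much weaker) result."""
        solutions: List[str] = [sample_solution(problem) for _ in range(n_samples)]
        answers: List[str] = [extract_answer(s) for s in solutions]
        return Counter(answers).most_common(1)[0][0]

    # Toy usage with a stubbed sampler that is right ~70% of the time:
    if __name__ == "__main__":
        sampler = lambda p: "42" if random.random() < 0.7 else str(random.randint(0, 9))
        print(self_consistency("toy problem", sampler, extract_answer=lambda s: s))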
badgersnake · 11h ago
> If this is legit
Indeed.
mjburgess · 11h ago
It's strange that none of these $100B+ companies fund empirical research into the effects of AI tools on actual job roles as part of their "benchmarks". Oh wait, no it's not.
brookst · 11h ago
Agree, it would be bizarre if they did.
mjburgess · 9h ago
It would be bizarre if they benchmarked the models based on actual task performance?