Show HN: Atari Missile Command Game Built Using AI Gemini 2.5 Pro
136 vbtechguy 140 4/7/2025, 6:18:10 AM missile-command-game.centminmod.com ↗
A modern HTML5 canvas remake of the classic Atari game from 1980. Defend your cities and missile bases from incoming enemy attacks using your missile launchers. Initially built using Google's Gemini 2.5 Pro LLM.
I worry that because we can now instantly produce a bunch of JS to do X thing, we will be incentivized not to change the underlying tools (because, one, only AIs are using them, and two, AIs won't know how to use the new thing).
I worry this will stall progress.
It's not really that different from taking your 2022 car to a shop to adjust your camless engine, and assuming everything's fine, but not having a clue what they did to it or how to fix it if the engine explodes the next day. You can't even prove it had something to do with what the shop did, or if they actually did anything at all. They probably don't even know what they did.
It won't stall progress for clever people who actually want to figure things out and know what they're doing. But it will certainly produce lots more garbage.
That said, the game is impressive for something stitched together by an LLM.
You still need paradigmatic shifts in architecture to enable delivering scale and quality from a smaller amount of materials, and it has not made a dent there, yet.
The standard for new frameworks won't be "does this make humans more productive using new concepts". It will be "can I get an LLM to generate code that uses this framework".
Gemini in particular is really good at this
Obviously it will do even better if you give it the full documentation, but it doesn't do that badly in general with the language when you provide a sample app from which it can basically just pick up the patterns of the language/framework.
I also find it interesting that AI code submissions like this one are generally vague about the process.
This seems to have been created using Cline in VS Code, prompting Gemini 2.5 Pro via OpenRouter.
The commit history implies that a crude version was created by the LLM from an initial prompt and then gradually improved with features, fixes, etc., presumably through ongoing conversations with the AI agent.
All the code is in a single index.html file, which might not be great for human coders, but who cares, to be fair.
All in all, a prompt history would be really educational if anyone is thinking about doing something similar.
As we start to develop AI-first programs, I believe we will need to start connecting LLM conversations to code, not only for educational purposes but for maintenance as well. I'm currently experimenting with what I call Block-UUIDs, which are designed to make it easy to trace LLM-generated code. You can see what I mean in the link below, which contains a simple hello world example.
https://app.gitsense.com/?chat=7d09a63f-d684-4e2c-97b2-71aa1...
Something worth noting: you can't expect the LLM to properly generate a UUID. If you ask, it'll say it can, but I don't trust it to do so correctly all the time. Since I can't trust the LLM, I instruct it to use a template string, which I can replace on the server side. I've also found LLMs will not always follow instructions and will generate UUIDs anyway; how I handle this is, when the LLM stops streaming, I validate, and fix if needed, any invalid UUIDs.
How I see things playing out in the future is, we will always link to the LLM conversations that produced the Block-UUIDs, and by looking at the code, we can see which Block-UUID was used and how it came about.
Full Disclosure: This is my tool
Additionally, you get more directly usable text out of a 'git blame'
This might be something I would do. My only concern is that the conversations can be quite long, and I mean very long. My "Chat Flow" right now is to discuss what needs to be done, produce the code (which can span multiple chats), and then have the LLM summarize things.
What I think might make sense in the future is to include detailed chat summaries in commit messages and PRs. Given that we get a lot of text diarrhea from LLMs, putting the conversations in as-is may do more harm than good.
Ultimately the code needs to stand alone, but if you discover that a specific version of an LLM produced vulnerable code, you have no recourse but to try again and read the generated code more carefully. And reading code carefully is the opposite of vibe-coding.
I would say AI has generated about 98% of my code for my chat app in the last 3 months and it was definitely not vibe coding. Every function and feature was discussed in detail and some conversations took over a week.
My reasoning for building my chat app wasn't to support vibe coding, but rather to 10x senior developers. Once you know how things work, the biggest bottleneck to a senior developer's productivity is typing and documentation. The speed at which LLMs can produce code and documentation cannot be matched by humans.
The only downside is that LLMs don't necessarily produce pretty or readable code. Readability is something I would like to tackle in the future, as I believe post-processing tools can make LLM code much more readable.
I wonder if there is some tool support yet that supports that.
thanks for sharing!
i.e. who cares when the LLM starts giving up when the file is too big and confusing?
i.e. who cares when the LLM introduces bugs / flaws it isn't aware of?
If the AI fucks up then it's a lost cause. And maybe you'd be better off creating a new version from scratch instead of trying to maintain one when the LLM starts to fail. Just ask Gemini 3.5 this time around.
The AI can write obfuscated code. Name all variables from a to z. Even emit binaries directly. Who cares if it works?
So how does it save time? Do I also tell customers we're blowing things up and redoing it every few months with potential risks like data loss?
I personally do not think any of this is a good idea but here we are.
And I was making fun of AI images with the weird fingers and shit just a year ago; now it's hard to identify AI-generated images. Code gen can now create a single-file Space Invaders, which is impressive but shitty according to all coding metrics.
They are getting better. At some point the single-file shit stew will be good enough, because the context windows and capabilities of LLMs will be able to handle those files. That's when nobody's gonna care, I guess.
IF...
> weird fingers and shit just a year ago, now it's hard to identify AI generated images
It's still easy and it's never been about the fingers. Things like lighting are still way off (on the latest models).
> At some point the single file shit stew will be good enough
At some point we'll be living on Mars too.
AI won't cope well as that file gets larger, or you'd best hope that experimental diff feature is working well. I find stuff really breaks if you don't refactor it down.
They don't "reason" - they're just AutoComplete on steroids
https://news.ycombinator.com/item?id=43648528
“I had a long flight today, so in the airport I coded something”
we can start the discussion now, i.e.: is using Cursor still coding?
is a senior team lead coding when the only thing he does is code reviews and steering the team?
You didn’t even mention AI.
Happy to give more details if there's any way to get in touch outside this thread.
I had something like $500,000. I bought up the entire inventory (at least, until the Buy buttons stopped working - there were still items available).
It then became a 'click as fast as you can and don't care about strategy' game, so I stopped.
Again, the first time I went to the store I could max out with everything, so there wasn't the accumulative build-up, or decision to prioritize one purchase strategy over another.
There wasn't any sign there would be new things to buy which I couldn't yet afford.
There was no hint that future game play would be anything other than clicking madly, with no need to conserve or focus resource use.
Part of Missile Command is to wait until just the right point so a single missile can do a chain reaction to take out several incoming warheads; to watch if your strategy was effective, not simply twitch-fire; and even the slow dread of watching an incoming warhead come towards a city after you've depleted your missile supply.
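That chain-reaction mechanic is straightforward to sketch (hypothetical field names and function, not taken from this remake's actual source): each warhead caught in a blast radius detonates in turn, so one well-timed missile can cascade through a cluster.

```javascript
// Breadth-first cascade: a blast destroys any warhead within range,
// and each destroyed warhead becomes a new blast center.
function chainReaction(firstBlast, warheads, blastRadius) {
  const destroyed = [];
  const queue = [firstBlast];
  while (queue.length > 0) {
    const blast = queue.pop();
    for (const w of warheads) {
      if (destroyed.includes(w)) continue; // already detonated
      if (Math.hypot(w.x - blast.x, w.y - blast.y) <= blastRadius) {
        destroyed.push(w);
        queue.push({ x: w.x, y: w.y }); // destroyed warhead explodes too
      }
    }
  }
  return destroyed;
}
```

The strategic tension the parent describes comes from this cascade: firing early wastes it, waiting for warheads to bunch up maximizes it.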
Here's an example from the novelization of WarGames, from https://archive.org/details/wargames00davi/page/n45/mode/2up... :
> David Lightman hovered over the controls of the Atari Missile Command tucked neatly between the Frogger machine and the Zaxxon game. ...
> Goddamned Smart Bombs! he thought as a white buzzing blip snuck through his latest volley of shots and headed for one of his six cities at the bottom of the screen. He spun the control ball, stitched a neat three-X line just below the descending bomb with the cursor, and watched with immense satisfaction as his missiles streaked white lines to their targets, blowing the bomb right out of the phosphor-dot sky.
I didn't get the same sense of satisfaction playing this version.
What was the idea behind this choice?
Are the game state updates and input coupled with the frame rate, as in most beginner game development tutorials out there, or did the "AI" do the right thing and decouple them?
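For anyone unfamiliar with the distinction, "decoupled" here usually means a fixed-timestep accumulator loop; a generic sketch (not this game's actual code) looks like:

```javascript
// Simulation runs at a fixed 60 Hz no matter how fast the display
// refreshes; rendering interpolates between simulated states.
const STEP = 1000 / 60; // fixed simulation step in milliseconds

function makeLoop(update, render) {
  let acc = 0;
  let last = null;
  return function frame(now) {
    if (last !== null) acc += now - last;
    last = now;
    while (acc >= STEP) { // run as many fixed steps as elapsed time allows
      update(STEP);
      acc -= STEP;
    }
    render(acc / STEP); // leftover fraction, usable for interpolation
  };
}
```

In a browser you'd drive `frame` from `requestAnimationFrame`; the point is that game physics stays deterministic on a 144 Hz monitor and a throttled laptop alike, which coupled tutorial-style loops don't guarantee.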
Should we be checking in our prompt history into version control as a kind of source code or requirements spec? Seems prompt revision and improvement would be valuable to keep a history of.
It's amusing that on one hand, there's been a push for "reproducible builds", where we try to make sure that for some set of inputs (source code files, configuration, libraries), we can get an identical output. On the other hand, we have what we see here where, without a huge amount of external context, no two "builds" will ever be identical.
https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
The reasoning models seem to respond quite well to the "ask me one question at a time, building on my previous answer" for the sake of coming up with a solid blueprint for the project, then build from there. I used this method to make a nice little inventory/rma tracker for my job using Rails 8, and it got me MVP in a matter of hours, with most delays being just me refining the code on my own since some of it was a bit clunky. I used o3 mini to generate the initial prompts, then fed those to Claude.
The hallucinations/forgetfulness was relatively minor, but if I did not have prior Ruby/JS knowledge, I doubt I would have caught some of the mistakes, so as much as I was impressed by the method outlined in the blog post, I am not at all saying that someone with no knowledge of the language(s) they wish to use is going to create a great piece of software with it. You still have to pay attention and course correct, which requires a working understanding of the dev process.
My lessons learned for prompting were to have the project broken down into clearly defined modules (duh...), and to constantly feed the latest module source back in as context along with the prompts. This helps ground the responses to only adjust the code related to the prompt, and to not break stuff that was already working.
What does this mean?
A new dev tech shows up. Old devs say: That's not real programming. Real programmers use the old ways. You take all the skill out of it. You'll never learn to do it the "right way". And then it becomes standard, and no one wants to go back but a few hobbyists.
It's been this way with switching from assembly to more human-readable languages.
It's been this way with syntax highlighting.
... and with IDEs.
I remember when we scoffed at IntelliSense over the Water Cooler because them kids didn't have to memorise stuff anymore.
I kept cursing at Docker and npm insanity, having colourful languages for people who hid behind abstraction because they did not understand basic functionality.
And today, it is AI. Right now, it divides. Those who love it, those who consider it 'cheating' or 'stealing other people's code'. In reality, it is just another level of abstraction in the development stack. Tomorrow, it'll just be 'the standard way to do things'.
I wonder what comes next.
This is neither innovative nor showing creativity IMO and reminds me more of twitter hype-bro posts, than something truly HN front-page worthy.
Y'all are gonna be blindsided by AI if you don't turn around and see what is happening.
Programming is getting democratized. People who have never written a CLI command in their life will get the ability to tell a computer what they want it to do. Instead of having to get bent over with a stupid feature-packed $9.99/mo app, they can say "Make me a program that saves my grocery list and suggests healthier alternatives via AI analyses." And get a program that is dead simple and does exactly the simple task they want it to do.
Keep in mind that Excel '98 probably covers 90% of excel uses for 90% of users. Yet here we are in 2025 with society having spent billions over the years to upgrade and now subscribe to msoffice so Amy in HR can add up columns of payroll.
You have completely lost sight of reality if you think a program like this is dumb because "anyone can just go and copy the git from a million different clones, load it into their programming environment of choice, get the relevant dependencies, compile it, and be playing in no time!". Maybe you live in a bubble where those words are English to everyone, but that is not reality.
For SWEs, I have great sympathy, and camaraderie with engineers solving hard problems all day.
But for the software industry as a whole? I hope it burns in hell. The tolls to cross the moat of "telling the computer what to do" have been egregious and predatory for years.
I have no idea what that is supposed to mean but I keep hearing the same about art, music and other creative fields and it sure sounds like contempt for creative people.
I personally don't lose any sleep over LLMs being powerful wizards for getting started on new projects.. that's what LLMs are good at.. pulling together bits of things they've seen on the internet. There's a chasm between that and maintaining, iterating on a complex project. Things that require actual intelligence.
Instead of dealing with the costs associated with using, developing and printing from film, as well as the skills associated with knowing what a photo would look like before it was developed, digital cameras allowed new photographers to enter the industry relatively cheaply and shoot off a few thousand photos at a wedding at a relatively negligible cost. Those photographers rapidly developed their skills, and left studios with massive million dollar Kodak digital chemical printers in the dust. I know because I was working at one.
If you remember, this was in the time when the studio owned your negatives ostensibly forever, and you had to pay for reprints or enlargements. Photographers who had been amateurs could enter this high-margin market, produce images of acceptable quality, charge far less, and provide far more.
I'm not able to say whether this will happen to software development, but the democratization of professional photography absolutely shook the somewhat complacent industry to its core.
In that case it had nothing to do with contempt for creative people, it was the opposite, anyone who wanted to be creative now could be.
I can give you the real example of recently needing to translate ancient 90s-era manufacturing files into modern ones, while also generating companion automation files from them (which needs to be done manually but with tooling to facilitate the process).
I found a company that sells software capable of doing this. A license is $1000/yr/usr.
The next day I was able to get claude 3.7 to make a program that does all that, translate the files, and then have a GUI that renders them so an engineer can go through and manually demarcate points, which then are used to calculate and output the automation file. This took about 45 minutes and is what the department is using now. We have thousands of these files that will need to get modernized as they get used.
I see this everywhere now, and I have been building bespoke programs left and right. Whether it be an audio spectrum analyzer that lets me finely tune my stereo equalizer visually based on feedback for any given track, or an app on my phone that instantly calculates futures prices and margin requirements for given position sizes and predicted market movements.
People think LLMs will be a paradigm shift, I can tell you the shift has already happened, it's just a matter of awareness now.
That sounds like something for which one should be spending the money on professionally developed and well-tested software. What's the expression? Penny wise, pound foolish.
It’s taken her a month to get up and running.
Everyone needs to realize this is not a matter of AI stealing dev jobs but the supply of low skilled developers exploding and eating into the market.
https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys
I don't doubt that LLMs can make programmers more productive. It's happening today, and I expect it will continue to improve, but it requires knowing what code they should generate, what the actual goals are, and how it should be tested. They can generate standard solutions to standard problems with standard bugs. That's fine; they're a tool.
What the inexperienced expect them to do is read their mind, and implement what they want without testing (other than did it crash the first time I used it). Unfortunately, knowing the questions to ask is at least half of the problem, which by definition the inexperienced don't know how to do. You can already see that with vibecoding prompts to "write clear comments", "don't write bugs", and "use best practices".
So why does it lead to the enshittification of the programming experience? Because regular folks will be led to believe (WarGames movie hacker style) that this is how things are done. They will accept and expect rehashed garbage UIs and implementations without security or corner-case checking, because that's what they always get when they press a button and wait a minute. Now, why can't YOU, stupid programmer, get the same results faster? I told you I wanted a cool game that would make me lots of money fast with no bugs!
I do have hope that some people will learn to be clearer in their descriptions of things, but guess what, English isn't really the language for that.
I'm talking about people talking in English to an AI on one screen, and compiled, functioning programs appearing on the other. An "app playground" where you just tell your phone what you need an app to do, and a new bespoke app is now in your app drawer.
Forget about UIs too. They won't be that important. You don't need a tree of options and menus, tool bars and buttons. You would just tell the program what you want it to do..."Don't put my signature on this email"..."wrap the text around this image properly"(cough msword cough)..."split these two parts and move the red one to the top layer"...or even "Create a button that does this and place it over there".
I think part of what you want is voice applications, because deleting your signature by hand is probably easier than trying to build a program that does it. Maybe the app could just search help and tell you what feature already does what you're asking for. Certainly, context sensitive voice recognition has gotten a LOT better with the latest LLMs. Not sure I'm looking forward to the guy on the train narrating to his laptop for an excel page, though.
But using AI to generate something in that style doesn't make you an artist. It isn't art, it's just a product.
Celebrating the 'democratization' of these skills just shows an aversion to basic learning and thinking. I'm not gonna celebrate a billion-dollar corp trying to replace the fundamentals of being human.
The reality is that you cannot become an expert in everything. I have songs I'd love to compose in my head, but it would be totally impractical for me to go through the hundreds/thousands of hours of training that would be needed to realize these songs in reality. Nor am I particularly motivated to pay someone else to sit there for hours trying to compose what I am telling them.
This is true for hundreds of activities. Things I want to do, but cannot devote the time to learn the intermediate steps to get there.
So the alternative is that you'll pay a tech company instead -- to use their model trained on unlicensed and uncredited human works to generate a mishmash of plagiarized songs, the end result of which nobody will ever want to listen to?
You don't have to though. Anyone who's spent a decent amount of time in a creative hobby will tell you they sucked when they started but they enjoyed the process of learning and exploring. I think you're depriving yourself of the mental benefits of learning a new skill and being creative. It flexes your mind in new ways.
If you just want something to exist, sure, but when you can press buttons and have a magic box spit out whatever you want with no effort, how much are you actually going to value it?
This is probably how people felt at the advent of calculators.
But... using a calculator doesn't make you a mathematician either. And one could argue that society has born real negative consequences from the inability of most people to do even basic math because of the ubiquity of calculators. There is a big difference between using a tool and having the tool do everything for you.
Do you really believe that society will benefit when most people don't know how to express themselves creatively in any way other than asking the magic box to make a pretty thing for them?
Generative AI forces us to reconsider what original means because it's producing a "remix" of what it has seen before with no credit going to those who created those original works.
Our current laws aren't made to handle AI.
They can for sure maintain and iterate on smaller projects -- which millions of people are happily feeding into them as training data.
Going from a project with 1,000 to 1,000,000 lines of code is a tiny leap compared to going from 0 to 1,000. Once your argument is based on scale, you've pretty much lost to the robots.
I'm not saying they are going to invent some free energy source (other than using humans as batteries) anytime soon, but when the argument is "you're going to be blindsided" and the response shows (and I'm not trying to be insulting or anything like that) willful ignorance of the facts on the ground, I'm going to say you probably will be blindsided when they take your app-writing job or whatever.
I'm not attacking you at all just saying that there's a bunch of people who chose to keep their heads in the sand and hope this all just goes away.
Are you sure the leap is tiny? It's a much easier problem to get only 1,000 lines of code to be correct, because the lines only have to be consistent with each other.
And yet I feel more secure in my job today than I did a year ago because I'm constantly hitting the limits of what a language model can do. I've realized that the decisions I make every day as a senior engineer aren't mostly about what lines of code usually come after each other.
Can you please explain this contradiction?
And 90% of what in business software? Ideas? Features? Implementation? I doubt 90% is the number for any of those, or that LLMs are good at any of them. Based on my own recent experience with LLMs (yes, switching between several different models) and seeing their disastrous performance writing code for business logic that is just a little bit more specific, I am not convinced.
And another thing to consider: if you are copying software from another business, you need to compete in some way. Yours needs more polish (LLMs are bad at this), or a unique feature, or a different focus. An LLM copying another business will only let you compete on price, not on those other things.
The fallacy people are making is to look at the current state of things as 'the way it'll always be', while the AI companies, and I mean all the AI companies, are looking to corner the market and have no moral issues with taking out a whole swath of skilled jobs.
The cost to compete with another business's software is a high-end GPU.
1. Used https://gemini.google.com Gemini 2.5 Pro canvas mode on a Gemini Advanced subscription account to ask it to build an Atari Missile Command game in HTML5. Tweeted about it at https://x.com/George_SLiu/status/1906236287931318747 It almost one-shotted it, but after I passed level 1, it jumped to level 182 and pulverised my cities and bases LOL
2. Iterated in subsequent chat sessions in Gemini web to fix it up and add an in-game store, leaderboard, and end-game AI analysis; when Gemini gave 'something went wrong' messages, I jumped over to Claude 3.7 Sonnet via the Claude Pro subscription web interface to finish off the task
3. A few days ago, I tried Cline with VS Code for the first time with an OpenRouter AI API key, and the free Gemini 2.5 Pro model has helped me with further development. Also now trying out Google Firebase for more Gemini 2.5 Pro access https://studio.firebase.google.com/ ^_^
The bar is not that high though, and is not the same for everyone depending on the content.
It's a similar religious matter?
> I also checked your profile and again, wondered why you are putting this: "Only admins see your email below. To share publicly, add to the 'about' box." as your bio!
Why not? I have "Error retrieving about text" in my "About" field on a certain sms replacement chat app...