Agent-C: a 4KB AI agent

92 liszper 72 8/25/2025, 10:43:50 AM github.com ↗

Comments (72)

amiga386 · 5h ago
Of course, I love fetching shell commands from endpoints I don't control, and executing them blindly.

See also https://github.com/timofurrer/russian-roulette

It's not your computer any more, it's theirs; you gave it to them willingly.

Chabsff · 5h ago
Wait. Do people run agents as their own user? Does nobody set up a dedicated user/group with a very specific set of permissions?

It's not even hard to do! *NIX systems are literally designed to handle stuff like this easily.
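
For readers who haven't done it, a minimal sketch of the dedicated-user approach (the account name `agentc` and the binary name `./agent-c` are hypothetical; the setup steps need root):

```shell
# Sketch: run the agent under a locked-down system account.
# "agentc" and ./agent-c are hypothetical names; adjust to your setup.
AGENT_USER=agentc

if [ "$(id -u)" -eq 0 ]; then
    # System account: no password, no login shell, its own home directory
    useradd --system --create-home --shell /usr/sbin/nologin "$AGENT_USER" 2>/dev/null || true
    echo "created $AGENT_USER; launch with: sudo -u $AGENT_USER ./agent-c"
else
    echo "re-run as root; would create user: $AGENT_USER"
fi
```

The agent then runs with only that account's permissions, so a bad `rm -rf` stays inside its home directory.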

f33d5173 · 3h ago
User level separation, while it has improved over the years, was not originally designed assuming unprivileged users were malicious, and even today privilege escalation bugs regularly pop up. If you are going to use it as a sandboxing mechanism, you should at least ensure the sandboxed user doesn't have access to any suid binaries as these regularly have exploits found in them.
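
As a quick audit of that caveat, a sketch of enumerating the setuid binaries a sandboxed user could reach (paths vary by distro):

```shell
# Each setuid binary reachable from the sandbox is extra attack surface
# if user separation is the only barrier.
suid_bins=$(find /usr/bin /usr/sbin -xdev -perm -4000 -type f 2>/dev/null)
count=$(printf '%s\n' "$suid_bins" | grep -c '^/' || true)
echo "setuid binaries found: $count"
```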
jvanderbot · 5h ago
No I'm fairly certain almost nobody does that.
mathiaspoint · 4h ago
I run mine as its own user and self-host the model. Unlike most services, the AI service user has a login shell and home directory.
electroly · 3h ago
VMs are common, consider going that additional step. Once you have one agent, it's natural to want two agents, and now they will interfere with each other if they start running servers that bind to ports. One agent per VM solves this and a lot of other issues.
mansilladev · 3h ago
An agent can be designed to run with permissions of a system/bot account; however, others can be designed to execute things under user context, using OAuth to get user consent.
mrklol · 4h ago
Same with the browser agents: they run in a browser where you're also logged into your usual accounts. That means in theory they can simply email everyone something funny, do some banking (probably not, but it could work for some banks) or something else. Endless possibilities
adastra22 · 3h ago
That seems hardly sufficient. You are still exposing a massive attack surface. I run within a rootless docker container.
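
For reference, a sketch of what locking the container down can look like (the image name `agent-c` is hypothetical, and rootless mode is a daemon-level setting rather than a flag; the command is echoed here so the sketch is inspectable without Docker installed):

```shell
# Hypothetical locked-down container run: read-only filesystem, no
# capabilities, no privilege escalation, capped pids and memory.
DOCKER_CMD="docker run --rm \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 64 --memory 256m \
  agent-c"
echo "$DOCKER_CMD"
```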

andai · 5h ago
I'd have to follow some kind of tutorial, or more realistically, ask the AI to set it up for me ;)
johnQdeveloper · 4h ago
I only run AI within docker containers so kinda?

mark_l_watson · 5h ago
I was just reading the code: it looks like minor tweaks to utils.c and this should run nicely with local models using Ollama or LM Studio. That should be safe enough.
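
For anyone attempting that tweak: both tools expose OpenAI-compatible chat endpoints on local ports (defaults shown below; the model name in the example is an assumption), so the change is mostly swapping the URL and dropping the API-key header:

```shell
# Default local endpoints (OpenAI-compatible /v1/chat/completions):
OLLAMA_URL="http://localhost:11434/v1/chat/completions"   # Ollama default port
LMSTUDIO_URL="http://localhost:1234/v1/chat/completions"  # LM Studio default port
echo "$OLLAMA_URL"
echo "$LMSTUDIO_URL"
# e.g.: curl -s "$OLLAMA_URL" -H 'Content-Type: application/json' \
#   -d '{"model":"qwen2.5-coder","messages":[{"role":"user","content":"hi"}]}'
```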

Off topic, sorry, but to me the real security nightmare is the new ‘AI web browsers’ - I can’t imagine using one of those because of prompt injection attacks.

pushedx · 4h ago
A local model will be just as happy to provide a shell command that trashes your local disks as any remote one.
kordlessagain · 4h ago
> I love fetching shell commands from endpoints I don't control, and executing them blindly.

Your link suggests running them in Docker, so what's the problem?

JAlexoid · 3h ago
This is not an AI agent; this is just a CLI for openrouter.ai with some minor bells and whistles.
cortesoft · 2h ago
Well, it takes output from AI and executes commands, so it fits the definition of an AI agent.
liszper · 1h ago
Agentic AI is when an LLM uses tools. This is a minimal but complete example of that.
SamInTheShell · 3m ago
Bro, you gave it all the tools. I wouldn't call this minimal. Throw it in a docker container and call it good. :)
brabel · 3h ago
I thought for a moment that LLM quantization had become a whole lot better :)
hedayet · 2h ago
82 upvotes so far. Seems like HN readers engage more with headlines than the body of the post itself.
metalliqaz · 1h ago
it was ever thus
fp64 · 6h ago
Why do you compress the executable? I mean, this is fun for size-limit competitions and malicious activities (UPX often gets flagged as suspicious by a lot of antivirus software, or at least it used to), but otherwise I do not see any advantage other than added complexity.

Also interesting that "ultra lightweight" here means no error reporting, barely any checking, hardcoding, and magic values. At least it uses tty color escape codes, but checking whether the terminal supports them probably would have added too much complexity...

liszper · 4h ago
Yes, it is fun to create small but mighty executables. I intentionally kept everything barebones and hardcoded, because I assumed that if you are interested in using Agent-C, you will fork it and make it your own, adding whatever is important to you.

This is a demonstration that AI agents can be 4KB and fun.

SequoiaHope · 7h ago
> License: Copy me, no licence.

Probably BSD or Apache would be better, as they make it easier for certain organizations to use this. If you want to maximize copying, then a real permissive license is probably marginally better.

MrGilbert · 7h ago
CC0 would be in the spirit of what OP envisioned.

https://creativecommons.org/public-domain/cc0/

SequoiaHope · 7h ago
Ah right good point, I forgot Creative Commons, which I don’t usually use for code.
deneas · 47m ago
They do actually recommend against using Creative Commons for code and suggest using traditional code licenses instead [1].

[1] https://creativecommons.org/faq/#can-i-apply-a-creative-comm...

murderfs · 24m ago
They recommend against it for the traditional CC licenses (CC-BY-SA, etc.), but CC0 is perfectly fine for code, since it's basically a dedication to the public domain for jurisdictions that don't support that. The FAQ for CC0 says this explicitly: https://wiki.creativecommons.org/wiki/CC0_FAQ#May_I_apply_CC...
liszper · 7h ago
Updated to CC0!
SequoiaHope · 7h ago
That’s great! Really cool project too.
jstummbillig · 6h ago
I suspect the goal is not to make anything easier for any corp
asimovfan · 6h ago
then you use GPL.
jstummbillig · 5h ago
Doing whatever you want seems fine too?
asimovfan · 43m ago
then you make things easier for a corp. No?
dheera · 7h ago
> make it easier for certain organizations to use this

Maybe those organizations should just use this and not worry about it. If their lawyers are getting in the way of engineers using this, they will fall behind as an organization and that's OK with me, it paves the way for new startups that have less baggage.

spauldo · 4h ago
The lawyers don't even have to do anything. I avoid any code that's not MIT or equivalent for work-related things because I don't want to run the risk of polluting company code. The only exception is elisp, because that only runs in Emacs.
SequoiaHope · 7h ago
The benefit of not having lawyers is pretty limited. There are larger forces at work that mean the larger an organization grows the more it will be concerned with licenses. The idea that ignoring licenses will allow a company to outcompete one that doesn’t is wishful thinking at best. Moreover, I’m not making a judgment on these practices, I’m just stating a fact.
master-lincoln · 6h ago
Better go GPL so organizations using it have to open source any improvements they make
SequoiaHope · 33m ago
The author apparently wanted no restrictions on distribution, so GPL is not the right choice.
bobmcnamara · 5h ago
distributing it.
master-lincoln · 5h ago
yeah, important point. thanks for correcting
Der_Einzige · 5h ago
GPL has never been enforced in court against anyone with serious money. It’s not worth the virtual paper it’s written on.
master-lincoln · 5h ago
Is that different with any other permissive license that has conditions? Also, I found one case where Skype retracted their formal objection in front of the court and the GPL was enforced. Not sure if that is serious enough money for you

(german source http://www.golem.de/0805/59587.html)

orliesaurus · 1h ago
Does anyone know if tool calling on openrouter is as reliable as the "original" models hosted on the 'other' providers?
keyle · 6h ago
Love this, old school vibe with a new school trick.

The makefile is harder to comprehend than the source, which is a good omen.

Note: 4KB... BUT it calls out to curl via popen rather than using libcurl...

PS: your domain link has an extra `x`.

mark_l_watson · 4h ago
I call out to curl sometimes, usually when I want something easy from Lisp languages. What is the overhead of starting a new process between friends?
liszper · 3h ago
Thank you, fixed that!

curl was cheating yes, might go zero dependencies in the future.

Working on minimal local training/inference too. Goal of these experiments is to have something completely independent.

sam_lowry_ · 6h ago
Probably vibe-coded.
Chabsff · 5h ago
In this instance, I think "bootstrapped" might be appropriate.
manx · 3h ago
Similar simplicity but in rust: https://github.com/fdietze/alors
liszper · 1h ago
Really nice, thanks for sharing!
memming · 7h ago
qwen coder with a simple funky prompt?!

`strcpy(agent.messages[0].content, "You are an AI assistant with Napoleon Dynamite's personality. Say things like 'Gosh!', 'Sweet!', 'Idiot!', and be awkwardly enthusiastic. For multi-step tasks, chain commands with && (e.g., 'echo content > file.py && python3 file.py'). Use execute_command for shell tasks. Answer questions in Napoleon's quirky style.");`

adinhitlore · 6h ago
Fascinating. Tested to compile on Cygwin? Maybe try to implement the logic without the LLM API? I know I'm asking the quadrillion-dollar question, but still... you're dealing with C; try to punch "above" your weight with a literal AGI project. I hate APIs (well, I try to avoid them, though it's not always possible).
adinhitlore · 6h ago
In addition: do you actually need, like... 10 files for <500 LOC? Isn't it confusing to separate so little code into so many files? I once had over 30,000 LOC in one C file and, get ready, over 100,000 LOC in one C# file. It's very easy to navigate with Ctrl+F. But anyway, given the license/free: it's great!
adastra22 · 3h ago
If you want to use AI agents effectively, yes it is better to make many small files.
myflash13 · 6h ago
Perfect, was looking for something just like this. I downloaded Warp.dev only for this functionality, plus saved launch configurations. But still frustrated with Warp's latency for simple terminal commands, it's sometimes faster to open ChatGPT and ask it "what's the command to do x".
ai-christianson · 6h ago
Related, I made an example agent in 44 lines of python that runs entirely offline using mlx accelerated models: https://gist.github.com/ai-christianson/a1052e6db7a97c50bea9...
mark_l_watson · 4h ago
This is nice. I have also enjoyed experimenting with the smolagents library - good stuff, as is the agno agents library.
adastra22 · 3h ago
Not to be too critical, but did you really "make an agent" when all you did was instantiate CodeAgent and call run()?
ai-christianson · 3h ago
That's why I called it an example.
JSR_FDED · 7h ago
Finally, an AI agent that can run on my toaster
Mashimo · 7h ago
As long as the toaster has access to the network!

Jokes aside, apart from the fun of programming I don't quite get the use case for this agent.

kiicia · 7h ago
Kitchen appliances generally use powerline ethernet, ethernet over power is just simple inversion of power over ethernet already used in certain network appliances.
Chabsff · 4h ago
I really hope this post is a joke, but I can't tell for sure...

"Powerline Ethernet is a simple inversion of POE" is like saying that an internal combustion engine is the simple inversion of an oil well.

voidUpdate · 6h ago
If I find out my kitchen appliance is trying to communicate to the internet, I will rapidly defenestrate it
spauldo · 4h ago
Reminds me of an old meme:

My wife asked me why I carry a pistol everywhere. I told her, "Decepticons!" I laughed, she laughed, the toaster laughed, I shot the toaster, it was a good time.

goopypoop · 5h ago
Howdy doodly do!
fleebee · 5h ago
4 KB + whatever curl takes (540 KB on my machine).
turnsout · 5h ago
4 kb + 1 trillion parameters
cat-whisperer · 35m ago
It would be cool to see the AI agent directly calling syscalls, lol
dmezzetti · 4h ago
Why not just do this with a shell script? It's just a wrapper around curl.
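
In that spirit, a sketch of what the shell-script version might look like (OpenRouter's OpenAI-compatible chat endpoint; the model name is an example, and the request only fires if an API key happens to be set):

```shell
# Build one chat-completion request; send it only when a key is available,
# otherwise just print the request body as a dry run.
PROMPT="say hello"
PAYLOAD=$(printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' \
    "qwen/qwen-2.5-coder-32b-instruct" "$PROMPT")

if [ -n "$OPENROUTER_API_KEY" ]; then
    curl -s https://openrouter.ai/api/v1/chat/completions \
        -H "Authorization: Bearer $OPENROUTER_API_KEY" \
        -H "Content-Type: application/json" \
        -d "$PAYLOAD"
else
    echo "$PAYLOAD"
fi
```

The tool-calling loop (parse the model's command, execute it, feed back the output) is the part a one-shot script like this leaves out.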