Show HN: I vibe-coded some unusual transformer models
The goals of this post:

* demonstrate that LLMs are smart enough to conduct ML experiments pretty much on their own
* specifically, that vibe-coding isn't just for web stuff
* encourage people to run small experiments like these themselves
* in particular, to get a better understanding of the concepts
Background: I had a linear algebra course in university, but no proper ML training. Nevertheless, 5 years ago things like AI Dungeon and GPT-3 got me really interested, and I started watching Yannic Kilcher videos to understand how it all works. I even got some ideas for experiments with the transformer architecture, but actually performing them seemed a bit too tedious.

Enter vibe coding. Specifically, Claude Code. Is it smart enough to organize an experiment: prepare a data set, make a model, write the training code, debug it, etc.?
Basically, yes. It takes some effort to describe what you want and make sure it does not cheat, but Claude is smart enough to write model code from scratch.
Other models, like Gemini 2.5 Pro and o3, might be even better.
A lot of people believe that LLMs cannot write new code, only rehash existing code. I don't think that's true. It's hard to say with certainty that the code was 100% unique, but it was at least rather unusual.
Anyway, here's what I did:
1. Encoder-only non-autoregressive transformer.
Pretty much all generative LLMs are based on the decoder-only autoregressive transformer architecture, which generates one token at a time. (I.e. to generate token n+1 it relies on data from tokens 1..n.) This type of transformer can be trained efficiently (the causal mask gives a training signal for every token from a single forward pass), but the generation process is slow and inefficient. Gemini 2.5 Flash allows 1M tokens of input but only 65k tokens of output. You can't really transform a large amount of text.
But what if we directly generate the target sequence using just a single forward pass?.. I.e. instead of predicting the next token, we predict all tokens of the output sequence at once. There's no fundamental reason it can't work, but it's more challenging, as the network has to keep track of both input and output token positions, etc.
And, well, the experiment shows it can work, at least for simple languages: in this example the transformer learned how to expand parentheses, e.g. for input "a*(b+c)" it generates "a*b+a*c". https://github.com/killerstorm/expere/tree/master/non_autore...
I'm sure there's a better way to do it, but at least it's enough to confirm there's no fundamental reason it can't work. It took ~20 minutes to make the code, and the example trains in about 2 minutes on an RTX 4070.
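To make the idea concrete, here's a minimal sketch of what such a model can look like (PyTorch, written for this post; it's not the code from the repo, and all sizes and names are made up):

    import torch
    import torch.nn as nn

    class NonAutoregressiveTransformer(nn.Module):
        # Reads the whole input sequence and predicts every output token
        # in a single forward pass: no causal mask, no token-by-token decoding.
        def __init__(self, vocab_size=32, d_model=128, nhead=4, num_layers=4, max_len=64):
            super().__init__()
            self.tok_emb = nn.Embedding(vocab_size, d_model)
            self.pos_emb = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead,
                                               dim_feedforward=4 * d_model,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, x):                      # x: (batch, seq_len) token ids
            pos = torch.arange(x.size(1), device=x.device)
            h = self.tok_emb(x) + self.pos_emb(pos)
            h = self.encoder(h)                    # full bidirectional attention
            return self.out(h)                     # (batch, seq_len, vocab) logits

    # Training step: pad input and target to the same fixed length and use
    # plain per-position cross-entropy (PAD positions can be masked out).
    model = NonAutoregressiveTransformer()
    x = torch.randint(0, 32, (8, 64))              # e.g. tokenized "a*(b+c)", padded
    y = torch.randint(0, 32, (8, 64))              # tokenized expansion, same length
    loss = nn.functional.cross_entropy(model(x).transpose(1, 2), y)
    loss.backward()

The only real difference from a normal transformer encoder is that the per-position logits are read off directly as the output sequence, so input and target just have to be padded to a common length.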
I tried a few more experiments:
2. Try to improve attention by adding a small MLP on top of the per-head attention scores (one possible reading of this is sketched below).
3. Make a hybrid between RWKV and a transformer.
Those also worked well enough to start training and get a plausible loss curve, although it took me >30 minutes of back-and-forth to get Claude to fix the code; it had a bit more difficulty here. Training a real language model takes a beefier GPU and more time, though, so I didn't wait for it to finish.
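For (2), the description is deliberately loose, so here's just one possible reading (my own illustration, not necessarily what Claude actually produced): let a tiny MLP mix the raw pre-softmax attention scores across heads, so the heads can see each other's scores:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MLPScoreAttention(nn.Module):
        # Standard multi-head attention, except the raw (pre-softmax) attention
        # scores of all heads are passed through a small MLP acting on the head
        # dimension before the softmax.
        def __init__(self, d_model=128, nhead=4):
            super().__init__()
            self.nhead, self.d_head = nhead, d_model // nhead
            self.qkv = nn.Linear(d_model, 3 * d_model)
            self.proj = nn.Linear(d_model, d_model)
            self.score_mlp = nn.Sequential(        # mixes scores across heads
                nn.Linear(nhead, 4 * nhead), nn.GELU(), nn.Linear(4 * nhead, nhead))

        def forward(self, x):                      # x: (batch, seq, d_model)
            b, t, _ = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            split = lambda z: z.view(b, t, self.nhead, self.d_head).transpose(1, 2)
            q, k, v = split(q), split(k), split(v)                  # (b, h, t, d_head)
            scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5   # (b, h, t, t)
            # residual MLP over the head dimension of the score tensor
            scores = scores + self.score_mlp(scores.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
            attn = F.softmax(scores, dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, t, self.nhead * self.d_head)
            return self.proj(out)

Keeping the MLP output as a residual on top of the original scores means the layer starts out close to standard attention, which makes it easier to compare loss curves against a baseline.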
I think that with slightly better prompts and better models it could conduct experiments like these fully autonomously, and that could happen this year.