Vibe Coding Through the Berghain Challenge

14 points by nkko · 9/6/2025, 2:01:25 PM · nibzard.com ↗

Comments (18)

Avicebron · 1h ago
Does anyone else feel the desperation oozing out when they read these kinds of posts or browse LinkedIn? I sort of get the same frantic, desperate vibe. Cool if you own it, though, I guess.
weitendorf · 44m ago
I think humans are hard-wired to distrust and dislike certain forms of self-promotion because of the risk of false signaling. In small tribes of apes everybody knows everyone, so trumpeting one's accomplishments is basically trying to change people's perception of something without changing the actual underlying signal.

The higher-status strategy almost always ends up being countersignaling, where "trying too hard" is basically the opposite of countersignaling. The problem (this is something I am actively learning in my work) is that the way society is set up right now requires you to participate in the "attention economy" and build your brand/reputation in a group far larger than an ape-sized tribe. Because you're not established in those circles a priori, you have to start with signaling instead of countersignaling.

Basically, you have to have a PR team and win the hearts and minds of The Atlantic and Forbes before you can make a public spectacle of your ketamine habits. If you skip straight to that you’re just an insecure loser with a drug problem. But after everybody knows you and what you’ve done then you can establish yourself as a tortured artist, which is socially “better” than being just a regular artist.

caminanteblanco · 31m ago
>PS: And the kicker? Claude wrote this entire article too. I just provided the direction and feedback. The AI that helped me solve the Berghain Challenge also helped me tell you about it.

>Meta-collaboration all the way down.

Would've preferred to know this going in.

nkko · 23m ago
Sorry, will move it up. For me it was a case of: hey, let's loop it once more over the repo and let it write about it. More like an archaeological dig to unearth the process, as I wasn't involved in it directly, especially not the algo decisions and later optimizations.
jaynetics · 22m ago
I mean, it only takes a few paragraphs of filler text, hyperbole, "catchy" juxtapositions, and loose logical threads to raise suspicions.

But yeah, I would also like these two minutes of my life back.

Well, as someone who has also generated some text with LLMs, at least I learned that it's still possible to generate truly excruciating stuff with the "right" model and prompt.

kookamamie · 1h ago
> Why This Challenge Will Make You Question Everything

This kind of headline makes an article an annoying read.

ryanwhitney · 1h ago
I think ChatGPT is their writing partner too. Maybe the other way around.
Bleibeidl · 40m ago
That's disclosed at the end of the "article".
the_af · 38m ago
Wow. I thought the tone of TFA was infuriating. Now I know why (I quit in disgust before reaching the end where he clarifies this).

I guess AI-slop in writing will be the norm now.

(I wonder if Claude repeatedly quoting itself saying "you're absolutely right!" was edited in by the human author, or yet another case of unintentional humor).

nkko · 30m ago
Nope, pure Claude there; it added that itself during editing.
the_af · 27m ago
Thanks for replying. Now I feel I must apologize for my rudeness.

I think the experiment itself was valuable, you did find something interesting.

I just cannot help it: I hate reading AI slop, and I'm depressed that this seems to be the future of internet writing.

nkko · 14m ago
No reason, all fine. Honestly, it is very hard to find time to write anything down, let alone a 15k-word deep analysis of the process like this. And LLMs are ideal log keepers. I also did a bunch of similar experiments, like doing a research paper from data to code to writing. We can only expect things to get better from here.
ebiester · 36m ago
They admitted it was Claude at the end.
nkko · 25m ago
That should be moved toward the top; will do it manually. This was a 98% loop, albeit a very messy one. More an exploration of the process itself. The biggest value is the meta-learning. Ideally we should save traces of the prompts and the process itself as a verifiable or observable artifact, instead of the code itself. At the end of the day, outcome over code.
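Something as simple as an append-only JSONL trace would do; a hypothetical sketch (not the setup I actually ran):

```python
import json, time, hashlib
from pathlib import Path

TRACE = Path("trace.jsonl")  # hypothetical path for the process artifact

def log_step(prompt: str, response: str, tag: str = "") -> None:
    """Append one prompt/response exchange to the trace, turning the
    process itself into a reviewable artifact next to the code."""
    record = {
        "ts": time.time(),
        "tag": tag,  # e.g. "algo", "optimization", "writeup"
        "prompt": prompt,
        "response": response,
        # short digest makes replayed or duplicated steps easy to spot
        "digest": hashlib.sha256((prompt + response).encode()).hexdigest()[:12],
    }
    with TRACE.open("a") as f:
        f.write(json.dumps(record) + "\n")
```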
next_xibalba · 31m ago
This reads like it was written by an LLM:

```
Here’s what Listen did that was pure genius:

    Stage 1: Cryptic billboard → Curiosity
    Stage 2: Token puzzle → Technical community engagement
    Stage 3: OEIS speculation → Community-driven solving
    Stage 4: Berghain Challenge → Viral optimization addiction
```
the_af · 23m ago
It was. The author admits at the end Claude wrote the entire article.

Note the self-parodic humor in Claude quoting itself saying "you're absolutely right!". The author claims they didn't direct this; it truly is how Claude "sees" itself!

YetAnotherNick · 48m ago
As someone who attempted it, it was such a bad challenge. Firstly, you can get close to optimal pretty easily; the first challenge was easy to solve exactly and optimally with DP. Secondly, that doesn't even matter, because the optimal solution has a big deviation based on RNG, so you just need to submit the challenge multiple times until you get lucky.

That's why challenge problems should take in code, run it against hidden cases on their server, and reveal the results post-contest, not allow submissions via API call.
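For a toy single-constraint version of the problem (fill CAPACITY seats, with at least QUOTA admits carrying one attribute that arrivals have independently with probability P; all three numbers below are made-up assumptions, not the real challenge's parameters), the DP is just a table over (slots left, quota still owed):

```python
import math

P = 0.30          # assumed probability an arrival has the required attribute
CAPACITY = 1000   # seats to fill, as in the challenge
QUOTA = 600       # assumed minimum attribute-holders among admits

# E[n][q] = expected rejections to fill n remaining slots with at least q
# attribute-holders, playing optimally. Holders are always accepted; for a
# non-holder we take the cheaper of "accept now" vs "reject non-holders
# until the next holder shows up" (expected cost (1 - P) / P rejections).
E = [[0.0] * (QUOTA + 1) for _ in range(CAPACITY + 1)]
for n in range(CAPACITY + 1):
    for q in range(1, QUOTA + 1):
        if q > n:
            E[n][q] = math.inf          # infeasible: quota exceeds slots
        else:
            accept = P * E[n - 1][q - 1] + (1 - P) * E[n - 1][q]
            reject = E[n - 1][q - 1] + (1 - P) / P
            E[n][q] = min(accept, reject)

def decide(has_attribute: bool, slots_left: int, quota_left: int) -> bool:
    """Optimal accept/reject for one arrival in the current state."""
    if quota_left <= 0:
        return True                     # quota met: take everyone
    if has_attribute:
        return True                     # holders help both counters
    # accept a non-holder only if it beats rejecting and waiting
    return E[slots_left - 1][quota_left] <= 1 + E[slots_left][quota_left]

print(f"expected rejections from the start: {E[CAPACITY][QUOTA]:.1f}")
```

With the real challenge's multiple overlapping constraints the state grows a few dimensions, but the policy has the same shape, and the score variance comes entirely from the arrival RNG, which no policy removes.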

nkko · 19m ago
Yeah, it would also be fun to see the code behind the solutions.