Vibe Coding Through the Berghain Challenge

16 points by nkko on 9/6/2025, 2:01:25 PM (nibzard.com) · 27 comments

Comments (27)

Avicebron · 19h ago
Does anyone else feel the desperation oozing out when they read these kinds of posts or browse LinkedIn? I sort of get the same frantic, desperate vibe. Cool if you own it, though, I guess.
weitendorf · 18h ago
I think humans are hard-wired to distrust and dislike certain forms of self-promotion because of the risk of false signaling. In small tribes of apes everybody knows everyone, so trumpeting one’s accomplishments is basically trying to change people’s perception of something without changing the actual underlying signal.

The higher-status strategy almost always ends up being countersignaling, and “trying too hard” is basically the opposite of countersignaling. The problem (this is something I am actively learning in my work) is that the way society is set up right now requires you to participate in the “attention economy” and build your brand/reputation in a group far larger than an ape-sized tribe. Because you’re not established in those circles a priori, you have to start with signaling instead of countersignaling.

Basically, you have to have a PR team and win the hearts and minds of The Atlantic and Forbes before you can make a public spectacle of your ketamine habits. If you skip straight to that, you’re just an insecure loser with a drug problem. But once everybody knows you and what you’ve done, you can establish yourself as a tortured artist, which is socially “better” than being just a regular artist.

caminanteblanco · 18h ago
>PS: And the kicker? Claude wrote this entire article too. I just provided the direction and feedback. The AI that helped me solve the Berghain Challenge also helped me tell you about it.

>Meta-collaboration all the way down.

Would've preferred to know this going in.

nkko · 18h ago
Sorry, will move it up. For me it was: hey, let’s loop it once more over the repo and let it write about it. More like an archaeological dig to unearth the process, as I wasn’t involved in it, especially not in the algorithm decisions and later optimizations.
thatguymike · 16h ago
It’s funny, I immediately thought it was an LLM, but I was fairly confident it was ChatGPT. I suppose the styles are converging more than I thought: too long, lists, “not just x, it’s y”, “here’s the X”…
GuB-42 · 17h ago
I suspected that, but I couldn't read to the end. The article is confusing and all over the place.
nkko · 16h ago
The process was confusing. It took at least 100 sessions for sure (haven’t checked the logs; I was running it in a VM). And every time it tries to build something while forgetting all of it, then reconstructs bits and pieces just to continue. The article was written over at least 3 context lengths. But it does reconstruct well the iteration on the AI agent’s side.
jaynetics · 18h ago
I mean, it only takes a few paragraphs of filler text, hyperbole, "catchy" juxtapositions, and loose logical threads to raise suspicions.

But yeah, I would also like these two minutes of my life back.

Well, as someone who has also generated some text with LLMs, at least I learned that it's still possible to generate truly excruciating stuff with the "right" model and prompt.

Nextgrid · 17h ago
The actual technical problem was interesting, but the AI-generated writing is terrible. It's like listening to a sales pitch that just won't end.
kookamamie · 19h ago
> Why This Challenge Will Make You Question Everything

This kind of headline makes an article an annoying read.

ryanwhitney · 19h ago
I think ChatGPT is their writing partner too. Maybe the other way around.
Bleibeidl · 18h ago
That’s disclosed at the end of the “article”.
the_af · 18h ago
Wow. I thought the tone of TFA was infuriating. Now I know why (I quit in disgust before reaching the end where he clarifies this).

I guess AI-slop in writing will be the norm now.

(I wonder if Claude repeatedly quoting itself saying "you're absolutely right!" was edited in by the human author, or is yet another case of unintentional humor.)

nkko · 18h ago
Nope, pure Claude there, during editing itself.
the_af · 18h ago
Thanks for replying. Now I feel I must apologize for my rudeness.

I think the experiment itself was valuable, you did find something interesting.

I just cannot help it, I hate reading AI slop, and I'm depressed that this seems to be the future of internet writing.

nkko · 18h ago
No reason, all fine. Honestly, it is very hard to find time to write anything down, let alone a 15k-word deep analysis of the process like this. And LLMs are ideal log keepers. I also did a bunch of similar experiments, like doing a research paper from data to code and writing. We can only expect things to get better from here.
ebiester · 18h ago
They admitted it was Claude at the end.
nkko · 18h ago
That should be moved toward the top; will do it manually. This was 98% a loop, albeit a very messy one. More an exploration of the process itself. The biggest value is the meta-learning. Ideally we should save traces of the prompts and the process itself as a verifiable or observable artifact, instead of the code itself. At the end of the day, outcome over code.
anuramat · 17h ago
Slightly off-topic, but the challenge might be leaking emails: just got an email from `alfredw@listenlabs.fyi` (note the TLD):

> I'd like to connect you with our team to hear about your solution.

> 1) can you let me know availability for a conversation?

> 2) please share some basic information ie full name, Linkedin, portfolio, CV.

> 3) are you interested in onsite SF?

nkko · 16h ago
But wasn’t that the idea? They built the game as a hiring process.
anuramat · 15h ago
the address is fake though
nkko · 15h ago
Not necessarily; it redirects to their main domain. This is typically done when sending mass emails, so as not to hurt the main domain’s sender reputation.
yayadarsh · 16h ago
This blog post sounds like an LLM, and of course it was written by one (as admitted at the end).
YetAnotherNick · 18h ago
As someone who attempted it, it was such a bad challenge. Firstly, you can get close to optimal pretty easily; the first challenge was easy to solve exactly and optimally using DP (a simplified sketch of that kind of DP follows this comment). Secondly, that doesn't matter, because the optimal solution has a big deviation based on RNG, and you just need to submit the challenge multiple times until you get lucky.

That's why a challenge problem should take in code, run it against hidden cases on their server, and reveal the results post-contest, not allow submissions via API call.
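A minimal sketch of the kind of DP described above, assuming a simplified single-constraint version of the game: fill `capacity` seats with at least `quota` attribute-holders, where each arrival independently has the attribute with probability `p`. The function name, parameters, and example numbers are illustrative assumptions, not the actual challenge spec or the commenter's solution.

```python
def expected_rejections(capacity: int, quota: int, p: float) -> float:
    """Minimum expected number of rejections needed to fill `capacity` seats
    with at least `quota` attribute-holders, when each arrival independently
    has the attribute with probability p (assumes 0 < p <= 1).

    Simplified single-constraint model; illustrative only."""
    INF = float("inf")
    # V[n][q] = min expected rejections with n seats and q quota units remaining.
    V = [[INF] * (quota + 1) for _ in range(capacity + 1)]
    V[0][0] = 0.0  # venue full, quota met
    for n in range(1, capacity + 1):
        for q in range(0, min(n, quota) + 1):
            # Attribute-holders are always worth accepting: they fill a seat
            # and reduce the remaining quota.
            accept_holder = V[n - 1][max(q - 1, 0)]
            # A non-holder can be accepted only if enough seats remain
            # to still meet the quota.
            accept_other = V[n - 1][q] if n - 1 >= q else INF
            # Committing to rejecting non-holders at this state solves
            # V = p * accept_holder + (1 - p) * (1 + V), i.e.:
            reject_other = accept_holder + (1 - p) / p
            V[n][q] = min(p * accept_holder + (1 - p) * accept_other,
                          reject_other)
    return V[capacity][quota]

# e.g. 1000 seats, at least 600 attribute-holders, 50% arrival rate
print(expected_rejections(1000, 600, 0.5))
```

Even under an optimal policy, the realized rejection count varies from run to run with the arrival randomness, which is the deviation the comment refers to: the expected value above is a baseline, not a guaranteed score.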

nkko · 18h ago
Yeah, it would also be fun to see the code behind the solutions.
next_xibalba · 18h ago
This reads like it was written by an LLM:

```
Here’s what Listen did that was pure genius:

    Stage 1: Cryptic billboard → Curiosity
    Stage 2: Token puzzle → Technical community engagement
    Stage 3: OEIS speculation → Community-driven solving
    Stage 4: Berghain Challenge → Viral optimization addiction
```
the_af · 18h ago
It was. The author admits at the end Claude wrote the entire article.

Note the self-parodic humor in Claude quoting itself saying "you're absolutely right!". The author claims they didn't direct this; it truly is how Claude "sees" itself!