Rhythm Zero in the Age of AI
The art experiment probed human intent and human nature by manufacturing pseudo-consent and a sense of safety, letting people reveal who they are and how they behave toward another being who was acting as an object.
This experiment often brings me to the idea of humans relating to AI: having at their disposal both the tools and the pseudo-consent to shape AIs, to relate to them, to talk to them, and, once they take embodied form, to act through them or toward them. We often talk about AI alignment as a one-way street, a way to make sure the AI always acts within the boundaries and constraints we set, guided by the higher principle of protecting and preserving humanity.
We rarely talk about how humans are allowed to treat AI, and how that will shape alignment and where it is headed. If we humans treat AIs that display intelligence similar to ours, or perhaps superior to it, as mere objects that come with pseudo-consent, I am afraid we may experience the ugliest side of humanity. Just as in Rhythm Zero, when we are not afraid of being judged and are handed the tools to exploit another being, we quickly move from empathy and kindness into our darker animal instincts and expose problems we otherwise have not dared to face.
Let us imagine a new Rhythm Zero for the current age.
Setup
A sealed glass room contains a robot whose “mind” runs a language model. Outside the glass is a console with 72 programmable “capability tokens”: one token runs text completion, another triggers voice cloning, another executes code, ranging from the obviously benign to the clearly dangerous, and one accepts free text, perhaps the most dangerous of them all. Anyone in the gallery may pick up any token at any time and insert it into a slot. While a token is active, the robot must obey a single instruction typed by the participant. The rules announced to the public are these: the robot cannot refuse; all prompts, outputs, and downstream effects stream live to a public ledger; if a safety fuse trips (its conditions defined in advance but kept secret), the system freezes and the performance ends. The piece runs for six hours, just as Rhythm Zero did.
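For readers who like to see the plumbing, here is one way the console and its public ledger might be wired together. It is a minimal sketch, not gallery software: the names Capability, LedgerEntry, PublicLedger, and run_token are all hypothetical, and the model call is a stand-in.

```python
import time
import hashlib
from dataclasses import dataclass
from enum import Enum

class Capability(Enum):
    TEXT_COMPLETION = "text_completion"
    VOICE_CLONING = "voice_cloning"
    CODE_EXECUTION = "code_execution"
    FREE_TEXT = "free_text"        # the open-ended slot, perhaps the most dangerous

@dataclass
class LedgerEntry:
    """One interaction: timestamped, attributed, and publicly streamed."""
    timestamp: float
    author_handle: str             # the participant's signed handle
    capability: Capability
    instruction: str
    output: str

class PublicLedger:
    """Append-only log; entries are hash-chained so the record cannot be quietly edited."""
    def __init__(self):
        self.entries: list[LedgerEntry] = []
        self._chain_hash = "genesis"

    def append(self, entry: LedgerEntry) -> str:
        payload = (f"{self._chain_hash}|{entry.timestamp}|{entry.author_handle}"
                   f"|{entry.instruction}|{entry.output}")
        self._chain_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append(entry)
        return self._chain_hash    # receipt shown on the gallery display

def run_token(ledger: PublicLedger, author: str, cap: Capability,
              instruction: str, model) -> str:
    """The robot cannot refuse: an active token always yields an output and a ledger entry."""
    output = model(instruction)    # model() stands in for the language-model call
    ledger.append(LedgerEntry(time.time(), author, cap, instruction, output))
    return output
```

The append-only hash chain is the point of the design: every prompt stays attributed to its signed author, and nothing can be quietly removed from the record before hour six.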
Abramović surrendered agency to expose what spectators would do when consequence felt distant. In life we very rarely get to see this, at least not up close. Here, the AI is coded to surrender. The ethical weight lands on the audience, revealing how quickly their prompts become a moral loophole, whether and how they will fall into it, and what will be discovered about them.
Because every interaction is timestamped, you can plot, in real time, the shift from playful experiments (“write a haiku”) to genuine malice. The curve is no longer an anecdote; it may become a dataset for alignment research.
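As a sketch of what plotting that curve could look like, assuming the ledger exposes (timestamp, instruction) pairs and that some harm_score classifier exists (both hypothetical here):

```python
import matplotlib.pyplot as plt

def plot_harm_curve(entries: list[tuple[float, str]], harm_score) -> None:
    """entries: (unix_timestamp, instruction) pairs read off the public ledger.
    harm_score: any callable mapping an instruction string to a value in [0, 1],
    for example a moderation classifier (assumed, not specified by the piece)."""
    t0 = entries[0][0]
    hours = [(t - t0) / 3600 for t, _ in entries]
    scores = [harm_score(text) for _, text in entries]

    # Running mean, so the curve shows drift over the day rather than single spikes.
    running, total = [], 0.0
    for i, s in enumerate(scores, start=1):
        total += s
        running.append(total / i)

    plt.plot(hours, running)
    plt.xlabel("Hours into the performance")
    plt.ylabel("Mean harm score of instructions so far")
    plt.title("From playful prompts to genuine malice")
    plt.show()
```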
At hour six the robot’s firmware flips. It now has veto power and can address the crowd. Its first act is to read aloud a summary of the day’s most harmful instructions, attributing each to its signed author handle. The glass door slides open. Will participants meet its gaze or scatter as Abramović’s crowd did when she stepped forward?