This past week I have been experimenting with a small web platform that mixes JavaScript, music pattern generation, and large language models to create algorithmic compositions.
The idea is to have a language model suggest Strudel-style music code, which is then played back in the browser. It is still very rough: mobile support is not ideal, you need to enable audio or notifications, and the generated code sometimes breaks. Despite the bugs, it can produce interesting results when it works.
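The suggest-then-play loop can be sketched roughly as below. This is a minimal sketch, not the platform's actual code: `/api/suggest` is a hypothetical endpoint standing in for whatever LLM backend is used, and the "looks like Strudel" check is just one crude way to catch the broken generations mentioned above before handing code to the player.

```javascript
// Pull the first fenced code block out of an LLM reply.
function extractCode(reply) {
  const match = reply.match(/```(?:\w*)\n([\s\S]*?)```/);
  return match ? match[1].trim() : null;
}

// Very rough guard: reject replies with no recognizable Strudel-ish
// calls, which is one common way the generated code "breaks".
function looksLikeStrudel(code) {
  return /\b(note|sound|stack|seq)\s*\(/.test(code);
}

// Ask the model for a pattern, sanity-check it, then hand it to a
// playback callback (e.g. the Strudel REPL's evaluate function).
async function suggestAndPlay(promptText, playFn) {
  const res = await fetch("/api/suggest", { // hypothetical endpoint
    method: "POST",
    body: JSON.stringify({ prompt: promptText }),
  });
  const reply = await res.text();
  const code = extractCode(reply);
  if (code && looksLikeStrudel(code)) {
    playFn(code);
  }
}
```

In practice the guard would need to be stricter (models often wrap code in prose, or emit JavaScript that parses but is not a valid pattern), but filtering before evaluation is what keeps the bad generations from silently killing playback.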
So far it seems to lean toward cinematic or piano-style textures, but it occasionally produces unexpected rhythms.
I would love feedback from anyone interested in creative coding, generative audio, or live coding tools.
What directions would you explore if you were combining large language models and music patterns like this?