Convo-Lang: LLM Programming Language and Runtime

75 handfuloflight 39 8/14/2025, 5:40:19 AM learn.convo-lang.ai ↗

Comments (39)

Disposal8433 · 1d ago
The new COBOL. The next step is obviously to add syntax when you need to specify the type of the variables: put the type first, then the name and its value, and finish with a semicolon because it's fun, like "int n = 0;"
convo-lang · 21h ago
I remember seeing comments like this when React and JSX first hit the scene. It's different for sure, but it solves a real problem.
taneq · 1d ago
COBOL? Hurrah! If there’s anything that would improve vibe coding, it’s a “come from” statement. :P
Y_Y · 1d ago

  MARKETING DIVISION
warkdarrior · 1d ago
"Divide by zero error encountered."
brainless · 1d ago
I have thought of this issue quite a few times. I use Claude Code, Gemini CLI, etc. for all my new projects. The typical CLAUDE.md/GEMINI.md files exist. I do not use MCPs. I ask agents to use the `gh` command; all my work happens around Git/GitHub.

But text is just that, while scripts are easier to rely on. I can prompt and document all mechanisms to, say, check code format. But once I add something, say a pre-commit hook, it becomes reliable.

I am looking for a human readable (maybe renderable) way to codify patterns.
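The pre-commit hook idea can be sketched in a few lines of shell (the trailing-whitespace check below is only a stand-in for whatever formatter or linter a project actually runs):

```shell
#!/bin/sh
# Sketch of a pre-commit hook (saved as .git/hooks/pre-commit and made
# executable). The trailing-whitespace check is illustrative; a real hook
# would invoke your project's formatter instead.

check_files() {
  status=0
  for f in "$@"; do
    # flag any line ending in whitespace
    if grep -qE '[[:space:]]+$' "$f"; then
      echo "trailing whitespace in $f" >&2
      status=1
    fi
  done
  return $status
}

# In a real hook you would check only the staged files, e.g.:
#   check_files $(git diff --cached --name-only --diff-filter=ACM)
```

Because the hook is a script rather than a prompt, it runs the same way every time, which is exactly the reliability gap being described.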

convo-lang · 21h ago
I very much agree with you. I wanted a minimal scripting language that worked with LLMs and had as few abstractions as possible.

I'm actually working on a system that uses Convo-Lang scripts as a form of "sub-agents" that are controlled by a master Convo-Lang script.

And regarding your "maybe renderable" comment, Convo-Lang scripts are parsed and stored in memory as a set of message objects similar to a DOM tree. The ConversationView in the @convo-lang/convo-lang-react NPM package uses the message objects to render a conversation as a chat view and can be extended to render custom components based on tags / metadata that is attached to the messages of the conversation.
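A rough illustration of that rendering idea in plain TypeScript (the ConvoMessage shape below is invented for this example; the real message objects and ConversationView come from the @convo-lang packages):

```typescript
// Illustrative only: a simplified stand-in for a parsed message object.
interface ConvoMessage {
  role: "system" | "user" | "assistant";
  content: string;
  tags?: Record<string, string>; // metadata used to pick a custom renderer
}

// Pick a renderer based on tags, falling back to plain text.
function renderMessage(msg: ConvoMessage): string {
  if (msg.tags?.component === "rating") {
    return `[rating widget] ${msg.content}`;
  }
  return `${msg.role}: ${msg.content}`;
}

const conversation: ConvoMessage[] = [
  { role: "user", content: "How was your stay?" },
  { role: "assistant", content: "Rate us 1-5", tags: { component: "rating" } },
];

console.log(conversation.map(renderMessage).join("\n"));
```

The real ConversationView presumably does the same dispatch over a richer message tree, swapping string output for React components.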

zuzuen_1 · 1d ago
Perhaps when LLMs introduce a lot more primitives for modifying behavior, such a programming language will become necessary.

As it stands, anyone working with LLMs knows most of the work happens before and after the LLM call: doing REST calls, saving to a database, etc. Conventional programming languages work well for that purpose.

Personally, I like JSON when the data is not too huge. It's easy to read (since it's hierarchical like most declarative formats) and to parse.

zuzuen_1 · 1d ago
One pain point such a PL could address is encoding tribal knowledge about optimal prompting strategies for various LLMs, which changes with each new model release.
khalic · 1d ago
Cool concept that brings a little structure to prompts. I wouldn't use the semantic part that much, English is fine for this, but there is a real need for machine instructions. There is no need for an LLM to guess whether "main" is a function or a file, for example.
benswerd · 1d ago
How do you think about remote configurability?

Stuff like a lot of this needing to be A/B tested, models hot-swapped, and versioned in a way that's accessible to non-technical people?

How do you think about this in relation to tools like BAML?

machiaweliczny · 1d ago
Why not library?
convo-lang · 1d ago
It's both a library and a language. You can use it directly in TypeScript and JavaScript using the `convo` tagged template literal function from the @convo-lang/convo-lang NPM package.

https://www.npmjs.com/package/@convo-lang/convo-lang

Here is an example of using it in TypeScript:

```ts
import {convo} from "@convo-lang/convo-lang"

// assume userMessage holds the raw text of the user's message
const categorizeMessage=convo`

    > define
    UserMessage = struct(
        sentiment: enum("happy" "sad" "mad" "neutral")
        type: enum("support-request" "complaint" "compliment" "other")
        # An array of possible solutions for a support-request or complaint
        suggestedSolutions?: array(string)
        # The user's message verbatim
        userMessage: string
    )

    @json UserMessage
    > user
    Categorize the following user message:

    <user-message>
    ${userMessage}
    </user-message>
`

console.log(categorizeMessage)
```

And for a userMessage that looks something like:

> My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!!

The returned JSON object would look like:

```json
{
    "sentiment": "mad",
    "type": "complaint",
    "suggestedSolutions": [
        "Offer a full refund to the original payment method",
        "Provide a free replacement unit under warranty",
        "Issue a prepaid return shipping label to retrieve the broken item",
        "Offer store credit if a refund is not preferred",
        "Escalate to warranty/support team for expedited resolution"
    ],
    "userMessage": "My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!!"
}
```
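If you want to be defensive about the model's output, a hand-rolled type guard (plain TypeScript, not part of Convo-Lang) can verify that the parsed JSON actually matches the UserMessage struct from the prompt:

```typescript
// Runtime check that parsed JSON conforms to the UserMessage struct above.
interface UserMessage {
  sentiment: "happy" | "sad" | "mad" | "neutral";
  type: "support-request" | "complaint" | "compliment" | "other";
  suggestedSolutions?: string[];
  userMessage: string;
}

function isUserMessage(v: unknown): v is UserMessage {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  const sentiments = ["happy", "sad", "mad", "neutral"];
  const types = ["support-request", "complaint", "compliment", "other"];
  return (
    sentiments.includes(o.sentiment as string) &&
    types.includes(o.type as string) &&
    (o.suggestedSolutions === undefined ||
      (Array.isArray(o.suggestedSolutions) &&
        o.suggestedSolutions.every((s) => typeof s === "string"))) &&
    typeof o.userMessage === "string"
  );
}

const parsed = JSON.parse(
  '{"sentiment":"mad","type":"complaint","userMessage":"I want my money back!!!"}'
);
console.log(isUserMessage(parsed)); // true
```

Since `@json UserMessage` constrains the model but can't hard-guarantee the shape, a guard like this is a cheap safety net before the object reaches application code.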

brabel · 1d ago
I like it. Much nicer than having to use some Python SDK, in my opinion. Is this a standalone language, or does it require Python or another language to run?
convo-lang · 1d ago
It's an interpreted language. The interpreter and parser are written in TypeScript, so it does use JavaScript at runtime, but it's not a transpiled language that is simply converted to JavaScript.

The Convo-Lang CLI allows you to run .convo files directly on the command line, or you can embed the language directly into TypeScript or JavaScript applications using the @convo-lang/convo-lang NPM package. You can also use the Convo-Lang VSCode and Cursor extensions to execute prompts directly in your editor.

The Convo-Lang runtime also provides state management for ongoing conversations and handles the transport of messages to and from LLM providers. And the @convo-lang/convo-lang-react NPM package provides a set of UI components for building chat interfaces and displaying generated images.

trehans · 1d ago
I'm not sure what this is about, would anyone mind ELI5?
xwowsersx · 1d ago
Not sure I'm sold on this particular implementation, but here's my best steelman: working with LLMs through plain text prompts can be brittle...tiny wording changes can alter outputs, context handling is improvised, and tool integration often means writing one-off glue code. This is meant to be a DSL that adds structure: break workflows into discrete steps, define vars, manage state, explicitly control when and how the model acts, and so on.

It basically gives you a formal syntax for orchestrating multi-turn LLM interactions, integrating tool calls, and managing context in a predictable, maintainable way...essentially trying to bring some structure to "prompt engineering" and make it a bit more like a proper, composable programming discipline/model.

Something like that.

pryelluw · 1d ago
Like terraform for prompts.

Put that on the landing page.

convo-lang · 1d ago
I like that :) thank you
swoorup · 1d ago
Money Incinerator Lang would be a fitting name as well.
yewenjie · 1d ago
What is a motivating use case that this solves?
otabdeveloper4 · 1d ago
Riding the LLM hype train to its exhaustion.
N_Lens · 1d ago
ChooChoo!
aurumque · 1d ago
This is a really great experiment that gets a lot of things right!
convo-lang · 21h ago
Thank you. I created Convo-Lang out of my needs as a developer building AI into applications, so a lot of its features were driven by real-world needs.
mrs6969 · 1d ago
Nice try. We will eventually get there, but I think this can and needs to get better.
meindnoch · 1d ago

  @on user
  > onAskAboutConvoLang() -> (
      if(??? (+ boolean /m last:3 task:Inspecting message)
          Did the user ask about Convo-Lang in their last message
      ???) then (
  
          @ragForMsg public/learn-convo
          ??? (+ respond /m task:Generating response about Convo-Lang)
              Answer the users question using the following information about Convo-Lang
          ???
      )
  )
  
  > user
Who in their right mind would come up with such a "syntax"? An LLM?
convo-lang · 1d ago
Sometimes I feel like an LLM. It takes a little getting used to, but that's the same for any new language. And the Convo-Lang syntax highlighter helps too.

The triple question marks (???) are used to enclose natural language that is evaluated by the LLM and is considered an inline-prompt since it is evaluated inline within a function / tool call. I wanted there to be a very clear delineation between the deterministic code that is executed by the Convo-Lang interpreter and the natural language that is evaluated by the LLM. I also wanted there to be as little need for escape characters as possible.

The content in the parentheses following the triple question marks is the header of the inline-prompt and consists of modifiers that control the context and response format of the LLM.

Here is a breakdown of the header of the first inline-prompt: (+ boolean /m last:3 task:Inspecting message)

----

- modifier: +

- name: Continue conversation

- description: Includes all previous messages of the current conversation as context

----

- modifier: /m

- name: Moderator Tag

- description: Wraps the content of the prompt in a <moderator> xml tag and injects instructions into the system prompt describing how to handle moderator tags

----

- modifier: last:{number}

- name: Select Last

- description: Discards all but the last N messages (here, three) from the current conversation when used with the (+) modifier

----

- modifier: task:{string}

- name: Task Description

- description: Used by UI components to display a message to the user describing what the LLM is doing.

----

Here is a link to the Convo-Lang docs for inline-prompts - https://learn.convo-lang.ai/#inline-prompts

lnenad · 1d ago
I have to agree, it looks wild, even the simpler examples don't feel ergonomic.
ljm · 1d ago
… I think I’ll just stick with pydantic AI for now
bn-l · 1d ago
It’s a noisy / busy syntax. Just my own opinion.
devops000 · 1d ago
Why not as a library in Ruby or Python?
convo-lang · 20h ago
Hi everybody, I'm Scott, the creator of Convo-Lang. I created Convo-Lang to solve a lot of my personal needs while building AI applications.

Convo-Lang originally started off as a prompt templating and conversation state management system. It gave me a way to load a prompt template into a chat interface and reuse the same code to handle sending messages between the user and an LLM. This was in the early days of OpenAI when DaVinci was the top model.

As Convo-Lang grew in complexity I created a VSCode extension for syntax highlighting to make templates easier to read and write. And as new patterns like RAG, JSON mode, and tool calling hit the scene, I added support for them. Before long I had a pretty decent framework that was easy to integrate into TypeScript applications and solved most of my AI needs.

As I built more applications that used tool calling, I realized that I was writing less TypeScript, and a good amount of the TypeScript I was writing was basic callback functions called by tools the LLM decided to invoke. At that point I realized that if I created a simple scripting language that could do basic things like make HTTP requests, I could build the majority of my agents purely in Convo-Lang and encapsulate all of their logic in a single file.

I found the idea of encapsulating an agent in a single, simple text file very appealing, and then I did as I do. I ignored all of my other responsibilities as a developer for the next few days and built a thing \(ᵔᵕᵔ)/

After those few sleepless nights I had a full-fledged programming language and a runtime and CLI that could execute it. It's been about a year and a half since then, and I've continued to improve and refine the language.

Links:

Convo-Lang Docs - https://learn.convo-lang.ai/

GitHub - https://github.com/convo-lang/convo-lang

Core NPM package - https://www.npmjs.com/package/@convo-lang/convo-lang

All NPM packages - https://www.npmjs.com/~convo-lang

VSCode extension - https://marketplace.visualstudio.com/items?itemName=iyio.con...

r/ConvoLang sub Reddit - https://www.reddit.com/r/ConvoLang/

Any stars on GitHub would be much appreciated, thank you.

dmundhra · 1d ago
How is it different than DSPy?
xwowsersx · 1d ago
I haven't used DSPy that much, but as I understand it: this lang is more like an orchestration DSL for writing and running LLM conversations and tools, whereas DSPy is a framework that compiles and optimizes LLM programs into better-performing prompts...DSPy has automatic improvement of pipelines using its compilers/optimizers, and with DSPy you deal with modules and signatures.
croes · 1d ago
Next step, an LLM that writes convo-lang programs to program with an LLM
convo-lang · 1d ago
I've already done that LOL
gnubee · 1d ago
This looks a lot like another effective way of interacting with LLMs: english-lang. Some of english-lang's features are that it can be used to convey meaning, and it's largely accepted (network effect!). I'm excited to see what convo brings to the table /s
ttoinou · 1d ago
You're absolutely right!