The new COBOL. The next step is obviously to add syntax when you need to specify the type of the variables: put the type first, then the name and its value, and finish with a semicolon because it's fun, like "int n = 0;"
taneq · 16h ago
COBOL? Hurrah! If there's anything that would improve vibe coding, it's a "come from" statement. :P
Y_Y · 13h ago
MARKETING DIVISION
warkdarrior · 11h ago
"Divide by zero error encountered."
brainless · 20h ago
I have thought of this issue quite a few times. I use Claude Code, Gemini CLI, etc. for all my new projects. Each of the typical CLAUDE.md/GEMINI.md files exists. I do not use MCPs. I ask agents to use the `gh` command; all my work happens around Git/GitHub.
But text is just that, while scripts are easier to rely on. I can prompt and document all mechanisms to, say, check code format. But once I add something, say a pre-commit hook, it becomes reliable.
I am looking for a human readable (maybe renderable) way to codify patterns.
zuzuen_1 · 18h ago
Perhaps when LLMs introduce a lot more primitives for modifying behavior, such a programming language would become necessary.
As it stands, anyone working with LLMs knows most of the work happens before and after the LLM call: making REST calls, saving to a database, etc. Conventional programming languages work well for that purpose.
Personally, I like JSON when the data is not too huge. It's easy to read (since it is hierarchical like most declarative formats) and parse.
zuzuen_1 · 18h ago
One pain point such a PL could address is encoding tribal knowledge about optimal prompting strategies for various LLMs, which changes with each new model release.
pryelluw · 9h ago
Like terraform for prompts.
Put that on the landing page.
convo-lang · 3h ago
I like that :) thank you
khalic · 20h ago
Cool concept that brings a little structure to prompts. I wouldn't use the semantic part that much, English is fine for this, but there is a real need for machine instructions. There is no need for an LLM to guess if "main" is a function or a file, for example.
brabel · 17h ago
I like it. Much nicer than having to use some Python SDK in my opinion. Is this a standalone language, or does it require Python or other languages to run?
convo-lang · 5h ago
It's an interpreted language. The interpreter and parser are written in TypeScript, so it does use JavaScript at runtime, but it's not a transpiled language that is just converted to JavaScript.
The Convo-Lang CLI allows you to run .convo files directly on the command line, or you can embed the language directly into TypeScript or JavaScript applications using the @convo-lang/convo-lang NPM package. You can also use the Convo-Lang VSCode and Cursor extensions to execute prompts directly in your editor.
The Convo-Lang runtime also provides state management for ongoing conversations and handles the transport of messages to and from LLM providers. And the @convo-lang/convo-lang-react NPM package provides a set of UI components for building chat interfaces and displaying generated images.
https://www.npmjs.com/package/@convo-lang/convo-lang
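For example, a minimal .convo file can be as small as a single message (a sketch using only the message syntax shown elsewhere in this thread):
``` convo
> user
Summarize the main differences between interpreted and transpiled languages.
```
Running it with the CLI sends the message to the configured LLM and prints the response.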
machiaweliczny · 21h ago
Why not library?
convo-lang · 4h ago
It's both a library and a language. You can use it directly in TypeScript and JavaScript using the `convo` tagged template literal function from the @convo-lang/convo-lang NPM package.
Here is an example of using it in TypeScript:
``` ts
import {convo} from "@convo-lang/convo-lang"
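// userMessage is assumed to be provided elsewhere (e.g. captured from a
// support form); declared here so the snippet type-checks on its own.
declare const userMessage: string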
const categorizeMessage=convo`
> define
UserMessage = struct(
sentiment: enum("happy" "sad" "mad" "neutral")
type: enum("support-request" "complaint" "compliment" "other")
# An array of possible solutions for a support-request or complaint
suggestedSolutions?: array(string)
# The user's message verbatim
userMessage: string
)
@json UserMessage
> user
Categorize the following user message:
<user-message>
${userMessage}
</user-message>
`
console.log(categorizeMessage)
```
And for a userMessage that looks something like:
----
My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!!
----
The return JSON object would look like:
``` json
{
"sentiment": "mad",
"type": "complaint",
"suggestedSolutions": [
"Offer a full refund to the original payment method",
"Provide a free replacement unit under warranty",
"Issue a prepaid return shipping label to retrieve the broken item",
"Offer store credit if a refund is not preferred",
"Escalate to warranty/support team for expedited resolution"
],
"userMessage": "My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!!"
}
```
benswerd · 22h ago
How do you think about remote configurability?
Stuff like a lot of this needing to be A/B tested, models hot-swapped, and versioned in a way that's accessible to non-technical people?
How do you think about this in relation to tools like BAML?
swoorup · 17h ago
Money Incinerator Lang would be a fitting name as well.
trehans · 20h ago
I'm not sure what this is about, would anyone mind ELI5?
xwowsersx · 17h ago
Not sure I'm sold on this particular implementation, but here's my best steelman: working with LLMs through plain text prompts can be brittle...tiny wording changes can alter outputs, context handling is improvised, and tool integration often means writing one-off glue code. This is meant to be a DSL that adds structure: break workflows into discrete steps, define vars, manage state, explicitly control when and how the model acts, and so on.
It basically gives you a formal syntax for orchestrating multi-turn LLM interactions, integrating tool calls, and managing context in a predictable, maintainable way...essentially trying to bring some structure to "prompt engineering" and make it a bit more like a proper, composable programming discipline/model.
Something like that.
aurumque · 10h ago
This is a really great experiment that gets a lot of things right!
yewenjie · 22h ago
What is a motivating use case that this solves?
otabdeveloper4 · 21h ago
Riding the LLM hype train to its exhaustion.
N_Lens · 21h ago
ChooChoo!
meindnoch · 18h ago
@on user
> onAskAboutConvoLang() -> (
if(??? (+ boolean /m last:3 task:Inspecting message)
Did the user ask about Convo-Lang in their last message
???) then (
@ragForMsg public/learn-convo
??? (+ respond /m task:Generating response about Convo-Lang)
Answer the users question using the following information about Convo-Lang
???
)
)
> user
Who in their right mind would come up with such a "syntax"? An LLM?
convo-lang · 4h ago
Sometimes I feel like an LLM. It takes a little getting used to, but that is the same for any new language. And the Convo-Lang syntax highlighter helps too.
The triple question marks (???) are used to enclose natural language that is evaluated by the LLM and is considered an inline-prompt since it is evaluated inline within a function / tool call. I wanted there to be a very clear delineation between the deterministic code that is executed by the Convo-Lang interpreter and the natural language that is evaluated by the LLM. I also wanted there to be as little need for escape characters as possible.
The content in the parentheses following the triple question marks is the header of the inline-prompt and consists of modifiers that control the context and response format of the LLM.
Here is a breakdown of the header of the first inline-prompt: (+ boolean /m last:3 task:Inspecting message)
----
- modifier: +
- name: Continue conversation
- description: Includes all previous messages of the current conversation as context
----
- modifier: /m
- name: Moderator Tag
- description: Wraps the content of the prompt in a <moderator> xml tag and injects instructions into the system prompt describing how to handle moderator tags
----
- modifier: last:{number}
- name: Select Last
- description: Discards all but the last {number} messages (three in this example) from the current conversation when used with the (+) modifier
----
- modifier: task:{string}
- name: Task Description
- description: Used by UI components to display a message to the user describing what the LLM is doing.
----
Here is a link to the Convo-Lang docs for inline-prompts - https://learn.convo-lang.ai/#inline-prompts
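For reference, here is that first inline-prompt isolated on its own (the same snippet from the comment above, no new syntax):
``` convo
??? (+ boolean /m last:3 task:Inspecting message)
Did the user ask about Convo-Lang in their last message
???
```
Reading the header left to right: continue the current conversation (+), expect a boolean back, wrap the prompt in a <moderator> tag (/m), keep only the last 3 messages (last:3), and show "Inspecting message" in the UI while it runs.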
I have to agree, it looks wild, even the simpler examples don't feel ergonomic.
ljm · 15h ago
… I think I’ll just stick with pydantic AI for now
mrs6969 · 21h ago
Nice try. We will eventually get there, but I think this can and needs to get better.
bn-l · 21h ago
It’s a noisy / busy syntax. Just my own opinion.
devops000 · 21h ago
Why not as a library in Ruby or Python?
dmundhra · 20h ago
How is it different than DSPy?
xwowsersx · 17h ago
I haven't used DSPy that much, but as I understand it: this lang is more like an orchestration DSL for writing and running LLM conversations and tools, whereas DSPy is a framework that compiles and optimizes LLM programs into better-performing prompts...DSPy has automatic improvement of pipelines using its compilers/optimizers, and with DSPy you deal with modules and signatures.
croes · 21h ago
Next step, an LLM that writes convo-lang programs to program with an LLM
convo-lang · 5h ago
I've already done that LOL
gnubee · 21h ago
This looks a lot like another effective way of interacting with LLMs: english-lang. Some of english-lang's features are that it can be used to convey meaning, and it's largely accepted (network effect!). I'm excited to see what convo brings to the table /s