There’s a lot of focus (and money) on scaling LLMs, but while building Twenty [1] we’ve observed that current models are already capable enough for most use cases. We think what’s missing today is a new kind of software architecture: one that evolves, learns from feedback, and compiles complex patterns to adapt to users. I wrote about this and some of our learnings in the article above — curious to hear your thoughts!
[1] https://github.com/twentyhq/twenty