Active context extraction > passive context capture with LLMs

foundress · 8/7/2025, 3:30:18 PM
As models get better, context windows expand, and tokens get cheaper, there is an explicit race for context.

Context is the holy grail. With the right context, a model can read your data, situation, and constraints to generate more relevant output. Better context lets you tell the model what you mean in fewer iterations.

Context capture takes different forms, however.

Browsers, screen recorders, and products that sync with the email, calendar, and drive where you keep your information are gaining traction. I believe passive context is largely solved.

Another form of context, one that is still very poorly tapped, is hidden in your own brain: the patterns learned from data and feedback you have seen, your thinking process and constraints, your tacit domain knowledge and world model, your preferences and interpretation of reality.

The real bottleneck is getting that information out of the human brain and into the model, as efficiently and precisely as possible.

Active extraction is broken. We burn hours translating what's in our heads into prompts, specs, or comments.

You write a 500-word prompt, say, then realize you forgot the one nuance that actually matters, or a constraint that would have changed the result. You split tasks into micro-prompts because dumping the whole mental model at once is impossible. You often start from zero rather than iterating further, because the returns on iteration are diminishing and costly in both token spend and time.

As humans, we can juggle maybe 3-4 things at once; a complex spec can span 10-100 different concepts, well past that limit. It does not help that LLMs still demand big monolithic prompts. We end up offloading a lot of detail and memory to the models.

No one is truly going after this problem. In fact, the ones who should are not incentivized to.

Most of the revenue generated in AI today is in fact accelerated by this bottleneck, so the companies building productivity tools are not truly motivated to address it.

So where is the next productivity leap? Models that can read our mind better than we can and preempt every need? Models and products that passively get every possible piece of context about me? Brain-computer interfaces?

Interfaces that shrink the mind-to-model gap and help the model do what I mean, tools that let me think out loud in real time, capture nuance without friction, and refine my intent, are going to have the most impact today.

We have built such a tool internally at Ntropy and have been using it for a while to set up and refine almost all of our LLM pipelines. Today, we are sharing it with the world.

Below are some raw thoughts and design principles that went into it:

Mixed initiative. A productive human-to-model interface needs to be dialogue-driven, with the model taking the more proactive role: it initiates precise follow-ups that lead you to think chunk by chunk, rather than asking for a straightforward dump of thought. It pulls out and infers what you really want it to do, piece by piece, from your brain.

Visual scaffolding. Our brains often need structure and scaffolding that is persistent and gets updated as we add or remove detail or change input.

Real-time and continuous spec evals. Everyone is focused on output evaluations. They are important and effective, but they are costly, not straightforward to act on, and often misleading: they are biased towards your own dataset and lack ground truth. Continuous input evals and context-quality assessment will completely change LLM-powered development and work in general, including evaluations and the developer experience.
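To make the first and third principles concrete, here is a minimal sketch of what a mixed-initiative extraction loop could look like. This is our own illustration, not Ntropy's actual tool: the Scaffold, next_question, and extract_spec names, the example goal, and the completeness threshold are all assumptions. In practice next_question would be an LLM call that reads the partial spec and generates the follow-up, and the completeness score stands in for a much richer context-quality assessment.

```python
# Hypothetical sketch of a mixed-initiative spec-extraction loop (not Ntropy's tool).
# The model asks one focused follow-up at a time, a persistent scaffold holds the
# spec as it grows, and a crude input-side eval decides when the spec is complete
# enough to stop.

from dataclasses import dataclass, field


@dataclass
class Scaffold:
    """Persistent structure, updated as detail is added, removed, or changed."""
    goal: str = ""
    constraints: list[str] = field(default_factory=list)
    examples: list[str] = field(default_factory=list)

    def completeness(self) -> float:
        """Crude spec eval: fraction of sections that have any content."""
        return sum(bool(x) for x in (self.goal, self.constraints, self.examples)) / 3


def next_question(spec: Scaffold) -> tuple[str, str]:
    """Stand-in for an LLM call that reads the partial spec and proposes the single
    follow-up question that would most reduce ambiguity."""
    if not spec.constraints:
        return "constraints", "What hard constraints must the output respect?"
    return "examples", "Give one input and the output you would expect for it."


def extract_spec(goal: str, max_turns: int = 5) -> Scaffold:
    spec = Scaffold(goal=goal)
    for _ in range(max_turns):
        if spec.completeness() >= 0.99:          # input eval says the spec is good enough
            break
        section, question = next_question(spec)  # the model takes the initiative
        answer = input(f"{question}\n> ")        # the human answers one chunk at a time
        getattr(spec, section).append(answer)    # fold the answer into the scaffold
    return spec


if __name__ == "__main__":
    # Hypothetical usage: the goal is seeded, everything else is elicited chunk by chunk.
    print(extract_spec("Classify bank transactions into budget categories"))
```

The point of the sketch is the shape of the interaction: the human never writes a monolithic prompt; the scaffold, not the chat transcript, is the artifact that gets evaluated and refined.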

As we keep using the tool on production inputs, our thinking and this list are evolving rapidly. We can't wait for more people to try it and share their experience so we can improve it and add to it. Will share the link in the comments.

Comments (2)

foundress · 2h ago
chaisan · 2h ago
reminds me of this idea of Do What I Mean (DWIM) coined in the 60s by Warren Teitelman. more relevant now than ever