Brokk: AI for Large Codebases

41 points by handfuloflight · 21 comments · 5/8/2025, 5:48:12 PM · brokk.ai

Comments (21)

insin · 13m ago
LLM for Large Codebases
danjl · 2h ago
The "Read" file list sounds a lot like Copilot Edit mode, where you manually specify the list of files that are added to the context. Similarly, Copilot has an Ask (Chat) mode that doesn't change the code. One of the downsides of all these new IDEs is that it is difficult, even for the developers of those tools, to have enough time to test out coding in each of their competitors. Also, the switching cost of changing IDEs is pretty high, even if they are forks of the same code base, which makes it hard for the users to really test out all the options. In the long run, I expect that the "larger" IDE providers will purchase the smaller ones. IOW, if you wait long enough, all the good bits will be in Copilot (or maybe Cursor with their new funding).
jbellis · 2h ago
(creator here)

idk, everyone else seems to want to take the 40 year old IDE paradigm we're all used to (really! that's how old Turbo Pascal 3 is!) and graft AI onto it. I think we need a fundamentally different design to truly take advantage of the change from "I'm mostly reading and writing code at human speeds" to "I'm mostly supervising the AI which is better at generating syntax than I am."

of course the downside to going against the crowd is that the crowd is usually right, we'll see how it goes!

danjl · 2h ago
I'd love to see things like Brokk experiment a bit more with what other information to include in our git repositories, besides the code, that helps improve AI-based code generation. For example, perhaps the repo should include more design information about the look-and-feel, as visual information or Figma files, rather than just, say, the CSS and HTML. Or it might help if the repository included more business requirements so that the AI has better information to guide prioritization of changes. Obviously other bits, like coding standards, should be included as well, though perhaps a larger context might mitigate the need for coding standards if the generated code simply followed the conventions of the existing code (which often doesn't happen).
danjl · 2h ago
I am a huge supporter of completely re-working the IDE UI as well. I'm not arguing for keeping the existing IDE interfaces; I like that folks are experimenting with entirely new ones. In fact, I'd go further and suggest that all of the overly complex interfaces used in any sort of content-creation app, like Unity, Unreal, Photoshop, as well as code IDEs, will eventually be completely refactored to remove all the old complexity in favor of either chat-based or other AI-driven interfaces. My point is simply that there are too many new AI-driven IDEs for folks to try out, even for the developers of those IDEs. Many of the features described as "differentiators" in the Brokk 101 blog video appear to be existing Copilot features. Has the author ever used Copilot? Or just Cursor? Or another AI variant?
tschellenbach · 4h ago
Wrote a guide on how to use Cursor for large codebases here: https://getstream.io/blog/cursor-ai-large-projects/. It's working well over here.

cool to see more AI tools address this

ElijahLynn · 4h ago
Thank you! I think this is the next evolution of using LLMs for coding: understanding all the context from large codebases...
jbellis · 4h ago
Hi all, Brokk creator here, happy to answer any questions!

I made an intro video with a live demo here: https://www.youtube.com/watch?v=Pw92v-uN5xI

neoncontrails · 2h ago
I'd be interested to try this out. I'm especially keen on AI tools that implement a native RAG workflow. I've given Cursor documentation links, populated my codebase with relevant READMEs and diagram files that I'm hoping might provide useful context, and yet when I ask it to assist on some refactoring task it often spends 10-20 minutes simply grepping for various symbol names and reading through file matches before attempting to generate a response. This doesn't seem like an efficient way for an LLM to navigate a medium-sized codebase. And for an IDE with first-class LLM tooling, it is a bit surprising that it doesn't seem to provide powerful vector-based querying capabilities out of the box — if implemented well, a Google-like search interface to one's codebase could be useful to humans as well as to LLMs.

What does this flow look like in Brokk? Do models still need to resort to using obsolete terminal-based CLI tools in order to find stuff?

lutzleonhardt · 2h ago
We implemented a multi-step process to find the required context:

1. Quick Context: shows the most relevant files based on a PageRank algorithm (static analysis) and semantic embeddings (via the JLama inference engine). The inputs are the instructions and the AI Workspace fragments (i.e. files).

2. Deep Scan: a richer LLM receives the summaries of the AI Workspace files (plus the instructions) and returns a recommendation of files and tests. It also recommends the type of inclusion (editable, read-only, summary/skeleton).

3. Agentic Search: the AI has access to a set of tools for finding the required files, and the tools are not limited to grep/rg. Instead it can:
- find symbols (classes, methods, ...) in the project
- ask for summaries/skeletons of files
- get class or method implementations
- find usages of symbols (where is x used?)
- list call sites (incoming/outgoing)
- ...
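
For illustration, here's a rough sketch of what that tool surface could look like as a Java interface (hypothetical names and signatures, not Brokk's actual API):

    import java.util.List;

    /** Hypothetical sketch of an agentic code-navigation tool surface,
     *  loosely modeled on the capabilities listed above. */
    public interface CodeNavigationTools {

        /** Find classes, methods, or fields whose names match a pattern. */
        List<String> findSymbols(String pattern);

        /** Return a skeleton of a file: declarations and signatures, no bodies. */
        String getFileSkeleton(String filePath);

        /** Return the full source of a single class or method. */
        String getSource(String fullyQualifiedSymbol);

        /** Answer "where is x used?" across the project. */
        List<String> findUsages(String fullyQualifiedSymbol);

        /** Incoming or outgoing call sites of a method. */
        List<String> getCallSites(String fullyQualifiedMethod, boolean incoming);
    }

Each of these would be exposed to the model as a tool call, so it can pull in exactly the context it needs instead of grepping blindly.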

You can read more about this in the Brokk.ai blog: https://brokk.ai/blog/brokk-under-the-hood

bchapuis · 3h ago
Really cool project! I tried it a couple of weeks ago with an Anthropic API key and will give it another shot.

Could you share a bit more about how you handle code summarization? Is it mostly about retaining method signatures so the LLM gets a high-level sense of the project? In Java, could this work with dependencies too, like source JARs?

More generally, how’s it been working with Java for this kind of project? Does limited GPU access ever get in the way of summarization or analysis (Jlama)?

jbellis · 2h ago
That officially makes you an early adopter, thanks!

Yes, it's basically just parsing for declarations. (If you double-click on any context in the Workspace it will show you exactly what's inside.)
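
To make that concrete, a declaration-only summary of a Java file might read something like this (a rough illustration of the idea, not the exact output format):

    // method bodies are dropped; only declarations and signatures are kept
    public class OrderService {
        public OrderService(OrderRepository repo, PaymentGateway gateway)
        public Receipt placeOrder(Order order)
        public List<Order> findOrdersForCustomer(String customerId)
        private BigDecimal applyDiscounts(Order order)
    }

That keeps the token count low while still giving the LLM a high-level map of the class.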

You have to import the dependencies via File -> Decompile Dependency, and then they get parsed like the rest of your source, just read-only.

I have a love-hate relationship with Java, mostly love lately; the OpenJDK team is doing a great job driving the language forward. It's so much faster than Python, and it's nice being able to extend the language in itself and get native performance.

Since we're just using Jlama to debounce the LLM requests, we can use a tiny model that runs fine on CPU alone. The latest Jlama supports GPU as well but we're not using that.
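
If you're curious what that lightweight relevance pass could look like in principle, here's a minimal sketch: rank candidate files by cosine similarity between an embedding of the instructions and precomputed file embeddings. The embeddings themselves come from whatever small local model you run (Jlama in our case); the code below is a simplified illustration, not Brokk's actual implementation.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    /** Minimal sketch: rank files by cosine similarity to the instruction embedding. */
    class QuickContextRanker {

        static double cosine(float[] a, float[] b) {
            double dot = 0, na = 0, nb = 0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na  += a[i] * a[i];
                nb  += b[i] * b[i];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9);
        }

        /** Return the k file paths whose embeddings are closest to the instructions. */
        static List<String> topFiles(float[] instructionVec, Map<String, float[]> fileVecs, int k) {
            return fileVecs.entrySet().stream()
                    .sorted(Comparator.comparingDouble(
                            (Map.Entry<String, float[]> e) -> -cosine(instructionVec, e.getValue())))
                    .limit(k)
                    .map(Map.Entry::getKey)
                    .toList();
        }
    }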

saratogacx · 2h ago
Likely not an important note, but the name sounds close enough to Grok that I assumed this was a spin-off of some xAI product. I had to look around to see if it was actually associated (it looks like it isn't), but it may be something to be aware of.
corysama · 2h ago
How large is "Large"? Are we testing on Unreal Engine? :D
jbellis · 2h ago
no, but I've tested on intellij (~5M loc, takes forever to import b/c of delombok, do not recommend)
lutzleonhardt · 2h ago
I tested it with Ghidra recently and got very good results
soco · 4h ago
Is there something also to read for those of us who will never watch videos?
lutzleonhardt · 4h ago
Hi, yes there are some blog posts:

https://brokk.ai/blog/brokk-under-the-hood

silverlake · 2h ago
No offense, but that video is brutally boring. Even at 1.5x speed I couldn’t get past 10 min. You should transcribe the audio and use an LLM to write a punchy sales pitch.
lutzleonhardt · 4h ago
The amazing thing here is that the Brokk AI can access your code like an IDE: it can ask for usages or gather the summary of a file before deciding to pull in the implementation of a method. It mimics how a dev navigates the codebase, and this is more reliable and token-efficient than the usual grep/rg approach.
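
As a toy example, a request like "where should I change the discount logic?" might resolve in a few targeted calls (hypothetical trace, hypothetical names) instead of repeated grep passes:

    findSymbols("*Discount*")                                   // locate candidate classes and methods
    getFileSkeleton("src/main/java/shop/PricingService.java")   // cheap overview: signatures only
    getSource("PricingService.applyDiscounts")                  // fetch only the one body that matters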

esafak · 2h ago
This ought to be an IDE plugin. Don't make me context switch.