DeltaDB sounds like a >git innovation for coding itself, and it would fulfill the promises Zed made in Nathan Sobo's recent debate/discussion with Steve Yegge.
Seems to solve a real problem that is growing rapidly, both in the old way and in the new way ... if it can overcome the _slop_ in LLM chats and the sheer enormity of code/data ahead. I'm trying to picture how coherence will survive.
With claims/hype/concern floating around that >90% of code will be LLM-generated within 3-6 months, and with the insinuation/tone [1] that humans will keep writing about as much code as now (at least at first) while LLM-generated code grows radically and dilutes the space (as is already happening) ... it seems like DeltaDB being done right/well is going to be do-or-die for whether coherence remains possible!
I think they're tackling a real problem that is revealing itself via agentic coding, and I think they've positioned themselves in an interesting spot.
I'd be interested to know how much data DeltaDB accumulates over time - because the level of granularity is so high - and whether they're going to want to use that data as training data?
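To put rough numbers on that question: if the history is keystroke- or chunk-level rather than commit-level, the volume gets big quickly. Here's a quick back-of-envelope sketch in Python - every parameter in it is a made-up assumption of mine for illustration, not anything DeltaDB has published:

    # Back-of-envelope estimate of how fast fine-grained edit history
    # could grow relative to ordinary git commits.
    # Every number below is an invented assumption for illustration only,
    # not a figure published by DeltaDB.

    EDITS_PER_DEV_PER_DAY = 20_000    # fine-grained deltas per developer (assumed)
    BYTES_PER_EDIT_RECORD = 200       # delta payload + metadata, uncompressed (assumed)
    COMMITS_PER_DEV_PER_DAY = 10      # coarse-grained git commits (assumed)
    BYTES_PER_COMMIT = 2_000          # average commit object size (assumed)
    DEVS = 50                         # team size (assumed)
    WORKDAYS_PER_YEAR = 250

    def yearly_bytes(events_per_day: int, bytes_per_event: int) -> int:
        # Total raw bytes produced per year across the whole team.
        return events_per_day * bytes_per_event * DEVS * WORKDAYS_PER_YEAR

    fine_grained = yearly_bytes(EDITS_PER_DEV_PER_DAY, BYTES_PER_EDIT_RECORD)
    git_history = yearly_bytes(COMMITS_PER_DEV_PER_DAY, BYTES_PER_COMMIT)

    print(f"fine-grained history: {fine_grained / 1e9:.1f} GB/year uncompressed")
    print(f"plain git commits:    {git_history / 1e9:.2f} GB/year uncompressed")
    print(f"ratio:                ~{fine_grained / git_history:.0f}x")

Under those made-up numbers that's tens of GB of raw history per year for a mid-sized team, a couple of orders of magnitude more than the git commits alone - which is exactly why the training-data question seems worth asking.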
[1] https://www.businessinsider.com/anthropic-ceo-ai-90-percent-...