Show HN: CocoIndex – Open-Source Real-Time Data Transformation Framework
I’ve been working on CocoIndex, an open-source, ultra-performant framework for transforming data for AI. It is optimized for data freshness, with incremental processing out of the box.
You can get started with `pip install cocoindex` and declare a data flow that composes transformations like LEGO: vector embeddings, knowledge graphs, or extracting and transforming data with LLMs.
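Here’s a minimal sketch of what a flow declaration looks like, adapted from the quickstart (module and parameter names may have shifted across releases, so treat the exact names as illustrative):

```python
import cocoindex

@cocoindex.flow_def(name="TextEmbedding")
def text_embedding_flow(flow_builder, data_scope):
    # Source: the ground truth the flow reacts to.
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))

    doc_embeddings = data_scope.add_collector()

    with data_scope["documents"].row() as doc:
        # Derived data: chunks and embeddings, recomputed only when
        # the source row (or the transformation logic) changes.
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)
        with doc["chunks"].row() as chunk:
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))
            doc_embeddings.collect(
                filename=doc["filename"], location=chunk["location"],
                text=chunk["text"], embedding=chunk["embedding"])

    # Target: a derived store, kept in sync with the source.
    doc_embeddings.export(
        "doc_embeddings", cocoindex.storages.Postgres(),
        primary_key_fields=["filename", "location"])
```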
It is a data processing framework that goes beyond SQL. Whether you run a data flow in live mode or batch mode, it processes the data incrementally with minimal recomputation, so target stores update very quickly when sources change.
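The CLI drives both modes; roughly (command names are from the docs, but flags may vary by release):

```
# One-time setup of the flow's backing tables (Postgres)
cocoindex setup main.py

# Batch mode: one incremental pass over current source data
cocoindex update main.py

# Live mode: keep watching sources and update targets on change
cocoindex update -L main.py
```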
The core engine is written in Rust. I’d been a big fan of Rust since before I left my last job, and it was my first choice for an open-source data framework because of 1) robustness, 2) performance, and 3) the ability to bind to different languages.
I’ve made a few tutorials and new projects since the last launch, covering different use cases: - https://www.youtube.com/@cocoindex-io - https://cocoindex.io/blogs/tags/examples
Previously, I worked at Google for 8 years on projects like search indexing and ETL infra. After I left Google last year, I built various projects and went through pivoting hell.
In all the projects I’ve built, data still sits at the center of the problem, and I found myself building data infra rather than the business logic I needed for data transformation. Prepackaged RAG-as-a-service doesn’t serve my needs: I need to choose a different strategy for the context, and I also need deduplication, clustering (since items are related), and other custom features that are commonly needed. That’s where CocoIndex starts.
There’s a simple philosophy behind it: data transformation is similar to formulas in spreadsheets. The source data is the ground truth; all the transformation steps and the final target store are derived data, and should react to source changes. If you use CocoIndex, you only need to worry about defining transformations, like formulas.
The dataflow paradigm was an immediate choice: because there are no side effects, lineage and observability come out of the box.
Incremental processing: if you are a data expert, an analogy would be a materialized view, beyond SQL. The framework tracks pipeline state in a database (Postgres) and only reprocesses the necessary portions. When data changes, the framework handles change data capture comprehensively, combining push and pull mechanisms, then clears stale derived data/versions and re-indexes based on tracked data/logic changes or data TTL settings. There are lots of edge cases to get right, for example when a row is referenced elsewhere and that row changes. These should be handled at the level of the framework.
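To illustrate the general mechanism (a hypothetical sketch of the bookkeeping, not CocoIndex’s actual internals): fingerprint each source row together with a logic version, skip recomputation when the fingerprint is unchanged, and use stale entries to find derived data to delete.

```python
import hashlib, json

# Hypothetical tracking table, for illustration only: source key -> last fingerprint.
tracked: dict[str, str] = {}

LOGIC_VERSION = "v2"  # bump when transformation logic changes; invalidates all rows

def fingerprint(row: dict) -> str:
    # Covers both the row content and the logic version, so either
    # kind of change triggers reprocessing.
    payload = json.dumps(row, sort_keys=True) + LOGIC_VERSION
    return hashlib.sha256(payload.encode()).hexdigest()

def sync(source_rows: dict[str, dict], transform, target: dict) -> None:
    seen = set()
    for key, row in source_rows.items():
        seen.add(key)
        fp = fingerprint(row)
        if tracked.get(key) == fp:
            continue                  # unchanged: skip recomputation
        target[key] = transform(row)  # new or changed: recompute derived data
        tracked[key] = fp
    for key in list(tracked):
        if key not in seen:           # deleted at source: clear stale derived data
            del tracked[key]
            target.pop(key, None)
```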
At the compute engine level, the framework needs to account for multiple processes and concurrent updates, and for resuming existing state after a terminated execution. In the end, we want a framework that is easy to build with at exceptional velocity, yet scalable and robust in production.
The interface is standardized throughout the data flow, making it really easy to plug in custom logic like LEGO, alongside a variety of native built-in components. For example, it takes only a few lines to switch among Qdrant, Postgres, and Neo4j; see the sketch below.
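Switching the export target from Postgres to Qdrant is roughly a one-line change (target names are from the docs; the constructor parameters are illustrative):

```python
# Before: export collected rows to Postgres (with pgvector)
doc_embeddings.export(
    "doc_embeddings", cocoindex.storages.Postgres(),
    primary_key_fields=["filename", "location"])

# After: same flow, different target store
doc_embeddings.export(
    "doc_embeddings",
    cocoindex.storages.Qdrant(collection_name="doc_embeddings"),
    primary_key_fields=["filename", "location"])
```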
CocoIndex is licensed under Apache 2.0: https://github.com/cocoindex-io/cocoindex. Getting started: https://cocoindex.io/docs/getting_started/quickstart
I have rolled out over 25 releases since the last HN launch, and it has improved significantly in all aspects, especially support for property graphs (Neo4j, Kuzu), queue-based CDC (AWS S3 + SQS), and lots of infra updates including the CLI, resilience, and error handling.
Excited to hear your thoughts, and thank you so much!
Linghua