Launch HN: Nao Labs (YC X25) – Cursor for Data
See our demo here: https://www.youtube.com/watch?v=QmG6X-5ftZU
Writing code with LLMs is the new normal in software engineering. But not when it comes to manipulating data. Tools like Cursor don't interact natively with data warehouses: they autocomplete SQL blindly, without knowing your data schema. Most of us are still juggling multiple tools: writing code in Cursor, checking results in the warehouse console, troubleshooting with an observability tool, and verifying in a BI tool that no dashboard broke.
When you write code on data with LLMs, you don't care much about the code; you care about the data output. You need a tool that helps you write code relevant to your data, lets you visualize its impact on the output, and quality-checks it for you.
Christophe and I have each spent 10 years in data. Christophe was a data engineer and built data platforms for dozens of orgs; I was a head of data and helped data teams build their analytics and data products. We've seen how the business asks you to ship data fast, while you're left wondering whether one small line of code will mistakenly multiply the revenue on your CEO's dashboard by 5x. That leaves you two choices: test extensively and ship slow, or skip testing and ship fast. That's why we created nao: a tool truly adapted to data work, one that lets data teams ship at business pace.
nao is a fork of VS Code with built-in connectors for BigQuery, Snowflake, and Postgres. We built our own AI copilot and tab system and gave them a RAG of your data warehouse schemas and your codebase. We added a set of agent tools to query data, compare data, understand data tools like dbt, and assess the downstream impact of a change across your whole data lineage.
The AI tab and the AI agent write code matching your schema straight away, whether it's SQL, Python, or YAML. nao shows you code diffs and data diffs side by side, so you can visualize what your change did to the data output. And you can leave the data quality checks to the agent: it detects missing or duplicated values and outliers, anticipates breaking changes downstream, and compares dev and production data.
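To make the quality-check idea concrete, here is a minimal sketch of the kind of query such an agent could run, assuming BigQuery-style SQL and hypothetical analytics_dev/analytics_prod datasets (the names are invented for illustration, not taken from nao):

    -- Compare a dev model against production: drifting counts or
    -- aggregates are a cheap signal that a change broke something.
    SELECT
      'dev' AS env,
      COUNT(*) AS row_count,
      COUNT(DISTINCT order_id) AS distinct_orders,  -- fewer than row_count means duplicates
      COUNTIF(customer_id IS NULL) AS null_customers,
      SUM(revenue) AS total_revenue                 -- a 5x jump would show up here
    FROM analytics_dev.orders
    UNION ALL
    SELECT
      'prod',
      COUNT(*),
      COUNT(DISTINCT order_id),
      COUNTIF(customer_id IS NULL),
      SUM(revenue)
    FROM analytics_prod.orders;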
Data teams usually use nao for writing SQL pipelines, often with dbt. It helps them create data models, document them, and test them, while making sure they're not breaking data lineage or figures in the BI. In run mode, they also use it to run analytics and identify data quality bugs in production. For less technical profiles, it also helps strengthen code best practices. For large teams, it ensures the code and metrics remain well factorized and consistent.
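For readers who haven't used dbt: a model is just a SELECT statement, and a "singular" test is a query that returns failing rows. A hypothetical pair might look like this (model and column names invented for illustration):

    -- models/fct_revenue.sql: a dbt model
    SELECT
        order_date,
        SUM(amount) AS revenue
    FROM {{ ref('stg_orders') }}
    GROUP BY order_date

    -- tests/assert_revenue_non_negative.sql: a dbt singular test;
    -- any rows it returns are reported as failures
    SELECT *
    FROM {{ ref('fct_revenue') }}
    WHERE revenue < 0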
Software engineers use nao for the database exploration part: writing SQL queries with the nao tab, exploring data schemas with the agent, and writing DDL.
A question we often get is: why not just use Cursor and MCPs? Cursor has to trigger many MCP calls to get the full context of your data, while nao always has it available in one RAG. MCPs also stay in a very enclosed part of Cursor: they don't bring data context to the tab, and they don't make the UI more adapted to data workflows. Besides, nao comes pre-packaged for data teams: no extensions to set up, no MCPs to install and authenticate, no CI/CD pipelines to build. Which means even non-technical data teams can have a great developer experience.
Our long-term goal is to become the best place to work with data. We want to fine-tune our own models for SQL, Python, and YAML to give the most relevant code suggestions for data, and to broaden our coverage of data stack tools to become the one tool-agnostic editor for any data workflow.
You can try it here: https://sunshine.getnao.io/releases/ - download nao, sign up for free, and start using it. Just for the HN launch, you can create a temporary account with a simple username if you'd prefer not to use your email. For now we only have a Mac version, but Linux and Windows are coming.
We'd love to hear your feedback, and to get your thoughts on how we can further improve the data dev experience!
The chat for exploratory data analysis ("what can you tell me about this column I just added?"), the worksheets and column lineage are real game-changers for dbt development. These features feel purposefully designed for how I actually work.
Claire and Christophe are super responsive to feedback, implementing features and fixes quickly. You can see the product evolving in all the right directions!
I built Buckaroo, a data table UI for Jupyter and Pandas/Polars. It first lets you look at the data in a modern, performant table with histograms, formatting, and summary stats.
Yesterday I released autocleaning for Buckaroo. It looks at the data and heuristically chooses cleaning methods, emitting concrete code. It's fast (less than 500ms), multiple cleaning strategies can be cycled through, and you can choose the best approach for your data. For the simple problems, we shouldn't need to consult an LLM to do the obvious things.
All of this is open source and extensible.
[1] https://youtube.com/shorts/4Jz-Wgf3YDc
[2] https://github.com/paddymul/buckaroo
[3] https://marimo.io/p/@paddy-mullen/buckaroo-auto-cleaning Live WASM notebook that you can play with - no downloads or installs required
But if you use the chat/agent, you can explain that you're using Kysely and give it the warehouse context; it will probably handle this.
I didn't know Kysely, but from the gif on the project landing page it looks like the autocomplete is great? It's different from a tab, I agree though.
One quick issue - I'm unable to connect to my Postgres instance, which requires SSL.
SSH tunneling seems to be broken as well: when the box is checked, I'm unable to select a private key path and the connect button is gone.
Parsing a DB URI would be a helpful feature as well!
Thanks so much, excited to get this up and running when everything is fixed!
Just one question: what made you pick Exasol rather than going with an open-source warehouse tech (e.g. ClickHouse or a lake with Trino)?
Next on the list is a next-edit suggestion dedicated to data work, especially with dbt (or SQL transformations), where changing a query means the downstream queries have to change with it.
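To make that scenario concrete, here is a hypothetical pair of dbt models (names invented) where a rename upstream forces an edit in every downstream model that selects the column:

    -- models/stg_orders.sql: the column "amount" is renamed
    SELECT
        order_id,
        amount AS order_amount  -- the upstream change
    FROM {{ source('shop', 'orders') }}

    -- models/fct_revenue.sql: must be edited to match, otherwise
    -- it fails at run time because "amount" no longer exists
    SELECT
        order_id,
        order_amount AS revenue  -- was: amount AS revenue
    FROM {{ ref('stg_orders') }}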
The goal for us is to be the best way to do data with AI.
With data I think that is very hard. I once wrote a SQL query (without AI) which ran and showed what looked like correct numbers, only to realise years later that it was incorrect.
When doing more complex calculations, I'm not clear on how to check whether the output is correct.
Though I'd say this is like writing tests in software: you can't catch everything the first time (even at 100% code coverage), especially in data, where most of the time things break because of upstream producers.
Observability tools monitoring data live in production will still be required in the near future.
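One pattern that helps with the "numbers look right but aren't" problem is reconciling an aggregate against an independent source. A minimal sketch, with made-up table names, of an assertion query where any returned row means the figures have drifted:

    -- Reconcile pipeline revenue against the raw payments ledger.
    -- A non-empty result means the two totals disagree.
    WITH pipeline AS (
        SELECT SUM(revenue) AS total FROM analytics.fct_revenue
    ),
    ledger AS (
        SELECT SUM(amount) AS total FROM raw.payments WHERE status = 'settled'
    )
    SELECT pipeline.total AS pipeline_total, ledger.total AS ledger_total
    FROM pipeline, ledger
    WHERE ABS(pipeline.total - ledger.total) > 0.01;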