Ask HN: Data engineers, what sucks when working on exploratory data-related tasks?
5 robz75 11 6/18/2025, 10:27:48 AM
Hey guys,
Founder here. I’m working on building my next project and I don’t want to waste time solving fake problems.
Right now, what's extremely painful and annoying to do in your job? (You can be brutally honest.)
More specifically, I'm interested in how you handle exploratory data-related requests from your team.
Very curious to get your current workflows, issues and frustrations :)
You have two issues that computers cannot help with (by their nature). And this incidental complexity dominates all the rest.
1. What people want to do with data
2. Bureaucracies are willfully oblivious to this problem domain
What people actually want to do with data: Answer questions that are interesting to them. It is all about the problem domain and its geometry.
Problem: You can only falsify hypotheses when asking reality questions. Everything else will bankrupt you. You can only work with the data you have. Collecting data will always be hard. Computers are only involved because they happen to be good at crunching numbers.
Bureaucracies only care about process and never about outcomes. And LLMs can now produce random plausible PowerPoint material to satisfy this demand. Only plausibility ever mattered, because it is empirically sufficient as an excuse for CYA.
---------
Naval Ravikant (abridged): "Tell truth, don't waste word."
For exploratory data-related tasks, my pain points are mostly checking data formats or handling malformed data, so it is not a huge issue. But since you are building a product, I'll share my experience: what I need is a quick way to explore schema changes in a column of a database table (not the schema of the table itself). Imagine you have a table `user` with a column called `context` that holds a bunch of JSON payloads; I need a quick way to summarize all the "variations" of the schema of that field.
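A rough sketch of what I mean, in Python (the table, column, and connection string are placeholders, and it doesn't recurse into arrays):

```python
import json
from collections import Counter

import psycopg2  # assuming Postgres; any DB-API driver works the same way

def schema_signature(payload, prefix=""):
    """Reduce a JSON payload to a hashable (key path, type name) signature."""
    if isinstance(payload, dict):
        parts = []
        for key, value in sorted(payload.items()):
            parts.extend(schema_signature(value, f"{prefix}{key}."))
        return tuple(parts)
    return ((prefix.rstrip("."), type(payload).__name__),)

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
cur = conn.cursor()
cur.execute('SELECT context FROM "user"')  # the JSON column in question

variations = Counter()
for (context,) in cur:
    payload = context if isinstance(context, dict) else json.loads(context)
    variations[schema_signature(payload)] += 1

# One line per distinct shape of `context`, with how often it occurs.
for signature, count in variations.most_common():
    print(count, signature)
```

Today this means ad-hoc scripts like the above; I'd happily pay for a tool that does it in one click.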
Related to this is obtaining data in bulk: teams (understandably) are usually not willing to hand out direct read access to their databases and would prefer you use their API, but they've usually built APIs intended for accessing single records at a relatively slow rate. It often takes some convincing (i.e., accidentally DoSing their API) to get a more appropriate bulk solution.
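The stopgap while you wait is a throttled, paginated pull against their single-record API. A minimal sketch, assuming a typical paginated REST endpoint (URL and parameter names are hypothetical):

```python
import time

import requests

BASE_URL = "https://internal.example.com/api/records"  # hypothetical endpoint

def fetch_all(page_size=100, delay=0.5):
    """Page through a single-record-oriented API politely, one request at a time."""
    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL, params={"page": page, "per_page": page_size})
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        records.extend(batch)
        page += 1
        time.sleep(delay)  # stay under the rate limit instead of DoSing the team
    return records
```

At a few records per request this takes hours for any real table, which is usually the argument that finally gets you a bulk export.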
- Things that prevent you from starting the job. Org silos, security, and permissions
- Things that prevent you from doing the job. This is primarily data cleaning.
- Things that make the job more difficult. This involves poor tooling, and you'll struggle to break the stranglehold that SQL and python-pandas have in this area. I'll also add plotting libraries to this. Many of them suck in a seemingly unavoidable way.
On the second and third points, LLMs will most likely own these soon enough, though maybe there's room to build something small and local that's more efficient if the scope of the agent is reduced?
The first point is generally organizational, and it's very difficult to solve outside of integrating your system into the environment, which is the strategy pursued by companies like Snowflake and Databricks.
Compare eliminating outliers from the output of a spectrometer (or spectrograph) vs. from an almost linear process. One will wreck your data and the other is the only correct thing to do.
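For the almost-linear process, the standard move is residual-based trimming: fit a line, drop the points that sit far from it, refit. A sketch with numpy (the threshold is illustrative, and x/y are assumed to be numpy arrays); run the same thing on a spectrum and you delete the peaks, i.e., the signal:

```python
import numpy as np

def trim_linear_outliers(x, y, k=3.0):
    """Drop points far from a straight-line fit, using a MAD-based cutoff.

    Correct for an almost-linear process; catastrophic for spectrometer
    output, where the 'outliers' (the peaks) are the measurement itself.
    """
    slope, intercept = np.polyfit(x, y, 1)        # initial straight-line fit
    residuals = y - (slope * x + intercept)
    mad = np.median(np.abs(residuals - np.median(residuals)))
    keep = np.abs(residuals) <= k * 1.4826 * mad  # 1.4826 scales MAD to sigma
    return x[keep], y[keep]
```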
Does VS here mean Visual Studio? I would not call myself a data engineer, I just play one at work sometimes. Many hats, y'know?
VS = compared to, versus