Ask HN: Data engineers, what sucks when working on exploratory data-related tasks?

11 points by robz75 a day ago

20 comments

Hey guys,

Founder here. I’m working on building my next project and I don’t want to waste time solving fake problems.

Right now, what's extremely painful and annoying to do in your job? (You can be brutally honest.)

More specifically, I'm interested in how you handle exploratory data-related tasks from your team.

Very curious to get your current workflows, issues and frustrations :)

PaulShin 14 hours ago

Great question. As a founder also working on my next project, the fear of "solving fake problems" is something I think about every day. Thanks for asking it.

For me, the single most frustrating part of any data-related task isn't the data itself. It's the "work about the work" – the soul-crushing feeling that I'm doing the same thing two or three times in different windows.

The biggest irony is that this is often caused by the very "smart work" tools that are supposed to make us more productive.

My typical workflow looks like this:

1. A request for data comes in on Slack.

2. I pull the data, analyze it, and share a conclusion in the Slack thread.

3. Then I have to go to Jira to create a ticket that summarizes what I just said on Slack.

4. Finally, I have to open Notion to write a brief document explaining the findings for the record.

The context is constantly being copied, pasted, and fragmented. It's exhausting and feels like a waste of human potential.

This isn't a pitch, but this exact frustration is the only thing I'm focused on solving right now. My entire thesis is that the endless context switching between our communication layer (chat) and our execution layer (tasks, docs) is the biggest source of "fake work" in modern companies.

I'm building a tool where that entire "copy/paste the context" cycle is eliminated. A place where the conversation is the task, is the doc, is the context—all in one single flow.

I'm just a founder who is sincerely obsessed with this problem, and it's validating to see I'm not the only one who feels this pain.

axegon_ 9 hours ago

Not a data engineer, but my work revolves around processing a ton of data (let's call it partial data engineering). Much of the data I get is entered by humans from different sources, platforms, and countries. My biggest pain in a nutshell: the human factor. Believe it or not, people have managed to misspell "Austria" over 11,000 times (accents, spaces, different encodings, alphabets, languages, null characters and so on). Multiply that by 250-something countries, multiply that by around 90-100 other fields which suffer from similar issues, and multiply that by 2.something billion rows and you get the picture.

  • didgetmaster 9 hours ago

    I am building a new data management system that can also handle relational data well. It is really good at finding anomalies in data to help clean it up after the fact, but I wanted to find a way to prevent the errors from polluting the data set in the first place.

    My solution was to enable the user to create a 'dictionary' of valid values for each column in the table. In your case you would create a list of all valid country names for that particular column. When inserting a new row, it checks to make sure the country name matches one of the values. I thought this might slow things down significantly, but testing shows I can insert millions of rows with just a minor performance hit.

    The next step is to 'auto correct' error values to the closest matching one instead of just rejecting it.
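
    Roughly what I mean, as a minimal Python sketch (the dictionary, cutoff, and values are illustrative, not my actual implementation):

      import difflib

      # Illustrative dictionary of valid values for one column
      VALID_COUNTRIES = ["Austria", "Australia", "Germany", "France"]

      def normalize_country(raw, cutoff=0.8):
          # Return a valid country name, auto-corrected to the closest
          # match, or None if nothing is close enough (reject the row).
          value = raw.strip()
          if value in VALID_COUNTRIES:
              return value
          match = difflib.get_close_matches(value, VALID_COUNTRIES, n=1, cutoff=cutoff)
          return match[0] if match else None

      print(normalize_country("Austira"))      # -> "Austria"
      print(normalize_country("Oesterreich"))  # -> None (needs a real alias table)

    In the real system this would of course run at insert time inside the engine; the sketch is just the validate-then-correct logic, and the cutoff is the interesting tuning knob.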

    • axegon_ 9 hours ago

      This isn't wildly different from what I've done but it's the sheer volume of crap that's scattered through all the different fields. The countries are the least of my problems. There are others where I'm faced with tens of millions of different combinations. The countries are a relatively trivial problem in comparison.

      • didgetmaster 5 hours ago

        My solution will catch a lot of trivial errors like simple misspellings. You could have a relational table with dozens of columns, each with its own dictionary, but that won't catch wrong combinations between columns.

        For example, there is a Paris, Texas; but I doubt there is a London, Texas. Dictionaries of state names and city names would not catch someone's error of matching the wrong city with a state when entering an address.
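
        A rough sketch of catching that in Python (the pair list is a toy; a real one would come from a reference dataset like a gazetteer):

          # Toy reference data; per-column dictionaries would pass
          # "London" and "Texas" individually, only the pair check fails.
          VALID_CITY_STATE = {("Paris", "Texas"), ("Austin", "Texas"), ("London", "Kentucky")}

          def check_city_state(city, state):
              if (city, state) not in VALID_CITY_STATE:
                  return f"invalid combination: {city!r}, {state!r}"
              return "ok"

          print(check_city_state("Paris", "Texas"))   # ok
          print(check_city_state("London", "Texas"))  # invalid combination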

        Is this the kind of error you encounter often?

clejack a day ago

The main issues for problems like this fall into three categories:

- Things that prevent you from starting the job. Org silos, security, and permissions

- Things that prevent you from doing the job. This is primarily data cleaning.

- Things that make the job more difficult. This involves poor tooling, and you'll struggle to break the stranglehold that SQL and python-pandas have in this area. I'll also add plotting libraries to this. Many of them suck in a seemingly unavoidable way.

On the second and third points, LLMs will most likely own these soon enough, though maybe there's room to build something small and local that's more efficient if the scope of the agent is reduced?

The first point is generally organizational, and it's very difficult to solve outside of integrating your system into an environment, which is the strategy pursued by companies like Snowflake and Databricks.

  • robz75 a day ago

    What are the pain points you are facing with data cleaning? How do you handle it for now?

    • dapperdrake a day ago

      Data cleaning depends on the problem domain.

      Compare output from a spectrometer (or spectrograph) vs. eliminating outliers from an almost linear process. One will wreck your data and the other is the only correct thing to do.

daemonologist a day ago

As clejack said, "Org silos, security, and permissions" - this is usually the largest single time sink on any project that needs production data.

Related to this is obtaining data in bulk - teams (understandably) are usually not willing to hand out direct read access to their databases and would prefer you use their API, and they've usually built APIs intended for accessing single records at a relatively slow rate. It often takes some convincing (DoSing their API) to get a more appropriate bulk solution.

  • ahahs a day ago

    my experiences are pretty much this. having db access would make my life so much easier.

dapperdrake a day ago

Have been working on this for a while with real stakes.

You have two issues that computers cannot help with (by their nature). And this incidental complexity dominates all the rest.

1. What people want to do with data

2. Bureaucracies are willfully oblivious to this problem domain

What people actually want to do with data: Answer questions that are interesting to them. It is all about the problem domain and its geometry.

Problem: You can only falsify hypotheses when asking reality questions. Everything else will bankrupt you. You can only work with the data that you have. Collecting data will always be hard. Computers are only involved because they happen to be good at crunching numbers.

Bureaucracies only care about process and never about outcomes. And LLMs can now produce random plausible PowerPoint material to satisfy this demand. Only plausibility ever mattered, because it is empirically sufficient as an excuse for CYA.

---------

Naval Ravikant (abridged): "Tell truth, don't waste word."

ferguess_k a day ago

Mostly human problems, especially if you work with analytics teams. I need a PO for data. We usually don't have a dedicated PO for data products, so we have to do all the requirements finding by ourselves.

For exploratory data-related tasks, these are mostly related to checking data format or malformed data, so it is not a huge issue. But since you are building a product, I'll share my experience -> What I need is a quick way to explore schema changes in a column of a database table (not the schema of the table). Imagine you have a table `user` with a column, say `context`, which is a bunch of JSON payloads. I need a quick way to summarize and give me all the "variations" of the schema of that field.
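
To make that concrete, this is roughly the kind of throwaway script I end up writing by hand today (assuming the column can be pulled out as raw JSON strings; the table and values are made up):

    import json
    from collections import Counter

    # Rows as they might come back from: SELECT context FROM user
    rows = [
        '{"plan": "free", "flags": ["a"]}',
        '{"plan": "pro", "flags": ["a", "b"], "seats": 5}',
        '{"plan": "free", "flags": null}',
    ]

    def shape(payload):
        # Reduce one JSON payload to a hashable (key, type) signature.
        doc = json.loads(payload)
        return tuple(sorted((k, type(v).__name__) for k, v in doc.items()))

    variations = Counter(shape(r) for r in rows)
    for sig, count in variations.most_common():
        print(count, dict(sig))
    # 1 {'flags': 'list', 'plan': 'str'}
    # 1 {'flags': 'list', 'plan': 'str', 'seats': 'int'}
    # 1 {'flags': 'NoneType', 'plan': 'str'}

Doing that on a real table means sampling, nested objects, and drift over time, which is exactly the part I'd love a tool for.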

saulpw 19 hours ago

VisiData makes a lot of things easier.

squircle a day ago

Conversations and interviews > Jupyter notebook

  • robz75 a day ago

    Why? What's currently annoying about notebooks that you have to deal with VS just directly going to users?

    • squircle a day ago

      Ah, well, rereading your original post I realize now this isn't necessarily painful for me. Perhaps though, the annoying aspect is seeing others use proprietary excel spreadsheets without a data lake. Conway's Law?

      Does VS here mean Visual Studio? I would not call myself a data engineer, I just play one at work sometimes. Many hats, yknow?

      • robz75 a day ago

        "the annoying aspect is seeing others use proprietary excel spreadsheets without a data lake" => what's painful about that?

        VS = compared to, versus

        • squircle a day ago

          Hah, okay. I read VS differently from vs. The pain, in part, is hidden functions, rarely any inline documentation, difficulty reusing or repurposing, being Windows-centric, etc.