Handling data

How to handle data

Handling data in a data science project is very tricky. Primarily, we have to worry about the following:

  1. Availability: How do I make the data that my project depends on available to others who want to work on the project?
  2. Validation: How do I know whether my data are exactly what I think they should be?
  3. Flow complexity: How do I combat the entropy (complexity) that grows as the project develops?
  4. Provenance: If I have a problem with the data, whom should I ask questions about it?

The notes linked in this section should give you an overview on how to approach handling data in a sane fashion on your project.

Programmatically clean your data rather than manually

Why clean your data programmatically

To illustrate, let's look at a simple but common scenario.

Data came to you from an upstream source with some errors. You went into Excel or a text editor and manually corrected those errors. You then went about your analysis. At a later date, your upstream source provided you with an updated data file... and it still contained the exact same errors you fixed manually earlier.

This scenario makes the case for baking all data cleaning (i.e. processing) steps into code. In doing so, we declare that "the source of truth for the state of my data is whatever the data source gives me", and our code does the cleaning.

What tools can we use to programmatically clean our data?

In the Python world, the pandas library is one tool that is available. Its API has become the de facto standard that other dataframe libraries target, especially those aiming to accelerate data processing.

Building on top of the pandas API, I built a port of the R package janitor, called pyjanitor. Inside pyjanitor, you'll find a library of data cleaning routines, each routine having an API that is implemented as a DataFrame class method.
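To make this concrete, here is a minimal sketch of a pandas + pyjanitor cleaning chain. The file path and column name are hypothetical, and the routines shown (clean_names, remove_empty) are just two of the many available:

import pandas as pd
import janitor  # importing pyjanitor registers its routines as DataFrame methods

df = (
    pd.read_csv("data/raw_measurements.csv")  # hypothetical raw data file
    .clean_names()  # standardize column names (lowercase, underscore-separated)
    .remove_empty()  # drop rows and columns that are entirely empty
    .dropna(subset=["measurement"])  # plain pandas methods chain in naturally too
)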

How do we design good data cleaning pipelines?

Here, it pays to think a bit like a software engineer. Your dataframe is basically your "data API" -- your downstream model, which is effectively a numerical program of some kind, depends on your input data having certain guarantees. What are these guarantees? I have described some of these considerations in Validate your data wherever practically possible, but here are some of the more common ones to consider:

  • Presence of certain columns.
  • Columns having certain dtypes.
  • Data being present/absent.

As mentioned in the data validation note, validate your data wherever you can!

One technique I find particularly handy is to map out all of the necessary data transformations as a graph. Nodes are the resulting dataframes and their columns; edges are the functions that take in dataframes and return dataframes. This map will be immensely helpful: it will help you think through where you could decompose larger data transformation functions to be more modular, and where you might be able to reuse transformed data.
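As a sketch of what this looks like in code (the file path, function names, and column names are purely illustrative), each edge in the graph becomes a small dataframe-in, dataframe-out function, and the nodes are the dataframes they produce:

import numpy as np
import pandas as pd

def drop_missing_measurements(df: pd.DataFrame) -> pd.DataFrame:
    """Edge: raw dataframe -> dataframe without null measurements."""
    return df.dropna(subset=["measurement"])

def add_log_measurement(df: pd.DataFrame) -> pd.DataFrame:
    """Edge: cleaned dataframe -> dataframe with a derived log column."""
    return df.assign(log_measurement=np.log(df["measurement"]))

raw_df = pd.read_csv("data/raw_measurements.csv")  # hypothetical raw data

# Composing the edges with .pipe keeps the graph readable in code.
cleaned = raw_df.pipe(drop_missing_measurements).pipe(add_log_measurement)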

Iteratively scope out and define the most appropriate data structures for your problem

Why you need to define good data structures

Data structures are incredibly important to any modelling problem.

Data structures, when designed well, give us an efficient handle on the problem at hand, especially when the data structure is paired with a programmatic API.

How to design good data structures for a problem

Consider the example where you have a time series measurement. Here's a simple data structure you can use: two lists. It'd look like:

time_index = [0, 5, 10, ...]
values = [193, 283, 111, ...]

Now, while simple, this isn't ideal. The time index doesn't start at 1, nor does it line up with the list positions, so it's difficult to index into the values corresponding to a particular time step. Manipulating and analyzing this data is difficult because of a poor choice of data structures.

By contrast, if we instead stuck the data inside a dataframe, things would start to look a bit more sane.

import pandas as pd

df = pd.DataFrame({"time_index": time_index, "measurement": values})

Now, our time index and measurements are no longer divorced from one another. We can write queries against them easily. Plotting is a cinch too, because the dataframe API supports it. Hence, by choosing to structure our data in a dataframe rather than in two lists, we gain a world of capabilities afforded to us from the dataframe API.
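For example, with the dataframe defined above, queries and plots become one-liners (the time cutoff here is arbitrary):

# Select the measurements taken at or after time 5.
later = df.query("time_index >= 5")

# Plot measurement against time using the dataframe's built-in plotting API.
df.plot(x="time_index", y="measurement")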

Dataframe considerations

Designing a good "dataframe" takes effort too. Once you have your raw data loaded in memory from your single source of truth (see: Define single sources of truth for your data sources), you probably will end up defining new derived columns. These are columns that are calculated on the basis of, or otherwise "derived" from, the "raw data" columns. Examples include:

  • Binarization/quantization of a continuous column.
  • Joining two dataframes together on a key column.
  • Gaussian-standardization of a column.

The "raw data" form the baseline logical unit that can be validated (see: Validate your data wherever practically possible). On top of this baseline logical unit, you can make an arbitrary number of changes to the dataframe. How many changes form a new "logical unit" of changes for which you'll want to define new schema validation checks? This is an important question to think about, because after all, your dataframes form the "data API", and it'll be implicated in the pandera schemas and data descriptors you end up writing! (see: Write data descriptor files for your data sources).

Get bootstrapped on your data science projects

Why this knowledge base exists

I'm super glad you made it to my knowledge base on bootstrapping your data science machine - otherwise known as getting set up for success with great organization and sane structure. The content inside here has been battle-tested through real-world experience with colleagues and others skilled in their computing domains, but a bit new to the modern tooling offered to us in the data science world.

This knowledge base exists because I want to encourage more data scientists to adopt sane practices and conventions that promote collaboration and reproducibility in our data work. These are practices that, through years of practice in developing data projects and open source software, I have come to see the importance of.

Where I think you, the reader, are coming from

The most important thing I'm assuming about you, the reader, is that you have experienced the same challenges I encountered when structure and workflow were absent from my work. I wrote this knowledge base for your benefit. Based on one decade (as of 2023) of continual refinement, it will show you how to:

  1. structure your computer for data analysis projects, and
  2. structure a data analysis project for maximum effectiveness.

Because I'm a Pythonista who uses Jupyter and VSCode, some tips are specific to the language and these tools. However, being a Python programmer isn't a hard requirement. More than the specifics, I hope this knowledge base imparts to you a particular philosophy of how to work. That philosophy should be portable across languages and tooling, though having specific tooling can sometimes help you adhere to the philosophy. To read more about the philosophies behind this knowledge base, check out the page: The philosophies that ground the bootstrap.

For the beginner

As you grow in your knowledge and skillsets, this knowledge base should help you keep an eye out for critical topics you might want to learn.

For the moderately experienced

If you're looking to refine your skillsets, this knowledge graph should give you the base from which you dive deeper into specific tools.

For the seasoned data scientist

If you're a seasoned data practitioner, this guide should be able to serve you the way it helps me: as a cookbook/recipe guide to remind you of things when you forget them.

Things you'll learn

The things you'll learn here cover the first steps, starting with configuring your laptop or workstation for data science development and moving on to practices that help you organize your projects, regardless of where you do your computing.

I have a recommended order below, based on my experience with colleagues and other newcomers to a project:

  1. Configure your machine
  2. Get prepped per project
  3. Navigate the packaging world
  4. Handling data
  5. Choose and customize your development environment

However, you may wish to follow the guide differently and not read it in the way I prescribed above. That's not a problem! The online version is intentionally structured as a knowledge graph and not a book so that you can explore it on your own terms.

Apply these ideas just-in-time

As you go through this content, I would also encourage you to keep in mind: Time will distill the best practices in your context. Don't feel pressured to apply every single thing you see here to your project. Incrementally adopt these practices as they make sense. They're all quite composable with one another.

Not everything written here is applicable to every single project. Indeed, rarely do I use 100% of everything I've written here. Sometimes, my projects end up being more software tool development oriented, and hence I use a lot of the software-oriented ideas. Sometimes my projects are one-off, and so I ease off on the reproducibility aspect. Most of the time, my projects require a lot of exploratory work beyond simple exploratory data analysis, and imposing structure early on can be stifling for the project.

So rather than see this collection of notes as something that we must impose on every project, I would encourage you to be picky and choosy, and use only what helps you, in a just-in-time fashion, to increase your effectiveness in a project. Just-in-time adoption of a practice or a tool is preferable, because doing so eases the pressure to be rigidly complete from the get-go. In my own work, I incorporate a practice into the project just-in-time as I sense the need for it.

Moreover, as my colleague Zachary Barry would say, none of these practices can be mastered overnight. It takes running into walls to appreciate why these practices are important. For an individual who has not yet encountered problems with disorganized code, multiple versions of the same dataset, and other issues I describe here, it is difficult to deeply appreciate why it matters to apply simple and basic software development practices to your data science work. So I would encourage you to use this knowledge base as a reference tool that helps you find out, in a just-in-time fashion, a practice or tool that helps you solve a problem.

Ways to support the project

If you wish to support the project, there are a few ways:

Firstly, I spent some time linearizing this content based on my experience guiding skilled newcomers to the DS world. That linearized version is available as an eBook on LeanPub. If you purchase a copy, you will get instructions to access the repository that houses the book source and automation to bootstrap each of your Python data science projects easily!

Secondly, you can support my data science education work on Patreon! My supporters get early access to the data science content that I make.

Finally, if you have a question regarding the content, please feel free to reach out on LinkedIn. (If I make substantial edits on the basis of your comments or questions, I might reach out to you to offer a free copy of the eBook!)

Validate your data wherever practically possible

What is "data validation"?

To understand data validation, we have to back up a little bit and consider the simplest case of tabular data.

We canonically understand tabular data as having columns and rows. Rows are, in a statistical sense, "samples". Columns, then, are measured attributes of the samples. Each measured attribute has a range of values for which it is semantically valid. (In statistics, this is analogous to the support: the range of values over which a probability distribution is defined.) Validation of tabular data, then, refers to the act of ensuring that the measured attributes are, for lack of a better word, valid.

To make this clearer, let me illustrate the ways that "validated" data might look.

From a statistical standpoint:

  1. For continuous measurement data, the measurement values fall within semantically valid ranges.
    1. Unbounded in statistical language means support from negative to positive infinity.
    2. Bounded data usually would have at least one of "minimum" or "maximum" values stated.
  2. For discrete measurement data, the measurement values fall within a set of semantically valid options.
  3. There are no null values present in columns that should not have them.

From a computational standpoint:

  1. Each column's data are of the correct data type (integer, float, categorical, object) for interoperability with other code that you might write.
  2. Columns are named exactly as they are referenced in the codebase.

When to validate data

For interactive computational use cases, just-in-time checks are handy in helping you identify errors in data before using them. That means the verification ideally happens right before you consume the data and right after your data processing/handling function returns the data. You probably could call this runtime data validation.

On the other hand, if your project ends up being part of a more complex pipeline, especially one with continually updating data, you might want to validate the data at the point of ingestion, catching any data points that fail your defined checks at the time of upload. You might even go further and run the validation checks on a regular interval. If the data source is large, you might opt to sample a small subset of data rather than perform full data scans. You might call this strategy storage-time validation.

Parallels to software testing

Software tests check that the functions that you write behave correctly. By contrast, data validation ensures that the input data to your functions satisfy the assumptions that you make in the data processing functions you write.

Just as you should be able to run tests automatically and continuously to check your code, you should be able to continually check that the data you put into your functions satisfy the assumptions you hold about them.

Tools for validating data

At the moment, I see two open-source projects that are well-developed and maintained for data validation.

Pandera

Pandera targets validation of pandas dataframes in your Python code and comes with a very lightweight API for tacking on automatic runtime validation to your functions.
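Here is a minimal sketch of what that can look like; the schema contents and the loading function are illustrative, not a prescription:

import pandas as pd
import pandera as pa

schema = pa.DataFrameSchema(
    {
        "time_index": pa.Column(int, pa.Check.ge(0)),
        "measurement": pa.Column(float, pa.Check.in_range(0, 1000)),
    }
)

@pa.check_output(schema)  # validates the returned dataframe at runtime
def load_measurements(path: str) -> pd.DataFrame:
    return pd.read_csv(path)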

Great Expectations

Great Expectations is a bit more heavyweight than Pandera, and in my opinion, it is more suitable for heavy-duty pipelines that process data continually fed (whether streamed or in batches) into the data storage system.

Your database system's schemas

If you are ingesting data into a database, which is inherently structured, rather than dumping it into a data lake, which is intrinsically unstructured, then your database schemas can serve as an automated check for some aspects of data validity, such as values being in the right range or having the right data types.
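As an illustrative sketch (the table and column names are hypothetical), the same kinds of guarantees can be written down as a table definition, here using SQLAlchemy:

from sqlalchemy import CheckConstraint, Column, Float, Integer, MetaData, Table

metadata = MetaData()

measurements = Table(
    "measurements",
    metadata,
    Column("time_index", Integer, nullable=False),  # right dtype, no nulls allowed
    Column("measurement", Float, nullable=False),
    CheckConstraint("measurement >= 0", name="measurement_non_negative"),  # right range
)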

Build your projects thinking in terms of pipelines

Why think in "pipelines"?

Our data science projects, at some point, end up looking a lot like data pipelines. Data flows through a sequence of data preparation functions, which yield so-called "clean" data. That cleaned data then flows through a trained model, which then returns a prediction. The prediction then flows through some automated reporting system, giving end-users a way to consume the result.

The science portion of data science includes the art of figuring out how that pipeline looks. Once the science portion of the work, which is essentially scoping out what we need to automate, is complete, we can now take things into an engineering paradigm where we build pipelines to automate the good things we've uncovered.

What to look out for in pipelining tools?

The biggest thing to look out for is the ability to avoid repeating unnecessary computations. Tools that do this well will provide you with a syntax for naming build steps and defining dependencies between them. They will also automatically cache intermediate results.

Other than that, some pipelining tools come with niceties. One example is a "graph view" that lets you see the dependency graph between steps. Another is a library of built-in steps for accomplishing common tasks.

What pipelining tools exist?

Here's a quick overview of pipelining tools that are available. One thing to keep in mind is that the ecosystem is changing quickly. As such, I would advise two things: Firstly, treat this listing as an incomplete and evolving document. Secondly, be ready to learn multiple tools and scope out whether they work well for your use cases.

Make

make is the "big grand daddy" of pipelining tools. It is also the lightest weight tool that you can use. It's usually shipped with every UNIX-like system, making it ubiquitous and hence easy to get started with. make uses a Makefile that lives in the project repository root directory. There's a delightful tutorial on how to use Make that you can follow to learn how to use Make.

While scoping out the tooling around Make, I learned that there are convenient companion tools available. One example is the Python package makefile2dot, which lets you visualize the Makefile dependency graph. (This composition of tools that each do one and only one thing well is very much in line with the UNIX philosophy.)

Make is usually run on a local machine. In my experience, it's most convenient for providing a top-level command-line interface to interact with the project's files. For example, I would put code style checks under a style command, allowing me to execute the command make style to conveniently run all code style checks.
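A minimal sketch of such a Makefile might look like the following; the specific tools invoked (black, flake8, pytest) are placeholders for whatever your project actually uses, and note that recipe lines must be indented with a real tab character:

.PHONY: style test

style:  # run all code style checks
	black .
	flake8 .

test:  # run the test suite
	pytest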

If you're starting with pipelining, I would recommend starting with Make ahead of the rest of the tools listed below, as its simplicity and ubiquity will help you master the concepts of pipelining.

Snakemake

Snakemake started as a bioinformatics pipelining tool but eventually grew to be a general-purpose pipelining tool. If my recollection of history is correct, it was initially designed for use on "local" (though powerful) systems, such as the heavy-duty Linux workstations that are bioinformaticians' daily driver machines. Eventually, it grew to support distributed cluster/cloud workflows as well. You can check out Snakemake's website and docs.

Kedro

Kedro is built by QuantumBlack, McKinsey's specialized data science consultancy. Kedro is somewhat opinionated about certain things, and some of its suggested practices might look slightly different from the ways I suggest doing things here, but I believe the underlying philosophy makes sense. You can check out Kedro here.

Prefect

Prefect is an open-source pipeline orchestration tool with a commercial offering by the company of the same name. One nice thing about Prefect is that its syntax is entirely in Python code, and its orchestration server comes with a dashboard for live monitoring of the jobs you've defined.

Kubeflow

Kubeflow is a pipelining tool designed to work on Kubernetes. Its primary use case is machine learning pipelines, which are sometimes one of the end products of data science projects. If your organization has already made significant investments in Kubernetes, then Kubeflow might be a viable option for you to consider.

GitHub Actions

If you live in the GitHub ecosystem, then GitHub Actions is not a bad option to consider. Its syntax for configuring builds is easy to learn, it comes with a graph view, and its ability to trigger builds automatically is superb.
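A minimal sketch of a two-step workflow is shown below; the script paths are hypothetical, and the second job declares its dependency on the first with needs:

name: pipeline
on: push

jobs:
  prepare-data:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/clean_data.py  # hypothetical data preparation step
  train-model:
    needs: prepare-data  # this job runs only after prepare-data succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scripts/train.py  # hypothetical model training step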

Define single sources of truth for your data sources

Why define single sources of truth for data

Let me describe a scenario: there's a project you're working on with others, and everybody depends on an Excel spreadsheet. This was before collaboratively editing a single Excel spreadsheet was possible. To avoid conflicts, someone creates a spreadsheet_v2.xlsx, and at the same time, another person creates spreadsheet_TE_edits.xlsx.

Which version do you trust?

The worst part? Neither of those spreadsheets contained purely raw data; they were a mix of both raw data and derived data (i.e. columns calculated from other columns). The derived data were not documented with why and how they were calculated; their provenance was unknown, in that we didn't know who made those changes or whom to ask questions about those columns.

Rather than wrestling with multiple sources of truth, a data analysis workflow becomes much more streamlined when you define a single source of truth for raw data that contains nothing derived, and then calculate the derived data in custom source code (see: Place custom source code inside a lightweight package), written in such a way that it yields logical derived data structures for the problem (see: Iteratively scope out and define the most appropriate data structures for your problem). Each single source of truth can also be described by a data descriptor file (see: Write data descriptor files for your data sources), which gives you the provenance of the file and a human-readable description of each source.

Examples of single sources of data truth in action

Data on an s3-like bucket

If your organization uses the cloud, then AWS S3 (or a compatible bucket store) might be available. A data source might be dumped there and referenced by a single URL. That URL is your single source of truth for that data.
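A sketch of what consuming that single source of truth can look like (the bucket and key below are hypothetical, and reading s3:// URLs with pandas requires an s3-compatible filesystem package such as s3fs):

import pandas as pd

# This URL is the single source of truth for the raw data.
RAW_DATA_URL = "s3://my-project-bucket/raw/measurements.csv"  # hypothetical

df = pd.read_csv(RAW_DATA_URL)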

Data on an internal data store

Your organization might have the resources to build out a data store with proper access controls and the like. It might provide a unique key and a software API (RESTful, or a Python or R package) to download data in an easy fashion. That unique key plus the API defines your single source of truth.

Data on a shared network store

Longer-lived organizations might have started out with a shared networked filesystem, with access controls granted by UNIX-style user groups. In this case, the /path/to/the/data/file + access to the shared filesystem is your source of truth.

Data on the internet

This one should be easy to grok: a URL that points to the exact CSV, Parquet, or Excel table, or a zip dump of images, is your unique identifier.

Never commit data into version control repositories

Why you should never commit data to Git

Data should never be committed into your Git repositories, because Git was designed to version small files of source code. Data are a different category of artifact from source code, and committing them into your repositories will first and foremost bloat the repository. Committing data also means the data get shipped alongside the source code to anybody who has access to the source code, which might not be in line with organizational practices.

Add data to .gitignore

That said, in a pinch sometimes you need to work with data locally, so you might have a data/ directory underneath the project root in which you temporarily store data. You might have chosen data/ rather than /tmp/ because it is easier to reference. To avoid accidentally committing any data to the repository, you might want to add the data directory to your .gitignore file:

# Above is the rest of your .gitignore
data/

The alternative is to ignore any file extensions that you know exclusively belong to the category of things called "data":

# Above is the rest of your .gitignore
*.csv
*.xlsx
*.Rdata

Write data descriptor files for your data sources

Why write data descriptor files

When you get a new CSV file, how do you know what the semantic meaning of each column is, how null values are encoded, and other background information about that file?

Usually, we'd go and ask another person. However, that's not scalable. Instead, if we provided a human-readable text file that recorded all of the aforementioned information, that would be awesome! In comes the data descriptor file. (In the clinical research world, these are also known as "data dictionaries".)

But beyond that, the data descriptor file has another benefit! It takes manual work to sit down, comb through each file, and provide a description of each of its columns, where the data came from, and more. This is all part of understanding the data-generating process, which is incredibly helpful for downstream modelling efforts. In essence, writing a data descriptor file per data file is a great first step in the exploratory data analysis (EDA) stage, because you are literally exploring the structure of the data.

These are two great reasons to write descriptor files, which beat out the single downside: "it takes time".

How do you write data descriptor files

At its most basic form, you can simply write a README file for each data source. Plain text, fully customizable.

That said, some lightweight structure can help. I have previously opted for a YAML file format, which is both human readable and computer-parseable. In that YAML file, we can describe the table schema using the frictionless data TableSchema spec. One can also go for the full JSON that they specify (but it's not as easy to write by hand). In choosing to go with a specification, we effectively gain a checklist, helping us remember to describe everything that could be necessary!
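As an illustrative sketch (the file, columns, and provenance details are made up), such a YAML descriptor might look like this, with the schema portion loosely following the frictionless TableSchema field layout:

path: data/measurements.csv
description: Time series measurements exported from the upstream instrument.
provenance:
  source: upstream lab team  # whom to ask questions about this file
  retrieved: 2023-01-15
schema:
  fields:
    - name: time_index
      type: integer
      description: Seconds since the start of the experiment.
    - name: measurement
      type: number
      description: Sensor reading; must be non-negative.
      constraints:
        minimum: 0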

Alternatives to data descriptor files

If you primarily handle tabular data (which, if my understanding is correct, forms the vast majority of data science use cases), then I would strongly suggest using pandera not only to validate your data (see: Validate your data wherever practically possible) but also to generate dataframe schemas that you can store as code. Pandera comes with the ability to generate a starter dataframe schema that you can continually update as data arrive. Storing your data descriptor as code not only allows you to annotate it with comments but also lets you use it for validation itself: a double win.
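A sketch of that workflow, assuming pandera's schema-inference utilities: infer_schema drafts a schema from an existing dataframe, and to_script renders it as Python code that you can commit and then refine by hand.

import pandera as pa

draft_schema = pa.infer_schema(df)  # df: a dataframe you have already loaded
print(draft_schema.to_script())  # paste the output into, e.g., schemas.py and refine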
