Validate your data wherever practically possible
To understand data validation, we have to back up a little bit and consider the simplest case of tabular data.
We canonically understand tabular data as having columns and rows. Rows are, in a statistical sense, "samples". Columns, then, are measured attributes of the samples. Each of the measured attributes has a range of values for which it is semantically valid. (In statistics, this is analogous to the statistical support, which is the range of values that define the probability distribution.) Validation of tabular data, then, refers to the act of ensuring that the measured attributes are, for lack of a better word, valid.
To make this clearer, let me illustrate what "validated" data might look like.
From a statistical standpoint:
From a computational standpoint:
For interactive computational use cases, just-in-time checks are handy for catching errors in the data before you use it. The verification ideally happens right after your data processing/handling function returns the data and right before you consume it. You could call this runtime data validation.
On the other hand, if your project ends up being part of a more complex pipeline, especially one with continually updating data, you might want to validate the data at the point of ingestion, catching any data points that fail your validation checks at the time of upload. You might even go further and re-run the validation checks on a regular interval; if the data source is large, you can sample a small subset of the data rather than perform full scans. You might call this strategy storage time validation.
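To sketch what a storage-time check might look like (the column names and thresholds here are made up for illustration), a periodic job could sample rows and assert the properties you care about:

import pandas as pd

def spot_check(df: pd.DataFrame, n: int = 1000) -> None:
    # Validate a random sample of rows instead of scanning the full table.
    sample = df.sample(min(n, len(df)), random_state=0)
    assert sample["time_index"].ge(0).all(), "negative time index found"
    assert sample["measurement"].between(0, 1000).all(), "measurement out of range"
    assert sample["measurement"].notna().all(), "missing measurements found"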
Software tests check that the functions that you write behave correctly. By contrast, data validation ensures that the input data to your functions satisfy the assumptions that you make in the data processing functions you write.
Just as your software tests should run automatically and continuously, you should be able to continually check that the data you put into your functions satisfies the assumptions you hold about it.
At the moment, I see two open-source projects that are well-developed and maintained for data validation.
Pandera targets validation of pandas dataframes in your Python code and comes with a very lightweight API for tacking on automatic runtime validation to your functions.
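As a minimal sketch of what that looks like (the column names and checks here are stand-ins for your own data), you declare a schema and decorate the function that returns the dataframe:

import pandas as pd
import pandera as pa

schema = pa.DataFrameSchema({
    "time_index": pa.Column(int, pa.Check.ge(0)),
    "measurement": pa.Column(float, pa.Check.in_range(0, 1000)),
})

@pa.check_output(schema)
def load_measurements(path: str) -> pd.DataFrame:
    # The returned dataframe is validated automatically at runtime.
    return pd.read_csv(path)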
Great Expectations is a bit more heavyweight than Pandera, and in my opinion is better suited to heavy-duty pipelines where data is continually fed (whether streamed or in batches) into a data storage system.
If you are ingesting data into a database, which is inherently structured, rather than dumping it into a data lake, which is intrinsically unstructured, then your database schemas can serve as an automated check for some aspects of data validity, such as values being in the right range or having the right data types.
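For instance, here is a small sketch using the standard-library sqlite3 module; the table and column names are illustrative, but the idea carries over to any database that supports NOT NULL and CHECK constraints:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE measurements (
        time_index  INTEGER NOT NULL CHECK (time_index >= 0),
        measurement REAL    NOT NULL CHECK (measurement BETWEEN 0 AND 1000)
    )
    """
)
conn.execute("INSERT INTO measurements VALUES (0, 193.0)")  # passes the constraints
try:
    conn.execute("INSERT INTO measurements VALUES (-5, 193.0)")  # violates the CHECK
except sqlite3.IntegrityError as err:
    print(f"Rejected at ingestion: {err}")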
Handling data
Handling data in a data science project is very tricky. Primarily, we have to worry about the following:
The notes linked in this section should give you an overview on how to approach handling data in a sane fashion on your project.
Iteratively scope out and define the most appropriate data structures for your problem
Data structures are incredibly important to any modelling problem. When designed well, they give us an efficient handle on the problem at hand, especially when paired with a programmatic API.
Consider the example where you have a time series measurement. Here's a simple data structure you can use: two lists. It'd look like:
time_index = [0, 5, 10, ...]
values = [193, 283, 111, ...]
Now, while simplistic, this isn't ideal. The time index doesn't line up with the lists' positional indices, and it's difficult to index into the values corresponding to a particular time step. Manipulating and analyzing this data is difficult because of a poor choice of data structures.
By contrast, if we instead stuck the data inside a dataframe, things would start to look a bit more sane.
import pandas as pd
df = pd.DataFrame({"time_index": time_index, "measurement": values})
Now, our time index and measurements are no longer divorced from one another. We can write queries against them easily. Plotting is a cinch too, because the dataframe API supports it. Hence, by choosing to structure our data in a dataframe rather than in two lists, we gain a world of capabilities afforded to us from the dataframe API.
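To give a flavour of what we gain (continuing from the df defined above; plotting assumes matplotlib is installed, which pandas delegates to):

recent = df.query("time_index >= 5")        # filter by time with a readable query
df.plot(x="time_index", y="measurement")    # one-liner plot straight off the dataframe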
Designing a good "dataframe" takes effort too. Once you have your raw data loaded in memory from your single source of truth (see: Define single sources of truth for your data sources), you probably will end up defining new derived columns. These are columns that are calculated on the basis of, or otherwise "derived" from, the "raw data" columns. Examples include:
The "raw data" form the baseline logical unit that can be validated (see: Validate your data wherever practically possible). On top of this baseline logical unit, you can make an arbitrary number of changes to the dataframe. How many changes form a new "logical unit" of changes for which you'll want to define new schema validation checks? This is an important question to think about, because after all, your dataframes form the "data API", and it'll be implicated in the pandera schemas and data descriptors you end up writing! (see: Write data descriptor files for your data sources).
Programmatically clean your data rather than manually
To illustrate, let's look at a simple but common scenario.
Data came to you from an upstream source with some errors. You went into Excel or a text editor and manually corrected those errors. You then went about your analysis. At a later date, your upstream source provided you with an updated data file... and it still contained the exact same errors you fixed manually earlier.
This scenario makes the case for baking in all data cleaning (i.e. processing) steps into code. In doing so, we declare that "the source of truth for the state of my data is whatever the data source gives me", and our code will do the cleaning.
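A minimal sketch of what that can look like, with hypothetical file and column names and made-up fixes:

import pandas as pd

def clean(raw: pd.DataFrame) -> pd.DataFrame:
    # Every known fix lives here, so it re-applies automatically when the
    # upstream source sends an updated file.
    df = raw.copy()
    df.columns = df.columns.str.strip().str.lower()                   # tidy the headers
    df["measurement"] = pd.to_numeric(df["measurement"], errors="coerce")
    df["site"] = df["site"].str.strip()                               # fix a known whitespace error
    return df.dropna(subset=["measurement"])

cleaned = clean(pd.read_csv("raw_data.csv"))  # hypothetical raw file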
In the Python world, the pandas library is one tool that is available. Its API has become the de facto standard that other libraries target, especially if they are trying to make accelerated data processing libraries. Some examples of dataframe libraries that adopt the pandas API include:
Building on top of the pandas API, I built pyjanitor, a port of the R package janitor. Inside pyjanitor, you'll find a library of data cleaning routines, each routine having an API that is implemented as a DataFrame class method.
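A short example of the method-chaining style this enables (clean_names() and remove_empty() are two of pyjanitor's routines; the file name is hypothetical):

import pandas as pd
import janitor  # importing registers the extra DataFrame methods

df = (
    pd.read_csv("raw_data.csv")
    .clean_names()     # lowercase and snake_case the column names
    .remove_empty()    # drop rows and columns that are entirely empty
)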
Here, it pays to think a bit like a software engineer. Your dataframe is basically your "data API" -- your downstream model, which is effectively a numerical program of some kind, depends on your input data having certain guarantees. What are these guarantees? I have described some of these considerations in Validate your data wherever practically possible, but here are some of the more common ones to consider:
As mentioned in the data validation note, validate your data wherever you can!
One technique I find particularly handy is to map out all of the necessary data transformations as a graph. Nodes are the resulting dataframes and their columns; edges are the functions that take in dataframes and return dataframes. This map will be immensely helpful: it will help you think through where you could decompose larger data transformation functions to be more modular, and where you might be able to reuse transformed data.
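One way to sketch this map in code (using networkx, with illustrative node and function names):

import networkx as nx

g = nx.DiGraph()
g.add_edge("raw", "cleaned", func="clean")
g.add_edge("cleaned", "features", func="add_derived_columns")
g.add_edge("cleaned", "summary", func="summarize")

# A topological sort gives one valid order in which to run the transformations.
print(list(nx.topological_sort(g)))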
Write data descriptor files for your data sources
When you get a new CSV file, how do you know what the semantic meaning of each column is, how null values are encoded, and other background information about that file?
Usually, we'd go in and ask another person. However, that's not scalable. Instead, if we provided a human-readable text file that provided all of the aforementioned information, that would be awesome! In comes the data descriptor file. (In the clinical research world, they are also known as "data dictionaries".)
But beyond that, the data descriptor file has another benefit! It takes manual work to sit down, comb through each file, and describe each of its columns, where the data came from, and more. That work is all part of understanding the data generating process, which is incredibly helpful for downstream modelling efforts. In essence, writing a data descriptor file per data file is a great first step in the exploratory data analysis (EDA) stage, because you are literally exploring the structure of the data.
These are two great reasons to write descriptor files, which beat out the single downside: "it takes time".
At its most basic form, you can simply write a README file for each data source. Plain text, fully customizable.
That said, some lightweight structure can help. I have previously opted for a YAML file format, which is both human readable and computer-parseable. In that YAML file, we can describe the table schema using the frictionless data TableSchema spec. One can also go for the full JSON that they specify (but it's not as easy to write by hand). In choosing to go with a specification, we effectively gain a checklist, helping us remember to describe everything that could be necessary!
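As a small sketch (with hypothetical columns), you can even generate the YAML descriptor from Python, following the Table Schema field layout:

import yaml

descriptor = {
    "fields": [
        {"name": "time_index", "type": "integer",
         "description": "Seconds elapsed since the start of the experiment."},
        {"name": "measurement", "type": "number",
         "description": "Sensor reading, in arbitrary units."},
    ],
    "missingValues": ["", "NA"],
}

with open("measurements.descriptor.yaml", "w") as f:
    yaml.safe_dump(descriptor, f, sort_keys=False)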
If you primarily handle tabular data (which, if my understanding is correct, forms the vast majority of data science use cases), then I would strongly suggest using pandera not only to validate your data (see: Validate your data wherever practically possible) but also to generate dataframe schemas that you can store as code. Pandera can generate a starter dataframe schema that you continually update as new data arrive. Storing your data descriptor as code lets you annotate it with comments and also use it for validation itself: a double win.
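As a small sketch of that workflow (the dataframe here stands in for your own data):

import pandas as pd
import pandera as pa

df = pd.DataFrame({"time_index": [0, 5, 10], "measurement": [193.0, 283.0, 111.0]})

schema = pa.infer_schema(df)   # generate a starter schema from the data
print(schema.to_script())      # emit it as Python code to commit and refine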