# Validate your data wherever practically possible

## What is "data validation"?

To understand data validation, we have to back up a little bit and consider the simplest case of tabular data.

We canonically understand tabular data as having columns and rows. Rows are, in a statistical sense, "samples". Columns, then, are measured attributes of the samples. Each of the measured attributes has a range of values for which it is semantically valid. (In statistics, this is analogous to the statistical support, which is the range of values that define the probability distribution.) Validation of tabular data, then, refers to the act of ensuring that the measured attributes are, for lack of a better word, valid.

To make this clearer, let me illustrate the ways that "validated" data might look.

From a statistical standpoint:

1. For continuous measurement data, the measurement values fall within semantically valid ranges.
    1. Unbounded data, in statistical language, have support from negative to positive infinity.
    2. Bounded data usually have at least one of "minimum" or "maximum" values stated.
2. For discrete measurement data, the measurement values fall within a set of semantically valid options.
3. There are no null values present in columns that should not have them.
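As a sketch, these statistical checks might look like the following for a hypothetical measurements table (the column names `height_cm` and `blood_type` are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "height_cm": [172.0, 181.5, 165.2],  # continuous, bounded below by 0
    "blood_type": ["A", "O", "AB"],      # discrete, fixed set of options
})

# Continuous data: values fall within a semantically valid range.
assert df["height_cm"].between(0, 300).all()

# Discrete data: values fall within a set of semantically valid options.
assert df["blood_type"].isin({"A", "B", "AB", "O"}).all()

# No null values in columns that should not have them.
assert df["height_cm"].notna().all()
```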

From a computational standpoint:

1. Each column's data are of the correct data type (integer, float, categorical, object) for interoperability with other code that you might write.
2. Column names match exactly how they are referenced in the codebase.
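The computational checks might be sketched like this (again with invented column names; `pandas.api.types` provides dtype predicates that avoid hard-coding platform-specific dtypes):

```python
import pandas as pd

df = pd.DataFrame({
    "subject_id": [1, 2, 3],
    "height_cm": [172.0, 181.5, 165.2],
})

# Each column's data are of the correct data type.
assert pd.api.types.is_integer_dtype(df["subject_id"])
assert pd.api.types.is_float_dtype(df["height_cm"])

# Column names match exactly what the rest of the codebase references.
expected_columns = ["subject_id", "height_cm"]
assert list(df.columns) == expected_columns
```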

## When to validate data

For interactive computational use cases, just-in-time checks help you identify errors in data before using them. Ideally, verification happens right after your data processing/handling function returns the data and right before you consume it. You could call this runtime data validation.
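One minimal way to sketch runtime validation, with the check hooked onto the data-loading function's return value (the `load_measurements` function and its column names are invented for illustration):

```python
import pandas as pd

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Run just-in-time checks, raising if any assumption is violated."""
    assert df["height_cm"].between(0, 300).all(), "height_cm out of range"
    assert df["height_cm"].notna().all(), "height_cm contains nulls"
    return df

def load_measurements() -> pd.DataFrame:
    # Stand-in for reading from disk or a database;
    # data are validated on the way out.
    raw = pd.DataFrame({"height_cm": [172.0, 181.5]})
    return validate(raw)

df = load_measurements()  # downstream code can now trust the data
```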

On the other hand, if your project ends up being part of a more complex pipeline, especially one with continually updating data, you might want to validate the data at the point of ingestion, catching any data points that fail your validation checks at upload time. You might even go further and re-run the validation checks on a regular schedule. If the data source is large, you might opt to validate a small sample of the data rather than perform full scans. You could call this strategy storage-time validation.
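A storage-time check over a sampled subset might be sketched as follows (the sampling fraction, seed, and column name are all illustrative assumptions):

```python
import pandas as pd

def validate_sample(df: pd.DataFrame, frac: float = 0.1, seed: int = 0) -> bool:
    """Validate a random sample rather than scanning the full table."""
    sample = df.sample(frac=frac, random_state=seed)
    in_range = sample["height_cm"].between(0, 300).all()
    no_nulls = sample["height_cm"].notna().all()
    return bool(in_range and no_nulls)

# A stand-in for a large, continually updated table.
big_table = pd.DataFrame({"height_cm": [150.0 + i % 50 for i in range(1000)]})
assert validate_sample(big_table)
```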

## Parallels to software testing

Software tests check that the functions that you write behave correctly. By contrast, data validation ensures that the input data to your functions satisfy the assumptions that you make in the data processing functions you write.

Just as you should be able to run tests on your code automatically and continuously, you should be able to continually check that the data you feed into your functions satisfy the assumptions you make about them.

## Tools for validating data

At the moment, I see two open-source projects that are well-developed and maintained for data validation.

### Pandera

Pandera targets validation of pandas dataframes in your Python code and comes with a very lightweight API for tacking on automatic runtime validation to your functions.

### Great Expectations

Great Expectations is a bit more heavyweight than Pandera and, in my opinion, is better suited to heavy-duty pipelines where data are continually fed (whether streamed or batched) into the data storage system.