(c) 2018 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This document was prepared at Caltech with financial support from the Donna and Benjamin M. Rosen Bioengineering Center.
This lesson was generated from a Jupyter notebook. You can download the notebook here.
import numpy as np
import pandas as pd
In the last lesson, we learned about Pandas and dipped our toe in to see its power. In this lesson, we will work with a more complicated data set and use Pandas to handle it and pull out what we need.
Hadley Wickham wrote a great article in favor of "tidy data." Tidy DataFrames follow the rules:

1. Each variable is a column.
2. Each observation is a row.
3. Each type of observation has its own separate DataFrame.

This is less pretty to visualize as a table, but we rarely look at data in tables. Indeed, the representation of data which is convenient for visualization is different from that which is convenient for analysis. A tidy DataFrame is almost always much easier to work with than non-tidy formats.
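To make this concrete, here is a sketch of converting a "wide" table (one column per condition) into tidy format with pd.melt. The toy frame and its column names are made up for illustration, not taken from this lesson's data set.

```python
import pandas as pd

# A wide table: one row per subject, one column per condition (toy data)
df_wide = pd.DataFrame({
    'subject': ['a', 'b'],
    'control': [10.1, 9.8],
    'treatment': [12.4, 11.9],
})

# Melt into tidy format: one row per (subject, condition) observation
df_tidy = df_wide.melt(
    id_vars='subject', var_name='condition', value_name='response'
)
```

Each row of df_tidy now holds a single observation, with the condition recorded as a value in a column rather than as a column name.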
You may raise some objections about tidy data. Here are a few, and my responses.
Objection: Looking at a table of tidy data is ugly. It is not intuitively organized. I would almost never display a tidy data table in a publication.
Response: Correct! Having tabular data in a format that is easy to read as a human studying a table is a very different thing than having it in a format that is easy to explore and work with using a computer. As my friend Daniel Chen put it, "There are data formats that are better for reporting and data formats that are better for analysis." We are using the tidy data frames for analysis, not reporting (though we will see in the coming lessons that having the data in a tidy format makes making plots much easier, and plots are a key medium for reporting.)
Objection: Isn't it better to sometimes have data arranged in other ways? Say in a matrix?
Response: This is certainly true for things like images, or raster-style data in general. It makes more sense to organize an image in a 2D matrix than to have it organized as a data frame with three columns (row in image, column in image, intensity of pixel), where each row corresponds to a single pixel. For an image, indexing it by row and column is always unambiguous: my_image[i, j] means the pixel at row i and column j.
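For the curious, here is a sketch of what the three-column tidy representation of an image would look like, using a toy 2×2 array (not data from this lesson):

```python
import numpy as np
import pandas as pd

# A tiny 2x2 "image" (toy data)
im = np.array([[0, 1],
               [2, 3]])

# One row per pixel: (row, col, intensity)
rows, cols = np.indices(im.shape)
df_im = pd.DataFrame({
    'row': rows.ravel(),
    'col': cols.ravel(),
    'intensity': im.ravel(),
})
```

The matrix form im[i, j] and the tidy form carry the same information, but for images the matrix indexing is clearly the more natural one.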
For other data, though, the matrix layout suffers from the fact that there may be more than one way to construct a matrix. If you know a data frame is tidy, you already know its structure. You need only to ask what the columns are, and then you immediately know how to access data. In other formats, you might have to read and write extensive comments to understand the structure of the data. Of course, you can read and write comments, but it opens the door for the possibility of misinterpretation or mistakes.
Objection: But what about time series? Clearly, that can be in matrix format. One column is time, and then subsequent columns are observations made at that time.
Response: Yes, that is true. But then the matrix-style described could be considered tidy, since each row is a single observation (time point) that has many facets.
Objection: Isn't this an inefficient use of memory? There tend to be lots of repeated entries in tidy data frames.
Response: Yes, there are more efficient ways of storing and accessing data. But for data sets that are not "big data," this is seldom a real issue. The extra expense in memory, as well as the extra expense in access, is a small price to pay for the simplicity and speed of the human user in accessing the data.
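If the repetition ever does become a concern, pandas can report per-column memory use, and a categorical dtype stores each repeated label only once. A minimal sketch on made-up data:

```python
import pandas as pd

# A tidy frame repeats the group label on every row (toy data)
df_rep = pd.DataFrame({
    'group': ['insomnia', 'control'] * 5000,
    'value': range(10000),
})

# Memory used by the repeated string labels
bytes_object = df_rep['group'].memory_usage(deep=True)

# A categorical keeps one copy of each label plus small integer codes
bytes_category = df_rep['group'].astype('category').memory_usage(deep=True)
```

The categorical column uses a small fraction of the memory while leaving the tidy structure untouched.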
Objection: Once it's tidy, we pretty much have to use Boolean indexing to get what we want, and that can be slower than other methods of accessing data. What about performance?
Response: See the previous response. Speed of access really only becomes a problem with big, high-throughput data sets. In those cases, there are often many things you need to be clever about.
Conclusion: I really think that tidying a data set allows for fluid exploration. We will focus on tidy data sets going forward. The techniques for bringing untidy data into tidy format use many of Pandas's functions, but are largely beyond the scope of this bootcamp. You will explore that a little bit in the exercises, but for most of the bootcamp, our data sets are already tidy.
We will again use the data set from the Beattie, et al. paper on facial matching under sleep deprivation. Let's load in the original data set and add the column on insomnia as we did in the last lesson.
df = pd.read_csv('data/gfmt_sleep.csv', na_values='*')
df['insomnia'] = df['sci'] <= 16
# Take a look
df.head()
This data set is in tidy format. Each row represents a single test on a single participant. The aspects of that person's test are given in each column. We already saw the power of having the data in this format when we did Boolean indexing in the last lesson. Now, we will see how this format allows us to easily do an operation we do again and again with data sets: split-apply-combine.
Let's say we want to compute the median percent correct face matchings for subjects with insomnia and the median percent correct face matchings for those without. Ignoring for a second the mechanics of how we would do this with Python, let's think about it in English. What do we need to do?
1. Split the data set up according to the 'insomnia' field, i.e., split it up so we have a separate data set for the two classes of subjects, those with insomnia and those without.
2. Apply a median function to the 'percent correct' column in each of these split data sets.
3. Combine the results of these medians into a new, summary data set that contains the two classes and the median percent correct for each.

We see that the strategy we want is a split-apply-combine strategy. This idea was put forward by Hadley Wickham in this paper. It turns out that this is a strategy we want to use very often. Split the data in terms of some criterion. (We may want to split by genotype.) Apply some function to the split-up data. Combine the results into a new data frame.
Note that if the data are tidy, this procedure makes a lot of sense. Choose the column you want to use to split by. All rows with like entries in the splitting column are then grouped into a new data set. You can then apply any function you want into these new data sets. You can then combine the results into a new data frame.
Pandas's split-apply-combine operations are achieved using the groupby()
method. You can think of groupby()
as the splitting part. You can then apply functions to the resulting DataFrameGroupBy
object. The Pandas documentation on split-apply-combine is excellent and worth reading through. It is extensive though, so don't let yourself get intimidated by it.
Let's go ahead and do our first split-apply-combine on this tidy data set. First, we will split the data set up by insomnia condition.
grouped = df.groupby('insomnia')
# Take a look
grouped
There is not much to see in the DataFrameGroupBy object that resulted. But there is a lot we can do with this object. Typing grouped. and hitting tab will show you the many possibilities. For most of these possibilities, the apply and combine steps happen together and a new DataFrame is returned. The grouped.median() method is exactly what we want.
# numeric_only=True skips non-numeric columns like gender
df_median = grouped.median(numeric_only=True)
# Take a look
df_median
The outputted DataFrame has the medians of all quantities, including the percent correct that we wanted. Note that this DataFrame has insomnia as the name of the row index. If we want to instead keep insomnia (which, remember, is what we used to split up the data set before we computed the summary statistics) as a column, we can use the reset_index() method.
df_median.reset_index()
We can also use multiple columns in our groupby() operation. For example, we may wish to look at four groups: male insomniacs, female insomniacs, male non-insomniacs, and female non-insomniacs. To do this, we simply pass a list of columns into df.groupby().
df.groupby(['gender', 'insomnia']).median(numeric_only=True).reset_index()
This type of operation is called an aggregation. That is, we split the data set up into groups, and then computed a summary statistic for each group, in this case the median.
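The agg() method of a grouped object also accepts a list of summary functions, producing one column per statistic. A minimal sketch on a toy frame (the values here are made up, not from the sleep data set):

```python
import pandas as pd

# Toy frame mimicking the structure of the lesson's data
df_toy = pd.DataFrame({
    'insomnia': [True, True, False, False],
    'percent correct': [60.0, 70.0, 80.0, 90.0],
})

# One row per group, one column per summary statistic
summary = df_toy.groupby('insomnia')['percent correct'].agg(['mean', 'median'])
```

This is often a convenient way to compute several aggregations in a single pass.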
Instead of summarizing data in a group with single summary statistics by aggregation, we can also do a transformation in which each row gets a new entry within a given group. As a simple example, we could generate a column that gives the rank of each participant in terms of percent correct within their group. That is, we perform a rank ordering within the insomniac group and within the non-insomniac group.
grouped['percent correct'].rank()
This gave us a column of ranks with the indexing of the original DataFrame preserved. We can put this column into the DataFrame.
df['rank grouped by insomnia'] = grouped['percent correct'].rank(method='first')
# Take a look
df.head()
To verify that this worked correctly, and also to show some nice sorting properties of DataFrames, we will sort the DataFrame by insomnia and then by percent correct and make sure the ranks worked accordingly.
df_sorted = df.sort_values(by=['insomnia', 'percent correct'])
# Look at relevant columns
df_sorted[['insomnia', 'percent correct', 'rank grouped by insomnia']]
Indeed it worked!
Let's say we want to compute the coefficient of variation (CoV, the standard deviation divided by the mean) of data in columns of groups in the DataFrame. There is no built-in function to do this. We have to write our own function to compute the CoV and then use it with the agg() method of a DataFrameGroupBy object. In the function below, the values of each column are denoted by data.
def coeff_of_var(data):
    """Compute coefficient of variation from an array of data."""
    return np.std(data) / np.mean(data)
Now we can apply it as an aggregating function.
grouped.agg(coeff_of_var)
We can take a similar approach with a transformation. In this case, we will actually do something useful. We will compute the y-values we need for plotting an empirical cumulative distribution function (ECDF), which we will use in our upcoming lessons on plotting. Briefly, an ECDF for a data set is defined as follows. The ECDF evaluated at x for a set of measurements is defined as

ECDF(x) = fraction of measurements ≤ x.

As you can convince yourself if you work it out, the y-values of an ECDF are the rankings of the entries, divided by the total number of entries. We will want to plot ECDFs for each group, so we apply the function for each group. First, we define the function.
def ecdf_y(data):
    """Give y-values of an ECDF for an unsorted column in a data frame."""
    return data.rank(method='first') / len(data)
Now, we can compute the ECDF on the data set grouped by insomnia state.
df['ecdf_y grouped by insomnia'] = grouped['percent correct'].transform(ecdf_y)
# Take a look
df.head()
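As a quick sanity check on a toy series (not the sleep data), the rank-based y-values agree with directly counting the fraction of measurements less than or equal to each value, at least when there are no ties:

```python
import pandas as pd

data = pd.Series([3.0, 1.0, 2.0, 4.0])

# Rank-based ECDF y-values, as in ecdf_y above
y_rank = data.rank(method='first') / len(data)

# Direct count: fraction of measurements <= each value
y_count = data.apply(lambda x: (data <= x).sum() / len(data))
```

Both approaches give 0.25, 0.5, 0.75, and 1.0 for the smallest through largest entries, matching the ECDF definition.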
You now have tremendous power in your hands. When your data are tidy, you can rapidly perform the ubiquitous split-apply-combine operations. Importantly, you now have a logical framework to think about how you slice and dice your data. As a final, simple example, I will show you how to go start to finish after loading the data set into a DataFrame: splitting by insomnia and gender, and then getting the quartiles and extrema, in addition to the mean and standard deviation, of the percent of correct facial matchings.
df.groupby(['gender', 'insomnia'])['percent correct'].describe()
Yes, that's right. One single, clean, easy-to-read line of code. In coming tutorials, we will see how to use tidy data to quickly generate plots.