(c) 2018 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This document was prepared at Caltech with financial support from the Donna and Benjamin M. Rosen Bioengineering Center.
This lesson was generated from a Jupyter notebook. You can download the notebook here.
import glob
import numpy as np
import pandas as pd
import altair as alt
import bootcamp_utils
!head -20 data/frog_tongue_adhesion.csv
So, each frog has associated with it an age (adult or juvenile), snout-vent-length (SVL), body weight, and species (either cross or cranwelli). For a tidy DataFrame, we should have a column for each of these values. Your task is to load in the data, and then add these columns to the DataFrame. For convenience, here is a DataFrame with data about each frog.
df_frog = pd.DataFrame(data={'ID': ['I', 'II', 'III', 'IV'],
                             'age': ['adult', 'adult', 'juvenile', 'juvenile'],
                             'SVL (mm)': [63, 70, 28, 31],
                             'weight (g)': [63.1, 72.7, 12.7, 12.7],
                             'species': ['cross', 'cross', 'cranwelli', 'cranwelli']})
Note: This is a good exercise in searching through Pandas documentation and other online resources, such as StackOverflow. Remember, much of your programming efforts are spent searching through documentation and the internet.
After you have added this information to the data frame, make a scatter plot of adhesive force versus impact force and color the points by whether the frog is a juvenile or adult.
There are lots of ways to solve this problem.
The most direct way is to use the built-in pd.merge() function. This function finds a common column between two DataFrames and then uses that column to merge them, filling in values that match in the common column. This is exactly what we want.
# Load the data
df = pd.read_csv('data/frog_tongue_adhesion.csv', comment='#')
# Perform merge
df = df.merge(df_frog)
Let's look at the DataFrame to make sure it has what we expect.
df.head()
Looks good!
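To finish the exercise, we also make the scatter plot of adhesive force versus impact force, colored by age. Below is a minimal Altair sketch; I am assuming the force columns are named 'impact force (mN)' and 'adhesive force (mN)' as in the file's header, so adjust those strings if your columns differ.
# Scatter plot of adhesive force vs. impact force, colored by age
# (assumes the column names 'impact force (mN)' and 'adhesive force (mN)')
alt.Chart(df
    ).mark_point(
    ).encode(
        x=alt.X('impact force (mN):Q'),
        y=alt.Y('adhesive force (mN):Q'),
        color=alt.Color('age:N')
    )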
Peter and Rosemary Grant have been working on the Galápagos island of Daphne Major for over forty years. During this time, they have collected lots and lots of data about physiological features of finches. In 2014, they published a book with a summary of some of their major results (Grant P. R., Grant B. R., 40 years of evolution. Darwin's finches on Daphne Major Island, Princeton University Press, 2014). They made their data from the book publicly available via the Dryad Digital Repository.
We will investigate their measurements of beak depth (the distance, top to bottom, of a closed beak) and beak length (base to tip on the top) of Darwin's finches. We will look at data from two species, Geospiza fortis and Geospiza scandens. The Grants provided data on the finches of Daphne for the years 1973, 1975, 1987, 1991, and 2012. I have included the data in the files grant_1973.csv, grant_1975.csv, grant_1987.csv, grant_1991.csv, and grant_2012.csv. They are in almost exactly the same format as in the Dryad repository; I have only deleted blank entries at the end of the files.
Note: If you want to skip the wrangling (which is very valuable experience), you can go directly to part (d). You can load in the DataFrame you generate in parts (a) through (c) from the file ~/git/bootcamp/data/grant_complete.csv.
a) Load each of the files into separate Pandas DataFrames. You might want to inspect the file first to make sure you know what character the comments start with and if there is a header row.
b) We would like to merge these all into one DataFrame. The problem is that they have different header names, and only the 1973 file has a year entry (called yearband). This is common with real data. It is often a bit messy and requires some wrangling.
- First, change the name of the yearband column of the 1973 data to year. Also, make sure the year format is four digits, not two!
- Next, add a year column to the other four DataFrames. You want tidy data, so each row in the DataFrame should have an entry for the year.
- Change the column names so that all the DataFrames have the same column names. I would choose column names ['band', 'species', 'beak length (mm)', 'beak depth (mm)', 'year'].
- Concatenate the DataFrames into a single DataFrame. Be careful with indices! If you use pd.concat(), you will need to use the ignore_index=True kwarg. You might also need to use the axis kwarg.
c) The band field gives the number of the band on the bird's leg that was used to tag it. Are some birds counted twice? Are they counted twice in the same year? Do you think you should drop duplicate birds from the same year? How about different years? My opinion is that you should drop duplicate birds from the same year and keep the others, but I would be open to discussion on that. To practice your Pandas skills, though, let's delete only duplicate birds from the same year from the DataFrame. When you have made this DataFrame, save it as a CSV file.
Hint: The DataFrame methods duplicated() and drop_duplicates() will be useful.
After doing this work, it is worth saving your tidy DataFrame in a CSV document. Do this using the to_csv() method of your DataFrame. Since the indices are uninformative, you should use the index=False kwarg. (I have already done this and saved it as ~/git/bootcamp/data/grant_complete.csv, which will help you do the rest of the exercise if you have problems with this part.)
d) It is informative to plot the measurement of each bird's beak as a point in the beak depth-beak length plane. For the 1987 data, plot beak depth vs. beak length for Geospiza fortis and for Geospiza scandens. Can you see the species demarcation?
e) Do part (d) again for all years. (Hint: Check out the row encoding, and/or read about faceting in the Altair docs.) Describe what you see. Do you see changes in the differences between species (presumably as a result of introgressive hybridization)? In your plots, make sure all plots have the same range on the axes.
Upon inspecting the files, we see that the comment character is, as usual, #. There is also a header row in each file, as the first row, so the files are pretty standard. It is important to note that not all of the column headings are the same, but the units of length in the measurements are millimeters. Let's go ahead and load them in! We will load them into a list. I will use the glob module to load in all the CSV files with the substring 'grant'.
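(As a quick way to do that inspection, a peek at the top of one of the files works; this is an added illustration, assuming data/grant_1973.csv is among the files listed above.)
!head -10 data/grant_1973.csv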
# Get list of CSV files
csv_list = glob.glob('data/grant*19*.csv') + glob.glob('data/grant*20*.csv')
# Sort the list so we keep the years in order
csv_list.sort()
# Initialize list of DataFrames
df_list = []
# Load in each sequentially.
for csv_file in csv_list:
    # Read in DataFrame
    df = pd.read_csv(csv_file, comment='#')

    # Place in list
    df_list.append(df)
Let's take a quick look at the first entry in the list, just to make sure it loaded ok.
df_list[0].head()
Looks good!
b) Before moving on, we need to know what year is associated with each DataFrame. Fortunately, since we sorted the list of CSV files, we have the years in order. We can extract the year from the file names.
# Initialize years
years = []
for csv_file in csv_list:
    years.append(int(csv_file[-8:-4]))
Let's check to make sure we got them and that they are in the right order.
years
Looks good. Now, we'll proceed with the steps we need to take to clean things up. First, we'll change the name of the 'yearband' column in the 1973 DataFrame to 'year', and change its year entries from 73 to 1973.
# Rename to year
df_list[0] = df_list[0].rename(columns={'yearband': 'year'})
# No worries about Y2K
df_list[0]['year'] += 1900
# Check it out
df_list[0].head()
Great! Let's proceed to add a year column to all of the other DataFrames. As we do it, we'll just reassign the 1973 year in that DataFrame, but that's no big deal.
for i, df in enumerate(df_list):
    df_list[i]['year'] = np.ones(len(df), dtype=int) * years[i]
Let's check one to make sure it makes sense.
df_list[3].head()
Looks good. Now, we need to change the column names so they are all the same for the respective DataFrames. We have few enough DataFrames that we could do that by hand, but it is more instructive (and re-usable) if we automate it. We will write a function to rename the columns. It first sniffs out which column should be 'band', which should be 'species', and so on. We can do this with Pandas's convenient .str methods, which enable us to use string methods on many entries at once. This is perhaps best seen by example.
# Choose a DataFrame to try it on.
df = df_list[3]
# Look at the columns
df.columns
Now, if we are interested in the beak length column, we want to find a column heading that contains 'len', since pretty much anything that is about beak length would have that substring. We can use the convenient str.contains() method.
# See which column had 'len' in it
df.columns.str.contains('len')
Now, we can slice out the column heading that has 'len' in it.
df.columns[df.columns.str.contains('len')]
Finally, we just want the string, so we do
df.columns[df.columns.str.contains('len')][0]
We'll use this to identify the current column headings and then change them to what we want.
def rename_cols(df):
    """Rename columns so all DataFrames have same column headings."""
    # Sniff out the key names from names that are close
    band_key = df.columns[df.columns.str.contains('and')][0]
    species_key = df.columns[df.columns.str.contains('ecies')][0]
    length_key = df.columns[df.columns.str.contains('len')][0]
    depth_key = df.columns[df.columns.str.contains('dep')][0]
    year_key = df.columns[df.columns.str.contains('year')][0]

    # Rename the columns using renaming dictionary
    return df.rename(columns={band_key: 'band',
                              species_key: 'species',
                              depth_key: 'beak depth (mm)',
                              length_key: 'beak length (mm)',
                              year_key: 'year'})
Now, we can loop through the DataFrames and rename the columns.
for i, df in enumerate(df_list):
    df_list[i] = rename_cols(df)
# Check the result
df_list[3].head()
Finally, we do the concatenation using pd.concat(). We want to ignore the indices because they are not important identifiers.
df = pd.concat(df_list, axis=0, ignore_index=True, sort=True)
# Take a look
df
Great! We now have one convenient tidy DataFrame to work with.
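As a quick, optional sanity check that I am adding here, we can confirm that all five years made it into the tidy DataFrame and look at the final column names.
# Quick check: years and columns present in the concatenated DataFrame
print(df['year'].unique())
print(list(df.columns))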
c) First, let's look for duplicate band numbers. There are many, so we'll just write out how many. The df.duplicated() method returns True for each row if it is a duplicate. We will get all duplicates in the 'band' column, and then get the unique values in the list of all duplicates. This will tell us how many birds were measured more than once.
# Stats about how many birds were measured more than once
print('There were', len(df['band'][df['band'].duplicated()].unique()),
      'birds that were measured more than once.')
print('There were', len(df['band'].unique()), 'total birds measured.')
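The exercise also asks whether any birds were counted twice within the same year. As an added check (using the same df as above), we can count rows that repeat a (year, band) pair.
# Count measurements that duplicate a (year, band) pair already seen
print('Duplicate (year, band) measurements:',
      df.duplicated(subset=['year', 'band']).sum())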
So, most birds were only measured once. Nonetheless, let's eliminate duplicate measurements of the same bird within the same year. When we drop the duplicates, we will keep the first measurement.
# Drop all rows with matching year and band (keep first)
df = df.drop_duplicates(subset=['year', 'band'])
Finally, we will save the DataFrame as a CSV file using the df.to_csv() method. We do not want to print the indices (they are meaningless).
df.to_csv('data/grant_complete.csv', index=False)
d) Now let's make a plot of the 1987 data in the beak depth-beak length plane.
alt.Chart(df.loc[df['year']==1987, :]
    ).mark_point(
    ).encode(
        x=alt.X('beak length (mm):Q', scale=alt.Scale(zero=False)),
        y=alt.Y('beak depth (mm):Q', scale=alt.Scale(zero=False)),
        color=alt.Color('species:N')
    )
e) To generate all of the plots, we can use the row encoding.
alt.Chart(df,
          height=180,
          width=180
    ).mark_point(
        opacity=0.2
    ).encode(
        x=alt.X('beak length (mm):Q', scale=alt.Scale(zero=False)),
        y=alt.Y('beak depth (mm):Q', scale=alt.Scale(zero=False)),
        color='species',
        row='year'
    ).interactive()
When we look at the data this way, we see the two species coming together.
The Anderson-Fisher data set is a famous data set collected by Edgar Anderson and promoted by Ronald Fisher for use in his technique of linear discriminant analysis in taxonomic problems. It is now a classic data set used in data analysis. In this problem, you will explore this data set and ways of looking at it with Pandas/Altair. The data set is available in ~/data/fisher_iris.csv.
a) Generate a dash-dot plot of the petal width versus petal length. Why might this be a good way of visualizing this kind of data set?
b) Generate a matrix plot of this data set. What are the advantages of this kind of plot?
c) Explore for yourself! Come up with useful ways of plotting this multidimensional data set to help you explore it.
Before proceeding, let's load in the data set.
df = pd.read_csv('data/fisher_iris.csv')
a) We can more or less follow the example from the Altair docs to make the dash-dot plot.
brush = alt.selection(type='interval')

tick_axis = alt.Axis(labels=False, domain=False, ticks=False)
tick_axis_notitle = alt.Axis(labels=False, domain=False, ticks=False, title='')

points = alt.Chart(df
    ).mark_point(
    ).encode(
        x=alt.X('petal length (cm)', axis=alt.Axis(title='')),
        y=alt.Y('petal width (cm)', axis=alt.Axis(title='')),
        color=alt.condition(brush,
                            alt.Color('species:N',
                                      scale=alt.Scale(range=['#66c2a5', '#fc8d62', '#8da0cb'])),
                            alt.value('gray'))
    ).add_selection(
        brush
    )

x_ticks = alt.Chart(df
    ).mark_tick(
    ).encode(
        alt.X('petal length (cm)', axis=tick_axis),
        alt.Y('species', axis=tick_axis_notitle),
        color=alt.condition(brush, 'species:N', alt.value('lightgray'))
    ).add_selection(
        brush
    )

y_ticks = alt.Chart(df
    ).mark_tick(
    ).encode(
        alt.X('species', axis=tick_axis_notitle),
        alt.Y('petal width (cm)', axis=tick_axis),
        color=alt.condition(brush, 'species:N', alt.value('lightgray'))
    ).add_selection(
        brush
    )

y_ticks | (points & x_ticks)
This is a useful way of visualizing data because it not only shows the correlation between two variables, but also shows how they vary individually.
b) We can again more or less follow the Altair docs.
alt.Chart(df
    ).mark_point(
    ).encode(
        alt.X(alt.repeat('column'), type='quantitative', scale=alt.Scale(zero=False)),
        alt.Y(alt.repeat('row'), type='quantitative', scale=alt.Scale(zero=False)),
        color=alt.Color('species:N',
                        scale=alt.Scale(range=['#66c2a5', '#fc8d62', '#8da0cb']))
    ).properties(
        width=150,
        height=150
    ).repeat(
        row=list(df.columns[df.columns.str.contains('cm')]),
        column=list(df.columns[df.columns.str.contains('cm')])[::-1]
    ).interactive()
A matrix plot allows you to look at how all pairs of variables are related. The diagonal is also like a strip plot, so you can see how the variables vary individually as well.