Lesson 25: Review of exercise 3

(c) 2019 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.

This document was prepared at Caltech with financial support from the Donna and Benjamin M. Rosen Bioengineering Center.

This lesson was generated from a Jupyter notebook. You can download the notebook here.

In [1]:
import glob

import pandas as pd

import bokeh_catplot

import bokeh.io
import bokeh.layouts
import bokeh.palettes
import bokeh.plotting

bokeh.io.output_notebook()

Loading BokehJS ...

Exercise 3.0: Complete practice exercises

Complete the practice exercises from lesson 21 and lesson 24.

Exercise 3.1: Automating scatter plots

Write a function that takes as input a tidy data frame and generates a scatter plot based on two columns of the data frame and colors the glyphs according to a third column that contains categorical variables. The minimal (you can add other kwargs if you want) call signature should be

scatter(data, cat, x, y)

You will of course test out your function while writing it, and the next problems give you lots of opportunities to use it.

Exercise 3.1 solution

I will offer some more kwargs, as described in the doc string below. I will demonstrate the use of this function in the next problem.

In [2]:
def scatter(
    data=None,
    cat=None,
    x=None,
    y=None,
    p=None,
    palette=None,
    show_legend=False,
    tooltips=None,
    click_policy="hide",
    marker_kwargs={},
    **kwargs,
):
    """Make a scatter plot from a tidy DataFrame, coloring glyphs by a
    categorical column.

    Parameters
    ----------
    data : Pandas DataFrame
        DataFrame containing tidy data for plotting.
    cat : hashable
        Name of column to use as categorical variable.
    x : hashable
        Name of column to use as x-axis.
    y : hashable
        Name of column to use as y-axis.
    p : bokeh.plotting.Figure instance, or None (default)
        If None, create a new figure. Otherwise, populate the existing
        figure `p`.
    palette : list of strings of hex colors, or single hex string
        If a list, color palette to use. If a single string representing
        a hex color, all glyphs are colored with that color. Default is
        the default color cycle employed by Vega-Lite.
    show_legend : bool, default False
        If True, show a legend.
    tooltips : list of 2-tuples
        Specification for tooltips as per Bokeh specifications. For
        example, if we want `col1` and `col2` tooltips, we can use
        `tooltips=[('label 1', '@col1'), ('label 2', '@col2')]`.
    click_policy : str, default "hide"
        How to display points when their legend entry is clicked.
    marker_kwargs : dict
        kwargs to be passed to `p.circle()` when making the scatter plot.
    kwargs
        Any kwargs to be passed to `bokeh.plotting.figure()` when making
        the plot.

    Returns
    -------
    output : bokeh.plotting.Figure instance
        Plot populated with the scatter plot.
    """
    # Default palette is the default color cycle employed by Vega-Lite
    if palette is None:
        palette = [
            '#4c78a8', '#f58518', '#e45756', '#72b7b2', '#54a24b',
            '#eeca3b', '#b279a2', '#ff9da6', '#9d755d', '#bab0ac',
        ]
    elif type(palette) == str:
        palette = [palette]

    # Automatically name the axes
    if "x_axis_label" not in kwargs:
        kwargs["x_axis_label"] = x
    if "y_axis_label" not in kwargs:
        kwargs["y_axis_label"] = y

    # Instantiate figure
    if p is None:
        if tooltips is not None:
            kwargs["tooltips"] = tooltips
        p = bokeh.plotting.figure(**kwargs)

    # Build plot (not using color factors) to enable click policies
    for i, (name, g) in enumerate(data.groupby(cat, sort=False)):
        marker_kwargs["color"] = palette[i % len(palette)]
        marker_kwargs["legend"] = str(name)
        p.circle(source=g, x=x, y=y, **marker_kwargs)

    if show_legend:
        p.legend.click_policy = click_policy
    else:
        p.legend.visible = False

    return p

Exercise 3.2: Adding data to a DataFrame

In Lesson 23, we looked at a data set consisting of frog strikes. Recall that the header comments in the data file contained information about the frogs.

In [3]:
!head -20 data/frog_tongue_adhesion.csv
# These data are from the paper,
#   Kleinteich and Gorb, Sci. Rep., 4, 5225, 2014.
# It was featured in the New York Times.
#    http://www.nytimes.com/2014/08/25/science/a-frog-thats-a-living-breathing-pac-man.html
# The authors included the data in their supplemental information.
# Importantly, the ID refers to the identities of the frogs they tested.
#   I:   adult, 63 mm snout-vent-length (SVL) and 63.1 g body weight,
#        Ceratophrys cranwelli crossed with Ceratophrys cornuta
#   II:  adult, 70 mm SVL and 72.7 g body weight,
#        Ceratophrys cranwelli crossed with Ceratophrys cornuta
#   III: juvenile, 28 mm SVL and 12.7 g body weight, Ceratophrys cranwelli
#   IV:  juvenile, 31 mm SVL and 12.7 g body weight, Ceratophrys cranwelli
date,ID,trial number,impact force (mN),impact time (ms),impact force / body weight,adhesive force (mN),time frog pulls on target (ms),adhesive force / body weight,adhesive impulse (N-s),total contact area (mm2),contact area without mucus (mm2),contact area with mucus / contact area without mucus,contact pressure (Pa),adhesive strength (Pa)

So, each frog has associated with it an age (adult or juvenile), snout-vent-length (SVL), body weight, and species (either cross or cranwelli). For a tidy DataFrame, we should have a column for each of these values. Your task is to load in the data, and then add these columns to the DataFrame. For convenience, here is a DataFrame with data about each frog.

In [4]:
df_frog = pd.DataFrame(data={'ID': ['I', 'II', 'III', 'IV'],
                             'age': ['adult', 'adult', 'juvenile', 'juvenile'],
                             'SVL (mm)': [63, 70, 28, 31],
                             'weight (g)': [63.1, 72.7, 12.7, 12.7],
                             'species': ['cross', 'cross', 'cranwelli', 'cranwelli']})

Note: There are lots of ways to solve this problem. This is a good exercise in searching through the Pandas documentation and other online resources, such as Stack Overflow. Remember, much of your programming efforts are spent searching through documentation and the internet.

After you have added this information to the data frame, make a scatter plot of adhesive force versus impact force and color the points by whether the frog is a juvenile or adult. The function you wrote in Exercise 3.1 will be useful to do this.

Exercise 3.2 solution

The most direct way is to use the built-in pd.merge() function (also available as the DataFrame .merge() method). This function finds a common column between two DataFrames and then uses that column to merge them, filling in values that match in the common column. This is exactly what we want.

In [5]:
# Load the data
df = pd.read_csv('data/frog_tongue_adhesion.csv', comment='#')

# Perform merge
df = df.merge(df_frog)
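
As noted, there are other ways to solve this problem. As an illustrative sketch (using toy frames, not the actual data), we could instead map each frog attribute onto the measurement table with Series.map, keyed on 'ID':

```python
import pandas as pd

# Toy measurement table and toy frog-info table (same structure as above)
df = pd.DataFrame({'ID': ['I', 'II', 'I'],
                   'impact force (mN)': [1205, 2527, 1745]})
df_frog = pd.DataFrame({'ID': ['I', 'II'],
                        'age': ['adult', 'juvenile'],
                        'weight (g)': [63.1, 72.7]})

# For each attribute, look up the value for each row's frog ID
for col in ['age', 'weight (g)']:
    df[col] = df['ID'].map(df_frog.set_index('ID')[col])
```

This gets the same result as the merge for a single key column, at the cost of an explicit loop over the attribute columns.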

Let's look at the DataFrame to make sure it has what we expect.

In [6]:
df.head()
date ID trial number impact force (mN) impact time (ms) impact force / body weight adhesive force (mN) time frog pulls on target (ms) adhesive force / body weight adhesive impulse (N-s) total contact area (mm2) contact area without mucus (mm2) contact area with mucus / contact area without mucus contact pressure (Pa) adhesive strength (Pa) age SVL (mm) weight (g) species
0 2013_02_26 I 3 1205 46 1.95 -785 884 1.27 -0.290 387 70 0.82 3117 -2030 adult 63 63.1 cross
1 2013_02_26 I 4 2527 44 4.08 -983 248 1.59 -0.181 101 94 0.07 24923 -9695 adult 63 63.1 cross
2 2013_03_01 I 1 1745 34 2.82 -850 211 1.37 -0.157 83 79 0.05 21020 -10239 adult 63 63.1 cross
3 2013_03_01 I 2 1556 41 2.51 -455 1025 0.74 -0.170 330 158 0.52 4718 -1381 adult 63 63.1 cross
4 2013_03_01 I 3 493 36 0.80 -974 499 1.57 -0.423 245 216 0.12 2012 -3975 adult 63 63.1 cross

Looks good! Now, let's make the plot using the function we wrote for making colored scatter plots.

In [7]:
p = scatter(
    data=df,
    cat='age',
    x='impact force (mN)',
    y='adhesive force (mN)',
    show_legend=True,
)

bokeh.io.show(p)


Exercise 3.3: Wrangling and plotting Darwin finch beak data

Peter and Rosemary Grant have been working on the Galápagos island of Daphne Major for over forty years. During this time, they have collected lots and lots of data about physiological features of finches. In 2014, they published a book with a summary of some of their major results (Grant P. R., Grant B. R., 40 years of evolution. Darwin's finches on Daphne Major Island, Princeton University Press, 2014). They made their data from the book publicly available via the Dryad Digital Repository.

We will investigate their measurements of beak depth (the distance, top to bottom, of a closed beak) and beak length (base to tip on the top) of Darwin's finches. We will look at data from two species, Geospiza fortis and Geospiza scandens. The Grants provided data on the finches of Daphne for the years 1973, 1975, 1987, 1991, and 2012. I have included the data in the files grant_1973.csv, grant_1975.csv, grant_1987.csv, grant_1991.csv, and grant_2012.csv. They are in almost exactly the same format as in the Dryad repository; I have only deleted blank entries at the end of the files.

Note: If you want to skip the wrangling (which is very valuable experience), you can go directly to part (d). You can load in the DataFrame you generate in parts (a) through (c) from the file ~/git/bootcamp/data/grant_complete.csv.

a) Load each of the files into separate Pandas DataFrames. You might want to inspect the file first to make sure you know what character the comments start with and if there is a header row.

b) We would like to merge these all into one DataFrame. The problem is that they have different header names, and only the 1973 file has a year entry (called yearband). This is common with real data. It is often a bit messy and requires some wrangling.

  1. First, change the name of the yearband column of the 1973 data to year. Also, make sure the year format is four digits, not two!
  2. Next, add a year column to the other four DataFrames. You want tidy data, so each row in the DataFrame should have an entry for the year.
  3. Change the column names so that all the DataFrames have the same column names. I would choose column names

    ['band', 'species', 'beak length (mm)', 'beak depth (mm)', 'year']

  4. Concatenate the DataFrames into a single DataFrame. Be careful with indices! If you use pd.concat(), you will need to use the ignore_index=True kwarg. You might also need to use the axis kwarg.

c) The band field gives the number of the band on the bird's leg that was used to tag it. Are some birds counted twice? Are they counted twice in the same year? Do you think you should drop duplicate birds from the same year? How about different years? My opinion is that you should drop duplicate birds from the same year and keep the others, but I would be open to discussion on that. To practice your Pandas skills, though, let's delete only duplicate birds from the same year from the DataFrame. When you have made this DataFrame, save it as a CSV file.

Hint: The DataFrame methods duplicated() and drop_duplicates() will be useful.
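
As a quick illustration of how these methods behave (on a toy DataFrame, not the finch data): duplicated() marks every row that repeats an earlier one, and both methods accept a subset of columns to compare on, so the same band in a different year is not treated as a duplicate.

```python
import pandas as pd

# Toy data: band 1 measured twice in 1973 and once in 1975
df = pd.DataFrame({'band': [1, 1, 1, 2],
                   'year': [1973, 1973, 1975, 1975]})

# True for every row that repeats an earlier ('year', 'band') pair
dupes = df.duplicated(subset=['year', 'band'])

# Drop repeats within a year, keeping the first measurement;
# the same band in a different year survives
df_unique = df.drop_duplicates(subset=['year', 'band'])
```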

After doing this work, it is worth saving your tidy DataFrame in a CSV file. Do this using the to_csv() method of your DataFrame. Since the indices are uninformative, you should use the index=False kwarg. (I have already done this and saved it as ~/git/bootcamp/data/grant_complete.csv, which will help you do the rest of the exercise if you have problems with this part.)

d) Make plots exploring how beak depth changes over time for each species. Think about what might be effective ways to display the data.

e) It is informative to plot the measurement of each bird's beak as a point in the beak depth-beak length plane. For the 1987 data, plot beak depth vs. beak length for Geospiza fortis and for Geospiza scandens. The function you wrote in Exercise 3.1 will be useful to do this.

f) Do part (d) again for all years. Hint: To display all of the plots, check out the Bokeh documentation for layouts. In your plots, make sure all plots have the same range on the axes. If you want to set two plots, say p1 and p2 to have the same axis ranges, you can do the following.

p1.x_range = p2.x_range
p1.y_range = p2.y_range

Exercise 3.3 solution

Upon inspecting the files, we see that the comment character is, as usual, #. There is also a header row in each file, as the first row, so the files are pretty standard. It is important to note that not all of the column headings are the same, but the units of length in the measurements are millimeters. Let's go ahead and load them in! We will load them into a list. I will use the glob module to load in all the CSV files with the substring 'grant'.

In [8]:
# Get list of CSV files
csv_list = glob.glob('data/grant*19*.csv') + glob.glob('data/grant*20*.csv')

# Sort the list so we keep the years in order
csv_list.sort()

# Initialize list of DataFrames
df_list = []

# Load in each sequentially.
for csv_file in csv_list:
    # Read in DataFrame
    df = pd.read_csv(csv_file, comment='#')

    # Place in list
    df_list.append(df)

Let's take a quick look at the first entry in the list, just to make sure it loaded ok.

In [9]:
df_list[0].head()
band species yearband beak length beak depth
0 20123 fortis 73 9.25 8.05
1 20126 fortis 73 11.35 10.45
2 20128 fortis 73 10.15 9.55
3 20129 fortis 73 9.95 8.75
4 20133 fortis 73 11.55 10.15

Looks good!

b) Before moving on, we need to know what year is associated with each DataFrame. Fortunately, since we sorted the list of CSV files, we have the years in order. We can extract the year from the file names.

In [10]:
# Initialize years
years = []
for csv_file in csv_list:
    # The year is the four digits immediately preceding '.csv'
    years.append(int(csv_file[-8:-4]))

Let's check to make sure we got them all, in order.

In [11]:
years
[1973, 1975, 1987, 1991, 2012]

Looks good. Now, we'll proceed with the steps we need to take to clean things up. First, we'll rename the 'yearband' column of the 1973 DataFrame to 'year', and change its entries from 73 to 1973.

In [12]:
# Rename to year
df_list[0] = df_list[0].rename(columns={'yearband': 'year'})

# No worries about Y2K
df_list[0]['year'] += 1900

# Check it out
df_list[0].head()
band species year beak length beak depth
0 20123 fortis 1973 9.25 8.05
1 20126 fortis 1973 11.35 10.45
2 20128 fortis 1973 10.15 9.55
3 20129 fortis 1973 9.95 8.75
4 20133 fortis 1973 11.55 10.15

Great! Let's proceed to add a year column to all of the other DataFrames. As we do it, we'll just reassign the 1973 year in that DataFrame, but that's no big deal.

In [13]:
for i, df in enumerate(df_list):
    df_list[i]['year'] = years[i]

Let's check one to make sure it makes sense.

In [14]:
df_list[3].head()
band species blength bdepth year
0 2639 fortis 10.30 8.95 1991
1 2666 fortis 12.81 9.30 1991
2 2753 fortis 10.89 10.35 1991
3 2776 fortis 11.30 10.00 1991
4 4229 fortis 10.05 8.62 1991

Looks good. Now, we need to change the column names so they are all the same for the respective DataFrames. We have few enough DataFrames that we could do that by hand, but it is more instructive (and re-usable) if we automate it. We will write a function to rename the columns. It first sniffs out which column should be 'band', which should be 'species', and so on. We can do this with Pandas's convenient .str methods, which enable us to use string methods on many entries at once. This is perhaps best seen by example.

In [15]:
# Choose a DataFrame to try it on.
df = df_list[3]

# Look at the columns
df.columns
Index(['band', 'species', 'blength', 'bdepth', 'year'], dtype='object')

Now, if we are interested in the beak length column, we want to find a column heading that contains 'len', since pretty much anything that is about beak length would have the substring. We can use the convenient str.contains() method.

In [16]:
# See which column had 'len' in it
df.columns.str.contains('len')
array([False, False,  True, False, False])

Now, we can slice out the column heading that has 'len' in it.

In [17]:
df.columns[df.columns.str.contains('len')]
Index(['blength'], dtype='object')

Finally, we just want the string, so we do

In [18]:
df.columns[df.columns.str.contains('len')][0]
'blength'
We'll use this to identify the current column headings and then change them to what we want.

In [19]:
def rename_cols(df):
    """Rename columns so all DataFrames have same column headings."""
    # Sniff out the key names from names that are close
    band_key = df.columns[df.columns.str.contains('and')][0]
    species_key = df.columns[df.columns.str.contains('ecies')][0]
    length_key = df.columns[df.columns.str.contains('len')][0]
    depth_key = df.columns[df.columns.str.contains('dep')][0]
    year_key = df.columns[df.columns.str.contains('year')][0]
    # Rename the columns using renaming dictionary
    return df.rename(columns={band_key: 'band',
                              species_key: 'species',
                              depth_key: 'beak depth (mm)',
                              length_key: 'beak length (mm)',
                              year_key: 'year'})

Now, we can loop through the DataFrames and rename the columns.

In [20]:
for i, df in enumerate(df_list):
    df_list[i] = rename_cols(df)
# Check the result
df_list[3].head()
band species beak length (mm) beak depth (mm) year
0 2639 fortis 10.30 8.95 1991
1 2666 fortis 12.81 9.30 1991
2 2753 fortis 10.89 10.35 1991
3 2776 fortis 11.30 10.00 1991
4 4229 fortis 10.05 8.62 1991

Finally, we do the concatenation using pd.concat(). We want to ignore the indices because they are not important identifiers.
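
To see why ignoring the indices matters (toy frames for illustration): without ignore_index=True, each DataFrame carries its own 0-based index into the result, so index values repeat.

```python
import pandas as pd

df1 = pd.DataFrame({'band': [1, 2]})
df2 = pd.DataFrame({'band': [3, 4]})

# Default: indices are carried over, so 0 and 1 each appear twice
stacked = pd.concat([df1, df2])

# With ignore_index=True, a fresh 0..n-1 index is built
tidy = pd.concat([df1, df2], ignore_index=True)
```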

In [21]:
df = pd.concat(df_list, axis=0, ignore_index=True, sort=True)

# Take a look
df
band beak depth (mm) beak length (mm) species year
0 20123 8.05 9.25 fortis 1973
1 20126 10.45 11.35 fortis 1973
2 20128 9.55 10.15 fortis 1973
3 20129 8.75 9.95 fortis 1973
4 20133 10.15 11.55 fortis 1973
5 20136 9.85 11.15 fortis 1973
6 20138 8.85 10.05 fortis 1973
7 20142 10.15 11.25 fortis 1973
8 20143 8.15 9.15 fortis 1973
9 20146 8.55 9.25 fortis 1973
10 20149 10.65 11.75 fortis 1973
11 20150 8.05 9.05 fortis 1973
12 20152 9.75 11.15 fortis 1973
13 20155 10.15 11.25 fortis 1973
14 20156 9.85 10.45 fortis 1973
15 20157 9.75 10.65 fortis 1973
16 20163 8.35 10.05 fortis 1973
17 20164 8.55 10.45 fortis 1973
18 20169 10.25 11.45 fortis 1973
19 20172 9.35 11.15 fortis 1973
20 20174 10.15 10.95 fortis 1973
21 20176 8.55 9.55 fortis 1973
22 20177 8.85 10.65 fortis 1973
23 20178 9.85 10.05 fortis 1973
24 20181 10.75 11.25 fortis 1973
25 20183 9.85 10.85 fortis 1973
26 20203 8.45 10.05 fortis 1973
27 20206 10.55 11.95 fortis 1973
28 20211 8.35 9.85 fortis 1973
29 20215 8.25 9.35 fortis 1973
... ... ... ... ... ...
2274 21159 8.40 13.00 scandens 2012
2275 21167 8.30 12.70 scandens 2012
2276 21176 9.60 14.00 scandens 2012
2277 21248 9.40 14.10 scandens 2012
2278 21253 10.00 14.10 scandens 2012
2279 21255 8.90 13.00 scandens 2012
2280 21256 9.10 13.50 scandens 2012
2281 21257 9.80 13.40 scandens 2012
2282 21260 9.30 13.90 scandens 2012
2283 21263 9.90 13.10 scandens 2012
2284 21267 8.90 12.90 scandens 2012
2285 21268 8.50 14.00 scandens 2012
2286 21270 10.60 14.00 scandens 2012
2287 21271 9.30 14.10 scandens 2012
2288 21278 8.90 14.70 scandens 2012
2289 21279 8.90 13.40 scandens 2012
2290 21280 9.70 13.80 scandens 2012
2291 21281 9.80 13.40 scandens 2012
2292 21285 10.50 13.80 scandens 2012
2293 21286 8.40 12.40 scandens 2012
2294 21288 10.00 14.10 scandens 2012
2295 21289 9.00 12.90 scandens 2012
2296 21290 8.70 13.90 scandens 2012
2297 21291 8.80 14.30 scandens 2012
2298 21292 8.40 13.20 scandens 2012
2299 21295 9.30 14.20 scandens 2012
2300 21297 9.80 13.00 scandens 2012
2301 21340 8.90 14.60 scandens 2012
2302 21342 9.80 13.10 scandens 2012
2303 21347 9.10 15.20 scandens 2012

2304 rows × 5 columns

Great! We now have one convenient tidy DataFrame to work with.

c) First, let's look for duplicate band numbers. There are many, so we'll just write out how many. The df.duplicated() method returns True for each row that is a duplicate. We will get all duplicates in the 'band' column, and then count the unique values among them. This tells us how many birds were measured more than once.

In [22]:
# Stats about how many birds were measured more than once
print('There were', len(df['band'][df['band'].duplicated()].unique()), 
      'birds that were measured more than once.')

print('There were', len(df['band'].unique()), 'total birds measured.')
There were 350 birds that were measured more than once.
There were 1954 total birds measured.

So, most birds were only measured once. Nonetheless, let's eliminate duplicate measurements of the same bird within the same year. When we drop the duplicates, we will keep the first measurement.

In [23]:
# Drop all rows with matching year and band (keep first)
df = df.drop_duplicates(subset=['year', 'band'])

Finally, we will save the DataFrame as a CSV file using the df.to_csv() method. We do not want to print the indices (they are meaningless).

In [24]:
df.to_csv('data/grant_complete.csv', index=False)

d) As a first pass, I will use a jitter plot showing both species by year.

In [25]:
p = bokeh_catplot.strip(
    data=df,
    cats=['year', 'species'],
    val='beak depth (mm)',
    jitter=True,
)

bokeh.io.show(p)


The plot immediately shows that there are more G. fortis measurements and they are more broadly distributed. There seems to be a big jump toward shallower beaks from 1991 to 2012 for G. fortis, but the G. scandens beak depth stays relatively constant.

To better see the distributions, we can plot ECDFs.

In [26]:
palette_fortis = bokeh.palettes.Blues9

p_fortis = bokeh_catplot.ecdf(
    data=df.loc[df['species']=='fortis', :],
    cats='year',
    val='beak depth (mm)',
    palette=palette_fortis,
    title='fortis',
)

palette_scandens = bokeh.palettes.Oranges9

p_scandens = bokeh_catplot.ecdf(
    data=df.loc[df['species']=='scandens', :],
    cats='year',
    val='beak depth (mm)',
    palette=palette_scandens,
    title='scandens',
)

p_scandens.x_range = p_fortis.x_range

bokeh.io.show(bokeh.layouts.column([p_fortis, p_scandens]))

The big jump from 1991 to 2012 for G. fortis is really clear in the ECDF. In both cases, there appears to be a jump toward shallower beaks between 1973 and 1975, but there are not very many data points in 1973, so it is difficult to tell. (We will learn how to quantify this uncertainty in future lessons when we do bootstrapping.)

e) Now let's make a plot of the 1987 data in the beak depth-beak length plane. We can use the function we wrote in Exercise 3.1.

In [27]:
p = scatter(
    data=df.loc[df['year']==1987, :],
    cat='species',
    x='beak length (mm)',
    y='beak depth (mm)',
    show_legend=True,
)

bokeh.io.show(p)


f) To generate all of the plots, we can loop through a GroupBy object.

In [28]:
plots = []
for name, group in df.groupby('year'):
    p = scatter(
        data=group,
        cat='species',
        x='beak length (mm)',
        y='beak depth (mm)',
        title=str(name),
        plot_height=250,
        plot_width=300,
    )
    plots.append(p)

# Link the ranges
for i in range(1, len(plots)):
    plots[i].x_range = plots[0].x_range
    plots[i].y_range = plots[0].y_range

bokeh.io.show(bokeh.layouts.gridplot(plots, ncols=2))

When we look at the data this way, we see the two species coming together over time.

Computing environment

In [29]:
%load_ext watermark
%watermark -v -p pandas,bokeh,bokeh_catplot,jupyterlab
CPython 3.7.3
IPython 7.1.1

pandas 0.24.2
bokeh 1.2.0
bokeh_catplot 0.1.1
jupyterlab 0.35.5