(c) 2019 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This lesson was generated from a Jupyter notebook. You can download the notebook here.
The Python Standard Library has lots of built-in modules that contain useful functions and data types for doing specific tasks. You can also use modules from outside the standard library. And you will undoubtedly write your own modules!
A module is contained in a file that ends with .py. This file can have classes, functions, and other objects. We will not discuss defining your own classes in the bootcamp, so your modules will essentially just contain functions.
A package contains several related modules that are all grouped together under one name. We will extensively use the NumPy, SciPy, Pandas, and Bokeh packages, among others, in the bootcamp, and I'm sure you will also use them beyond. As such, the first module we will consider is NumPy. We will talk a lot more about NumPy later in the bootcamp.
Say I have a list of numbers and I want to compute the mean. This happens all the time; you repeat a measurement multiple times and you want to compute the mean. We could write a function to do this.
def mean(values):
    """Compute the mean of a sequence of numbers."""
    return sum(values) / len(values)
And it works as expected.
print(mean([1, 2, 3, 4, 5]))
print(mean((4.5, 1.2, -1.6, 9.0)))
In addition to the mean, we might also want to compute the median, the standard deviation, etc. These seem like really common tasks. Remember my advice: if you want to do something that seems really common, a good programmer (or a team of them) probably already wrote something to do that. Means, medians, standard deviations, and lots and lots and lots of other numerical things are included in the NumPy package. To get access to it, we have to import it.
import numpy

That's it! We now have the numpy module available for use. Remember, in Python everything is an object, so if we want to access the methods and attributes available in the numpy module, we use dot syntax. In a Jupyter notebook or in the JupyterLab console, you can type numpy. (note the dot) and hit tab, and we will see what is available. For NumPy, there is a huge number of options!
So, let's try to use NumPy's numpy.mean() function to compute a mean.
print(numpy.mean([1, 2, 3, 4, 5]))
print(numpy.mean((4.5, 1.2, -1.6, 9.0)))
Great! We get the same values! Now, we can use the numpy.median() function to compute the median.
print(numpy.median([1, 2, 3, 4, 5]))
print(numpy.median((4.5, 1.2, -1.6, 9.0)))
This is nice. It gives the median, including when we have an even number of elements in the sequence of numbers, in which case it automatically interpolates. It is really important to know that it does this interpolation, since if you are not expecting it, it can give unexpected results. So, here is an important piece of advice: always read the doc strings of the functions you are using.
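For instance, a quick check of both cases (the numbers here are just for illustration):

```python
import numpy

# Odd number of elements: the middle value of the sorted sequence
print(numpy.median([1, 2, 3, 4, 5]))    # 3.0

# Even number of elements: the average of the two middle values
print(numpy.median([1, 2, 3, 4]))       # (2 + 3) / 2 = 2.5
```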
We can access the doc string of the numpy.median() function in JupyterLab by typing numpy.median? and looking at the output. An important part of that output:
Notes
-----
Given a vector ``V`` of length ``N``, the median of ``V`` is the
middle value of a sorted copy of ``V``, ``V_sorted`` - i.e.,
``V_sorted[(N-1)/2]``, when ``N`` is odd, and the average of the
two middle values of ``V_sorted`` when ``N`` is even.
This is where the documentation tells you that the median will be reported as the average of two middle values when the number of elements is even. Note that you could also read the documentation here, which is a bit easier to read.
We use NumPy all the time. Typing numpy over and over again can get annoying. So, it is common practice to use the as keyword to import a module with an alias. NumPy's alias is traditionally np, and this is the only alias you should use for NumPy.
import numpy as np

np.median((4.5, 1.2, -1.6, 9.0))
I prefer to do things this way, though some purists differ. We will use traditional aliases for major packages like Numpy, Pandas, and Altair throughout the bootcamp.
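As a quick sketch of the alias in action (np.std, which this lesson mentions but does not demonstrate, computes the population standard deviation by default):

```python
import numpy as np

values = [1, 2, 3, 4, 5]

print(np.mean(values))      # 3.0
print(np.median(values))    # 3.0
print(np.std(values))       # population standard deviation, sqrt(2)
```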
Standard Python installations come with the standard library. NumPy and other useful packages are not in the standard library. Outside of the standard library, there are several packages available. Several. Ha! There are currently (June 12, 2019) about 180,000 packages available through the Python Package Index, PyPI. Usually, you can ask Google about what you are trying to do, and there is often a third party package to help you do it. The most useful (for scientific computing) and thoroughly tested packages and modules are available using conda. Others can be installed using pip, which we will do later in the bootcamp.
To write your own module, you need to create a .py file and save it. You can do this using the text editor in JupyterLab. Let's call our module na_utils, for "nucleic acid utilities." So, we create a file called na_utils.py. We'll build this module to have two functions, based on things we've already written. We'll have a function dna_to_rna(), which converts a DNA sequence to an RNA sequence (just changes T to U), and another function reverse_rna_complement(), which returns the reverse RNA complement of a DNA template. The contents of na_utils.py should look as follows.
""" Utilities for parsing nucleic acid sequences. """ def dna_to_rna(seq): """ Convert a DNA sequence to RNA. """ # Determine if original sequence was uppercase seq_upper = seq.isupper() # Convert to lowercase seq = seq.lower() # Swap out 't' for 'u' seq = seq.replace('t', 'u') # Return upper or lower case RNA sequence if seq_upper: return seq.upper() else: return seq def reverse_rna_complement(seq): """ Convert a DNA sequence into its reverse complement as RNA. """ # Determine if original was uppercase seq_upper = seq.isupper() # Reverse sequence seq = seq[::-1] # Convert to upper seq = seq.upper() # Compute complement seq = seq.replace('A', 'u') seq = seq.replace('T', 'a') seq = seq.replace('G', 'c') seq = seq.replace('C', 'g') # Return result if seq_upper: return seq.upper() else: return seq
Note that the file starts with a doc string saying what the module contains.
I then have my two functions, each with doc strings. We will now import the module and then use these functions. In order for the import to work, the file na_utils.py must be in your present working directory, since this is where the Python interpreter will look for your module. In general, if you execute the code import na_utils, the Python interpreter will look first in the pwd to find na_utils.py.

import na_utils

# Sequence
seq = 'GACGATCTAGGCGACCGACTGGCATCG'

# Convert to RNA
na_utils.dna_to_rna(seq)
We can also compute the reverse RNA complement.
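The code cell for this step did not survive conversion; here is a sketch of what it likely computed. The function body is repeated from na_utils.py above so that the snippet runs standalone; in the notebook you would simply call na_utils.reverse_rna_complement(seq).

```python
# Repeated from na_utils.py above so this snippet runs standalone;
# in the notebook, call na_utils.reverse_rna_complement(seq) instead.
def reverse_rna_complement(seq):
    """Convert a DNA sequence into its reverse complement as RNA."""
    seq_upper = seq.isupper()

    # Reverse, convert to upper, then complement (DNA base -> RNA complement)
    seq = seq[::-1].upper()
    seq = seq.replace('A', 'u').replace('T', 'a')
    seq = seq.replace('G', 'c').replace('C', 'g')

    # Restore the case of the input
    return seq.upper() if seq_upper else seq


seq = 'GACGATCTAGGCGACCGACTGGCATCG'
print(reverse_rna_complement(seq))
```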
Wonderful! You now have your own functioning module!
These functions have minimal error checking of the input. For example, the dna_to_rna() function will take gibberish in and give gibberish out.
na_utils.dna_to_rna('You can observe a lot by just watching.')
'you can observe a lou by jusu wauching.'
In general, checking input and handling errors is an essential part of writing functions, and we will cover that in a later lesson.
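As a preview, here is a minimal sketch of what such input checking might look like. Note that dna_to_rna_checked is a hypothetical function, not part of na_utils.py, and the validation shown here is simplified compared to what the later lesson covers.

```python
def dna_to_rna_checked(seq):
    """Convert DNA to RNA, raising an error on non-DNA characters.

    Hypothetical example; not part of na_utils.py.
    """
    # Reject any character that is not a DNA base
    if not set(seq.upper()) <= set('ATGC'):
        raise RuntimeError(f'{seq} is not a valid DNA sequence.')

    # Swap T for U in either case
    return seq.replace('T', 'U').replace('t', 'u')


print(dna_to_rna_checked('GACGATCTAGGCGACCGACTGGCATCG'))
```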
As our first foray into the glory of PEP 8, the Python style guide, we quote:
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
Imports should be grouped in the following order:
- standard library imports
- related third party imports
- local application/library specific imports
You should put a blank line between each group of imports.
You should follow this guide. I generally do it for Jupyter notebooks as well, with my first code cell having all of the imports I need. Therefore, going forward all of our lessons will have all necessary imports at the top of the document. The only exception is when we are explicitly demonstrating a concept that requires an import.
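Following that guide, the first code cell of a notebook might look like the sketch below. The local import is shown commented out, since it assumes na_utils.py is in the working directory.

```python
# Standard library imports
import os
import sys

# Third party imports
import numpy as np

# Local application/library specific imports
# (assumes na_utils.py is in the present working directory)
# import na_utils
```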
Once you have imported a module or package, the interpreter stores its contents in memory. You cannot update the contents of the package and expect the interpreter to know about the changes. You will need to restart the kernel and then import the package again in a fresh instance.
This can seem annoying, but it is good design. It ensures that code you are running does not change as you go through executing a notebook. However, when developing modules, it is sometimes convenient to have an imported module be updated as you run through the notebook as you are editing. To enable this, you can use the autoreload extension. To activate it, run the following in a code cell.
%load_ext autoreload
%autoreload 2
When we wrote the na_utils module, we stored it in the directory that we were working in, or the pwd. But what if you write a module that you want to use regardless of what directory you are in? To allow this kind of usage, you can use the setuptools package to manage your packages. You should read the documentation on Python packages and modules to understand the details of how this is done, but what we present here is sufficient to get simple packages running and installed.
In order for the tools in setuptools to effectively install your modules for widespread use, you need to follow a specific architecture for your package. As an example, I made a jb_bootcamp package. The file structure of the package is
/jb_bootcamp
  /jb_bootcamp
    __init__.py
    na_utils.py
    bioinfo_dicts.py
    ...
  setup.py
  README.md
The ellipsis above signifies that there are other files in there that we are not going to use yet. I am trying to keep it simple for now to show how package management works.
To set up the package, you can use your command line skills to make the directories and use the JupyterLab text editor to make the files, in this case four of them: na_utils.py, bioinfo_dicts.py (the little module we mentioned in the lesson on dictionaries), __init__.py, and setup.py.
It is essential that the name of the root directory be the name of the package, and that there be a subdirectory with the same name. That subdirectory must contain a file __init__.py. This file contains information about the package and how the modules of the package are imported, but it may be empty for simple modules. In this case, I included a string with the name and version of the package, as well as instructions to import appropriate modules. Here are the contents of __init__.py. The first two lines of code tell the interpreter what to import when running import jb_bootcamp.
"""Top-level package for utilities for bootcamp.""" from .na_utils import * from .bioinfo_dicts import * __author__ = 'Justin Bois' __email__ = 'firstname.lastname@example.org' __version__ = '0.0.1'
Also within the subdirectory are the .py files containing the code of the package. In our case, we have na_utils.py and bioinfo_dicts.py.
It is also good practice to have a README file (which I suggest you write in Markdown) that has information about the package and what it does. Since this little demo package is kind of trivial, the README is quite short. Here are the contents I made for README.md (shown in unrendered raw Markdown).
# jb_bootcamp
Utilities for use in the Introduction to Programming in the Biological Sciences Bootcamp.
Finally, in the main directory, we need to have a file called setup.py, which contains the instructions for setuptools to install the package. We use the setuptools.setup() function to do the installation.
import setuptools

with open("README.md", "r") as f:
    long_description = f.read()

setuptools.setup(
    name='jb_bootcamp',
    version='0.0.1',
    author='Justin Bois',
    author_email='email@example.com',
    description='Utilities for use in bootcamp.',
    long_description=long_description,
    long_description_content_type='text/markdown',
    packages=setuptools.find_packages(),
    classifiers=(
        "Programming Language :: Python :: 3",
        "Operating System :: OS Independent",
    ),
)
This is a minimal setup.py, but it will be sufficient for most packages you write for your own use. For your own packages, you make the obvious changes to the name, author, and other fields.
You will install your package after the next lesson, when we put it under version control.
%load_ext watermark
%watermark -v -p numpy,jupyterlab
CPython 3.7.3
IPython 7.1.1

numpy 1.16.4
jupyterlab 0.35.5