Introduction to Numpy

Python lists:

  • are very flexible
  • don't require uniform numerical types
  • are very easy to modify (inserting or appending objects).

However, flexibility often comes at the cost of performance, and lists are not the ideal object for numerical calculations.

This is where Numpy comes in. Numpy is a Python module that defines a powerful n-dimensional array object that uses C and Fortran code behind the scenes to provide high performance.

The downside of Numpy arrays is that they are more rigid: all elements must share a single numerical type (e.g. floating-point values). For a lot of scientific work, however, this is exactly what is needed.

The Numpy module is imported with:

In [1]:
import numpy

However, in the rest of this course, and in many packages, the following convention is used:

In [2]:
import numpy as np

This is because Numpy is so often used that it is shorter to type np than numpy.

Creating Numpy arrays

The easiest way to create an array is from a Python list, using the array function:

In [3]:
a = np.array([10, 20, 30, 40])
In [4]:
a
Out[4]:
array([10, 20, 30, 40])

Numpy arrays have several attributes that give useful information about the array:

In [5]:
a.ndim  # number of dimensions
Out[5]:
1
In [6]:
a.shape  # shape of the array
Out[6]:
(4,)
In [7]:
a.dtype  # numerical type
Out[7]:
dtype('int64')

Note: Numpy arrays actually support more than just one integer type and one floating point type - they support signed and unsigned 8-, 16-, 32-, and 64-bit integers, and 16-, 32-, and 64-bit floating point values.
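
For example, the numerical type is inferred from the values passed in, so mixing integers and floats produces a floating-point array. A small illustration (the default integer width may differ between platforms):

np.array([1, 2, 3]).dtype    # dtype('int64') on most platforms
np.array([1, 2, 3.5]).dtype  # dtype('float64') - the integers are upcast to floats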

There are several other ways to create arrays. For example, there is an arange function that can be used similarly to the built-in Python range function, with the exception that it can take floating-point input:

In [8]:
np.arange(10)
Out[8]:
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [9]:
np.arange(3, 12, 2)
Out[9]:
array([ 3,  5,  7,  9, 11])
In [10]:
np.arange(1.2, 4.4, 0.1)
Out[10]:
array([ 1.2,  1.3,  1.4,  1.5,  1.6,  1.7,  1.8,  1.9,  2. ,  2.1,  2.2,
        2.3,  2.4,  2.5,  2.6,  2.7,  2.8,  2.9,  3. ,  3.1,  3.2,  3.3,
        3.4,  3.5,  3.6,  3.7,  3.8,  3.9,  4. ,  4.1,  4.2,  4.3])

Another useful function is linspace, which can be used to create linearly spaced values between and including limits:

In [11]:
np.linspace(11., 12., 11)
Out[11]:
array([ 11. ,  11.1,  11.2,  11.3,  11.4,  11.5,  11.6,  11.7,  11.8,
        11.9,  12. ])

and the similar logspace function can be used to create logarithmically spaced values between and including limits; note that the limits are given as powers of 10 (here $10^1$ to $10^4$):

In [12]:
np.logspace(1., 4., 7)
Out[12]:
array([    10.        ,     31.6227766 ,    100.        ,    316.22776602,
         1000.        ,   3162.27766017,  10000.        ])

Finally, the zeros and ones functions can be used to create arrays initially set to 0 and 1 respectively:

In [13]:
np.zeros(10)
Out[13]:
array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])
In [14]:
np.ones(5)
Out[14]:
array([ 1.,  1.,  1.,  1.,  1.])

Exercise 1

Create an array which contains 11 values logarithmically spaced between $10^{-20}$ and $10^{-10}$.

In [15]:
# your solution here

Create an array which contains the value 2 repeated 10 times.

In [16]:
# your solution here

Try using np.empty(10) and compare the results to np.zeros(10) - why do you think there is a difference?

In [17]:
# your solution here

Create an array containing the value 0 repeated 5 times, as a 32-bit floating-point array (this is harder).

In [18]:
# your solution here

Combining arrays

Numpy arrays can be combined numerically using the standard +, -, *, / and ** operators, which are applied element-wise:

In [19]:
x = np.array([1,2,3])
y = np.array([4,5,6])
In [20]:
x + 2 * y
Out[20]:
array([ 9, 12, 15])
In [21]:
x ** y
Out[21]:
array([  1,  32, 729])

Note that this differs from lists:

In [22]:
x = [1,2,3]
y = [4,5,6]
In [23]:
x + 2 * y
Out[23]:
[1, 2, 3, 4, 5, 6, 4, 5, 6]

Accessing and Slicing Arrays

Similarly to lists, items in arrays can be accessed individually:

In [24]:
x = np.array([9,8,7])
In [25]:
x[0]
Out[25]:
9
In [26]:
x[1]
Out[26]:
8

and arrays can also be sliced by specifying the start and end of the slice (where the end index is exclusive):

In [27]:
y = np.arange(10)
In [28]:
y[0:5]
Out[28]:
array([0, 1, 2, 3, 4])

optionally specifying a step:

In [29]:
y[0:10:2]
Out[29]:
array([0, 2, 4, 6, 8])

As for lists, the start, end, and step are all optional, and default to 0, len(array), and 1 respectively:

In [30]:
y[:5]
Out[30]:
array([0, 1, 2, 3, 4])
In [31]:
y[::2]
Out[31]:
array([0, 2, 4, 6, 8])
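
As with lists, negative indices count from the end of the array, and a negative step reverses it, as a brief additional illustration:

y[-1]    # 9, the last element
y[::-1]  # array([9, 8, 7, 6, 5, 4, 3, 2, 1, 0])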

Exercise 2

Given an array x with 10 elements, find the array dx containing 9 values where dx[i] = x[i+1] - x[i]. Do this without loops!

In [32]:
# your solution here

Multi-dimensional arrays

Numpy can be used for multi-dimensional arrays:

In [33]:
x = np.array([[1.,2.],[3.,4.]])
In [34]:
x.ndim
Out[34]:
2
In [35]:
x.shape
Out[35]:
(2, 2)
In [36]:
y = np.ones([3,2,3])  # ones takes the shape of the array, not the values
In [37]:
y
Out[37]:
array([[[ 1.,  1.,  1.],
        [ 1.,  1.,  1.]],

       [[ 1.,  1.,  1.],
        [ 1.,  1.,  1.]],

       [[ 1.,  1.,  1.],
        [ 1.,  1.,  1.]]])
In [38]:
y.shape
Out[38]:
(3, 2, 3)

Multi-dimensional arrays can be sliced differently along different dimensions:

In [39]:
z = np.ones([6,6,6])
In [40]:
z[::3, 1:4, :]
Out[40]:
array([[[ 1.,  1.,  1.,  1.,  1.,  1.],
        [ 1.,  1.,  1.,  1.,  1.,  1.],
        [ 1.,  1.,  1.,  1.,  1.,  1.]],

       [[ 1.,  1.,  1.,  1.,  1.,  1.],
        [ 1.,  1.,  1.,  1.,  1.,  1.],
        [ 1.,  1.,  1.,  1.,  1.,  1.]]])
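
The shape of the result reflects the slice taken along each dimension:

z[::3, 1:4, :].shape  # (2, 3, 6)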

Functions

In addition to an array class, Numpy contains a number of vectorized functions, i.e. functions that act on all the elements of an array at once, typically much faster than could be achieved by looping over the array in Python.

For example:

In [41]:
theta = np.linspace(0., 2. * np.pi, 10)
In [42]:
theta
Out[42]:
array([ 0.        ,  0.6981317 ,  1.3962634 ,  2.0943951 ,  2.7925268 ,
        3.4906585 ,  4.1887902 ,  4.88692191,  5.58505361,  6.28318531])
In [43]:
np.sin(theta)
Out[43]:
array([  0.00000000e+00,   6.42787610e-01,   9.84807753e-01,
         8.66025404e-01,   3.42020143e-01,  -3.42020143e-01,
        -8.66025404e-01,  -9.84807753e-01,  -6.42787610e-01,
        -2.44929360e-16])
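
As a rough illustration of why this matters, the same values could be computed with an explicit Python loop, but a single vectorized call is typically much faster. A minimal sketch (the array size and variable names are arbitrary):

import math

big_theta = np.linspace(0., 2. * np.pi, 1000000)
slow = np.array([math.sin(t) for t in big_theta])  # element-by-element Python loop
fast = np.sin(big_theta)                           # one vectorized call, same values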

Another useful package is the np.random sub-package, which can be used to generate random numbers quickly:

In [44]:
# uniform distribution between 0 and 1
np.random.random(10)
Out[44]:
array([ 0.01643346,  0.32585845,  0.27758201,  0.55505831,  0.55628474,
        0.58441864,  0.23725591,  0.39776078,  0.73614927,  0.91676155])
In [45]:
# 10 values from a gaussian distribution with mean 3 and sigma 1
np.random.normal(3., 1., 10)
Out[45]:
array([ 3.66836993,  2.93423217,  1.88696003,  2.15840383,  4.18619608,
        3.06480279,  4.11898749,  0.6682424 ,  3.88342969,  1.59797832])

Another very useful function in Numpy is numpy.loadtxt, which makes it easy to read in data from column-based text files. For example, given the following file:

In [46]:
%cat data/columns.txt
1995.00274 0.944444
1995.00548 -1.61111
1995.00821 -3.55556
1995.01095 -9.83333
1995.01369 -10.2222
1995.01643 -9.5
1995.01916 -10.2222
1995.02190 -6.61111
1995.02464 -2.94444
1995.02738 1.55556
1995.03012 0.277778
1995.03285 -1.44444
1995.03559 -3.61111

We can either read it into a single multi-dimensional array:

In [47]:
data = np.loadtxt('data/columns.txt')
data
Out[47]:
array([[  1.99500274e+03,   9.44444000e-01],
       [  1.99500548e+03,  -1.61111000e+00],
       [  1.99500821e+03,  -3.55556000e+00],
       [  1.99501095e+03,  -9.83333000e+00],
       [  1.99501369e+03,  -1.02222000e+01],
       [  1.99501643e+03,  -9.50000000e+00],
       [  1.99501916e+03,  -1.02222000e+01],
       [  1.99502190e+03,  -6.61111000e+00],
       [  1.99502464e+03,  -2.94444000e+00],
       [  1.99502738e+03,   1.55556000e+00],
       [  1.99503012e+03,   2.77778000e-01],
       [  1.99503285e+03,  -1.44444000e+00],
       [  1.99503559e+03,  -3.61111000e+00]])

Or we can read each column into a separate array:

In [48]:
date, temperature = np.loadtxt('data/columns.txt', unpack=True)
In [49]:
date
Out[49]:
array([ 1995.00274,  1995.00548,  1995.00821,  1995.01095,  1995.01369,
        1995.01643,  1995.01916,  1995.0219 ,  1995.02464,  1995.02738,
        1995.03012,  1995.03285,  1995.03559])
In [50]:
temperature
Out[50]:
array([  0.944444,  -1.61111 ,  -3.55556 ,  -9.83333 , -10.2222  ,
        -9.5     , -10.2222  ,  -6.61111 ,  -2.94444 ,   1.55556 ,
         0.277778,  -1.44444 ,  -3.61111 ])

There are additional options to skip header rows, ignore comments, and read only certain columns. See the documentation for more details.
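
As a sketch of what such a call might look like (the keyword values here are illustrative, not tied to this particular file):

temperature = np.loadtxt('data/columns.txt',
                         usecols=[1],   # read only the second column
                         skiprows=0,    # skip this many rows at the top of the file
                         comments='#')  # ignore lines starting with this character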

Masking

The index notation [...] is not limited to indexing single elements or slicing ranges of elements; one can also pass a discrete list/array of indices:

In [51]:
x = np.array([1,6,4,7,9,3,1,5,6,7,3,4,4,3])
x[[1,2,4,3,3,2]]
Out[51]:
array([6, 4, 9, 7, 7, 4])

which returns a new array composed of elements 1, 2, 4, etc. of the original array.

Alternatively, one can also pass a boolean array of True/False values, called a mask, indicating which items to keep:

In [52]:
x[np.array([True, False, False, True, True, True, False, False, True, True, True, False, False, True])]
Out[52]:
array([1, 7, 9, 3, 6, 7, 3, 3])

This doesn't look very useful because it is very verbose, but consider that carrying out a comparison with the array returns exactly such a boolean array:

In [53]:
x > 3.4
Out[53]:
array([False,  True,  True,  True,  True, False, False,  True,  True,
        True, False,  True,  True, False], dtype=bool)

It is therefore possible to extract subsets from an array using the following simple notation:

In [54]:
x[x > 3.4]
Out[54]:
array([6, 4, 7, 9, 5, 6, 7, 4, 4])

Conditions can be combined:

In [55]:
x[(x > 3.4) & (x < 5.5)]
Out[55]:
array([4, 5, 4, 4])
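
Conditions can also be combined with | for 'or' and negated with ~ for 'not', for example:

x[(x < 2) | (x > 6)]  # array([1, 7, 9, 1, 7])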

Of course, the boolean mask can be derived from a different array than x, as long as it has the right size:

In [56]:
x = np.linspace(-1., 1., 14)
y = np.array([1,6,4,7,9,3,1,5,6,7,3,4,4,3])
In [57]:
y[(x > -0.5) & (x < 0.4)]
Out[57]:
array([9, 3, 1, 5, 6, 7])

Since the mask itself is an array, it can be stored in a variable and used as a mask for different arrays:

In [58]:
keep = (x > -0.5) & (x < 0.4)
x_new = x[keep]
y_new = y[keep]
In [59]:
x_new
Out[59]:
array([-0.38461538, -0.23076923, -0.07692308,  0.07692308,  0.23076923,
        0.38461538])
In [60]:
y_new
Out[60]:
array([9, 3, 1, 5, 6, 7])

A mask can also appear on the left-hand side of an assignment:

In [61]:
y[y > 5] = 0.
In [62]:
y
Out[62]:
array([1, 0, 4, 0, 0, 3, 1, 5, 0, 0, 3, 4, 4, 3])

NaN values

Arrays sometimes contain NaN values, meaning Not a Number. If you multiply a NaN value by another value, you get NaN, and if any NaN values appear in a summation, the total result will be NaN. One way to get around this is to use np.nansum instead of np.sum to find the sum:

In [63]:
x = np.array([1,2,3,np.nan])
In [64]:
np.nansum(x)
Out[64]:
6.0
In [65]:
np.nanmax(x)
Out[65]:
3.0
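
For comparison, the regular functions propagate the NaN:

np.sum(x)  # nan
np.max(x)  # nan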

You can also use np.isnan to tell you where values are NaN. For example, array[~np.isnan(array)] will return all the values that are not NaN (because ~ means 'not'):

In [66]:
np.isnan(x)
Out[66]:
array([False, False, False,  True], dtype=bool)
In [67]:
x[np.isnan(x)]
Out[67]:
array([ nan])
In [68]:
x[~np.isnan(x)]
Out[68]:
array([ 1.,  2.,  3.])

Exercise 3

The data/munich_temperatures_average_with_bad_data.txt data file gives the temperature in Munich every day for several years:

In [69]:
!head data/munich_temperatures_average_with_bad_data.txt  # shows the first 10 lines of a file
1995.00274 0.944444
1995.00548 -1.61111
1995.00821 -3.55556
1995.01095 -9.83333
1995.01369 -10.2222
1995.01643 -9.5
1995.01916 -10.2222
1995.02190 -6.61111
1995.02464 -2.94444
1995.02738 1.55556

Read in the file using np.loadtxt. The data contains bad values, which you can identify by looking at the minimum and maximum values of the array. Use masking to get rid of the bad temperature values.

In [70]:
# your solution here