Tips for saving memory with pandas

In this article you’ll find some tips to reduce the amount of RAM used when working with pandas, the fundamental Python library for data analysis and data manipulation.

When dealing with large(ish) datasets, reducing memory usage is something you need to consider if you’re stretching the limits of a single machine. For example, when you try to load some data from a big CSV file, you want to avoid your program crashing with a MemoryError. These tips can also help speed up some downstream analytical queries.

The overall strategy boils down to choosing the right data types and loading only what you need. In this article you’ll learn about:

  • Finding out how much memory is used
  • Saving memory using categories
  • Saving memory using smaller number representations
  • Saving memory using sparse data (when you have a lot of NaN)
  • Choosing the right dtypes when loading the data
  • Loading only the columns you need
  • Loading only a subset of rows

Finding out how much memory is used

First, let’s look into some simple steps to observe how much memory is taken by a pandas DataFrame.

For the examples I’m using a dataset about Olympic history from Kaggle. The dataset is in CSV format and takes roughly 40 MB on disk.

>>> import pandas as pd
>>> athletes = pd.read_csv('athlete_events.csv')
>>> athletes.shape
(271116, 15)

There are ~271K records with 15 columns.

For a breakdown of the memory usage, column by column, we can use memory_usage() on the whole DataFrame. The memory is reported in bytes:

>>> athletes.memory_usage(deep=True)
Index          128
ID         2168928
Name      20697535
Sex       15724728
Age        2168928
Height     2168928
Weight     2168928
Team      17734961
NOC       16266960
Games     18435888
Year       2168928
Season    17080308
City      17563109
Sport     18031019
Event     24146495
Medal      9882241
dtype: int64

The function also works for a single column:

>>> athletes['Name'].memory_usage(deep=True)
20697663

The difference between the two outputs is due to the memory taken by the index: when calling the function on the whole DataFrame, the Index has its own entry (128 bytes), while for a single column (i.e. a pandas Series) the memory used by the index is included in the total.
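
We can double-check this by excluding the index from the computation; the result matches the Name entry in the breakdown above:

>>> athletes['Name'].memory_usage(deep=True, index=False)
20697535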

For an aggregated figure on the whole table, we can simply sum:

>>> athletes.memory_usage(deep=True).sum()
186408012  # roughly 178 MB

Why do we need deep=True? This flag will introspect the data deeply, reporting the actual system-level memory consumption. Without setting this flag, the function returns an estimate which could be quite far from the actual number, for example:

>>> athletes.memory_usage()
Index         128
ID        2168928
Name      2168928
Sex       2168928
Age       2168928
Height    2168928
Weight    2168928
Team      2168928
NOC       2168928
Games     2168928
Year      2168928
Season    2168928
City      2168928
Sport     2168928
Event     2168928
Medal     2168928
dtype: int64

>>> athletes['Name'].memory_usage()
2169056

Another way of getting the overall memory consumption is through the function info(), which is going to be useful because it also gives us information on the data types (dtype) used by the DataFrame. Notice again the use of deep introspection for the memory usage:

>>> athletes.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 271116 entries, 0 to 271115
Data columns (total 15 columns):
 #   Column  Non-Null Count   Dtype  
---  ------  --------------   -----  
 0   ID      271116 non-null  int64  
 1   Name    271116 non-null  object 
 2   Sex     271116 non-null  object 
 3   Age     261642 non-null  float64
 4   Height  210945 non-null  float64
 5   Weight  208241 non-null  float64
 6   Team    271116 non-null  object 
 7   NOC     271116 non-null  object 
 8   Games   271116 non-null  object 
 9   Year    271116 non-null  int64  
 10  Season  271116 non-null  object 
 11  City    271116 non-null  object 
 12  Sport   271116 non-null  object 
 13  Event   271116 non-null  object 
 14  Medal   39783 non-null   object 
dtypes: float64(3), int64(2), object(10)
memory usage: 177.8 MB

Notice how all the string fields are loaded as object, while all the numerical fields use a 64-bit representation, due to the architecture of the local machine (it could be 32-bit on older hardware).

Saving memory using categories

Some of the variables in our dataset are categorical, meaning they only have a handful of possible values. Rather than using a generic object for these variables, when appropriate we can use the more relevant Categorical dtype in pandas. For example, good candidates for this data type include the variables Medal, Season, or Team, amongst others.

If you don’t have a full description of the data, in order to decide which columns should be treated as categorical, you can simply observe the number of unique values to confirm this is much smaller than the dataset size:

>>> athletes['Medal'].unique()
array([nan, 'Gold', 'Bronze', 'Silver'], dtype=object)

There are only three different values, plus the null value nan.
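
To scan all the columns at once, nunique() returns the number of distinct values per column (nulls are not counted):

>>> athletes.nunique()
# (one entry per column, long output omitted)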

Observe the difference in memory consumption between using object and using categories:

>>> athletes['Medal'].memory_usage(deep=True)
9882369  # 9.4+ MB
>>> athletes['Medal'].astype('category').memory_usage(deep=True)
271539  # roughly 0.26 MB

Besides saving memory, another advantage of using categorical data in pandas is that we can include a notion of logical order between the values, different from the lexical order.
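
As a minimal sketch of this, assuming we want the medals ordered as Bronze < Silver < Gold, we can define an ordered categorical dtype:

>>> from pandas.api.types import CategoricalDtype
>>> medal_type = CategoricalDtype(['Bronze', 'Silver', 'Gold'], ordered=True)
>>> medals = athletes['Medal'].astype(medal_type)
>>> (medals >= 'Silver').sum()  # counts both Silver and Gold medals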

Saving memory using smaller number representations

Let’s look at some numerical variables, for example ID (int64), Height (float64) and Year (int64).

You can observe their range by checking the minimum and maximum values:

>>> athletes['ID'].min(), athletes['ID'].max()
(1, 135571)

The int64 dtype is able to hold numbers in a much broader range, at the price of a much bigger memory footprint:

>>> import numpy as np
>>> np.iinfo('int64')  # integer info
iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)

Using int32 for the column ID is enough to store its values and it will save us half of the memory space:

>>> athletes['ID'].memory_usage(deep=True)
2169056
>>> athletes['ID'].astype('int32').memory_usage(deep=True)
1084592

We can do the same with floats:

>>> athletes['Height'].min(), athletes['Height'].max()
(127.0, 226.0)
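
As we did for integers, we can ask NumPy for the limits of a smaller float representation:

>>> np.finfo('float16')  # floating point info
finfo(resolution=0.001, min=-6.55040e+04, max=6.55040e+04, dtype=float16)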

In this case, a float16 is enough, at a quarter of the memory cost:

>>> athletes['Height'].memory_usage(deep=True)
2169056
>>> athletes['Height'].astype('float16').memory_usage(deep=True)
542360

Finally, let’s look at the variable Year:

>>> athletes['Year'].min(), athletes['Year'].max()
(1896, 2016)

In this case, it looks like an int16 would be enough. On a closer look though, we can consider the Year column to be categorical as well: there’s only a handful of possible values. Let’s check the difference:

>>> athletes['Year'].memory_usage(deep=True)
2169056
>>> athletes['Year'].astype('int16').memory_usage(deep=True)
542360
>>> athletes['Year'].astype('category').memory_usage(deep=True)
272596

For this particular situation, it makes more sense to use categories rather than numbers, unless we plan on performing arithmetic operations on this column (you cannot sum or multiply two categories).
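
As a side note, if you’d rather not work out the ranges by hand, pandas can downcast a numerical column to the smallest suitable dtype for you:

>>> pd.to_numeric(athletes['Year'], downcast='integer').dtype
dtype('int16')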

Saving memory using sparse data (when you have a lot of NaN)

The sparse dtypes in pandas are useful when dealing with columns that have a lot of null values. Depending on your variables, you may want to consider representing your data as sparse. The info() function used earlier tells us how many non-null records we have for each column, so if that number is much lower than the size of the dataset, it means we have a lot of null values.

This is exactly the case of the Medal column that we treated as categorical earlier:

>>> athletes['Medal'].memory_usage(deep=True)
9882369
>>> athletes['Medal'].astype('category').memory_usage(deep=True)
271539
>>> athletes['Medal'].astype('Sparse[category]').memory_usage(deep=True)
199067
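
To check whether a column is a good candidate for a sparse representation, you can also compute its fraction of null values directly:

>>> athletes['Medal'].isna().mean()  # fraction of nulls, ~0.85 for Medal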

Choosing the right dtypes when loading the data

So far we have looked at the memory usage of different dtypes, converting the columns after the dataset was loaded.

Once we have chosen the desired dtypes, we can make sure they are used when loading the data, by passing the schema as a dictionary to the read_csv() function:

>>> schema = {
... 'ID': 'int32',
... 'Height': 'float16',
... # add all your Column->dtype mappings
... }
>>> athletes = pd.read_csv('athlete_events.csv', dtype=schema)

Note: it’s not possible to use the Sparse dtype when loading the data in this way, we still need to convert the sparse columns after the dataset is loaded.

Loading only the columns you need

Depending on the application, we often don’t need the full set of columns in memory.

In our medal example, let’s say we simply want to compute the overall count of the medals per nation. For this specific use case, we only need to look at the columns Medal and NOC (National Olympic Committee).

We can pass the argument usecols to the read_csv() function; note that, for the output below to show category dtypes, the schema must also map both columns to category:

>>> schema = {'NOC': 'category', 'Medal': 'category'}
>>> athletes = pd.read_csv('athlete_events.csv', usecols=['NOC', 'Medal'], dtype=schema)
>>> athletes.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 271116 entries, 0 to 271115
Data columns (total 2 columns):
 #   Column  Non-Null Count   Dtype   
---  ------  --------------   -----   
 0   NOC     271116 non-null  category
 1   Medal   39783 non-null   category
dtypes: category(2)
memory usage: 816.3 KB

If we only want to count the medals per NOC, we can use a groupby operation on the above DataFrame:

>>> athletes.groupby('NOC')['Medal'].count()
# (long output omitted)

It’s also worth noting that for this groupby operation there is a significant speed-up when using the categorical data types.

Loading only a subset of rows

To complete the picture, the read_csv() function also offers options to limit the number of rows we’re loading. This can be useful in a few circumstances, for example when we just want to take a peek at the data without looking at the whole dataset, or when the dataset is big enough that we can answer our analytical questions with a sample.

The first case, just taking a peek at the data, is straightforward:

>>> athletes = pd.read_csv('athlete_events.csv', nrows=1000)
>>> len(athletes)
1000

Using the nrows argument, we’ll load the first N records (1,000 in the example above) into the DataFrame. This is often enough to get a first feel for the data before digging into further analysis.

If we want to implement some random sampling instead, the read_csv() function also offers the skiprows argument. Specifically, if we pass a custom function to this argument, we can implement our sampling logic. The function takes one argument (the row number) and should return True if you want to skip that row.

In the example, we want to keep the first row because it has the column names, and we load only ~10% of the data, using the function random(), which returns a random float in the [0, 1) range (if this number is greater than 0.1, we skip the row):

>>> from random import random
>>> def skip_record(row_number):
...     return random() > 0.1 and row_number > 0
... 
>>> athletes = pd.read_csv('athlete_events.csv', skiprows=skip_record)
>>> len(athletes)
27176  # 27K rows in the sample, ~271K in the full dataset
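
One caveat of this approach is that the sample changes at every run. If you need the sample to be reproducible, you can seed the random generator before loading the data:

>>> from random import seed
>>> seed(42)  # any fixed value will do
>>> athletes = pd.read_csv('athlete_events.csv', skiprows=skip_record)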

Summary

In this article we have discussed some options to save memory with pandas, by choosing the most appropriate data types and by loading only the data that we need for our analysis.

Do you need to upskill your team in pandas? Marco runs public and private training courses on Effective pandas and other Python topics; please get in touch to find out more.

Sign up to the newsletter Musings on Data to receive periodic updates and recommendations from Marco on Data Science.

Follow Marco on Twitter.

Intervista Pythonista: Podcast Interview for the Italian Python Community

A few weeks ago I had the pleasure of chatting with Marco Santoni and Cesare Placanica from the Python Milano user group, as the first guest of their new Intervista Pythonista, a podcast about the Italian Python community. You can listen to the episode on Anchor/Spotify in Italian.

This blog post offers a very belated summary of the podcast episode for English readers. It’s not a full transcript but it sums up our chat.

Some of the points we discussed (questions from the hosts in bold):

  • How did you get into Python? And how do you use it nowadays?
    My first encounter with Python was random. At that time I was working as a software engineer, mostly on web applications (PHP and JavaScript, long before Angular/React and friends). Someone, probably from a local Linux User Group, mentioned Python and I had a first look. When I got into Natural Language Processing, it felt like a good excuse to study this new (for me) language a bit more and to look into the ecosystem — only NLTK at the time, as far as I remember. Later I used it for my MSc dissertation and throughout my PhD work. In the meantime the Python ecosystem for Data Science became more and more robust, and Python became my weapon of choice. Nowadays I use it as my main tool for all my Data Science work, and I teach it in my corporate training courses.
  • How do you keep your knowledge up to date [with the latest Python developments]?
    Through a variety of channels. Conferences and meetups are a great way to see what people are working on, and to be exposed to fresh ideas. These days in-person gatherings are on hold, so that part is mostly missing, but different user groups are still producing a lot of good content published on YouTube. Twitter and other social media channels can also be useful to catch up with Python news, bump into a new library, or find new articles. Of course it’s difficult to keep up with everything, so one has to be a bit selective when it comes to spending time digging into the details. When I’m really interested in a particular topic, after a first look at blogs and tutorials, I’ll probably seek something more structured like a book or a video course.
  • How (and why) did you start a career as solo consultant? Bonus: how do you find your clients?
    I started my Data Science consultancy firm in 2015, mainly looking for independence. Taking the first step was simple (register your company online at Companies House and you’re good to go). Over time I’ve learned, and I’m still learning, all the other facets of how to run a business, but taking that first step was the crucial moment. On the topic of finding clients, most of my work comes through my network, e.g. via word-of-mouth from people I’ve worked with in the past, or through a recommendation from a person I’ve helped somehow in the past. With this in mind, having a presence at conferences and events, and curating your personal network, is essential.
  • How did you get into NLP?
    I was working on a search application, so I had to learn more. I started learning about Information Retrieval methods like TF-IDF, using off-the-shelf tools. From there, things took off: I later studied Information Retrieval for my MSc and PhD, I developed an interest in Natural Language Processing at large, and I’ve been involved in many NLP projects. These days I work on a broad variety of Data Science projects, but NLP remains my favourite topic.
  • Can you tell us about your experience in writing books?
    I wrote one book on Data Mining for Social Media, and I’ve developed two video courses on data science so far. It’s a lot of work! The reward is usually not on the financial side, unless you write the next Harry Potter series. The process gave me a lot of insight into the publishing industry as a whole and put me in touch with a variety of professionals in that industry, which was great. I learned bits and pieces related to editing, marketing and all the other steps that one doesn’t think about when starting a book. More importantly, it gave me the opportunity to polish my writing skills, which I think are essential in our profession, even more so for a consultant. For every professional in our field, improving your writing is a good investment that will always pay dividends in the long run.
  • Tell us about your experience as Python trainer
    I’ve been teaching and training in tech, in different capacities, for more than twenty years now. In the early days of my career, it was a second job through a local non-profit organisation. Later it became more and more central and after a short stint in academia, a few years ago I started offering corporate training courses as part of my consulting services. The demand for Python and Data Science is high, with many companies looking into the PyData stack for their data analytics and business intelligence needs. Most of my trainings these days are 2 or 3-day sessions with small groups of circa 10-12 people. Python is easy to pick up so during those few days of training folks can learn a lot and feel productive, even people who are new to programming.
  • How has your training experience changed with remote work now being more prominent?
    Training remotely is something I’ve been doing for a few years, so when Covid restrictions forced companies to work from home, I was ready. Reading the room in a Zoom call is obviously more difficult compared to in-person sessions, but there are tricks one can use to keep the engagement high, improving the overall enjoyment for the delegates. For example, short demos with frequent “try it yourself” moments bring up questions that help me gauge the students’ understanding of the subject — Jupyter notebooks are great for this because of their interactive nature. Splitting the class into groups of 2-3 people using breakout rooms is also very useful for interactive exercises that people can solve in a “pair programming” fashion.
  • Tell us about your community engagement (meet-ups, conferences, etc)
    In 2014 I started attending local meet-ups, and in particular I was regularly at PyData London, a new (at the time) meet-up around Python and Data Science. I enjoyed the atmosphere and the quality of the presentations, and I found myself coming back every month. Like many Python events, PyData London is community-driven: everything is run by volunteers. After the first few events, I got closer to the regulars and the organisers, and started helping out with the monthly meet-ups and even with the annual conference, reviewing proposals, chairing sessions, etc. Since 2018 I’ve been the chair of our annual conference, and I’ve helped grow it to over 700 attendees. Meanwhile, I’ve also attended many other Python conferences giving talks and tutorials, in particular PyCon UK and PyCon Italy, but also EuroPython, various local PyData chapters in the UK, PyMunich and PyParis. The common thread is always the great community: you get the opportunity to meet interesting people in your field, talking shop while enjoying a relaxing atmosphere. I certainly recommend looking for local events to connect with your peers, and if possible presenting your work to boost your CV!

Many thanks to Marco and Cesare for having me as the first guest of their podcast, which already has a few episodes under its belt; do check them out here!

PyCon Italy 2017 write-up

Last week I’ve travelled to Florence where I attended PyCon Otto, the 8th edition of the Italian Python Conference. As expected, it’s been yet another great experience with the Italian Python community and many international guests.

This year the very first day, Thursday, was beginners’ day, with introductory workshops run by volunteer mentors. Thanks to a cancelled flight, I missed out on this opportunity, so I joined the party only for the main event.

On Friday, I’ve run another version of my tutorial on Natural Language Processing for beginners. The tutorial was oversubscribed and the organisers really made an effort to accommodate as many people as possible in the small training room, so at the end, I had ~35 attendees. After the workshop, I had a lot of interesting conversations and some ideas on how to improve the material with additional exercises. Some credits for this are due to my friend Miguel Martinez who contributed with the text classification material for the first edition of the workshop.

As per tradition, at the end of the workshop I also ran a raffle to give away a free copy of my book, Mastering Social Media Mining with Python.

On Saturday, I gave a talk titled Word Embeddings for Natural Language Processing with Python (link to slides), in some ways a natural follow-up to the tutorial with slightly more advanced concepts, but still tailored for beginners. The talk was really well received, and a lot of interesting questions and conversations came up.

Following the traditional social event on Saturday night (a huge fiorentina), Sunday was pretty much a mellow day, with the last few excellent talks, a light lunch and my journey back.

It was great to meet so many new and old friends! The quality of this community event was stellar, and this was possible thanks to the contributions of organisers, volunteers, mentors, speakers and all the attendees.

See you for PyCon Italy 2018!

PyCon UK 2016 write-up

Last week I had a long weekend at PyCon UK 2016 in Cardiff, and it was a fantastic experience! Great talks, great friends/colleagues and lots of ideas.

On Monday 19th, the last day of the conference, my friend Miguel and I ran a tutorial/workshop on Natural Language Processing in Python (the GitHub repo contains the Jupyter notebooks we used, as well as some slides for an introduction).

Our NLP tutorial

Since I’ve already mentioned it, I’ll start from the end :)

The tutorial was tailored for NLP beginners and, as I mentioned explicitly at the very beginning, I wasn’t there to impress the experts. Rather, the whole point was to make the attendees a bit curious about Natural Language Processing, and to show them what they can do with a few lines of Python.

Overall, I think we were quite lucky, as we had the perfect audience: the right number of people (around 20+) with a bit of Python knowledge but not much NLP knowledge.

We only had some minor hiccups with the installation process, which is something we’re going to work on to make it smoother and more beginner-friendly. In particular the things I’d like to improve are:

  • add some testing / pre-flight checks, e.g. “how do I know that the environment is set up correctly?” (Miguel has already added this)
  • support for Windows: I’m quite useless at trouble-shooting Windows issues, but for a couple of attendees the installation process didn’t go too smoothly; maybe a virtual machine setup would be helpful

I also think that having the material available in advance, so the attendees can start setting up the environment beforehand, is very helpful. Most of them were quite engaged, and I received a couple of “bug reports” on the fly, and even a pull request that improved the installation process (thanks!)

Last but not least, I was also happy to give out a copy of my book (Mastering Social Media Mining with Python) that I had with me (the raffle was implemented on the spot through random.choice(), and the book went to Paivi from Django Girls).

I’ll give a shorter version of this tutorial at PyCon Ireland later this year, so in case you’ll be around, I’ll see you there :)

Unfortunately, the tutorials were not recorded so there is no video online, but the slides are in the GitHub repo, so please dig in and send feedback if you have any.

The Open Day

Thursday 15th was “day zero” of the conference, hosted at Cardiff University. The ticket was free, although there was limited capacity. The day was aimed at introducing the new audience to Python and PyCon. We didn’t see much Python code that day, as the talks were mainly for newcomers, yet there was a lot of food for thought. This is a great way to introduce more people to Python and to show them how the community is friendly and happy to get more beginners on board.

Teachers, Kids and Education

One of the main themes of the conference was Education. Friday 16th, the first day of the main event, was labelled “Teachers Day”, while Saturday 17th was “Kids Day”. The effort to make CS education more accessible for kids was very clear, and some of the initiatives were really spot-on. In particular, some of the kids were able to hack a small project together in a very short time, and they delivered a “show and tell” session at the end of the second day. Their creativity, and the fact that they stood in front of a crowd of 500+ developers to show what they had been working on, was very impressive.

Community in the Broader Sense

Another aspect that became quite clear is the strength of the Python community. Some representatives of PyCon Poland, PyCon Switzerland and Django Europe introduced their upcoming events. Some attendees with fewer economic means were given the opportunity to attend through some form of financial support (including e.g. students from India).

Representatives from PyCon Namibia and PyCon Zimbabwe were also attending and they discussed some of the challenges they are facing while building a local community in their countries.

In particular, the work Jessica from PyNAM is carrying out with young learners is extremely inspiring and deserves more visibility (link to the video of her talk).

Accessibility for Everybody

One feature I had never experienced at a conference before was the speech-to-text transcription. During the talks, the speech-to-text team were very busy writing down what the speakers were saying in real time. While this is sometimes considered an accessibility feature that benefits only deaf users, it turns out live captions are extremely beneficial for everybody. Firstly, not all non-deaf attendees have perfect hearing. Secondly, not everybody is a native English speaker (both speakers and audience), so a word might be missed, or an accent might cause some confusion. Lastly, not every attendee is paying full attention to every talk for its whole duration: sometimes, towards the end of the day, you just switch off for a moment, and the live captions allow you to catch up.

Providing an accessibility feature turned out to be beneficial for everybody.

Shout out to the Organisers

Organising such a big event (500+ attendees) is not an easy task, so all the people who have worked hard to make this conference happen deserve a big round of applause. Not naming names here, but if you’ve been involved, thanks!

Being Interviewed about NLP

This was a bit random, in a very pleasant way. On Saturday, Miguel, Lev from RaRe Technologies and I spent some time with Kate Jarmul, who by the way had just introduced her book on data wrangling, and had also delivered a tutorial on the topic. The conversation was about our views, in the broader sense, on NLP / Text Analytics: how we got into this field, how we see it evolving, and so on. Apparently, this was an interview with some experts of the field, for a piece she’s writing for the O’Reilly blog (I should put an amazed emoticon here).

Using Python for …

The breadth of the topics discussed during the conference was really amazing. I think this kind of event is a great way to see what people are working on and how the tools we use every day are used by other people.

I’m not going to name any talk in particular, because there are too many good talks that deserve to be mentioned.

In terms of topics, some fields that are well covered by Python are:

  • Data Science (and related topics like data cleaning, NLP and machine learning)
  • Web development (with Django and so many interesting libraries)
  • electronics and robotics (with Raspberry Pi, micro:bit, MicroPython etc)
  • you name it :)

I’m probably not saying anything new here, but it was nice to see it in first person and step outside my data-sciency comfort zone.

Summary

Thanks to everybody who contributed to this event, and see you in Cardiff for PyCon UK 2017!

Mastering Social Media Mining with Python


Great news, my book on data mining for social media is finally out!

The title is Mastering Social Media Mining with Python. I’ve been working with Packt Publishing over the past few months, and in July the book was finalised and released.


As part of Packt’s Mastering series, the book assumes the readers already have some basic understanding of Python (e.g. for loops and classes), but more advanced concepts are discussed with examples. No particular experience with Social Media APIs and Data Mining is required. With 300+ pages, by the end of the book, the readers should be able to build their own data mining projects using data from social media and Python tools.

A bird’s eye view on the content:

  1. Social Media, Social Data and Python
    • Introduction on Social Media and Social Data: challenges and opportunities
    • Introduction on Python tools for Data Science
    • Overview on the use of public APIs to interact with social media platforms
  2. #MiningTwitter: Hashtags, Topics and Time Series
    • Interacting with the Twitter API in Python
    • Twitter data: the anatomy of a tweet
    • Entity analysis, text analysis, time series analysis on tweets
  3. Users, Followers, and Communities on Twitter
    • Analysing who follows whom
    • Mining your followers
    • Mining communities
    • Visualising tweets on a map
  4. Posts, Pages and User Interactions on Facebook
    • Interacting with the Facebook Graph API in Python
    • Mining your posts
    • Mining Facebook Pages
  5. Topic analysis on Google Plus
    • Interacting with the Google Plus API in Python
    • Finding people and pages on G+
    • Analysis of notes and activities on G+
  6. Questions and Answers on Stack Exchange
    • Interacting with the StackOverflow API in Python
    • Text classification for question tags
  7. Blogs, RSS, Wikipedia, and Natural Language Processing
    • Blogs and web pages as social data
    • Web scraping with Python
    • Basics of text analytics on blog posts
    • Information extraction from text
  8. Mining All the Data!
    • Interacting with many other APIs and types of objects
    • Examples of interaction with YouTube, Yelp and GitHub
  9. Linked Data and the Semantic Web
    • The Web as Social Media
    • Mining relations from DBpedia
    • Mining geo coordinates

The detailed table of contents is shown on Packt Pub’s page. Chapter 2 is also offered as a free sample.

Please have a look at the companion code for the book on my GitHub, so you can get an idea of the applications discussed in the book.

PyData London 2016 write-up

Last weekend I was at the PyData London conference for three Pythonic days. Firstly, thanks to the organisers, volunteers, speakers, sponsors and everyone who contributed in one way or another to making the event a great success.

This year I had the opportunity to contribute as a member of the review committee, which means I had a glimpse behind the scenes, and I know how many great proposals we received. With three days and three to four tracks running in parallel, there is room for a lot of Pythonic parley, yet unfortunately many good proposals had to be turned down due to time/space constraints. The programme turned out to be great nevertheless.

The three days were really intense so there is just too much to say, but I’ll try to summarise some of the take-home messages.

Tutorials: delivering a tutorial is difficult. Everything that could go wrong, will go wrong (a big screen that goes bananas for 10 minutes, a flaky Internet connection so a conda install takes ages, you name it). The Jupyter notebook makes life better, but I strongly feel for the speakers, so a big thank you for taking the time to prepare some quality material.

Topics of interest: some topics seemed to capture most of the attention this year; in particular, there was a lot of interest around data pipelines, deep learning and Bayesian stats. Unsurprising?

Keynotes: following the recent news on the LIGO project, Prof. Andreas Freise gave an introduction to gravitational waves, lasers, the latest achievements in physics and other cool things far beyond my understanding. Something I could understand and relate to is the way he described needing to write code to carry out his job, even though writing code is not his main job. This is true for many academics and researchers without a software engineering background, who were also the main audience of my talk on building data pipelines (luckily enough, scheduled right after the keynote in the same room).

The second keynote, given by Tetiana Ivanova, was about the beginning of her journey in Data Science without formal education. Some of the suggestions were sensible, in fact I recently shared some of the same ideas in a short talk to UCL students and post-docs who want to move to industry.

The third and last keynote was given by Travis Oliphant: CEO of Continuum Analytics, author of NumPy, creator of SciPy, Pythonista since the late 1990s. His talk was about scaling up and scaling out the PyData stack. Things to keep an eye on: Numba and Dask. Really exciting stuff going on!

My talk: I presented “Building Data Pipelines in Python”, with a focus on the need to bring R&D and Engineering together, and how basic engineering principles can be beneficial even if your job is not all about writing code. After presenting a very similar talk at PyCon Italy, I found the audience in London to be a bit more on the academic side than I initially thought, which was perfect for my engineering rants. After the usual first few minutes of feeling awkward when speaking publicly, I started my discussion on unit testing and asked how many in the audience write unit tests regularly. Random guy from the audience: “What’s a unit test?”. Thank you kind stranger, you lifted my spirit and the rest of the talk was a breeze.

The slides of my talk are on my speakerdeck.

Last year it took several months to get the videos out, this year only one day! So this is the video of my talk: https://www.youtube.com/watch?v=7NzH1Gx8-4E

I had some interesting questions after the talk, and I also had some nice conversations the day after. Apparently, I raised some interest in Luigi; in fact, a few people told me how they really had to attend the other talk about using Luigi in production, delivered by Pete Owlett from Deliveroo, after listening to mine (the room was overflowing so I couldn’t even get close!). There was also some genuine interest in unit testing, and a very interesting question was how to apply it when working with Jupyter notebooks.

Lightning talks: apparently, saving your Jupyter notebooks in git is an issue that is taken very seriously by the community. In fact, three speakers came up with different solutions to the same problem.

Organisation: hats off to the organisers and everyone involved, and see you at the PyData London meetup!

Get in touch if you also have a write-up of the event:

@MarcoBonzanini

PyCon Italia / PyData Italy 2016 Write-Up

Last week I’ve travelled to Florence to attend PyCon Sette, the seventh edition of the Italian Python Conference, born 10 years ago and held annually (with three editions of EuroPython in between).

First off, I have something to admit: as this was my first time at PyCon Italia, I clearly didn’t know what I was missing. I’ve been overly busy with work and side projects, so this is the perfect excuse to resume the blog.

Florence

The city doesn’t need much presentation: it’s simply one of the most beautiful cities in the world. I haven’t been there for a few years but things don’t seem to be very different from a turist’s point of view. The craft beer scene is booming, but at the same time culinary traditions are well preserved. Both of these are big thumbs-up for me. The best random moment of my trip: getting lost in the back streets of the old city centre, and then finding a dodgy hole-in-the-wall place that sells incredible focaccia and panini.

The Conference

PyCon Sette can be summarised as three intense days of Python, with more than 500 attendees. The first day was opened by Alex Martelli with a keynote about exception handling in Python 2 vs Python 3. Apart from the keynotes, at any given time we had between 4 and 6 parallel sessions of talks or training sessions. I decided to stick to the PyData track for the whole time, although the other tracks also featured some interesting talks. Some of the tracks were related to a particular sub-community, with PyData and DjangoVillage having a strong presence, but Odoo, DjangoGirls and the Italian Postgres User Group are also worth mentioning.

I’ve listened to many interesting talks. On top of my head, a few to remember: the talk about Internet of Things by Stefano Terna of TomorrowData.io (also winners of the start-up contest), the one about deployment of scikit-learn models in the cloud by Alex Casalboni and an interesting one about Functional Programming and Dask by Holger Peters.

Overall, hats off to the organisers. In particular, I had some conversations with Valerio Maggio, who is the founder of PyData Italy. We exchanged some opinions about the conference and the community in the broader sense. Hopefully the interest around Data Science in Italy will keep rising, so that several local events can be held throughout the year, rather than just one big national event.

My Talk

On Saturday, I gave a talk on Building Data Pipelines in Python. I wrote about building data pipelines with Luigi before, but this talk gave me the opportunity to look at the bigger picture. The general message was that Research and Engineering are different disciplines, but we (data-sciency and researchy people) can benefit from trying to meet in the middle. In particular, good engineering practices can help the less engineering-oriented researchers in their day-to-day mundane tasks. After opening the discussion on the overall topic, I had a brief moment of ranting about unit testing (or the lack of testing culture in some academic circles), I introduced Luigi as a workflow manager to build pipelines in Python and I closed with an overview on logging (described by Alex Martelli in his keynote as something that scares people off, at least initially) and a consideration about using good engineering practices in research.

The talk was addressed to beginners and to the less engineering-savvy PyData users, so expert software engineers probably didn’t benefit much from it. In any case, I had a good response, with several people coming up for a chat after the talk. All in all, if at least one researcher looks into testing, or decides to try one of the workflow managers I mentioned, I’d say I’ve reached my goal.

The slides of my talk are on my speakerdeck (videos will be on-line soon).

See you next year in Florence!

Retrocomputing and Python: import turtle

My first experience with something related to programming was back in middle school. From time to time, our Math-and-Science substitute teacher used to walk us to the computer room, which was full of shiny Commodore 64 machines, where we had a lot of fun (sort of) with a graphics tool called turtle. What we were trying to do was simply give a list of instructions to a turtle-shaped cursor, so it could move on the screen and draw some colourful shapes.

Back in those days, we didn’t even realise that we were doing something programming-related; we simply thought we were skipping Math for the day. Fast-forward a few years, and I found out about Logo, its value as an educational programming language, and Turtle graphics as one of Logo’s key features.

Given the festive spirit of these days, I thought I’d give the turtle package, part of the Python standard library, a shot ;)

Quick Intro on Turtle Graphics

Python has its own implementation of the turtle as part of the standard library (see documentation here). It uses the tkinter module for the underlying graphics, so it has to be run with a version of Python with Tk support.

If you’ve never heard of Turtle Graphics, these are some of the core concepts:

  • The turtle has a position (x, y coordinates) and an orientation
  • The orientation can be changed with right/left commands, e.g. right(90) will rotate 90 degrees clockwise
  • The position can be changed with forward/backward commands, or by setting the coordinates explicitly
  • The turtle is also called pen: when the pen is down, moving the turtle will draw a line

import turtle

The starting point is simply to import the turtle module. A turtle program will have a turtle.Screen object as a drawing canvas, and a turtle.Turtle object as a pen.

Let’s consider this first example:

import turtle

if __name__ == '__main__':
    win = turtle.Screen()

    turt = turtle.Turtle()
    turt.forward(100)
    turt.left(90)
    turt.forward(30)
    turt.color("red")
    turt.forward(30)

    win.mainloop()

This will produce the following:

Turtle Example

The turtle is initially oriented towards the right-hand side of the screen, i.e. towards 3 o’clock. Moving forward will produce the initial black line. As you can see, the colour can be changed along the way using the color() method.

Festive Turtle

This paragraph shows a more complex example. The full code is available on GitHub.

import turtle

if __name__ == '__main__':
    wn = turtle.Screen()

    my_turtle = turtle.Turtle()

    # start drawing the tree
    my_turtle.color("darkgreen")
    my_turtle.pensize(5)
    my_turtle.begin_fill()
    # the right half of the tree
    my_turtle.forward(100)
    my_turtle.left(150)
    my_turtle.forward(90)
    my_turtle.right(150)
    my_turtle.forward(60)
    my_turtle.left(150)
    my_turtle.forward(60)
    my_turtle.right(150)
    my_turtle.forward(40)
    my_turtle.left(150)
    my_turtle.forward(100)
    # the left half of the tree
    my_turtle.left(60)
    my_turtle.forward(100)
    my_turtle.left(150)
    my_turtle.forward(40)
    my_turtle.right(150)
    my_turtle.forward(60)
    my_turtle.left(150)
    my_turtle.forward(60)
    my_turtle.right(150)
    my_turtle.forward(90)
    my_turtle.left(150)
    my_turtle.forward(133)

    my_turtle.end_fill()
    # the trunk
    my_turtle.color("brown")
    my_turtle.pensize(1)
    my_turtle.begin_fill()

    my_turtle.right(90)
    my_turtle.forward(70)
    my_turtle.right(90)
    my_turtle.forward(33)
    my_turtle.right(90)
    my_turtle.forward(70)

    my_turtle.end_fill()

    # the star, see similar example on python.org
    my_turtle.penup()
    my_turtle.setpos(-17, 110)
    my_turtle.color("gold")
    my_turtle.begin_fill()
    my_turtle.pendown()
    for _ in range(36):
        my_turtle.forward(40)
        my_turtle.left(170)
    my_turtle.end_fill()


    # some colourful balls
    def ball(trt, x, y, size=10, colour="red"):
        trt.penup()
        trt.setpos(x, y)
        trt.color(colour)
        trt.begin_fill()
        trt.pendown()
        trt.circle(size)
        trt.end_fill()

    ball(my_turtle, 95, -5)
    ball(my_turtle, -110, -5)
    ball(my_turtle, 80, 40, size=7, colour="gold")
    ball(my_turtle, -98, 40, size=7, colour="gold")
    ball(my_turtle, 70, 70, size=5)
    ball(my_turtle, -93, 70, size=5)


    my_turtle.hideturtle()
    wn.mainloop()

And this is the output:

Turtle XMas Tree

Summary

Turtle graphics is a great educational tool to introduce kids to programming. Grown-ups can use it as well, for a bit of nostalgic fun ;)

The full code for the demo is available on GitHub.


Adding Slack Notifications to a Luigi Pipeline in Python

In a previous article, I described how to build a data pipeline in Python using Luigi, a workflow manager written in Python and open-sourced by Spotify. I also had the opportunity to give a short talk about Luigi at the local PyData London meetup (see slides).

One of the nice features of Luigi is the possibility of receiving e-mail notifications on error. While this is a useful feature, it’s tailored to errors only, so you don’t actually know whether the Luigi pipeline has completed its execution successfully, unless you check manually. As I wanted to receive notifications on Slack, including in case of success, I started looking around for the options.

I ended up developing my own solution: https://github.com/bonzanini/luigi-slack. This blog post is a brief overview on how to use this Python package with your Luigi pipeline.

Getting started with luigi-slack

From your organisation’s Slack page (e.g. yourname.slack.com) you can add a Bot integration. The setup is very quick, and you’ll receive a token that you’ll need to use to interact with the Slack API.

You can get the bleeding edge version of luigi-slack from the GitHub link above, but beware that this is a work in progress. A somewhat stable version is available from the cheese shop (i.e. PyPI):

pip install luigi-slack

The key points of this package are:

  • Support for Python 3
  • Easy-to-use interface

Regarding the first point, the discussion on choosing Python 2 vs Python 3 is never-ending and I’m not going there in this post. For a greenfield project, I prefer to use Python 3 rather than a version with a sunset date already decided. The support for Python 2 in luigi-slack is best-effort (and of course pull requests are always welcome).

In terms of easy-to-use interface, I borrowed the nice idea of using a context manager from luigi-monitor, because it makes it easy to integrate the library with an existing pipeline.

For example, given the basic code to run a Luigi pipeline which ends with the task YourTaskClass:

import luigi

if __name__ == '__main__':
    luigi.run(main_task_cls=YourTaskClass)

All we need in order to have Slack notifications is to refactor as follows:

import luigi
from luigi_slack import SlackBot, notify

if __name__ == '__main__':
    slacker = SlackBot(token='my-token',
                       channels=['mychannel', 'anotherchannel'])
    with notify(slacker):
        luigi.run(main_task_cls=YourTaskClass)

Configuration Options for luigi-slack

The SlackBot takes a number of arguments. Besides the token, which allows you to connect to your organisation’s Slack, all the other parameters are optional:

  • channels (default empty list) is the list of channel names that you want to push the notifications to. For the channel name, you don’t need the initial # symbol. You can also deliver the notifications to a single account, by using the @username syntax
  • events (default to [FAILURE]) is the list of event types, as defined in luigi_slack, that you want to track
  • max_events (default to 5) is the max number of events of a given type that you want to report. With more than max_events events of the same type, a “please check logs” message is reported instead
  • username (default to “Luigi-slack Bot”) is the screen name of your bot
  • task_representation (default to str) is the function used to represent the task in the notification (see explanation below)

In Luigi, representing a task as a string will print the task_id attribute of a luigi.Task, which includes the class name as well as all the parameters. In other words, it looks like:

MyTask(param1="some_value", param2="other_value", your_secret_param="your_secret_value", …)

With a huge number of parameters that make the notification less readable, or with sensitive parameters that you don’t want to send around in the Slack chat room, it makes sense to display the task in a more conservative way. An example of a custom string representation could be:

def custom_task_representation(task):
    return "{}(...)".format(task.__class__.__name__)

Once the function is passed as the task_representation argument of the SlackBot, as above, the task will appear in the notifications as:

MyTask(…)

Keep in mind that an instance of a Luigi task is identified by the class name AND the value of its parameters, which is why the task_id includes them all. In other words, with a more compact representation like the one proposed in the above snippet, you won’t be able to distinguish between tasks with the same class name but different parameter values. You’ll need to customise the function based on your needs.

Summary

I’m developing a Python package to include Slack notification support to a Luigi pipeline, with a simple interface, a few optional configuration parameters, and minimal requirements in terms of refactoring.

The code is available at https://github.com/bonzanini/luigi-slack, and you can install the Python package with:

pip install luigi-slack

As this is a work in progress, it’s not widely tested, and the interface could change. Comments and pull requests are welcome.

Building Data Pipelines with Python and Luigi

As a data scientist, the emphasis of the day-to-day job is often more on the R&D side than on engineering. In the process of going from prototypes to production, though, some of the early quick-and-dirty decisions turn out to be sub-optimal and require a decent amount of re-engineering effort. This usually slows down innovation and, generally speaking, your project as a whole.

This post will discuss some experience in building data pipelines: extraction, cleaning, integration, pre-processing of data, and in general all the steps necessary to prepare your data for a data-driven product. In particular, the focus is on data plumbing, and on how a workflow manager like Luigi can come to the rescue without getting in your way. With minimal effort, the transition from prototype to production can be much smoother.

You can find the code for the examples as a GitHub Gist.

Early Days of a Prototype

In the early days of a prototype, the data pipeline often looks like this:

$ python get_some_data.py
$ python clean_some_data.py
$ python join_other_data.py
$ python do_stuff_with_data.py

This is quite common when the data project is in its exploratory stage: you know that you’ll need some pre-processing, you think it’s going to be a quick hack, so you don’t bother with engineering best practices; then the number of scripts grows, and your data pipeline comes back to bite you.

This approach has the sole advantage of being quick to hack together. On the downside, it’s tedious: every time you want to re-run the pipeline, you need to call the bunch of scripts in sequence manually. Moreover, if you’re sharing this prototype with a colleague, there is even more room for misinterpretation (“why can’t I do stuff with data?”… “did you clean it first?”, etc.)

The obvious hacky solution seems to be: let’s put everything in one script. After some quick refactoring, the do_everything.py script can look like this:

if __name__ == '__main__':
    get_some_data()
    clean_some_data()
    join_other_data()
    do_stuff_with_data()

This is fairly simple to run:

$ python do_everything.py

(Note: you could also put everything in a bash script which calls the individual scripts in sequence, but the shortcomings would be more or less the same)

Boilerplate Code

When moving towards a production-ready pipeline, there are a few more aspects to consider besides the run-everything code. In particular, error handling should be taken into account:

 
try:
    get_some_data()
except GetSomeDataError as e:
    # handle this

But if we chain all the individual tasks together, we end up with a Christmas tree of try/except:

try:
    get_some_data()
    try:
        clean_some_data()
        try:
            # you see where this is going...
        except EvenMoreErrors:
            # ...
    except CleanSomeDataError as e:
        # handle CleanSomeDataError
except GetSomeDataError as e:
    # handle GetSomeDataError

Another important aspect to consider is how to resume a pipeline. For example, if the first few tasks are completed, but then an error occurs half-way through, how do we re-run the pipeline without re-executing the initial successful steps?

# check if the task was already successful
if not i_got_the_data_already():
    # if not, run it
    try:
        get_some_data()
    except GetSomeDataError as e:
        # handle the error

Moving to Luigi

Luigi is a Python tool for workflow management. It was developed at Spotify to help build complex data pipelines of batch jobs. To install Luigi:

$ pip install luigi

Some of the useful features of Luigi include:

  • Dependency management
  • Checkpoints / Failure recovery
  • CLI integration / parameterisation
  • Dependency Graph visualisation

There are two core concepts to understand in order to apply Luigi to our own data pipeline: Tasks and Targets. A task is a unit of work, defined by extending the class luigi.Task and overriding some basic methods. The output of a task is a target, which can be a file on the local filesystem, a file on Amazon S3, some piece of data in a database, etc.

Dependencies are defined in terms of inputs and outputs, i.e. if TaskB depends on TaskA, it means that the output of TaskA will be the input of TaskB.

Let’s look at a couple of template tasks:

# Filename: run_luigi.py
import luigi

class PrintNumbers(luigi.Task):

    def requires(self):
        return []

    def output(self):
        return luigi.LocalTarget("numbers_up_to_10.txt")

    def run(self):
        with self.output().open('w') as f:
            for i in range(1, 11):
                f.write("{}\n".format(i))

class SquaredNumbers(luigi.Task):

    def requires(self):
        return [PrintNumbers()]

    def output(self):
        return luigi.LocalTarget("squares.txt")

    def run(self):
        with self.input()[0].open() as fin, self.output().open('w') as fout:
            for line in fin:
                n = int(line.strip())
                out = n * n
                fout.write("{}:{}\n".format(n, out))
                
if __name__ == '__main__':
    luigi.run()

This code showcases two tasks: PrintNumbers, which writes the numbers from 1 to 10 into a file called numbers_up_to_10.txt, one number per line, and SquaredNumbers, which reads that file and outputs a list of number-square pairs into squares.txt, also one pair per line.

To run the tasks:

$ python run_luigi.py SquaredNumbers --local-scheduler

Luigi will take care of checking the dependencies between tasks, see that the input of SquaredNumbers is not there, so it will run the PrintNumbers task first, then carry on with the execution.

The first argument we’re passing to Luigi is the name of the last task in the pipeline we want to run. The second argument simply tells Luigi to use a local scheduler (more on this later).

You could also use the luigi command:

$ luigi -m run_luigi.py SquaredNumbers --local-scheduler

Anatomy of a Task

To create a Luigi task we simply need to create a class whose parent is luigi.Task, and override some methods. In particular:

  • requires() should return the list of dependencies for the given task — in other words a list of tasks
  • output() should return the target for the task (e.g. a LocalTarget, a S3Target, etc.)
  • run() should contain the logic to execute

Luigi will check the return values of requires() and output() and build the dependency graph accordingly.

Passing Parameters

Hard-coding filenames and config values is generally speaking an anti-pattern. Once you’ve understood the structure and the dynamics of your task, you should look into parameterising all the configuration aspects so that you can dynamically call the same script with different arguments.

The class luigi.Parameter() is the place to look. Each Luigi task can have a number of parameters. Let’s say, for example, that we want to modify the previous example to support a custom number. As the parameter we’re using with the range() function is an integer, we can use luigi.IntParameter rather than the default parameter class. This is how the modified tasks might look:

class PrintNumbers(luigi.Task):
    n = luigi.IntParameter()

    def requires(self):
        return []

    def output(self):
        return luigi.LocalTarget("numbers_up_to_{}.txt".format(self.n))

    def run(self):
        with self.output().open('w') as f:
            for i in range(1, self.n+1):
                f.write("{}\n".format(i))

class SquaredNumbers(luigi.Task):
    n = luigi.IntParameter()

    def requires(self):
        return [PrintNumbers(n=self.n)]

    def output(self):
        return luigi.LocalTarget("squares_up_to_{}.txt".format(self.n))

    def run(self):
        with self.input()[0].open() as fin, self.output().open('w') as fout:
            for line in fin:
                n = int(line.strip())
                out = n * n
                fout.write("{}:{}\n".format(n, out))

To call the SquaredNumbers task for numbers up to, say, 20:

$ python run_luigi.py SquaredNumbers --local-scheduler --n 20

Parameters can also have default values, e.g.

n = luigi.IntParameter(default=10)

so in this way, if you don’t specify the --n argument, it will default to 10.

The sample code is available as a GitHub Gist.

Local vs Global Scheduler

So far, we’ve used the --local-scheduler option to run Luigi tasks with a local scheduler. This is useful for development, but in a production environment we should make use of the centralised scheduler (see the docs on the scheduler).

This has a few advantages:

  • avoid running two instances of the same task simultaneously
  • nice web-based visualisation

You can run the Luigi scheduler daemon in the foreground with:

$ luigid

or in the background with:

$ luigid --background

It will default to port 8082, so you can point your browser to http://localhost:8082 to access the visualisation.

With the global Luigi scheduler running, we can re-run the code without the option for the local scheduler:

$ python run_luigi.py SquaredNumbers --n [BIG_NUMBER]

As the sample code will run in milliseconds, if you want to have a chance to switch to the browser and see the dependency graph while the tasks are still running, you should probably use a big number like 10,000,000 or more for the --n option.

This is a cropped screenshot of the dependency graph:

dependency-graph-screenshot

Summary

We have described the definition of data pipelines using Luigi, a workflow manager written in Python. Luigi provides a nice abstraction to define your data pipeline in terms of tasks and targets, and it will take care of the dependencies for you.

In terms of code re-use, and with the mindset of going from prototype to production, I’ve found it very helpful to define the business logic of the tasks in separate Python packages (i.e. with a setup.py file). In this way, from your Luigi script you can simply import your_package and call it from there.

A task can produce multiple files as output, but if that’s the case, you should probably verify whether the task can be broken down into smaller units (i.e. multiple tasks). Do all these outputs logically belong together? Do you have dependencies between them? If you can’t break the task down, I’ve found it simpler and more useful to define the output() as a log file containing the names and timestamps of all the individual files created by the task itself. The log file name can be formatted as TaskName_timestamp_param1value_param2value_etc.
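
A minimal sketch of this pattern could look like the following, where ExportReports is a made-up task and create_output_files() is a hypothetical stand-in for the business logic producing the individual files:

import datetime
import luigi

class ExportReports(luigi.Task):
    region = luigi.Parameter()

    def output(self):
        # a single log file acts as the target for a multi-file task
        today = datetime.date.today().isoformat()
        return luigi.LocalTarget("ExportReports_{}_{}.log".format(today, self.region))

    def run(self):
        created_files = create_output_files(self.region)  # hypothetical business logic
        with self.output().open('w') as f:
            for path in created_files:
                f.write("{}\n".format(path))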

Using a workflow manager like Luigi is generally helpful because it handles dependencies, reduces the amount of boilerplate code needed for parameters and error checking, manages failure recovery, and overall forces us to follow a clear pattern when developing the data pipeline.

It’s also important to consider its limitations:

  • It was built for batch jobs, so it’s probably not useful for near real-time processing
  • It doesn’t trigger the execution for you, you still need to run the data pipeline (e.g. via a cronjob)
