Stemming, Lemmatisation and POS-tagging with Python and NLTK

This article describes some pre-processing steps that are commonly used in Information Retrieval (IR), Natural Language Processing (NLP) and text analytics applications.

In particular, the focus is on the comparison between stemming and lemmatisation, and the need for part-of-speech tagging in this context. The discussion shows some examples in NLTK, also available as a Gist on github.

Stemming

Stemming is the process of reducing a word to its stem, i.e. its root form. The root form is not necessarily a word by itself, but it can be used to generate words by concatenating the right suffix.

For example, the words fish, fishes and fishing all stem into fish, which is a correct word. On the other hand, the words study, studies and studying stem into studi, which is not an English word.

Most commonly, stemming algorithms (a.k.a. stemmers) are based on rules for suffix stripping.
The most famous example is the Porter stemmer, introduced in the 1980s and currently implemented in a variety of programming languages.

Traditionally, search engines and other IR applications have applied stemming to improve the chance of matching different forms of a word, almost treating them like synonyms, as conceptually they “belong” together.

Lemmatisation

The purpose of lemmatisation is to group together the different inflected forms of a word, so they can be analysed as a single item identified by the word's lemma, i.e. its dictionary form. The process is somewhat similar to stemming, as it maps several words into one common root, but the output of lemmatisation is a proper word, and basic suffix stripping wouldn't provide the same outcome. For example, a lemmatiser should map gone, going and went into go. In order to achieve its purpose, lemmatisation requires knowledge of the context of a word, because the process relies on whether the word is used as a noun, a verb, etc.

Part-of-speech Tagging

Part-of-speech (POS) tagging is the process of assigning a word to its grammatical category, in order to understand its role within the sentence. Traditional parts of speech are nouns, verbs, adverbs, conjunctions, etc.

Part-of-speech taggers typically take a sequence of words (i.e. a sentence) as input, and provide a list of tuples as output, where each word is associated with the related tag.

Part-of-speech tagging is what provides the contextual information that a lemmatiser needs to choose the appropriate lemma.

Examples in Python and NLTK

One of the most popular packages for NLP in Python is the Natural Language Toolkit (NLTK). It includes several tools for text analytics, as well as training data for some of the tools, and also some well-known data sets.

To install NLTK:

pip install nltk

In order to install the additional data, you can use its internal tool. From a Python interactive shell, simply type:

import nltk
nltk.download()

This will open a GUI which you can use to choose which data you want to download (if you’re not using a GUI environment, the interface will be textual). In a dev environment, I normally just download all the data for all the packages in the default folder ($HOME/nltk_data) but you can personalise
your installation.
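
If you prefer not to download everything, you can also fetch individual resources programmatically. For the examples in this article, something along these lines should be enough (note that the exact resource names can vary slightly between NLTK versions):

import nltk

nltk.download('punkt')                       # tokeniser models used by word_tokenize()
nltk.download('wordnet')                     # WordNet data used by the lemmatiser
nltk.download('averaged_perceptron_tagger')  # default model used by pos_tag()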

A full example of stemming, lemmatisation and POS-tagging is available as a Gist on github.

Let’s focus on this snippet:

from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatiser = WordNetLemmatizer()

print("Stem %s: %s" % ("studying", stemmer.stem("studying")))
print("Lemmatise %s: %s" % ("studying", lemmatiser.lemmatize("studying")))
print("Lemmatise %s: %s" % ("studying", lemmatiser.lemmatize("studying", pos="v")))

The output will be:

Stem studying: studi
Lemmatise studying: studying
Lemmatise studying: study

We can observe that the stemming process doesn't generate a real word, but a root form.
On the other hand, the lemmatiser generates real words, but without contextual information it's not able to distinguish between nouns and verbs, hence the lemmatisation process doesn't change
the word. The context is provided by the POS tag ("v" for verb in this example).

In order to generate POS tags automatically, nltk comes with a simple function. The snippet for POS tagging:

from nltk import pos_tag
from nltk.tokenize import word_tokenize

s = "This is a simple sentence"
tokens = word_tokenize(s) # Generate list of tokens
tokens_pos = pos_tag(tokens) # Assign a POS tag to each token

print(tokens_pos)

and the output will be:

[('This', 'DT'), ('is', 'VBZ'), ('a', 'DT'), ('simple', 'JJ'), ('sentence', 'NN')]

NLTK uses the set of tags from the Penn Treebank project.
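
Putting the pieces together, the POS tags can be fed into the lemmatiser. The WordNet lemmatiser expects its own POS labels rather than Penn Treebank tags, so a small conversion step is needed; the get_wordnet_pos() helper below is just an illustrative sketch of such a mapping:

from nltk import pos_tag
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

def get_wordnet_pos(treebank_tag):
    """Map a Penn Treebank tag onto a WordNet POS label."""
    if treebank_tag.startswith('J'):
        return wordnet.ADJ
    elif treebank_tag.startswith('V'):
        return wordnet.VERB
    elif treebank_tag.startswith('R'):
        return wordnet.ADV
    return wordnet.NOUN  # reasonable default for the WordNet lemmatiser

lemmatiser = WordNetLemmatizer()
tokens_pos = pos_tag(word_tokenize("The students are studying hard"))
lemmas = [lemmatiser.lemmatize(word, pos=get_wordnet_pos(tag))
          for word, tag in tokens_pos]
print(lemmas)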

Summary

Stemming, lemmatisation and POS-tagging are important pre-processing steps in many text analytics applications. You can get up and running very quickly and include these capabilities in your Python applications by using the off-the-shelf solutions offered by NLTK.

Sentiment Analysis with Python and scikit-learn

Sentiment Analysis is a field of study which analyses people’s opinions towards entities like products, typically expressed in written forms like on-line reviews. In recent years, it’s been a hot topic in both academia and industry, also thanks to the massive popularity of social media which provide a constant source of textual data full of opinions to analyse.

This article discusses one particular application of sentiment analysis: sentiment classification at the document level. In other words, given a document (e.g. a review), the task consists in finding out whether it provides a positive or a negative sentiment towards the product being discussed.

The following paragraphs describe the setup and the main components of our classification example, with samples of code in Python using scikit-learn, a popular machine learning library. The complete code is discussed at the end of this post, and available as a Gist on Github.

Setting up for the experiments

We’re using Python and in particular scikit-learn for these experiments. To install scikit-learn:

pip install -U scikit-learn

Scikit-learn has a couple of dependencies, in particular numpy and scipy. If these dependencies are not resolved by pip for some reason, you can make the installation explicit with:

pip install -U numpy scipy scikit-learn

The data set used for these experiments is the well-known Polarity Dataset v2.0, downloadable from here.

The data set contains 2,000 documents, labelled and pre-processed. In particular, there are two labels, positive and negative with 1,000 documents each. Every document has been tokenised and lowercased; each line of a document represents a sentence. This pre-processing takes out most of the work we have to do to get started, so we can focus on the classification problem. Real world data are usually messy and need proper pre-processing before we can make good use of them. All we need to do here is read the files and split the words over white spaces.

Feature extraction in scikit-learn

In classification, items are represented by their features. In our case, documents are represented by their words, so we will use words as features.

scikit-learn provides several vectorizers to translate the input documents into vectors of features (or feature weights). Typically we want to give appropriate weights to different words, and TF-IDF is one of the most common weighting schemes used in text analytics applications. In scikit-learn, we can use the TfidfVectorizer:

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(min_df=5,
                             max_df=0.8,
                             sublinear_tf=True,
                             use_idf=True)
train_vectors = vectorizer.fit_transform(train_data)
test_vectors = vectorizer.transform(test_data)

The parameters used in this example with the vectorizer are:

  • min_df=5, discard words appearing in fewer than 5 documents
  • max_df=0.8, discard words appearing in more than 80% of the documents
  • sublinear_tf=True, use sublinear weighting
  • use_idf=True, enable IDF

More options are available and the best configuration might depend on your data or on the details of the task you’re facing.

The first call to fit_transform() will create the vocabulary (i.e. the list of words/features) and the feature weights from the training data. Then, we simply call transform() on the test data, which will create the feature weights for the test data, using the same vocabulary as the training data.
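
As a quick sanity check (a small sketch, assuming the train_data and test_data lists used above), you can inspect the shape of the resulting matrices: the number of rows matches the number of documents, while both matrices share the same columns, i.e. the vocabulary built on the training data:

print(train_vectors.shape)          # (n_train_documents, n_features)
print(test_vectors.shape)           # (n_test_documents, n_features)
print(len(vectorizer.vocabulary_))  # n_features, i.e. the vocabulary size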

Classification in scikit-learn

scikit-learn comes with a number of different classifiers already built-in. In these experiments, we use different variations of Support Vector Machine (SVM), which is commonly used in classification applications.

The classification procedure is fairly simple:

from sklearn import svm

classifier_rbf = svm.SVC()
classifier_rbf.fit(train_vectors, train_labels)
prediction_rbf = classifier_rbf.predict(test_vectors)

The SVC() class generates an SVM classifier with an RBF (Gaussian) kernel as the default option (several other options are available).

The fit() method will perform the training and it requires the training data processed by the vectorizer as well as the correct class labels.

The classification step consists in predicting the labels for the test data.

Comments on The Complete Code

The complete code is available as Gist on Github. The script takes the data folder as parameter, assuming the same format of the original data, with two subfolders pos and neg.

The first step reads the content of the files and creates lists of training/testing documents and labels.
We split the data set into training (90% of the documents) and testing (10%) by exploiting the file names (they all start with "cvX", with X=[0..9]). This calls for k-fold cross-validation,
not implemented in the example but fairly easy to integrate (see the sketch after the snippet below).

if fname.startswith('cv9'):
    # 10% test data
    test_data.append(content)
    test_labels.append(curr_class)
else:
    # 90% training data
    train_data.append(content)
    train_labels.append(curr_class)
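
As mentioned above, a more robust evaluation would use k-fold cross-validation rather than a single 90/10 split. This is not implemented in the example, but a minimal sketch with scikit-learn could look like the following, wrapping the vectorizer and the classifier in a Pipeline so that the vocabulary is rebuilt on each training fold (all_data and all_labels are hypothetical lists holding the whole labelled collection):

from sklearn import svm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('tfidf', TfidfVectorizer(min_df=5, max_df=0.8,
                              sublinear_tf=True, use_idf=True)),
    ('clf', svm.LinearSVC()),
])
# 10-fold cross-validation over the whole labelled collection
scores = cross_val_score(pipeline, all_data, all_labels, cv=10)
print(scores.mean(), scores.std())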

Once the vectorizer has generated the feature vectors for training and testing, we can call the classifier as described above. In the example, we try different variations of SVM:

classifier_rbf = svm.SVC()
classifier_linear = svm.SVC(kernel='linear')
classifier_liblinear = svm.LinearSVC()

After performing the classification, we print the quality (precision/recall) results using classification_report(), and some timing information.
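
For reference, this is roughly what that final step looks like (a sketch: classification_report() comes from sklearn.metrics, and the timing simply wraps the calls with time.time()):

import time
from sklearn.metrics import classification_report

t0 = time.time()
classifier_linear = svm.SVC(kernel='linear')
classifier_linear.fit(train_vectors, train_labels)
t1 = time.time()
prediction_linear = classifier_linear.predict(test_vectors)
t2 = time.time()

print("Training time: %fs; Prediction time: %fs" % (t1 - t0, t2 - t1))
print(classification_report(test_labels, prediction_linear))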

We notice that:

  • The default RBF kernel performs worse than the linear kernel
  • SVC() with linear kernel is much much slower than LinearSVC()

The first point opens up a discussion on Gaussian vs. linear kernels, which is not really part of this blog post, but as a rule of thumb, when the number of features is much higher than the number of samples (documents), a linear kernel is probably the preferred choice. Moreover, there are options to properly tune the parameters of an RBF kernel.

The second bullet point is easily explained by the fact that, under the hood, scikit-learn relies on different C libraries. In particular SVC() is implemented using libSVM, while LinearSVC() is implemented using liblinear, which is explicitly designed for this kind of application.

Summary

We have discussed an application of sentiment analysis, tackled as a document classification problem with Python and scikit-learn.

The choice of the classifier, as well as the feature extraction process, will influence the overall quality of the results, and it’s always good to experiment with different configurations.

scikit-learn offers many options from this point of view.

Knowing the underlying implementation also allows for a better choice in terms of speed.

Full example in Python.

Searching PubMed with Python

Update 2021-01: minor update to reflect some changes in the Pubmed API

PubMed is a search engine accessing millions of biomedical citations. Users can freely search for biomedical references. For some articles, the access to the full text paper is also open.

This post describes how you can programmatically search the PubMed database with Python, in order to integrate searching or browsing capabilities into your Python application.

There are two main options to consider:

  • Accessing the database via their public API
  • Using a package that does the above for you, e.g. Biopython

The Entrez Database a.k.a. the PubMed API

The PubMed API is called the Entrez Database. It's a freely accessible web service, although there are some guidelines to follow (at the time of writing, they recommend not to post more than three requests per second).

There are in total 8 different functions, or e-utilities, which access the database in different ways. Most of the utilities will return XML data, although some of them have the option to return a more convenient JSON format.

In particular, the search API is available at the following URL:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi

If we want to search for the term fever, the URL we need is for example:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&retmode=json&retmax=20&sort=relevance&term=fever

The query string parameters used in this example:

  • db=pubmed, to narrow the search down to the pubmed DB only
  • retmode=json, to have a JSON string in response and not an XML
  • retmax=20, to obtain 20 results
  • sort=relevance, the results are sorted by relevance and not by added date which is the default ranking option on pubmed
  • term=[your query], the URL-encoded query

This search session will provide a number of PubMed IDs (probably 20) corresponding to the top citations which match our query.

In order to get some more details about these citations, we can use the efetch utility, which takes one or more citation IDs as input. At the moment, the efetch utility does not return JSON, so XML is the only option to consider.

Given a list of citation IDs, the fetch operation can be built as follows:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&retmode=xml&id=ID1,ID2,...

At this point, the response will be an XML document to handle with e.g. minidom or another XML library. Please notice that we can query the efetch utility for multiple documents, simply by separating their IDs with a comma.

Overall, it’s relatively easy to create the appropriate request using libraries like urllib.request or, better, requests. The response can be parsed with the json module, or minidom in case of XML.
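
As an illustration, a minimal sketch using the requests package and the query string parameters described above could look like this (the JSON response nests the list of IDs under esearchresult/idlist):

import requests

BASE_URL = 'http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi'

payload = {
    'db': 'pubmed',
    'retmode': 'json',
    'retmax': 20,
    'sort': 'relevance',
    'term': 'fever',  # requests takes care of the URL encoding
}
response = requests.get(BASE_URL, params=payload)
results = response.json()
print(results['esearchresult']['idlist'])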

An even more convenient way to do the job is to use an existing library that does what we need for us. A good example is Biopython, a comprehensive package for biological computation in Python.

Searching PubMed with Biopython

You can install the Biopython package with pip:

pip install biopython

The only component we need for searching PubMed is Entrez, which we can import with:

from Bio import Entrez

We can define a function for performing the search, e.g.

def search(query):
    Entrez.email = 'your.email@example.com'
    handle = Entrez.esearch(db='pubmed',
                            sort='relevance',
                            retmax='20',
                            retmode='xml',
                            term=query)
    results = Entrez.read(handle)
    return results

The list of citation IDs will be available as results['IdList'].

The next step is to fetch the details for all the retrieved articles via the efetch utility:

def fetch_details(id_list):
    ids = ','.join(id_list)
    Entrez.email = 'your.email@example.com'
    handle = Entrez.efetch(db='pubmed',
                           retmode='xml',
                           id=ids)
    results = Entrez.read(handle)
    return results


A full example of search over the term fever:

if __name__ == '__main__':
    results = search('fever')
    id_list = results['IdList']
    papers = fetch_details(id_list)
    for i, paper in enumerate(papers['PubmedArticle']):
         print("{}) {}".format(i+1, paper['MedlineCitation']['Article']['ArticleTitle']))


Notice that the structure of the MedlineCitation dictionaries can get
really convoluted, so it's worth getting familiar with it by doing some pretty-printing. For example, after fetching the papers with the code above, you can print out the data for the first paper using the following snippet, to understand the structure of its record.

# Pretty print the first paper in full to observe its structure
import json
print(json.dumps(papers['PubmedArticle'][0], indent=2))

The reason for declaring your email address is to allow the NCBI to
contact you before blocking your IP, in case you’re violating the guidelines.
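
Since Entrez.email is a module-level setting, you can also declare it once at the top of your script rather than inside each function; Biopython also supports an NCBI API key, which raises the allowed request rate:

from Bio import Entrez

Entrez.email = 'your.email@example.com'
# Optional, if you have registered for an NCBI API key:
# Entrez.api_key = 'your-api-key'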

The Gist of the full example:

https://gist.github.com/bonzanini/5a4c39e4c02502a8451d

My Python Code is Slow? Tips for Profiling

tl;dr

Before you can optimise your slow code, you need to identify the bottlenecks: proper profiling will give you the right insights.

This article discusses some profiling tools for Python.

Introduction

Python is a high-level programming language with an emphasis on readability. Some of its peculiarities, like the dynamic typing, or the (in)famous GIL, might have some trade-offs in terms of performance.

Many open source packages often follow a readability-first approach: the algorithms are firstly implemented using pythonic, easy-to-read code, then the performance issues are identified and tackled, refactoring the code or employing solutions like Cython. For example, this is the case of machine learning packages like scikit-learn or gensim. The latter shows an implementation of the word2vec algorithm which is even faster than the original C implementation by Google, quite impressive if we consider how Python is often seen as slow.

Before we start to refactor our code, or to think about solutions like Cython, it is important to identify where the performance bottlenecks are, so we can make an informed decision regarding the course of action we want to follow. This is a fundamental step if we want to get the biggest benefit with
the least amount of work. In fact, one of the biggest mistakes in this context would be to make an educated guess, or to follow an intuition, and fix what we believe is the source of the problem.

By profiling our code, we take this uncertainty away since we will know
exactly where the problems are.

Sample code to profile

The following functions will be used for a simple proof of concept.

Please notice that while it's often reasonable to assume that pythonic code is faster than non-pythonic code, we don't know it yet! So we actually need to verify whether the slow() function is slower than pythonic():

# profile_test.py
def slow(N=1000000):
    total = 0
    for i in range(N):
        total += i
    return total

def pythonic(N=1000000):
    total = sum(range(N))
    return total

Both functions simply sum N integers, with N defaulted to one million.

You need to time it

Profiling involves measuring the resource you want to optimise for, whether
it is memory usage or CPU time. In this article we are focusing on execution (CPU) time in general, so profiling mainly involves timing.

The very basic approach for timing involves the unix shell. Given the profile_test.py code above, you can use the time command to verify the run time:

$ time python -c "import profile_test; profile_test.slow()"

real    0m0.102s
user    0m0.077s
sys 0m0.023s

$ time python -c "import profile_test; profile_test.pythonic()"

real    0m0.071s
user    0m0.043s
sys 0m0.024s

Notice that this timing also includes the set-up cost of importing the profile_test module, which is not what we want to test. These first results tell us that the slow() function is indeed slower than pythonic().

This way of timing the Python code from the command line can become a bit awkward when we want to time longer pieces of code. We can use the time module to include some timing features within our code. The overall structure of our timing code will be:

import time
t0 = time.time()  # start time
# the code to time goes here
t1 = time.time() # end time
print(t1 - t0)

Given the profile_test.py above, we can expand it by appending the following:

if __name__ == '__main__':
    import time
    t0 = time.time()
    result = slow()
    t1 = time.time()
    print("slow(): %f" % (t1 - t0))
    t0 = time.time()
    result = pythonic()
    t1 = time.time()
    print("pythonic(): %f" % (t1 - t0))

The script can now be executed:

$ python profile_test.py
slow(): 0.077502
pythonic(): 0.022454

We notice that the difference in timing is now clearer, as it was previously masked by the overhead of importing the module and invoking the code through the external time command.

One more option is the timeit module, which shares some aspects with the time command: it can be easily called from the command line and it is particularly useful for quickly testing some small bits of Python code. It also offers the option to loop through the code for a number of times, in order to get some statistics like average or best run time.
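
For example, you can time the two functions from the shell with python -m timeit -s "import profile_test" "profile_test.slow()", or use the Python interface directly (a small sketch; the exact numbers will vary on your machine):

import timeit

# Repeat the measurement 5 times, each running the function 10 times
print(timeit.repeat("profile_test.slow()",
                    setup="import profile_test", number=10, repeat=5))
print(timeit.repeat("profile_test.pythonic()",
                    setup="import profile_test", number=10, repeat=5))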

The cProfile module

Part of the standard library, the cProfile module allows you to go a bit more into details analysing the most expensive functions.

You can call the cProfile module from the command line without modifying your existing Python code:

$ python -m cProfile -o profiling_results profile_test.py

The above command will save the profiling results in the file specified after the -o flag, in this case profiling_results.

We can analyse the results using the pstats module, either in a Python script or from an interactive session:

>>> import pstats
>>> stats = pstats.Stats("profiling_results")
>>> stats.sort_stats("tottime")

>>> stats.print_stats(10)

         12 function calls in 0.104 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.083    0.083    0.083    0.083 profile_test.py:3(slow)
        1    0.021    0.021    0.021    0.021 {built-in method sum}
        2    0.000    0.000    0.000    0.000 {built-in method print}
        1    0.000    0.000    0.104    0.104 profile_test.py:1(<module>)
        4    0.000    0.000    0.000    0.000 {built-in method time}
        1    0.000    0.000    0.021    0.021 profile_test.py:9(pythonic)
        1    0.000    0.000    0.104    0.104 {built-in method exec}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}


In this example, the function calls are ordered by total time (tottime), the other option being the cumulative time (cumtime), and the top 10 functions are printed on the screen.

Again, we can notice how the slow() function is slower than pythonic(). Given the simplicity of our code though, this profiling session doesn't tell us much that we didn't already know. It is anyway interesting to see how we get information about the number of times a function is called, and whether we are calling a built-in or a custom method.
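
For instance, sorting by cumulative time highlights the callers that dominate the overall run rather than the individual functions:

>>> stats.sort_stats("cumtime")
>>> stats.print_stats(10)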

Line by line with line_profiler

If we need a higher level of detail, one option we might want to consider is line_profiler. It's not part of the standard library, so we can install it with:

pip install line_profiler

The line_profiler provides a decorator that we can use for the functions we want to analyse. In order to use it, we need to modify our code as follows.

Firstly, import the module:

import line_profiler

Secondly, decorate the functions with the @profile decorator:

@profile
def slow(N=1000000):
    # code of slow()

@profile
def pythonic(N=1000000):
    # code of pythonic()

The line_profiler provides a command line utility to run it:

$ kernprof -v -l profile_test.py

This command will give the following output:

slow(): 1.347966
pythonic(): 0.021008
Wrote profile results to profile_test.py.lprof
Timer unit: 1e-06 s

Total time: 0.764091 s
File: profile_test.py
Function: slow at line 3

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     3                                           @profile
     4                                           def slow(N=1000000):
     5         1            1      1.0      0.0      total = 0
     6   1000001       362178      0.4     47.4      for i in range(N):
     7   1000000       401912      0.4     52.6          total += i
     8         1            0      0.0      0.0      return total

Total time: 0.020996 s
File: profile_test.py
Function: pythonic at line 10

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    10                                           @profile
    11                                           def pythonic(N=1000000):
    12         1        20995  20995.0    100.0      total = sum(range(N))
    13         1            1      1.0      0.0      return total

The time is here measured in millionths of a second (the timer unit reported above). We immediately notice how the slow() function is now much slower: in fact, the profiling introduces some overhead that is particularly prominent in this function. If we analyse the output, we can see how some lines of code are hit N times (1M in our case) in the slow() function: this is where the overhead is coming from. Analysing the output line by line, we can get a better insight into what is causing the difference in speed between the two functions.

A word on unit testing

Unit tests are important. Do not avoid testing just because it makes your profiling process easier. If your profiling approach breaks some tests, do not ignore it, but rather find a workaround to have both profiling and testing in place.
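
For example, a trivial test for profile_test.py (in its undecorated version) could simply verify that the two implementations agree; a minimal sketch, runnable with pytest or any other test runner:

# test_profile_test.py
from profile_test import slow, pythonic

def test_implementations_agree():
    # Both functions must return the same result for the same input
    assert slow(1000) == pythonic(1000) == sum(range(1000))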

Something I've heard at a recent PyData London Meetup (possibly I'm quoting Ian Ozsvald?):

“[without unit testing] my code was very fast and very wrong”.

I think this conveys the message.

Summary

Long story short:

  • If you need to make your code faster, you need to know where the performance bottlenecks are
  • You can use some very basic functionality from the unix shell, e.g. the time command
  • Python provides some basic facilities for timing, e.g. the time and timeit modules
  • Python provides some more advanced facilities for profiling, e.g. the cProfile and line_profiler modules
  • Do not forget to test your code, because you need it to be both fast and correct