Profiling is hard. Measuring the right metric and correctly interpreting the obtained data can be difficult even for relatively simple programs.

For performance optimization, I'm a big fan of the poor man's profiler: run the binary to analyze under a debugger, periodically stop the execution, get a backtrace and continue. After doing this a few times, the hotspots will become apparent. This works amazingly well in practice and gives a reliable picture of where time is spent, without the danger of skewed results from instrumentation overhead.
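The sampling loop can even be scripted. Here is a minimal sketch of the idea (it assumes `gdb` is installed and that `pid` refers to the process to profile; `top_frame` and `poor_mans_profile` are hypothetical helper names):

```python
import collections
import subprocess
import time

def top_frame(backtrace):
    """Extract the innermost frame line (starting with '#0') from gdb's bt output."""
    return next((l for l in backtrace.splitlines() if l.startswith('#0')), None)

def poor_mans_profile(pid, samples=20, interval=0.5):
    """Periodically attach gdb, grab a backtrace, and tally the innermost frames."""
    counts = collections.Counter()
    for _ in range(samples):
        out = subprocess.run(
            ['gdb', '-p', str(pid), '-batch', '-ex', 'bt'],
            capture_output=True, text=True).stdout
        frame = top_frame(out)
        if frame:
            counts[frame] += 1
        time.sleep(interval)
    # The hotspots are the frames that show up most often.
    return counts.most_common(10)
```

The most frequently sampled frames are your hotspots, with no instrumentation overhead in the target process.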

Sometimes it's nice to get a more fine-grained view. That is, not only find the hotspot, but get an overview of how much time is spent where. That's where 'real' profilers come in handy.

Under Windows, I like the built-in "Event Tracing for Windows" (ETW), which produces files that can be analyzed with Xperf/Windows Performance Analyzer. It is a really well thought out system, and the Xperf UI is amazing in the analyzing abilities that it offers. Probably the best place to start reading up on this is ETW Central.

Under Linux, I haven't found a profiler I can really recommend, yet. gprof and sprof are both ancient and have severe limitations. OProfile may be nice, but I haven't had a chance to use it yet, as it wasn't available for my Ubuntu LTS release.

I have used Callgrind from the Valgrind toolkit in combination with the KCachegrind GUI analyzer. I typically invoke it like this:

valgrind --tool=callgrind --callgrind-out-file=callgrind-cpu.out ./program-to-profile
kcachegrind callgrind-cpu.out

Callgrind works by instrumenting the binary under test. It slows down program execution, often by a factor of 10. Further, it only measures CPU time, so sleeping times are not included. This makes it unsuitable for programs that wait a significant amount of time for network or disk operations to complete. Despite these drawbacks, it's pretty handy if CPU time is all that you're interested in.

If blocking times are important (as they are for so many modern applications - we generally spend less time computing and more time communicating), gperftools is a decent choice. It includes a CPU profiler that can be run in real-time sampling mode, and the results can be viewed in KCachegrind. It is recommended to compile it into the binary under analysis, but using LD_PRELOAD works decently well:

CPUPROFILE_REALTIME=1 CPUPROFILE=prof.out LD_PRELOAD=/usr/lib/ ./program-to-profile
google-pprof --callgrind ./program-to-profile prof.out > callgrind-wallclock.out
kcachegrind callgrind-wallclock.out

If it works, this gives a good overall profile of the application. Unfortunately, it sometimes fails: on amd64, there are sporadic crashes from within libunwind. It's possible to just ignore those and rerun the profile; interesting data is obtained about half the time.

The more serious problem is that CPUPROFILE_REALTIME=1 causes gperftools to use SIGALRM internally, conflicting with any application that wants to use that signal for itself. Looking at the profiler source code, it should be possible to work around this limitation with the undocumented CPUPROFILE_PER_THREAD_TIMERS and CPUPROFILE_TIMER_SIGNAL environment variables, but I haven't gotten that to work yet.

You'd think that perf has something to offer in this area as well. Indeed, it has a CPU profiling mode (with nice flamegraph visualizations) and a sleeping time profiling mode, but I couldn't find a way to combine the two to get a real-time profile.

Overall, there still seems to be room for a good, reliable real-time sampling profiler under Linux. If I'm missing something, please let me know!

Returning generators from with statements

Recently, an interesting issue came up at work that involved a subtle interaction between context managers and generator functions. Here is some example code demonstrating the problem:

from contextlib import contextmanager

@contextmanager
def resource():
    """Context manager for some resource"""

    print("Resource setup")
    yield
    print("Resource teardown")

def _load_values():
    """Load a list of values (requires resource to be held)"""

    for i in range(3):
        print("Generating value %d" % i)
        yield i

def load_values():
    """Load values while holding the required resource"""

    with resource():
        return _load_values()

This is the output when run:

>>> for val in load_values(): pass
Resource setup
Resource teardown
Generating value 0
Generating value 1
Generating value 2

Whoops. The resource is destroyed before the values are actually generated. This is obviously a problem if the generator depends on the existence of the resource.

When you think about it, it's pretty clear what's going on. Calling _load_values() produces a generator object, whose code is only executed when values are requested. load_values() returns that generator, exiting the with statement and leading to the destruction of the resource. When the outer for loop (for val) comes around to iterating over the generator, the resource is long gone.
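The delayed execution is easy to demonstrate in isolation:

```python
def gen():
    print("body running")
    yield 1

g = gen()                    # creates the generator; the body has NOT run yet
print("generator created")
next(g)                      # only now does "body running" appear
```

Running this prints "generator created" before "body running" - the generator body executes only when the first value is requested, not at call time.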

How do you solve this problem? In Python 3.3 and newer, you can use the yield from syntax to turn load_values() into a generator as well. The execution of load_values() is halted at the yield from point until the child generator is exhausted, at which point it is safe to dispose of the resource:

def load_values():
    """Load values while holding the required resource"""

    with resource():
        yield from _load_values()

In older Python versions, an explicit for loop over the child generator is required:

def load_values():
    """Load values while holding the required resource"""

    with resource():
        for val in _load_values():
            yield val

Still another method would be to turn the result of _load_values() into a list and return that instead. This incurs higher memory overhead, since all values have to be held in memory at the same time, so it's only appropriate for relatively short lists.
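For completeness, a self-contained sketch of the list-based variant (with minimal stand-ins for the resource() and _load_values() helpers from above):

```python
from contextlib import contextmanager

@contextmanager
def resource():
    print("Resource setup")
    yield
    print("Resource teardown")

def _load_values():
    for i in range(3):
        yield i

def load_values():
    """Materialize all values while the resource is still held."""
    with resource():
        return list(_load_values())
```

Since list() exhausts the generator before the with block exits, all values are produced while the resource exists.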

To sum up, it's a bad idea to return generators from under with statements. While it's not terribly confusing what's going on, it's a wee bit subtle, and many people don't think about it until they run into the issue. Hope this heads-up helps.

A better way for deleting Docker images and containers

In one of my last posts, I described the current (sad) state of managing Docker container and image expiration. Briefly, Docker creates new containers and images for many tasks, but there is no good way to automatically remove them. The best practice seems to be a rather hack-ish bash one-liner.

Since this wasn't particularly satisfying, I decided to do something about it. Here, I present docker-cleanup, a Python application for removing containers and images based on a configurable set of rules.

This is a rules file example:

# Keep currently running containers, delete others if they last finished
# more than a week ago.
KEEP CONTAINER IF Container.State.Running;
DELETE CONTAINER IF Container.State.FinishedAt.before('1 week ago');

# Delete dangling (unnamed and not used by containers) images.
DELETE IMAGE IF Image.Dangling;

Clear, expressive, straight-forward. The rule language can do a whole lot more and provides a readable and intuitive way to define removal policies for images and containers.

Head over to GitHub, give it a try, and let me know what you think!

Using Python slice objects for fun and profit

Just a quick tip about the little-known slice objects in Python. They are used to implement the slicing syntax for sequence types (lists, strings):

s = "The quick brown fox jumps over the lazy dog"

# s[4:9] is internally converted (and equivalent) to s[slice(4, 9)].
assert s[4:9] == s[slice(4, 9)]

# 'Not present' is encoded as 'None'
assert s[20:] == s[slice(20, None)]

Slice objects can be used in normal code too, for example for tracking regions in strings: instead of keeping separate start_idx and end_idx variables (or writing a custom class/namedtuple), simply roll the indices into a slice.

# A column-aligned table:
table = ['REPOSITORY   TAG      IMAGE ID       CREATED       SIZE',
         '<none>       <none>   0987654321AB   2 hours ago   385.8 MB',
         'chris/web    latest   0123456789AB   2 hours ago   385.8 MB']
header, *entries = table

# Compute the column slices by parsing the header. Gives a list of slices.
slices = find_column_slices(header)

for entry in entries:
    repo, tag, id, created, size = [entry[sl].strip() for sl in slices]

This is mostly useful when the indices are computed at runtime and applied to more than one string.
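The find_column_slices() helper above isn't shown; one possible sketch (assuming columns are separated by runs of two or more spaces, so that single spaces as in 'IMAGE ID' stay within one column):

```python
import re

def find_column_slices(header):
    """Derive column slices from a space-aligned header line.

    Column starts are the positions of tokens separated by 2+ spaces;
    each column extends to the start of the next one."""
    starts = [m.start() for m in re.finditer(r'\S+(?: \S+)*', header)]
    return [slice(s, e) for s, e in zip(starts, starts[1:] + [None])]
```

The resulting slices can then be applied to every data row of the table, as shown above.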

More generally, slice objects encapsulate regions of strings/lists/tuples, and are an appropriate tool for simplifying code that operates on start/end indices. They provide a clean abstraction, make the code more straight-forward and save a bit of typing.

A neat Python debugger command

pdb is a console-mode debugger built into Python. Out of the box, it has basic features like variable inspection, breakpoints, and stack frame walking, but it lacks more advanced capabilities.

Fortunately, it can be customized with a .pdbrc file in the user's home directory. Ned Batchelder has several helpful commands in his .pdbrc file:

  • pl: print local variables
  • pi obj: print the instance variables of obj
  • ps: print the instance variables of self

Printing instance variables is great for quickly inspecting objects, but it shows only one half of the picture. What about the class-side of objects? Properties and methods are crucial for understanding what can actually be done with an object, in contrast to what data it encapsulates.

Since I couldn't find a readily available pdb command for listing class contents, I wrote my own:

# Print contents of an object's class (including bases).
alias pc for k,v in sorted({k:v for cls in reversed(%1.__class__.__mro__) for k,v in cls.__dict__.items() if cls is not object}.items()): print("%s%-20s= %-80.80s" % ("%1.",k,repr(v)))
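For readability, here is roughly what the alias does, written out as a plain function (a hypothetical helper for illustration, not part of pdb):

```python
def class_contents(obj):
    """Collect attributes defined on an object's class and its base classes.

    Walking the MRO in reverse means more-derived classes override
    less-derived ones; attributes of plain `object` are skipped."""
    members = {}
    for cls in reversed(type(obj).__mro__):
        if cls is not object:
            members.update(cls.__dict__)
    return dict(sorted(members.items()))
```

The alias simply inlines this logic into a single statement and formats each name/value pair for printing.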

pc lists the contents of an object's class and its base classes. Typically, these are the properties and methods supported by the object. It is used like this:

# 'proc' is a multiprocessing.Process() instance.
(Pdb) pc proc
proc.daemon              = <property object at 0x036B9A20>
proc.exitcode            = <property object at 0x036B99C0>
proc.ident               = <property object at 0x036B9A50>
proc.is_alive            = <function BaseProcess.is_alive at 0x033E4618>
proc.join                = <function BaseProcess.join at 0x033E45D0>                = <property object at 0x036B99F0>
proc.pid                 = <property object at 0x036B9A50>
proc.run                 = <function at 0x033E4A98>
proc.start               = <function BaseProcess.start at 0x033E4DB0>
proc.terminate           = <function BaseProcess.terminate at 0x033E4DF8>

Note the difference to pi, which lists the contents of the proc instance:

(Pdb) pi proc       # In contrast: the instance dictionary.
proc._args          = ()
proc._config        = {'authkey': b'\xd0\xc8\xbd\xd6\xcf\x7fo\xab\x19_A6\xf8M\xd4\xef\x88\xa9;\x99c\x9
proc._identity      = (2,)
proc._kwargs        = {}
proc._name          = 'Process-2'
proc._parent_pid    = 1308
proc._popen         = None
proc._target        = None

In general, pc focuses on the interface while pi examines the state of the object. The two complement each other nicely. Especially when working with an unfamiliar codebase, pc is helpful for quickly figuring out how to use a specific class.

pc works with both Python 2 and Python 3 (on Python 2 it only shows new-style classes). Add it to your .pdbrc and give it a try. Let me know what you think!

Python's GIL and atomic reference counting

I love Python because it's an incredibly fun, expressive and productive language. However, it's often criticised for being slow. I think the correct answer to that is two-fold:

  • Use an alternative Python implementation for impressive speed-ups. For example, PyPy is on average 7 times faster than the standard CPython.
  • Not all parts of a program have to be blazingly fast. Use Python for all non-performance-critical areas, such as the UI and database access, and drop to C or C++ only when it's required for CPU-intensive tasks. This is easy to achieve with language binding generators such as CFFI, SWIG or SIP.

A further performance-related issue is Python's Global Interpreter Lock (GIL), which ensures that only one Python thread can run at a single time. This is a bit problematic, because it affects PyPy as well, unless you want to use its experimental software transactional memory support.

Why is this such a big deal? With the rise of multi-core processors, multithreading is becoming more important as well. This not only affects performance on large servers, it impacts desktop programs and is crucial for battery life on mobile phones (race to idle). Further, other programming languages make multi-threaded programming easier and easier. C, C++, and Java have all moved to a common memory model for multithreading. C++ has gained std::atomic, futures, and first-class thread support. C# has async and await, which is proposed for inclusion in C++ as well. This trend will only accelerate in the future.

With this in mind, I decided to investigate the CPython GIL. Previous proposals for its removal have failed, but I thought it's worth a look — especially since I couldn't find any recent attempts.

The results were not encouraging. Changing the reference count used by Python objects from a normal int to an atomic type resulted in a ~23% slowdown on my machine. This is without actually changing any of the locking. This penalty could be moderated for single-threaded programs by only using atomic instructions once a second thread is started. This requires a function call and an if statement to check whether to use atomics or not in the refcount hot path. Doing this still results in an 11% slowdown in the single-threaded case. If hot-patching was used instead of the if, a 6% slowdown remained.

[Figure: refcount slowdown for the different approaches]

The last result looks promising, but is deceiving. Hot-patching would rely on having a single function to patch. Alas, the compiler decided to mostly inline the Py_INCREF/Py_DECREF function calls. Disabling inlining of these functions gives a 16% slowdown, which is worse than the "call + if" method. Furthermore, hot-patching is probably not something that could be merged in CPython anyway.

So what's the conclusion? Maybe turning Py_INCREF and Py_DECREF into functions and living with the 11% slowdown of the single-threaded case would be sell-able, if compelling speed-ups of multithreaded workloads could be shown. It should be possible to convert one module at a time from the GIL to fine-grained locking, but performance increases would only be expected once at least a couple of core modules are converted. That would take a substantial amount of work, especially given the high risk that the resulting patches wouldn't be accepted upstream.

Where does this leave Python as a language in the multi-threaded world? I'm not sure. Since PyPy is already the solution to Python's performance issue, perhaps it can solve the concurrency problem as well with its software transactional memory mode.

PS: My profiling showed that reference counting in Python (Py_INCREF() and Py_DECREF()) takes about 5-10% of the execution time of benchmarks (not including actual object destruction). Crazy!

Subclassing C++ in Python with SIP

I use SIP in MapsEvolved to generate bindings for interfacing Python with C++. I really like SIP due to its straight-forward syntax that mostly allows just copying class definitions over from C++. Further, it's really well thought out and contains support for a number of advanced use cases.

One such feature is implementing a C++ interface in Python. The resulting class can then even be passed back to C++, and any methods called on it will be forwarded to the Python implementation. Sweet!

Here is an example I originally wrote for this Stack Overflow question. It illustrates how ridiculously easy it is to get this working:


The C++ header (visitor.h):

class EXPORT Node {
public:
    int getN() const;
};

struct EXPORT NodeVisitor {
    virtual void OnNode(Node *n) = 0;
};

void visit_graph_nodes(NodeVisitor *nv);

The corresponding SIP binding file:

%Module pyvisit

%ModuleHeaderCode
#include "visitor.h"
%End

class Node {
public:
    int getN() const;
};

struct NodeVisitor {
    virtual void OnNode(Node* n) = 0;
};

void visit_graph_nodes(NodeVisitor *nv);

Using it from Python:

>>> import pyvisit
>>> class PyNodeVisitor(pyvisit.NodeVisitor):
...     def OnNode(self, node):
...         print(node.getN())
...
>>> pnv = PyNodeVisitor()
>>> pyvisit.visit_graph_nodes(pnv)

Here, the C++ function visit_graph_nodes() calls the Python method pnv.OnNode() for every node in its (internal) graph. A zip file with the full working source code of this example can be downloaded here.

The subclassing capabilities of SIP don't stop at interfaces, either. It's possible to derive from any C++ class, abstract or not, inheriting (or overriding) existing method implementations as needed. This gives a lot of flexibility and makes it easy to have classes with some parts implemented in C++, and others being in Python.

Configuring SSL on Apache 2.4

Configuring a modern web server to employ strong encryption and forward secrecy doesn't have to be hard. There is excellent documentation from Mozilla and from the OWASP.

Obtaining an SSL certificate

One major stumbling block is where to obtain an SSL certificate. In the future, this should hopefully be easy with Let's Encrypt. Until that is actually functional, StartSSL offers free SSL certificates. The process takes a bit of patience, but it's not difficult, and there are StartSSL HOWTOs available online.

While I've used StartSSL in the past, I had some trouble with them because 10 years after I registered, someone grabbed and StartSSL was alleging I was trying to mislead users?! So that was the end of my business with them...

I've now switched to Comodo's Positive SSL Certificate, which I like for a couple of reasons:

  • it lasts for 3 years,
  • it's really uncomplicated, and
  • it's crazy cheap: $7.45 per year.

The process of getting the cert from them was super easy, even simpler than StartSSL: about 3 hours from going to their website to having the certificate installed on my server, with most of that spent waiting for email verifications. Credit card payment was quick and easy. 10/10, would buy again :-)

Apache configuration

With the certificate acquisition out of the way, here are the juicy bits from my Apache config.

mod_ssl config:

# Enable only ciphers that support forward secrecy.
# See the Mozilla and OWASP guides mentioned above for reference.

# Use server priorities for cipher algorithm choice.
SSLHonorCipherOrder on

# With Apache 2.4, SSLv2 is gone and only SSLv3 and TLSv* are supported.
# Disable SSLv3, all TLS protocols are OK.
SSLProtocol all -SSLv3

# Enable OCSP stapling
# With this, the client can verify that our certificate isn't revoked
# without having to query an external OCSP service.
SSLUseStapling On
SSLStaplingCache shmcb:${APACHE_RUN_DIR}/ssl_stapling(32768)

The per-site configuration:

SSLEngine On
SSLCertificateKeyFile   /path/to/serverkey.key    # The private server key.
SSLCertificateFile      /path/to/certificate.crt  # The certificate provided by CA.
SSLCertificateChainFile /path/to/cert-bundle      # A separate download from your CA.

# Use a customized prime group for DH key exchange (vs Logjam attack).
# Generate a DH group file with:
#    openssl dhparam -out dhparams.pem 2048
# Newer Apache versions support the following command to set the dhparams:
SSLOpenSSLConfCmd DHParameters "/path/to/dhparams.pem"

# If Apache reports an error for the above line, remove it and include
# the dhparams in the certificate:
#   cat <CERT>.crt dhparams.pem > cert-with-dhparams.crt
#   SSLCertificateFile cert-with-dhparams.crt

# HSTS: Force browsers to require SSL for this domain for the next year.
# Down-grade to HTTP will cause browsers to abort with a security error.
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

# HPKP: Pin the current key for the next two months.
# Generate hash using:
#   openssl rsa -in <serverkey>.key -outform der -pubout | \
#   openssl dgst -sha256 -binary | openssl enc -base64
# You ideally want to generate a backup key and include that here as well,
# in case the primary key is lost or compromised.
# Also note the implications for key rollover.
# See:
Header always set Public-Key-Pins "pin-sha256=\"<HASH>\"; max-age=5184000; includeSubDomains"

# Disable compression to avoid BREACH HTTPS/SSL attack.
<Location />
    SetEnv no-gzip
</Location>
That should cover the basics.


As for SSL connection testing, I found the Qualys SSL Labs Test helpful. It shows what browsers (browser versions) will get which encryption quality (forward secrecy or not) and highlights common problems such as certificate chain issues.

Hope this helps someone out there!

Cleaning up unused Docker images and containers

Docker doesn't delete old/unused images or containers by itself, even if they weren't used for a long time or were only intermediary steps on the way to another image. This leads to an image sprawl that eats up a lot of disk space if not kept in check.

The right way to solve this would be to parse the output of docker inspect and remove containers and images based on certain policies. Unfortunately, a quick internet search did not turn up a script that does this.
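In outline, such a script might look like this (a hypothetical sketch; it assumes the docker CLI is available and relies on the State fields of docker inspect's JSON output):

```python
import datetime
import json
import subprocess

def finished_before(container_info, cutoff):
    """Check whether a stopped container last finished before `cutoff`.

    `container_info` is one entry from `docker inspect`'s JSON output."""
    state = container_info['State']
    if state.get('Running'):
        return False
    # FinishedAt looks like '2015-01-06T15:47:32.080254511Z';
    # trim the sub-second part and timezone suffix before parsing.
    finished = state['FinishedAt'][:19]
    finished_at = datetime.datetime.strptime(finished, '%Y-%m-%dT%H:%M:%S')
    return finished_at < cutoff

def expired_containers(days=14):
    """IDs of containers that finished more than `days` ago."""
    ids = subprocess.check_output(['docker', 'ps', '-aq'], text=True).split()
    if not ids:
        return []
    infos = json.loads(subprocess.check_output(['docker', 'inspect'] + ids, text=True))
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=days)
    return [info['Id'] for info in infos if finished_before(info, cutoff)]
```

The returned IDs could then be fed to docker rm; an analogous pass over docker images would handle dangling images.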

Since I didn't want to spend the time to write such a thing myself, I resorted to what – sadly – seems to be state-of-the-art docker image management: a cronjob running those two lines:

docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm
docker images -f "dangling=true" -q | xargs --no-run-if-empty docker rmi >/dev/null 2>&1

The first line removes containers that are older than two weeks and are not currently running (docker rm simply will not remove running containers). The second line removes images that are not used by any container and are not tagged (i.e. don't have proper repository name).

These two invocations are based on this Stack Overflow question and on this blog post by Jim Hoskins.

This solution works well enough; you probably shouldn't use it on production servers, though. :-)

Manually creating Docker images

Docker is a virtualization solution that's been gaining a lot of momentum over the last few years. It focuses on light-weight, ephemeral containers that can be created based on simple config files.

Docker's main target platform is amd64, but it also works on x86. However, practically all official container images in the Docker registry are amd64 based, which means they can't be used on an x86 machine. So, it's necessary to manually create the required base images. As you might have guessed, my server runs Docker on x86, so I've had to find a solution for that problem.

Fortunately, creating images from scratch is really easy with the script that comes bundled with Docker. On Debian systems, it's installed under /usr/share/; on Fedora it has to be obtained from the Docker git repository:

$ git clone

The script can then be found under docker/contrib/

Creating a Debian Jessie image is straight-forward:

# -t debootstrap/minbase debootstrap --variant=minbase jessie

This command will create a minimal Debian Jessie image using Debootstrap, and import it into Docker with the name debootstrap/minbase. Further options can set a specific Debian mirror server and a list of additional packages to install:

# -t debootstrap/minbase debootstrap \
             --include=locales --variant=minbase \
             jessie
This will use as the mirror and install the locales package in the image. has backends to bootstrap Arch Linux, Busybox, Centos, Mageia, and Ubuntu. Fedora images don't seem to be supported directly, but they can be generated by following instructions compiled by James Labocki.

Finally, it's worth mentioning that this should only be used to generate base images. You'd then use Docker itself (cf. Dockerfile) to create images that actually do something interesting, based on these base images. This will save both time and memory, due to Docker's caching and copy-on-write mechanisms.