Returning generators from with statements

Recently, an interesting issue came up at work that involved a subtle interaction between context managers and generator functions. Here is some example code demonstrating the problem:

import contextlib


@contextlib.contextmanager
def resource():
    """Context manager for some resource"""

    print("Resource setup")
    yield
    print("Resource teardown")


def _load_values():
    """Load a list of values (requires resource to be held)"""

    for i in range(3):
        print("Generating value %d" % i)
        yield i


def load_values():
    """Load values while holding the required resource"""

    with resource():
        return _load_values()

This is the output when run:

>>> for val in load_values(): pass
Resource setup
Resource teardown
Generating value 0
Generating value 1
Generating value 2

Whoops. The resource is destroyed before the values are actually generated. This is obviously a problem if the generator depends on the existence of the resource.

When you think about it, it's pretty clear what's going on. Calling _load_values() produces a generator object, whose code is only executed when values are requested. load_values() returns that generator, exiting the with statement and leading to the destruction of the resource. When the outer for loop (for val) comes around to iterating over the generator, the resource is long gone.
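The crucial detail is that calling a generator function only creates the generator object; none of its body runs until the first value is requested. A minimal illustration (a separate toy example, not part of the code above):

def gen():
    print("body started")
    yield 1

g = gen()    # nothing is printed yet; the body has not run
next(g)      # prints "body started" and returns 1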

How do you solve this problem? In Python 3.3 and newer, you can use the yield from syntax to turn load_values() into a generator as well. Execution of load_values() is then suspended at the yield from expression until the child generator is exhausted, and only then is the with block exited and the resource released:

def load_values():
    """Load values while holding the required resource"""

    with resource():
        yield from _load_values()
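Run the same way as before, this version should print the setup, the three values, and only then the teardown (expected output for the corrected code):

>>> for val in load_values(): pass
Resource setup
Generating value 0
Generating value 1
Generating value 2
Resource teardown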

In older Python versions, an explicit for loop over the child generator is required:

def load_values():
    """Load values while holding the required resource"""

    with resource():
        for val in _load_values():
            yield val

Yet another option is to turn the result of _load_values() into a list and return that instead. This incurs a higher memory overhead, since all values have to be held in memory at the same time, so it's only appropriate for relatively short lists.
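For completeness, that variant is a one-line change (a sketch of the approach just described):

def load_values():
    """Load values while holding the required resource"""

    with resource():
        # The list is built while the resource is still held, so teardown
        # only happens after every value has been generated.
        return list(_load_values())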

To sum up, it's a bad idea to return generators from inside with statements. While what's going on isn't terribly confusing, it is a wee bit subtle, and not many people think about it until they run into the issue. Hope this heads-up helps.


A better way for deleting Docker images and containers

In a recent post, I described the current (sad) state of managing Docker container and image expiration. Briefly, Docker creates new containers and images for many tasks, but there is no good way to automatically remove them. The best practice seems to be a rather hackish bash one-liner.

Since this wasn't particularly satisfying, I decided to do something about it. Here, I present docker-cleanup, a Python application for removing containers and images based on a configurable set of rules.

This is a rules file example:

# Keep currently running containers, delete others if they last finished
# more than a week ago.
KEEP CONTAINER IF Container.State.Running;
DELETE CONTAINER IF Container.State.FinishedAt.before('1 week ago');

# Delete dangling (unnamed and not used by containers) images.
DELETE IMAGE IF Image.Dangling;

Clear, expressive, straightforward. The rule language can do a whole lot more, and provides a readable, intuitive way to define removal policies for images and containers.

Head over to GitHub, give it a try, and let me know what you think!

Using Python slice objects for fun and profit

Just a quick tip about the little-known slice objects in Python. They are used to implement the slicing syntax for sequence types (lists, strings):

s = "The quick brown fox jumps over the lazy dog"

# s[4:9] is internally converted (and equivalent) to s[slice(4, 9)].
assert s[4:9] == s[slice(4, 9)]

# 'Not present' is encoded as 'None'
assert s[20:] == s[slice(20, None)]

slice objects can be used in normal code too, for example for tracking regions in strings: instead of keeping separate start_idx and end_idx variables (or writing a custom class/namedtuple), simply roll the indices into a slice.

# A column-aligned table:
table = ('REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE',
         '<none>       <none>   0987654321AB   2 hours ago   385.8 MB',
         'chris/web    latest   0123456789AB   2 hours ago   385.8 MB',
        )
header, *entries = table

# Compute the column slices by parsing the header. Gives a list of slices.
slices = find_column_slices(header)

for entry in entries:
    repo, tag, id, created, size = [entry[sl].strip() for sl in slices]
    ...

This is mostly useful when the indices are computed at runtime and applied to more than one string.
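find_column_slices() is left undefined above; here is one possible implementation, assuming the columns are separated by runs of at least two spaces (as in the docker-style header shown):

import re

def find_column_slices(header):
    """Compute one slice per column from a column-aligned header.

    Assumes column titles are separated by two or more spaces.
    """
    starts = [m.start(1) for m in re.finditer(r'(?:^|\s{2,})(\S)', header)]
    ends = starts[1:] + [None]
    return [slice(start, end) for start, end in zip(starts, ends)]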

More generally, slice objects encapsulate regions of strings/lists/tuples, and are an appropriate tool for simplifying code that operates on start/end indices. They provide a clean abstraction, make the code more straightforward and save a bit of typing.

A neat Python debugger command

pdb is a console-mode debugger built into Python. Out of the box, it has basic features like variable inspection, breakpoints, and stack frame walking, but it lacks more advanced capabilities.

Fortunately, it can be customized with a .pdbrc file in the user's home directory. Ned Batchelder has several helpful commands in his .pdbrc file:

  • pl: print local variables
  • pi obj: print the instance variables of obj
  • ps: print the instance variables of self
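For reference, such aliases are defined with pdb's alias command, which substitutes %1 with the first argument. The following are rough approximations rather than the exact definitions from Ned's post:

# Approximate .pdbrc definitions (Python 3 print syntax).
alias pl pp locals()
alias pi for k, v in sorted(%1.__dict__.items()): print("%s%-15s= %r" % ("%1.", k, v))
alias ps pi self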

Printing instance variables is great for quickly inspecting objects, but it shows only half of the picture. What about the class side of objects? Properties and methods are crucial for understanding what can actually be done with an object, as opposed to what data it encapsulates.

Since I couldn't find a readily available pdb command for listing class contents, I wrote my own:

# Print contents of an object's class (including bases).
alias pc for k,v in sorted({k:v for cls in reversed(%1.__class__.__mro__) for k,v in cls.__dict__.items() if cls is not object}.items()): print("%s%-20s= %-80.80s" % ("%1.",k,repr(v)))
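Since .pdbrc aliases have to fit on a single line, the definition is rather dense. Written out as a plain function, the logic is roughly this (a readability sketch only; the alias additionally prefixes each name with the object's name):

def print_class_contents(obj):
    # Walk the MRO from the most basic class to the most derived one, so
    # that overriding definitions win; skip object's own attributes.
    members = {}
    for cls in reversed(type(obj).__mro__):
        if cls is not object:
            members.update(cls.__dict__)
    for name, value in sorted(members.items()):
        print("%-20s= %-80.80s" % (name, repr(value)))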

pc lists the contents of an object's class and its base classes. Typically, these are the properties and methods supported by the object. It is used like this:

# 'proc' is a multiprocessing.Process() instance.
(Pdb) pc proc
...
proc.daemon              = <property object at 0x036B9A20>
proc.exitcode            = <property object at 0x036B99C0>
proc.ident               = <property object at 0x036B9A50>
proc.is_alive            = <function BaseProcess.is_alive at 0x033E4618>
proc.join                = <function BaseProcess.join at 0x033E45D0>
proc.name                = <property object at 0x036B99F0>
proc.pid                 = <property object at 0x036B9A50>
proc.run                 = <function BaseProcess.run at 0x033E4A98>
proc.start               = <function BaseProcess.start at 0x033E4DB0>
proc.terminate           = <function BaseProcess.terminate at 0x033E4DF8>

Note the difference from pi, which lists the contents of the proc instance itself:

(Pdb) pi proc       # In contrast, here is the instance dictionary.
proc._args          = ()
proc._config        = {'authkey': b'\xd0\xc8\xbd\xd6\xcf\x7fo\xab\x19_A6\xf8M\xd4\xef\x88\xa9;\x99c\x9
proc._identity      = (2,)
proc._kwargs        = {}
proc._name          = 'Process-2'
proc._parent_pid    = 1308
proc._popen         = None
proc._target        = None

In general, pc focuses on the interface while pi examines the state of the object. The two complement each other nicely. Especially when working with an unfamiliar codebase, pc is helpful for quickly figuring out how to use a specific class.

pc works with both Python 2 and Python 3 (on Python 2 it only shows new-style classes). Add it to your .pdbrc and give it a try. Let me know what you think!

Python's GIL and atomic reference counting

I love Python because it's an incredibly fun, expressive and productive language. However, it's often criticised for being slow. I think the correct answer to that is two-fold:

  • Use an alternative Python implementation for impressive speed-ups. For example, PyPy is on average 7 times faster than the standard CPython.
  • Not all parts of a program have to be blazingly fast. Use Python for all non-performance-critical areas, such as the UI and database access, and drop to C or C++ only when it's required for CPU-intensive tasks. This is easy to achieve with language binding generators such as CFFI, SWIG or SIP.

A further performance-related issue is Python's Global Interpreter Lock (GIL), which ensures that only one Python thread can run at a time. This is a bit problematic, because it affects PyPy as well, unless you want to use its experimental software transactional memory support.
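The practical effect is easy to see: on CPython, CPU-bound work does not get faster when split across threads. A minimal benchmark sketch (timings depend on your machine; the threaded run typically takes roughly as long as the sequential one):

import threading
import time

def count(n):
    # Pure-Python, CPU-bound work; the GIL is held while bytecode runs.
    while n > 0:
        n -= 1

N = 10 ** 7

start = time.perf_counter()
count(N); count(N)
print("sequential:  %.2fs" % (time.perf_counter() - start))

start = time.perf_counter()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print("two threads: %.2fs" % (time.perf_counter() - start))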

Why is this such a big deal? With the rise of multi-core processors, multithreading is becoming more important as well. This not only affects performance on large servers, it impacts desktop programs and is crucial for battery life on mobile phones (race to idle). Further, other programming languages make multi-threaded programming easier and easier. C, C++, and Java have all moved to a common memory model for multithreading. C++ has gained std::atomic, futures, and first-class thread support. C# has async and await, which is proposed for inclusion in C++ as well. This trend will only accelerate in the future.

With this in mind, I decided to investigate the CPython GIL. Previous proposals for its removal have failed, but I thought it's worth a look — especially since I couldn't find any recent attempts.

The results were not encouraging. Changing the reference count used by Python objects from a normal int to an atomic type resulted in a ~23% slowdown on my machine. This is without actually changing any of the locking. This penalty could be moderated for single-threaded programs by only using atomic instructions once a second thread is started. This requires a function call and an if statement to check whether to use atomics or not in the refcount hot path. Doing this still results in an 11% slowdown in the single-threaded case. If hot-patching was used instead of the if, a 6% slowdown remained.

(Figure: atomic refcounts slow down the overall Python execution speed by more than 20%.)

The last result looks promising, but is deceiving. Hot-patching would rely on having a single function to patch. Alas, the compiler decided to mostly inline the Py_INCREF/Py_DECREF function calls. Disabling inlining of these functions gives a 16% slowdown, which is worse than the "call + if" method. Furthermore, hot-patching is probably not something that could be merged in CPython anyway.

So what's the conclusion? Maybe turning Py_INCREF and Py_DECREF into functions and living with the 11% slowdown in the single-threaded case would be sellable, if compelling speed-ups of multithreaded workloads could be shown. It should be possible to convert one module at a time from the GIL to fine-grained locking, but performance increases would only be expected once at least a couple of core modules are converted. That would take a substantial amount of work, especially given the high risk that the resulting patches wouldn't be accepted upstream.

Where does this leave Python as a language in the multi-threaded world? I'm not sure. Since PyPy is already the solution to Python's performance issue, perhaps it can solve the concurrency problem as well with its software transactional memory mode.

PS: My profiling showed that the reference counting in Python (Py_INCREF() and Py_DECREF()) takes up to about 5–10% of the execution time of benchmarks (not including actual object destruction), crazy!