Building External Libraries with Boost Build

I recently ran into the problem of having to use an external library not available for our distribution. Of course, it would be possible to build Debian or RPM packages, deploy them, and then link against those. However, building sanely-behaving library packages is tricky.

The easier way is to include the source distribution in a sub-folder of the project and integrate it into our build system, Boost Build. It seemed prohibitive to create Boost Build rules to build the library from scratch (from *.cpp to *.a/*.so) because the library had fairly complex build requirements. The way to go was to leverage the library's own build system.

Easier said than done. I only arrived at a fully integrated solution after seeking help on the Boost Build Mailing List (thanks Steven, Vladimir). This is the core of it:

path-constant LIBSQUARE_DIR : . ;

make libsquare.a
    : [ glob-tree *.c *.cpp *.h : generated-file.c gen-*.cpp ]
    : @build-libsquare
    ;

actions build-libsquare
{
    ( cd $(LIBSQUARE_DIR) && make )
    cp $(LIBSQUARE_DIR)/libsquare.a $(<)
}

alias libsquare
    : libsquare.a                                        # sources
    :                                                    # requirements
    :                                                    # default-build
    : <include>$(LIBSQUARE_DIR) <dependency>libsquare.a  # usage-requirements
    ;

The first line defines a filesystem constant for the library directory, so the build process becomes independent of the working directory.

The next few lines tell Boost Build about a library called libsquare.a, which is built by calling make. The cp command copies the library to the target location expected by Boost Build ($(<)). glob-tree defines source files that should trigger library rebuilds when changed, taking care to exclude files generated by the build process itself.

The alias command defines a Boost Build target to link against the library. The real magic is the <dependency>libsquare.a directive: it is required because the library build may produce header files used by client code. Thus, build-libsquare must run before compiling any dependent C++ files. Adding libsquare.a to an executable's dependency list won't quite enforce this: it only builds libsquare.a in time for the linking step (*.o → executable), but not for the compilation (*.cpp → *.o). In contrast, the <dependency>libsquare.a directive propagates to all dependent build steps, including the compilation, and induces the required dependencies.
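
To consume the library from another target, listing the alias as a source is then enough (myapp and main.cpp are made-up names for illustration):

exe myapp
    : main.cpp
      libsquare
    ;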

I created an example project on GitHub that demonstrates this. It builds a simple executable relying on a library built via make. The process is fully integrated in the Boost Build framework and triggered by a single call to bjam (or b2).

If you need something along these lines, give this solution a try!

Working with angles (is surprisingly hard)

Due to the periodic nature of angles, especially the discontinuity at 2π / 360°, working with them presents some subtle issues. For example, the angles 5° and 355° are "close" to each other, but are numerically quite different. In a geographic (GIS) context, this often surfaces when working with geometries spanning Earth's date-line at -180°/+180° longitude, which often requires multiple code paths to obtain the desired result.

I've recently run into the problem of computing averages and differences of angles. The difference should increase linearly; it should be 0 if the angles are equal, negative if the second angle lies to one side of the first, and positive if it lies to the other side (which side is which depends on whether the angles increase in the clockwise or the counter-clockwise direction). Getting that right, and coming up with test cases to prove it's correct, was quite interesting. As Bishop notes in Pattern Recognition and Machine Learning, it's often simpler to perform operations on angles in a 2D (x, y) space and then back-transform to angles using the atan2() function. I've used that for the averaging function; the difference is calculated using modulo arithmetic.

Here's the Python version of the two functions:

import math

def average_angles(angles):
    """Average (mean) of angles

    Return the average of an input sequence of angles. The result is between
    ``0`` and ``2 * math.pi``.
    If the average is not defined (e.g. ``average_angles([0, math.pi])``),
    a ``ValueError`` is raised.
    """
    x = sum(math.cos(a) for a in angles)
    y = sum(math.sin(a) for a in angles)

    if x == 0 and y == 0:
        raise ValueError(
            "The angle average of the inputs is undefined: %r" % angles)

    # To get outputs from -pi to +pi, delete everything but math.atan2() here.
    return math.fmod(math.atan2(y, x) + 2 * math.pi, 2 * math.pi)

def subtract_angles(lhs, rhs):
    """Return the signed difference between angles lhs and rhs

    Return ``(lhs - rhs)``; the value will be within ``[-math.pi, math.pi)``.
    Both ``lhs`` and ``rhs`` may either be zero-based (within
    ``[0, 2*math.pi]``), or ``-pi``-based (within ``[-math.pi, math.pi]``).
    """
    return math.fmod((lhs - rhs) + math.pi * 3, 2 * math.pi) - math.pi
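
As a quick sanity check, here's how the two functions behave on the 355°/5° example from the introduction (results are approximate due to floating-point rounding):

math.degrees(average_angles([math.radians(355), math.radians(5)]))
# -> ~0.0, i.e. the average is 0°, not the naive (355 + 5) / 2 = 180°

math.degrees(subtract_angles(math.radians(5), math.radians(355)))
# -> ~10.0, i.e. 5° lies 10° past 355°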

The code, along with test cases, can also be found in this GitHub Gist. Translating these functions to other languages should be straightforward; sin()/cos()/fmod()/atan2() are pretty ubiquitous.

Resource management with Python

There should be one – and preferably only one – obvious way to do it.

— Tim Peters in The Zen of Python

There are multiple ways to manage resources with Python, but only one of them is safe, reliable and Pythonic.

Before we dive in, let's examine what resources can mean in this context. The most obvious examples are open files, but the concept is broader: it includes locked mutexes, started client processes, or a temporary directory change using os.chdir(). The common theme is that all of these require some sort of cleanup that must reliably be executed in the future. The file must be closed, the mutex unlocked, the process terminated, and the current directory must be changed back.

So the core question is: how to ensure that this cleanup really happens?

Failed solutions

Manually calling the cleanup function at the end of a code block is the most obvious solution:

f = open('file.txt', 'w')
do_something(f)
f.close()

The problem with this is that f.close() will never be executed if do_something(f) throws an exception. So we'll need a better solution.

C++ programmers see this and try to apply the C++ solution: RAII, where resources are acquired in an object's constructor and released in the destructor:

class MyFile(object):
    def __init__(self, fname):
        self.f = open(fname, 'w')

    def __del__(self):
        self.f.close()

my_f = MyFile('file.txt')
# my_f.__del__() automatically called once my_f goes out of scope

Apart from being verbose and a bit un-Pythonic, it's also not necessarily correct. __del__() is only called once the object's refcount reaches zero, which can be prevented by reference cycles or leaked references. Additionally, until Python 3.4 some __del__() methods were not called during interpreter shutdown.

A workable solution

The way to ensure that cleanup code is called in the face of exceptions is the try ... finally construct:

f = open('file.txt', 'w')
try:
    do_something(f)
finally:
    f.close()

In contrast to the previous two solutions, this ensures that the file is closed no matter what (short of an interpreter crash). It's a bit unwieldy, especially when you think about try ... finally statements sprinkled all over a large code base. Fortunately, Python provides a better way.

The correct solution™

The Pythonic solution is to use the with statement:

with open('file.txt', 'w') as f:
    do_something(f)

It is concise and correct even if do_something(f) raises an exception. Nearly all built-in classes that manage resources can be used in this way.

Under the covers, this functionality is implemented using objects known as context managers, which provide __enter__() and __exit__() methods that are called at the beginning and end of the with block. It's possible to write such classes manually.
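
For illustration, a hand-written context manager might look like this (acquire_resource() and release_resource() are hypothetical placeholders):

class ManagedResource(object):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        # Called on entry to the with block; the return value is bound
        # to the name after "as".
        self.r = acquire_resource(self.name)
        return self.r

    def __exit__(self, exc_type, exc_value, traceback):
        # Called when the with block exits, even if it raised.
        release_resource(self.r)
        return False  # don't suppress exceptions

An easier way is to use the contextlib.contextmanager decorator: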

from contextlib import contextmanager

@contextmanager
def managed_resource(name):
    r = acquire_resource(name)
    try:
        yield r
    finally:
        release_resource(r)

with managed_resource('file.txt') as r:
    do_something(r)

The contextmanager decorator turns a generator function (a function with a yield statement) into a context manager. This way it is possible to make arbitrary code compatible with the with statement in just a few lines of Python.

Note that try ... finally is used as a building block here. In contrast to the previous solution, it is hidden away in a utility resource manager function, and doesn't clutter the main program flow, which is nice.

If the client code doesn't need to obtain an explicit reference to the resource, things are even simpler:

@contextmanager
def managed_resource(name):
    r = acquire_resource(name)
    try:
        yield
    finally:
        release_resource(r)

with managed_resource('file.txt'):
    do_something()

Sometimes the argument comes up that this makes it harder to use those resources in interactive Python sessions – you can't wrap your whole session in a gigantic with block, after all. The solution is simple: just call __enter__() on the context manager manually to obtain the resource:

cm_r = managed_resource('file.txt')
r = cm_r.__enter__()
# Work with r...
cm_r.__exit__(None, None, None)

The __exit__() method takes three arguments; passing None for each is fine (they are used to convey exception information, where applicable). Another option in interactive sessions is to not call __exit__() at all, if you can live with the consequences.

Wrap Up

Concise, correct, Pythonic. There is no reason to ever manage resources in any other way in Python. If you aren't using it yet - start now!

Remarks on enable_shared_from_this

std::enable_shared_from_this is a template base class that allows derived classes to get a std::shared_ptr to themselves. This can be handy, and it's not something that C++ classes can normally do. Calling std::shared_ptr<T>(this) is not an option as it creates a new shared pointer independent of the existing one, which leads to double destruction.

The caveat is that before calling the shared_from_this() member function, a shared_ptr to the object must already exist, otherwise undefined behavior results. In other words, the object must already be managed by a shared pointer.

This presents an interesting issue. When using this technique, there are member functions (those that rely on shared_from_this()) that can only be called if the object is managed via a shared_ptr. This is a rather subtle requirement: the compiler won't enforce it. If violated, the object may even work at runtime until a problematic code path is executed, which may happen rarely – a nice little trap. At the very least, this should be prominently mentioned in the class documentation. But frankly, relying on the documentation to communicate such a subtle issue sounds wrong.

The correct solution is to let the compiler enforce it. Make the constructors private and provide a static factory method that returns a shared_ptr to a new instance. Take care to delete the copy constructor and the assignment operator to prevent anyone from obtaining non-shared-pointer-managed instances this way.
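
A minimal sketch of this pattern (Widget is a made-up example class):

#include <memory>

class Widget : public std::enable_shared_from_this<Widget> {
public:
    // The only way to obtain a Widget is already wrapped in a shared_ptr.
    static std::shared_ptr<Widget> create() {
        // std::make_shared can't reach the private constructor, hence 'new'.
        return std::shared_ptr<Widget>(new Widget());
    }

    Widget(const Widget &) = delete;
    Widget & operator=(const Widget &) = delete;

    // Safe now: every Widget is guaranteed to be shared_ptr-managed.
    std::shared_ptr<Widget> self() { return shared_from_this(); }

private:
    Widget() = default;
};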

Another point worth mentioning about enable_shared_from_this is that the member functions it provides, shared_from_this() and weak_from_this(), are public. Not only the object itself can retrieve its owning shared_ptr – everyone else can, too. Whether this is desirable is essentially an API design question and depends on the context. To restrict access to these functions, use private inheritance.

Overall, enable_shared_from_this is an interesting tool, if a bit situational. However, it requires care to use safely, in a way that prevents derived classes from being used incorrectly.

Building a computer from logic gates

Ever wondered how computers actually work on a low level?

After Jeff Atwood's recent post about Robot Odyssey, I did.

The following is a sketch of what could work, not necessarily what modern hardware actually does. The aim is to explore how a Turing-complete, multi-purpose computation engine could in principle be built from simple logic elements.

From silicon to computation

Nearly all chips are manufactured on silicon plates called wafers. These plates are modified in a complex process to create semiconductor-based diodes and transistors. Most general-purpose processors use a technology called CMOS that arranges the transistors into logic gates – devices that carry out operations on zeros and ones. The most common gate to implement is the NAND. All other common logic gates (AND, OR, NOT, XOR, ...) can be constructed from NAND building blocks.
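
As a toy illustration of that last claim, here are the other basic gates expressed purely in terms of NAND (a software sketch, of course, not a hardware description):

bool nand_gate(bool a, bool b) { return !(a && b); }

bool not_gate(bool a)         { return nand_gate(a, a); }
bool and_gate(bool a, bool b) { return not_gate(nand_gate(a, b)); }
bool or_gate(bool a, bool b)  { return nand_gate(not_gate(a), not_gate(b)); }
bool xor_gate(bool a, bool b) { return and_gate(or_gate(a, b), nand_gate(a, b)); }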

Memory cells can be constructed by composing multiple logic gates. Each cell stores a single bit of information. Conceptually, it has one output (VALUE) where the current value can be read. Additionally, there are two input pins: SET and SET_VALUE. For reading, SET is zero. For writing, SET is one and the SET_VALUE becomes the new value stored in the cell. It's not hard to imagine how to build a memory controller on top of an array of memory cells that allows addressing of individual cells for getting and setting their value.

How can memory be modified in practice? For example, how is it possible to invert (change 0 to 1 and vice versa) the value of a memory cell? Reading the cell, inverting the value, and writing it back leads to oscillation: whenever the cell value changes, it is immediately read back, inverted, and written again. This cycle repeats as quickly as the electronics can handle.

[Figure: a memory cell feeding back to itself via an inverter]

The solutions to this conundrum are clocks and edge-triggered flip-flops. Clocks are signals switching between 0 and 1 at a defined frequency. Edge-triggered flip-flops read their input at the rising edge of the clock (when it switches from 0 to 1) and output that value until the next rising edge. In other words, they sample their input once per clock cycle and hold that value until the next cycle. When such an element is inserted into the inversion loop, the memory value is inverted exactly once per clock cycle.

[Figure: a memory cell feeding back to itself via an edge-triggered flip-flop and an inverter]
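
A behavioral sketch of this loop (the flip-flop is modeled by updating the stored value only at the rising clock edge):

bool stored = false;  // the memory cell's value

for (int cycle = 0; cycle < 4; ++cycle) {
    // Rising clock edge: the flip-flop samples the inverter's output
    // and holds it until the next edge.
    stored = !stored;
    // stored is now 1, 0, 1, 0, ... -- exactly one inversion per cycle.
}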

Based on this technique other operations can be implemented as well, such as adding or multiplying memory cells, copying memory contents to other locations, performing bitwise operations, and so on.

General-purpose processors

For each of those operations, the logic gates would have to be arranged differently, though. In contrast, real general-purpose CPUs have fixed logic circuits: their gate configuration doesn't change at runtime. Instead, the operations to execute are read from memory and interpreted according to the chip's instruction set.

For our analysis, let's assume the command is read from separate input lines instead. We'll return to reading commands from memory later on.

How could one design and implement an instruction set? Let's say we have a machine with 8 lines (bits) of input and four 8-bit registers A, B, C, D. External memory is addressed in chunks of 8 bits and is attached via 8 address lines that select the location, 8 lines for reading/writing the 8-bit value, and one line to switch between reading and writing. What operations could we have?

Opcode     Mnemonic         Description
00RRVVVV   SetHi VVVV, RR   Set the 4 highest bits of register RR to VVVV.
01RRVVVV   SetLo VVVV, RR   Set the 4 lowest bits of register RR to VVVV.
1000RRSS   Mov RR, SS       Copy the value of register RR into register SS.
100100RR   Read [RR]        Read from the memory address stored in RR, store the result in register RR.
100110RR   Not RR           Logically invert the value of register RR.
100111RR   Inv RR           Negate (one's complement) the value of register RR.
1010RRSS   Add RR, SS       Add registers RR and SS, store the result in SS.
1011RRSS   Mul RR, SS       Multiply registers RR and SS, store the result in SS.
1100RRSS   And RR, SS       Logical AND of registers RR and SS, store the result in SS.
1101RRSS   Or RR, SS        Logical OR of registers RR and SS, store the result in SS.
1111RRSS   Write RR, [SS]   Write the value of register RR to the memory address stored in register SS.

It's not very efficient, but it enables a good amount of computation. How could it be implemented? All the separate opcodes could be realized as separate logic blocks on a chip. Each of them individually should be relatively easy to implement. Selecting which block to run (depending on the opcode) is a bit tricky. The easiest way to handle this is to run them all, but only enable output to the registers and memory for the single command selected by the opcode. On every cycle, all possible commands would be computed simultaneously, but only the desired one would be allowed to write to registers and memory. Is it efficient? No. Would it work? Yes.
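
In software terms, the "compute everything, select one result" idea might be sketched like this (operand decoding is simplified to two register values; only the arithmetic/logic opcodes from the table are shown):

#include <cstdint>

uint8_t execute(uint8_t opcode, uint8_t rr, uint8_t ss) {
    // Every logic block computes its result on every cycle...
    uint8_t add_result = rr + ss;
    uint8_t mul_result = rr * ss;
    uint8_t and_result = rr & ss;
    uint8_t or_result  = rr | ss;

    // ...but only the block selected by the opcode's high bits is
    // allowed to drive the output.
    switch (opcode >> 4) {
        case 0xA: return add_result;  // 1010RRSS: Add
        case 0xB: return mul_result;  // 1011RRSS: Mul
        case 0xC: return and_result;  // 1100RRSS: And
        case 0xD: return or_result;   // 1101RRSS: Or
        default:  return 0;           // remaining opcodes omitted in this sketch
    }
}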

Finally, we can address the problem of reading instructions from memory. Given the system described in the previous paragraphs, it shouldn't be too hard to add a separate component that reads instructions from memory and feeds them to this computation engine. The two components would communicate via an instruction-pointer register. The instruction set could be expanded to include (conditional) jumps, making the overall system Turing complete.


There are several small problems with what I've described, e.g. how to deal with instructions that consume multiple clock cycles, but all of them are solvable without too much trouble.

Thinking this through is an interesting exercise. At the transistor level, it's hard to see how a real processor could ever be constructed from these primitives – possible in principle, but hard to imagine in practice. Yet three abstraction levels up, past gates and memory cells, there are suddenly memory blocks addressable via a parallel protocol. Every abstraction step is comprehensible, yet complexity builds up quickly. A couple of levels further and we suddenly have an 8-bit microprocessor.

It must have been an exciting opportunity to figure all of this out in the middle of the last century.

Boost Range Highlights

Last week, I presented Boost Range for Humans: documentation for the Boost Range library with concrete code examples. This week we'll talk about some of the cool features in Boost Range.


boost::irange() is the C++ equivalent to Python's range() function. It returns a range object containing an arithmetic series of numbers:

boost::irange(4, 10)    -> {4, 5, 6, 7, 8, 9}
boost::irange(4, 10, 2) -> {4, 6, 8}

Together with indexed() (see below), it serves as a range-based alternative to the classic C for loop.


boost::combine() takes two or more input ranges and creates a zipped range – a range of tuples where each tuple contains corresponding elements from each input range.

The input ranges must have equal size.

std::string str = "abcde";
std::vector<int> vec = {1, 2, 3, 4, 5};
for (const auto & zipped : boost::combine(str, vec)) {
    char c; int i;
    boost::tie(c, i) = zipped;

    // Iterates over the pairs ('a', 1), ('b', 2), ...
}


Most if not all algorithms from the C++ standard library that apply to containers (via begin/end iterator pairs) have been wrapped in Boost Range. Examples include copy(), remove(), sort(), count(), find_if().
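
For example (a small sketch – boost/range/algorithm.hpp pulls in all of the wrapped algorithms, and individual headers exist as well):

#include <vector>
#include <boost/range/algorithm.hpp>

std::vector<int> vec = {3, 1, 4, 1, 5};
boost::sort(vec);                  // vec is now {1, 1, 3, 4, 5}
auto ones = boost::count(vec, 1);  // 2
auto it   = boost::find(vec, 4);   // iterator pointing at the 4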


Adaptors are among the most interesting concepts Boost Range has to offer.

There are generally two ways to use adaptors: function syntax and pipe syntax. The former is handy for simple cases, while the latter allows chaining data transformations into an easy-to-read pipeline.

bool is_even(int n) { return n % 2 == 0; }
const std::vector<int> vec = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

// Function-style call:
for (int i : boost::adaptors::filter(vec, is_even)) { ... }

// Pipe-style call:
for (int i : vec | boost::adaptors::filtered(is_even)) { ... }

To see the power of the latter syntax, consider a transformation pipeline:

int square(int n) { return n*n; }
std::map<int, int> input_map = { ... };

using namespace boost::adaptors;
auto result = input_map | map_values
                        | filtered(is_even)
                        | transformed(square);


The boost::adaptors::indexed() adapter warrants special mention: it is analogous to Python's enumerate(). Given a Range, it gives access to the elements as well as their index within the range. Boost 1.56 or higher is required for this to work properly.
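
A short sketch of that interface – the adapted range's elements provide index() and value() accessors (names is a made-up example vector):

#include <iostream>
#include <string>
#include <vector>
#include <boost/range/adaptor/indexed.hpp>

std::vector<std::string> names = {"zero", "one", "two"};
for (const auto & element : names | boost::adaptors::indexed(0)) {
    // Prints "0: zero", "1: one", "2: two"
    std::cout << element.index() << ": " << element.value() << "\n";
}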


boost::accumulate() by default sums all the items in an input range, but other reduction functions can be supplied as well. Together with range adapters, this makes map-reduce pipelines easy to write.

std::vector<int> vec = {1, 2, 3, 4, 5};
int product = boost::accumulate(vec, 1, std::multiplies<int>());
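
Combined with the adaptors from the previous section, a small map-reduce pipeline might look like this (reusing the vec just above and the is_even()/square() helpers from the adaptor examples):

#include <boost/range/numeric.hpp>
#include <boost/range/adaptor/filtered.hpp>
#include <boost/range/adaptor/transformed.hpp>

using namespace boost::adaptors;

// Sum of the squares of the even elements: 4 + 16 = 20
int sum = boost::accumulate(vec | filtered(is_even) | transformed(square), 0);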


boost::as_literal() may be less a highlight and more of a crutch, but it bears mentioning still. Boost Range functions accept a wide variety of types, among them strings. C++ style strings (std::string) always work as expected, but with character arrays (char[]), there is an ambiguity as to whether the argument should be interpreted as array (including the terminal '\0' character) or as string (excluding the terminator).

By default, Boost Range treats them as arrays, which they are, after all. In practice, this is often a pitfall for newcomers. If any string-related range operations don't work as expected, this is a common reason.

To force the library to treat character arrays as strings, they can be wrapped in an as_literal() call. Alternatively, the C strings can be cast to std::string as well.
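
For instance, boost::size() makes the difference visible:

#include <boost/range/as_literal.hpp>
#include <boost/range/size.hpp>

boost::size("abc");                     // 4 -- char[4], trailing '\0' included
boost::size(boost::as_literal("abc"));  // 3 -- treated as a 3-character string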


There are several interesting aspects about Boost Range. It plays very well with C++11's range-based for loops and makes code operating on containers much easier to write and (most importantly) read. In addition, it makes it possible to lay out data processing pipelines a lot more clearly.

Container iteration and modification becomes as easy as it is in modern scripting languages, which is a huge, huge step for the C++ language.

Let's hope that C++17 brings similar capabilities in the standard library. Until then, Boost Range is the way to go, so check out the docs and try it yourself.

Boost Range for Humans

Boost Range encapsulates the common C++ pattern of passing begin/end iterators around by combining them into a single range object. It makes code that operates on containers much more readable. One wonders why such functionality was not included in the C++ standard library in the first place, and indeed, similar ideas could be added to C++17, see N4128 and Ivan Cukic's Meeting C++ presentation. In my opinion, Boost Range is something that every C++ programmer should know about.

The library is reasonably well documented, but I often missed concrete code examples and an explicit mention of which headers are required for which function. Since this presumably happens to other people as well, I invested the time to change that situation.

Thus, I present Boost Range for Humans. It contains working example code for every function in Boost Range, along with the required headers and links to the official documentation and the latest source code. I hope it will make Boost Range more accessible and further its adoption.

Next week, we'll look into some of the highlights of what Boost Range can offer.

Debugging riddle of the day

One of our services failed to start on a test system (Ubuntu 12.04 on amd64). The stdout/stderr log streams contained only the string “Permission denied” – less than helpful. strace showed that the service tried to create a file under /run, which it doesn't have write permissions to. This caused it to bail out:

open("/run/some_service", O_RDWR|O_CREAT|O_NOFOLLOW|O_CLOEXEC, 0644) = -1 EACCES (Permission denied)

Grepping the source code and configuration files for /run didn't turn up anything that could explain this open() call. Debugging with gdb gave further hints:

Breakpoint 2, 0x00007ffff73e3ea0 in open64 () from /lib/x86_64-linux-gnu/
(gdb) bt
#0  0x00007ffff73e3ea0 in open64 () from /lib/x86_64-linux-gnu/
#1  0x00007ffff7bd69bf in shm_open () from /lib/x86_64-linux-gnu/
#2  0x0000000000400948 in daemonize () at service.cpp:93
#3  0x00000000004009ac in main () at main.cpp:24
(gdb) p (char*)$rdi
$1 = 0x7fffffffe550 "/run/some_service"
(gdb) frame 2
#2  0x0000000000400948 in daemonize () at service.cpp:93
9           int fd = shm_open(fname.c_str(), O_RDWR | O_CREAT, 0644);
(gdb) p fname
$2 = {...., _M_p = 0x602028 "/some_service"}}

The open("/run/some_service", ...) was caused by an shm_open("/some_service", ...).

This code works on other machines, so why does it fail on this particular one? Can you figure it out? Bonus points if you can explain why it is trying to access /run and not some other directory. You might find the shm_open() man page and source code helpful.

I'll be waiting for you.


The solution is pretty evident after examining the Linux version of shm_open(). By default, it tries to create shared memory files under /dev/shm. If that doesn't exist, it will pick the first tmpfs mount point from /proc/mounts.

In Ubuntu 12.04, /dev/shm is a symlink to /run/shm. On this machine the symlink was missing, which caused shm_open() to go hunting for a tmpfs filesystem, and /run happened to be the first one in /proc/mounts.
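
Restoring the symlink is a one-liner (run as root; this mirrors the Ubuntu 12.04 layout):

ln -s /run/shm /dev/shm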

Re-creating the symlink solved the problem. Why it was missing in the first place is still unclear. In the aftermath, we're also improving the error messages in this part of the code to make such issues easier to diagnose.

libconf - a Python reader for libconfig files

This weekend, I uploaded my first package to PyPI: libconf, a pure-Python reader for files in libconfig format. This configuration file format is reminiscent of JSON and is mostly used in C/C++ projects through the libconfig library. It looks like this:

version = 7;
window: {
   title: "libconfig example"
   position: { x: 375; y: 210; w: 800; h: 600; }
};
capabilities: {
   can-do-lists: (true, 0x3A20, ("sublist"), {subgroup: "ok"})
   can-do-arrays: [3, "yes", True]
};
There are already two Python implementations: pylibconfig2 is a pure-Python reader licensed under GPLv3, and python-libconfig provides bindings for the libconfig C++ library. The first one I didn't like because of its licensing, the second because of the more involved installation procedure. Also, I kind of enjoy writing parsers.

So I sat down during the Easter weekend and wrote libconf. It's a pure-Python reader for libconfig files with an interface similar to the Python json module. There are two main methods: load(f) and loads(string). Both return a dict-like data structure that can be indexed (config['version']), but supports attribute access as well (config.version):

>>> import libconf
>>> with open('example.cfg') as f:
...     config = libconf.load(f)
>>> config['window']['title']
'libconfig example'
>>> config.window.title
'libconfig example'

It was a fun little project. Creating a recursive descent parser is pretty straightforward, especially for such a simple file format. Writing documentation, packaging and uploading to GitHub and PyPI took longer than coding up the implementation itself.
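
To give a flavor of the approach, here's a minimal sketch of the recursive descent idea for a hypothetical toy subset of the grammar – flat "name = integer;" settings only (the real parser also handles groups, lists, arrays, strings, and more scalar types):

import re

TOKEN_RE = re.compile(r'\s*(?:(\d+)|(\w+)|(.))')

def tokenize(text):
    # Yield (kind, value) pairs: integer literals, names, and punctuation.
    for number, name, punct in TOKEN_RE.findall(text):
        if number:
            yield ('int', int(number))
        elif name:
            yield ('name', name)
        elif punct.strip():
            yield ('punct', punct)

def parse_settings(text):
    # Recursive descent reduced to its simplest form: repeatedly match
    # the token sequence  name '=' int ';'  and build up a dict.
    tokens = list(tokenize(text))
    result = {}
    pos = 0
    while pos < len(tokens):
        if pos + 4 > len(tokens):
            raise ValueError('truncated setting at token %d' % pos)
        kind, name = tokens[pos]
        if kind != 'name' or tokens[pos + 1] != ('punct', '='):
            raise ValueError('expected "name =" at token %d' % pos)
        kind, value = tokens[pos + 2]
        if kind != 'int' or tokens[pos + 3] != ('punct', ';'):
            raise ValueError('expected "<int>;" at token %d' % pos)
        result[name] = value
        pos += 4
    return result

parse_settings('version = 7; answer = 42;')  # {'version': 7, 'answer': 42}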