Remarks on enable_shared_from_this

std::enable_shared_from_this is a template base class that allows derived classes to get a std::shared_ptr to themselves. This can be handy, and it's not something that C++ classes can normally do. Calling std::shared_ptr<T>(this) is not an option as it creates a new shared pointer independent of the existing one, which leads to double destruction.

The caveat is that before calling the shared_from_this() member function, a shared_ptr to the object must already exist, otherwise undefined behavior results. In other words, the object must already be managed by a shared pointer.

This presents an interesting issue. When using this technique, there are member functions (those that rely on shared_from_this()) that can only be called if the object is managed via a shared_ptr. This is a rather subtle requirement: the compiler won't enforce it. If violated, the object may even work at runtime until a problematic code path is executed, which may happen rarely – a nice little trap. At the very least, this should be prominently mentioned in the class documentation. But frankly, relying on the documentation to communicate such a subtle issue sounds wrong.

The correct solution is to let the compiler enforce it. Make the constructors private and provide a static factory method that returns a shared_ptr to a new instance. Take care to delete the copy constructor and the assignment operator to prevent anyone from obtaining non-shared-pointer-managed instances this way.
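A minimal sketch of that pattern (the class name "Widget" and its members are illustrative, not from any particular codebase):

```cpp
#include <memory>

class Widget : public std::enable_shared_from_this<Widget> {
public:
    // The only way to obtain a Widget: every instance is guaranteed
    // to be owned by a shared_ptr from the moment it exists.
    static std::shared_ptr<Widget> create() {
        // make_shared can't reach the private constructor, so use new.
        return std::shared_ptr<Widget>(new Widget());
    }

    // No copies: copying would produce an instance that is not
    // managed by a shared_ptr.
    Widget(const Widget&) = delete;
    Widget& operator=(const Widget&) = delete;

    // Safe to call shared_from_this() here: a shared_ptr always exists.
    std::shared_ptr<Widget> self() { return shared_from_this(); }

private:
    Widget() = default;  // private: no stack or raw-new instances escape
};
```

With this setup, `Widget w;` and `new Widget` no longer compile, so the subtle precondition of shared_from_this() is enforced at compile time.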

Another point worth mentioning about enable_shared_from_this is that the member functions it provides, shared_from_this() and weak_from_this(), are public. Not only can the object itself retrieve its owning shared_ptr, everyone else can too. Whether this is desirable is essentially an API design question and depends on the context. To restrict access to these functions, use private inheritance.

Overall, enable_shared_from_this is an interesting tool, if a bit situational. However, it requires care to use safely, in a way that prevents derived classes from being used incorrectly.

published tagged c++

Building a computer from logic gates

Ever wondered how computers actually work on a low level?

After Jeff Atwood's recent post about Robot Odyssey, I did.

The following is a sketch of what could work, not necessarily what modern hardware actually does. The aim is to explore how a Turing-complete, multi-purpose computation engine could in principle be built from simple logic elements.

From silicon to computation

Nearly all chips are manufactured on silicon plates called wafers. These plates are modified in a complex process to create semiconductor-based diodes and transistors. Most general-purpose processors use a technology called CMOS that arranges the transistors into logic gates – devices that carry out operations on zeros and ones. The most common gate to implement is the NAND. All other common logic gates (AND, OR, NOT, XOR, ...) can be constructed from NAND building blocks.
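To illustrate, here is a small sketch in C++ of how the other gates reduce to NAND. This is purely a thought experiment: real gates are transistor networks, not function calls.

```cpp
// A NAND gate: output is 0 only when both inputs are 1.
inline bool nand_gate(bool a, bool b) { return !(a && b); }

// Everything else built from NAND alone:
inline bool not_gate(bool a)         { return nand_gate(a, a); }
inline bool and_gate(bool a, bool b) { return not_gate(nand_gate(a, b)); }
inline bool or_gate(bool a, bool b)  { return nand_gate(not_gate(a), not_gate(b)); }

// The classic four-NAND construction of XOR.
inline bool xor_gate(bool a, bool b) {
    bool n = nand_gate(a, b);
    return nand_gate(nand_gate(a, n), nand_gate(b, n));
}
```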

Memory cells can be constructed by composing multiple logic gates. Each cell stores a single bit of information. Conceptually, it has one output (VALUE) where the current value can be read. Additionally, there are two input pins: SET and SET_VALUE. For reading, SET is zero. For writing, SET is one and the SET_VALUE becomes the new value stored in the cell. It's not hard to imagine how to build a memory controller on top of an array of memory cells that allows addressing of individual cells for getting and setting their value.

How can memory be modified in practice? For example, how is it possible to invert (change 0 to 1 and vice versa) the value of a memory cell? Naively reading the memory, inverting the value and writing it back into the cell leads to oscillation: as soon as the cell value changes, it is immediately read back, inverted, and written again. This cycle repeats as quickly as the electronics can handle.

[Figure: Memory cell feeding back to itself via an inverter]

The solutions to this conundrum are clocks and edge-triggered flip-flops. Clocks are signals switching between 0 and 1 at a defined frequency. Edge-triggered flip-flops read their input at the rising edge of the clock (when it switches from 0 to 1) and output that value until the next rising edge. In other words, they sample their input once per clock cycle and hold that value until the next cycle. When such an element is inserted into the inversion loop, the memory value is inverted exactly once per clock cycle.
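A toy simulation of this loop shows the once-per-cycle toggle. This is a hypothetical C++ model in which the clock is reduced to discrete update steps:

```cpp
// An edge-triggered flip-flop: samples its input only at the rising
// clock edge and holds that value until the next edge.
struct FlipFlop {
    bool stored = false;  // value held between clock edges
    bool input = false;   // value sampled at the next rising edge

    void rising_edge() { stored = input; }
};

// Simulate `cycles` clock cycles of the feedback loop:
// flip-flop output -> inverter -> flip-flop input.
bool toggle_loop(bool initial, int cycles) {
    FlipFlop ff;
    ff.stored = initial;
    for (int i = 0; i < cycles; ++i) {
        ff.input = !ff.stored;  // combinational inverter
        ff.rising_edge();       // clock tick: sample exactly once
    }
    return ff.stored;
}
```

Without the flip-flop, the inverter's output would feed straight back into its input and the value would oscillate uncontrollably; with it, the stored bit flips exactly once per clock cycle.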

[Figure: Memory cell feeding back to itself via an edge-triggered flip-flop and an inverter]

Based on this technique other operations can be implemented as well, such as adding or multiplying memory cells, copying memory contents to other locations, performing bitwise operations, and so on.

General-purpose processors

For each of those operations the logic gates would have to be arranged differently, though. In contrast, real general-purpose CPUs have fixed logic circuits: their gate configuration doesn't change at runtime. Instead, the operations to execute are read from memory and interpreted according to the chip's instruction set.

For our analysis, let's assume the command is read from separate input lines instead. We'll return to reading commands from memory later on.

How could one design and implement an instruction set? Let's say we have a machine with 8 lines (bits) of input and four 8-bit registers A, B, C, D. External memory is addressed in chunks of 8 bits and is attached via 8 address lines that select the location, 8 lines for reading/writing the 8-bit value, and one line to switch between reading and writing. What operations could we have?

Opcode Mnemonic Description
00RRVVVV SetHi VVVV, RR Set the 4 highest bits of register RR to VVVV.
01RRVVVV SetLo VVVV, RR Set the 4 lowest bits of register RR to VVVV.
1000RRSS Mov RR, SS Copy the value of register RR into register SS.
100100RR Read [RR] Read from memory address stored in RR, store the result in register RR.
100110RR Not RR Logically invert the value of register RR.
100111RR Inv RR Invert all bits (one's complement) of register RR.
1010RRSS Add RR, SS Add registers RR and SS, store the result in SS.
1011RRSS Mul RR, SS Multiply registers RR and SS, store the result in SS.
1100RRSS And RR, SS Logical AND of registers RR and SS, store the result in SS.
1101RRSS Or RR, SS Logical OR of registers RR and SS, store the result in SS.
1111RRSS Write RR, [SS] Write the value of register RR to the memory address stored in register SS.

It's not very efficient, but it enables a good amount of computation. How could it be implemented? All the separate opcodes could be realized as separate logic blocks on a chip. Each of them individually should be relatively easy to implement. Selecting which block to run (depending on the opcode) is a bit tricky. The easiest way to handle this is to run them all, but only enable output to the registers and memory for the single command that is desired by the input. On every cycle, all possible commands would be computed simultaneously, but only the desired one would be allowed to write to registers and memory. Is it efficient? No. Would it work? Yes.
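To make the scheme concrete, here is a software sketch of a few of the opcodes from the table above. The Machine struct and the sequential if/else decode are illustrative stand-ins for the hardware's "compute everything in parallel, enable one output" approach:

```cpp
#include <array>
#include <cstdint>

// Four 8-bit registers A, B, C, D, addressed as 0..3.
struct Machine {
    std::array<uint8_t, 4> reg{};
};

// Execute a single 8-bit instruction (subset of the opcode table).
void execute(Machine& m, uint8_t op) {
    uint8_t rr     = (op >> 2) & 0x3;  // register field in xxxxRRSS
    uint8_t ss     = op & 0x3;
    uint8_t vvvv   = op & 0x0F;        // immediate in xxRRVVVV
    uint8_t setreg = (op >> 4) & 0x3;

    if ((op >> 6) == 0x0)        // 00RRVVVV: SetHi VVVV, RR
        m.reg[setreg] = (m.reg[setreg] & 0x0F) | (vvvv << 4);
    else if ((op >> 6) == 0x1)   // 01RRVVVV: SetLo VVVV, RR
        m.reg[setreg] = (m.reg[setreg] & 0xF0) | vvvv;
    else if ((op >> 4) == 0x8)   // 1000RRSS: Mov RR, SS
        m.reg[ss] = m.reg[rr];
    else if ((op >> 4) == 0xA)   // 1010RRSS: Add RR, SS
        m.reg[ss] = static_cast<uint8_t>(m.reg[ss] + m.reg[rr]);
    // ... the remaining opcodes follow the same pattern
}
```

For example, SetHi 0011, B (0x13) followed by SetLo 0101, B (0x55) loads 0x35 into register B.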

Finally, we can address the problem of reading instructions from memory. Given the system described in the previous paragraphs, it shouldn't be too hard to add a separate component that reads instructions from memory and feeds them to this computation engine. The two components would communicate via an instruction-pointer register. The instruction set could be expanded to include (conditional) jumps, making the overall system Turing complete.


There are several small problems with what I've described, e.g. how to deal with instructions that consume multiple clock cycles, but all of them are solvable without too much trouble.

Thinking through this topic is an interesting exercise. On the transistor level, it's hard to see how a real processor could ever be constructed from these primitives. Possible in principle, but hard to see how to do in practice. Yet three levels of abstraction up, past gates and memory cells, there are suddenly memory blocks addressable via a parallel protocol. Every abstraction step is comprehensible, yet complexity builds up quickly. Two levels of abstraction further, we suddenly have an 8-bit microprocessor.

It must have been an exciting opportunity to figure all of this out in the middle of the last century.

published tagged hardware

Boost Range Highlights

Last week, I presented Boost Range for Humans: documentation for the Boost Range library with concrete code examples. This week we'll talk about some of the cool features in Boost Range.


boost::irange() is the C++ equivalent to Python's range() function. It returns a range object containing an arithmetic series of numbers:

boost::irange(4, 10)    -> {4, 5, 6, 7, 8, 9}
boost::irange(4, 10, 2) -> {4, 6, 8}

Together with indexed() (see below), it serves as a range-based alternative to the classic C for loop.


boost::combine() takes two or more input ranges and creates a zipped range – a range of tuples where each tuple contains corresponding elements from each input range.

The input ranges must have equal size.

std::string str = "abcde";
std::vector<int> vec = {1, 2, 3, 4, 5};
for (const auto & zipped : boost::combine(str, vec)) {
    char c; int i;
    boost::tie(c, i) = zipped;

    // Iterates over the pairs ('a', 1), ('b', 2), ...
}


Most if not all algorithms from the C++ standard library that apply to containers (via begin/end iterator pairs) have been wrapped in Boost Range. Examples include copy(), remove(), sort(), count(), find_if().


Adaptors are among the most interesting concepts Boost Range has to offer.

There are generally two ways to use adaptors: function syntax and pipe syntax. The former is handy for simple cases, while the latter allows chaining data transformations into an easy-to-read pipeline.

bool is_even(int n) { return n % 2 == 0; }
const std::vector<int> vec = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

// Function-style call:
for (int i : boost::adaptors::filter(vec, is_even)) { ... }

// Pipe-style call:
for (int i : vec | boost::adaptors::filtered(is_even)) { ... }

To see the power of the latter syntax, consider a transformation pipeline:

int square(int n) { return n*n; }
std::map<int, int> input_map = { ... };

using namespace boost::adaptors;
auto result = input_map | map_values
                        | filtered(is_even)
                        | transformed(square);


The boost::adaptors::indexed() adaptor warrants special mention: it is analogous to Python's enumerate(). Given a range, it gives access to the elements as well as their index within the range. Boost 1.56 or higher is required for this to work properly.


boost::accumulate() by default sums all the items in an input range, but other reduction functions can be supplied as well. Together with range adaptors, this makes map-reduce pipelines easy to write.

std::vector<int> vec = {1, 2, 3, 4, 5};
int product = boost::accumulate(vec, 1, std::multiplies<int>());


boost::as_literal() may be less a highlight and more of a crutch, but it bears mentioning still. Boost Range functions accept a wide variety of types, among them strings. C++ style strings (std::string) always work as expected, but with character arrays (char[]), there is an ambiguity as to whether the argument should be interpreted as array (including the terminal '\0' character) or as string (excluding the terminator).

By default, Boost Range treats them as arrays, which they are, after all. In practice, this is often a pitfall for newcomers. If any string-related range operations don't work as expected, this is a common reason.

To force the library to treat character arrays as strings, they can be wrapped in an as_literal() call. Alternatively, the C strings can be cast to std::string as well.


There are several interesting aspects about Boost Range. It plays very well with C++11's range-based for loops and makes code operating on containers much easier to write and (most importantly) read. In addition, it makes it possible to lay out data processing pipelines a lot more clearly.

Container iteration and modification becomes as easy as it is in modern scripting languages, which is a huge, huge step for the C++ language.

Let's hope that C++17 brings similar capabilities in the standard library. Until then, Boost Range is the way to go, so check out the docs and try it yourself.

published tagged boost, c++

Boost Range for Humans

Boost Range encapsulates the common C++ pattern of passing begin/end iterator pairs around by combining them into a single range object. It makes code that operates on containers much more readable. One wonders why such functionality was not included in the C++ standard library in the first place; indeed, similar ideas could be added to C++17, see N4128 and Ivan Cukic's Meeting C++ presentation. In my opinion, Boost Range is something that every C++ programmer should know about.

The library is reasonably well documented, but I was often missing concrete code examples and an explicit mention of which headers are required for each function. Since this presumably happens to other people as well, I invested the time to change that situation.

Thus, I present Boost Range for Humans. It contains working example code for every function in Boost Range, along with required headers and links to the official documentation and the latest source code. I hope it will make Boost Range more accessible and further its adoption.

Next week, we'll look into some of the highlights of what Boost Range can offer.

published tagged boost, c++

Debugging riddle of the day

One of our services failed to start on a test system (Ubuntu 12.04 on amd64). The stdout/stderr log streams contained only the string “Permission denied” – less than helpful. strace showed that the service tried to create a file under /run, which it doesn't have write permission for. This caused it to bail out:

open("/run/some_service", O_RDWR|O_CREAT|O_NOFOLLOW|O_CLOEXEC, 0644) = -1 EACCES (Permission denied)

Grepping the source code and configuration files for /run didn't turn up anything that could explain this open() call. Debugging with gdb gave further hints:

Breakpoint 2, 0x00007ffff73e3ea0 in open64 () from /lib/x86_64-linux-gnu/
(gdb) bt
#0  0x00007ffff73e3ea0 in open64 () from /lib/x86_64-linux-gnu/
#1  0x00007ffff7bd69bf in shm_open () from /lib/x86_64-linux-gnu/
#2  0x0000000000400948 in daemonize () at service.cpp:93
#3  0x00000000004009ac in main () at main.cpp:24
(gdb) p (char*)$rdi
$1 = 0x7fffffffe550 "/run/some_service"
(gdb) frame 2
#2  0x0000000000400948 in daemonize () at service.cpp:93
93          int fd = shm_open(fname.c_str(), O_RDWR | O_CREAT, 0644);
(gdb) p fname
$2 = {...., _M_p = 0x602028 "/some_service"}}

The open("/run/some_service", ...) was caused by an shm_open("/some_service", ...).

This code is working on other machines, why does it fail on this particular one? Can you figure it out? Bonus points if you can explain why it is trying to access /run and not some other directory. You might find the shm_open() man page and source code helpful.

I'll be waiting for you.


The solution is pretty evident after examining the Linux version of shm_open(). By default, it tries to create shared memory files under /dev/shm. If that doesn't exist, it will pick the first tmpfs mount point from /proc/mounts.
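A minimal sketch of the call in question (POSIX; the name "/demo_shm" is made up for illustration). Note that the name passed to shm_open() contains no directory; glibc prepends its chosen tmpfs mount point, normally /dev/shm:

```cpp
#include <fcntl.h>     // O_* constants
#include <sys/mman.h>  // shm_open, shm_unlink
#include <sys/stat.h>  // mode constants

// Creates a shared memory object; on a healthy system this appears
// as /dev/shm/demo_shm. Returns the file descriptor, or -1 on error.
int open_demo_shm() {
    return shm_open("/demo_shm", O_RDWR | O_CREAT, 0644);
}
```

On the broken machine, the same call silently landed in /run instead, because that was the first tmpfs entry in /proc/mounts.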

In Ubuntu 12.04, /dev/shm is a symlink to /run/shm. On this machine the symlink was missing, which caused shm_open() to go hunting for a tmpfs filesystem, and /run happened to be the first one in /proc/mounts.

Re-creating the symlink solved the problem. Why it was missing in the first place is still unclear. In the aftermath, we're also improving the error messages in this part of the code to make such issues easier to diagnose.

libconf - a Python reader for libconfig files

This weekend, I uploaded my first package to PyPI: libconf, a pure-Python reader for files in libconfig format. This configuration file format is reminiscent of JSON and is mostly used in C/C++ projects through the libconfig library. It looks like this:

version = 7;
window: {
   title: "libconfig example"
   position: { x: 375; y: 210; w: 800; h: 600; }
};
capabilities: {
   can-do-lists: (true, 0x3A20, ("sublist"), {subgroup: "ok"})
   can-do-arrays: [3, "yes", True]
};

There are already two Python implementations: pylibconfig2 is a pure-Python reader licensed under GPLv3, and python-libconfig provides bindings for the libconfig C++ library. The first one I didn't like because of its licensing, the second because of the more involved installation procedure. Also, I kind of enjoy writing parsers.

So, I sat down during the Easter weekend and wrote libconf. It's a pure-Python reader for libconfig files with an interface similar to the Python json module. There are two main functions: load(f) and loads(string). Both return a dict-like data structure that can be indexed (config['version']) but supports attribute access as well (config.version):

>>> import libconf
>>> with open('example.cfg') as f:
...     config = libconf.load(f)
>>> config['window']['title']
'libconfig example'
>>> config.window.title
'libconfig example'

It was a fun little project. Creating a recursive descent parser is pretty straightforward, especially for such a simple file format. Writing documentation, packaging and uploading to GitHub and PyPI took longer than coding up the implementation itself.

published tagged Python

Requesting certificates the whirlwind way

As a follow-up to last post, let's take a closer look at certificates. Specifically, when you'd want one and how you'd go about obtaining it.

When would you need a certificate?

The most common reason is to provide access to a service over a secure connection, e.g. HTTPS for web traffic or IMAPS for email. This requires a certificate signed by a CA that's recognized by browsers and operating systems. Signatures by these CAs indicate control over the internet domain for which the certificate was issued. They are used to prove to users that they are talking to the real thing, not to some scammer who hijacked their internet connection. This kind of certificate is often called a "server certificate".

Instead of proving the identity of a server, certificates can also be used to authenticate a client. Unsurprisingly, this is called a "client certificate" and can serve as a more secure alternative to password-based authentication. For example, Apple Push Notifications (for sending notifications to apps on an iPhone) require a certificate signed by Apple to prove that someone is allowed to send push notifications. Several VPN technologies also rely on client certificates for authentication.

How do you go about obtaining a certificate?

You need to create a certificate signing request (CSR) and submit it to the CA. Last time we saw that a certificate consists of a keypair identifier (hash), metadata, and signatures. A CSR contains only the first two of these: a hash that uniquely identifies a keypair and metadata relating to the intended use of the certificate.

Obviously, this means there can be no CSR without a keypair![1]

For most purposes, it's best to start with a new keypair that hasn't been used anywhere else. So we'll first generate that and then create a CSR for it.

Those are two pretty simple operations; they just get complicated by the horrific user interface of OpenSSL. For most people, these commands equate to mystical incantations that must be recited to work their magic. That's unfortunate: it just deepens the confusion about how this certificate shebang is supposed to work. OpenSSL is the most widely available and popular tool for the job, though, so for better or worse we'll have to put up with it.

On Linux, OpenSSL should be available from the package manager, if it isn't installed already. On MacOS, OpenSSL is probably pre-installed, if not, it can be obtained from Homebrew. Windows binaries can be found here.

Creating a CSR

First, generate a keypair:

openssl genrsa -out certtest.key 2048

This will write a new keypair to the file "certtest.key". The number 2048 is the desired key size, with larger keys being more secure but also slower. As of 2016, 2048 bits is a reasonable default.

Next create a CSR for that keypair:

openssl req -new -sha256 -key certtest.key -out certtest.csr

OpenSSL will ask some questions to set the CSR metadata. For server certificates, the crucial field is "Common Name", which has to be the internet address of the server the certificate is intended for. The other fields are for informational purposes only. You may set them, or you may write a single dot (".") to leave the field blank. Not typing anything at all at the question prompts will give you OpenSSL's default values; don't do that.

Here is a screen capture of the process:


Before submitting the CSR for signing, it's a good practice to re-check the metadata fields:

openssl req -noout -text -in certtest.csr

This outputs several lines of data, the important part is the "Subject" line at the very top. Check that all the fields are what you expect them to be.

Submitting the CSR for signing

The details of submitting a CSR for signing vary from CA to CA, typically it involves an upload on a web page. Depending on the intended use of the certificate, additional steps may be required, such as proving your real-life identity and/or proving that you have control over some domain (for server certificates). Fortunately, these steps are typically quite thoroughly described in the CA's documentation or on the internet.

After the submission, it may take some time for the certificate to be signed, then you can download it from the CA's web page.

Pretty simple, right?

[1] For users of the Apple Keychain application this can be a bit confusing, because there is an assistant to generate CSR's, but it doesn't mention keypairs anywhere. Under the hood, it does create a new keypair and add it to the keychain; it just doesn't tell you.

A whirlwind introduction to the secure web

How secure internet connections work is often a mystery, even for fairly technical people – let's rectify this!

Although some fairly complicated mathematics lays the foundation, it's not necessary to grasp all of it to get a good high-level understanding of the secure web. In this post, I'll give an overview of the major components of the secure web and of how they interact. I want to shed some light on the wizardry that both your browser and webservers do to make secure communication over the internet possible. Specifically, the focus is on authentication: how the public key infrastructure (PKI) guarantees you that you are really talking to your bank not to some scammer.

This is going to be a high-level overview with a lot of handwaving. We'll gloss over some of the nitty gritty details, and focus on the big picture.


Let's start off with the basic building block of public-key cryptography, the public/private keypair. It consists of two halves: the public part can be shared with the world, the private half must stay hidden, for your eyes only. With such a keypair, you can do three pretty nifty things:

  • People can encrypt data with your public key, and only you can decrypt it.
  • You can prove to other people (who have your public key) that you know the private key. In some sense, they can verify your identity.
  • You can sign data with your private key, and everyone with the public key can verify that this data came from you and was not tampered with.

Let's look at an example to see how cool this is: let's say your bank has a keypair and you know their public key (it could be printed in large letters on the walls of their building, it's public after all). You can then securely communicate with your bank, without anyone being able to listen in, even if they can intercept or modify the messages between you and the bank (encryption). You can be sure you really are talking to the bank, and not to a fraudster (verification). And the bank can make statements that anyone who has their public key can ascertain is genuine, i.e. it really came from the bank (signing).

The last part is nice because the bank can sign a statement about your balance, give it to you, and you can forward it to your sleazy landlord who wants proof of your financial situation. The bank and the landlord never directly talk with each other, nevertheless the latter has full certainty that the statement was made by the bank, and that you didn't tamper with it.

So it's cool that we can securely communicate with our banks. We can do the same with websites: once we have the public key of e.g. Google, it's easy to set up an encrypted communication channel. Via the verification function of keypairs, it's also easy to prove we really are talking to the real Google, and not to some kid who's trying to steal our password to post dog pictures in our cat groups.

How do we get Google's public key? — This is where things start going downhill.

In the bank example, we got the public key personally from the bank (written on its front wall). With Google, it'd be kind of difficult to travel to Mountain View just to get their public key. And we can't just go and download the key from google.com: the whole point is that we're not sure the google.com we're talking to is the real Google.

Are we completely out of luck? Can we communicate securely over the internet only if we manually exchange keys before, which we usually can't? It turns out we are only sort-of out of luck: we are stuck with certificates and the halfway-broken system of certificate authorities.


A certificate contains several parts:

  • an identifier (a hash) that uniquely identifies a keypair
  • metadata that says who this keypair belongs to
  • signatures: statements signed by other keys that vouch that the keypair referenced here really belongs to the entity described in the metadata section

It's important to note that certificates don't need to be kept secret. The keypair identifier doesn't reveal the private key, so certificates can be shared freely. The corollary of this is that a certificate alone can't be used to verify you're talking to anyone in particular. To be used for authentication, it needs to be paired with the associated private key.

With that out of the way, let's look at why certificates are useful. Say someone gives you a certificate and proves they have the associated private key. You've never met this person. However, the certificate carries signatures from several keys that you know belong to close friends of yours. All of those signatures attest that this person is called "Hari Seldon". If you trust your friends, you can be pretty certain that the person is really called that way.

When you think about it, this is kind of neat. A stranger can authenticate to you (prove that they are who they say they are) just because someone you trust made a statement confirming the stranger's identity. That this statement really comes from your trusted friend is ensured, because it's signed with their private key.

The same concept can be applied to websites. As long as there's someone you trust and you have their public key, they can sign other people's certificates to affirm that identity to you. For example, they can sign a certificate for Google that says "This really is the real google.com". When you see that certificate and verify that the other party has the associated private key, you'll have good reason to believe that you really are talking to Google's server, not some scam version by a North Korean hacker.

Certificate Authorities

So how do you find someone you can trust? And how does that person make sure that the certificate they are signing really belongs to Google? They face the same problems confirming that fact as you did! Does this even improve the situation in any way?

It does – let's take the questions in order. The reality on the internet is: it's not you trusting someone, it's your browser that does the trusting. Your browser includes public keys from so-called "certificate authorities" (CA's). You can find the list of CA's trusted by your own browser in its options, under Advanced / Security / Certificates / Authorities. If the browser sees certificates signed by any one of these keys, it believes them to be true. It trusts CA's not to sign any bogus certificates.

Why are these keys trustworthy? Because CA's are mostly operated by large companies that have strict policies in place to make sure they only sign stuff that's legit. How do they do that? After all, as an individual you'd have a pretty tough time verifying that the public key offered by google.com really belongs to Google. Don't CA's face the same problem?

Not really. There are billions of people accessing google.com. There are only about 200 CA's that are trusted by the common browsers. And Google needs a signed certificate from only one of them (one signature is enough to earn the browser's trust). So Google can afford to prove its identity to a CA: by sending written letters, a team of lawyers, or whatever. Once Google gets a certificate for google.com signed by any reputable CA, it is recognized by pretty much every device in the world.

Similarly, I, as a private person, can get a certificate for this website's domain by proving my identity and my ownership of the domain to the CA. The identity part is usually done by submitting a scan of a driver's license or passport. Ownership of the domain can be shown by uploading a file supplied by the CA to the webserver. Once the CA confirms that the file is there, it knows I have control of that domain.

So instead of me having to prove my identity to every single user visiting this website, I can prove it once to a CA, and all browsers coming here will recognize this as good enough. This way, CA's make the problem of server authentication ("I'm the real site, not a cheap fake") tractable. It's a system that has made the large-scale deployment of secure internet traffic via HTTPS possible.

The half-broken part

Let's get back to the analogy of a stranger authenticating to you via a certificate signed by someone you know. What if the signature wasn't from a close friend of yours, but from a seedy guy you meet occasionally when going out? Would you still have full confidence in the certificate? Hopefully not.

What does this mean for the web?

Not all of the CA's included in the common web browsers are the equivalent of a trusted friend:

  • They may be controlled by a government that wants a certificate for a popular email service, so it can read dissidents' emails
  • An employee with access to the certificate authority key may create certificates for bank websites and sell them on the black market
  • The computer network where the CA keys are stored could have been hacked

I'm pretty sure all three of those have actually happened in the past. Given that a single forged certificate can be used to attack millions of users, CA's are juicy targets. As soon as forged certificates are detected in the wild, they tend to be blacklisted (blocked by browsers) very quickly, but there is still a window of vulnerability.

For this reason, the whole CA system has been questioned over the last few years, but replacing it does not seem feasible at the moment. There are techniques (such as public key pinning) to augment the CA-based authentication, but it takes time for them to be picked up by website owners.

While this is a problem, it mostly affects the largest websites (obtaining a forged certificate is difficult and costly). Together with browser vendors, they are developing new mitigation techniques against forged certificates. In the meantime, the rest of us are still pretty well served by the current CA system, even though it is not perfect.


So, this is it for an overview of the public key infrastructure that enables secure connections to internet sites, from the basics of public key cryptography to certificate authorities. If you want to dig deeper, I recommend starting with the Wikipedia articles I linked throughout the article. If you are interested in cryptography in general, I highly recommend Bruce Schneier's book Applied Cryptography. It's 20 years old now, and still enormously relevant today.

I hope this text helps a bit to clear up the confusion associated with public key cryptography and the secure web. If you liked it, or if you have any suggestions for improvement, please let me know in the comments!

dh_virtualenv and long package names (FileNotFound error)

Interesting tidbit about Linux:

A maximum line length of 127 characters is allowed for the first line in a #! executable shell script.

from man execve

Should be enough, right?


Well, not if you are using dh_virtualenv with long package names, anyway:

Installing pip...
  Error [Errno 2] No such file or directory while executing command /tmp/ /usr/share/python-virtualenv/pip-1.1.tar.gz
...Installing pip...done.
Traceback (most recent call last):
  File "/usr/bin/virtualenv", line 3, in <module>
  File "/usr/lib/python2.7/dist-packages/", line 938, in main
  File "/usr/lib/python2.7/dist-packages/", line 1054, in create_environment
install_pip(py_executable, search_dirs=search_dirs, never_download=never_download)
  File "/usr/lib/python2.7/dist-packages/", line 643, in install_pip
  File "/usr/lib/python2.7/dist-packages/", line 976, in call_subprocess
cwd=cwd, env=env)
  File "/usr/lib/python2.7/", line 679, in __init__
errread, errwrite)
  File "/usr/lib/python2.7/", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Traceback (most recent call last):
  File "/usr/bin/dh_virtualenv", line 106, in <module>
sys.exit(main() or 0)
  File "/usr/bin/dh_virtualenv", line 83, in main
  File "/usr/lib/python2.7/dist-packages/dh_virtualenv/", line 112, in create_virtualenv
  File "/usr/lib/python2.7/", line 511, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['virtualenv', '--system-site-packages', '--setuptools', 'debian/long-package-name/usr/share/python/long-package-name']' returned non-zero exit status 1
make: *** [binary-arch] Error 1
dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2

dh_virtualenv is used to create Debian packages that include Python virtualenvs. It is one of the better ways of packaging Python software, especially if there are Python dependencies that are not available in Debian or Ubuntu. When building a .deb package, it creates a virtualenv in a location such as:


This virtualenv has several tools under its bin/ directory, and they all have the absolute path of the virtualenv's Python interpreter hard-coded in their #! shebang line:


Given that <build-directory> often contains the package name as well, it's easy to overflow the 128 byte limit of the #! shebang line. In my case, with a ~30 character package name, the path length grew to 160 characters!

Consequently, the kernel couldn't find the Python executable anymore, and running any of the tools from the bin/ directory gave an ENOENT (file not found) error. This is what happened when virtualenv tried to install pip during the initial setup. The root cause of this error is not immediately obvious, to say the least.
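The failure mode can be reproduced directly. This is a Linux-specific sketch (paths under /tmp are made up for the demonstration): we point a script's shebang at a real interpreter through a deliberately long path and watch execve() reject it:

```cpp
#include <cerrno>
#include <fcntl.h>
#include <string>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

// Returns true if executing a script with a >127 character shebang
// line fails, even though the interpreter path itself is valid.
bool long_shebang_fails() {
    // A ~145-character path to a real interpreter, via a symlink.
    std::string interp = "/tmp/" + std::string(140, 'a');
    unlink(interp.c_str());
    if (symlink("/bin/sh", interp.c_str()) != 0) return false;

    // Write a script whose #! line exceeds the kernel's buffer.
    const char* script = "/tmp/long_shebang_demo";
    std::string body = "#!" + interp + "\necho hi\n";
    int fd = open(script, O_WRONLY | O_CREAT | O_TRUNC, 0755);
    if (fd < 0) return false;
    (void)!write(fd, body.c_str(), body.size());
    close(fd);

    pid_t pid = fork();
    if (pid == 0) {
        char* const argv[] = {const_cast<char*>(script), nullptr};
        execve(script, argv, nullptr);
        // Old kernels truncate the line (ENOENT), newer ones reject it
        // outright (ENOEXEC); either way the exec fails.
        _exit((errno == ENOENT || errno == ENOEXEC) ? 0 : 1);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

The interpreter exists and is executable; only the length of the shebang line makes the kernel give up.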

To check whether this affects you, check the line length of any script with wc:

head -n 1 /path/to/virtualenv/bin/easy_install | wc -c

If that's larger than 128, it's probably the cause of the problem.

The fix is to change the package name and/or the build location to something shorter. The alternative would be to patch the Linux kernel, which – depending on your preferences – sounds either fun or really unpleasant. Suit yourself!