Requesting certificates the whirlwind way

As a follow-up to the last post, let's take a closer look at certificates. Specifically, when you'd want one and how you'd go about obtaining it.

When would you need a certificate?

The most common reason is to provide access to a service over a secure connection, e.g. HTTPS for web traffic or IMAPS for email. This requires a certificate signed by a CA that's recognized by browsers and operating systems. A signature by such a CA indicates control over the internet domain for which the certificate was issued (e.g. a web server's domain name). It is used to prove to users that they are talking to the real thing, not to some scammer who hijacked their internet connection. This kind of certificate is often called a "server certificate".

Instead of proving the identity of a server, certificates can also be used to authenticate a client. Unsurprisingly, this is called a "client certificate", and it can serve as a more secure alternative to password-based authentication. For example, Apple Push Notifications (for sending notifications to apps on an iPhone) require a certificate signed by Apple to prove that someone is allowed to send push notifications. Several VPN technologies also rely on client certificates for authentication.

How do you go about obtaining a certificate?

You need to create a certificate signing request (CSR) and submit it to the CA. Last time we saw that a certificate consists of a keypair identifier (hash), metadata, and signatures. A CSR contains only the first two of these: a hash that uniquely identifies a keypair and metadata relating to the intended use of the certificate.

Obviously, this means there can be no CSR without a keypair!1

For most purposes, it's best to start with a new keypair that hasn't been used anywhere else. So we'll first generate that and then create a CSR for it.

Those are two pretty simple operations; they just get complicated by the horrific user interface of OpenSSL. For most people, these commands equate to mystical incantations that must be recited to work their magic. That's unfortunate: it just deepens the confusion about how this certificate shebang is supposed to work. OpenSSL is the most widely available and popular tool for the job, though, so for better or worse we'll have to put up with it.

On Linux, OpenSSL should be available from the package manager, if it isn't installed already. On macOS, OpenSSL is probably pre-installed; if not, it can be obtained from Homebrew. Windows binaries are available for download online.

Creating a CSR

First, generate a keypair:

openssl genrsa -out certtest.key 2048

This will write a new keypair to the file "certtest.key". The number 2048 is the desired key size, with larger keys being more secure but also slower. As of 2016, 2048 bits is a reasonable default.

Next create a CSR for that keypair:

openssl req -new -sha256 -key certtest.key -out certtest.csr

OpenSSL will ask some questions to set the CSR metadata. For server certificates, the crucial field is "Common Name", which has to be the internet address of the server the certificate is intended for. The other fields are for informational purposes only. You may set them, or you may type a single dot (".") to leave a field blank. Not typing anything at all at the prompts will give you OpenSSL's default values; don't do that.

Here is a screen capture of the process:


Before submitting the CSR for signing, it's a good practice to re-check the metadata fields:

openssl req -noout -text -in certtest.csr

This outputs several lines of data; the important part is the "Subject" line at the very top. Check that all the fields are what you expect them to be.

Submitting the CSR for signing

The details of submitting a CSR for signing vary from CA to CA; typically it involves an upload on a web page. Depending on the intended use of the certificate, additional steps may be required, such as proving your real-life identity and/or proving that you have control over some domain (for server certificates). Fortunately, these steps are typically quite thoroughly described in the CA's documentation or on the internet.

After the submission, it may take some time for the certificate to be signed; then you can download it from the CA's web page.

Pretty simple, right?

1 For users of the Apple Keychain application this can be a bit confusing, because there is an assistant to generate CSRs, but it doesn't mention keypairs anywhere. Under the hood, it does create a new keypair and add it to the keychain; it just doesn't tell you.

A whirlwind introduction to the secure web

How secure internet connections work is often a mystery, even for fairly technical people – let's rectify this!

Although some fairly complicated mathematics lays the foundation, it's not necessary to grasp all of it to get a good high-level understanding of the secure web. In this post, I'll give an overview of the major components of the secure web and of how they interact. I want to shed some light on the wizardry that both your browser and webservers do to make secure communication over the internet possible. Specifically, the focus is on authentication: how the public key infrastructure (PKI) guarantees that you are really talking to your bank and not to some scammer.

This is going to be a high-level overview with a lot of handwaving. We'll gloss over some of the nitty gritty details, and focus on the big picture.


Let's start off with the basic building block of public-key cryptography, the public/private keypair. It consists of two halves: the public part can be shared with the world; the private half must stay hidden, for your eyes only. With such a keypair, you can do three pretty nifty things:

  • People can encrypt data with your public key, and only you can decrypt it.
  • You can prove to other people (who have your public key) that you know the private key. In some sense, they can verify your identity.
  • You can sign data with your private key, and everyone with the public key can verify that this data came from you and was not tampered with.
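
To make these three operations concrete, here is a toy sketch of the underlying arithmetic using textbook RSA with deliberately tiny numbers. This is illustration only: real keys are 2048+ bits and use padding schemes, so never use code like this for actual security.

```python
# Textbook RSA with tiny primes p = 61, q = 53, so n = 3233.
# Public key: (n, e); private key: (n, d), where e*d == 1 (mod (p-1)*(q-1)).
n, e, d = 3233, 17, 2753

def encrypt(m):
    # Anyone can encrypt with the public key...
    return pow(m, e, n)

def decrypt(c):
    # ...but only the private-key holder can decrypt.
    return pow(c, d, n)

def sign(m):
    # The private-key holder signs...
    return pow(m, d, n)

def verify(m, s):
    # ...and anyone with the public key can check the signature.
    return pow(s, e, n) == m

assert decrypt(encrypt(42)) == 42
assert verify(42, sign(42))
assert not verify(43, sign(42))  # a tampered message fails verification
```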

Let's look at an example to see how cool this is: say your bank has a keypair and you know their public key (it could be printed in large letters on the walls of their building; it's public, after all). You can then securely communicate with your bank without anyone being able to listen in, even if they can intercept or modify the messages between you and the bank (encryption). You can be sure you really are talking to the bank, and not to a fraudster (verification). And the bank can make statements that anyone who has their public key can ascertain to be genuine, i.e. really coming from the bank (signing).

The last part is nice because the bank can sign a statement about your balance, give it to you, and you can forward it to your sleazy landlord who wants proof of your financial situation. The bank and the landlord never directly talk with each other, nevertheless the latter has full certainty that the statement was made by the bank, and that you didn't tamper with it.

So it's cool that we can securely communicate with our banks. We can do the same with websites: once we have the public key of e.g. Google, it's easy to set up an encrypted communication channel. Via the verification function of keypairs, it's also easy to prove we really are talking to the real Google, and not to some kid who's trying to steal our password to post dog pictures in our cat groups.

How do we get Google's public key? — This is where things start going downhill.

In the bank example, we'd gotten the public key personally from the bank (written on its front wall). With Google, it'd be kind of difficult to travel to Mountain View just to get their public key. And we can't simply download the key from Google's website: the whole point is that we're not yet sure the server we're talking to is the real Google.

Are we completely out of luck? Can we communicate securely over the internet only if we manually exchange keys before, which we usually can't? It turns out we are only sort-of out of luck: we are stuck with certificates and the halfway-broken system of certificate authorities.


A certificate contains several parts:

  • an identifier (a hash) that uniquely identifies a keypair
  • metadata that says who this keypair belongs to
  • signatures: statements signed by other keys that vouch that the keypair referenced here really belongs to the entity described in the metadata section
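
As a rough mental model (not the actual X.509 certificate layout), the identifier is typically a cryptographic hash of the public key, so a certificate can be sketched like this:

```python
import hashlib

def fingerprint(public_key_bytes):
    # Hash of the public key: identifies the keypair uniquely,
    # but reveals nothing about the private key.
    return hashlib.sha256(public_key_bytes).hexdigest()

# Toy structure mirroring the three parts listed above.
certificate = {
    "key_id": fingerprint(b"...public key bytes..."),
    "metadata": {"subject": "example.org"},
    "signatures": [],  # signed vouchers from other keys (e.g. CAs)
}

assert len(certificate["key_id"]) == 64  # SHA-256 hex digest
```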

It's important to note that certificates don't need to be kept secret. The keypair identifier doesn't reveal the private key, so certificates can be shared freely. The corollary of this is that a certificate alone can't be used to verify you're talking to anyone in particular. To be used for authentication, it needs to be paired with the associated private key.

With that out of the way, let's look at why certificates are useful. Say someone gives you a certificate and proves they have the associated private key. You've never met this person. However, the certificate carries signatures from several keys that you know belong to close friends of yours. All of those signatures attest that this person is called "Hari Seldon". If you trust your friends, you can be pretty certain that the person is really called that way.

When you think about this, it's kind of neat. A stranger can authenticate to you (prove that they are who they say they are) just because someone you trust made a statement confirming the stranger's identity. That this statement really comes from your trusted friend is ensured, because it's signed with their private key.

The same concept can be applied to websites. As long as there's someone you trust and you have their public key, they can sign other people's certificates to affirm those identities to you. For example, they can sign a certificate for Google that says, in effect, "this key really belongs to Google". When you see that certificate and verify that the other party has the associated private key, you'll have good reason to believe that you really are talking to Google's server, not some scam version by a North Korean hacker.

Certificate Authorities

So how do you find someone you can trust? And how does that person make sure that the certificate they are signing really belongs to Google? They face the same problems confirming that fact as you did! Does this even improve the situation in any way?

It does – let's take the questions in order. The reality on the internet is: it's not you trusting someone, it's your browser that does the trusting. Your browser includes public keys from so-called "certificate authorities" (CAs). You can find the list of CAs trusted by your own browser in its options, under Advanced / Security / Certificates / Authorities. If the browser sees certificates signed by any one of these keys, it believes them to be genuine. It trusts CAs not to sign any bogus certificates.

Why are these keys trustworthy? Because CAs are mostly operated by large companies that have strict policies in place to make sure they only sign stuff that's legit. How do they do that? After all, as an individual you'd have a pretty tough time verifying that the public key offered by some server really belongs to Google. Don't CAs face the same problem?

Not really. There are billions of people accessing Google, but only about 200 CAs that are trusted by the common browsers. And Google needs a signed certificate from only one of them (one signature is enough to earn the browser's trust). So Google can afford to prove its identity to a CA: by sending written letters, a team of lawyers, or whatever. Once Google gets a certificate for its domain signed by any reputable CA, it is recognized by pretty much every device in the world.

Similarly, I, as a private person, can get a certificate for this site by proving my identity and my ownership of its domain to the CA. The identity part is usually done by submitting a scan of a driver's license or passport. Ownership of the domain can be shown by uploading a file supplied by the CA to the webserver. Once the CA confirms that the file is there, it knows I have control of that domain.

So instead of me having to prove my identity to every single user visiting this website, I can prove it once to a CA, and all browsers coming here will recognize this as good enough. This way, CAs make the problem of server authentication ("I'm the real site, not a cheap fake") tractable. It's a system that has made the large-scale deployment of secure internet traffic via HTTPS possible.

The half-broken part

Let's get back to the analogy of a stranger authenticating to you via a certificate signed by someone you know. What if the signature wasn't from a close friend of yours, but from a seedy guy you meet occasionally when going out? Would you still have full confidence in the certificate? Hopefully not.

What does this mean for the web?

Not all of the CAs included in the common web browsers are the equivalent of a trusted friend:

  • They may be controlled by a government that wants a certificate for an email provider, so it can read dissidents' emails
  • An employee with access to the certificate authority key may create certificates for bank websites and sell them on the black market
  • The computer network where the CA keys are stored could have been hacked

I'm pretty sure all three of those have actually happened in the past. Given that a single forged certificate can be used to attack millions of users, CAs are juicy targets. As soon as forged certificates are detected in the wild, they tend to be blacklisted (blocked by browsers) very quickly, but there is still a window of vulnerability.

For this reason, the whole CA system has been questioned over the last few years, but replacing it does not seem feasible at the moment. There are techniques (such as public key pinning) to augment the CA-based authentication, but it takes time for them to be picked up by website owners.

While this is a problem, it mostly affects the largest websites (obtaining a forged certificate is difficult and costly). Together with browser vendors, they are developing new mitigation techniques against forged certificates. In the meantime, the rest of us are still pretty well served by the current CA system, even though it is not perfect.


So, this is it for an overview of the public key infrastructure that enables secure connections to internet sites, from the basics of public key cryptography to certificate authorities. If you want to dig deeper, I recommend starting with the Wikipedia articles I linked throughout the article. If you are interested in cryptography in general, I highly recommend Bruce Schneier's book Applied Cryptography. It's 20 years old now, and still enormously relevant today.

I hope this text helps a bit to clear up the confusion associated with public key cryptography and the secure web. If you liked it, or if you have any suggestions for improvement, please let me know in the comments!

dh_virtualenv and long package names (FileNotFound error)

Interesting tidbit about Linux:

A maximum line length of 127 characters is allowed for the first line in a #! executable shell script.

from man execve

Should be enough, right?


Well, not if you are using dh_virtualenv with long package names, anyway:

Installing pip...
  Error [Errno 2] No such file or directory while executing command /tmp/ /usr/share/python-virtualenv/pip-1.1.tar.gz
...Installing pip...done.
Traceback (most recent call last):
  File "/usr/bin/virtualenv", line 3, in <module>
  File "/usr/lib/python2.7/dist-packages/", line 938, in main
  File "/usr/lib/python2.7/dist-packages/", line 1054, in create_environment
install_pip(py_executable, search_dirs=search_dirs, never_download=never_download)
  File "/usr/lib/python2.7/dist-packages/", line 643, in install_pip
  File "/usr/lib/python2.7/dist-packages/", line 976, in call_subprocess
cwd=cwd, env=env)
  File "/usr/lib/python2.7/", line 679, in __init__
errread, errwrite)
  File "/usr/lib/python2.7/", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Traceback (most recent call last):
  File "/usr/bin/dh_virtualenv", line 106, in <module>
sys.exit(main() or 0)
  File "/usr/bin/dh_virtualenv", line 83, in main
  File "/usr/lib/python2.7/dist-packages/dh_virtualenv/", line 112, in create_virtualenv
  File "/usr/lib/python2.7/", line 511, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['virtualenv', '--system-site-packages', '--setuptools', 'debian/long-package-name/usr/share/python/long-package-name']' returned non-zero exit status 1
make: *** [binary-arch] Error 1
dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2

dh_virtualenv is used to create Debian packages that include Python virtualenvs. It is one of the better ways of packaging Python software, especially if there are Python dependencies that are not available in Debian or Ubuntu. When building a .deb package, it creates a virtualenv in a location such as:

debian/<package-name>/usr/share/python/<package-name>

This virtualenv has several tools under its bin/ directory, and they all have the absolute path of the virtualenv's Python interpreter hard-coded in their #! shebang line:

#!<build-directory>/debian/<package-name>/usr/share/python/<package-name>/bin/python

Given that <build-directory> often contains the package name as well, it's easy to overflow the 128 byte limit of the #! shebang line. In my case, with a ~30 character package name, the path length grew to 160 characters!

Consequently, the kernel couldn't find the Python executable anymore, and running any of the tools from the bin/ directory gave an ENOENT (file not found) error. This is what happened when virtualenv tried to install pip during the initial setup. The root cause of this error is not immediately obvious, to say the least.

To see whether this affects you, check the line length of any script with wc:

head -n 1 /path/to/virtualenv/bin/easy_install | wc -c

If that's larger than 128, it's probably the cause of the problem.
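
If you'd rather scan a whole bin/ directory at once, the same check can be sketched in a few lines of Python (the 127-character limit is the one from the execve man page quoted above):

```python
import os

SHEBANG_LIMIT = 127  # max length of the "#!" line accepted by the kernel

def overlong_shebangs(bindir):
    """Return the scripts in bindir whose shebang line is too long and
    would therefore fail with ENOENT when executed."""
    offenders = []
    for name in sorted(os.listdir(bindir)):
        path = os.path.join(bindir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            first_line = f.readline().rstrip(b"\n")
        if first_line.startswith(b"#!") and len(first_line) > SHEBANG_LIMIT:
            offenders.append(name)
    return offenders
```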

The fix is to change the package name and/or the build location to something shorter. The alternative would be to patch the Linux kernel, which – depending on your preferences – sounds either fun or really unpleasant. Suit yourself!

Behold the power of perf-tools

perf-tools is a collection of scripts for system-wide tracing on Linux. It's really, really cool. It's what the perf command should have included from day one, but didn't.

It is packaged in Debian and Ubuntu, but those versions miss some key features. As perf-tools consists of shell scripts (no compilation necessary), I recommend using the GitHub version directly:

git clone

Two tools that are included are execsnoop and opensnoop, which trace new program executions and open() calls across the whole system.

$ sudo ./execsnoop
21:12:56  22898  15674 ls --color=auto -la
21:12:56  22899  15674 git rev-parse --is-inside-work-tree
21:12:56  22900  15674 git rev-parse --git-dir

$ sudo ./opensnoop
Tracing open()s. Ctrl-C to end.
COMM             PID      FD FILE
opensnoop        22924   0x3 /etc/
gawk             22924   0x3 /usr/lib/locale/locale-archive
top              15555   0x8 /proc/1/stat

Maybe the most interesting tool is uprobe. It's magic: it traces function calls in arbitrary user-space programs. With debugging symbols available, it can trace practically every function in a program. Without them, it can trace exported functions or arbitrary code locations (specified by raw address). It can also trace library code (e.g. libc). Having these possibilities on a production system without any prior setup is staggering.

$ sudo user/uprobe -F -l /tmp/a.out | grep quicksort
$ sudo user/uprobe -F p:/tmp/a.out:_Z9quicksortN9__gnu_cxx17__normal_iteratorIPiSt6vectorIiSaIiEEEES5_
Tracing uprobe _Z9quicksort[snip] (p:_Z9quicksort[snip] /tmp/a.out:0x8ba). Ctrl-C to end.
   a.out-23171 [000] d... 1860355.891238: _Z9quicksort[snip]: (0x80488ba)
   a.out-23171 [000] d... 1860355.891353: _Z9quicksort[snip]: (0x80488ba)

(To demangle the C++ function names, use the c++filt tool.)

perf-tools really shows the power of the Linux perf/ftrace infrastructure, and makes it usable for the broad masses. There are several other tools that analyze latency and cache hit rates, trace kernel functions, and much more. To finally have such functionality in Linux is fabulous!

Running strace for multiple processes

Just a quick note about strace, the ancient Linux system-call tracing tool.

It can trace multiple processes at once: simply start it with multiple -p arguments (the numbers give the processes' PIDs):

sudo strace -p 2916 -p 2929 -p 2930 -p 2931 -p 2932 -o /tmp/strace.log

This is great for tracing daemons which use separate worker processes, for example Apache with mpm-prefork.

Plotting maps with Folium

Data visualization in Python is a well solved problem by now. Matplotlib and its prettier cousin Seaborn are widely used to generate static graphs. Bokeh generates HTML files with interactive, JavaScript-based graphs. It's a great way of sharing data with other people who don't have a Python development environment ready. Several other libraries exist for more specialized purposes.

What has been missing for a long time was a good map library. Plotting capabilities were fine, but the basemap support of the existing libraries was very limited. For example, the popular Matplotlib-basemap has great plot types (contour maps, heatmaps, ...) but can't show any high-resolution maps: it only has country/state shapes or whole-world images. Consequently, it's useless for drawing city- or street-level maps, unless you want to set up your own tile server (you don't).

Along comes Folium, a library that generates interactive maps in HTML format based on Leaflet.js. It supports, among others, OpenStreetMap and MapBox base layers which look great and provide enough details for large-scale maps.

Here is an example that shows some GPS data I cleaned up with a Kalman filter:

def plot(points, center):
    map_osm = folium.Map(location=center, zoom_start=16, max_zoom=23)
    folium.PolyLine(points).add_to(map_osm)  # draw the GPS track (assumed Folium API)
    map_osm.save('map.html')

Here's what it looks like. I find it pretty neat, especially given that it took only 3 lines of code to create:


Mixing C runtime library versions

When compiling code with Microsoft's Visual C/C++ compiler, a dependency on Microsoft's C runtime library (CRT) is introduced. The CRT can be linked either statically or dynamically, and comes in several versions (different version numbers as well as in debug and release variants).

Complications arise when libraries linked into the same program use different CRTs. This happens if they were compiled with different compiler versions, or with different compiler flags (static/dynamic CRT linkage or release/debug switches). In theory, this could be made to work, but in practice it is asking for trouble:

  • If the CRT versions differ (either version number or debug/release flag), you can't reliably share objects generated by CRT A with any code that uses CRT B. The reason is that the two CRTs may use a different memory layout (structure layout) for that object. The memory location where CRT A wrote the object size might be interpreted by CRT B as a pointer, leading to a crash when CRT B tries to access that memory.
  • Even if the same CRT version is included twice (once statically, once dynamically linked), the two copies won't share a heap. Both CRTs track the memory they allocate individually, but they don't know anything about objects allocated by the other CRT. If CRT A tries to free memory allocated by CRT B, that causes heap corruption as the two CRTs trample on each other's feet. While you can freely share objects between the two copies, you have to be careful whenever memory is allocated or freed. This can sometimes be managed when writing C code, but is very hard to do correctly in C++ (where e.g. pushing to a vector can cause a reallocation of its internal buffer).

Accordingly, having multiple CRTs in the same process is fragile at best. When mixing CRTs, there are no tools to check whether objects are shared in a way that's problematic, and manual tracking is subtle, easy to get wrong, and unreliable. Mistakes will lead to difficult-to-diagnose bugs and intermittent crashes, generally at the most inconvenient times.

To keep your sanity, ensure that all code going into your program uses the same CRT.1 Consequently, all program code, as well as all libraries, need to be compiled from scratch using the same runtime library options (/MD or /MT). Pre-compiled libraries are a major headache, because they force the use of a specific compiler version to get matching CRT version requirements. If multiple pre-compiled libraries use different CRT versions, there may not be any viable solution at all.

This situation will improve with the runtime library refactoring in VS2015, which promises CRT compatibility across subsequent compiler versions. Thus, this inconvenience should mostly be solved in the future.

1 Dependency Walker can be used to list all dynamically linked CRT versions. I'm not sure whether that can be done for statically linked CRTs, I generally avoid those.

Debian's most annoying warning message: "Setting locale failed"

You ssh into a server or you enter a chroot, and the console overflows with these messages:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = "en_US:en",
        LC_ALL = (unset),
        LC_TIME = "en_US.utf8",
        LC_CTYPE = "de_AT.UTF-8",
        LC_COLLATE = "C",
        LC_MESSAGES = "en_US.utf8",
        LANG = "de_AT.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory


Fortunately the fix is easy (adjust the locale names for your situation):

locale-gen en_US.UTF-8 de_AT.UTF-8

Finally, peace on the console.


Profiling is hard. Measuring the right metric and correctly interpreting the obtained data can be difficult even for relatively simple programs.

For performance optimization, I'm a big fan of the poor man's profiler: run the binary to analyze under a debugger, periodically stop the execution, get a backtrace and continue. After doing this a few times, the hotspots will become apparent. This works amazingly well in practice and gives a reliable picture of where time is spent, without the danger of skewed results from instrumentation overhead.
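
The same sampling idea can be sketched in pure Python for Python programs. This is a stand-in for the debugger-based approach, relying on the CPython-specific sys._current_frames:

```python
import collections
import sys
import time

def sample_stacks(duration=0.5, interval=0.01):
    """Poor man's profiler for Python code: periodically snapshot the
    stacks of all running threads and count how often each function
    appears. The functions with the highest counts are the hotspots."""
    counts = collections.Counter()
    deadline = time.time() + duration
    while time.time() < deadline:
        for frame in sys._current_frames().values():
            while frame is not None:
                counts[frame.f_code.co_name] += 1
                frame = frame.f_back
        time.sleep(interval)
    return counts
```

Run it from a separate thread while the program is doing its work, then inspect the counter for the most frequently seen functions.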

Sometimes it's nice to get a more fine-grained view: not only to find the hotspot, but to get an overview of how much time is spent where. That's where 'real' profilers come in handy.

Under Windows, I like the built-in "Event Tracing for Windows" (ETW), which produces files that can be analyzed with Xperf/Windows Performance Analyzer. It is a really well thought out system, and the Xperf UI is amazing in the analyzing abilities that it offers. Probably the best place to start reading up on this is ETW Central.

Under Linux, I haven't found a profiler I can really recommend, yet. gprof and sprof are both ancient and have severe limitations. OProfile may be nice, but I haven't had a chance to use it yet, as it wasn't available for my Ubuntu LTS release.

I have used Callgrind from the Valgrind toolkit in combination with the KCachegrind GUI analyzer. I typically invoke it like this:

valgrind --tool=callgrind --callgrind-out-file=callgrind-cpu.out ./program-to-profile
kcachegrind callgrind-cpu.out

Callgrind works by instrumenting the binary under test. It slows down program execution, often by a factor of 10. Further, it only measures CPU time, so sleeping times are not included. This makes it unsuitable for programs that wait a significant amount of time for network or disk operations to complete. Despite these drawbacks, it's pretty handy if CPU time is all that you're interested in.

If blocking times are important (as they are for so many modern applications - we generally spend less time computing and more time communicating), gperftools is a decent choice. It includes a CPU profiler that can be run in real-time sampling mode, and the results can be viewed in KCachegrind. It is recommended to compile gperftools into the binary under analysis, but using LD_PRELOAD works decently well:

CPUPROFILE_REALTIME=1 CPUPROFILE=prof.out LD_PRELOAD=/usr/lib/ ./program-to-profile
google-pprof --callgrind ./program_to_profile prof.out > callgrind-wallclock.out
kcachegrind callgrind-wallclock.out

If it works, this gives a good overall profile of the application. Unfortunately, it sometimes fails: on amd64, there are sporadic crashes from within libunwind. It's possible to just ignore those and rerun the profile; at least interesting data is obtained 50% of the time.

The more serious problem is that CPUPROFILE_REALTIME=1 causes gperftools to use SIGALRM internally, conflicting with any application that wants to use that signal for itself. Looking at the profiler source code, it should be possible to work around this limitation with the undocumented CPUPROFILE_PER_THREAD_TIMERS and CPUPROFILE_TIMER_SIGNAL environment variables, but I couldn't get that to work yet.

You'd think that perf has something to offer in this area as well. Indeed, it has a CPU profiling mode (with nice flamegraph visualizations) and a sleeping time profiling mode, but I couldn't find a way to combine the two to get a real-time profile.

Overall, there still seems to be room for a good, reliable real-time sampling profiler under Linux. If I'm missing something, please let me know!

Returning generators from with statements

Recently, an interesting issue came up at work that involved a subtle interaction between context managers and generator functions. Here is some example code demonstrating the problem:

from contextlib import contextmanager

@contextmanager
def resource():
    """Context manager for some resource"""

    print("Resource setup")
    yield
    print("Resource teardown")

def _load_values():
    """Load a list of values (requires resource to be held)"""

    for i in range(3):
        print("Generating value %d" % i)
        yield i

def load_values():
    """Load values while holding the required resource"""

    with resource():
        return _load_values()

This is the output when run:

>>> for val in load_values(): pass
Resource setup
Resource teardown
Generating value 0
Generating value 1
Generating value 2

Whoops. The resource is destroyed before the values are actually generated. This is obviously a problem if the generator depends on the existence of the resource.

When you think about it, it's pretty clear what's going on. Calling _load_values() produces a generator object, whose code is only executed when values are requested. load_values() returns that generator, exiting the with statement and leading to the destruction of the resource. When the outer for loop (for val) comes around to iterating over the generator, the resource is long gone.
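
The laziness is easy to demonstrate in isolation: calling a generator function only creates the generator object; nothing in its body runs until a value is requested:

```python
def gen():
    print("body running")
    yield 1

g = gen()        # no output here: the generator body has not started
value = next(g)  # only now is "body running" printed and 1 yielded
assert value == 1
```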

How do you solve this problem? In Python 3.3 and newer, you can use the yield from syntax to turn load_values() into a generator as well. The execution of load_values() is halted at the yield from point until the child generator is exhausted, at which point it is safe to dispose of the resource:

def load_values():
    """Load values while holding the required resource"""

    with resource():
        yield from _load_values()

In older Python versions, an explicit for loop over the child generator is required:

def load_values():
    """Load values while holding the required resource"""

    with resource():
        for val in _load_values():
            yield val

Still another option is to turn the result of _load_values() into a list and return that instead. This incurs higher memory overhead, since all values have to be held in memory at the same time, so it's only appropriate for relatively short lists.
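
For completeness, here is a self-contained sketch of that list-based variant (with stand-ins for the post's resource() and _load_values()):

```python
from contextlib import contextmanager

@contextmanager
def resource():
    # Stand-in for the real resource from the post.
    print("Resource setup")
    yield
    print("Resource teardown")

def _load_values():
    for i in range(3):
        yield i

def load_values():
    # Materialize the generator while the resource is still held.
    with resource():
        return list(_load_values())

values = load_values()  # setup, generation, and teardown all happen here
assert values == [0, 1, 2]
```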

To sum up, it's a bad idea to return generators from under with statements. While it's not terribly confusing what's going on, it's a wee bit subtle, and not many people think about it until they run into the issue. Hope this heads-up helps.