Manually creating Docker images

Docker is a virtualization solution that's been gaining a lot of momentum over the last few years. It focuses on lightweight, ephemeral containers that can be created from simple config files.

Docker's main target platform is amd64, but it also works on x86. However, practically all official container images in the Docker registry are amd64 based, which means they can't be used on an x86 machine. So, it's necessary to manually create the required base images. As you might have guessed, my server runs Docker on x86, so I've had to find a solution for that problem.

Fortunately, creating images from scratch is really easy with the mkimage.sh script that comes bundled with Docker. On Debian systems, it's installed at /usr/share/docker.io/contrib/mkimage.sh; on Fedora it has to be obtained from the Docker git repository:

$ git clone https://github.com/docker/docker.git

The script can then be found under docker/contrib/mkimage.sh.

Creating a Debian Jessie image is straightforward:

# mkimage.sh -t debootstrap/minbase debootstrap --variant=minbase jessie

This command will create a minimal Debian Jessie image using Debootstrap, and import it into Docker with the name debootstrap/minbase. Further options can set a specific Debian mirror server and a list of additional packages to install:

# mkimage.sh -t debootstrap/minbase debootstrap \
             --include=locales --variant=minbase \
             jessie http://httpredir.debian.org/debian

This will use httpredir.debian.org as mirror and install the locales package in the image.
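
Afterwards the image shows up in the local image list and can be tested by starting a shell in a container:

# docker images
# docker run -t -i debootstrap/minbase /bin/bash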

mkimage.sh has backends to bootstrap Arch Linux, Busybox, CentOS, Mageia, and Ubuntu. Fedora images don't seem to be supported directly, but they can be generated by following instructions compiled by James Labocki.

Finally, it's worth mentioning that this should only be used to generate base images. You'd then use Docker itself (cf. Dockerfile) to create images that actually do something interesting, based on these base images. This will save both time and memory, due to Docker's caching and copy-on-write mechanisms.
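
For example, a minimal Dockerfile building on the freshly imported base image could look like this (the nginx package is only an illustration):

FROM debootstrap/minbase
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]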


Installing CyanogenMod 11 on a Samsung Galaxy S2

I bought my Samsung Galaxy S2 in 2011, and it's still going strong. It really was a great phone for its time and has held up incredibly well. Unfortunately, Samsung's support ended long ago, and users are stranded with an obsolete (and insecure) firmware.

Fortunately, CyanogenMod still provides relatively recent images for the device. As of this writing, snapshots of CM11 (based on Android 4.4) are available, but there are no images of CM12.

Here is how I flashed CM11 to my phone. This is based on the official CyanogenMod wiki page for the SGS2 and on this xda-developers post. Since you can brick your phone if you don't know what you are doing, I suggest reading both of these pages. Note that you will need to factory-reset your phone, so back up all your data (files, apps, SMS, contacts, ...).

All the following steps have to be performed in a root shell on Linux.

To start from a clean slate, create a new Debian Jessie chroot (you may need to install debootstrap first). Don't use LXC/Docker/VMware here; you need raw hardware access:

host#  mkdir sgs2-flash-chroot
host#  cd sgs2-flash-chroot
host#  debootstrap jessie .
host#  mount --bind /dev/ dev
host#  mount --bind /sys sys
host#  mount --bind /proc proc

Copy the following files (used in the commands below) to sgs2-flash-chroot/tmp:

  • zImage (the kernel image to flash)
  • Recovery_CWM_6.0.4.7_I9100.zip
  • cm-11-20141115-SNAPSHOT-M12-i9100.zip
  • gapps-kk-20140105-signed.zip

Boot the phone into download mode (shut down, then VOLDOWN + HOME + POWER) and connect it to the Linux computer.

host#  chroot .
chroot#  apt-get install heimdall-flash android-tools-adb
chroot#  heimdall print-pit
chroot#  cd /tmp
chroot#  heimdall flash --KERNEL zImage --no-reboot

Disconnect the USB cable and hold POWER until the phone shuts down. Reboot into recovery (VOLUP + HOME + POWER, let go of POWER after 5 seconds or you'll trigger a reboot). Then reconnect the USB cable.

chroot#  adb devices    # Check if device recognized.
chroot#  adb push Recovery_CWM_6.0.4.7_I9100.zip /emmc

In recovery, select "install from zip file" to flash the new recovery image. Then go into advanced -> "reboot recovery". Mount /storage/sdcard0 in the recovery menu, then reconnect the USB cable.

chroot#  adb devices    # Check if device recognized.
chroot#  adb push cm-11-20141115-SNAPSHOT-M12-i9100.zip /storage/sdcard0
chroot#  adb push gapps-kk-20140105-signed.zip /storage/sdcard0

Again, in recovery, select "install from zip file": first install the CM image, then the GApps package. Select "reboot" to boot into CyanogenMod. Shut down again, reboot into recovery, wipe the cache, perform a factory reset, and reboot into CM. (Avoid factory resets with the stock kernel, due to the "super brick" problem.)

Done. You should now have a not-so-shiny-anymore Galaxy S2 running a new-and-shiny CyanogenMod 11. Enjoy :-)


Checklib finally announced

On Monday, checklib was finally announced on debian-devel-announce, thanks to Andreas Barth for sponsoring the mail.

I got very positive reactions from a number of people, which is great. Prior to the announcement I had gotten less friendly comments (from one person), and I'm happy that reaction wasn't representative of the rest of Debian.

It's nice that people show interest in the problem: there's currently a discussion on debian-devel about whether and how automatic checking (and fixing) could be added to debhelper. That would seriously rock, as it would be one of the faster ways to get the number of affected packages down.

It's also cool to hear that the GNOME people are fixing their .la files for 2.16, in order to cut down the dependencies introduced by broken libtool files.

There are some other interesting things on the horizon on the technical side of the project, such as automatically built dbgsym packages (containing debug symbols; Ubuntu already does that), and the idea Simon Richter talked about earlier, which could really cut down the work the release team has with library transitions.


Installing Debian on an oldworld PPC

The victim was a PowerMac 9500 with a 300 MHz G3 CPU, 200 MB of RAM, a 9 GB HDD (with OS 9 installed), and a 2 GB HDD (blank).

I hooked up a PowerBook to see the serial console output, since OpenFirmware only talks over the serial line. Then I finally found floppy images that would actually boot. They were from Woody, so I did a netinst using boot-floppies.

Unfortunately, Linux didn't come up after the reboot. After resetting the nvram I could at least boot MacOS again and then start Linux via BootX. It took quite some fiddling with quik and the nvram until I had direct booting working.

Subsequently I upgraded to etch, but I couldn't get a 2.6 kernel to boot. First I got a bus error from the IMS TwinTurbo graphics card driver, then the kernel "forgot" where the initramfs was loaded, which turned out to be a grave bug in the Debian kernel images (#366620).

I wrote patches for both bugs and now I finally have Etch with Linux 2.6 working. Whee!

The only sad thing is that all this took the better part of last week :-/


Correct use of hyphens in man-pages

When writing manual pages, the question comes up of when to use "-" and when to use "\-". The answer is actually quite simple: use "-" whenever you want a hyphen, and "\-" whenever you want a minus sign.

There are two exceptions, though. In the NAME section, "\-" is used to separate the program name from the short description, as in "man \- an interface to on-line manuals".

The other exception is that you have to use "\-" for options/switches (-h, --foo, etc.). "\-" causes man to emit a U+002D HYPHEN-MINUS character, whereas "-" results in U+2010 HYPHEN (in a Unicode locale).

U+002D is the normal ASCII hyphen character, the one programs test for when parsing switches. So "\-" allows copy&paste of options from the manpage, while "-" doesn't.
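
As an illustration, here's what a hypothetical option entry looks like in roff source:

.SH OPTIONS
.TP
.B \-q, \-\-quiet
Suppress output. A hyphenated word like on-line keeps the plain "-".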


ELF talk

Last Monday I gave a short talk about ELF objects and dynamic linking for the Debienna crowd. It went semi-well; people were quite interested but sometimes didn't seem to grasp what I was talking about. That was probably my fault, since I didn't spend enough time preparing the talk, and it's a difficult subject to begin with.

Perhaps I'll talk about the subject again for maks, Rhonda and baumgartner (if they are still interested), since they weren't able to attend.

In case anyone cares, I've written up some notes about ELF, dynamic linking, symbol lookup and related stuff, covering most of the things I talked about.


Cross-compiler fun

I needed to fix the elfutils build failure on ia64, but I didn't have access to such a machine. Fortunately, Herbert Pötzl pointed out ski, an ia64 emulator for Linux.

Ski needs a custom guest kernel however, so I had to cross-compile that for ia64.

Setting up a cross-compiling toolchain on Debian is really easy nowadays; there's even a nice HOWTO describing the needed steps. For lazy people pre-built packages are available.

When compiling the toolchain yourself, note that you may need more or different library packages than those listed in the HOWTO. This depends on the target architecture: e.g. for ia64 you will need libunwind7-dev and libatomic-ops-dev, and libc6.1 instead of libc6. Otherwise gcc will complain about missing build-dependencies.
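
For reference, the target's library packages are typically converted with dpkg-cross before building gcc; a sketch (the exact .deb file names are assumptions):

$ dpkg-cross -a ia64 -b libc6.1-dev_*_ia64.deb
$ dpkg-cross -a ia64 -b libunwind7-dev_*_ia64.deb
$ dpkg-cross -a ia64 -b libatomic-ops-dev_*_ia64.deb
# dpkg -i *-ia64-cross_*.deb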

For ia64 I ran into a linker error when building gcc; however, a patch from Bertl's cross-compiling corner solved that.

While doing all this I wrote some scripts to automate the process, so compiling a cross-toolchain (for any architecture) is now a matter of five minutes of configuration and one ./driver run. Whee!


Using mutt's header_cache feature

In the past I occasionally needed to move old mails out to a backup folder, since it took too long for mutt to load all the headers when opening the maildir (often some 10 seconds for about 6000 mails).

Today I remembered that mutt should actually do header caching, so I looked it up in the docs, and saw that the config option was missing in my .muttrc. I put it in, restarted mutt, and after the cache was initialized the performance was noticeably better:

set header_cache="~/.mutt/headercache"

Mutt can actually use three different database backends for the cache: bdb, gdbm, and qdbm. The default in Debian is bdb; to use one of the others you have to flip a switch in debian/rules and recompile the package.
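
A sketch of such a rebuild (which switch to flip in debian/rules I'll leave vague; check the file):

$ apt-get source mutt
$ cd mutt-*
$ editor debian/rules    # switch the header cache backend, e.g. from bdb to qdbm
$ dpkg-buildpackage -rfakeroot -us -uc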

On my machine qdbm was fastest (gdbm slightly slower, bdb far behind), so I stuck with that. Since I use a self-compiled mutt package anyway (because I want my index_color patch included), that's not much of a problem.


Automatically syncing files between hosts without compromising security

The problem

The goal is to automatically synchronize files between several hosts without compromising the integrity of the separate machines. A nice tool for 2-way sync is unison. The Right Way (TM) to sync files between different machines is to tunnel the unison protocol over ssh, which unison supports well.

To run the sync automatically (e.g. via cron), you need to create an SSH keypair without a passphrase, so unison can log into the other machine without human interaction. This is where the problems start, since anyone who gets access to the private key (e.g. by compromising or stealing the machine it is stored on) can log into the other host.
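
Creating such a passphrase-less keypair is a one-liner (the file name is just an example):

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa_unison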

Now ssh has a nice way to restrict what can be done with a specific key, so you can e.g. use the following in the remote host's ~/.ssh/authorized_keys:

command="/usr/bin/unison -server" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA8K2cd0yemw...

That way someone who has the private key can't execute arbitrary commands, only unison in server mode. However, it's still possible to tell the unison server to overwrite arbitrary files (that the user has write access to). This is a major problem, since files like ~/.bashrc can also be overwritten, meaning arbitrary commands will be executed the next time the user logs in.
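
As an aside, OpenSSH supports further per-key restrictions that are worth adding, even though they don't help against the file overwrites:

command="/usr/bin/unison -server",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA8K2cd0yemw...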

A possible solution

One solution is to simply create a new user on the remote host with a disabled password, and let unison run as that user (by adding the appropriate line to that user's $HOME/.ssh/authorized_keys and telling the local unison to use that username).
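
On Debian, creating such a user is a one-liner (the username matches the one used below):

remote# adduser --disabled-password --gecos "" unison-sync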

That's possible, but the .bashrc trick still works; it's just less likely that the code there is ever executed (root would have to use su to become that user).

For me this solution didn't work out since I wanted to sync my maildir, and it was hard to ensure that file permissions were set in a way that both allowed me to read my mail and allowed unison (running under user unison-sync) to sync the files.

The Right Solution (TM)

All the problems vanish as soon as you run unison under the user you'd normally use, but in a chroot. Now a full-blown chroot takes up a lot of space, and there's once again the danger that someone might enter the chroot and run some kind of shell (though the risk is even lower).

It's best to use a chroot which only contains the bare minimum of files necessary to run unison -server.

You get numerous advantages:

  • No problems with file permissions.
  • No shell inside the chroot that would read startup files from $HOME.
  • Hardly any space wasted: the whole chroot is about 4 MB in size.
  • Since the chroot is pretty much empty, many common exploits (well, shell codes) won't work.

How to do it

greek0@orest:/home/chroots/unichroot$ cat ~/.ssh/authorized_keys
command="/usr/bin/dchroot -q -c unison -- /usr/bin/unison -server" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA8K2cd0.....

greek0@orest:/home/chroots/unichroot$ grep unison /etc/dchroot.conf
unison /home/chroots/unichroot

greek0@orest:/home/chroots/unichroot$ find . -maxdepth 3 | xargs ls -ld
drwxr-xr-x   2 root   root      4096 2006-08-04 16:39 ./bin
-rwxr-xr-x   1 root   root    576100 2006-08-04 15:06 ./bin/sash
lrwxrwxrwx   1 root   root         4 2006-08-04 16:39 ./bin/zsh -> sash
drwxr-xr-x   3 root   root      4096 2006-08-04 15:10 ./home
drwx------   4 greek0 greek0    4096 2006-08-04 15:18 ./home/greek0
drwx------  31 greek0 greek0    4096 2006-08-04 13:47 ./home/greek0/Maildir
drwx------   2 greek0 greek0    4096 2006-08-04 15:47 ./home/greek0/.unison
drwxr-xr-x   2 root   root      4096 2006-08-04 15:07 ./lib
-rwxr-xr-x   1 root   root     88164 2006-08-04 14:58 ./lib/ld-linux.so.2
-rwxr-xr-x   1 root   root   1151644 2006-08-04 14:56 ./lib/libc.so.6
-rw-r--r--   1 root   root      9592 2006-08-04 14:56 ./lib/libdl.so.2
-rw-r--r--   1 root   root    141040 2006-08-04 14:55 ./lib/libm.so.6
-rw-r--r--   1 root   root      9656 2006-08-04 14:55 ./lib/libutil.so.1
drwxr-xr-x   3 root   root      4096 2006-08-04 14:53 ./usr
drwxr-xr-x   2 root   root      4096 2006-08-04 14:55 ./usr/bin
lrwxrwxrwx   1 root   root        14 2006-08-04 15:12 ./usr/bin/unison -> unison-2.13.16
-rwxr-xr-x   1 root   root    955784 2006-08-04 14:54 ./usr/bin/unison-2.13.16

The zsh symlink is there because I have /bin/zsh as my shell in /etc/passwd, and dchroot also wants to use it in the chroot (for launching unison).

/home/greek0/Maildir is bind-mounted from outside the chroot, bind-mounting is done at boot-time via /etc/fstab.
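
The corresponding /etc/fstab entry looks something like this (paths as in the listing above):

/home/greek0/Maildir  /home/chroots/unichroot/home/greek0/Maildir  none  bind  0  0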

The chroot was created manually, simply by copying the files from the host. You obviously need /usr/bin/unison plus all the libraries it depends on; you can find those via readelf -d /usr/bin/unison | grep NEEDED. Additionally, you need the dynamic linker /lib/ld-linux.so.2 (as seen from readelf -l /usr/bin/unison | grep INTERP -A 1).

One thing to pay attention to is that most of the files copied from /lib are symlinks. Be sure to either use cp without options (which dereferences symlinks), or use cp -a and copy the link targets too.
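
Here's a sketch of the whole copying step, run from inside /home/chroots/unichroot (library paths will vary between systems):

# copy unison and every shared object it needs; ldd also lists the
# dynamic linker, and plain cp dereferences the /lib symlinks
mkdir -p lib usr/bin
cp /usr/bin/unison usr/bin/
for lib in $(ldd /usr/bin/unison | grep -o '/[^ ]*'); do
    cp "$lib" lib/
done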


Installing Windows 95 inside QEMU on Linux

If you can just install Windows 95 inside QEMU and it magically works, consider yourself lucky. I tried this and got an error message saying that I first needed to create a FAT partition where the installer could place some files. That's where it all began.

Installing

  • Create the disk image: dd if=/dev/zero of=hda bs=$(( 1024*1024 )) seek=1000 count=0. This creates a sparse 1 GB file.
  • Get a FreeDOS floppy and CD image and install FreeDOS inside QEMU, where you can partition your disk: qemu -hda hda -fda fd0 -cdrom cdrom-img -boot a.
  • Insert the Windows CD and run qemu with -cdrom /dev/cdrom -boot c so you get to the FreeDOS prompt. Then go to the CD-ROM drive and run setup. The setup should work from there.

Getting networking up inside Windows

  • You need the Realtek rtl8139 driver; other network cards won't work (the ne2000 at least doesn't). So run qemu with -net user -net nic,model=rtl8139.
  • Windows 95 unfortunately doesn't ship with a driver for the rtl8139, so you need to get it onto the system. Download it from, well, the Realtek driver download page. Unzip it, put it onto a floppy, create an image of that floppy, and run qemu with -fda image (see the sketch after this list).
  • Then set up the networking under Windows, reboot, and you should be able to access the internet.
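
A sketch of the floppy-image step using mtools (the file and directory names are assumptions):

# create an empty 1.44 MB FAT floppy image, copy the unzipped driver
# onto it, and boot Windows with the image attached
$ mkfs.msdos -C rtl8139.img 1440
$ mcopy -i rtl8139.img rtl8139-driver/* ::/
$ qemu -hda hda -fda rtl8139.img -net nic,model=rtl8139 -net user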