In the past I needed to move old mails out to a backup
folder occasionally, since it took too long for mutt to load all the
headers when opening the maildir (often some 10 seconds for about 6000 mails).
Today I remembered that mutt should actually do header caching, so I looked it
up in the docs, and saw that the config option was missing in my
.muttrc. I put it in, restarted mutt, and after the cache was initialized the
performance was noticeably better.
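For reference, the option in question is header_cache; a minimal .muttrc snippet looks like this (the cache path is just an example, any writable location works):

```
# enable header caching; path is an example
set header_cache = ~/.mutt/headercache
```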
Mutt can actually use three kinds of database backends for the cache: bdb, gdbm, and qdbm. The default in Debian is bdb; to use one of the others you have to flip a
switch in debian/rules and recompile the package.
On my machine qdbm was fastest (gdbm slightly slower, bdb far behind), so I
stuck with that. Since I use a self-compiled mutt package anyway (because I
want my index_color patch included), it's not much of a problem.
The goal is to automatically synchronize files between several hosts without
compromising the integrity of the separate machines. A nice tool for 2-way sync
is unison. Syncing files between different machines The Right Way (TM) means
tunneling the unison protocol over ssh, which unison supports well.
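A typical invocation of unison over ssh looks roughly like this (hostname and paths are examples):

```
# two-way sync of a local directory with the same directory on another host
unison /home/user/data ssh://otherhost//home/user/data
```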
To run sync automatically (e.g. via cron), you need to
create an SSH keypair
without passphrase, so unison can log into the other machine without human
interaction. This is where the problems start, since anyone who got access to
the private key (e.g. by compromising or stealing the machine the private key
was on) can log into the other host.
Now ssh has a nice way to restrict what you can do with a specific key: you
can, for example, add a forced command to the key's entry in the remote host's
~/.ssh/authorized_keys file.
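Such a restricted entry in the remote host's ~/.ssh/authorized_keys might look like this (key material elided; the options are standard authorized_keys restrictions):

```
command="unison -server",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... unison-sync-key
```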
That way someone who has the private key can't execute arbitrary commands, but
just unison in server mode. However it's still possible to tell the unison
server to overwrite arbitrary files (which the user has write access to). This
is a major problem, since files like ~/.bashrc can also be
overwritten, so the next time the user logs in, arbitrary commands will be
executed.
A possible solution
One solution is to simply create a new user on the remote host with a disabled
password, and let unison run as that user (by adding the appropriate line to
that user's $HOME/.ssh/authorized_keys, and telling the local unison to log in
as that user).
That's possible, but the .bashrc trick still works, it's just less
likely that the code there is ever executed (root would have to use
su to become that user).
For me this solution didn't work out since I wanted to sync my maildir, and it
was hard to ensure that file permissions were set in a way that both allowed me
to read my mail and allowed unison (running under user unison-sync) to sync the
files.
The Right Solution (TM)
All the problems vanish as soon as you run unison under the user you'd normally
use, but in a chroot. Now a full-blown chroot takes up a lot of space, and
there's once again the danger that someone might enter the chroot and run some
kind of shell (though the risk is even lower).
It's best to use a chroot which only contains the bare minimum of files
necessary to run unison -server.
You get numerous advantages:
No problems with file permissions
No shell inside the chroot that would read startup files from $HOME.
Hardly any space wasted: the whole chroot is about 4 MB in size
Since the chroot is pretty much empty, many common exploits (well, shell codes) won't work
The zsh symlink is there because I have /bin/zsh as
my shell in /etc/passwd, and dchroot also wants to use it in the
chroot (for launching unison).
/home/greek0/Maildir is bind-mounted from outside the chroot,
bind-mounting is done at boot-time via /etc/fstab.
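The corresponding /etc/fstab line might look like this, assuming the chroot lives in /var/chroot/unison (the location is an example):

```
# bind-mount the Maildir into the chroot
/home/greek0/Maildir  /var/chroot/unison/home/greek0/Maildir  none  bind  0  0
```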
The chroot was created manually, simply by copying the files from the host. You
obviously need /usr/bin/unison plus all the libraries it depends
on. You can find those via readelf -d /usr/bin/unison | grep NEEDED.
Additionally you need the dynamic linker /lib/ld-linux.so.2 (seen
from readelf -l /usr/bin/unison | grep INTERP -A 1).
One thing to pay attention to is that most of the files copied from
/lib are symlinks. Be sure to either use plain cp (which follows
symlinks and copies the target's contents), or use cp -a and copy the link
targets too.
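The whole procedure can be sketched as a small script. populate_chroot is a made-up helper name and all paths are examples; adjust to your system:

```shell
#!/bin/sh
# Sketch: copy one binary plus its shared libraries into a chroot tree.
populate_chroot() {
    bin=$1; root=$2
    mkdir -p "$root$(dirname "$bin")" "$root/lib"
    cp "$bin" "$root$bin"
    # the dynamic linker, taken from the INTERP program header
    interp=$(readelf -l "$bin" | sed -n 's/.*interpreter: \(.*\)]/\1/p')
    if [ -n "$interp" ]; then
        mkdir -p "$root$(dirname "$interp")"
        cp "$interp" "$root$interp"
    fi
    # each NEEDED library, located via the linker cache;
    # plain cp dereferences symlinks, so the real file lands in the chroot
    readelf -d "$bin" | sed -n 's/.*NEEDED.*\[\(.*\)\]/\1/p' |
    while read -r lib; do
        path=$(ldconfig -p 2>/dev/null | awk -v l="$lib" '$1 == l { print $NF; exit }')
        if [ -n "$path" ]; then
            cp "$path" "$root/lib/$lib"
        fi
    done
}

# example: populate_chroot /usr/bin/unison /var/chroot/unison
```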
If you can just install Windows 95 inside QEMU and it magically works, consider
yourself lucky. I tried this and got an error message saying I needed to create
a FAT partition first, where the installer can place some files. That's where
it gets more involved:
Create the disk image. dd if=/dev/zero of=hda bs=$(( 1024*1024 ))
seek=1000 count=0. That way you create a sparse 1GB file.
Get a FreeDOS Floppy and CD image and install FreeDOS inside QEMU,
there you can partition your disk. qemu -hda hda -fda fd0 -cdrom
cdrom-img -boot a.
Insert the Windows CD, run qemu with -cdrom /dev/cdrom -boot
c so you get to the FreeDOS prompt, then go to the CD-ROM drive and run
setup. Setup should then work.
Getting networking up inside Windows
You need the Realtek rtl8139 driver, other network cards won't work
(the ne2000 at least doesn't). So run qemu with -net user -net
nic,model=rtl8139.
Windows 95 unfortunately doesn't have the driver for the rtl8139, so
you need to get it onto the system. Download it from well, the Realtek
page. Unzip it,
put it onto a floppy, create an image of that floppy and run qemu with
Then setup the networking under windows, reboot and you should be able to
access the internet.
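The floppy-image step above can be done roughly like this (filenames are examples; mtools is one assumed way to copy files onto the image):

```shell
# create a blank 1.44 MB floppy image (2880 sectors of 512 bytes)
dd if=/dev/zero of=driver.img bs=512 count=2880
# format it and copy the unpacked driver onto it, e.g. with mtools:
#   mformat -i driver.img ::
#   mcopy -i driver.img rtl8139/* ::
# then boot with: qemu -hda hda -fda driver.img -boot c
```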
This is a tool that displays Debian bug reports in mutt. You can then
directly read all messages sent to the bug and reply. The messages are fetched
directly from the web interface, so there is no delay between requesting a bug
and getting it, as there is with the email interface.
This tool was originally written by Christoph Berg, I've made some
modifications to make it work in arbitrary directories.
gpg --verify is quite slow when you have large keyrings included (like the
debian keyring). This is nasty, since mutt has to wait until gpg is finished
when displaying a gpg signed message (with signature verification on). So I've
written a tool that splits a huge keyring into a lot of smaller keyrings (one
key per keyring) and a shell script to verify signatures, to be used
from within mutt. The former tool is called splitkeyring.sh. The
latter one is gpgverify.sh.
gpgverify.sh first invokes gpg --verify as normal and
captures its output. If gpg failed because the key was not found in any
keyring, the script checks whether the key is in one of the split keyrings, and if
so, reruns gpg with that keyring included. Otherwise the gpg error is returned.
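The retry logic can be sketched roughly like this. The keyring directory, the one-key-per-file naming, and the exact gpg message format are assumptions here, not the actual script:

```shell
#!/bin/sh
# Sketch of the gpgverify.sh idea (not the original script).
# extract_keyid pulls the key ID out of gpg's "... key ID XXXXXXXX" line
extract_keyid() {
    sed -n 's/.*key ID \([0-9A-Fa-f]\{8,16\}\).*/\1/p' | head -n 1
}

gpgverify() {
    # assumed layout: one <keyid>.gpg keyring per key
    keydir=$HOME/.gnupg/split-keyrings
    out=$(gpg --verify "$@" 2>&1) && { echo "$out"; return 0; }
    keyid=$(echo "$out" | extract_keyid)
    if [ -n "$keyid" ] && [ -f "$keydir/$keyid.gpg" ]; then
        # rerun with the single-key keyring included
        gpg --no-default-keyring --keyring "$keydir/$keyid.gpg" \
            --keyring "$HOME/.gnupg/pubring.gpg" --verify "$@"
    else
        # key not found anywhere: return the original gpg error
        echo "$out" >&2
        return 1
    fi
}
```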
These scripts are still hacky; if you want to use them you'll probably have
to modify them a bit. They aren't too big, so this shouldn't be too much of a
problem.