Convert from VirtualBox to KVM/Qemu

Converting from VirtualBox to KVM is actually quite easy.

If you use snapshots in VirtualBox, then you will first have to clone the current running state into a new VM (right click -> Clone -> blah -> select only current state)

Then, if you use dynamically growing disks, convert them to raw format:

VBoxManage clonehd --format RAW Win7.vdi Win7.raw

Now convert it to qcow2:

qemu-img convert -f raw Win7.raw -O qcow2 Win7.img
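A shortcut: if your qemu-img build understands the VDI format (recent ones do), you can collapse the two steps above and convert directly to qcow2, skipping the intermediate raw file. The loop and the out_name helper below are my own sketch; the disk names are just examples.

```shell
# Sketch: direct VDI -> qcow2 conversion, skipping the raw step.
# Requires a qemu-img build with VDI support; disk names are examples.
out_name() {
  printf '%s.qcow2\n' "${1%.vdi}"   # Win7.vdi -> Win7.qcow2
}

for disk in *.vdi; do
  [ -e "$disk" ] || continue        # no .vdi files here, nothing to do
  qemu-img convert -f vdi -O qcow2 "$disk" "$(out_name "$disk")"
done
```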

In virt-manager, create a new VM, import the existing image, and boot. Windows 7 booted without problems. Then install the virtio and SPICE drivers, shut down, change the settings, and enjoy. Wow, I was impressed.

SOHO Mailserver with Postfix + Postgresql + Dovecot + SpamAssassin + Roundcube

This HowTo describes my home mail server setup. Basically this is a sum-it-all-up article from various resources on the net.

Used Software:

  • Arch Linux OS
  • Postfix MTA
  • PostgreSQL database backend
  • Dovecot IMAP Server
  • Roundcube Webmail + Apache Webserver
  • Spamassassin junk filter
  • Server-side filtering with Sieve
  • fetchmail (for pulling all the scattered accounts into this one place)

Preconditions in my setup:

  • Server behind Firewall/NAT
  • Dynamic IP (No-IP Plus managed DynDNS service with MX Record etc)
  • StartSSL certificate for both Web- and Mail-Server domain
  • ISP doesn’t allow running an outgoing mail server directly, requires relaying through their mail gateway
  • Apache + PHP + Postgresql already running and working

SEO, Java & the URL-encoded Session ID

Our homepage is written in Java using JSP/JSF, and for compatibility reasons we URL-encode a jsessionid even if the client accepts cookies; we also create a session for every visitor. This causes every link on our pages to include an ugly ;jsessionid=<very long hash>, which search engines tend to dislike.

Our new online marketing manager started his job with the idea of using static HTML landing pages to make our platform more search-engine friendly. Now those need to be filled with reasonably fresh content. We’re selling real estate, so it’s of no use to have Google index a house that just got sold yesterday. So those landing pages would need to read from the database and generate fresh, static content on a daily basis.

Now, if the jsessionid is the main reason for those landing pages, I’d prefer things to be done differently. Meaning: let’s just not deliver a session id to search engine crawlers.

For now I stripped them using Apache’s mod_rewrite. I found the Howto here:
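A minimal sketch of that mod_rewrite approach (my illustration, not the actual rules from the linked article; the crawler list and the session-id pattern are examples):

```apache
RewriteEngine On
# Only rewrite for known crawlers (the user-agent list is just an example)
RewriteCond %{HTTP_USER_AGENT} (googlebot|msnbot|slurp) [NC]
# Strip the ;jsessionid=... path parameter and redirect permanently
RewriteRule ^(.*);jsessionid=[0-9A-Za-z.]*(.*)$ $1$2 [R=301,L]
```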

That guy also presented an even nicer solution: just don’t even URL-encode the session id for bots:

Thanks a lot for those articles.

Comments should work now

Heck, some interesting and nice comments have been posted and landed in the approval queue. I never got a mail, and currently cannot figure out where to configure this in Drupal. Anyways, comment approval is disabled now and I have published all comments posted so far. Sorry for the confusion.

Project T5220, try 1: How NOT to configure a T5220 as a complete Oracle/Weblogic development environment

Currently I have to configure a Sun Enterprise T5220 as our new “development environment”, replacing our V440 (Oracle) and the T1000 (Weblogic). We chose a Sun CMT machine as we wanted to stay close to production in terms of architecture (processor type/OS etc).

The machine will have to run an Oracle instance, and some containers for the Weblogic environments (we have an integration env, a daily build env + some development environments for projects). The new T5220 is packed with a 1.4GHz T2 Niagara, 64GB of memory, 2x146GB for the OSes, and 6x300GB for the database and NFS.

Given that, the ideal setup sounds like:

  • 2 Guest LDOMs, where:
  • 1 Oracle-LDOM
  • 1 Weblogic LDOM, using Solaris zones to separate the environments
  • The setup is 100% ZFS
  • The control domain runs on one slice of the 146GB drives in ZFS raid1
  • The guest domain roots run on the other slice of that disk, separate zpool, exported as ZVOL
  • The 6 300GB disks are exported as raw disk slices (EFI labeled), and formed into a ZFS raidz inside the Oracle LDOM
  • Inside that raidz zpool, there is a ZFS with recordsize=8k for the Oracle datafiles, and a ZFS with 128k blocksize for the redo logs

Regarding the database, this is pretty similar to our current setup, a SunFire V440 with an ST2540 FC storage attached, running ZFS on top of the hardware RAID5. Similar in terms of the filesystem, at least. ZFS runs very well on the V440.

Okay, now, after setting this up, it turned out that the DB performance is unacceptable. Absolutely horrible, to be honest. We’re doing large imports of our production database on a regular basis; on the V440 that takes about 50 minutes, now we’re up to 150 minutes on the T5220.

Here are some numbers. Simple dd testing: creating a 15GB file.

V440 with ST2540 RAID5 volume, exported via 2x2Gbit FC, configured as a simple ZFS:

$ time dd if=/dev/zero of=ddtest.file bs=8k count=2000000
2000000+0 records in
2000000+0 records out

real    2m32.380s
user    0m3.021s
sys     1m27.533s
$ echo "((16384000000/152)/1024)/1024"|bc -l

T5220 with local 10k rpm SAS, exported as raw disk slices into the guest LDOM and configured as ZFS RAIDz:

$ echo "((16384000000/336)/1024)/1024"|bc -l

Now things get strange. Inside the control domain, onto the ldompool:

$ echo "((1638400000/35)/1024)/1024"|bc -l


That is worse than I get with a Linux guest in VirtualBox on a virtual drive. Not acceptable for €300 10k rpm SAS drives.
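For reference, here is a tiny helper (my addition, assumes awk is available) that does the same arithmetic as the bc one-liners above and prints the throughput directly:

```shell
# Convert dd's byte count and elapsed seconds into MB/s
# (same arithmetic as the bc one-liners above)
mbps() {
  awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f\n", b / s / 1048576 }'
}

mbps 16384000000 152   # V440 + ST2540:        ~102.8 MB/s
mbps 16384000000 336   # T5220 guest LDOM:     ~46.5 MB/s
mbps 1638400000 35     # T5220 control domain: ~44.6 MB/s
```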

I have asked for help and further info in the Sun forums.

Anyways, I’m going to attach the documented step-by-step guide on how to set up all this from scratch (M$-Word .doc, in German, sorry)

Howto create a youtube video from mp3/ogg audio using a picture

If you want to create a youtube video from an audio file, here is how to do this.
All you need is the audio file, a single picture, and ffmpeg.

First find out the length of the audio file in seconds; you’ll need it. Here is an example with a 420-second file:

ffmpeg -loop_input -i picture.jpg -vcodec mpeg4 -r 25.00 -qscale 2 -s 480x360 \
-i audiofile.mp3 -acodec libmp3lame -ab 128k -t 420 video.avi

This will create a Hi-Res MPEG-4 video with 128k audio. The trick here is to use that one picture and loop it for -t seconds.
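If you don’t know the length offhand: newer ffmpeg packages ship ffprobe, which can print it. This is just a sketch; the file name is an example, and the secs helper is my own addition.

```shell
# Print an audio file's duration in seconds (needs ffprobe from ffmpeg)
duration() {
  ffprobe -v error -show_entries format=duration -of csv=p=0 "$1"
}

# Round the fractional duration to whole seconds for -t
secs() {
  awk -v d="$1" 'BEGIN { printf "%d\n", d + 0.5 }'
}

# secs "$(duration audiofile.mp3)"   # e.g. 420
```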

Howto: Compile Firefox PGO on Gentoo using an ebuild

Warning! All this doesn’t work at all anymore. Don’t even try it.

This Howto explains how to build Mozilla Firefox with PGO (profile guided optimization) on Gentoo using portage and the ebuild. It’s quite tricky, so make sure the prerequisites are correct on your system.


  • Your user has to be in the "portage" group!
  • /var/tmp/portage has to be 775 or 770 (chmod g+w /var/tmp/portage)
  • You need a customized ebuild

I use the mozilla-firefox-3.1_beta3 ebuild from the mozilla overlay. The needed files should be in the files/ subdir. You can svn checkout my modified one from here:

svn co

Just put that in your overlay.

Make sure to enable the useflags "pgo -xulrunner" in your /etc/portage/package.use file!

Now, as a regular user, we call the ebuild phases manually:

cd /path/to/your/overlay/www-client/mozilla-firefox
ebuild mozilla-firefox-3.1_beta3-r1.ebuild compile

This will start the compile. After the first PGO run, the browser will open up, and you should do some usual browsing for a few minutes. Maybe go to the SpiderMonkey JS speed test page. When done, just close the browser.

Now the second run will be done. When it is finished, we install it as root with:

sudo ebuild mozilla-firefox-3.1_beta3-r1.ebuild qmerge

Howto control Tomcat using wget

I just had to restart a webapp in Tomcat without stopping a second app running in the same Tomcat instance.

Usually this can be done easily via the Tomcat Manager, but in this case I was not able to access the Manager due to firewall rules. I was able to access the server using ssh, though, but there was no curl installed.

Luckily wget did the trick too!

wget \
--http-user=manager-user \
--http-password=manager-password \
-q -O - http://localhost:8080/manager/html/reload?path=/test \
| grep -A1 Message|awk -F'>' '{print $NF}'

OK - Started application at context path /test
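The same trick works for the other manager commands (start, stop, reload). Here is a small wrapper, my own sketch, using the placeholder credentials and the localhost URL from above:

```shell
# Build the manager URL for a given command and context path
manager_url() {
  printf 'http://localhost:8080/manager/html/%s?path=%s\n' "$1" "$2"
}

# Run a manager command via wget and extract the status message
manager() {
  wget --http-user=manager-user --http-password=manager-password \
       -q -O - "$(manager_url "$1" "$2")" \
    | grep -A1 Message | awk -F'>' '{print $NF}'
}

# manager reload /test
# manager stop /test
```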


Howto import SAR data into a PostgreSQL database and create graphs in OpenOffice.org

This is a walkthrough on how to get SAR data into a database and use OpenOffice.org to create nice graphs, using a JDBC connection to that database and SUN Report Builder for the graphs.

In this example I will use SAR’s CPU utilisation data.

Software used:

  • PostgreSQL
  • sysstat (sar/sadf)
  • PostgreSQL JDBC driver
  • OpenOffice.org Base
  • SUN Report Builder

1. Setup the Database

I use a PostgreSQL database named sar here. The SAR data is on the same host as the database, and the PostgreSQL user has read permission on the SAR files.

First we need to create a table:

CREATE TABLE sar_cpu (
  "host" varchar,
  "interval" numeric,
  "time" timestamp with time zone,
  "offset" numeric(1),
  "user" numeric(5,2),
  "nice" numeric(5,2),
  "system" numeric(5,2),
  "iowait" numeric(5,2),
  "steal" numeric(5,2),
  "idle" numeric(5,2)
);



2. Convert SAR data to CSV

Then we need to output the SAR data in a format we can load into the database, using sadf.

LC_ALL=en_US sadf -d /var/log/sa/sa15 -- -u > /tmp/sa20090215.csv

Note: The LC_ALL=en_US ensures that decimal values are separated with a dot, not a comma.


3. Load the CSV directly into the database

Then we can load that CSV directly into the database:

echo "COPY sar_cpu FROM '/tmp/sa20090215.csv' DELIMITERS ';' CSV;"|psql -d sar

Cool, isn’t it? This can be scripted easily and controlled via a cronjob, e.g. importing yesterday’s sar data using a simple wrapper script and some date arithmetic.
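Such a wrapper could look like this (my sketch; it assumes GNU date and the file layout and database names used above):

```shell
# Map a date (e.g. "yesterday") to its sar file: /var/log/sa/saDD
sa_file_for() {
  printf '/var/log/sa/sa%s\n' "$(date -d "$1" +%d)"
}

# Export one day's CPU data to CSV and COPY it into the sar database
import_day() {
  day=$1
  csv=/tmp/sa$(date -d "$day" +%Y%m%d).csv
  LC_ALL=en_US sadf -d "$(sa_file_for "$day")" -- -u > "$csv"
  echo "COPY sar_cpu FROM '$csv' DELIMITERS ';' CSV;" | psql -d sar
}

# import_day yesterday   # e.g. from a daily cronjob
```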


4. Create an OOo Base DB and connect it to PostgreSQL

Now start OOo Base, create a new DB -> JDBC.

For an overview of the connection parameters, see:






5. Install SUN Report Builder

Then download SRB, open Tools -> Extension Manager, and install it.

In OOo Base you’ll find it under Reports -> Report in Design View.

There you can create graphs and diagrams just as in Calc.