Howto easily update GPS-A data on a Sony Alpha 65/77/99 and others on Linux/Mac

In order to speed up GPS locking on a Sony Alpha 65 (or similar) SLT camera, it’s possible to update the GPS-A data (also called almanac data). As on any other modern GPS device, the almanac data gives the device a hint about where the satellites are located. The data is usually valid for only a few weeks, then it needs to be updated again.

The common way to get the almanac data onto a Sony Alpha is to use Sony’s software, so that’s a no-go for Linux users. But there are other ways to do it.
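
In short, the trick works like this (a sketch from memory; the download URL and the card layout are assumptions, so verify them first): Sony publishes the assist file at a fixed URL, and the camera expects it in a fixed folder on the memory card.

# fetch the current almanac file
wget http://control.d-imaging.sony.co.jp/GPS/assistme.dat
# copy it to the card's GPS folder (the mount point is just an example)
mkdir -p /media/card/PRIVATE/SONY/GPS
cp assistme.dat /media/card/PRIVATE/SONY/GPS/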

Running Owncloud WebDAV with Nginx

Update: since Owncloud 5.0 the config below doesn't work anymore. A slightly more complex config is needed. See this gist: https://gist.github.com/petarpetrovic/5163565

Old info for pre 5.0 below.

Here is how I got OwnCloud's WebDAV feature to work in Nginx.

I use Nginx with the dav-ext-module, which adds support for the OPTIONS and PROPFIND methods, but it works with the plain http_dav_module, too. You do not need dav-ext-module, but if you're going to use it, you have to be very careful not to set dav_ext_methods in the root context, otherwise the whole site's folder structure can be browsed via WebDAV. It's best to set the DAV handler only on remote.php.

On my server, Owncloud is accessed at /owncloud, alongside Drupal 7 in the root context.

Note that the dav handler location has to be set before the \.php handler, because with Nginx the first ~ match wins.

server {
        ##common server settings
        ##...

        root /srv/http;
        index index.php;

        #required for owncloud
        client_max_body_size 8M;
        create_full_put_path on;
        dav_access user:rw group:rw all:r;

        ##common rules here
        ##...

        # Owncloud WebDAV
        location ~ /owncloud/remote\.php/ {
                dav_methods PUT DELETE MKCOL COPY MOVE;
                dav_ext_methods PROPFIND OPTIONS;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        }

        location / {
                try_files $uri $uri/ @rewrite;
                expires max;
        }

        location @rewrite {
                #some rules here for legacy stuff
                #...
                # Drupal7
                rewrite ^ /index.php last;
        }

        # PHP handler
        location ~ \.php$ {
                fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                include        fastcgi_params;
                fastcgi_intercept_errors on;
        }

        ##other common rules
        ##...
}
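
To quickly check that the DAV endpoint answers, a PROPFIND via curl will do (user, password and host are placeholders):

curl -u user:password -X PROPFIND -H "Depth: 1" https://example.com/owncloud/remote.php/webdav/

If everything is wired up correctly, this returns an XML multistatus listing of your files.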

Using OpenSUSE’s snapper on ArchLinux to manage btrfs snapshots

Today I created a PKGBUILD for OpenSUSE's snapper utility which allows creating and managing BTRFS snapshots. You can find it on AUR here.

Snapper is a quite handy tool to manage BTRFS snapshots. On OpenSUSE it comes with a YaST2 plugin, so it even has a GUI. On Arch you can still get the command line version though.

Configuration:

1. First create a .snapshots subvolume in the root of the subvolume you want to be snappered. E.g. for a /mnt/rootfs/arch-root subvolume:

btrfs subvolume create /mnt/rootfs/arch-root/.snapshots

2. Create a config based on the provided template

cd /etc/snapper
cp config-templates/default configs/root

3. Edit /etc/snapper/configs/root

# subvolume to snapshot
SUBVOLUME="/mnt/rootfs/arch-root"

4. Edit /etc/conf.d/snapper (I repeated steps 1-3 for a separate home subvolume, hence the second entry)

SNAPPER_CONFIGS="root home"

Et voilà. That's it.

# snapper list-configs
Config | Subvolume            
-------+----------------------
root   | /mnt/rootfs/arch-root
home   | /mnt/rootfs/home

By default, snapper will take a snapshot every hour. To disable this, edit the config (e.g. /etc/snapper/configs/root) and set

TIMELINE_CREATE="no"

As snapshots cost almost no space at all, I'll keep it enabled. After some hours:

# snapper list
Type   | #  | Pre # | Date                        | Cleanup  | Description | Userdata
-------+----+-------+-----------------------------+----------+-------------+---------
single | 0  |       |                             |          | current     |         
single | 1  |       | Mi 01 Feb 2012 00:01:01 CET | timeline | timeline    |         
single | 2  |       | Mi 01 Feb 2012 01:01:01 CET | timeline | timeline    |         
single | 3  |       | Mi 01 Feb 2012 02:01:01 CET | timeline | timeline    |         
single | 4  |       | Mi 01 Feb 2012 03:01:01 CET | timeline | timeline    |         
single | 5  |       | Mi 01 Feb 2012 04:01:01 CET | timeline | timeline    |         
single | 6  |       | Mi 01 Feb 2012 05:01:01 CET | timeline | timeline    |         
single | 7  |       | Mi 01 Feb 2012 06:01:01 CET | timeline | timeline    |         
single | 8  |       | Mi 01 Feb 2012 07:01:01 CET | timeline | timeline    |         
single | 9  |       | Mi 01 Feb 2012 08:01:01 CET | timeline | timeline    |         
single | 10 |       | Mi 01 Feb 2012 09:01:01 CET | timeline | timeline    |         
single | 11 |       | Mi 01 Feb 2012 10:01:01 CET | timeline | timeline    |         
single | 12 |       | Mi 01 Feb 2012 11:01:01 CET | timeline | timeline    |         
single | 13 |       | Mi 01 Feb 2012 12:01:01 CET | timeline | timeline    |         

Basic Usage:

snapper will pick the "root" config by default. To see the "home" config, use:

snapper -c home list

You can compare snapshots with

snapper diff 12..13

By default, it will show changes from every file that changed. Likewise, if you use snapper to revert changes, all files get reverted. This is not desirable for system files like /etc/mtab, log files and other dynamic files. This is where the filters feature comes in. To exclude everything in /var/log/, create the file /etc/snapper/filters/logfiles.txt containing:

/var/log/*

I use these excludes for now:

/etc/mtab
/var/log/*
/var/tmp/*
/var/spool/mail/*
/var/lib/postgres/data/*
*/.bash_history
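
With filters like these in place, reverting is much safer. Snapper can roll back everything that changed between two snapshots, or just single files; for example, with the snapshot numbers from the list above:

snapper undochange 12..13
snapper undochange 12..13 /etc/fstab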

Help

Snapper includes manpages. See

man snapper
man snapper-configs

Hints (updated 2013-10-27)

After a year of snapper usage here are some hints that might help others, too.

If you use snapper's timeline feature with the default limits for daily/monthly/yearly snapshots, you may notice a serious slowdown of your filesystem. On my SSD I sometimes saw performance drop to 5 MB/s, along with complete stalls. This is due to having way too many snapshots spread across a large time frame.
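
What helps is lowering the timeline limits in the snapper config. These keys come from the config template (the exact set may depend on your snapper version, and the values below are only a suggestion):

TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_MONTHLY="2"
TIMELINE_LIMIT_YEARLY="0"

Afterwards, a run of snapper cleanup timeline prunes the snapshots that exceed the new limits.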

SOHO Mailserver with Postfix + Postgresql + Dovecot + SpamAssassin + Roundcube

This HowTo describes my home mail server setup. Basically, it is a sum-it-all-up article drawing on various resources from around the net.

Used Software:

  • Arch Linux OS
  • Postfix MTA
  • PostgreSQL database backend
  • Dovecot IMAP Server
  • Roundcube Webmail + Apache Webserver
  • Spamassassin junk filter
  • Server-side filtering with Sieve
  • fetchmail (to pull mail from all my scattered accounts into this one place)

Preconditions in my setup:

  • Server behind Firewall/NAT
  • Dynamic IP (No-IP Plus managed DynDNS service with MX Record etc)
  • StartSSL certificate for both Web- and Mail-Server domain
  • ISP doesn’t allow running an outgoing mail server directly and requires relaying through their mail gateway (see the Postfix sketch below)
  • Apache + PHP + Postgresql already running and working
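
For the relaying precondition, the relevant Postfix settings in main.cf look roughly like this (a sketch; the relay host and the credentials file are placeholders):

relayhost = [mail.isp.example]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may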

Howto create a youtube video from mp3/ogg audio using a picture

If you want to create a youtube video from an audio file, here is how to do this.
All you need is the audio file, a single picture, and ffmpeg.

First find out the length of the audio file in seconds, you’ll need it. Here is an example with a 420-second file:

ffmpeg -loop_input -i picture.jpg -vcodec mpeg4 -r 25.00 -qscale 2 -s 480x360 \
-i audiofile.mp3 -acodec libmp3lame -ab 128k -t 420 video.avi

This will create a Hi-Res MPEG-4 video with 128k audio. The trick here is to use that one picture and loop it for -t seconds.
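
On current ffmpeg builds, -loop_input no longer exists. The modern equivalent uses -loop 1 plus -shortest, which also spares you looking up the audio length (flags as in current ffmpeg; a sketch, not tested against this old recipe):

ffmpeg -loop 1 -i picture.jpg -i audiofile.mp3 -c:v libx264 -tune stillimage \
-pix_fmt yuv420p -c:a aac -b:a 128k -shortest video.mp4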

Howto: Compile Firefox PGO on Gentoo using an ebuild

Warning! All this doesn’t work at all anymore. Don’t even try it.

This Howto explains how to build Mozilla Firefox with PGO (profile guided optimization) on Gentoo using portage and the ebuild. It’s quite tricky, so make sure the prerequisites are correct on your system.

Prerequisites:

  • Your user has to be in the "portage" group!
  • /var/tmp/portage has to be 775 or 770 (chmod g+w /var/tmp/portage)
  • You need a customized ebuild

I use the mozilla-firefox-3.1_beta3 ebuild from mozilla overlay. The needed run-firefox.sh should be in the files/ subdir. You can svn checkout my modified one from here:

svn co http://gimpel.ath.cx/svn/www-client/mozilla-firefox/

Just put that in your overlay.

Make sure to enable the USE flags "pgo -xulrunner" in your /etc/portage/package.use file!
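
For example:

www-client/mozilla-firefox pgo -xulrunner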

Now, as a regular user, we call the ebuild phases manually:

cd /path/to/your/overlay/www-client/mozilla-firefox
ebuild mozilla-firefox-3.1_beta3-r1.ebuild compile

This will start the compile. After the first PGO run, the browser will open up, and you should do some usual browsing for a few minutes; maybe go to the SpiderMonkey JS speed test page. When done, just close the browser.

Then the second run follows. When it is finished, we install the package as root with

sudo ebuild mozilla-firefox-3.1_beta3-r1.ebuild qmerge

Howto import SAR data into a PostgreSQL database and create graphs in OpenOffice.org

This is a walkthrough on how to get SAR data into a database, then use OpenOffice.org with a JDBC connection to that database and SUN Report Builder to create nice graphs.

In this example I will use SAR’s CPU utilisation data.

Software used:

  • sysstat (sar, sadf)
  • PostgreSQL
  • OpenOffice.org Base with the PostgreSQL JDBC driver
  • SUN Report Builder

1. Set up the database

I use a PostgreSQL database named sar here. The SAR data is on the same host as the database, and the PostgreSQL user has read permission on the SAR files.

First we need to create a table

CREATE TABLE sar_cpu 
(
"host" varchar, 
"interval" numeric, 
"time" timestamp with time zone, 
"offset" numeric(1), 
"user" numeric(5,2), 
"nice" numeric(5,2), 
"system" numeric(5,2), 
"iowait" numeric(5,2), 
"steal" numeric(5,2), 
"idle" numeric(5,2)
);

2. Convert SAR data to CSV

Then we need to output the SAR data in a format we can load into the database, using sadf.

LC_ALL=en_US sadf -d /var/log/sa/sa15 -- -u > /tmp/sa20090215.csv

Note: LC_ALL=en_US ensures that decimal values are separated with a dot, not a comma.

3. Load the CSV directly into the database

Then we can load that CSV directly into the database:

echo "COPY sar_cpu FROM '/tmp/sa20090215.csv' DELIMITERS ';' CSV;"|psql -d sar

Cool, isn’t it? This can easily be scripted and driven by a cronjob, e.g. importing yesterday’s SAR data with a simple wrapper script and some date arithmetic.
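
A minimal wrapper along those lines (a sketch; assumes GNU date and the table and paths from above):

#!/bin/sh
# import yesterday's SAR CPU data into the sar database
DAY=$(date -d yesterday +%d)
CSV=/tmp/sa$(date -d yesterday +%Y%m%d).csv
LC_ALL=en_US sadf -d /var/log/sa/sa$DAY -- -u > $CSV
echo "COPY sar_cpu FROM '$CSV' DELIMITERS ';' CSV;" | psql -d sar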


4. Create an OpenOffice.org Base DB and connect it to PostgreSQL

Now start OOo Base, create a new DB -> JDBC.

See http://jdbc.postgresql.org/documentation/83/connect.html for an overview of the connection parameters:

URL:

jdbc:postgresql:sar

Class:

org.postgresql.Driver


5. Install SUN Report Builder

Then download SRB from http://extensions.services.openoffice.org/project/reportdesign, open Extras -> Extension Manager, and install it.

In OOo Base you’ll find it in Reports -> Report in Design View

There you can create graphs and diagrams just as you would in Calc.

Howto connect OpenOffice to MS SQL server

Just had to connect to a MS SQL server that holds collected SAR statistics from our production servers.

Until now I used Toad in a virtual XP machine, but there has to be a way to connect graphically from Linux, too. And indeed there is!

Tools needed:

  • OpenOffice.org Base
  • a JDBC driver for MS SQL, e.g. jTDS

I chose jTDS, got the JAR file and copied it somewhere into the classpath. On OpenSUSE that is /usr/share/java. You can also add it in OpenOffice directly: Extras -> Options -> Java -> Classpath

Then fire up OpenOffice Base, and in the database wizard:

  1. Open an existing database
  2. Choose JDBC
  3. Enter the connection URL:

jdbc:jtds:sqlserver://hostname:portnumber/databasename

The port number usually is 1433. The JDBC driver class is

net.sourceforge.jtds.jdbc.Driver

Then enter the db username, in the next step enter the password, and connect.

That’s it.