Running Owncloud WebDAV with Nginx

Update: since ownCloud 5.0 the config below no longer works; a slightly more complex config is needed. See this gist: https://gist.github.com/petarpetrovic/5163565

The old, pre-5.0 info follows below.

Here is how I got OwnCloud's WebDAV feature to work in Nginx.

I use Nginx with dav-ext-module, which adds support for the OPTIONS and PROPFIND methods, but this works with the plain http_dav_module, too. You do not need dav-ext-module, but if you are going to use it, be very careful not to set dav_ext_methods in the root context, otherwise the whole site's folder structure can be browsed via WebDAV. It's best to set the dav handler only on remote.php.

On my server, Owncloud is accessed at /owncloud, along with Drupal 7 in the root context.

Note that the dav handler location has to be set before the \.php handler, because with Nginx the first matching ~ location wins.

server {
        ##common server settings
        ##...

        root /srv/http;
        index index.php;

        #required for owncloud
        client_max_body_size 8M;
        create_full_put_path on;
        dav_access user:rw group:rw all:r;

        ##common rules here
        ##...

        # Owncloud WebDAV
        location ~ /owncloud/remote.php/ {
                dav_methods PUT DELETE MKCOL COPY MOVE;
                dav_ext_methods PROPFIND OPTIONS;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        }

        location / {
                try_files $uri $uri/ @rewrite;
                expires max;
        }

        location @rewrite {
                #some rules here for legacy stuff
                #...
                # Drupal7
                rewrite ^ /index.php last;
        }

        # PHP handler
        location ~ \.php$ {
                fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                include        fastcgi_params;
                fastcgi_intercept_errors on;
        }

        ##other common rules
        ##...
}
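
To verify that the WebDAV endpoint responds, a quick PROPFIND with curl should do. This is just a sketch; adjust host, credentials and protocol to your setup:

curl -u myuser:mypassword -X PROPFIND -H "Depth: 1" \
    http://example.com/owncloud/remote.php/webdav/

A working setup answers with an XML multistatus response listing the files.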

Project T5220, try 1: How NOT to configure a T5220 as a complete Oracle/Weblogic development environment

Currently I have to configure a Sun Enterprise T5220 as our new “development environment”, replacing our V440 (Oracle) and the T1000 (Weblogic). We chose a Sun CMT machine because we wanted to stay close to production in terms of architecture (processor type, OS, etc.).

The machine will have to run an Oracle instance and some containers for the Weblogic environments (we have an integration environment, a daily build environment, plus some development environments for projects). The new T5220 is packed with a 1.4GHz T2 Niagara, 64GB of memory, 2x146GB for the OSes, and 6x300GB for the database and NFS.

Given that, the ideal setup sounds like:

  • 2 guest LDOMs:
  • 1 Oracle LDOM
  • 1 Weblogic LDOM, using Solaris zones to separate the environments
  • The setup is 100% ZFS
  • The control domain runs on one slice of the 146GB drives in a ZFS mirror (raid1)
  • The guest domain roots run on the other slice of those disks, in a separate zpool, exported as ZVOLs
  • The 6x300GB disks are exported as raw disk slices (EFI labeled) and formed into a ZFS raidz inside the Oracle LDOM
  • Inside that raidz zpool there is a ZFS filesystem with recordsize=8k for the Oracle datafiles, and one with a 128k recordsize for the redo logs (see the sketch below)
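
For reference, the raidz and recordsize part of that plan would look roughly like this inside the Oracle LDOM. This is only a sketch; the device names and the pool/dataset names are made up:

# assuming the six exported slices show up as c0d1 ... c0d6 in the guest
zpool create datapool raidz c0d1 c0d2 c0d3 c0d4 c0d5 c0d6

# 8k recordsize for the Oracle datafiles, 128k for the redo logs
zfs create -o recordsize=8k datapool/oradata
zfs create -o recordsize=128k datapool/oraredo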

Regarding the database, this is pretty similar to our current setup, a SunFire V440 with an ST2540 FC storage attached, running ZFS on top of the hardware RAID5. Similar in terms of the filesystem, at least. ZFS runs very well on the V440.

Okay, now, after setting this up, it turned out that the DB performance is unacceptable. Absolutely horrible, to be honest. We're doing large imports of our production database on a regular basis; on the V440 such an import takes about 50 minutes, on the T5220 we're now up to 150 minutes.

Here are some numbers. Simple dd testing: creating a 15GB file.

V440 with an ST2540 RAID5 volume, exported via 2x2Gbit FC, configured as a simple ZFS:

$ time dd if=/dev/zero of=ddtest.file bs=8k count=2000000
2000000+0 records in
2000000+0 records out

real    2m32.380s
user    0m3.021s
sys     1m27.533s
$ echo "((16384000000/152)/1024)/1024"|bc -l
102.79605263157894736842

T5220 with 10k rpm local SAS disks, exported as raw disk slices into the guest LDOM and configured as ZFS raidz:

$ echo "((16384000000/336)/1024)/1024"|bc -l
46.50297619047619047619

Now things get strange. Inside the control domain, writing directly to the ldompool:

$ echo "((1638400000/35)/1024)/1024"|bc -l
44.64285714285714285714

WTF?

That is worse than what I get with a Linux guest in VirtualBox writing to a virtual disk. Not acceptable for 300€ 10k rpm SAS drives.

I have asked for help and further info in the Sun forums.

Anyway, I'm going to attach the documented step-by-step guide on how to set all this up from scratch (M$ Word .doc, in German, sorry).

Howto create a YouTube video from mp3/ogg audio using a picture

If you want to create a youtube video from an audio file, here is how to do this.
All you need is the audio file, a single picture, and ffmpeg.

First find out the length of the audio file in seconds; you'll need it. Here is an example with a 420-second file:

ffmpeg -loop_input -i picture.jpg -vcodec mpeg4 -r 25.00 -qscale 2 -s 480x360 \
-i audiofile.mp3 -acodec libmp3lame -ab 128k -t 420 video.avi

This will create a hi-res MPEG-4 video with 128k audio. The trick here is to take that one picture and loop it for -t seconds.
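
With newer ffmpeg builds the -loop_input option no longer exists. A roughly equivalent invocation (a sketch, not tested against every version) loops the image with -loop 1 and uses -shortest instead of a hard-coded -t:

ffmpeg -loop 1 -i picture.jpg -i audiofile.mp3 \
-c:v mpeg4 -r 25 -qscale:v 2 -s 480x360 \
-c:a libmp3lame -b:a 128k -shortest video.avi

If you still need the duration in seconds, ffprobe can print it:

ffprobe -v error -show_entries format=duration -of csv=p=0 audiofile.mp3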

Howto control Tomcat using wget

I just had to restart a webapp in Tomcat without stopping a second app running in the same Tomcat instance.

Usually this can be done easily via the Tomcat Manager, but in this case I was not able to access the Manager web interface due to firewall rules. I was able to access the server using ssh, but there was no curl installed.

Luckily wget did the trick too!

wget \
--http-user=manager-user \
--http-password=manager-password \
-q -O - "http://localhost:8080/manager/html/reload?path=/test" \
| grep -A1 Message | awk -F'>' '{print $NF}'

OK - Started application at context path /test

Perfect!
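
For reference, the other Manager commands can be driven the same way. A sketch, assuming the same credentials and the classic /manager/html interface (paths may differ on newer Tomcat versions):

# list all deployed webapps
wget --http-user=manager-user --http-password=manager-password \
-q -O - "http://localhost:8080/manager/html/list"

# stop and start a single context
wget --http-user=manager-user --http-password=manager-password \
-q -O - "http://localhost:8080/manager/html/stop?path=/test"
wget --http-user=manager-user --http-password=manager-password \
-q -O - "http://localhost:8080/manager/html/start?path=/test"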

Howto import SAR data into a PostgreSQL database and create graphs in OpenOffice.org

This is a walkthrough on how to get SAR data into a database and use OpenOffice.org to create nice graphs, using a JDBC connection to that database and the SUN Report Builder for the graphs.

In this example I will use SAR’s CPU utilisation data.

Software used:

  • sysstat (sar/sadf)
  • PostgreSQL
  • OpenOffice.org Base
  • PostgreSQL JDBC driver
  • SUN Report Builder

1. Set up the database

I use a PostgreSQL database named sar here. The SAR data is on the same host as the database, and the PostgreSQL user has read permission on the SAR files.

First we need to create a table:

CREATE TABLE sar_cpu 
(
"host" varchar, 
"interval" numeric, 
"time" timestamp with time zone, 
"offset" numeric(1), 
"user" numeric(5,2), 
"nice" numeric(5,2), 
"system" numeric(5,2), 
"iowait" numeric(5,2), 
"steal" numeric(5,2), 
"idle" numeric(5,2)
);

2. Convert SAR data to CSV

Then we need to output the SAR data in a format we can load into the database, using sadf.

LC_ALL=en_US sadf -d /var/log/sa/sa15 -- -u > /tmp/sa20090215.csv

Note: The LC_ALL=en_US ensures that decimal values are separated with a dot, not a comma.

3. Load the CSV directly into the database

Then we can load that CSV directly into the database:

echo "COPY sar_cpu FROM '/tmp/sa20090215.csv' DELIMITERS ';' CSV;"|psql -d sar

Cool, isn't it? This can be scripted easily and controlled via a cronjob, e.g. importing yesterday's SAR data with a simple wrapper script and some date arithmetic (see the sketch below).
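
A minimal wrapper for such a cronjob could look like this. This is a sketch; it assumes GNU date and the sysstat file layout used above:

#!/bin/sh
# import yesterday's SAR CPU data into the sar_cpu table
DAY=$(date -d yesterday +%d)
CSV=/tmp/sa_cpu_$(date -d yesterday +%Y%m%d).csv

LC_ALL=en_US sadf -d /var/log/sa/sa$DAY -- -u > "$CSV"
echo "COPY sar_cpu FROM '$CSV' DELIMITERS ';' CSV;" | psql -d sar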

4. Create an OpenOffice.org Base DB and connect it to PostgreSQL

Now start OOo Base and create a new database using a JDBC connection.

See http://jdbc.postgresql.org/documentation/83/connect.html for an overview of the connection parameters:

URL:

jdbc:postgresql:sar

Class:

org.postgresql.Driver

5. Install SUN Report Builder

Then download SRB from http://extensions.services.openoffice.org/project/reportdesign, open Tools -> Extension Manager, and install it.

In OOo Base you'll find it in Reports -> Report in Design View.

There you can create graphs and diagrams just as you would in Calc.
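
As a starting point for a chart, a query that aggregates the data per hour works well as the report's data source. A sketch, run through psql; the column names match the table created above:

psql -d sar -c "SELECT date_trunc('hour', \"time\") AS hour,
                       avg(\"user\") AS avg_user,
                       avg(\"system\") AS avg_system,
                       avg(\"iowait\") AS avg_iowait
                FROM sar_cpu GROUP BY 1 ORDER BY 1;"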

Howto use KDE4’s device notifier to play a DVD in SMplayer

Just finally figured this one out! It’s possible to add an entry to KDE4’s device notifier, so you can just click & play a DVD in SMplayer. Awesome!

You need a very recent SMplayer and MPlayer, the latter compiled with dvdnav support.

  1. In SMplayer's preferences, go to Mouse & Keyboard -> Mouse, and for left click select "Activate option in DVD menus".
  2. Create a file called smplayer-playdvd-predicate.desktop
    in $KDEDIR/share/apps/solid/actions (here that means /usr/kde/svn/share/apps/solid/actions/smplayer-playdvd-predicate.desktop), with the following in it:
[Desktop Entry]
X-KDE-Solid-Predicate=[ StorageVolume.ignored == false AND OpticalDisc.availableContent == 'Data|VideoDvd' ]
Type=Service
Actions=open;

[Desktop Action open]
Name=Play DVD with SMplayer
Exec=smplayer dvdnav:////%d
Icon=smplayer

Now restart KDE, insert a DVD into your drive (or load a DVD ISO image in cdemu), wait for the device notifier to pop up, and select the new entry.

(Screenshot: the device notifier offering "Play DVD with SMplayer".)

That’s it!

Howto: Log firewall from OpenWrt to a remote rsyslog

This is how I got remote logging from my OpenWrt router to the syslog daemon on the server box.

On the server side, I enabled remote logging over UDP (refer to the rsyslog or syslog-ng documentation).
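
For rsyslog, that typically boils down to loading the UDP input module. Legacy config syntax shown below; this is a sketch, adjust it to your distribution, and note that 192.168.1.1 is just an assumed address for the router:

# /etc/rsyslog.conf on the server
$ModLoad imudp
$UDPServerRun 514

# optional: write everything coming from the router into its own file
:fromhost-ip, isequal, "192.168.1.1" /var/log/openwrt.log
& ~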

On the OpenWrt box the following steps are needed.

Enable remote syslog logging

Edit /etc/config/system and enable remote logging by adding:

option 'log_ip' '192.168.1.2'

Now reboot the router and see if it logs correctly.

Enable firewall logging (-j LOG)

Update (2013): In recent OpenWrt builds this is as simple as editing /etc/config/firewall and adding a line to each zone that you want logged:

config 'zone'
        option 'name' 'wan'
        ...
        option 'log' '1'

That’s all.

The info below is valid only for old OpenWrt builds, Kamikaze 8.09 and older!

Then I had to get iptables to produce some log output. With Kamikaze's new firewall config layout this was a bit tricky. I decided to log only the SYN flood protection actions and the dropping of INVALID packets on the INPUT and FORWARD chains. To do this we need to edit /lib/firewall/uci_firewall.sh and add three lines (those with -j LOG).

In the function fw_defaults():

$IPTABLES -A INPUT -m state --state INVALID -j LOG --log-prefix "DROP INVALID (INPUT): "
$IPTABLES -A INPUT -m state --state INVALID -j DROP
...
$IPTABLES -A FORWARD -m state --state INVALID -j LOG --log-prefix "DROP INVALID (FORWARD): "
$IPTABLES -A FORWARD -m state --state INVALID -j DROP

and for the SYN flood stuff, in the function load_synflood():

$IPTABLES -A syn_flood -j LOG --log-prefix "SYN FLOOD: "
$IPTABLES -A syn_flood -j DROP

Bash pattern matching

Bash offers a nice way to replace patterns. Until now I always used basename and a variable.
For example, moving all .jpeg files in a folder to .jpg (hey, it's just an example..)

for moo in *.jpeg; do
    newname="$(basename "$moo" .jpeg).jpg"
    mv "$moo" "$newname"
done

But isn't that just plain ugly? This one is a lot nicer:

for i in *.jpeg; do
    mv "$i" "${i%.*}.jpg"
done

1337

Howto: receive mail and save attachment with fetchmail, procmail and metamail

At work I recently had to set up a solution that periodically checks a POP3 account on our M$ Exchange wannabe mailserver and saves the attachments to a folder for further processing. As I didn't find a ready-to-go solution for this on the web, just snippets here and there, and of course hundreds of other people asking the same question, here it is.

You will need

  • fetchmail
  • procmail
  • metamail (or uudeview, see first comment below)

In this example I'll use a POP3 account; the full mail will be backed up to ~/mail_backup, and attachments will be unpacked to ~/attachments. fetchmail also handles IMAP accounts just fine. Please refer to the fetchmail documentation.

Setting up fetchmail

First create a file $HOME/.fetchmailrc:

poll my.pop3.server 
protocol pop3 
user 'myuser' 
password 'mypassword' 
mda '/usr/bin/procmail -d %T'

Setting up procmail and metamail

Then we configure procmail in $HOME/.procmailrc so it forwards the messages to metamail:

:0
* ^Content-Type:
{
        # backup the complete mail first..
        # you can leave out this part if you don't want a backup of the complete mail
        :0c:
        $HOME/mail_backup

        # Now the actual unpacking part
        #
        # this is the place where the attachments will be unpacked to
        METAMAIL_TMPDIR=$HOME/attachments

        # forward to metamail
        :0fw
        | metamail -w -y -x
}

Regarding metamail, we tell it to ignore any mailcap file so it doesn't run interpreters (-w), to yank the message and save the content raw (-y), and to run in non-interactive mode (-x).

That's about it. We are ready for testing.

Test run

Now we simply fire up fetchmail; the rest should be magic.

fetchmail -kv
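
To have this run periodically, a crontab entry is enough. A sketch that polls every ten minutes (adjust the path and interval to your setup):

*/10 * * * * /usr/bin/fetchmail -k >/dev/null 2>&1

Alternatively, fetchmail can keep itself running as a daemon with -d <seconds>.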

HTH