Permanent Remap Keys in X11

Because my Shift key broke, I remapped Caps Lock to Shift using xmodmap:

remove Lock = Caps_Lock
keysym Caps_Lock = Shift_L
add Shift = Shift_L
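
These expressions can also be kept in a file and loaded in one go; a minimal sketch (the file name ~/.Xmodmap is just a convention):

# save the three lines above to ~/.Xmodmap, then load them
xmodmap ~/.Xmodmap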

However, these settings sometimes got lost (for example after the driver was reloaded on resume from suspend). Finally I found the event_key_remap patch for xf86-input-evdev, which allows keys to be permanently redefined in xorg.conf.

To apply the patch under Arch Linux, simply install xf86-input-evdev-remap from the AUR:

yaourt -S xf86-input-evdev-remap

To track down the keycode of the key you want to remap, run xev in a terminal and press the key a few times. The output will look something like the following:

KeyRelease event,  serial 33,  synthetic NO,  window 0x1e00001,
    root 0x8e,  subw 0x0,  time 5672767,  (611, 262),  root:(613, 288),
    state 0x1,  keycode 50 (keysym 0xffe1,  Shift_L),  same_screen YES
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

The interesting value here is the keycode. Use it to build your final xorg.conf. In my case this was:

#/etc/X11/xorg.conf.d/10-kb-layout.conf
Section "InputClass"
    Identifier             "Keyboard Defaults"
    MatchIsKeyboard        "yes"
    Option                 "XkbLayout" "de"          # Replace this with your layout
    Option                 "event_key_remap" "58=50" # Caps Lock Key = Shift
EndSection
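
To confirm that the remap works after restarting X, you can run xev again; a quick sketch of the check:

# press the former Caps Lock key inside the xev window;
# it should now report keycode 50 (Shift_L) instead of 58
xev | grep keycode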

Global Response Management With RestKit

For our latest iOS app we are using the RestKit framework, a really great and advanced library for communicating with your REST API.

When you have lots of requests in different areas of your project, you may want global handling for failure events, for example showing a login view whenever any request returns a 401 (Unauthorized) status code.

RestKit 0.20 introduced the ability to register your own RKObjectRequestOperation subclass, which is the common way to do this.

So first you create a subclass of RKObjectRequestOperation; let's call it CustomRKObjectRequestOperation:

#import "RKObjectRequestOperation.h"

@interface CustomRKObjectRequestOperation : RKObjectRequestOperation

@end

@implementation CustomRKObjectRequestOperation

- (void)setCompletionBlockWithSuccess:(void (^)(RKObjectRequestOperation *operation, RKMappingResult *mappingResult))success failure:(void (^)(RKObjectRequestOperation *operation, NSError *error))failure
{
    [super setCompletionBlockWithSuccess:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
        if (success) {
            success(operation, mappingResult);
        }
    } failure:^(RKObjectRequestOperation *operation, NSError *error) {
        // Broadcast every failure so a central observer can react to it
        [[NSNotificationCenter defaultCenter] postNotificationName:@"connectionFailure" object:operation];

        if (failure) {
            failure(operation, error);
        }
    }];
}

@end

This is the point where we override the method that sets the completion and failure blocks. I use the observer pattern (NSNotificationCenter) to broadcast connection failures.

Of course we need to tell RestKit to use our custom RKObjectRequestOperation class. You can do this by adding this line to your RestKit configuration:

[[RKObjectManager sharedManager] registerRequestOperationClass:[CustomRKObjectRequestOperation class]];

Now we need a class that listens for the failure notifications. You can use any class you like; I use the AppDelegate for this.

[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(connectionFailedWithOperation:) name:@"connectionFailure" object:nil];

The connectionFailedWithOperation: method is then called whenever a connection failure occurs.

- (void)connectionFailedWithOperation:(NSNotification *)notification
{
    RKObjectRequestOperation *operation = (RKObjectRequestOperation *)notification.object;
    if (operation) {

        NSInteger statusCode = operation.HTTPRequestOperation.response.statusCode;

        switch (statusCode) {
            case 0: // No internet connection
            {
            }
                break;
            case  401: // not authenticated
            {
            }
                break;

            default:
            {
            }
                break;
        }
    }
}

Links:
RestKit Framework
Class Documentation for RKObjectRequestOperation

by Albert Schulz
If you have any questions feel free to contact me:
eMail: mail@halfco.de
Twitter: @albert_sn
Web: halfco.de

Mongoid: Use ObjectId as Created_at

Update (Nov 20, 2014): added a setter.

One great feature of MongoDB is that the first bytes of each ObjectId contain the time at which it was generated. This can be exploited to mimic the well-known created_at field from Rails. First, put this file in your lib directory:

#lib/mongoid/created.rb
module Mongoid
  module CreatedAt
    # Returns the creation time calculated from ObjectID
    #
    # @return [ Date ] the creation time
    def created_at
      id.generation_time
    end

    # Set generation time of ObjectId.
    # Note: this will modify the ObjectId and is therefore
    # only useful for documents that have not been persisted yet
    #
    # @return [ BSON::ObjectId ] the generated object id
    def created_at=(date)
      self.id = BSON::ObjectId.from_time(date)
    end
  end
end

If you are still using Mongoid 3, replace BSON::ObjectId with Moped::BSON::ObjectId.

Now you can include this module in every model where you need created_at.

#app/models/user.rb
class User
  include Mongoid::Document
  include Mongoid::CreatedAt
# ...
end
u = User.new(created_at: 1.hour.ago)
u.created_at

That’s all easy enough, isn’t it?

Use Systemd as a Cron Replacement

Since systemd 197, timer units support calendar time events, which makes systemd a full cron replacement. Why would one replace good old cron? Well, because systemd is good at executing stuff and monitoring its state!

  • with the help of journalctl you get the last status and logging output, which is a great help when debugging failing jobs:
$ systemctl status reflector-update.service
reflector-update.service - "Update pacman's mirrorlist using reflector"
   Loaded: loaded
(/etc/systemd/system/timer-weekly.target.wants/reflector-update.service)
   Active: inactive (dead)

Jun 09 17:58:30 higgsboson reflector[30109]: rating http://www.gtlib.gatech.edu/pub/archlinux/
Jun 09 17:58:30 higgsboson reflector[30109]: rating rsync://rsync.gtlib.gatech.edu/archlinux/
Jun 09 17:58:30 higgsboson reflector[30109]: rating http://lug.mtu.edu/archlinux/
Jun 09 17:58:30 higgsboson reflector[30109]: Server Rate       Time
...
  • there are a lot of useful systemd unit options like IOSchedulingPriority, Nice or JobTimeoutSec
  • it is possible to make units depend on other services, for example mounting the NFS host before starting mysql-backup.service, or depending on network.target (see the sketch below).
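
As an illustration of that last point, a backup service could be ordered after an NFS mount unit. This is only a minimal sketch; the unit names, mount point and backup script are made up:

# hypothetical /etc/systemd/system/timer-daily.target.wants/mysql-backup.service
cat > /etc/systemd/system/timer-daily.target.wants/mysql-backup.service <<'EOF'
[Unit]
Description=Backup MySQL to the NFS share
# mnt-backup.mount corresponds to the (example) mount point /mnt/backup
Requires=mnt-backup.mount
After=mnt-backup.mount network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mysql-backup.sh
EOF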

So let's get started. The first thing you might want to do is replace the default scripts located in the run-parts directories /etc/cron.{daily,hourly,monthly,weekly}.

On my distribution (Arch Linux) these are logrotate, man-db, shadow and updatedb. For convenience I created a structure resembling /etc/cron.*:

mkdir /etc/systemd/system/timer-{hourly,daily,weekly}.target.wants

and added the following timers:

cd /etc/systemd/system
wget https://blog.higgsboson.tk/downloads/timers.tar
tar -xvf timers.tar && rm timers.tar
/etc/systemd/system/timer-hourly.timer
[Unit]
Description=Hourly Timer

[Timer]
OnBootSec=5min
OnUnitActiveSec=1h
Unit=timer-hourly.target

[Install]
WantedBy=basic.target
/etc/systemd/system/timer-hourly.target
[Unit]
Description=Hourly Timer Target
StopWhenUnneeded=yes
/etc/systemd/system/timer-daily.timer
[Unit]
Description=Daily Timer

[Timer]
OnBootSec=10min
OnUnitActiveSec=1d
Unit=timer-daily.target

[Install]
WantedBy=basic.target
/etc/systemd/system/timer-daily.target
[Unit]
Description=Daily Timer Target
StopWhenUnneeded=yes
/etc/systemd/system/timer-weekly.timer
[Unit]
Description=Weekly Timer

[Timer]
OnBootSec=15min
OnUnitActiveSec=1w
Unit=timer-weekly.target

[Install]
WantedBy=basic.target
/etc/systemd/system/timer-weekly.target
[Unit]
Description=Weekly Timer Target
StopWhenUnneeded=yes

… and enable them:

systemctl enable timer-hourly.timer
systemctl enable timer-daily.timer
systemctl enable timer-weekly.timer

These directories work like their cron equivalents: each service file located in such a directory will be executed at the given interval.

Now move on to the service files. If you’re not running Arch, the paths might be different on your system.

cd /etc/systemd/system
wget https://blog.higgsboson.tk/downloads/services.tar
tar -xvf services.tar && rm services.tar
/etc/systemd/system/timer-daily.target.wants/logrotate.service
[Unit]
Description=Rotate log files

[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/logrotate /etc/logrotate.conf
/etc/systemd/system/timer-daily.target.wants/man-db-update.service
[Unit]
Description=Update man-db

[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/mandb --quiet
/etc/systemd/system/timer-daily.target.wants/mlocate-update.service
[Unit]
Description=Update mlocate database

[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/updatedb
/etc/systemd/system/timer-daily.target.wants/verify-shadow.service
[Unit]
Description=Verify integrity of password and group files

[Service]
Type=oneshot
ExecStart=/usr/sbin/pwck -r
ExecStart=/usr/sbin/grpck -r
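
Before switching off cron, it is worth triggering one of the targets by hand to make sure everything is wired up correctly, for example:

# run every service hooked into the daily target once, right now
systemctl start timer-daily.target
# then inspect the outcome of an individual job
systemctl status logrotate.service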

Last but not least, you can disable cron:

systemctl stop cronie && systemctl disable cronie

If you want to execute on specific calendar events, for example "every first day of the month", use the OnCalendar= option in the timer file. Example:

send-bill.timer
[Unit]
Description=Monthly Billing Timer

[Timer]
OnCalendar=*-*-01 00:00:00
Unit=send-bill.target

[Install]
WantedBy=basic.target
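
The timer above references a send-bill.target, which is built just like the timer-*.target units earlier. A minimal sketch of the missing pieces (the actual billing service is up to you):

cat > /etc/systemd/system/send-bill.target <<'EOF'
[Unit]
Description=Send Bill Target
StopWhenUnneeded=yes
EOF
mkdir -p /etc/systemd/system/send-bill.target.wants
# drop the service that actually sends the bills into send-bill.target.wants,
# then enable the timer
systemctl enable send-bill.timer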

That’s all for the moment. Have a good time using the power of systemd!

Below are some service files I use:

/etc/systemd/system/timer-weekly.target.wants/reflector-update.service
[Unit]
Description="Update pacman's mirrorlist using reflector"

[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
Type=oneshot
ExecStart=/usr/bin/reflector --verbose -l 5 --sort rate --save /etc/pacman.d/mirrorlist
/etc/systemd/system/timer-weekly.target.wants/pkgstats.service
[Unit]
Description=Run pkgstats

[Service]
User=nobody
ExecStart=/usr/bin/pkgstats

See this link for details about my shell-based pacman notifier

/etc/systemd/system/timer-daily.target.wants/pacman-update.service
[Unit]
Description=Update pacman's package cache

[Service]
Nice=19
Type=oneshot
IOSchedulingClass=2
IOSchedulingPriority=7
Environment=CHECKUPDATE_DB=/var/lib/pacman/checkupdate
ExecStartPre=/bin/sh -c "/usr/bin/checkupdates > /var/log/pacman-updates.log"
ExecStart=/usr/bin/pacman --sync --sysupgrade --downloadonly --noconfirm --dbpath=/var/lib/pacman/checkupdate

Automated Backups for Chef Server 11

In this article I will share the setup I use to back up Chef Server. Ideally you have a dedicated machine with network access to your Chef server; otherwise you will additionally have to use a backup program like rsnapshot or duplicity to back up the created export directory. In my case I use a Raspberry Pi with an HDD docking station and a power-saving hard drive.

To get started you will need Ruby on the backup machine. I prefer using RVM for this job; feel free to choose your preferred way:

$ curl -L https://get.rvm.io | bash -s stable --autolibs=enabled

To create the backup, I use the great knife-backup gem by Marius Ducea:

$ gem install knife-backup

Then add these scripts to your system:

$ mkdir -p ~/bin && cd ~/bin
$ wget http://blog.higgsboson.tk/downloads/code/chef-backup/backup-chef.sh
$ wget http://blog.higgsboson.tk/downloads/code/chef-backup/restore-chef.sh
$ chmod +x {backup,restore}-chef.sh
backup-chef.sh
#!/bin/bash
# optional: load rvm
source "$HOME/.rvm/scripts/rvm" || source "/usr/local/rvm/scripts/rvm"

cd /tmp

BACKUP=/path/to/your/backup #<--- EDIT THIS LINE
TMPDIR=/tmp/$(mktemp -d chef-backup-XXXX)
MAX_BACKUPS=8

cd $TMPDIR
trap "rm -rf '$TMPDIR'" INT QUIT TERM EXIT
knife --config $HOME/.chef/knife-backup.rb backup export -D . >/dev/null
tar -cjf "$BACKUP/$(date +%m.%d.%Y).tar.bz2" .
# keep only the last MAX_BACKUPS backups
ls -t "$BACKUP" | tail -n +$((MAX_BACKUPS + 1)) | xargs -r -I{} rm -f "$BACKUP/{}"
restore-chef.sh
#!/bin/bash

if [ "$#" -eq 0 ]; then
    echo "USAGE: $0 /path/to/backup"
    exit 1
fi

source "$HOME/.rvm/scripts/rvm" || source "/usr/local/rvm/scripts/rvm"

cd /tmp
TMPDIR=/tmp/$(mktemp -d chef-restore-XXXX)

cd "$TMPDIR"
trap "rm -rf '$TMPDIR'" INT QUIT TERM EXIT
tar xf "$1"
knife --config $HOME/.chef/knife-backup.rb backup restore -D .

Modify the BACKUP variable to match your backup destination. Next you will need a knife configuration to get access to your server. I suggest creating a new client:

$ mkdir -p ~/.chef
$ knife client create backup --admin --file "$HOME/.chef/backup.pem"
$ cat <<'__EOF__' >> ~/.chef/knife-backup.rb
log_level                :info
log_location             STDOUT
node_name                'backup'
client_key               "#{ENV["HOME"]}/.chef/backup.pem"
chef_server_url          'https://chef.yourdomain.tld' # EDIT HERE
syntax_check_cache_path  "#{ENV["HOME"]}/.chef/syntax_check_cache"
__EOF__
$ knife role list # test authentication

Now test the whole setup by running the backup-chef.sh script:

$ ~/bin/backup-chef.sh

It should create a tar file in the backup directory.

If everything works, you can add a cronjob to automate this.

$ crontab -e
@daily $HOME/bin/backup-chef.sh
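
If you want to keep the output of the nightly runs, you can redirect it to a log file; the path below is just an example:

@daily $HOME/bin/backup-chef.sh >> $HOME/backup-chef.log 2>&1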

To restore a backup, simply run (where DATE is the date of the backup):

$ ~/bin/restore-chef.sh /path/to/backup/DATE.tar.bz2

That’s all folks!

Owncloud 5 and Nginx

Since my last post, ownCloud has added official documentation for nginx. Unfortunately, the documentation there didn't work for me out of the box:

error.log
2013/04/19 22:14:38 [error] 32402#0: *251 FastCGI sent in stderr: "Access to the
script '/var/www/cloud' has been denied (see security.limit_extensions)" while
reading response header from upstream,  client: ::1,  server:
cloud.higgsboson.tk,  request: "GET /index.php HTTP/1.1",  upstream:
"fastcgi://unix:/var/run/php-fpm.sock:",  host: "cloud.higgsboson.tk"

The problem here was, once again, a missing fastcgi_param directive.

To solve the problem, include the following line either in /etc/nginx/fastcgi_params

/etc/nginx/fastcgi_params
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# ...

or in the owncloud block in nginx.conf:

/etc/nginx/nginx.conf
server {
  listen 80;
  server_name cloud.example.com;
  return  https://$server_name$request_uri;  # enforce https
}

server {
  listen 443 ssl;
  server_name cloud.example.com;

  ssl_certificate /etc/ssl/nginx/cloud.example.com.crt;
  ssl_certificate_key /etc/ssl/nginx/cloud.example.com.key;

  # Path to the root of your installation
  root /var/www/;

  client_max_body_size 10G; # set max upload size
  fastcgi_buffers 64 4K;

  rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
  rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
  rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

  index index.php;
  error_page 403 = /core/templates/403.php;
  error_page 404 = /core/templates/404.php;

  location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
    deny all;
  }

  location / {
    # The following 2 rules are only needed with webfinger
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

    rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
    rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;

    rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;

    try_files $uri $uri/ index.php;
  }

  location ~ ^(.+?\.php)(/.*)?$ {
    try_files $1 =404;

    include fastcgi_params;
    fastcgi_param PATH_INFO $2;
    fastcgi_param HTTPS on;
    # THIS LINE WAS ADDED
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
    # Or use unix-socket with 'fastcgi_pass unix:/var/run/php5-fpm.sock;'
  }

  # Optional: set long EXPIRES header on static assets
  location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
    expires 30d;
    # Optional: Don't log access to assets
    access_log off;
  }

}
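
After editing the configuration, a quick syntax check and reload avoids surprises:

# verify the configuration and reload nginx
nginx -t && nginx -s reload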

Add Flattr to Octopress

Update: added the payment relation link to the header, thanks to @voxpelli.

In this article I will show how to add Flattr to your Octopress blog and feed.

First of all, add your Flattr user name (also known as your user id) to the configuration:

_config.yml
# Flattr
flattr_user: YourFlattrName

To add a Flattr button to the sharing section of your posts, add this template:

source/_includes/post/flattr_button.html
<a class="FlattrButton" style="display:none;"
    title="{{ page.title }}"
    data-flattr-uid="{{ site.flattr_user }}"
    data-flattr-tags="{{ page.categories | join: "," }}"
    data-flattr-button="compact"
    data-flattr-category="text"
    href="{{ site.url }}{{ page.url }}">
    {% if page.description %}{{ page.description }}{% else %}{{page.content | truncate: 500}}{% endif %}
</a>

and add the following JavaScript to your custom head.html:

source/_includes/custom/head.html
{% if site.flattr_user %}
<script type="text/javascript">
/* <![CDATA[ */
    (function() {
        var s = document.createElement('script'), t = document.getElementsByTagName('script')[0];
        s.type = 'text/javascript';
        s.async = true;
        s.src = '//api.flattr.com/js/0.6/load.js?mode=auto';
        t.parentNode.insertBefore(s, t);
    })();
/* ]]> */
</script>
{% endif %}

Now include it in your sharing template:

source/_includes/post/sharing.html
<div class="share">
    {% if site.flattr_user %}
    {% include post/flattr_button.html %}
    {% endif %}
    ...
</div>

The result is a compact Flattr button next to the other sharing links of each post.

To make Flattr discoverable by programs (feed readers, podcatchers, browser extensions, …), a payment relation link is needed in the HTML head as well as in the Atom feed.

First add this (lengthy) template…

source/_includes/flattr_param.html
{% if post %}
{% assign item = post %}
{% else %}
{% assign item = page %}
{% endif %}

{% capture flattr_url %}{{ site.url }}{{ item.url }}{% endcapture %}

{% capture flattr_title %}{% if item.title %}{{ item.title }}{% else %}{{ site.title }}{% endif %}{% endcapture %}

{% capture flattr_description %}{% if item.description %}{{ item.description }}{% else %}{{ site.description }}{% endif %}{% endcapture %}

{% capture flattr_param %}url={{ flattr_url | cgi_escape }}&amp;user_id={{site.flattr_user | cgi_escape }}&amp;title={{ flattr_title | cgi_escape }}&amp;category=text&amp;description={{ flattr_description | truncate: 1000 | cgi_escape }}&amp;tags={{ item.categories | join: "," | cgi_escape }}{% endcapture %}

… then include it in your feed …

source/atom.xml
---
layout: null
---
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title><![CDATA[{{ site.title }}]]></title>

  ...

  {% for post in site.posts limit: 20 %}
  <entry>
    <title type="html"><![CDATA[{{ post.title | cdata_escape }}]]></title>
    <link href="{{ site.url }}{{ post.url }}"/>
    <updated>{{ post.date | date_to_xmlschema }}</updated>
    <id>{{ site.url }}{{ post.id }}</id>
    {% if site.flattr_user %}
    {% include flattr_param.html %}
    <link rel="payment" href="https://flattr.com/submit/auto?{{ flattr_param }}" type="text/html" />
    {% endif %}
    <content type="html"><![CDATA[
      {{ post.content | expand_urls: site.url | cdata_escape }}
    ]]></content>
  </entry>
  {% endfor %}
</feed>

and in your head template:

source/_includes/custom/head.html
{% if site.flattr_user %}
<script type="text/javascript">
/* <![CDATA[ */
    (function() {
        var s = document.createElement('script'), t = document.getElementsByTagName('script')[0];
        s.type = 'text/javascript';
        s.async = true;
        s.src = '//api.flattr.com/js/0.6/load.js?mode=auto';
        t.parentNode.insertBefore(s, t);
    })();
/* ]]> */
</script>

{% include flattr_param.html %}
<link rel="payment" href="https://flattr.com/submit/auto?{{ flattr_param }}" type="text/html" />
{% endif %}

Because not all feed readers support this feature (yet), you can add a dedicated Flattr link.

Therefore, create a new template:

source/_includes/flattr_feed_button.html
{% include flattr_param.html %}
<a href="https://flattr.com/submit/auto?{{ flattr_param }}">
      <img src="https://api.flattr.com/button/flattr-badge-large.png"
           alt="Flattr this"/>
</a>

Compared to the other button, this one does not require JavaScript, which isn't always available in feed readers.

Finally, add it to your feed template:

source/atom.xml
---
layout: null
---
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title><![CDATA[{{ site.title }}]]></title>

  ...

  {% for post in site.posts limit: 20 %}
  <entry>
    <title type="html"><![CDATA[{{ post.title | cdata_escape }}]]></title>
    <link href="{{ site.url }}{{ post.url }}"/>
    <updated>{{ post.date | date_to_xmlschema }}</updated>
    <id>{{ site.url }}{{ post.id }}</id>
    {% if site.flattr_user %}
    {% include flattr_param.html %}
    <link rel="payment" href="https://flattr.com/submit/auto?{{ flattr_param }}" type="text/html" />
    {% endif %}
    <content type="html"><![CDATA[
      {{ post.content | expand_urls: site.url | cdata_escape }}
      {% if site.flattr_user %} {% include flattr_feed_button.html %} {% endif %}
    ]]></content>
  </entry>
  {% endfor %}
</feed>

This will add a flattr button to each entry in your feed.


That's all, folks! I hope you get rich from your Flattr income.

Pubsubhubbub With Octopress

In this article I explain how to set up Octopress with PubSubHubbub to get push-enabled feeds. In my example I use Superfeedr, which is free to use.

After you sign up for a hub, in my case higgsboson.superfeedr.com, you have to add a hub reference to your Atom feed.

_config.yml
# ....

# pubsubhubbub
hub_url: http://higgsboson.superfeedr.com/ # <--- replace this with your hub

Insert this line:

{% if site.hub_url %}<link href="{{ site.hub_url }}" rel="hub"/>{% endif %}

into source/atom.xml, so that it looks like this:

source/atom.xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

  <title><![CDATA[{{ site.title }}]]></title>
  <link href="{{ site.url }}/atom.xml" rel="self"/>
  <link href="{{ site.url }}/"/>
  {% if site.hub_url %}<link href="{{ site.hub_url }}" rel="hub"/>{% endif %}
  <updated>{{ site.time | date_to_xmlschema }}</updated>
  <id>{{ site.url }}/</id>
  <author>
    <name><![CDATA[{{ site.author | strip_html }}]]></name>
    {% if site.email %}<email><![CDATA[{{ site.email }}]]></email>{% endif %}
  </author>
  <generator uri="http://octopress.org/">Octopress</generator>

  {% for post in site.posts limit: 20 %}
  <entry>
    <title type="html"><![CDATA[{{ post.title | cdata_escape }}]]></title>
    <link href="{{ site.url }}{{ post.url }}"/>
    <updated>{{ post.date | date_to_xmlschema }}</updated>
    <id>{{ site.url }}{{ post.id }}</id>
    <content type="html"><![CDATA[{{ post.content | expand_urls: site.url | cdata_escape }}]]></content>
  </entry>
  {% endfor %}
</feed>

To push out updates, you have to ping your hub. This is easily done in your deploy rake task.

Add these lines to the end of your deploy task in your Rakefile:

require 'net/http'
require 'uri'
hub_url = "http://higgsboson.superfeedr.com/" # <--- replace this with your hub
atom_url = "http://blog.higgsboson.tk/atom.xml" # <--- replace this with your full feed url
resp, data = Net::HTTP.post_form(URI.parse(hub_url),
    {'hub.mode' => 'publish',
    'hub.url' => atom_url})
raise "!! Hub notification error: #{resp.code} #{resp.msg}, #{data}" unless resp.code == "204"
puts "## Notified hub (" + hub_url + ") that feed #{atom_url} has been updated"

So you end up with something like this:

Rakefile
desc "Default deploy task"
task :deploy do
  # Check if preview posts exist, which should not be published
  if File.exists?(".preview-mode")
    puts "## Found posts in preview mode, regenerating files ..."
    File.delete(".preview-mode")
    Rake::Task[:generate].execute
  end

  Rake::Task[:copydot].invoke(source_dir, public_dir)
  Rake::Task["#{deploy_default}"].execute

  require 'net/http'
  require 'uri'
  hub_url = "http://higgsboson.superfeedr.com/" # <--- replace this with your hub
  atom_url = "http://blog.higgsboson.tk/atom.xml" # <--- replace this with your full feed url
  resp, data = Net::HTTP.post_form(URI.parse(hub_url),
                                   {'hub.mode' => 'publish',
                                    'hub.url' => atom_url})
  raise "!! Hub notification error: #{resp.code} #{resp.msg}, #{data}" unless resp.code == "204"
  puts "## Notified hub (" + hub_url + ") that feed #{atom_url} has been updated"
end

Now, whenever you run rake deploy, it will automatically notify your hub.
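
You can also trigger the same notification manually with curl, which is handy for checking that the hub accepts it; the rake task above expects a 204 response on success:

# replace the hub and feed URLs with your own
curl -s -o /dev/null -w "%{http_code}\n" \
  -d "hub.mode=publish" \
  -d "hub.url=http://blog.higgsboson.tk/atom.xml" \
  http://higgsboson.superfeedr.com/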

If you have a Jabber or Google Talk account, you can easily verify your setup by adding push-bot to your contact list and subscribing to your feed.

Icinga-Web and Pnp4nagios With Nginx

In this article I will show my nginx configuration for the Icinga web interface. At the time of writing I installed version 1.8 on Ubuntu 12.04 using this PPA:

$ sudo add-apt-repository ppa:formorer/icinga
$ sudo add-apt-repository ppa:formorer/icinga-web
$ sudo apt-get update
# without --no-install-recommends, it will try to install apache
$ sudo apt-get --no-install-recommends install icinga-web
$ sudo apt-get install icinga-web-pnp # optional: for pnp4nagios
$ sudo apt-get install nginx php5-fpm # if not already installed

For PHP I just use php-fpm without any special configuration. If you installed Icinga from source, you have to change the roots to match your installation path (e.g. /usr/local/icinga-web/).

nginx.conf
upstream fpm {
    server unix:/var/run/php5-fpm.sock;
}

server {
    listen 80;
    listen 443 ssl;
    # FIXME
    server_name icinga.yourdomain.tld;

    access_log /var/log/nginx/icinga.access.log;
    error_log /var/log/nginx/icinga.error.log;
    # FIXME
    ssl_certificate /etc/ssl/private/icinga.yourdomain.tld.crt;
    ssl_certificate_key /etc/ssl/private/icinga.yourdomain.tld.pem;

    # Security - Basic configuration
    location = /favicon.ico {
      log_not_found off;
      access_log off;
      expires max;
    }

    location = /robots.txt {
      allow all;
      log_not_found off;
      access_log off;
    }

    # Deny access to hidden files
    location ~ /\. {
      deny all;
      access_log off;
      log_not_found off;
    }

    root /usr/share/icinga-web/pub;

    location /icinga-web/styles {
      alias /usr/share/icinga-web/pub/styles;
    }

    location /icinga-web/images {
      alias /usr/share/icinga-web/pub/images;
    }

    location /icinga-web/js {
      alias /usr/share/icinga-web/lib;
    }
    location /icinga-web/modules {
      rewrite ^/icinga-web/(.*)$ /index.php?/$1 last;
    }
    location /icinga-web/web {
      rewrite ^/icinga-web/(.*)$ /index.php?/$1 last;
    }

    #>>> configuration for pnp4nagios
    location /pnp4nagios {
      alias /usr/share/pnp4nagios/html;
    }

    location ~ ^(/pnp4nagios.*\.php)(.*)$ {
      root /usr/share/pnp4nagios/html;
      include fastcgi_params;
      fastcgi_split_path_info ^(.+\.php)(.*)$;
      fastcgi_param PATH_INFO $fastcgi_path_info;

      fastcgi_param SCRIPT_FILENAME $document_root/index.php;
      fastcgi_pass fpm;
    }
    #<<<

    location / {
      root   /usr/share/icinga-web/pub;
      index index.php;
      location ~* ^/(robots.txt|static|images) {
        break;
      }

      if ($uri !~ "^/(favicon.ico|robots.txt|static|index.php)") {
        rewrite ^/([^?]*)$ /index.php?/$1 last;
      }
    }

    location ~ \.php$ {
      include /etc/nginx/fastcgi_params;

      fastcgi_split_path_info ^(/icinga-web)(/.*)$;

      fastcgi_pass fpm;
      fastcgi_index index.php;
      include /etc/nginx/fastcgi_params;
    }
}
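
On Ubuntu the packaged nginx includes /etc/nginx/sites-enabled/ from nginx.conf, so you can keep this server block in its own vhost file instead of editing nginx.conf directly. The file names below are just examples:

# put the configuration above into its own file and enable it
sudo cp icinga-web.conf /etc/nginx/sites-available/icinga-web
sudo ln -s /etc/nginx/sites-available/icinga-web /etc/nginx/sites-enabled/icinga-web
sudo nginx -t && sudo service nginx reload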

Systemd on Raspbian

As I like the stability and raw speed of systemd, I wanted to leave Debian's default init system behind and switch to systemd.

The basic installation is pretty easy:

$ apt-get install systemd

Then you need to tell the kernel to use systemd as the init system:

To do so, append init=/bin/systemd to the end of the line in /boot/cmdline.txt:

$ cat /boot/cmdline.txt
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait init=/bin/systemd

After a reboot, systemd will be used instead of the default init system.

Currently Debian's version of systemd doesn't ship many service files by default. Systemd automatically falls back to the LSB init script if a service file for a daemon is missing, so the speedup isn't as big as on other distributions such as Arch Linux or Fedora, which provide deeper integration.

To get a quick overview of which services are started natively, type the following command:

$ systemctl list-units

All units whose description contains LSB: are launched through LSB scripts.
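
To list only those units directly:

# show the services that are still started through LSB init scripts
systemctl list-units --type=service | grep 'LSB:'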

Writing your own service files is straightforward. If you add custom service files, put them in /etc/systemd/system so they will not get overwritten by updates.

For further information about systemd, I recommend the great Arch Linux wiki article.

At the end of this article I provide some basic service files I use. I ported them over mostly from Arch Linux; in most cases I just adjusted the path of the binary to get them working (from /usr/bin to /usr/sbin, for example). It is important that the service name matches the name of the init script, so that systemd uses the unit instead. This does not work in all cases, for example dhcpcd, whose unit name contains the specific network device (like dhcpcd@eth0). In that case you have to remove the original service with update-rc.d and enable the service file with systemctl enable, as shown below.
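
A sketch of that dhcpcd case (the init script name may differ on your system):

# remove the sysvinit service so it no longer conflicts with the unit
update-rc.d dhcpcd remove
# enable and start the systemd unit for the desired interface instead
systemctl enable dhcpcd@eth0.service
systemctl start dhcpcd@eth0.service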

Also available as a gist:

/etc/systemd/system/dhcpcd@.service
# IMPORTANT: only works with dhcpcd5 not the old dhcpcd3!
[Unit]
Description=dhcpcd on %I
Wants=network.target
Before=network.target

[Service]
Type=forking
PIDFile=/run/dhcpcd-%I.pid
ExecStart=/sbin/dhcpcd -A -q -w %I
ExecStop=/sbin/dhcpcd -k %I

[Install]
Alias=multi-user.target.wants/dhcpcd@eth0.service
/etc/systemd/system/monit.service
[Unit]
Description=Pro-active monitoring utility for unix systems
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/monit -I
ExecStop=/usr/bin/monit quit
ExecReload=/usr/bin/monit reload

[Install]
WantedBy=multi-user.target
/etc/systemd/system/ntp.service
[Unit]
Description=Network Time Service
After=network.target nss-lookup.target

[Service]
Type=forking
PrivateTmp=true
ExecStart=/usr/sbin/ntpd -g -u ntp:ntp
ControlGroup=cpu:/

[Install]
WantedBy=multi-user.target
/etc/systemd/system/sshdgenkeys.service
[Unit]
Description=SSH Key Generation
ConditionPathExists=|!/etc/ssh/ssh_host_key
ConditionPathExists=|!/etc/ssh/ssh_host_key.pub
ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key
ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key.pub
ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key
ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key.pub
ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key
ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key.pub

[Service]
ExecStart=/usr/bin/ssh-keygen -A
Type=oneshot
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
/etc/systemd/system/ssh.socket
[Unit]
Conflicts=ssh.service

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target
/etc/systemd/system/ssh@.service
[Unit]
Description=SSH Per-Connection Server
Requires=sshdgenkeys.service
After=syslog.target
After=sshdgenkeys.service

[Service]
ExecStartPre=/bin/mkdir -m700 -p /var/run/sshd
ExecStart=-/usr/sbin/sshd -i
ExecReload=/bin/kill -HUP $MAINPID
StandardInput=socket
/etc/systemd/system/ifplugd@.service
[Unit]
Description=Daemon which acts upon network cable insertion/removal

[Service]
Type=forking
PIDFile=/run/ifplugd.%i.pid
ExecStart=/usr/sbin/ifplugd %i
SuccessExitStatus=0 1 2

[Install]
WantedBy=multi-user.target