This feed contains pages in the "debian" category.

I've been working on setting up OpenERP for my needs, and today I decided it was time to work on backing up the beast. Since I've been running bacula at home to back up my environment, it was time to tweak it so that it made reasonable backups of OpenERP too.

In the end I was able to build a really elegant solution for backing it all up. I decided to go with the bpipe plugin, which allows one to pipe program output directly to the bacula file daemon. This allowed me to do a live dump of the database with pg_dump and store it directly in the backup set without writing it to disk.
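A minimal sketch of such a FileSet (the names and paths here are illustrative, not my actual configuration; the bpipe plugin string takes the form virtual-filename:backup-command:restore-command):

```
FileSet {
  Name = "OpenERP"
  Include {
    Options {
      signature = MD5
    }
    # bpipe: back up pg_dump's stdout as /POSTGRESQL/openerp.sql,
    # and on restore feed the file back through psql
    Plugin = "bpipe:/POSTGRESQL/openerp.sql:pg_dump -U postgres openerp:psql -U postgres openerp"
  }
}
```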

Since the other examples in the Bacula wiki define methods that either use files or a FIFO to do the backup, I documented my setup there too.

The only thing that was left was to add the directories specific for OpenERP to the backup and I was all set.

Posted Mon Jun 14 21:13:05 2010 Tags: debian

Once in a while I get this urge to use SELinux on some of the servers I manage, but I almost always run into something that sets me back enough to never finish the project. This time I managed to figure out the last few glitches. In the end, SELinux still has a really steep learning curve, so it's not for the impatient. Even though enabling SELinux in Debian has become a lot easier since the first time I tried to get things running, that's still just the tip of the iceberg: in Debian it is just a matter of installing the right packages and running a few commands, but that's where the troubles start.

Most of the howtos focus on single-user or shared installations where all users are created locally. Most howtos also fail to mention that you need to relabel files in certain cases.

One of the most annoying problems I ran into was changing all non-system users away from the unconfined_u SELinux user. This is of course done like this (found here):

semanage login -m -s user_u __default__

The problem here is that it changes the mapping for existing users as well, and you start to get errors like these:

denied  { read } for  pid=32258 comm="bash" name=".profile" dev=dm-5 ino=185474 scontext=user_u:user_r:user_t:s0 tcontext=unconfined_u:object_r:unconfined_home_t:s0 tclass=file

The problem here is that the home directory for the user is still labeled for the wrong SELinux user. The fix is to relabel the home directory. Sadly this is something you just need to know; it's not explained anywhere, or at least I haven't found an explanation. Another good thing to do before you continue is to map your already existing user to staff_u, which has somewhat more relaxed security controls and lets you change security roles (details here).

semanage login -m -s staff_u myuser
fixfiles relabel /home/myuser

This gets you a semi-working setup. The next problem usually is that some daemons are denied access to parts of your system; for me, this was postfix trying to access my home directory, which is mounted over NFS. For such cases you should persuade the maintainer of the policy package to update the global policy if it's a common use case, or you need to create a policy package of your own that allows access to the given files.

The process itself is documented in the audit2allow manual page. In general you should study the audit2why and audit2allow tools: the former tells you if there is an easier way to fix something (like enabling a boolean), and the latter will create the required policy lines. The only remaining problem is compiling the policy and loading it, which mostly comes down to finding the right tool and the right lines in the manual.
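The round trip I ended up with looks roughly like this (a sketch: the module name mylocal is made up for illustration, and the path assumes the standard auditd log location):

```shell
# First check whether an existing boolean already covers the denial:
audit2why < /var/log/audit/audit.log

# Generate and compile a local policy module from the denials
# (writes mylocal.te and mylocal.pp in the current directory):
audit2allow -M mylocal < /var/log/audit/audit.log

# Load the compiled policy package:
semodule -i mylocal.pp
```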

In general the SELinux learning curve is way too steep. It's a system that works pretty well once you learn all the tricks and fully understand the toolset. The community should continue working on lowering the bar for new users; there have been some major improvements since I first tried SELinux, so things are moving in the right direction.

Posted Mon Jul 13 00:04:02 2009 Tags: debian

Since I enabled comments in this blog, I finally needed to configure a split DNS for my network.

There are various reasons why one needs a split DNS and as it's usually pointed out, the reasons are usually non-technical. In my case the reasons are technical: I have a NAT in my local network that allows me to host this website locally. What causes problems is that the domain name ressukka.net points to the external IP address and that doesn't work from the inside. So split DNS it is.

There are various ways of building a split DNS: you can use the views feature in bind9, or you can set up two separate DNS servers that provide different information (and point your local resolver at the internal server). The latter is more secure if the internal zone is sensitive.

I decided to use a hybrid solution. I already knew that PowerDNS Recursor was capable of serving authoritative zones (think pre-cached), so I decided to leverage that. Setting this up turned out to be simpler than I expected.

First I made a copy of the existing zone and edited it to fit my needs. I changed the IP address of ressukka.net to point to the IP address on the local network. I also adjusted some other entries that pointed to the local network.

Next I modified bind to listen on the external IP address only. This can be accomplished by adding listen-on { 1.2.3.4; }; to the options section of the configuration. I also disabled recursion by adding recursion no;, which forces bind to act as authoritative only.

Then I installed PowerDNS Recursor (the pdns-recursor package in Debian), configured it to listen on the internal address only (local-address=10.0.0.1) and added the pre-cached zone to the configuration with auth-zones=ressukka.net=/path/to/internal-zone.
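Put together, the relevant fragments look something like this (a sketch: 1.2.3.4 and 10.0.0.1 are the example addresses from above, and the zone file path is a placeholder):

```
# bind (external view, authoritative only), in the options section:
options {
    listen-on { 1.2.3.4; };
    recursion no;
};

# pdns-recursor (internal resolver), in recursor.conf:
local-address=10.0.0.1
auth-zones=ressukka.net=/path/to/internal-zone
```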

Now, after restarting both daemons, I had a working split DNS with minimal configuration. I was also able to change the external DNS to authoritative only mode, which is a good idea in any case.

Posted Mon Mar 23 22:38:37 2009 Tags: debian

For some time I've suffered from the infamous clocksource problem with all Linux hosts that aren't running the Citrix-provided kernels. I'm a bit old-fashioned and want to run Debian-provided kernels instead of the Citrix ones, mostly because the Debian kernel receives security updates.

During the fight with my own server last night, it finally dawned on me.

The clocksource problem appears after you suspend a Linux virtual machine, and the kernel in the virtual machine starts spewing this:

Mar  5 09:24:17 co kernel: [461562.007153] clocksource/0: Time went backwards: ret=f03d318c7db9 delta=-200458290723043 shadow=f03d1d566f4a offset=143675d9

I've been trying to figure out what is different with Citrix and Debian kernels, because the problem doesn't occur with the Citrix provided kernel.

The final hint to solving this problem came from the Debian wiki. The same issue is mentioned there, but the workaround is not something I like: I prefer making sure that the host server has the correct time and the virtual machine simply follows it.

But the real clue was the clocksource line. It turns out that the Citrix kernel uses jiffies as the clocksource by default, while Debian uses the xen clocksource. It would make sense that the xen clocksource is more accurate, since it's native to the hypervisor.

So just running this on the domU fixes the problem:

echo "jiffies" > /sys/devices/system/clocksource/clocksource0/current_clocksource

This way there is no need to decouple the clock from the host, which is exactly what I needed. To make the change permanent, you need to add clocksource=jiffies to the boot parameters of your domU kernel.

You can do this by modifying the grub configuration: add clocksource=jiffies to the kopt line and run update-grub. Or you can use XenCenter and add clocksource=jiffies to the virtual machine's boot parameters.
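For grub-legacy the kopt edit is a one-liner; here's a sketch run against a sample line instead of the real /boot/grub/menu.lst (the root device in the sample is made up; remember to run update-grub afterwards):

```shell
# Append clocksource=jiffies to the commented kopt template line:
printf '# kopt=root=/dev/xvda1 ro\n' |
  sed 's/^# kopt=.*/& clocksource=jiffies/'
# prints: # kopt=root=/dev/xvda1 ro clocksource=jiffies
```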

It's also worth noting that this problem applies to plain vanilla Debian installations as well, so reading that whole wiki page is a good idea.

Posted Thu Mar 5 17:26:28 2009 Tags: debian

I finally decided that it was time to upgrade my Xen installation. It used to run etch with backported Xen, because the etch version was increasingly difficult to work with.

I also acknowledge that some of the issues I've been having were caused by yours truly, but even so, the Debian Xen installation is way too fragile for my taste. I've already considered installing XenServer Express locally and running the hosts on it. The big drawback has been that XenCenter (the tool used to manage XenServer) is Windows-only and doesn't work with wine.

So you can imagine my desperation...

Anyway, the latest upgrade from etch to lenny was painful as usual. The first part went smoothly: a bit of sed magic on sources.list and a few upgrade commands (carefully picking the Xen packages out of the upgrade set). In the end I had a working lenny installation with backported Xen.

Next I made sure that there was nothing major going on in my network (one of the virtual machines acts as my local firewall) and took a deep breath before upgrading the rest of the packages. I knew to be careful with the xendomains script, which has a habit of restoring my virtual machines in a broken state after the host reboots, so I had always ended up restarting them anyway.

I carefully cleared XENDOMAINS_AUTO and set XENDOMAINS_RESTORE to false in /etc/default/xendomains so that the virtual machines would be saved but not restored or restarted on reboot.
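For reference, the relevant part of /etc/default/xendomains looked like this after the change (a sketch; the other settings in the file were left at their defaults):

```
# Don't autostart any domains on boot:
XENDOMAINS_AUTO=""
# Domains are still saved on shutdown, but never restored on boot:
XENDOMAINS_RESTORE=false
```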

After the normal pre-boot checks I went for it.

Oddly enough everything worked normally and the system came up after a bit of waiting. I checked the bridges and everything appeared normal, so it was time to try and restore a single domain to see that everything actually did work as planned.

Hydrogen:~# xm restore /var/lib/xen/save/Aluminium
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.

Oof. Googling for the issue revealed others who had suffered from the same problem on various platforms, with the problems caused by different things. One would assume the problem is in the vif-bridge script, which the xend-config.sxp file names as the script that brings up the vif, but after many hours of trial and error and pointless googling (over a GPRS connection) I couldn't find any solution to the problem. It was time to call it a day (it was almost 3 am already...)

During the night I had a new idea about the possible cause: what if the problem isn't in xend, but somewhere else? I fired up udevadm monitor to see what udev saw, and it wasn't much. I'm not an expert with udev, but from previous encounters I had a vague feeling that there were supposed to be more events flying around.

I wasn't able to pinpoint what was wrong, so I decided to purge xen-utils, of which I had two versions installed: 3.2-1 and 3.0.2. I also removed everything related to xenstore. After reinstalling the current versions and restoring my configuration files, the first host came up just fine.

I still had problems resuming the virtual machines and I ended up rebooting them again, which was nothing new, but at least they were running again.

In the end I don't know what the actual cause was for udev not handling the devices properly, but I'm happy to have them all running again. And I learned a valuable lesson from all this: udev is an important part of Xen, so make sure it works properly.

Posted Thu Mar 5 17:11:39 2009 Tags: Debian

I've been bitten by grub upgrades and installations on Debian-family domU servers. Apparently there are others out there who have been bitten too.

The bug itself is caused by a missing device entry, probably because of udev. Anyway, grub-probe tries to discover the root device so that update-grub can properly generate a menu.lst. In certain scenarios the root device itself doesn't exist. Here is an example from a configuration generated with xen-tools:

Hydrogen:/etc/xen# grep phy Neon.cfg 
disk    = [ 'phy:Local1/Neon-disk,sda1,w', 'phy:Local1/Neon-swap,sda2,w' ]

While this is a valid configuration, the device sda doesn't exist within the virtual machine. As a workaround, the above blog entry suggests manually adding the sda device and a corresponding entry in device.map.

This solution does work, but it will fail with the next upgrade. The proper solution is to adjust the Xen configuration so that the root device is created. And since Xen uses a different naming scheme for its devices, we can upgrade to that too. So the above example becomes:

Hydrogen:/etc/xen# grep phy Neon.cfg 
disk    = [ 'phy:Local1/Neon-disk,xvda,w', 'phy:Local1/Neon-swap,xvdb,w' ]

You also need to adjust the existing grub configuration and fstab within the domU. It's a bit more work and requires an additional reboot, but it gives you peace of mind that the next upgrade will work without a hitch.
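The fstab side of the change boils down to a rename; here's a sketch run against sample lines rather than the real /etc/fstab (the device names follow the example above, the mount options are made up):

```shell
printf '/dev/sda1 / ext3 errors=remount-ro 0 1\n/dev/sda2 none swap sw 0 0\n' |
  sed -e 's,/dev/sda1,/dev/xvda,' -e 's,/dev/sda2,/dev/xvdb,'
# prints the same lines with /dev/xvda and /dev/xvdb substituted
```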

Posted Tue Feb 17 07:57:53 2009 Tags: debian

As an obligatory note: Debian Lenny was released earlier today, which means that sysadmins all over the world are starting to upgrade their servers.

There is an oddly little-known tool that every sysadmin should install on at least one server they maintain, called apt-listchanges. It lists the changes made to packages since the currently installed version. Sure, that information can be overwhelming on major upgrades, but what is useful even then is its ability to present NEWS files in the same way.

NEWS files contain important information about the package in question. For example, a maintainer can list known upgrade problems there, as is done in the lighttpd package, or changes in package-specific default behaviour, as in the vim package.
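Trying it out is a sketch away (the .deb glob assumes the packages have already been downloaded into the apt cache):

```shell
apt-get install apt-listchanges
# Preview only the NEWS entries for downloaded packages before upgrading:
apt-listchanges --which=news /var/cache/apt/archives/*.deb
```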

Sure, you will notice these in time, but it's nice to get a heads up before a problem bites you.

Posted Sun Feb 15 22:35:57 2009 Tags: debian

Since I keep ending up in situations where I need to clean the postfix queue of mails sent by a single host and always forget the command, I'm posting it here. Maybe someone else will find it useful as well.

To begin with, you need to determine the IP address of the culprit you want to eliminate. How you do this is up to you; grepping logs or examining the files in the queue both work. But for some reason there doesn't appear to be a good tool for getting statistics on sending IP addresses, only on origin and destination domains.

Once you have determined the IP address you want to purge, you can use the following spell. You might have to repeat the same line for the active and incoming queues as well, but deferred is usually the queue where I have the most mail.

grep -lrE '[^0-9]10\.20\.30\.4[^0-9]' /var/spool/postfix/deferred | xargs -r -n1 basename | postsuper -d -

It's important that the dots in the IP address are escaped, because an unescaped dot matches any character; in the worst case the pattern will match a lot of wrong IP addresses. The other important bits are the '[^0-9]' groups at both ends of the pattern, which make the pattern match only that particular IP address. Without that extra limitation, 1.1.1.1 would match any address that merely contains it as a substring. For example, 211.1.1.154 would be a valid match.
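A quick demonstration of the difference (the "queue file" lines are made up):

```shell
printf 'client=unknown[1.1.1.1]\nclient=unknown[211.1.1.154]\n' > /tmp/qdemo
grep -cE '1\.1\.1\.1' /tmp/qdemo              # 2 - also matches inside 211.1.1.154
grep -cE '[^0-9]1\.1\.1\.1[^0-9]' /tmp/qdemo  # 1 - only the exact address
```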

The other important, yet oddly unknown, bit is the postsuper command. Postsuper modifies the queue, and the -d flag makes it delete messages from the queue by queue ID. For some reason I keep seeing all sorts of find -exec rm {} spells all over, which isn't really that nice for the daemon itself.

So here it is, one more tidbit I've been meaning to write up for quite some time now. Enjoy!

Posted Sun Jan 25 11:23:36 2009 Tags: debian

"You never call, you never write. I hardly know you anymore."

Yes, I've been meaning to write about several things. For some time now I've been a happy VIM user, and a while back I ran into a blog post where someone mentioned a new feature they had found in VIM, which got me to explore the vim-scripts package.

There are a lot of scripts out there that extend VIM far beyond what it can do by default, and it's quite powerful even without the scripts. One of the neat little scripts I decided to install by default was surround; it allows one to easily replace surrounding parentheses, tags or quotation marks.

There are a lot of scripts in the vim-scripts package, but it's not always clear how to enable them. That's where vim-addon-manager comes into play: it provides a vim-addons command that allows you to easily enable or disable the scripts.

I'm still trying to grasp the full potential of all the new commands available, but it certainly appears that I'll be having even more fun writing stuff. It's kind of odd: at first, when you start to use vi-like editors, you struggle, but in the end it's just such a convenient way of editing files that it really does grow on you.

Posted Sat Jan 24 19:50:34 2009 Tags: debian

Some people consider the gnome usability guidelines a nuisance and some consider certain applications way too simplistic. While it is really hard to get the usability right, it's well worth it.

We need to keep in mind that as computer-oriented people we tend to see things differently: things that are simple to us aren't really that simple to "normal people". But one of the simple things we can do to ensure that the software we write serves the people it's designed for is to remove all unneeded pop-ups and questions.

A good way to detect these is to ask yourself why a user would choose anything other than the most logical option. It's kind of hard to explain, so let's pick an example:

Firefox is updated, and the first question it usually asks after the upgrade is about incompatible extensions. The user is presented with two choices: check for updates, or cancel. Now, we are dealing with an internet browser, so the user should be connected to the internet and there is no problem with checking for updates; we can rule out that scenario. The only other scenario I can come up with is a developer who doesn't want to update some particular extension.

So I can't come up with any reasonable scenario in which someone would want to select anything other than the main option of upgrading the extensions. Why not just leave out the question and do the upgrade automatically? If you wish to be transparent, you can show the user that you are doing the upgrade, or you can do like some applications and just do it without bothering the user with options.

I know I sound like a Google fanboy, but Google generally gets this right: their applications skip all upgrade-related notices and just do the upgrade. The regular user doesn't want to upgrade, because users have been scared with incompatibility notices and upgrade checklists for so long. Just going ahead with the upgrade in complete silence keeps their software up to date as well.

Another example comes from a few years back: the Ubuntu installation. Back in the day Debian was working on Debian Installer, which is also used as the main installer on the Ubuntu alternative installation media. Debian Installer is capable of doing most things silently, but in Debian it asks a lot of questions by default. That doesn't matter much, since most people who install Debian can be categorized as developers. But in my opinion, the thing that made Ubuntu a success was that it doesn't ask questions that can be answered without asking the user.

So, back to usability. There are basically two camps: the "normal" users and the developers. Developers want and need to see a lot of the backend behaviour, just to debug problems. Currently a lot of open source software is focused on developers, even as it gains ground among the "normal" population as well. We should start focusing on the users for a change.

Posted Sun Jan 4 13:48:56 2009 Tags: debian