27 November 2014
ownCloud’s SQLite database is only meant for small use cases; in larger environments it’s better to use a MySQL database. ownCloud 7 ships a very nice little tool that converts the existing SQLite database to MySQL and adjusts the ownCloud configuration.
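As far as I know the tool in question is the db:convert-type subcommand of occ; a rough sketch of the call (the ownCloud path, database name, user and host are placeholders):

# sketch, assuming ownCloud lives in /var/www/owncloud and the web user is www-data
cd /var/www/owncloud
sudo -u www-data php occ db:convert-type --all-apps mysql oc_admin 127.0.0.1 owncloud_db

It asks for the MySQL password, copies the tables over and rewrites config.php to point to the new database.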
26 November 2014
Today I installed ZFS on Debian. The installation was straightforward, but later I had some problems with my pool: after every single reboot I had to import the ZFS pool manually (“zpool import -a”). I searched the internet for a proper solution, but all I found was advice like “use ZFS_MOUNT=yes in /etc/default/zfs”. Even with this option the pools just aren’t imported automatically.
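Two things worth trying (a sketch; the pool name “tank” and the use of /etc/rc.local are just assumptions about the setup):

# make sure the init script has a cache file to import pools from
zpool set cachefile=/etc/zfs/zpool.cache tank

# blunt fallback: import everything at the end of the boot process
echo "zpool import -a" >> /etc/rc.local

The first variant is the cleaner one, since the Debian init script imports pools from the cache file; the rc.local line is only a workaround until the real cause is found.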
24 November 2014
dr@tardis> for i in `ls -d *`; do du -hs $i; done
4,0K cron-job.txt
48K etc
4,0K install.sh
4,0K README.Debian.TXT
4,0K README.TXT
16K usr
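By the way, du can do the same without the loop (and without parsing ls):

dr@tardis> du -hs *

With -s and several arguments, du prints one summary line per argument anyway.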
22 November 2014
At the moment there is a great hype about “docker”, even though the technique isn’t new. So what’s so great about it?
I think it got popular because it’s very easy to build system containers with it. There is also a repository where you can find ready-to-use environments for many use cases. With a few commands you can have an isolated system with all the configs you need. That’s very nice. Compared to virtual machines, those containers need a minimum of resources, because they share them with their host machine and don’t run their own kernel. And they don’t need to boot; they just run when they are started. Docker also introduced a kind of script called “Dockerfiles”. Those are recipes for how images should be configured, customized and built.
When I first heard about Docker I was only thinking of developers. Now I realize how useful this tool can be for administrators too: you can use one server and give every customer his own “linux+apache2+mysql” system, instead of virtual hosting or chroot-ing.
In this article I want to describe how to create a “Debian Wheezy” image using “debootstrap”. This Debian Wheezy image will be the base for our “Dockerfile”. The Dockerfile itself will create another image called “hoti/drupal-dev”, in which we will pre-install an SSH server, mysql-server, apache2, php5, drush and Drupal.
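The rough workflow looks like this (a sketch; paths, the mirror and the base image name are just examples):

# build a minimal Debian Wheezy root filesystem with debootstrap
debootstrap wheezy /tmp/wheezy-chroot http://ftp.debian.org/debian

# pack it up and import it as a Docker base image
tar -C /tmp/wheezy-chroot -c . | docker import - hoti/debian-wheezy

# a Dockerfile can then build on top of that base image, e.g.:
# FROM hoti/debian-wheezy
# RUN apt-get update && apt-get -y install openssh-server mysql-server apache2 php5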
21 November 2014
It happens many times that a server runs for months or years, then the system administrator has to reboot it and it doesn’t come up again because the power supply (or other hardware) is damaged. At this point someone says, with the voice of a teacher: “Never change a running system”. It’s the same with software updates: the system runs for many years without any updates, then we have to upgrade it and nothing works. And again, someone from behind says, with the voice of a teacher: “Never change a running system”. Those are the days where my left eye starts to twitch and I tend to go crazy…
19 November 2014
Assume we have a Nagios logfile (nagios.log) like this:
[1416415259] PASSIVE HOST CHECK: server1;0;PING OK - Packet loss = 0%, RTA = 0.04 ms
[1416415259] PASSIVE HOST CHECK: server2;0;PING OK - Packet loss = 0%, RTA = 0.04 ms
[1416415259] EXTERNAL COMMAND: PROCESS_HOST_CHECK_RESULT;server3;0;PING OK - Packet loss = 0%, RTA = 0.15 ms
Then we can convert the UNIX timestamps into readable dates using the following command:
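For example with a small perl one-liner (just one possible way to do it):

perl -pe 's/^\[(\d+)\]/"[" . localtime($1) . "]"/e' nagios.log

The /e modifier evaluates the replacement as Perl code, so every leading timestamp is replaced by its localtime() representation.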
19 November 2014
Removing empty lines with sed:
sed '/^$/d' myFile > newFile
19 November 2014
Here are some examples of how to loop over a range of numbers using a “for-loop”:
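For instance, all three of these bash variants print the numbers 1 to 10:

# brace expansion
for i in {1..10}; do echo $i; done

# C-style loop
for ((i=1; i<=10; i++)); do echo $i; done

# seq, useful when the range comes from variables
for i in `seq 1 10`; do echo $i; done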
19 November 2014
Nagios is an awesome monitoring tool. I do my best to check as many services as possible with Nagios. Here I want to explain how I check whether updates for the Horde framework exist…
Although Horde uses pear for update management, I don’t want to check pear directly (because of permissions, and of course I don’t want to create too much traffic on the Horde repository). That’s why I use the following cronjob, which checks for upgrades once an hour:
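A sketch of what such a cronjob can look like (the cron.d filename and the cache file path are just examples; the Nagios plugin then only has to read the cached file):

# /etc/cron.d/horde-updates
0 * * * * root pear channel-update pear.horde.org >/dev/null 2>&1; pear list-upgrades > /var/cache/horde-upgrades.txt 2>&1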
12 November 2014
I want to have control over Debian updates, but I don’t want to run them manually. So I decided to do it “half-automatically”: they run automatically until user input is needed. The whole process is recorded in logfiles, and after the updates are done the script sends out emails.
This is the parser for the summary email. It works for English and German environments:
#!/usr/bin/perl
use strict;
use warnings;

# Parse an apt-get upgrade log and print the packages that will be upgraded.
# The exit code equals the number of packages (0 = nothing to upgrade).
if ($#ARGV != 0)
{
    die "usage: $0 <logfile>\n";
}

my $logfile  = $ARGV[0];
my $upgrades = "";

open(LOG, "<", $logfile) or die "can't open logfile: $logfile";
while (my $line = <LOG>)
{
    # English apt-get output
    if ($line =~ /The following packages will be upgraded/)
    {
        $line = <LOG>;
        while ($line !~ /\d+ upgraded, \d+ newly installed, \d+ to remove and \d+ not upgraded/)
        {
            $upgrades = $upgrades . " " . $line;
            $line = <LOG>;
        }
    }
    # German apt-get output
    if ($line =~ /Die folgenden Pakete werden aktualisiert/)
    {
        $line = <LOG>;
        while ($line !~ /\d+ aktualisiert, \d+ neu installiert, \d+ zu entfernen und \d+ nicht aktualisiert/)
        {
            $upgrades = $upgrades . " " . $line;
            $line = <LOG>;
        }
    }
}
close(LOG);

# Normalize whitespace and bail out if no packages were found.
$upgrades =~ s/^\s+//g;
$upgrades =~ s/\r+//g;
$upgrades =~ s/\n+//g;
if ($upgrades =~ /^\s*$/)
{
    exit 0;
}

my @arr = split(/\s+/, $upgrades);
print "$upgrades \n";
exit $#arr + 1;
And this is our update-script:
#!/bin/bash
# Half-automatic apt upgrades: run "apt-get upgrade" interactively on a list of
# hosts, record every session with script(1), and mail a summary per customer.

LOGDIR=/opt/update-logs
PARSER=/opt/bin/aptlogparser.pl

# Load the ssh key once so the loops below don't ask for a passphrase per host.
eval `ssh-agent -s`
ssh-add
export TERM="rxvt"

function update_customer
{
    for host in $HOSTS
    do
        DAT=`date +%F-%R`
        echo "KUNDE: $KUNDE"
        # Record the whole interactive session in a typescript file.
        script -c "ssh -l root $host \"export TERM=rxvt; export DEBIAN_FRONTEND=readline; echo $TERM; hostname; apt-get update; apt-get upgrade\""
        test -e typescript && mv typescript $LOGDIR/$host-$DAT.log
        # The parser prints the upgraded packages and exits with their count.
        UPDATES=`$PARSER $LOGDIR/$host-$DAT.log`
        if [ $? -ne 0 ]
        then
            echo "$host: $UPDATES" >> ${KUNDE}_EMAIL.txt
            echo "" >> ${KUNDE}_EMAIL.txt
        fi
    done

    DAT=`date +%F`
    if [ -e ${KUNDE}_EMAIL.txt ]
    then
        cat ${KUNDE}_EMAIL.txt | mutt -s "Linux-Updates vom $DAT" -- $EMAIL
        rm ${KUNDE}_EMAIL.txt
    fi
}

HOSTS="websrv mailsrv"
EMAIL="bob@example.com"
KUNDE="customer1"
update_customer

HOSTS="linuxsrv1 linuxsrv2"
EMAIL="alice@example.com"
KUNDE="customer2"
update_customer