Wednesday, May 13, 2020

Mobile skin and plugin for Roundcube webmail

The "Melanie2" mobile skin and plugins make Roundcube usable on phones. They are also easier to install than a quick look at the GitHub instructions suggests.

These are the commands I needed on a Debian 10 ("Buster") machine where Roundcube 1.3.10 was installed. (The base install was done from the Debian repositories: apt install roundcube roundcube-sqlite3. Instead of SQLite, it is also possible to use PostgreSQL or MySQL with the roundcube-pgsql or roundcube-mysql packages.)

For the mobile stuff:

Create a directory to store the extra stuff and make upgrades easier:

mkdir -p /opt/roundcube-stuff
cd /opt/roundcube-stuff/

Get the files from Github:

git clone https://github.com/messagerie-melanie2/roundcube_skin_melanie2_larry_mobile
git clone https://github.com/messagerie-melanie2/roundcube_jquery_mobile
git clone https://github.com/messagerie-melanie2/roundcube_mobile

Instead of renaming and copying the directories, create symlinks and copy them to roundcube's skins and plugins folders:

ln -si $(pwd)/roundcube_skin_melanie2_larry_mobile/ melanie2_larry_mobile
ln -si $(pwd)/roundcube_jquery_mobile/              jquery_mobile
ln -si $(pwd)/roundcube_mobile                      mobile

cp -vd /opt/roundcube-stuff/melanie2_larry_mobile   /var/lib/roundcube/skins/
cp -vd /opt/roundcube-stuff/jquery_mobile           /var/lib/roundcube/plugins/
cp -vd /opt/roundcube-stuff/mobile                  /var/lib/roundcube/plugins/

Finally, add 'mobile' to the $config['plugins'] array in /etc/roundcube/config.inc.php. If doing it by hand is too much work, copy/pasting this should work:

echo 'array_push( $config["plugins"], "mobile" );' | tee -a /etc/roundcube/config.inc.php
#or:
## echo '$config["plugins"][] = "mobile";' | tee -a /etc/roundcube/config.inc.php


Saturday, May 11, 2013

Mediawiki with Postgres on Debian

A short guide to installing Mediawiki on Debian with PostgreSQL 9.1, with a fix for this error:

"Attempting to connect to database "postgres" as superuser "postgres"... error: No database connection"

Installing packages

The server is still running Debian Squeeze, but I expect it would be much the same for the new Debian Wheezy. Here I used squeeze-backports.

Add the backports repository if needed:

echo "deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free" >> /etc/apt/sources.list

Install everything:

apt-get update
apt-get -t squeeze-backports install apache2 postgresql-9.1 postgresql-contrib php5-pgsql
apt-get -t squeeze-backports install imagemagick libdbd-pg-perl
apt-get -t squeeze-backports install mediawiki

I use a separate IP for the wiki, so need to add it to the interface:

mcedit /etc/network/interfaces
# wiki on its own IP
auto eth0:3
iface eth0:3 inet static
    address 192.168.10.4
    netmask 255.255.255.0

/etc/init.d/networking restart

Apache configuration

# I use the mod_rewrite module in Apache
a2enmod rewrite

# I prefer the config file in sites-enabled
# (but it's really just a symlink to /etc/mediawiki/apache.conf):
mv /etc/apache2/conf.d/mediawiki.conf /etc/apache2/sites-enabled

My virtual host config:

<VirtualHost *:80>
    ServerName wiki.example.lan
    ServerAlias wiki.example.lan
    ServerAdmin webmaster@example.com
    DocumentRoot /docs/www-wiki

    ErrorLog /var/log/apache2/wiki-error.log
    CustomLog /var/log/apache2/wiki-access.log combined

    ServerSignature On

    Alias /icons/ "/usr/share/apache2/icons/"

    RewriteEngine On
    RewriteRule ^/w(iki)?/(.*)$  http://%{HTTP_HOST}/index.php/$2 [L,NC]

    <Directory /docs/www-wiki/>
        Options +FollowSymLinks
        AllowOverride All
        # Default is Deny. Exceptions listed below with "Allow ...":
        Order Deny,Allow
        Deny from All
        Satisfy any
        # LAN
        Allow from 192.168.10.0/24
        # VPN
        Allow from 10.0.10.0/24

# If using LDAP
#        AuthType Basic
#        AuthName "Example Wiki. Requires user name and password"
#        AuthBasicProvider ldap
#        AuthzLDAPAuthoritative on
#        AuthLDAPURL ldap://localhost:389/ou=People,dc=example,dc=lan?uid
#        AuthLDAPGroupAttribute memberUid
#        AuthLDAPGroupAttributeIsDN off
#        Require ldap-group cn=users,ou=Groups,dc=example,dc=lan
    </Directory>

    # some directories must be protected
    <Directory /docs/www-wiki/config>
        Options -FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /docs/www-wiki/upload>
        Options -FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory "/usr/share/apache2/icons">
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

Moving files

I used a directory other than the default /var/lib/mediawiki. So I had to move things over:

cp -rp /var/lib/mediawiki /docs/www-wiki

The only tricky part, with the fix:

Before starting the web configurator in http://wiki.example.lan/config/ you need to define a password for the "postgres" database user. Mediawiki will start the psql client as the www-data system user, but with the -U argument to set the user to "postgres". Even if you defined a password for the system user "postgres", this is not the password of the database user "postgres".

So you need to start psql as the postgres system user, which you can do as root using sudo -u, and then set the password inside the psql client:

sudo -u postgres psql
psql (9.1.9)
Type "help" for help.

postgres=# \password
Enter new password:
Enter it again:
postgres=# \q

If you don't do this, the Mediawiki config will end with this error:

Attempting to connect to database "postgres" as superuser "postgres"... error: No database connection

And a big pink and unhelpful error box below.

The Postgresql log (tail /var/log/postgresql/postgresql-9.1-main.log) will show:

FATAL:  password authentication failed for user "postgres"

Finally

Now you just have to move LocalSettings.php to /etc/mediawiki/.

And if you used a different install root, you have to edit it to change the MW_INSTALL_PATH:

define('MW_INSTALL_PATH','/docs/www-wiki');




Sunday, January 06, 2013

scripting disk partitioning in Linux - take 2

It is possible to use parted to script/automate disk partitioning in Linux, as described in "Command-line partitioning and formatting".

Another way is to use sgdisk from the GPT fdisk programs.

In Debian and derivatives, it can be installed with sudo apt-get install gdisk.

The current version 0.8.1 from the Ubuntu 12.04 repository would partition only the first 2 TB of a 4 TB disk, so you may need to get a more recent version from the downloads page. I got version 0.8.5 for x64, and it worked very well.

The following will create and format a single NTFS partition on an entire drive:

disk=/dev/sdb            # Make sure you got this right !!
label="My_Disk_Name"
echo "disk $disk will be completely erased."

sudo sgdisk -Z $disk
sudo sgdisk --new=0:0:-8M -t 1:0700 $disk
sudo sgdisk -p $disk
sudo mkntfs --verbose --fast --label "$label" --no-indexing --with-uuid ${disk}1

-Z removes any left-over partitions

--new=0:0:-8M creates a single partition from the start of the disk to 8MB before the end (just in case it's useful to not end on the very last sector)

-t 1:0700 sets the first partition we just created to type "Microsoft Basic Partition", which is the type we want for a simple NTFS partition. Linux would be -t 1:8300. Use sgdisk -L to get a list of partition types.
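As a concrete variant of the type codes mentioned above, here is the same single-partition recipe adapted for a Linux ext4 filesystem (type code 8300 instead of 0700). This is only a sketch: the commands are echoed instead of executed, so nothing is touched until you remove the echos and point $disk at a real device.

```shell
# Same layout as above, but typed 8300 ("Linux filesystem") and formatted
# as ext4. Commands are echoed, not run -- review them, then remove "echo".
disk=/dev/sdX            # placeholder -- replace with the real device
label="My_Linux_Disk"

echo "sudo sgdisk -Z $disk"                       # wipe leftover partition tables
echo "sudo sgdisk --new=0:0:-8M -t 1:8300 $disk"  # one Linux partition, 8M spare at the end
echo "sudo mkfs.ext4 -L $label ${disk}1"          # format it as ext4
```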

Note that for comfortable (and safer) manual partitioning, there is also cgdisk. It is like the old cfdisk, but works with new disks over 2TB.


Wednesday, January 02, 2013

Set up your own Dynamic DNS

The problem with external dynamic DNS services like dyndns.org, no-ip.com, etc. is that you constantly have to look after them. Either they are free, but expire after 1 month, and you have to go to their web site to re-activate your account; or you pay for them, but then you need to take care of the payments, update the credit card info, etc. This is all much too cumbersome for something that should be entirely automated.

If you manage your own DNS anyway, it may be simpler in the long run to set up your own dynamic DNS system.

Bind has everything needed. There is a lot of info on the Internet on how to do it, but what I found tended to be more complicated than necessary, insecure, or both. So here is how I did it on a Debian 6 ("squeeze") server.

The steps described below are: initialize variables, create a key, configure Bind, reload the server config, test, and set up the update scripts.

Initialize variables

To make it easier to copy/paste commands, we initialize a few variables

binddir="/var/cache/bind"
etcdir="/etc/bind"

(In Debian, you can use grep directory /etc/bind/named.conf.options to find the correct binddir value)

For dynamic hosts, we will use a subdomain of our main zone: .dyn.example.com.

host=myhost; zone=dyn.example.com

Create key

Most examples use the dnssec-keygen command. That creates 2 files (with ugly names): one .private and one .key (public) file. This is pointless, since the secret key is the same in both files, and the nsupdate method doesn't use a public/private key mechanism anyway.

There is a less-known and more appropriate command in recent distributions: ddns-confgen. By default, it just prints sample entries with instructions to STDOUT. You can try it out with:

ddns-confgen -r /dev/urandom -s $host.$zone.

The options used here select the "hmac-md5" algorithm instead of the default "hmac-sha256", which simplifies things with nsupdate later. We also set the key name to be the same as the host's name; that way, we can use a wildcard in the "update-policy" in named.conf.local and don't need to update it every time we add a host.

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k $host.$zone -s $host.$zone. | tee -a $etcdir/$zone.keys

chown root:bind   $etcdir/$zone.keys
chmod u=rw,g=r,o= $etcdir/$zone.keys
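For reference, the key clause that ddns-confgen appends to $zone.keys should look roughly like this (host name and secret here are made-up examples; the exact formatting may vary with the Bind version):

```
key "myhost.dyn.example.com" {
    algorithm hmac-md5;
    secret "xBa2pz6ZCGQJ5obmvmp26w==";
};
```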

Depending on how you intend to use nsupdate, you may also want a separate key file for every host key: nsupdate cannot use the $zone.keys file if it contains multiple keys. So you might prefer to directly create these individual key files by adding something like > $etcdir/key.$host.$zone:

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k $host.$zone -s $host.$zone. | tee -a $etcdir/$zone.keys > $etcdir/key.$host.$zone

chown root:bind   $etcdir/$zone.keys $etcdir/key.*
chmod u=rw,g=r,o= $etcdir/$zone.keys $etcdir/key.*

Configure bind

Create zone file

Edit $binddir/$zone :

$ORIGIN .
$TTL  3600 ; 1 hour

dyn.example.com IN SOA  dns-server.example.com. hostmaster.example.com. (
         1 ; serial (start at 1 for a dynamic zone instead of the usual date-based serial)
      3600 ; refresh by secondaries (but they get NOTIFY-ed anyway)
       600 ; retry (every 10 minutes if refresh fails)
    604800 ; expire (slaves remove the record after 1 week if they could not refresh it)
       300 ; minimum ttl for negative answers (5 minutes)
)

$ORIGIN dyn.example.com.
NS      dns-server.example.com.

Edit /etc/bind/named.conf.local

Edit /etc/bind/named.conf.local to add :

// DDNS keys
include "/etc/bind/dyn.example.com.keys";

// Dynamic zone
zone "dyn.example.com" {
    type master;
    file "/var/cache/bind/dyn.example.com";
    update-policy {
        // allow hosts to update themselves with a key having their own name
        grant *.dyn.example.com self dyn.example.com.;
    };
};

Reload server config

rndc reload && sleep 3 && grep named /var/log/daemon.log | tail -20

(adjust the sleep and tail values depending on the number of zones your DNS server handles, so that it has time to report any problems)

Test

If you created individual key files, or your $zone.keys file contains only a single key, you can test like this:

host=myhost; ip=10.11.12.13; zone=dyn.example.com; server=dns-server.example.com; keyfile=$etcdir/key.$host.$zone
echo -e "server $server\n zone $zone.\n update delete $host.$zone.\n update add $host.$zone. 600 A $ip\n send" | nsupdate -k "$keyfile"

Or, more readable and with an extra TXT record:

cat <<EOF | nsupdate -k $keyfile
server $server
zone $zone.
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

(If you get a could not read key from $keyfile: file not found error, and the file actually exists and is owned by the bind process user, you may be using an older version of nsupdate (like the version in Debian Etch). In that case, replace nsupdate -k $keyfile with nsupdate -y "$key_name:$secret" using the key name and secret found in your key file.)
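To illustrate that -y fallback, here is a sketch that parses the key name and secret out of a ddns-confgen-style key file. It recreates a key file locally with a dummy secret so the example is self-contained, and only echoes the nsupdate invocation instead of running it; point $keyfile at your real /etc/bind/key.* file in practice.

```shell
# Parse key name and secret from a Bind key file, for "nsupdate -y" on old
# nsupdate versions. A dummy key file is created here so the example is
# self-contained and touches nothing under /etc/bind.
keyfile=./key.myhost.dyn.example.com
cat > "$keyfile" <<'EOF'
key "myhost.dyn.example.com" {
    algorithm hmac-md5;
    secret "xBa2pz6ZCGQJ5obmvmp26w==";
};
EOF

key_name=$(awk -F'"' '/^key /  {print $2}' "$keyfile")
secret=$(awk  -F'"' '/secret/ {print $2}' "$keyfile")

# With a real update, the commands would be piped in as shown earlier:
echo "nsupdate -y \"$key_name:$secret\""
```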

Check the result:

host -t ANY $host.$zone

It should output something like

myhost.dyn.example.com descriptive text "Updated on Tue Jan  1 17:16:03 CET 2013"
myhost.dyn.example.com has address 10.11.12.13

If you try to use a file with multiple keys in the -k option to nsupdate, you will get an error like this:

... 'key' redefined near 'key'
could not read key from FILENAME.keys.{private,key}: already exists

Usage

A good place to run the update is a script in /etc/network/if-up.d/, e.g. /etc/network/if-up.d/ddnsupdate.

If you have set up an update CGI page on your server, you can use something like this, letting the web server use the IP address it received with your request anyway:

#!/bin/sh

server=dns-server.example.com
host=myhost
secret="xBa2pz6ZCGQJ5obmvmp26w==" # copy the right key from $etcdir/$zone.keys

wget -O /dev/null --no-check-certificate "https://$server/ddns/update.cgi?host=$host;secret=$secret"

Otherwise, you can use nsupdate, but you need to determine your external IP first:

#!/bin/sh

server=dns-server.example.com
zone=dyn.example.com
host=myhost
secret="xBa2pz6ZCGQJ5obmvmp26w==" # copy the right key from $etcdir/$zone.keys

ip=$(wget -q -O - http://example.com/myip.cgi)

cat <<EOF | nsupdate
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

I used a very simple myip.cgi script on the web server, to avoid having to parse the output of the various existing services which show your IP in the browser:

#!/bin/sh
echo "Content-type: text/plain"
echo ""
echo $REMOTE_ADDR

This alternative script example uses SNMP to get the WAN IP from the cable router. It only does the update if the address has changed, and logs to syslog.

#!/bin/sh

zone=dyn.example.com
host=myname
secret="nBlw18hxipEyMEVUmwluQx=="
router=192.168.81.3

server=$(dig +short -t SOA $zone | awk '{print $1}')

ip=$( snmpwalk -v1 -m RFC1213-MIB -c public $router ipAdEntAddr | awk '!'"/$router/ {print \$4}" )

if [ -z "$ip" ]; then
 echo "Error getting wan ip from $router" 1>&2
 exit 1
fi

oldip=$(dig +short $host.$zone)

if [ "$ip" = "$oldip" ]; then
 logger -t `basename $0` "No IP change for $host.$zone ($ip)"
 exit
fi

cat <<EOF | nsupdate
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

logger -t `basename $0` "IP for $host.$zone changed from $oldip to $ip"

Web server update.cgi

An example update.cgi :

#!/usr/bin/perl

## Use nsupdate to update a DDNS zone.

## (This could be done with the Net::DNS module. It
##  would be more portable (Windows, etc.), but also
##  more complicated. So I chose the nsupdate utility
##  that comes with Bind instead.)

# "mi\x40alma.ch", 2013

use strict;

my $VERSION = 0.2;
my $debug = 1;

my $title = "DDNS update";

my $zone     = "dyn.example.com";
my $server   = "localhost";
my $nsupdate = "/usr/bin/nsupdate";


use CGI qw(:standard);

my $q = CGI->new;

my $CR = "\r\n";

print $q->header(),
      $q->start_html(-title => $title),
      $q->h1($title);


if (param("debug")) {
    $debug = 1;
};

my $host   = param("host");
my $secret = param("secret");
my $ip     = param("ip") || $ENV{"REMOTE_ADDR"};
my $time   = localtime(time);

foreach ($host, $secret, $ip) {
    s/[^A-Za-z0-9\.\/\+=]//g; # sanitize, just in case...
    unless (length($_)) {
        die "Missing or bad parameters. host='$host', secret='$secret', ip='$ip'\n";
    }
}

my $commands = qq{
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated by $0 v. $VERSION, $time"
send
};

print $q->p("sending update commands to $nsupdate:"), $CR,
      $q->pre($commands), $CR;

open( NSUPDATE, "| $nsupdate" ) or die "Cannot open pipe to $nsupdate : $!\n";
print NSUPDATE $commands        or die "Error writing to $nsupdate : $!\n";
close NSUPDATE                  or die "Error closing $nsupdate : $!\n";

print $q->p("Done:"), $CR;

my @result = `host -t ANY $host.$zone`;

foreach (@result) {
    print $q->pre($_), $CR;
}


if ($debug) {
# also log received parameters
    my @lines;
    for my $key (param) {
        my @values = param($key);
        push @lines, "$key=" . join(", ", @values);
    }
    warn join("; ", @lines), "\n";
}

print $q->end_html, $CR;

__END__


Tuesday, July 26, 2011

Importing root certificates into Firefox and Thunderbird

Update Feb. 2012: see at the end for an alternative for new profiles.

This is ridiculously complicated and makes me wonder whether I should just drop Firefox in Windows and go back to IE.

The problem:

How to automatically pre-import your self-signed certification authority into all user profiles for Firefox and Thunderbird.

The solution:

You need the Mozilla certutil utility (not the Microsoft certutil.exe).

In Windows, you would need to compile the NSS tools or use some ancient, hard-to-find Windows binary to get it. But all my user profiles are on a Samba server, so it was much easier to do it on the server, with the added benefit of having Bash and not needing to struggle with the horrible cmd.exe.

First install the tools. In Debian, it would be:

apt-get install libnss3-tools

Then adapt this long command to your paths:

find /path/to/users-profiles -name cert8.db -printf "%h\n" | \
while read dir; do \
  certutil -A -n "My Own CA" -t "C,C,C" -d "$dir" -i "/path/to/my_own_cacert.cer"; \
done

(-printf "%h\n" prints just the directory, without the file name, one per line. That is fed into the $dir variable used in the certutil command. The -n option is a required nickname for the certificate. -t "C,C,C" is what makes you accept any certificate signed by the CA you are importing.)

See also: the certutil documentation, and a better explanation of the trust arguments (-t option).

Alternative:

The above solution works to add a certificate to an existing profile's cert8.db. To have newly created profiles include the certificate, you need to put a good cert8.db file into the program's directory.

  1. Either import your certificate(s) manually into an existing profile, or use the steps above to add the certificate(s) to a cert8.db file.
  2. Copy the new cert8.db to the Firefox (or Thunderbird) program directory, into a "defaults\profile" subdirectory (i.e. "C:\Program Files (x86)\Mozilla Firefox\defaults\profile\").

This way, newly created profiles will copy this cert8.db file instead of creating a new one from scratch.


Sunday, July 03, 2011

Etch to Lenny trouble with libxml2

While upgrading a few Debian Etch systems to Lenny, I had a lot of trouble that showed up like this:

symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64

The real cause seems to have been that I had 2 libz libraries installed:

 # /sbin/ldconfig -pNX | grep libz
 libz.so.1 (libc6) => /lib/libz.so.1
 libz.so.1 (libc6) => /usr/lib/libz.so.1

So the solution was quite simple:

 # rm /lib/libz.so.1*

That's all that was needed to get rid of the mountain of dpkg errors, and continue the upgrades following the Debian guide. The next upgrade to Squeeze went smoothly.

For the benefit of Google searchers, here is a full error listing:

 Unpacking replacement shared-mime-info ...
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/shared-mime-info_0.30-2_i386.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Preparing to replace libgnomevfs2-common 1:2.14.2-7 (using .../libgnomevfs2-common_1%3a2.22.0-5_all.deb) ...
Unpacking replacement libgnomevfs2-common ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/libgnomevfs2-common_1%3a2.22.0-5_all.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Preparing to replace libgnome2-common 2.16.0-2 (using .../libgnome2-common_2.20.1.1-1_all.deb) ...
Unpacking replacement libgnome2-common ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/libgnome2-common_2.20.1.1-1_all.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Errors were encountered while processing:
 /var/cache/apt/archives/shared-mime-info_0.30-2_i386.deb
 /var/cache/apt/archives/libgnomevfs2-common_1%3a2.22.0-5_all.deb
 /var/cache/apt/archives/libgnome2-common_2.20.1.1-1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install.  Trying to recover:
dpkg: dependency problems prevent configuration of libbonoboui2-0:
 libbonoboui2-0 depends on libglade2-0 (>= 1:2.6.1); however:
  Version of libglade2-0 on system is 1:2.6.0-4.
 libbonoboui2-0 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libbonoboui2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomecanvas2-0:
 libgnomecanvas2-0 depends on libglade2-0 (>= 1:2.6.1); however:
  Version of libglade2-0 on system is 1:2.6.0-4.
 libgnomecanvas2-0 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgnomecanvas2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgail18:
 libgail18 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgail18 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgail-common:
 libgail-common depends on libgail18 (>= 1.9.1); however:
  Package libgail18 is not configured yet.
 libgail-common depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgail-common (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomevfs2-extra:
 libgnomevfs2-extra depends on libgnomevfs2-common (>= 1:2.22); however:
  Package libgnomevfs2-common is not installed.
 libgnomevfs2-extra depends on libgnomevfs2-common (<< 1:2.23); however:
  Package libgnomevfs2-common is not installed.
dpkg: error processing libgnomevfs2-extra (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomevfs2-0:
 libgnomevfs2-0 depends on libgnomevfs2-common (>= 1:2.22); however:
  Package libgnomevfs2-common is not installed.
 libgnomevfs2-0 depends on libgnomevfs2-common (<< 1:2.23); however:
  Package libgnomevfs2-common is not installed.
dpkg: error processing libgnomevfs2-0 (--configure):
 dependency problems - leaving unconfigured
Setting up libgnome-keyring0 (2.22.3-2) ...
dpkg: dependency problems prevent configuration of libgnome2-0:
 libgnome2-0 depends on libgnomevfs2-0 (>= 1:2.17.90); however:
  Package libgnomevfs2-0 is not configured yet.
 libgnome2-0 depends on libgnome2-common (>= 2.20); however:
  Package libgnome2-common is not installed.
 libgnome2-0 depends on libgnome2-common (<< 2.21); however:
  Package libgnome2-common is not installed.
dpkg: error processing libgnome2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of xserver-xorg-input-mouse:
 xserver-xorg-input-mouse depends on xserver-xorg-core (>= 2:1.4); however:
  Version of xserver-xorg-core on system is 2:1.1.1-21etch5.
dpkg: error processing xserver-xorg-input-mouse (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of xserver-xorg-input-kbd:
 xserver-xorg-input-kbd depends on xserver-xorg-core (>= 2:1.4); however:
  Version of xserver-xorg-core on system is 2:1.1.1-21etch5.
dpkg: error processing xserver-xorg-input-kbd (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 libbonoboui2-0
 libgnomecanvas2-0
 libgail18
 libgail-common
 libgnomevfs2-extra
 libgnomevfs2-0
 libgnome2-0
 xserver-xorg-input-mouse
 xserver-xorg-input-kbd



Sunday, May 29, 2011

Mac and OpenLDAP: Local homes for network users

I wanted a Mac to authenticate users against our Debian OpenLDAP server, but to create a local home directory on the Mac (see here for more details). The usual configuration for network users on the Mac is to mount their homes from the server over NFS. There are many excellent instructions on the net on how to do that. But finding help on how to have them use a local home instead was much more difficult.

It turns out it can be done very simply, by disabling one line in /etc/auto_master on the Mac. By default, it contains +auto_master, which tells the Mac's automounter to look for an automount map in LDAP. If this line is disabled, the Mac will create a local home for network users the first time they log in. Since our userHomes in LDAP are defined as /home/username, the Mac home is created under /home instead of /Users, which is fine.

So for such a setup, you do NOT need to import an Apple schema into your LDAP directory. (That was quite a hassle, because you need to tweak the original schema, which is not quite kosher; but it turned out to be unnecessary.)

All you need to do is comment out this single line in /etc/auto_master to make it

#+auto_master  # Use directory service

Or copy/paste this:

sudo perl -i.orig -pe 's/^(\+auto_master.*)/## $1/' /etc/auto_master


Saturday, November 20, 2010

Moving IMAP Maildir to another user

A little recipe to move a user's IMAP mails to another user. (Tested on the Courier IMAP server on Debian).

Useful in situations like John leaving the company and Bob needing to have access to John's old emails.

olduser=john; newuser=bob
maildirmake -f $olduser /home/$newuser/Maildir/
cd /home/$olduser/Maildir/
for d in * ; do \
    cp -pr "$d" "/home/$newuser/Maildir/.$olduser/"; \
done
echo "INBOX.$olduser"  >>/home/$newuser/Maildir/courierimapsubscribed
for d in .??*; do \
    cp -pr "$d" "/home/$newuser/Maildir/.$olduser$d"; \
    echo "INBOX.$olduser$d" >>/home/$newuser/Maildir/courierimapsubscribed; \
done
chown -R $newuser /home/$newuser/Maildir

(Beware that if John had a folder with a one-letter name, it will not be copied. That is because "for d in .*" would make a mess trying to copy "." and "..", so the second loop uses "for d in .??*" instead, which skips hidden names shorter than three characters.)
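If one-letter folder names do need to be handled, the glob ".[!.]*" (dot, one non-dot character, then anything) matches them too, while still skipping "." and "..". A small sketch demonstrating the difference on a throwaway directory, so no real Maildir is touched:

```shell
# Show that ".[!.]*" matches a one-letter hidden folder ".a", which the
# ".??*" glob misses, while neither glob matches "." or "..".
tmp=$(mktemp -d)
mkdir "$tmp/.a" "$tmp/.Sent"
cd "$tmp"
old=$(for d in .??*;   do echo "$d"; done)   # misses ".a"
new=$(for d in .[!.]*; do echo "$d"; done)   # finds ".a" and ".Sent"
echo "old glob matched: $old"
echo "new glob matched: $new"
```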


Friday, September 18, 2009

Boot an iso file on your hard disk using grub

I needed to update the firmware on several of the infamous Seagate Barracuda 7200.11 drives with the buggy "SD15" firmware.

Seagate offers a Windows executable or an ISO file to create a bootable CD. However, I had only a Mac, a Linux notebook without a CD drive, and a Windows notebook with neither a CD nor any possibility to attach SATA drives.

After some googling, it looked like the Intel Mac could be booted from such a DOS CD, and that might have worked ... but I didn't even have a blank CD.

But wouldn't it be possible to boot from an .iso file on the hard disk?

As it turns out, it is. And this will probably prove quite handy to boot various Live CDs.

My notebook has Ubuntu, with the standard Grub boot loader. Grub cannot boot from an iso file, but another boot loader called grub4dos can. And Grub can boot that smart cousin of his... Yes, it's a sort of weird setup, but it worked for me.

There are also ways to do that on a Windows system. But here is the Linux version:

  1. Get grub4dos from sourceforge or gna.org
  2. Extract grub.exe from the zip file and copy it to your /boot directory.
  3. Also put the .iso you want to boot into your /boot directory.
  4. Edit your /boot/grub/menu.lst file and add these lines:
title == Use grub4dos for the following entries: ==

title 1: Reload this menu using grub4dos to enable booting the next entries
kernel /boot/grub.exe

title 2: boot the Seagate firmware ISO file
map --mem /boot/MooseDT-SD1A-3D4D-16-32MB.ISO (hd32)
map --hook
chainloader (hd32)

(Adapt the "title 2: ..." line, and of course the "map --mem /boot/name-of-your.iso" line with the right file name. Note that it is case-sensitive)

When booting, you will first be in your standard grub, where you select the "1: ..." entry to load grub4dos. It will do just like grub, find the same menu.lst file, and display it again. Now you can select the .iso entry, which grub4dos will understand.

It is probably also possible to store the iso files on other partitions, but that is "left as an exercise to the reader" ...
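For what it's worth, grub4dos also accepts an explicit device prefix in the map command, so an entry like the following should work for an ISO stored on another partition. This is an untested sketch, assuming the file sits in /isos on the second partition of the first disk (grub counts partitions from 0):

```
title 3: boot an ISO from another partition (untested sketch)
map --mem (hd0,1)/isos/name-of-your.iso (hd32)
map --hook
chainloader (hd32)
```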

See also: the wiki, this guide, a "success stories" thread or this other thread with a much hairier setup.


Wednesday, March 18, 2009

Installing latest FFMPEG on Debian Etch

How to install the latest FFMPEG on a Debian 4 ("Etch") server? This post encouraged me to try it, despite the fact that it needs compiling from source, and that Etch isn't even the current "stable" Debian anymore. PhillC's post helped a lot, but it still didn't work for me exactly as described there. So here is how it eventually did work for me.

# echo "deb http://www.debian-multimedia.org etch main" >>/etc/apt/sources.list

or

# echo "deb http://www.debian-multimedia.org stable main" >>/etc/apt/sources.list

(I used both, and fiddled with enabling and disabling that repository, so I'm not sure anymore which one ended up being useful).

aptitude update gave me a GPG error, so I had to add the key it mentioned:

# gpg --keyserver hkp://wwwkeys.eu.pgp.net --recv-keys 07DC563D1F41B907
# gpg --armor --export 07DC563D1F41B907 | apt-key add -
# aptitude update

The following didn't work, or only worked partially:

# apt-get build-dep ffmpeg
Reading package lists... Done
Building dependency tree... Done
E: Unable to find a source package for ffmpegcvs

I continued anyway with the long install line of various libraries. I had to remove some of them from the suggested install line. In particular, since I had to recompile libx264 anyway, I should have removed libx264-dev at this point. It is removed in the line below:

# aptitude install liblame-dev libfaad-dev libfaac-dev libxvidcore4-dev liba52-0.7.4 liba52-0.7.4-dev build-essential subversion

# cd /usr/src
# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg

And so I got the current version as of March 17:

Checked out external at revision 28979.
Checked out revision 18021.

And I tried configure:

# cd /usr/src/ffmpeg
# ./configure --enable-gpl --enable-pp --enable-libvorbis --enable-liba52 --enable-libdc1394 --enable-libgsm --enable-libmp3lame --enable-libfaad --enable-libfaac --enable-pthreads --enable-libx264 --enable-libxvid

After various errors, and removing options, I ended up with this error:

ERROR: libx264 version must be >= 0.65.

And trying to install that from the debian-multimedia.org repository didn't work either:

# aptitude install libx264-65
The following packages have unmet dependencies:
libx264-65: Depends: libc6 (>= 2.7-1) but 2.3.6.ds1-13etch8 is installed and it is kept back.

So this thread came to the rescue, and I embarked on getting x264 and compiling it from source too:
# aptitude install git git-core

Trying to use Git at this point gives an error, but suggests the solution:

# update-alternatives --config git
There are 2 alternatives which provide `git'.
Selection    Alternative
-----------------------------------------------
*+        1    /usr/bin/git.transition
        2    /usr/bin/git-scm

Press enter to keep the default[*], or type selection number: 2

Next steps:

# cd /usr/src/
# git clone git://git.videolan.org/x264.git
# cd x264
# ./configure --enable-shared

This gave an error about yasm, which was not the right version. I could have tried to compile that too, as shown on the Ubuntu forum, but impatiently decided to try the suggested disable option instead. So here is the x264 part which worked:

# ./configure --enable-shared --disable-asm
# make
# make install
# ldconfig

And finally, ffmpeg:

# cd /usr/src/ffmpeg/
# ./configure --enable-gpl --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libx264 --enable-libxvid
# make
# make install

I also had to remove the old ffmpeg version (aptitude purge ffmpeg) which I had installed some time before this, and finally did this:
# echo /usr/local/lib >> /etc/ld.so.conf.d/local.conf
# ldconfig

Since I had a leftover libx264 installed with aptitude which was too old, it caused a segmentation fault when I tried to encode with ffmpeg. After searching (aptitude search x264), I found I had to aptitude purge libx264-54 libx264-dev . Then, just to be sure, I re-did the ./configure, make clean, make, make install incantations for both x264 and ffmpeg.

In the end, ffmpeg is working. I suppose the --disable-asm option on x264 will make encoding slower, so it may be worth compiling yasm, and re-compiling x264 again.

Now that ffmpeg is working, the main problem is trying to understand its myriad of incomprehensible and cryptically documented options.


Thursday, May 01, 2008

Fix slow ssh response

When logging into an ssh server you may experience a long delay after authentication, and before you get the prompt. This will happen with most sshd servers on a home network or on a new network while it is being set up.

The reason is that sshd tries to do a reverse lookup on the connecting IP, which takes a while to time out.

To speed up the initial response of ssh, the solution is to prevent these reverse lookups, at least until the network has a working DNS which can resolve the connecting IPs to names. To do this, set "UseDNS no" in your sshd_config file and force sshd to re-read its configuration.
sudo -s # if you are not root, like on Ubuntu or Mac

file=/etc/sshd_config
# or
file=/etc/ssh/sshd_config
# or on a Mac
file=/private/etc/sshd_config

perl -i.bak -pe 's/^\s*#?UseDNS\s+.*/UseDNS no/i' $file
grep -qi 'UseDNS no' $file || echo UseDNS no >> $file
# on Linux:
kill -HUP `cat /var/run/sshd.pid`
(the last kill line, which forces sshd to re-read its config file, doesn't work on Mac)
While searching for this solution, I came across other configuration settings. They didn't apply to my case, but if you still have problems, you may want to set "GSSAPIKeyExchange no" in your client configuration file, which is usually /etc/ssh_config (ssh_, not sshd_!). Or look into IPv6 problems.

Now I have to find an equivalent solution for rsync.


Saturday, April 07, 2007

apcupsd in Debian Sarge and Etch

There seem to be problems with apcupsd in Debian Sarge (and Etch): the machine shuts down, but will not restart when the power comes back.

One reason is that the halt script powers the machine off instead of only halting it without cutting power.

Another is that the "killpower" command is not sent to the UPS, so the UPS never cuts power to the machine. If the power comes back before the batteries run out, the machine will not restart.

I followed the advice in this post, and changed the /etc/init.d/halt script by removing $poweroff from the halt line at the end. (I did not upgrade apcupsd as the poster did). Then the system halted without powering off, and I could read a cryptic error on the screen about ... libcrypto.so.0.9.7. That clue led to this bug report, and even though I don't use LVM or RAID, it looked like the same problem. So I tried the following:

Check that I do have the same library dependencies:
# ldd /sbin/apcupsd | grep usr
libcrypto.so.0.9.7 => /usr/lib/i686/cmov/libcrypto.so.0.9.7 (0xb7e95000)
libsnmp-0.4.2.so => /usr/lib/libsnmp-0.4.2.so (0xb7e33000)

Move these libraries to /lib, which is still accessible during shutdown after /usr/lib (on a different partition) no longer is:
# mv -i /usr/lib/i686/cmov/libcrypto.so.0.9.7 /lib/libcrypto.so.0.9.7
# mv -i /usr/lib/libsnmp-0.4.2.so /lib/libsnmp-0.4.2.so
Put links where the libraries were originally:
# ln -i -s /lib/libsnmp-0.4.2.so /usr/lib/libsnmp-0.4.2.so
# ln -i -s /lib/libcrypto.so.0.9.7 /usr/lib/i686/cmov/libcrypto.so.0.9.7
and re-build the library cache:
# ldconfig
In Debian Etch, it turned out to be trickier, because some of these files in /usr/lib/ were links. As of now, the procedure for Etch seems to go like this:
# ldd /sbin/apcupsd | grep usr
libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7e6c000)
libnetsnmp.so.9 => /usr/lib/libnetsnmp.so.9 (0xb7dc8000)
libz.so.1 => /usr/lib/libz.so.1 (0xb7c69000)
# mv -i /usr/lib/i686/cmov/libcrypto.so.0.9.8 /lib/libcrypto.so.0.9.8
# mv -i /usr/lib/libnetsnmp.so.9 /lib/libnetsnmp.so.9
# mv -i /usr/lib/libnetsnmp.so.9.0.1 /lib/libnetsnmp.so.9.0.1
# mv -i /usr/lib/libz.so.1 /lib/libz.so.1
# mv -i /usr/lib/libz.so.1.2.3 /lib/libz.so.1.2.3
# ln -i -s /lib/libcrypto.so.0.9.8 /usr/lib/i686/cmov/libcrypto.so.0.9.8
# ln -i -s /lib/libnetsnmp.so.9 /usr/lib/libnetsnmp.so.9
# ln -i -s /lib/libnetsnmp.so.9.0.1 /usr/lib/libnetsnmp.so.9.0.1
# ln -i -s /lib/libz.so.1 /usr/lib/libz.so.1
# ln -i -s /lib/libz.so.1.2.3 /usr/lib/libz.so.1.2.3
# ldconfig
It seems to work now, as far as my UPS is concerned. I hope that I didn't break something else by moving these shared libraries.
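Since the Etch list is longer, the repeated move-and-symlink pairs could be wrapped in a small helper. A sketch (move_lib is a made-up name; run as root, and double-check every path before pointing it at real system libraries):

```shell
# Move a shared library to a destination directory and leave a
# symlink at its old location so existing paths keep working.
# $1 = current library path, $2 = destination directory (e.g. /lib)
move_lib() {
    src=$1
    dest=$2/$(basename "$src")
    mv "$src" "$dest" && ln -s "$dest" "$src"
}

# Example usage (as root): move_lib /usr/lib/libnetsnmp.so.9 /lib
```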


Wednesday, February 07, 2007

Linux keyboard and less

Just a quick note, because all I found on the web was wrong. One small episode in the Linux keyboard nightmare: how to make your PC keyboard's Home and End keys work in less, so that they jump to the start and end of the file?

The fix:

cat >>/etc/lesskey <<END
#command
\e[1~ goto-line
\e[4~ goto-end
\e\e quit
END
lesskey -o /etc/less /etc/lesskey
export LESS="-k/etc/less"
echo 'export LESS="-k/etc/less"' >>/etc/profile

This also adds a double-escape to quit.


Monday, February 05, 2007

Linux desktop GUI in Windows

Just discovered a fantastic tool to allow remote access to a Linux GUI desktop from my Windows notebook: NoMachine's free NX Server.

While setting up a new server, after the initial basic install, I needed a way to compare the /etc trees of the old and new servers. There are over 500 file differences, so diff with its unreadable output was not well suited to the task. At least, not by itself. The ideal tool would of course have been Total Commander's fantastic "Synchronize Dirs" command and its built-in file comparison, but unfortunately that's only for Windows. The indispensable Linux Midnight Commander does have a directory comparison, but it doesn't go into subdirectories, nor does it compare files. Apparently, I needed something like Kompare, Meld or Kdiff3, which are all GUI programs, to get a usable directory comparison. So I would have to install KDE or Gnome on a Linux server. That had always seemed pretty weird to me: why install a GUI on a machine which will have no keyboard or screen? Well, times have changed, and you can now use KDE running on a remote server in some anonymous rack in a data center from your remote Windows machine. (BTW, Meld ended up being my preferred GUI diff tool.)

I had first seen NX Server in a menu in Knoppix 5.1. I couldn't get it to work, but it had intrigued me. After spending far too much time struggling to try to get the open source version FreeNX to work on my new Debian Sarge server, I eventually came across this post which suggests just installing the NoMachine version. 10 minutes later, I had KDE running on my Windows notebook! The free NoMachine version is limited to 2 concurrent users, but that is one too many for me anyway.

While I'm still setting that server up, it's in an office with an ADSL link and a very slow 100 kb/s upstream speed. Still, I could connect to it from home, and use KDE as if I were sitting in front of the machine. It is orders of magnitude faster than VNC, which I sometimes use to access Windows machines over much faster links.

This technology also opens the door to the use of Linux in Windows offices. While it would be unrealistic to try to migrate most small businesses I know to Linux, it is now possible to offer Windows users Linux applications running on the server. We can now have the best of both worlds.

Download:

NX Client DEB for Linux

NX Node DEB for Linux

NX Desktop Server DEB for Linux

(You have to click a little through these pages to get to the real download link which you can use with wget)

Update: They seem to have grouped these 3 deb files onto the NX Free Edition for Linux page.

Install: (The order is important!)

# dpkg -i nxclient*.deb

# dpkg -i nxnode*.deb

# dpkg -i nxserver*.deb

Before I could install nxclient, I had to aptitude install libaudiofile0.

The server needs to connect to the ssh daemon. I use a non-standard port, so I also had to:

# perl -i.bak -pe 's/^[#\s]*(SSHDPort|SSHDAuthPort)\s*=.*/$1="MYOTHERPORT"/' /usr/NX/etc/node.cfg

# perl -i.bak -pe 's/^[#\s]*(SSHDPort|SSHDAuthPort)\s*=.*/$1="MYOTHERPORT"/' /usr/NX/etc/server.cfg

If you use AllowUsers in your /etc/ssh/sshd_config file, you need to add the nx user to that line:

# perl -i.bak -pe 's/^[#\s]*(AllowUsers\s*.*)/$1 nx/' /etc/ssh/sshd_config

If you get the error "cannot run startkde" or "cannot start gnome-session", you may need to aptitude install ksmserver or the equivalent for gnome.



Wednesday, May 10, 2006

configuring debian

My check list when installing/configuring a new Debian system:

First a file manager:

apt-get install mc

Disable unneeded modules:

(For 2.6 kernels)

for m in eth1394 ax25 ipx appletalk netrom x25 rose decnet ; do echo "blacklist $m" >>/etc/modprobe.d/00-local ; done

To disable IPv6, which is mainly useless for now and slows the system down in some cases, add "ipv6" to that list or

echo -e "blacklist ipv6" >>/etc/modprobe.d/00-local

(see http://www.debian-administration.org/articles/409 , http://beranger.org/index.php?article=1127 , http://beranger.org/index.php?article=2256)

You need to reboot for IPv6 to be disabled.
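The loop above can be rehearsed against a scratch file to check exactly what will be appended (the mktemp path is throwaway, standing in for /etc/modprobe.d/00-local):

```shell
# Rehearse the blacklist loop on a scratch file (8 modules plus ipv6)
conf=$(mktemp)
for m in eth1394 ax25 ipx appletalk netrom x25 rose decnet ipv6 ; do
    echo "blacklist $m" >> "$conf"
done
grep -c '^blacklist ' "$conf"    # prints 9
rm -f "$conf"
```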


PuTTY occasionally cannot log out. This seems to fix it:
echo "shopt -s huponexit" >> /etc/bash.bashrc
(it's in the OpenSSH FAQ)
You may also need to add it to your existing configuration:
echo "shopt -s huponexit" >> ~/.bash_profile
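To check whether huponexit is actually set in a given bash session, shopt -q can be queried. A quick sketch:

```shell
# shopt -q exits 0 when the option is set, non-zero otherwise
shopt -s huponexit
shopt -q huponexit && echo "huponexit is on"
```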

The default bash history length is 500. I prefer more:
echo export HISTSIZE=2000 >> /etc/bash.bashrc

If hwclock -r gives you "select() to /dev/rtc to wait for clock tick timed out",
you need the --directisa parameter.
And you also want it used in the init scripts which call hwclock.
In Debian 4 (Etch):
# echo 'HWCLOCKPARS="--directisa"' >> /etc/default/rcS
In Debian 3 (Sarge) you need to edit /etc/init.d/hwclock.sh to add the parameter.

Small Intranet servers spend most of their time doing nothing. And I suppose that at 3+ GHz, they use a lot of power for that. Maybe they would use less power when slowed down to a more reasonable speed. I took the following from the excellent Debian HOW-TO : CPU power management page:
# aptitude install cpufrequtils sysfsutils
# cat /proc/cpuinfo | grep "model name"
## adapt the module to your cpu:
# modprobe p4_clockmod
# modprobe cpufreq_ondemand
# echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
## adapt the module to your cpu:
# echo p4_clockmod >>/etc/modules
# echo cpufreq_ondemand >>/etc/modules
# echo devices/system/cpu/cpu0/cpufreq/scaling_governor = ondemand >>/etc/sysfs.conf

Postfix and SASL can be tricky to set up, but Jimmy's weblog : Postfix and SASL (Debian) makes it a snap.


Thursday, April 06, 2006

en_CH locale

English may not be one of the official Swiss languages (yet?), but it is certainly useful in a computer system. I don't want server logs and error messages in French or German or Italian (or Romantsch?). These things are easiest to understand and get help about in English. But I don't want dates in the confusing US "MM/DD/YY" format either, or times in AM/PM. Most importantly: I want sorting to work correctly with accented letters, and I want Perl to understand accented letters for \w, the uc(), lc() functions, etc.

So I made my own en_CH locale. Tested it in Debian stable (3.1 "Sarge"), and it seems to do what I expected. If you want to try it out, the steps are below. And it is quite easy to edit language_COUNTRY files to suit your needs.
# cp /usr/share/i18n/SUPPORTED /usr/share/i18n/SUPPORTED.orig
# echo -e "en_CH ISO-8859-1\nen_CH.UTF-8 UTF-8" | sort - /usr/share/i18n/SUPPORTED.orig >/usr/share/i18n/SUPPORTED
# wget -O /usr/share/i18n/locales/en_CH http://alma.ch/linux/en_CH
# echo "en_CH ISO-8859-1" >>/etc/locale.gen
# echo "en_CH.UTF-8 UTF-8" >>/etc/locale.gen
# dpkg-reconfigure locales
You may also need to edit your /etc/environment file and/or /etc/default/locale if they have a left-over LANGUAGE= line.
At your next login, your locale should be set to en_CH, and these little tests should work:
$ echo -e "é\ne\nA\nà\nE" |sort
$ perl -Mlocale -e 'print "Uppercase accented é and à: ", uc("éà\n")'
$ echo -e "é\ne\nA\nà\nE" |perl -Mlocale -ne 'while (/(\w+)/g) {print "$1\n"}'
(Note that for Perl, you need use locale;).

Update: For Ubuntu 6.10 ("Edgy"), see also this post in this ubuntu forum thread!
