Wednesday, May 22, 2013

Windows 7 profile trouble

Event ID 1511: Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off.

Or

Event ID 1521: Windows cannot locate the server copy of your roaming profile and is attempting to log you on with your local profile. Changes to the profile will not be copied to the server when you log off. This error may be caused by network problems or insufficient security rights.

  • Log in as a different user (one with admin rights)
  • Under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, find keys named after the user's SID with a ".bak" suffix (like "S-1-5-21-4129847285-3583514821-2567293568-1001.bak")
  • Delete them
  • If needed, also delete the profile directory C:\Users\USERNAME

This seems to happen when a machine on the network decides it is the domain master browser and manages to convince the real PDC of it. I have seen it happen with a Mac (10.6.8) and with a new NAS. Both were running Samba (just like the actual PDC, which is a Debian Samba server).

To prevent Samba on these machines from trying to become the domain master browser, add this to the [global] section of /etc/smb.conf (or /etc/samba/smb.conf, or wherever it is on your machine):

os level = 1
lm announce = No
preferred master = No
local master = No
domain master = No

Maybe "os level = 1" is overkill, but I used it anyway. The "local master = no" setting doesn't take effect on the Mac (testparm -sv | grep master still shows it set to Yes), but it works anyway now.

To check the master browser from Linux or Mac, use nmblookup -M YOURDOMAIN, or nmblookup -M -- - to list all master browsers, which may show others that are not in the same domain/workgroup.


Saturday, May 11, 2013

Mediawiki with Postgres on Debian

A short guide to installing Mediawiki on Debian with PostgreSQL 9.1, with a fix for this error:

"Attempting to connect to database "postgres" as superuser "postgres"... error: No database connection"

Installing packages

The server is still using Debian Squeeze, but I expect it would be much the same for the newer Debian Wheezy. Here I used squeeze-backports.

Add the backports repository if needed:

echo "deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free" >> /etc/apt/sources.list

Install everything:

apt-get update
apt-get -t squeeze-backports install apache2 postgresql-9.1 postgresql-contrib php5-pgsql
apt-get -t squeeze-backports install imagemagick libdbd-pg-perl
apt-get -t squeeze-backports install mediawiki

I use a separate IP for the wiki, so I need to add it to the interface:

mcedit /etc/network/interfaces
# wiki on its own IP
auto eth0:3
iface eth0:3 inet static
    address 192.168.10.4
    netmask 255.255.255.0

/etc/init.d/networking restart

Apache configuration

# I use the mod_rewrite module in Apache
a2enmod rewrite

# I prefer the config file in sites-enabled
# (but it's really just a symlink to /etc/mediawiki/apache.conf):
mv /etc/apache2/conf.d/mediawiki.conf /etc/apache2/sites-enabled

My virtual host config:

<VirtualHost *:80>
    ServerName wiki.example.lan
    ServerAlias wiki.example.lan
    ServerAdmin webmaster@example.com
    DocumentRoot /docs/www-wiki

    ErrorLog /var/log/apache2/wiki-error.log
    CustomLog /var/log/apache2/wiki-access.log combined

    ServerSignature On

    Alias /icons/ "/usr/share/apache2/icons/"

    RewriteEngine On
    RewriteRule ^/w(iki)?/(.*)$  http://%{HTTP_HOST}/index.php/$2 [L,NC]

    <Directory /docs/www-wiki/>
        Options +FollowSymLinks
        AllowOverride All
        # Default is Deny. Exceptions listed below with "Allow ...":
        Order Deny,Allow
        Deny from All
        Satisfy any
        # LAN
        Allow from 192.168.10.0/24
        # VPN
        Allow from 10.0.10.0/24

# If using LDAP
#        AuthType Basic
#        AuthName "Example Wiki. Requires user name and password"
#        AuthBasicProvider ldap
#        AuthzLDAPAuthoritative on
#        AuthLDAPURL ldap://localhost:389/ou=People,dc=example,dc=lan?uid
#        AuthLDAPGroupAttribute memberUid
#        AuthLDAPGroupAttributeIsDN off
#        Require ldap-group cn=users,ou=Groups,dc=example,dc=lan
    </Directory>

    # some directories must be protected
    <Directory /docs/www-wiki/config>
        Options -FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /docs/www-wiki/upload>
        Options -FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory "/usr/share/apache2/icons">
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

Moving files

I used a directory other than the default /var/lib/mediawiki. So I had to move things over:

cp -rp /var/lib/mediawiki /docs/www-wiki

The only tricky part, with the fix:

Before starting the web configurator in http://wiki.example.lan/config/ you need to define a password for the "postgres" database user. Mediawiki will start the psql client as the www-data system user, but with the -U argument to set the user to "postgres". Even if you defined a password for the system user "postgres", this is not the password of the database user "postgres".

So you need to start psql as the postgres system user, which you can do as root using sudo -u, and then set the password inside the psql client:

sudo -u postgres psql
psql (9.1.9)
Type "help" for help.

postgres=# \password
Enter new password:
Enter it again:
postgres=# \q

If you don't do this, the Mediawiki config will end with this error:

Attempting to connect to database "postgres" as superuser "postgres"... error: No database connection

And a big pink and unhelpful error box below.

The PostgreSQL log (tail /var/log/postgresql/postgresql-9.1-main.log) will show:

FATAL:  password authentication failed for user "postgres"

Finally

Now you just have to move LocalSettings.php to /etc/mediawiki/.

And if you used a different install root, you have to edit it to change the MW_INSTALL_PATH:

define('MW_INSTALL_PATH','/docs/www-wiki');




Thursday, January 31, 2013

rsync server daemon on Mac OS X with launchctl

(Update: Added the --no-detach option to the rsync command. Newer MacOS versions wouldn't start the daemon without it. With the added argument, it now works again in Sierra.)

There are many web pages describing how to enable the rsync daemon on Mac OS X using the launchd/launchctl mechanism. But I had to use a different (and simpler) plist file in LaunchDaemons to make it work across reboots on Lion (10.7.4).

(I started by following this guide, and this very similar one. I also read this and this. In the end, what helped me get the plist file right was this thread, particularly this post: "For one you have both a Program and a ProgramArguments key, when you should have only one or the other (you use Program if there is just one element to the command, or ProgramArguments if there are multiple)." And this one.)

This is the .plist file I used in /Library/LaunchDaemons/org.samba.rsync.plist : 

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Disabled</key>
    <false/>
    <key>Label</key>
    <string>org.samba.rsync</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/rsync</string>
        <string>--daemon</string>
        <string>--no-detach</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <dict>
        <key>SuccessfulExit</key>
        <false/>
    </dict>
</dict>
</plist>

This is an example /etc/rsyncd.conf file:

secrets file = /etc/rsyncd.secrets
hosts allow = 192.168.1.0/24 10.0.0.1 *.cust.isp.tld

uid = nobody
gid = nobody
list = yes
read only = yes

[shared]
path = /Users/Shared
comment = Users-Shared
uid = someuser
gid = admin
auth users = user_in_secrets

The file /etc/rsyncd.secrets looks like:

some_rsync_user:password
other_user:other_password

To install it:

sudo -s
chown root:wheel /etc/rsyncd.*
chmod 644 /etc/rsyncd.conf
chmod 600 /etc/rsyncd.secrets
launchctl load /Library/LaunchDaemons/org.samba.rsync.plist
launchctl start org.samba.rsync ## (this last command is probably unneeded)

To check if it is installed and running:

launchctl list | grep rsync
808  -    0x7fddb4806c10.anonymous.rsync
-    0    org.samba.rsync

ps ax | grep [r]sync
  808   ??  Ss     0:00.00 /usr/bin/rsync --daemon

rsync --stats someuser@localhost::

To remove it:

sudo launchctl unload /Library/LaunchDaemons/org.samba.rsync.plist
sudo killall rsync

For logging transfers, add

log file = /var/log/rsyncd.log
transfer logging = yes

to /etc/rsyncd.conf. And to have the log rotated, create a file like /etc/newsyslog.d/rsyncd.conf and add

# logfilename          [owner:group]    mode count size when  flags [/pid_file] [sig_num]
/var/log/rsyncd.log   644  5    5000 *     J

 


Wednesday, January 02, 2013

Set up your own Dynamic DNS

The problem with external dynamic DNS services like dyndns.org, no-ip.com, etc. is that you constantly have to look after them. Either they are free, but they expire after a month and you have to go to their web site to re-activate your account; or you pay for them, but then you need to take care of the payments, update the credit card info, etc. This is all much too cumbersome for something that should be entirely automated.

If you manage your own DNS anyway, it may be simpler in the long run to set up your own dynamic DNS system.

Bind has everything needed. There is a lot of info on the Internet on how to do it, but what I found tended to be more complicated than necessary, insecure, or both. So here is how I did it on a Debian 6 ("squeeze") server.

The steps are described below.

Initialize variables

To make it easier to copy/paste commands, we initialize a few variables:

binddir="/var/cache/bind"
etcdir="/etc/bind"

(In Debian, you can use grep directory /etc/bind/named.conf.options to find the correct binddir value)

For dynamic hosts, we will use a subdomain of our main zone: dyn.example.com.

host=myhost; zone=dyn.example.com

Create key

Most examples use the dnssec-keygen command. That would create 2 files (with ugly names): one .private and one .key (public) file. This is useless, since the secret key is the same in both files, and the nsupdate method doesn't use a public/private key mechanism anyway.

There is a less-known and more appropriate command in recent distributions: ddns-confgen. By default, it just prints sample entries with instructions to STDOUT. You can try it out with:

ddns-confgen -r /dev/urandom -s $host.$zone.

The options used here select the "hmac-md5" algorithm instead of the default "hmac-sha256"; it simplifies things with nsupdate later. We also specify the key name to be the same as the host's name. That way, we can use a wildcard in the "update-policy" in named.conf.local and don't need to update it every time we add a host.

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k $host.$zone -s $host.$zone. | tee -a $etcdir/$zone.keys

chown root:bind   $etcdir/$zone.keys
chmod u=rw,g=r,o= $etcdir/$zone.keys
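
For reference, the generated $etcdir/$zone.keys file contains one key clause per host, in the format below (the host name and secret here are placeholders; yours will differ):

```
key "myhost.dyn.example.com" {
	algorithm hmac-md5;
	secret "xBa2pz6ZCGQJ5obmvmp26w==";
};
```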

Depending on how you intend to use nsupdate, you may also want a separate key file for every host key: nsupdate cannot use the $zone.keys file if it contains multiple keys. So you might prefer to directly create these individual key files by adding something like > $etcdir/key.$host.$zone :

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k $host.$zone -s $host.$zone. | tee -a $etcdir/$zone.keys > $etcdir/key.$host.$zone

chown root:bind   $etcdir/$zone.keys $etcdir/key.*
chmod u=rw,g=r,o= $etcdir/$zone.keys $etcdir/key.*

Configure bind

Create zone file

Edit $binddir/$zone:

$ORIGIN .
$TTL  3600 ; 1 hour

dyn.example.com IN SOA  dns-server.example.com. hostmaster.example.com. (
         1 ; serial (start at 1 for a dynamic zone instead of the usual date-based serial)
      3600 ; refresh by secondaries (but they get NOTIFY-ed anyway)
       600 ; retry (every 10 minutes if refresh fails)
    604800 ; expire (slaves remove the record after 1 week if they could not refresh it)
       300 ; minimum ttl for negative answers (5 minutes)
)

$ORIGIN dyn.example.com.
NS      dns-server.example.com.

Edit /etc/bind/named.conf.local

Edit /etc/bind/named.conf.local to add:

// DDNS keys
include "/etc/bind/dyn.example.com.keys";

// Dynamic zone
zone "dyn.example.com" {
    type master;
    file "/var/cache/bind/dyn.example.com";
    update-policy {
        // allow hosts to update themselves with a key having their own name
        grant *.dyn.example.com self dyn.example.com.;
    };
};

Reload server config

rndc reload && sleep 3 && grep named /var/log/daemon.log | tail -20

(adjust the sleep and tail values depending on the number of zones your DNS server handles, so that it has time to report any problems)

Test

If you created individual key files, or your $zone.keys file contains only a single key, you can test like this:

host=myhost; ip=10.11.12.13; zone=dyn.example.com; server=dns-server.example.com; keyfile=$etcdir/key.$host.$zone
echo -e "server $server\n zone $zone.\n update delete $host.$zone.\n update add $host.$zone. 600 A $ip\n send" | nsupdate -k "$keyfile"

Or, more readable and with an extra TXT record:

cat <<EOF | nsupdate -k $keyfile
server $server
zone $zone.
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

(If you get a could not read key from $keyfile: file not found error, and the file actually exists and is owned by the bind process user, you may be using an older version of nsupdate (like the version in Debian Etch). In that case, replace nsupdate -k $keyfile with nsupdate -y "$key_name:$secret" using the key name and secret found in your key file.)
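
Extracting the key name and secret for the -y form can be scripted. This is a sketch that parses a key file in the ddns-confgen output format with sed (the file contents below are a placeholder; use your real key file):

```shell
#!/bin/sh

# Create a sample key file in the format written by ddns-confgen
# (placeholder name and secret, for demonstration only)
keyfile=$(mktemp)
cat > "$keyfile" <<'EOF'
key "myhost.dyn.example.com" {
	algorithm hmac-md5;
	secret "xBa2pz6ZCGQJ5obmvmp26w==";
};
EOF

# Pull the quoted key name and secret out of the file
key_name=$(sed -n 's/^key "\(.*\)" {.*/\1/p' "$keyfile")
secret=$(sed -n 's/.*secret "\(.*\)";.*/\1/p' "$keyfile")

# This is the argument you would pass as: nsupdate -y "$key_name:$secret"
echo "$key_name:$secret"

rm -f "$keyfile"
```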

Check the result:

host -t ANY $host.$zone

It should output something like

myhost.dyn.example.com descriptive text "Updated on Tue Jan  1 17:16:03 CET 2013"
myhost.dyn.example.com has address 10.11.12.13

If you try to use a file with multiple keys in the -k option to nsupdate, you will get an error like this:

... 'key' redefined near 'key'
could not read key from FILENAME.keys.{private,key}: already exists

Usage

You can run the update from an /etc/network/if-up.d/ddnsupdate script.

If you have set up an update CGI page on your server, you can use something like this, letting the web server use the IP address your request came from anyway:

#!/bin/sh

server=dns-server.example.com
host=myhost
secret="xBa2pz6ZCGQJ5obmvmp26w==" # copy the right key from $etcdir/$zone.keys

wget -O /dev/null --no-check-certificate "https://$server/ddns/update.cgi?host=$host;secret=$secret"

Otherwise, you can use nsupdate, but you need to determine your external IP first:

#!/bin/sh

server=dns-server.example.com
zone=dyn.example.com
host=myhost
secret="xBa2pz6ZCGQJ5obmvmp26w==" # copy the right key from $etcdir/$zone.keys

ip=$(wget -q -O - http://example.com/myip.cgi)

cat <<EOF | nsupdate
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

I used a very simple myip.cgi script on the web server, to avoid having to parse the output of the various existing services which show your IP in the browser:

#!/bin/sh
echo "Content-type: text/plain"
echo ""
echo "$REMOTE_ADDR"
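
The CGI can be smoke-tested locally by faking the REMOTE_ADDR environment variable that the web server would normally set (the address and temp file here are just for the test):

```shell
#!/bin/sh

# Recreate the CGI in a temporary file (same logic as above)
cgi=$(mktemp)
cat > "$cgi" <<'EOF'
#!/bin/sh
echo "Content-type: text/plain"
echo ""
echo "$REMOTE_ADDR"
EOF
chmod +x "$cgi"

# Simulate a request coming from 203.0.113.7; the body is the last line
out=$(REMOTE_ADDR=203.0.113.7 sh "$cgi" | tail -1)
echo "$out"

rm -f "$cgi"
```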

This alternative script example uses SNMP to get the WAN IP from the cable router. It only does the update if the address has changed, and logs to syslog.

#!/bin/sh

zone=dyn.example.com
host=myname
secret="nBlw18hxipEyMEVUmwluQx=="
router=192.168.81.3

server=$(dig +short -t SOA $zone | awk '{print $1}')

ip=$( snmpwalk -v1 -m RFC1213-MIB -c public $router ipAdEntAddr | awk '!'"/$router/ {print \$4}" )

if [ -z "$ip" ]; then
 echo "Error getting wan ip from $router" 1>&2
 exit 1
fi

oldip=$(dig +short $host.$zone)

if [ "$ip" = "$oldip" ]; then
 logger -t `basename $0` "No IP change for $host.$zone ($ip)"
 exit
fi

cat <<EOF | nsupdate
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

logger -t `basename $0` "IP for $host.$zone changed from $oldip to $ip"
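
The awk one-liner that extracts the WAN address can be tried on canned snmpwalk output before pointing it at a real router (the sample lines and addresses below are made up):

```shell
#!/bin/sh

router=192.168.81.3

# Canned snmpwalk output: one line for the router's LAN address,
# one for its WAN address
sample='RFC1213-MIB::ipAdEntAddr.192.168.81.3 = IpAddress: 192.168.81.3
RFC1213-MIB::ipAdEntAddr.203.0.113.7 = IpAddress: 203.0.113.7'

# Same filter as in the script: skip lines matching the router's LAN
# address and print the 4th field (the IP) of what remains
ip=$(echo "$sample" | awk '!'"/$router/ {print \$4}")
echo "$ip"
```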

Web server update.cgi

An example update.cgi :

#!/usr/bin/perl

## Use nsupdate to update a DDNS zone.

## (This could be done with the Net::DNS module. It
##  would be more portable (Windows, etc.), but also
##  more complicated. So I chose the nsupdate utility
##  that comes with Bind instead.)

# "mi\x40alma.ch", 2013

use strict;

my $VERSION = 0.2;
my $debug = 1;

my $title = "DDNS update";

my $zone     = "dyn.example.com";
my $server   = "localhost";
my $nsupdate = "/usr/bin/nsupdate";


use CGI qw(:standard);

my $q = new CGI;

my $CR = "\r\n";

print $q->header(),
      $q->start_html(-title => $title),
      $q->h1($title);


if (param("debug")) {
    $debug = 1;
};

my $host   = param("host");
my $secret = param("secret");
my $ip     = param("ip") || $ENV{"REMOTE_ADDR"};
my $time   = localtime(time);

foreach ($host, $secret, $ip) {
    s/[^A-Za-z0-9\.\/\+=]//g; # sanitize, just in case...
    unless (length($_)) {
        die "Missing or bad parameters. host='$host', secret='$secret', ip='$ip'\n";
    }
}

my $commands = qq{
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated by $0 v. $VERSION, $time"
send
};

print $q->p("sending update commands to $nsupdate:"), $CR,
      $q->pre($commands), $CR;

open( NSUPDATE, "| $nsupdate" ) or die "Cannot open pipe to $nsupdate : $!\n";
print NSUPDATE $commands        or die "Error writing to $nsupdate : $!\n";
close NSUPDATE                  or die "Error closing $nsupdate : $!\n";

print $q->p("Done:"), $CR;

my @result = `host -t ANY $host.$zone`;

foreach (@result) {
    print $q->pre($_), $CR;
}


if ($debug) {
# also log received parameters
    my @lines;
    for my $key (param) {
        my @values = param($key);
        push @lines, "$key=" . join(", ", @values);
    }
    warn join("; ", @lines), "\n";
}

print $q->end_html, $CR;

__END__


Tuesday, June 26, 2012

USB OSX installer for the impatient

To make a bootable USB disk with the Mac OS X installer, the guides I found are much too verbose for my taste, and have too many cute screenshots and ads. Here is a summary for the impatient.

For Mavericks, Yosemite, El Capitan, Sierra

There is a handy "createinstallmedia" command.

The only difficulty is getting the installer, which must be downloaded from the App store. If you need an older installer than the current version, the only way seems to be to find it in the "purchased" page.

For the Sierra (10.12) installer, try this App Store link: https://itunes.apple.com/us/app/macos-sierra/id1127487414?mt=12

The downloaded installer image is automatically started. If you proceed with the install, it will be deleted afterwards. So copy it before installing or just close the installer.

  • Get a USB disk of 8GB or more.
  • Create a single GPT (GUID) partition on the USB key, and format it. This can be done in Disk Utility, but command line junkies can also do it this way:
    diskutil list ## check which is the device name to format
    disk=/dev/diskX ## USE CORRECT DISK found with previous command
    echo "This will completely destroy '$disk'"
    # diskutil partitionDisk $disk GPT hfs+ Untitled 100% ## Remove the leading "#" once you are sure
    
  • Define variables for the installer location and your USB disk:
    ("/Volumes/Untitled" is the mount point of your USB key, which will be erased.)
    installer="/Applications/Install OS X Yosemite.app"
    USBdisk="/Volumes/Untitled"
  • Then run:
  • sudo "$installer/Contents/Resources/createinstallmedia" --volume "$USBdisk" --applicationpath "$installer" --nointeraction

That's it.

For older versions like (Mountain) Lion

The installer disk image can be found in Applications / Install Mac OS X Lion.app (right-click -> Show Package Contents) / Contents / SharedSupport / InstallESD.dmg

  • Open InstallESD.dmg. You get a "Mac OSX Install ESD" disk on the desktop
  • Partition and format the (8 GB or larger) USB key as standard Mac OS Extended (Journaled). (The partition table defaults to MBR for USB drives; that's OK)
  • In the "Restore" tab of Disk Utility:
    • the source is the mounted image on your desktop: "Mac OS X Install ESD" (NOT the .dmg file)
    • the destination is your new USB Mac partition (not the drive itself)

Other instructions suggest using the InstallESD.dmg file as the source, and the USB key itself (not the partition it contains) as the destination. That may work too. Just don't mix both methods. I had tried that and failed, but maybe it was because I had first made a GPT partition table instead of MBR?

If you only have a 4GB key, it seems to work using Carbon Copy Cloner and de-selecting all unneeded language packs. But I haven't tried an install from such a key.


Wednesday, February 15, 2012

WPKG client in Windows 7

Wpkg is a fantastic tool to manage software installs on groups of Windows machines without a Windows server with Active Directory. However, I had a few problems with it in Windows 7. These were solved by replacing the Wpkg Client with Wpkg-GP.

By default, the Wpkg service runs at startup and does its installs in the background. But very often, it failed for some reason to get a connection to the network share at the right time when the service was starting, and aborted. The log showed

WNetAddConnection2-> The network location can not be reached.

I tried to add dependencies to the service, but didn't really find a reliable solution.

So in services.msc, I changed the service startup to "Automatic (delayed)". That solved the connection problem, but brought another. If I want to upgrade Thunderbird for example, the installer has a taskkill command to close Thunderbird before upgrading it. But with a delayed start, the user has probably already started Thunderbird, and it seems quite inappropriate to just kill it while it may actually be in use.

In Windows XP, it was possible to delay the login window so that wpkg could do its thing before the user logged in, but for some reason this doesn't work in Windows 7 anymore.

So the next step was to change the configuration in settings.xml to have wpkg run at shutdown instead. This also failed because, as far as I understand, Windows Vista/7 don't allow a process to prevent shutdown for more than 5 seconds.

Finally, the solution was to remove the standard Wpkg Client, and replace it with Wpkg-GP. That seems to work. I changed the wpkg configuration back to running at startup, and added a wpkg-gp package which also takes care of uninstalling the original wpkg client:

<package id="wpkg-gp" name="Wpkg-GP" revision="%version%">

    <variable name="version" value="0.15" />

    <check type="uninstall" condition="versiongreaterorequal" path="Wpkg-GP %version% .*" value="%version%"/>

    <install cmd="%SOFTWARE%\wpkg-gp\Wpkg-GP-0.15_x64.exe /S /INI %SOFTWARE%\wpkg-gp\Wpkg-GP.ini">
        <exit code="3010" reboot="delayed" />
    </install>
    <install cmd='msiexec /x "%SOFTWARE%\wpkg\WPKG Client 1.3.14-x64.msi" /qn /norestart' />

    <upgrade cmd="%SOFTWARE%\wpkg-gp\Wpkg-GP-0.15_x64.exe /S /INI %SOFTWARE%\wpkg-gp\Wpkg-GP.ini">
        <exit code="3010" reboot="delayed" />
    </upgrade>
</package>
 


Tuesday, July 26, 2011

Importing root certificates into Firefox and Thunderbird

Update Feb. 2012: see at the end for an alternative for new profiles.

This is ridiculously complicated and makes me wonder whether I should just drop Firefox in Windows and go back to IE.

The problem:

How to automatically pre-import your self-signed certification authority into all user profiles for Firefox and Thunderbird.

The solution:

You need the Mozilla certutil utility (not the Microsoft certutil.exe).

In Windows, you would need to compile the NSS tools, or find some ancient, hard-to-find Windows binary. But all my user profiles are on a Samba server, so it was much easier to do it on the server, with the added benefit of having Bash and not needing to struggle with the horrible cmd.exe.

First install the tools. In Debian, it would be:

apt-get install libnss3-tools

Then adapt this long command to your paths:

find /path/to/users-profiles -name cert8.db -printf "%h\n" | \
while read dir; do \
  certutil -A -n "My Own CA" -t "C,C,C" -d "$dir" -i "/path/to/my_own_cacert.cer"; \
done

(-printf "%h\n" prints just the directory, without the file name, one per line. That is fed to the $dir variable used in the certutil command. The -n option is a required nickname for the certificate. -t "C,C,C" is what makes the browser accept any certificate signed by the CA you are importing.)
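
The find part can be tried safely on dummy files first. This sketch builds a throw-away tree resembling profile directories (the directory names are made up) and shows what -printf "%h\n" produces:

```shell
#!/bin/sh

# Build a throw-away directory tree with empty cert8.db files
base=$(mktemp -d)
mkdir -p "$base/user1/abcd1234.default" "$base/user2/efgh5678.default"
touch "$base/user1/abcd1234.default/cert8.db" \
      "$base/user2/efgh5678.default/cert8.db"

# GNU find's -printf "%h\n" prints only the containing directory
# of each match, which is what the certutil loop needs
dirs=$(find "$base" -name cert8.db -printf "%h\n" | sort)
echo "$dirs"

rm -rf "$base"
```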

See also: the certutil documentation, and a better explanation of the trust arguments (-t option).

Alternative:

The above solution works to add a certificate to an existing profile's cert8.db. To have newly created profiles include the certificate, you need to put a good cert8.db file into the program's directory.

  1. Either import your certificate(s) manually into an existing profile, or use the steps above to add the certificate(s) to a cert8.db file.
  2. Copy the new cert8.db to the Firefox (or Thunderbird) program directory, into a "/defaults/profile" subdirectory (i.e. "C:\Program Files (x86)\Mozilla Firefox\defaults\profile\").

This way, newly created profiles will copy this cert8.db file instead of creating a new one from scratch.


Sunday, July 03, 2011

Etch to Lenny trouble with libxml2

While upgrading a few Debian Etch systems to Lenny, I had a lot of trouble, which showed up like this:

symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64

The real cause seems to have been that I had 2 libz libraries installed:

 # /sbin/ldconfig -pNX | grep libz
 libz.so.1 (libc6) => /lib/libz.so.1
 libz.so.1 (libc6) => /usr/lib/libz.so.1

So the solution was quite simple:

 # rm /lib/libz.so.1*

That's all that was needed to get rid of the mountain of dpkg errors, and continue the upgrades following the Debian guide. The next upgrade to Squeeze went smoothly.

For the benefit of Google searchers, here is a full error listing:

 Unpacking replacement shared-mime-info ...
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/shared-mime-info_0.30-2_i386.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Preparing to replace libgnomevfs2-common 1:2.14.2-7 (using .../libgnomevfs2-common_1%3a2.22.0-5_all.deb) ...
Unpacking replacement libgnomevfs2-common ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/libgnomevfs2-common_1%3a2.22.0-5_all.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Preparing to replace libgnome2-common 2.16.0-2 (using .../libgnome2-common_2.20.1.1-1_all.deb) ...
Unpacking replacement libgnome2-common ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/libgnome2-common_2.20.1.1-1_all.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Errors were encountered while processing:
 /var/cache/apt/archives/shared-mime-info_0.30-2_i386.deb
 /var/cache/apt/archives/libgnomevfs2-common_1%3a2.22.0-5_all.deb
 /var/cache/apt/archives/libgnome2-common_2.20.1.1-1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install.  Trying to recover:
dpkg: dependency problems prevent configuration of libbonoboui2-0:
 libbonoboui2-0 depends on libglade2-0 (>= 1:2.6.1); however:
  Version of libglade2-0 on system is 1:2.6.0-4.
 libbonoboui2-0 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libbonoboui2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomecanvas2-0:
 libgnomecanvas2-0 depends on libglade2-0 (>= 1:2.6.1); however:
  Version of libglade2-0 on system is 1:2.6.0-4.
 libgnomecanvas2-0 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgnomecanvas2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgail18:
 libgail18 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgail18 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgail-common:
 libgail-common depends on libgail18 (>= 1.9.1); however:
  Package libgail18 is not configured yet.
 libgail-common depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgail-common (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomevfs2-extra:
 libgnomevfs2-extra depends on libgnomevfs2-common (>= 1:2.22); however:
  Package libgnomevfs2-common is not installed.
 libgnomevfs2-extra depends on libgnomevfs2-common (<< 1:2.23); however:
  Package libgnomevfs2-common is not installed.
dpkg: error processing libgnomevfs2-extra (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomevfs2-0:
 libgnomevfs2-0 depends on libgnomevfs2-common (>= 1:2.22); however:
  Package libgnomevfs2-common is not installed.
 libgnomevfs2-0 depends on libgnomevfs2-common (<< 1:2.23); however:
  Package libgnomevfs2-common is not installed.
dpkg: error processing libgnomevfs2-0 (--configure):
 dependency problems - leaving unconfigured
Setting up libgnome-keyring0 (2.22.3-2) ...
dpkg: dependency problems prevent configuration of libgnome2-0:
 libgnome2-0 depends on libgnomevfs2-0 (>= 1:2.17.90); however:
  Package libgnomevfs2-0 is not configured yet.
 libgnome2-0 depends on libgnome2-common (>= 2.20); however:
  Package libgnome2-common is not installed.
 libgnome2-0 depends on libgnome2-common (<< 2.21); however:
  Package libgnome2-common is not installed.
dpkg: error processing libgnome2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of xserver-xorg-input-mouse:
 xserver-xorg-input-mouse depends on xserver-xorg-core (>= 2:1.4); however:
  Version of xserver-xorg-core on system is 2:1.1.1-21etch5.
dpkg: error processing xserver-xorg-input-mouse (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of xserver-xorg-input-kbd:
 xserver-xorg-input-kbd depends on xserver-xorg-core (>= 2:1.4); however:
  Version of xserver-xorg-core on system is 2:1.1.1-21etch5.
dpkg: error processing xserver-xorg-input-kbd (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 libbonoboui2-0
 libgnomecanvas2-0
 libgail18
 libgail-common
 libgnomevfs2-extra
 libgnomevfs2-0
 libgnome2-0
 xserver-xorg-input-mouse
 xserver-xorg-input-kbd


Labels: , , , , ,

Tuesday, June 21, 2011

Command-line partitioning and formatting

Automatic non-interactive formatting in Linux is possible with parted.
The following creates a single ext3 partition spanning an entire disk. Of course, if you assign the wrong disk to the $disk variable, it will be a bad day...
# Select the disk device and choose a label for the partition
# ptype=msdos for disks up to 2 TB. ptype=gpt for disks over 2 TB
disk=/dev/sdx; label=my_part_label; ptype=msdos
I have added sleep commands, so I can just copy/paste the whole thing and still have a chance to Ctrl-C if I change my mind at the last second.

# print the current partition(s) state
parted $disk print ; sleep 10

# create a gpt or msdos partition table (depending on $ptype variable defined above)
parted -a optimal $disk mklabel $ptype ; sleep 5

# create the partition, starting at 1 MB, which helps keep it
# aligned on newer (4K-sector) disks
parted -a optimal -- $disk unit compact mkpart primary ext3 "1" "-1" ; sleep 5

# format it
mke2fs -j -v -L "$label" ${disk}1 && echo "OK. That's it"
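Given the bad-day warning above, a minimal pre-flight check can catch the worst mistake before any of the destructive commands run. This is just a sketch; /dev/sdx is a placeholder device name:

```shell
# Abort early if the chosen disk already appears in the mount table.
# /dev/sdx is a placeholder - substitute your real device.
disk=/dev/sdx
if grep -q "^$disk" /proc/mounts 2>/dev/null; then
  echo "$disk is mounted - aborting" >&2
else
  echo "$disk not mounted - OK to proceed"
fi
```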

Update: see also scripting disk partitioning in Linux - take 2 for another way, using sgdisk instead of parted.

Labels: , , , , , , ,

Tuesday, June 07, 2011

Windows installers options for silent installs

Different installers use different command-line options for silent or unattended installs. Since I had started these notes, I have found a good overview on unattended.sourceforge.net.

Inno Setup

can be identified with the "Inno Setup" string appearing in various places in the installer's .exe. The options are described here. The most useful ones are:
  • /SAVEINF="filename"
    Save installation settings to the specified file.
  • /LOADINF="filename"
    Load the settings from the specified file after having checked the command line.
  • /SP-
    Disables the This will install... Do you wish to continue? prompt at the beginning of Setup.
  • /SILENT, /VERYSILENT
    When Setup is silent, the wizard and the background window are not displayed, but the installation progress window is. When a setup is very silent, this installation progress window is not displayed either. Everything else is normal, so for example error messages during installation are still displayed.
  • /DIR="x:\dirname"
  • /LANG=language
    Specifies the language to use. language specifies the internal name of the language as specified in a [Languages] section entry.
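For example, a typical record-then-replay run might look like this (hypothetical setup.exe name; the flags are the ones documented above):

```
setup.exe /SAVEINF="settings.inf"
setup.exe /VERYSILENT /SP- /LOADINF="settings.inf" /DIR="C:\MyApp"
```

The first run walks through the wizard once and records the answers; the second replays them with no UI.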

Nullsoft's NSIS

can be identified with the "NSIS" string appearing in various places in the installer's .exe. The options are described here, but there seem to be only 2 useful ones:
  • /S
    Silent installation
  • /D=C:\Bla
    Set output folder
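Combined, a silent NSIS install to a chosen folder looks like this (hypothetical installer name; note that /D= must be the last option and its path must not be quoted, even with spaces):

```
installer.exe /S /D=C:\MyApp
```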

Labels: , , , , , ,

Sunday, May 29, 2011

Mac and OpenLDAP: Local homes for network users

I wanted a Mac to authenticate users against our Debian OpenLDAP server, but to create a local home directory on the Mac (see here for more details). The usual configuration for network users on the Mac is to mount their homes from the server over NFS. There are many excellent instructions on the net on how to do that. But finding help on how to have them use a local home instead was much more difficult.

It turns out it can be done very simply, by disabling one line in /etc/auto_master on the Mac. By default, it contains +auto_master, which tells the Mac's automounter to look for an automount map in LDAP. If this line is disabled, the Mac will create a local home for network users the first time they log in. Since our userHomes in LDAP are defined as /home/username, the Mac home is created under /home instead of /Users, which is fine.

So for such a setup, you do NOT need to import an Apple schema into your LDAP directory. (That was quite a hassle because you need to tweak the original schema which is not quite kosher; but it was unnecessary).

All you need to do is comment out this single line in /etc/auto_master to make it

#+auto_master  # Use directory service

Or copy/paste this:

sudo perl -i.orig -pe 's/^(\+auto_master.*)/## $1/' /etc/auto_master

Labels: , , , , , ,

Saturday, May 28, 2011

Kill the Final Cut registration screen

I came across this nicely detailed post explaining how to get rid of the forced registration screen of Final Cut Pro/Studio, which always pops up when you really don't want to be bothered with this idiocy.

But I felt the solution was worse than the problem. It involved far too much clicking around for my taste. And you need the Property List Editor. You only have that once you have installed over 1 GB (!!) of developer tools. If you can remember where you put your OS X disk, that is.

Surely, there must be a better way to do it, by just copying a command from some web page and pasting it into Terminal?

It turned out to be 3 commands. And getting them right was much worse than the solution I didn't like. You need your machine ID, which is in an XML file that defaults read doesn't want to read. And in that file, it is encoded in Base64. You need to put this ID into a property list file as data, which can be done with defaults write, but the data needs to be in hex. I should just have registered, I guess...

Anyway, the detailed explanations are in the link of the first sentence, and the 3 ridiculous commands to paste into Terminal are here:

id=$(perl -MMIME::Base64 -ne '/^\s+(\S{64})\s*$/ && print unpack("H*",decode_base64($1));' "/Library/Application Support/ProApps/Final Cut Studio System ID"|tail -1)
sudo defaults write /Library/Preferences/com.apple.RegFinalCutStudio "{ AECoreTechRegister=1; AECoreTechRegSent=1; }"
sudo defaults write /Library/Preferences/com.apple.RegFinalCutStudio AECoreTechRegInfo -data "$id"

Labels: , , , , ,

Saturday, November 20, 2010

Moving IMAP Maildir to another user

A little recipe to move a user's IMAP mails to another user. (Tested on the Courier IMAP server on Debian).

Useful in situations like John leaving the company and Bob needing to have access to John's old emails.

olduser=john; newuser=bob
maildirmake -f $olduser /home/$newuser/Maildir/
cd /home/$olduser/Maildir/
for d in * ; do \
    cp -pr "$d" "/home/$newuser/Maildir/.$olduser/"; \
done
echo "INBOX.$olduser"  >>/home/$newuser/Maildir/courierimapsubscribed
for d in .??*; do \
    cp -pr "$d" "/home/$newuser/Maildir/.$olduser$d"; \
    echo "INBOX.$olduser$d" >>/home/$newuser/Maildir/courierimapsubscribed; \
done
chown -R $newuser /home/$newuser/Maildir

(Beware that if John had a folder with a one-letter name, it will not be copied: "for d in .*" would make a mess trying to copy "." and "..", which is why the second loop uses "for d in .??*" instead.)
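You can see the difference between the two globs on a throwaway directory (the folder names here are made up):

```shell
# Compare ".*" and ".??*" in a scratch directory
d=$(mktemp -d); cd "$d"
mkdir .Sent .a cur
echo ".* matches:" .*       # includes "." and ".." - dangerous with cp -pr
echo ".??* matches:" .??*   # skips "." and "..", but misses two-char names like ".a"
```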

Labels: , , , , , , ,

Wednesday, February 10, 2010

PDF to Word conversion notes

Had a complex PDF to convert to something editable like .doc, so I had another look at what was available.

This comparative test from 2008 was very helpful, as were some readers' comments. It concluded by recommending the koolwire.com service, which was indeed quite good, and also very convenient because it can be used through email. It produced an RTF with mostly actual tables. Visually, however, the tables in this particular case would have needed quite some re-formatting to look like the original ones.

Several readers suggested the PDF-to-Word service at pdftoword.com. For me, this gave the best-looking results. It converted the complex tables into columnized sections instead, but that was fine. (As an aside, it is not very clear which engine this service is using. It is related to Nitro PDF, a commercial Windows application which is promoted from the pdftoword.com page. Also, the Nitro PDF pages link to the free pdftoword.com service as their free version. However, the produced Word document mentions Solid Converter PDF, another commercial Windows application, in its properties. Weird...)

I also tried the convertpdftoword.net service which others suggested. It also gave a good-looking Word document, but built it with tons of independent text boxes, which was quite inconvenient in my case. A closer look showed that this service was actually using VeryPDF's PDF2Word, which produced an RTF file (but with a .doc extension). PDF2Word turns out to be a re-packaging of xpdf, and is free (GPL) software. The source is available, but VeryPDF sells the Windows executable.

The funny thing from these tests: the only completely useless conversion happened to be the one from Adobe itself.

Conclusion: I had the best results with pdftoword.com. But it all depends on your source document and what you want to do with it.

Labels: , , , , , ,

Friday, January 01, 2010

Open mbox file in Thunderbird

Unfortunately, there seems to be no straightforward way to ask Thunderbird to open or import an mbox mail file directly.

Say you have an mbox file, and would like to view it in Thunderbird. For this example, we will view the file in a "temp-mbox" folder under Thunderbird's "Local Folders". The convoluted way which seems to work goes like this:

  • In Thunderbird, under Local Folders, create the new "temp-mbox" folder.
  • Exit Thunderbird.
  • Find your "Local Folders" directory in your profile. It may be something like "~/.thunderbird/random-string.default/Mail/Local Folders/". In there, you will find a temp-mbox and a temp-mbox.msf file.
  • Overwrite temp-mbox with your mbox file,
  • and delete the temp-mbox.msf index file.
  • Re-open Thunderbird
I needed to do this because of another limitation of Thunderbird: its poor search capabilities. Since the mails I wanted to group are on my own IMAP server, I did the search there and put all the mails into a single file. What I wanted was all of last year's emails received from or sent to somedomain. The following got me a suitable mbox file:
mbox=somedomain-2009.mbox; search=@somedomain; \
find ~/Maildir/cur ~/Maildir/.Sent/cur -mtime -365 | \
while read f ; do \
if egrep "^(From|To|Cc):.*$search" "$f"; then \
  echo "From - " >>$mbox; \
  cat "$f" >>$mbox; \
fi; \
done
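The essential trick in the loop above is the "From - " separator line, which turns a pile of Maildir files into a valid mbox. A self-contained miniature of the same idea, using fake message files and a hypothetical domain:

```shell
# Two fake messages; only the one matching @somedomain ends up in the mbox
tmp=$(mktemp -d)
printf 'From: alice@somedomain\n\nhi\n' > "$tmp/msg1"
printf 'To: bob@elsewhere\n\nbye\n'     > "$tmp/msg2"
for f in "$tmp"/msg*; do
  if egrep -q '^(From|To|Cc):.*@somedomain' "$f"; then
    echo "From - " >> "$tmp/out.mbox"
    cat "$f"       >> "$tmp/out.mbox"
  fi
done
grep -c '^From - ' "$tmp/out.mbox"   # prints 1
```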
To achieve this using the TB search, I would have needed to:
  • Search Inbox without subfolders for "From contains @somedomain" or "To contains @somedomain" or "Cc contains @somedomain". This also searches previous years, and takes quite a while on my IMAP folder.
  • Save the search
  • Search Sent for "To contains @somedomain" or "Cc contains @somedomain".
  • Save the search
  • Create a folder for results
  • Open the first saved search folders, sort by date, and copy the 2009 mails to the new results folder
  • Repeat with the second saved search.

Labels: , , , , ,

Wednesday, December 30, 2009

OpenVPN client on Ubuntu 9.04 Jaunty

A few notes on setting up the openvpn client on Ubuntu, after my move from Windows. Configuration through the Network Manager VPN tab didn't work for me. As far as I could see, there was no way to directly import or copy my existing .ovpn files from Windows, because NM doesn't use them. Instead, it uses its own config files, which do not provide all the options of the standard openvpn client. The solution was to
  • install openvpn and resolvconf so that the name servers can be updated: sudo apt-get install openvpn resolvconf
  • copy my .ovpn and key files to /etc/openvpn,
  • install gopenvpn to have a handy GUI launcher in the Gnome Panel. (the .deb package needs to be downloaded from the site)
  • Edit my .ovpn files to add up /etc/openvpn/update-resolv-conf and down /etc/openvpn/update-resolv-conf
It seems to work fine now. One example client .ovpn file looks like this:
client
dev tun
proto udp

remote hostname.example.com 1194

resolv-retry infinite
nobind
persist-key
persist-tun
mute-replay-warnings

ca example-cacert.pem
cert clientname.example.lan.pem
key clientname.example.lan.key

comp-lzo
verb 3

up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
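Before wiring things into gopenvpn, it can help to test a config by hand with the standard client (the config file name here is just an example):

```shell
# Run one config in the foreground; Ctrl-C to disconnect
cd /etc/openvpn
sudo openvpn --config client.ovpn
```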
2 little things are annoying: I need to enter my password, because changing the network requires root privileges. I'm sure there must be a solution, but the annoyance is probably not worth the time needed to find and apply it. The other glitch is that the window asking for my key's password sometimes opens behind the others.

Labels: , , , ,

Friday, September 18, 2009

Boot an iso file on your hard disk using grub

I needed to update the firmware on several of the infamous Seagate Barracuda 7200.11 drives with the buggy "SD15" firmware.

Seagate offers a Windows executable or an ISO file to create a bootable CD. However, I had only a Mac, a Linux notebook without a CD drive, and a Windows notebook with neither a CD nor any possibility to attach SATA drives.

After some googling, it looked like the Intel Mac could be booted from such a DOS CD, and that might have worked ... but I didn't even have a blank CD.

But wouldn't it be possible to boot from an .iso file on the hard disk?

As it turns out, it is. And this will probably prove quite handy to boot various Live CDs.

My notebook has Ubuntu, with the standard Grub boot loader. Grub cannot boot from an iso file, but another boot loader called grub4dos can. And Grub can boot that smart cousin of his... Yes, it's a sort of weird setup, but it worked for me.

There are also ways to do that on a Windows system. But here is the Linux version:

  1. Get grub4dos from sourceforge or gna.org
  2. Extract grub.exe from the zip file and copy it to your /boot directory.
  3. Also put the .iso you want to boot into your /boot directory.
  4. Edit your /boot/grub/menu.lst file and add these lines:
title == Use grub4dos for the following entries: ==

title 1: Reload this menu using grub4dos to enable booting the next entries
kernel /boot/grub.exe

title 2: boot the Seagate firmware ISO file
map --mem /boot/MooseDT-SD1A-3D4D-16-32MB.ISO (hd32)
map --hook
chainloader (hd32)

(Adapt the "title 2: ..." line, and of course the "map --mem /boot/name-of-your.iso" line with the right file name. Note that it is case-sensitive)

When booting, you will first be in your standard grub, where you select the "1: ..." entry to load grub4dos. It will do just like grub, find the same menu.lst file, and display it again. Now you can select the .iso entry, which grub4dos will understand.

It is probably also possible to store the iso files on other partitions, but that is "left as an exercise to the reader" ...

See also: the wiki, this guide, a "success stories" thread or this other thread with a much hairier setup.

Labels: , , , ,

Saturday, May 23, 2009

Hard drive partitions and file system essentials

What most normal users need to know about hard disk partitions and filesystems to be able to move hard disks between various operating systems like Mac or Windows.

Hard disks contain 1 or more partitions. To the user, each partition appears as if it were a separate hard disk.

(In Windows, each partition receives a separate drive letter like C:, D:, etc.; on a Mac, you see a separate icon on the desktop for each partition; in Linux, each is a device like /dev/sdb1, /dev/sdb2, etc.)

Every partition needs to be formatted with a file system to let the operating system store and retrieve files. (On Mac, this formatting process is called "erasing")

There are many different types of file systems. Your system needs to understand these file systems to be able to use them. Unfortunately, various operating systems use different file systems. The problem is to find which one will be understood by all the systems you intend to connect your drive to. Also, some systems only support reading some file systems, not writing to them.

Summary

Below is a table trying to summarize the compatibility between the 3 main operating systems and the 4 main file system types. There are many others, but if you know about them, you probably don't need this page.


                  | Windows                  | Mac OS X                 | Linux
------------------+--------------------------+--------------------------+--------------------------
FAT32 or DOS      | Native support.          | Read/Write.              | Read/Write.
                  | Max. 4 GB file size.     | Max. 4 GB file size.     | Max. 4 GB file size.
------------------+--------------------------+--------------------------+--------------------------
NTFS              | Native support.          | Read only. Write support | Read/Write on recent
                  |                          | through external         | distributions. [2]
                  |                          | drivers. [1]             |
------------------+--------------------------+--------------------------+--------------------------
HFS+ or           | Requires third-party     | Native support.          | Read only. Write if the
"Mac OS Extended" | programs for reading     |                          | journaling feature has
                  | and writing. [3]         |                          | been turned off in
                  |                          |                          | Mac OS X. [4]
------------------+--------------------------+--------------------------+--------------------------
Ext2 or Ext3      | Free drivers allow       | Requires commercial      | Native support.
                  | Read/Write access. [5]   | driver. [6]              |
FAT or FAT32 (named "MS-DOS" in Macs)

This is the oldest of the file systems commonly used today. As such, it has the greatest compatibility and the least functionality. It is a sort of lowest common denominator.

All operating systems can read and write to it. It is the file system generally used on USB flash drives, memory cards for photo cameras, etc.

It cannot store files greater than 4 gigabytes. It is also the least reliable of the current file systems, and has many other drawbacks (fragmentation, no support for permissions, time stamps in local time with only 2-second resolution, etc.)

The Windows disk manager refuses to format a FAT32 partition greater than 32 GB. But it can be formatted in Windows with the free fat32format.exe utility, or can be formatted to the wanted size on Mac or Linux.
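On Linux, for example, formatting a large FAT32 partition is a one-liner (sketch only; the device name and label are placeholders):

```shell
# Format the first partition of the (placeholder) disk as FAT32
mkfs.vfat -F 32 -n MYLABEL /dev/sdx1
```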

NTFS

Is the native file system of Windows.

Macs can read it, but cannot write to it.

However, there is a Mac version of the open source NTFS-3G driver which can write to NTFS. 1

Recent Linux versions can both read and write it (these usually have the NTFS-3G driver installed by default). 2

HFS and HFS+ aka. "Mac OS Extended (journaled)"

Is the native file system on Macs. The Mac default is the HFS+ journaled variant.

Windows needs special programs installed to be able to read or write it. 3

Linux can read it when it has the hfsutils package installed. It can also write to it if journaling has been disabled. 4

Ext2 or Ext3

is the most common file system on Linux.

(If you wonder why you would need to know anything about Linux: while it is not very common as a desktop operating system, it is the system used in almost all your non-computer devices which contain a hard disk, like your NAS backup disk, your media player, etc. If that device breaks, you may be able to recover the files from its hard disk by connecting it to your main computer and installing a driver for the ext2 file system)

Windows can read and write to it using free drivers. 5

There is a Mac driver, but it may be problematic. 6

Footnotes:

1. Mac -> NTFS : The free and open source ntfs-3g driver for Mac is available on http://sourceforge.net/projects/catacombae/files/. The commercial version is based on the same code, but improves speed. You may also want to have a look at the user guide and the macntfs-3g blog.

2. Linux -> NTFS : In case you have an older distribution which doesn't have it pre-installed, you can normally install "ntfs-3g" using your distribution's package manager. Or have a look at their availability page.

3. Windows -> HFS : If you only need to copy files from a Mac disk to your Windows machine, you can use the free HFSExplorer, which will open your drive in a Windows Explorer-like window and let you copy files from there.

For full support, you may need commercial software like MacDrive or similar.

4. Linux -> HFS : If it isn't already on your system, you will need to install the "hfsutils" package.

If you need to write to the HFS disk, journaling must be disabled. You need to do this on a Mac. Afterwards, you can re-enable journaling (again on a Mac). To disable journaling on a Mac, open Disk Utility, select the volume, hold the Option (or Alt) key while opening the File menu, which will make the "Disable Journal" menu entry appear in the menu. Alternatively, you can enter diskutil disableJournal "/Volumes/YOUR_VOLUME_NAME" in Terminal

5. Windows -> ext2/3 : There are 2 free drivers. The open source one is at http://www.ext2fsd.com/ and the closed source one is at http://www.fs-driver.org/.

6. Mac -> Ext2/3 : You can try the commercial ExtFS for Mac OS X. Or the open source fuse-ext2 which I have never tried. (There is also another free open source driver (http://sourceforge.net/projects/ext2fsx), but that project doesn't seem to be actively maintained. It may have worked well on older Mac OS versions, but when I tried a simple folder move with the current version 1.4d4 on a Mac OS X 10.5 system, it made the system crash hard, and left a badly corrupted drive, which I had to repair using e2fsck on Linux.)

Labels: , , , , ,

Friday, April 24, 2009

The Is Your New Bicycle Meme Revisited

Even though the '... Is Your New Bicycle' meme is already over a year old, I couldn't resist adding to it. So here is a new meta-meme-site for your amusement:

allyournewbicylces.com fetches new 'Is Your New Bicycle' quotes for you

It has a list of the best sites which are still alive, selects one, gets you a fresh quote from it, and displays it using the obligatory layout, with a helpful link to the original source site in the corner.

And in case you find it easier to remember, it even has an alternative name: thisisyournewbicycle.com.

If you had not heard about this American election year meme, this is the original site (which even became a book), and here is an article about it.

Labels: ,

Wednesday, March 18, 2009

Installing latest FFMPEG on Debian Etch

How to install the latest FFMPEG on a Debian 4 ("Etch") server? This post encouraged me to try it, despite the fact that it needs compiling from source, and that Etch isn't even the current "stable" Debian anymore. PhillC's post helped a lot, but it still didn't work for me exactly as described there. So here is how it eventually did work for me.

# echo "deb http://www.debian-multimedia.org etch main" >>/etc/apt/sources.list

or

# echo "deb http://www.debian-multimedia.org stable main" >>/etc/apt/sources.list

(I used both, and fiddled with enabling and disabling that repository, so I'm not sure anymore which one ended up being useful).

aptitude update gave me a GPG error, so I had to add the key it mentioned:

# gpg --keyserver hkp://wwwkeys.eu.pgp.net --recv-keys 07DC563D1F41B907
# gpg --armor --export 07DC563D1F41B907 | apt-key add -
# aptitude update

The following didn't work, or only worked partially:

# apt-get build-dep ffmpeg
Reading package lists... Done
Building dependency tree... Done
E: Unable to find a source package for ffmpegcvs
I continued anyway with the long install line of various libraries. I had to remove some of these libraries from the suggested install line. Particularly, since I had to recompile libx264 anyway, I should have removed libx264-dev at this point. It is removed in the line below:
# aptitude install liblame-dev libfaad-dev libfaac-dev libxvidcore4-dev liba52-0.7.4 liba52-0.7.4-dev build-essential subversion

# cd /usr/src
# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg

And so I got the current version as of March 17:

Checked out external at revision 28979.
Checked out revision 18021.

And I tried configure:

# cd /usr/src/ffmpeg
# ./configure --enable-gpl --enable-pp --enable-libvorbis --enable-liba52 --enable-libdc1394 --enable-libgsm --enable-libmp3lame --enable-libfaad --enable-libfaac --enable-pthreads --enable-libx264 --enable-libxvid

After various errors, and removing options, I ended up with this error:

ERROR: libx264 version must be >= 0.65.

And trying to install that from the debian-multimedia.org repository didn't work either:

# aptitude install libx264-65
The following packages have unmet dependencies:
libx264-65: Depends: libc6 (>= 2.7-1) but 2.3.6.ds1-13etch8 is installed and it is kept back.
So this thread came to the rescue, and I embarked on getting x264 and compiling that from source too:
# aptitude install git git-core

Trying to use Git at this point gives an error, but suggests the solution:

# update-alternatives --config git
There are 2 alternatives which provide `git'.
Selection    Alternative
-----------------------------------------------
*+        1    /usr/bin/git.transition
        2    /usr/bin/git-scm

Press enter to keep the default[*], or type selection number: 2

Next steps:

# cd /usr/src/
# git clone git://git.videolan.org/x264.git
# cd x264
# ./configure --enable-shared

This gave an error about yasm, which was not the right version. I could have tried to compile that too, as shown on the Ubuntu forum, but impatiently decided to try the suggested disable option instead. So the x264 part which worked:

# ./configure --enable-shared --disable-asm
# make
# make install
# ldconfig

And finally, ffmpeg:

# cd /usr/src/ffmpeg/
# ./configure --enable-gpl --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libx264 --enable-libxvid
# make
# make install
I also had to remove the old ffmpeg version (aptitude purge ffmpeg) which I had installed some time before this, and finally did this:
# echo /usr/local/lib >> /etc/ld.so.conf.d/local.conf
# ldconfig

Since I had a leftover libx264 installed with aptitude which was too old, it caused a segmentation fault when I tried to encode with ffmpeg. After searching (aptitude search x264), I found I had to aptitude purge libx264-54 libx264-dev . Then, just to be sure, I re-did the ./configure, make clean, make, make install incantations for both x264 and ffmpeg.

In the end, ffmpeg is working. I suppose the --disable-asm option on x264 will make encoding slower, so it may be worth compiling yasm, and re-compiling x264 again.

Now that ffmpeg is working, the main problem is trying to understand its myriad of incomprehensible and cryptically documented options.

Labels: , , , , , ,