Wednesday, May 13, 2020

Mobile skin and plugin for Roundcube webmail

The "Melanie2" mobile skin and plugin make Roundcube usable on phones. They are also easier to install than a quick look at the GitHub instructions would suggest.

These are the commands I needed on a Debian 10 ("Buster") machine where Roundcube 1.3.10 was installed. (The base install was done from the Debian repositories: apt install roundcube roundcube-sqlite3. Instead of SQLite, it is also possible to use PostgreSQL or MySQL with the roundcube-pgsql or roundcube-mysql packages.)

For the mobile stuff:

Create a directory to store the extra stuff and make upgrades easier:

mkdir -p /opt/roundcube-stuff
cd /opt/roundcube-stuff/

Get the files from Github:

git clone https://github.com/messagerie-melanie2/roundcube_skin_melanie2_larry_mobile
git clone https://github.com/messagerie-melanie2/roundcube_jquery_mobile
git clone https://github.com/messagerie-melanie2/roundcube_mobile

Instead of renaming and copying the directories, create symlinks with the expected names and copy those (as links) to Roundcube's skins and plugins folders:

ln -si $(pwd)/roundcube_skin_melanie2_larry_mobile/ melanie2_larry_mobile
ln -si $(pwd)/roundcube_jquery_mobile/              jquery_mobile
ln -si $(pwd)/roundcube_mobile                      mobile

cp -vd /opt/roundcube-stuff/melanie2_larry_mobile   /var/lib/roundcube/skins/
cp -vd /opt/roundcube-stuff/jquery_mobile           /var/lib/roundcube/plugins/
cp -vd /opt/roundcube-stuff/mobile                  /var/lib/roundcube/plugins/
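The point of `cp -vd` is that it copies the symlinks themselves instead of following them, so a later upgrade only needs a `git pull` in /opt/roundcube-stuff. A throwaway demonstration of that behavior (temporary paths, nothing from the real install):

```shell
# Demonstration: `cp -d` copies a symlink as a symlink (GNU coreutils).
tmp=$(mktemp -d)
mkdir "$tmp/src"
ln -s "$tmp/src" "$tmp/link"
cp -vd "$tmp/link" "$tmp/copy"
[ -L "$tmp/copy" ] && echo "copy is still a symlink"
rm -rf "$tmp"
```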

Finally, add 'mobile' to the $config['plugins'] array in /etc/roundcube/config.inc.php. If doing it by hand is too much work, copy/pasting this should work:

echo 'array_push( $config["plugins"], "mobile" );' | tee -a /etc/roundcube/config.inc.php
#or:
## echo '$config["plugins"][] = "mobile";' | tee -a /etc/roundcube/config.inc.php
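If you want to rehearse the append before touching the live config, it can be simulated on a throwaway copy first (a sketch; the stand-in file contents here are made up):

```shell
# Simulate the append on a temporary stand-in for config.inc.php.
cfg=$(mktemp)
printf '<?php\n$config["plugins"] = array("managesieve");\n' > "$cfg"
echo '$config["plugins"][] = "mobile";' >> "$cfg"
grep -c '"mobile"' "$cfg"    # prints 1 if the line was appended
rm -f "$cfg"
```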


Thursday, May 15, 2014

Bootcamp adventures

I needed to replace a drive in a Mac mini with a bigger one. The drive had Mac OS X 10.9 (Mavericks) and Bootcamp with Windows 7. After using Clonezilla to backup the drive and restore it to the bigger one, the partitions were obviously still the same size. There was just a lot of free unpartitioned space at the end of the new drive.

How to resize and move all the partitions (including the hidden EFI and Recovery partitions), to fill the free space?

Disk Utility will not let you touch the Bootcamp partition. Windows 7 looked like it could resize it, but not move it. Resizing it with Win7 created a mess: the Mac would still see the original size.

The heart of the problem seems to be that the Mac wants a GPT partition table, but for Bootcamp it creates a hybrid MBR, which is what Win7 sees. Win7 would have no problem with a GPT-only partition table, but Bootcamp makes a hybrid MBR anyway. Win7 then resizes the MBR partition, but doesn't update the GPT partition table, which is what the Mac sees. And the Mac doesn't let you fix it either.

At this point, I tried Gparted, but it wouldn't touch this mess (giving some error which I forgot).

Paragon's Camptune X looked like the best solution. However, after paying $20 for it, it turned out it couldn't do anything either. All it does is let you move a slider for the relative sizes of the Mac and Windows partitions. But you cannot grow a partition into the free space.

Finally, Rod Smith's Gdisk saved the day again.

What finally worked:

  • Booted a Gparted USB key, and resized the Windows partition to fill the entire disk.
  • Booted to Mac, and used Camptune X to enlarge the Mac partition while reducing the Windows one.
  • Now, Windows would not boot.
  • Used gdisk to re-create the hybrid MBR, and mark the Windows partition as bootable, as explained in detail in this post.
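From memory, the gdisk part of that last step looks roughly like the transcript below. This is illustrative only, not a script: the partition numbers and answers are examples, and the linked post has the real procedure.

```shell
# Illustrative gdisk session (prompts paraphrased; '2 3' and Y/N answers are examples):
#   sudo gdisk /dev/disk0
#   Command: r                      # recovery/transformation menu
#   Command: h                      # make hybrid MBR
#   Type partition numbers: 2 3    # partitions to mirror into the hybrid MBR
#   Place EFI GPT (0xEE) partition first? Y
#   Set the bootable flag? N        # for the Mac partition
#   Set the bootable flag? Y        # for the Windows partition
#   Command: w                      # write table and exit
```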


Wednesday, May 22, 2013

Windows 7 profile trouble

Event ID 1511: Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off.

Or

Event ID 1521: Windows cannot locate the server copy of your roaming profile and is attempting to log you on with your local profile. Changes to the profile will not be copied to the server when you log off. This error may be caused by network problems or insufficient security rights.

  • Login as a different user (with admin rights)
  • Under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, find keys named after the user's SID with a .bak suffix (like "S-1-5-21-4129847285-3583514821-2567293568-1001.bak")
  • Delete them
  • If needed, delete C:\Users\USERNAME

This seems to happen when a machine on the network thinks it is the domain master browser and convinces the real PDC that it is. I have seen it happen with a Mac (10.6.8) and with a new NAS. Both were running Samba (just like the actual PDC, which is a Debian Samba server).

To prevent Samba on these machines from trying to become the domain master browser, add this to the [global] section of /etc/smb.conf (or /etc/samba/smb.conf, or wherever it is on your machine):

os level = 1
lm announce = No
preferred master = No
local master = No
domain master = No

Maybe "os level = 1" is exaggerated, but I used that anyway. The "local master = no" setting doesn't get activated on the Mac (testparm -sv | grep master still shows it set to Yes), but it works anyway now.

To check the master browser from Linux or Mac: nmblookup -M YOURDOMAIN, or nmblookup -M -- - to list all master browsers, which may include machines that are not in the same domain/workgroup.


Saturday, May 11, 2013

Mediawiki with Postgres on Debian

A short guide to installing MediaWiki on Debian with PostgreSQL 9.1, with a fix for this error:

"Attempting to connect to database "postgres" as superuser "postgres"... error: No database connection"

Installing packages

The server is still using Debian Squeeze, but I expect it would be much the same for the new Debian Wheezy. Here I used squeeze-backports.

Add the backports repository if needed:

echo "deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free" >> /etc/apt/sources.list

Install everything:

apt-get update
apt-get -t squeeze-backports install apache2 postgresql-9.1 postgresql-contrib php5-pgsql
apt-get -t squeeze-backports install imagemagick libdbd-pg-perl
apt-get -t squeeze-backports install mediawiki

I use a separate IP for the wiki, so I need to add it to the interface:

mcedit /etc/network/interfaces
# wiki on its own IP
auto eth0:3
iface eth0:3 inet static
    address 192.168.10.4
    netmask 255.255.255.0

/etc/init.d/networking restart

Apache configuration

# I use the mod_rewrite module in Apache
a2enmod rewrite

# I prefer the config file in sites-enabled
# (but it's really just a symlink to /etc/mediawiki/apache.conf):
mv /etc/apache2/conf.d/mediawiki.conf /etc/apache2/sites-enabled

My virtual host config:

<VirtualHost *:80>
    ServerName wiki.example.lan
    ServerAlias wiki.example.lan
    ServerAdmin webmaster@example.com
    DocumentRoot /docs/www-wiki

    ErrorLog /var/log/apache2/wiki-error.log
    CustomLog /var/log/apache2/wiki-access.log combined

    ServerSignature On

    Alias /icons/ "/usr/share/apache2/icons/"

    RewriteEngine On
    RewriteRule ^/w(iki)?/(.*)$  http://%{HTTP_HOST}/index.php/$2 [L,NC]

    <Directory /docs/www-wiki/>
        Options +FollowSymLinks
        AllowOverride All
        # Default is Deny. Exceptions listed below with "Allow ...":
        Order Deny,Allow
        Deny from All
        Satisfy any
        # LAN
        Allow from 192.168.10.0/24
        # VPN
        Allow from 10.0.10.0/24

# If using LDAP
#        AuthType Basic
#        AuthName "Example Wiki. Requires user name and password"
#        AuthBasicProvider ldap
#        AuthzLDAPAuthoritative on
#        AuthLDAPURL ldap://localhost:389/ou=People,dc=example,dc=lan?uid
#        AuthLDAPGroupAttribute memberUid
#        AuthLDAPGroupAttributeIsDN off
#        Require ldap-group cn=users,ou=Groups,dc=example,dc=lan
    </Directory>

    # some directories must be protected
    <Directory /docs/www-wiki/config>
        Options -FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /docs/www-wiki/upload>
        Options -FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory "/usr/share/apache2/icons">
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

Moving files

I used a directory other than the default /var/lib/mediawiki. So I had to move things over:

cp -rp /var/lib/mediawiki /docs/www-wiki

The only tricky part, with the fix:

Before starting the web configurator in http://wiki.example.lan/config/ you need to define a password for the "postgres" database user. Mediawiki will start the psql client as the www-data system user, but with the -U argument to set the user to "postgres". Even if you defined a password for the system user "postgres", this is not the password of the database user "postgres".

So you need to start psql as the postgres system user, which you can do as root using sudo -u, and then set the password inside the psql client:

sudo -u postgres psql
psql (9.1.9)
Type "help" for help.

postgres=# \password
Enter new password:
Enter it again:
postgres=# \q

If you don't do this, the Mediawiki config will end with this error:

Attempting to connect to database "postgres" as superuser "postgres"... error: No database connection

And a big pink and unhelpful error box below.

The Postgresql log (tail /var/log/postgresql/postgresql-9.1-main.log) will show:

FATAL:  password authentication failed for user "postgres"

Finally

Now you just have to move LocalSettings.php to /etc/mediawiki/.

And if you used a different install root, you have to edit it to change the MW_INSTALL_PATH:

define('MW_INSTALL_PATH','/docs/www-wiki');
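If you prefer not to edit the file by hand, a sed one-liner can rewrite that line in place. A sketch, using the file location and install root from this guide (GNU sed assumed):

```shell
# Rewrite the MW_INSTALL_PATH definition in LocalSettings.php in place.
sed -i "s|^define('MW_INSTALL_PATH'.*|define('MW_INSTALL_PATH','/docs/www-wiki');|" \
    /etc/mediawiki/LocalSettings.php
grep MW_INSTALL_PATH /etc/mediawiki/LocalSettings.php   # verify the change
```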




Thursday, January 31, 2013

rsync server daemon on Mac OS X with launchctl

(Update: Added the --no-detach option to the rsync command. Newer MacOS versions wouldn't start the daemon without it. With the added argument, it now works again in Sierra.)

There are many web pages describing how to enable the rsync daemon on Mac OS X using the launchd/launchctl mechanism. But I had to use a different (and simpler) plist file in LaunchDaemons to make it work across reboots on Lion (10.7.4).

(I started by following this guide, and this very similar one. I also read this and this. In the end, what helped me get the plist file right was this thread. Particularly this post: "For one you have both a Program and a ProgramArguments key, when you should have only one or the other (you use Program if there is just one element to the command, or ProgramArguments if there are multiple." And this one.)

This is the .plist file I used in /Library/LaunchDaemons/org.samba.rsync.plist : 

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Disabled</key>
    <false/>
    <key>Label</key>
    <string>org.samba.rsync</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/rsync</string>
        <string>--daemon</string>
        <string>--no-detach</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <dict>
        <key>SuccessfulExit</key>
        <false/>
    </dict>
</dict>
</plist>

This is an example /etc/rsyncd.conf file:

secrets file = /etc/rsyncd.secrets
hosts allow = 192.168.1.0/24 10.0.0.1 *.cust.isp.tld

uid = nobody
gid = nobody
list = yes
read only = yes

[shared]
path = /Users/Shared
comment = Users-Shared
uid = someuser
gid = admin
auth users = user_in_secrets

The file /etc/rsyncd.secrets looks like:

some_rsync_user:password
other_user:other_password

To install it:

sudo -s
chown root:wheel /etc/rsyncd.*
chmod 644 /etc/rsyncd.conf
chmod 600 /etc/rsyncd.secrets
launchctl load /Library/LaunchDaemons/org.samba.rsync.plist
launchctl start org.samba.rsync ## (this last command is probably unneeded)
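As a side note, the chmod 600 above matters: rsyncd's "strict modes" option (on by default) makes the daemon reject a secrets file that other users can read. A quick check sketch (GNU stat syntax; on macOS the flag differs, as noted in the comment):

```shell
# Warn if the secrets file is more permissive than 600.
f=/etc/rsyncd.secrets
mode=$(stat -c '%a' "$f")    # e.g. "600"; on macOS use: stat -f '%Lp' "$f"
case "$mode" in
  600) echo "permissions ok" ;;
  *)   echo "warning: $f has mode $mode, expected 600" ;;
esac
```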

To check if it is installed and running:

launchctl list | grep rsync
808  -    0x7fddb4806c10.anonymous.rsync
-    0    org.samba.rsync

ps ax | grep [r]sync
  808   ??  Ss     0:00.00 /usr/bin/rsync --daemon

rsync --stats someuser@localhost::

To remove it:

sudo launchctl unload /Library/LaunchDaemons/org.samba.rsync.plist
sudo killall rsync

For logging transfers, add

log file = /var/log/rsyncd.log
transfer logging = yes

to /etc/rsyncd.conf. And to have the log rotated, create a file like /etc/newsyslog.d/rsyncd.conf and add

# logfilename          [owner:group]    mode count size when  flags [/pid_file] [sig_num]
/var/log/rsyncd.log   644  5    5000 *     J

 


Sunday, January 06, 2013

scripting disk partitioning in Linux - take 2

It is possible to use parted to script/automate disk partitioning in Linux, as described in "Command-line partitioning and formatting".

Another way is to use sgdisk from the GPT fdisk programs.

In Debian and derivatives, it can be installed with sudo apt-get install gdisk.

The current version 0.8.1 from the Ubuntu 12.04 repository would partition only the first 2 TB of a 4 TB disk. So you may need to get a more recent version from the downloads page. I got version 0.8.5 for x64, and that worked very well.

The following will create and format a single NTFS partition on an entire drive:

disk=/dev/sdb            # Make sure you got this right !!
label="My_Disk_Name"
echo "disk $disk will be completely erased."

sudo sgdisk -Z $disk
sudo sgdisk --new=0:0:-8M -t 1:0700 $disk
sudo sgdisk -p $disk
sudo mkntfs --verbose --fast --label "$label" --no-indexing --with-uuid ${disk}1

-Z removes any left-over partitions

--new=0:0:-8M creates a single partition from the start of the disk to 8MB before the end (just in case it's useful to not end on the very last sector)

-t 1:0700 sets the first partition we just created to type "Microsoft basic data", which is the type we want for a simple NTFS partition. For Linux, it would be -t 1:8300. Use sgdisk -L to get a list of partition types.
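Since these sgdisk commands are destructive, it may be worth guarding the $disk variable with a small sanity check before running them. A defensive sketch (the pattern only matches /dev/sdX-style names; adjust it to your device naming):

```shell
# Refuse anything that doesn't look like a whole-disk device node.
check_disk() {
  case "$1" in
    /dev/sd[a-z]) return 0 ;;                        # whole disk, e.g. /dev/sdb
    *) echo "refusing to touch '$1'" >&2; return 1 ;;
  esac
}
check_disk /dev/sdb  && echo "ok: whole disk"
check_disk /dev/sdb1 || echo "rejected: partition node"
```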

Note that for comfortable (and safer) manual partitioning, there is also cgdisk. It is like the old cfdisk, but works with new disks over 2TB.


Wednesday, January 02, 2013

Set up your own Dynamic DNS

The problem with external dynamic DNS services like dyndns.org, no-ip.com, etc. is that you constantly have to look after them. Either they are free, but they expire after 1 month and you have to go to their web site to re-activate your account. Or you pay for them, but then you need to take care of the payments, update the credit card info, etc. This is all much too cumbersome for something that should be entirely automated.

If you manage your own DNS anyway, it may be simpler in the long run to set up your own dynamic DNS system.

Bind has everything needed. There is a lot of info on the Internet on how to do it, but what I found tended to be more complicated than necessary, insecure, or both. So here is how I did it on a Debian 6 ("squeeze") server.

The steps described below are: initialize variables, create a key, configure bind, reload and test, and finally set up the client-side update scripts and the web server CGI.

Initialize variables

To make it easier to copy/paste commands, we initialize a few variables

binddir="/var/cache/bind"
etcdir="/etc/bind"

(In Debian, you can use grep directory /etc/bind/named.conf.options to find the correct binddir value)

For dynamic hosts, we will use a subdomain of our main zone: dyn.example.com.

host=myhost; zone=dyn.example.com

Create key

Most examples use the dnssec-keygen command. That would create 2 files (with ugly names): one .private and one .key (public) file. This is useless, since the secret key is the same in both files and the nsupdate method doesn't use a public/private key mechanism anyway.

There is a less-known and more appropriate command in recent distributions: ddns-confgen. By default, it just prints sample entries with instructions to STDOUT. You can try it out with:

ddns-confgen -r /dev/urandom -s $host.$zone.

Here we choose the "hmac-md5" algorithm instead of the default "hmac-sha256", which simplifies things with nsupdate later. We also set the key name to be the same as the host's name. That way, we can use a wildcard in the "update-policy" in named.conf.local and don't need to update it every time we add a host.

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k $host.$zone -s $host.$zone. | tee -a $etcdir/$zone.keys

chown root:bind   $etcdir/$zone.keys
chmod u=rw,g=r,o= $etcdir/$zone.keys

Depending on how you intend to use nsupdate, you may want to also have a separate key file for every host key. nsupdate cannot use the $zone.keys file if it contains multiple keys. So you might prefer to directly create these individual keyfiles by adding something like > $etcdir/key.$host.$zone :

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k $host.$zone -s $host.$zone. | tee -a $etcdir/$zone.keys > $etcdir/key.$host.$zone

chown root:bind   $etcdir/$zone.keys $etcdir/key.*
chmod u=rw,g=r,o= $etcdir/$zone.keys $etcdir/key.*

Configure bind

Create zone file

Edit $binddir/$zone :

$ORIGIN .
$TTL  3600 ; 1 hour

dyn.example.com IN SOA  dns-server.example.com. hostmaster.example.com. (
         1 ; serial (start at 1 for a dynamic zone instead of the usual date-based serial)
      3600 ; refresh by secondaries (but they get NOTIFY-ed anyway)
       600 ; retry (every 10 minutes if refresh fails)
    604800 ; expire (slaves remove the record after 1 week if they could not refresh it)
       300 ; minimum ttl for negative answers (5 minutes)
)

$ORIGIN dyn.example.com.
NS      dns-server.example.com.

Edit /etc/bind/named.conf.local

Edit /etc/bind/named.conf.local to add :

// DDNS keys
include "/etc/bind/dyn.example.com.keys";

// Dynamic zone
zone "dyn.example.com" {
    type master;
    file "/var/cache/bind/dyn.example.com";
    update-policy {
        // allow host to update themselves with a key having their own name
        grant *.dyn.example.com self dyn.example.com.;
    };
};

Reload server config

rndc reload && sleep 3 && grep named /var/log/daemon.log | tail -20

(adjust the sleep and tail values depending on the number of zones your DNS server handles, so that it has time to report any problems)

Test

If you created individual key files, or your $zone.keys file contains only a single key, you can test like this:

host=myhost; ip=10.11.12.13; zone=dyn.example.com; server=dns-server.example.com; keyfile=$etcdir/key.$host.$zone
echo -e "server $server\n zone $zone.\n update delete $host.$zone.\n update add $host.$zone. 600 A $ip\n send" | nsupdate -k "$keyfile"

Or, more readable and with an extra TXT record:

cat <<EOF | nsupdate -k $keyfile
server $server
zone $zone.
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

(If you get a could not read key from $keyfile: file not found error, and the file actually exists and is owned by the bind process user, you may be using an older version of nsupdate (like the version in Debian Etch). In that case, replace nsupdate -k $keyfile with nsupdate -y "$key_name:$secret" using the key name and secret found in your key file.)
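For that older nsupdate, the key name and secret can be pulled out of the generated key file with awk. A sketch, assuming the usual ddns-confgen output format (key "NAME" { algorithm ...; secret "..."; };) and the $etcdir/$host/$zone variables from above:

```shell
# Extract name and secret from a ddns-confgen key file, for `nsupdate -y`.
keyfile=$etcdir/key.$host.$zone
key_name=$(awk '$1 == "key"    { gsub(/"/, "", $2); print $2 }' "$keyfile")
secret=$(awk   '$1 == "secret" { gsub(/[";]/, "", $2); print $2 }' "$keyfile")
echo "would run: nsupdate -y $key_name:$secret"
```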

Check the result:

host -t ANY $host.$zone

It should output something like

myhost.dyn.example.com descriptive text "Updated on Tue Jan  1 17:16:03 CET 2013"
myhost.dyn.example.com has address 10.11.12.13

If you try to use a file with multiple keys in the -k option to nsupdate, you will get an error like this:

... 'key' redefined near 'key'
could not read key from FILENAME.keys.{private,key}: already exists

Usage

You can run the update from a /etc/network/if-up.d/ddnsupdate script.

If you have set up an update CGI page on your server, you can use something like this, letting the web server use the client IP address it sees in your request anyway.

#!/bin/sh

server=dns-server.example.com
host=myhost
secret="xBa2pz6ZCGQJ5obmvmp26w==" # copy the right key from $etcdir/$zone.keys

wget -O /dev/null --no-check-certificate "https://$server/ddns/update.cgi?host=$host;secret=$secret"

Otherwise, you can use nsupdate, but you need to determine your external IP first:

#!/bin/sh

server=dns-server.example.com
zone=dyn.example.com
host=myhost
secret="xBa2pz6ZCGQJ5obmvmp26w==" # copy the right key from $etcdir/$zone.keys

ip=$(wget -q -O - http://example.com/myip.cgi)

cat <<EOF | nsupdate
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

I used a very simple myip.cgi script on the web server, to avoid having to parse the output of the various existing services which show your IP in the browser:

#!/bin/sh
echo "Content-type: text/plain"
echo ""
echo $REMOTE_ADDR

This alternative script example uses SNMP to get the WAN IP from the cable router. It only does the update if the address has changed, and logs to syslog.

#!/bin/sh

zone=dyn.example.com
host=myname
secret="nBlw18hxipEyMEVUmwluQx=="
router=192.168.81.3

server=$(dig +short -t SOA $zone | awk '{print $1}')

ip=$( snmpwalk -v1 -m RFC1213-MIB -c public $router ipAdEntAddr | awk '!'"/$router/ {print \$4}" )

if [ -z "$ip" ]; then
 echo "Error getting wan ip from $router" 1>&2
 exit 1
fi

oldip=$(dig +short $host.$zone)

if [ "$ip" = "$oldip" ]; then
 logger -t `basename $0` "No IP change for $host.$zone ($ip)"
 exit
fi

cat <<EOF | nsupdate
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated on $(date)"
send
EOF

logger -t `basename $0` "IP for $host.$zone changed from $oldip to $ip"

Web server update.cgi

An example update.cgi :

#!/usr/bin/perl

## Use nsupdate to update a DDNS zone.

## (This could be done with the Net::DNS module. It
##  would be more portable (Windows, etc.), but also
##  more complicated. So I chose the nsupdate utility
##  that comes with Bind instead.)

# "mi\x40alma.ch", 2013

use strict;

my $VERSION = 0.2;
my $debug = 1;

my $title = "DDNS update";

my $zone     = "dyn.example.com";
my $server   = "localhost";
my $nsupdate = "/usr/bin/nsupdate";


use CGI qw(:standard);

my $q = new CGI;

my $CR = "\r\n";

print $q->header(),
      $q->start_html(-title => $title),
      $q->h1($title);


if (param("debug")) {
    $debug = 1;
};

my $host   = param("host");
my $secret = param("secret");
my $ip     = param("ip") || $ENV{"REMOTE_ADDR"};
my $time   = localtime(time);

foreach ($host, $secret, $ip) {
    s/[^A-Za-z0-9\.\/\+=]//g; # sanitize, just in case...
    unless (length($_)) {
        die "Missing or bad parameters. host='$host', secret='$secret', ip='$ip'\n";
    }
}

my $commands = qq{
server $server
zone $zone.
key $host.$zone $secret
update delete $host.$zone.
update add $host.$zone. 600 A $ip
update add $host.$zone. 600 TXT "Updated by $0 v. $VERSION, $time"
send
};

print $q->p("sending update commands to $nsupdate:"), $CR,
      $q->pre($commands), $CR;

open( NSUPDATE, "| $nsupdate" ) or die "Cannot open pipe to $nsupdate : $!\n";
print NSUPDATE $commands        or die "Error writing to $nsupdate : $!\n";
close NSUPDATE                  or die "Error closing $nsupdate : $!\n";

print $q->p("Done:"), $CR;

my @result = `host -t ANY $host.$zone`;

foreach (@result) {
    print $q->pre($_), $CR;
}


if ($debug) {
# also log received parameters
    my @lines;
    for my $key (param) {
        my @values = param($key);
        push @lines, "$key=" . join(", ", @values);
    }
    warn join("; ", @lines), "\n";
}

print $q->end_html, $CR;

__END__


Tuesday, June 26, 2012

USB OSX installer for the impatient

To make a bootable USB disk with the Mac OS X installer, the guides I found are much too verbose for my taste, and have too many cute screenshots and ads. Here is a summary for the impatient.

For Mavericks, Yosemite, El Capitan, Sierra

There is a handy "createinstallmedia" command.

The only difficulty is getting the installer, which must be downloaded from the App Store. If you need an installer older than the current version, the only way seems to be to find it on the "Purchased" page.

For the Sierra (10.12) installer, try this App Store link: https://itunes.apple.com/us/app/macos-sierra/id1127487414?mt=12

The downloaded installer image is automatically started. If you proceed with the install, it will be deleted afterwards. So copy it before installing or just close the installer.

  • Get a USB disk of 8GB or more.
  • Create a single GPT (GUID) partition on the USB key, and format it. This can be done in Disk Utility, but command line junkies can also do it this way:
    diskutil list ## check which is the device name to format
    disk=/dev/diskX ## USE CORRECT DISK found with previous command
    echo "This will completely destroy '$disk'"
    # diskutil partitionDisk $disk GPT hfs+ Untitled 100% ## Remove the leading "#" once you are sure
    
  • Define variables for the installer location and your USB disk:
    ("/Volumes/Untitled" is the mount point of your USB key, which will be erased.)
    installer="/Applications/Install OS X Yosemite.app"
    USBdisk="/Volumes/Untitled"
  • Then run:
    sudo "$installer/Contents/Resources/createinstallmedia" --volume "$USBdisk" --applicationpath "$installer" --nointeraction

That's it.

For older versions like (Mountain) Lion

The installer disk image can be found in Applications / Install Mac OS X Lion.app (right-click -> Show Package Contents) / Contents / SharedSupport / InstallESD.dmg

  • Open InstallESD.dmg. You get a "Mac OSX Install ESD" disk on the desktop
  • Partition and format the (8 GB) USB key as standard Mac OS X Extended (Journaled). (The partition table defaults to MBR for USB drives; that's OK)
  • In the "Restore" tab of Disk Utility:
    • the source is the mounted image on your desktop: "Mac OS X Install ESD" (NOT the .dmg file)
    • the destination is your new USB Mac partition (not the drive itself)

Other instructions suggest using the InstallESD.dmg file as the source, and the USB key itself (not the partition it contains) as the destination. That may work too. Just don't mix both methods. I had tried that and failed, but maybe it was because I had first made a GPT partition table instead of MBR?

If you only have a 4GB key, it seems to work using Carbon Copy Cloner and de-selecting all unneeded language packs. But I haven't tried an install from such a key.


Sunday, June 03, 2012

Microsoft Security Essentials trouble

Update: Over 2 years later (August 2014), I still come across this problem. This time, the error code was 0x8004ff19, but the solution was the same: delete "%PROGRAMFILES%\Microsoft Security Client" and re-install with the current installer.

It seems that there is a pretty bad problem with Microsoft Security Essentials. I was surprised to notice that it wasn't running on several machines. It turns out that an automatic upgrade through Windows Update fails in a very bad way: it sort of uninstalls the old version, and then fails to install the new version. Users don't notice anything special.

Trying to re-install it by hand also fails with a very informative message (as usual for MS error messages):

Cannot complete the Security Essentials installation
An error has prevented the Security Essentials setup wizard from completing successfully. Please restart your computer and try again.
Error code:0x80070005

Of course, clicking on the "Get help" link is of no help at all.

Apparently, the code "0x80070005" means "Access denied", but there is no way to find out to what the access was denied.

Searching through the event logs reveals other errors, which I will list here in the hope that they help other Googlers:

Log Name:      System
Source:        Microsoft-Windows-WindowsUpdateClient
Event ID:      20
Task Category: Windows Update Agent
Description: Installation Failure: Windows failed to install the following update with error 0x80070643: Microsoft Security Essentials Client Update Package - KB2691905.

Log Name:      Application
Source:        Microsoft Security Client Setup
Event ID:      100
Description: HRESULT:0x80070005
Description:Cannot complete the Security Essentials installation. An error has prevented the Security Essentials setup wizard from completing successfully. Please restart your computer and try again. Error code:0x80070005. Access is denied.

And also older errors which may or may not be related:

Log Name:      Application
Source:        SideBySide
Event ID:      72
Description: Activation context generation failed for "c:\program files\microsoft security client\MSESysprep.dll".Error in manifest or policy file "c:\program files\microsoft security client\MSESysprep.dll" on line 10. The element imaging appears as a child of element urn:schemas-microsoft-com:asm.v1^assembly which is not supported by this version of Windows.

Advice found on the web which didn't work:

  • Uninstall MSSE then re-install (it was not listed in the installed programs, so I couldn't uninstall it)
  • Uninstall any other anti virus software (I didn't have any)
  • Run OneCareCleanup (silly because it was never installed)
  • etc.

Anyway, after a lot of useless searching and trying, what worked for me was to simply
rmdir /S /Q "%PROGRAMFILES%\Microsoft Security Client"

(Be careful with rmdir /s /q ! It deletes the whole folder and sub-folders without asking first!)

After that, I could re-install normally.

But it is very disturbing to see that an antivirus can just stop working without any obvious alert or user notification.

PS: It turns out that even Mark Russinovich had a problem with MSSE. His immediate error was different, but was one I also eventually found in the logs. His solution was to delete the HKCR\Installer\UpgradeCodes\11BB99F8B7FD53D4398442FBBAEF050F registry key. I had tried his procmon tool to try to find what returned "access denied", but then decided to resort to some primitive and brutal approaches first...


Wednesday, February 15, 2012

WPKG client in Windows 7

Wpkg is a fantastic tool to manage software installs on groups of Windows machines without a Windows server with Active Directory. However, I had a few problems with it in Windows 7. These were solved by replacing the Wpkg Client with Wpkg-GP.

By default, the Wpkg service runs at startup and does its installs in the background. But very often, it failed for some reason to get a connection to the network share while the service was starting, and aborted. The log showed

WNetAddConnection2-> The network location can not be reached.

I tried to add dependencies to the service, but didn't really find a reliable solution.

So in services.msc, I changed the service startup to "Automatic (delayed)". That solved the connection problem, but brought another one. If I want to upgrade Thunderbird, for example, the installer uses a taskkill command to close Thunderbird before upgrading it. But with a delayed start, the user has probably already started Thunderbird, and it seems quite inappropriate to kill it while it may actually be in use.

In Windows XP, it was possible to delay the login window so that wpkg could do its thing before the user logged in, but for some reason this doesn't work in Windows 7 anymore.

So the next step was to change the configuration in settings.xml to have wpkg run at shutdown instead. This also failed because, as far as I understand, Windows Vista/7 don't allow a process to prevent shutdown for more than 5 seconds.

Finally, the solution was to remove the standard Wpkg Client, and replace it with Wpkg-GP. That seems to work. I changed the wpkg configuration back to running at startup, and added a wpkg-gp package which also takes care of uninstalling the original wpkg client:

<package id="wpkg-gp" name="Wpkg-GP" revision="%version%">

    <variable name="version" value="0.15" />

    <check type="uninstall" condition="versiongreaterorequal" path="Wpkg-GP %version% .*" value="%version%"/>

    <install cmd="%SOFTWARE%\wpkg-gp\Wpkg-GP-0.15_x64.exe /S /INI %SOFTWARE%\wpkg-gp\Wpkg-GP.ini">
        <exit code="3010" reboot="delayed" />
    </install>
    <install cmd='msiexec /x "%SOFTWARE%\wpkg\WPKG Client 1.3.14-x64.msi" /qn /norestart' />

    <upgrade cmd="%SOFTWARE%\wpkg-gp\Wpkg-GP-0.15_x64.exe /S /INI %SOFTWARE%\wpkg-gp\Wpkg-GP.ini">
        <exit code="3010" reboot="delayed" />
    </upgrade>
</package>
 

Labels: , , , , ,

Tuesday, July 26, 2011

Importing root certificates into Firefox and Thunderbird

Update Feb. 2012: see at the end for an alternative for new profiles.

This is ridiculously complicated and makes me wonder whether I should just drop Firefox in Windows and go back to IE.

The problem:

How to automatically pre-import your self-signed certification authority into all user profiles for Firefox and Thunderbird.

The solution:

You need the Mozilla certutil utility (not the Microsoft certutil.exe).

In Windows, you would need to compile the NSS tools or use some ancient, hard-to-find Windows binary to get it. But all my user profiles are on a Samba server, so it was much easier to do it on the server, with the added benefit of having Bash and not needing to struggle with the horrible cmd.exe.

First install the tools. In Debian, it would be:

apt-get install libnss3-tools

Then adapt this long command to your paths:

find /path/to/users-profiles -name cert8.db -printf "%h\n" | \
while read -r dir; do
  certutil -A -n "My Own CA" -t "C,C,C" -d "$dir" -i "/path/to/my_own_cacert.cer"
done

(-printf "%h\n" prints just the directory, without the file name, one per line. That is fed to the $dir variable needed in the certutil command. The -n option is a required nickname for the certificate. -t "C,C,C" is what will make you accept any certificate signed by this CA you are importing).
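To see what the -printf "%h\n" part produces before pointing the loop at real profiles, it can be tried on a throwaway directory tree (the user and profile names below are made up for illustration):

```shell
# Build a fake profile tree and show which directories the find produces.
tmp=$(mktemp -d)
mkdir -p "$tmp/alice/ab12cd34.default" "$tmp/bob/ef56gh78.default"
touch "$tmp/alice/ab12cd34.default/cert8.db" "$tmp/bob/ef56gh78.default/cert8.db"

# -printf "%h\n" prints only the directory of each matching file, one per line
find "$tmp" -name cert8.db -printf "%h\n"

rm -rf "$tmp"
```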

See also: the certutil documentation, and a better explanation of the trust arguments (-t option).

Alternative:

The above solution works to add a certificate to an existing profile's cert8.db. To have newly created profiles include the certificate, you need to put a good cert8.db file into the program's directory.

  1. Either import your certificate(s) manually into an existing profile, or use the steps above to add the certificate(s) to a cert8.db file.
  2. Copy the new cert8.db to the Firefox (or Thunderbird) program directory, into a "defaults/profile" subdirectory (i.e. "C:\Program Files (x86)\Mozilla Firefox\defaults\profile\").

This way, newly created profiles will copy this cert8.db file instead of creating a new one from scratch.

Labels: , , , , , , , , , , , ,

Sunday, July 03, 2011

Etch to Lenny trouble with libxml2

While upgrading a few Debian Etch systems to Lenny, I had a lot of trouble which showed up like this:
symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64

The real cause seems to have been that I had 2 libz libraries installed:

 # /sbin/ldconfig -pNX | grep libz
 libz.so.1 (libc6) => /lib/libz.so.1
 libz.so.1 (libc6) => /usr/lib/libz.so.1

So the solution was quite simple:

 # rm /lib/libz.so.1*

That's all that was needed to get rid of the mountain of dpkg errors, and continue the upgrades following the Debian guide. The next upgrade to Squeeze went smoothly.
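The same duplicate-library check can be generalized: list every soname that the linker cache resolves from more than one directory. This is pure text processing on ldconfig's output, so it is harmless to run anywhere (an illustrative sketch):

```shell
# Print sonames known to the dynamic linker cache under more than one path.
# Any output here deserves a closer look, as in the libz case above.
/sbin/ldconfig -pNX | awk '/=>/ {print $1}' | sort | uniq -d
```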

For the benefit of Google searchers, here is a full error listing:

 Unpacking replacement shared-mime-info ...
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/shared-mime-info_0.30-2_i386.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
update-mime-database: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Preparing to replace libgnomevfs2-common 1:2.14.2-7 (using .../libgnomevfs2-common_1%3a2.22.0-5_all.deb) ...
Unpacking replacement libgnomevfs2-common ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/libgnomevfs2-common_1%3a2.22.0-5_all.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Preparing to replace libgnome2-common 2.16.0-2 (using .../libgnome2-common_2.20.1.1-1_all.deb) ...
Unpacking replacement libgnome2-common ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: warning - old post-removal script returned error exit status 127
dpkg - trying script from the new package instead ...
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error processing /var/cache/apt/archives/libgnome2-common_2.20.1.1-1_all.deb (--unpack):
 subprocess new post-removal script returned error exit status 127
gconftool-2: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64
dpkg: error while cleaning up:
 subprocess post-removal script returned error exit status 127
Errors were encountered while processing:
 /var/cache/apt/archives/shared-mime-info_0.30-2_i386.deb
 /var/cache/apt/archives/libgnomevfs2-common_1%3a2.22.0-5_all.deb
 /var/cache/apt/archives/libgnome2-common_2.20.1.1-1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install.  Trying to recover:
dpkg: dependency problems prevent configuration of libbonoboui2-0:
 libbonoboui2-0 depends on libglade2-0 (>= 1:2.6.1); however:
  Version of libglade2-0 on system is 1:2.6.0-4.
 libbonoboui2-0 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libbonoboui2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomecanvas2-0:
 libgnomecanvas2-0 depends on libglade2-0 (>= 1:2.6.1); however:
  Version of libglade2-0 on system is 1:2.6.0-4.
 libgnomecanvas2-0 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgnomecanvas2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgail18:
 libgail18 depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgail18 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgail-common:
 libgail-common depends on libgail18 (>= 1.9.1); however:
  Package libgail18 is not configured yet.
 libgail-common depends on libgtk2.0-0 (>= 2.12.0); however:
  Version of libgtk2.0-0 on system is 2.8.20-7.
dpkg: error processing libgail-common (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomevfs2-extra:
 libgnomevfs2-extra depends on libgnomevfs2-common (>= 1:2.22); however:
  Package libgnomevfs2-common is not installed.
 libgnomevfs2-extra depends on libgnomevfs2-common (<< 1:2.23); however:
  Package libgnomevfs2-common is not installed.
dpkg: error processing libgnomevfs2-extra (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libgnomevfs2-0:
 libgnomevfs2-0 depends on libgnomevfs2-common (>= 1:2.22); however:
  Package libgnomevfs2-common is not installed.
 libgnomevfs2-0 depends on libgnomevfs2-common (<< 1:2.23); however:
  Package libgnomevfs2-common is not installed.
dpkg: error processing libgnomevfs2-0 (--configure):
 dependency problems - leaving unconfigured
Setting up libgnome-keyring0 (2.22.3-2) ...
dpkg: dependency problems prevent configuration of libgnome2-0:
 libgnome2-0 depends on libgnomevfs2-0 (>= 1:2.17.90); however:
  Package libgnomevfs2-0 is not configured yet.
 libgnome2-0 depends on libgnome2-common (>= 2.20); however:
  Package libgnome2-common is not installed.
 libgnome2-0 depends on libgnome2-common (<< 2.21); however:
  Package libgnome2-common is not installed.
dpkg: error processing libgnome2-0 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of xserver-xorg-input-mouse:
 xserver-xorg-input-mouse depends on xserver-xorg-core (>= 2:1.4); however:
  Version of xserver-xorg-core on system is 2:1.1.1-21etch5.
dpkg: error processing xserver-xorg-input-mouse (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of xserver-xorg-input-kbd:
 xserver-xorg-input-kbd depends on xserver-xorg-core (>= 2:1.4); however:
  Version of xserver-xorg-core on system is 2:1.1.1-21etch5.
dpkg: error processing xserver-xorg-input-kbd (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 libbonoboui2-0
 libgnomecanvas2-0
 libgail18
 libgail-common
 libgnomevfs2-extra
 libgnomevfs2-0
 libgnome2-0
 xserver-xorg-input-mouse
 xserver-xorg-input-kbd


Labels: , , , , ,

Tuesday, June 21, 2011

Command-line partitioning and formatting

Automatic non-interactive formatting in Linux is possible with parted.
The following creates a single ext3 partition on an entire disk. Of course, if you assign the wrong disk to the $disk variable, it will be a bad day...
# Select the disk device and choose a label for the partition
# ptype=msdos for disks up to 2 TB. ptype=gpt for disks over 2 TB
disk=/dev/sdx; label=my_part_label; ptype=msdos
I have added sleep commands, so I can just copy/paste the whole thing and still have a chance to Ctrl-C if I change my mind at the last second.

# print the current partition(s) state
parted $disk print ; sleep 10

# create a gpt or msdos partition table (depending on $ptype variable defined above)
parted -a optimal $disk mklabel $ptype ; sleep 5

# create the partition, starting at 1MB which may be better
# with newer disks
parted -a optimal -- $disk unit compact mkpart primary ext3 "1" "-1" ; sleep 5

# format it
mke2fs -j -v -L "$label" ${disk}1 && echo "OK. That's it"
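Since a wrong $disk means disaster, a small guard can be pasted before the commands above. This is just a sketch of the idea: refuse to continue if the device node doesn't exist or already has mounted filesystems.

```shell
# Refuse to continue if $disk is not a block device or has mounted partitions.
check_disk() {
    d=$1
    [ -b "$d" ] || { echo "not a block device: $d" >&2; return 1; }
    if grep -q "^$d" /proc/mounts; then
        echo "$d has mounted filesystems, aborting" >&2
        return 1
    fi
    return 0
}

# Example: on a machine without /dev/sdx, this refuses as expected.
check_disk /dev/sdx || echo "refused"
```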

Update: see also scripting disk partitionning in Linux - take 2 for another way, using sgdisk instead of parted.

Labels: , , , , , , ,

Tuesday, June 07, 2011

Windows installers options for silent installs

Different installers use different command-line options for silent or unattended installs. Since I had started these notes, I have found a good overview on unattended.sourceforge.net.

Inno Setup

can be identified with the "Inno Setup" string appearing in various places in the installer's .exe. The options are described here. The most useful ones are:
  • /SAVEINF="filename"
    Save installation settings to the specified file.
  • /LOADINF="filename"
    Load the settings from the specified file after having checked the command line.
  • /SP-
    Disables the This will install... Do you wish to continue? prompt at the beginning of Setup.
  • /SILENT, /VERYSILENT
    When Setup is silent, the wizard and the background window are not displayed, but the installation progress window is. When a setup is very silent, this progress window is not displayed either. Everything else is normal, so for example error messages during installation are still displayed.
  • /DIR="x:\dirname"
  • /LANG=language
    Specifies the language to use. language specifies the internal name of the language as specified in a [Languages] section entry.

Nullsoft's NSIS

can be identified with the "NSIS" string appearing in various places in the installer's .exe. The options are described here, but there seem to be only 2 useful ones:
  • /S
    Silent installation
  • /D=C:\Bla
    Set output folder
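The "identified with the ... string" checks above can be automated with a small helper. A heuristic sketch, not foolproof — grep -a just treats the binary as text and looks for the marker strings:

```shell
# Guess the installer type from marker strings inside the .exe.
installer_type() {
    if grep -aq "Inno Setup" "$1"; then
        echo inno
    elif grep -aq "NSIS" "$1"; then
        echo nsis
    else
        echo unknown
    fi
}

# Hypothetical usage:
# installer_type setup.exe
```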

Labels: , , , , , ,

Sunday, May 29, 2011

Mac and OpenLDAP: Local homes for network users

I wanted a Mac to authenticate users against our Debian OpenLDAP server, but to create a local home directory on the Mac (see here for more details). The usual configuration for network users on the Mac is to mount their homes from the server over NFS. There are many excellent instructions on the net on how to do that. But finding help on how to have them use a local home instead was much more difficult.

It turns out it can be done very simply, by disabling one line in /etc/auto_master on the Mac. By default, it contains +auto_master, which tells the Mac's automounter to look for an automount map in LDAP. If this line is disabled, the Mac will create a local home for network users the first time they log in. Since our userHomes in LDAP are defined as /home/username, the Mac home is created under /home instead of /Users, which is fine.

So for such a setup, you do NOT need to import an Apple schema into your LDAP directory. (That was quite a hassle because you need to tweak the original schema which is not quite kosher; but it was unnecessary).

All you need to do is comment out this single line in /etc/auto_master to make it

#+auto_master  # Use directory service

Or copy/paste this:

sudo perl -i.orig -pe 's/^(\+auto_master.*)/## $1/' /etc/auto_master

Labels: , , , , , ,

Saturday, November 20, 2010

Moving IMAP Maildir to another user

A little recipe to move a user's IMAP mails to another user. (Tested on the Courier IMAP server on Debian).

Useful in situations like John leaving the company and Bob needing to have access to John's old emails.

olduser=john; newuser=bob
maildirmake -f $olduser /home/$newuser/Maildir/
cd /home/$olduser/Maildir/
for d in * ; do
    cp -pr "$d" "/home/$newuser/Maildir/.$olduser/"
done
echo "INBOX.$olduser" >>/home/$newuser/Maildir/courierimapsubscribed
for d in .??*; do
    cp -pr "$d" "/home/$newuser/Maildir/.$olduser$d"
    echo "INBOX.$olduser$d" >>/home/$newuser/Maildir/courierimapsubscribed
done
chown -R $newuser /home/$newuser/Maildir

(Beware that if John had a folder with a one-letter name, that one will not be copied. It's because "for d in .*" would make a mess trying to copy "." and "..", so the second loop uses "for d in .??*" instead.)
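The difference between the two globs is easy to demonstrate on a scratch directory (the folder names below are made up):

```shell
# ".??*" matches hidden names of at least three characters, so it skips
# "." and ".." - but also skips one-letter folders like ".a".
tmp=$(mktemp -d)
mkdir "$tmp/.a" "$tmp/.Sent" "$tmp/.Sent.2010"
cd "$tmp"
echo "with .??* :"
printf '  %s\n' .??*    # lists .Sent and .Sent.2010, but not .a, . or ..
cd / && rm -rf "$tmp"
```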

Labels: , , , , , , ,

Wednesday, December 30, 2009

OpenVPN client on Ubuntu 9.04 Jaunty

A few notes on setting up the openvpn client on Ubuntu, after my move from Windows. Configuration through the Network Manager VPN tab didn't work for me. As far as I could see, there was no way to directly import or copy my existing .ovpn files from Windows, because NM doesn't use them. Instead, it uses its own config files, which do not provide all the options of the standard openvpn client. The solution was to
  • install openvpn and resolvconf so that the name servers can be updated: sudo apt-get install openvpn resolvconf
  • copy my .ovpn and key files to /etc/openvpn,
  • install gopenvpn to have a handy GUI launcher in the Gnome Panel. (the .deb package needs to be downloaded from the site)
  • Edit my .ovpn files to add up /etc/openvpn/update-resolv-conf and down /etc/openvpn/update-resolv-conf
It seems to work fine now. One example client .ovpn file looks like this:
client
dev tun
proto udp

remote hostname.example.com 1194

resolv-retry infinite
nobind
persist-key
persist-tun
mute-replay-warnings

ca example-cacert.pem
cert clientname.example.lan.pem
key clientname.example.lan.key

comp-lzo
verb 3

up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
Two little things are annoying: I need to enter my password, because changing the network requires root privileges. I'm sure there must be a solution, but the annoyance is probably not worth the time needed to find and apply it. The other glitch is that the window asking for my key's password sometimes opens behind the others.

Labels: , , , ,

Friday, September 18, 2009

Boot an iso file on your hard disk using grub

I needed to update the firmware on several of the infamous Seagate Barracuda 7200.11 drives with the buggy "SD15" firmware.

Seagate offers a Windows executable or an ISO file to create a bootable CD. However, I had only a Mac, a Linux notebook without a CD drive, and a Windows notebook with neither a CD nor any possibility to attach SATA drives.

After some googling, it looked like the Intel Mac could be booted from such a DOS CD, and that might have worked ... but I didn't even have a blank CD.

But wouldn't it be possible to boot from an .iso file on the hard disk?

As it turns out, it is. And this will probably prove quite handy to boot various Live CDs.

My notebook has Ubuntu, with the standard Grub boot loader. Grub cannot boot from an iso file, but another boot loader called grub4dos can. And Grub can boot that smart cousin of his... Yes, it's a sort of weird setup, but it worked for me.

There are also ways to do that on a Windows system. But here is the Linux version:

  1. Get grub4dos from sourceforge or gna.org
  2. Extract grub.exe from the zip file and copy it to your /boot directory.
  3. Also put the .iso you want to boot into your /boot directory.
  4. Edit your /boot/grub/menu.lst file and add these lines:
title == Use grub4dos for the following entries: ==

title 1: Reload this menu using grub4dos to enable booting the next entries
kernel /boot/grub.exe

title 2: boot the Seagate firmware ISO file
map --mem /boot/MooseDT-SD1A-3D4D-16-32MB.ISO (hd32)
map --hook
chainloader (hd32)

(Adapt the "title 2: ..." line, and of course the "map --mem /boot/name-of-your.iso" line with the right file name. Note that it is case-sensitive)

When booting, you will first be in your standard grub, where you select the "1: ..." entry to load grub4dos. It will do just like grub, find the same menu.lst file, and display it again. Now you can select the .iso entry, which grub4dos will understand.

It is probably also possible to store the iso files on other partitions, but that is "left as an exercise to the reader" ...

See also: the wiki, this guide, a "success stories" thread or this other thread with a much hairier setup.

Labels: , , , ,

Wednesday, March 18, 2009

Installing latest FFMPEG on Debian Etch

How to install the latest FFMPEG on a Debian 4 ("Etch") server? This post encouraged me to try it, despite the fact that it needs compiling from source, and that Etch isn't even the current "stable" Debian anymore. PhillC's post helped a lot, but it still didn't work for me exactly as described there. So here is how it eventually did work for me.

# echo "deb http://www.debian-multimedia.org etch main" >>/etc/apt/sources.list

or

# echo "deb http://www.debian-multimedia.org stable main" >>/etc/apt/sources.list

(I used both, and fiddled with enabling and disabling that repository, so I'm not sure anymore which one ended up being useful).

aptitude update gave me a GPG error, so I had to add the key it mentioned:

# gpg --keyserver hkp://wwwkeys.eu.pgp.net --recv-keys 07DC563D1F41B907
# gpg --armor --export 07DC563D1F41B907 | apt-key add -
# aptitude update

The following didn't work, or only worked partially:

# apt-get build-dep ffmpeg
Reading package lists... Done
Building dependency tree... Done
E: Unable to find a source package for ffmpegcvs
I continued anyway with the long line installing various libraries. I had to remove some of these libraries from the suggested install line. In particular, since I had to recompile libx264 anyway, I should have removed libx264-dev at this point. It is removed in the line below:
# aptitude install liblame-dev libfaad-dev libfaac-dev libxvidcore4-dev liba52-0.7.4 liba52-0.7.4-dev build-essential subversion

# cd /usr/src
# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg

And so I got the current version as of March 17:

Checked out external at revision 28979.
Checked out revision 18021.

And I tried configure:

# cd /usr/src/ffmpeg
# ./configure --enable-gpl --enable-pp --enable-libvorbis --enable-liba52 --enable-libdc1394 --enable-libgsm --enable-libmp3lame --enable-libfaad --enable-libfaac --enable-pthreads --enable-libx264 --enable-libxvid

After various errors, and removing options, I ended up with this error:

ERROR: libx264 version must be >= 0.65.

And trying to install that from the debian-multimedia.org repository didn't work either:

# aptitude install libx264-65
The following packages have unmet dependencies:
libx264-65: Depends: libc6 (>= 2.7-1) but 2.3.6.ds1-13etch8 is installed and it is kept back.
So this thread came to the rescue, and I embarked on getting x264 and compiling that from source too:
# aptitude install git git-core

Trying to use Git at this point gives an error, but suggests the solution:

# update-alternatives --config git
There are 2 alternatives which provide `git'.
Selection    Alternative
-----------------------------------------------
*+        1    /usr/bin/git.transition
        2    /usr/bin/git-scm

Press enter to keep the default[*], or type selection number: 2

Next steps:

# cd /usr/src/
# git clone git://git.videolan.org/x264.git
# cd x264
# ./configure --enable-shared

This gave an error about yasm, which was not the right version. I could have tried to compile that too, as shown on the Ubuntu forum, but impatiently decided to try the suggested disable option instead. So the x264 part which worked:

# ./configure --enable-shared --disable-asm
# make
# make install
# ldconfig

And finally, ffmpeg:

# cd /usr/src/ffmpeg/
# ./configure --enable-gpl --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libx264 --enable-libxvid
# make
# make install
I also had to remove the old ffmpeg version (aptitude purge ffmpeg) which I had installed some time before this, and finally did this:
# echo /usr/local/lib >> /etc/ld.so.conf.d/local.conf
# ldconfig

Since I had a leftover libx264 installed with aptitude which was too old, it caused a segmentation fault when I tried to encode with ffmpeg. After searching (aptitude search x264), I found I had to aptitude purge libx264-54 libx264-dev. Then, just to be sure, I re-did the ./configure, make clean, make, make install incantations for both x264 and ffmpeg.

In the end, ffmpeg is working. I suppose the --disable-asm option on x264 will make encoding slower, so it may be worth compiling yasm, and re-compiling x264 again.

Now that ffmpeg is working, the main problem is trying to understand its myriad of incomprehensible and cryptically documented options.

Labels: , , , , , ,

Sunday, January 11, 2009

try windows 7 beta

So we can try this Windows 7 beta now, which is said to be better than Vista, and which we'll have to get used to anyway. Here are a few tips which I may need when I find a machine to try it on, and which may help you too.

The official download seems to only work with IE 7, but if you get the direct link it does work normally. So with wget, that would be:

For the 32 bit version:
wget -c http://download.microsoft.com/download/6/3/3/633118BD-6C3D-45A4-B985-F0FDFFE1B021/EN/7000.0.081212-1400_client_en-us_Ultimate-GB1CULFRE_EN_DVD.ISO
And for the 64 bit version:
wget -c http://download.microsoft.com/download/6/3/3/633118BD-6C3D-45A4-B985-F0FDFFE1B021/EN/7000.0.081212-1400_client_en-us_Ultimate-GB1CULXFRE_EN_DVD.ISO

To have the beta working until August, you need a key. Apparently, there are only a few keys used:

7XRCQ-RPY28-YY9P8-R6HD8-84GH3
RFFTV-J6K7W-MHBQJ-XYMMJ-Q8DCH
482XP-6J9WR-4JXT3-VBPP6-FQF4M
D9RHV-JG8XC-C77H2-3YF6D-RYRJ9
JYDV8-H8VXG-74RPT-6BJPB-X42V4

4HJRK-X6Q28-HWRFY-WDYHJ-K8HDH
QXV7B-K78W2-QGPR6-9FWH9-KGMM7
6JKV2-QPB8H-RQ893-FW7TM-PBJ73
GG4MQ-MGK72-HVXFW-KHCRF-KW6KY
TQ32R-WFBDM-GFHD2-QGVMH-3P9GC


Labels: , , , , ,