Wednesday, 16 December 2009

TortoiseGit

I have been worrying about the poor GUI support for Git, because that is a major factor keeping many people from using it. This is especially true in companies, where adopting Git would mean some extra cost, at least initially.

Lo and behold, meet TortoiseGit.

Tuesday, 8 December 2009

Finding ALSA devices programmatically

Finding out the ALSA audio devices in a Linux box programmatically can be a bit tricky. I did it like this:
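A minimal sketch of this kind of enumeration, using alsa-lib's device name hint API (one way among several; build with gcc list_pcms.c -lasound, the file name being just an example):

/* list_pcms.c - print the PCM devices ALSA knows about */
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

int main(void)
{
    void **hints, **h;

    /* -1 = all cards, "pcm" = PCM devices ("ctl", "rawmidi", ... also work) */
    if (snd_device_name_hint(-1, "pcm", &hints) < 0)
        return 1;

    for (h = hints; *h != NULL; h++) {
        char *name = snd_device_name_get_hint(*h, "NAME");
        char *desc = snd_device_name_get_hint(*h, "DESC");
        printf("%s\n    %s\n", name ? name : "?", desc ? desc : "");
        free(name);
        free(desc);
    }

    snd_device_name_free_hint(hints);
    return 0;
}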



Of course if you don't need to do it yourself, you can use the aplay command from the shell.
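For instance (output naturally varies by machine):

aplay -l    # list hardware playback devices
aplay -L    # list all the PCM device names ALSA knows about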

Monday, 7 December 2009

Debugging in Linux with advanced IDEs

This took me a while to figure out. I have been trying out several different IDEs for Linux C development (KDevelop, Code::Blocks, NetBeans) and I only managed to get debugging to work the way I wanted (e.g. code stepping, local variables, etc.) in Code::Blocks.

Finally I figured out why: I was compiling the code incorrectly. I thought I was doing it right, but I was actually only giving the -ggdb flag to the linker. Code::Blocks worked because it creates a new project for you and knows how to set up a debug build.

KDevelop is just poor: it keeps crashing and the interface is rather unintuitive (how do you use an existing Makefile, and how do you compile an application without a GUI?). Code::Blocks is nice, if ugly-looking, but unfortunately it doesn't want to use the original Makefile. NetBeans is big and slow, but supports the Makefile directly. It's going to be a tough choice between Code::Blocks and NetBeans...

The only thing going for KDevelop is that it has support for git, but then again, you can use that from the command line....

But how to do the Makefile correctly:
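The gist is that -ggdb has to go into the compiler flags, not just the link step. A minimal sketch with placeholder file and program names (recipe lines must start with a tab):

CC      = gcc
CFLAGS  = -ggdb -O0 -Wall
OBJS    = main.o module.o

myapp: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

clean:
	rm -f myapp $(OBJS)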

After that all the debugger options work inside the IDE.

Monday, 2 November 2009

Cockburn's Ph.D thesis

I mentioned this in my previous post but I have to comment on it again as I got around to reading it. Alistair Cockburn's Ph.D. thesis is something that you should read if you are interested in process development. Read it, understand it, and afterwards you will probably need to weep a little. How many times have we seen an attempt to deploy a new process model with little thought on how it actually fits the project or company?

Friday, 23 October 2009

The Perfect Software Process - And It Really Works!

Sarah A. Sheard has written a really insightful essay on software processes, or silver bullets. Having read several blog posts on Agile methods, it looks like we are fast approaching Phase 10, Blaming the Method, especially regarding Scrum (well, okay, some of those posts are already quite old and Scrum is still alive 'n kicking). So, is it Kanban and Lean methods next? How many years will they survive? The sorry part is that while methods like XP, now deemed a failure by many, evolve, the people who tried them and failed (usually for the reasons Sheard describes) won't look back.

Sheard's article explains, in an indirect manner, what the perfect software process is. I would summarize it like this:
  1. Pick a process. Any will do, but the closer it is to your needs, the better.
  2. Use the process until you understand why things are done the way they are (shouldn't take too long for people familiar with processes).
  3. Start a continuous improvement process where you gradually improve the process to best meet your needs. This should be embedded in the software process so that it is done all the time and by the people who are really involved.
And that's it! Eventually, if you do this right, you will have a perfect process (or at least until things change - but you will be close to the ideal most of the time). Of course, if you make a poor choice in the first phase it will take a long time to get to the right one.

What Scrum seems to want you to do is to keep short iterations and keep this improvement cycle going around those iterations (Scrum reviews). So if it is done correctly, constant improvement is built into the process, which can't be said of every process. CMMI for instance includes this continuous improvement only at the higher levels.

Of course this cycle could lead to a Scrum team eventually not using Scrum if it doesn't fit their way of working. I don't know if Scrum ideology endorses this idea, but I'm confident in saying that if some organization manages to do this they are definitely working well.

In any case, any organization trying to adopt a new software process (or any other process, for that matter) that fails to implement phases 2 and 3 is almost certainly going to be disappointed with its choice. I'm constantly amazed at how many people fail to see this obvious thing. No two projects are the same, so why would a process developed by people you don't know, in environments you are probably unfamiliar with, be a perfect fit for your projects?

I find Agile methods appealing for one reason: they not only accept change, but endorse it. I have yet to see a software project that didn't have a backlog of change requests (bug reports are just a subset of these). Most processes I know handle change requests by having a change request process that includes a CCB (Change Control Board), requires extensive impact analysis, lots of communication, and so on, just to decide whether to go ahead with the change and, if so, how. That's an awful lot of extra work. And why? Because no matter what the process developers say, they do not endorse change. To them change is inevitable, but also a negative thing. So handling it is unnecessarily complicated: even though change is the norm, these processes treat it as an exception. "We have this neat process that is constantly being interrupted by these change requests."

Contrast this with agile methods, which take changes for granted. They are not exceptions but the rule around which the process is largely built.

Having said that, agile methods do not work for every project. You can't use Scrum as-is when developing a cellular base station, because you have thousands of pages of specification you must follow. You have to understand them and carefully plan your architecture in such a way that it doesn't stop you from creating a base station that works according to the specs. So you need a rigorous specification and design phase, which means that by default changes will be exceptions. You could still use some parts of agile methods, but trying to use, say, Scrum without any modifications would be just stupid.

Having said this, I realised one major problem with this approach: how do you handle it in a company with multiple projects? If every project modifies the process to fit it best, you will end up with a different process for each project. This increases the cost of training when people switch projects and also assumes that the people fine-tuning the processes are doing a good job.

Oh, one more thing: I found Alistair Cockburn's Ph.D. dissertation on the web. I have to find time to read it because it's exactly about these issues.

Tuesday, 20 October 2009

Mounting a Windows network drive to Linux

Easy enough after I figured out the first part: Ubuntu does not come with the necessary package by default. So this is how I did it:
  • sudo aptitude install smbfs
  • sudo mkdir /mnt/mntpoint
  • sudo mount -t smbfs //server/directory /mnt/mntpoint -o credentials=/home/my/.smbpasswd
The .smbpasswd file looks like

username=<AD username>
password=<AD password>

I did a chmod 600 on that file so no-one else can see my AD password.

To automount this on boot, I added the following line to /etc/fstab:
//server/directory /mnt/mntpoint smbfs credentials=/home/my/.smbpasswd,uid=1000,gid=1000 0 0


Where uid and gid are my uid and gid on the box.

Monday, 19 October 2009

Scrum resources

In my quest to get my B.Sc. thesis done I will list some Scrum and Continuous Integration resources in this post. I will add to this if and when I find better ones.

Friday, 16 October 2009

Parsing files with Bourne shell

My useful piece of Bourne shell for the day is a script that goes through the .changes files in the current directory and copies all the files listed in them to a different directory. It skips the _sources.changes files as well.
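A sketch of that kind of script; the destination directory is a placeholder, and it assumes the file names sit in the Files: section of each changes file, on lines starting with a 32-character md5 sum:

#!/bin/sh
# Copy every file listed in the Files: section of each changes file in the
# current directory to $DEST, skipping the *_sources.changes files.
DEST=/srv/incoming

for c in *.changes; do
    case "$c" in
        *_sources.changes) continue ;;   # skip the source-only changes files
    esac
    # the file name is the last field on the checksum lines
    # (field 6 for cut, because of the leading space)
    grep '^ [0-9a-f]\{32\} ' "$c" | cut -d' ' -f6 |
    while read f; do
        cp "$f" "$DEST/"
    done
done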



The obvious use is to copy the files so we can use "reprepro processincoming" to save packages to a repository.

There are probably smarter ways to do this, but this is pretty portable as it only requires grep and cut, and very minimal functionality from them as well. And even grep is not needed if you want to copy everything.

Wednesday, 14 October 2009

Changing user names in Gitorious

It seems there is no mechanism for changing a user name in Gitorious. I was using the wrong one on one server, so I decided to change it straight in the database. After that I kept getting "You don't have write access to this repository", but no matter how hard I looked, everything still seemed fine (git configuration, ssh key fingerprint, etc.). Looking at pre-receive-guard.log, I noticed that for some reason I was trying to access Gitorious with the wrong user name, which happened to be the old one. I even deleted and re-added the ssh key just in case.

A bit of digging revealed that the problem was simple: the old ssh key was still in ~git/.ssh/authorized_keys and being the first one it was used instead of the one connected to the new user name. Deleting this key from the file was all that was required.

Monday, 12 October 2009

Adding More Disk Space to a Virtual Machine

This was something I didn't know and as the main function of this blog is to save this sort of stuff for myself for later reference, here's how it can be done when using kvm virtualisation.

There are at least two ways:

  1. Start off by creating an extended partition out of any free space you have (with fdisk), OR create a file to act as the disk with something like "dd if=/dev/zero of=/path/to/file bs=1024 count=$((10*1024*1024))" (this would create a 10 gigabyte file; an equivalent qemu-img one-liner is shown after this list).
  2. Open virt-manager and go to the virtual machine's Details page. Click the "Add Hardware" button.
  3. Select "Storage" as the hardware type.
  4. If you created the extended partition, enter the device name in the block device text box, or if you created the file, select the file in the disk image box. In both cases, use "IDE disk" as the target.
  5. Start up the virtual machine and create the partition(s) using fdisk.
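If you go the file route, the dd command in step 1 has a one-step equivalent in qemu-img (the path is just an example):

qemu-img create -f raw /var/lib/libvirt/images/extra.img 10G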

Friday, 2 October 2009

Redmine installation issues

There are some more straightforward ways to install Redmine than what I have discussed earlier, as this wiki article describes. But my method also seems to work - I have now deployed Redmine in production.

Also, it seems installing plugins to Redmine is really easy if those plugins are in GitHub. Couldn't get any easier, all you do is run script/plugin install git://github.com/plugin_dir/plugin.git and hey presto!

Getting LDAP to work with Redmine was really easy as well. The Redmine guys have written short instructions that tell almost all you need. And that's all you need, LDAP support is written directly into Redmine meaning you don't even have to configure your webserver.

Linking Gitorious repositories to Redmine is a bit annoying thanks to the fact that Gitorious hashes the repository names. The solution I came up with is a crontab task that creates links in the form of project/repository.git (i.e. in plain text) that point to the Gitorious repositories. I can then refer to these links in Redmine, meaning I don't have to dig up the hashed directories from Gitorious by hand.
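A sketch of such a crontab task; the link directory is a placeholder, and the SQL assumes the repositories table has a hashed_path column and projects a slug column, which may differ between Gitorious versions:

#!/bin/sh
# Recreate plain-text symlinks of the form project/repository.git that point
# at Gitorious' hashed repository directories.
REPOBASE=/var/www/gitorious/repositories
LINKDIR=/var/www/repolinks

mysql -N -u git -p'[password]' gitorious_production -e \
  'SELECT p.slug, r.name, r.hashed_path FROM repositories r JOIN projects p ON p.id = r.project_id' |
while read slug name hashed; do
    mkdir -p "$LINKDIR/$slug"
    ln -sfn "$REPOBASE/$hashed.git" "$LINKDIR/$slug/$name.git"
done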

Monday, 14 September 2009

Handling files in creation order in bourne shell

Sometimes the simplest-sounding things get quite complicated. I initially thought listing files (and only files) in a directory in the order they were created would be easy. It turns out I couldn't find a way to sort by creation time, only by modification time (ls -ct). No probs, I did it with a quick ls-based loop.

Which is nice as long as there are no sub directories in the directory. Getting it right took a bit more time, this is how I did it:
'ls' just didn't seem to cut it. find did, but it did get a bit complicated for such a simple task. But I'm sure there's an easier way...
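For reference, a sketch in that spirit, listing only regular files, oldest first by modification time (-maxdepth and -printf are GNU find extensions):

find . -maxdepth 1 -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
while read f; do
    echo "processing $f"    # replace with whatever needs doing per file
done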

Friday, 4 September 2009

More on Gitorious installations

So, it turns out my advice in the earlier post wasn't quite right. Sure, the installation goes through and you can connect to the site, but trivial stuff like actually using the repositories doesn't work. The reason is that Gitorious doesn't really support sub-URIs; it expects to be at the site's root. Thomas Schamm has modified the source in his repository (check out the sub_uri branch), which I haven't had time to test yet as I have been trying to merge it into the mainline (there is some stuff there that I don't really need). It does force Gitorious to be in the /gitorious sub-URI, but I guess that's not too much to ask. The downside is that this version only works from that sub-URI; it doesn't work from the root any more. This will probably be reason enough for the Gitorious developers not to take the feature into their mainline. I think I'll have a look at how to fix this at some point.

Friday, 28 August 2009

WPA cracked

WPA can now be cracked in a minute. So, what are we supposed to use from now on? Surely developing a strong encryption scheme can't be that impossible; after all, even GSM is hard to crack and it was developed in the 80's!

Thursday, 27 August 2009

Redmine installation on Ubuntu Jaunty

So, in the previous entry I gave instructions on installing Gitorious on Ubuntu Jaunty. This post will tell how to install Redmine on the same box. It will also use Passenger and MySql so no extra software is installed.

In case you only want to install Redmine, I will also tell which steps from the Gitorious installation instructions must also be done.

Install packages
  1. aptitude install ruby-pkg-tools ruby1.8-dev libapache-dbi-perl libapache2-mod-perl2 libdigest-sha1-perl
Redmine only: also do steps 1,2, 4 and 8.

Fetch Redmine
  1. cd /var/www/
  2. wget http://rubyforge.org/frs/download.php/56909/redmine-0.8.4.tar.gz
  3. gunzip redmine-0.8.4.tar.gz && tar xvf redmine-0.8.4.tar
  4. mv redmine-0.8.4 redmine
Create user
  1. adduser --system --home /var/www/redmine --no-create-home --group --shell /bin/bash redmine
  2. chown -R redmine:redmine /var/www/redmine
Configure Apache
  1. Copy attached redmine.conf file to /etc/apache2/conf.d/redmine.conf.
Redmine only: you also need to do steps 1-4, 7 and 9. Only SSL is not needed by Redmine.

Create database and database user
  1. mysql -p
  2. create database redmine character set utf8;
  3. create user 'redmine'@'localhost' identified by '[password]';
  4. grant all privileges on redmine.* to 'redmine'@'localhost';
  5. quit
  6. Copy attached database.yml to /var/www/redmine/config/database.yml. Change the password to the one you selected in previous step.
Bootstrap Redmine
  1. cd /var/www/redmine
  2. rake db:migrate RAILS_ENV="production"
  3. rake redmine:load_default_data RAILS_ENV="production"
Setup email
  1. Copy attached file email.redmine.yml to /var/www/redmine/config/email.yml. Fix domain name.
Finally, restart Apache one more time. The default admin user name and password are admin/admin.

Attachments

redmine.conf:
Alias /redmine /var/www/redmine/public

<directory /var/www/redmine/public>
PassengerAppRoot /var/www/redmine

RailsBaseURI /redmine
</directory>
database.yml:
production:
  adapter: mysql
  database: redmine
  host: localhost
  username: redmine
  password: [password]
email.yml:
production:
  delivery_method: :smtp
  smtp_settings:
    address: 127.0.0.1
    port: 25
    domain: DOMAIN
    authentication: :login
    user_name: redmine
    password: redmine

Gitorious installation on Ubuntu Jaunty

So I managed to install Gitorious, and not only that, I also have Redmine working on the same machine. Both use Apache2, Passenger, MySql and git. No virtual hosts are used; these tools are accessed as "https://hostname/gitorious" and "http://hostname/redmine". I'll start with instructions on how to install Gitorious, and the next post will handle the Redmine installation on the same machine. I tried to make these instructions as simple as possible, but if something goes really badly wrong, it's on you. I also did this on a fresh virtual machine, so I had to install some packages that would already be present on a "normal" installation. So, here we go. Everything is done as root (sudo -i).

Install needed packages and gems
  1. aptitude update
  2. aptitude install apache2 ruby rubygems git-core git-doc libmysql-ruby mysql-server mysql-client libmysqlclient15-dev phpmyadmin libdbd-mysql-ruby build-essential cron rake wget
  3. aptitude install zlib1g-dev tcl-dev libexpat-dev libcurl4-openssl-dev postfix apg geoip-bin libgeoip1 libgeoip-dev sqlite3 libsqlite3-dev imagemagick libpcre3 libpcre3-dev zlib1g zlib1g-dev libyaml-dev apache2-dev libonig-dev ruby-dev libopenssl-ruby libmagick++-dev zip unzip memcached git-svn git-cvs irb
  4. gem install -b --no-ri --no-rdoc passenger rake
  5. gem install -b --no-ri --no-rdoc rmagick chronic geoip daemons hoe echoe ruby-yadis ruby-openid mime-types diff-lcs json rack ruby-hmac stompserver
  6. gem install -b --no-ri --no-rdoc -v 1.3.1.1 rdiscount
  7. gem install -b --no-ri --no-rdoc -v 1.1 stomp
  8. ln -s /var/lib/gems/1.8/bin/rake /usr/bin
  9. ln -s /var/lib/gems/1.8/bin/stompserver /usr/bin
Install Sphinx
  1. cd /tmp
  2. wget http://sphinxsearch.com/downloads/sphinx-0.9.8.1.tar.gz
  3. gunzip sphinx-0.9.8.1.tar.gz && tar xvf sphinx-0.9.8.1.tar
  4. cd sphinx-0.9.8.1
  5. ./configure --prefix=/usr && make all install
Fetch Gitorious
  1. git clone http://git.gitorious.org/gitorious/mainline.git /var/www/gitorious
  2. ln -s /var/www/gitorious/script/gitorious /usr/bin
Create user

This step will create the "git" user for the system and also the ssh keyring which is used to identify other users.
  1. adduser --system --home /var/www/gitorious/ --no-create-home --group --shell /bin/bash git
  2. chown -R git:git /var/www/gitorious
  3. su - git
  4. mkdir .ssh
  5. touch .ssh/authorized_keys
  6. chmod 700 .ssh
  7. chmod 600 .ssh/authorized_keys
  8. mkdir tmp/pids
  9. mkdir repositories
  10. mkdir tarballs
  11. exit
Configure Gitorious
  1. Copy attached file gitorious.yml to /var/www/gitorious/config. Change gitorious_client_host and gitorious_host to the system hostname or IP address.
  2. cp /var/www/gitorious/config/broker.yml.example /var/www/gitorious/config/broker.yml
Configure services
  1. cp /var/www/gitorious/doc/templates/ubuntu/git-ultrasphinx /etc/init.d/
  2. sed -e "s/opt\/ruby-enterprise\/bin\/ruby/usr\/bin\/ruby/" /var/www/gitorious/doc/templates/ubuntu/git-daemon > /etc/init.d/git-daemon
  3. chmod 755 /etc/init.d/git-daemon
  4. Copy attached stomp file to /etc/init.d.
  5. Copy attached git-poller file to /etc/init.d
  6. cp /var/www/gitorious/doc/templates/ubuntu/gitorious-logrotate /etc/logrotate.d/gitorious
  7. chmod 755 /etc/init.d/git-ultrasphinx /etc/init.d/git-daemon /etc/init.d/stomp /etc/init.d/git-poller
  8. update-rc.d stomp defaults
  9. update-rc.d git-daemon defaults
  10. update-rc.d git-ultrasphinx defaults
  11. update-rc.d git-poller defaults
Configure Apache
  1. /var/lib/gems/1.8/bin/passenger-install-apache2-module
  2. Copy attached file passenger.load to /etc/apache2/mods-available/passenger.load.
  3. a2enmod passenger
  4. a2enmod rewrite
  5. a2enmod ssl
  6. a2ensite default-ssl
  7. ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
  8. Copy attached gitorious.conf file to /etc/apache2/conf.d/gitorious.conf.
  9. /etc/init.d/apache2 restart
Step 7 fixes phpmyadmin so it works again.

Create database and database user
  1. mysql -p
  2. create user 'git'@'localhost' identified by '[password]';
  3. grant all privileges on *.* to 'git'@'localhost' identified by '[password]';
  4. grant all privileges on `gitorious_production`.* to 'git'@'localhost' with grant option;
  5. Copy attached file database.yml to /var/www/gitorious/config/database.yml. Change the password to the one you selected in previous step.
Bootstrap Gitorious
  1. cd /var/www/gitorious
  2. export RAILS_ENV="production"
  3. rake db:create
  4. rake db:migrate
  5. rake ultrasphinx:bootstrap
  6. Add this to crontab: * * * * * cd /var/www/gitorious && /usr/bin/rake ultrasphinx:index RAILS_ENV="production"
And finally
  1. /etc/init.d/apache2 restart
  2. ruby /var/www/create_admin
The last step creates the Gitorious super user. Note that the admin email must be a fully qualified email address, e.g. "admin@foo.bar.net".

Attachments

gitorious.yml:
production:
  # The session secret key (`apg -m 64` is always useful for this kinda stuff)
  cookie_secret: ssssht

  # The path where git repositories are stored. The actual (bare) repositories $
  # in repository_base_path/#{project.slug}/#{repository.name}.git/:
  repository_base_path: "/var/www/gitorious/repositories"

  # Stuff that's in the html head. custom stats javascript code etc
  extra_html_head_data:

  # System message that will appear on all pages if present
  system_message:

  # Port the ./script/gitorious script should use:
  gitorious_client_port: 80

  # Host the ./script/gitorious script should use:
  gitorious_client_host: HOSTNAME

  # Host which is serving the gitorious app, eg "gitorious.org"
  gitorious_host: HOSTNAME

  # User which is running git daemon
  gitorious_user: git

  # Email spam on server errors to:
  exception_notification_emails:

  # Mangle visible e-mail addresses (spam protection)
  mangle_email_addresses: true

  # Enable or Disable Public Mode (true) or Private Mode (false)
  public_mode: true

  # Define your locale
  locale: en

  # Where should we store generated tarballs?
  # (should be readable by webserver, since we tell it to send the file using X$
  archive_cache_dir: "/var/www/gitorious/tarballs"
  # Which directory should we work in when we generate tarballs, before moving
  # them to the above dir?
  archive_work_dir: "/tmp/tarballs-work"

  # is it only site admins who can create new projects?
  only_site_admins_can_create_projects: false

  # Should we hide HTTP clone urls?
  hide_http_clone_urls: true

  # Is this gitorious.org? Read: should we have a very flashy homepage?
  is_gitorious_dot_org: false

stomp:
#!/bin/sh
# Start/stop the stompserver
#
### BEGIN INIT INFO
# Provides: stomp
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 1
# Short-Description: Stomp
# Description: Stomp
### END INIT INFO

test -f /usr/bin/stompserver || exit 0

. /lib/lsb/init-functions

case "$1" in
start) log_daemon_msg "Starting stompserver" "stompserver"

start-stop-daemon --start --name stompserver --startas /usr/bin/stompserver --background --user git
log_end_msg $?
;;

stop) log_daemon_msg "Stopping stompserver" "stompserver"

start-stop-daemon --stop --name stompserver
log_end_msg $?
;;

restart) log_daemon_msg "Restarting stompserver" "stompserver"

start-stop-daemon --stop --retry 5 --name stompserver
start-stop-daemon --start --name stompserver --startas /usr/bin/stompserver --background --user git
log_end_msg $?
;;

status)

status_of_proc /usr/bin/stompserver stompserver && exit 0 || exit $?
;;

*) log_action_msg "Usage: /etc/init.d/stomp {start|stop|restart|status}"

exit 2
;;

esac
exit 0
git-poller:
#!/bin/sh
# Start/stop the git poller
#
### BEGIN INIT INFO
# Provides: git-poller
# Required-Start: stomp
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 1
# Short-Description: Gitorious poller
# Description: Gitorious poller
### END INIT INFO

/bin/su -- git -c "cd /var/www/gitorious;RAILS_ENV=production script/poller $@"
passenger.load:
LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-2.2.4/ext/apache2/mod_passenger.so
PassengerRoot /var/lib/gems/1.8/gems/passenger-2.2.4
PassengerRuby /usr/bin/ruby1.8
gitorious.conf:
Alias /gitorious /var/www/gitorious/public

<directory /var/www/gitorious/public>
PassengerAppRoot /var/www/gitorious

RailsBaseURI /gitorious
</directory>

database.yml:
production:
  adapter: mysql
  database: gitorious_production
  username: git
  password: [PASSWORD]
  host: localhost
  encoding: utf8

Tuesday, 25 August 2009

Gitorious installation revisited

I wrote earlier about how difficult installing Gitorious is. Thankfully the developers have since written a very good guide about it. I have successfully installed it using those instructions!

Next task is to get Redmine running on the same box. There are some interesting challenges there.

Friday, 21 August 2009

HTTP proxies and sbuild

After studying how mk-sbuild-lv and sbuild itself work it looks clear that mk-sbuild-lv doesn't really support http proxies. That is, if you are behind one, tough luck.

Fortunately this is easy to fix, I already submitted a bug report and a patch to launchpad. It is as simple a patch as can be, it just adds one line to the mk-sbuild-lv script which exports the http_proxy setting to the chroot environment.

Thursday, 30 July 2009

Multi-boot is dead easy. Yeah right.

My comp's motherboard flipped (some capacitors blew, making it hard to boot the machine), so I got this crazy idea of buying a new hard drive, keeping one of the existing ones, removing the oldest disks (one 120GB, one 160GB) and creating a multi-boot machine with XP and Ubuntu.

The theory is that you install XP, then install Ubuntu, and everything just works. Here's what actually happened to me.

I installed XP, all the service packs and most of the software I needed (iTunes, Firefox, Thunderbird, Spotify, DVD Profiler, etc.). Then I installed Ubuntu so that the XP disk was the bootable disk, so Grub was installed there. The end result was that neither XP nor Ubuntu booted. After some mucking about I got Ubuntu to start, but XP was completely dead. It was in fact so dead that even a reinstall didn't work until I booted the rescue disk and did "fixmbr".

I'm not going to go through everything I had to do to get things to work; needless to say, I ended up installing XP about four times from scratch and Ubuntu twice. Once because gparted somehow messed up the XP disk after I tried to resize the C drive to 1TB. I had done it the first time using the Ubuntu Live CD, so I thought it would work again. It didn't. Part of the trouble was that my XP installation disc is the original release, so it doesn't support disk sizes greater than 130GB (SP2 added that support).

The way I managed this in the end was this:
  • Remove the 1TB disk from the machine.
  • Install Ubuntu on the 500GB disk, Grub and all.
  • Attach the 1TB disk to the machine and remove the 500GB disk.
  • Install XP and update it. Create the second partition on the leftover disk space.
  • Attach the 500GB disk and make sure the system boots using this disk.
  • Edit grub's menu.lst (in /boot/grub), add an entry for XP and map the drives so that XP imagines it's on the first HD. This is done by adding the following section:
title Windows 95/98/NT/2000
root (hd0,0)
map (hd0) (hd1)
map (hd1) (hd0)
makeactive
chainloader +1
and then run "sudo update-grub".

To me it looked like trying to install Grub on XP's disk while Ubuntu was on a different drive was the problem. I ended up messing up XP's MBR several times attempting this. The last part of my list is required because XP likes to think it's the top dog in the system (i.e. on the first drive).

So now I have a dual-boot, although XP has two partitions instead of one. But I decided this was the last time I'm installing XP on my home machine. If I still require a Microsoft product the next time I'm up for a system upgrade it will probably be Windows 7.

Thursday, 2 July 2009

How Not To Use Goto In C Init Functions

There is the idea that Goto is useful in some complex init functions to make the code simpler. Such as:

if (OK != doSomething()) goto cleanup;
if (OK != doSomethingElse()) goto cleanup;

There could be dozens of functions that need to be called, except if something goes wrong, in which case the rest of the init functions should not be called.

This is thought to be better because the normal style of nested ifs makes the code much more complex. But there is a better way of doing initialization code like this that doesn't make the code any longer:

error = error ? error : doSomething();       /* keeps the first failure */
error = error ? error : doSomethingElse();   /* not called once error is set */

Or, if you want to know which function failed you can either have different return codes for each function or do this:

error = error ? error : (doSomething() << SHIFT_DOSOMETHING);
error = error ? error : (doSomethingElse() << SHIFT_DOSOMETHINGELSE);

I would prefer each function returning its own error code, because the latter style is not quite as easy to read and relies on functions returning 1 on error (the first version works with any return values).

This is a pretty easy way of solving the same problem: it avoids the nesting, keeps the code short and doesn't need any extra constructs.
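For completeness, a small self-contained example of the pattern; the init functions here are made up, each returning 0 on success and a nonzero code on failure:

#include <stdio.h>

static int initLog(void)   { puts("log ok");     return 0; }
static int initNet(void)   { puts("net ok");     return 0; }
static int initAudio(void) { puts("audio fail"); return 7; }
static int initUi(void)    { puts("ui ok");      return 0; }

int main(void)
{
    int error = 0;

    /* once error is nonzero, the ternary keeps it and the remaining
       init functions are never called */
    error = error ? error : initLog();
    error = error ? error : initNet();
    error = error ? error : initAudio();
    error = error ? error : initUi();   /* skipped in this run */

    if (error)
        fprintf(stderr, "init failed with code %d\n", error);
    return error;
}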

Tuesday, 30 June 2009

A History of Violence

I had high expectations for this movie, as I had read some rave reviews of it. The movie started slowly and not too convincingly, but I thought it would pick up. Unfortunately it didn't. There's just so much wrong in this movie. I wouldn't mind the simple story of a man living a happy family life being caught up by his past if the script and acting were good, but in this case neither is. There are so many unnecessary scenes in this movie that you have to wonder what would be left if they were removed, as the movie already lasts only just over an hour and a half.

The opening scene in itself is completely unneeded. The two guys are bad, OK, we get that. But it is obvious in the scene where they enter the diner, and after that it doesn't matter any more. These two characters are really only the vehicle to get the movie going, yet they get too much screen time even though they are not particularly interesting. Quite the opposite, in fact.

The baseball scene at the beginning is another scene I don't understand. Its whole point is to show that one of Jack's schoolmates is a bully. But that becomes apparent in the very next scene. There is nothing particularly interesting in the baseball scene despite it being built up as if there were (Tom and Jack talking about it earlier). There are quite a few scenes that don't really serve any purpose; they are not interesting, nor do they have witty dialogue or good acting. They are just a waste of time.

Jack also gets a lot of screen time, and it's apparent that Cronenberg just wants to show how violent tendencies are inherited. The trouble is, again, that these scenes feel disconnected and the message is far too obvious. The whole Jack subplot is there to add some depth, but it just doesn't work.

Acting and dialogue are another issue. Both are pretty poor. The only actor who comes through with a somewhat intact reputation is Ed Harris, and he only has to play a simple bad-guy role.

The ending of the movie is another disappointment. You can see it coming, and it is only made worse by the terrible acting of William Hurt.

All in all, the core plot of the movie is OK, but it is hidden behind numerous scenes that don't serve it, and the rest is spoiled by mediocre acting and uninteresting dialogue. And I didn't even mention the sex scenes, which were also unneeded.

** / *****

Wednesday, 24 June 2009

What's wrong with these people?

I'm all for digital rights and all that. Everyone should have the right to say how their creations can be used. If they want money for their track, software or whatever, fine. We then either pay or don't use their product.

But sometimes (quite often these days, it seems) the digital rights people make it very hard to support these rights, even though I know that in the end piracy hurts the artists the most. I've seen some ridiculous claims (like having to pay $80,000 per downloaded track), but this case really takes the biscuit.

Honestly, what sort of imbecile thought of that? If I knew piracy only damaged these idiots I would be all for it. Every time you think these people can't come up with a more stupid idea, they do. The only reason I can think of is that they have a bunch of lawyers working for them on results-based salaries: no lawsuits, no salary. The other alternative is that they have some sort of competition on who can come up with the most ridiculous idea to take to court.

Maybe it is time to look at this again. Demolish ASCAP and other organizations like it, because it's obvious they have grown way beyond their purpose. Their job should be to support the artists, not come up with ideas that turn more people against legitimate products and towards piracy. It's obvious ASCAP has far too many employees when they have time to spend on something like this.

Thursday, 18 June 2009

The ridiculous task of installing Gitorious

I can accept that I don't know anything about Linux, but the process of installing Gitorious is getting downright ridiculous. The help file for Ubuntu is badly outdated and documentation is non-existent (or I just couldn't find any). There are so many dependencies and tools required that you seem to have to fix everything by hand. I tried following the instructions in gitorious/doc/recipes/install-ubuntu.txt, except for installing Ruby Enterprise. Here's a partial list of stuff I have so far done on top of that:

1. I have installed the following apt packages by hand (aptitude install); some were not mentioned, some didn't install correctly on the first try: apache2-dev, mysql, imagemagick, libopenssl-ruby, rubygems1.8, ruby1.8-dev, libmagickwand-dev, libaspell-dev, aspell-en.

2. mysql didn't start, so I had to do "set LD_LIBRARY_PATH=/usr/lib" and uncomment "skip-innodb" in /etc/mysql/my.cnf.

3. I also had to update rake because the version in Jaunty was buggy. Fortunately Karmic has a working version, so:
  • cd /etc/apt
  • cp sources.list sources.list.jaunty
  • sed -i 's/jaunty/karmic/' sources.list
  • aptitude update
  • aptitude install rake
  • mv sources.list sources.list.karmic
  • ln -s sources.list.jaunty sources.list
  • aptitude update
4. I had to install a number of gems (gem install) that were not listed in the recipe: mysql, hoe, json, stomp, rdiscount, diff-lcs, oauth, stompserver, raspell.

5. I fixed gitorious.yml by copying contents of test section to production section and made sure the values are correct.

6. /etc/init.d/git-daemon needed to be fixed because GIT_DAEMON pointed to the wrong directory.

7. I had to chmod 777 /var/www/gitorious/log/message_processing.log.

8. In /var/www/gitorious/config: cp broker.yml.example broker.yml .

9. Aspell was missing the ap.multi file, so: cp /var/www/gitorious/vendor/plugins/ultrasphinx/examples/ap.multi /usr/lib/aspell

10. Still problems with Ultrasphinx, so: echo "lapse" | aspell --lang=en create master ap.rws

11. I installed Stompserver at this point and configured and started it (hopefully), since it is apparently needed. I don't know how else to run it, so I did this: /usr/bin/ruby /var/www/gitorious/log/stompserver -p 61613 -b 0.0.0.0 & . The same source also mentions the need for memcached, but I'm not so sure I need that, because at least it seems like the requests are forwarded from Apache to Gitorious correctly.

12. I had to remove these lines from /var/www/gitorious/app/views/layouts/second_generation/application.html.erb:

<%= text_field_tag :q, params[:q], :id => "main_menu_search_form_query", :class => "unfocused", %>



because this was causing an exception.

These were of course mostly on top of the stuff said in the recipe. Some of this stuff was probably not needed, but then again it's impossible to know this.

Now I'm stuck trying to configure Ultrasphinx some more, as I suspect this might be the problem. But if I try to run any rake commands with Ultrasphinx (like rake ultrasphinx:index) I just get this trace:

rake aborted!
no such file to load -- echoe
/var/www/gitorious/vendor/plugins/ultrasphinx/Rakefile:2:in `require'
/var/www/gitorious/vendor/plugins/ultrasphinx/Rakefile:2
/usr/lib/ruby/1.8/rake.rb:2359:in `load'
/usr/lib/ruby/1.8/rake.rb:2359:in `raw_load_rakefile'
/usr/lib/ruby/1.8/rake.rb:1993:in `load_rakefile'
/usr/lib/ruby/1.8/rake.rb:2044:in `standard_exception_handling'
/usr/lib/ruby/1.8/rake.rb:1992:in `load_rakefile'
/usr/lib/ruby/1.8/rake.rb:1976:in `run'
/usr/lib/ruby/1.8/rake.rb:2044:in `standard_exception_handling'
/usr/lib/ruby/1.8/rake.rb:1974:in `run'
/usr/bin/rake:28


Problem is, echoe is installed (gem list --local).

In any case, currently there are no errors in production.log (under gitorious) and this in apache's log:

[ pid=22272 file=ext/apache2/Hooks.cpp:470 time=2009-06-18 14:21:37.964 ]:
Forwarding / to PID 24930
[ pid=22264 file=ext/common/StandardApplicationPool.h:405 time=2009-06-18 14:31:01.785 ]:
Cleaning idle app /var/www/gitorious (PID 24930)
[ pid=22264 file=ext/common/Application.h:448 time=2009-06-18 14:31:01.787 ]:
Application 0x80a3498: destroyed.


The last part seems ok, the app is just killed if there is no action for some time. It is automatically restarted when I refresh the front page (this and a few of the pages work, mostly either nothing happens or I get the "Failed to Connect" page).

The most problematic thing is that I don't know how this is supposed to work, so I can't really use any systematic approach to solving these problems. So I honestly don't know if I will be able to install Gitorious at all in the end. There is still a lot of stuff I'm not too sure will work, like the actual connection to git. In conclusion, it looks like you don't want to install any software that uses Ruby on Rails unless you already know RoR. There are bound to be problems with missing gems, errors when migrating them and so on, and everything is dependent on all of the software components being available and being the correct version: rake, the correct version of Ruby (1.8 and 1.9 don't seem to be completely compatible), etc.

Redmine at least had a Bitnami stack, which made the installation seem almost ridiculously easy in comparison; I'm guessing that if I have to install it from scratch it'll be a similar experience.

Friday, 12 June 2009

More on digital photography

And just as soon as I wrote about digital cameras, I came across a magazine review of HD-quality digital video cams. They are pretty much what I suggested would be the ideal camera for most people: 3 megapixels, enough zoom, good lenses (apertures could start at something like f1.8 and even at full tele be as low as f3.0, although I don't know how these work in low light), compact, and decent still picture quality.

The only thing is they need to be a bit less expensive; in Finland these cost from 700 euros and up. Once a decent one drops below 500 euros (there are some available in the US in that price range, but they seem to have a few compromises too many), anyone who isn't dead serious about filming should consider one of these babies in the near future. You would get an HD-quality video cam that you can also use to take your holiday shots with!

Tuesday, 9 June 2009

More is not always better

The camera business is an excellent example of how customer-driven business (the usual case, I know) is sometimes harmful to the customers themselves. I could think of many other examples as well, but this is as good as any.

The problem comes from the fact that people think more is always better. I'm of course referring to pixel count. Two cameras, same price tag, one has 4 megapixels, the other 12: guess which one sells? Why not stop and think what might be better about the 4-megapixel camera instead of assuming it's just worse?

It would be beneficial to at least 99% of camera buyers if the manufacturers stopped increasing the pixel count and concentrated on other issues. Making a camera with 4 megapixels would basically mean better image quality in most situations. The camera could have higher usable ISO settings, so shooting in low light would be possible, and since a smaller pixel count would allow cheaper parts (not just the CMOS chip but also the CPU etc.), the manufacturer might be able to squeeze in a better quality lens as well for the same price. This would mean a better aperture, a clearer image and a larger dynamic range. And the camera would work faster, since there would be a lot fewer pixels to handle.

But of course manufacturers have to keep increasing pixel counts because consumers expect it. And partly this is because in most magazines the pixel count is the first thing mentioned. I have lately read several comparisons where the heading was of the form "10 megapixel cameras reviewed". So the focus is on pixel count. The really funny thing is these cameras are usually in the so-called prosumer class, i.e. cameras meant for more than casual photographers - people you would expect to know things like this. But they fall for the same thing.

I have had a few prosumer cameras, and by far the best lens was in my first one, the Canon G3. It was really slow, like digital cameras then used to be, but it had a great f2.0 lens. It was so good I could take pictures inside French cathedrals without a tripod! Yep, all of those pictures were taken without one (I didn't want to lug one with me). Unfortunately the LCD screen stopped working, and since it didn't have an EVF it became practically useless (I couldn't access the menus etc. any more), so I had to buy a new one. This time a Canon S2 IS, which I have to say is a lot faster and has more megapixels, but struggles in low light. I also have a cheap SLR (Nikon D40), but prosumers are a very good niche. They offer almost the same features as an SLR in normal conditions, but in a much more compact form. SLRs get really useful only in special conditions and for special needs, and when you need one you are carrying a big backpack just for the camera, in contrast to the small bag hanging from your belt (which is how I carry my S2).

Unfortunately there doesn't seem to be a market for 3-5 megapixel cameras with good lenses, because everyone wants more megapixels instead. Maybe the trend will stop some day when everyone realises they have no use for all these pixels. How many people actually make prints so large that they need 12 megapixels? The details are still fuzzy if you zoom in, mostly because of the lens. But of course it would be a lot harder to sell better picture quality and more shots without camera shake.

Another thing is the long tele lenses. How many pictures does anyone, apart from some special cases (like wildlife photographers), take that require a 300+ mm tele lens? But to get these 20x zooms, manufacturers have to sacrifice image quality, aperture and dynamic range, and then try to fix some of these problems with software (hence stuff like HDR entering the market).

A camera with a good lens (f2.0-f5.6, for instance), a lowish pixel count and a decent zoom range (say, 28-200) with quick operation and manual, programmable and auto settings, in a decently sized package, would be the perfect solution for at least 99% of camera buyers. They would probably never need anything else.

Friday, 5 June 2009

Installing plugins to Redmine

Installing plugins to Redmine is both simple and not so simple. There are two problems:
Not all plugins seem to install (the install process gets stuck on these, at least occasionally), and then there's the issue of restarting the system...

The steps to install a Redmine plugin are:
  1. Download the plugin, some useful ones can be found from Redmine's Wiki.
  2. Unzip the archive to redmine/vendor/plugins (some require that the directory you create is named in a certain way).
  3. (Change to bitnami stack with ./use_redmine if you are using that)
  4. Go to your redmine installation directory.
  5. rake db:migrate_plugins RAILS_ENV=production to migrate the plugins (not every plugin requires this, but most seem to).
  6. ps axf | grep mongrel (or replace mongrel with what ever backend you are using)
  7. Look for the part that says something like tmp/pids/mongrel.3002.pid (I have two of those).
  8. For each process, do mongrel_rails restart -P <redmine dir>/tmp/pids/mongrel.3002.pid obviously replacing the file name with the correct one from the ps output.
I noticed that if I didn't start both processes again, I got occasional "Page not found" errors in Redmine, even though clicking the link again opened the page.

Thursday, 4 June 2009

Migrating to Redmine from Trac

I just managed to move a Trac database to Redmine on Ubuntu, but it had its quirks. Installing Redmine was dead simple with Bitnami's stack (just download the stack bin, chmod 755 it and run it; couldn't be any easier). But getting the Trac database into Redmine was a bit more complicated, even more so because the db was sqlite3 and Bitnami's stack doesn't come with it. So, this is how I managed it:
  1. I copied the Trac db from the other machine to this (there apparently is some way of doing this remotely, but this was much easier).
  2. cd redmine-0.8.4-0
  3. ./use_redmine
  4. cd ruby/bin
  5. gem install sqlite3-ruby
  6. cd ../../apps/redmine
  7. rake redmine:migrate_from_trac RAILS_ENV="production"
Then I used the following values for the migration process:
Trac directory []: /var/trac/myproject
Trac database adapter (sqlite, sqlite3, mysql, postgresql) [sqlite]: sqlite3
Database encoding [UTF-8]:
Target project identifier []: myproject
And tadaa! When I opened Redmine in my browser all the tickets etc. were visible. Job done. Pretty much everything else was described on this page, but since I used Bitnami's stack, I had to do the "./use_redmine" thing to get the sqlite3 gem installed inside the stack.

EDIT: Damn! Wiki pages didn't transfer correctly, only one page and that one is wrongly formatted.

EDIT2: Damn damn. The Trac database used LDAP as the authentication method and Redmine doesn't support it. This is probably the reason why I can't log into the db, just view it.

EDIT3: Ah, things are looking up. The Wiki thing is no biggie, in this database there was just one page, so it's just a matter of fixing the formatting (mainly links). The password thing was much more complicated, this is what was required:
  1. cd redmine-0.8.4-0
  2. ./use_redmine
  3. cd apps/redmine
  4. RAILS_ENV=production script/runner 'user = User.find(:first, :conditions => {:admin => true}) ; user.password, user.password_confirmation = "my_password"; user.save!'
Now the password is set to "my_password", the default admin name is "redmine", and I can finally log in! If the admin name is something else, you can find it out with this command:
  • RAILS_ENV=production script/runner 'puts User.find(:first, :conditions => {:admin => true}).login'

Monday, 1 June 2009

Songsmith

Sometimes Microsoft manages to make something useful. Usually unintentionally, but still. Microsoft Songsmith seems to be one such thing. It works by adding instruments when you sing to it. So, people have entered the vocals for some hits and the results are... completely wacky. Check these out:

The Clash: Should I Stay Or Should I Go
Billy Idol: White Wedding
Metallica: Enter Sandman
Motörhead: Ace of Spades

Queen: We Will Rock You

And many others you can find on YouTube.

Friday, 29 May 2009

Why Not To Git

The header of my previous post was a lame joke that nobody else got, I admit...

Anyways, there are also reasons why git is not the best choice, and these come down to one reason (or two, depending on how you look at it): complexity. Distributed version control systems are a different paradigm to start with, and git doesn't make any attempt to make them easier to understand.

There's also the problem of security in the more common sense, rather than the kind where git excels. I'll discuss this point in this post and maybe delve into usability and ease of learning later.

These days companies are outsourcing development to off-shore companies, and in these cases it's usually desirable that not everyone gets access to everything. There are no built-in mechanisms to control access in git; by default everyone can access everything in a repository. The idea, as far as I can tell, is to create small repositories so that access can be granted to entire repositories. This is quite awkward in practice. And since git doesn't deal with single files but with commits, which can include several files at a time, fixing this is not simply a case of forbidding access to some files for some people. Imagine we have two files and two developers. One of them has access to both, the other only to one file. Developer one changes both files and commits. What happens when developer two tries to pull? We would have an incomplete repository (very much against the whole idea of distributed VCSs) and furthermore the SHA calculation would fail (?).

I can only think of one way this could be solved in git's model: encrypting the files the user doesn't have access to. But this would also require that the entire access set is delivered with the repository. Possibly this would also require including public keys in the repository, and it would definitely need access to the server where the company's key is. Encrypting and decrypting these files would make repository cloning a lot slower, especially if there are lots of files with access sets (which is often the case).

So it might well be that this is something git will never be good at, forcing companies to either use other VCSs or build complicated workarounds. But restricting access is a very common practice. For instance, in a large system you might not want to give every design document to everyone, only what they need to do their work. But if every part is in a different repository, building the system would be difficult, and people who need access to everything (like architects) would find this awkward. The practical solution would be to build a middle layer that automatically updates some repository which holds all this information and where only select people have access, but this would require a lot more work to build and maintain.

I believe this is not something Linus and the guys had in mind since withholding information goes against FOSS, but in reality not every company can afford to give out everything. Otherwise some Chinese company will push them out of business with their version.

Tuesday, 26 May 2009

Why git

There are compelling reasons why git should be the choice of the next generation (of developers, and why not us old farts as well). Rather than listing them here, here are some links:
And so on. There are clearly some problems with git:
  1. GUI support is not great. There are some tools like git extensions and qgit but apparently they are still work in progress.
  2. Git does not work as well in Windows as in Linux, a lot of this has to do with NTFS/ext filesystems and the fact git was designed for Linux. This means you need to install cygwin to even run git on Windows.
  3. Plugin support is still lacking.
  4. Documentation, while quickly improving, is still behind some of the competition.
I'm sure these will improve rapidly because git is gaining popularity fast. Hopefully the development community realises that git really needs to work well on other platforms as well. There are many cross-platform projects and very few companies run on Linux alone, so this could be the sticking point for many to move to git from whatever VCS they are currently using.

In any case, git is still a huge improvement over VCSs like CVS and SVN, and free compared to extremely expensive ones like ClearCase and Dimensions.

I have used ClearCase, PVCS/Dimensions, Continuus, StarTeam, SourceSafe and some others extensively and I have come to the conclusion these do not offer enough bang for the buck. Some pros and cons I have noticed:

ClearCase:
  • Pros: Merge works well.
  • Cons: Complicated, requires people just to support the repository, expensive, slow.
PVCS/Dimensions:
  • Pros: Reasonably simple for developers.
  • Cons: Expensive, very slow, poor user interface, some things do not conform to any normal conventions.
Continuus:
  • Pros: Change tracking works well (if you use it correctly).
  • Cons: Very complicated to use (requires extensive training), expensive.
StarTeam:
  • Pros: Easy to use, quick, intuitive.
  • Cons: Even though the basic version is a lot cheaper than the alternatives, the full version costs about the same.
SourceSafe:
  • Complete piece of crap. Stay clear at all costs.
This list was just some things I could think of straight away. There are lots of other issues, for instance cross-platform usage that I didn't consider. Someone should look at StarTeam's UI and write a similar one for git and it would be a winner.

The whole point of a VCS should be that it's quick and doesn't hinder development. The developers should be able to do what they need quickly, easily, hassle-free and without making mistakes (like committing to wrong branches). I would say the first two points are the most important ones. If the version control is quick and easy to use, developers use it more (they merge more often, they check in their stuff more often, etc.), which is basically what you want them to do. A good VCS is more than just a safe place to store your code. It's a tool used to collaborate with others, and the easier this is, the more your developers collaborate (share code). Collaboration is the key for git and something where it stands head and shoulders above all these expensive commercial products (and CVS, SVN, etc.). Distributed version control has such possibilities for improving productivity that very few techniques at the moment can boast anything similar.

Friday, 22 May 2009

Mach and mock

Mach and mock are used to build rpm-based packages in chroots. Nice if, for instance, you need to do that in Ubuntu.

Today's lesson is: use mock if you can. Mach doesn't really seem to work and is a hell to configure; mock is a rewrite and I actually managed to get it to work after I updated sqlite. This is how you rebuild an rpm for the environment you desire:
  1. Install mock if you don't have it already.
  2. Add root to the mock group.
  3. Run mock -r <config> rebuild <package>, where <config> is one of the files in /etc/mock (like fedora-devel-i386) and <package> is the srpm you want rebuilt.
And that's it. It should be possible to run this as non-root, but so far I have only gotten "permission denied"s. Adding myself to the group didn't help.

The downside is that mock only supports EPEL and Fedora, not Red Hat, CentOS, SuSE, Yellow Dog etc. like mach does. Hopefully the mock people will add support, or at least it's easy to add yourself (it should be, if adding a config file is adequate; they seem simple enough). At least it looks like adding Red Hat to the list is simple enough.

There are still a number of issues that should be checked:
  1. How easy would it be to use LVM snapshots with these chroots?
  2. If I have a deb package and run alien to convert it to rpm and then rebuild it to a different target, is it really a functional package (i.e. dependencies are correct)? The other alternative is to build the rpm directly from source which requires a spec file (alien doesn't).
All in all, another day which showed that there are still lots of things in Linux that are poorly implemented, complicated to use and lack proper documentation. Basic stuff is there, but if you run into problems and google doesn't help you are out of luck. The bug in Ubuntu certainly didn't help either.

Things you learn..

It's only 10.30 and I have already had to learn a few new things, so all in all, not a bad day at the office. Except some of this stuff is a bit annoying.

Let's start with Gnome, which managed to completely lock up. Quick googling on another computer revealed that Gnome really was dead, because ctrl+alt+backspace did nothing. But I was able to connect to the Linux machine from the other comp, so more googling and

sudo /etc/init.d/gdm restart


did the trick: Gnome restarted. Of course everything I had open on the desktop was closed, but at least I didn't have to reboot the machine.

I did remember that I can somehow open other ttys, but I couldn't remember the shortcut until afterwards. Then I remembered: ctrl+alt+F[1-9] opens them, so I could have fixed the problem locally as well. Oh well, I'll remember it next time.

What else?

Oh yes, Thunderbird doesn't support OpenPGP keys out of the box; you have to install the Enigmail plugin for that. After that it's a snap: just run a wizard and select whether you want to use an existing key or create a new one. I had to switch over to Thunderbird from Evolution, because Evolution just doesn't work. Twice now it has decided there is no connection to the mail server when there clearly is. Restarting doesn't help; last time I just kept trying all sorts of things and finally it started working again. I don't know what I did, and I didn't want to figure it out either, so I switched to Thunderbird, which I use at home as well and which has worked well enough for years.

I need digital signatures so I can report the bug mentioned earlier to Launchpad. But for some reason it's not accepting my key fingerprint, at least not so far. I added my key to Ubuntu's key server with gpg --keyserver keyserver.ubuntu.com --send-keys, and Ubuntu's key server says it's there. Another annoying thing: that command requires a parameter, the key id. Now, the command is "send-keys", i.e. plural, which suggests it sends all the keys. Nope: if you omit the key id, gpg says nothing. Very helpful, because you don't know you needed to supply it and get no hint that nothing was actually done. Which raises the question: what sort of propeller head writes these programs?
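For reference, a quick sketch with an explicit key id (DEADBEEF is just a placeholder; your own id shows up in the --list-keys output):

# find the key id: it's the part after the slash on the "pub" line
gpg --list-keys user@email.com
# send exactly that key; as noted above, leaving the id out silently does nothing
gpg --keyserver keyserver.ubuntu.com --send-keys DEADBEEF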

Ah, now it worked. There was a delay before Launchpad realised I had sent the key.

EDIT: Yippee! I managed to report the bug after a few tries (for instance, you need a space before the "affects" line in the email - not good). Direct link to bug.

Wednesday, 20 May 2009

Bug in Ubuntu jaunty

There seems to be a problem in the Jaunty Jackalope release: the sqlite3 version you get from the package manager has a serious bug that prevents some programs from running. I found this out when I tried to run mach.

Apparently that version (and a few older ones) doesn't allow creating a column named release without quoting it. And guess what mach tries to do (or rather yum - I don't know which of them names the column). The bug is described here.

Ubuntu's version is 3.6.10; installing the latest one (3.6.14-1) seems to have fixed the problem.

Installing the latest version of sqlite is done like this:
  1. Download the amalgamation package from the download page.
  2. Extract the package and cd to that directory.
  3. Run ./configure --prefix=/usr && make && make install (the whole sequence is sketched below).
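Put together, with placeholder file names (the exact tarball and directory names depend on whatever version the download page currently offers), it looks roughly like this:

tar xzf sqlite-amalgamation-3.6.14.1.tar.gz
cd sqlite-3.6.14.1
./configure --prefix=/usr
make
sudo make install      # installs over the distribution's sqlite under /usr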

Tuesday, 19 May 2009

Creating rpm's in Ubuntu

Converting a deb package to an rpm package is simple: just use alien (alien also supports other package types: tgz, pkg, slp and lsb). But what if you need to start from scratch? Here's how to create an rpm package from scratch in Ubuntu:

1. Install rpm ("sudo aptitude install rpm").
2. Create the spec file (e.g. pkg-1.0.spec).
3. Create the tarball for your sources and place it in /usr/src/rpm/SOURCES.
4. Run rpmbuild -ba pkg-1.0.spec.

This will create two packages: /usr/src/rpm/SRPMS/pkg-1.0-1.src.rpm and /usr/src/rpm/RPMS/i386/pkg-1.0-1.i386.rpm (if your target was i386).
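As commands, the steps look roughly like this. The names match the example spec below and are only illustrative; sudo is there because /usr/src/rpm usually isn't writable by a normal user:

sudo aptitude install rpm                      # 1. install rpm
# 2. write the spec file (pkg-1.0.spec), then 3. tar up the sources
tar czf MySw-1.0.tar.gz MySw-1.0/
sudo cp MySw-1.0.tar.gz /usr/src/rpm/SOURCES/
sudo rpmbuild -ba pkg-1.0.spec                 # 4. build the source and binary rpms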

The spec file looks like:

Summary: My SW
Name: MySw
Version: 1.0
Release: 1
Source0: %{name}-%{version}.tar.gz
License: GPL
Group: Development/Tools
%description
This is my SW.
%prep
%setup -q
%build
./configure
make
%install
make install
%files
%defattr(-,root,root)
/usr/local/bin/mysw
%doc /usr/local/info/mysw.info
%doc %attr(0444,root,root) /usr/local/man/man1/mysw.1
%doc COPYING AUTHORS README NEWS

This file should be self-explanatory. In this example the file in /usr/src/rpm/SOURCES should be called MySw-1.0.tar.gz. The spec file can be named anything.

Friday, 15 May 2009

Perl formats

Reasons to love Perl, part whatever. The following script takes input that looks like this:

ID WorkId Date
1 2 12-10-2009
2 16 01-01-2008

(Fields are tab-separated)

and prints out:

+-----+----------+---------------+
|ID   |WorkId    |Date           |
+-----+----------+---------------+
|1    |2         |12-10-2009     |
+-----+----------+---------------+
|2    |16        |01-01-2008     |
+-----+----------+---------------+


How many lines would it take in some other languages? Perl only needs a handful, thanks to formats.
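A minimal sketch of such a script, assuming tab-separated input on STDIN and the column widths shown above:

#!/usr/bin/perl
use strict;
use warnings;

# package variables so the format picture lines can see them
our ($id, $workid, $date);

# printed at the top of each page
format STDOUT_TOP =
+-----+----------+---------------+
|ID   |WorkId    |Date           |
+-----+----------+---------------+
.

# one data row plus a separator line
format STDOUT =
|@<<<<|@<<<<<<<<<|@<<<<<<<<<<<<<<|
$id, $workid, $date
+-----+----------+---------------+
.

my $header = <STDIN>;              # skip the header row of the input
while (my $line = <STDIN>) {
    chomp $line;
    ($id, $workid, $date) = split /\t/, $line;
    write;                         # emit the row through the STDOUT format
}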

It's that easy to transform input into almost any output, and it's obviously trivial to do calculations on fields, change their order and so on as well. You can also define headers, print page numbers, set page widths and more just as easily.

Thursday, 14 May 2009

Frank Sinatra - Sinatra at the Sands

One more and then I'm off for a while:

I picked this album up because I got it so cheap. One of the best CD buys I have ever made. Live albums don't get much better than this, and it's a great testament to why Sinatra was one of the great entertainers of the 20th century: a great singer and an excellent comedian with immaculate timing and an ability to improvise. Not only are the songs great to listen to, the jokes crack me up every time I hear them.

"People ask me if Dean Martin is an alcoholic. I can tell you he is an unqualified, one hundred percent drunk. If drinking was an Olympic sport he would be the coach of the US team!"

On top of Sinatra's great performance, the band is in great swing as well. This album has one of the best moods of any album I know. You can almost feel like you're at the Sands. People who were there were very lucky to witness it.

I'm not really even into this sort of music in general but if you don't like this album, there's something wrong with you, not the album.

Steely Dan - Aja

More reviews from rateyourmusic:

What can be said of one of the greatest albums ever that hasn't been said already? How the musicians managed to stay sane while making this album, I don't know. Usually I'm not too interested in over-produced music, but this is so over-the-top produced that it moves onto a different level entirely.

It's obvious some get this album and some don't. On the surface it could remind you of elevator music, but I wish elevator music could be this good. I have to admit it took a few tries before I got this album, but just like The Velvet Underground & Nico, for instance, music doesn't always have to be easy to understand. You have to grow into some albums.

Steely Dan has produced a number of great albums like Pretzel Logic and Can't Buy a Thrill but without Aja they would be just another good band.

The Velvet Underground - 1969 Velvet Underground Live With Lou Reed

For some reason I hadn't blogged some of the albums at the time I reviewed them, so I'll blog them now, starting with a brilliant live album.

Velvet Underground, arguably one of the greatest bands of all time, at least one of the most influential, playing live at the top of their career sounds like a perfect recipe.

Well, it is. And isn't. Sort of. It depends on how fussy you are, because the sound quality is, frankly, quite bad in places. The underlying "whiz" sound is quite loud, the reproduction of different sounds is not very accurate, and even though this is a stereo album (I think), most tracks sound like mono. I would have thought it *was* mono but for a few weak glimpses of stereo on some tracks. Overall, some tracks sound like they were recorded on a poor C-cassette, copied a few times, with the final copy used for the album. Note that this review is based on the vinyl version, so it is possible that later CD versions (which I haven't heard) have better sound quality.

But once you start to listen to the music, most of this is forgotten. The music is simply so brilliant, especially because the tracks are performed live, giving the artists the freedom the good ones thrive on. The album is full of superb tracks like "Waiting for My Man", "What Goes On", "Femme Fatale", "Beginning to See the Light", "Pale Blue Eyes" and "White Light/White Heat", to pick just the best ones. I'm sure this album can be listened to over and over again, and every time you will find something new. Somehow Lou Reed also sounds a bit more... enthusiastic, if that's the right word, than he did later in his career. There is a bit more... spring (another interesting choice of words, if I may say so) in his singing. It doesn't make the songs any more chirpy or happy than they are, but somehow I like it. Lou Reed can sound really miserable, but when he also sounds like he doesn't really care about the singing, it can get a bit too depressing.

Funnily enough, I finally realised why it is said that everyone who went to a VU concert started their own band. I've been practising with an electric guitar for a few days (which means I can barely get out a few notes), so I've been listening to the guitars in particular. They mostly sound quite simple, yet the music is brilliant. There really isn't much that sounds difficult to play (at least to my limited understanding), so it's easy to start thinking: if it really is possible to make such great music with such simple ingredients, why couldn't I? But the genius of VU is not about skilful use of instruments but about the way they merge together. Each instrument has a very distinct identity, yet they form a fantastic whole.

Velvet Underground shows what great music is. Everyone should pick up one of The Velvet Underground & Nico, The Velvet Underground or this album and listen to it a few times. Say five. If you still don't like it, fair enough. But on the off-chance that you do "get" the album, you will forever be happy you gave it a try.

I "only" gave the album four and a half stars because of the sound quality. I'm fussy like that. Don't let that put you off, though.

Hidden Secrets of Git

This caused a bit of a headache. This is the scenario:

In repo 1:
- Edit some stuff,
- git commit,
- git tag -a,
- git push --tags.

In repo 2:
- git pull --tags.

-> The changes from the edit are visible.

This definitely isn't clear from the documentation:
--tags

All refs under $GIT_DIR/refs/tags are pushed, in addition to refspecs explicitly listed on the command line.

But very cool, nevertheless. This means it's not possible to accidentally only push the tags and not the changes associated with them.
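Spelled out as commands (the tag name and messages below are just placeholders), the scenario looks roughly like this:

# repo 1
git commit -a -m "edit some stuff"
git tag -a v1.0 -m "tag the release"
git push --tags            # pushes the tag and the commits it points to

# repo 2
git pull --tags            # the tagged commits come along with the tag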

Friday, 8 May 2009

On the Edge

I wrote a review of Brian Bagnall's brilliant On the Edge: The Spectacular Rise and Fall of Commodore on LibraryThing.

A brilliant book that pretty much makes other books about Commodore redundant. It is so comprehensive that it's hard to think of what might be missing. This is a book about a very peculiar company and some brilliant people doing brilliant work, like the sound and video chips of the Commodore 64. I hadn't even realised that the CPU of the C64 was actually the same as in the VIC-20; the C64 just had those chips and more memory, yet it was on a totally different level, performance- and price-wise, to anything on the market at the time. The sound chip, SID, was way better than anything else affordable to normal consumers, even though it was a rush job (like most of the stuff done at Commodore during their peak). The result was a chip that has produced some of the most memorable game music ever and is still used to this day.

I also didn't know that the Amiga wasn't originally a Commodore product, but the way it was built was reminiscent of the way Commodore built some of its own products (PET, VIC-20, C64): by going far beyond what anyone else had ever done. The Amiga, at the time of its release, was an amazing home computer offering unrivalled price and performance. It's too bad Commodore marketing messed up the North American market completely and effectively killed the product there; European sales were not enough to keep Commodore afloat in the end. What made Commodore a success at first was the combination of price cutting by Tramiel, who always wanted to sell at far lower prices than the competition, and the engineers, who didn't like compromises and were brilliant at what they did. Making such important products required both.

But first and foremost this book is about people: the odd, the brilliant, the obsessed people who made Commodore what it was. From founder and CEO Jack Tramiel, who was not a very nice person and ruined a lot of lives, but after whose dismissal by majority shareholder Irving Gould Commodore went downhill fast (and would have folded a lot earlier without the Amiga), to brilliant engineers like Chuck Peddle, who created the computer section of Commodore and was then destroyed by Tramiel, and the engineers who created the C64, Robert Russell, Bob Yannes and many others. These guys were motivated to the point of obsession, brilliant at what they did, and some were very eccentric, none more so than Bil Herd, a brilliant hardware guy and an alcoholic. The stories of some of their antics are just brilliant, like how and why Herd punched a hole in the wall of his office.

For someone who spent way too much of his youth playing with the C64, this book is a goldmine of information. For instance, the reason why the 1541 floppy drive was so ridiculously slow is revealed. Commodore was a major factor in bringing computers into the home, much more so than IBM, and Commodore easily out-sold companies like Apple during the critical years when the home market was created (the first three true home computers were the TRS-80, the Commodore PET and the Apple II, and the Apple sold far less than the other two). So this book is also a valuable tale of how the computer arrived in our homes. Everyone might have an IBM PC clone now, but in the 80's it looked for a while like the future might be in Commodore machines, but for some shocking marketing decisions. The whole computer business seems to have changed from brilliant engineers doing brilliant, ground-breaking work to the business-driven model of today, where products are changed just enough to make people want to buy them.

The only little thing I was left hoping for was some sort of timeline of the events in the book, including the things the competition did that are mentioned in it. Sometimes it was hard to follow the order of events because things happened in parallel, so a new chapter might toss you back several years. Another thing is, of course, that since this book is largely the voice of the engineers who made it all happen, the opinions might be a bit one-sided, but this really only bothered me on a few occasions.

For me, some of the best gaming experiences ever were on the C64, and it's the reason I now work in the computer industry, even though I never did much programming on it. I, like countless others, was drawn into computing by Commodore, so their legacy is still strong even though the company folded in 1994. This book brings back many memories and also makes you wonder whether we have lost something while gaining computers with computing power that was completely unthinkable 25 years ago.

A company that created the PET, VIC-20, C64 and Amiga (they made the Amiga a success by creating a cheaper version that was affordable to many more people than the original Amiga 1000) deserves to have its history told. This book does it, and brilliantly. A must-read for anyone who has ever owned one of Commodore's computers.

Thursday, 7 May 2009

Using signed tags with git

Another thing I tried out with git was signing tags with gpg keys. Things would have been a lot easier if I knew more about git or gpg - I'm not yet too familiar with either. Obviously I have used encryption with email before, but since that was with Outlook, the whole thing works a little differently (the keyring handling is done through Outlook's interface).

So here's how you can do it:

1. The user creates a gpg key with gpg --gen-key.
2. Then he exports the public key with gpg --armor --export user@email.com > mypk
3. He sends the public key file to you for you to save it in the git user's keyring (assuming you are using gitosis). You then import it with gpg --import mypk. It might also be a good idea to sign this key with gpg --edit-key user@email.com.
4. Next you need to add the verification somewhere in the git hooks; pre-receive is probably the best bet. Checking the validity is done with git tag -v "tag_id". The code could be something like this (I haven't done this myself yet):
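A rough, untested sketch of such a pre-receive hook. It uses git verify-tag on the pushed tag object (the same check git tag -v performs) because the tag ref doesn't exist yet when pre-receive runs, and it assumes the git user's keyring contains the keys imported in step 3:

#!/bin/sh
# reject any pushed tag that isn't signed with a key in the git user's keyring
zero="0000000000000000000000000000000000000000"
while read oldrev newrev refname; do
    case "$refname" in
        refs/tags/*)
            # allow tag deletion
            [ "$newrev" = "$zero" ] && continue
            # verify the pushed tag object directly; fails for unsigned or
            # lightweight tags and for signatures from unknown keys
            if ! git verify-tag "$newrev" >/dev/null 2>&1; then
                echo "Rejected $refname: not signed with an accepted key" >&2
                exit 1
            fi
            ;;
    esac
done
exit 0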


And hey presto! You have just made sure that your system only accepts tags signed by people whose keys you have accepted.

Improved security with git and gitosis

So, I started working on Linux, and one of my first tasks has been to learn to use git, the open source version control system used by, among others, Linus Torvalds. I'm familiar with a number of SCM systems, but git has quite a lot of new stuff for me. I also installed gitosis for added security. Gitosis removes the need to create user accounts for everyone who needs to read from or write to the repository, which improves security a lot. Setting it up had its small quirks, which meant I couldn't follow the otherwise excellent guides to the letter. But I did get there eventually.

Anyway, what I wanted to do was check that the user is who he says he is. By simply saying "git config user.email whoever@whatever.com && git config user.name 'Mr. Fake'" a user can hide his identity - in practice allowing him to add whatever he wants to the repository as long as he has write access. Also, there is no extra security for anything else: for instance, if you have conditional hooks in your git repository, you can't trust the user id for access rights.

The solution is to use gitosis and check that the user really is who he says he is. I asked this question on Stack Overflow and then ended up solving the problem myself. The solution requires patching gitosis, reinstalling it and then adding a pre-receive hook to the git repository. Not overly complicated, but hopefully someone will add that fix to the "official" gitosis code as well.

This solution verifies that the email address used to create the ssh key for gitosis matches the address the user is committing with, which should be pretty secure. This way the repository history is correct and any culprits can always be tracked down.
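As a rough sketch, the hook side of it could look something like the following. GITOSIS_USER_EMAIL is a made-up variable name standing for whatever the patched gitosis exports about the ssh key it authenticated; adjust it to match your own fix:

#!/bin/sh
# reject pushes whose commits claim a different email address than the one
# tied to the ssh key gitosis authenticated
zero="0000000000000000000000000000000000000000"
while read oldrev newrev refname; do
    [ "$newrev" = "$zero" ] && continue            # ref deletion, nothing to check
    if [ "$oldrev" = "$zero" ]; then
        # new ref: only check commits not already reachable from existing refs
        revs=$(git rev-list "$newrev" --not --all)
    else
        revs=$(git rev-list "$oldrev..$newrev")
    fi
    for commit in $revs; do
        email=$(git log -1 --pretty=format:'%ce' "$commit")
        if [ "$email" != "$GITOSIS_USER_EMAIL" ]; then
            echo "Rejected: $commit is by $email, but the key belongs to $GITOSIS_USER_EMAIL" >&2
            exit 1
        fi
    done
done
exit 0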