Making Music using Rosegarden on Fedora 17

I’ve always loved music – as do both of my parents. They have excellent but divergent tastes in music. With my Mum I share a love of Sandy Denny, Jeff Lynne and George Harrison, while with my father I shared an affection for Eric Coates, Henry Hall and G. F. Handel. And when you mix the two together you get my love of Maestoso, Mike Oldfield, Kevin Ayers and Barclay James Harvest.

As well as listening to music, I also enjoy making it. But I always thought making music on a computer seemed so difficult that I never really bothered trying.

However, I recently got a bit of inspiration from my friend TA Walker (Tim). Earlier this year Tim signed up for something called the 5090 Challenge – writing 50 songs in 90 days. Given that Tim has a full-time job, a wife and a young daughter, that was insanely ambitious, but astoundingly he managed 36 excellent songs, which I have been known to raid for my YouTube videos. In order to reach his goal Tim was making music anywhere using anything – he was even overdubbing vocals and recording guitalele in his car during his lunch-breaks using an iPod Touch. Here is Tim playing one of his 5090 songs:

So, if Tim could make music in a car (or on a very nice looking white leather sofa), I had no excuse sitting in front of a computer with access to a repository of free software for making noises.

I’m using Fedora 17 and I wanted to try and record music entirely using free software. This is because a) I’m on a budget of £0 and b) I think it’s the right thing to do.

Rosegarden running on Fedora 17

The first program I tried to install was something called Rosegarden. It seemed a pretty welcoming program for beginners as music programs go, and therefore a good place to start. It uses staves and notes – things that a dinosaur like me can (almost!) understand. However, before I could get Rosegarden to make any noise I needed a synthesiser. I don’t have a real synthesiser, so instead I needed a soft synthesiser – a program that runs on the computer and pretends it’s a real synthesiser sitting on your table.

The synthesiser that everyone seemed to recommend was something called FluidSynth, so I thought I’d install that. FluidSynth is a free software synthesiser that can take MIDI data from a program like Rosegarden and turn it into audio.

It normally comes with a “SoundFont” bank containing a nice range of sounds for a beginner, so it seemed a good start. However to use FluidSynth it’s best to have a nice graphical interface so you can fiddle with it using knobs and buttons on your desktop. The most common one is called QSynth. It looks very impressive!

A very impressive addition to any desktop!

Only, before I could use the virtual synthesiser I needed something to plug it into the computer’s sound hardware. In other words, FluidSynth needs somewhere to send all the audio it’s creating. That somewhere is a piece of software called the JACK Audio Connection Kit (JACK). And before I could use JACK, I thought I’d find it easier if I had something graphical to control JACK with. So I needed something called QJackCtl – a graphical JACK control panel.

QJackCtl with JACK not started

So I downloaded all the bits I needed. I had Rosegarden (a music studio), FluidSynth (a synthesiser), JACK (a sound server), QJackCtl (a graphical interface for JACK) and QSynth (a graphical interface for FluidSynth). It was like the house that JACK built.

Now I tried to make a noise. I worked out after a couple of minutes that it’s not enough to simply load QJackCtl – JACK has to be started and stopped by pressing the Start and Stop buttons. So I tried to start JACK and it did nothing but spit error messages at me and I certainly couldn’t get anything to make any sound.

Now, this is where the cutting-edgeness of Fedora had just bitten me on the bum. Normally you should be able to start JACK and it will work without error. And indeed, since this morning’s software repository updates that’s exactly what it does do. However at that time there was a permissions problem within Fedora so I needed to type:

su -c "modprobe snd-seq-midi"

It took me an hour or so to find that out, and before I did so I couldn’t start JACK or make any noise at all. Normally I would have given up long before this point, but with M4 and Mr Cable The Sysadmin ringing in my ears I was determined and pressed on.

There were a couple of other things I had to do in JACK to get it to work. After pressing the Setup… button I had to uncheck Realtime, check Force 16bit and change the interface to hw:0.
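Incidentally, QJackCtl is really just building a jackd command line from those settings. If I remember the flags correctly, the equivalent manual start would be something like:

jackd -r -d alsa -d hw:0 -S

where -r disables realtime mode, -d alsa -d hw:0 selects the first sound card through ALSA and -S forces 16-bit samples.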

QJackCtl with JACK started

With JACK running happily, I started QSynth to get FluidSynth running. Everything seemed OK, so the next step was to run Rosegarden. No problems. I opened one of the examples in the examples folder, pressed the play button and success! Music!

However, music on my headphones only – there was nothing coming out of my speakers. I went to QJackCtl and pressed the Connect button to see what was going on.

QSynth in headphone-only mode

As you can see, the left output of QSynth (l_00) was going to my system’s playback_1 and the right output of QSynth (r_00) was going to my system’s playback_2. This was giving me music in my headphones. However, what were the other playbacks?

QSynth will now use my speakers too

I tried to connect the left output of QSynth (l_00) to playback_3 and the right output (r_00) to playback_4, and it worked. Music through my speakers!
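You can make the same connections from a terminal with JACK’s jack_connect tool. Assuming QSynth registers its JACK client under the name qsynth (check the Connect dialog for the exact names on your machine), it would be something like:

jack_connect qsynth:l_00 system:playback_3
jack_connect qsynth:r_00 system:playback_4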

So every time I want to make music I…

  1. load QJackCtl, 
  2. start JACK by pressing the Start button, 
  3. load QSynth 
  4. then load Rosegarden

…always in that order.

Provided I just wanted to enter musical notation into Rosegarden I was now fine, but that’s not much fun. The frustrated Woolly Wolstenholme in me wanted to have a keyboard to play!

The trouble was that, as well as not having a synthesiser, I didn’t have a keyboard either. Fortunately there are “virtual keyboards” available that allow you to play music using your computer’s keyboard. The one I chose out of a field of three was called Virtual MIDI Piano Keyboard (VMPK). I chose it because it was the only one that seemed able to play nicely with the Hungarian keyboard on my computer.

Be Woolly in the comfort of your own home…

However, recording MIDI data created with a virtual keyboard meant I had to plug it into something that records MIDI data – Rosegarden. It was back to the QJackCtl Connect dialog:

VMPK running, but QJackCtl shows nothing to plug it into

VMPK had appeared in the MIDI tab of the QJackCtl Connect dialog. The trouble was, nothing else did – the only thing I could plug my virtual keyboard into was itself.

This proved to be a very tricky problem to sort out. It took me a long time to find a solution, but the answer was a program called a2jmidid. Apparently there are two kinds of MIDI on a GNU/Linux machine – ALSA MIDI and JACK MIDI. They can’t talk to each other without a “bridge” program. The bridge is called a2jmidid and it’s available in the Fedora repository. To use it I had to start a terminal window and type:

a2jmidid

Then, provided I kept the terminal window open, when I went back to my QJackCtl Connect dialog I got some extra things in the MIDI tab:

VMPK connected to Rosegarden in QJackCtl

As you can see, I can now connect the VMPK Output to the Rosegarden record in and, hey bingo, I’ve got a MIDI keyboard connected to Rosegarden.

VMPK configured for a Magyar keyboard

The only thing left to do with VMPK was to create a Hungarian key mapping – this was very easy to do using a dialog provided by the program.

The first thing I wanted to try and record was a tune I remembered from my childhood. It was an early music or baroque piece for recorder and a small ensemble, used by the Open University before their morning broadcasts. I had not heard it since those early mornings in the 1980s when I used to get up early to watch a maths foundation course on calculus or the foundation course on modern art.

A lost childhood memory

I did a rather twee arrangement using a harpsichord, viola, cello and a recorder. I think the real thing was less Bach and more Henry VIII.

However when I came to play it the recorder just didn’t sound right. It sounded very good, but it didn’t sound like the recorder I had in my head. So I looked on-line to see if there were any other SoundFont banks I could use with QSynth.

I was in luck, because the pianist and composer S. Christian Collins has put together an excellent SoundFont bank for QSynth and put it on his website here. It’s called the GeneralUser GS SoundFont bank.

GeneralUser GS Soundfont bank loaded into QSynth

To load it I had to get QSynth running and press Setup…. Next, I had to go to the Soundfonts tab and replace the default soundfont bank (an sf2 file) with the GeneralUser GS SoundFont bank I had downloaded.

To my delight the recorder sounded much more how I wanted it to sound.

Now that I had finished and was happy with my sounds, I realised I needed some way of recording what I’d just done as an audio file instead of a Rosegarden file.

When I ran QJackCtl with nothing else running the Connect dialog looked like this:

By default I can only get sound from the microphone

If you look at the Readable Clients box on the left you’ll see the only place I could get any audio was from capture_1 and capture_2. They represent the microphone socket on my computer: capture_1 is the left channel and capture_2 is the right channel of the stereo microphone input.

If I ran Rosegarden I found they were connected automatically to Rosegarden’s record in 1 L and record in 1 R Input Ports:

Rosegarden connects to the microphone automatically

I looked in the QJackCtl Setup dialog and saw a Monitor check box which was unchecked. It sounded like what I needed so I checked it.

However you can enable monitoring

When I restarted JACK I saw this:

And route the monitor output where you want it

So now I have monitor output in addition to microphone input as a potential source of audio. What monitor output means is that I can record whatever I can hear through the speakers. This is just what I needed.

Such as here, where the monitor output is routed to Rosegarden

I started Rosegarden up again and connected monitor_1 and monitor_2 to record in 2 L and record in 2 R.

This meant that now Rosegarden had the system’s sound output available as a source of audio. Now I could use Rosegarden to record whatever Rosegarden was playing as an audio file!

You can easily turn the metronome off in Rosegarden

Setting this up in Rosegarden is quite easy and pretty logical once you work it out (which took me a long time!). The first thing you need to do is go to Studio -> Manage Metronome to turn off the click track. You usually don’t want that on your master recordings!

The next thing you need is an audio track that can accept the monitor output as its audio input:

Rosegarden all set up to record Rosegarden

You can see in the picture above I’ve set track 17 as my current track. It’s an audio track and I’ve called it RECORD.

On the left hand side you’ll notice that I’ve set the In: to In2. This is very important – In2 is the Rosegarden input we connected up to the monitor output in QJackCtl earlier. Never use In1 – it’s quiet and full of interference noise!

Finally, you’ll notice I’ve armed track 17 to record – shown by the red light to the left of the track name. Now when I press the record button, my Rosegarden file will play and be recorded as an audio file on track 17 at the same time.

My recorded Rosegarden output in Rosegarden

When the track has finished you will see the waveform displayed in your recording track as it is above.

Double-clicking on an audio track segment in Rosegarden opens Audacity

Now you can double click on the recorded segment and it will open in Audacity. Don’t forget to set Audacity to JACK Audio Output as I have in the picture above, or it will freeze and not play. From Audacity you can edit or export the audio in the usual way.

For OGG Vorbis files or MP3 files I normalize to -2.0dB

I always save a lossless FLAC file from Audacity first. If I want to save to a lossy format such as Ogg Vorbis or MP3, I always Normalize to -2.0 dB before I export.

Being able to set Audacity to use JACK audio output is very handy – particularly if you find you want to listen to audio files while you are working.

So now I had a FLAC file, an Ogg Vorbis file and an MP3 file. The FLAC file was fine, but what I really wanted to do was get a picture in my MP3 and Ogg Vorbis files so they would be just like the ones I downloaded from TA Walker’s Bandcamp page.

To do this I found an excellent program called EasyTAG, which does exactly what its name suggests. It allows you to add a picture to your audio files and is very easy to use. Although I tend to use Ex Falso for most of my tagging (it’s better for classical music), I’ll use EasyTAG for tagging my own files in future.

The next thing I decided to do was re-record the OU tune in Mike Oldfield style. When I was a child I remember watching Simon Groom visit Mike Oldfield to see him re-record the Blue Peter signature tune. That video had an enormous effect on me as a child, and recording something like that was something I always wanted to try.

I had a lot of fun in Rosegarden pretending to be Mike – particularly tapping away on my computer’s keyboard pretending to play the bodhrán.

When I finished, Tim very kindly recorded a real electric guitar solo for me to add to my track. He supplied it to me as a FLAC file, but the funny thing was I could not find any way of importing a FLAC file into Rosegarden – only .WAV files.

TA Walker’s solo shown on the red track

However, by accident, I discovered you could import FLAC files directly into Rosegarden if you dragged and dropped them onto the time track.

I’d enjoyed myself so much with the Open University tune that I decided to record another tune Mike Oldfield-stylee, so I dusted off my recording of Border Television’s Keltic Prelude March by L. E. DeFrancesco and did that as well!

The reason I did the Keltic Prelude March was so that I could upload my video of a Border Television start-up that I had pulled down earlier this year. The reason I had pulled it down was because of a copyright claim over the track Holiday People by James Clarke that I had used over the menu rundown. Therefore I decided I would also create a pastiche of Holiday People to use in my Border start-up. I came up with a tune I called Lionel’s Talking!

Lionel’s Talking in Hydrogen

I needed a “real” drum kit for Lionel’s Talking so I decided to use a special drum synthesiser called Hydrogen which does the job flawlessly. Hydrogen also works beautifully in tandem with Rosegarden. The Rosegarden website has a wonderful tutorial to explain how to do this here.

So put it all together and what do you have? Well something like this…

Producing music on GNU/Linux can be a bewildering and frustrating experience at first. There are so many things you need – each one has to be set up correctly in order to work properly. There is a huge amount to learn and sometimes you feel the easiest thing is to just give up. I spent a lot of time trying, and failing, to do things which I thought should have been easy.

In addition, differences in hardware mean what you have to do to get everything working is slightly different for everyone.

But with a little perseverance you find that things rapidly begin to make sense, there is a common logic that underlies all the things you have to do and you begin working out answers to your problems yourself.

I hope you try making some music too.

ATV Yesterday and Today

If you’ve read my blog before, you may have come across some posts about my friend Roddy Buxton. Roddy is an incredibly inventive chap – he’s rather like Wallace and Gromit rolled into one! He has his own blog these days and I find everything on it fascinating.

One of Roddy’s cracking contraptions

One of the subjects recently covered on Roddy’s blog is the home-made telecine machine he built. The telecine is a device invented by John Logie Baird at the very dawn of broadcasting (he began work on telecine back in the 1920s) for transferring pictures from film to television.

Roddy also shares my love of everything ATV, so naturally one of the first films Roddy used to demonstrate his telecine was a 16mm film copy of the ATV Today title sequence from 1976.

This title sequence was used from 1976 to 1979 and proved so iconic (no doubt helped immeasurably by the rather forgetful young lady who forgot to put her dress on) that it is often used to herald items about ATV on ITV Central News. Sadly, as you can see below, the sequence was not created in widescreen, so it usually looks pretty odd when it’s shown these days.

How the sequence looks when broadcast these days.

The quality of Roddy’s transfer was so good I thought it really lent itself to creating a genuine widescreen version. In addition, this would provide me with a perfect opportunity to learn some more about animating using the free software animation tool Synfig Studio.

The first thing to do when attempting an animation like this is to watch the source video frame by frame and jot down a list of key-frames – the frames where something starts or stops happening. I use a piece of free software called Avidemux to play video frame by frame. Avidemux is like a Swiss Army knife for video and I find it handy for all sorts of things.

Video in Avidemux

I write key-frame lists in a text file that I keep with all the other files for a project. I used to jot the key-frames down on a pad, but I’ve found using a text file has two important advantages: it’s neater and I can always find it! Here is my key-frame list in Gedit, which is my favourite text editor:

Key-frame list in Gedit

After I have my key-frame list, I experiment with any parts of the sequence I’m not sure how to achieve. It’s always good to do this before you start a lot of work on graphics or animation, so that you don’t waste a lot of time creating things you can’t eventually use.

The ATV Today title sequence is mostly straightforward, as it uses techniques I’ve already used in the Spotlight South-West titles I created last year. However one thing I was not too sure about was how to key video onto the finished sequence.

Usually, when I have to create video keyed onto animation I cheat. Instead of keying, I make “cut-outs” (transparent areas) in my animation. I then export my animation as a PNG32 image sequence and play any video I need underneath it. This gives a perfect, fringeless key and was the technique I used for my News At One title sequence.

However, with this title sequence things were a bit trickier – I needed two key colours, as the titles often contained two completely different video sequences keyed onto them at the same time.

Two sequences keyed at once

Therefore I had to use chromakeying in Kdenlive using the “Blue Screen” filter, something I had never had a lot of success with before.

The first problem had a simple workaround – I couldn’t key two different video sequences onto two differently coloured keys at once in Kdenlive, so I had to key the first colour, export the video losslessly (so I would get no compression artefacts), then key the second colour.

The harder part was making the key look smooth. Digital keying is an all-or-nothing affair, so what you key tends to have horrible pixellated edges.

Very nasty pixel stepping on the keyed video

The solution to this problem was obvious, so it took me quite a while to hit upon it! The ATV Today title sequence is standard definition PAL Widescreen. However, if I export my animation at 1080p HD and do my keys at HD they will have much nicer rounded edges as the pixels are “smaller”. I can then downscale my video to standard definition when I’ve done my keying and get the rounded effect I was after.

Smooth keying, without pixel stepping

The other thing I found is that keying in Kdenlive is very, very sensitive. I had to do lots of test renders on short sections as there was only one “Variance” setting (on a scale between 1 and 100) that was exactly right for each colour.

So now I was convinced I could actually produce the sequence, it was time to start drawing. I created all of my images for the sequence in Inkscape, which is a free software vector graphic tool based around the SVG standard.

However, in order to produce images in Inkscape I needed to take source images from the original video to trace over. I used Avidemux to do this. The slit masks that the film sequences are keyed on to are about four screens wide, so once I had exported all the images I was interested in I needed to stitch them together in the free software image editor The GIMP. Here is an example, picked totally at random:

She’ll catch her death of cold…

Back in Inkscape I realised that the sequence was based around twenty stripes, so the first thing I did before I created all the slit mask images was to create guides for each stripe:

These guides saved me a lot of time

The stripes were simply rounded rectangles that I drew in Inkscape. It didn’t take long to trace all of the slit masks for the title sequence. Two of the masks were repeated, which meant that I didn’t have as many graphics to create as I was fearing.

Once the slit masks were out of the way I could create the smaller items such as the logo:

ATV Today logo created in Inkscape

And, with that, all the Inkscape drawing was done. It was time to animate my drawings now, so I needed to export my Inkscape drawings into Synfig Studio. To do this I was able to use nikitakit’s fantastic new Synfig Studio SIF file Exporter plug-in for Inkscape. This does a fabulous job of enabling Inkscape artwork to be used in Synfig Studio, and it will soon be included as standard in Inkscape releases.

When I did my Spotlight title sequence I exported (saved) all of my encapsulated canvases (akin to Symbols in Flash) that I needed to reuse within my main Synfig file. This was probably because I came to Synfig from Macromedia Flash and was used to the idea of having a large file containing all the library symbols it used internally.

I have been playing with Synfig Studio a lot more since then, and I realised a far more sensible way to work was to save each of what would have been my Flash library symbols as a separate Synfig file. Therefore I created eight separate Synfig Studio files, one for each part of the sequence, and a master file that imports them all and is used to render out the finished sequence.

The project structure

This meant that my finished sequence was made up of nine very simple Synfig animation files instead of one large and complicated one.

The animation itself mainly consisted of simply animating my Inkscape slit masks across the stage using linear interpolation (i.e. a regular speed of movement).

I could type my key-frames from my key-frame text file directly into the Synfig Studio key-frame list:

Key-frames for one part of the animation

The glow was added to the ATV Today logo using a “Fast Gaussian Blur”, and the colour was changed using the “Colour Correct” layer effect – exactly the same techniques I used in the Spotlight South-West titles.

ATV Today logo in Synfig

In order to improve the rendering speed I made sure I changed the “Amount” (visibility) of anything that was not on the stage at the present time to 0, so the renderer wouldn’t bother trying to render it. You do this using Constant interpolation, so that the value is either 0 or 1.

I had a couple of very minor problems with Synfig when I was working on this animation. One thing that confused me sometimes was the misalignment of the key-frame symbols between the Properties panel and the Timeline.

This misalignment can be very confusing

As you can see above, the misalignment gets greater the further down the “Properties Panel” something appears. This makes it quite hard at times to work out what is being animated.

Some very odd Length values indeed!

Another problem I had was that the key-frame panel shows strange values in the time and length columns – particularly if you forget to set your project to 25 frames per second at the outset.

However, overall I think Synfig Studio did brilliantly, and I would choose it over Flash if I had to create this sequence again and could choose any program to create it in.

The most important technical benefit of Synfig Studio for this job was the fact that it uses floating point precision for colour, so the glows on the ATV Today logo look far better than they would have done in Flash as the colour values would not be prematurely rounded before the final render.

I rendered out my Synfig Studio animation as video via ffmpeg using the HuffYUV lossless codec, and then I was ready to move on to Kdenlive and do the keying.
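If you’d rather drive that step from a terminal, ffmpeg can stitch a numbered PNG sequence into a HuffYUV AVI with something like this (the frame-name pattern is just an example):

ffmpeg -r 25 -i frame.%04d.png -vcodec huffyuv atv-today.avi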

Obviously I needed some “film sequences” to key into the titles, but I only have a small selection of videos as I don’t have a video camera. To capture video I use my Canon Ixus 65, which records MJPEG video at 640 x 480 resolution at 30fps.

My 16mm film camera

Bizarrely, when the progressive nature of its output is coupled with the fact that it produces quite noisy pictures, I’ve found it makes a perfect digital substitute for a 16mm film camera!

I “filmised” all the keyed inserts, so that when they appear in the sequence they will have been filmised twice. Hopefully, this means I’ll get something like the degradation in quality you get when a film is then transferred to another film using an optical printer.

Once the keying was done, the finished sequence was filmised entirely in Kdenlive, using techniques I’ve already discussed here.

And so, here’s the finished sequence:

Although I’m not happy about the selection of clips I’ve used, I’m delighted with the actual animation itself. I’m also very pleased that I’ve completed another project entirely using free software. However, I think the final word should go to Roddy:

Thanks for the link. I had a bit of a lump in my throat, seeing those titles scrolling across, hearing the music, while munching on my Chicken and Chips Tea… blimey, I was expecting Crossroads to come on just after!

If you are interested in ATV, then why not buy yourself a copy of the documentary From ATV Land in Colour? Three years in the making and over four hours in duration, it contains extensive footage (some not seen for nearly fifty years) and over eleven hours of specially shot interviews edited into two DVDs.

Spotlight on Synfig

The only thing I haven’t been able to do using free software since moving to GNU/Linux in 2008 is animate. And it bugged me. Everything else – raster graphics, vector graphics, offline video editing, audio editing, font design, desktop publishing – I could achieve, but animation was the reason I’ve had WINE and Macromedia Flash 8 installed on my machine for the past three years.

When I first started playing with GNU/Linux I came across a program called Synfig Studio which could do animation, but at that time it needed to be compiled from source code. It seemed a bit too much like brain surgery for a GNU/Linux beginner! However, the other day I was banging my head trying to do some animation in Flash. I decided to Google for any free software tools that might be able to help and I was reminded of Synfig Studio once again.

Blue hair? Why, it’s Mrs. Slocombe!

I went to the Synfig Studio website and the first thing I noticed was that a brand new shiny version of Synfig Studio was available as an RPM for Fedora. In other words, all I had to do was download, double click and go. Everything worked perfectly. The Synfig Studio website was excellent – there were a large number of tutorials and an extensive manual – and so I set about reading.

Animation programs are always off-putting to beginners due to their complexity, and Synfig Studio was no exception – partly because it began life as an in-house tool in a professional animation company and that really shows in the power and complexity of what it offers.

I learned Flash 2 back in 1998 by trying to create the ATV Colour Zoom ident as I thought it would be quite a good challenge and force me to look into the tool properly. For the same reason I dusted off one of the more challenging animations in my “TODO” list to learn Synfig – the BBC South West Spotlight dots titles.

My plan was to draw the Spotlight logo in Inkscape, import that into Synfig Studio and then animate it. The first thing I did was set up my canvas. Changing the units to pixels is very important – Synfig Studio uses points by default which seems a strange choice for a tool not centred on printed work.

When I tried importing my artwork from Inkscape it came in at the wrong size:

Imported SVG from Inkscape

The reason was obscure and not what I had been expecting. I had assumed it was the old Inkscape dpi (dots per inch) problem, but it was to do with something called Image Span, which is related to the aspect ratio of the end animation. After reacquainting myself with the Pythagorean theorem, I worked out I needed to set the Image Span to 16 for 768 by 576 pixel artwork from Inkscape.
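In case anyone wants the arithmetic: the Image Span is the length of the canvas diagonal measured in Synfig’s internal units, and (assuming Synfig’s default scale of 60 pixels per unit) a 768 × 576 pixel canvas is 12.8 × 9.6 units, so the span works out as √(12.8² + 9.6²) = 16.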

Setting Image Span in Synfig Studio

Then the artwork came in correctly from Inkscape. However, now I could see some problems with the imported SVG:

Problems with Imported Inkscape SVG

There were two problems – the holes had disappeared in the “P” and “O” and there was a segment missing from the circle of the letter “O”.

Paths with holes are imported into Synfig Studio as two objects or “layers” (everything in Synfig Studio is a layer) – the letter and its hole. To make a letter with a hole in it you need to place the hole layer above the letter layer, and then give the hole layer an “alpha over” blend method. As you can see, the logic behind the program is very different to Flash!

Using Alpha Over in Synfig

The nick out of the letter “O” was Inkscape’s fault. When you convert text to paths in Inkscape you often get double nodes (nodes stacked on top of each other). Double nodes also cause problems in Inkscape itself so it’s always a good idea to merge these nodes in Inkscape.

The join nodes button in Inkscape

Inkscape ellipses don’t import as Synfig Studio circles (they come in as something called Blines instead), so I redrew the dots in the Spotlight logo as Synfig Studio circles to make animation easier later. In fact to get an ellipse in Synfig Studio you draw a circle and then apply a transformation layer to it – again, a bit strange for a beginner! So, now I had the artwork imported:

Inkscape SVG imported perfectly

I discovered I didn’t actually need the background rectangle I’d drawn in Inkscape in Synfig Studio, there’s a special type of layer for solid backgrounds called “Solid Colour” that always fills the background however large your animation is. This is analogous to “Background Colour” in Flash, only in Synfig Studio you could use a “Gradient” instead.

Now I needed to colour my artwork. I found a small bug in Synfig Studio which means that you cannot use the HTML-style RGB value (a six digit hexadecimal number) to enter colours. My background colour in hexadecimal was #171a17. When I entered this into Synfig Studio I got a mid grey, instead of the charcoal colour I was expecting.

A Lighter Shade of Dark

I went into the GIMP and discovered that #171a17 is equivalent to the RGB percentages 9% 10% 9%.

The GIMP Colour Picker information dialog

I entered the values 9%, 10%, 9% into the Red, Green and Blue spinboxes on the Synfig Colours dialog box, and I got the colour I expected. However, I also found that the HTML code displayed on the Colours dialog became 010101 – not what I expected!

In Synfig Studio, the HTML code is wrong

The ever-helpful Genete on the Synfig Studio Forums suggested that I might have a non-linear palette selected for my file, but this turned out not to be the case. So the moral of the story is, sadly, only enter colour values as RGB percentages.

Speaking of colours, it would be great if Synfig Studio could load GIMP palettes, or create a palette from the currently imported layers.

I then set about animating. This is quite different to Macromedia Flash as in addition to “keyframes” you also have the concept of “waypoints”. A “keyframe” stores every setting of every “layer” item on the current canvas at a particular point, whereas a “waypoint” just stores one setting. You also have to forget about the concept of “frames” that was so key to Macromedia Flash. Synfig Studio, in common with Swift 3D, uses the concept of time instead. As far as the time-line was concerned I am very glad that I had done some work in Swift 3D before approaching Synfig Studio.

Keyframe labels appear on the canvas too

One thing I did like is the fact you could label not only your layers but your keyframes – that saved me an awful lot of scribbling! Once you have your keyframes set up Synfig Studio really excels. There are numerous different ways of defining how the animation gets from one keyframe to another. The default was TCB which gives beautiful naturalistic movement, but for Spotlight it would cause arcing like this:

Arc caused by TCB Interpolation

When I really wanted linear tweening to give me straight edges like this:

Corrected by Linear Interpolation

Another little gotcha I found whilst animating was that the time-line starts at “0f”, not “Frame 1” as in Flash. This caught me out when I was putting the animation together, as I was getting odd blank frames!

Whilst animating I came across a niggle caused by my operating system. In GNU/Linux, Alt and left-click is used to move windows around. However, in Synfig Studio Alt and left-click is used to transform (i.e. scale) objects. Fedora 15’s desktop, GNOME 3, compounds this problem by removing the “Windows Movement Key” setting that you could adjust in GNOME 2 to change this behaviour. Fortunately the wonderful Synfig Studio forum came to the rescue, as “nikolardo” had a cunning work-around:

“Another workaround for the Alt issue presents itself when you realize it only happens when you Alt-click. Pressing Alt and then clicking gets picked up by the WM (openbox, in my case), but clicking on a vertex and then holding the Alt key produces the scaling behavior intended. So, next time you Alt-click and the window moves, let go, and then click-Alt.”

Whilst working I found that “Groups” were not what I expected at all. The purpose of Groups in Synfig Studio is to collect disparate items around your animation so they can be selected together. In fact, when creating the animation I never used any groups at all, although I can see how they would be useful on other animations.

I loved the fact I could enter a frame number e.g. 454 to move somewhere on the time-line and it got converted into seconds and frames. I tend to think in frame numbers and it’s great I don’t have to keep dividing by 25 and working out the remainder. This was a huge help when setting up keyframes.
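(For example, with the project at 25 frames per second, typing 454 lands you at 18 × 25 + 4 frames – displayed as 18s 4f.)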

Useful for creating guides at 0x and 0y

Another thing I found was I could use the Canvas Metadata window, which at first seemed useless, to adjust the guides. It would be even better if you could use pixels instead of internal units to adjust the guide positions in this window.

One thing I soon learned as I worked was that Synfig Studio’s canvas window is not always WYSIWYG, and the Preview Window isn’t always an accurate reflection of the end result either (but this is being rewritten for the next release) – you have to do a render in order to see how your final result is coming along. This is particularly true if you are using effects like Motion Blur. For instance, when the Spotlight S is rotating, this is what I get to see on the stage:

What you see in Synfig Studio…

Whereas this is what the end result looks like:

…is much more impressive when rendered!

Correction from Genete:

“That’s because your display quality settings were not set to high quality. There is a spin button on the top of the canvas that allows you to set the quality to 1 (better), instead of use 8 (worse) the default value. WYSIWYG is fully done always in Synfig Studio. The problem is that it takes some time to render complex effects like motion blur, duplicate layer, etc.”

For my renders I used a PNG sequence, and only rendered the frames I’d just worked on. One thing I noted when rendering is that the render progress bar and cancel button on the canvas window don’t work. In the future I would love it if a WebM render option was added to Synfig Studio, particularly given the popularity of YouTube.

Notice that zooms, blurs and colour corrections are layers.

As I’ve said before, in Synfig Studio everything is a layer. Not just every single shape, but a whole host of other things such as colour changes, blur effects and transforms. So, obviously, the number of layers soon gets large and unwieldy. However, you can “encapsulate” layers together into what are called “Paste Layers” and then deal with these encapsulated layers as one object.

The capsules show encapsulated layers

You may be thinking this sounds a bit like the Flash concept of having symbols, but it isn’t – yet. The encapsulated layers are still on the main canvas and therefore use the main canvas’s time-line. In order to use encapsulated symbols in a way analogous to Flash library symbols you need to “Export” the Paste Layer as a separate Canvas. It will then appear in the Canvas Browser.

The Canvas Browser

Now your capsule of layers is a canvas in its own right, with its own independent time-line and you can use it in a way akin to library symbols in Flash. As you work, you’ll find that the main canvas’s time-line gets cluttered with keyframes and waypoints, so it’s worth exporting encapsulated layers to simplify your work.

The only real downside of the Synfig Studio time-line design is shared by Swift 3D: you can’t add and remove things from your animation easily. If you want to “hide” something you have to set its Amount to 0, and then you have to fiddle about with waypoints with constant interpolation in order to show it again. It seems too much work when you simply want to put things on and take things off your canvas.

Exporting a Paste Layer after you have already done work on an animation needs some care. Key-frames are not brought across to the new canvas, and the exported animation duration defaults to 5s (five seconds), which means you have to increase it to the right length manually. So, before you start work on an animation it’s better to decide upon its structure first. But that was always the case anyway!

One minor thing – I found that I could only remove things from encapsulated layers by dragging and dropping, which was not discoverable for me – I expected to find another way of doing it via a button of some kind too.

Put a space in an Exported Canvas name and…

Entering a canvas name with a space in gives a message telling you about the C++ standard type library throwing an exception – not something most cartoonists would find particularly helpful!

When adding an exported canvas from the canvas browser to your main canvas you can offset its start-point by any number of frames. However, the offset needs to be a negative number of frames to make it appear a positive number of frames later, and a positive number to make it start earlier, which foxed me for a bit too!

Anyway, enough moaning – these are only very minor points! What you should take away from all this is that with exported canvases I found I could work exactly the same way as I was used to in Flash.

This does the hard work in the Spotlight animation.

Meanwhile, back to my animation. I wanted to emulate some optical film effects. The first one, motion trails, was easy to do with the Synfig Studio Motion Blur layer. This gives you a huge amount of control over the appearance of your finished trail.

Software doesn’t get any more magical.

I also needed some “optical glow”. I achieved this very easily by using the Colour Correct layer. This actually had a setting for Over Exposure – the exact effect I wanted to emulate – built into it! I was absolutely amazed! And not only that, I could animate the Over Exposure setting too. Incredible.

A bit of Blur (of which there are a dazzling array) helped to sell the glow even more.

The range of effects you can add to your animations in Synfig Studio is truly overwhelming. I think I’ll be blogging for months about the huge range of things you can do in Synfig Studio. It is an enormous amount of fun.

Zoom layers are a very clever idea.

To zoom in and out I used, naturally enough, the Zoom layer. Having a zoom on a separate layer is incredibly sensible when you actually start using it, but seemed very odd at first appearance.

And, it goes without saying, moving the dots around the canvas in Synfig Studio was simplicity itself.

So, here’s the finished result:

Did I mention Craig Rich knew my Granny…

Synfig files are very small and compact. The final file size was tiny – 11.9KB. I found that utterly incredible and it compares very favourably to Flash.

I could have completed these titles in about two hours in Macromedia Flash 8; in Synfig it took me two days to learn the tool and complete the animation, which I was quite pleased with.

Synfig is an excellent tool that is staying firmly installed on my computer! I really love using it and I am excited about what I can achieve with it in the future and the vast range of possibilities it opens up. It is powerful, flexible, stable and rewards the effort you put into learning it a thousand times over. It also has a friendly and helpful community. Recommended.

Washington Post

For a child born in 1971 and growing up in 70s Britain, probably the most magical place of all was BBC Television Centre. And, thanks to Blue Peter, it was a building I was pretty familiar with. After all, Peter Purves had shown me countless times that the building was ‘like a huge doughnut, with studios around the outside, offices inside the centre ring and a fountain in the middle’.

BBC Television Centre, front gate

One of the most distinctive features of the building was its signage. The same typeface was used on everything from cameras to warning lights to the front gate.

EMI 2001 with Raymond Baxter

The typeface employed was a very common sight when I was five years old. It was used all over Chard Post Office, on signs made by SWEB (the South Western Electricity Board), and even for signs on the changing room doors at Maiden Beech School in Crewkerne. But, as I grew up, this signage was slowly replaced by signs using more modern faces. By the early 80s BBC Television Centre was just about the only place where it could be seen.

BBC Television Centre Studio One

I’d always wondered what the typeface was. The first clue came when I bought the book Encyclopedia of Typefaces by W. P. Jaspert et al. The book contained a small scan of the face labelled as ‘Doric Italic’. This led me to search on font websites under the ‘Ds’ until I found a typeface called ‘AT Derek Italic’. This was close. In fact, it was very close. But it wasn’t right.

AT Derek Italic

For instance, in order to recreate the 1960s caption below, I had to alter the AT Derek lettering extensively:

BBCtv Science and Features recreated

The face used came up in conversation at The Mausoleum Club. The Mausoleum Club is a web forum for people who want to talk about proper television rather than the kind that we get these days.

By a stroke of good fortune, BBC Graphic Designer Bob Richardson was present, and for the first time he was able to tell me definitively the name of the font. It was called Washington. I then spent a couple of days plucking up the courage to ask Bob if he would be kind enough to send me a scan of the font so that I could recreate a digital version.

Bob was very, very kind and also keen to see a version of the font in TrueType form – I received a scan of Washington the next day. The scan he sent was taken from his copy of the BBC Graphic Design Print Room specimen sheets. The book contains all of the metal typefaces that were available to graphic designers (or ‘commercial artists’ as they were initially known) from the early 1950s until circa 1980.

Washington recreated by the BBC for a capgen

Bob told me that the BBC had actually recreated Washington in a format suitable for a caption generator for ‘The Lime Grove Story’ (a 1991 documentary to commemorate the closing of the BBC’s Lime Grove studios), but the BBC didn’t have a version of the font in TrueType form.

So, now I had a scan I needed to recreate the font. The plan was, as usual, to trace each character or ‘glyph’ in Inkscape

Tracing in Inkscape

…then import the glyphs I had traced into FontForge

Glyph imported into FontForge

…and use FontForge to generate the final typeface.

The finished typeface

This is exactly the same way as I had recreated the Central Television corporate font, Anchor and Oxford. Only this time I had the best source material possible.

As I’ve talked about recreating fonts extensively in the past I’ll just talk about a couple of things that were either new or different in this case.

P and R superimposed

The first thing of interest was that the font was a real, live metal type, and it wasn’t as ‘regular’ as I had come to expect from digital faces. The vertical stroke of the ‘P’ was quite different in width from that of the ‘R’, and both differed in width from the vertical stroke of the ‘D’.

It was this kind of irregularity that really gave the font its charm and sold it as an old metal typeface. Therefore I was determined to keep that as much as possible and not to try and make the font too regular and clinical by ‘fixing’ all these quirks.

R coming to the point

The second thing I needed to know was when to ignore curves. Letters such as the capital R would have curves at the corners where you would expect them to come to a point. I did toy with the idea of leaving these curves in place but that looked dreadful at large sizes so that was one thing I did end up ‘fixing’.

There were a number of glyphs I had to create myself, as they didn’t exist when Washington was created or were not a part of the original face. For instance the Greek letter mu is a combination of the letters p, q and u:

P, Q, U make a MU, Cuthbert dribbled and guffed

I also added things like Euro and Rupee currency symbols, copyright and trademark symbols and so on.

One thing I did this time, which I should have done before, was get FontForge to create all the accented glyphs for me. In other words, instead of creating separate Inkscape files for each accented character and importing them into FontForge, I simply created each accent as a glyph and got FontForge to automatically create all the accented characters for me. This saved me a huge amount of time.
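If you have a lot of glyphs to build, this step can even be scripted – FontForge has a Python interface, and font.build() should do the same job as the Build Accented Glyph menu item. A minimal sketch (the file names here are just examples, not my actual project files):

    import fontforge

    font = fontforge.open("washington.sfd")  # hypothetical working file
    font.selection.all()                     # operate on every glyph slot
    font.build()                             # build accented glyphs from bases + accents
    font.generate("Washington.ttf")          # write out the finished TrueType font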

Once you’ve created these few characters…

It’s important for me to have a decent coverage of the Latin alphabet as I know first hand how frustrating Hungarians find it to have to use a tilde or diaeresis instead of their double acute. I also like to make sure that the Welsh language can be used with any typeface I create.

…you get all these free!

FontForge created the accented glyphs almost perfectly and out of a few hundred glyphs I only needed to adjust half a dozen by hand. I found this pretty amazing.

Buoyed by my success at automatic accented glyph creation, I thought I’d try some automatic kerning. Kerning is the adjustment of the spaces between letters. For instance, the distance between the letters ‘T’ and ‘o’ in ‘To’ is quite different to the distance between the letters ‘T’ and ‘h’ in ‘The’.

Good kerning makes all the difference to the appearance of a typeface. Here’s the word ‘colour’ unkerned…

Colour with no kerning

…and here it is kerned.

Colour kerned

For all my other fonts I had sat down and kerned every possible letter combination by hand. The results were excellent, but it involved a large amount of wasted effort. The reason is that many letters (e.g. c, o and e) kern exactly the same as each other. FontForge not only allows you to put these letters into ‘classes’ to kern as a group, but it will also detect these ‘classes’ for you and attempt to kern them all into the bargain.

Kerning by classes

I tried using this feature for the first time with Washington, and it worked pretty well for most letter combinations. However I do need to tweak this kerning by hand to ensure that all possible combinations of letters look good. Until this is done the font is only really useful for desktop publishing or vector art where you can alter the kerning of each letter combination by hand.

This task will take two or three days to do and it’s not something I want to do now, as it is really a job you need to come to fresh. So in about a month or so I’ll kern the font and release version 1.1 – I’ll post here when the hand kerned version is available.

So when the font is exported, how does it fare? Well, here’s an example I put together which compares Washington to AT Derek:

A comparison

As you can see, AT Derek may be more elegant but Washington is definitely more ‘BBC’!

The Washington Book typeface is released under the SIL Open Font License.

All the software I used to create the typeface was free software, including the operating system – Fedora.

You can download the latest version of the Washington font from here. Windows owners will need 7-zip to uncompress the archive. The font is free – the only thing I ask is that if you find it useful please drop me a line or add a comment below as I’d love to hear from you.

Replay Replayed

Replay Expo time is fast approaching again, which is why Barbara Kelly and Lady Isobel Barnett are pictured below modelling an original piece of my artwork:

Sadly, Doris Speed wasn’t free.

Replay Expo is an Arcade, Video Game and Retro show that takes place every autumn at the Norcalympia Exhibition Centre in Blackpool. Last year’s event attracted 3,200 visitors over two days and the organisers are hoping to attract 5,000 this time. The show is timed to coincide with the last weekend of Blackpool Illuminations.

Last year I was involved in designing fliers, banners, advertisements and the website for the show and the organisers very kindly asked me if I would like to continue doing so this year.

r3play 2010

The first thing I needed to do was devise a “Replay” logo for this year’s event. The brief was “the same, but different”. The previous logo was originally designed by “Greyfox”, also known as the talented Irish graphic designer Darren Doyle. It was a beautiful logo and worked fantastically well so I wanted to keep as close to it as possible.

I had two main ideas. Firstly, I wanted to make the logo a little more colourful, as the show will be a little more colourful this year. Secondly, I wanted to include a cartoony black outline around the lettering to increase the contrast from a distance, and also to evoke the black outlines around cartoony video game characters.

In addition, the logo had to be a vector illustration, as I would need to export it at some very large sizes indeed. This meant I created it entirely in Inkscape.

This is what I came up with:

replay 2011

It was one of the only occasions I’ve ever got it right first time! You’ll notice I had to reverse the “E”, because last year people insisted on calling the previous event “are three play”, which rather upset the organisers!

B790 – I’m sure this face has a real name!

Next was the question of typography. Last year was easy – I was using lots and lots of lovely Microgramma. This year it was again “the same, but different”, so I settled on a Hermann Berthold art deco typeface called B790. This was similar enough to Microgramma that I could use it in the same sort of ways, whilst at the same time looking very different.

The one thing I was disappointed about this year was the design of the 2011 lettering. I spent day after day producing draft after draft:

My favourite – I spent hours on this!

Another massive fail

Obviously massive fails come in threes

However, nothing I produced seemed to grab the client – something that was entirely my fault. In the end, with half an hour or so to spare before everything needed to go off to the printers, I gave up and produced something quick that I’m really ashamed of.

At least the client liked it.

As you can see, the B790 ended up with a bit of a starring role as I used it for the word “EXPO”.

Producing the fliers and roll-up banners for Replay Expo was rather interesting this year, as the printers decided that only CMYK PDF files were acceptable. In previous years they had accepted RGB TIFF files exported at 300dpi (dots per inch), which I could produce from either The GIMP or Inkscape. But neither Inkscape nor The GIMP can currently produce CMYK PDF files. Therefore, after meaning to do so for nearly three years, I finally had a good reason to get to grips with Scribus, a free software desktop publishing package.

The first thing I had to do was a lot of reading. The Scribus documentation is excellent and very thorough, so it was a pleasure to go through it all. Then I went through the tutorial. I had to do that when the children were at school as the first couple of hours featured a statue of a rather forgetful Indian lady who had absentmindedly neglected to put on her undergarments.

She’ll catch her death of cold…

Fortunately, I had colour management set up on my computer, so soft-proofing worked perfectly. This meant that whatever I saw on screen was very close to how my finished artwork would appear in print.

I produced my Replay logo artwork in Inkscape, and exported the logo as a 300dpi RGB PNG file for import into Scribus. Usually I could import my Inkscape files directly into Scribus, but in this instance I was using Inkscape layer effects (i.e. SVG image filters) that Scribus is currently unable to cope with.

Flier created in Scribus

I then created the text and frames directly in Scribus, and imported the photographs into them. It’s actually a very nice way of working as you are using each tool for what it does best.

Once finished, I could export my Scribus file as a CMYK PDF, send it off to be printed and hope for the best.

This was all completely new to me, and I was really nervous as to whether my exported PDF files would even be accepted by the printers, let alone print properly. What was worse was the fact that the Gadget Show Live event was two days away and there wasn’t time to do anything if my files were no good!

Anne Ladbury and Mary Morris

However, as you can see, they turned out quite well. Scribus is an excellent piece of software and I would recommend it to anyone.

Replay Expo takes place at the Norcalympia Exhibition Centre, Norbreck Castle, Blackpool on the 5/6 November 2011. Tickets for the event are available from http://replayexpo.com/tickets

ZX Spectrum Filter Revisited

Well, a day is a long time in Free Software. Since I posted yesterday about the ZX Spectrum filter for The GIMP, I’ve had a lovely exchange of e-mails with the original author nitrofurano, I’ve improved the filter further and I’ve found out why it was written.

Spectral Spectrum

First things first, improving the filter. I had become rather rusty at working on filters for The GIMP but eventually everything came flooding back to me.

The first thing that helps when writing a Python filter in The GIMP is to run The GIMP from the command line in a terminal window. That way you get to see all the error messages the plug-in produces and are not working “blind”. You can also see the output of any print statements you add to help you debug.

The second thing I remembered was that you should use a symbolic link to the filter in The GIMP’s plug-in folder, so you can work somewhere more convenient than a hidden folder that’s several levels down.
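On my machine that’s just a one-liner – something like this, with both paths adjusted for wherever your project and your copy of The GIMP keep their files:

ln -s ~/Projects/zxspectrum/zxspectrum.py ~/.gimp-2.6/plug-ins/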

Sugary Spectrum

Once I’d got myself working sensibly I could have a look at improving the filter. The first thing I did was to speed up the filter using this technique described in Akkana’s blog. It cuts down on writing to the actual image, which is slow. Instead you copy the image to a byte array, work with that, and then copy all the bytes back to the image when you have finished. Using Akkana’s technique had the added bonus of allowing the filter to be adapted easily to work with either RGB or RGBA images.

However, the resulting changes didn’t seem to generate the desired increase in speed until I realised I had stupidly queried the image class for its dimensions repeatedly instead of storing the values in variables. Once I did that, the filter flew.
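For anyone wanting to try the same trick, here is the bare bones of it (a sketch along the lines of Akkana’s post rather than the actual Spectrum filter – the function name and the crude threshold operation are just mine for illustration):

from array import array
from gimpfu import *

def fast_filter(img, layer):
    # Cache the dimensions once -- querying the layer object
    # repeatedly inside a loop is surprisingly slow
    width, height, bpp = layer.width, layer.height, layer.bpp

    # Copy the whole layer into a byte array in one go...
    src_rgn = layer.get_pixel_rgn(0, 0, width, height, False, False)
    pixels = array("B", src_rgn[0:width, 0:height])

    # ...work on the bytes in memory (here a crude threshold;
    # the real filter does its Spectrum palette magic instead)...
    for i in xrange(len(pixels)):
        if i % bpp < 3:  # skip the alpha channel if there is one
            pixels[i] = 255 if pixels[i] > 127 else 0

    # ...then write everything back in a single operation
    dst_rgn = layer.get_pixel_rgn(0, 0, width, height, True, True)
    dst_rgn[0:width, 0:height] = pixels.tostring()
    layer.flush()
    layer.merge_shadow(True)
    layer.update(0, 0, width, height)

Because the byte array is just a flat run of red, green, blue (and possibly alpha) values, the same code copes with both RGB and RGBA images – the bpp value takes care of the difference.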

Nitrofurano (Paulo Silva) has been gracious and very encouraging as I’ve been hacking his lovely code to bits. He’s also as enthusiastic about free software as I am. I think it’s fantastic that people who have never met before can work on each other’s software, share ideas and get to know each other – the GPL really does work as advertised.

Sinclair TV – thanks to Nitrofurano

The reason the code was written originally was to be part of a very interesting project Paulo is working on to create “retro” vision web-cams. You can find out more about it here.

You can download the updated python-fu ZX Spectrum filter for The GIMP here.

Totally Bazaar

Well, after promising myself I’d get around to doing this long, long ago I’ve finally put my house (or should that be home?) in order and submitted my Projects folder to the discipline of a version control system.

Nautilus, but nice

I create a huge amount of work on various projects in the course of a year – much of which I blog about here. And I often go back and revisit files numerous times. But to err is human, and sometimes I mess things up or find that my new version wasn’t the improvement I’d hoped for. Therefore I tend to create large numbers of back-ups, just in case.

Up until now I’ve been using an ad hoc system of either backup folders or appending version numbers to the end of file-names. However this is messy, wasteful on disc space and prone to error. That’s where a version control system comes in.

Even though version control systems are usually seen as a collaboration tool, they are also a really good idea for personal projects.

I’d actually chosen the version control system I wanted to use a couple of years ago – it’s the extremely elegant Bazaar (or BZR), a free software project started by Canonical. The Bazaar project not only encourages personal use of version control – they even provide instructions especially for personal users.

There were three big attractions of Bazaar for me. The first is that it has a single command – bzr. The second is that it puts everything in a single hidden folder – .bzr – inside the project itself; I don’t need a database or a server or anything like that. And the third is that it takes five minutes to learn. I actually use it from the command line, but there are graphical front ends for it too.
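If you’ve never used it, the whole workflow really is as simple as this (a sketch, with an illustrative project folder):

# Put a project folder under version control
cd ~/Projects/my-project
bzr init
bzr add
bzr commit -m "Initial import"

# Day-to-day use
bzr status                        # what have I changed?
bzr diff                          # show me the changes
bzr commit -m "Rework the logo"   # record a new version
bzr revert                        # undo my uncommitted mess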

As most of my work these days is either in Python or Inkscape – which are both based on text files – there was no excuse for not using a version control system earlier.

Lost In Translation

I’ve blogged a few times about ClapTraps, the ingenious free software puzzle game written in PyGame and Python by testpilotmonkey. The last time I blogged about it, I mentioned I was planning to add i18n (internationalisation) and a bit of l10n (localisation) to the game.

Claptraps in British English

In terms of internationalisation, what I wanted to do was to make the game multi-lingual so that on my daughters’ computer, where they use Hungarian rather than English, they would be able to play the game in Hungarian.

I had been under the mistaken impression that all I needed to do to prepare a Python script for i18n was to import the gettext library and then surround all translatable strings with the function _().

It turned out to be a little bit more complicated than that. The reason was that I wanted ClapTraps to stay self-contained in a single folder rather than force the user to install it.

ClapTraps’ self-contained folder structure

Normally when GNU/Linux programs are installed on a computer, the compiled translation files are copied to a central location for access by all users of the computer. On Fedora GNU/Linux, that location is /usr/share/locale, and by default Python’s gettext module expects to find the compiled translation files there. However, I wanted my translations to remain in the ClapTraps folder.

It would have taken me quite some time to work out what to do if Mark Mruss had not already solved all the problems for me in his excellent blog post on translating Python/PyGTK programs. There’s a fantastic snippet of code there called “Translation stuff” that magically does it all!
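The core of the idea looks something like this (a sketch of the approach rather than Mark’s snippet verbatim – the locale folder name is my own choice):

import gettext
import os

APP_NAME = "claptraps"

# Look for compiled .mo files in a "locale" folder next to the
# script, instead of the system-wide /usr/share/locale
LOCALE_DIR = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), "locale")

# fallback=True means the original English strings are used if no
# translation exists for the current language
translation = gettext.translation(APP_NAME, LOCALE_DIR, fallback=True)
_ = translation.gettext

After that, every string wrapped in _() is looked up in locale/<language>/LC_MESSAGES/claptraps.mo at run time.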

The next step, now I had the Python script prepared, was to generate a POT (Portable Object Template) file. The POT file is the template file from which individual translations can be prepared.

The POT file for ClapTraps

It contains a header, then a list of entries, each showing the line on which a translatable string appears, the id of the translatable string (the msgid) and a space for the translation (the msgstr).

One problem I found when first creating my POT file was that it only had five strings in it! The reason was that testpilotmonkey had used single quotes for most of his strings, but xgettext, the tool you use to create a POT file, only seemed to recognise the double-quoted ones.
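For reference, creating the template was along these lines (a sketch, with illustrative file names):

xgettext --language=Python --output=claptraps.pot claptraps.py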

From my POT file, I used the msginit tool to create two PO (Portable Object) files – one for en_GB (British English), the other for hu (Hungarian). Now came the really hard part – the Hungarian translation! I did this myself, but it took me three days until I was happy. The main problem was my difficulty with the imperative mood in Hungarian. Fortunately the excellent website HungarianReference.com helped enormously. My Hungarian is still a bit dodgy, but I like to think that adds to the charm.
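The msginit step itself was along these lines (a sketch – putting the PO files in a po folder is just my own choice):

mkdir -p po
msginit --locale=en_GB --input=claptraps.pot --output-file=po/en_GB.po
msginit --locale=hu --input=claptraps.pot --output-file=po/hu.po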

Finished PO file with Hungarian translation

Now all I needed to do was compile my PO files into MO files and I could see if it worked.
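Compiling is a one-liner per language with msgfmt (a sketch – the folder layout is the one gettext expects to find next to the script):

mkdir -p locale/hu/LC_MESSAGES
msgfmt po/hu.po --output-file=locale/hu/LC_MESSAGES/claptraps.mo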

To test the game I used the LANG environment variable from the command line. To force the game to run in Hungarian whilst running Fedora 13 in British English I typed LANG=hu python claptraps and….

Magyar and testpilotmonkey – a tricky combination!

Success! That was a lovely feeling. Now the next challenge was a bit of l10n. The problem I faced was that ClapTraps regularly asks the user to press Y for Yes or N for No. But in Hungarian that should be I for Igen or N for Nem. So I needed some way of changing the keys that you need to press to suit the current language.

This turned out to be quite easy – first I had to import the locale library. Next, I just needed to ask the locale module for the initial letters used for "yes" and "no" in the current language, like this:

# Make lists of "yes" and "no" keys for the current language
import locale

locale.setlocale(locale.LC_ALL, '')
# nl_langinfo returns a regex like "^[yY].*"; the slice strips the
# leading "^[" and the trailing "].*", leaving just the letters
yes_keys = list(locale.nl_langinfo(locale.YESEXPR)[2:-3])
no_keys = list(locale.nl_langinfo(locale.NOEXPR)[2:-3])

The nl_langinfo function returns a regular expression string with the acceptable keys for Yes or No for the current locale. For English it would be:

^[yY].*
^[nN].*

For Hungarian it would be:

^[IiyY].*
^[nN].*

Note that Y is also acceptable for Hungarian. I used the Python slice operator to slice off the bits I didn’t want and then bunged the result in a variable. So now, when I want the user to press yes or no I simply do this:

while True:
    wait_event = pygame.event.wait()
    if wait_event.type == pygame.KEYDOWN:
        # Compare the name of the pressed key against the lists
        # of acceptable "yes" and "no" keys for this locale
        if pygame.key.name(wait_event.key) in yes_keys:
            return False
        elif pygame.key.name(wait_event.key) in no_keys:
            program_quit = False
            break

And that solves it. Since I made these changes to ClapTraps my daughters have both had hours of fun out of the game. It’s a tribute to the genius of testpilotmonkey – and Richard Stallman. For, without the GPL, I wouldn’t have been free to make these changes which allowed my children to enjoy this fantastic piece of software.

ClapTraps v1.1

You might remember that in April I blogged about the computer game ClapTraps, which was written by my friend testpilotmonkey. ClapTraps is a fiendishly addictive puzzle game written in Python using the PyGame SDL library.

ClapTraps v1.1 –  reworked title screen

Back in May I also blogged about my Inkscape reworking of the graphics for the game.

ClapTraps v1.1 – reworked graphics

Since then testpilotmonkey has released a new, expanded version of the game – ClapTraps v1.1. It includes my reworked graphics and a number of new features.

For instance, you can now redefine the control keys. This is very useful if you, like me, have a Hungarian keyboard!

Even better, people who can code in Python can easily change the behaviour of the game objects, just as you can in Repton Infinity. To demonstrate this, ClapTraps comes with an XOR-style game called Arrows, which has the potential to be very interesting indeed.

Arrows – a real head scratcher

Despite my best efforts to recommend the game to everyone I know, ClapTraps has been largely ignored, even by fans of Repton. I think this is inexplicable and a crying shame. It’s an absolutely fantastic puzzle game and has given me and my daughters a lot of fun. It’s free as in beer and free as in freedom so there is no reason not to download it and have a go – and let testpilotmonkey know what you think too!

It is available to download for Windows, GNU/Linux and OS X here.

NB for GNU/Linux users: I found I couldn’t run the ClapTraps code by double-clicking on the main ClapTraps.py icon from Nautilus, the GNOME file browsing program. After much experimentation I found the problem was simply that ClapTraps.py was corrupted in some way. If you copy and paste its contents into a new file it will work correctly. The pasted file ends up 2K shorter than the original but works perfectly – my guess is that some invisible characters (perhaps Windows-style line endings) were to blame.

Dave Dunce Goes Gloss

At the end of April, I blogged about Claptraps – my friend testpilotmonkey‘s new computer game. Claptraps is a fiendishly addictive puzzle game written in Python using the PyGame SDL library.

I also noted at the time that Claptraps was a piece of free software – testpilotmonkey released it under the Free Software Foundation’s General Public Licence, or GPL. This means that anyone is free to study or modify the source code and send copies of the game to their friends.

I felt that releasing something as fantastic as Claptraps under the GPL was an act of generosity that should not go unrewarded. As I don’t have any money, I decided to donate some of my time instead.

The place I felt best able to help out was the graphics. Claptraps is a tile-based game, with 40 x 40 pixel tiles which are stored as 32-bit Portable Network Graphics (PNG) files. testpilotmonkey had decided on a cartoon look for his graphics, with flat fills bounded by black outlines. Here is a screenshot of the game with testpilotmonkey’s original graphics:

Original graphics – Click to enlarge

I love these graphics, as they have bags of character and the animation is very pleasantly squishy and organic. But I thought it might be nice to take these graphics and see if I could make them a bit three-dimensional. To do this I decided to use a piece of free software called Inkscape. The reason I decided to use Inkscape, as opposed to a raster graphics package like The GIMP, was that I felt the cartoonish nature of the graphics lent itself to the vector format. It also meant I could create one set of graphics that could be used at any size.

I made my canvas 40 x 40 pixels in Inkscape, for convenience. As far as guides were concerned, I just used two – one halfway across vertically and one halfway across horizontally – to mark out the centre of the canvas for me. I imported the original graphics into Inkscape and used the Pen tool to trace over them. For each graphic, I created a separate Inkscape SVG (Scalable Vector Graphics) file.

As I was creating vectors to export to (rather small) bitmaps, there were a couple of things I needed to bear in mind when tracing the original graphics to ensure good results. The first was that I wasn’t free to put black outlines anywhere I chose – otherwise the end result would have been rather grey and muddy due to too much anti-aliasing on the exported bitmaps. To keep anti-aliasing to a minimum (and hence keep the graphics as clear and colourful as possible) I needed to ensure that the nodes of my lines and Bézier curves sat on half-pixel boundaries. The second was to ensure that all my outlines had a one-pixel stroke width and were completely black – sounds obvious, but easy to mess up.
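Incidentally, exporting the finished SVG files at exactly 40 x 40 pixels can be done from the command line too, which is handy when you have a whole folder of tiles (a sketch using the Inkscape 0.4x command-line options, with illustrative file names):

inkscape --export-png=frog.png --export-width=40 --export-height=40 frog.svg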

One traced frog – click to enlarge

Once I had an outline for a graphic it was time to colour it in, and again it was Nancy Kominsky to the rescue. I used her technique of three tones – light, medium and dark – to fill each shape. Then, if necessary, I’d add any highlights with light yellow and any lowlights with dark purple. The nice thing about Painting Along with Nancy in Inkscape is that I don’t have to put my spare colours into the fridge and cover them with cling film.

Incidentally, in case you think I’m joking about following Nancy Kominsky’s advice, here’s the Claptraps apple filled using a radial fill made up of my three tones – light, medium and dark red:

Where’s the light coming from Alan?

And here are the three tones I used in the gradient fill editor:

Cadmium orange, vermilion and alizarin crimson

Here is my purple lowlight and my lemon yellow and white highlight:

That’s one teaspoon of purple, the colour…

And here is the finished apple – Nancy Style.

And no need to apply any varnish

And, just for you to compare, here are Nancy’s apples:

Picture courtesy Rory Clark

So, after much colouring in but, sadly, no Alan Taylor to chat to while I was working, I had a finished set of graphics:

As seen in Fedora 13 Nautilus

And here are the finished graphics in place in the game:

Click to Enlarge

I was determined when working on the graphics that my input be as invisible as possible – Claptraps has bags of testpilotmonkey’s character and I wanted to keep that. The graphics remain very much testpilotmonkey’s, just coloured in slightly differently.

After showing testpilotmonkey my graphics, he said he’d like to use them in future releases of the game – which was very kind of him and made all the effort worthwhile. I’m hoping to spend more time on Claptraps and to return to it on this blog soon. In the meantime, why don’t you download the game and have a play? You can get hold of it here.