ATV Yesterday and Today

If you’ve read my blog before, you may have come across some posts about my friend Roddy Buxton. Roddy is an incredibly inventive chap – rather like Wallace and Gromit rolled into one! He has his own blog these days, and I find everything on it fascinating.

One of Roddy’s cracking contraptions

One of the subjects recently covered on Roddy’s blog is the home-made telecine machine he built. A telecine is a device for transferring pictures from film to television; John Logie Baird began work on the idea at the very dawn of broadcasting, back in the 1920s.

Roddy also shares my love of everything ATV, so naturally one of the first films Roddy used to demonstrate his telecine was a 16mm film copy of the ATV Today title sequence from 1976.

This title sequence was used from 1976 to 1979 and proved so iconic (no doubt helped immeasurably by the rather forgetful young lady who neglected to put her dress on) that it is often used to herald items about ATV on ITV Central News. Sadly, as you can see below, the sequence was not created in widescreen, so it usually looks pretty odd when it’s shown these days.

How the sequence looks when broadcast these days.

The quality of Roddy’s transfer was so good I thought it really lent itself to creating a genuine widescreen version. In addition, this would provide me with a perfect opportunity to learn some more about animating using the free software animation tool Synfig Studio.

The first thing to do when attempting an animation like this is to watch the source video frame by frame and jot down a list of key-frames – the frames where something starts or stops happening. I use a piece of free software called Avidemux to play video frame by frame. Avidemux is like a Swiss Army knife for video and I find it handy for all sorts of things.

Video in Avidemux

I write key-frame lists in a text file that I keep with all the other files for a project. I used to jot the key frames down on a pad, but I’ve found using a text file has two important advantages: it’s neater and I can always find it! Here is my key-frame list in Gedit, which is my favourite text editor:

Key-frame list in Gedit

After I have my key-frame list I then do any experimenting I need to do if there are any parts of the sequence I’m not sure how to achieve. It’s always good to do this before you start a lot of work on graphics or animation so that you don’t waste a lot of time creating things you can’t eventually use.

The ATV Today title sequence is mostly straightforward, as it uses techniques I’d already used in the Spotlight South-West titles I created last year. However, one thing I was not too sure about was how to key video onto the finished sequence.

Usually, when I have to create video keyed onto animation I cheat. Instead of keying, I make “cut-outs” (transparent areas) in my animation. I then export my animation as a PNG32 image sequence and play any video I need underneath it. This gives a perfect, fringeless key and was the technique I used for my News At One title sequence.
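
If you ever need to do that compositing step outside a video editor, ffmpeg’s overlay filter respects the alpha channel of a PNG32 sequence. Here’s a minimal sketch of the sort of command I mean, wrapped in Python and with made-up file names – my actual compositing was done in my editor, so treat this as illustrative only:

```python
#!/usr/bin/env python3
# Sketch: composite a PNG32 animation (with transparent "cut-outs") over a
# video using ffmpeg's overlay filter. File names here are examples only.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "film_insert.avi",                 # the video that shows through the holes
    "-framerate", "25",
    "-i", "animation/frame.%04d.png",        # PNG32 sequence exported from the animation
    "-filter_complex", "[0:v][1:v]overlay",  # animation goes on top, alpha intact
    "-c:v", "huffyuv",                       # keep it lossless for later stages
    "keyed_insert.avi",
], check=True)
```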

However, with this title sequence things were a bit trickier – I needed two key colours, as the titles often have two completely different video sequences keyed onto them at the same time.

Two sequences keyed at once

Therefore I had to do proper chromakeying in Kdenlive using the “Blue Screen” filter, something I had never had much success with before.

The first part was simple enough: I couldn’t key two different video sequences onto two differently coloured keys at once in Kdenlive, so I had to key the first colour, export the video losslessly (so I would get no compression artefacts), then key the second colour.

The harder part was making the key look smooth. Digital keying is an all-or-nothing affair – a pixel is either keyed or it isn’t – so what you key tends to have horrible pixellated edges.

Very nasty pixel stepping on the keyed video

The solution to this problem was obvious, so it took me quite a while to hit upon it! The ATV Today title sequence is standard-definition PAL widescreen. However, if I export my animation at 1080p HD and do my keys at HD, they have much nicer rounded edges because the pixels are “smaller”. I can then downscale the video to standard definition once the keying is done and keep the rounded effect I was after – I’ve sketched the scaling steps below.

Smooth keying, without pixel stepping
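
If you’d rather do the up- and down-scaling on the command line than in project settings, ffmpeg’s scale filter does the job. This is only a sketch with example file names and square-pixel frame sizes – the chromakeying itself still happens in Kdenlive between the two steps:

```python
#!/usr/bin/env python3
# Sketch of the "key at HD, deliver at SD" trick. File names and frame sizes
# are examples; the Blue Screen keying happens in Kdenlive in between.
import subprocess

# 1. Upscale the SD animation to 1080p so the key edges land on "smaller" pixels
subprocess.run(["ffmpeg", "-i", "animation_sd.avi",
                "-vf", "scale=1920:1080:flags=lanczos",
                "-c:v", "huffyuv", "animation_hd.avi"], check=True)

# ...key animation_hd.avi in Kdenlive and export the result losslessly as keyed_hd.avi...

# 2. Downscale the keyed result back to square-pixel PAL widescreen
subprocess.run(["ffmpeg", "-i", "keyed_hd.avi",
                "-vf", "scale=1024:576:flags=lanczos",
                "-c:v", "huffyuv", "keyed_sd.avi"], check=True)
```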

The other thing I found is that keying in Kdenlive is very, very sensitive. I had to do lots of test renders on short sections as there was only one “Variance” setting (on a scale between 1 and 100) that was exactly right for each colour.

So, now that I was convinced I could actually produce the sequence, it was time to start drawing. I created all of the images for the sequence in Inkscape, which is a free software vector graphics tool based around the SVG standard.

However, in order to produce images in Inkscape I needed to take source images from the original video to trace over. I used Avidemux to do this. The slit masks that the film sequences are keyed on to are about four screens wide, so once I had exported all the images I was interested in I needed to stitch them together in the free software image editor The GIMP. Here is an example, picked totally at random:

She’ll catch her death of cold…

Back in Inkscape, I realised that the sequence was based around twenty stripes, so the first thing I did before creating all the slit mask images was to create guides for each stripe:

These guides saved me a lot of time

The stripes were simply rounded rectangles that I drew in Inkscape. It didn’t take long to trace all of the slit masks for the title sequence. Two of the masks were repeated, which meant that I didn’t have as many graphics to create as I was fearing.

Once the slit masks were out of the way I could create the smaller items such as the logo:

ATV Today logo created in Inkscape

And, with that, all the Inkscape drawing was done. It was time to animate my drawings now, so I needed to export my Inkscape drawings into Synfig Studio. To do this I was able to use nikitakit’s fantastic new Synfig Studio SIF file Exporter plug-in for Inkscape. This does a fabulous job of enabling Inkscape artwork to be used in Synfig Studio, and it will soon be included as standard in Inkscape releases.

When I did my Spotlight title sequence I exported (saved) all of my encapsulated canvases (akin to Symbols in Flash) that I needed to reuse within my main Synfig file. This was probably because I came to Synfig from Macromedia Flash and was used to the idea of having a large file containing all the library symbols it used internally.

I have been playing with Synfig Studio a lot more since then, and I realised that a far more sensible way to work is to save each of what would have been my Flash library symbols as a separate Synfig file. Therefore I created eight separate Synfig Studio files, one for each part of the sequence, plus a master file that imports them all and is used to render out the finished sequence.

The project structure

This meant that my finished sequence was made up of nine very simple Synfig animation files instead of one large and complicated one.

The animation itself mainly consisted of simply animating my Inkscape slit masks across the stage using linear interpolation (i.e. a regular speed of movement).

I could type my key-frames from my key-frame text file directly into the Synfig Studio key-frame list:

Key-frames for one part of the animation

The glow was added to the ATV Today logo using a “Fast Gaussian Blur”, and the colour was changed using the “Colour Correct” layer effect – exactly the same techniques I used in the Spotlight South-West titles.

ATV Today logo in Synfig

In order to improve the rendering speed, I made sure I changed the “Amount” (visibility) of anything that was not on the stage at the present time to 0, so the renderer wouldn’t bother trying to render it. You do this using Constant interpolation, so that the value is always either 0 or 1.

I had a couple of very minor problems with Synfig when I was working on this animation. One thing that confused me at times was the misalignment of the key-frame symbols between the Properties panel and the Timeline.

This misalignment can be very confusing

As you can see above, the misalignment gets greater the further down the “Properties Panel” something appears. This makes it quite hard at times to work out what is being animated.

Some very odd Length values indeed!

Another problem I had was that the key-frame panel shows strange values in the Time and Length columns – particularly if you forget to set your project to 25 frames per second at the outset.

However, overall I think Synfig Studio did brilliantly, and I would choose it over Flash if I had to create this sequence again and could pick any program to do it in.

The most important technical benefit of Synfig Studio for this job was the fact that it uses floating-point precision for colour, so the glows on the ATV Today logo look far better than they would have done in Flash, where the colour values would have been prematurely rounded before the final render.

I rendered out my Synfig Studio animation as video via ffmpeg using the HuffYUV lossless codec, and then I was ready to move on to Kdenlive and do the keying.
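
For anyone curious, that render step boils down to a single ffmpeg call along these lines. The file names are examples and the exact flags are my reconstruction rather than a transcript of what I typed:

```python
#!/usr/bin/env python3
# Sketch: turn the numbered PNG sequence rendered from Synfig Studio into a
# lossless HuffYUV AVI at 25fps, ready for keying in Kdenlive.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "25",              # PAL frame rate
    "-i", "render/frame.%04d.png",   # numbered PNG sequence from Synfig Studio
    "-c:v", "huffyuv",               # lossless, so no artefacts before keying
    "atv_today_titles.avi",
], check=True)
```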

Obviously I needed some “film sequences” to key into the titles, but I only have a small selection of videos as I don’t have a video camera. To capture video I use my Canon Ixus 65, which records MJPEG video at 640 x 480 resolution at 30fps.

My 16mm film camera

Bizarrely, I’ve found that the progressive nature of its output, coupled with the quite noisy pictures it produces, makes it a perfect digital substitute for a 16mm film camera!

I “filmised” all the keyed inserts, so that by the time they appear in the sequence they will have been filmised twice. Hopefully this gives something like the degradation in quality you get when film is copied to another film using an optical printer.

Once the keying was done, the finished sequence was filmised entirely in Kdenlive, using techniques I’ve already discussed here.

And so, here’s the finished sequence:

Although I’m not happy with the selection of clips I’ve used, I’m delighted with the animation itself. I’m also very pleased that I’ve completed another project entirely using free software. However, I think the final word should go to Roddy:

Thanks for the link. I had a bit of a lump in my throat, seeing those titles scrolling across, hearing the music, while munching on my Chicken and Chips Tea… blimey, I was expecting Crossroads to come on just after!

If you are interested in ATV, then why not buy yourself a copy of the documentary From ATV Land in Colour? Three years in the making and over four hours in duration, it contains extensive footage (some not seen for nearly fifty years) and over eleven hours of specially shot interviews edited into two DVDs.

Sunday’s Newcomers

Click to enlarge

Going through my old Flash files, I stumbled across an early version of this image, which I first produced in 2005. I didn’t know how to make it look realistic then, but I’ve since received lots of good advice from Rory Clark. This new version was produced in Inkscape and aged in The GIMP.

In case you’re wondering, these were all real IBA Transmitters.

Doing my pennants…

I often spend idle half hours looking around Flickr for anything of interest. The other day I found a very nice Anglia logo from 1959. Obviously, I couldn’t resist recreating it in Inkscape while I was listening to a podcast:

Click to enlarge

This stylised Anglia pennant logo formed the basis of Anglia Television’s original end-caps, including the one seen on their opening programme.

Cheap Dirty Film

Three years ago I talked about the programs I used to simulate old 16mm film. Back in 2008 I was using Windows XP, Adobe Premiere Elements 4.0 and a VirtualDub filter called MSU Old Cinema. I found I could use them to create some half-decent 16mm film:

These days I’m using Fedora 15 as my operating system and Kdenlive as my offline video editor. That means I’ve had to change the way I simulate old film quite a bit. I had been continuing to use VirtualDub and the MSU Old Cinema plug-in via WINE, but although VirtualDub is free software, the MSU Old Cinema plug-in is not, and this bothered me. So I wondered what I could achieve in Kdenlive alone, and I started experimenting.

In the course of this blog post I’m going to use the same image – an ITV Schools light-spots caption from the 70s that I recreated in Inkscape. Here’s the original image exported directly from Inkscape as PNG:

Created in Inkscape

The most obvious sign that you are watching something on a bit of old film is the little flecks of dirt that momentarily appear. If the dirt is on the film itself it appears black; if it was on the negative when the film was printed it appears white.

Kdenlive comes with a Dust filter that tries to simulate this effect. However, it has a very small database of relatively large pieces of dirt. In total there are just six pieces of dirt, drawn as SVG files, and that limited number leads to an unconvincing effect. If I used the filter on a long piece of video I found I began to recognise each piece! There were also no small bits of dirt.

I drew 44 extra pieces of dirt in Inkscape and added them to the Dust filter. I also redrew dust2.svg from the default set – I call that particular piece of dirt “the space invader”, and I found it too large and too distracting!

The video below compares the Dust filter (with identical settings) before and after I added my extra files:

You may find you prefer the Kdenlive dust filter with just the default six SVG files. However, if you prefer what I have done you can download my extra SVG files from here.

With the modifications I’ve made, I actually prefer the dirt created by the Dust filter in Kdenlive to the dirt you get from the MSU Old Cinema plug-in. The dirt from Kdenlive’s filter is less regular in shape, and simply by changing the SVG files in the /usr/share/mlt/oldfilm folder I can tailor the dust to any specific application I have in mind.
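
If you fancy generating extra specks rather than drawing them all in Inkscape, a few lines of Python will churn out simple SVG blobs for that folder. This is only a rough sketch – it writes to a working folder, and you can copy the results into /usr/share/mlt/oldfilm by hand (backing up the originals first):

```python
#!/usr/bin/env python3
# Sketch: write a handful of small, irregular dust-speck SVGs for the
# MLT/Kdenlive Dust filter to pick up. Output folder and count are examples.
import os
import random

OUT_DIR = "dust_svgs"   # copy the results into /usr/share/mlt/oldfilm by hand
COUNT = 10

SVG = ('<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64">'
       '<path d="%s" fill="black"/></svg>')

def blob_path():
    # A small closed squiggle around the centre of the 64x64 canvas
    cx, cy = 32, 32
    points = [(cx + random.randint(-6, 6), cy + random.randint(-6, 6))
              for _ in range(random.randint(4, 7))]
    return ("M %d %d " % points[0]
            + " ".join("L %d %d" % p for p in points[1:]) + " Z")

os.makedirs(OUT_DIR, exist_ok=True)
for i in range(COUNT):
    name = os.path.join(OUT_DIR, "dust_extra%02d.svg" % (i + 1))
    with open(name, "w") as f:
        f.write(SVG % blob_path())
    print("wrote", name)
```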

After flecks of dirt, the second most obvious sign that you are watching old film is a non-uniform shutter causing the picture to flicker very slightly. The MSU Old Cinema plug-in can simulate this effect, but it wildly overdoes it. It is not suitable for anything other than simulating silent movies, so I never used it.

Luckily the Kdenlive Old Film plug-in does a much more convincing job. The settings that I found worked for me are shown below:

KdenLive Old Film settings for uneven shutter

And they create the results shown below:

It looks a bit odd on its own, but when added to all the other effects I’m describing here it will look fine.

I’ve noticed that when I am creating these effects it’s best if I move away from the monitor to a normal TV viewing distance to see how they look – otherwise I tend to make the effects too subtle to be noticed when I come to watch the results on my television!

The next thing that will help to sell the output as film is having some film grain. Film grain is irregular in shape and coloured. In fact, I used the Colour Spots setting of the MSU Noise filter to create film grain in VirtualDub.

Kdenlive has a Grain filter, which simply creates random noise of 1 pixel by 1 pixel in size. Although technically this is not at all accurate, it can look pretty good if you are careful.  The settings for film grain will vary from job to job, so some trial and error is involved.

As a starting point, these settings are good:

Kdenlive Grain settings

And will look like this:

Again, it looks odd by itself (and you can’t really see it at all on lossy YouTube videos!) but it will look fine when added to the other effects. You’ll start to notice the rendering begin to slow down a bit when you have added Grain! Incidentally, Grain is still worth adding even if YouTube is your target medium because it helps break up any vignette effect you add later.

The next thing you need to do is to add some blur – edges on 16mm film in particular tend to be quite soft. Kdenlive has a Box Blur filter which works just fine for blurring. How much blur you add depends on your source material, but a 1 pixel blur is fine as a starting point.

Colour film is printed with coloured dyes, so it has a different colour gamut to the RGB images you create with The GIMP, Inkscape or a digital video camera. In addition, it also fades over time. Therefore to make computer-originated images look like film-originated images some colour adjustment is normally required.

Luckily, Kdenlive has a Technicolor filter that allows you to adjust the colours to better resemble film.

Kdenlive Technicolor settings

The way colour film fades depends on whether it has been kept in a dark or a light place. If I’m recreating a colour 16mm film that has been stored safely in a dark tin for many years, I make it look yellowish. If I’m recreating a colour 16mm film that’s been left out in the light a bit too much, I make it look blueish. Both these looks rely on adjusting the Red/Green axis slider – not the Blue/Yellow axis slider as you might think!

Source image faded with Technicolor

You soon begin to notice that the telecine machines used by broadcasters could adjust the colours they output, producing colours that would have been impossible to get from the film alone. For instance, some of the blue backgrounds on ATV colour zooms were too rich to have been achieved without some help from the settings on the telecine machine. So the precise colour effect you want to achieve varies from project to project, and sometimes you will actually be increasing colour saturation rather than decreasing it.

The Technicolor filter is, ironically, the filter you use to make colour source material monochrome too!

The biggest problem when trying to recreate old film is recreating gate weave – that strangely pleasing effect whereby the picture moves almost imperceptibly around the screen as you watch.

MSU Old Cinema created an accurate but very strong gate weave which was too severe for recreating 16mm film. The Kdenlive Old Film filter has what it calls a Y-Delta setting, which makes the picture jump up and down by a set number of pixels on a set proportion of frames. It’s easy and quick (a Y-Delta of 1 pixel on 40% of frames is good), but it introduces black lines at the top of the frame and is so obviously fake it won’t really fool anyone!

So there is, sadly, no quick way to create gate weave in Kdenlive. However, the good news is that there is a way, provided you’re prepared to do a bit of work: the Pan and Zoom filter. The Pan and Zoom filter is intended for Ken Morse rostrum camera-type effects – it’s particularly good if you have a large image and want to create a video that pans around it.

However, what we can do is use the Pan and Zoom filter to move the frame around once per second. First, you zoom the image in to 108%. This means you won’t see any black areas around the edge of the frame as the picture moves around.

First of all, zoom the image very slightly

Next, you create key frames on each second:

Then add one key frame per second

Then you move the image around slightly on each keyframe – plus or minus two or three pixels from the starting position is often plenty.

Obviously, for a 30-second caption that’s 30 keyframes and 30 movements – a lot of work if done “by hand”. However, it won’t go to waste, as you can save your Pan and Zoom settings as a Custom effect and reuse them again and again on different clips.

And, luckily, doing all this by hand isn’t even necessary. Custom effects are stored as simple XML files in the ~/.kde/share/apps/kdenlive/effects folder, so it is possible to write a small Python script to automatically create as much gate weave as you want – something I’ll come back to.
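
In the meantime, here’s a rough sketch of the sort of script I have in mind. It prints one keyframe per second with a small random nudge, in an MLT-style geometry string. The exact keyframe syntax Kdenlive expects is an assumption on my part, so save a Pan and Zoom custom effect from the GUI first and adapt the output to match it:

```python
#!/usr/bin/env python3
# Sketch: generate gate-weave keyframes - one per second, each nudged by a
# few pixels. The "frame=x,y,w,h" format is assumed; check it against a
# custom effect saved from the Kdenlive GUI before relying on it.
import random

FPS = 25                     # project frame rate
SECONDS = 30                 # length of the clip to weave
ZOOM = 1.08                  # 108% zoom so the frame edges never show
WIDTH, HEIGHT = 1024, 576    # project frame size (example value)

def gate_weave():
    w, h = int(WIDTH * ZOOM), int(HEIGHT * ZOOM)
    keyframes = []
    for second in range(SECONDS + 1):
        frame = second * FPS
        # centre the oversized frame, then nudge it by up to +/- 3 pixels
        x = (WIDTH - w) // 2 + random.randint(-3, 3)
        y = (HEIGHT - h) // 2 + random.randint(-3, 3)
        keyframes.append("%d=%d,%d,%d,%d" % (frame, x, y, w, h))
    return ";".join(keyframes)

if __name__ == "__main__":
    print(gate_weave())
```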

As well as gate weave, you can also use the Pan and Zoom filter to stretch the frame, which is perfect for simulating stretched film. Again, that’s hopefully something I’ll return to another time.

Here’s an example of video moving with the Pan and Zoom filter:

The Pan and Zoom filter also adds hugely to your rendering time, so it’s best to switch it off until you do your final render.

Glow is a very important effect to add when simulating film, particularly monochrome film. Kdenlive does not have a glow filter, so if I need to add glow to a video file I have to improvise. I export the video as a PNG sequence, add glow to the PNG files using a GIMP batch script (written in Scheme), and then reassemble the frames into a video file. It’s worth the effort, as it’s amazing how much glow helps to sell something as being originated on film – I’ve sketched the idea below.

Glow added using The GIMP

The GIMP glow filter is rather harsh and tends to wash out images if you use too much glow, so you have to experiment a lot.
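
My actual batch script is GIMP Script-Fu, but the same idea is easy to sketch in Python with the Pillow imaging library: blur a copy of each frame and “screen” it back over the original. Folder names here are examples, and the result approximates the GIMP filter rather than copying it:

```python
#!/usr/bin/env python3
# Sketch: fake an optical glow on a folder of PNG frames by screening a
# blurred copy of each frame over the original. An approximation of the
# GIMP batch step, not the Script-Fu itself.
import glob
import os
from PIL import Image, ImageChops, ImageFilter

BLUR_RADIUS = 6   # how soft the glow is - experiment per project

os.makedirs("glowed", exist_ok=True)
for name in sorted(glob.glob("frames/*.png")):
    frame = Image.open(name).convert("RGB")
    halo = frame.filter(ImageFilter.GaussianBlur(BLUR_RADIUS))
    # "screen" only ever brightens, which reads as glow around light areas
    glowed = ImageChops.screen(frame, halo)
    glowed.save(os.path.join("glowed", os.path.basename(name)))
```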

Finally, there is often uneven brightness or contrast visible across a film frame. In VirtualDub I used the Hotspot filter, which is actually designed to remove this effect from old film but turned out to be just as good at putting it in!

However, with Kdenlive, this effect is best achieved in the GIMP when required as Kdenlive’s Vignette effect is too unsubtle to be of any real use.

So, put it all together, and you get something like this:

All in all, Kdenlive does a pretty good job of making digitally originated images look like 16mm film, although there is room for improvement. The film scratches filter needs work, there is no glow filter, and the film grain is really just noise rather than grain. However, you can still get some excellent results, and I’m really pleased with it.

From Recreations to Replicas

If you’ve been reading my blog for a while, you’ll know that whenever my attention turns to the Midlands it’s usually at the prompting of my friend Roddy Buxton.

Roddy Buxton, courtesy Fake Festivals

Roddy is a lighting engineer, electrician and visual effects designer. He is now based in South Yorkshire but grew up in ATV Land. Roddy’s TV career started in the Central Television film department, working as a spark on such programmes as Peak Practice and Boon.

As with Oliver Postgate, Roddy’s branching out into visual effects design happened quite by chance when a director noticed how practical he was and decided he’d be the perfect person to knock up a semi-practical suitcase nuke for a film!

In 2009 Roddy thought he’d like to have a go at creating a working replica of the ATV station clock from the sixties. As far as I know, this clock only exists as an off-air photograph in the Transdiffusion archive.

Photo courtesy Transdiffusion

Station clocks used to be shown numerous times during the day by all television stations from the 1950s to the early 1990s. Not only was giving the correct time seen as a valuable public service to the viewer, but the clock equipment was also used to sync the studio’s signals with external sources. This was vital to prevent visual glitches such as this:

Roddy asked if I could supply him with the artwork in a format suitable for printing, something I was only too happy to do. I sent him two Inkscape files, one for the clock face, the other for the hands:

Watch your face…

…and hands.

Roddy soon had other things on his mind – a new addition to his family! – and I thought no more of the model clock.

However, earlier this year Roddy started asking me questions about my Flash recreation of the BBC Schools dots. The BBC Schools dots were shown in the minute immediately preceding BBC One’s programmes for schools and colleges between September 1977 and June 1983.

In the final year the dots were digitally originated using technology similar to Richard Russell’s GNAT clock, but before that the dots were a mechanical model in the “Noddy room”.

Noddy Room, courtesy VT Old Boys

The Noddy room was a special studio in the BBC that held various mechanical models and 12″ by 10″ captions. These were captured in black and white by a remote controlled camera that used to “nod” up and down as different ones were selected – hence the name “Noddy” room. Colour was usually added electronically to the images before they were broadcast.

Roddy discussed the lighting for the dots:

The lighting for the BBC Noddy wasn’t anything specialised. It consisted of two P38 flood lamps (available from all good DIY/Electrical stores) – these are likely to have been photographic lamps – however the only difference being is the price and box they come in. They are the same voltage, wattage and colour temp. The lamps were attached to the camera, so wherever the camera was pointing that area would be lit.

So I supplied Roddy with the dots artwork as an Inkscape SVG file and I also uploaded my latest recreation of the dots in Flash to YouTube:

In May I was delighted to receive a mail from Roddy with a photograph of this prototype dots model:

Prototype dots, courtesy Roddy Buxton

Roddy said:

The clock face is made from hardboard; though I am not happy with the results; as the dots are not that hard edged. I think I will end up using this clock face as a template to make the actual clock out of punched steel/aluminium – that way I can get hard edged dots.

Here are some more of Roddy’s pictures of making the prototype:

Holes drilled through the hardboard

Temporary captions applied to check sizing

Matt paint added to remove the reflections

I thought the prototype looked absolutely fabulous, and by now I was looking forward to seeing the finished product enormously. I didn’t have long to wait – on the 11th June Roddy wrote to say:

The artwork for the BBC “Dots” has arrived in printed form. The company I have used did all of my printing for me for £11 including P&P, and to the correct sizes – and in a matt finish too.

And on the 6th of July I finally got to see how the final schools dots model was progressing:

I showed the progress so far to my friend Rory Clark who summed it all up beautifully when he said:

Bloody hell – they’re impressive!

Not only that, but Roddy had started work on something else as well!

In Colour – the ATV Station Clock replica

Roddy assures me:

Another week and the dots will be vanishing ;-)! (Finally)

I’m very much looking forward to that. I’m also hoping Roddy will recreate some other models from times past – and it looks like I’m in luck!

Roddy tells me:

On the “to do list” – I have always wanted a BBC Globe – so will look at that. The “Diamond” is pretty easy to do (will definitely look at that!)

Then there’s the “Pie Chart”!!

Stay tuned!

Spotlight on Synfig

The only thing I haven’t been able to do using free software since moving to GNU/Linux in 2008 is animate. And it bugged me. Everything else – raster graphics, vector graphics, offline video editing, audio editing, font design, desktop publishing – I could achieve, but animation was the reason I’ve had WINE and Macromedia Flash 8 installed on my machine for the past three years.

When I first started playing with GNU/Linux I came across a program called Synfig Studio which could do animation, but at that time it needed to be compiled from source code. It seemed a bit too much like brain surgery for a GNU/Linux beginner! However, the other day I was banging my head trying to do some animation in Flash. I decided to Google for any free software tools that might be able to help and I was reminded of Synfig Studio once again.

Blue hair? Why, it’s Mrs. Slocombe!

I went to the Synfig Studio website and the first thing I noticed was that a shiny brand-new version of Synfig Studio was available as an RPM for Fedora. In other words, all I had to do was download, double-click and go. Everything worked perfectly. I found the Synfig Studio website excellent – there are a large number of tutorials and an extensive manual – so I set about reading.

Animation programs are always off-putting to beginners due to their complexity, and Synfig Studio was no exception – partly because it began life as an in-house tool in a professional animation company and that really shows in the power and complexity of what it offers.

I learned Flash 2 back in 1998 by trying to create the ATV Colour Zoom ident as I thought it would be quite a good challenge and force me to look into the tool properly. For the same reason I dusted off one of the more challenging animations in my “TODO” list to learn Synfig – the BBC South West Spotlight dots titles.

My plan was to draw the Spotlight logo in Inkscape, import it into Synfig Studio and then animate it. The first thing I did was set up my canvas. Changing the units to pixels is very important – Synfig Studio uses points by default, which seems a strange choice for a tool not centred on print work.

When I tried importing my artwork from Inkscape it came in at the wrong size:

Imported SVG from Inkscape

The reason was obscure and not what I had been expecting. I had assumed it was the old Inkscape dpi (dots per inch) problem, but it was to do with something called Image Span, which is related to the aspect ratio of the end animation. After reacquainting myself with Pythagoras’ theorem I worked out that I needed to set the Image Span to 16 for 768 by 576 pixel artwork from Inkscape.

Setting Image Span in Synfig Studio
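
In case anyone wants to check the sums: the diagonal of a 768 by 576 pixel frame is √(768² + 576²) = 960 pixels, and 960 ÷ 16 = 60, so an Image Span of 16 amounts to treating one Synfig unit as 60 pixels on this canvas.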

With that set, artwork comes in from Inkscape at the correct size. However, I could now see some problems with the imported SVG:

Problems with Imported Inkscape SVG

There were two problems – the holes had disappeared in the “P” and “O” and there was a segment missing from the circle of the letter “O”.

Paths with holes are imported into Synfig Studio as two objects or “layers” (everything in Synfig Studio is a layer) – the letter and its hole. To make a letter with a hole in it you need to place the hole layer above the letter layer, and then give the hole layer an “alpha over” blend method. As you can see, the logic behind the program is very different to Flash!

Using Alpha Over in Synfig

The nick out of the letter “O” was Inkscape’s fault. When you convert text to paths in Inkscape you often get double nodes (nodes stacked on top of each other). Double nodes cause problems within Inkscape itself too, so it’s always a good idea to merge them before exporting.

The join nodes button in Inkscape

Inkscape ellipses don’t import as Synfig Studio circles (they come in as something called Blines instead), so I redrew the dots in the Spotlight logo as Synfig Studio circles to make animation easier later. In fact to get an ellipse in Synfig Studio you draw a circle and then apply a transformation layer to it – again, a bit strange for a beginner! So, now I had the artwork imported:

Inkscape SVG imported perfectly

I discovered I didn’t actually need the background rectangle I’d drawn in Inkscape: Synfig Studio has a special type of layer for solid backgrounds called “Solid Colour” that always fills the background, however large your animation is. This is analogous to the “Background Colour” in Flash, except that in Synfig Studio you can use a “Gradient” instead.

Now I needed to colour my artwork. I found a small bug in Synfig Studio which means that you cannot use an HTML-style RGB value (a six-digit hexadecimal number) to enter colours. My background colour in hexadecimal was #171a17. When I entered this into Synfig Studio I got a mid grey instead of the charcoal colour I was expecting.

A Lighter Shade of Dark

I went into The GIMP and discovered that #171a17 is equivalent to the RGB percentages 9% 10% 9%.

The GIMP Colour Picker information dialog

I entered the values 9%, 10%, 9% into the Red, Green and Blue spinboxes on the Synfig Colours dialog box, and I got the colour I expected. However, I also found that the HTML code displayed on the Colours dialog became 010101 – not what I expected!

In Synfig Studio, the HTML code is wrong

The ever-helpful Genete on the Synfig Studio Forums suggested that I might have a non-linear palette selected for my file, but this turned out not to be the case. So the moral of the story is, sadly, only enter colour values as RGB percentages.
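
To save a round trip through The GIMP every time, a few lines of Python will convert an HTML-style hex value into the percentages the Synfig Studio colour dialog accepts:

```python
#!/usr/bin/env python3
# Convert an HTML-style hex colour into the RGB percentages that can be
# typed into Synfig Studio's colour dialog.
def hex_to_percentages(hex_colour):
    hex_colour = hex_colour.lstrip("#")
    r, g, b = (int(hex_colour[i:i + 2], 16) for i in (0, 2, 4))
    return tuple(round(value / 255 * 100) for value in (r, g, b))

print(hex_to_percentages("#171a17"))   # prints (9, 10, 9)
```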

Speaking of colours, it would be great if Synfig Studio could load GIMP palettes, or create a palette from the currently imported layers.

I then set about animating. This is quite different to Macromedia Flash, as in addition to “keyframes” you also have the concept of “waypoints”. A “keyframe” stores every setting of every layer on the current canvas at a particular point, whereas a “waypoint” stores just one setting. You also have to forget about the concept of “frames” that was so central to Macromedia Flash: Synfig Studio, in common with Swift 3D, uses the concept of time instead. As far as the time-line is concerned, I am very glad that I had done some work in Swift 3D before approaching Synfig Studio.

Keyframe labels appear on the canvas too

One thing I did like is the fact that you can label not only your layers but also your keyframes – that saved me an awful lot of scribbling! Once you have your keyframes set up, Synfig Studio really excels. There are numerous different ways of defining how the animation gets from one keyframe to another. The default is TCB, which gives beautiful naturalistic movement, but for Spotlight it caused arcing like this:

Arc caused by TCB Interpolation

When I really wanted linear tweening to give me straight edges like this:

Corrected by Linear Interpolation

Another little gotcha I found whilst animating was that the time-line starts at “0f”, not “Frame 1” as in Flash. This caught me out when I was putting the animation together, as I was getting odd blank frames!

Whilst animating I came across a niggle caused by my operating system. In GNU/Linux, Alt and left-click is used to move windows around. However, in Synfig Studio Alt and left-click is used to transform (i.e. scale) objects. GNOME 3, the desktop in Fedora 15, compounds this problem by removing the “Windows Movement Key” setting that you could adjust in Gnome 2 to change this behaviour. Fortunately the wonderful Synfig Studio forum came to the rescue, as “nikolardo” had a cunning work-around:

“Another workaround for the Alt issue presents itself when you realize it only happens when you Alt-click. Pressing Alt and then clicking gets picked up by the WM (openbox, in my case), but clicking on a vertex and then holding the Alt key produces the scaling behavior intended. So, next time you Alt-click and the window moves, let go, and then click-Alt.”

Whilst working I found that “Groups” were not what I expected at all. The purpose of Groups in Synfig Studio is to collect disparate items around your animation so they can be selected together. In fact, when creating the animation I never used any groups at all, although I can see how they would be useful on other animations.

I loved the fact that I could enter a frame number (e.g. 454) to move somewhere on the time-line and have it converted into seconds and frames – 18s 4f at 25fps. I tend to think in frame numbers, and it’s great that I don’t have to keep dividing by 25 and working out the remainder. This was a huge help when setting up keyframes.

Useful for creating guides at 0x and 0y

Another thing I found was I could use the Canvas Metadata window, which at first seemed useless, to adjust the guides. It would be even better if you could use pixels instead of internal units to adjust the guide positions in this window.

One thing I soon learned as I worked was that Synfig Studio’s canvas window is not always WYSIWYG, and the Preview Window isn’t always an accurate reflection of the end result either (but this is being rewritten for the next release) – you have to do a render in order to see how your final result is coming along. This is particularly true if you are using effects like Motion Blur. For instance, when the Spotlight S is rotating, this is what I get to see on the stage:

What you see in Synfig Studio…

Whereas this is what the end result looks like:

…is much more impressive when rendered!

Correction from Genete:

“That’s because your display quality settings were not set to high quality. There is a spin button on the top of the canvas that allows you to set the quality to 1 (better), instead of use 8 (worse) the default value. WYSIWYG is fully done always in Synfig Studio. The problem is that it takes some time to render complex effects like motion blur, duplicate layer, etc.”

For my renders I used a PNG sequence, and only rendered the frames I’d just worked on. One thing I noted when rendering is that the render progress bar and cancel button on the canvas window don’t work. In the future I would love it if a WebM render option was added to Synfig Studio, particularly given the popularity of YouTube.

Notice that zooms, blurs and colour corrections are layers.

As I’ve said before, in Synfig Studio everything is a layer – not just every single shape, but a whole host of other things such as colour changes, blur effects and transforms. So, obviously, the number of layers soon gets large and unwieldy. However, you can “encapsulate” layers together into what are called “Paste Layers” and then deal with these encapsulated layers as one object.

The capsules show encapsulated layers

You may be thinking this sounds a bit like the Flash concept of having symbols, but it isn’t – yet. The encapsulated layers are still on the main canvas and therefore use the main canvas’s time-line. In order to use encapsulated layers in a way analogous to Flash library symbols you need to “Export” the Paste Layer as a separate canvas. It will then appear in the Canvas Browser.

The Canvas Browser

Now your capsule of layers is a canvas in its own right, with its own independent time-line and you can use it in a way akin to library symbols in Flash. As you work, you’ll find that the main canvas’s time-line gets cluttered with keyframes and waypoints, so it’s worth exporting encapsulated layers to simplify your work.

The only real downside of the Synfig Studio time-line design – one shared by Swift 3D – is that you can’t add and remove things from your animation easily. If you want to “hide” something you have to set its Amount to 0, and then you have to fiddle about with waypoints with constant interpolation in order to show it again. It seems too much work when you simply want to put things on and take things off your canvas.

Exporting a Paste Layer after you have already done work on an animation needs some care. Key frames are not brought across to the new canvas, and the exported animation’s duration defaults to 5s (five seconds), which means you have to increase it to the right length manually. So, before you start work on an animation, it’s better to decide upon its structure first – but that was always the case anyway!

One minor thing – I found that I could only remove things from encapsulated layers by dragging and dropping, which was not very discoverable for me; I expected to find another way of doing it via a button of some kind too.

Put a space in an Exported Canvas name and…

Entering a canvas name with a space in it gives a message telling you about the C++ standard type library throwing an exception – not something most cartoonists would find particularly helpful!

When adding an exported canvas from the Canvas Browser to your main canvas you can offset its start-point by any number of frames. However, the offset needs to be a negative number of frames to make the canvas start later and a positive number to make it start earlier, which foxed me for a bit too!

Anyway, enough moaning – these are only very minor points! What you should take away from all this is that with exported canvases I found I could work exactly the same way as I was used to in Flash.

This does the hard work in the Spotlight animation.

Meanwhile, back to my animation. I wanted to emulate some optical film effects in my animation. The first one, motion trails, was easy to do with the Synfig Studio Motion Blur layer. This gives you a huge amount of control over the appearance of your finished trail.

Software doesn’t get any more magical.

I also needed some “optical glow”. I achieved this very easily by using the Colour Correct layer. This actually had a setting for Over Exposure – the exact effect I wanted to emulate – built into it! I was absolutely amazed! And not only that, I could animate the Over Exposure setting too. Incredible.

A bit of Blur (of which there are a dazzling array) helped to sell the glow even more.

The range of effects you can add to your animations in Synfig Studio is truly overwhelming. I think I’ll be blogging for months about the huge range of things you can do in Synfig Studio. It is an enormous amount of fun.

Zoom layers are a very clever idea.

To zoom in and out I used, naturally enough, the Zoom layer. Having a zoom on a separate layer is incredibly sensible when you actually start using it, but seemed very odd at first appearance.

And, it goes without saying, moving the dots around the canvas in Synfig Studio was simplicity itself.

So, here’s the finished result:

Did I mention Craig Rich knew my Granny…

Synfig files are very small and compact. The final file size was tiny – 11.9KB. I found that utterly incredible and it compares very favourably to Flash.

I could have completed these titles in about two hours in Macromedia Flash 8; in Synfig Studio it took me two days to learn the tool and complete the animation, which I was quite pleased with.

Synfig Studio is an excellent tool that is staying firmly installed on my computer! I really love using it, and I am excited about what I can achieve with it in the future and the vast range of possibilities it opens up. It is powerful, flexible and stable, and it rewards the effort you put into learning it a thousand times over. It also has a friendly and helpful community. Recommended.

Test Card F Prototype

One of my favourite websites is Mikey Bennett’s Vintage Technology page. I love looking at all the vintage television sets Mikey has lovingly restored back to full working order. If your old Bush has lost its colour or your horizontal hold is ruining your enjoyment of your Rank then Mikey’s your man.

Mikey on an experimental 1969 3-D television

A few years back Rory Clark created a very entertaining DVD to demonstrate all the sets in the South West England Vintage Television Museum collection that Mikey curates. The DVD featured a range of test cards and tuning signals from the very old up to the present day accompanied by a selection of tones and music. Although it has given sterling service since then, Rory wanted to create an updated and expanded DVD for Mikey.

One of the cards Rory wanted to include this time was a prototype Test Card F featuring a rather different picture in place of Carole Hersee. Here’s the original:

String vests have never photographed better

Unfortunately the surviving scan of the card wouldn’t really show off Mikey’s television sets to best effect, as it had faded quite considerably – the grey linearity squares had a distinctly reddish cast and the green castellations in the reference generator area had gone almost black. Therefore Rory asked me if I could recreate the card.

To do this I used Inkscape, as I now only draw things in Flash if it’s completely unavoidable. This is what I came up with:

She’s gone. Was it something I said?

The hardest job when recreating the card was doing the hand lettering on the caption. I did experiment to see if I could get away with using Benguiat Condensed, but it simply didn’t look close enough. In the end, the lettering took as long as the whole of the rest of the card.

It’s interesting to see the differences between this card and Test Card F. A good place to go to find out what’s missing is Alan Pemberton’s Pembers’ Ponderings website. He has two clickable Test Card Fs which will tell you exactly what each part of the card does.

I can’t wait to see how Rory “distresses” my Inkscape drawing to make it look like a real transparency on the finished DVD.

Washington Post

For a child born in 1971 and growing up in 70s Britain, probably the most magical place imaginable was BBC Television Centre. And, thanks to Blue Peter, it was a building I was pretty familiar with. After all, Peter Purves had shown me countless times that the building was ‘like a huge doughnut, with studios around the outside, offices inside the centre ring and a fountain in the middle’.

BBC Television Centre, front gate

One of the most distinctive features of the building was its signage. The same typeface was used on everything from cameras to warning lights to the front gate.

EMI 2001 with Raymond Baxter

The typeface employed was a very common sight when I was five years old. It was used all over Chard Post Office, on signs made by SWEB (the South Western Electricity Board), and even for signs on the changing room doors at Maiden Beech School in Crewkerne. But, as I grew up, this signage was slowly replaced by signs using more modern faces. By the early 80s BBC Television Centre was just about the only place where it could be seen.

BBC Television Centre Studio One

I’d always wondered what the typeface was. The first clue came when I bought the book Encyclopedia of Typefaces by W. P. Jaspert et al. The book contained a small scan of the face labelled as ‘Doric Italic’. This led me to search on font websites under the ‘Ds’ until I found a typeface called ‘AT Derek Italic’. This was close. In fact, it was very close. But it wasn’t right.

AT Derek Italic

For instance, in order to recreate the 1960s caption below, I had to alter the AT Derek lettering extensively:

BBCtv Science and Features recreated

The face used came up in conversation at The Mausoleum Club, a web forum for people who want to talk about proper television rather than the kind we get these days.

By a stroke of good fortune, BBC graphic designer Bob Richardson was present, and for the first time he was able to tell me definitively the name of the font: it was called Washington. I then spent a couple of days plucking up the courage to ask Bob if he would be kind enough to send me a scan of the font so that I could recreate a digital version.

Bob was very, very kind and also keen to see a version of the font in TrueType form – I received a scan of Washington the next day. The scan he sent was taken from his copy of the BBC Graphic Design Print Room specimen sheets. This book contains all of the metal typefaces that were available to graphic designers (or ‘commercial artists’ as they were initially known) from the early 1950s until circa 1980.

Washington recreated by the BBC for a capgen

Bob told me that the BBC had actually recreated Washington in a format suitable for a caption generator for ‘The Lime Grove Story’ (a 1991 documentary to commemorate the closing of the BBC’s Lime Grove studios), but the BBC didn’t have a version of the font in TrueType form.

So, now I had a scan, I needed to recreate the font. The plan was, as usual, to trace each character or ‘glyph’ in Inkscape…

Tracing in Inkscape

…then import the glyphs I had traced into FontForge

Glyph imported into FontForge

…and use FontForge to generate the final typeface.

The finished typeface

This is exactly the same way I recreated the Central Television corporate font, Anchor and Oxford. Only this time I had the best source material possible.

As I’ve talked about recreating fonts extensively in the past I’ll just talk about a couple of things that were either new or different in this case.

P and R superimposed

The first thing of interest was that the font was a real, live metal type, and it wasn’t as ‘regular’ as I had come to expect from digital faces. The vertical stroke of the ‘P’ was quite different in width to the vertical stroke of the ‘R’, and both differed from the vertical stroke of the ‘D’.

It was this kind of irregularity that really gave the font its charm and sold it as an old metal typeface, so I was determined to keep as much of it as possible and not make the font too regular and clinical by ‘fixing’ all these quirks.

R coming to the point

The second thing I needed to know was when to ignore curves. Letters such as the capital R would have curves at the corners where you would expect them to come to a point. I did toy with the idea of leaving these curves in place but that looked dreadful at large sizes so that was one thing I did end up ‘fixing’.

There were a number of glyphs I had to create myself, as they didn’t exist when Washington was created or were not a part of the original face. For instance the Greek letter mu is a combination of the letters p, q and u:

P, Q, U make a MU, Cuthbert dribbled and guffed

I also added things like Euro and Rupee currency symbols, copyright and trademark symbols and so on.

One thing I did this time, which I should have done before, was get FontForge to create all the accented glyphs for me. In other words, instead of creating a separate Inkscape file for each accented character and importing it into FontForge, I simply created each accent as a glyph of its own and let FontForge build the accented characters automatically. This saved me a huge amount of time.

Once you’ve created these few characters…

It’s important for me to have a decent coverage of the Latin alphabet as I know first hand how frustrating Hungarians find it to have to use a tilde or diaeresis instead of their double acute. I also like to make sure that the Welsh language can be used with any typeface I create.

…you get all these free!

FontForge created the accented glyphs almost perfectly and out of a few hundred glyphs I only needed to adjust half a dozen by hand. I found this pretty amazing.
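
For the record, this step can also be driven from FontForge’s Python scripting rather than the GUI. The sketch below is how I understand it would look – the file names and the Unicode range are examples, not my actual setup:

```python
# Sketch: ask FontForge to build accented glyphs from their base letters and
# accent glyphs. Run it with: fontforge -script build_accents.py
import fontforge

font = fontforge.open("Washington.sfd")

# Select the Latin accented letters we want FontForge to assemble
font.selection.select(("ranges", "unicode"), 0x00C0, 0x017F)

# Build each selected glyph from its base letter plus the accent glyphs
font.build()

font.generate("Washington.ttf")
```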

Buoyed by my success with automatic accented glyph creation, I thought I’d try some automatic kerning. Kerning is the adjustment of the spaces between pairs of letters. For instance, the distance between the letters ‘T’ and ‘o’ in ‘To’ is quite different to the distance between the letters ‘T’ and ‘h’ in ‘The’.

Good kerning makes all the difference to the appearance of a typeface. Here’s the word ‘colour’ unkerned…

Colour with no kerning

…and here it is kerned.

Colour kerned

For all my other fonts I had sat down and kerned every possible letter combination by hand. The results were excellent, but it involved a large amount of wasted effort, because many letters (e.g. c, o and e) kern exactly the same as each other. FontForge not only allows you to put these letters into ‘classes’ to kern as a group, but it will also detect these ‘classes’ for you and attempt to kern them all into the bargain.

Kerning by classes – click to enlarge

I tried using this feature for the first time with Washington, and it worked pretty well for most letter combinations. However I do need to tweak this kerning by hand to ensure that all possible combinations of letters look good. Until this is done the font is only really useful for desktop publishing or vector art where you can alter the kerning of each letter combination by hand.

This task will take two or three days to do and it’s not something I want to do now, as it is really a job you need to come to fresh. So in about a month or so I’ll kern the font and release version 1.1 – I’ll post here when the hand kerned version is available.

So, when the font is exported, how does it fare? Well, here’s an example I put together which compares Washington to AT Derek:

A comparison – click to enlarge

As you can see, AT Derek may be more elegant but Washington is definitely more ‘BBC’!

The Washington Book typeface is released under the SIL Open Font licence.

All the software I used to create the typeface was free software, including the operating system – Fedora.

You can download the latest version of the Washington font from here. Windows owners will need 7-zip to uncompress the archive. The font is free – the only thing I ask is that if you find it useful please drop me a line or add a comment below as I’d love to hear from you.

Replay Replayed

Replay Expo time is fast approaching again, which is why Barbara Kelly and Lady Isobel Barnett are pictured below modelling an original piece of my artwork:

Sadly, Doris Speed wasn’t free.

Replay Expo is an arcade, video game and retro show that takes place every autumn at the Norcalympia Exhibition Centre in Blackpool. Last year’s event attracted 3,200 visitors over two days, and the organisers are hoping to attract 5,000 this time. The show is timed to coincide with the last weekend of the Blackpool Illuminations.

Last year I was involved in designing fliers, banners, advertisements and the website for the show and the organisers very kindly asked me if I would like to continue doing so this year.

r3play 2010

The first thing I needed to do was devise a “Replay” logo for this year’s event. The brief was “the same, but different”. The previous logo was originally designed by “Greyfox”, also known as the talented Irish graphic designer Darren Doyle. It was a beautiful logo and worked fantastically well so I wanted to keep as close to it as possible.

I had two main ideas. Firstly, I wanted to make the logo a little more colourful, as the show will be a little more colourful this year. Secondly, I wanted to include a cartoony black outline around the lettering to increase the contrast from a distance and also to evoke the black outlines around cartoony video game characters.

In addition, the logo had to be a vector illustration, as I would need to export it at some very large sizes indeed. This meant creating it entirely in Inkscape.

This is what I came up with:

replay 2011 – click to enlarge

It was one of the only occasions I’ve ever got it right first time! You’ll notice I had to reverse the “E”, because last year people insisted on calling the previous event “are three play”, which rather upset the organisers!

B790 – I’m sure this face has a real name!

Next was the question of typography. Last year was easy – I was using lots and lots of lovely Microgramma. This year it was again “the same, but different”, so I settled on a Hermann Berthold art deco typeface called B790. This was similar enough to Microgramma that I could use it in the same sort of ways, whilst at the same time looking very different.

The one thing I was disappointed about this year was the design of the 2011 lettering. I spent day after day producing draft after draft:

My favourite – I spent hours on this!

Another massive fail

Obviously massive fails come in threes

However, nothing I produced seemed to grab the client – something that was entirely my fault. In the end, with half an hour or so to spare before everything needed to go off to the printers, I gave up and produced something quick that I’m really ashamed of.

At least the client liked it.

As you can see, the B790 ended up with a bit of a starring role as I used it for the word “EXPO”.

Producing the fliers and roll-up banners for Replay Expo was rather interesting this year, as the printers decided that only CMYK PDF files were acceptable. In previous years they had accepted RGB TIFF files exported at 300dpi (dots per inch), which I could produce from either The GIMP or Inkscape. But neither Inkscape nor The GIMP can currently produce CMYK PDF files. Therefore, after meaning to do so for nearly three years, I finally had a good reason to get to grips with Scribus, a free software desktop publishing package.

The first thing I had to do was a lot of reading. The Scribus documentation is excellent and very thorough, so it was a pleasure to go through it all. Then I went through the tutorial. I had to do that when the children were at school as the first couple of hours featured a statue of a rather forgetful Indian lady who had absentmindedly neglected to put on her undergarments.

She’ll catch her death of cold…

Fortunately, I had colour management set up on my computer, so soft-proofing worked perfectly. This meant that whatever I saw on screen was very close to how my finished artwork would appear in print.

I produced my Replay logo artwork in Inkscape, and exported the logo as a 300dpi RGB PNG file for import into Scribus. Usually I could import my Inkscape files directly into Scribus, but in this instance I was using Inkscape layer effects (i.e. SVG image filters) that Scribus is currently unable to cope with.

Flier created in Scribus

I then created the text and frames directly in Scribus, and imported the photographs into them. It’s actually a very nice way of working as you are using each tool for what it does best.

Once finished, I could export my Scribus file as a CMYK PDF, send it off to be printed and hope for the best.

This was all completely new to me, and I was really nervous as to whether my exported PDF files would even be accepted by the printers, let alone print properly. What was worse was the fact that the Gadget Show Live event was two days away and there wasn’t time to do anything if my files were no good!

Anne Ladbury and Mary Morris

However, as you can see, they turned out quite well. Scribus is an excellent piece of software and I would recommend it to anyone.

Replay Expo takes place at the Norcalympia Exhibition Centre, Norbreck Castle, Blackpool on the 5/6 November 2011. Tickets for the event are available from http://replayexpo.com/tickets

It’s just not Flash…

I’ve finally worked out I can put YouTube videos in my blog that can be viewed without having the Flash plug-in installed. Which is handy for me, as I don’t have the Flash plug-in installed. And presumably it’s also a boon for anyone using one of those ingenious new Apple etch-a-sketch things.

So, now I can watch stupid rubbish without using my daughters’ computer, here’s some that I (and Rory) made earlier…