And so what has been inside my computer for the past nine months finally begins its translation into reality. It may not be as painful as birth, but it might be almost as messy. This week, a friend who works as a signwriter was good enough to take the plans I put together and machine all the panels for the table out of PVC sheeting.
The dimensions I drew up a couple of weeks ago were all adhered to, with the only change being that we needed to cut the table top in half for ease of transport (it will be making a few journeys for marking between now and the exhibition in November). Also in keeping with ease of setup, the table base simply bolts together and slots into the underside of the table top. I can’t take credit for that idea – it comes with the experience of a signwriter.
The only downside of the table being built from PVC was that it was white. We went through around five cans of high-quality spray paint on the panels before I gave up and bought a tin of paint and a roller.
As far as construction goes, all that now remains is building the tray which will be the interactive surface and adding grates on either side to protect the speakers. After adding a couple of hinges to the access panel at the back, the table is done.
I’ve also finally begun work on the audio which will be a large part of this project. For the soundtrack, I’ve created about a dozen loops, which work together as a single piece. Each of these loops will fade in and out randomly, giving the sense of an evolving soundtrack. Here’s an early iteration…
Although I’d never used the MSP (audio processing) side of Max/MSP, it was relatively simple to get the loops running randomly. Every 20 seconds or so, each loop has a 50% chance of fading in or out (depending on its present state); otherwise it stays at its current volume. This gives the sense that the music is changing, but hopefully won’t become repetitive.
I’m also yet to make a final call on what will actually be uncovered once a participant digs to the bottom of the surface. It’s likely that I will let the triggered content be my cue on what to present visually, but in the meantime I put together some ghastly-coloured 3D models of the CarriageWorks facade.
The idea is, once one of the images is totally uncovered, it will trigger a sound event. These will form the audio history, which tells the story of the space. Whilst work on the audio triggers has yet to begin, the style of the interaction has slowly begun to take shape, with an early test patch and mockup interface…
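The uncover-then-trigger interaction described above could be modelled by dividing each hidden image into a grid of cells and firing the sound event exactly once, when every cell has been cleared. This is a speculative sketch of that logic, not the actual test patch; the class and callback names are hypothetical.

```python
class UncoverTrigger:
    """Fire a sound event once an image region is fully uncovered.

    The surface over a hidden image is modelled as a grid of cells;
    each cell flips to True when a participant clears it. The trigger
    fires exactly once, when the whole grid is cleared.
    """

    def __init__(self, rows, cols, on_uncovered):
        self.cleared = [[False] * cols for _ in range(rows)]
        self.on_uncovered = on_uncovered  # callback: play the audio event
        self.fired = False

    def clear_cell(self, row, col):
        self.cleared[row][col] = True
        if not self.fired and all(all(row_) for row_ in self.cleared):
            self.fired = True
            self.on_uncovered()

# Example: a 2x2 grid whose sound event appends to a log
events = []
trigger = UncoverTrigger(2, 2, lambda: events.append("play history clip"))
for r in range(2):
    for c in range(2):
        trigger.clear_cell(r, c)
print(events)  # the event fires once, after the last cell is cleared
```

Latching the `fired` flag means continued digging over an already-uncovered image won't replay the clip, which matters if several participants work the same patch of surface.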
With the table almost complete, I’m just about ready to grab a projector and start some serious testing. There’s certainly a lot still to do, but it’s finally taking shape.