I carried out the first of the test experiments this week: a simple setup with a data projector displaying a Processing sketch, controlled by a Wii remote. The feedback I received from the five participants was interesting.
It seems most people were distracted by the relationship between the image and the Wii remote. Some were trying to work out what to do with it, whilst others were bored by how little it offered. This response reminded me of seeing Brad Miller’s exhibition, augment _me, last year. The subtlety of interaction was near to perfect for that work. Whilst it wasn’t immediately clear, it didn’t take long to realise that as you moved around the room, you were affecting the way the video moved. The best part, however, was that whilst it was easy to be involved, it was difficult (or even deliberately impossible) to master. This gives the work an excellent longevity.
It’s an important point that I hadn’t thought of: giving participants room to explore and play – and a perfect example of why I think the ‘test experiment’ process is a vital one. I can’t pretend that I know what to expect when a wide range of people are involved with my work, so I believe that trying to quantify some of the feedback from smaller projects will help me to create something more effective come the end of the year.
Creating something more open-ended is obviously going to require a lot more work than my simple Processing sketches, which got me thinking about what tools I will end up using for the final project. I’m now seeking more advice around content creation, with two major directions in mind: real-time and pre-rendered graphics. Sound and imagery created in real time are likely to give me more flexibility and possibly a better sense of interactivity. However, even with the processing power of today’s computers, I won’t be able to match the depth and finish of something created and rendered beforehand.
In the real-time camp (for me) are Processing and Max for Live. I’m more familiar with Processing, but I’m aware that Jitter (part of Max/MSP) is potentially a lot more powerful. Also helping the Max for Live cause is its ability to interface well with Arduino and other sensors/devices. Couple that with its integration in Ableton Live and it’s a compelling choice. The Achilles’ heel is that I will need to invest a good deal of time in becoming more familiar with it, and therefore won’t be able to churn out test experiments at such a regular rate.
Both of these programming environments would also allow me to import pre-rendered content, and are not restricted to elements generated in real time. The attraction of working this way is that I will be able to point out my use of ‘industry standard’ software. As much as I enjoy working in different ways from most people, I am very aware that this project will be my major portfolio piece and therefore a selling point for my employability as a designer/artist/whatever.
Therefore, using After Effects to create motion graphics is an appealing direction. I quite enjoy working with the program, and the potential to get work in the film and television industry as an After Effects operator is real. As seems to be the way with my wandering mind, I have been looking around for more information about keeping interactive parameters intact when exporting files from After Effects. Like many audio software packages, After Effects allows automation of chosen parameters to be recorded via an external interface (such as a MIDI keyboard). If there were a way of keeping that ability with a file after it had been exported from After Effects, things could get very, very interesting. XML perhaps?
But I could just be getting ahead of myself here. Back to basics…
Planning what will be the basis for the next test experiment has led me down some interesting paths. Removing the physical interface (in Test Experiment 01, the Wii remote) entirely is at the top of the list. This means that I will need to use sensors or motion tracking to get input from participants. I’m not sold on either idea just yet, so for the next project, it’s likely I’ll stick with what I know and use motion tracking in Processing.
I did spend some time reading Golan Levin’s article, Computer Vision for Artists and Designers, which led me to discover Pelletier’s cv.jit computer vision tools for Jitter. Unlike some other Jitter options, these are free and come with some excellent tutorials.
Motion tracking is reliant on keeping environmental conditions within certain boundaries. Computers still aren’t very good at finding meaningful information in video, so you really do need to make it as simple as possible for them to track movement.
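To make that concrete, here’s a minimal sketch of the frame-differencing idea that most simple motion tracking (including the Processing approach I’d use) is built on: subtract successive frames, threshold the change in brightness, and take the centroid of whatever moved. It’s written in plain Python with two made-up 4×4 greyscale ‘frames’ standing in for camera input, so every name and number is illustrative rather than taken from any particular library. It also shows why contrast matters: if the ‘body’ pixels aren’t much brighter than the background, the threshold swallows the motion entirely.

```python
# Frame differencing: find pixels whose brightness changed between
# two frames by more than a threshold, then average their positions.
# All values and names here are illustrative.

THRESHOLD = 40  # minimum brightness change (0-255) to count as motion


def motion_pixels(prev_frame, curr_frame, threshold=THRESHOLD):
    """Return (row, col) positions of pixels that changed enough."""
    moved = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(p - q) > threshold:
                moved.append((r, c))
    return moved


def centroid(pixels):
    """Average position of the moved pixels - a crude 'where is the person'."""
    if not pixels:
        return None
    r = sum(r for r, _ in pixels) / len(pixels)
    c = sum(c for _, c in pixels) / len(pixels)
    return (r, c)


# Two tiny greyscale frames: a bright 'body' (value 200) on a dark
# background (value 10) moves one column to the right.
prev = [[10, 10, 10, 10],
        [10, 200, 10, 10],
        [10, 200, 10, 10],
        [10, 10, 10, 10]]
curr = [[10, 10, 10, 10],
        [10, 10, 200, 10],
        [10, 10, 200, 10],
        [10, 10, 10, 10]]

moved = motion_pixels(prev, curr)
print(moved)            # pixels that emptied plus pixels that filled
print(centroid(moved))  # sits between the old and new positions
```

With a low-contrast scene (say, body at 40 against background at 10) the same code would report no motion at all, which is exactly the failure mode the infrared lighting below is meant to avoid.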
A tried and tested way to do this in low-light conditions (it’s unlikely that my projection work will function during the day) is to use infrared floodlights. The human eye cannot see the light source, but certain cameras can, and once installed, infrared should create enough contrast between human bodies and the surrounding environment to track movement clearly. I’ve also discovered an excellent tutorial for building your own infrared LED floodlight, which may end up being a semester 2 project this year.
Finally, I have begun researching the technical specifications of data projectors. It’s one of those complex and relative topics that will yield many and varied results if you pop it into Google. It was suggested to me this week that I should be looking at projectors with a high contrast ratio, rather than simply following ‘the more brightness the better’. Past a certain point, extra brightness just washes the image out if there isn’t enough visible difference between the whites and the blacks.
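A quick back-of-the-envelope calculation shows why the advice about contrast ratio holds: any ambient light in the room adds to both the projected white and the projected black, so the contrast you actually see on the wall collapses well below the quoted figure. The numbers here are invented purely to illustrate the arithmetic, not taken from any real projector spec.

```python
# Effective on-screen contrast once ambient light is added to both
# the white and black levels. Luminance units are arbitrary; the
# projector figures below are hypothetical.

def effective_contrast(white_lum, black_lum, ambient_lum):
    """Contrast ratio the viewer actually sees under ambient light."""
    return (white_lum + ambient_lum) / (black_lum + ambient_lum)


# A projector quoted at 2000:1 (white = 2000, black = 1):
print(effective_contrast(2000, 1, 0))   # pitch-dark room: full 2000:1
print(effective_contrast(2000, 1, 20))  # a little light spill: roughly 96:1
```

In other words, a small amount of stray light matters far more than a few hundred extra lumens, which is a useful thing to remember when these installations will only ever run at night anyway.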
All of this amounts to more research, which is potentially bogging me down. There is a huge amount of work in front of me, but it seems the more I know, the more I need to find out. I would like to temper this research with practical work, so the test experiments feel like they’re the most important part of my project right now. Regardless, I need to get myself a clear schedule, in case the whole thing gets out of hand.