There is quite a large gap between this post and the last in terms of work done. Whilst the technology I was developing the project with didn’t change too much, I spent a great deal of my time refining its response to the interaction of the dancers. In keeping with my concept, I didn’t want to inhibit the movement of the performers; instead, the technology should support their actions.
So where possible, I continued to map the movements of the dancers and look for clear data that I could use as a trigger. Watching the incoming values within OSCulator and Processing was helpful, if a little overwhelming – the most difficult part was filtering the data to block out unwanted noise.
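Processing is Java-based, so here is a plain-Java sketch of the kind of filtering described above: smooth the raw values with an exponential moving average, then fire a trigger only when the smoothed signal crosses a threshold, with hysteresis so one sharp movement produces one trigger rather than a burst. The constants (smoothing factor, thresholds) are illustrative assumptions, not the values used in the piece.

```java
// Smooths a noisy motion signal and fires a single trigger per movement.
// alpha, onLevel and offLevel are illustrative values, not from the project.
public class MotionTrigger {
    private double smoothed = 0.0;
    private final double alpha;      // smoothing factor, 0..1 (higher = more responsive)
    private final double onLevel;    // smoothed level that fires the trigger
    private final double offLevel;   // level the signal must fall below to re-arm
    private boolean armed = true;

    public MotionTrigger(double alpha, double onLevel, double offLevel) {
        this.alpha = alpha;
        this.onLevel = onLevel;
        this.offLevel = offLevel;
    }

    /** Feed one incoming value (e.g. accelerometer magnitude); true when a trigger fires. */
    public boolean update(double raw) {
        smoothed = alpha * raw + (1 - alpha) * smoothed;
        if (armed && smoothed > onLevel) {
            armed = false;           // fire once, then wait for the signal to settle
            return true;
        }
        if (!armed && smoothed < offLevel) {
            armed = true;            // re-arm for the next movement
        }
        return false;
    }

    public double level() { return smoothed; }
}
```

In a real sketch the raw values would arrive via OSC messages forwarded from OSCulator; the hysteresis gap between the on and off levels is what stops jitter around the threshold from re-triggering.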
Here are a few sequences during the mapping process (apologies for the poor quality)…
Once the basic structure of the performance was established, I gave the dancers a degree of control over the music. To avoid the audience focusing on a “call and response” between the performers and the technology, I varied the way that the interaction occurred: some sections would not begin until a physical movement took place; some of the performance was in response to sound/light; and other interactions were a combination of the two.
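One way to structure those mixed interaction modes is a list of sections, each declaring how it responds. This is a hedged sketch of that idea only, assuming hypothetical section names and modes; it is not the structure actually used in the performance.

```java
import java.util.List;

// Each section of the performance declares its interaction mode:
// wait for a movement trigger, run continuously, or a combination.
public class Performance {
    public enum Mode { WAIT_FOR_MOVEMENT, CONTINUOUS, COMBINED }

    public static class Section {
        public final String name;
        public final Mode mode;
        public boolean started;

        public Section(String name, Mode mode) {
            this.name = name;
            this.mode = mode;
            this.started = (mode == Mode.CONTINUOUS); // continuous sections run immediately
        }
    }

    private final List<Section> sections;
    private int current = 0;

    public Performance(List<Section> sections) { this.sections = sections; }

    /** A movement trigger starts the current section if it was waiting on one. */
    public void onMovement() { sections.get(current).started = true; }

    public Section currentSection() { return sections.get(current); }

    public void advance() { current = Math.min(current + 1, sections.size() - 1); }
}
```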
Adding lights came very late in the process. It was difficult to regularly get all my gear into the rehearsal space, so I put off adding light interaction for far longer than I should have. Whilst I fretted about getting my hands on “proper” stage lights, in the end, it was standard 150W flood lights that did the trick. And at $10 a pop including bulbs from the hardware store, this was an excellent alternative.
The 4 channel dimmer box that I picked up runs at 5A per channel, but only 11A total. Without getting into lighting formulas, the bottom line is that I can only plug 600W of lighting per channel into the box before blowing a fuse. For my small project, that was more than enough. However, the response of the lights to movement wasn’t as snappy as I had hoped. Whether this resulted from my inefficient code, a lack of processing power from the USB/DMX converter, or the dimmer box itself is something that I need to revisit.
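On the software side, driving a dimmer comes down to mapping a motion level (0.0–1.0) to the 8-bit value (0–255) sent on a DMX channel, usually with a slew limit so the floods fade rather than flicker. The sketch below shows just that mapping; the DMX transport itself (the USB/DMX converter) is omitted, and the slew limit of 64 steps per frame is an illustrative assumption, not a value from the performance.

```java
// Maps a motion level (0.0..1.0) to an 8-bit DMX dimmer value,
// with a per-frame slew limit so the light fades rather than flickers.
// maxStepPerFrame = 64 is an illustrative choice, not from the project.
public class DimmerChannel {
    private int level = 0;               // current DMX value, 0..255
    private final int maxStepPerFrame;   // largest change allowed per update

    public DimmerChannel(int maxStepPerFrame) {
        this.maxStepPerFrame = maxStepPerFrame;
    }

    /** Move toward the target implied by motion; returns the DMX byte to send. */
    public int update(double motion) {
        int target = (int) Math.round(Math.max(0.0, Math.min(1.0, motion)) * 255);
        int delta = target - level;
        if (delta > maxStepPerFrame) delta = maxStepPerFrame;
        if (delta < -maxStepPerFrame) delta = -maxStepPerFrame;
        level += delta;
        return level;
    }
}
```

A sluggish response like the one described above can come from any stage of this chain: too aggressive a slew limit or smoothing in code, the refresh rate of the converter, or the dimmer’s own response curve.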
The most frustrating stage came when attempting to document the final performance. At this point, I was well and truly sick of seeing the inside of the uni classroom, so we arranged to take all the equipment and setup on the other side of the city in a much nicer space. Of course, this is when everything failed completely for the very first time.
After a complete non-event in the new space, I looked into the possible causes. Bluetooth and Wi-Fi both operate in the crowded 2.4 GHz band. In the untested space, there was so much wireless traffic that the Wii remotes couldn’t maintain a connection to my laptop. Moral of the story: test, test, test.
I feel as though this was an excellent process for me. Being given the latitude at uni to go outside the normal structure of my subjects and explore new technologies was a fantastic experience. Of course, this led to its own issues around how much support I was able to receive from academics, which made it difficult to produce polished output at this scale from just a couple of months’ work.
When I watch the video, I can see moments where things really come together quite beautifully, but for the most part, it still looks far from finished. I would love to revisit this at some point, but I’m not sure if I would use the same performance, music or technologies again. Nevertheless, I learnt a lot from this piece and I’m grateful I had the chance to take it on.
My biggest thanks go to my dancers, Andrew Koblar and Kristina Hood – I can’t thank you enough for all your time and energy…
Note: This is a ‘journal entry’ as part of a longer Advanced Multimedia Authoring documentation process. The post above was previously titled ‘Advanced Multimedia Authoring – Week 12’.