Multimedia Authoring – Week 11

Code Development

Well, against all odds, it would seem my final coding project is complete (or as complete as I’m willing to make it – adding more code is now making it all very unstable). In the end, there was a bit of struggling and wrangling to get disparate sketches to play well together.

Before I go into my process over the last week, I think it’s important to point out that whilst this sketch ‘works’, it’s likely I haven’t put it together in the most efficient manner possible. My knowledge of which code puts pressure on the CPU is non-existent, so I don’t doubt that this project could be cleaned up a great deal to run better. My workaround was to decrease the size of the display window to just 320×240.

Sketch 01

My first step was to bring the main elements together: the RSS feed and the particles. I ended up quickly ditching my original idea of using the traer.physics balls in favor of Andy Best’s bubble-popping sketch. To keep things simple to begin with, everything is triggered by holding down the mouse button for a few seconds…
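
In rough terms, the hold-to-trigger idea boils down to something like the following in Processing (the two-second threshold here is an assumed value, not necessarily the one from the actual sketch):

    // Minimal sketch of the hold-to-trigger idea; the 2-second
    // threshold and the 'triggered' behaviour are assumptions.
    int pressStart = 0;        // millis() when the button went down
    int holdTime = 2000;       // assumed hold duration in milliseconds
    boolean triggered = false;

    void setup() {
      size(320, 240);
    }

    void draw() {
      background(0);
      if (mousePressed && !triggered && millis() - pressStart > holdTime) {
        triggered = true;      // fire the event: spawn bubbles, start the feed, etc.
      }
      if (triggered) {
        fill(255);
        text("triggered", 20, 20);
      }
    }

    void mousePressed() {
      pressStart = millis();   // record when the hold began
    }

    void mouseReleased() {
      triggered = false;       // reset so the hold can be repeated
    }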

bubbletimer_v01
Sketch 1 screengrab

Source code

Sketch 02

To create some movement from different sides of the sketch, I shifted the falling bubbles to come in from the left of the screen. But instead of having them simply move from left to right, I wanted them to move toward the face. Introducing face detection suddenly made the sketch run very slowly. A bit of digging through the Processing forums revealed that other people have had similar experiences: a video feed at 640×480 in Processing can be very clunky…
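
The basic ‘spawn at the left, ease toward a target’ motion can be sketched without any physics library along these lines (the spawn rate and easing factor are placeholder values of mine):

    // Bubbles enter from the left and ease toward a fixed target point
    // standing in for the face; all constants are placeholder values.
    ArrayList<PVector> bubbles = new ArrayList<PVector>();
    PVector target;

    void setup() {
      size(320, 240);
      target = new PVector(width * 0.75, height * 0.5);
    }

    void draw() {
      background(0);
      if (frameCount % 15 == 0) {
        bubbles.add(new PVector(0, random(height)));  // spawn at the left edge
      }
      noStroke();
      fill(255, 0, 0);
      for (PVector b : bubbles) {
        PVector step = PVector.sub(target, b);
        step.mult(0.03);              // move a small fraction closer each frame
        b.add(step);
        ellipse(b.x, b.y, 12, 12);
      }
    }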

bubbletimer_v02
Sketch 2 screengrab

Source code

Sketch 03

Dropping the size down to 320×240 seemed to do the trick…
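
A bare-bones capture-and-detect sketch at 320×240 might look roughly like this, assuming the hypermedia.video OpenCV wrapper that was the usual choice in Processing at the time (the post doesn’t name the library used):

    import hypermedia.video.*;
    import java.awt.Rectangle;

    OpenCV opencv;

    void setup() {
      size(320, 240);
      opencv = new OpenCV(this);
      opencv.capture(width, height);                   // capture at 320x240, not 640x480
      opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // load the face classifier
    }

    void draw() {
      opencv.read();                                   // grab a new frame
      image(opencv.image(), 0, 0);                     // draw the camera image
      Rectangle[] faces = opencv.detect();             // detect faces in the frame
      noFill();
      stroke(255, 0, 0);
      for (Rectangle f : faces) {
        rect(f.x, f.y, f.width, f.height);
      }
    }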

bubbletimer_v03
Sketch 3 screengrab

Source code

Sketch 04

I was struggling to get an x and y coordinate out of the face-recognition rectangle, so I started by using the mouseX and mouseY coordinates instead. I also switched to the traer.physics cloud sketch, which gave the particles more movement and randomisation as they head towards the mouse cursor…
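
Stripped right down, that traer.physics setup amounts to a fixed particle pinned to the mouse, with attractions pulling a cloud of particles toward it; something like this, with guessed strength and drag values:

    import traer.physics.*;

    ParticleSystem physics;
    Particle mouseParticle;
    Particle[] cloud = new Particle[40];

    void setup() {
      size(320, 240);
      physics = new ParticleSystem(0, 0.1);    // no gravity, a little drag
      mouseParticle = physics.makeParticle();
      mouseParticle.makeFixed();               // moved by hand, not by the physics
      for (int i = 0; i < cloud.length; i++) {
        cloud[i] = physics.makeParticle(1.0, random(width), random(height), 0);
        physics.makeAttraction(mouseParticle, cloud[i], 5000, 10);  // pull toward the mouse
      }
    }

    void draw() {
      background(0);
      mouseParticle.position().set(mouseX, mouseY, 0);
      physics.tick();                          // advance the simulation one step
      noStroke();
      fill(255, 0, 0);
      for (Particle p : cloud) {
        ellipse(p.position().x(), p.position().y(), 8, 8);
      }
    }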

bubbletimer_v04
Sketch 4 screengrab

Sketch 05

Finally, I found the face detection coordinates and replaced the red bubbles with mini newspaper articles. However, the events were still being triggered by holding down the mouse button for a few seconds, not by the user’s movement…
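
Pulling a target out of the detection comes down to taking the centre of the returned rectangle in place of mouseX and mouseY; a hypothetical helper, reusing the opencv object from the face-detection sketch above:

    // Hypothetical helper: returns the centre of the first detected face,
    // or falls back to the mouse position when no face is found.
    PVector faceCentre() {
      Rectangle[] faces = opencv.detect();
      if (faces.length == 0) {
        return new PVector(mouseX, mouseY);   // no face found: use the mouse
      }
      Rectangle f = faces[0];
      return new PVector(f.x + f.width / 2.0, f.y + f.height / 2.0);
    }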

bubbletimer_v05
Sketch 5 screengrab

Sketch 06

The next iteration of the sketch refined the movement of the particles and added a sound loop with the Ess library. As part of my pitch, I wanted an array of sound samples playing in a completely randomised, generative way, almost like a John Cage piece. Given the time constraints, I instead used a single sample of tuning between radio channels…
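
The Ess side of that is small; assuming a single looping sample (the filename below is made up), the setup is along these lines:

    import krister.Ess.*;

    AudioChannel radio;

    void setup() {
      size(320, 240);
      Ess.start(this);                               // initialise the Ess audio engine
      radio = new AudioChannel("radio_tuning.wav");  // filename is a placeholder
      radio.play(Ess.FOREVER);                       // loop the sample indefinitely
    }

    void draw() {
      background(0);
    }

    public void stop() {
      Ess.stop();                                    // shut the audio engine down cleanly
      super.stop();
    }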

bubbletimer_v06
Sketch 6 screengrab

Sketch 10

The final few versions of the sketch simply refined the major elements: the sound would slowly lower in volume as the particles covered the face; the video feed would decrease in brightness the longer the user stayed still; the RSS feeds would repeat and cover previous feeds, creating nonsensical bars of text across the image; and finally, the loop would restart once the user moved their head a certain amount…
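
The restart logic presumably compares the current face position against the previous frame; a hypothetical version, with a guessed threshold:

    // Hypothetical restart check: compare the face position frame to frame
    // and reset once the head moves beyond a threshold. Values are guesses.
    PVector lastFace = new PVector(0, 0);
    float moveThreshold = 40;   // pixels of head movement needed to restart
    int stillFrames = 0;        // frames without significant movement

    void checkMovement(PVector face) {
      if (PVector.dist(face, lastFace) > moveThreshold) {
        stillFrames = 0;        // user moved enough: restart the loop
        // resetSketch();       // hypothetical reset of particles, feed and sound
      } else {
        stillFrames++;          // stillness accumulates: dim the feed, lower volume
      }
      lastFace.set(face);
    }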

Note: Processing’s MovieMaker has exported the video at a fairly quick frame rate, and I’ve added the sound in afterwards.

[Final sketch source code]

There are a couple of problems with the final sketch. As you can see in the video, occasionally the action loop restarts even though there has been minimal movement. This seems to happen when the face detection thinks there is another face in the image: the jump in coordinates re-triggers the loop.
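
One possible mitigation (not something in the final sketch): track only the largest detected rectangle and low-pass its coordinates, so a spurious second face can’t drag the target far enough to re-trigger the loop. A hypothetical helper:

    // Hypothetical smoothing: follow only the largest detected rectangle
    // and ease toward it, so one bad detection can't cause a big jump.
    PVector smoothFace(Rectangle[] faces, PVector current) {
      if (faces.length == 0) return current;
      Rectangle biggest = faces[0];
      for (Rectangle f : faces) {
        if (f.width * f.height > biggest.width * biggest.height) biggest = f;
      }
      float cx = biggest.x + biggest.width / 2.0;
      float cy = biggest.y + biggest.height / 2.0;
      current.x = lerp(current.x, cx, 0.1);   // low-pass the jump
      current.y = lerp(current.y, cy, 0.1);
      return current;
    }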

I also think that, even though this is very close to what I planned in my concept pitch, in the end it was not successful as an interactive work. Although I had planned it to be the antithesis of interactive – it’s triggered by non-action – there needed to be another element to encourage users to explore that non-action. Whether that was a secondary event triggered when users move, or something that happened at the end of the loop, I think the end result was a bit weak.

Having said that though, I am encouraged that I managed to pull this together in a short amount of time. Over the break, I plan on trying to get this piece to work more effectively and through that, learn a little more before I set foot in Multimedia Authoring 2 next semester.


