Multimedia Authoring – Week 06
With the presentations held in last week’s tutorial, we had something of a respite from the weekly code editing and ‘hacking’ tasks. I probably should have been more proactive and done some code work of my own, but after the homework rush over the holiday ‘break’ I was feeling a little flat and took some time out. Some trolling… er, research on the Internet was taxing enough for my weary head this week.
The first thing I came across, whilst not totally related to this project, was quite interesting. It seems that every other week there is a new release of yet another piece of open-source (or at least freely available) software, ready to take on an established giant. Whilst I wouldn’t put Processing in the same category as multinationals like Adobe or Microsoft, it certainly has something of a following now, and I can’t think of many other packages that enjoy the community Processing does. So it’s interesting to see Field come along (also interesting that it came from the same institution that bore Processing – MIT).
I have to say, at first glance it looks like Processing could be under siege here. Whilst the two environments use different languages (Processing is based on Java, whilst Field is based on Python), both are designed to help you get ideas down as quickly as possible. The area where Processing falls down is the need to re-run the sketch and refresh the applet window every time you update the code (particularly annoying in OS X, seeing as Apple don’t believe in supporting Java fully). Field, however, not only appears to update changes in real time, but lets you edit certain parameters in the graphic window itself. Nerdariffic.
I can’t say I have a spare week or so to learn a new language right now, but Field seems to have been built with lazy people like me in mind: you can run Processing code inside Field! I think I’ve prattled on about this long enough, so if you’re interested, here’s the video…
But back to the work at hand…
In my presentation last week, I linked to Memo Akten‘s ofxMSAPhysics library. It’s built for openFrameworks rather than Processing, but the traer.physics library that it’s based on is a Processing library. Taking a look at the examples on the project page, the Bouncy Balls sketch looks promising. It applies both attraction and repulsion between elements, so if I’m able to add the silhouette of the participant as an element and choose when to attract and when to repel the other elements, this sketch could be particularly useful.
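In traer.physics, attraction and repulsion are the same force with the sign of the strength flipped. As a quick sketch of that core idea – written in Python rather than as a Processing sketch, with function and parameter names of my own invention rather than the library’s actual API – it might look like this:

```python
import math

def attraction_force(p, q, strength, min_distance=10.0):
    """Force on particle p from particle q (both (x, y) tuples).

    A positive strength pulls p towards q; a negative strength
    pushes it away -- the same convention traer.physics uses.
    """
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = max(math.hypot(dx, dy), min_distance)  # clamp to avoid blow-ups up close
    f = strength / (dist * dist)                  # inverse-square falloff
    return (f * dx / dist, f * dy / dist)

# Flipping the sign of `strength` switches attract to repel.
pull = attraction_force((0.0, 0.0), (100.0, 0.0), strength=5000.0)
push = attraction_force((0.0, 0.0), (100.0, 0.0), strength=-5000.0)
```

So the sketch could keep one set of forces and simply negate the strength on the silhouette’s element when it’s time to switch modes.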
To tell the program when to attract and when to repel the elements, I need a trigger for the change. I think the simplest way to do this will be something along the lines of counting the pixel difference between frames, but I did stumble across something that could be put to interesting use: gesture recognition. The video shows handwriting being recognised, but I can’t see any reason the same approach couldn’t recognise body gestures too…
I think this is something I should leave until I can see how much time I’m going to have left to actually get this work completed. It looks like something I could end up spending a lot of time playing around with, rather than getting the rest of the project underway.
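The simpler pixel-difference trigger mentioned above is easy enough to sketch out. Here’s a rough Python version (the threshold values are guesses that would need tuning against a real camera feed, and the frames are assumed to be flat lists of greyscale values):

```python
def motion_detected(prev_frame, cur_frame, pixel_threshold=30, count_threshold=50):
    """Count pixels whose brightness changed by more than pixel_threshold.

    Frames are flat lists of greyscale values (0-255). Returns True when
    enough pixels changed between frames -- which the sketch could use as
    the cue to flip between attracting and repelling.
    """
    changed = sum(
        1 for a, b in zip(prev_frame, cur_frame)
        if abs(a - b) > pixel_threshold
    )
    return changed >= count_threshold

still = motion_detected([10] * 64, [12] * 64)    # small brightness noise: no trigger
moving = motion_detected([0] * 64, [255] * 64)   # big change across the frame: trigger
```

In a Processing sketch the same comparison would run over `pixels[]` from the current and previous video frames inside `draw()`.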
Searching the Discourse forum on the Processing site yielded some interesting results for the RSS feed section of my project. Not only have people been adding RSS feeds to sketches for some time, but there are also people working on feeds that talk to the Twitter and last.fm APIs, much like my header on this page. This is an interesting idea because, particularly with last.fm, there is a lot more information to draw on than simple headings: statistics, artist bios, relationships between users and so on are all stored on last.fm.
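At its simplest, pulling headings out of an RSS feed is just XML parsing. As a minimal illustration – in Python rather than Processing, and using a made-up sample feed rather than the real Twitter or last.fm endpoints – extracting the item titles looks like this:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0 document standing in for a fetched feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Demo feed</title>
    <item><title>First headline</title></item>
    <item><title>Second headline</title></item>
  </channel>
</rss>"""

def feed_titles(rss_xml):
    """Pull the item titles out of an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(feed_titles(SAMPLE_FEED))  # ['First headline', 'Second headline']
```

The richer last.fm data (stats, bios, user relationships) comes back as XML from its web API too, so the same parse-and-pick-elements approach should carry over.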
Clearly, there’s a lot of information out there that will be useful for my project. Now I’ve just got to start experimenting with it.