IxD and Assistive Technology

Professional Portfolio – Week 03

Blob-tracking in my Max/MSP/Jitter patch.

This week I began work on motion tracking in Max/MSP/Jitter, by way of blob detection. The plan is to use the PS3 Eye (the one I replaced the light filter on last semester) to track how much light passes through a translucent table top, giving an impression of how much interaction is taking place.
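The patch itself lives in Max, so it isn't reproduced here, but the core idea of reducing each camera frame to a single "amount of light" value can be sketched in plain Python. The frame representation and the threshold of 64 are illustrative assumptions, not values from the actual patch.

```python
# Sketch: reduce a grayscale frame to one "interaction amount" value.
# A frame is assumed to be a list of rows of 0-255 brightness values;
# the threshold (64) is an arbitrary illustrative choice, not the
# value used in the Max/MSP/Jitter patch.

def light_amount(frame, threshold=64):
    """Fraction of pixels brighter than the threshold."""
    bright = sum(1 for row in frame for px in row if px > threshold)
    total = sum(len(row) for row in frame)
    return bright / total

# A toy 2x4 frame: half the pixels are "lit".
frame = [
    [200, 210, 10, 5],
    [180, 190, 0, 20],
]
print(light_amount(frame))  # 0.5
```

In the installation this single number, rather than any per-pixel position, is what would drive the visual response.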

Blob detection is common practice for camera-based interaction, and there are quite a few methods out there depending on your software of choice. For this project, I'm using some of the OpenCV objects for Max/MSP/Jitter created by Jean-Marc Pelletier. These provide much of the functionality of OpenCV without needing to get your hands too dirty with code.

I found an excellent starting point for anyone getting into blob detection: a patch by eatmoreart in the NUI Group forums. This patch takes the important step of extracting meaningful data from the blobs – x coordinate, y coordinate and area for each blob detected. It also sends this data out over OSC, though that wasn't necessary for my project.
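For readers without Max to hand, the same per-blob data (x, y, area) can be recovered from a binary mask with a simple connected-components pass. This pure-Python sketch (4-connectivity, breadth-first search) stands in for the OpenCV objects used in the patch; the mask and its contents are made up for illustration.

```python
# Sketch: extract (centroid_x, centroid_y, area) for each blob in a
# binary mask, the same data the NUI Group patch reports per blob.
from collections import deque

def find_blobs(mask):
    """Return a list of (centroid_x, centroid_y, area), one per blob."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill this blob with a BFS, collecting its pixels.
                queue = deque([(x, y)])
                seen[y][x] = True
                xs, ys = [], []
                while queue:
                    cx, cy = queue.popleft()
                    xs.append(cx)
                    ys.append(cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                area = len(xs)
                blobs.append((sum(xs) / area, sum(ys) / area, area))
    return blobs

# Two blobs: a 2x2 square at the top-left, a 1x2 strip at the right.
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
print(find_blobs(mask))  # [(0.5, 0.5, 4), (3.0, 1.5, 2)]
```

Real implementations (like the OpenCV one Pelletier's objects wrap) do this far more efficiently, but the output – position plus area per blob – is the same shape of data.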

I did find this patch to be quite processor-hungry, however, so I took many of its ideas, stripped them down to the parts that were useful to me, and let go of the rest. This is where I find Max a great tool for anyone who struggles with code, or who is simply stuck at a point in their own work: there's a seemingly endless community of people willing to share their patches and experiences, letting you 'cut and paste' the ideas that are useful to you – and learn some tricks along the way.

Download the above Max/MSP/Jitter patch.

Although I've been saying otherwise, after creating the above test I've realised that I need to start working with the final project's content as soon as possible – not because I want to know what the visuals will look like, but because I need to start thinking about how they will respond. A 1:1 style of response like that in the video above won't be applicable for this project, as I'm tracking the amount of light passing through the table top, not the direction that light is moving.

What may be useful, however, is the point at which audience members approach and interact with the table. Looking at the project from top-down, as in the image below, the visual response for this work will be mostly horizontal, particularly as the data being mapped will display a timespan. The movement of people, by contrast, will be more vertical, with little motion once they've arrived at the table to interact.

Top-down mockup of installation, looking at the movement of audience vs the movement of visuals.

Connecting this movement in a meaningful way to the visual response will be important in making the audience feel as though they are truly interacting with the work. Whilst I don’t believe that interaction should always be expected and ultimately mastered, I do want this piece to engage a broad cross-section of people – so making interaction overly abstract may not be ideal.


