Kinect Mask

So I have this friend who is insane: he’s just bought a video hire store in Sydney. On the upside, he’s moved into lovely new premises which have kept their 1930/40s deep-set window bays. In exchange for lugging DVDs from the old store to the new one last week, he’s agreed to let me use one of these for a simple interactive display.

I’ve been itching to get into the new Max/MSP/Jitter since it came out a couple of months ago, but I knew it would be to the detriment of my uni work, so I restrained myself. Now I have an excuse to get back into it, along with a Kinect camera I picked up from uni for the break. The simple idea of this project is to encourage passing foot traffic to stop in front of the store. When they do, the Kinect camera will capture them and place them inside a movie poster, projected in the shop window. ’Cause everyone wants to be in the movies. Ouch.

[Image: The XBOX 360 Kinect camera.]

I tried using Processing first up, but the insane hoops that need to be jumped through to get the Kinect to work nearly sent me over the edge. Once I did get it up and running, it sent my CPU usage through the roof. My MacBook Pro is starting to show its age, but I don’t think Processing has an efficient route for setting up and running the Kinect just yet. Max, on the other hand, was a cinch. Jean-Marc Pelletier has yet again released an external for Jitter that makes me a little bit depressed about my lack of coding knowledge (he’s also responsible for the ubiquitous cv.jit externals). It was up and running immediately, with around 1/3 of the CPU usage I was seeing in Processing.

The first task is to separate the body from its background (in the case of the store, this will be the road behind them). It’s quite simple with the depth map that comes from the Kinect: one of the cameras in the unit ‘sees’ the scene in greyscale, where the closer an object is to the camera, the lighter it appears. Perfect for a mask, where white represents opaque and black transparent. In Max, I simply took the greyscale depth map, turned it into a binary black/white image at an adjustable threshold and used this to mix out the background from the RGB camera.

[Image: Removing background with depth mapping.]

Download the Max patch here.
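For anyone who would rather read the logic as code than as patch cords, here is a rough Processing sketch of the same idea. It isn’t the Max patch, and the live Kinect frames are stood in for by two image files (the file names and the 640×480 size are placeholder assumptions), but the threshold-and-mask step is the same: pixels whose depth value is brighter (closer) than the cutoff keep their RGB colour, everything else goes to black.

// Minimal sketch of the depth-threshold mask (assumes both images are 640x480).
PImage rgbFrame, depthFrame;
int cutoff = 128;  // depth threshold: brighter (closer) than this stays visible

void setup() {
  size(640, 480);
  rgbFrame   = loadImage("rgbFrame.png");    // placeholder for a frame from the RGB camera
  depthFrame = loadImage("depthFrame.png");  // placeholder for the greyscale depth map
}

void draw() {
  rgbFrame.loadPixels();
  depthFrame.loadPixels();
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    // binary mask: near pixels keep their colour, far pixels are blacked out
    pixels[i] = brightness(depthFrame.pixels[i]) > cutoff ? rgbFrame.pixels[i] : color(0);
  }
  updatePixels();
}

Swap the two loadImage() calls for live Kinect frames and that loop is the whole trick; the Max patch does the equivalent with Jitter matrix objects.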

As you can see from the image, the different viewpoints of the two cameras cause the mask to be noticeably out of alignment. The next step is to shift the mask so that it matches as closely as possible, then place the body within the movie poster(s). I’m looking forward to playing with the new MGraphics system for that.
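Continuing the Processing sketch above, the shift might look something like this: sample the depth map at an offset position before using it as the mask. The offset values here are invented for illustration; in practice they would be tuned by eye (or by a proper calibration) until the silhouette lines up with the RGB image.

// Hypothetical helper: sample the mask at (x + offsetX, y + offsetY) to compensate
// for the gap between the depth and RGB viewpoints. Assumes loadPixels() has
// already been called on both images by the caller.
int offsetX = 12;  // made-up values; adjust until the silhouette lines up
int offsetY = 4;

color maskedPixel(PImage rgb, PImage depth, int x, int y, int cutoff) {
  int mx = constrain(x + offsetX, 0, depth.width - 1);
  int my = constrain(y + offsetY, 0, depth.height - 1);
  boolean near = brightness(depth.pixels[my * depth.width + mx]) > cutoff;
  return near ? rgb.pixels[y * rgb.width + x] : color(0);
}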



5 thoughts on “Kinect Mask”

  • Hi, one of the reasons your mask doesn’t match nicely is because the way your patch is wired, you’re using the depth from the previous frame to mask the current RGB image. (The patch cords cross before going to the last jit.op.)

    • Right you are, Jean-Marc. What a rookie error! Thanks for that, and thanks for the externals – they are the best around for CV in Jitter.

  • This is a sweet patch. I have been using the Blair kinect patch for something similar. It also allows an element of selecting which depth to mask. Fun stuff. I did not follow the crossed patchcords info however. Am I missing something there? Thanks for the great share.

    • Thanks, Robbie. I think (though haven’t checked – I need to revisit this patch) Jean-Marc was referring to my mixing up the order of bangs when crossing patch cords. Because Max processes from right-to-left, the mask is from the previous frame, rather than the current. A simple [t b b] object or similar should sort that out.
