As part of the observational studies during my PhD, I will be filming participants interacting with the immersive space that I am now in the process of constructing. It is important that the children feel comfortable behaving as they choose. To achieve this, only the child and their parent or carer will be in the environment at any one time, and their interactions will be recorded (both video and data capture from the system) for observation by qualified therapists.
COFA isn’t geared up for studies like this one, so I’m building my own recording system. In an attempt to keep costs down, I’m using 4 to 6 Foscam FI8909Ws and recording via a network router to an external hard drive, with EvoCam software taking care of the recording. This last element has been a little tricky, and I’ve spent the last couple of days testing the setup at home.
The latest version of EvoCam didn’t list the FI8909W as a compatible camera, so at the suggestion of their support staff I figured out how to set one up for remote testing so they could check the feed. For anyone who stumbles across this post with similar problems, here are my crib notes:
- Using the web interface for the FI8909W, change the port from 80 to something else. 8090 is a good starting place.
- Open this port via your modem (this could see you spending some quality time on tech forums, so start here)
- Check your external IP plus port number from a connection other than your own
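A quick way to do that last check without leaving your desk is a small script that simply tries to open a TCP connection to your external IP and port. This is a minimal sketch, not part of the setup above, and the IP and port in the commented example are placeholders:

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Try a TCP connection; True if something answers on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: substitute your own external IP and the port you forwarded.
# print(port_is_open("203.0.113.1", 8090))
```

Bear in mind the caveat in the next paragraph: run this from a connection *outside* your own network (e.g. a phone hotspot), since some routers won’t loop the connection back on itself.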
This last point was where I came unstuck. Some routers apparently don’t like the connection ‘turning back’ on itself, so although I had done everything correctly, I didn’t realise it was working until I checked from another internet connection – in my case, my phone. In the end, EvoCam connected without too much hassle, but I did need to include the port number with the IP address for each camera, so I also had to set up a static IP for each FI8909W, again via the web interface.
All the networking nerdery behind me, I tried a small face recognition test (above). After recording streams from 2 cameras in EvoCam, I ran the video through Max, using the cv.jit objects to identify faces in the image. As you can see from the video, it’s a little bit hit and miss within such a complex image; the computer briefly ‘sees’ faces in the furniture and even the dog (but unfortunately not his face).
I learnt whilst setting up Ozge Samanci’s work at If A System Fails In A Forest this year that a complex background can really be the downfall of computer vision. Not only do false positives pop up in the image, but it also bogs down the processing speed. In the video above, Max eventually dropped to around 7fps on my admittedly underpowered computer.
For now, I’m happy that the setup will work. I’ll test again in the interactive space, which will give me a much simpler backdrop. If I can get it running with less of a load on the computer, I might even take real-time video into Max and use face tracking as some form of interface.