After almost 3 years of research, I have finally completed the first observational study in the Interactive Multi-Sensory Environment (IMSE) I have constructed. I've settled on the IMSE nomenclature to differentiate the space from a history of passive Multi-Sensory Environments (MSEs), like the Snoezelen room. Without getting into too much detail in this post, the problem with MSEs within my own work is that they cannot support an advanced model of interaction, one in which biological and computer systems could be said to be in Conversation with each other.
I have posted several times on the development of this space, so I won’t go into further detail about the structure or technology here. This is more of a brief reflection on the interaction of the children that participated in the first study, titled ’01n’ to indicate the neurotypical population (an autistic population is yet to be observed).
It is useful, however, to describe how the system responds. Using the reacTIVision framework, the system tracks the position and orientation of coloured boxes placed on the interface table. This interaction is represented by corresponding colours and positions of light and sound projected onto the IMSE surface. It's a very simple mode of reaction (not true interaction), designed to establish an understanding of cause/effect agency in the space.
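To make the cause/effect mapping concrete, here is a minimal sketch of the kind of translation step involved. reacTIVision reports each tagged object with a fiducial ID, a normalised (x, y) table position, and a rotation angle; everything below (the `PALETTE` binding, the `map_object_to_light` function, and the specific mappings) is illustrative, not the actual IMSE code.

```python
import math
from dataclasses import dataclass

# Hypothetical binding of fiducial IDs to the colour of each box.
PALETTE = {0: "red", 1: "green", 2: "blue", 3: "yellow"}

@dataclass
class LightCue:
    colour: str         # colour projected onto the IMSE surface
    azimuth_deg: float  # horizontal position of the light/sound source
    intensity: float    # 0.0-1.0, here driven by box rotation

def map_object_to_light(fiducial_id: int, x: float, y: float,
                        angle_rad: float) -> LightCue:
    """Translate one tracked box into a light/sound cue.

    x is a normalised table coordinate in [0, 1]; the box's position
    is spread across a 180-degree arc of the surface, and its rotation
    modulates intensity, so moving or turning a box produces an
    immediate, matching change in the environment.
    """
    azimuth = (x - 0.5) * 180.0                    # table centre -> surface centre
    intensity = (math.cos(angle_rad) + 1.0) / 2.0  # full turn sweeps bright -> dim -> bright
    return LightCue(PALETTE.get(fiducial_id, "white"), azimuth, intensity)
```

For example, placing the green box (fiducial 1) at the centre of the table, unrotated, would yield `LightCue(colour="green", azimuth_deg=0.0, intensity=1.0)`. The point of such a direct, one-to-one mapping is exactly the cause/effect legibility discussed below.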
Through the observational studies, I found that this relationship wasn't explicit enough. Only 1 of the 4 participants made the connection between their interaction and the system's output, and they were among the eldest in the group. This has led to two possible considerations for the next iteration:
- The Early Intervention age range I have been looking at (2-6 years) may be too young to cognitively recognise the cause/effect relationship of the system. The upper end of this age range may still be worth exploring.
- The relationship between the interface and the IMSE surface may need to be better connected. In particular, because the system's response often occurred behind the participant, the output may have appeared arbitrary.
It's entirely possible that the design will be more 'successful' (or, more appropriately, function the way I expected) with an autistic population, who typically perform better at pattern recognition than neurotypical people. It's also possible that I need to take a closer look at the personality of each child; part of the study included the parent of each child completing a questionnaire about the child's experience with technology, and those who used iPads and similar devices more frequently had a different experience from those who didn't.
The data from both the observational video and the completed questionnaires will hopefully reveal some interesting connections, which I can use for the next design iteration. Coding this data is my job for the holidays. But my first instinct is that this system will need quite a lot of work before I can start claiming any kind of Conversation exists between the children and the IMSE.
Below is a short video of my CRL colleagues interacting with the IMSE…