Week 06 marks the quarter-mark for this year, which, frankly, is damn frightening. It’s also the point at which everyone starts to triage their assignments (letting the less important ones slide), which is why I’ve combined two weeks of posts here. There are lengthy essays calling my name.
Let’s kick things off with some good news: Aspect approved my ethics application. This is actually massive. The bottom line is that I can now start looking for participants (with the assistance of Occupational Therapists at Aspect), and I will start to get an idea of what sensory devices will be appropriate. It marks the start of real design and development.
But that’s for the coming weeks. Recently, I’ve been working on the ‘core’ of the devices: PoE, Arduino and Android. PoE was something I started looking into last week, and it seems suited to the project so far. I would prefer the devices not to be tethered in any way, but at this stage it’s far more practical to keep costs and coding time down by avoiding anything tricky like a wireless network.
The problem I have come up against is my lack of networking knowledge. Even though I haven’t (yet) gone down the wireless path, using an Ethernet lead instead of a USB cable still requires some work. And so, I’m struggling to get the Arduino to pass sensor data on to Processing over the network.
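Concretely, the sort of thing I’m aiming for might look like the sketch below: the Processing side (which is just Java underneath) listening for sensor readings sent over the network as small UDP packets. The “A0:512” packet format and port 6000 are placeholders I’ve invented for illustration, not anything from the actual project; for demonstration the sketch loops one packet back to itself, where the Arduino would normally be the sender.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: receive one plain-text sensor reading over UDP,
// roughly as an Arduino with an Ethernet shield might send it.
public class SensorReceiver {

    // Parse an (assumed) "pin:value" packet into the integer reading.
    static int parseReading(String packet) {
        String[] parts = packet.trim().split(":");
        return Integer.parseInt(parts[1]);
    }

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(6000)) {
            // Loop one packet back to ourselves so the example is
            // self-contained (normally the Arduino is the sender).
            byte[] out = "A0:512".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), 6000));

            byte[] buf = new byte[64];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            socket.receive(in);
            String text = new String(in.getData(), 0, in.getLength(),
                    StandardCharsets.UTF_8);
            System.out.println("reading = " + parseReading(text));
        }
    }
}
```

UDP keeps the Arduino side simple (no connection state to manage), which is one reason Igoe’s tutorials lean on it for sensor traffic.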
I have decided that spending some time with Tom Igoe’s Making Things Talk could be valuable, so I’ve started on some of the tutorials there. He looks at practical ways of (surprisingly enough) getting devices and sensors to communicate with each other. Now I just need to find some time to work through the networking chapters.
Android (or more specifically, Android Processing) is also something I’ve been working with for a couple of weeks. I got my Likert Scale on this week though, and put together a mockup design for the questionnaire that I will be giving to the parents/carers of participants in the study.
The idea is that parents/carers can easily record and send data directly to me, without the need for paper or the subjectivity of a written response. The Android application will send me a floating-point number for each question/scale, which I can then record in a database along with the date and time. Although the image above might suggest that it’s up and running, it’s a facade hiding the crashville lurking inside.
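Roughly, each response would become one record: a question id, the slider value, and a timestamp. A minimal sketch of that record, again in plain Java; the CSV-style layout, field order and sample values here are my own illustration, not the actual schema:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of one Likert response as it might be logged:
// question id, floating-point scale value, and a date/time stamp.
public class ResponseLog {
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // One row as it might land in the database (CSV for simplicity).
    static String row(int questionId, float value, LocalDateTime when) {
        return questionId + "," + value + "," + FMT.format(when);
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        rows.add(row(1, 0.75f, LocalDateTime.of(2011, 3, 14, 10, 30, 0)));
        rows.add(row(2, 0.25f, LocalDateTime.of(2011, 3, 14, 10, 31, 5)));
        rows.forEach(System.out::println);
    }
}
```

Stamping each row server-side (rather than trusting the phone’s clock) would be one way to keep the timestamps consistent across participants.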
I have also been looking at some of the work of Temple Grandin. Rather than getting bogged down by psychology and science papers on autism, it has been interesting to start getting a sense of what someone with autism goes through at a young age – in their own words. Grandin is in the unique position of being one of the few people capable of putting these experiences into words, which is both enlightening and inspiring.
A couple of things have really stuck with me after seeing Grandin speak. First of all, she is very focused on the idea that many autistic people “think in images”. There is a body of thought which posits that because autistic people think with the right side of their brain, they are more capable of thinking visually and spatially than with language. Grandin’s ability to ‘see’ what cattle see, and to design very successful animal-handling structures, is testament to that idea.
What I found even more interesting is the way that Grandin has ‘learnt’ social behaviour. As she states in the TED Talk above, she responds during social interactions in a similar way to a performer learning how to act in a stage play. She watches for visual cues, and can intellectualise how and why to respond during interactions, but still not necessarily feel that response.
This does suggest that whilst children with autism may display what we’d consider unacceptable behaviour (Temple was non-verbal until around four years of age, did not like to be hugged or touched, and even smeared her own faeces on walls), it does not mean that these children cannot learn new social behaviours. Grandin was still learning well into her 20s at university.
Having said all that, this is probably getting off topic a little bit. The devices I create are in no way supposed to have a learning goal. The purpose of this study is purely to look at engagement and compare that to non-interactive objects. However, there is the lofty ambition of trying to create co-operative interaction. This may not be an immediate goal in the design of the objects, but it is something that I hope to look at during the year. If the child could engage socially with another person (parent, carer, etc), that would be a huge achievement.
For now, here are the slides outlining the current status of my work, which I will be presenting this week. Next week will be a big one, in terms of giving me a sense of who will be participating in the study and what their sensory needs are. I will be meeting with Occupational Therapists from Aspect, who have gone above and beyond to help me with this project. And I definitely need as much help as I can get!