A sticking point for my PhD research thus far has been how to identify when children with ASD are expressing an intention through interaction with the environment that I have designed (if they are at all). An area that is proving useful for this problem is one that I was looking at closely in my first year of research, but somehow neglected since: Cybernetics.

A field that was established as a framework in the 1950s, Cybernetics at its core is the study of regulatory systems (the word derives from a Greek term meaning to steer, navigate or govern). Cybernetics is not confined to mechanical or computer systems; much of the research in the area is in fact focused on biological systems. The particular area of Cybernetics that I’m focusing on for this research is Gordon Pask’s Conversation Theory. Amongst other things, Pask was interested in how people learn, and with one of the main tenets of Cybernetics being that all humans are systems (and made up of systems at that), he believed that we could observe the learning process by examining goal-oriented systems. Whether these were biological or mechanical was seen to be irrelevant.

Broadly, Cybernetics is all about goal-oriented systems, and so these diagrams from the Dubberly, Pangaro and Haque (2009) paper, What is Interaction? Are There Different Types?, are a useful starting point in modelling the steps through which a system becomes something that can be considered to fall into the ‘learning’ category.

Linear, first and second-order systems. From Dubberly, Pangaro and Haque (2009).

The ‘linear system’ is a standard cause/effect interaction. There is no feedback to speak of; think of turning on a light. The ‘self-regulating system’ is a step into what is considered First-Order Cybernetics: the system has a goal (the analogy of a thermostat is often used), but it is self-contained and doesn’t consider external influences. This was a criticism of early Cybernetics, in that it perpetuated the idea of a God-like engineer with an absolute and objective view of the system in question. However, its self-contained nature makes it a much more manageable starting point for programming.
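As a rough sketch of the thermostat-style self-regulating loop (in Python rather than Max, with the goal, gain and starting temperature all invented for illustration), the system repeatedly measures the error between its state and a fixed internal goal and acts to reduce it:

```python
# Minimal first-order self-regulating system: a thermostat with a
# fixed, self-contained goal. It has no view of the outside world;
# it only ever compares its state against its own goal.
def regulate(temperature: float, goal: float = 21.0, gain: float = 0.5) -> float:
    """One feedback step: measure the error and act to reduce it."""
    error = goal - temperature
    return temperature + gain * error

temp = 15.0
for _ in range(10):
    temp = regulate(temp)
# after ten steps the temperature has converged close to the goal of 21.0
```

The point of the first-order model is exactly this self-containment: nothing in the loop can notice whether 21.0 is actually the right goal.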

The final model, the ‘learning system’, takes us into Second-Order Cybernetics, where most current theories on Cybernetics reside. There is a recognition in this model that the researcher can never be truly outside the system, and the word affect gets thrown around until it loses all meaning. The image above is a simplistic learning model, in that it is only nested once, but it is what will become the basic building block of the interaction model I will construct for the multi-sensory environment.
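One way to think about that single nesting (this is my own sketch, not Dubberly, Pangaro and Haque’s, and every number in it is illustrative) is an outer loop that revises the inner loop’s goal using feedback the inner loop cannot see:

```python
# Sketch of a once-nested 'learning system': an inner self-regulating
# loop pursues a goal, while an outer loop revises that goal based on
# outcomes in an environment the inner loop cannot observe.
def inner_regulate(state: float, goal: float, gain: float = 0.5) -> float:
    """First-order step: move the state a fraction closer to the goal."""
    return state + gain * (goal - state)

def environment(state: float) -> float:
    """The world outside the inner loop: outcomes are offset by 3."""
    return state - 3.0

def outer_learn(goal: float, outcome: float, desired: float = 21.0,
                rate: float = 0.2) -> float:
    """Second-order step: adjust the goal itself, not just the state."""
    return goal + rate * (desired - outcome)

goal, state = 21.0, 15.0
for _ in range(50):
    for _ in range(5):                      # inner loop: regulate toward goal
        state = inner_regulate(state, goal)
    goal = outer_learn(goal, environment(state))  # outer loop: learn the goal
# the goal drifts to ~24.0 so that the actual outcome sits near 21.0
```

The inner loop alone would happily hold 21.0 forever while the real outcome sat at 18.0; only the nested loop can change what it is aiming for, which is what earns it the ‘learning’ label.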

Combinations of interactive systems. From Dubberly, Pangaro and Haque (2009).

The models above continue to unpack different types of interactive systems, of which ‘2-2 Conversing’ is the ultimate. Going back to Pask’s Conversation Theory, he believed that for conversation to take place (as opposed to communication, which he is at pains to point out is quite a different – but not mutually exclusive – type of interaction), at least two learning systems must be able to reach agreement. In this context, agreement is used similarly to understanding: I may not agree with your point of view, but I should be able to recognise and reference it if we are to be in conversation.
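To caricature that numerically (a very loose sketch, entirely my own reading of Pask, with all values invented): two learning systems each build a model of the other’s concept, and agreement means “I can reproduce your position”, not “I hold it myself”.

```python
# Two systems converse by each revising a model of the other's concept.
# Agreement is reached when each can reproduce the other's position,
# even though the two concepts themselves remain different.
def converse(a_concept: float, b_concept: float,
             rounds: int = 20, rate: float = 0.5):
    a_model_of_b, b_model_of_a = 0.0, 0.0
    for _ in range(rounds):
        # each round, each system moves its model toward what the other presents
        a_model_of_b += rate * (b_concept - a_model_of_b)
        b_model_of_a += rate * (a_concept - b_model_of_a)
    agreed = (abs(a_model_of_b - b_concept) < 0.01 and
              abs(b_model_of_a - a_concept) < 0.01)
    return agreed, a_model_of_b, b_model_of_a

# the concepts stay different (3 vs 5), yet agreement is reached
agreed, _, _ = converse(3.0, 5.0)
```

This is of course a drastic flattening of Conversation Theory, but it captures the distinction I need: agreement as mutual modelling rather than shared opinion.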

This concept becomes very useful in my research for describing the agency of children in the multi-sensory environment. They may be unable to express themselves through language, but we should be able to observe them reaching agreement with an interactive system.

So that’s really the end-game for the simple Max patch below. I’ve taken Dubberly, Pangaro and Haque’s self-regulating model and replicated it as closely as possible. I’m not entirely sure that Max is the easiest language to use in this case, but its visual nature does lend itself very well to sitting within a graphical model structure.

After setting the goal, there are only two things happening here: if any disturbance is added to the system, it is measured against the goal, and the system then acts iteratively to remove that disturbance, returning the system to its goal-state. This is best reflected by the graph in the middle of the model. I’m not sure that I’ve patched this in the most efficient way, so I’m very open to feedback; take a look at the patcher here.
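Translated out of Max into Python for readability (the gain and disturbance values here are illustrative, not the ones in the patch), the patch’s logic amounts to:

```python
# The self-regulating patch as a loop: a disturbance perturbs the state,
# and each tick the system steps back toward the goal by a fraction of
# the remaining error.
goal = 0.0
state = 0.0
history = []  # the trace you'd see on the graph in the middle of the model

def tick(disturbance: float = 0.0, gain: float = 0.25) -> None:
    global state
    state += disturbance            # external disturbance enters the system
    state += gain * (goal - state)  # act against the measured error
    history.append(state)

tick(disturbance=8.0)   # a single kick away from the goal
for _ in range(20):
    tick()              # no further disturbance: the error decays each tick
# state has returned close to the goal-state of 0.0
```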

This is all fairly straightforward at this stage. The next step is to nest this model to create the learning system outlined above and use it to control a DMX lighting rig. Technically, there are no problems there; by far the most difficult part is understanding how someone recognises that they are in conversation with the system – that their agency is not only having an aesthetic effect on the system, but that the system is in turn responding to their input; a conversation with no words.
