Among the mince pies and over-cooked turkey over Christmas, I managed to find a little time to read an interesting paper. #Scanners: Exploring the Control of Adaptive Films using Brain-Computer Interaction shows once again that the cool people are all at the University of Nottingham. What these particular four cool guys did was put a mini cinema in an old caravan. But this particular cinema wasn’t showing an ordinary film. Rather, the “film was created with four parallel channels of footage, where blinking and levels of attention and meditation, as recorded by a commercially available EEG device, affected which footage participants saw.”
Building on research in Brain-Computer Interfaces (BCI), the team worked with an artist to create a filmed narrative that “ran for 16 minutes, progressing through 18 scenes. However, each scene was filmed as four distinct layers, two showing different views of the central protagonist’s external Reality and the other two showing different views of their internal dream-world.” Which layers each viewer saw was selected by the EEG device, or rather by the viewers’ blinks and states of “attention” or “meditation” as recorded by the device. The authors admit to some skepticism from the research community about the accuracy of the device in question, but that was not what was being tested here. Rather, they were interested in the viewers’ awareness of their ability to control the narrative, and their reaction to that awareness.
I was interested in the paper for two reasons. First of all, their conclusions touch upon an observation I made very early in my own research. Looking at Ghosts in the Garden, I got a small number (therefore not a very robust sample) of users of that interactive narrative to fill out a short questionnaire, and I was surprised by the number of respondents who were not aware that they could control (were controlling) the story through the choices they made. The #Scanners team noticed a similar variation in awareness, but more than that, they found that “while the BCI based adaptation made the experience more immersive for many viewers, thinking about control often brought them out of the experience.”
They conclude that “a traditional belief in HCI is that Direct Manipulation (being able to control exactly what you want to control) sits at the top of both these dimensions. We examined, however, how users deviate from [this] line, and enjoyed the experience more by either not knowing exactly how it worked, or by giving up control and becoming re-immersed in the experience. […] these deviations from the line between knowledge and conscious control over interaction are [the] most interesting design opportunities to explore within future BCI adaptive multimedia experiences.”
With which I think I agree.
The other reason the paper interests me is that they described their research as “Performance-Led Research in the Wild” and pointed me towards another paper to read.