Simulating ideology in storytelling

The Story Extension Process, from Mei Yii Lim and Ruth Aylett (2007) Narrative Construction in a Mobile Tour Guide

Another great piece from Ruth Aylett, this time from 2007. Here, she and collaborator Mei Yii Lim are getting closer to what I’m aiming for, if taking a different approach. They kick off by describing Terminal Time, a system that improvises documentaries according to the user’s ideological preference, and an intelligent guide for virtual environments which takes into account the distance between locations, the story already told, and the affinity between the story element and the guide’s profile when selecting the next story element and location combination to take users to. They note that this approach could bring mobile guides “a step nearer to the creation of an ‘intelligent guide with personality'” but that it “omits user [visitor] interests”. (I can think of many a human tour guide that does the same.) They also touch on a conversational agent that deals with the same issues they are exploring.

This being a 2007 conference paper, they are of course using a PDA as their medium. The PDA is equipped with GPS and text-to-speech software, while a server does all the heavy lifting.

“After [an ice-breaking session where the guide extracts information about the user’s name and interests], the guide chooses attractions that match the user’s interests, and plans the shortest possible route to the destinations. The guide navigates the user to the chosen locations via directional instructions as well as via an animated directional arrow. Upon arrival, it notifies the user and starts the storytelling process. The system links electronic data to actual physical locations so that stories are relevant to what is in sight. During the interaction, the user continuously expresses his/her interest in the guide’s stories and agreement to the guide’s argument through a rating bar on the graphical user interface. The user’s inputs affect the guide’s emotional state and determine the extensiveness of stories. The system’s outputs are in the form of speech, text and an animated talking head.”

So, in contrast to my own approach, this guide is still story-led rather than directly user-led, but it decides where to take the user based on their interests, and the authors are striving for an emotional connection with the visitor. Their story elements (SEs) are composed of “semantic memories [-] facts, including location-related information” and “emotional memories […] generated through simulation of past experiences”. Each story element has a number of properties; semantic memories, for example, include: name (a coded identifier); type; subjects; objects; effects (this is interesting: it lists the story elements that are caused by this story element, with variable weight); event; concepts (something that might need further definition when first mentioned); personnel (who was involved); division; attributes (relationship to interest areas in the ontology); location; and text. Emotional story elements don’t include “effects and subjects attributes because the [emotional story element] itself is the effect of a SE and the guide itself is the subject.” These emotional memories carry “arousal” and “valence” tags: the arousal tags are based on Emotional Tagging, while the valence tag “denotes how favourable or unfavourable an event was to the guide. When interacting with the user, the guide is engaged in meaningful reconstruction of its own past.” Hmmmmm.
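To get that structure clear in my own head, here is a minimal sketch of how such story elements might be represented – my own guess in Python, not the authors’ implementation. The field names follow the paper’s list, but the types (and the shape of the emotional memory) are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SemanticSE:
    """A semantic story element, with the properties Lim and Aylett list."""
    name: str                  # coded identifier
    type: str
    subjects: list[str]
    objects: list[str]
    effects: dict[str, float]  # story elements this SE causes, with weights
    event: str
    concepts: list[str]
    personnel: list[str]       # who was involved
    division: str
    attributes: list[str]      # links to interest areas in the ontology
    location: str
    text: str                  # what the guide actually says

@dataclass
class EmotionalSE:
    """An emotional memory: no effects or subjects, since it is itself the
    effect of a semantic SE and the guide itself is the subject."""
    name: str
    arousal: float             # intensity of the remembered experience
    valence: float             # how favourable the event was to the guide
    text: str
```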

In their prototype, a guide to the Los Alamos site of the Manhattan Project, the guide takes one of two personas: “a scientist who is interested in topics related to Science and Politics, and a member of the military who is interested in topics related to Military and Politics. Both guides also have General knowledge about the attractions.” I’m not convinced by the artifice of layering two different points of view onto the interpretation – both are being authored by a team who, in creating those two points of view, will make editorial decisions that reveal a third, authentic PoV, even if they strive to be objective.

When selecting which SE to tell next, the guide filters out the ones that are not connected to the current location. Then “three scores corresponding to: previously told stories; the guide’s interests; and the user’s interests are calculated. A SE with the highest overall score will become the starting spot for extension.” The authors present a pleasingly simple (for a non-coder like me) algorithm for working out which SE goes next (I’ve had a go at sketching my own toy version of it at the end of this post). But the semantic elements are not the only story elements that get told. The guide also measures the emotional, ideological story elements against the user’s initial questionnaire answers and reactions to previous story elements, and decides whether or not to add the guide’s “own” ideological experience on to the interpretation, a bit like a human guide might. So you might be told:

Estimates place the number of deaths caused by Little Boy in Hiroshima up to the end of 1945 at one hundred and forty thousands where the dying continued, five-year deaths related to the bombing reached two hundred thousands.

Or, if the guide’s algorithms think you’ll appreciate its ideological perspective, you could hear:

Estimates place the number of deaths caused by Little Boy in Hiroshima up to the end of 1945 at one hundred and forty thousands where the dying continued, five-year deaths related to the bombing reached two hundred thousands. The experience of Hiroshima and Nagasaki bombing was the opening chapter to the possible annihilation of mankind. For men to choose to kill the innocent as a means to their ends, is always murder, and murder is one of the worst of human action. In the bombing of Japanese cities it was certainly decided to kill the innocent as a means to an end.

I guess that’s the scientist personality talking; perhaps the military personality would instead add a different ideological interpretation of the means to an end. As I mentioned before, I’m not convinced that two (or more) faux points of view are required when the whole project, and every story element that the guide gets to choose from, are already authored with a true point of view. But in many other aspects this paper is really useful and will get a good deal of referencing in my thesis.
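As promised above, here is my toy sketch of that selection step. To be clear, this is my own illustration, not the authors’ algorithm (their actual scoring formula is in the paper), it reuses the made-up SE shape from my earlier sketch, and the weighting is invented:

```python
def choose_next_se(candidates, current_location, told, guide_interests, user_interests):
    """Pick the next semantic story element (a toy version of the idea):
    filter to the current location, then score each candidate against
    previously told stories, the guide's interests and the user's interests."""
    local = [se for se in candidates if se.location == current_location]

    def score(se):
        # Reward SEs that follow on from stories already told,
        # via their weighted 'effects' links.
        continuity = sum(w for prior in told
                         for name, w in prior.effects.items()
                         if name == se.name)
        # Overlap with the guide's and the user's interest areas.
        guide_match = len(set(se.attributes) & set(guide_interests))
        user_match = len(set(se.attributes) & set(user_interests))
        return continuity + guide_match + 2 * user_match  # arbitrary weighting

    return max(local, key=score) if local else None
```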

Mind Control and responsive narrative

Among the mince pies and over-cooked turkey over Christmas, I managed to find a little time to read an interesting paper. #Scanners: Exploring the Control of Adaptive Films using Brain-Computer Interaction shows, once again, that the cool people are all at the University of Nottingham. What these particular four cool guys did was put a mini cinema in an old caravan. But this particular cinema wasn’t showing an ordinary film. Rather, the “film was created with four parallel channels of footage, where blinking and levels of attention and meditation, as recorded by a commercially available EEG device, affected which footage participants saw.”

Building on research in Brain-Computer Interfaces (BCI), the team worked with an artist to create a filmed narrative that “ran for 16 minutes, progressing through 18 scenes. However, each scene was filmed as four distinct layers, two showing different views of the central protagonist’s external Reality and the other two showing different views of their internal dream-world.” Which layers each viewer saw was selected by the EEG device, or rather by the viewers’ blinks and states of “attention” or “meditation” as recorded by the device. The authors admit to some skepticism from the research community about the accuracy of the device in question, but that was not what was being tested here. Rather, they were interested in the viewers’ awareness of their ability to control the narrative, and their reaction to that awareness.
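Purely to help me think it through, here is a guess at what the layer-switching logic might look like. The paper has its own mapping from EEG readings to footage; the blink-toggles-world rule and the attention/meditation comparison below are my assumptions, not the authors’ design:

```python
def select_layer(world, attention, meditation, blinked):
    """Choose one of the four footage layers for the current moment.

    world      -- 'reality' or 'dream', the channel pair currently showing
    attention  -- 0-100 attention value from the consumer EEG headset
    meditation -- 0-100 meditation value
    blinked    -- True if the viewer just blinked

    This mapping is invented for illustration: a blink flips between the
    protagonist's external reality and internal dream-world, and the
    relative levels of attention and meditation pick which of that
    world's two views is shown.
    """
    if blinked:
        world = 'dream' if world == 'reality' else 'reality'
    view = 'A' if attention >= meditation else 'B'
    return world, f"{world}_{view}"
```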

I was interested in the paper for two reasons. First of all, their conclusions touch upon an observation I made very early in my own research. Looking at Ghosts in the Garden, I got a small number (therefore not a very robust sample) of users of that interactive narrative to fill out a short questionnaire, and I was surprised by the number of respondents who were not aware that they could control (were controlling) the story through the choices they made. The #Scanners team noticed a similar variation in awareness, but more than that, they found that “while the BCI based adaptation made the experience more immersive for many viewers, thinking about control often brought them out of the experience.”

They conclude that “a traditional belief in HCI is that Direct Manipulation (being able to control exactly what you want to control) sits at the top of both these dimensions. We examined, however, how users deviate from line, and enjoyed the experience more by either not knowing exactly how it worked, or by giving up control and becoming re-immersed in the experience. […] these deviations from the line between knowledge and conscious control over interaction are most interesting design opportunities to explore within future BCI adaptive multimedia experiences.”

With which, I think I agree.

The other reason the paper interests me is that they described their research as “Performance-Led Research in the Wild” and pointed me towards another paper to read.

Resonance: Sound, music and emotion in historic house interpretation

Just drafted an abstract for my Sound Heritage presentation:

This presentation explores what computer games can teach us about emotional engagement in cultural heritage interpretation. Beginning with a model of emotional affect drawn from the work of Panksepp and Biven (Panksepp, 2012), Lazarro (Lazarro, 2009), Sylvester (Sylvester, 2013) and Hamari et al (Hamari et al., 2014), it reveals how music especially has become a versatile emotional trigger in game design.

Drawing on the work of Cohen (Cohen, 1998) and Collins (Collins, 2008), it identifies eight functions that music has in games:

Masking – Just as music was played in the first movie theaters, partly to mask the sound of the projector, so music in new media can be used to mask the whir of the console’s or PC’s fan.

Provision of continuity – A break in the music can signal a change in the narrative, or continuous music signals the continuation of the current theme.

Direction of attention – patterns in the music can correlate to patterns in the visuals, directing the attention of the user.

Mood induction; and,
Communication of meaning – The nice distinction here is between music that makes the user sad, and music that tells the user “this is a sad event” without necessarily changing the user’s mood.

A cue for memory – The power of the music to invoke memories or prepare the mind for a type of cognitive activity is well recognized in advertising and sonic brands such as those created for Intel and Nokia.

Arousal and focal attention – With the user’s brain stimulated by music, s/he is more able to concentrate on the diegesis of the presentation.

Aesthetics – The presentation argues that all too often music is used for aesthetic value only in museums and heritage sites, even if the pieces of music used are connected historically with the site or collection.

As an example, the presentation describes a project to improve the way music is used in the chapel at the Vyne, near Basingstoke. Currently, a portable CD player is used to fill the silence with a recording of a cathedral choir: pretty, but inappropriate for the space and for its story. A new recording is being made to recreate about half an hour of a pre-Reformation Lady Mass, with choristers, organ and officers of the church, to be delivered via multiple speakers, which will be prettier still but also a better tool for telling the place’s story.

With a proposed experiment at Chawton House as an example, we briefly explore narrative structure, extending the concept of story Kernels and Satellites described by Shires and Cohan (Shires and Cohan, 1988) to imagine the cultural heritage site as a collection of narrative atoms, or Natoms (Hargood, 2012), both physical (spaces, collection) and ephemeral (text, video, music etc.). Music, the presentation concludes, is often considered a “mere” satellite, but with thought and careful design there is no reason why music cannot also become one of the narrative kernels of interpretation.

 

COHEN, A. J. 1998. The Functions of Music in Multimedia: A Cognitive Approach. Fifth International Conference on Music Perception and Cognition. Seoul, Korea: Western Music Research Institute, Seoul National University.

COLLINS, K. 2008. An Introduction to the Participatory and Non-Linear Aspects of Video Games Audio. In: RICHARDSON, J. & HAWKINS, S. (eds.) Essays on Sound and Vision. Helsinki: Helsinki University Press.

HAMARI, J., KOIVISTO, J. & SARSA, H. 2014. Does Gamification Work? – A Literature Review of Empirical Studies on Gamification. 47th Hawaii International Conference on System Sciences (HICSS), 6-9 Jan. 2014. 3025-3034.

HARGOOD, C., JEWELL, M. O. & MILLARD, D. E. 2012. The Narrative Braid: A Model for Tackling the Narrative Paradox in Adaptive Documentaries. NHT12@HT12. Milwaukee.

LAZARRO, N. 2009. Understand Emotions. In: BATEMAN, C. (ed.) Beyond Game Design: Nine Steps Towards Creating Better Videogames. Boston, MA: Course Technology / Cengage Learning.

PANKSEPP, J. & BIVEN, L. 2012. The Archaeology of Mind: Neuroevolutionary Origins of Human Emotions. New York: W. W. Norton & Company.

SHIRES, L. M. & COHAN, S. 1988. Telling Stories: A Theoretical Analysis of Narrative Fiction. Florence, KY: Routledge.

SYLVESTER, T. 2013. Designing Games: A Guide to Engineering Experiences. Sebastopol, CA: O’Reilly Media.

Shine On: part two

In the afternoon Graham Festenstein, lighting consultant, kicked off a discussion about using lighting as a tool for interpretation. New technology, he said, especially LED, presents new opportunities, a “new revolution” in lighting: it’s smaller, with better optics and control, and also more affordable. He used cave paintings as an example. Lighting designers could take one of three approaches to lighting such a subject: they might try to recreate the historical lighting which, for a cave painting, would have been primitive indeed – a tallow bowl light, revealing small parts of the painting at a time, and with an orange light; it’s more likely, given the needs of the visitor, that they might go for wider-angle lighting, revealing the whole of the painting at once; or they might light for close-up inspection of the work, to show the mark-making techniques. Traditionally, a lighting designer would have had to choose just one of these approaches. But with the flexibility and versatile control of modern lighting technology, we can do all three things – caveman lighting, wide-angle panorama, and close-up technical lighting.

Graham’s presentation was not the strongest. He explained that he approached LED lighting as a sceptic at first. He recalled a visit to a pilot project at the National Portrait Gallery: his first impressions were disappointing, but then he realised that what he missed about the tungsten lighting was the way it lit the gilded frames, and that the LED lighting was actually serving the pictures better. He then went on to talk about colour, and how the warm lights of the Tower of London’s torture exhibition undermined the theme, but the presentation overall was somewhat woolly.

Zerlina Hughes, of studio ZNA, came next, with a very visual presentation which I found myself watching rather than taking notes on. It explained her “toolkit” of interpretive lighting techniques, but I didn’t manage to list all the tools. A copy of the presentation is coming my way though, so I might return with more detail on that toolkit in a later post. One of her most recent jobs looks great, however: You Say You Want a Revolution, at the V&A, follows on from the Bowie show a year or so ago, but with (she promises) less clunky audio technology. I want to go.

Jonathan Howard, of DHA Design, explained that, like Zerlina, “most of us started as Theatre designers.” I (foolishly, I think, in retrospect) passed up an invitation to do theatre design at Central St Martins, and I think I would have been fascinated by lighting design if I had gone, so I might have ended up at the same event, if on the other side of the podium. Museum audiences today expect more drama in museums, having experienced theatrical presentations like Les Miserables, theme parks and the like. I was interested to learn that in theatre, cooler colours throw objects into the background, and warmer colours push them into the foreground. This is apparently because we find the blue end of the spectrum more difficult to focus on. In a museum space, he says, you can light the walls blue so that the edges of the gallery fall away completely. But he did have a caveat about using new lighting technology. Before rushing in to replace your lighting with LEDs and all the modern bells and whistles, ask yourself:

Why are we using new tech?
Who will benefit?
Who will maintain it?
Who will support it?

Kevan Shaw offered the most interesting insight into the state of the art. He pointed out that lighting on the ceiling has line of sight to most things, because light travels in straight lines (mostly), and we tend to point it at things. So, he said, your lighting network could make a useful communications network too. He wasn’t the first presenter to include an image of a yellow-centred squat cylinder in their slide deck, and they all spoke as though we knew what it was. I had to ask after the presentation, and they explained that it was one of these. These LED modules slip into many existing lamps or luminaires. They are not just a light source, but also a platform for sensors and a communications device. Lighting, Kevan argues, could be the beachhead of the Internet of Things in museums.

He briefly discussed two competing architectures for smart lighting: Bluetooth, which we all know, and Zigbee, which you may be aware of through the Philips Hue range (which I was considering for the Chawton experiment). He also mentioned Casambi and eyenut, though I’m not sure why he thinks these are not part of the two-horse race. He argues that we need interoperability, so I guess he’s saying that the competing systems will eventually see a business case in adopting either Bluetooth or Zigbee as an industry standard.

With our lightbulbs communicating with each other, we can get rid of some of our wires, he argues, but it needs to be robust and reliable. The secret to reliability is mesh networking: robust networks for local areas, and lighting is a great place for that network to be. That capability already exists in Zigbee (so I think Zigbee is what I should be using for Chawton), but it’s coming soon in Bluetooth. And I think Kevan believes that when it does, Bluetooth will become the VHS of the lighting system wars, and relegate Zigbee to the role of Betamax.

But the really exciting thing is Visible Light Communication, by which the building can communicate with any user carrying a mobile device that has a front-facing camera (and the relevant software installed). He showed us a short video of the technology in use in Carrefour (mmm, the own-brand soft goat’s cheese is delicious).

The opportunities for museums are obvious but, he warns, to use them effectively museums will need the resources to manage, and get insight from, all the data these lighting units could produce. Though, he said optimistically to his fellow lighting consultants, “that need could be an opportunity for us!”

Finally we heard from Pavlina Akritas, of Arup, who took the workshop in the direction of civil engineering. Using LA’s Broad Museum as an example, she explained how, in this new build, Arup engineered clever (north-facing) light-wells which illuminate the museum with daylight while ensuring that no direct Los Angeles sun falls onto any surface within the galleries. The light-wells include blackout blinds to limit overall light hours, and photocells to measure the amount of light coming in and, if necessary, automatically supplement it with LEDs. She also talked briefly about a project to simulate skylight for the Gagosian gallery at Grosvenor Hill.

All in all, it was a fascinating day.

This post is one of two, the first is here.

Pokemon Go: Why is it such an extraordinary success?

A Rattata in a glass; Jerry Mouse he ain’t. Photo: Tom Tyler-Jones

I was talking with my son about Pokemon Go today, and I thought it might be useful to run the game through my model of affect and affordances. Would it reveal why this game is so spectacularly popular, given the barriers to engagement that locative games have had in the past?

My son pointed out that one thing the game has, especially over its Niantic stablemate Ingress, is the Pokemon brand, which two or three generations have grown up with since the mid-nineties. Speaking as someone who didn’t grow up with Pokemon, however, I could not believe this was the only reason for its success.

A huge difference from Ingress is ease of entry. As my survey a couple of years ago may have indicated (though I could not disprove the null hypothesis), heretofore locative games have only held any interest for Hard Fun (otherwise Hardcore) gamers. Pokemon Go seems to be the first truly casual locative game (though some might give that honour to Foursquare, I don’t think it was much of a game).

So let’s run it through the model:

Game Affects stripped

Leaderboards – Although Pokemon Go doesn’t have a leaderboard as such, it does have Gyms. Just today, with my 11-year-old son’s advice, I managed to (very temporarily) take over a local gym from some quite high-powered Pokemon of an opposing team. So, for a few minutes after that, I was indeed at the top of a very local leaderboard.

Badges – There are a huge variety of medals you can win for achievements like, for example, collecting ten Pokemon of a particular type.

Rewards – Pokemon Go has LOADS of rewards. For a start, visit a Pokestop and spin the dial, and you will acquire a randomly generated reward of Pokeballs, Eggs, Potions etc., all of which will be useful in the game. Take over a Gym (as I did this afternoon) and you can claim a reward of 10 Pokecoins every day that you keep the Gym under your control. Capture a Pokemon, and not only can you add it to your collection, but you are also rewarded with Stardust and Evolution Candy. Every time you go up a level you also earn rewards such as new equipment.

Points and Levels – To level up, you need to earn experience points, which you get for pretty much everything you do: collecting Pokemon (especially new types), spinning Pokestops, hatching eggs, earning badges, evolving Pokemon, battling in gyms, etc.

Story/theme – There isn’t much of a story inherent within the Pokemon Go game, but players who have been brought up on the other computer games and TV series will know of quite a complex backstory. However, not knowing this story does not seem to be a disadvantage to players. Story knowledge isn’t essential to play, and the lack of story within the game seems to attract (or at least not be a barrier to) players of all generations, many too old to have been captured by the original Pokemon games. My son also points out that, as you play, you do procedurally generate a story for your own trainer avatar, even if that is only in your head, as Sylvester describes.

Progress – As you go up in level, you get better equipment, are more likely to catch Pokemon with higher combat power, and are more likely to encounter rarer Pokemon.

Feedback – The game is casual enough that you don’t need to be looking at the screen all the time, but because the game does not allow you to put your device into sleep mode, you end up holding it, waiting for the tell-tale buzz of nearby Pokemon.

Spectacle and Environment – The graphics and augmented reality are not very sophisticated, but they are fun. Two things make them so. One is that creatures that only exist in fiction now appear in our real-life world. The other is that they can (with some luck and a little movement of the screen) appear in amusing places (on your knee, in your dinner or drink, on your friend’s head), and if they do, you can take and keep a photo.

Challenge – There isn’t much skill-based challenge in the gameplay. Capturing rare Pokemon is more a feat of luck than skill. There is a real-world challenge of sorts, though, and that is to walk around, which is the only way to hatch eggs. Some eggs only require two-kilometre walks, but other, more rewarding eggs require ten kilometres.

The game lacks (or doesn’t make the most of) a number of emotional triggers:

Music – My son likes the music, but I turned it off early in my play. The music isn’t a very sophisticated feedback generator. One track plays pretty much continuously, and the only changes are for evolution cut-scenes (my boy likes this track best) and Pokemon encounters.

Insight – There is very little learning through play. My son teaches me most of what I need to learn, and he has learned most of that through YouTube.

There is no Threat or Sex (even when you capture Male and Female Nidoran), and no real character arc.

So, given the affordances listed above, we can predict which emotions players will be feeling: playful Amusement (from humorously placed AR Pokemon); the social emotions Fiero and Naches (because though the gameplay isn’t inherently social, there are enough players currently on the streets for conversation, advice and insight, and even a degree of cooperation, to take place); the seeking emotions, Excitement and Curiosity (especially when you find new types of Pokemon); Frustration, a rage affect (when Pokemon randomly break out of your Pokeballs); and some degree of Care (from nicknaming, nurturing and powering up your stable of Pokemon).

And let us not forget the Panic/Grief when nothing makes your phone buzz – you are out of mobile reception or have a weak signal – and especially when your phone battery is running low!

Interactive story beats

In my exploration of interactive storytelling I’ve concentrated on computer games, because I’m exploring the digital delivery of story. But I’ve already decided that for my experiment at Chawton next year, I’m going to “wizard of Oz” it – use actual people instead of trying to write a computer program to deliver the interactive narrative.

I’ve been thinking about the issues around that. People are natural storytellers, though some are better than others, so I have a double-edged problem. As I recruit and train people to be my “wizards of Oz”, I need to train the poor storytellers to be better and, weirdly, I need to train the great storytellers to be worse! My reasoning is this: I want to prototype what a computer might do, and there’s little or no experimental value in simply enhancing a great storyteller’s natural ability with some environmental bells and whistles. So part of what I’m trying to learn is how to systematize (is that a word? It’ll do) story.

I’ll explain about Kernels and Satellites of course, but I need (I think) some sort of simple system of identifying how different story elements might fit into the emotional journey the visitor is going to take.

So, I’m reading Robin D. Laws’ Hamlet’s Hit Points. Laws is a game designer, but mostly of tabletop, or “pen and paper”, role-playing games (though he has written for some computer games too). This book attempts to systematize (I think it is a word) story, with an audience of role-playing gamers in mind. I think it may be useful for me, because it attempts to train the Game Master of such games (the “referee” who, together with the players, makes the story) to be aware of the emotional impact of each scene or action (which he calls, using a screenwriting term, “beats”) on the players, and to better choose which element to serve up next to keep everyone emotionally engaged. Tabletop role-playing games must be the most interactive, responsive stories ever created. In a way, my “wizards of Oz” will be like Game Masters, not telling a story they prepared earlier, but working with their visitors to create a story on the fly, while keeping it emotionally engaging.

In a handy short opening chapter called “How To Pretend You’ve Read This Book” Laws explains “With its system of beat analysis, you can track a narrative’s moment-to-moment shifts in emotional momentum. Beat analysis builds itself around the following very basic fact:

Stories engage our attention by constantly modulating our emotional responses.”

Sadly though, I can’t get away with reading just this chapter. It’s only later that he actually shares the classification of beats that he uses in his analysis.

Hamlet’s Hit Points Icons and Arrows by Gameplaywright LLP and Craig S. Grant is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

He begins with two types that he says will make up the majority of beats in any story: Procedural and Dramatic beats. Procedural beats move the protagonist towards (fulfilling the audience’s hopes), or away from (realizing the audience’s fears), his practical, external goal. Dramatic beats do the same for the protagonist’s inner goals. “We hope that the beat moves him closer to a positive inner transformation and fear that it might move him towards a negative transformation.”

Laws talks a lot about hope and fear. In fact, he simplifies the audience’s emotional response to every beat (which he describes as its resolution) as a movement towards one of these two poles. I’ve got fear on my nascent emotional affect and affordances diagram – it’s one of Panksepp’s primal emotions – but I’m not yet sure where hope sits. I wonder, is it in care?

In both types of beat, Laws describes two parties, the petitioner, who wants the thing, and the granter, who must be negotiated with. Dramatic beats are mostly actual verbal negotiations, procedural beats might also be fights, tricks, races or other challenges.

From the way Laws describes them, I’d expect that most kernels in a story are likely to be one of these two types of beat. And the other types are more likely to be satellites. He lists:

Commentary – “in which the protagonist’s movement towards or away from his goal is momentarily suspended while the author underlines the story’s thematic elements.” Laws uses Mercutio’s Queen Mab speech in Romeo and Juliet as an example.

Anticipation – which “create[s] an expectation of coming procedural success, which we look forward to with pleasure.” The example here is “Popeye has eaten his spinach. (any given episode of Popeye)”

Gratification – “a positive emotional moment that floats free from the main narrative. They often appear as rest breaks between major sequences. A musical interlude often acts as a gratification beat (unless it also advances the story, as it frequently does in musical genre).”

Bringdown – the opposite of gratification. “Jerry Lundergaard’s car alone in a desolate parking lot, is completely iced over after his father-in-law bars him from a promising business deal. (Fargo)”

Then Laws offers us three “informational beats”:

Pipe – “A beat that surreptitiously provides us with information we’ll need later, without tipping the audience to its importance.”

Question – “introduces a point of curiosity we want to see satisfied […] a question usually resolves as a down beat.”

Reveal – “provides the information we were made to desire in a previous question beat, or surprises us with new information. In the latter case it might come out of the blue, or have been set up with one or more pipe beats laying the groundwork for the surprise.” The example he uses is the Revelation that Bruce Willis’ character in The Sixth Sense is dead. “We tend to be more engaged by exposition when it has been teased to us by a prior question, or can clearly see its impact on our hopes and fears.”

(Laws explains that literary fiction makes much use of question/reveal cycles to manipulate emotion, rather than the procedural / dramatic beats that fill genre fiction and thrillers.)

Laws goes on to analyse three scripted narratives in full: Shakespeare’s Hamlet, and the films Dr No and Casablanca. That’s not what I’m discussing now, though having recently rewatched Casablanca as part of my children’s continuing cinema education, I was interested to read his analysis of it. It is worth pointing out, however, that the “curve” of a story like Casablanca is inexorably downward. Laws compares the maps his analysis creates with “the classic chart you may recall from secondary school literature classes” (which I’ve touched on before) and notes that the lines his analysis creates are “flatter overall. It tends to resemble a stock tracker measuring the progress over time of a slowly deflating security […] Even stories that end happily […] tend to move downward over time.” He explains that narratives build up fear with numerous incremental steps, before sudden uplifting moments of hope. So in most stories there are simply more down beats than up beats, given that the up beats are more intense. I think there is also a point that Laws misses: many of those narrative curves measure the absolute value of emotional intensity, with no thought as to whether the emotion is hopeful or fearful.
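To see what he means, here is a trivial sketch of my own (not Laws’ actual notation): give each beat type a rough up or down value, and the running total draws the curve.

```python
# A toy version of Laws' beat maps: up beats nudge the audience towards
# hope, down beats towards fear, and the running total gives the curve.
# The values are my own guesses for illustration.
BEAT_VALUES = {
    "procedural-up": +1, "procedural-down": -1,
    "dramatic-up": +1, "dramatic-down": -1,
    "anticipation": +1, "gratification": +1,
    "bringdown": -1, "question": -1, "reveal": +1,
}

def momentum_curve(beats):
    """Return the cumulative emotional momentum after each beat."""
    total, curve = 0, []
    for beat in beats:
        total += BEAT_VALUES.get(beat, 0)  # commentary, pipe etc. count as neutral here
        curve.append(total)
    return curve

# e.g. a story that piles on down beats before a big reveal:
print(momentum_curve(["question", "procedural-down", "dramatic-down",
                      "pipe", "reveal", "gratification"]))
# -> [-1, -2, -3, -3, -2, -1]
```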

So, is all this useful to me? Well, I think at the very least I can get my “wizards of Oz” to think about up beats and down beats, and make sure not to pile on too many down beats in a row without the occasional up beat. Whether or not heritage interpretation lends itself to procedural and dramatic beats, there is definitely room for question/reveal beats, and it could be argued that too much interpretation goes straight for the revelations without asking the questions or laying the pipes first. So I think it is something that may prove useful.

The powers of people

I was at Chawton again yesterday (before going to Petworth for yesterday’s mobile fun) to meet with Jane, one of the house’s most experienced volunteers. I’d challenged her to give me a 45-minute tour of her choice. She really wanted me to tell her what I was most interested to hear, but I wouldn’t: I wanted her unbiased perception of which were the most “important” bits of her encyclopedic knowledge of Chawton and the surrounding area to share, given the 45-minute time limit.

(I always recommend that 45 minutes is the absolute maximum for a guided tour. In fact, I suggest that half an hour is what people should work to. People who want more will stay behind to chat, but there is some evidence from the National Trust’s monitoring of visitors for conservation purposes that the average dwell time in a house, whatever its size, is about 45 minutes.)

In the end she gave me what I’d call an architectural tour of the house, pointing out how the thick exterior walls of the original manor had become interior walls after Richard Knight’s extensions. It was great, and reminded me about some of the things I’d forgotten about being a tour guide that make guided tours (with the right guide) so entertaining.

I’ve always said that guided tours often offer the best historic house experience. A good paid or volunteer guide can weave a compelling story as s/he escorts you around the house. He or she can reveal things you might otherwise have missed. They can respond to your interests, and level of expertise, to give you a tailored experience. But Jane reminded me how they can transform the place, by pointing out those thick walls, or turning over a framed note hanging on the wall to reveal the ancient deeds from which the paper had been recycled. A good guide turns their audience into detectives – rather than simply telling them how Montague Knight installed a safe into what had once been an old garderobe chute, they help their audience work it out for themselves – a moment of insight, that emotional trigger where everything that has come before “clicks into place and reveals the shape of the whole”, as Tynan Sylvester puts it.

Of course, Jane’s tour also demonstrated that the VERY best historic house experience would be to have the guide all to yourself. Not everyone on a larger tour (and there were a couple running yesterday that we bumped into) could have lifted the framed note from the wall to read the reverse. As I hung it back on the hook, I had conservation alarm bells ringing in my head. Every handling, every movement of this glass-framed note (which Montague Knight had hidden beneath the floor for future generations to find) puts it at risk. The more people given the opportunity I had, the greater the chance that it might be damaged.

Not everyone can do what I did, arrange a personal tour at a time of my convenience after an email introduction from the Director. For those other tour groups we met, the guided tour experience gets diluted, less personal, less tailored to each individual’s interests.

The technological approach I’m investigating might be able to address some of the personalisation challenges, but can it ever offer the magical moments of insight that Jane offered me?

Representing affect and affordances

Game Affects stripped

Just a short post today, to share what I spent too much of yesterday doing. You may recall a previous post wherein I was struggling to represent all the different emotional models I’d been reading about in my literature review.

A presentation I’m writing for a conference in a couple of weeks gave me the opportunity to have another go, and (importantly for me) make it look a good deal prettier. By about 11.30 last night, I felt that at last I was getting somewhere. So this post simply shares my work in progress, in the picture above.

The central donut represents Panksepp’s work; then, moving out from that, Lazarro’s adaptation of Ekman, Sylvester’s “triggers”, and then, floating around the outside, the motivational affordances listed by Hamari et al.

It’s not “finished”. I want to return to Lazarro’s work and reassess it, because I found myself editing out a lot of her emotional responses when I was putting this together. I’m also painfully aware that a trigger like “Music” may elicit all sorts of emotional responses, and it sits perhaps uncomfortably linked to Panksepp’s PLAY core emotion (though I justified my decision by saying to myself that surely music IS play).

Now I’d better get back to my to-do list.

What I meant to say was…

Back at the University for the second day of PGRAS, the post-graduate archaeology symposium which I spoke at yesterday. My talk didn’t go brilliantly well. Despite my preparation last weekend, producing a script as well as my slide deck, I went off-script about a third of the way through and didn’t get back on it, so a lot of what I had meant to say went unsaid. I often find this when I script myself: it seems I stick more closely to what I plan to say when I only use bullet points and ad-lib around those. When it’s a full script, something in my mind rebels and I end up saying almost nothing that’s in the script.

So, here’s what I meant to say:

  1. This is a session about storytelling. So I’m going to tell you a story, and like all good stories, it’s going to have a beginning, a middle and an end. Given the audience, I feel I must warn you – I can’t promise that this will have much archaeology in it. But I have included one piece, so keep an eye out for it.
  2. Last time I was speaking in front of this forum, I explained that I was researching what cultural heritage interpretation might learn from digital games. Those of you who were here may remember that I was interested in eight “emotional triggers” (adapted from (Sylvester, 2013)) that engage players in games. You can ask me about these four afterwards. Right now I’m interested in these four, where I think cultural heritage may have more to learn from games.
    1. Generally we don’t like people Acquiring stuff from cultural heritage sites. But actually the “Can you spot…?” type sheets that heritage sites have for decades given to bored children are using the acquisition trigger.
    2. Challenge is an interesting one: many games are at their best when the degree of challenge matches the player’s ability and they get into “flow”, but seriously, how much challenge are cultural heritage visitors looking for on a day out? We’ll briefly return to this in a while.
    3. Here’s a tip from me: if you have any musically minded mates looking for a PhD subject, then the world of music and cultural heritage interpretation is an open field. There is nothing published. Zero. Nada. Having done my literature review, it’s what I’d be studying, if I could play, or … er … tell the difference between notes, or even keep a rhythm.
    4. But I can’t, so storytelling is the focus of my study.
  3. Before we move on to that, I’d like to pause for a small digression. Those of you who are still listening to me – take a moment to look around the audience. No, I don’t want you to point anybody out. I don’t want to shame anybody. But just put your hand up if you can see anyone who isn’t looking at me, but rather looking at their mobile device.
    That’s OK. I know I can be boring. But it’s a demonstration of the secret power of mobile devices. They are teleportation machines, which can transport you away from the place you are physically in.
    And most cultural heritage visitors don’t want that. They have come to our places (they may even have used their phones to help transport them to this place – with on-line bookings or GPS route-finding) to be in the place.
    Of course, that doesn’t stop all sorts of people using mobile devices to “gamify” cultural heritage interpretation. This game at the National Maritime Museum is an example of one that adds new technology to the classic acquisition trigger. You go round the world, collecting crew and cargo from various ports. It adds the challenge trigger to the mix, because you can only SEE the ports if you look at the giant map through the screen’s interface.
  4. There’s a lot of research currently looking at interfaces for cultural heritage: (Reunanen et al., 2015) considered, for example, getting visitors to make swimming motions in front of a Kinect to navigate a simulated wreck site. But the more I read, and the longer I consider it, the more I’m of the opinion that there is an interface for cultural heritage that technologists are ignoring: (click) walking around, looking at stuff.
  5. Now, when it comes to storytelling, “walking around looking at stuff” is not without its problems. People like to choose their own routes around cultural heritage venues, avoid crowds, and look only at some of the objects.
  6. What that means is that sites often tell their most emotionally engaging story, the beginning, (click) middle (click) and end (click), towards the beginning of the visit, with a multimedia experience in the visitor centre or, if they can’t afford that, an introductory talk. Then, everything else (click). Which is what game designers call a branching narrative. And what Aylett (Aylett, 2000, Louchart, 2003) calls the “Narrative Paradox … how to reconcile the needs of the user who is now potentially a participant rather than a spectator with the idea of narrative coherence — that for an experience to count as a story it must have some kind of satisfying structure.” (Aylett, 2000). We can learn from how games address this paradox.
  7. Imagine, then, a site where the visitor’s movements are tracked, and the interpretation adapts to what they have experienced already. Museum and heritage sites consist of both physical and ephemeral narrative atoms (“natoms”, after (Hargood, 2011)). Persistent natoms include the objects of the collection but also the spaces themselves, either because of their historic nature, or their configuration in relation to other spaces (Hillier, 1996). Ephemeral natoms are media that can be delivered to the visitor responsively including, but not exclusive to, lighting effects, sound and music, audiovisual material, and text.
    All of these natoms comprise the “curated content” of any exhibition or presentation. The physical natoms are “always on,” but the others need not be (hence the “ephemeral” designation). The idea of the responsive environment would be to eventually replace text panels and labels with e-Ink panels which can deliver text natoms specific to the needs of the visitor. Similarly, loudspeakers need not play music or sound effects on a loop, but rather deliver the most appropriate piece of music for the majority of visitors within range.
    To reduce the impact of the narrative paradox (Louchart, 2003), the natoms will be tagged as either Satellites (which can be accessed in any order) or Kernels, which must be presented in a particular order (Shires and Cohan, 1988). Defining which natoms are satellites or kernels becomes the authorial role of the curator. [There’s a rough sketch of how this tagging might work at the end of this post.]
    Here comes the gratuitous piece of archaeology – does this diagram remind you of anything? (click) But in fact it seems somehow appropriate. Because this is the Apotheosis moment. I want to make the visitor the “God” of his or her own story. Not quite putting them in the place of the protagonist, whose choices were made years ago, but both watching and controlling the story as it develops.
  8. I’m no technologist, so my plan is to “wizard of Oz” a trial run, using people following visitor groups around rather than a fancy computer program. My intention is to test how people respond to being followed, and how such a responsive environment would negotiate the conflicting story needs of different visitor groups sharing the same space. I have a venue: the Director of Chawton House has promised me a couple of weeks’ worth of visitors to play with next year. This is where I am so far, having spent a couple of weeks breaking down the place’s stories into Natoms.
    There’s a lot more to do, but next year I hope to tell you how Chawton’s visitors were able to explore the place entirely freely (click) and still manage to be told an engaging story from (click) beginning, through (click click) middle (click) and end.
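(Not part of the talk, but while I’m here: the natom tagging in point 7 is easier for me to think about as a sketch. This is my own toy illustration of the idea, not a finished design – the field names and the selection rule are guesses at this stage.)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Natom:
    """A narrative atom: physical (a space, an object) or ephemeral (text, sound, light)."""
    name: str
    ephemeral: bool              # can be switched on and off, unlike a room or an object
    kernel_order: Optional[int]  # position in the kernel sequence, or None for a satellite
    locations: list[str]         # spaces where this natom can sensibly be delivered

def deliverable(natoms, visitor_location, kernels_told):
    """Ephemeral natoms that could be served to a visitor right now.

    Satellites are available whenever the visitor is in the right place;
    a kernel is only offered once every kernel before it has been told,
    so the spine of the story stays in order however the visitor wanders.
    """
    options = []
    for n in natoms:
        if not n.ephemeral or visitor_location not in n.locations:
            continue
        if n.kernel_order is None or n.kernel_order == kernels_told + 1:
            options.append(n)
    return options
```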

Chawton

A dreary day to photograph a fine building, but the meeting made up for the weather!

Just a quick note today to reflect on the meeting I had this morning with Gillian Dow, Executive Director of Chawton House Library. This place has been preying on my thoughts since I visited for the last Sound Heritage workshop. In fact, my friend Jane and her colleague Hilary had suggested last year that it might be the perfect place to try out my Responsive Environment ideas, and my visit for Sound Heritage made me think more and more that they were right.

  • The place has many interesting stories, but ones that can conflict with each other. Do people want to know about its centuries as a residence for the Knight family, its connections with Austen, and/or its modern-day research into early female writers?
  • It’s a place that hasn’t been open to the public long (this year is its first full season welcoming days-out visitors) and is still finding its voice.
  • It’s relatively free of “stuff” and has modern display systems (vitrines and hanging rails), which means that creating the experience should not be too disruptive.
  • It has pervasive wi-fi (the library’s founding patron, Sandy Lerner, co-founded Cisco Systems), which will make the experiment a lot easier and cheaper to run, even though I’ve decided to Wizard of Oz it.

So today I explained my ideas to Gillian and, I’m pleased to say, she liked them. We’ve provisionally agreed to do something in the early part of 2017, before that year’s major exhibition is installed. I brought away a floor plan of the house, and I have just this moment received a copy of the draft guidebook, so I can start breaking the story into “natoms”. It looks very much like it’s all systems go!

I have to say I’m very excited.

(But right now, I’m meant to be taking the boy camping, so I’ll leave it there…)