Plymouth Plantation (David L. Ryan/Globe Staff/file)

In my paper last week, struggling to find a way to describe the environmental “sense (or spirit) of place” that may drive emotional engagement in (games and) cultural heritage environments, I chose to use the word “Presence.” I first came across it in Pinchbeck’s writing, but I was nervous about using it until, by coincidence, Erik Champion also used the word when he commented on this post.

There are two reasons why I hadn’t used it before, and why I’m still unsure about it: Firstly, presence is a term used to talk about virtual environments (like games) but I’m looking to apply what I learn to real environments (cultural heritage). Secondly, I worry (as I mentioned in my presentation) that presence may encompass all (or most) of what I’m currently calling “emotional drivers,” rather than being one driver.

But there needed to be a word there, so I flippantly settled on presence, and promised myself I’d investigate it later.

Later starts now, and Erik handily left me some links in a comment on this post. I’ve only been able to read his own co-authored “Evaluating presence in cultural heritage projects” (Pujol, P. & Champion, E., 2011) so far; the other two come from a journal Southampton doesn’t subscribe to, so I’m hoping I can arrange an inter-library loan. However, this paper works quite well as a primer on what Erik calls “Cultural Presence.”

Let’s kick off with how they start their paper, with a little bit of history:

Presence originates from the term ‘telepresence’, made famous by the computer scientist Marvin Minsky in a 1980 paper of the same name (Minsky 1980). From around 1991 (the date of the first issue of the MIT journal Presence: Teleoperators and Virtual Environments), presence has been typically defined as the capacity of the technology to make the user feel transported into a remote place and be able to efficiently interact with it.

Of course, to be honest, it looks like it started out as, as my science adviser used to say in Civilization, “a mainly military technology,” designed to help drone pilots in Colorado feel transported to remote mountains on the Afghan/Pakistan border and “efficiently interact” with jihadi hideouts. But it soon found civilian uses.

Archaeologists have long been using computer modelling to help visualise sites, “rebuilding” what might today be ruins, or virtually “restoring” buildings that have seen centuries of modification. The primary purpose of such models has been experimental, helping the researcher compare hypotheses. Of course, once these models have been made, people want to share them, with each other and with the public. Sometimes they are shared incredibly badly, even now. One of the presenters at the York Digital Heritage conference seemed surprised that a computer model that was an outcome of a research project was, when displayed in a shop window, totally ignored by the passers-by. But when it’s done well, it’s done with the intention of transporting the audience to a particular place and time, and that’s where presence comes in. As Pujol and Champion say:

Presence is typically seen in academic research as the aim of virtual reality environments. Since ‘virtual heritage’ is the name VR applications are given when used for the dissemination of cultural heritage, it logically follows that in VR applied to cultural heritage, a meaningful sense of presence is also the intended outcome.

However, they follow that up with a warning that cultural heritage interpretation isn’t just about how buildings look. They argue that to think of virtual heritage as simply the re-creation of buildings or other “tangible” artifacts in the digital domain is to ignore the importance of human interaction, ritual, communication, symbolism and representation, and all the other intangibles that are part of culture. They also quote Stone and Ojika (2000, Virtual heritage: what next?. IEEE Multimedia, 7 (2), 73–74):

[Virtual heritage is] . . . the use of computer-based interactive technologies to record, preserve, or recreate artefacts, sites and actors of historic, artistic, religious, or cultural significance and to deliver the results openly to a global audience in such a way as to provide formative educational experiences through electronic manipulations of time and space.

But here is where I begin to worry about the idea of presence. Because I’m not convinced that “the use of computer-based interactive technologies … to provide formative educational experiences through electronic manipulations of time and space” needs to be about immersive virtualisation.

You see, I very much enjoy the model of “Infinite Possibility” set out by Pine and Korn (2011) in the book of the same name. Their claim is that digital technology offers so much more than Virtual (or even Augmented) Reality. We should, they argue, be thinking in terms of all the possible combinations of the variables of Reality (Time, Space and Matter) and the equivalent variables of the digital Virtuality (for want of better words: no-time, no-space and no-matter). The diegesis of a computer game, or VR model, is a function of these three no-variables, as it is made of bits of computer code. But run that virtual world through a pair of VR goggles, or even a humble TomTom navigator, and you suddenly have Augmented Reality (time and space, but no-matter). Conversely, use a Wii controller to manipulate a computer game and suddenly you find yourself in the realm of no-time and no-space, but with matter – what Pine and Korn call Augmented Virtuality.

Superimpose a different time (or no-time) on space and matter, and you have what they call warped reality:

Such reality-based time travel happens whenever experiences simulate another time… such as Renaissance Fairs and living history museums (Plimouth Plantation, Colonial Williamsburg, and the like) or transport us … even into the future (albeit a fictional future) such as at, yes, Star Trek conventions.

In these warped realities, a cultural heritage audience is able to participate in the construction of realities that capture objects and processes of scientific, social or spiritual value [and presents them] as accurately, authentically, and engagingly as possible. Places like Plimouth Plantation share their work in a sensitive, safe and durable manner to as wide and long-term an audience as possible, to provide an effective and inspirational learning environment that best communicates the intended pedagogical aims. Every italicised statement here actually comes from Pujol and Champion’s summary of what every good Virtual Heritage project should be. And yet Plimouth Plantation and its like are currently entirely analogue creations, to which no-one would consider applying the word “virtual.” I think, and Pine and Korn imply, that digital technology has the potential to greatly enhance warped reality experiences, without making them virtual. I’d also argue that cultural “presence” occurs in these warped reality spaces, and yet … and yet, can it only apply to virtual worlds?
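For what it’s worth, Pine and Korn’s three variables can be laid out as a quick table in code. This is purely an illustrative sketch: only the realm names mentioned above are filled in, and everything else (variable names, the placeholder label) is my own invention.

```python
# Enumerate the eight combinations of Pine and Korn's variables:
# Time/no-time, Space/no-space, Matter/no-matter.
from itertools import product

# Only the realms named in the text above; the remaining
# combinations are left unnamed rather than guessed at.
named_realms = {
    (True, True, True): "Reality",
    (False, False, False): "Virtuality",
    (True, True, False): "Augmented Reality",
    (False, False, True): "Augmented Virtuality",
    (False, True, True): "Warped Reality",
}

for time, space, matter in product([True, False], repeat=3):
    name = named_realms.get((time, space, matter), "(unnamed here)")
    print(f"time={time!s:<5} space={space!s:<5} matter={matter!s:<5} -> {name}")
```

Seeing the grid laid out like this makes the point that Warped Reality (no-time, but real space and matter) is just one cell among eight, and a living history museum sits in it without a single pixel being rendered.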

Handily, Pujol and Champion have a crack at unpicking the definitions of presence for me. Starting with the idea that the ideal is a sense of being there, or blanking out the digital mediation of screen and controller, they touch upon immersion as a product of field of view and optical resolution. They also briefly summarize the idea that the human component of the system is likely to respond to the affordances offered by the VR according to their interests, if the virtual component can in turn respond in a realistic way. They touch upon co-presence (sharing the VR with other users) before arguing that social presence (interacting with other users and virtual agents) is a vitally important idea in the “potential [their emphasis] convergence between the presence and cultural heritage fields.”

So already we can see that the concept of presence is a very complex and even ambiguous construct: there are several definitions based on disparate theories, which focus on specific aspects or give different names to the same concept, partially overlapping or even contradicting each other. Therefore, the conventional notion of presence as the sensation of ‘being there’ is a highly simplified way of expressing an internal perception of the environment and of ourselves in relation to it. A more comprehensive explanation would be that the sense of presence results from the interaction of various factors. These factors depend both on the system (immersivity, visual accuracy, real-time physical and social interactivity, invisibility of devices, consistency of the content) and on the participant (perception, attention, empathy, engagement, meaningfulness or relevance of the content, control, suspension of disbelief).

They go on to try and define “cultural presence,” citing an earlier work by Champion (2005, Cultural presence. In: S. Dasgupta, ed. Encyclopedia of virtual communities and technologies) to suggest that “cultural presence corresponds to the feeling that people from a specific culture occupy or have occupied a virtual environment and transformed it into a culturally meaningful place.” This is something I recognise from what the National Trust tries to do in the places it looks after. But, they say, such “environments represent a palimpsest in which past social interactions are layered and carved into the fabric of the environment. Although visitors can see ‘culture’, they cannot participate in it, either due to a lack of culturally constrained creative understanding or because the originators have long since passed away.”

So, is the previously mentioned “social presence” the key? Possibly not. Pujol and Champion briefly look at chatrooms, virtual communities and video games. The social interaction in chatrooms is fleeting and non-permanent. Virtual communities on the other hand, “do establish rules and elements of identity; nonetheless, their limited virtuality and transient ubiquity ironically prevents them from owning a sense of cultural place, where identity is expressed and recognized through dynamic processes that are materially situated.” (Hmmm Second Lifers may disagree – see my forthcoming report from Decoding the Digital). Of the three, they think, games have the most potential for social presence, because of their interactivity and exploration, virtual agents and (sometimes) co-operative play. Game mechanics though, can sometimes get in the way of cultural learning.

They summarise their discussion of cultural presence with the following:

So cultural presence in the cultural heritage field is not limited to the reconstruction of a place; ideally it would also encourage empathy, interaction and collaboration to enhance awareness and understanding of past or foreign cultures. So for cultural presence, ‘presence’ is the means and ‘culture’ is the goal. Unlike the test environments of typical presence research, virtual heritage projects should not aim at the fidelity of representation of the world in general, but towards a cultural context, containing not only objects and active agents but also the inter-relationship of their situated beliefs and values. Hence, presence becomes a ‘being – not only physically but also socially, culturally – there and then’.

Which is interesting, because in that whole paragraph virtual reality isn’t once mentioned. Is it taken as read? Or is it not required?

Evaluating emotional triggers

The organisation I work for asks a question of its visitors, along the lines of “how strongly do you agree or disagree with the statement ‘this place had a real emotional impact on me’?” We can see that the more people agree with that statement (even if only a minority strongly agree), the more likely people are to have a very enjoyable day, and recommend a visit to friends and relations. But we don’t really measure what drives that emotional response.

I asked a similar question when evaluating Ghosts in the Garden, and I experienced a familiar sense of frustration about how little insight it allowed me. So I plan to work out a set of questions that might be more informative about what drives an emotional connection with a place.

I’m going to start by looking in more detail at this thesis by Mohd Kamal Othman, which I first heard about at the CDH conference in July. Othman was evaluating mobile experiences, and in doing so created a Museum Experience Scale and a Church Experience Scale (one of the projects he was evaluating was a mobile guide for churches).

I’m particularly interested in the questions the evaluation asked to measure what he termed “emotional connection”:

  • The exhibition enabled me to reminisce about my past
  • My sense of being in the exhibition was stronger than my sense of being in the real world
  • I was overwhelmed with the aesthetic/beauty aspect of the exhibits
  • I wanted to own exhibits like those that I saw in the exhibition
  • I felt connected with the exhibits
  • I like text-based information as supporting material at museum exhibitions
  • I felt spiritually involved with the church and its features
  • I felt connected with the church and its features
  • I felt emotionally involved with the church and its features
  • I felt moved in the church
  • The church had a spiritual atmosphere
  • My sense of being in the church was stronger than my sense of being in the rest of the world

Now it strikes me that the church-specific questions are rather more generic than the ones created for exhibitions, but I’ve not yet read about the reasoning behind them. Some of the questions, though (touching on presence, spectacle and acquisition, for example), resonate with what I’ve been discovering about emotional triggers in games. I feel there’s something here to build on.

(Now, I better get back to writing that presentation for Decoding the Digital)

Is this an insight into the Narrative Paradox?

I’ve been analysing the data collected for my evaluation of Ghosts in the Garden. Yesterday I sent my preliminary observations to the guys who created it, and by the end of today I hope to have completed the first draft of my full report. If everyone approves I’ll share it all here in future.

But I did want to share, and possibly sense-check, my key bit of insight. We asked participants to rate how strongly they agreed with a number of statements about the experience, using a seven point Likert scale. So here’s a sample of the sort of response we got to a simple statement, “The Ghosts in the Garden experience added to my enjoyment of the visit today”:

A simple bar chart, showing that most visitors strongly agreed that the Ghosts in the Garden experience added to the enjoyment of their visit

Which is very nice and positive. But I’m looking for emotional engagement, and the responses to the statement “The story I heard had a real emotional impact on me” were less positive:

Most users were non-committal about emotional engagement, and some did not agree that the story had any emotional impact.

Now, to be honest I’m not sure I’m asking the right question here. I used this wording only because we ask the question in a similar way at the National Trust where I work, and this being my first bit of research I wanted something that I could easily compare these data with. (For comparison, some of the National Trust’s most emotionally engaging places get something over 20% of visitors ticking the top (number seven) box; in this sample, only about 8% did.)

Asking people to rate their emotional response is, according to many, a futile task, and there are likely better ways to measure it, but allow me to indulge myself for a moment. If I can assume that the story was indeed not as emotionally engaging as it might be, I might ask myself “why not?”

Remember, Ghosts in the Garden has been described by its creators as a “choose-your-own-adventure style story.” When you pick up the “listening device” you make your first choice – balloons or fireworks – and then, at every point, you are offered a choice of two locations to explore, and the narration explains that the choices you make will affect the outcome of the story. And yet when we asked users whether they agreed that the choices they made changed the story, quite a bit of skepticism was evident:

A number of people agreed that the choices they made changed the story, but more were less convinced.

So my next overriding question is, did confidence that they were changing the story affect users’ emotional engagement? I think I can do a cut of the data to find that out, but the sample size is too small to be really confident in what it might show. Given what I’ve been uncovering about the story structures of the video games I’ve played, though, I’m beginning to wonder if there’s any value to this sort of interactivity. For me, Skyrim, with its wider story structure, has been a lot less emotionally involving than either Red Dead Redemption or Dear Esther, both of which take the player towards one single, inevitable ending. And then there’s the Narrative Paradox.
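Out of curiosity, the “cut of the data” described above might look something like this in Python. To be clear, the numbers, the 5-or-above threshold and the variable names are all invented for illustration; these are not the Ghosts in the Garden results.

```python
# Hypothetical sketch: split respondents by whether they believed their
# choices changed the story, then compare emotional-impact scores.
responses = [
    # (choices_changed_story, emotional_impact), both on a 1-7 Likert scale
    (7, 5), (6, 4), (3, 3), (2, 4), (5, 6), (1, 2), (4, 4), (6, 5),
]

# "Believers" agreed (5+) that their choices changed the story.
believers = [impact for agency, impact in responses if agency >= 5]
sceptics = [impact for agency, impact in responses if agency < 5]

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

print(f"Believers (n={len(believers)}): mean impact {mean(believers):.2f}")
print(f"Sceptics  (n={len(sceptics)}): mean impact {mean(sceptics):.2f}")
```

Even with real data, a gap between those two means would only be suggestive, not conclusive, given the small sample, but it would at least indicate whether belief in agency and emotional engagement move together.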

I wonder whether, rather than trying to construct a number of possible endings, Splash and Ripple (the creators of Ghosts in the Garden) might have better used their time, and the interactive nature of the device, to offer visitors a choice of points-of-view on one single story. And if they had done so, would that have made the narrative stronger, and more emotionally compelling?

Holiday Reamde

Last week, for my holiday in Cornwall, I took some “hard” reading with me, but I was determined to have some holiday reading too. Having mentioned Neal Stephenson in a previous post, I was reminded that I hadn’t ever picked up one of his more recent books, Reamde. Shopping around, it was pretty cheap on Kindle so I downloaded it, and took it with me.

I wasn’t expecting it to immerse me back in the world of games and cultural heritage; in fact, I was hoping to be taken on some flight of scientific fantasy. But as Mick Jagger once sang, “you can’t always get what you want…”

IF THERE WERE going to be K’Sheteriae and Dwinn, and if Skeletor and Don Donald and their acolytes were going to clog the publishing industry’s distribution channels with works of fiction detailing their historical exploits going back thousands of years, then it was necessary for those two races to be distinct in what archaeologists would call their material culture: their clothing, architecture, decorative arts, and so on. Accordingly Corporation 9592 had hired artists and architects and musicians and costume designers to create those material cultures consistent with the “bible” of T’Rain as laid down by Skeletor and Don Donald.

 Reamde page 46

Reamde follows the adventures of one Richard Forthrast, co-founder of a company that produces a wildly successful MMORPG called T’Rain. The game is based on (and portrayed as a competitor to) World of Warcraft, but the attention to detail in material culture is reminiscent of Skyrim, which has of course inspired more than one “ludic archaeologist“.

I got quite excited as the opening chapters progressed. The last Stephenson book I read, Anathem, taught me a lot about mathematics and quantum theory, and I thought he might blow my mind about game design too. Sadly (though entertainingly) the novel became an extended transcontinental shootout involving various members of the Forthrast family, a couple of Chinese teenagers, a Hungarian hacker, a Russian “security consultant”, a British MI6 agent and a Welsh Muslim terrorist.

The references to the game are quite fun and experimental though. They do suggest that the author is a narrativist:

Because Corporation 9592, at bottom, didn’t make anything in the way that a steel mill did. And it didn’t even really sell anything in the sense that, say, did. It just extracted cash flow from the players’ desire to own virtual goods that could confer status on their fictional characters as they ran around T’Rain acting out greater or lesser parts in a story. And they all suspected, though they couldn’t really prove, that a good story was as foundational to that business as, say, a blast furnace was to a steel mill.

Reamde page 209

Which is why this fictional company has a department called Narrative Dynamics. But his leading character does think ludically too: the novel recounts how they come up with the idea that the core “Medieval Armed Combat” mechanic could be used to help with monotonous real-world jobs. This is like an idea my wife had mentioned a couple of years back. The example in the book was airport security, but it made me laugh when I saw the story about Fraxinus.

The other thing that I liked about T’Rain (and something that I miss in Skyrim) was the vassal system – players were not simply lone adventurers, but could recruit (or be recruited into) a gang, warband, household or army, in something like a pyramid selling scheme, all of which feels like a more realistic medieval-style world than one in which everyone is equal. The novel recounts how this eventually divides the players into two factions: not the artificial Good and Evil factions invented by the game’s creators, but the Forces of Brightness (Manga-inspired players who dressed their characters in lurid colours) and the Earthtone Coalition (more Eurocentric gamers who enjoyed more Tolkienesque fantasy). These two factions of course started to produce material cultures that built on the created archaeology of the world, but which were something entirely new and unplanned.

A fun read, even if not quite the escape I was hoping for.

Non-linear sound in video games

The week before last, I wrote about Annabel Cohen‘s paper on music in video games, and mentioned Karen Collins. Collins has written a great deal on games and sound. Her 2007 paper, An Introduction to the Participatory and Non-Linear Aspects of Video Games Audio, from the book Essays on Sound and Vision, seemed a good place to start.

Collins begins by suggesting the subtle differences between the terms “interactive,” “adaptive” and “dynamic”. In her useful set of distinctions, “interactive” sounds or music are those that respond to a particular action from the player, and each time the player repeats the action the sound is exactly the same. Citing Whitmore (2003), she argues that “adaptive” sounds and music are those that respond, not to the actions of the player, but rather to changes occurring in the game (or the game’s world) itself. So “an example is Super Mario Brothers, where the music plays at a steady tempo until the time begins to run out, at which point the tempo doubles.” She goes on to describe “dynamic” audio as being interactive and/or adaptive.
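Collins’s interactive/adaptive distinction can be sketched in a few lines of Python, using her Super Mario Brothers example. The function names and numbers here are my own assumptions for illustration, not anything from her paper or the game’s actual code.

```python
# Constants are illustrative, not the game's real values.
BASE_TEMPO_BPM = 100
LOW_TIME_THRESHOLD = 100  # in-game time units remaining

def jump_sound(player_pressed_jump):
    """Interactive: the same player action always triggers the same sound."""
    return "jump.wav" if player_pressed_jump else None

def music_tempo(time_remaining):
    """Adaptive: responds to game state (the countdown clock), not player input."""
    if time_remaining <= LOW_TIME_THRESHOLD:
        return BASE_TEMPO_BPM * 2  # time running out: tempo doubles
    return BASE_TEMPO_BPM          # plenty of time: steady tempo

print(music_tempo(300))
print(music_tempo(99))
```

“Dynamic” audio, in Collins’s usage, would then be any system combining functions of both kinds.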

She also explores the various uses for sound and music in games. She has read Cohen, obviously, and so her list is very similar. She quotes Cohen in relation to masking real-world environmental distractions, and in the distinction between the mood-inducing and communicative uses of music. She points out, though, that the non-linear nature of game sound means that it’s more difficult to predict the emotional effects of music (and other sounds). In film, she states, it’s possible for sounds to have unintended emotional consequences – a director wanting to inform the audience that there is a dog nearby will tell the sound designer to include a dog barking out of shot, but the audience will bring their own additional meaning to that sound, based on their previous experiences (which she calls supplementary connotation). But in games, she argues, where sounds are triggered and combined in relatively unpredictable sequences by player actions, even more additional meanings are possible.

She also discusses how music can be used to direct the player’s attention, or to help the player “to identify their whereabouts, in a narrative and in the game.” She points out how “a crucial semiotic role of sound in games is the preparatory functions that it serves, for instance to alert the player to an upcoming event.”

This is something that was made very clear while I played both Red Dead Redemption and Skyrim. Red Dead Redemption would often alert me to an upcoming threat by weaving a more urgent, oppressive tune into the background music. Skyrim took a different approach: its music doesn’t work as hard, but while my cat-creature was sneaking around underground tunnel systems, I was often alerted to potential threats by my enemies muttering to themselves as I approached blind corners. Collins points out that these sorts of cues have occasioned a changing listening style, from passive to active listening, among gamers.

Sometimes though, as Collins points out, games are created that put musical choice directly into the players’ hands. The Grand Theft Auto series gives the player a choice of in-car radio stations to listen to, so that their particular tastes are better catered for. Though they weren’t around at the time of Collins’s writing, many iOS and other mobile games have a feature by which the player can turn off game music, and even other game sound effects, if they so choose, to listen to their own library of music stored on the device. She even cites the game Vib Ribbon, for the Sony PlayStation, which allows the player to load their own music from CDs, and the music then changes the gameplay according to the structure of the music the player has loaded.

Collins also discusses the challenges that composers face when writing for games. For a start, Collins points out that “in many games it is unlikely that the player will hear the entire song but instead may hear the first opening segment repeatedly, particularly as they try to learn a new level.” (Though she also points out that many games designers are learning to include what one composer calls a “bored now switch.” After a number of repeats of the same loop of music, the sound fades to silence, which both informs the player that they should have completed this section by now, and stops them getting annoyed and frustrated by the repetition.)
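The “bored now switch” described above might be sketched like this; the repeat threshold, fade length and function names are all assumptions for illustration, not anything from an actual game engine.

```python
# After MAX_REPEATS plays of the same loop, fade the music out gradually,
# reaching silence after FADE_STEPS further repeats.
MAX_REPEATS = 4
FADE_STEPS = 5

def loop_volume(repeat_count):
    """Return playback volume (0.0-1.0) for the nth repeat of a music loop."""
    if repeat_count < MAX_REPEATS:
        return 1.0  # play at full volume while the player is still on pace
    # Fade linearly over FADE_STEPS further repeats, then hold at silence.
    fade_progress = (repeat_count - MAX_REPEATS) / FADE_STEPS
    return max(0.0, 1.0 - fade_progress)

for n in range(10):
    print(n, loop_volume(n))
```

The gradual fade matters: an abrupt cut to silence would itself read as a game event, whereas a slow fade simply lets the repetition stop drawing attention to itself.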

The other main problem is that of transition between different loops (or cues, as she calls them). “Early games tended towards direct splicing and abrupt cutting between cues, though this can feel very jarring on the player.” Even cross-fading two tracks can feel abrupt if it has to be done quickly enough to keep up with gameplay. So composers have started to write “hundreds of cue fragments for a game, to reduce transition time and to enhance flexibility in music.” This is the approach taken in Red Dead Redemption, where, as I move my character around the landscape, individual loops fade in and out according to where I am and what is happening, but layered together they feel (most of the time) like one cohesive bit of music.
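The layered-cue approach could be sketched as a simple mapping from game state to a target mix of fragments. The layer names, volumes and ducking rules here are entirely hypothetical, not taken from Red Dead Redemption or any real audio middleware.

```python
# Map the current game state to target volumes for a set of short,
# loopable cue fragments that are mixed together into one piece.
def layer_mix(location, in_combat, on_horseback):
    """Return target volume (0.0-1.0) per cue fragment for this game state."""
    mix = {"ambient_drone": 0.6}          # always-on musical bed
    if location == "desert":
        mix["desert_guitar"] = 0.8        # location-specific colour
    if on_horseback:
        mix["riding_percussion"] = 0.7    # movement layer
    if in_combat:
        mix["combat_strings"] = 1.0       # urgent threat cue
        mix["ambient_drone"] = 0.2        # duck the bed under the threat cue
    return mix

print(layer_mix("desert", in_combat=False, on_horseback=True))
```

Because each layer only fades towards its target rather than cutting, a change of state (a threat appearing, say) alters the texture of the music without an audible splice.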

Multiplayer games present another problem. “If a game is designed to change cues when a player’s health score reaches a certain critical level, what happens when there are two players, and one has full health and the other is critical?” she asks.

There are rewards too: get the music right, and games publishers can find an additional source of income. She quotes a survey which discovered that “40% of hard-core gamers bought the CD after hearing a song they liked in a video game.” (Ahem, guilty as charged m’lud, even though I’m not a “hard-core gamer.”)

Just before she completes the paper, she has some thoughts on the perception of time too. I’ve noticed a sort of “movie-time” effect in Skyrim, which presents a challenge for my real-world cultural spaces. So I think I might need to look at that in more detail.

Musical interlude

I’ve been on holiday (and heritage free, spending my time bodyboarding, cycling, sea-kayaking and, lest anyone thinks that all sounds too healthy, over-eating in Cornwall) so this blog has been quiet for a week.

But while I was away, a colleague shared a link to a very interesting blog post about London museums creating Spotify playlists to accompany exhibitions.

The writer is conflicted about whether these should be listened to while actually at the exhibition, or before or after a visit. But there’s something interesting here about using music to set the mood, either prior to or at a visit, or when reflecting upon it afterwards.

Music in new media

I’ve been thinking about music again, and staring into the pit of unknown unknowns that is my non-existent understanding of music, except as a casual listener. I know music affects me, and I’ve seen how important an emotional trigger it is in the games I’ve been playing for my studies, but I don’t know how or why, and right now I’m wishing I had a degree in Cognitive Psychology to help me understand. (The certificate would sit alongside the degrees in Computer Science, English and History that I don’t have.)

It’s such a huge subject, but I came across this paper by Annabel Cohen, which, though quite old (1998), I’ve found to be a useful primer. It also led me to the Gamessound website of Dr Karen Collins, Canada Research Chair in Interactive Audio at the Games Institute, the University of Waterloo, Ontario, who has written lots of juicy papers which start where Cohen left off and are (the clue’s in the URL) a lot more games-specific.

Let’s start with Cohen though, a sort of new media music 101. She begins from the notion that “music activates independent brain functions that are separable from verbal and visual domains,” and goes on to define eight functions that music has in new media:

  1. Masking – Just as music was played in the first movie theaters, partly to mask the sound of the projector, so music in new media can be used to mask “distractions produced by the multimedia machinery (hum of disk drive, fan, motor etc) or sounds made by people, as multimedia often occurs in social or public environments.” Apparently lower tones mask higher ones, and listeners filter out incoherent sounds in preference for coherent (musical) sounds. Of course, the downside is that music can mask speech too, when that speech is part of the intended presentation.
  2. Provision of continuity – “Music is sound organised in time, and this organisation helps to connect disparate events in other domains. Thus a break in the music can signal a change in the narrative [I’m reminded of the songs in Red Dead Redemption here] or, conversely, continuous music signals the continuation of the current theme.”
  3. Direction of attention – Cohen has obviously done some experimental research on this function, broadly speaking, patterns in the music can correlate to patterns in the visuals, directing the attention of the user.
  4. Mood induction – (A quick aside here: check out this Mirex wiki page on mood tags for music.) I’ve written about this before, and it’s the most obvious function to me, but Cohen is careful to make a distinction between this and the next function, which is:
  5. Communication of meaning – Cohen says “It is important to distinguish between mood induction and communication of meaning by music. Mood induction changes how one is feeling while communication of meaning simply conveys information.” Yet, when she discusses communication of meaning, she uses examples of emotional meaning: “sadness is conveyed by slow pace, falling contour, low pitch and the minor mode.” I take from this that her nice distinction is between music that makes the user sad, and music that tells the user “this is a sad event” without changing the user’s mood. Hmmm … I’ll have to think about that.
  6. A cue for memory – This is another one that I’ve written about before. Music can trigger a user’s memories from a past event that’s totally unrelated to the new media presentation, if they’ve coincidentally heard the particular piece before, but the effect is more controllable with music especially written for the presentation. The musical term for this (from opera, arguably the first multimedia presentations) is leitmotiv. The power of the music to invoke memories or “prepare the mind for a type of cognitive activity” is well recognized in advertising and sonic brands such as those created for Intel and Nokia.
  7. Arousal and focal attention – “it is a simple fact that when there is music, more of the brain is active,” Cohen says (without reference). She goes on to argue that with more of the brain active, the user is better able to filter out the peripheries of the apparatus running a new media presentation, and concentrate on the diegesis of the presentation, what Pinchbeck calls presence. On the other hand, she admits that some think excess stimulation pulls focus away from central vision and towards the periphery.
  8. Aesthetics – Here we come to what my colleagues report is the biggest issue with using music in interpretation. Cohen says “music is an art form and its presence enhances every situation in much the same way that a beautiful environment enhances the experience of activities within it.” But she admits that aesthetics is subjective, and “music that is not appealing can disturb the user.” Not only that, but some individuals may find all background music difficult to cope with.

So that’s my new media music 101. Next time I’ll look at what Collins has to add.

Principles of digital economy?

A picture of Lev Manovich, which isn’t here at all but rather sitting on a UCLA server. Isn’t modularity wonderful?


I’m enjoying Lev Manovich’s The Language of New Media. I wrote about this before, describing his unconventional prologue which hooked me into buying the book in the first place.

Right now, though, I think it’s worth exploring his answer to that most fundamental question: what is “new media”? As he says, “the popular understanding of new media identifies it with the use of a computer for distribution and exhibition.” Such an understanding, he argues, leads to the somewhat absurd situation where a text or photograph is old media when printed in a book, but exactly the same text or image is somehow transformed into “new media” if distributed via the web or an e-book, or stored on CD-ROM.

Such a definition is too limiting, he argues. To understand new media as a mode of distribution makes it “only” as revolutionary as the printing press. To think of it as a mode of storage renders new media as incapable of transforming culture as the transition from shellac records to vinyl.

The introduction of the printing press affected only one stage of cultural communication – the distribution of media. Similarly, the introduction of photography affected only one type of communication – still images. In contrast, the computer media revolution affects all stages of communication, including acquisition, manipulation, storage, and distribution; it also affects all types of media – texts, still images, moving images, sound, and spatial constructions.

So, instead he sets out five “principles” by which we may know and understand new media. And I think they may have a broader application, to the digital economy as a whole. That said, Manovich is reluctant to use the word digital in this thesis “because this idea acts as an umbrella for three unrelated concepts – analog-to-digital conversion, a common representational code, and numerical representation.” I am, however, not so fussy. Besides, the word new stands for many more unrelated concepts, doesn’t it?

Anyhow, back to his principles. They are (in a carefully composed order, because after the first, each is a consequence of its predecessors):

  1. Numerical representation – We’re talking digital media here, and the clue is in the name (whatever Manovich may think). Whether created on a computer, or scanned or ripped from an analogue source, a new media object is a function of numbers. Which means that it can be described formally, and can be manipulated algorithmically.
  2. Modularity – As every digital object is a number (or series of numbers, or an algorithm), elements of every type (sounds, pictures, text, and so on) can be assembled into other objects. The most obvious example that Manovich uses is a web page like this one, which is only presented as a single object by our browsers, but is in fact made up from a piece of text I’ve written, pictures I saw on another site (and which are still stored on that site’s server; all I’ve done is point your browser at them) and other features created (for all I know, I didn’t put them there) by the WordPress software.
  3. Automation – As Manovich says, “The numerical coding of media (principle 1) and the modular structure of media object (principle 2) allow for the automation of many operations involved in media creation, manipulation and access. Thus human intentionality can be removed from the creative process, at least in part.”
  4. Variability – This is particularly interesting (not just because I’m interested in adaptive narrative). Manovich correlates industrial and now digital media with the ideologies of the industrial and digital age: “In industrial mass society everyone was supposed to enjoy the same goods – and to share the same beliefs. This was also the logic of media technology. A media object was assembled in a media factory (such as a Hollywood studio). Millions of identical copies were produced from a master and distributed to all the citizens. Broadcasting, cinema and print media all followed this logic. In a postindustrial society, every citizen can construct his or her own custom lifestyle and “select” her ideology from a large (but not infinite) number of choices. Rather than pushing the same objects/information to a mass audience, marketing now tries to target each individual separately. The logic of new media reflects this new social logic.” So, if you are reading this page on your mobile browser, it will look different to the same page on a desktop, unless of course you choose to request the desktop version. And of course this variability is a function of the automation in principle 3: I haven’t created the mobile version, it’s done automatically.
  5. Transcoding – Everything that happens to a cultural object in the principles above creates a new type of object that exists in two “layers.” In one, the cultural layer, it is the old media object we all recognise: the photograph, the song, the story. In the second, “computer” layer, it is a file in a database.
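The first three principles are concrete enough to sketch in a few lines of code. Here is a minimal illustration of mine (the toy “image” and “page” below are my own examples, not Manovich’s):

```python
# A minimal sketch of Manovich's first three principles, using a tiny
# "image" as the new media object (all data here is illustrative).

# Principle 1: numerical representation -- the "image" is just numbers.
image = [[0, 64, 128], [192, 255, 32]]

def invert(img):
    """Algorithmic manipulation: invert the brightness of every pixel."""
    return [[255 - px for px in row] for row in img]

# Principle 2: modularity -- independent objects assembled into a larger one.
page = {"text": "A caption", "image": image}

# Principle 3: automation -- the manipulation runs pixel by pixel with no
# human judgement involved.
page["image"] = invert(page["image"])

print(page["image"])  # [[255, 191, 127], [63, 0, 223]]
```

Variability (principle 4) follows naturally: the same `page` object could be re-assembled differently for every reader, because its parts are modular and its transformations automatic.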

As Manovich says “New media may look like media, but this is only the surface.” Is this true of the digital economy as well?

A Young Lady’s Illustrated Primer


There are a lot of things in Neal Stephenson‘s The Diamond Age which I love. If I’m honest with myself I hope to see mediatronic paper and animated digital chops, for example, become real in my lifetime. There are other aspects of the world created in that novel, for example massive inequality in a post-scarcity society, which I hope we won’t see, but which I fear we are already walking down the path towards. At the core of the book though is one idea that some of my recent reading has prompted me to think about again.

The 2009 paper, Serious Games in Cultural Heritage, by Anderson et al., is a fun read, reporting on the state of the art at the time. There are some lovely lines which I’d like to take issue with. The authors, for example, hint at an opinion that a serious game doesn’t need to be fun. To which my reply is that if it’s not fun, then it’s all “serious” and not a “game,” even if it does make use of gaming technology. The authors cite two examples of virtual reconstructions of Roman life, Rome Reborn and Ancient Pompeii, which use gaming technology as a research tool: “[Rome Reborn] aims to develop a researchers’ toolkit for allowing archeologists to test past and current hypotheses surrounding architecture, crowd behavior, social interactions, topography, and urban planning and development.” More fun comes from the Virtual Egyptian Temple and The Ancient Olympic Games examples, which have playful or ludic elements in them, even if it’s only piecing pots back together or successfully answering quizzes set by what the paper calls a “pedagogical agent.” (Crikey! I’m returning to the Ludology vs Narratology debate again – on the side of the Ludologists!)

The paper also discusses the pedagogical value of some commercial games, which Burton calls “documentary games.” The most recent example of this genre brought to my attention is Call of Juarez: Gunslinger (with thanks to Chad at westernreboot). Of course another feature of many modern commercial games that the paper highlights is the bundled content creation tools that allow you to create your own cultural heritage environment, and indeed the Virtual Egyptian Temple mentioned above was built with the Unreal Engine toolset.

There’s also a section on all the various “realities” that gaming technology has to offer, which I’ll return to when I finally get round to writing up Pine and Korn’s Infinite Possibility, and a section on the various gaming technologies (rendering effects, artificial intelligence and the like) which a cultural heritage modeler can use, which makes the paper a very good primer on the subject (and one I wish I’d found earlier).

What led me to that paper was looking deeper at one of the poster presentations I saw last week. I didn’t get a chance to talk to (I guess) Joao Neto who was deep in a conversation I didn’t want to interrupt, so I did some Googling. Part of a team working to interpret Monserrate Palace in Sintra, Portugal, Joao and Maria Neto did some of the usual stuff: creating a 3D model from architectural drawings and laser scanning to show how the palace developed over time; an interactive application called The Lords of Monserrate, exploring the lives of the different owners of the palace over the centuries; and The Restoration, which appears to be a mobile app which recognizes the distinctive plasterwork in each room and interprets the restoration process in that room. But they also experimented with what they called Embodied Conversational Agents.

These are virtual historical characters, “equipped with the complete vital informational [sic] of a heritage site.” The idea was that the virtual character would capture the visitor’s interest with a non-interactive animated opening scene, in the manner of a cut-scene on a video game, but then would open up a real time conversation that would immerse the visitor with realistic “face movements, full-body animations and complex human emotions.”  The conversation would be more sophisticated than a simple question and answer system, by being “context aware,” breaking up the knowledge base into modules, to make interactive responses more possible.

In order to achieve this ambition, we developed an Embodied Conversational Agent Framework – ECA Framework. This framework allows the creation, configuration and usage of virtual agents throughout various kinds of multimedia applications. Based on a spoken dialogue system, an Automatic Speech Recognition (ASR), Text-to-Speech (TTS) engines, a Language Interpretation, VHML Processing, Question & Answer and Behavior modules are used. These essential features have very different roles in the global virtual agent framework procedure, but they all work together to accomplish realistic facial and body animations, as well as complex behavior and disposition.
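Without having seen their code, the module chain they describe might be sketched like this. Everything below – the function names, the tiny knowledge base, the animation cues – is a placeholder of mine to show the shape of the pipeline, not the actual ECA Framework:

```python
# Purely illustrative sketch of a modular conversational-agent pipeline of
# the kind Neto and Neto describe: ASR -> interpretation -> Q&A -> behaviour.
# Every module here is a stand-in, not a real ASR/TTS/NLU component.

def recognise_speech(audio):
    """Stand-in for an Automatic Speech Recognition engine."""
    return audio  # pretend the audio is already a transcript

def interpret(text):
    """Stand-in for the Language Interpretation module: pick a topic."""
    return "restoration" if "restor" in text.lower() else "general"

# The Q&A module: a knowledge base broken into context modules, as the
# paper suggests, so responses can be "context aware."
KNOWLEDGE = {
    "restoration": "The plasterwork in this room was restored recently.",
    "general": "Welcome to the palace. Ask me about any room.",
}

def answer(topic):
    return KNOWLEDGE[topic]

def behave(topic):
    """Stand-in for the Behaviour module: choose an animation cue."""
    return "gesture_towards_ceiling" if topic == "restoration" else "smile"

def converse(audio):
    topic = interpret(recognise_speech(audio))
    return answer(topic), behave(topic)  # text for TTS, plus an animation

reply, animation = converse("Tell me about the restoration work")
print(reply)      # The plasterwork in this room was restored recently.
print(animation)  # gesture_towards_ceiling
```

The point of the modularity is the one the paper makes: each stage can be swapped out or improved independently, while the agent as a whole still produces a coordinated reply and performance.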

Which all sounds like an amazing feat, even if the end result is (and I’m sure it must be) a little bit clunky. I’d love to see it in action. But what does this have to do with Neal Stephenson and The Diamond Age? Well, the subtitle of that book, and the McGuffin (though plot-wise, it’s much more than a McGuffin), is A Young Lady’s Illustrated Primer. In the story, A Young Lady’s Illustrated Primer is an interactive book, a pedagogic tool commissioned by a very wealthy nobleman to ensure that his daughter’s educational development is superior to her peers’. Many of the characters that the reader meets in the Primer are sophisticated virtual agents like those described by Neto and Neto. But some are voiced by a “ractor,” an interactive actor whose voice, expressions and movements are transmitted live to become the voice, expressions and movements of the character in the Primer. One of the characters in Stephenson’s novel makes her living as a ractor, playing characters like Kate “in the ractive version of Taming of the Shrew (which was a butcherous kludge, but popular with a certain sort of male user),” and to “fill in the blanks when things got slow, she also had standing bids, under another name, for easier work: mostly narration jobs, plus anything having to do with children’s media.”

I used to be a “ractor” of sorts, as a costumed interpreter in all sorts of historic sites. I’m proud that my colleagues and I became one of the most interactive and immersive of all the interpretation media available. But having professional people on site is expensive, and not all volunteers have the skills, confidence or desire to take on historical roles. So I’m wondering if another approach to Neto and Neto’s Embodied Conversational Agents is now technically a possibility.

Could a virtual character be remotely controlled in real time by a human “ractor”? And could that ractor fill their working day becoming different characters (even at different cultural heritage sites) as and when required? The relatively small audience for cultural heritage, after all, makes a live ractor experiment a more realistic possibility than it would be for a popular commercial video game.
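Purely as speculation about how this might work: the plumbing could be as simple as streaming timed frames of the performer’s voice and expression to the on-site character. Every name in this sketch is invented; no real protocol or framework is implied:

```python
# Speculative sketch: a remote "ractor" drives a virtual character by
# sending timed frames of voice and expression. All names are invented.
from dataclasses import dataclass

@dataclass
class RactorFrame:
    timestamp_ms: int
    character: str        # which character (and site) the ractor is playing
    audio_chunk: bytes    # the performer's voice, to be lip-synced on site
    expression: str       # e.g. a facial blend-shape or animation label

def apply_frame(agent_state, frame):
    """Stand-in for the on-site client: update the avatar from one frame."""
    agent_state[frame.character] = frame.expression
    return agent_state

state = {}
frame = RactorFrame(0, "Lord of Monserrate", b"...", "welcoming_smile")
state = apply_frame(state, frame)
print(state)  # {'Lord of Monserrate': 'welcoming_smile'}
```

Because the frames name the character, one performer could in principle hop between characters, and between sites, over the same connection.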

I REALLY want to try this out. Who wants to help me?

Centre for Digital Heritage #CDH2013

Last Saturday I went to the inaugural conference of the Centre for Digital Heritage at the University of York. The first speaker was Professor Andrew Prescott, who gave us a salutary reminder that the so-called Industrial Revolution wasn’t quite as revolutionary to those living through it, and that some of what we now realize were world-changing developments were not seen as such at the time. Whether we’ll recognize what is/was important about the current so-called Digital Revolution remains to be seen. But don’t let me speak for him: if you like, through the power of digital, you can see his slideshow here:

It was a mature and sobering start to the conference, but also inspirational. Towards the end he mentioned conductive ink that was safe to touch (or to paint on your skin if you want a working circuit-board tattoo) and pointed us towards the work of Eduardo Kac as an example of how the digital and real worlds might collide in new ways:

I was particularly interested in the presentation from Louise Sorenson about a project to capture stories from families that emigrated from Norway to the US. The idea was to build a Second Life style recreation of the journey many such emigrants took (from Norway to Hull first of all, then overland to Liverpool to catch the boat to America). This would work as an inter-generational learning tool, letting people explore their forefathers’ journeys, and add to the world from their own family tales and photos or objects that might have been passed down the family from the original travelers. This experiment turned out to be one of those “a negative result is not a failure” types. They didn’t manage to capture much new data (though they did get some, shared on this blog), but learned a lot about why they didn’t, which Louise shared with us. For a start – Second Life? Remember when that was the “next big thing”? Early adopters got very excited and talked about it as though we’d all use it, like Neal Stephenson’s Metaverse. But us “norms”, if we logged on at all, realised pretty quickly that it was hard work modelling your world, that the pioneers were profiteering, selling us land and other stuff that existed only as ones and noughts, and, most tragically, that everywhere you looked there were avatars having kinky sex.

In fact Ola Nordmann Goes West, as Sorenson’s project was called, rejected Second Life as a platform for at least two of those reasons. Instead the team opted for an open source alternative, OpenSim. This allowed them to avoid the virtual property speculators and the kinky sex, but it didn’t solve the hard work problem. The challenge of downloading the client, installing it, setting it up (with an IP address, rather than an easy to remember/type URL), and then signing up was an off-putting barrier for an audience used to just clicking on the next hypertext link. And all this was competing for on-line time with more established social networks like Facebook and Flickr, either of which might have more natural appeal to emigrant families, because both are natural tools for keeping in touch with distant relations. Then, there’s the numbers problem.

The Ola project tells me around a million Norwegians emigrated to the US between 1825 and 1925, and that about four and a half million Americans are descended from those families. Which feels like a large number. But when you slice it up to count the number of people that discover the project, the proportion of those who are interested by it, the number who get past the client barriers, and then the fraction who feel they have something to add to the story, you are going to end up with very few people.
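To make that funnel concrete, here is a back-of-envelope sketch. Every conversion rate below is an assumption of mine for illustration, not a figure from the project:

```python
# Back-of-envelope funnel for the "numbers problem". The 4.5 million figure
# is from the Ola project; every rate below is an assumed, illustrative value.
descendants = 4_500_000  # Americans descended from Norwegian emigrants

funnel = [
    ("discover the project",         0.001),  # assumed
    ("are interested enough to try", 0.25),   # assumed
    ("get past the client barriers", 0.20),   # assumed
    ("have something to contribute", 0.10),   # assumed
]

population = descendants
for stage, rate in funnel:
    population = int(population * rate)
    print(f"{stage}: {population}")

# Even starting from 4.5 million, plausible rates at each stage leave
# only a few dozen active contributors.
```

Tweak the rates however you like; unless one of them is wildly optimistic, the end of the funnel stays tiny, which is exactly the problem Sorenson described.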

I’ve spent a few paragraphs on this presentation because it’s particularly relevant to my original proposal, wherein I asked “What can real-world cultural heritage sites learn from the video games industry about presenting a coherent story while giving visitors freedom to explore and allowing them to become participants in the story making?” The Ola project is all about giving people freedom to explore and become participants in the story making, and so it’s a very useful example of some of the traps I might have fallen into. Given that the sites I work with have an annual visitorship numbering in the tens (or, if they are lucky, hundreds) of thousands, their chances of attracting even the tiny number of active community participants are even more limited than Ola Nordmann’s.

An alternative approach to public participation was shown by John Coburn. Tyne and Wear Archives and Museums put their collection online as many institutions have done, but online collections remain a connoisseur’s resource: as Coburn said, “it’s only engaging if you know what you are looking for.” With the Half Memory project, the museums service handed their on-line collection over to creative people of all sorts to create compelling digital experiences. “Designing digital heritage experiences to inspire curiosity and wonder is more important than facilitating learning,” Coburn insists.

PhoneBooth, from the LSE library

Ed Fay’s project, PhoneBooth, for the LSE Library, had an even smaller intended audience: students sent out by their geography lecturers at the LSE to explore the London described by Charles Booth’s survey of 1898-9. Booth colour-coded every street according to the evidence he witnessed and recorded on the streets, classifying them with one of seven colours ranging from Black (Vicious, semi-criminal) to Yellow (Upper-Middle and Upper Class). It reminded me as he spoke of the MOSAIC classification from Experian that the National Trust uses. The library digitized both his published results and all his notes years ago, but PhoneBooth is an app that lets you take that data with you and walk the streets just as Booth did. It even lets you overlay the data with the modern equivalent – no, not MOSAIC, but the Multiple Deprivation Index.

Ceri Higgins shared her experiences working with the BBC and other academics to create a documentary about Montezuma. As the programme was being put together, she grew more and more excited: this was a film that was going beyond the old tropes of gold, sacrifice, and invasion by the Spanish to reveal a broader representation of Aztec society. However, by the time it came out of the editing suite, it had become, in her opinion at least, all about the old tropes of gold, sacrifice, and invasion by the Spanish. The bad guys here were the narrativists who, using tried and tested Aristotelian principles of drama, needed a protagonist, an antagonist and plenty of conflict to sell the programme. They didn’t think the more nuanced interpretation that Higgins had hoped for (and which, I understand, was filmed) would connect emotionally with the audience. Hmmmm.

Pause for a moment of self reflection.

I wish I’d managed to chat with Ceri during one of the breaks. It strikes me, given all the footage which told different, more nuanced stories, that this is a case for The Narrative Braid!

Another presentation that grabbed me was from a team led by Helen Petrie, presenting their efforts to interpret (and then evaluate the interpretation of) “Shakespeare’s church,” Holy Trinity in Stratford-upon-Avon. The interpretation, a smartphone app, was nothing special, using techniques that a myriad of other developers are also trying to push on cultural heritage institutions. But the evaluation was something new. According to Petrie, “surprisingly little empirical research is available on the effects of using [smartphone app] guides on the visitor experience.” It’s not so surprising actually, considering how difficult it is to record emotional responses without participants intellectualising them. Anyway, they started from a clean slate, creating a psychometric toolset that includes the Museum Experience Scale (and of course the Church Experience Scale). The presentation was a top-line summary of course, but I’m keen to read more about it, as I’m pretty sure I saw at least one bar-chart with an “emotional engagement” label.

Another sort of guide, and one long imagined, was described by Adrian Clark. Ten years ago he started working on a 3D augmented reality model of parts of Roman Colchester, but the technology required at the time was at the limits of what was wearable, and by no means cheap. Now that the Raspberry Pi is on the scene, he has started work again, and hopes soon to have a viable commercial model.

We also saw a presentation from Arno Knobbe, who showed us ChartEx, a piece of software that can mine medieval texts (in this case, property charters) and pull out names, places and titles. The program will then algorithmically suggest relationships between the people and places mentioned in the charters, and thus suggest where the same John Goldsmith (for example) appears in more than one charter. Jenna Ng analysed the use of modern Son et Lumiere shows in historic spaces. Valerie Johnson and David Thomas explained how the National Archives are gearing up for collecting the digital records that will soon be flooding in as the “30 year rule” becomes the “20 year rule.” My supervisor, Graeme Earl, introduced a section on the history of Multi-Light imaging, in honor of English Heritage’s guide on the subject. The subsequent papers covered RTI, as well as combining free-range photography with laser scanning to create accurate texture maps and very readable 3D models. One fascinating aside (for me) was that the inventor of the original technique, Tom Malzbender, originally thought its main use would be in creating more realistic textures for computer games. We also looked at: the digitisation of human skeletal remains (which makes putting them together a lot easier, apparently); the 3D modelling of the hidden city walls of Durham (though personally I’m more excited by the Durham Cathedral Lego Build which started today, first brick laid by Jonathan Foyle); and the digital recording, and multiple reconstructions, of medieval wall paintings.
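Going back to ChartEx for a moment, the record-linkage idea is easy to sketch: given people extracted from charters, suggest where the same person may appear in more than one. The charters and the naive matching rule below are invented for illustration; ChartEx itself is far more sophisticated:

```python
# Illustrative sketch of the kind of cross-charter record linkage ChartEx
# suggests. The charters and the matching rule are invented for this demo.

charters = [
    {"id": "C1", "people": [("John Goldsmith", "goldsmith"),
                            ("Alice de York", None)]},
    {"id": "C2", "people": [("John Goldsmith", "goldsmith")]},
    {"id": "C3", "people": [("John Goldsmith", "baker")]},
]

def same_person(a, b):
    """Naive rule: same name, and occupations don't contradict each other."""
    (name_a, occ_a), (name_b, occ_b) = a, b
    if name_a != name_b:
        return False
    return occ_a is None or occ_b is None or occ_a == occ_b

# Suggest cross-charter links for a human analyst to confirm or reject.
links = []
for i, c1 in enumerate(charters):
    for c2 in charters[i + 1:]:
        for p1 in c1["people"]:
            for p2 in c2["people"]:
                if same_person(p1, p2):
                    links.append((c1["id"], c2["id"], p1[0]))

print(links)  # [('C1', 'C2', 'John Goldsmith')]
```

Note that the John Goldsmith in C3 is (rightly or wrongly) kept separate because his stated occupation contradicts the others – which is exactly why such systems suggest links rather than assert them.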

There were poster presentations too. Two that leaped out for me were Katrina Foxton’s exploration of “organic engagement” with cultural heritage on the internet, and Joao and Maria Neto’s experiments with virtual agents as historic characters.