I’ve been reading Eric Champion’s Critical Gaming: Interactive History and Virtual Heritage. Eric asked his publishers to send me a review copy, but none was forthcoming, and I can’t wait for the library to get hold of a copy – I think I want to quote it in a paper I’m proposing – so I splashed out on the Kindle edition. I think of it as a late birthday present to myself, and I’m not disappointed.
One thing that has struck me so far is a little thing (it’s a word Champion uses only three times) but it seems so useful I’m surprised it isn’t used more widely, especially in the heritage interpretation context. That word is “multimodality”. As Wikipedia says (today at least), “Multimodality describes communication practices in terms of the textual, aural, linguistic, spatial, and visual resources – or modes – used to compose messages.” But it’s not just about multimedia: “mode” involves the social and cultural making of meaning as well. Champion says:
Multimodality can help to provide multiple narratives and different types of evidence. Narrative fragments can be threaded and buried through an environment, coaxing people to explore, reflect and integrate their personal exploration into what they have uncovered.
Which is surely what all curated cultural heritage spaces are trying to achieve, isn’t it? (Some with more success than others, I’ll admit.) Champion is referring to the multimodality of games and virtual environments, but it strikes me that museums and heritage sites are inherently multi-modal.
It sent me off looking for specific references to multimodality in museums and heritage sites, and indeed, I found a few, this working paper for example, and this blog, but there are not many.
But I digress. I’ve started Eric’s book with Chapter 8 (all the best readers start in the middle), Intelligent Agents, Drama and Cinematic Narrative, in which he examines various pre-digital theories of drama (Aristotle’s Poetics, Propp’s Formalism (with a nod in the direction of Bartle and Yee) and Campbell’s monomyth), before crunching the gears to explore decidedly digital intelligent agents as dramatic characters. Along the way, he touches upon “storyspaces” – the virtual worlds of games which are by necessity incomplete, yet create an illusion of completeness.
His argument is that there is a need for what he calls “Cultural Agents”: agents representing, recognising, adding to, or transmitting cultural behaviours. Such agents would be programmed to demonstrate the “correct cultural behaviors given specific event or situations” and to recognise correct (and incorrect!) cultural behaviours. For example, I’m imagining here characters in an Elizabethan game that greet you, or other agents in the game, with a bow of the correct depth for your relative ranks, and admonish you if (in a virtual reality sim) you don’t bow low enough when the Queen walks by.
Which leads on to what he calls the “Cultural Turing Test […] in order to satisfy the NPCs [non-player characters] that the players is a ‘local’, the player has to satisfy questions and perform like the actual local characters (the scripted NPCs). Hence, the player has to observe and mimic these artificial agents for fear of being discovered.” (As he points out, this is in fact a reversal of the Turing test.)
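It set me wondering how simple such an agent could be, at least in caricature. Here is a toy sketch in Python of the bowing example above. None of this comes from Champion’s book: the ranks, the thresholds and every function name are things I’ve invented purely to illustrate the idea of an agent that both demonstrates and polices a cultural behaviour.

```python
# A toy "cultural agent" that judges whether the player performed the expected
# courtesy. All ranks, numbers and tolerances are invented for illustration.

RANKS = {"commoner": 0, "gentleman": 1, "knight": 2, "earl": 3, "queen": 5}

def expected_bow_depth(player_rank: str, other_rank: str) -> float:
    """Expected bow depth (0.0 = a nod, 1.0 = the deepest bow) when the player
    greets someone of the given rank. The mapping is entirely made up."""
    gap = RANKS[other_rank] - RANKS[player_rank]
    return max(0.0, min(1.0, gap / 5))

class CulturalAgent:
    """An NPC that demonstrates and polices one cultural behaviour: bowing."""
    def __init__(self, rank: str):
        self.rank = rank

    def judge_greeting(self, player_rank: str, observed_depth: float) -> str:
        expected = expected_bow_depth(player_rank, self.rank)
        if observed_depth + 0.1 >= expected:   # allow a small tolerance
            return "The agent returns your courtesy."
        return "The agent admonishes you: you did not bow low enough."

queen = CulturalAgent("queen")
print(queen.judge_greeting("gentleman", observed_depth=0.3))  # admonished
print(queen.judge_greeting("gentleman", observed_depth=0.9))  # accepted
```

In a game the same table of expectations could drive both sides of the behaviour: the NPCs bow to each other by the “correct” amount, and the player is judged against it, which is exactly the observe-and-mimic loop the reversed Turing test below relies on.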
Then he shifts gear again to look at Machinima (the creation of short films using game engines, which I learned about back in Rochester) as a method for users to reflect on their experience in-game, and edit it into an interpretation of the culture the game was designed to explore. It’s a worthy suggestion, and could be excellent practice in formal learning, but I fear it undermines the game-play itself if it becomes a requirement for the player to edit their virtual experiences before comprehending them as a coherent narrative.
All in all though, I can already see that the book will be an enjoyable and rewarding read.
More time than I’d like has passed since I started creating my design document. In my last post on the subject, I described how a recital I’d seen could be broken down into “Natoms” or Narrative Atoms. The recital itself was constructed to create a story by putting these natoms into an emotionally engaging order.
Now imagine that we want to use the same research to create an exhibition in a museum, or tell a similar story to people visiting a country house. Last time I introduced the idea that the natoms could be all sorts of different media: “documents (which could be original or images); portraits (ditto); text (spoken in this case, but it could be printed); sound (live music in this case, but it could be recordings); and even original instruments.” This list can be divided into two types: physical media – objects, and maybe, in an historic environment (rather than a museum gallery), the spaces themselves; and “ephemeral” media – video, audio, text etc., which can be delivered to the access points on demand. I use the word ephemeral because the physical stuff is by definition located in a particular place and (generally) doesn’t move around, while the other media can be delivered to visitors wherever the visitors are. A fundamental difference between this concept and traditional interpretation design is that text panels cease to be permanent objects in the gallery, letting the collection take pride of place. But the ephemeral stuff is not necessarily less important than the physical stuff: as Cohen and Shires pointed out, the only thing that distinguishes kernels from satellites is that the kernels have to come in a certain order.
In the diagram (above) I’ve separated out the physical natoms (spaces and objects) from the ephemeral ones (indicated by the cloud box), but I think I may have been mistaken in making all the kernels ephemeral natoms. There could well be “wow” objects that curators place at the very start of an exhibition, to make sure that everyone sees them as they enter. Such an object would obviously be the first kernel, and maybe I should redo the diagram to show that possibility. Alternatively, curators can put a wow object at the end of an exhibition, as at the brilliant Life and Death in Pompeii and Herculaneum exhibition at the British Museum, but in fact this is more difficult to do in many free-flow historic environments, so maybe I won’t show that case in the diagram (or maybe I’ll create different diagrams for different curatorial audiences).
What I’m trying to show in the diagram is that there is at least one ephemeral natom for each physical one (a catalogue entry, for example), but there may be more: maybe a recording of music being played on an instrument on display. That piece of music may have a specific place in the narrative, making it a kernel. To give a very simple, broad-brush example, it may be in a minor key, a “sad song” if you will, and work very well in the narrative at the point where we’d like the audience to reflect on the death of the protagonist. The object itself may, or may not, have a specific place in the narrative. Let’s assume it doesn’t.
Now imagine our visitor wanders over to look at the object. The system knows what natoms have been delivered to the visitor at this point, and has a choice: it can let the instrument stand on its own, as a thing of beauty (remember, it is in itself a natom); or it can measure the time the visitor pauses by the object, which indicates a particular interest in it, and deliver (via an e-ink panel, say) the catalogue entry; OR it can note that the visitor has recently been told about the death of the protagonist, and so play the “sad song”, which, for another visitor who has not yet heard the death story, it holds in reserve until later in the experience.
This isn’t meant to remove all control from the visitor, who may well have the ability to trigger the music (or another piece) even if the system chooses not to deliver it until later. Indeed, if the visitor goes around triggering every bit of music, a sophisticated version of this system should be able to background the social story in favour of a more musicological one. Rather, it’s an acknowledgement that the visitor already takes control of the experience by moving around the spaces, and it offers the curator a more flexible way to tell an emotionally engaging narrative by defining the kernels of the story.
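To make the idea a little more concrete, at least for myself, here is a rough Python sketch of the sort of decision the system might make when a visitor pauses by the instrument. This is speculation, not a specification: the data structure, the dwell-time threshold and all the names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Natom:
    """A narrative atom: a piece of physical or ephemeral media."""
    id: str
    kind: str                # "object", "space", "audio", "text", ...
    is_kernel: bool = False  # kernels carry the ordered story; satellites are free
    depends_on: set = field(default_factory=set)  # natoms that must come first

@dataclass
class Visitor:
    delivered: set = field(default_factory=set)   # natom ids already experienced

def choose_natom(visitor, candidates, dwell_seconds):
    """Pick the next ephemeral natom to deliver at an object, or None to let
    the object speak for itself. The threshold and rules are illustrative only."""
    if dwell_seconds < 10:
        return None  # visitor is just passing; the object stands on its own
    for natom in candidates:
        if natom.id in visitor.delivered:
            continue
        # hold back anything whose narrative preconditions haven't been met,
        # e.g. the "sad song" before the visitor has heard the death story
        if natom.depends_on - visitor.delivered:
            continue
        return natom
    return None

# The catalogue entry is always available; the sad song waits until the
# "death of the protagonist" kernel has already been delivered.
catalogue = Natom("catalogue-entry", "text")
sad_song = Natom("sad-song", "audio", is_kernel=True, depends_on={"death-story"})

visitor = Visitor(delivered={"death-story"})
print(choose_natom(visitor, [sad_song, catalogue], dwell_seconds=30).id)  # sad-song
```

A visitor-triggered request would simply bypass `choose_natom`, while still being recorded in `delivered`, which is how the system could notice a music-hungry visitor and background the social story.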
Last Saturday I went to the inaugural conference of the Centre for Digital Heritage at the University of York. The first speaker was Professor Andrew Prescott, who gave us a salutary reminder that the so-called Industrial Revolution wasn’t quite as revolutionary to those living through it, and that some of what we now realize were world-changing developments were not seen as such at the time. Whether we’ll recognize what is/was important enough about the current so-called Digital Revolution remains to be seen. But don’t let me speak for him: if you like, through the power of digital, you can see his slideshow here:
It was a mature and sobering start to the conference, but also inspirational. Towards the end he mentioned conductive ink that was safe to touch (or to paint on your skin if you want a working circuit-board tattoo) and pointed us towards the work of Eduardo Kac as an example of how the digital and real worlds might collide in new ways:
I was particularly interested in the presentation from Louise Sorenson about a project to capture stories from families that emigrated from Norway to the US. The idea was to build a Second Life-style recreation of the journey many such emigrants took (from Norway to Hull first of all, then overland to Liverpool to catch the boat to America). This would work as an inter-generational learning tool, letting people explore their forefathers’ journeys, and add to the world from their own family tales and photos or objects that might have been passed down the family from the original travelers. This experiment turned out to be one of those “a negative result is not a failure” types. They didn’t manage to capture much new data (though they did get some, shared on this blog), but they learned a lot about why they didn’t, which Louise shared with us. For a start – Second Life? Remember when that was the “next big thing”? Early adopters got very excited and talked about it as though we’d all use it, like Neal Stephenson’s Metaverse. But us “norms”, if we logged on at all, realised pretty quickly that it was hard work modelling your world, that the pioneers were profiteering, selling us land and other stuff that existed only as ones and noughts, and, most tragically, that everywhere you looked there were avatars having kinky sex.
In fact Ola Nordmann Goes West, as Sorenson’s project was called, rejected Second Life as a platform for at least two of those reasons. Instead the team opted for an open-source alternative, OpenSim. This allowed them to avoid the virtual property speculators and the kinky sex, but it didn’t solve the hard-work problem. The challenge of downloading the client, installing it, setting it up (with an IP address, rather than an easy-to-remember/type URL) and then signing up was an off-putting barrier to an audience used to just clicking on the next hypertext link. And this is competing for on-line time with more established social networks like Facebook and Flickr, either of which might have more natural appeal to emigrant families, because both are natural tools for keeping in touch with distant relations. Then there’s the numbers problem.
The Ola project tells me around a million Norwegians emigrated to the US between 1825 and 1925, and that about four and a half million Americans are descended from those families. Which feels like a large number. But when you slice it up to count the number of people who discover the project, the proportion of those who are interested in it, the number who get past the client barriers, and then the fraction who feel they have something to add to the story, you are going to end up with very few people.
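Just to illustrate the shape of that funnel, here is a back-of-the-envelope calculation. The only real number is the four and a half million; every conversion rate below is a guess I’ve plucked from the air.

```python
# Back-of-the-envelope participation funnel. The rates are invented purely to
# illustrate how quickly a large audience shrinks; only the 4.5m is from the talk.
descendants = 4_500_000

funnel = [
    ("hear about the project",        0.01),   # guessed
    ("are interested enough to try",  0.10),   # guessed
    ("get past the client barriers",  0.10),   # guessed
    ("have something to contribute",  0.05),   # guessed
]

remaining = descendants
for stage, rate in funnel:
    remaining *= rate
    print(f"{stage:32s} -> {remaining:,.0f}")
# 4,500,000 -> 45,000 -> 4,500 -> 450 -> a couple of dozen
```

Play with the guessed rates as much as you like; the point is that every stage multiplies away most of the audience, and the end of the funnel is a handful of people, not thousands.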
I’ve spent a few paragraphs on this presentation because it’s particularly relevant to my original proposal, wherein I asked “What can real-world cultural heritage sites learn from the video games industry about presenting a coherent story while giving visitors freedom to explore and allowing them to become participants in the story making?” The Ola project is all about giving people freedom to explore and become participants in the story making, and so it’s a very useful example of some of the traps I might have fallen into. Given that the sites I work with have an annual visitorship numbering in the tens (or, if they are lucky, hundreds) of thousands, their chances of attracting even the tiny number of active community participants are even more limited than Ola Nordmann’s.
An alternative approach to public participation was shown by John Coburn. Tyne and Wear Archives and Museums put their collection on-line, as many institutions have done, but online collections remain a connoisseur’s resource: as Coburn said, “it’s only engaging if you know what you are looking for.” With the Half Memory project, the museums service handed their on-line collection over to creative people of all sorts to create compelling digital experiences. “Designing digital heritage experiences to inspire curiosity and wonder is more important than facilitating learning,” Coburn insists.
PhoneBooth, from the LSE library
Ed Fay’s project, PhoneBooth, for the LSE Library, had an even smaller intended audience: students sent out by their geography lecturers at the LSE to explore the London described by Charles Booth’s survey of 1898-9. Booth colour-coded every street according to the evidence he witnessed and recorded on the streets, classifying them with one of seven colours ranging from Black (Vicious, semi-criminal) to Yellow (Upper-Middle and Upper Class). It reminded me, as Fay spoke, of the MOSAIC classification from Experian that the National Trust uses. The library digitized both Booth’s published results and all his notes years ago, but PhoneBooth is an app that lets you take that data with you, and walk the streets just as Booth did. It even lets you overlay the data with the modern equivalent – no, not MOSAIC, but the Multiple Deprivation Index.
Ceri Higgins shared her experiences working with the BBC and other academics to create a documentary about Montezuma. As the programme was being put together, she grew more and more excited. This was a film that was going beyond the old tropes of gold, sacrifice, and invasion by the Spanish to reveal a broader representation of Aztec society. However, by the time it came out of the editing suite, it had become, in her opinion at least, all about the old tropes of gold, sacrifice, and invasion by the Spanish. The bad guys here were the narrativists who, using tried and tested Aristotelian principles of drama, needed a protagonist, an antagonist and plenty of conflict to sell the programme. They didn’t think the more nuanced interpretation that Higgins had hoped for (and which, I understand, was filmed) would connect emotionally with the audience. Hmmmm.
Pause for a moment of self reflection.
I wish I’d managed to chat with Ceri during one of the breaks. It strikes me, given all the footage which told different, more nuanced stories, that this is a case for The Narrative Braid!
Another presentation that grabbed me was from a team led by Helen Petrie, presenting their efforts to interpret (and then evaluate the interpretation of) “Shakespeare’s church”, Holy Trinity in Stratford-upon-Avon. The interpretation, a smartphone app, was nothing special, using techniques that a myriad of other developers are also trying to push on cultural heritage institutions. But the evaluation was something new. According to Petrie, “surprisingly little empirical research is available on the effects of using [smartphone app] guides on the visitor experience.” It’s not so surprising actually, considering how difficult it is to record emotional responses without participants intellectualising them. Anyway, they started from a clean slate, creating a psychometric toolset that includes the Museum Experience Scale (and of course the Church Experience Scale). The presentation was of course only a top-line summary, but I’m keen to read more about it, as I’m pretty sure I saw at least one bar-chart with an “emotional engagement” label.
Another sort of guide, and one long imagined, was described by Adrian Clark. Ten years ago he started working on a 3D augmented reality model of parts of Roman Colchester, but the technology required at the time was at the limits of what was wearable, and by no means cheap. Now that the Raspberry Pi is on the scene, he has started work again, and hopes soon to have a viable commercial model.
We also saw a presentation from Arno Knobbe, who showed us ChartEx, a piece of software that can mine medieval texts (in this case, property charters) and pull out names, places and titles. The program will then algorithmically suggest relationships between the people and places mentioned in the charters, and thus suggest where the same John Goldsmith (for example) appears in more than one charter (I’ve sketched the general idea of that kind of record linkage at the end of this round-up). Jenna Ng analysed the use of modern Son et Lumière shows in historic spaces. Valerie Johnson and David Thomas explained how the National Archives are gearing up for collecting the digital records that will soon be flooding in as the “30 year rule” becomes the “20 year rule.”

My supervisor, Graeme Earl, introduced a section on the history of Multi-Light imaging, in honour of English Heritage’s guide on the subject. The subsequent papers covered RTI, as well as combining free range photography with laser scanning to create accurate texture maps and very readable 3D models. One fascinating aside (for me) was that the inventor of the original technique, Tom Malzbender, originally thought its main use would be in creating more realistic textures for computer games.

We also looked at: the digitisation of human skeletal remains (which makes putting them together a lot easier, apparently); the 3D modelling of the hidden city walls of Durham (though personally I’m more excited by the Durham Cathedral Lego Build which started today, first brick laid by Jonathan Foyle); and the digital recording, and multiple reconstructions, of medieval wall paintings.
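As promised above, here is a toy sketch of the general record-linkage idea behind matching the same person across charters. It is emphatically not ChartEx’s actual algorithm: the similarity measure, the weights and the example mentions are all my own inventions for illustration.

```python
from difflib import SequenceMatcher

# Toy record linkage: two charter mentions are more likely to be the same
# person if their names are similar and they share associated places.
# (Not ChartEx's method; everything here is invented for illustration.)

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_person_score(mention_a: dict, mention_b: dict) -> float:
    """Crude score in [0, 1]; the weights are made up."""
    name_score = name_similarity(mention_a["name"], mention_b["name"])
    shared = set(mention_a["places"]) & set(mention_b["places"])
    place_score = min(1.0, len(shared) / 2)
    return 0.7 * name_score + 0.3 * place_score

charter_12 = {"name": "John Goldsmith", "places": ["Micklegate", "York"]}
charter_57 = {"name": "Johannes Goldesmith", "places": ["Micklegate"]}
print(f"{same_person_score(charter_12, charter_57):.2f}")  # a fairly high score
```

The real system presumably does something far more sophisticated with titles, dates and relationships, but the shape of the problem, scoring candidate matches rather than asserting identity, is the same.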
There were poster presentations too. Two that leaped out for me were Katrina Foxton’s exploration of “organic engagement” with cultural heritage on the internet, and Joao and Maria Neto’s experiments with virtual agents as historic characters.
Or rather I’ve read up to somewhere between pages twelve and eighteen, but it’s been a fun adventure so far. It’s somehow ironic that a book with the ambition of recording the development of digital media semantics is shackled to such an old medium as the printed and bound book. There’s a copy available from the Winchester School of Art Library, but it always seems to be out, and I haven’t had the heart to recall it. I can’t say if one person has held onto it for months, or somebody just checked it out moments before I looked on the web catalogue. And having experienced how it feels to bring a book home from the library, get a recall notice the next day, and have to post it back, I wouldn’t want to put another student through that. I was hoping there would be an e-edition available from the library; a couple of books I’ve wanted to look at have been available that way. But, again somehow ironically, it’s dead tree or nothing.
Or so I thought, but when I checked Amazon I discovered they do have a Kindle edition. Yes, it is more expensive than the paper version bought at another online store, but it does mean I can download a preview onto my iPad.
Reading that preview it’s apparent that Manovich is fully aware of the irony inherent in writing a book about new media. The numbered pages are preceded by a prologue, which Manovich titles Vertov’s Dataset. He explains:
The avant-garde masterpiece Man with a Movie Camera, completed by Russian director Dziga Vertov in 1929, will serve as our guide to the language of new media. This prologue consists of a number of stills from the film. Each still is accompanied by a quote from the text summarising a particular principle of new media. The number in brackets indicates the page from which the quote is taken. The prologue thus acts as a visual index to some of the book’s major ideas.
It’s Manovich’s attempt to create an analogue hypertext user interface, or front-end, for the book. It would have been good if the Kindle edition’s page numbers in brackets were links to the pages themselves, as the numbers in the Contents table are, but if I want to use the prologue as intended, I shall have to acquire a paper version of the book.
The prologue is enticing though. A glimpse of page 158 says:
Borders between worlds do not have to be erased; different spaces do not have to be matched in perspective, scale and lighting; individual layers can retain their separate identities rather than being merged into a single space; different worlds can clash semantically rather than form a single universe.
He asks (on page 317) “can the loop be a new narrative form appropriate for the computer age?” And on page 322 argues:
Spatial montage represents an alternative to traditional cinematic temporal montage, replacing its traditional sequential mode with a spatial one. Ford’s assembly line relied on the separation of the production process into sets of simple, repetitive and sequential activities. The same principle made computer programming possible: A computer program breaks a task into a series of elemental operations to be executed one at a time. Cinema followed this logic of industrial production as well. It replaced all other modes of narration with a sequential narrative, an assembly line of shots that appear on the screen one at a time. This type of narrative turned out to be particularly incompatible with the spatial narrative that had played a prominent role in European visual culture for centuries.
This prologue (and the more conventional introduction that made up the rest of the preview) have got me hooked. I’ve ordered a copy, though not from Amazon, and not a Kindle edition. The paper version is available more cheaply, and postage free, from the Book Depository (which itself is, oh irony of ironies, owned by Amazon).