Life and Death at the British Museum

I went to the British Museum yesterday, to check out the Life and Death in Pompeii and Herculaneum exhibition. Not being a real archaeologist, it’s not something I know a lot about (despite a discussion on the subject at the Narrative Tools meeting I went to a couple of weeks ago). For those who haven’t been: if you can get a ticket, it’s worth going. Items from the two ruined cities are brought together and displayed in galleries that take you on a tour of an archetypal house.

Pan with a Goat

There are some items from the famed “secret cabinet” of the Royal Museum in Naples, alongside the less racy domestic items and the emotionally engaging void casts that reveal families and their animals at the moment of death. Here’s a handy review from a fellow blogger if you’d like a second opinion.

My wife had heard about a British Museum app that accompanies the exhibition. We checked it out in advance, but the BM didn’t recommend it for visiting, and so we decided we might buy it after we came back, if we were inspired to learn more.

We were. We did. And I’m looking at it now.

It seems to have been made by putting together the text from the museum labels and the content from the multimedia guides that were available for hire at the museum. I am of course now a little suspicious of their clever commercial thinking. By suggesting the app wouldn’t be a useful companion at the exhibition, they may have managed to sell me the same content twice: once when I hired the multimedia guide at the museum, and again when I got home and bought the app. Navigation through the content differs between the two. In the museum-based guide, items are traditionally labelled with an index number that users input on their handheld units (which looked like Samsung phones) to call up the content. The use-at-home app presents you with a map of the region, then more detailed maps of each city, and a number of themes (for example: Commerce; Wealth and Status; and Religion and Beliefs), thus placing each exhibited item in its location of discovery and its social context. It would have been a pain to navigate at the exhibition.

By way of example, I was particularly taken by some frescos of tavern life which appear to be a sort of ancient comic strip. To find them in the app, I had to guess which city the frescos were from and which theme they might be found under. Let’s start with Herculaneum. I recall that these taverns were for the poorer sort, so not Relaxing in Luxury, I guess. I’ll try Commerce. So now I’m presented with a plan of Herculaneum marked with fourteen Google Maps-style pins representing locations where items were found. Touch one at random and the map zooms in, displaying a label: “Tap, tank and pipes”. No, not what I’m looking for. I go through each of the pins, but don’t find the comic-strip frescos. There is a “Lovers drinking fresco” though, so, slightly frustrated, I tap that, and get an image and some text. Aha! At the side where the themes were displayed, there’s now a list of all the items in that city and theme. So when I come out to the main map and zoom in to Pompeii, I just tap on Commerce, double-tap on a random pin, and scroll the list to find what I’m looking for.

What I finally get are two of the four images, accompanied by the text that I recall from the museum label and an audio commentary which features my favourite Professor of Classics, Mary Beard, talking about the images. Sadly though, her description is accompanied by still photos of only herself and two of the frescos; with the other two images missing, her commentary makes more sense when you are standing in front of all the frescos. Given that I can’t find images of the missing frescos on the internet either, I’m wondering whether their modern-day owner is exerting strong rights protection.

A modern interpretation of what the frescos may have looked like when new

I might have liked to see more provenance references in this app, but there aren’t any, so I don’t know where these frescos (or any of the other exhibits) live when they’re not starring in this exhibition, or where I can find out more about any of the objects.

There is a nice animated video, however, imagining how the catastrophic eruption might have looked from the streets, narrated with a reading of Pliny the Younger’s eyewitness account. At a number of points during the video, there is an opportunity to pause it and look at some related text and objects. There are a number of other videos too, in which curator Paul Roberts gives an overview of each of the social themes.

The thing that disappoints me most about the app is the omission of any details about the hypothetical house around which the exhibition is structured. As visitors enter the show, they are treated to a nicely detailed CGI fly-through of this suggested home and street-tavern. I’ve learned that my fellow archaeologists worry that such visualisations are too often perceived as “the truth” by non-experts. This app would have been a great opportunity to show visitors the research and the decisions that created this particular visualisation, and maybe even to show dynamically how the house model might have changed if different assumptions had been made.

The Narrative Paradox

I’ve had a hectic couple of weeks, which has left me with some catching up to do here. But it’s been an exciting time too, with lots of connections being made and, slowly but surely, a firmer idea of how I might approach this PhD beginning to appear.

Let me start at the beginning though, with a meeting two weeks ago with colleagues from the university’s English and Computing departments, as well as from King’s College London and the University of Greenwich. We were all coming from different directions but arriving at approximately the same place. I probably shouldn’t say too much about it now; after all, we’ve got to find a lot of money first.

One thing we talked about though, was the idea of Adaptive Hypertext. This was a new term to me, and may prove to be a useful one. If I understand my colleagues right, it’s a bit like the principle of sculptural hypertext, in that all the content is available, but elements are filtered away based on user preferences, location or previous behaviour. What differentiates it (I think) from plain old sculptural hypertext is that it’s more dynamic: the sculpting is done on the fly, as the user explores the narrative. Clearly it’s something I need to understand better.
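To pin the idea down for myself, here’s a toy sketch of the sculpting principle as I currently understand it. Everything in it (the node names, the state keys, the conditions) is my own invented example, not from any real hypertext system:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    title: str
    text: str
    # A node stays available only while every condition holds for the
    # reader's current state; failing conditions "sculpt" it away.
    conditions: List[Callable[[Dict], bool]] = field(default_factory=list)

def available(nodes: List[Node], state: Dict) -> List[Node]:
    """Return the nodes not yet sculpted away for this reader state."""
    return [n for n in nodes if all(c(state) for c in n.conditions)]

nodes = [
    Node("Intro", "Welcome to the house."),
    Node("Kitchen", "The hearth...", [lambda s: "Intro" in s["visited"]]),
    Node("Cellar", "Amphorae...", [lambda s: s["interest"] == "commerce"]),
]

# One snapshot of a reader partway through.
state = {"visited": {"Intro"}, "interest": "religion"}
titles = [n.title for n in available(nodes, state)]  # ["Intro", "Kitchen"]
```

If I’ve understood my colleagues right, the adaptive part is simply that the state mutates as the user reads, and the filter is re-run on the fly at every step, rather than once up front.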

The thing I was most excited by though, was when Charlie Hargood put into words something I’ve been struggling with internally. The thing is, the more interactive a story is, the less good it is. Charlie called this the Narrative Paradox. I hadn’t heard the term before, so I’ve been searching for its origin. The earliest reference I’ve found so far comes from Ruth Aylett’s 2000 paper, Emergent Narrative, Social Immersion and Storification. She says: “The well-known ‘narrative paradox’ of VEs is how to reconcile the needs of the user who is now potentially a participant rather than a spectator with the idea of narrative coherence — that for an experience to count as a story it must have some kind of satisfying structure.” The quotes she puts around ‘narrative paradox’ don’t come with an endnote, so though she says it’s “well known” I can’t find an earlier citation. Aylett may, therefore, have coined the term. If so, she deserves some credit, for her definition is a useful one.

Another of Aylett’s papers, co-written with Sandy Louchart, is called Solving the narrative paradox in [Virtual Environments] – lessons from [Role Playing Games]. It got me very excited, not just because I’ve been playing RPGs since 1979, but also because I thought they might already have ‘solved the paradox’; sadly, they conclude that “it would be much more difficult to build a computational system able to assess and act on user’s satisfaction levels.”

Engaging RPG experiences occur as a result of conversation, mediated by feedback between participants, just as the best interpretation occurs when people talk to each other. Until cheap open-source computer programmes consistently pass “the Turing test” we haven’t got a hope of building a system that replicates that process.

But I’m not that ambitious. I’m not looking for an emergent narrative created on the fly for the user, but rather an adaptive narrative, handcrafted in advance, with a satisfying structure, but which can adapt to the user’s needs and interests. Charlie’s own paper, The Narrative Braid, is closer to what I’m looking for, and his braid metaphor is useful not just for documentaries, but also for, maybe especially for, cultural heritage interpretation.

Museums and Heritage Show


I went to the Museums and Heritage show on Wednesday. They claimed it was the biggest ever, and it was in a new venue: the West Hall, Olympia. When I used to exhibit, it was at the Royal Horticultural Society New Hall, near Victoria. Olympia is a little more out-of-the-way, with no direct tube service on weekdays (unless the show is big enough, which M&H isn’t, of course). If my train into London had stopped at Clapham Junction, there would have been a very quick Overground journey to Olympia, but it didn’t, so I had to take the tube to Earls Court, and then a bus and a walk to Olympia.

Once there, the “out-of-the-way-ness” continues, as the entrance to the West Hall is about as far from any of the main roads as it’s possible to be. Inside though, the hall was a comfortable size for the show, which had felt a little drowned by space in the Earls Court venue in recent years.

There were the usual stands from industry stalwarts: organisations like the AHI and GEM; exhibition designers like PLB and Hayley Sharpe; vitrine systems from the likes of ClickNetherfield and Conservation by Design; and all the retail product suppliers. But this year was all about the apps – stick a pin in the exhibition plan and, chances are, the stall you pick will be selling an app, or at the very least “a mobile web service that looks just like an app to your visitors.” It’s the start-ups though, like Huntzz, that I feel sorry for. They think they’re bringing something new to the market, but I wonder how they felt as they walked around the other exhibitors, realizing that their USP isn’t as unique as they thought, and in fact their product isn’t very good either. To be fair, they have apparently sold to a number of clients, including Chatsworth, but I can’t see them surviving in the long term. ATS were one of the first on the app scene, and they took a sensible approach, using a combination of app and bulk-bought iPods to undercut the costs of proprietary audio-guide systems. Now the big boys like Acoustiguide and the even bigger Antenna will sell you apps too, though I bet you’ll still get better value from some of the smaller companies.

Apps haven’t killed off proprietary audio hardware though. The trend in hardware is towards small solid-state devices with simple interfaces for locative or interactive content. The DiscoveryPEN, for example, reads barely visible micro-barcodes; others are activated by infra-red or RFID. The GuideID collects information on the visitor as well as providing interpretation, but no-one seems to be using visitor information to adapt content to better engage that individual.

There were two highlights of the show for me.

One is a lovely piece of tech that I had an application for as soon as I saw it. The info-point is a tiny self-contained wifi webserver with a content management system. Load your interpretation text, pictures and video on to it, plug it in and leave it. Tell visitors to point their smartphone browser at it, and it will serve them the content you want, without using their data allowance or requiring connection to your network. If it falls over for any reason, it reboots itself automatically. It’s basically a Raspberry Pi with a wifi dongle and some open-source software, so you could probably build one yourself for less than a hundred quid. But not everyone has the technical know-how to do that, and it would take time, so for many sites an off-the-shelf solution like this is well worth the investment. I emailed one of the properties I work with from the stand, because I knew this is just what they need.
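For the technically minded, the heart of such a device can be sketched with Python’s standard library alone: a small HTTP server pointed at a folder of content. To be clear, this is my own illustration of the general idea, not the info-point’s actual software, and the folder name is invented:

```python
import http.server
import socketserver
from functools import partial

CONTENT_DIR = "interpretation"  # hypothetical folder of text, pictures and video

def make_server(directory: str = CONTENT_DIR, port: int = 0) -> socketserver.TCPServer:
    """Serve a local folder over HTTP; port 0 asks the OS for a free port."""
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    return socketserver.TCPServer(("", port), handler)

# On the device you would run something like:
#   srv = make_server(port=80)
#   srv.serve_forever()
# and tell visitors to browse to the Pi's wifi address.
```

The real product adds the content management system and the self-rebooting watchdog on top, but the serving itself really is this simple.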

The other exciting thing is Storyscope, the product of an EU-funded research programme called Decipher. It’s a server-based system that enables heritage professionals to work collaboratively on papers, guidebooks, exhibitions – both virtual and in the real world – and even, dare I say it, apps. Start with your collection: build your story, adding items from other collections as you go; pull in data from collections management systems; add your narrative; share your working; add stories from other professionals and the public; edit it, and publish. And the system records every step, so there’s a record of every decision, and of the links back to the CMS systems at your own and other institutions. I’m involved in a project at the NT right now that would really benefit from a tool like this.

Oh, and one more thing: not a highlight exactly, but a nice touch. I’ve used roller banners more often than I care to remember. They are a necessary evil. The green printers, Seacourt, were showing a nice-looking bamboo banner stand, and even the banner itself is made from bamboo fibre.

My musical Friday

I had such an interesting day last Friday but I haven’t had a chance to write it up until now. I kicked off by meeting Ben Mawson at The Cowherds, a pub on the common close to Southampton University. Ben introduced me to his ongoing work Portrait of a City. He gave me a cheap Android phone, with no SIM card, and a pair of headphones (on the longest cable ever – obviously made for sharing). The phone was running NoTOURS software. It took a while to pick up enough GPS signals to get started, but when it did, he set me walking around the common. Using GPS he’d mapped a number of interlocking virtual circles across the common, and when I took the phone into one, the NoTOURS software would play a specific sound file for that circle. Stand where two circles intersect and you get both sounds played simultaneously. The composer, Ben tells me, can code each sound to start quietly at the edge of the circle and get louder as you get closer to the centre point, or to play at a constant volume throughout the circle. If you leave the circle, the composer can instruct the music to pick up where you left off when you re-enter, or alternatively to restart.
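The circle logic Ben described is simple enough to sketch. This is my own illustration rather than NoTOURS code, using flat x/y metres instead of real GPS coordinates; the zone names are invented:

```python
import math
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SoundZone:
    name: str
    cx: float      # centre point, in metres on a flat local grid
    cy: float
    radius: float
    ramp: bool = True  # True: louder towards the centre; False: constant volume

    def volume(self, x: float, y: float) -> float:
        """0.0 outside the circle, rising to 1.0 at the centre if ramped."""
        d = math.hypot(x - self.cx, y - self.cy)
        if d > self.radius:
            return 0.0
        return 1.0 - d / self.radius if self.ramp else 1.0

def playing(zones: List[SoundZone], x: float, y: float) -> Dict[str, float]:
    """Every sound audible at this point, mixed simultaneously."""
    return {z.name: v for z in zones if (v := z.volume(x, y)) > 0.0}

# Two interlocking circles: stand at (4, 0) and you hear both at once.
zones = [
    SoundZone("birdsong", 0.0, 0.0, 10.0),
    SoundZone("choir", 8.0, 0.0, 10.0, ramp=False),
]
heard = playing(zones, 4.0, 0.0)
```

The pick-up-where-you-left-off versus restart behaviour would just be per-zone playback state layered on top of this.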

Ben’s composition for the common includes the work of a number of local schoolchildren, telling stories or making music. We didn’t have much time to explore, but Ben whisked me over to an old cemetery on the common (which is a must-visit, by the way) and of course the atmospheric music added to the experience of this minimally maintained landscape. We talked a bit about how cultural heritage sites might use the technology, or commission compositions.

Then he drove me over to the main campus, where another part of Portrait of a City was waiting, this time a powerful composition based around an old poem, including an impromptu choir and amplified spring sounds overlaying the real sounds that filtered through the headphones.

Then it was on to meet Professor Jeanice Brooks at the music department. Jeanice has worked with the National Trust, and is very interested in domestic music. In the space of an hour she gave me the quickest Narrative Music 101 course ever (thank you Jeanice), and we discussed the wonderful musical possibilities at National Trust sites.

She mentioned she was on television that evening, in a programme following the recreation of a Regency ball in Jane Austen’s home village of Chawton. Then it was off to the library to seek out the “set text” on music and narrative, Claudia Gorbman’s Unheard Melodies.

This is an old (and in my case worn) book, dating from 1987, but to someone like me it’s a perfect introduction. I’ve already learned about diegetic music (where musicians are playing in the story, or characters are listening to the radio, for example), nondiegetic music (where, as she says, “an orchestra plays as cowboys chase indians upon the desert”) and metadiegetic music (where we hear a character “remember” a bit of music). She also talks about themes, and what Wagner called “motifs of reminiscence.” Everything I read, every single thing, reminds me of the music in a film or TV programme, and now I can’t stop making connections with the book whenever I watch TV. Last night, for example, at the end of Game of Thrones, after (spoilers!) Jaime Lannister returned to save Brienne of Tarth from the bear, the nondiegetic music was an instrumental version of a song we’d heard last series, The Rains of Castamere. We had learned back then that the song was an example of the way the Lannisters always repay their debts, and here we were watching Jaime repay his debt to Brienne. Clever.

I have an entirely unreasonable aversion to Jane Austen, which isn’t something to be proud of, considering the industry I work in, but after I returned home from the library I caught some of the programme Jeanice was in. It’s great; I watched it again with my daughter the next day. It’s only online for another four days, but catch it if you can.

Pulling spaces out of narrative

While I’m looking at (broadly) how narratives can be told across space, I gatecrashed an interesting seminar today looking at how spaces (that’s places, not the spaces between the words) can be pulled out of narratives and mapped. It’s all part of the Spatial Humanities project at Lancaster University. Patricia Murrieta-Flores visited Southampton (her alma mater) today to share some of the work she has been doing as a proof of concept for the idea.

In the first example Patricia explained how the team processed the nineteenth-century records of the Registrar General to co-locate clusters of deaths by cholera, diarrhoea and dysentery. The idea (as I understand it) is that they input digitised versions of the historical texts (which they call the corpus), and the system parses them, pulls out the place names, matches them against gazetteers, and maps them in GIS. The output shows the clusters on a map of the UK. This isn’t easy to automate, and it’s still quite a handcrafted process, because of different historical names for places, different spellings, different gazetteers, and disambiguation (does a mention of Lancaster, for example, mean the city in the UK, the county in Pennsylvania, or Mr Lancaster?).
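In miniature, the pipeline might look something like this. It’s a toy illustration of my own, not the project’s system: a real pipeline uses proper named-entity recognition and full historical gazetteers, where here a regex and a three-entry gazetteer stand in (coordinates rounded):

```python
import re
from typing import Dict, Tuple

# A stand-in gazetteer: place name -> (latitude, longitude).
GAZETTEER: Dict[str, Tuple[float, float]] = {
    "Lancaster": (54.05, -2.80),
    "London": (51.51, -0.13),
    "Southampton": (50.90, -1.40),
}

def extract_places(corpus: str) -> Dict[str, Tuple[float, float]]:
    """Pull capitalised candidate words and keep those the gazetteer knows."""
    candidates = re.findall(r"\b[A-Z][a-z]+\b", corpus)
    return {w: GAZETTEER[w] for w in candidates if w in GAZETTEER}

text = "Deaths by cholera were reported in London and in Lancaster."
places = extract_places(text)  # ready to hand to a GIS for mapping
```

The hard parts the team described (historical spellings, competing gazetteers, telling Lancaster the city from Lancaster the county or the surname) all live in making those two little steps robust.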

What they discovered wasn’t surprising: the largest spikes of co-incident death by the three diseases corresponded with three of the occurrences of a cholera epidemic. Patricia’s story, though, had an interesting resonance with the story of John Snow, the “legendary” epidemiologist. During the first spike highlighted in Patricia’s work, Snow suggested that the disease might be water-borne and not, as previously thought, miasmic. The second spike, in 1854, occurs as Snow is analysing data himself, to identify a particular water-pump in Broad Street as the centre of an outbreak. By the time of the third spike, in 1866, the authorities had begun to base their advice to citizens upon John Snow’s learning, and the fourth and largest spike, two years later, is not co-incident with an epidemic but a result of better reporting because of Snow’s work.

In the second example Patricia touched on literature rather than historical record, charting mentions of Lake District places in the work of 18th-century writers. The output showed how what began as a stopping-off point on the way to Scotland became a destination in its own right as the century (and the railways) developed.

Of course, this process has revealed nothing particularly new, but both these experiments were always meant as proofs of concept. The exciting work, discovering new truths from less well-known historical and literary narratives, is about to begin…

Ludology vs. Narratology

Just a short note to set out my position.

In my reading I’ve again and again come across an academic division in the study of games, between ludology and narratology. (Ludology being a word I hadn’t heard before, meaning “game studies”.) For the most part, it seems to me, the debate consists of ludologists saying “Hey, games are our thing! You narratologists can clear off back to your novels and your TV and your films and the like, we don’t want you round here,” but that may be my own prejudice in reading. So for the benefit of balance, I’ll quote what Wikipedia says (today).

This disagreement has been called the ludology vs. narratology debates. The narratological view is that games should be understood as novel forms of narrative and can thus be studied using theories of narrative. The ludological position is that games should be understood on their own terms. Ludologists have proposed that the study of games should concern the analysis of the abstract and formal systems they describe. In other words, the focus of game studies should be on the rules of a game, not on the representational elements which are only incidental.

Earlier this week I found this very gentlemanly exchange of emails, which was for me the most enlightening version of the debate that I’ve read so far.

For my studies, of course, wherein I’m looking to learn about how a particular sort of “open world” game uses narrative, I tend towards the narrativist viewpoint. Which isn’t to say the ludic approach isn’t valid; it’s just that I’m not trying to turn cultural heritage interpretation into a game.

Edit: my views on Ludology vs. Narratology are getting better informed; check out this post too.

Music, narrative and space

I’m thinking about music. Which is slightly scary for me, as I’m not very good with music. I have no sense of rhythm; I’m not tone deaf, but I do struggle to tell the difference between notes, and though I enjoy singing, people around me don’t enjoy my singing. This might have something to do with two of my favourite musicians being Bob Dylan and Shane MacGowan, whose own singing voices are a matter of some division among critics.

But while I may not be terribly qualified to think about music, I have become aware of how important music is to the storytelling that occurs in some of the most applauded video games. When I hear the words music and video games in the same sentence I think first of the god-awful bleeps and beeps that I used to turn down in the eighties, but music in games has come a long way since then, something I only became properly aware of when I started playing Dear Esther for this research. I was so impressed with how the music added to the atmosphere and helped tell the story that I was not surprised to learn that the composer, Jessica Curry, had been nominated for a BAFTA for her work on the game.

Then, when I was telling people I was planning to play Red Dead Redemption, everyone I spoke to mentioned the music as an impressive feature of that game, most pulling out one particularly impressive example, which indeed takes the number two spot in this list of the top twenty songs in games. This is the first time in the game that the (excellent) ambient music gives way to a very “front of mind” song, licensed from Swedish (with South American roots) singer José González. The simple fact that so many people talk about this moment in their appreciation of the game indicates that the music contributes to an emotional, memory-creating response in the player.

We talked about this at work on Wednesday, briefly mentioning the way music is used in the Bowie exhibition at the V&A, and the hope that the experimental opening of Leith Hill Place might include an innovative soundscape. However, we concluded that cultural heritage doesn’t use music enough in interpretation, and where it does, it doesn’t do so very imaginatively. My boss said she might be up for sponsoring an innovative (and repeatable elsewhere) use of music in interpretation. (So if anyone out there has an exciting idea that would fit in a National Trust property around London and the South East, get in touch!)

It definitely seems to me that if I’m planning to learn from how games tell stories, I can’t ignore music. But I have some questions that need answering, and I think these are questions occasioned by the broad range of cultural heritage sites that my organisation, the National Trust, looks after, though they would also apply elsewhere.

  1. Many examples of music and sound in interpretation occur through headphones. This tends towards an insular, individual experience. Lots of people enjoy audio guides, but many seek a more social learning experience in museums. How can places use sound and music in a more open, participatory manner? (This is one of the questions the Ghosts in the Garden project tries to address.)
  2. Similarly, many people visit outdoor locations in part to enjoy the sounds of being in the open air. Can we design musical experiences that make space for, or even amplify some of the ambient sounds that may be occurring around the listener in the non-virtual world?
  3. Lots of the music we hear in cultural heritage interpretation is bought off the shelf – existing recordings, licensed or borrowed from royalty-free collections. Occasionally (for Ghosts in the Garden, for example) new recordings are made of music historically connected with the site. More rarely (I’m aware of a piece created especially for Ham House last year, and another in development at Mottisfont) new pieces of music are commissioned to help tell the story of a site. Why doesn’t this happen more often?

Luckily I don’t have to try and answer these questions on my own. I’ve already met colleagues at the university who are asking similar questions. The At Home with Music project has already worked at National Trust sites, and this post from Ben Mawson suggests he’s recently been dealing with exactly the same narrative frustrations that started me on this research. I’m hoping I can enlist their help, and that they don’t mind my singing.

SCUMM and villainy, sorry, time

My fact of the day appears to be that time itself started on Monkey Island. In his Games and Culture paper, Michael Black argues that one of the key innovations Ron Gilbert introduced with the graphical adventure game The Secret of Monkey Island was a “sense of temporal narrativity.” It was pretty basic, of course, and as Black reveals it’s a clever illusion created by manipulating the SCUMM engine’s spatial structure, but it was a major narrative improvement on the text adventures that preceded it.

I am by no means a computer gamer, and I’ve only experienced a few games, but Monkey Island and its sequel were two that I did, albeit over the shoulders of the couple I was rooming with at the time, as they played on their Amiga. (Though using the monkey as a wrench was MY idea.) I remember being impressed, and I recall them borrowing my Mac (at that time the only machine connected to the net) to email Ron Gilbert and tell him how much they loved the game.

A modern-day equivalent of SCUMM is Bethesda’s Radiant Story engine, which means, after many recommendations, that Skyrim is the next game narrative I will study.