I’m dashing off to do some research at Bodiam Castle, but I wanted to share this short film which a colleague pointed me to recently. An explorable model of Pepys’ London, created using the Crysis engine.
I’m excited because my first conference paper proposal has been accepted, and it gets financial support to help me go deliver it. So in September I’m off to the University of Rochester, NY for their Decoding the Digital conference. I thought I’d share the abstract here. Now, of course, I have to write the paper.
The creators of digital narratives, in the form of computer games, are experimenting with form as they explore storytelling in virtual spaces. Different approaches to so-called “open world” games all succeed in creating emotionally engaging diegeses, three-dimensional virtual story spaces around which the player can wander with apparent freedom.
Cultural heritage institutions, including museums, built heritage, historic and ancient sites and heritage landscapes, have long been telling stories in three dimensions. Where it’s done well, visitors to those sites can immerse themselves in stories that they co-author as they make choices about what to look at first and subsequently and how deeply they want to explore individual points of interest.
This presentation reports on early research comparing narrative approaches in digital games and cultural heritage institutions. Using case studies of open world games such as Red Dead Redemption, Dear Esther, and Skyrim, the presentation identifies different narrative techniques, structures and emotional triggers and seeks comparators in a number of UK cultural heritage sites. Highlighting the relative strengths of the digital and real-world media, the presentation discusses how cultural heritage sites might adapt some of the techniques of game narrative, including structure and music, to interpretive use. The results of an evaluation of a digital ludic interpretation case study, Ghosts in the Garden, at the Holburne Museum, Bath, illustrate the discussion.
The presentation concludes by setting out the plan for further research, including an exploration of adaptive narrative and the narrative braid (Hargood et al., 2012), and experiments with more considered use of music to trigger emotional responses at heritage sites.
I took a trip with work colleagues yesterday to check out the new Mary Rose museum. For those international readers who may be unfamiliar with “Britain’s Pompeii”: the Mary Rose is a Tudor warship that sank in Portsmouth Harbour during a battle with the French. Half the hull and an amazing number of objects were preserved in the silt, and the site has been a marine archaeological “dig” for decades.
The new museum was built around the hulk of the vessel, which has been in-situ in a Portsmouth dry-dock since it was raised back when I was a kid.
In black wood, the building’s ark-like shape echoes the ship it contains. Inside, there are galleries that run along the length of the Mary Rose, displaying some of the objects in the mirror image of the position in which they were found in the wreck. For example, one of the things that most excited me was the brick-built cooking facility right down on the bottom deck of the ship. Between these galleries and the Mary Rose herself there is currently a wall, with windows of varying shapes and sizes allowing you to peer through. The wooden structure is still being conserved. Having been chemically sprayed for 20 years or so, it’s now being air-dried, and in about five years the wall between the ship and its visitors will come down. When that happens (I hope and expect) visitors will be able to glance across from an interesting object (like my kitchen) and recognise the place it occupied in the wreck. At the moment it’s slightly frustrating, because visitors can’t make that connection without shifting position and peering through one of the windows.
One great aspect of these long galleries is that their floors are not level but curve just like the decks of the Tudor ship. With some of the displays encouraging visitors to step into spaces with limited headroom, the physicality of the space does a very good job of putting you “on board”, without compromising access.
At either end of the museum there are more conventional (level) interpretive galleries, thematically unpacking a wealth of the objects that were found onsite. Vitrines and interactive displays interpret life on board the Mary Rose as well as the archaeological methods used to reach a shared understanding of what life might have been like.
Reflecting on the experience as a whole, and considering it through the lens of where my studies have taken me so far, it’s a pretty traditional museum experience. There’s an emotional peak at the beginning, as you walk into the first gallery past an evocative video that puts you among the sailors as the ship turns on its side and water floods in. Then there’s a context-setting exhibition which helps you start to define your route through the plethora of objects in vitrines that you are about to see. I felt no pressure to look in detail at every vitrine, though colleagues reported becoming quickly overwhelmed by the sheer number of objects to look at. Certain objects caught my eye and excited me, but there wasn’t another “designed” emotional peak until the route required taking a lift between the lowest floor and the highest. As the lift began its ascent, the interior lights dimmed, and we were blessed with a view of the whole hulk of the ship through the glass side of the lift. No longer obscured by the wall and glimpsed through windows, the sight was magnificent.
But after the little epiphany I had in this post, I was looking for another emotional peak, or moment of insight at least, as I left the final gallery. I was disappointed. All I got was some underwhelming footage of the salvaged Mary Rose making her way towards the shore at Portsmouth. I tried to remember the excitement I felt watching it on TV in 1982, but failed to make my heart beat faster.
It reminded me of a conversation I recently had with my boss, about this very subject. She used to work for the visitor attractions company which runs SeaLife centres, Tussauds, etc., and told me that there, no project got off the ground that didn’t have a final wow moment of some sort. It isn’t rocket science, so why do so many cultural heritage attractions end with such a damp squib before the gift shop?
There are a couple of simple games included among the interactives. In one, visitors co-operate to become marine archaeologists. One player moves a diver equipped with an underwater “blower” around the seabed, looking for objects to uncover. As each object becomes visible, the second player moves their diver to “record” the object. A successful recording rewards the players with a little bit of interpretive text, but the object of the game is to see how many pieces you can uncover and record before the divers’ air runs out.
I had to force myself to stop playing the other game, though, which was simple but addictive. Equipped with four ship’s cannon of various sizes, the object is to disable, but not sink, as many French ships as you can within the time limit. The enemy vessels cross the screen at various ranges. Use too weak a gun and your shot will fall short, but too powerful a weapon will sink the ship and you lose your booty. I managed eleven ships on my second go.
One of the other interactives felt strangely dated. An early exhibit displays the Cowdray Engraving, a depiction of the battle showing just the tips of the Mary Rose’s masts as she sinks beneath the waves. Touch sensitive screens allow visitors to explore the map in detail and various points of interest are interpreted with pop-up windows. But I found myself trying to use multi-touch gestures to navigate, and got quite frustrated with the displays.
The other interactives took the role of text panels when “asleep,” with a few lines of text on a topic, but touch one of the object icons, or question buttons, on the panel and a video would appear. With some objects, for example a pocket sundial, the video would show a recreation of the object in use. Other videos would feature experts talking about the object or concept.
A short aside on the video reconstructions. Many of the sailors are portrayed wearing just a shirt and hose, with no other layers. The shirts were also surprisingly short. Given the detail of many other aspects of the reconstruction, and some of the amazing scraps of fabric (including a check shirt, velvet and woollen cloth) that have been preserved by the silt, I wanted to know whether this was a considered, informed choice, or just cheap costuming. I didn’t find a conclusive answer.
Overall though, the museum is a moving and captivating experience. I know a little about Tudor life, but I learned so much from these mundane, useful objects, and felt a real connection with the men who lost their lives when the Mary Rose sank. The museum considers itself a memorial to those sailors, and it’s a sensitive and well-deserved designation.
There are a lot of things in Neal Stephenson’s The Diamond Age which I love. If I’m honest with myself I hope to see mediatronic paper and animated digital chops, for example, become real in my lifetime. There are other aspects of the world created in that novel, for example massive inequality in a post-scarcity society, which I hope we won’t see, but which I fear we are already walking down the path towards. At the core of the book though is one idea that some of my recent reading has prompted me to think about again.
The 2009 paper, Serious Games in Cultural Heritage, by Anderson et al., is a fun read, reporting on the state of the art at the time. There are some lovely lines which I’d like to take issue with. The authors, for example, hint at an opinion that a serious game doesn’t need to be fun. To which my reply is that if it’s not fun, then it’s all “serious” and not a “game,” even if it does make use of gaming technology. The authors cite two examples of virtual reconstructions of Roman life, Rome Reborn and Ancient Pompeii, which use gaming technology as a research tool: “[Rome Reborn] aims to develop a researchers’ toolkit for allowing archeologists to test past a[nd] current hypotheses surrounding architecture, crowd behavior, social interactions, topography, and urban planning and development.” More fun comes from the Virtual Egyptian Temple and The Ancient Olympic Games examples, which have playful or ludic elements in them, even if it’s only piecing pots back together or successfully answering quizzes set by what the paper calls a “pedagogical agent.” (Crikey! I’m returning to the Ludology vs Narratology debate again – on the side of the Ludologists!)
The paper also discusses the pedagogical value of some commercial games, which Burton calls “documentary games.” The most recent example of this genre brought to my attention is Call of Juarez: Gunslinger (with thanks to Chad at westernreboot). Of course another feature of many modern commercial games that the paper highlights is the bundled content creation tools that allow you to create your own cultural heritage environment, and indeed the Virtual Egyptian Temple mentioned above was built with the Unreal Engine toolset.
There’s also a section on all the various “realities” that gaming technology has to offer, which I’ll return to when I finally get round to writing up Pine and Korn’s Infinite Possibilities, and a section on the various gaming technologies (rendering effects, artificial intelligence and the like) which a cultural heritage modeler can use – all of which makes the paper a very good primer on the subject (and one I wish I’d found earlier).
What led me to that paper was looking deeper at one of the poster presentations I saw last week. I didn’t get a chance to talk to (I guess) Joao Neto who was deep in a conversation I didn’t want to interrupt, so I did some Googling. Part of a team working to interpret Monserrate Palace in Sintra, Portugal, Joao and Maria Neto did some of the usual stuff: creating a 3D model from architectural drawings and laser scanning to show how the palace developed over time; an interactive application called The Lords of Monserrate, exploring the lives of the different owners of the palace over the centuries; and The Restoration, which appears to be a mobile app which recognizes the distinctive plasterwork in each room and interprets the restoration process in that room. But they also experimented with what they called Embodied Conversational Agents.
These are virtual historical characters, “equipped with the complete vital informational [sic] of a heritage site.” The idea was that the virtual character would capture the visitor’s interest with a non-interactive animated opening scene, in the manner of a cut-scene on a video game, but then would open up a real time conversation that would immerse the visitor with realistic “face movements, full-body animations and complex human emotions.” The conversation would be more sophisticated than a simple question and answer system, by being “context aware,” breaking up the knowledge base into modules, to make interactive responses more possible.
In order to achieve this ambition, we developed an Embodied Conversational Agent Framework – ECA Framework. This framework allows the creation, configuration and usage of virtual agents throughout various kinds of multimedia applications. Based on a spoken dialogue system, an Automatic Speech Recognition (ASR), Text-to-Speech (TTS) engines, a Language Interpretation, VHML Processing, Question & Answer and Behavior modules are used. These essential features have very different roles in the global virtual agent framework procedure, but they all work together to accomplish realistic facial and body animations, as well as complex behavior and disposition.
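Reading that description, I couldn’t resist sketching how such a module chain might hang together. To be clear, this is my own loose Python sketch, not the real ECA Framework: every function, name and knowledge-base entry below is invented for illustration (the framework’s internals aren’t published in a form I could follow).

```python
# Hypothetical sketch of the module chain the quoted passage describes:
# speech in -> recognition -> interpretation -> answer selection -> behaviour -> speech out.
# All names and data here are illustrative stand-ins, not the real framework's API.

def recognise_speech(audio: bytes) -> str:
    """Stand-in for the ASR module: audio in, text out."""
    return "who built this palace?"  # placeholder transcription

def interpret(utterance: str) -> str:
    """Stand-in for language interpretation: map the text onto a topic key."""
    return "owners" if "built" in utterance or "owner" in utterance else "general"

# The paper's interesting idea: break the knowledge base into modules so the
# agent stays "context aware". Here that is reduced to a dict keyed by topic.
KNOWLEDGE_MODULES = {
    "owners": "Sir Francis Cook rebuilt Monserrate in the 1850s.",
    "general": "Monserrate Palace stands in Sintra, Portugal.",
}

def answer(topic: str) -> str:
    """Stand-in for the Question & Answer module."""
    return KNOWLEDGE_MODULES.get(topic, KNOWLEDGE_MODULES["general"])

def respond(audio: bytes) -> tuple[str, str]:
    """One conversational turn: returns (spoken reply, behaviour cue)."""
    text = recognise_speech(audio)
    topic = interpret(text)
    reply = answer(topic)
    behaviour = "gesture_toward_building" if topic == "owners" else "idle_smile"
    return reply, behaviour  # the TTS and animation modules would consume these

print(respond(b""))
```

Even reduced to a toy like this, you can see why the modular knowledge base matters: the behaviour module can key its gestures and “disposition” off the same topic the Q&A module selected, which is what keeps face, body and speech coherent.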
Which all sounds like an amazing feat, even if the end result is (and I’m sure it must be) a little bit clunky. I’d love to see it in action. But what does this have to do with Neal Stephenson and The Diamond Age? Well, the subtitle of that book and the McGuffin (though plot-wise, it’s much more than a McGuffin) is A Young Lady’s Illustrated Primer. In the story, A Young Lady’s Illustrated Primer is an interactive book, a pedagogic tool commissioned by a very wealthy nobleman to ensure that his daughter’s educational development is superior to her peers’. Many of the characters that the reader meets in the Primer are sophisticated virtual agents like those described by Neto and Neto. But some are voiced by a “ractor,” an interactive actor whose voice, expressions and movements are transmitted live to become the voice, expressions and movements of the character in the Primer. One of the characters in Stephenson’s novel makes her living as a ractor, playing characters like Kate “in the ractive version of Taming of the Shrew (which was a butcherous kludge, but popular with a certain sort of male user),” and to “fill in the blanks when things got slow, she also had standing bids, under another name, for easier work: mostly narration jobs, plus anything having to do with children’s media.”
I used to be a “ractor” of sorts, as a costumed interpreter at all sorts of historic sites. I’m proud that my colleagues and I became one of the most interactive and immersive of all the interpretation media available. But having professional people on site is expensive, and not all volunteers have the skills, confidence or desire to take on historical roles. So I’m wondering if another approach to Neto and Neto’s Embodied Conversational Agents is now technically a possibility.
Could a virtual character be remotely controlled in real time by a human “ractor”? And could that ractor fill their working day becoming different characters (even at different cultural heritage sites) as and when required? The relatively small audience for cultural heritage, after all, makes a live ractor experiment a more realistic possibility than it would be for a popular commercial video game.
I REALLY want to try this out. Who wants to help me?
Last Saturday I went to the inaugural conference of the Centre for Digital Heritage at the University of York. The first speaker was Professor Andrew Prescott, who gave us a salutary reminder that the so-called Industrial Revolution wasn’t quite as revolutionary to those living through it, and that some of what we now realize were world-changing developments were not seen as such at the time. Whether we’ll recognize what is/was important enough about the current so-called Digital Revolution remains to be seen. But don’t let me speak for him: if you like, through the power of digital, you can see his slideshow here:
It was a mature and sobering start to the conference, but also inspirational. Towards the end he mentioned conductive ink that was safe to touch (or to paint on your skin if you want a working circuit-board tattoo) and pointed us towards the work of Eduardo Kac as an example of how the digital and real worlds might collide in new ways:
I was particularly interested in the presentation from Louise Sorenson about a project to capture stories from families that emigrated from Norway to the US. The idea was to build a Second Life style recreation of the journey many such emigrants took (from Norway to Hull first of all, then overland to Liverpool to catch the boat to America). This would work as an inter-generational learning tool, letting people explore their forefathers’ journeys, and add to the world from their own family tales, photos or objects that might have been passed down the family from the original travelers. This experiment turned out to be one of those “a negative result is not a failure” types. They didn’t manage to capture much new data (though they did get some, shared on this blog), but learned a lot about why they didn’t, which Louise shared with us. For a start – Second Life? Remember when that was the “next big thing”? Early adopters got very excited and talked about it as though we’d all use it, like Neal Stephenson’s Metaverse. But us “norms”, if we logged on at all, realised pretty quickly that it was hard work modelling your world, the pioneers were profiteering, selling us land and other stuff that existed only as ones and noughts, and, most tragically, everywhere you looked there were avatars having kinky sex.
In fact Ola Nordmann Goes West, as Sorenson’s project was called, rejected Second Life as a platform for at least two of those reasons. Instead the team opted for an open-source alternative, OpenSim. This allowed them to avoid the virtual property speculators and kinky sex, but didn’t solve the hard-work problem. The challenge of downloading the client; installing the client; setting it up (with an IP address, rather than an easy-to-remember/type URL); and then signing up was an off-putting barrier to an audience used to just clicking on the next hypertext link. And this is competing for on-line time with more established social networks like Facebook and Flickr, either of which might have more natural appeal to emigrant families, because both are natural tools for keeping in touch with distant relations. Then there’s the numbers problem.
The Ola project tells me around a million Norwegians emigrated to the US between 1825 and 1925, and that about four and a half million Americans are descended from those families. Which feels like a large number. But when you slice it up to count the number of people that discover the project, the proportion of those who are interested by it, the number who get past the client barriers, and then the fraction who feel they have something to add to the story, you are going to end up with very few people.
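Just to make that funnel concrete, here’s a back-of-envelope calculation in Python. The conversion rates are entirely my own invention, for illustration only; the 4.5 million figure is the one the project quoted.

```python
# A rough participation funnel: how quickly 4.5 million potential
# participants shrinks. Every rate below is a made-up assumption.
population = 4_500_000          # Americans of Norwegian descent (project's figure)

rates = {
    "discover the project": 0.01,      # 1% ever hear of it
    "are interested": 0.10,
    "get past the client setup": 0.20,
    "have something to add": 0.05,
}

n = population
for stage, rate in rates.items():
    n = int(n * rate)
    print(f"{stage:>28}: {n:,}")
# Even these generous-looking rates multiply out to a tiny fraction:
# 0.01 * 0.10 * 0.20 * 0.05 = 0.00001, i.e. one person in a hundred thousand.
```

Run with these numbers the funnel ends at 45 people, which is roughly the scale of community the project was actually drawing on.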
I’ve spent a few paragraphs on this presentation because it’s particularly relevant to my original proposal, wherein I asked “What can real-world cultural heritage sites learn from the video games industry about presenting a coherent story while giving visitors freedom to explore and allowing them to become participants in the story making?” The Ola project is all about giving people freedom to explore and become participants in the story making, and so it’s a very useful example of some of the traps I might have fallen into. Given that the sites I work with have an annual visitorship numbering in the tens (or, if they are lucky, hundreds) of thousands, their chances of attracting even the tiny number of active community participants are even more limited than Ola Nordmann’s.
An alternative approach to public participation was shown by John Coburn. Tyne and Wear Archives and Museums put their collection on-line, as many institutions have done, but online collections remain a connoisseur’s resource: as Coburn said, “it’s only engaging if you know what you are looking for.” With the Half Memory project, the museums service handed their on-line collection over to creative people of all sorts to create compelling digital experiences. “Designing digital heritage experiences to inspire curiosity and wonder is more important than facilitating learning,” Coburn insists.
Ed Fay’s project, PhoneBooth, for the LSE Library, had an even smaller intended audience: students sent out by their geography lecturers at the LSE to explore the London described by Charles Booth’s survey of 1898-9. Booth colour-coded every street according to the evidence he witnessed and recorded on the streets, classifying them with one of seven colours ranging from Black (Vicious, semi-criminal) to Yellow (Upper-Middle and Upper Class). It reminded me as he spoke of the MOSAIC classification from Experian that the National Trust uses. The library digitized both his published results and all his notes years ago, but PhoneBooth is an app that lets you take that data with you and walk the streets just as Booth did. It even lets you overlay the data with the modern equivalent – no, not MOSAIC, but the Multiple Deprivation Index.
Ceri Higgins shared her experiences working with the BBC and other academics to create a documentary about Montezuma. As the programme was being put together, she grew more and more excited. This was a film that was going beyond the old tropes of gold, sacrifice, and invasion by the Spanish to reveal a broader representation of Aztec society. However, by the time it came out of the editing suite, it had become, in her opinion at least, all about the old tropes of gold, sacrifice, and invasion by the Spanish. The bad guys here were the narrativists who, using tried and tested Aristotelian principles of drama, needed a protagonist, an antagonist and plenty of conflict to sell the programme. They didn’t think the more nuanced interpretation that Higgins had hoped for (and which, I understand, was filmed) would connect emotionally with the audience. Hmmmm.
Pause for a moment of self reflection.
I wish I’d managed to chat with Ceri during one of the breaks. It strikes me, given all the footage which told different, more nuanced stories, that this is a case for The Narrative Braid!
Another presentation that grabbed me was from a team led by Helen Petrie, presenting their efforts to interpret (and then evaluate the interpretation of) “Shakespeare’s church,” Holy Trinity in Stratford-upon-Avon. The interpretation, a smartphone app, was nothing special, using techniques that a myriad of other developers are also trying to push on cultural heritage institutions. But the evaluation was something new. According to Petrie, “surprisingly little empirical research is available on the effects of using [smartphone app] guides on the visitor experience.” It’s not so surprising actually, considering how difficult it is to record emotional responses without participants intellectualising them. Anyway, they started from a clean slate, creating a psychometric toolset that includes the Museum Experience Scale (and of course the Church Experience Scale). The presentation was a top-line summary, of course, but I’m keen to read more about it, as I’m pretty sure I saw at least one bar-chart with an “emotional engagement” label.
Another sort of guide, and one long imagined, was described by Adrian Clark. Ten years ago he started working on a 3D augmented reality model of parts of Roman Colchester, but the technology required at the time was at the limits of what was wearable, and by no means cheap. Now that the Raspberry Pi is on the scene, he has started work again, and hopes soon to have a viable commercial model.
We also saw a presentation from Arno Knobbe, who showed us ChartEx, a piece of software that can mine medieval texts (in this case, property charters) and pull out names, places and titles. The program will then algorithmically suggest relationships between the people and places mentioned in the charters, and thus suggest where the same John Goldsmith (for example) appears in more than one charter. Jenna Ng analysed the use of modern Son et Lumière shows in historic spaces. Valerie Johnson and David Thomas explained how the National Archives are gearing up for collecting the digital records that will soon be flooding in as the “30 year rule” becomes the “20 year rule.” My supervisor, Graeme Earl, introduced a section on the history of Multi-Light imaging, in honor of English Heritage’s guide on the subject. The subsequent papers covered RTI, as well as combining free range photography with laser scanning to create accurate texture maps and very readable 3D models. One fascinating aside (for me) was that the inventor of the original technique, Tom Malzbender, originally thought its main use would be in creating more realistic textures for computer games. We also looked at: the digitisation of human skeletal remains (it makes putting them together a lot easier, apparently); the 3D modelling of the hidden city walls of Durham (though personally I’m more excited by the Durham Cathedral Lego Build which started today, first brick laid by Jonathan Foyle); and the digital recording, and multiple reconstructions, of medieval wall paintings.
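Going back to ChartEx for a moment: the record-linkage idea is easy to sketch. The toy Python below is my own illustration, nothing to do with ChartEx’s actual algorithm; the weights, threshold and sample records are all invented. It scores name, place and title similarity between two charter records and links them if the weighted score clears a threshold.

```python
# Toy record linkage: is the "John Goldsmith" in one charter the same man
# as in another? Weights and threshold are invented for illustration.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_person(rec1: dict, rec2: dict, threshold: float = 0.7) -> bool:
    """Weight name most heavily, then place, then title."""
    score = (
        0.5 * similarity(rec1["name"], rec2["name"])
        + 0.3 * similarity(rec1["place"], rec2["place"])
        + 0.2 * similarity(rec1["title"], rec2["title"])
    )
    return score >= threshold

a = {"name": "John Goldsmith", "place": "York", "title": "goldsmith"}
b = {"name": "Johannes Goldsmyth", "place": "York", "title": "aurifaber"}
c = {"name": "John Smith", "place": "Durham", "title": "mason"}

print(same_person(a, b))  # spelling variants, same place: links them -> True
print(same_person(a, c))  # different place and trade: no link -> False
```

The hard part, which this sketch ducks entirely, is exactly what makes the real research interesting: medieval spelling variants (Goldsmith/Goldsmyth), Latin versus English titles (goldsmith/aurifaber), and the fact that there may genuinely have been two John Goldsmiths in York.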
There were poster presentations too. Two that leaped out for me were Katrina Foxton’s exploration of “organic engagement” with cultural heritage on the internet, and Joao and Maria Neto’s experiments with virtual agents as historic characters.
In his post, The Simulation Dream (which I’ll forever thank Twitter for pointing me to), Tynan Sylvester sets out the Player Model Principle, which is “The whole value of a game is in the mental model of itself it projects into the player’s mind.” I’ve been thinking about that a lot this week, wrestling with the sometimes incredibly didactic way in which cultural heritage organisations can tell their stories. Occasionally we lay the story on thick, especially when we resort to a chronology of a place: “Humans first settled here in … there’s evidence of a pre-roman village … the Roman villa was destroyed … Medieval settlement … Sir Suchansuch was gifted the land in … changed hands during the civil war … current Lord Soandsew inherited when … and all of them thought it was a lovely place to live.” Ugh! Yawnsville! Why oh why do we do that?
(I always shy away from chronologies in my work, but I bet if I examined my previous projects closely, I could pull out an example of where I’ve accidentally included one.)
Where it’s especially apparent is in poorly designed guided tours. Bad tour guides will often start off their tours with a chronology (or, even worse – a family tree), in a misguided attempt to put the rest of their tour in context.
(Damn, that didn’t take long – here’s a chronology/family tree from an introductory video at Knole that I commissioned a few years back:
In my defense, it’s quite funny, and hosted by the lovely Jonathan Foyle. For the prosecution, it’s way too long at almost 15 minutes.)
Why do we (us heritage types) insist on telling you everything at the beginning of your visit? Well, I guess it’s because that’s the one time we know we’re going to reach everybody who comes in, before they’ve wandered off to look at whatever they are particularly interested in, or to the cafe, the toilet, or whatever. But why do we have to tell the story so literally?
Museums used to be accused of not telling the story at all, of being elitist organisations which could only be understood by the cognoscenti, where the objects could speak for themselves, the juxtaposition of their arrangement adding to your insight, but only if you knew enough about them already. Augustus Pitt-Rivers is credited with suggesting, in 1891, the philosophy that museums should be readable by the masses, or as he put it “I hold that the great desideratum of our day is an educational museum, in which the visitors may instruct themselves.”
Sixty or so years later the US Park Ranger Freeman Tilden gave us our common understanding of “heritage interpretation” in Interpreting Our Heritage. In that slim book he said, though we often seem to forget, that interpretation isn’t about transmitting a fact to the public as though they are empty vessels, but rather creating a “revelation” that “relate[s] what is being displayed or described to something within the personality or experience of the visitor.” But still we tend towards spoon-feeding our visitors with facts.
Instead, cultural heritage institutions should think of their interpretation in the same way that games programmers think of their computer code. As Sylvester says:
Designers create the Game Model out of computer code, while the player creates their own Player Model by observing, experimenting, and inferring during the play. In play, the Game Model is irrelevant. Players can’t perceive it directly. They can only perceive the Player Model in their minds. That’s where the stories are told. That’s where dilemmas are resolved. So the Game Model we create is just a pathway through which we create the Player Model in the player’s mind.
So museums and interpretive sites shouldn’t be just telling the story; rather, they should use interpretation as a pathway through which the story is created in the visitor’s mind.
Pinchbeck et al. illustrate this with a small experiment they conducted looking at narrative in games. Eight participants (three novices and five more experienced gamers) were invited to play a section of the game Half Life 2. This game, and the particular sections of the game involved, were chosen for their high degree of embedded narrative content:
The architecture positions the overall game world in time and space; the trees suggest a season; the huge alien citadel in the background sets up both a long term narrative intrigue and sows the seeds of a long term goal.
The public broadcast screen in the mid-ground is a good example of an active narrative device. Narrative information is actively supplied by this device; the player can, by listening to the broadcast, gain additional understanding of the situation. Whilst the device does not contribute anything to immediate goals, or short term narrative, it actively establishes the game world further.
In the foreground a humanoid agent will dynamically respond to the player, usually aggressively. Whilst contributing towards long term narrative as a collective, or type, the individual agent’s role in FPS games is ordinarily short term and micro-goal orientated, such as fulfilling a combat function. Friendly Non-Player characters such as Half Life 2’s Alyx operate dynamically and actively contribute to long and short term narrative.
The players were observed and their eye movements tracked and recorded to see where their attention was while playing. There are a lot of interesting observations, but the key conclusion that leaps out as relevant to what I’m thinking about now is this:
The lack of awareness of passive narrative objects was striking; especially in a game that is generally acknowledged to contain a strong narrative. Rather than steering play, the results may indicate that narrative has little to do with it, and is imposed post experience. In other words, when we speak of narrative influencing play, it essentially translates to the narrative being used to position action in context for memory, but not actually being part of the play experience…
… the strong narrative could make it easier to recall specific moments within play by acting as a more robust framework for retrieval, thus yielding more specific and potentially more vivid recollections of the experience…
…strong game narratives may assist the structuring and management of memories of play experience, by supporting actions with a robust, temporal and contextual framework.
It’s not too far a leap from this to surmise that the narrative, though embedded within the game, only becomes apparent to the player afterwards, having projected itself into the player’s mind.
So, do we (in cultural heritage) do too much telling, without giving the visitor space to work things out?
I note that one of the most popular searches driving traffic to this blog is “narratology vs ludology.” I must admit, I’m not entirely sure why. I’ve written only one post addressing that debate, and overall, I guess I’m taking quite a narratological point of view. This post, however, may begin to redress the balance, as this is where I begin to get all “ludological.”
When I wrote my funding proposal, I predicted that I’d struggle to find much literature on narrative in games. I haven’t found much so far. I suppose I should not be surprised: all the people who know about games are rightly making games rather than writing about how to make games.
However, a couple of days back, just as I was packing up for the evening and shutting down Tweetdeck, I glimpsed an interesting-looking item:
Great article on how most of a game’s storytelling happens in our minds: http://t.co/NYxZ6nIwrs
— Thomas Grip (@ThomasGrip) June 6, 2013
I followed the link and had a quick look at the article, which was intriguing, but I had plans for the evening. So I retweeted it in lieu of making a note and shut down my computer.
When I came back to it the next day, I read the article. The author, Tynan Sylvester, had worked on Bioshock, which was interesting because I’d recently read a paper on the use of music in that game, and had also had my niece’s boyfriend recommend it as a “must play”. The article is about simulation and emergent story, and Sylvester related how the stories in Bioshock had been intended to come out of a complex simulated ecology; however:
While BioShock retained some valuable vestiges of its simulation-heavy beginnings, the game as released was really a heavily-scripted authored story. There was no systemic ecology at all. It worked fantastically as a game – but it wasn’t a deep simulation.
Attempts to create realistic models in games are misguided, he says, because:
What we really want is not a system that is complex, but a system that is story–rich. […] Interestingly, real life and most fictional worlds are not story-rich! Most days for most people on Earth or in Middle Earth are quite mundane. It’s only very rarely that someone has to drop the Ring into Mount Doom. Follow a random hobbit in Hobbiton, and you’ll be bored soon.
He goes on to point out that whatever the model in the computer program, “The whole value of a game is in the mental model of itself it projects into the player’s mind” [his emphasis]. He calls this the Player Model Principle. He goes on to talk about apophenia, the human mind’s tendency to project human patterns and behaviors onto non-sentient objects (and in this case, computer animations). Using an example from The Sims, he shows how a story of love, jealousy and murder can be imagined out of a couple of variables interacting in computer code. He discusses how to encourage apophenia in the player, and concludes that modelling can create successful and compelling narratives as long as the designer remembers to “Choose the minimum representation that supports the kinds of stories you want to generate.” Which is to say: keep the complexity of the model as simple as you can get away with; adding complexity for the sake of realism only creates noise.
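To make that “minimum representation” idea concrete, here is a toy sketch of my own (not Sylvester’s code, and the character names are invented): the whole model is one integer of “affection” per suitor, nudged up or down at random. Any love triangle the reader sees in the output is apophenia doing its work.

```python
import random

def simulate(steps=5, seed=1):
    """A deliberately minimal Sims-style 'story generator'.

    The entire model is two integers. The drama -- love, jealousy,
    betrayal -- is projected onto it by the reader, not computed.
    """
    random.seed(seed)  # deterministic, so the example is repeatable
    affection = {"Jeff": 0, "Dan": 0}  # Claire's feelings toward each suitor
    log = []
    for _ in range(steps):
        suitor = random.choice(sorted(affection))  # pick someone to react to
        change = random.choice([-1, 1])            # mood swing
        affection[suitor] += change
        verb = "flirts with" if change > 0 else "snubs"
        log.append(f"Claire {verb} {suitor} (affection now {affection[suitor]})")
    return log

for line in simulate():
    print(line)
```

Adding a third or fourth variable (wealth, hunger, a weather system) would make the model more “realistic” but, per Sylvester’s point, would mostly add noise rather than story.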
Which is all very interesting, even if its relevance to those in my field, cultural heritage interpretation, is mostly a useful reminder not to overcomplicate things. Sylvester writes well, and explains complex ideas in very understandable ways. So I was particularly interested to see that he’s recently published a book called Designing Games: A Guide to Engineering Experiences. Could this be, I wondered, the elusive literature on designing narrative in games that I’d been looking for?
YES IT BLOODY CAN!
I downloaded a preview, and the first page set out Sylvester’s thesis in the bold title of the first part (and then the first chapter) of the book: “Engines of Experience.”
These are the droids you’re looking for.
I devoured that preview and wasn’t disappointed. I bought the full e-book (direct from the publishers). This is exactly the sort of book I envisaged finding when I wrote that funding proposal last year – not a guide to 3D modelling or programming games, but rather a games designer explaining (as he says) “the trade-offs in every design decision.”
And what gets me is that I didn’t find it in a literature search, slogging away on Google, library catalogues or trawling through endnotes. It came to me on Twitter. I don’t know Thomas Grip, who posted that original tweet. I can’t even recall why I started following him. But thank you, Thomas, for posting that link.
And what if I had turned off five minutes earlier? Or ignored that tweet in my hurry to shut down? Would I have found this brilliant, helpful book at all? I hope so, but this has been a massive shortcut. I can see why my supervisors were so keen, when I started my studies, that I should up my social game. Twitter is truly your friend.
But so is Google, and so for all those who find their way to this blog searching for ludology vs. narratology, let me quote Sylvester’s take on that debate.
This fiction-mechanics conflict is why some see a great debate between mechanics and fiction. The ludologists (from the Latin ludus, for “play”) argue that games draw their most important properties from mechanical systems and interactions. The narratologists argue that the mechanics are just a framework on which to hang the fictional elements players actually care about. This debate is the game designer’s nature versus nurture, our plot versus character, our individualism versus collectivism. But like all such debates, the conflict exists only on the surface. The pinnacle of game design is combining perfect mechanics and compelling fiction into one seamless system of meaning. Fiction and mechanics need not fight (though they easily can), and neither one need be given primacy (though one often is). Used together, they can enhance and extend each other in ways that each can not do alone.
Sylvester, T. Designing Games, O’Reilly Media, 2013-01-03. ePub.
I’ve got a suspicion you’ll be seeing a few more posts from me about this book.
The seminar I gave last week was streamed to the University of York. The software that does this also records it, so now the department have put it online and it’s available for anyone to watch. Watching it myself for the first time this morning, I was pleasantly surprised – not too many ums or ers, and it seems to actually make sense.
I can’t read for a PhD in digital technology and cultural heritage interpretation at Southampton and not visit the recently reopened Tudor House Museum, which touts some of the very latest interpretation technology. So with my daughter on an inset day from school, I thought this would be the ideal opportunity for an educational visit.
We parked at the West Quay shopping centre, and skipped across the road to Bugle Street (what a great name). Entry was very reasonably priced, even though I only got 10% off for my MA membership. We took the audio tour wands and started with the AV presentation in the banqueting hall. Lily said later that this was her favourite bit, and I can see why: the room is atmospherically lit with wobbly-wick candles and a “crackling fire” in the hearth. Then, over the spitting of the logs, we hear whispers. Somewhere, somewhere close, people are shushing each other, then talking, about us. Fairy-trails sparkle across the walls and curtains, and all of a sudden, the fire and candles are blown out. My daughter, almost twelve, was impressively spooked, especially by the rat she heard in the dark, scuttling under the bench.
And the Timeweaver introduces himself, and the spirits of the house, as friends and guides. They give us a short tape/slide presentation, a potted history of the house and its early 20th century “restoration,” being careful to point out the features in this room, for example the musicians’ gallery, which are inventions of that restoration. Then the spirits are banished, the great curtain thrown open and the room is bathed in daylight. We are invited to explore the house, audio wands in hand.
The audio guides direct us to the garden first, to see the remains of “King John’s palace” (which turns out to have nothing to do with King John) and to admire the Tudor-style garden, before heading back into the domestic service areas of the house. It’s worth pointing out that the audio guide has a jarring change of tone here. An authoritative female voice guides us round the garden, presenting us with “facts,” but when we return to the house the Timeweaver greets us again and, with the help of his spirits, reveals the house in more of a storyteller’s style. I thought my audio tour had switched to the children’s commentary, but my daughter said the authoritative woman was her guide round the gardens too.
Inside the house we also met the museum’s “state of the art technology”: GuidA Rotate units from Blackbox-av. These touch-screen panels gave us computer-generated models of the room in which they were sited, at various points in history. Their USP is that they can be rotated around the room, so you always have both the relevant bit of the real world and the computer simulation in front of you. What struck me first were the differences between the “now” model and the reality in front of me.
In every era the model also features “hot spots” which you can touch for a layer of extra interpretation. Sometimes this is directly related to the feature you are looking at, but sometimes the interpretation seemed more generic. Touch a barrel of … what? … salt? for example, and you get a photo of the facade when the house was occupied by dyers, but you still don’t know what was in the barrel.
A nice touch in the first room is a screen mounted on the wall above the GuidA Rotate, so others in the room can see what the person controlling the device is looking at. And I loved the inclusion of a model of how the room might have looked in the late twentieth century, when it was a museum education room. It means something special when education sessions become part of the historic record.
Later on there’s another GuidA Rotate in a lavish bedroom, showing how the room might have looked in the Tudor, Georgian and Victorian periods. But the same model also features on one of the lenticular panels that are another feature of the interpretation.
This clever use of an old technology amused me more than the (I’m guessing more expensive and less reliable) GuidA Rotate units.
But what amused me the most was the temporary exhibition, which did something I’ve always wanted to do – challenge the values of museum collecting, with the more personal (and more modern) collections of local people. And what I especially wanted to do happens here: a comics collection is featured!
When we passed a wall in which faint graffiti had been scratched, I tried to tell my daughter about the day I spent at Winchester using RTI photography to make clearer images of similar graffiti. She wasn’t that interested, but on the other side of the wall are interactive units that allow visitors to look at clearer images of the graffiti – I bet my university colleagues (and their string and shiny balls) have already been involved …
Talking of which, I got an email today with a link to some of the images we created on that day. So I’ll update my post to include it.
A week or two back, a colleague gave me a sample of the QR code panels that are being piloted along the South Downs Way.
I was quite excited to see it, because it turned out not to be just a QR code: it also incorporated an NFC chip and a LAYAR augmented reality image.
I’m quite dismissive of QR codes, but only because some people get over excited about what is, after all, just another way of inputting a URL into a browser. I keep telling my colleagues that a QR code is only as exciting as the website it points to.
But the addition of an NFC chip and the Augmented Reality suggested that a lot more thought had been put into this pilot than some QR codes I’ve seen.
So, I’ve been playing with it, and I’m disappointed. My phone doesn’t have NFC, so I couldn’t try that. But I could download the LAYAR app and have a go with that.
It took a few goes to get LAYAR to recognise the image, but eventually it said “Getting Content”. Then it said “Point at the page again to view LAYAR content” so I obeyed, and …
Ho-hum. So I resorted to scanning the QR code. The Scan app quickly recognised the QR, and served up … this:
Oh dear. I’d sort of expected a page formatted for mobile devices, not one I’d have to “pinch to zoom” to read. And, more importantly, I really had expected to be taken to a page that told me about the South Downs Way, not a link to a survey about using QR codes.
To be fair, the little red buttons at the top do link to various places along the South Downs Way, but I had expected each QR code to take me to information about a specific place. Click on one of the buttons and this is what you get:
So lessons learned: Format for mobile if you are providing links to web content in the countryside (duh)! Survey your users after they’ve experienced the content. And build the engaging and dynamic web content before you install the QR panels. Oh, what’s that? You did? Well, you and I must have a different understanding of “engaging and dynamic” my friend.
And I might have shared my thoughts with the developers via their handy and prominent survey, had not all the questions been variations on a) “Let me count the ways in which QR codes are splendid” or b) “I’ve never heard of QR codes”.
All in all, I think National Trails have been sold a pup.