Ambient Games, Ambient Interpretation

Last night I saw a presentation by Dr Mark Eyles. It was part of a meeting of the Hampshire Unity3D/3D Interactive Group (H3DG), a group which started up just as I was beginning my studies, so I’ve sort of fallen into it. It’s a great little get-together, about once a month at The Point in Eastleigh. Part of the evening consists of a tutorial demonstrating how easy the Unity3D engine is to use. (And it really seems easy, almost child’s play – but I speak as one who has just realized that he’s done his HypeDyn project all wrong, and will have to start again.) Last night’s, for example, showed how easy it is to use the 3D technology to make a 2D game. We also got a demonstration of the forthcoming Leap gesture controller, and how easy it is to integrate gesture controls into Unity3D games.

Having written this, I feel it sounds like some sort of corporate roadshow, selling the Unity3D engine, but no, it’s a bunch of freelance and SME Unity developers getting together to share ideas. And the proof of that is in last night’s “feature presentation”, which had nothing to do with Unity3D at all.

Mark Eyles has just got his doctorate, having spent some time thinking about Ambient Games. His starting point for this train of thought was Brian Eno’s Ambient 1: Music for Airports. Eno, he said, described Ambient music by four features:

  • Engagement – ambient music should be both ignorable, in the background, and interesting in the foreground
  • Affect – ambient music should create a mood in the listener, which in turn should affect the way they perceive the space they are in
  • Persistence – ambient music shouldn’t require being listened to as a whole piece
  • Context – ambient music should have a particular relationship to the location it’s played in

Eyles describes in this paper (which should be celebrated if only for referencing one of my favourite books, which it seems no-one except me and Mark Eyles has heard of) how he tried applying these four qualities to two experimental games: Ambient Quest (in two versions) and Ambient Quest: Pirate Moods. In the first, players wear a pedometer, and the steps they accumulate going about their business in the real world give them power to move a character in a simple computer-generated world. The second gives players an RFID Pirate Card, which accumulates pirate resources (Rum, Canvas, etc.) while the players look at an exhibition. Players can choose to ignore the game and focus on the exhibition, or play the game more actively, by choosing to stay close to the panels that give them the resources they need most.

Eyles followed up on these experiments by looking for Ambience in existing games, the most obvious examples being MMORPGs and Pervasive games – technologically augmented live-action roleplaying, such as Prosopopeia Bardo 2: Momentum, which was played in Stockholm. In his presentation he also discussed how games like Skyrim (and I guess Red Dead Redemption) are not ambient, because although you are given a world in which to roam freely, that world only “comes alive” in the bubble surrounding your character. If John Marston rides away from Armadillo, the town is frozen in stasis until he returns. But Eyles argues that games like Civilization (he was specifically looking at version 4) could well be ambient, because they are persistent – you set a city manufacturing tanks, for example, and it will carry on doing that, and growing its population, and earning gold, even if you never look at it again. Of course, as Eyles admitted, one way in which Civilization is not ambient is that it’s turn-based – walk away from the game and eventually the current turn will end and the world will stop until you come back and start the next turn. (A more persistent and thus even more ambient version of Civ is the fictional game Despot, played by the main character in Iain Banks’ novel Complicity.)

Mark explained that he’d been awarded his Doctorate “days ago,” and so I’m looking forward to reading his dissertation, which will soon be available here. Until then you’ll have to make do with my mangled remembering of his conclusions:

In an Ambient game, explains Dr Eyles, the player has the option of engaging more or less with the game, and the game world is persistent, in that actions initiated by the player (not just AI actions) carry on after the player moves their attention away. So a feature of ambient games is moving player attention around the gameplay space, manipulating player attention resources, and providing hidden gameplay that continues away from player attention. This in turn provides opportunities for the player to discover some aspects of the game, or even invent some, which is what drives player engagement.

All of this, both the presentation yesterday and reading some of Eyles’ papers after his attendance at H3DG, got me thinking about Ambient Interpretation. One might argue that interpretation is pretty ambient already – some people choose to read interpretive panels, and others choose to ignore them. The display of cultural heritage does not rely only on text panels in any case; the positioning of objects in relation to each other, or in the National Trust’s case, the creation of whole-room presentations, is a form of interpretation that visitors choose to engage with to a greater or lesser degree. Where text panels, or encapsulated room-cards at NT sites, do exist, they persist whether the visitor engages with them or not.

But there are some aspects of the Ambient model that intrigue me:

  • the idea that interpretation is ALWAYS in the background, yet a visitor can pull it (and by it I mean not a guidebook to leaf through, but the most relevant, contextual interpretation to where they are and what they are looking at) into the foreground as soon as their interest is aroused;
  • the idea that visitors might participate in the creation of interpretation, even when they have little interest in doing so. (I think what I’m getting at here, is that they are not actively contributing, but that their attention, their presence in an area even, as they pursue their own interests, informs the interpretive schema in some way. The easiest analogy might be on-line shopping, where by looking at items, I’m affecting what other shoppers may see as well as what items might be brought to my attention in future.);
  • the idea that interpretation might be persistently changing;
  • the idea that ambient interpretation is always contextual (of course) but also manipulates the visitors’ emotions; and
  • the idea of discovery, and shared discoveries.

At this stage I have no idea what all this means of course.

Eyles’ Pirate Moods game has the most obvious application in Cultural Heritage interpretation – in a museum environment, full of text panels, the addition of an RFID tag that collects data as the visitor wanders around reading the panels could at the very least track what the visitor is most interested in, and deliver deeper levels of interpretation, based on what the visitor has seen so far.
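To make the idea concrete, here is a minimal sketch (in Python, and entirely my own invention – nothing here comes from Eyles’ implementation) of that tracking-and-deepening logic: count which panel themes a visitor’s tag has registered near, and serve the deeper content for their strongest interest. All the theme names and texts are made up for illustration:

```python
from collections import Counter

def deeper_interpretation(tag_reads, deeper_content):
    """Given a visitor's RFID tag reads (one entry per panel the tag
    registered near), rank themes by dwell count and return the
    deeper-level content for the visitor's strongest interest."""
    interest = Counter(tag_reads)
    top_theme, _ = interest.most_common(1)[0]
    return deeper_content.get(top_theme, "general introduction")

# A visitor who kept returning to the "navigation" panels...
reads = ["navigation", "wildlife", "navigation", "navigation", "boats"]
content = {"navigation": "How the locks were engineered",
           "wildlife": "Kingfishers of the Wey"}
print(deeper_interpretation(reads, content))  # → How the locks were engineered
```

The point is that the visitor never has to do anything “gamey” at all – their attention, expressed as dwell time, shapes what they are offered next.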

It’s interesting to see how quickly the technology has changed. While Dr Eyles was reading for his PhD, GPS-enabled smartphones have become almost ubiquitous. One feels those early experimental games of his might have taken a very different form had smartphones been so prevalent then. It makes me scared of what opportunities might be around (or missed) by the time I finish writing my own dissertation.

Ripping text into Hypertext

I’ve spent the day engaged in a first-pass edit of a proposed guidebook text into HypeDyn. The text is the 10,000-word draft by Sue Kirkland of a guide to the River Wey and Godalming Navigations. Though this is a National Trust site, it’s not an official project; I’m doing it as a “real-world” exercise in using HypeDyn.

So far I’ve cut the text up into about seventy “nodes”, most of which are associated with actual places along the river. There are also eight that are pure “story” elements, and a few others that are about things or people. A few “transitions” have also become apparent. The text as it stood envisaged a twenty-mile walk from the Thames to Godalming – or so I thought, for most of the day. This puzzled me, as the Navigations are a favourite place for my family to walk, but we’ve never considered walking it all in one go. (Well, my wife probably has, but the rest of us are far more fair-weather.) And even if we were, I thought, why would we start at the Thames? Surely it would be more pleasant to walk downstream?

The “one way” nature of the proposed text was the reason why I’d thought it might be fun to turn it into Hypertext in the first place. If I managed no more than making it readable in two directions, that would be a useful enough thing to do in any case. So while I was editing I was thinking about the walks my family had taken, some upstream, some down, and I still couldn’t work out why the original author had chosen to start at the Thames. It only dawned on me as I neared the end – the Navigations aren’t only for walkers, obviously. Lots of pleasure-boat owners and hirers use the waterway too. Many are local, with their boats moored somewhere along the river, but most visiting craft would have come via the Thames. Doh!

So, when I start my next task, turning it into a context-based Hypertext, I won’t just have to think about walks starting at (for the sake of my sanity) the four sites with the best car parking, but also boats coming from the Thames (that should be easy, of course, because that’s how the original was written) and the two points where other waterways join the Navigations. Actually it’s one other point right now – the Wey and Arun Canal is not yet fully restored.

So at either end, there is only one direction of travel, but at the other three (or four) points, the visitor will have a choice to go up or downstream, and the language of the text will have to change to cope with the choices the visitor makes. I also want the text to tell most of the “story” elements to the visitor, even if they have the shortest, four mile, walk.
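As a rough sketch of what that direction-sensitive text might look like (a Python toy of my own, not HypeDyn’s actual mechanism, and with texts I’ve invented for illustration), each node could carry variants keyed by the visitor’s direction of travel, with a neutral fallback:

```python
def node_text(node, direction):
    """Pick the variant of a node's text matching the visitor's
    direction of travel, falling back to a neutral version."""
    return node.get(direction, node["neutral"])

# Invented example texts for one place-node along the river
papercourt_lock = {
    "neutral": "Papercourt Lock, with its distinctive weir.",
    "downstream": "Ahead, the river falls away through Papercourt Lock toward the Thames.",
    "upstream": "Behind the weir, the Navigations climb on toward Godalming.",
}
print(node_text(papercourt_lock, "upstream"))
```

A walker heading upstream and a boater coming from the Thames would then read the same node differently, without the text having to be duplicated wholesale.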

That’s all for another day though.

I took a phone call today from a friend of a friend who is possibly being offered a high-powered job with a global cultural heritage brand. We talked about that company and its competitors, and where the future might go. And for the first time I used the words “Ambient Interpretation.” I know exactly where I got the word Ambient from, but I’m not telling you, not yet. And not tomorrow, but next week.

Poetics and place

I’ve been reading about a really interesting project to create a context aware interactive experience on the island of San Servolo. This involved creating a narrative which worked not just as long as the listener is in the right place, but also only if they are there at the right time and the weather is doing the right thing, so:

a mad woman of the asylum tells her story next to the sculpture in the park, but only in the afternoons; a piece of classical music – reminder of the music therapy used for the guests of the institution – can be heard by the users facing the south side of the Venice lagoon, but only during the nights characterized by the absence of clouds.

It’s a well-realized attempt to influence some of the resonances that can create emotional immersion in location-based narratives.
This isn’t quite context-aware hypertext. In fact each segment was presented as a short video, so of course the content of the video didn’t change dynamically according to context, but the choice of which video the user was presented with was made by context-aware software.
I’m not convinced that pure video is the ideal medium for cultural heritage interpretation; after all, when you are in a place, you don’t want to be immersed in the video, you want to bathe in the atmosphere around you. This project demonstrates how a short video, triggered by location, time and weather, becomes part of the place, but I’d like to see how a similar project with perhaps audio and the occasional augmented reality element would work.
I can’t deny that weather is an important poetic element in narrative (consider “It was a dark and stormy night”). In the digital narrative I’m currently exploring, the game Red Dead Redemption, the emotional impact of some scenes – not just set pieces but moments during free-wandering play – can be enhanced by the weather, be it good or bad. I’ve not yet worked out whether the rain in the lead-up to one scene was co-incidental or scripted, but I think it was a happy accident that as my character, John Marston, walked toward a location that I, as player, knew was a trap, the rain started and Marston’s footsteps splashed, doom-laden, through the puddles. (It’s not all doom and gloom: shortly after I started playing I happened to notice this tweet from @r4isstatic: “Sunset through Hennigan’s Stead. Beautiful.” Actually his post on what makes the narrative of Red Dead Redemption so different from other shooting games is, though not weather-related, well worth a read.)
Back to San Servolo: the rules that deliver a particular piece of video don’t just take into account the place, time and weather: there’s also a rule that will block a particular video if it’s already been shown to enough people – the idea being that users are forced to use the social network to share what they’ve experienced, and to hear about what other users’ experience has been. All in all, it’s a location-aware narrative that really pushes the boundaries.
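My own back-of-an-envelope reading of how such a rule system might work (a Python sketch, not the San Servolo team’s code – the rule keys, video names and cap are all my assumptions) is a simple match over the current context, with over-exposed videos skipped:

```python
def select_video(videos, context, view_counts, cap=20):
    """Return the id of the first video whose rules all match the
    current context, skipping any already shown to `cap` users."""
    for v in videos:
        if view_counts.get(v["id"], 0) >= cap:
            continue  # blocked: users must hear about it socially instead
        if all(context.get(k) == want for k, want in v["rules"].items()):
            return v["id"]
    return None

# Invented rule set loosely echoing the two examples quoted above
videos = [
    {"id": "mad-woman", "rules": {"place": "park-sculpture", "time": "afternoon"}},
    {"id": "music-therapy", "rules": {"place": "south-lagoon", "time": "night", "sky": "clear"}},
]
ctx = {"place": "south-lagoon", "time": "night", "sky": "clear"}
print(select_video(videos, ctx, {"music-therapy": 3}))  # → music-therapy
```

The interesting design choice is that the view-count rule sits alongside the environmental ones, so scarcity becomes just another kind of context.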
One interesting point is that “while the current location value is retrieved from the user device, most of the values of the environmental context are retrieved from different web services.” I went to a seminar yesterday from Michael Charno, Web Developer/Digital Archivist from the University of York. It was about linked data and the semantic web. I’m not afraid to admit that a whole bunch of it went right over my head, but he did point out one danger of drawing data from a variety of web-based services – what happens if that service is withdrawn, or even if (as the Library of Congress did to one of Charno’s projects) the providing organisation decides to change their web address? The permanence of the web services an interactive project like this draws on becomes a vital factor in user satisfaction. Cultural heritage organisations will be looking for a product like this to last some time without needing too much IT support, and users won’t be willing to wait around while somebody re-codes a bunch of links to restore basic functionality.
But that shouldn’t detract from the imagination shown by the team putting together the San Servolo project. It’s a great attempt to explore a new poetics for place-based narrative. And it inspired me to spend too much of today hunting around for more digital poetics…
Having been sidetracked into telematics, and intrigued by some of the work of Paul Sermon (this is my favourite, but this is more relevant to what I’m looking at), and momentarily impressed by how far we have come technologically in such a short time (check out these tiny dial-up-ready QuickTime packages), I came across this online journal. I don’t know yet whether it’s going to be useful, but today it was definitely interesting.
Of course, what I should have been doing is designing my evaluation for Ghosts in the Garden, and chopping the Wey Navigations narrative into HypeDyn chunks. Neither of those has happened. It may be a long evening ahead.

HypeDyn, App Furnace and the Tudor Child

***Updated*** When I added a photo via my mobile device, I seem to have deleted half the post before publishing, so the headline won’t have made much sense. I’ve rewritten the second half of the post now. Apologies.

This morning I got my head around HypeDyn by working through the three tutorials they provide. The first tutorial is about building simple hypertext links, and it got me thinking about a text-handling language I had on my very first computer, the little-known Memotech MTX 500. I now realise this software, called Noddy, was somewhat ahead of its time. Indeed a contributor to the wiki page I link to above calls it a “forerunner” of HyperCard. I can’t say whether that’s true, but all it seems to have been missing was the point-and-click interface that we’re all so used to since the Mac. As I worked through the first tutorial on HypeDyn, I was thinking how old-fashioned this new software was. Still, it is free.

The second tutorial didn’t improve matters much. Although it did introduce Anywhere Nodes, which are linked to every other node, it didn’t seem to offer a way of sculpting them very dynamically. Thankfully the third tutorial was all about sculptural hypertext, and I learned how to make the links conditional, not just on whether a node had been visited or not, but also on whether card-independent flags (or “facts”) are true or false. I also learned how to make the text on each card more dynamic, again based on the reader’s node history or the state of definable facts.
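To check I’d really understood the logic, I find it helps to paraphrase it outside the tool. This is my own restatement in Python, not HypeDyn’s actual rule syntax: a link is available only if its conditions over the reader’s visit history and the true/false facts all hold (the condition names here are mine):

```python
def link_available(link, visited, facts):
    """A link shows only if all its conditions hold: nodes that must
    (or must not) have been visited, and facts with required values."""
    return (all(n in visited for n in link.get("requires_visited", []))
            and all(n not in visited for n in link.get("requires_unvisited", []))
            and all(facts.get(f) == v
                    for f, v in link.get("requires_facts", {}).items()))

# Hypothetical link: only shown after the intro, and once a fact is set
link = {"requires_visited": ["intro"], "requires_facts": {"has_key": True}}
print(link_available(link, visited={"intro"}, facts={"has_key": True}))  # → True
print(link_available(link, visited=set(), facts={"has_key": True}))      # → False
```

Dynamic text on a card works the same way, just with the conditions choosing between text fragments rather than showing or hiding a link.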

It’s still pretty basic. It’s text-only, for example; it can’t serve pictures or video. But it will do to play with, and I’m going to have a hack at making an existing text more interactive. The text in question is the draft text for a proposed guidebook to the River Wey and Godalming Navigations – surely the National Trust’s longest bit of countryside at 20 miles long (and only a few metres wide in most places). We’ve already noticed that as pure text, it presupposes walking in only one direction along the towpath. Obviously people can start in more than one place and choose to walk either upstream or downstream. So my first challenge will be to serve those users’ needs.

I mentioned last week that HypeDyn was designed as a tool for non-technical people, and by coincidence this afternoon, I heard about another one. I was chatting on Skype with the lovely people at Splash and Ripple about doing an evaluation of Ghosts in the Garden, which makes an eagerly anticipated return to the Holborne Museum, Bath, later this spring. They mentioned that the software that underpins the experience was made with App Furnace. The people behind App Furnace were, by coincidence squared, also in the team that put together the Riot! 1831 experience I wrote about yesterday. Like HypeDyn, it’s a creative tool for non-technical types. It’s on-line and free to use; you only pay when you publish. So it might be worth a play with. But only after I’ve got my head around HypeDyn…

And so to Piccadilly, where this evening I went to celebrate the launch of The Tudor Child, a book (and for the next three weeks, an exhibition at the Weiss Gallery) which explores the place of the child in Tudor society, through the clothes they wore.


A child's mannequin in Tudor dress


I’ve been reading about a pilot study done by (it seems) Hewlett-Packard and Bristol University. For three weeks around Easter 2004 people could book out an iPaq (remember those?) and a pair of headphones, and walk around Queen Square in Bristol, listening to a location-based audio drama (or “mediascape“). This write-up isn’t brilliant, but it is based on a good sample, and does touch upon the idea of periods of emotional immersion brought about by the experience, which the authors identify as “magic moments.”

They can’t cite much evidence for the first type they identify – being surrounded by a “sea of voices”: one respondent, for example, calls it “quite nice”, which doesn’t sound very magical to me. But a second type, which they describe as “physical and virtual collisions”, is better evidenced. What they mean is the sometimes scripted, sometimes accidental, moments of resonance when what’s going on in the audio drama echoes the physical world: for example, a seagull flies by in the real world co-incidentally as one screams in the audio drama, or a name mentioned at one particular location is visible nearby. I think this resonance of real and virtual is also at the bottom of their third type of magic moment, “synaesthetic confusion”, when, for example, the sound of a skateboard in the real world is perceived as the sound of bullets by the listener to the drama. And it also has something to do with their fourth magic moment, which is about the realization that you are in the place where history happened. All of this reminds me of the moment that I sat reading Homage to Catalonia in the Mocha cafe on Las Ramblas in Barcelona, and realised, as George Orwell described a gun and grenade fight between two cafe balconies, that I was sitting in one of the cafes I was reading about.

Without ever using the word, then, this paper makes a strong case for resonance as one trigger of emotional immersion.

Childish enthusiasm and Sculptural Hypertext

I gave my first seminar yesterday, talking about how I came to begin the PhD. It seemed to go pretty well, and was recorded for posterity, so when it appears online, I’ll post a link to it.

But before the seminar I spoke at a workshop examining Digital Narratives, and was impressed and excited by all the other speakers. One in particular got my childish enthusiasm all fired up though. David Millard, of the university’s Electronics and Computer Science department, spoke about “strange” and “sculptural hypertext.”

Hypertext is of course the basis of the World Wide Web. But the last time I did anything with it was when I was using HyperCard to prototype museum interactives for my degree. What makes hypertext “sculptural” is the idea that instead of authoring links between individual cards, all the “cards” are linked to all the others to begin with, and you cut away links to get to what you want. (This may be a massive oversimplification, or I might have got hold of the wrong end of the stick entirely – if so, forgive me, I only learned about it yesterday.)

This cutting away can be done dynamically, so for example, links might not be evident until you are in the right place geographically, or until you’ve read a particular “card”, or until the right time of day.
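As a rough sketch of that inverted logic (in Python, my own toy and not anything David showed us – all the node names and conditions are invented), every node starts as a candidate destination, and conditions *remove* nodes rather than add links:

```python
def available_nodes(all_nodes, context):
    """Sculptural hypertext: every node is reachable by default;
    a node is cut away when any of its conditions fails."""
    candidates = set(all_nodes)
    for name, node in all_nodes.items():
        for condition in node.get("only_if", []):
            if not condition(context):
                candidates.discard(name)
    return candidates

# Invented nodes: one gated by place, one by reading history, one always on
nodes = {
    "lock-gates": {"only_if": [lambda c: c["place"] == "papercourt"]},
    "boat-horse": {"only_if": [lambda c: "towpath-intro" in c["read"]]},
    "welcome": {},
}
ctx = {"place": "papercourt", "read": set()}
print(sorted(available_nodes(nodes, ctx)))  # → ['lock-gates', 'welcome']
```

Swap the place test for a GPS fix, or the reading-history test for a time-of-day check, and you get exactly the dynamic cutting-away described above.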

The more David explained, the more I thought “this sounds like the real world equivalent (actually not quite real world – I need to think about which of Pine’s “eight realms” it is) of the algorithm behind the likes of Red Dead Redemption”. Which is, I think, exactly what I’m looking for right now.

So, this morning, my childish enthusiasm got the better of me, and I’ve been on-line looking for a free Hypertext authoring tool that I might be able to get my head around to give it a go. I’ve just downloaded HypeDyn (pronounced “Hyped-in”), which seems an easy enough authoring tool to start with. It’s produced by my wife’s old alma mater (or one of them), the National University of Singapore, who say “Much of our focus is on end-user technologies for people who may not be technically inclined, but who want to use the power of computation to build and explore things,” which sounds just like me. Its very latest development version includes location tools and a way of publishing to HTML5 which can be used on any mobile phone. So it could be used to prototype a pretty sophisticated location-based narrative. I’m going to start with a more stable version though.