What PhD supervisors are for

I had a great chat with my supervisor on Thursday, after helping out with a Masters seminar. As regular readers may have worked out, I’ve been having a great deal of trouble trying to distil a coherent, testable design out of my half-formed ideas and lofty ideals.

The problem was finding a cheap way to test some of the theory I’ve come up with. I’d got hung up on devising a way to track visitors round a site and test their reactions. Until I solved that, I was handwaving the issues of breaking the story into natoms and balancing the conflicting needs of multiple visitors in the same space – both problems that felt more within my comfort zone. The trouble is that I’m not a technologist; the tracking is so far out of my comfort zone that I’d need to enlist (or pay for) one. On top of that, the tech itself isn’t that cheap – getting a wifi network into some of the heritage places I know, with their thick stone walls and sheer scale, isn’t a matter of buying just one wifi router.

I’d mentioned the other problems (particularly the one of negotiating conflicting needs) in the seminar. (The students had been reading about a variety of museum interpretation experiments for their “homework”, and we discussed the common issue that many of the experiments focused on a visitor in isolation, and hadn’t thought enough about multiple users in the same space.) Afterwards I spent twenty minutes with Graeme, my supervisor, in his office. I felt he’d finally got what I’d been trying to say about a “responsive” environment, and his interest was particularly focused on the two issues I’d handwaved. We talked about low-tech ways of exploring both of those, and of course THAT’S what I should be doing, not worrying about the tech. These are both things I can do (I think!) rather than something I can’t.

So by the end of our chat, when Graeme had to return to his students, we’d worked out the rudiments of a simple experiment.

  • What I need is a relatively small heritage site, but one with the possibility of lots of choices about routes, and lots of intersections between spaces. What Hillier calls a low-depth configuration (that last link is to a fancy new on-line edition of the book, by the way. It’s worth a read).
  • I need to work with the experts/curators of that site to “break” the stories. Break is a script-writing term, but it feels particularly appropriate when thinking about cutting the stories up into the smallest possible narrative atoms. (Although maybe “natomise” is better!)
  • Then I need to set up the site to simulate some of the responsiveness that a more complex system might offer. Concealed Bluetooth speakers, for example, or switches like these that can be controlled by Bluetooth.
  • Finally, rather than try and create the digital system that tracks visitors and serves them ephemeral natoms, I can do a limited experiment with two or more humans following visitors around and remotely throwing the switches that might light particular areas of the room, play sounds, or trigger whatever other interventions we can come up with. The humans take the place of the server and, when they come together, negotiate which of their visitors gets priority. Graeme suggested a system of tokens that the human followers could show each other – but the beauty of this concept is that the methods of negotiating could become part of the results of the experiment! The key thing is to explain to the participants that the person following them around isn’t giving them a guided tour: they can ask questions of him/her, but s/he isn’t going to lead their experience.

So now I have a thing that it is possible to do, with minimal help and a minimal budget. And it’s a thing that I can clearly see has aims that come out of the research I’ve done, and results that will inform the platonic ideal responsive environment I have in my head. If it works, it will hopefully inspire someone else to think about automating it.

That’s what supervisors are for!

 

There’s a History Mystery in Norwich

Image from historymysterygame.com. Click the image to visit the site.

I had a great chat earlier today with old colleague and friend Richard, part of Corvidae, who is also involved in a new venture in Norwich. History Mystery is a real-life room escape game, in which between two and six players have just an hour to explore the room, discover clues, solve puzzles and find the solution to escape. (Don’t worry, they do get let out if they can’t escape within the hour – after all, another party will be waiting their turn.)

“Escape the room” games are a sub-genre of computer adventure games. Very often the first challenge in the old text-based adventure games was to get out of the room where the adventure starts. With the development of point-and-click graphic adventures (like my old favourite, the Monkey Island series), and especially the creation of what we now call Adobe Flash, the idea of making a mini-adventure that was ALL about getting out of a richly detailed room took flight.

It didn’t take long for people to cotton on to the idea that, while the fantastic, huge worlds of most adventure games couldn’t be re-created in real life, re-creating a single room was a possibility, and could even make a pretty good business case.

So in the last few years, a nascent Escape Room industry has grown rapidly, from Japan initially, to locations all over the world. You can escape from Magic Shows, Baseball Parks, Time Travel labs, Bank Heists, Prohibition speakeasies, and even, in London, “Lady Chastity’s Reserve” (didn’t ask, didn’t look).

But all these are made-up stories about imaginary places. I’m sure the set design looks lovely, but players might as well be playing in an industrial estate.

What History Mystery brings to the table are real, exciting places, with the patina of real history*, and stories researched from and interpreting the history of each place. The first game on offer involves rescuing a Norwich archivist, trapped in his own vault. History Mystery launched at the end of January in Norwich, and the company hopes to expand across the country as they strike deals with suitable historic locations. I haven’t had a chance to play in a game yet, but it looks fantastic. I hope to take a trip up there as soon as I can. Have any of my East Anglian chums heard about it, or given it a go?

I wish these guys every success – I’d love to see real history turned into adventure games all over the country.

*The blurb for a forthcoming game warns: “This game takes place in real gaol cells that held real prisoners who left behind graffiti using explicit and violent language that is not for the easily offended.”

Cultural Agents

I’ve been reading Eric Champion’s Critical Gaming: Interactive History and Virtual Heritage. Eric asked his publishers to send me a review copy, but none was forthcoming, and I can’t wait for the library to get hold of a copy – I want to quote it in a paper I’m proposing – so I splashed out on the Kindle edition. I think of it as a late birthday present to myself, and I’m not disappointed.

One thing that has struck me so far is a little thing (it’s a word Champion uses only three times), but it seems so useful I’m surprised it isn’t used more widely, especially in the heritage interpretation context. That word is “multimodality”. As Wikipedia says (today at least), “Multimodality describes communication practices in terms of the textual, aural, linguistic, spatial, and visual resources – or modes – used to compose messages.” But it’s not just about multimedia: “mode” involves the social and cultural making of meaning as well. Champion says:

Multimodality can help to provide multiple narratives and different types of evidence. Narrative fragments can be threaded and buried through an environment, coaxing people to explore, reflect and integrate their personal exploration into what they have uncovered.

Which is surely what all curated cultural heritage spaces are trying to achieve, isn’t it? (Some with more success than others, I’ll admit.) Champion is referring to the multimodality of games and virtual environments, but it strikes me that museums and heritage sites are inherently multi-modal.

It sent me off looking for specific references to multimodality in museums and heritage sites, and indeed I found a few – this working paper, for example, and this blog – but there are not many.

But I digress. I’ve started Eric’s book with Chapter 8 (all the best readers start in the middle), Intelligent Agents, Drama and Cinematic Narrative, in which he examines various pre-digital theories of drama (Aristotle’s Poetics, Propp’s formalism (with a nod in the direction of Bartle and Yee) and Campbell’s monomyth), before crunching the gears to explore decidedly-digital intelligent agents as dramatic characters. Along the way, he touches upon “storyspaces” – the virtual worlds of games, which are by necessity incomplete yet create an illusion of completeness.

His argument is that there is a need for what he calls “Cultural Agents”: agents representing, recognising, adding to, or transmitting cultural behaviours. Such agents would be programmed to demonstrate the “correct cultural behaviors given specific event or situations” and to recognise correct (and incorrect!) cultural behaviours. For example, I’m imagining here characters in an Elizabethan game that greet you or other agents in the game with a bow of the correct depth for each other’s relative ranks, and admonish you if (in a virtual reality sim) you don’t bow low enough when the Queen walks by.

Which leads on to what he calls the “Cultural Turing Test […] in order to satisfy the NPCs [non-player characters] that the players is a ‘local’, the player has to satisfy questions and perform like the actual local characters (the scripted NPCs). Hence, the player has to observe and mimic these artificial agents for fear of being discovered.” (As he points out, this is in fact a reversal of the Turing test.)

Then he shifts gear again to look at machinima (the creation of short films using game engines, which I learned about back in Rochester) as a method for users to reflect on their experience in-game, and edit it into an interpretation of the culture the game was designed to explore. It’s a worthy suggestion, and could be excellent practice in formal learning, but I fear it undermines the game-play itself if it becomes a requirement of the player to edit their virtual experiences before comprehending them as a coherent narrative.

All in all, though, I can already see that the book will be an enjoyable and rewarding read.

 

Roxanne, you don’t have to pull out your Bluetooth phone

[Yes, this post may seem familiar to long-time readers. I’ve edited it and reposted it because, a) it’s a good post, and deserves to be read; and, b) I’m submitting this version for publication. Forgive my hubris.]

An augmented reality app available for download before visiting the Royal Museums, Greenwich (photo, Matthew Tyler-Jones)

Let’s cut to the chase. There are a LOT of companies out there selling (or trying to sell) smartphone-based apps for visitors on site. The allure of mobile apps is difficult to deny. The museum/heritage site doesn’t have to lease expensive proprietary technology, dedicate space to storing and charging the same, or have infrastructure/staffing in place to hand out and collect these expensive bits of tech. Not only that, if everybody is bringing a screen with them, the museum can save money on screens around the galleries to display video. Those little screens can be used to augment reality. They can adapt to offer everything from simple kids’ trails to in-depth information. Audio can be piped directly to the visitor’s ears, without speakers and ambient music adding noise pollution to the list of things that irritate other visitors.

And surely most enticing of all, the museum can make use of archives, oral histories and content that there simply wouldn’t be space for in the physical realm. Without having to spend on the hardware, cultural heritage sites can invest in putting their hidden stories, collections and archives into users’ hands, creating compelling content.

But is mobile content that compelling?

I’m not denying that some people want to use a smartphone (or Google Glass) to enable a better understanding of a place. But I am saying the majority of visitors really don’t want to use a smartphone or any other mobile device when they are on site. And why would they? They have travelled to, and are immersed in, one of the most significant/beautiful/interesting places they know. Why would they want to look at any part of it through a four (or five, or six, or nine) inch screen?

It turns out that smartphones (and tablets, but from now on, just read “mobile devices” when I write “smartphones”, or even “phones”) are not seen by their users as a cheap and personal way to interact with the space they are in. Look around you, wherever you are reading this. If you are on a train or bus, you’ll see people passing time reading, watching or playing with their phones. If the conversation is flagging in a social situation, it may be that people have their phones out and are checking Twitter or Facebook. They are using their phones to transport themselves away from the place they are in.

From the moment Alexander Graham Bell said “Mr. Watson, come here, I want to see you” phones have always been a method of teleportation – into the next room in Bell’s case, but nowadays back to our homes or places of work, closer to absent friends, around the globe, and even into virtual worlds. Even the act of taking a photograph (which some might argue is an interaction with your surroundings) is an act of transportation, whether it’s to your friends’ sides as you Tweet the image, or back to your home where you are already in the future, remembering this scene.

There’s nothing wrong with using your phone to remove yourself from a space, of course. This isn’t a rant against mobile devices. I have no problem with people using their phones at concerts (which seems to fill some others with irrational hatred), or at cultural heritage sites, if they want to take a photograph or remove themselves to the great reference library that is the internet, or to tell a friend what a great time they are having. But let’s make no bones about it: when a visitor to a site uses a phone, even if it’s to hear Stephen Fry (or some equally capable voice talent) tell them a story about the place, they are removing themselves from their surroundings*.

And most people don’t want that. They have come to this place (they may even have used their phones to help transport them to this place – with on-line bookings or GPS route-finding) to be in the place.

So why do we offer them an app on a device that transports them away? Because of the interactivity? The ability to choose what you want to read about, listen to, or watch? Even the most passive visitor interacts with a place simply by choosing how to wander around it. Our visitors are making choices all the time. Their day is full of choices. Very, very rarely do we ever get feedback from a visitor along the lines of “I really wanted to make more decisions.”

The interactivity is inherent in the cultural heritage visit. Museums shouldn’t need to spend money on technology to make the visit more interactive; what they need to work on is making the place more responsive.

So when the phone user does want to take his phone out to look something up, a responsive site makes it easy for him (or her) to connect to the internet, to find the information s/he needs (however unpredictable his/her needs may be) and to download it. Custom apps for smartphones are sold to heritage sites for tens of thousands of pounds. It would surely cost a lot less simply to make sure there’s a pervasive wifi signal and a pointer to the place’s website and/or on-line catalogue.

Once that’s in place, then we can build something that works with visitors’ phones to enable the site to be even more responsive, while keeping the visitors firmly immersed in the place, and their phones in their pockets:

A phone regularly sends out a little signal that says “I’m this phone and I’m here.” Recent developments in Bluetooth LE only add granularity to that message. It only takes the visitor’s consent and the site’s IT infrastructure to turn the signal into “I’m this visitor, and this is where I’ve been.” And that information enables the site to be far more responsive and relevant, to understand the visitor’s interests, to make connections with what they’ve already seen, to tell better stories.

To better connect the visitor with the place.

Which is what we’re all here for, isn’t it?

*There’s some strength in the argument that an audio tour is better at not getting between the visitor and what they are looking at – if only because our ears are behind our eyes, so with headphones on it always sounds like Stephen Fry (or whoever the presenter might be) is standing just behind your shoulder.

Put your phones away

This video came out a couple of years ago. It’s wordless, but it says a lot.

Of course, it’s nothing we haven’t heard before: people spend a lot of time in social situations looking at their smartphones. But they don’t really want to.

Let’s cut to the chase. There are a LOT of companies out there selling (or trying to sell) smartphone-based apps for visitors on site. But none of them are worth it.

I’m not denying that some people want to use a smartphone (or Google Glass) to enable a better understanding of a place. But I am saying the majority of visitors really don’t want to use a smartphone or any other mobile device when they are on site. And why would they? They have travelled to, and are immersed in, one of the most significant/beautiful/interesting places they know. Why would they want to look at any part of it through a four (or five, or six, or nine) inch screen?

Smartphones (and tablets, but from now on, just read “mobile devices” when I write “smartphones”, or even “phones”) are seen by all those app companies as a cheap and personal way for people to interact with the space they are in. But they are not. Look at the behaviours of the phone users in the video: they are not using their phones to interact with their surroundings. They are using them to transport themselves away from the place they are in.

From the moment Alexander Graham Bell said “Mr. Watson, come here, I want to see you”, phones have always been a method of transportation – into the next room in Bell’s case, but nowadays back to our homes or places of work, closer to absent friends, around the globe, and even into virtual worlds. Even the act of taking a photograph (which some might argue is an interaction with your surroundings) is an act of transportation, whether it’s to your friends’ sides as you Tweet the image, or back to your home where you are already in the future, remembering this scene.

There’s nothing wrong with using your phone to remove yourself from a space, of course. This isn’t a rant against mobile devices. I have no problem with people using their phones at concerts (which seems to fill some others with irrational hatred), or at cultural heritage sites, if they want to take a photograph or remove themselves to the great reference library that is the internet, or to tell a friend what a great time they are having. But let’s make no bones about it: when a visitor to a site uses a phone, even if it’s to hear Stephen Fry (or some equally capable voice talent) tell them a story about the place, they are removing themselves from their surroundings*.

And most people don’t want that. They have come to this place (they may even have used their phones to help transport them to this place – with on-line bookings or GPS route-finding) to be in the place.

So why do we offer them an app on a device that transports them away? Because of the interactivity? The ability to choose what you want to read about, listen to, or watch? Even the most passive visitor interacts with a place simply by choosing how to wander around it. Our visitors are making choices all the time. Their day is full of choices. Very, very rarely do we ever get feedback from a visitor along the lines of “I really wanted to make more decisions.”

The interactivity is inherent in the cultural heritage visit. Sites don’t want to waste money on technology to make the visit more interactive; what they need to work on is making the place more responsive.

So when the phone user does want to take his phone out to look something up, a responsive site makes it easy for him (or her) to connect to the internet, to find the information s/he needs (however unpredictable his/her needs may be) and to download it. Custom apps for smartphones are sold to heritage sites for tens of thousands of pounds. It would surely cost a lot less simply to make sure there’s a pervasive wifi signal and a pointer to the place’s website and/or on-line catalogue.

Once that’s in place, then we can build something that works with visitors’ phones to enable the site to be even more responsive, while keeping the visitors firmly immersed in the place, and their phones in their pockets:

A phone regularly sends out a little signal that says “I’m this phone and I’m here.” Recent developments in Bluetooth LE only add granularity to that message. It only takes the visitor’s consent and the site’s IT infrastructure to turn the signal into “I’m this visitor, and this is where I’ve been.” And that information enables the site to be far more responsive and relevant, to understand the visitor’s interests, to make connections with what they’ve already seen, to tell better stories.

To better connect the visitor with the place.

Which is what we’re all here for, isn’t it?

*There’s some strength in the argument that an audio tour is better at not getting between the visitor and what they are looking at – if only because our ears are behind our eyes, so with headphones on it always sounds like Stephen Fry (or whoever the presenter might be) is standing just behind your shoulder.

The Van Dyke Vanishments

My son helps turn two dimensions into three. Photo: Richard Lakos, by kind permission of The Milo Wladek Co.

Last weekend I went to Games Expo East Kent, or GEEK as it’s more commonly known, in “London’s Famous Margate”. What drew me there was The Van Dyke Vanishments. Billed as an immersive experience through “art, theatre and gaming,” how could I not go? With limited availability, we snapped up the last tickets for Saturday and drove across to Margate after lunch. At the Turner Contemporary, we had just enough time to scout round the Self exhibition, gathering clues for the password that we’d need on our adventure, have a cup of tea while we tried to solve the anagram (the answer was sunflower, but I liked slower fun), then head off to the storefront of Endless Horizons Ltd, the art tourism company.

To be honest, they were a bit unprepared. Their TranspARTation machine was still at an experimental stage, so my son and I, and another family, had to sign extensive waivers before we were allowed into the lab. Which was empty. So we waited, but didn’t have that long to admire the photos of the frequent employee of the month winner before something came lumbering up the stairs…

Helmeted, with a mirrored visor and breathing apparatus, the humanoid creature moved strangely about the lab as it … made a cup of tea. “It” had to take the helmet off to drink the tea, of course, and we saw it was a young woman who introduced herself as Smith and, after reciting the terms and conditions, led us down into the basement, and the machine…

Which wasn’t working. Of course. So we had to remind Smith of the password, witness the machine have an existential crisis and shut itself down, rewire it (using the handy artist/colour code we all learned at school – Klimt = yellow, apparently), thump it, and literally deface two valuable self-portraits (this one, and this one) before we got it working. Then the third painting took us (through quantum mechanics and a brightly painted tunnel) into the very mind of Anthony Van Dyke.

He was somewhat surprised to find us there.

Smith had the brilliant idea of getting the old master to restore the damaged portraits (which we’d had the presence of mind to bring with us). Of course he was disgusted by them – the scrawlings of children, he said. So the answer was no. But Smith persisted, and suggested that, now the TranspARTation machine was working, she could open another quantum warp into the mind of Henri Gaudier-Brzeska, and we could all, Van Dyke included, explore the thinking behind his art. Tony (I don’t think he liked me calling him that) was intrigued enough to agree, and so we found ourselves in a cubist hell.

Van Dyke didn’t like it at all, but we found flat panels among the geometric shapes on the wall, and points to thread strings from, and together we built a 3D fish out of 2D shapes, giving Van D (and ourselves) a quick lesson in cubism. Then we were off through the Quantum Wormhole into the very white mind of Patrick Heron. There we constructed a deconstructed picture of St Ives, and in doing so, freed Van Dyke “from the tyranny of reality.”

Thus educated in the modern movement, he agreed to restore the paintings we’d defaced. All was well. Until we got Van Dyke’s own portrait back out of the TranspARTation machine, to find he’d become a Modernist a few hundred years too early…

Overall it was a great experience. My son enjoyed it, and the other family I was with got right into character as we helped Smith smooth over her mishaps. I felt I learned something too, which, given I’ve already had four years of art history under my belt, suggests they managed not to dumb down the learning while making it accessible. I could get picky about the details of Van Dyke’s clothes, and part of me was a bit disappointed in the “game” element of the experience – apart from solving a few puzzles, the ludic element ran “on rails” and was more of an immersive theatre experience. But there was a board-game version on offer, which sadly we didn’t get time to have a go with when we spent the next day at GEEK. There was a digital game version too, which I wasn’t even aware of until after the event. I’ve found a beta version of it on-line, if you’d like to give it a go. It seems to use the same script, but of course the performances aren’t quite as good 🙂

I follow Van Dyke into a wormhole. Photo: Richard Lakos, by kind permission of The Milo Wladek Co.

The talk I gave for York Heritage Research Seminars #YOHRS

I had a great time in York on Tuesday evening. It was a lovely audience, with plenty of comments and questions afterwards. And it was international, with people watching from the States (and maybe elsewhere) via Google Hangouts. And then afterwards on to the pub, where the conversation continued with the likes of Nigel Walter, Don Henson (member of the National Trust’s learning panel) and gamingarcheo herself, Tara Copplestone, over delicious pints of Thwaites Nutty Black. (The bit in the pub wasn’t livestreamed.)

The advantage of being on Google Hangouts is that all my stumbles and stutters, leafing through notes, umms and errs, and slideshow reversals are recorded forever on YouTube’s massive server farms. If you missed it, you can enjoy it now:

The sound is out for the first minute, but fear not, it’s not delivered entirely in the medium of mime. This is (approximately) what I said between Sara’s introduction and when the sound kicks in:

I’m going to keep this story simple, and tell it in three parts – the beginning, the middle and the end. In the beginning, I’m going to explain why heritage professionals should be interested in digital computer games. In the second part, I’m going to explain why they shouldn’t. And finally I’m going to explore the state of madness to which this dichotomy has driven me.

Changing direction?

I’ve been doing a lot of thinking around my participation in the Portus MOOC a few weeks back. This post is an attempt to get my thoughts in order, so I apologise in advance for any disjointedness.

First of all, let me edit in some thoughts on locative gaming, prompted by a Guardian article on social gaming I read today while I should have been bashing this post into shape. Describing Destiny, the new game from Bungie (which is of course a console game, not a location-based one), its author, Leigh Alexander, says:

On a practical level, though, “social” is a business model. It means content engineered to be “liked” or shared. It means fundamentally we spend anxious time doing free labour for social infrastructures, providing our personal lives, disseminating links, making those platform-holders wealthy with our exhibitionism and interaction. When it comes to games, it’s increasingly on the player to create the meaning in their experience.

And passionate players provide unpaid labor to games development, too: games are being released in beta and updated in public, so that the end product will better meet their needs. Thus the eager front-line beta testers mitigate the expensive risk of developing a commercial tech product, just through the fuel of their social behavior.

[…]

It is social in that business sense: you must collaborate with and keep up with your friends, ensure that your statistics and equipment – your fitness for competition – are ever increasing. You participate excitedly in this capitalistic metaphor.

Having “played” Ingress for a couple of weeks now, I’m beginning to feel the same frustrations as the author. I simply don’t have the time and dedication to labour on behalf of Google and for the benefit of my fellow players. I know that what I ought to do to have an enjoyable experience is recruit friends and family into the game so that we can play as a team, or build relationships with other players to do the same. But it’s too much bother. It’s not for my generation, I’ve concluded.

In the Guardian, Leigh Alexander concludes:

I believe in the potential for games to create incredible collaborative environments for play. But let’s think about what a “social” play experience would look like if it served us, the users, and not the platform, whose only real desire is to have us use it, to have us serve and propagate it, to lend hours of our time to its cold lunar ecosystem.

What would a locative game look like that served the users rather than the platform? We’ll have to wait and see.

Right now though, I’ve been thinking about how the Portus MOOC might better serve its users. I’ve been looking at all the comments that were posted by students on the MOOC, and though I’ve not yet done any proper text analysis, my impression is that the Portus team received great praise from participants, but there are two apparent challenges for web-based learning:

  • Spatial and contextual awareness. Comments from participants consistently highlight the difficulty of understanding the spaces involved, their relationship to each other, and their scale. Efforts to understand the spaces were further undermined by the struggle to understand the context, as the topography and use of space changed during the 500-year period of occupation. Copious maps, plans, 360°/spherical panoramas and references to Google Earth and Bing Maps failed adequately to mitigate this challenge.
  • A preference for didactic learning over investigation. Though many participants relished the more autodidactic optional activities, a considerable number expressed discomfort when faced with interpretation tasks where users generated their own content. Peer review was especially daunting.

My supervisor, Graeme Earl, already addressed the first point in his post on the Portus MOOC blog. Therein he says:

Some of you have already used ingenious methods, such as pacing out the size of a canal on your driveway or finding household objects similar to those we find at Portus. This is fabulous and please keep sharing these ideas – it is really helpful for us and for other learners.

But what do we do if we want to immerse you in the site as it is today, and as it was in the past? I would like you to imagine the buildings towering above you, to feel as though you are walking the streets and avenues in the footsteps of the Roman sailors, warehouse workers, slaves and traders that walked there two thousand years ago. You did this in textual form fantastically already in the First Century discussion in week one and in the Summary of the Week in week five, and it would be great if you continued to produce image or audio versions and share them on the Flickr group pool.

We’ve been thinking about how, for the next run of the Portus MOOC, we might lift our model of Portus off the page, take it out of the tiny window of the average computer monitor.

Imagine this.

Armed with a smartphone (loaded with a simple app that we create), one of our MOOC participants takes a walk, wherever they live, and finds a piece of ground of a reasonable size: a park perhaps, or a school playing field, or even a parking lot. As long as it’s reasonably clear of obstructions, it should be fine. They walk around the field, pacing out as large a rectangle as they can, using the smartphone’s GPS function to define and log each of the four corners. The app (or maybe it’s an HTML5 webapp, so they (and we) don’t have to worry about app stores) tells them how the area they’ve measured out compares to the area of the Portus site.

Then (and here is the clever bit) the app scales everything we know about the real Portus to the area they’ve described. Using the app, and maybe some physical markers of their own, they can locate the intersections of the streets, and the locations and sizes of the buildings that we’ve excavated. The app would allow them to map the changes that took place over time too, so that they could plan out the site as it was in different periods. Then they can walk those streets, and the app can help them visualise the buildings they are walking past, and how goods (and people) moved from one space to another on their journeys in and out of the Port.

When I say visualise, I bet you are thinking they hold their phone up and, looking through the screen, see 3D models that we’ve made of the buildings in AR. I guess it’s a possibility, but we’re beginning to push at the limits of the technology here. Smartphone GPS has been getting better, but most phones are likely to deliver something accurate only to between three and nine metres, and what with level changes both on the site where they are pacing and at Portus, I fear that an AR presentation might end up with so many visual glitches that it becomes off-putting rather than insightful and inspiring.

So actually I’m thinking there is a better learning outcome in making them do it all in their heads. I like the idea of learning that visualisation starts in the imagination, not at the 3D modelling interface. Grant Morrison, who wrote the challenging comic The Invisibles, coined the term Fictionsuit to describe a method by which an author interacts with the characters in his (or her) diegesis by becoming a character in the diegesis. In a way, this is what the MOOC asked students to do in the “First Century discussion” that Graeme referred to in his post. Some participants (according to the comments) were more comfortable than others with this exercise, but I’m convinced it’s a vital tool for interpreting archaeological evidence and for exploring a world that can, in a very literal sense, only be a creation of our collective imaginations.

In another way, game avatars are fictionsuits too, whether they are created by the authors of the game (like John Marston in Red Dead Redemption), customisable creations of the player (as in Skyrim), or held entirely within the imagination of the player (as in Dear Esther).

But there’s a dichotomy between the exercise of imagination, and the “truth” of an academic paper or computer model. And the evidence of the comments betrays, among participants on the MOOC, a preference for passive acceptance of an expert’s model over willingness to imagine a model of their own.

So I’m thinking about how we might use game mechanics to:

  • Immerse participants in the geo-spatial relationships of different parts of the site, exploring it by moving from place to place on a map (or even a scale recreation of that map in a real-world space) to access different content.
  • Encourage the creation of fictionsuits to explore the possibilities of how the site might have worked
  • Share (and even evaluate) interpretations of the site and the evidence

That last, the sharing and evaluation of interpretation, is a particular challenge, the game-mechanic solution to which might lie in some work I looked at yesterday. Without wanting to reveal too much about the project of a colleague I only just met, I was introduced to a team which is working on using game mechanics to create Linked Data for an enormous corpus, and also to evaluate learning. It strikes me that this methodology could be incredibly useful for MOOCs. Yes, it uses game-play to source unpaid labour, just like the social games that Leigh Alexander was berating in today’s Guardian, but it does offer the intrinsic reward of actual, real learning.

I’m still trying to synthesize all this into a coherent project, but I do think I’m getting somewhere.

Please do comment if this all feels like nonsense though.

Proximity!


My Gimbal beacons arrived yesterday. These are three tiny Bluetooth LE devices, not much bigger than the watch battery that powers them. They do very little more than send out a little radio signal that says “I’m me!” twice a second.

There are three very different ways of using them that I can immediately think of:

I’ve just tried leaving one in each of three different rooms, then walking around the house with the simple Gimbal manager app on my iPhone. It seems their range is about three metres, and the walls of my house cause some obstruction. So with careful placing, they could tell my phone, very simply, which room it is in. And it could then serve me media, like a simple audio tour.

Alternatively, as they are designed like key-fobs, they could be carried around by the user, and interpretive devices in a heritage space could identify each user as they approach, and serve tailored media to that user. Straight away I’m thinking that a user might, for example, be assigned a character visiting, say, a house party at Polesden Lacey, and the house could react to the user as though they were that character. Or perhaps the user could identify their particular interests when they start their visit. If they said, for example, “I’m particularly interested in art”, then they could walk around a house like Polesden Lacey, and when they pick up a tablet kiosk in one of the rooms, it would serve them details of the art first. Such an application wouldn’t hide the non-art content of course, it would just make it a lower priority so that the art appears at the top of the page. Or, more cleverly, the devices around the space could communicate with each other, sharing details of the user’s movements and adapting their offer according to presumed interest. So for example, device A might send a signal saying “User 1x413d just spent a long time standing close to me, so we might presume they are interested in my Chinese porcelain.” Device B might then think to itself (forgive my anthropomorphism) “I shall make the story of the owner’s travels to China the headline of what I serve User 1x413d.”

But the third option, and the one I want to experiment with, is this. I distributed my three Gimbals around the perimeter of a single room. Then, standing by different objects of interest in the room, I read off the signal strength I was getting from each beacon. It looks like I should be able to triangulate the signal strengths to map the location of my device within the room to within about a metre, which I think is good enough to identify which object of interest I’m looking at.

What I want to do is create a “simple” proof-of-concept program that uses the proximity of the three beacons to serve me two narratives: one about the objects I might be looking at, and a second, more linear narrative which adapts to the objects I’m near, and which ones I’ve already seen.

I’ve got the tech; now “all” I need to do is learn to code!

Unless anybody wants to help me…?

The CHESS Experience

Why haven’t I discovered this before? Last week an email from the Guardian Cultural network pointed me to a headline reading “Tell me a story: augmented reality technology in museums“. Now augmented reality articles are two a penny, but “tell me a story” had me intrigued. So I clicked through, and got very excited reading the stand-first, which said “Storytelling is key to the museum experience, so what do you get when you add tech? Curator-led, non-linear digital tales.”

“Non-linear”?! That’s a phrase very close to my heart, so I’ve spent all morning reading about the CHESS Experience project.

(Well, I spent some of my morning thinking “Oh no, that’s what I wanted to do! How come Southampton University isn’t part of that project? Why didn’t I do my PhD at Nottingham? That’s where all the cool kids hang out, apparently.” But having wallowed in a bit of self-pity, I got back to reading.)

CHESS stands for Cultural Heritage Experiences through Socio-personal interactions and Storytelling, which sounds right up my street. And the project summary says “An approach for cultural heritage institutions (e.g. museums) would be to capitalize on the pervasive use of interactive digital content and systems in order to offer experiences that connect to their visitors’ interests, needs, dreams, familiar faces or places; in other words, to the personal narratives they carry with them and, implicitly or explicitly, build when visiting a cultural site.” This is all good stuff.

But actually the reality of the project so far doesn’t seem quite as exciting as I’d hoped. The “personalised” story in A Digital Look at Physical Museum Exhibits: Designing Personalized Stories with Handheld Augmented Reality in Museums seems rather to be just two presentations of the story, one for children (in which, for example, the eyes of the remnant head of a statue of Medusa glow scarily) and one for adults (wherein the possible shape of the whole statue fills in the gaps between the pieces). A Life of Their Own: Museum Visitor Personas Penetrating the Design Lifecycle of a Mobile Experience discusses visitors preparing for their visit by completing a short quiz on the museum’s website. When they arrive, their mobile device will offer them stories designed for a limited list of “personas”. This isn’t personalisation but rather profiling, as we discussed at The Invisible Hand. And the abstract for Controlling and Filtering Information Density with Spatial Interaction Techniques via Handheld Augmented Reality describes “displaying seamless information layers by simply moving around a Greek statue or a miniature model of an Ariane-5 space rocket.” This doesn’t seem to be offering the dynamic, on-the-fly adaptive narrative I was hoping for.

But it’s good stuff nonetheless, and there’s a great-looking list of references which I want to explore. There’s also project participant Professor Steve Benford (who does little to disprove the theory that all the cool kids go to Nottingham). He’s a banjo-pickin’, guitar-playin’ musician and Professor of Collaborative Computing, who among many, many other things has published a bunch of papers on pervasive games and performance, which I think my Conspiracy 600 colleagues might want to (need to) read.

Steve also provides the soundtrack for this post, which I hope you enjoy.