Why haven’t I discovered this before? Last week an email from the Guardian Cultural network pointed me to a headline reading “Tell me a story: augmented reality technology in museums”. Now augmented reality articles are two a penny, but “tell me a story” had me intrigued. So I clicked through, and got very excited reading the standfirst, which said “Storytelling is key to the museum experience, so what do you get when you add tech? Curator-led, non-linear digital tales.”
“Non-linear”?! That’s a phrase very close to my heart, so I’ve spent all morning reading about the CHESS Experience project.
(Well, I spent some of my morning thinking “Oh no, that’s what I wanted to do! How come Southampton University isn’t part of that project? Why didn’t I do my PhD at Nottingham? That’s where all the cool kids hang out, apparently.” But having wallowed in a bit of self-pity, I got back to reading.)
CHESS stands for Cultural Heritage Experiences through Socio-personal interactions and Storytelling, which sounds right up my street. And the project summary says “An approach for cultural heritage institutions (e.g. museums) would be to capitalize on the pervasive use of interactive digital content and systems in order to offer experiences that connect to their visitors’ interests, needs, dreams, familiar faces or places; in other words, to the personal narratives they carry with them and, implicitly or explicitly, build when visiting a cultural site.” This is all good stuff.
But actually the reality of the project so far doesn’t seem quite as exciting as I’d hoped. The “personalised” story in A Digital Look at Physical Museum Exhibits: Designing Personalized Stories with Handheld Augmented Reality in Museums seems rather to be just two presentations of the story, one for children (in which, for example, the eyes of the remnant head of a statue of Medusa glow scarily) and one for adults (wherein the possible shape of the whole statue fills in the gaps between the pieces). A Life of Their Own: Museum Visitor Personas Penetrating the Design Lifecycle of a Mobile Experience discusses visitors preparing for their visit by completing a short quiz on the museum’s website. When they arrive, their mobile device will offer them stories designed for a limited list of “personas.” This isn’t personalisation but rather profiling, as we discussed at The Invisible Hand. And the abstract for Controlling and Filtering Information Density with Spatial Interaction Techniques via Handheld Augmented Reality describes “displaying seamless information layers by simply moving around a Greek statue or a miniature model of an Ariane-5 space rocket.” This doesn’t seem to offer the dynamic, on-the-fly adaptive narrative I was hoping for.
But it’s good stuff nonetheless, and there’s a great-looking list of references which I want to explore. There’s also project participant Professor Steve Benford (who does little to disprove the theory that all the cool kids go to Nottingham). He’s a banjo-pickin’, guitar-playin’ musician and Professor of Collaborative Computing, who among many, many other things has published a bunch of papers on pervasive games and performance, which I think my Conspiracy 600 colleagues might want to (need to) read.
Steve also provides the soundtrack for this post, which I hope you enjoy.