Poetics and place

I’ve been reading about a really interesting project to create a context-aware interactive experience on the island of San Servolo. This involved creating a narrative that works not just when the listener is in the right place, but only if they are also there at the right time and the weather is doing the right thing, so:

a mad woman of the asylum tells her story next to the sculpture in the park, but only in the afternoons; a piece of classical music – reminder of the music therapy used for the guests of the institution – can be heard by the users facing the south side of the Venice lagoon, but only during the nights characterized by the absence of clouds.

It’s a well-realised attempt to influence some of the resonances that can create emotional immersion in location-based narratives.
This isn’t quite context-aware hypertext. In fact, each segment was presented as a short video, so of course the content of the video didn’t change dynamically according to context, but the choice of which video the user was presented with was made by context-aware software.
I’m not convinced that pure video is the ideal medium for cultural heritage interpretation; after all, when you are in a place, you don’t want to be immersed in the video, you want to bathe in the atmosphere around you. This project demonstrates how a short video, triggered by location, time and weather, becomes part of the place, but I’d like to see how a similar project with perhaps audio and the occasional augmented reality element would work.
I can’t deny that weather is an important poetic element in narrative (consider “It was a dark and stormy night”). In the digital narrative I’m currently exploring, the game Red Dead Redemption, the emotional impact of some scenes, not just set pieces but moments during free-wandering play, can be enhanced by the weather, be it good or bad. I’ve not yet worked out whether the rain in the lead-up to one scene was coincidental or scripted, but I think it was a happy accident that as my character, John Marston, walked toward a location that I, as player, knew was a trap, the rain started and Marston’s footsteps splashed, doom-laden, through the puddles. (It’s not all doom and gloom: shortly after I started playing I happened to notice this tweet from @r4isstatic: “Sunset through Hennigan’s Stead. Beautiful.” Actually his post on what makes the narrative of Red Dead Redemption so different from other shooting games is, though not weather-related, well worth a read.)
Back to San Servolo: the rules that deliver a particular piece of video don’t just take into account the place, time and weather: there’s also a rule that will block a particular video if it’s already been shown to enough people – the idea being that users are forced to use the social network to share what they’ve experienced, and to hear about what other users have experienced. All in all, it’s a location-aware narrative that really pushes the boundaries.
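Just to get the rule logic straight in my own head, here’s a minimal sketch (in Python, and emphatically not the San Servolo team’s actual code) of how that kind of context-aware selection might work. The clip titles, zones, hours, weather categories and viewer cap are all invented for illustration:

```python
# A rough sketch, not the San Servolo implementation: each clip carries rules
# about where, when and in what weather it may play, plus a cap on how many
# people may see it before it is withheld.
from dataclasses import dataclass, field


@dataclass
class Clip:
    title: str
    zone: str                   # named area of the island (hypothetical)
    hours: range                # hours of the day when the clip is available
    weather: set = field(default_factory=set)  # allowed weather conditions
    max_viewers: int = 999      # block once this many people have seen it
    viewers: int = 0

    def available(self, zone, hour, weather):
        """True only if every contextual rule is satisfied."""
        return (zone == self.zone
                and hour in self.hours
                and weather in self.weather
                and self.viewers < self.max_viewers)


def choose_clip(clips, zone, hour, weather):
    """Return the first clip whose rules all pass, or None."""
    for clip in clips:
        if clip.available(zone, hour, weather):
            clip.viewers += 1
            return clip
    return None


# Illustrative rules only, loosely paraphrasing the examples quoted above.
clips = [
    Clip("The mad woman's story", zone="park sculpture",
         hours=range(12, 18), weather={"clear", "cloudy", "rain"}),
    Clip("Music therapy recital", zone="south shore",
         hours=range(20, 24), weather={"clear"}, max_viewers=50),
]

print(choose_clip(clips, zone="park sculpture", hour=15, weather="cloudy").title)
```

The viewer counter is what would let a “shown to enough people already” rule quietly withdraw a clip, pushing users towards hearing about it socially instead.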
One interesting point is that “while the current location value is retrieved from the user device, most of the values of the environmental context are retrieved from different web services.” I went to a seminar yesterday from Michael Charno, Web Developer/Digital Archivist from the University of York. It was about linked data and the semantic web. I’m not afraid to admit that a whole bunch of it went right over my head, but he did point out one danger of drawing data from a variety of web-based services – what happens if a service is withdrawn, or even if (as the Library of Congress did to one of Charno’s projects) the providing organisation decides to change their web address? The permanence of the web services an interactive project like this draws on becomes a vital factor in user satisfaction. Cultural heritage organisations will be looking for a product like this to last some time without needing too much IT support, and users won’t be willing to wait around while somebody re-codes a bunch of links to restore basic functionality.
But that shouldn’t detract from the imagination shown by the team putting together the San Servolo project. It’s a great attempt to explore a new poetics for the place-based narrative. And it inspired me to spend too much of today hunting around for more digital poetics…
Having been sidetracked into telematics, intrigued by some of the work of Paul Sermon (this is my favourite, but this is more relevant to what I’m looking at), and momentarily impressed by how far we have come technologically in such a short time (check out these tiny dial-up-ready QuickTime packages), I came across this online journal. I don’t know yet whether it’s going to be useful, but today it was definitely interesting.
Of course what I should have been doing is designing my evaluation for Ghosts in the Garden, and chopping the Wey Navigations narrative into HypeDyn chunks. Neither of those has happened. It may be a long evening ahead.

HypeDyn, App Furnace and the Tudor Child

***Updated*** When I added a photo via my mobile device, I seem to have deleted half the post before publishing, so the headline won’t have made much sense. I’ve rewritten the second half of the post now. Apologies.

This morning I got my head around HypeDyn by working through the three tutorials they provide. The first tutorial is about building simple hypertext links, and it got me thinking about a text-handling language I had on my very first computer, the little-known Memotech MTX 500. I now realise this software, called Noddy, was somewhat ahead of its time. Indeed a contributor to the wiki page I link to above calls it a “forerunner” of HyperCard. I can’t say whether that’s true, but all it seems to have been missing was the point-and-click interface that we’ve all been used to since the Mac. As I worked through the first tutorial on HypeDyn, I was thinking how old-fashioned this new software was. Still, it is free.

The second tutorial didn’t improve matters much. Although it did introduce Anywhere Nodes, which are linked to every other node, it didn’t seem to offer a way of sculpting them very dynamically. Thankfully the third tutorial was all about sculptural hypertext, and I learned how to make the links conditional, not just on whether a node had been visited or not, but also on whether card-independent flags (or “facts”) are true or false. I also learned how to make the text on each card more dynamic, again based on the reader’s node history or the state of definable facts.
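To fix the idea in my head, here’s a minimal sketch of how I picture those conditional links and facts working. I’ve only met HypeDyn through its point-and-click interface, so the Python below is my own toy model with invented node names, facts and conditions, not HypeDyn’s actual internals or file format:

```python
# A toy model of conditional links and "facts" as I understand them;
# everything named here is invented for illustration, not taken from HypeDyn.

visited = set()                      # nodes the reader has already seen
facts = {"found_the_key": False}     # card-independent true/false flags


def read_node(name):
    visited.add(name)


def link_available(requires_visited=(), requires_facts=()):
    """A link only appears once its history and fact conditions all hold."""
    return (all(node in visited for node in requires_visited)
            and all(facts.get(fact) for fact in requires_facts))


read_node("the locked gate")
facts["found_the_key"] = True

# A link conditional on both a visited node and a fact being true:
print(link_available(requires_visited=["the locked gate"],
                     requires_facts=["found_the_key"]))      # True

# Dynamic text on a card can be driven by the same state:
opening = ("You return to the gate, key in hand."
           if "the locked gate" in visited and facts["found_the_key"]
           else "You reach a locked gate.")
print(opening)
```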

It’s still pretty basic. It’s text only, for example; it can’t serve pictures or video. But it will do to play with, and I’m going to have a hack at making an existing text more interactive. The text in question is the draft text for a proposed guidebook to the River Wey and Godalming Navigations – surely the National Trust’s longest bit of countryside at 20 miles long (and only a few metres wide in most places). We’ve already noticed that, as pure text, it presupposes walking in only one direction along the towpath. Obviously people can start in more than one place and choose to walk either upstream or downstream. So my first challenge will be to serve those users’ needs.
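Thinking aloud about that challenge, one possible approach (purely my own speculation at this stage, not anything from the draft guidebook) would be to store each stretch of the towpath once, with direction-dependent joining text, something like this rough sketch; the place names and wording are illustrative only:

```python
# Each chunk of guidebook text is stored once, with direction-specific joining
# sentences, and played back in whichever order the walker needs.

chunks = [
    {"place": "Dapdune Wharf",
     "body": "The wharf was once the centre of barge building on the Wey.",
     "upstream": "From here the towpath continues towards Godalming.",
     "downstream": "From here the towpath runs on towards the Thames."},
    # ... further chunks, listed in towpath order ...
]


def walk(chunks, direction):
    """Yield each chunk's text in the order this walker will meet it."""
    ordered = chunks if direction == "upstream" else list(reversed(chunks))
    for chunk in ordered:
        yield chunk["body"] + " " + chunk[direction]


for paragraph in walk(chunks, "downstream"):
    print(paragraph)
```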

I mentioned last week that HypeDyn was designed as a tool for non-technical people, and by coincidence this afternoon I heard about another one. I was chatting on Skype with the lovely people at Splash and Ripple about doing an evaluation of Ghosts in the Garden, which makes an eagerly anticipated return to the Holborne Museum, Bath, later this spring. They mentioned that the software that underpins the experience was made with App Furnace. The people behind App Furnace were, by coincidence squared, also in the team that put together the Riot! 1831 experience I wrote about yesterday. Like HypeDyn, it’s a creative tool for non-technical types. It’s online and free to use; you only pay when you publish. So it might be worth a play with. But only after I’ve got my head around HypeDyn…

And so to Piccadilly, where this evening I went to celebrate the launch of The Tudor Child, a book (and, for the next three weeks, an exhibition at the Weiss Gallery) which explores the place of the child in Tudor society through the clothes they wore.


A child's mannequin in Tudor dress

Childish enthusiasm and Sculptural Hypertext

I gave my first seminar yesterday, talking about how I came to begin the PhD. It seemed to go pretty well, and was recorded for posterity, so when it appears online, I’ll post a link to it.

But before the seminar I spoke at a workshop examining Digital Narratives, and was impressed and excited by all the other speakers. One in particular got my childish enthusiasm all fired up, though. David Millard, of the university’s Electronics and Computer Science department, spoke about “strange” and “sculptural” hypertext.

Hypertext is of course the basis of the World Wide Web. But the last time I did anything with it was when I was using HyperCard to prototype museum interactives for my degree. What makes hypertext “sculptural” is the idea that instead of authoring links between individual cards, all the “cards” are linked to all the others to begin with, and you cut away links to get to what you want. (This may be a massive oversimplification, or I might have got hold of the wrong end of the stick entirely – if so, forgive me, I only learned about it yesterday.)

This cutting away can be done dynamically, so, for example, links might not be evident until you are in the right place geographically, or until you’ve read a particular “card”, or until it’s the right time of day.
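If I’ve understood David right (and I may not have), the “everything linked, then cut away” idea could be sketched roughly like this. The nodes, places and times below are invented for illustration, and this certainly isn’t his implementation:

```python
# Sculptural hypertext as I currently (mis?)understand it: every node starts
# linked to every other, and run-time conditions cut links away.

nodes = {"gatehouse", "walled garden", "orangery", "ghost story"}
read = {"gatehouse"}                       # "cards" already read

# Each node carries predicates over the current context; a link to it
# survives only if every predicate passes.
conditions = {
    "walled garden": [lambda ctx: ctx["place"] == "garden"],              # right place
    "ghost story":   [lambda ctx: "walled garden" in read,                # right history
                      lambda ctx: ctx["hour"] >= 20 or ctx["hour"] < 6],  # right time
}


def available_links(current, ctx):
    """Start from links to everything, then cut away what the conditions forbid."""
    candidates = nodes - {current}
    return {node for node in candidates
            if all(test(ctx) for test in conditions.get(node, []))}


print(available_links("gatehouse", {"place": "garden", "hour": 14}))
# {'walled garden', 'orangery'} – 'ghost story' stays cut away until night-time,
# and until 'walled garden' has been read.
```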

The more David explained, the more I thought “this sounds like the real-world equivalent (actually not quite real-world – I need to think about which of Pine’s “eight realms” it is) of the algorithm behind the likes of Red Dead Redemption”. Which is, I think, exactly what I’m looking for right now.

So, this morning, my childish enthusiasm got the better of me, and I’ve been online looking for a free hypertext authoring tool that I might be able to get my head around to give it a go. I’ve just downloaded HypeDyn (pronounced “Hyped-in”), which seems an easy enough authoring tool to start with. It’s produced by my wife’s old alma mater (or one of them), the National University of Singapore, who say “Much of our focus is on end-user technologies for people who may not be technically inclined, but who want to use the power of computation to build and explore things,” which sounds just like me. Its very latest development version includes location tools and a way of publishing to HTML5 which can be used on any mobile phone. So it could be used to prototype a pretty sophisticated location-based narrative. I’m going to start with a more stable version, though.