Simulating ideology in storytelling

The Story Extension Process, from Mei Yii Lim and Ruth Aylett (2007) Narrative Construction in a Mobile Tour Guide

Another great piece from Ruth Aylett, this time from 2007. Here, she and collaborator Mei Yii Lim are getting closer to what I’m aiming for, if taking a different approach. They kick off by describing Terminal Time, a system that improvises documentaries according to the user’s ideological preference, and an intelligent guide for virtual environments which takes into account the distance between locations, the already-told story, and the affinity between the story element and the guide’s profile when selecting the next story element and location combination to take users to. They note that this approach could bring mobile guides “a step nearer to the creation of an ‘intelligent guide with personality'” but that it “omits user [visitor] interests”. (I can think of many a human tour guide that does the same.) They also touch on a conversational agent that deals with the same issues they are exploring.

This being a 2007 conference paper, they are of course using a PDA as their medium. The handset is equipped with GPS and text-to-speech software, while a server does all the heavy lifting.

“After [an ice-breaking session where the guide extracts information about the user’s name and interests], the guide chooses attractions that match the user’s interests, and plans the shortest possible route to the destinations. The guide navigates the user to the chosen locations via directional instructions as well as via an animated directional arrow. Upon arrival, it notifies the user and starts the storytelling process. The system links electronic data to actual physical locations so that stories are relevant to what is in sight. During the interaction, the user continuously expresses his/her interest in the guide’s stories and agreement to the guide’s argument through a rating bar on the graphical user interface. The user’s inputs affect the guide’s emotional state and determine the extensiveness of stories. The system’s outputs are in the form of speech, text and an animated talking head.”

So, in contrast to my own approach, this guide is still story-led, rather than directly user-led, but it decides where to take the user based on their interests. And they are striving for an emotional connection with the visitor. So their story elements (SEs) are composed of “semantic memories [-] facts, including location-related information” and “emotional memories […] generated through simulation of past experiences”. Each story element has a number of properties; semantic memories, for example, include: name (a coded identifier); type; subjects; objects; effects (this is interesting: it lists the story elements that are caused by this story element, with variable weight); event; concepts (terms that might need a further definition when first mentioned); personnel (who was involved); division; attributes (relationship to interest areas in the ontology); location; and text. Emotional story elements don’t include “effects and subjects attributes because the [emotional story element] itself is the effect of a SE and the guide itself is the subject.” These emotional memories are tagged with “arousal” and “valence” tags. The arousal tags are based on Emotional Tagging, while the valence tag “denotes how favourable or unfavourable an event was to the guide. When interacting with the user, the guide is engaged in meaningful reconstruction of its own past,” hmmmmm.
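To fix the structure in my own head, here is a minimal sketch of how the two kinds of story element might be represented, in Python. The field names are my paraphrase of the paper’s attribute list, not its actual schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SemanticMemory:
    """A fact-like story element (SE), tied to a physical location."""
    name: str                  # coded identifier
    se_type: str               # the element's type
    subjects: List[str]        # who/what the SE is about
    objects: List[str]         # things referred to
    effects: Dict[str, float]  # SEs caused by this SE, with variable weights
    event: str
    concepts: List[str]        # terms that may need defining on first mention
    personnel: List[str]       # who was involved
    division: str
    attributes: List[str]      # relationship to interest areas in the ontology
    location: str
    text: str                  # the interpretive text itself

@dataclass
class EmotionalMemory:
    """An experience-like story element; no effects/subjects attributes,
    because it is itself the effect of a semantic SE and the guide is the subject."""
    name: str
    arousal: float             # intensity of the remembered experience
    valence: float             # how favourable/unfavourable the event was to the guide
    text: str
```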

So in their prototype, a guide to the Los Alamos site of the Manhattan Project, the guide could be either “a scientist who is interested in topics related to Science and Politics, and a member of the military who is interested in topics related to Military and Politics. Both guides also have General knowledge about the attractions.” I’m not convinced by the artifice of layering two different points of view onto the interpretation: both are being authored by a team who, in creating the two points of view, will, even if striving to be objective, make editorial decisions that reveal a third, authentic PoV.

When selecting which SE to tell next, the guide filters out the ones that are not connected to the current location. Then “three scores corresponding to: previously told stories; the guide’s interests; and the user’s interests are calculated. A SE with the highest overall score will become the starting spot for extension.” The authors present a pleasingly simple (for a non-coder like me) algorithm for working out which SE goes next (I’ve sketched my reading of it in code after the examples below). But the semantic elements are not the only story elements that get told. The guide also measures the Emotional, Ideological story elements against the user’s initial questionnaire answers and reactions to previous story elements, and decides whether or not to add the guide’s “own” ideological experience on to the interpretation, a bit like a human guide might. So you might be told:

Estimates place the number of deaths caused by Little Boy in Hiroshima up to the end of 1945 at one hundred and forty thousands where the dying continued, five-year deaths related to the bombing reached two hundred thousands.

Or, if the guide’s algorithms think you’ll appreciate its ideological perspective, you could hear:

Estimates place the number of deaths caused by Little Boy in Hiroshima up to the end of 1945 at one hundred and forty thousands where the dying continued, five-year deaths related to the bombing reached two hundred thousands. The experience of Hiroshima and Nagasaki bombing was the opening chapter to the possible annihilation of mankind. For men to choose to kill the innocent as a means to their ends, is always murder, and murder is one of the worst of human action. In the bombing of Japanese cities it was certainly decided to kill the innocent as a means to an end.

I guess that’s the scientist personality talking; perhaps the military personality would instead add a different ideological interpretation of the means to an end. As I mentioned before, I’m not convinced that two (or more) faux points of view are required when the whole project and every story element that the guide gets to choose from are already authored with a true point of view. But in many other aspects this paper is really useful and will get a good deal of referencing in my thesis.
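As promised above, here is my reading of that selection step as code: a weighted sum over the candidate SEs at the current location. This is only a sketch; the function name, weightings and score components are my own guesses, not the paper’s actual formulation.

```python
from typing import Dict, List, Set

def choose_next_se(
    candidates: List[str],             # SEs connected to the current location
    told: Set[str],                    # SEs already narrated
    guide_interest: Dict[str, float],  # per-SE affinity with the guide's profile
    user_interest: Dict[str, float],   # per-SE match with the user's stated interests
    weights=(1.0, 1.0, 1.0),
) -> str:
    """Pick the story element with the highest combined score."""
    w_novelty, w_guide, w_user = weights

    def score(se: str) -> float:
        # Favour elements not yet told, and elements that overlap with
        # both the guide's and the user's interests.
        novelty = 0.0 if se in told else 1.0
        return (w_novelty * novelty
                + w_guide * guide_interest.get(se, 0.0)
                + w_user * user_interest.get(se, 0.0))

    return max(candidates, key=score)
```

Zeroing the user-interest weight would, I think, roughly recover the earlier virtual-environment guide they critique for omitting visitor interests.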

Abstract: Digital Personalisation for Heritage Consumers

I’m speaking at the upcoming Academy of Marketing E-Marketing SIG Symposium: ‘Exploring the digital customer experience:  Smart devices, automation and augmentation’ on May 23 2017. This is what I wrote for my abstract:

Relevance to Call: Provocation, Smart Devices, Augmentation of the Customer Experience

Objective: A work-in-progress research development project at Chawton House explores narrative structure, extending the concept of story Kernels and Satellites to imagine the cultural heritage site as a collection of narrative atoms, or Natoms, both physical (spaces, collection) and ephemeral (text, video, music etc.). Can we use story-gaming techniques and digital mobile technology to help physical and ephemeral natoms interact in a way that escapes the confines of the device’s screen?

Overview: This provocation reviews the place of mobile and location technologies in the heritage market. Digital technology and social media are in the process of transforming the way the days-out market is attracted to cultural heritage places. But on site, the transformation is yet to start. New digital interventions in the heritage product have not caught on with the majority of heritage consumers. The presentation will survey the current state of digital heritage interpretation, and especially the use of location-aware technologies such as Bluetooth LE, NFC, or GPS. Most such systems deliver interpretation media to the device itself, over the air or via a prior app download. We explore some of the barriers to the use of mobile devices in the heritage visit: the reluctance to download proprietary apps; mobile signal and wifi complexities; and, most importantly, the “presence antithesis”, the danger that the screen of the device becomes a window that confines and limits the user’s sensation of being in the place and among the objects that they have come to see. Also, while attempts to harness mobile technology in the heritage visit display interpretation that is both more relevant and, in some cases, more personalised to the needs of the user, they also tend towards a “narrative paradox”: the more the media is tailored to the movements of the user around the site, the less coherent and engaging the narrative becomes.

Method: Story-games can show us how to create an experience that balances interactivity and engaging story, giving the user complete freedom of movement around the site while delivering the kernels of the narrative in an emotionally engaging order. At Chawton we plan to “wizard of oz” an adaptive narrative for that place’s visitors.

Findings: Work so far demonstrates that a primary challenge for an automated system will be negotiating the competing needs of different groups and individuals within the same space. The work at Chawton looks to address this.

**

This is the first time I’ve written an abstract in this format, and I found it quite a challenge. What you add in and leave out is always a difficult decision, and this format, which was limited to one side, had me opting to leave out the references which I might have made room for if I had not had to write something under each of the prescribed headings. It’s also the first time I have had formal feedback on an abstract, which I share below:

Relevance to call: Good fit Smart devices, user experience, augmentation, culture (5)
Objective: A practical case example of augmentation in a heritage setting (5)
Lit rev: No indication of theory used, as this is a practical case study (n/a)
Method: A specific case of Chawton House presented. (5)
Results: Interesting findings re barriers to use of mobile devices in heritage, and the experience evaluation (4)
Generalisations: Interesting and original context of heritage institution using augmentation, can extend to other heritage sector applications. (4)
Total 23/25

**

So, not a bad score, but I wonder what I would have got (out of 30?) if I had included the references. Does the bibliography count within the one page limit? Or, could I have included it on a second side?

Still, no time for those questions. I have to write the actual presentation now. 🙂

Building the Revolution 

I finally got to the V&A today, for their exhibition You Say You Want a Revolution. I got turned away at the end of Cromwell Road last time, as the museum was being evacuated after a bomb-scare. 

I’m writing this review on my way home, using my phone (so please forgive my typos) partly because I want to recommend you go, and there is not long left to see it. 

The exhibition charts the western cultural revolution of 1966-1970, through John Peel’s record collection, plus of course fashion and design from the V&A’s own collection, and other items borrowed from other institutions, such as an Apollo mission space suit.

One of the gimmicks of the show is the audio, an iteration of the same technology used at the Bowie exhibition a couple of years ago. I didn’t get to go to that one, but I had a demonstration of that tech from the makers, Sennheiser, at a Museums and Heritage show.

I wasn’t very impressed. Though these headphones, which play music or a soundtrack to match whatever object or video you are looking at, were well received by the media back then, in my experience the technology was clunky. Other friends who’d been confirmed that the changes between sound “zones” could be jarring, and that it was possible to stand in some places where music from two zones would alternate, vying for your attention.

The experience this time was an improvement. It was by no means perfect: I found the music would stutter and pause annoyingly, especially if I enjoyed the track enough to find myself gently nodding my head. Occasionally the broadcast to everyone’s headphones would pause so everyone in a room could share a multimedia experience (of the Vietnam war, for example) across all the gallery’s speakers, screens and projectors. These immersive overrides were effective, in much the same way as those at IWM North, but when a track you were enjoying or a video you found interesting was rudely interrupted, you couldn’t help but feel annoyed. I found myself forgiving the designers, however, for this and even the stuttering sound of the headphones, because it all felt resonant with that late-sixties “cut-up” technique.

Where the technology really worked, however, was on two videos that topped and tailed the exhibition. In the first, various icons and movers of the period were filmed in silent moving portraits of their current wrinkled and grey selves. Their reminiscences of the time appeared as typography overlaying their silent, closed-mouth gaze, a little like Barbara Kruger’s work, while over the headphones you heard their voices. The same characters appeared at the end, this time as a mosaic of more conventional talking heads. And for the first time, the interpretation was didactic, as each in turn challenged the current generation to build on their legacy.

For me, one of the highlights was the section on festivals, which invited visitors to take off their headphones, lie back on the (astro)turf and let (another cut-up of) the famous Woodstock documentary wash over them on five giant screens.

The other thing I loved was the tarot cards dotted around among the exhibits, which, at first glance, looked like they might have been designed in the sixties. But then you notice references to things like Tim Berners-Lee and the World Wide Web, and you realise these are a subtle form of interpretation, telling a future of the sixties that apparently came true and, for those of us from that future, creating correspondences and taxonomies that connect the events of 1966-70 with today. The V&A commissioned British artist Suzanne Treister to create the cards, based on her 2013 work, Hexen 2.0. And the very best thing about them is that you can buy them (pictured above) in the shop, which must be the first time copies of museum interpretation panels have been made available for purchase.

Of course, they aren’t the only form of interpretation. Apart from the soundtrack, there are more traditional text panels, labels and booklets around the exhibition. But the cards show how cleverly the layering of meaning and interpretation has been created. Many visitors will have passed them by unnoticed, given them a cursory glance or chosen to ignore them, and will have had an entirely satisfactory experience. But for those who paused to study them in more detail, a whole new layer of meaning opened up.

I visited with a sense of duty, to try out a responsive digital technology. But I found so much more to enjoy. This is a brilliantly curated exhibition. So much better than the didactic, even dumbed-down permanent gallery of the new Design Museum, which I visited before Christmas. I urge you to go, if you haven’t seen it yet. It’s only on for another month.

A colleague who had visited the exhibition before told me how depressed it had made him: the optimism of that period seems to have been dashed upon the reactionary rocks of 2016, Brexit and Trump. But I came out with a very different mood. 

One of the early messages of the exhibition frames the period as a search for utopia. The final tracks you hear as you walk out (after the video challenge issued by the old heads of the sixties) are Lennon’s 1971 single Imagine and then, brilliantly, Jerusalem.

No, of course they didn’t find the Utopia they were looking for in the sixties, but we could build it…

Could this be … the first decent museum app?


Last week my wife and I went to San Francisco. Our second full day there was mostly spent within SF MOMA, the San Francisco Museum of Modern Art. And for the first time ever, I used a museum/heritage app that actually enhanced my visit.

Part of what made it so successful was the infrastructure that made it easy to download and use. I didn’t have to plan in advance and download it before my visit. I wasn’t even aware of it before I went, but if I had been, I would have been unlikely to download it, because our hotel’s free wifi only allowed one of us to use a device in each four-hour lease period.

We’d started our visit walking through the museum to the opposite entrance to contemplate the Richard Serra sculpture. It was early in the day, the museum was just opening, and there was a team-brief on the tiered seating that surrounds the piece. But they moved on and we sat for a moment to contemplate the enormous steel structure (I can’t deny the meditative quality of Serra’s work, or the calming impact it seems to have on the psyche when encountered, but really I sometimes feel “seen one, seen them all”) and to plan our day.

My wife noted a label on the wall directing people who wanted to know more about the art to SFMOMA’s app, and helpfully pointing out that you could log into the museum’s free wifi to download it. I think it said that it was iOS only, but if you didn’t have a suitable device, you could borrow one.

The first pleasure was logging onto the wifi. This was possibly the most hassle-free process I’ve ever encountered on public wifi. The signal was strong (everywhere), reliable and speedy too. The app downloaded quickly, and upon opening gave me three screens introducing what it offered, such as the one below:

It wanted access to my location services (of course), camera and, unusually, to my activity (the “healthy living” function of more recent versions of iOS), but having been so pleasantly surprised and satisfied by the process so far, I was very happy to allow them all. All this had taken very little time, but enough time for my wife to have wandered away towards the elevators to begin our exploration of the museum, so I hurried after her, scanning what was on offer from the app as I went.

There’s a highlights function, which includes “Our picks for forty must-see artworks that are currently on view”, a timeline function that enables you to record and share your visit, a section on other “things to do”, and of course the ability to buy tickets, membership etc. At the core of the app are “Immersive Walks”: a range of fifteen- to 45-minute audio tours of the galleries.

Oh no! I’d left my earphones back at the hotel.

But that wasn’t a problem, because as I caught up with my wife by the elevators, I saw a stand stacked high with cases of SFMOMA-orange ear-buds. These were given away free and were of a somewhat disposable quality, but good enough to last the day (and to pass on to my son when we got back from the holiday), with in-line volume controls for ease of use. The thought and effort that SFMOMA put into the infrastructure around the app deserves to be commended.

But let’s get to the meat of the app’s functionality. The key thing here is indoor positioning. I’m guessing it’s achieved through wifi mobile location analytics, but I haven’t confirmed that. I can confirm that it’s pretty accurate, though with a little bit of lag, so it takes a while after walking into a gallery, and then standing still for a moment, before your device can deliver the buttons for the content relevant to the artworks in the gallery. Some, but not all, of the artworks are accompanied by a specific bit of media (mostly audio) to offer more in-depth insight into the work. This can include commentary, reviews or snippets of interviews with the artist.

I also took an immersive walk. I chose German to Me, a personal exploration of post-war German artists from radio journalist Luisa Beck, in which she shares her reactions to some of the works in the collection and interviews her mother, grandmother and cousin to uncover more about her own German-American identity. As the tour progresses you are guided not just by Luisa’s spoken directions, but also by the app’s indoor positioning, as shown below.

I have to say, I would have given these galleries the most cursory of glances, had I not been captured by Luisa’s tour. As it was, her (wholly un-sensational) story, and her commentary upon the art engaged me emotionally to a degree I wasn’t expecting. It enhanced my visit like no other app has achieved.

The phone also recorded my “timeline”, my journey through the museum, on-line so that I can share with others the photos I took, the artworks that caught my attention enough to seek more information from the app, and the tours I went on. As you can see, I spent three and a half hours with the app, walking 3,369 steps (or 1.7 miles). This timeline is the only slightly disappointing aspect of the app: I would have liked to click through this on-line version to listen to some of the media again, now that I am back home, maybe even to be reminded (through the app’s ability to determine location) who made some of the things that I took photos of.

You’ll know that I’m not a massive fan of looking at things through my phone, but this app did well enough to almost convince me otherwise.

The museum had other digital interventions of interest. You might have spotted in my timeline that one of the first things we looked at was a surveillance culture-inspired artwork by Julia Scher that turned the museum into a Responsive Environment, changing according to visitors’ actions.


There was also a fun activity in one of the cafes that allowed you to create your own digital artwork, printing it out on thermal paper instantly, but also linking to a hi-res online version, which I used for the illustration at the top of this post (you will note that those free earbuds are the stars of that piece).

SFMOMA, with their technology partners Detour on the app, and the support of Bloomberg Philanthropies, are doing good things in the digital sphere. If you’re there, you should check them out.

Shine On: part two

In the afternoon Graham Festenstein, lighting consultant, kicked off a discussion about using lighting as a tool for interpretation. New technology, he said, especially LED, presents new opportunities, a “new revolution” in lighting. It’s smaller, with better optics and control, and more affordable too. He used cave paintings as an example. Lighting designers could take one of three approaches to lighting such a subject: they might try to recreate the historical lighting which, for a cave painting, would have been primitive indeed, a tallow bowl light, revealing small parts of the painting at a time and with an orange light; it’s more likely, given the needs of the visitor, that they might go for wider-angle lighting, revealing the whole of the painting at once; or they might light for close-up inspection of the work, to show the mark-making techniques. Traditionally, a lighting designer would have had to choose just one of these approaches. But with the flexibility and versatile control of modern lighting technology, we can do all three things: caveman lighting, wide-angle panorama, and close-up technical lighting.

Graham’s presentation was not the strongest. He explained that he approached LED lighting as a sceptic at first, recalling a visit to a pilot project at the National Portrait Gallery. His first impressions were disappointing, but then he realised that what he missed about the tungsten lighting was the way it lit the gilded frames, and that the LED lighting was better serving the pictures. He went on to talk about colour, and how the warm lights of the Tower of London’s torture exhibition undermined the theme, but the presentation overall was somewhat woolly.

Zerlina Hughes, of studio ZNA, came next, with a very visual presentation which I found myself watching rather than taking notes on. It explained her “toolkit” of interpretive lighting techniques, but I didn’t manage to list all the tools. A copy of the presentation is coming my way though, so I might return with more detail on that toolkit in a later post. One of her most recent jobs looks great, however, and I’m keen to go: You Say You Want a Revolution, at the V&A, follows on from the Bowie show a year or so ago, but with (she promises) less clunky audio technology.

Jonathan Howard, of DHA design, explained that, like Zerlina, “most of us started as Theatre designers.” I (foolishly, I think, in retrospect) passed up an invitation to do theatre design at Central St Martins, and I think I would have been fascinated by lighting design if I had gone, so I might have ended up at the same event, if on the other side of the podium. Museum audiences today are expecting more drama in museums, having experienced theatrical presentations like Les Miserables, theme parks etc. I was interested to learn that in theatre, cooler colours throw objects into the background, and warmer colours push them into the foreground. This is apparently because we find the blue end of the spectrum more difficult to focus on. In a museum space, he says, you can light the walls blue so that the edges of the gallery fall away completely. But he did have a caveat about using new lighting technology. Before rushing in to replace your lighting with LEDs and all the modern bells and whistles, ask yourself:

  • Why are we using new tech?
  • Who will benefit?
  • Who will maintain it?
  • Who will support it?

Kevan Shaw offered the most interesting insight into the state of the art. He pointed out that lighting on the ceiling has line of sight to most things, because light travels in straight lines (mostly), and we tend to point it at things. So, he said, your lighting network could make a useful communications network too. He wasn’t the first presenter to include an image of a yellow-centred squat cylinder in their slide deck, and they spoke as though we all knew what it was. I had to ask, after the presentation, and they explained that it was one of these. These LED modules slip into many existing lamps or luminaires. They are not just a light source, but also a platform for sensors and a communications device. Lighting, Kevan argues, could be the beachhead of the Internet of Things in museums.

He briefly discussed two competing architectures for smart lighting: Bluetooth, which we all know, and Zigbee, which you may be aware of through the Philips Hue range (which I was considering for the Chawton experiment). He also mentioned Casambi and eyenut; I’m not sure why he thinks these are not part of the two-horse race. He argues that we need interoperability, so I guess he’s saying that the competing systems will eventually see a business case in adopting either Bluetooth or Zigbee as an industry standard.

With our lightbulbs communicating with each other, we can get rid of some of our wires, he argues, but it needs to be robust and reliable. And the secret to reliability is mesh networking: robust networks for local areas. Lighting is a great place for that network to be. That capability already exists in Zigbee (so I think Zigbee is what I should be using for Chawton), but it’s coming soon in Bluetooth. And I think Kevan believes that when it does, Bluetooth will become the VHS of the lighting system wars, and relegate Zigbee to the role of Betamax.

But the really exciting thing is Visible Light Communication, by which the building can communicate with any user with a mobile device that has a front-facing camera (and the relevant software installed). He showed us a short video of the technology in Carrefour (mmm, the own-brand soft goat cheese is delicious).

The opportunities for museums are obvious but, he warns, to be used effectively, museums will need the resources to manage and get insight from all the data these lighting units could produce. Though, he says optimistically to his fellow lighting consultants, “that need could be an opportunity for us!”

Finally we heard from Pavlina Akritas, of Arup, who took the workshop in the direction of civil engineering. Using LA’s Broad Museum as an example, she explained how, in this new build, Arup engineered clever (north-facing) light-wells which illuminated the museum with daylight, while ensuring that no direct Los Angeles sun fell onto any surface within the galleries. The light-wells included blackout blinds to limit overall light hours, and photocells to measure the amount of light coming in and, if necessary, automatically supplement it with LEDs. She also talked briefly about a project to simulate skylight for the Gagosian gallery, Grosvenor Hill.

All in all, it was a fascinating day.

This post is one of two, the first is here.

Mulholland on Museum Narratives

Working on the narratives for the Chawton Project, I’m taking a break and catching up on reading. Paul Mulholland (with Annika Wolff, Eoin Kilfeather, Mark Maguire and Danielle O’Donovan) recently contributed a relevant first chapter to Artificial Intelligence for Cultural Heritage (eds. Bordoni, Mele and Sorgente).

Mulholland et al’s chapter is titled Modelling Museum Narratives to Support Visitor Interpretation. It kicks off with the structuralist distinction between story and narrative, and points to a work I’ve not read and should dig out (Polkinghorne, D. 1988, Narrative Knowing and the Human Sciences) as particularly relevant to interpreting the past. From this, the authors draw the “narrative inquiry” process which “comprises four main stages. First, relevant events are identified from the historical period of interest and organised into chronological order. This is termed a chronicle. Second, the chronicle is divided into separate strands of interest. These strands could be concerned with particular themes, characters, or types of event. Third, plot relations are imposed between the events. These express inferred causal relations between the events of the chronicle. Finally, a narrative is produced communicating a viewpoint on that period of history. Narrative inquiry is therefore not just a factual telling of events, but also makes commitments in terms of how the events are organised and related to each other.” Which is as good and concise a summary of the process of curatorial writing as I am likely to find.
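Purely to pin down those four stages for myself, here is a toy sketch of chronicle, strands, plot relations and narrative as data structures. It is my own paraphrase in Python, not the authors’ model, and the names (Event, NarrativeInquiry and the helper functions) are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Event:
    date: int           # year, for simple chronological sorting
    description: str
    strand: str         # the theme, character or event-type it belongs to

@dataclass
class NarrativeInquiry:
    chronicle: List[Event]                # stage 1: events in chronological order
    strands: Dict[str, List[Event]]       # stage 2: chronicle split into strands of interest
    plot: List[Tuple[Event, str, Event]]  # stage 3: inferred relations, e.g. (a, "causes", b)
    viewpoint: str                        # stage 4: the point of view the telling commits to

def build_chronicle(events: List[Event]) -> List[Event]:
    return sorted(events, key=lambda e: e.date)

def split_into_strands(chronicle: List[Event]) -> Dict[str, List[Event]]:
    strands: Dict[str, List[Event]] = {}
    for event in chronicle:
        strands.setdefault(event.strand, []).append(event)
    return strands
```

The point, as I read it, is that stages three and four are authored commitments; they cannot be derived mechanically from the chronicle alone.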

There’s another useful summary paragraph later in the document. “When experiencing a museum exhibition, the visitor draws relationships between the exhibits, reconstructing for themselves the exhibition story (Peponis 2003), whether those relationships are, for example, thematic or chronological. The physical structure of the museum can affect how visitors perceive the exhibition narrative. Tzortzi (2011) argues that the physical structure of the museum can serve to either present (i.e. give access to the exhibition in a way that is independent from its underlying logic) or re-present (i.e. have a physical structure that reinforces the conceptual structure of the exhibition).” Tzortzi there is another reference I’ve not yet discovered and may check out.

What the paper does not do, however, is make any reference to emotion in storytelling. The authors seem to leave any emotional context to the visitors’ own meaning-making. The chapter includes a survey of current uses of technology in museums, and academic experiments including virtual tour guides and opportunities for visitors to add their own interpretations and reminiscences, as well as web-based timelines etc.

But, digital technology gives us the opportunity (or need) to break down cultural heritage narratives even more, and an earlier (2012) paper by (mostly) the same authors, Curate and Storyspace: An Ontology and Web-Based Environment for Describing Curatorial Narratives describes a system for deeper analysis. (Storyspace turns out to be a crowded name in the world of writing tools and hypertext, so eventually the ontology and Storyspace API became Storyscope). The first thing that the ontology brings to the table is that

a curatorial narrative should have the generic properties found in other types of narrative such as a novel or a film

So the authors add another structuralist tool, plot, to the story/narrative mix. “The plot imposes a network of relationships on the events of the story signifying their roles and importance in the overall story and how they are interrelated (e.g. a causal relationship between two events). The plot therefore turns a chronology of events into a subjective interpretation of those events.” But using the narrative inquiry process “the plot can be thought of as essentially a hypothesis that is tested against the story, being the data of the experiment.”

I like this idea. But it’s worth distinguishing between the two uses of the word “interpretation” in cultural heritage. The first use, familiar to my archaeologist colleagues, describes the process of building an understanding of an aspect of the past from the available evidence. The second, more familiar to my museum and heritage site colleagues, describes the process of explaining the evidence to non-professional visitors. At its very best, the museum/heritage site form of interpretation will resemble and guide visitors through the process of inquiry that builds an understanding of the evidence on display. But most of the time the second form of interpretation more closely resembles storytelling. That’s not a fault or failure of my museum/heritage site colleagues; most visitors are time-poor in story-rich environments. But digital technology has the potential to allow museum and heritage site interpretation to more closely resemble the first use of the word.

What digital technology offers is the opportunity for brave curators to offer alternative plots, or theses, and test them in a public arena, rather than just through a peer review process. Or even to create plots procedurally by following the visitors’ path of attention between objects, maybe discovering plots the curator had not imagined.
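A crude illustration of what that procedural plotting might look like: take the ordered list of objects a visitor attended to, and propose candidate plot relations wherever neighbouring objects share a theme. This is entirely my own sketch (hypothetical function and data), not anything proposed in the paper.

```python
from typing import Dict, List, Set, Tuple

def procedural_plot(
    attention_path: List[str],    # object IDs, in the order the visitor attended to them
    themes: Dict[str, Set[str]],  # themes tagged against each object
) -> List[Tuple[str, str, str]]:
    """Propose 'shares-theme' plot relations along the visitor's own route."""
    relations = []
    for a, b in zip(attention_path, attention_path[1:]):
        for theme in themes.get(a, set()) & themes.get(b, set()):
            relations.append((a, f"shares the theme '{theme}' with", b))
    return relations
```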

The two experiments that the authors describe go some way towards this, but their dry ontology misses an emotional component. The event ontology could surely include an authorial opinion on whether the narrative element suggests a simple emotional response (even something as simple as hope or fear), but instead “If the tag represents an artist, then events are used to represent, for example, artworks they have created, exhibitions of their work, where they have lived, and their education history.” Dry, dry facts… There is the tiniest nod towards, if not emotion per se, then some sort of value in their brief discussion of theme:

Theme is also related to the moral point of the story. This could be a more abstract concept, such as good winning through in the end, which serves to bind together all events of the story.

Given that they say “Narratives are employed by museums for a number of purposes, including entertainment”, they haven’t given much time to what makes narratives engaging. There is hope, however. In their conclusion, they do say “Other narrative features such as characterisation and authorial intent could potentially be foregrounded in tools to support interpretation.”

 

P.O.R.T.U.S is go!

A week or two back, I had an interesting conversation with my supervisor, which I didn’t think I should mention on-line until, today, he invoked the “inverse fight club rule”. So I can now reveal that P.O.R.T.U.S stands for Portus Open Research Technologies User Study – yes, I know, as Graeme said, “recursive-acronym-me-up baby.” This isn’t the Portus Project, but it does ride on the back of that work, and (we hope) it will also work to the Portus Project’s benefit.

P.O.R.T.U.S is a small pilot project to explore better signposting to open research, so that (for example) people interested in the BBC documentary Rome’s Lost Empire (which, coincidentally, is repeated TONIGHT folks, hence my urgency in getting this post out) might find their way to the Portus Project website, the FutureLearn MOOC, the plethora of academic papers available free through ePrints (this one for example), or even raw data.

Though the pilot project will use the Portus Project itself as a test bed, we’re keen to apply the learning to Cultural Heritage of all types. To which end I’m looking to organise a workshop bringing together cultural heritage organisations, the commercial companies that build interpretation and learning for them, and open source data providers like universities.

The research questions include:

  • What are the creative digital business (particularly but not exclusively in cultural heritage context) opportunities provided by aligning diverse open scholarship information?
  • What are the challenges?
  • Does the pilot implementation of this for the Portus Project offer anything to creative digital businesses?

The budget for this pilot project is small, and that means the workshop will have limited places, but if you are working in digital engagement, at or for cultural heritage sites and museums, and would like to attend, drop me a note in the comments.

Petworth Park and Pokemon too

Yesterday’s post was timely, it turns out, because today, Pokemon Go was released in the wild. I downloaded it and caught my first two Pokemon in the Great Hall at Chawton, waiting for a meeting that I’ll write more about tomorrow. But after that meeting I was off down to Petworth to have a go with the new(ish) Park Explorer.


The Park Explorer is one of the outputs of a three-year-long archaeology project, exploring what’s under the Capability Brown landscape that survives today. I have some responsibility for the way it works. When my colleague Tom explained his plan to build a mobile application, I dissuaded him. There is little evidence that many people download apps in advance of their visit to heritage sites. And even fewer wish to deplete their data allowance on the mobile network to download one on site.

Together, we came up with an alternative: using solar-powered Info-points to create wifi hotspots around the park that could deliver media to any phone capable of logging on to wifi and browsing the web. Though in this case it’s not the World Wide Web, but a series of basic webpages offering maps, AV, etc. We’re running this installation as a bit of an experiment to gauge demand: to see, if it’s offered, how many people actually log on.

Pokemon Go demonstrated why the technology might be useful. With the app newly downloaded on my phone I, of course, wanted to try it out in Petworth’s pleasure grounds. I’d guessed right: the garden’s Ionic Rotunda and Doric Temple are both Pokestops. But the mobile signal is so weak and patchy (on O2 at least) that the game could hardly log on, let alone do anything when I got within range. After a frustrating few minutes I gave up and returned to the local wifi.

That crummy phone signal is one of the reasons we went with solar-powered local wifi. Once I logged on, I was soon listening to the voice of my colleague Tom as he explained some of the archaeology of the garden, watching an animated film of the development of the park, and scrubbing away a photo of the current three-person gardening team and their power tools to reveal a black and white photo of the small army of gardeners that used to work here.

All of this was very good. But there are some issues that I think need to be addressed if the idea is to catch on. First of all, finding the wifi signal and logging on isn’t as intuitive as I’d hoped: your browser needs to be pointed at 10.0.0.1 to find the home page. The home page design leaves something to be desired. The floating button to change text size seems an afterthought that annoyingly obscures the text it’s trying to clarify. Navigation isn’t intuitive (no obvious way forward from the welcome splash pictured above, for example) or that well organised. I’d hoped that I’d be offered media that was closest to my position (as identified by the hotspot I was logged into), but the browse button just led to a list of things. Switching to the map view was easier, but it showed the design lacked a degree of responsiveness – see below how the word “Map” is partially obscured by the tile with the actual map on it. The pins that link to different media suggest that it’s good to be standing in particular places to view that media, but on the few that I tried around the pleasure grounds, there seemed to be no discernible benefit to being in the right spot. In the end I settled under a spreading oak to sit and work my way through what was on offer.

One feature that worked well, to compare old and new and see change over the centuries, was the scrub-away photo feature. Even here, though, there was a fault in the responsiveness of the design: if I turned the phone into landscape mode, the picture became full screen and I lost the ability to reset it.


I imagined how good it would be, if it looked and felt (and responded) like the National Trust’s current website. Maybe, with a bit of work, it can.

More work would mean more investment though, so first of all we need to interrogate the system’s solar-powered servers, and see how many people are giving it a try.

A little bit of the history of interactive storytelling at Chawton


I spent yesterday morning at Chawton, locating and counting plug sockets so I’ll know my limitations as I design whatever the experience there will be next March. The visit reminded me that I had meant to write here about a previous digital interpretation experiment at Chawton.

Back then, in 2005, the Chawton House Library was not widely open to the public. Primarily a centre for the study of Women’s Literature, one could argue that the visiting academics were also heritage visitors of a sort, and the house and gardens also welcomed some pre-booked visiting groups, such as the Jane Austen Society of North America, and local garden societies. In their conference paper, a team from the universities of Southampton and Sussex describe how, looking for “curators” to work with, they co-opted the trust’s Director, Estate Manager, Public Relations Officer, Librarian and Gardener. All these people may have taken on the role not just of curator, but also of guide to those visiting academics and groups. The paper attempts to describe how their tours interpret the place:

visitors’ experience of the house and its grounds is actively created in personalized tours by curators.

“House and grounds are interconnected in a variety of ways, e.g. by members of the family rebuilding the house and gardens or being buried in the churchyard. Thus artifacts or areas cannot be considered in isolation. There are many stories to be told and different perspectives from which they can be told, and these stories often overlap with others. Thus information exists in several layers. In addition, pieces of information, for example about a particular location like the ‘walled garden’, can be hard to interpret in isolation from information about other parts of the estate – there is a complex web of linked information.[…]

“Curators ‘live the house’ both in the sense that it is their life but also that they want to make it come alive for visitors. The experiences offered by Chawton House are intrinsically interpersonal – they are the result of curators interacting with visitors. Giving tours is a skilled, dynamic, situated and responsive activity: no two tours are the same, and depend on what the audience is interested in. They are forms of improvisation constructed in the moment and triggered in various ways by locations, artefacts and questions.”

Tours are a brilliant way of organising all those layers of information, and I’m sure a personal tour from any one of the curators that they identify would have been excellent. But the problem comes as soon as you try to scale, or mass-produce, the effect. As I said at a conference I presented at a couple of weeks ago (I’m reminded I should write about that too), people, even volunteers, are an expensive resource, and so only the smallest places can afford to give every visitor a guided tour experience. Even then, individuals or families have to book on to a tour, joining other people whom they don’t know, and whose interests they don’t necessarily share. The guided tour experience gets diluted, less personal, less tailored to your interests. Which is when you start getting people saying they would prefer to experience the site by themselves, rather than join a tour. Of course some tour guides are better at coping with these issues than others, but visitors are wary of taking the risk with a guide they don’t know, even if they can recount brilliant guided tour experiences of their own.

The project written about in the paper had two sides: one was to try and produce content for schools, but the other was of particular interest to me:

“The curators are interested in being able to offer new kinds of experience to their visitors. We aim to find out what types they would like to offer, and help to create them. There is thus a need for ‘extensible infrastructure’ based on a basic persistent infrastructure that supports the creation and delivery of a variety of content.”

And four questions they ask themselves are also of particular interest:

  • “How can we enable curators to create a variety of new experiences that attract and engage different kinds of visitors, both individuals and groups?
  • “How do we engage curators in co-design of these experiences?
  • “How can curators without computer science backgrounds contribute to the authoring of content for the system?
  • “How do we create an extensible and persistent infrastructure; one that can be extended in terms of devices, content and types of experience?”

At the time of writing the paper, they had conducted a workshop with their chosen curators, using a map with 3D printed features. Although “use of a map in the first instance may have triggered somewhat different content,” they discovered that “Eliciting content from curators is most naturally and effortlessly done in-situ.” (Which is my plan – I’m in the process of fixing a date with one of Chawton’s most experienced tour guides.)

I particularly liked the observation that “Listening to them is much more lively and interesting than listening to professionally spoken, but often somehow sterile and dull audio tapes sometimes found in museums and galleries.” So enthusiastically did the team connect with the curators’ presentations that they decided to record the tours and edit them into the narrative atoms that were delivered by their infrastructure. That infrastructure was not the subject of the paper, but if I recall correctly, it was GPS-based, running on “Palm Pilot”-style hardware.

More importantly, the most pertinent conclusion was that the curators were best placed not just to select the narrative atoms from the recorded materials, but also to “sort them into themes and topics, so that the system can cater for people with different broad interests, for example landscape, flora and fauna, or how Jane Austen’s writing reflects the environment. This necessitates a learning process, which must build on existing practices and over time develops new practices based on experience and reflection.”

Chawton

A dreary day to photograph a fine building, but the meeting made up for the weather!

Just a quick note today to reflect on the meeting I had this morning with Gillian Dow, Executive Director of Chawton House Library. This place has been preying on my thoughts since I visited for the last Sound Heritage workshop. In fact, my friend Jane and her colleague Hilary had suggested last year that it might be the perfect place to try out my Responsive Environment ideas. But my visit for Sound Heritage made me think more and more that they were right.

  • The place has many interesting stories, but ones that can conflict with each other. Do people want to know about its centuries as a residence for the Knight family, its connections with Austen, and/or its modern-day research into early female writers?
  • It’s a place that hasn’t been open to the public long (this year is its first full season welcoming days-out visitors) and is still finding its voice.
  • It’s relatively free of “stuff” and has modern display systems (vitrines and hanging rails), which means that creating the experience should not be too disruptive.
  • It has pervasive wi-fi (the library’s founding patron, Sandy Lerner, co-founded Cisco Systems), which will make the experiment a lot easier and cheaper to run, even though I’ve decided to Wizard of Oz it.

So today I explained my ideas to Gillian and, I’m pleased to say, she liked them. We’ve provisionally agreed to do something in the early part of 2017, before that year’s major exhibition is installed. I brought away a floor plan of the house, and I have just this moment received a copy of the draft guidebook, so I can start breaking the story into “natoms”. It looks very much like it’s all systems go!

I have to say I’m very excited.

(But right now, I’m meant to be taking the boy camping, so I’ll leave it there…)