Building the Revolution 

I finally got to the V&A today, for their exhibition You Say You Want a Revolution. I got turned away at the end of Cromwell Road last time, as the museum was being evacuated after a bomb-scare. 

I’m writing this review on my way home, using my phone (so please forgive my typos), partly because I want to recommend you go, and there isn’t long left to see it.

The exhibition charts the western cultural revolution of 1966-1970, through John Peel’s record collection, plus of course fashion and design from the V&A’s own collection, and items borrowed from other institutions, such as an Apollo mission space suit.

One of the gimmicks of the show is the audio, an iteration of the same technology used at the Bowie exhibition a couple of years ago. I didn’t get to go to that one, but I had a demonstration of the tech from its makers, Sennheiser, at a Museums and Heritage show.

I wasn’t very impressed. Though these headphones, which play music or soundtrack to match whatever object or video you are looking at, were well received by the media back then, in my experience the technology was clunky. Other friends who’d been confirmed that the changes between sound “zones” could be jarring, and that it was possible to stand in some places where music from two zones would alternate, vying for your attention.

The experience this time was an improvement. It was by no means perfect: I found the music would stutter and pause annoyingly, especially if I enjoyed the track enough to find myself gently nodding my head. Occasionally the broadcast to everyone’s headphones would pause so that everyone in a room could share a multimedia experience (of the Vietnam war, for example) across all the gallery’s speakers, screens and projectors. These immersive overrides were effective, in much the same way as those at IWM North, but when a track you were enjoying or a video you found interesting was rudely interrupted, you couldn’t help but feel annoyed. I found myself forgiving the designers, however, for this and even the stuttering sound of the headphones, because it all felt resonant with that late-sixties “cut-up” technique.

Where the technology really worked, however, was on two videos that topped and tailed the exhibition. In the first, various icons and movers of the period were filmed in silent moving portraits of their current wrinkled and grey selves. Their reminiscences of the time appeared as typography overlaying their silent, closed-mouth gaze, a little like Barbara Kruger’s work, while over the headphones you heard their voices. The same characters appeared at the end, this time as a mosaic of more conventional talking heads. And for the first time, the interpretation was didactic, as each in turn challenged the current generation to build on their legacy.

For me, one of the highlights was the section on festivals, which invited visitors to take off their headphones, lie back on the (astro)turf and let (another cut-up of) the famous Woodstock documentary wash over them on five giant screens.

The other things I loved were the tarot cards dotted around among the exhibits, which at first glance looked like they might have been designed in the sixties. But then you notice references to things like Tim Berners-Lee and the World Wide Web. You realise these are a subtle form of interpretation, telling a future of the sixties that apparently came true and, for those of us from that future, creating correspondences and taxonomies that connect the events of 1966-70 with today. The V&A commissioned British artist Suzanne Treister to create the cards, based on her 2013 work, Hexen 2.0. And the very best thing about them is that you can buy them (pictured above) in the shop, which must be the first time copies of museum interpretation panels have been made available for purchase.

Of course, they aren’t the only form of interpretation. Apart from the soundtrack, there are more traditional text panels, labels and booklets around the exhibition. But the cards show how cleverly the layering of meaning and interpretation has been created. Many visitors will have passed them by unnoticed, given them a cursory glance or chosen to ignore them, and will have had an entirely satisfactory experience. But for those who paused to study them in more detail, a whole new layer of meaning opened up.

I visited with a sense of duty, to try out a responsive digital technology. But I found so much more to enjoy. This is a brilliantly curated exhibition, so much better than the didactic, even dumbed-down permanent gallery of the new Design Museum, which I visited before Christmas. I urge you to go, if you haven’t seen it yet. It’s only on for another month.

A colleague who had visited the exhibition before told me how depressed it had made him: the optimism of that period seems to have been dashed upon the reactionary rocks of 2016, Brexit and Trump. But I came out in a very different mood.

One of the early messages of the exhibition is that the period was a search for utopia. The final tracks you hear as you walk out (after the video challenge issued by the old heads of the sixties) are Lennon’s 1971 single Imagine and then, brilliantly, Jerusalem.

No, of course they didn’t find the Utopia they were looking for in the sixties, but we could build it…

Could this be … the first decent museum app?


Last week my wife and I went to San Francisco. Our second full day there was mostly spent within SF MOMA, the San Francisco Museum of Modern Art. And for the first time ever, I used a museum/heritage app that actually enhanced my visit.

Part of what made it so successful was the infrastructure that made it easy to download and use. I didn’t have to plan in advance and download it before my visit. I wasn’t even aware of it before I went, but if I had been, I would have been unlikely to download it, because our hotel’s free wifi only allowed one of us to use a device in each four-hour lease period.

We’d started our visit walking through the museum to the opposite entrance to contemplate the Richard Serra sculpture. It was early in the day, the museum was just opening, and there was a team brief on the tiered seating that surrounds the piece. But they moved on and we sat for a moment to contemplate the enormous steel structure (I can’t deny the meditative quality of Serra’s work, or the calming impact it seems to have on the psyche when encountered, but really I sometimes feel “seen one, seen them all”) and to plan our day.

My wife noted a label on the wall directing people who wanted to know more about the art to SFMOMA’s app, and helpfully pointing out that you could log into the museum’s free wifi to download it. I think it said that it was iOS only, but if you didn’t have a suitable device, you could borrow one.

The first pleasure was logging onto the wifi. This was possibly the most hassle-free process I’ve ever encountered on public wifi. The signal was strong (everywhere), reliable and speedy too. The app downloaded quickly, and upon opening gave me three screens introducing what it offered, such as the one below:

It wanted access to my location services (of course), camera and, unusually, my activity (the “healthy living” function of more recent versions of iOS), but having been so pleasantly surprised and satisfied by the process so far, I was very happy to allow all three. All this had taken very little time, but enough time for my wife to have wandered away towards the elevators to begin our exploration of the museum, so I hurried after her, scanning what was on offer from the app as I went.

There’s a highlights function, which includes “Our picks for forty must-see artworks that are currently on view”, a timeline function that enables you to record and share your visit, a section on other “things to do”, and of course the ability to buy tickets, membership etc. At the core of the app are “Immersive Walks”: a range of fifteen- to 45-minute audio tours of the galleries.

Oh no! I’d left my earphones back at the hotel.

But that wasn’t a problem, because as I caught up with my wife by the elevators, I saw a stand stacked high with cases of SFMOMA-orange ear-buds. These were given away free and were of a somewhat disposable quality, but good enough to last the day (and to pass on to my son when we got back from the holiday), with in-line volume controls for ease of use. The thought and effort that SFMOMA put into the infrastructure around the app deserves to be commended.

But let’s get to the meat of the app’s functionality. The key thing here is indoor positioning. I’m guessing it’s achieved through wifi mobile location analytics, but I haven’t confirmed that. I can confirm that it’s pretty accurate, though with a little bit of lag, so it takes a while after walking into a gallery, and then standing still for a moment, before your device can deliver to you the buttons for the content relevant to the artworks in the gallery. Some, but not all, of the artworks are accompanied by a specific bit of media (mostly audio) to offer more in-depth insight into the work. This can include commentary, reviews or snippets of interviews with the artist.
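I haven’t seen inside the app, but the behaviour suggests a simple lookup from a detected gallery zone to the media available there. Purely as an illustration (none of this reflects SFMOMA’s or Detour’s actual code; the zone names, artworks and URLs are invented), a minimal sketch might look like this:

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    artwork: str
    kind: str      # "audio", "video", ...
    url: str

# Hypothetical content index: gallery zone id -> media for artworks in that zone.
GALLERY_MEDIA = {
    "gallery-5-pop-art": [
        MediaItem("Example Artwork A", "audio", "https://example.org/media/a.mp3"),
        MediaItem("Example Artwork B", "audio", "https://example.org/media/b.mp3"),
    ],
    "gallery-6-german-art": [
        MediaItem("Example Artwork C", "audio", "https://example.org/media/c.mp3"),
    ],
}

def buttons_for_position(zone_id: str) -> list[MediaItem]:
    """Return the media buttons to show once the positioning system
    has settled on a zone (hence the lag described above)."""
    return GALLERY_MEDIA.get(zone_id, [])

if __name__ == "__main__":
    for item in buttons_for_position("gallery-6-german-art"):
        print(f"{item.artwork}: {item.kind} -> {item.url}")
```

On that reading, the lag I noticed would simply be the time the positioning layer needs before it commits to a zone.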

I also took an immersive walk. I chose German to Me, a personal exploration of post-war German artists from radio journalist Luisa Beck, in which she shares her reactions to some of the works in the collection and interviews her mother, grandmother and cousin to uncover more about her own German-American identity. As the tour progresses you are guided, not just by Luisa’s spoken directions, but also by the app’s indoor positioning, as shown below.

I have to say, I would have given these galleries the most cursory of glances, had I not been captured by Luisa’s tour. As it was, her (wholly un-sensational) story, and her commentary upon the art engaged me emotionally to a degree I wasn’t expecting. It enhanced my visit like no other app has achieved.

The phone also recorded my “timeline”, my journey through the museum, on-line, so that I can share with others the photos I took, the artworks that caught my attention enough to seek more information from the app, and the tours I went on. As you can see, I spent three and a half hours with the app, walking 3,369 steps (or 1.7 miles). This timeline is the only slightly disappointing aspect of the app – I would have liked to click through this on-line version to listen to some of the media again, now that I am back home, maybe even to be reminded (through the app’s ability to determine location) who made some of the things that I took photos of.

You’ll know that I’m not a massive fan of looking at things through my phone, but this app did well enough to almost convince me otherwise.

The museum had other digital interventions of interest. You might have spotted in my timeline that one of the first things we looked at was a surveillance culture-inspired artwork by Julia Scher that turned the museum into a Responsive Environment, changing according to visitors’ actions.


There was also a fun activity in one of the cafes that allowed you to create your own digital artwork, printing it out on thermal paper instantly, but also linking to a hi-res online version, which I used for the illustration at the top of this post (you will note that those free earbuds are the stars of that piece).

SFMOMA, with their technology partners Detour on the app, and the support of Bloomberg Philanthropies, are doing good things in the digital sphere. If you are there, you should check them out.

Shine On: part two

In the afternoon Graham Festenstein, lighting consultant, kicked off a discussion about using lighting as a tool for interpretation. New technology, he said, especially LED, presents new opportunities, a “new revolution” in lighting. It’s smaller, with better optics and control. And also more affordable! He used cave paintings as an example. Lighting designers could take one of three approaches to lighting such a subject: they might try to recreate the historical lighting which, for a cave painting, would have been primitive indeed, a tallow bowl light, revealing small parts of the painting at a time and with an orange light; it’s more likely, given the needs of the visitor, that they might go for more wide-angle lighting, revealing the whole of the painting at once; or they might light for close-up inspection of the work, to show the mark-making techniques. Traditionally, a lighting designer would have had to choose just one of these approaches. But with the flexibility and versatile control of modern lighting technology, we can do all three things – caveman lighting, wide-angle panorama, and close-up technical lighting.

Graham’s presentation was not the strongest. He explained that he was sceptical about LED lights at first. He recalls a visit to a pilot project at the National Portrait Gallery. His first impressions were disappointing, but then he realised that what he missed about the tungsten lighting was the way it lit the gilded frames, and that the LED lighting was better serving the pictures. He went on to talk about colour, and how the warm lights of the Tower of London’s torture exhibition undermined the theme, but the presentation overall was somewhat woolly.

Zerlina Hughes, of studio ZNA, came next, with a very visual presentation which I found myself watching rather than taking notes. It explained her “toolkit” of interpretive lighting techniques, but I didn’t manage to list all the tools. A copy of the presentation is coming my way though, so I might return with more detail on that toolkit in a later post. One of her most recent jobs looks great, however, and I’m keen to go: You Say You Want a Revolution, at the V&A, follows on from the Bowie show a year or so ago, but with (she promises) less clunky audio technology.

Jonathan Howard, of DHA design, explained that, like Zerlina, “most of us started as Theatre designers.” I (foolishly, I think, in retrospect) passed up an invitation to do theatre design at Central St Martins, and I think I would have been fascinated by lighting design if I had gone, so I might have ended up at the same event, if on the other side of the podium. Museum audiences today are expecting more drama in museums, having experienced theatrical presentations like Les Miserables, theme parks etc. I was interested to learn that in theatre, cooler colours throw objects into the background, and warmer colours push them into the foreground. This is apparently because we find the blue end of the spectrum more difficult to focus on. In a museum space, he says, you can light the walls blue so that the edges of the gallery fall away completely. But he did have a caveat about using new lighting technology. Before rushing in to replace your lighting with LEDs and all the modern bells and whistles, ask yourself:

Why are we using new tech?
Who will benefit?
Who will maintain it?
Who will support it?

Kevan Shaw offered the most interesting insight into the state of the art. He pointed out that lighting on the ceiling has line of sight to most things, because light travels in straight lines (mostly), and we tend to point it at things. So, he said, your lighting network could make a useful communications network too. He wasn’t the first presenter to include an image of a yellow-centred squat cylinder in their slide deck, and they spoke as though we all knew what it was. I had to ask, after the presentation, and they explained that it was one of these. These LED modules slip into many existing lamps or luminaires. They are not just a light source, but also a platform for sensors and a communications device. Lighting, Kevan argues, could be the beachhead of the Internet of Things in museums.

He briefly discussed two competing architectures for smart lighting: Bluetooth, which we all know, and Zigbee, which you may be aware of through the Philips Hue range (which I was considering for the Chawton experiment). He also mentioned Casambi and eyenut; I’m not sure why he thinks these are not part of the two-horse race. He argues that we need interoperability. So I guess he’s saying that the competing systems will eventually see a business case in adopting either Bluetooth or Zigbee as an industry standard.

With our lightbulbs communicating with each other, we can get rid of some of our wires, he argues, but it needs to be robust and reliable. And the secret to reliability is mesh networking: robust networks for local areas. Lighting is a great place for that network to be. That capability already exists in Zigbee (so I think Zigbee is what I should be using for Chawton), but it’s coming soon in Bluetooth. And I think Kevan believes that when it does, Bluetooth will become the VHS of the lighting system wars, and relegate Zigbee to the role of Betamax.
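As an aside on the Hue option for Chawton: part of its appeal is that the bridge exposes a simple local REST API, so a lamp can be driven from a few lines of script. Here is a minimal sketch, assuming the third-party requests library; the bridge address, application key and light id are placeholders, not real values:

```python
# A hypothetical sketch of driving a single Philips Hue lamp over the bridge's
# local REST API - the sort of quick-and-dirty control I have in mind for the
# Chawton experiment. Bridge IP, app key and light id below are placeholders.
import requests

BRIDGE_IP = "192.168.1.50"          # placeholder: your Hue bridge's address
APP_KEY = "replace-with-your-key"   # created by pressing the bridge's link button
LIGHT_ID = 1

def set_light(on: bool, brightness: int = 254) -> None:
    """Turn the lamp on or off and set its brightness (1-254)."""
    url = f"http://{BRIDGE_IP}/api/{APP_KEY}/lights/{LIGHT_ID}/state"
    response = requests.put(url, json={"on": on, "bri": brightness}, timeout=5)
    response.raise_for_status()

if __name__ == "__main__":
    set_light(on=True, brightness=180)   # e.g. warm up one corner of a room
```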

But the really exciting thing is Visible Light Communication, by which the building can communicate with any user carrying a mobile device that has a front-facing camera (and the relevant software installed). He showed us a short video of the technology in Carrefour (mmm, the own-brand soft goat cheese is delicious).

The opportunities for museums are obvious but, he warns, to use it effectively, museums will need resources to manage and get insight from all the data these lighting units could produce. Though he says, optimistically, to his fellow lighting consultants, “that need could be an opportunity for us!”

Finally we heard from Pavlina Akritas, of Arup, who took the workshop in the direction of civil engineering. Using LA’s Broad Museum as an example, she explained how, in this new build, Arup engineered clever (north-facing) light-wells which illuminated the museum with daylight, while ensuring that no direct Los Angeles sun fell onto any surface within the galleries. The light-wells included blackout blinds to limit overall light hours, and photocells to measure the amount of light coming in and, if necessary, automatically supplement it with LEDs. She also talked briefly about a project to simulate skylight for the Gagosian gallery, Grosvenor Hill.

All in all, it was a fascinating day.

This post is one of two, the first is here.

Mulholland on Museum Narratives

Working on the narratives for the Chawton Project, I’m taking a break and catching up on reading. Paul Mulholland (with Annika Wolff, Eoin Kilfeather, Mark Maguire and Danielle O’Donovan) recently contributed a relevant first chapter to Artificial Intelligence for Cultural Heritage (eds Bordoni, Mele and Sorgente).

Mulholland et al’s chapter is titled Modelling Museum Narratives to Support Visitor Interpretation. It kicks off with the structuralist distinction between story and narrative, and points to a work I’ve not read and should dig out (Polkinghorne, D. 1988, Narrative Knowing and the Human Sciences) as particularly relevant to interpreting the past. From this, the authors draw the “narrative inquiry” process which “comprises four main stages. First, relevant events are identified from the historical period of interest and organised into chronological order. This is termed a chronicle. Second, the chronicle is divided into separate strands of interest. These strands could be concerned with particular themes, characters, or types of event. Third, plot relations are imposed between the events. These express inferred causal relations between the events of the chronicle. Finally, a narrative is produced communicating a viewpoint on that period of history. Narrative inquiry is therefore not just a factual telling of events, but also makes commitments in terms of how the events are organised and related to each other.” Which is as good and concise a summary of the process of curatorial writing as I am likely to find.
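Purely to fix those four stages in my own head, here is a minimal sketch (mine, not the authors’) of how a chronicle, its strands and its plot relations might be held as data; the events are toy examples:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    date: int           # year, for simplicity
    description: str

@dataclass
class Chronicle:
    """Stage 1: the relevant events, kept in chronological order."""
    events: list = field(default_factory=list)

    def add(self, event: Event) -> None:
        self.events.append(event)
        self.events.sort(key=lambda e: e.date)

chronicle = Chronicle()
e1 = Event(1809, "The Austen family moves to Chawton")     # toy examples
e2 = Event(1811, "Sense and Sensibility is published")
chronicle.add(e2)
chronicle.add(e1)

# Stage 2: strands -- named subsets of the chronicle (themes, characters...).
strands = {"Austen's writing life": [e1, e2]}

# Stage 3: plot relations -- inferred (here, causal) links between events.
plot_relations = [(e1, "enables", e2)]

# Stage 4: a (very crude) narrative communicating a viewpoint on the period.
for a, rel, b in plot_relations:
    print(f"{a.description} ({a.date}) {rel}: {b.description} ({b.date}).")
```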

There’s another useful summary paragraph later in the document. “When experiencing a museum exhibition, the visitor draws relationships between the exhibits, reconstructing for themselves the exhibition story (Peponis 2003), whether those relationships are, for example, thematic or chronological. The physical structure of the museum can affect how visitors perceive the exhibition narrative. Tzortzi (2011) argues that the physical structure of the museum can serve to either present (i.e. give access to the exhibition in a way that is independent from its underlying logic) or re-present (i.e. have a physical structure that reinforces the conceptual structure of the exhibition).” Tzortzi there is another reference I’ve not yet discovered and may check out.

What the paper does not do, however, is make any reference to emotion in storytelling. The authors seem to leave any emotional context to the visitors’ own meaning making. The chapter includes a survey of current uses of technology in museums, and academic experiments including virtual tour guides and opportunities for visitors to add their own interpretations and reminiscences, as well as web-based timelines etc.

But digital technology gives us the opportunity (or need) to break down cultural heritage narratives even further, and an earlier (2012) paper by (mostly) the same authors, Curate and Storyspace: An Ontology and Web-Based Environment for Describing Curatorial Narratives, describes a system for deeper analysis. (Storyspace turns out to be a crowded name in the world of writing tools and hypertext, so eventually the ontology and Storyspace API became Storyscope.) The first thing that the ontology brings to the table is that

a curatorial narrative should have the generic properties found in other types of narrative such as a novel or a film

So the authors add another structuralist tool, plot, to the story/narrative mix. “The plot imposes a network of relationships on the events of the story signifying their roles and importance in the overall story and how they are interrelated (e.g. a causal relationship between two events). The plot therefore turns a chronology of events into a subjective interpretation of those events.” But using the narrative inquiry process “the plot can be thought of as essentially a hypothesis that is tested against the story, being the data of the experiment.”

I like this idea. But it’s worth distinguishing between the two uses of the word “interpretation” in cultural heritage. The first use, familiar to my archaeologist colleagues, describes the process of building an understanding of an aspect of the past from the available evidence. The second, more familiar to my museum and heritage site colleagues, describes the process of explaining the evidence to non-professional visitors. At its very best, the museum/heritage site form of interpretation will resemble and guide visitors through the process of inquiry that builds an understanding of the evidence on display. But most of the time the second form of interpretation more closely resembles storytelling. That’s not a fault or failure of my museum/heritage site colleagues; most visitors are time-poor in story-rich environments. But digital technology has the potential to allow museum and heritage site interpretation to more closely resemble the first use of the word.

What digital technology offers, is the opportunity for brave curators to offer alternative plots, or theses, and test them in a public arena, rather than just through a peer review process. Or even to create plots procedurally by following the visitors’ path of attention between objects, maybe discovering plots the curator had not imagined.

The two experiments that the authors describe go some way towards this, but their dry ontology misses an emotional component. The event ontology could surely include an authorial opinion on whether the narrative element suggests a simple emotional response, even one as simple as hope or fear (I sketch what that might look like below); instead, “If the tag represents an artist, then events are used to represent, for example, artworks they have created, exhibitions of their work, where they have lived, and their education history.” Dry, dry facts… There is the tiniest nod towards, if not emotion per se, then some sort of value, in their brief discussion of theme:

Theme is also related to the moral point of the story. This could be a more abstract concept, such as good winning through in the end, which serves to bind together all events of the story.
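Going back to that missing emotional component, here is a hypothetical illustration (not part of the authors’ ontology) of what an event record might look like if it carried a simple authorial emotion tag alongside the dry facts:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArtistEvent:
    # The kind of dry facts the ontology already records...
    artist: str
    event_type: str          # "artwork_created", "exhibition", "residence", ...
    year: int
    detail: str
    # ...plus a hypothetical extension: a simple authorial emotional cue.
    suggested_emotion: Optional[str] = None   # e.g. "hope", "fear"

event = ArtistEvent(
    artist="Example Artist",                                 # invented example
    event_type="exhibition",
    year=1968,
    detail="First solo show closes early after protests",
    suggested_emotion="fear",
)
print(event)
```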

Given that they say “Narratives are employed by museums for a number of purposes, including entertainment” they haven’t given much time to what makes narratives engaging. There is hope however. In their conclusion, they do say “Other narrative features such as characterisation and authorial intent could potentially be foregrounded in tools to support interpretation.”

 

P.O.R.T.U.S is go!

A week or two back, I had an interesting conversation with my supervisor, which I didn’t think I should mention on-line until, today, he invoked the “inverse fight club rule”. So I can now reveal that P.O.R.T.U.S stands for Portus Open Research Technologies User Study – yes, I know, as Graeme said, “recursive-acronym-me-up baby.” This isn’t the Portus Project, but it does ride on the back of that work, and (we hope) it will also work to the Portus Project’s benefit.

P.O.R.T.U.S is a small pilot project to explore better signposting to open research, so (for example) people interested in the BBC documentary Rome’s Lost Empire (which coincidentally is repeated TONIGHT folks, hence my urgency in getting this post out) might find their way to the Portus Project website, the FutureLearn MOOC, the plethora of academic papers available free through ePrints (this one for example) or even raw data.

Though the pilot project will use the Portus Project itself as a test bed, we’re keen to apply the learning to Cultural Heritage of all types. To which end I’m looking to organise a workshop bringing together cultural heritage organisations, the commercial companies that build interpretation and learning for them, and open source data providers like universities.

The research questions include:

  • What are the creative digital business (particularly but not exclusively in cultural heritage context) opportunities provided by aligning diverse open scholarship information?
  • What are the challenges?
  • Does the pilot implementation of this for the Portus Project offer anything to creative digital businesses?

The budget for this pilot project is small, and that means the workshop will have limited places, but if you are working with digital engagement, at or for cultural heritage sites and museums, and would like to attend, drop me a note in the comments.

Petworth Park and Pokemon too

Yesterday’s post was timely, it turns out, because today, Pokemon Go was released in the wild. I downloaded it and caught my first two Pokemon in the Great Hall at Chawton, waiting for a meeting that I’ll write more about tomorrow. But after that meeting I was off down to Petworth to have a go with the new(ish) Park Explorer.


The Park Explorer is one of the outputs of a three-year-long archaeology project, exploring what’s under the Capability Brown landscape that survives today. I have some responsibility for the way it works. When my colleague Tom explained his plan to build a mobile application, I dissuaded him. There is little evidence that many people download apps in advance of their visit to heritage sites. And even fewer wish to deplete their data allowance on the mobile network to download one on site.

Together, we came up with an alternative – using solar-powered Info-points to create wifi hotspots around the park that could deliver media to any phone capable of logging on to wifi and browsing the web. Though in this case it’s not the World Wide Web, but a series of basic webpages offering maps, AV, etc. We’re running this installation as a bit of an experiment to gauge demand: to see, if it’s offered, how many people actually log on.
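The Info-points themselves are off-the-shelf commercial kit, but conceptually each one boils down to something like the sketch below: a small local web server on the hotspot handing out a folder of pages and media to anyone who connects. This is illustration only; the folder name is invented and the real units (which answer at 10.0.0.1, as described later) will differ.

```python
# A minimal stand-in for an Info-point: serve a folder of pages and media to
# anyone on the hotspot's local network. The real Petworth units are commercial
# kit; the content folder here is a hypothetical placeholder.
import functools
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

CONTENT_DIR = "./park-explorer-pages"   # hypothetical folder of maps, AV, HTML
handler = functools.partial(SimpleHTTPRequestHandler, directory=CONTENT_DIR)

if __name__ == "__main__":
    with ThreadingHTTPServer(("0.0.0.0", 8080), handler) as server:
        print(f"Serving {CONTENT_DIR} on port 8080")
        server.serve_forever()
```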

Pokemon Go demonstrated why the technology might be useful. With the app newly downloaded on my phone I, of course, wanted to try it out in Petworth’s pleasure grounds. I’d guessed right: the garden’s Ionic Rotunda and Doric Temple are both Pokestops. But the mobile signal is so weak and patchy (on O2 at least) that the game could hardly log on, let alone do anything when I got within range. After a frustrating few minutes I gave up and returned to the local wifi.

That crummy phone signal is one of the reasons we went for solar-powered local wifi. Once I logged on I was soon listening to the voice of my colleague Tom as he explained some of the archaeology of the garden, watching an animated film of the development of the park, and scrubbing away a photo of the current three-person gardening team and their power tools to reveal a black and white photo of the small army of gardeners that used to work here.

All of this was very good. But there are some issues that I think need to be addressed if the idea is to catch on. First of all, finding the wifi signal and logging on isn’t as intuitive as I’d hoped: your browser needs to be pointed at 10.0.0.1 to find the home page. The home page design leaves something to be desired. The floating button to change text size seems an afterthought that annoyingly obscures the text it’s trying to clarify. Navigation isn’t intuitive (no obvious way forward from the welcome splash pictured above, for example) or that well organised – I’d hoped that I’d be offered the media closest to my position (as identified by the hotspot I was logged into), but the browse button just led to a list of things. Switching to the map view was easier, but it showed the design lacked a degree of responsiveness – see below how the word “Map” is partially obscured by the tile with the actual map on it. The pins that link to different media suggest that it’s good to be standing in particular places to view that media, but on the few that I tried around the pleasure grounds, there seemed to be no discernible benefit to being in the right spot. In the end I settled under a spreading oak to sit and work my way through what was on offer.

One feature that worked well for comparing old and new, and seeing change over the centuries, was the scrub-away photo. Even here, though, there was a fault in the responsiveness of the design: if I turned the phone into landscape mode, the picture became full screen and I lost the ability to reset it.


I imagined how good it would be, if it looked and felt (and responded) like the National Trust’s current website. Maybe, with a bit of work, it can.

More work would mean more investment though, so first of all we need to interrogate the system’s solar-powered servers, and see how many people are giving it a try.

A little bit of the history of interactive storytelling at Chawton


I spent yesterday morning at Chawton, locating and counting plug sockets, so I’ll know my limitations as I design whatever the experience there will be next March. The visit reminded me that I had meant to write here about a previous digital interpretation experiment at Chawton.

Back then, in 2005, the Chawton House Library was not widely open to the public. Primarily a centre for the study of women’s literature, one could argue that the visiting academics were also heritage visitors of a sort, and the house and gardens also welcomed some pre-booked visiting groups, such as the Jane Austen Society of America and local garden societies. In their conference paper, a team from the universities of Southampton and Sussex describe how, looking for “curators” to work with, they co-opted the trust’s Director, Estate Manager, Public Relations Officer, Librarian and Gardener. All these people may have taken on the role not just of curator, but also of guide to those visiting academics and groups. The paper attempts to describe how their tours interpret the place:

visitors’ experience of the house and its grounds is actively created in personalized tours by curators.

“House and grounds are interconnected in a variety of ways, e.g. by members of the family rebuilding the house and gardens or being buried in the churchyard. Thus artifacts or areas cannot be considered in isolation. There are many stories to be told and different perspectives from which they can be told, and these stories often overlap with others. Thus information exists in several layers. In addition, pieces of information, for example about a particular location like the ‘walled garden’, can be hard to interpret in isolation from information about other parts of the estate – there is a complex web of linked information.[…]

“Curators ‘live the house’ both in the sense that it is their life but also that they want to make it come alive for visitors. The experiences offered by Chawton House are intrinsically interpersonal – they are the result of curators interacting with visitors. Giving tours is a skilled, dynamic, situated and responsive activity: no two tours are the same, and depend on what the audience is interested in. They are forms of improvisation constructed in the moment and triggered in various ways by locations, artefacts and questions.”

Tours are a brilliant way of organising all those layers of information, and I’m sure a personal tour from any one of the curators they identified would have been excellent. But the problem comes as soon as you try to scale, or mass produce, the effect. As I said at a conference I presented at a couple of weeks ago (I’m reminded I should write about that too), people, even volunteers, are an expensive resource, and so only the smallest places can afford to give every visitor a guided tour experience. Even then, individuals or families have to book on to a tour, joining other people whom they don’t know, and whose interests they don’t necessarily share. The guided tour experience gets diluted, less personal, less tailored to your interests. Which is when you start getting people saying they would prefer to experience the site by themselves, rather than join a tour. Of course some tour guides are better at coping with these issues than others, but visitors are wary of taking the risk with a guide they don’t know, even if they can recount brilliant guided tour experiences they have had elsewhere.

The project written about in the paper had two sides, one was to try and produce content for schools, but the other was of particular interest to me:

“The curators are interested in being able to offer new kinds of experience to their visitors. We aim to find out what types they would like to offer, and help to create them. There is thus a need for ‘extensible infrastructure’ based on a basic persistent infrastructure that supports the creation and delivery of a variety of content.”

And four questions they ask themselves are also of particular interest:

  • “How can we enable curators to create a variety of new experiences that attract and engage different kinds of visitors, both individuals and groups?
  • “How do we engage curators in co-design of these experiences?
  • “How can curators without computer science backgrounds contribute to the authoring of content for the system?
  • “How do we create an extensible and persistent infrastructure; one that can be extended in terms of devices, content and types of experience?”

At the time of writing the paper, they had conducted a workshop with their chosen curators, using a map with 3D printed features. Although “use of a map in the first instance may have triggered somewhat different content,” they discovered that “Eliciting content from curators is most naturally and effortlessly done in-situ.” (Which is my plan – I’m in the process of fixing a date with one of Chawton’s most experienced tour guides.)

I particularly liked the observation that “Listening to them is much more lively and interesting than listening to professionally spoken, but often somehow sterile and dull audio tapes sometimes found in museums and galleries.” So enthusiastically did the team connect with the curators’ presentation that they decided to record the tours and edit them into the narrative atoms that were delivered by their infrastructure. That infrastructure was not the subject of the paper, but if I recall correctly it was GPS-based, running on “Palm Pilot”-style hardware.

More importantly, the most pertinent conclusion was that the curators were best placed, not just to select the narrative atoms from the recorded materials but also “sort them into themes and topics, so that the system can cater for people with different broad interests, for example landscape, flora and fauna, or how Jane Austen’s writing reflects the environment. This necessitates a learning process, which must build on existing practices and over time develops new practices based on experience and reflection.”

Chawton

A dreary day to photograph a fine building, but the meeting made up for the weather!

Just a quick note today to reflect on the meeting I had this morning with Gillian Dow, Executive Director of Chawton House Library. This place has been preying on my thoughts since I visited for the last Sound Heritage workshop. In fact, my friend Jane and her colleague Hilary had suggested last year that it might be the perfect place to try out my Responsive Environment ideas. But my visit for Sound Heritage made me think more and more that they were right.

  • The place has many interesting stories, but ones that can conflict with each other. Do people want to know about its centuries as a residence for the Knight family, its connections with Austen, and/or its modern-day research into early female writers?
  • It’s a place that hasn’t been open to the public long (this year is its first full season welcoming days-out visitors) and is still finding its voice.
  • It’s relatively free of “stuff” and has modern display systems (vitrines and hanging rails), which means that creating the experience should not be too disruptive.
  • It has pervasive wi-fi (the library’s founding patron, Sandy Lerner, co-founded Cisco Systems), which will make the experiment a lot easier and cheaper to run, even though I’ve decided to Wizard of Oz it.

So today I explained my ideas to Gillian and, I’m pleased to say, she liked them. We’ve provisionally agreed to do something in the early part of 2017, before that year’s major exhibition is installed. I brought away a floor plan of the house, and I have just this moment received a copy of the draft guidebook, so I can start breaking the story into “natoms”. It looks very much like it’s all systems go!

I have to say I’m very excited.

(But right now, I’m meant to be taking the boy camping, so I’ll leave it there…)

The Big Why #IdeatoAudience

Yesterday, I went to Digital: From Idea to Audience, a small conference (more of a large workshop actually) put together by Royal Pavilion and Museums, Brighton and Hove, with funding from Arts Council England. I might have enjoyed a trip to Brighton, but this actually took place in central London, just across the road from the BBC.

The programme was put together by Kevin (not that Kevin) Bacon, Brighton’s Digital Development head honcho. (By the way – I’m going to quote from this post in my forthcoming presentation at Attingham.) Kevin stated at the outset that the day didn’t have a theme as such, but was rather a “nuts and bolts” conference, a response to many of the questions he had been asked after making presentations elsewhere. He hadn’t briefed the speakers, only chosen them because he felt they might have experiences and learning of use to people working on digital projects.

But if a united theme came out of the day, then it was Keep Asking Why?

Kevin kicked off the day talking about his work at Royal Pavilion and Museums, Brighton and Hove, a number of sites across the city (including the Pavilion itself, Preston Manor, the Booth Museum and both Brighton and Hove museums) that attract around 400,000 visitors a year. They hold three Designated collections (of national importance). He wanted to talk about two digital projects, one of which was (broadly) unsuccessful, and the other (broadly) successful.

The first was Story Drop, a smartphone app that took stories from the collection out into the wider city. GPS enabled, it allowed people to take a tour around the city based on an object from the collection. Get to a location and it tells you more about it, and unlocks another object. As an R&D project, it worked. Piloting it, they had very favourable responses. So they decided to go for a public launch in January of 2014. The idea being that lots of local people would have got a new phone for Christmas, and be keen to try out a new app.

The launch turned out to be a damp squib. The weather was partly to blame; January 2014 was one of the wettest on record. But even when the streets dried out, take-up was not massive. Kevin said to me during the break that maybe only hundreds of people have downloaded the app to date, two years later. He showed a slide detailing some of the reasons why people weren’t using it.


These reasons chimed with my own research. It wasn’t an unmitigated failure – people do love it, but only a very small number of people. So, he said, think about why people will use your digital project.

Which is the approach he took for the redevelopment of the museum’s website, shifting from designing for demographics to designing for behaviours (motivations, needs, audiences). And that was far more successful: a 23% increase in page views and a 230% increase in social shares.

Then, Gavin Mallory from CogApps took the floor to talk about briefs. He has already put his presentation on Slideshare. As experienced providers to the cultural heritage industry, they’ve seen a lot of briefs. Some good, some woolly, or overly flowery, too loose, too tight, too recycled, or as Giles Andreae would have it, “no [briefs] at all!” I must admit, I’ve been guilty of a few of those.

After lunch Graham Davies, Digital Programmes Manager at the National Museum of Wales, asked (emphatically) Why? Or rather, why digital? I think the title of his session should have been “From Digital Beaver, to Digital Diva”, which is something he said, though he didn’t call it that. It was a really useful set of challenges to make when somebody says “we need an app” or “an iPad to do this.”

 

I’m running out of time, so I’ll finish with just one quote from the final speaker, Tijana Tasich, who has worked at Tate and is currently consulting for the South Bank Centre. Talking about usability testing, she said, “we used to test just screens and devices, but with iBeacons etc. we are increasingly testing spaces.”

What PhD supervisors are for

I had a great chat with my supervisor on Thursday, after helping out with a Masters seminar. As regular readers may have worked out, I’ve been having a great deal of trouble trying to get a coherent, testable design out of my half-formed ideas and lofty ideals.

The problem was trying to think of a cheap way to test some of the theory I’ve come up with. I’d got hung up on trying to think of a way to track visitors round a site and test their reactions to that. Until I solved that, I was handwaving the issues of breaking the story into natoms, and balancing the conflicting needs of multiple visitors in the same space. Those two problems both felt more within my comfort zone. The problem is that I’m not a technologist; that bit is so far out of my comfort zone that I’d need to enlist (or pay for) one. On top of that, the tech itself isn’t that cheap – getting a wifi network into some of the heritage places I know, with their thick stone walls and sheer scale, isn’t about buying just one wifi router.

I’d mentioned the other problems (particularly the one of negotiating conflicting needs) in the seminar. (The students had been reading about a variety of museum interpretation experiments for their “homework”, and we discussed the common issue that many of the experiments focussed on a visitor in isolation, and hadn’t thought enough about multiple users in the same space.) Afterwards I spent twenty minutes with Graeme, my supervisor, in his office. I felt he’d finally got what I’d been trying to say about a “responsive” environment, and his interest was particularly focused on the two issues I’d handwaved. We talked about low-tech ways of exploring both of those, and of course THAT’S what I should be doing, not worrying about the tech. These are both things I can do (I think!) rather than something I can’t.

So by the end of our chat, when Graeme had to return to his students we’d worked out the rudiments of a simple experiment.

  • What I need is a relatively small heritage site, but one with the possibility of lots of choices about routes, lots of intersections between spaces. What Hillier calls a low depth configuration (that last link is to a fancy new on-line edition of the book, by the way. It’s worth a read).
  • I need to work with the experts/curators of that site to “break” the stories. Break is a script-writing term, but it feels particularly appropriate when thinking about cutting the stories up into the smallest possible narrative atoms. (Although maybe “natomise” is better!)
  • Then I need to set up the site to simulate some of the responsiveness that a more complex system might offer. Concealed Bluetooth speakers for example, or switches like these that can be controlled by Bluetooth.
  • Finally, rather than try and create the digital system that tracks visitors and serves them ephemeral natoms, I can do a limited experiment with two or more humans following visitors around and remotely throwing the switches that might light particular areas of the room, play sounds or whatever other interventions we can come up with. The humans take the place of the server, and when they come together, negotiate which of their visitors gets the priority. Graeme suggested a system of tokens that the human followers could show each other – but the beauty of this concept is that the methods of negotiating could become part of the results of the experiment! (I’ve sketched what that server role might look like after this list.) The key thing is to explain to the participants that the person following them around isn’t giving them a guided tour; they can ask questions of him/her, but s/he isn’t going to lead their experience.
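For my own notes, here’s a rough sketch of the server role those human followers will be standing in for. The token rule is just one possible negotiation scheme, and everything here (names, rooms, interventions) is invented:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Follower:
    """A human 'server' shadowing one visitor in the Wizard-of-Oz experiment."""
    visitor: str
    tokens: int = 3        # spend tokens to win priority at an intersection

@dataclass
class Room:
    name: str
    interventions: list = field(default_factory=list)   # light/sound cues

def negotiate(followers: list[Follower], room: Room) -> Follower:
    """When followers meet in the same room, the one bidding the most tokens
    triggers the room's intervention for their visitor; the bid is spent."""
    bids = {f.visitor: random.randint(0, f.tokens) for f in followers}
    winner = max(followers, key=lambda f: bids[f.visitor])
    winner.tokens -= bids[winner.visitor]
    print(f"{room.name}: {winner.visitor}'s follower wins and plays "
          f"'{room.interventions[0]}' (tokens left: {winner.tokens})")
    return winner

if __name__ == "__main__":
    hall = Room("Great Hall", ["audio: a reading from the guidebook"])  # invented
    negotiate([Follower("Visitor A"), Follower("Visitor B")], hall)
```

If the experiment works, whatever negotiation rules the followers actually settle on become data in their own right, and a starting point for automating the whole thing later.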

So, now I have a thing that is possible to do, with minimal help and a minimal budget. And it’s a thing that I can clearly see has aims that come out of the research I’ve done, and results that inform the platonic ideal responsive environment I have in my head. If it works, it will hopefully inspire someone else to think about automating it.

That’s what supervisors are for!