A new, easy-to-read guide

In a pleasant surprise today, a new book dropped through my letterbox. Interpretation in a Digital Age, by Paul Palmer and Neil Rathbone, is a concise, easy-to-read introduction and guide for heritage professionals starting digital projects in their places. It promises "objective and practical guidance", and lives up to that promise.

It's an easy read, and neatly sums up the history of handheld guides in heritage sites as it walks the reader through concepts like: Bring Your Own Device; native, web and hybrid apps; media creation; webcams; and locational and proximity triggering. Palmer and Rathbone conclude a useful chapter on accessibility and inclusiveness with a section on Mindfulness, wherein they argue we "need to develop more skill in the psychology of storytelling using digital media rather than blame the media". A sentiment with which, given the subject of my study, I can only agree.

There are chapters on using technology outdoors, understanding wifi, compliance and intellectual property, and project management. An optimistic chapter near the end explores some of the possibilities that "the digital toolbox" might enable, and the book ends with a jargon-busting glossary that reveals the intended audience: museum and cultural heritage professionals who are not digital experts but are thinking of commissioning something and don't want to be fast-talked by potential suppliers.

It's not an academic work; it doesn't have references to other texts. Rather, it is based on the practical experience of the two authors. So it's very good, if not technically detailed, on the how, and also offers practical advice on project management that will outlast some of the technologies that are now current, but it lacks the why. It's not their intention (I think) to sell the concept of digital technology to heritage sites; rather it's a response to heritage sites looking to see what is possible. Indeed, in the introduction the authors refer to the "Gartner hype cycle", the tendency to over-estimate the potential of technology, and then to be disappointed by its limitations. Given that more and more of the evidence I'm seeing suggests only a maximum of five percent of heritage visitors use apps or other mobile technologies, and that I heard recently that mention of an app is currently likely to kill an HLF bid stone dead, I'm still questioning whether it's possible to build a business case for the creation of digital content, let alone the purchase of hardware etc.

CultureGeek 2017 and Digital Customer Experiences

Better late than never: it’s a month since I went to two events in one week, and I’ve been so busy since then that I haven’t had time to write them up. Those of you who were following my Twitter stream live may have some idea what excited me at the time, but for anyone else who might be interested, and more importantly for my own reflection, let me cram my thoughts together into this one post on both events.

We’ll start with Culture Geek in Kensington, which follows on from the M&H show, which I didn’t attend this year. This was the expensive one, with speakers flown in from other countries. I was pleasantly surprised to meet my colleague Alex there, so we were able to reflect a little between sessions, and there’s one thing especially we came away wanting to do, but more on that later. The conference touched on everything digital, including in-visit technology, but of course also plenty of on-line stuff. The first speaker was from that side of the field: Kimberly Drew, social media manager at New York’s Met museum. She drew on her experience as a person of colour doing a history of art degree, and how her life changed during an internship at Harlem’s Studio Museum, when a whole side of black art was revealed to her which had not been covered in her white-centric education.

Keen to share her epiphany, she and a friend started a Tumblr blog on Black Contemporary Art. Now that blog has over 200,000 followers, and she has unintentionally become “a poster child for diversity.” The Met weren’t looking for a “diversity champion” when they advertised the role of Social Media Manager (I asked her afterwards), but you can see why they snapped up such a dynamic, self-motivated blogger, with experience of, and reputation for, reaching out and expanding audiences.

Her work for the Met isn’t all about black art either. She sees social media as the Met’s fourth space, alongside the 5th Avenue building, The Cloisters and the Breuer. Her role there is to share 5000 years of art; connect users with the collection; highlight the ways the museum serves art and art history; and to “humanise” the museum and create invitations to participate. This last is the objective that benefits, in theory, from her previous experience, but of course they all do. Reflecting on her talk, what comes across most is authenticity. It’s a challenge for cultural heritage organisations to match that authenticity of enthusiasm for both the medium and the message: someone who lives and breathes social media and the cause.

Kimberly is a young woman who inspires, and shows us how to do it, and the organisation she works for is a springboard, not a water-slide forcing her in a corporate direction. She’s one to watch.

The most interesting presentation for my research was given by Joe McFadden of the Royal Opera House. They are trying a number of digital experiments as they redevelop one of their spaces, known as the Piazza, with the intention of increasing the number of daytime visitors. Currently it attracts only tens of thousands annually, which for a central London space is very few. Their work is in three broad areas: Transactional – things like ordering your interval drinks online, and paying with Apple Pay; Experiential – things like AR with HoloLens and VR (check out the work of the VOID) and post-show video on demand; and, Informational – things like personalised wayfinding (which made my ears prick up, but sadly, when I quizzed Joe afterwards, he said they were struggling with the contending needs of different visitors at the same decision point, so it might not happen). We also talked about their current testing of an Alexa skill, so that Amazon Echo users could quiz their “household assistant” about what’s on and even, possibly, buy tickets.

Which tied in with a fascinating presentation I saw later in the week at the Academy of Marketing’s Digital Customer Experiences event. There Prof. Merlin Stone of St Mary’s University talked about work he is doing on Baby Boomers and the health service. These are “the largest generation of older people the world has seen”, but also the healthiest and longest living, the richest, most educated, etc., etc. Though it’s early days in the voice-first market, he sees signs that they are also likely to be enthusiastic users of Alexa and other home voice assistants, and may well expect services (he was talking about health, but it applies equally well to opera and heritage, where baby boomers are currently the core market) to be provided by voice-first platforms.

Back to CultureGeek, Tim Wood of the Ballet Rambert showed us some simple online stuff that had proven surprisingly popular – live streaming of rehearsals. Not fancy dress-rehearsals, but studio work, the repetitive practice of moves and blocking. This is what set Alex and me off on a reverie about making a “slow TV” livestream event of a voyage down the length of the river Wey. One day….
Apart from those presentations at CultureGeek, there was interest as well in Patricia Buffa’s discussion of e-marketing the Fondation Louis Vuitton to Chinese tourists. The Chinese market isn’t a big one for my sector yet; they are mostly urban tourists, ticking off the iconic sites. But if (when?) it spreads into the countryside and independent travelling, there’s stuff we can learn here: the importance of Weibo/WeChat; finding Chinese celebrity advocates; doing exhibitions in Chinese partner locations; and, interestingly, the ubiquity of the QR code – “in China your QR code is your business card”.

We also got insight from the Science Museum’s use of Kickstarter to fund the rebuilding of Eric, Britain’s very first robot. We were shown a really interesting content management system created by MIT, and heard about building digital systems for a City of Culture in Hull. There were also some lovely experiments in mixed reality from the National Theatre, including a VR Alice in Wonderland that the viewer experienced sitting on a toilet, and Draw Me Close, a VR opera that puts the audience in the naively drawn world of five-year-old Jordan. I’m not sure how sustainable the business model of this experience might be: the cast outnumbers the audience (of one), so that as the virtual Mum hugs you, or tucks you up in bed, a physical cast member also does it to you, to make a fully sensory experience. It’s the closest we’ve come yet to the Ractors of The Diamond Age.

The Digital Customer Experiences event was more commercial (after all, it was hosted by the Academy of Marketing at the Direct Marketing Institute). I had been invited to give a presentation, the abstract for which I posted a few weeks back. Apart from Professor Stone, whom I spoke about above, Dr Julia Wolny introduced the day with an overview of all the points in the customer “life cycle” where AI has growing potential.

Ana Canhoto gave a very interesting presentation about the conflicting attitudes to tracking and personalisation. As one respondent told her, it’s:

… creepy. But, then, it is just also very useful.

Dr Wolny returned to talk about her research into wearables, and the quantification of the self. As a recent wearer of an Apple Watch, which I am using to incentivise my own movement, I was very interested in what she had to say. However, based on her findings, I’m not sure I’m typical. Women are more likely than men to track their fitness, but men are more likely to share their latest achievements. (I am not.)

But perhaps the most intriguing presentation was from Dr Fatema Kawaf – she presented a research technique I had not heard of before, but one I think may be valuable for evaluating heritage experiences. It’s called the Repertory Grid, and as the linked article shows, it comes out of psychology, developed as a method to help individuals unveil their personal constructs. As Kawaf demonstrated though, it enables participants to use their own words to construct their understanding of experiences too. Kawaf was thinking about the retail experience, but I wonder if it’s ever been applied to heritage?
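To fix the idea in my own head, here’s a toy sketch of what a repertory grid looks like as data. Everything here is invented for illustration – the sites, the bipolar constructs and the ratings – and the “most divisive construct” measure is my own crude addition, not part of the method itself:

```python
# A toy repertory grid: columns are the "elements" (here, imagined
# heritage visits), rows are the participant's own bipolar constructs,
# each rated 1-5 against every element.
elements = ["castle", "living museum", "audio-tour gallery"]

# (emergent pole, contrast pole) -> one rating per element, in order
grid = {
    ("felt like time travel", "felt like a classroom"): [4, 5, 2],
    ("free to wander", "guided every step"): [5, 3, 1],
}

def most_divisive(grid):
    """The construct whose ratings vary most across the elements - a
    crude proxy for which personal construct best discriminates."""
    return max(grid, key=lambda construct: max(grid[construct]) - min(grid[construct]))

print(most_divisive(grid))  # ('free to wander', 'guided every step')
```

The point, as I understand it, is that the constructs come from the participant, not the researcher, which is what makes it attractive for heritage experiences.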

Building a story in Star Wars Identities

A Stormtrooper marching band? The exhibition attempts to illustrate different values with illustrations like this.

It was Father’s Day last weekend and, as a treat, my family took me to the Star Wars Identities exhibition at the O2 in Greenwich. I was interested for a number of reasons, not the least of which was that, being ten in 1977, I was (am) a massive fan of Star Wars. But one of the other reasons was the idea that visitors build their Star Wars identity as they go around the exhibition. This seemed to me to be a large-scale, upfront attempt to personalize a cultural heritage visit. (And yes, Star Wars is cultural heritage now; I’m sure I saw other movies when I was ten, but I can’t recall what they were.)

The RFID tag that visitors use for the interactives

The mechanics of this personalization were wristbands or, if you were latex intolerant, “credit” cards, with RFID (I’m guessing) chips, and nine (not ten as advertised) stations around the exhibit where you could make choices that defined your Star Wars identity. The content of the show consisted of props, models, costumes, concept sketches and some original art, mostly from the first six movies (though BB-8 and a couple of props were squeezed in to represent the latest phase of production), with two streams of interpretation. One stream interprets the design of characters in the movies, and the second is a sort of “science of Star Wars” strand, with basic interpretation of things like genetics. Some of all this is delivered with traditional text panels, and some is aural, delivered by an IR-activated headset. You are given a “medallion”-style unit to hang around your neck and hook an earpiece over one ear. Then, stand in the right place in front of a panel or AV, and the relevant sound is beamed to your unit in a choice of languages. All you need to do is control the volume… and make sure you don’t turn away from the beam, or cross your arms over the receptor, or let anybody tall stand in front of you, cutting off the beam – all of which will cut out the sound.

It’s worth pointing out that the text comes in English and French. And that may betray the exhibition’s 2012 origins at the Montreal Science Centre. That of course explains where the science interpretation strand comes from, and why objects and stories from The Force Awakens feel shoe-horned in. One can’t argue with most of the interpretation – seeing how characters like Yoda developed over time was interesting – though the science was a bit basic, and its connections with the Star Wars story questionable (the exhibition suggests Force sensitivity is a genetic trait). The stuff on personality felt like just one of many different models of personality types that, despite five post-doc academics advising on it, reads like it’s been cribbed from a dodgy self-help book. Interestingly, the personality test was the only interactive station that wasn’t a simple choice – visitors had to answer a number of questions before it revealed their personality profile.

When I started the experience, I was looking forward to discovering what my Star Wars identity would be, but three or four interactives in, when I realized that most of the stations were offering choices rather than revelations, I decided to rush back to the start and remake those choices – because I’ve known since I was ten what my Star Wars identity actually was: the son of Grand Moff Tarkin! Given that the character described in the link was mostly made up of my direct choices, I am of course very pleased with the result. I was curious to see what my personality test said about me (or rather, about my Star Wars identity). Click on each of the buttons below the biography on the page I linked to, and it highlights the bits of text in the biography that were chosen by your answers. So to reveal the personality results, all I needed to do was click that button. The highlighted text says:

People often tell me I’m a generally adventurous and curious person, I also tend to be energetic and social.

… which suggests I was really getting into character when I was answering those questions, because that doesn’t sound like me at all 🙂

Actually the interactive I enjoyed most was Events. Touch your RFID bracelet to the receptor and a random life event spins into view, with a choice of how you react to it. I won a city in a “game of chance”, and had to decide (if I recall correctly) whether I governed sensibly, gave up the job, or “reveled in the prestige and borrowed liberally from the city coffers” which is the option I went for (of course). But my boy was disappointed that the random events on offer were not in some way defined by choices you’d already made – he too got the city, and I might easily have been “freed from slavery” by an event. The son of a Grand Moff in slavery? I don’t think so. 🙂

Despite its limitations (which one is prepared to forgive more when one realizes the technology is five years old), the opportunity to create a story like this was very much enjoyed by my family. I wonder if the exhibition had a deeper emotional impact on me because of it?

Apps not worth it, hard numbers

I’ve got to point people’s attention to this excellent blog post from Colleen Dilenschneider. Colleen works for a US market research firm called Impacts. They have a couple of hundred visitor-facing clients, including, for example, the Monterey Bay Aquarium, and they combine the data from all their research to produce the National Awareness, Attitude & Usage Study, which is informed by on-site interviews, randomly selected telephone interviews and an on-line component. So though it’s commercial market research, and not academically peer reviewed, the approach seems to be pretty robust. I’ve been looking for some hard numbers about the benefit (or otherwise) of mobile device interpretation, not just for my research (and my talk next week), but also for work. It was a work colleague who pointed me to the post, but I’ll happily include some of the data in next week’s presentation.

I’ll let you read it for yourself. Some of it is not so surprising, when it offers some numbers to support what has already been reported anecdotally. For example, that people are more likely to use the place’s website, social media and review sites to plan a visit, than an institution’s app, or that people are more likely to use social media than an app when they are on-site (old readers will be familiar with my usual rant on this subject, now available in print 🙂 ).

But there’s one chart I want to draw out, which makes two key points (both important enough for Dilenschneider to use bold text):

People who use mobile applications onsite do not report significantly higher satisfaction rates than those who do not.


People who use social media or mobile web while they visit a cultural organization have a more satisfying overall experience than people who don’t use social media or mobile web during their visit.

She illustrates both points with the same graph.

Image (c) Impacts, copied from: http://colleendilen.com/2017/04/05/are-mobile-apps-worth-it-for-cultural-organizations-data/

All of which adds weight to the argument that institutions like the one I work for should prioritize the installation of free, easy-to-log-on-to, pervasive wifi over the commissioning of expensive, unused apps, and direct content development efforts towards the mobile web, in the knowledge that even then, users may prefer to publish out from a place, rather than read the content that you’ve created.

Some places get it.



Simulating ideology in storytelling

The Story Extension Process, from Mei Yii Lim and Ruth Aylett (2007) Narrative Construction in a Mobile Tour Guide

Another great piece from Ruth Aylett, this time from 2007. Here, she and collaborator Mei Yii Lim are getting closer to what I’m aiming for, if taking a different approach. They kick off by describing Terminal Time, a system that improvises documentaries according to the user’s ideological preference, and an intelligent guide for virtual environments which takes into account the distance between locations, the already-told story, and the affinity between the story element and the guide’s profile when selecting the next story element and location combination to take users to. They note that this approach could bring mobile guides “a step nearer to the creation of an ‘intelligent guide with personality'” but that it “omits user [visitor] interests”. (I can think of many a human tour guide that does the same.) They also touch on a conversational agent that deals with the same issues they are exploring.

This being a 2007 conference paper, they are of course using a PDA as their medium. It is equipped with GPS and text-to-speech software, while a server does all the heavy lifting.

“After [an ice-breaking session where the guide extracts information about the user’s name
and interests], the guide chooses attractions that match the user’s interests, and plans the shortest possible route to the destinations. The guide navigates the user to the chosen locations via directional instructions as well as via an animated directional arrow. Upon arrival, it notifies the user and starts the storytelling process. The system links electronic data to actual physical locations so that stories are relevant to what is in sight. During the interaction, the user continuously expresses his/her interest in the guide’s stories and agreement to the guide’s argument through a rating bar on the graphical user interface. The user’s inputs affect the guide’s emotional state and determine the extensiveness of stories. The system’s outputs are in the form of speech, text and an animated talking head.”

So, in contrast to my own approach, this guide is still story-led rather than directly user-led, but it decides where to take the user based on their interests. And they are striving for an emotional connection with the visitor. So their story elements (SEs) are composed of “semantic memories [-] facts, including location-related information” and “emotional memories […] generated through simulation of past experiences”. Each story element has a number of properties; semantic memories, for example, include: name (a coded identifier); type; subjects; objects; effects (this is interesting: it lists the story elements that are caused by this story element, with variable weights); event; concepts (things that might need further definition when first mentioned); personnel (who was involved); division; attributes (relationship to interest areas in the ontology); location; and text. Emotional story elements don’t include “effects and subjects attributes because the [emotional story element] itself is the effect of a SE and the guide itself is the subject.” These emotional memories are tagged with “arousal” and “valence” tags. The arousal tags are based on Emotional Tagging, while the valence tag “denotes how favourable or unfavourable an event was to the guide. When interacting with the user, the guide is engaged in meaningful reconstruction of its own past,” hmmmmm.

In their prototype, a guide to the Los Alamos site of the Manhattan Project, the guide could be either “a scientist who is interested in topics related to Science and Politics, and a member of the military who is interested in topics related to Military and Politics. Both guides also have General knowledge about the attractions.” I’m not convinced by the artifice of layering two different points of view onto the interpretation – both are authored by a team who, in creating the two points of view, will (even if striving to be objective) make editorial decisions that reveal a third, authentic PoV.

When selecting which SE to tell next, the guide filters out the ones that are not connected to the current location. Then “three scores corresponding to: previously told stories; the guide’s interests; and the user’s interests are calculated. A SE with the highest overall score will become the starting spot for extension.” The authors present a pleasingly simple (for a non-coder like me) algorithm for working out which SE goes next. But the semantic elements are not the only story elements that get told. The guide also measures the emotional, ideological story elements against the user’s initial questionnaire answers and reactions to previous story elements, and decides whether or not to add the guide’s “own” ideological experience on to the interpretation, a bit like a human guide might. So you might be told:

Estimates place the number of deaths caused by Little Boy in Hiroshima up to the end of 1945 at one hundred and forty thousands where the dying continued, five-year deaths related to the bombing reached two hundred thousands.

Or, if the guide’s algorithms think you’ll appreciate it’s ideological perspective, you could hear:

Estimates place the number of deaths caused by Little Boy in Hiroshima up to the end of 1945 at one hundred and forty thousands where the dying continued, five-year deaths related to the bombing reached two hundred thousands. The experience of Hiroshima and Nagasaki bombing was the opening chapter to the possible annihilation of mankind. For men to choose to kill the innocent as a means to their ends, is always murder, and murder is one of the worst of human action. In the bombing of Japanese cities it was certainly decided to kill the innocent as a means to an end.

I guess that’s the scientist personality talking; perhaps the military personality would instead add a different ideological interpretation of the means to an end. As I mentioned before, I’m not convinced that two (or more) faux points of view are required when the whole project, and every story element that the guide gets to choose from, are already authored with a true point of view. But in many other aspects this paper is really useful and will get a good deal of referencing in my thesis.
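To check I’ve understood the selection step, here’s how I imagine it in code. This is a minimal sketch only: the class, the example data and the exact scoring arithmetic are my assumptions, not the algorithm as Lim and Aylett published it – they only tell us the three score components (previously told stories, the guide’s interests, the user’s interests):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StoryElement:
    name: str
    location: str
    subjects: frozenset  # interest areas from the ontology

def select_next(elements, location, told, guide_interests, user_interests):
    """Filter SEs to the current location, then score each candidate on
    three components: overlap with already-told material (penalised),
    the guide's interests, and the user's interests."""
    told_subjects = set()
    for se in told:
        told_subjects |= se.subjects
    candidates = [se for se in elements
                  if se.location == location and se not in told]
    def score(se):
        return (len(se.subjects & guide_interests)
                + len(se.subjects & user_interests)
                - len(se.subjects & told_subjects))
    return max(candidates, key=score, default=None)

# e.g. a "scientist" guide with a science-minded visitor at Los Alamos
elements = [
    StoryElement("trinity_test", "los_alamos", frozenset({"science"})),
    StoryElement("guard_duty", "los_alamos", frozenset({"military"})),
]
best = select_next(elements, "los_alamos", told=[],
                   guide_interests=frozenset({"science", "politics"}),
                   user_interests=frozenset({"science"}))
print(best.name)  # trinity_test
```

Even this crude version shows why the guide’s “personality” matters: swap in military interests for the guide and the other element wins.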

Abstract: Digital Personalisation for Heritage Consumers

I’m speaking at the upcoming Academy of Marketing E-Marketing SIG Symposium: ‘Exploring the digital customer experience:  Smart devices, automation and augmentation’ on May 23 2017. This is what I wrote for my abstract:

Relevance to Call: Provocation, Smart Devices. Augmentation of the Customer Experience

Objective: A work-in-progress research development project at Chawton House explores narrative structure, extending the concept of story Kernels and Satellites to imagine the cultural heritage site as a collection of narrative atoms, or Natoms, both physical (spaces, collection) and ephemeral (text, video, music etc.). Can we use story-gaming techniques and digital mobile technology to help physical and ephemeral natoms interact in a way that escapes the confines of the device’s screen?

Overview: This provocation reviews the place of mobile and location technologies in the heritage market. Digital technology and social media are in the process of transforming the way that the days-out market is attracted to cultural heritage places. But on site, the transformation is yet to start. New digital interventions in the heritage product have not caught on with the majority of heritage consumers. The presentation will survey the current state of digital heritage interpretation and especially the use of location-aware technologies such as Bluetooth LE, NFC, or GPS. Most such systems deliver interpretation media to the device itself, over the air or via a prior app download. We explore some of the barriers to the use of mobile devices in the heritage visit – the reluctance to download proprietary apps, mobile signal and wifi complexities and, most importantly, the “presence antithesis”: the danger that the screen of the device becomes a window that confines and limits the user’s sensation of being in the place and among the objects that they have come to see. Also, while attempts to harness mobile technology in the heritage visit deliver interpretation that is both more relevant and, in some cases, more personalised to the needs of the user, they also tend towards a “narrative paradox” – the more the media is tailored to the movements of the user around the site, the less coherent and engaging the narrative becomes.

Method: Story-games can show us how to create an experience that balances interactivity and engaging story, giving the user complete freedom of movement around the site while delivering the kernels of the narrative in an emotionally engaging order. At Chawton we plan to “Wizard of Oz” an adaptive narrative for that place’s visitors.

Findings: Work so far demonstrates that a primary challenge for an automated system will be negotiating the contending needs of different groups and individuals within the same space. The work at Chawton looks to address this.


This is the first time I’ve written an abstract in this format, and I found it quite a challenge. What you add in and leave out is always a difficult decision, and this format, which was limited to one side, had me opting to leave out the references which I might have made room for if I had not had to write something under each of the prescribed headings. It’s also the first time I have had formal feedback on an abstract, which I share below:

Relevance to call: Good fit – smart devices, user experience, augmentation, culture (5)
Objective: A practical case example of augmentation in a heritage setting (5)
Lit rev: No indication of theory used, as this is a practical case study (n/a)
Method: A specific case of Chawton House presented. (5)
Results: Interesting findings re barriers to use of mobile devices in heritage, and the experience evaluation (4)
Generalisations: Interesting and original context of heritage institution using augmentation, can extend to other heritage sector applications. (4)
Total: 23/25


So, not a bad score, but I wonder what I would have got (out of 30?) if I had included the references. Does the bibliography count within the one page limit? Or, could I have included it on a second side?

Still, no time for those questions. I have to write the actual presentation now. 🙂

Building the Revolution 

I finally got to the V&A today, for their exhibition You Say You Want a Revolution. I got turned away at the end of Cromwell Road last time, as the museum was being evacuated after a bomb-scare. 

I’m writing this review on my way home, using my phone (so please forgive my typos) partly because I want to recommend you go, and there is not long left to see it. 

The exhibition charts the western cultural revolution of 1966-1970 through John Peel’s record collection, plus of course fashion and design from the V&A’s own collection, and other items, such as an Apollo mission space suit, borrowed from other institutions. 

One of the gimmicks of the show is the audio, an iteration of the same technology used at the Bowie exhibition a couple of years ago. I didn’t get to go to that one, but I had a demonstration of that tech from its makers, Sennheiser, at a Museums and Heritage show. 

I wasn’t very impressed. Though these headphones, which play music or soundtrack to match whatever object or video you are looking at, were well received by the media back then, in my experience the technology was clunky. Other friends who’d been confirmed that the changes between sound “zones” could be jarring, and that it was possible to stand in some places where music from two zones would alternate, vying for your attention. 

The experience this time was an improvement. It was by no means perfect: I found the music would stutter and pause annoyingly, especially if I enjoyed the track enough to find myself gently nodding my head. Occasionally the broadcast to everyone’s headphones would pause so everyone in a room could share a multimedia experience (of the Vietnam war for example) across all the gallery’s speakers, screens and projectors. These immersive over-rides were effective, in much the same way as those at IWM North, but when a track you were enjoying or a video that you found interesting was rudely interrupted, one couldn’t help but feel annoyed. I found myself forgiving the designers however, for this and even the stuttering sound of the headphones, because it all felt resonant with that late sixties “cut-up” technique. 

Where the technology really worked, however, was on two videos that topped and tailed the exhibition. In the first, various icons and movers of the period were filmed in silent moving portraits of their current wrinkled and grey selves. Their reminiscences of the time appeared as typography overlaying their silent closed-mouth gaze, a little like Barbara Kruger’s work, while over the headphones you heard their voices. The same characters appeared at the end, this time as a mosaic of more conventional talking heads. And for the first time, the interpretation was didactic, as each in turn challenged the current generation to build on their legacy. 

For me, one of the highlights was the section on festivals, which invited visitors to take off their headphones, lie back on the (astro)turf and let (another cut-up of) the famous Woodstock documentary wash over them on five giant screens. 

The other things I loved were the tarot cards dotted around among the exhibits that, at first glance, looked like they might have been designed in the sixties. But then you notice references to things like Tim Berners-Lee and the World Wide Web. You realise these are a subtle form of interpretation, telling a future of the sixties that apparently came true and, for those of us from that future, creating correspondences and taxonomies that connect the events of 1966-70 with today. The V&A commissioned British artist Suzanne Treister to create the cards, based on her 2013 work, Hexen 2.0. And the very best thing about them is you can buy them (pictured above) in the shop, which must be the first time copies of museum interpretation panels have been made available for purchase. 

Of course, the cards aren’t the only form of interpretation. Apart from the soundtrack, there are more traditional text panels, labels and booklets around the exhibition. But the cards show how cleverly the layering of meaning and interpretation has been created. Many visitors will have passed them by unnoticed, given them a cursory glance or chosen to ignore them, and will have had an entirely satisfactory experience. But for those who paused to study them in more detail, a whole new layer of meaning opened up.

I visited with a sense of duty, to try out a responsive digital technology. But I found so much more to enjoy. This is a brilliantly curated exhibition, so much better than the didactic, even dumbed-down permanent gallery of the new Design Museum, which I visited before Christmas. I urge you to go if you haven’t seen it yet. It’s only on for another month.

A colleague who had visited the exhibition before told me how depressed it had made him: the optimism of that period seems to have been dashed upon the reactionary rocks of 2016, Brexit and Trump. But I came out with a very different mood. 

One of the early messages of the exhibition frames the period as a search for utopia. The final tracks you hear as you walk out (after the video challenge issued by the old heads of the sixties) are Lennon’s 1971 single Imagine and then, brilliantly, Jerusalem.

No, of course they didn’t find the Utopia they were looking for in the sixties, but we could build it…

Could this be … the first decent museum app?


Last week my wife and I went to San Francisco. Our second full day there was mostly spent within SF MOMA, the San Francisco Museum of Modern Art. And for the first time ever, I used a museum/heritage app that actually enhanced my visit.

Part of what made it so successful was the infrastructure that made it easy to download and use. I didn’t have to plan in advance and download it before my visit. I wasn’t even aware of it before I went, and if I had been, I would have been unlikely to download it, because our hotel’s free wifi only allowed one of us to use a device in each four-hour lease period.

We’d started our visit walking through the museum to the opposite entrance to contemplate the Richard Serra sculpture. It was early in the day, the museum was just opening, and there was a team brief on the tiered seating that surrounds the piece. But they moved on, and we sat for a moment to contemplate the enormous steel structure (I can’t deny the meditative quality of Serra’s work, or the calming impact it seems to have on the psyche when encountered, but really I sometimes feel “seen one, seen them all”) and to plan our day.

My wife noted a label on the wall directing people who wanted to know more about the art to SFMOMA’s app, and helpfully pointing out that you could log into the museum’s free wifi to download it. I think it said that it was iOS only, but if you didn’t have a suitable device, you could borrow one.

The first pleasure was logging onto the wifi. This was possibly the most hassle-free process I’ve ever encountered on public wifi. The signal was strong (everywhere), reliable and speedy too. The app downloaded quickly, and upon opening gave me three screens introducing what it offered, such as the one below:

It wanted access to my location services (of course), my camera and, unusually, my activity (the “healthy living” function of more recent versions of iOS), but having been so pleasantly surprised and satisfied by the process so far, I was very happy to allow all three. All this had taken very little time, but enough time for my wife to have wandered away towards the elevators to begin our exploration of the museum, so I hurried after her, scanning what was on offer from the app as I went.

There’s a highlights function, which includes “Our picks for forty must-see artworks that are currently on view”, a timeline function that enables you to record and share your visit, a section on other “things to do”, and of course the ability to buy tickets, membership and so on. At the core of the app are “Immersive Walks”: a range of fifteen- to forty-five-minute audio tours of the galleries.

Oh no! I’d left my earphones back at the hotel.

But that wasn’t a problem, because as I caught up with my wife by the elevators, I saw a stand stacked high with cases of SFMOMA-orange ear-buds. These were given away free and were of a somewhat disposable quality, but good enough to last the day (and to pass on to my son when we got back from the holiday), with in-line volume controls for ease of use. The thought and effort that SFMOMA put into the infrastructure around the app deserves to be commended.

But let’s get to the meat of the app’s functionality. The key thing here is indoor positioning. I’m guessing it’s achieved through wifi mobile location analytics, but I haven’t confirmed that. I can confirm that it’s pretty accurate, though with a little lag, so it takes a while after walking into a gallery, and then standing still for a moment, before your device delivers the buttons for the content relevant to the artworks in the gallery. Some, but not all, of the artworks are accompanied by a specific piece of media (mostly audio) offering more in-depth insight into the work. This can include commentary, reviews or snippets of interviews with the artist.
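As a thought experiment, that lag-then-trigger behaviour could be as simple as waiting for a few consistent position fixes before surfacing a gallery’s media buttons. This is purely my own sketch of how it might work – all the names and media files here are invented, not SFMOMA’s actual implementation:

```python
# Hypothetical sketch of gallery-level content triggering: the app waits for
# several consistent indoor-position fixes before showing media buttons.
# All gallery IDs and filenames are invented for illustration.

from collections import deque

GALLERY_MEDIA = {
    "gallery_5": ["serra_commentary.mp3", "serra_interview.mp3"],
    "gallery_6": ["scher_surveillance_intro.mp3"],
}

class PositionTracker:
    """Debounce noisy indoor-position fixes before offering content."""

    def __init__(self, required_fixes=3):
        # Keep only the most recent fixes; older ones fall off automatically.
        self.recent = deque(maxlen=required_fixes)

    def update(self, gallery_id):
        """Feed one position fix; return the gallery's media once stable."""
        self.recent.append(gallery_id)
        if len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1:
            return GALLERY_MEDIA.get(gallery_id, [])
        return None  # still walking, or not enough readings yet

tracker = PositionTracker()
tracker.update("gallery_5")          # just walked in: nothing yet
tracker.update("gallery_5")          # still settling
media = tracker.update("gallery_5")  # three stable fixes: show the buttons
```

The debounce explains (in this toy model at least) why you have to stand still for a moment before the content appears.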

I also took an immersive walk. I chose German to Me, a personal exploration of post-war German artists from radio journalist Luisa Beck, in which she shares her reactions to some of the works in the collection and interviews her mother, grandmother and cousin to uncover more about her own German-American identity. As the tour progresses you are guided not just by Luisa’s spoken directions, but also by the app’s indoor positioning, as shown below.

I have to say, I would have given these galleries the most cursory of glances, had I not been captured by Luisa’s tour. As it was, her (wholly un-sensational) story, and her commentary upon the art engaged me emotionally to a degree I wasn’t expecting. It enhanced my visit like no other app has achieved.

The phone also recorded my “timeline”, my journey through the museum, online, so that I can share with others the photos I took, the artworks that caught my attention enough to seek more information from the app, and the tours I took. As you can see, I spent three and a half hours with the app, walking 3,369 steps (or 1.7 miles). This timeline is the only slightly disappointing aspect of the app – I would have liked to click through this online version to listen to some of the media again, now that I am back home, maybe even to be reminded (through the app’s ability to determine location) who made some of the things I took photos of.

You’ll know that I’m not a massive fan of looking at things through my phone, but this app did well enough to almost convince me otherwise.

The museum had other digital interventions of interest. You might have spotted in my timeline that one of the first things we looked at was a surveillance-culture-inspired artwork by Julia Scher that turned the museum into a responsive environment, changing according to visitors’ actions.


There was also a fun activity in one of the cafés that allowed you to create your own digital artwork, printing it out instantly on thermal paper, but also linking to a hi-res online version, which I used for the illustration at the top of this post (you will note that those free earbuds are the stars of that piece).

SFMOMA, with their technology partners Detour on the app, and the support of Bloomberg Philanthropies, are doing good things in the digital sphere. If you’re there, you should check them out.

Shine On: part two

In the afternoon Graham Festenstein, lighting consultant, kicked off a discussion about using lighting as a tool for interpretation. New technology, he said, especially LED, presents new opportunities, a “new revolution” in lighting. It’s smaller, with better optics and control, and more affordable too. He used cave paintings as an example. Lighting designers could take one of three approaches to lighting such a subject: they might try to recreate the historical lighting which, for a cave painting, would have been primitive indeed – a tallow bowl light, revealing small parts of the painting at a time, and with an orange light; it’s more likely, given the needs of the visitor, that they might go for wider-angle lighting, revealing the whole of the painting at once; or they might light for close-up inspection of the work, to show the mark-making techniques. Traditionally, a lighting designer would have had to choose just one of these approaches. But with the flexibility and versatile control of modern lighting technology, we can do all three – caveman lighting, wide-angle panorama, and close-up technical lighting.

Graham’s presentation was not the strongest. He explained that he came to LED lighting as a sceptic, and recalled a visit to a pilot project at the National Portrait Gallery: his first impressions were disappointing, but then he realised that what he missed about the tungsten lighting was the way it lit the gilded frames, and that the LED lighting was better serving the pictures. He went on to talk about colour, and how the warm lights of the Tower of London’s torture exhibition undermined the theme, but the presentation overall was somewhat woolly.

Zerlina Hughes, of studio ZNA, came next, with a very visual presentation which I found myself watching rather than taking notes on. It explained her “toolkit” of interpretive lighting techniques, but I didn’t manage to list all the tools. A copy of the presentation is coming my way though, so I might return with more detail on that toolkit in a later post. One of her most recent jobs looks great, however: Say You Want a Revolution, at the V&A, follows on from the Bowie show a year or so ago, but with (she promises) less clunky audio technology. I want to go.

Jonathan Howard, of DHA design, explained that, like Zerlina, “most of us started as theatre designers.” I (foolishly, I think, in retrospect) passed up an invitation to do theatre design at Central St Martins, and I think I would have been fascinated by lighting design if I had gone, so I might have ended up at the same event, if on the other side of the podium. Museum audiences today expect more drama in museums, having experienced theatrical presentations like Les Misérables, theme parks and so on. I was interested to learn that in theatre, cooler colours throw objects into the background, and warmer colours push them into the foreground. This is apparently because we find the blue end of the spectrum more difficult to focus on. In a museum space, he says, you can light the walls blue so that the edges of the gallery fall away completely. But he did have a caveat about using new lighting technology. Before rushing to replace your lighting with LEDs and all the modern bells and whistles, ask yourself:

Why are we using new tech?
Who will benefit?
Who will maintain it?
Who will support it?

Kevan Shaw offered the most interesting insight into the state of the art. He pointed out that lighting on the ceiling has line of sight to most things, because light travels in straight lines (mostly), and we tend to point it at things. So, he said, your lighting network could make a useful communications network too. He wasn’t the first presenter to include an image of a yellow-centred squat cylinder in their slide deck, and they all spoke as though we knew what it was. I had to ask, after the presentation, and they explained that it was one of these. These LED modules slip into many existing lamps or luminaires. They are not just a light source, but also a platform for sensors and a communications device. Lighting, Kevan argues, could be the beachhead of the Internet of Things in museums.

He briefly discussed two competing architectures for smart lighting: Bluetooth, which we all know, and Zigbee, which you may be aware of through the Philips Hue range (which I was considering for the Chawton experiment). He also mentioned Casambi and eyenut, though I’m not sure why he thinks these are not part of the two-horse race. He argues that we need interoperability, so I guess he’s saying that the competing systems will eventually see a business case in adopting either Bluetooth or Zigbee as an industry standard.

With our lightbulbs communicating with each other, we can get rid of some of our wires, he argues, but it needs to be robust and reliable. And the secret to reliability is mesh networking: robust networks for local areas. Lighting is a great place for that network to be. That capability already exists in Zigbee (so I think Zigbee is what I should be using for Chawton), but it’s coming soon in Bluetooth. And I think Kevan believes that when it does, Bluetooth will become the VHS of the lighting system wars, and relegate Zigbee to the role of Betamax.

But the really exciting thing is Visible Light Communication, by which the building can communicate with any user with a mobile device that has a front-facing camera (and the relevant software installed). He showed us a short video of the technology in Carrefour (mmm, the own-brand soft goat cheese is delicious).

The opportunities for museums are obvious but, he warns, to use this technology effectively, museums will need resources to manage, and get insight from, all the data these lighting units could produce. Though, he said optimistically to his fellow lighting consultants, “that need could be an opportunity for us!”

Finally we heard from Pavlina Akritas, of Arup, who took the workshop in the direction of civil engineering. Using LA’s Broad Museum as an example, she explained how, in this new build, Arup engineered clever (north-facing) light-wells which illuminated the museum with daylight, while ensuring that no direct Los Angeles sun fell onto any surface within the galleries. The light-wells included blackout blinds to limit overall light hours, and photocells to measure the amount of light coming in and, if necessary, automatically supplement it with LEDs. She also talked briefly about a project to simulate skylight for the Gagosian gallery, Grosvenor Hill.

All in all, it was a fascinating day.

This post is one of two; the first is here.

Mulholland on Museum Narratives

Working on the narratives for the Chawton Project, I’m taking a break and catching up on reading. Paul Mulholland (with Annika Wolff, Eoin Kilfeather, Mark Maguire and Danielle O’Donovan) recently contributed a relevant first chapter to Artificial Intelligence for Cultural Heritage (ed. Bordoni, Mele and Sorgente).

Mulholland et al’s chapter is titled Modelling Museum Narratives to Support Visitor Interpretation. It kicks off with the structuralist distinction between story and narrative, and points to a work I’ve not read and should dig out (Polkinghorne, D., 1988, Narrative Knowing and the Human Sciences) as particularly relevant to interpreting the past. From this, the authors draw the “narrative inquiry” process which “comprises four main stages. First, relevant events are identified from the historical period of interest and organised into chronological order. This is termed a chronicle. Second, the chronicle is divided into separate strands of interest. These strands could be concerned with particular themes, characters, or types of event. Third, plot relations are imposed between the events. These express inferred causal relations between the events of the chronicle. Finally, a narrative is produced communicating a viewpoint on that period of history. Narrative inquiry is therefore not just a factual telling of events, but also makes commitments in terms of how the events are organised and related to each other.” Which is as good and concise a summary of the process of curatorial writing as I am likely to find.
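Those four stages are concrete enough to sketch in code. Here is my own toy rendering of the process (the events are invented, Chawton-flavoured examples; this is not the authors’ implementation):

```python
# A toy sketch of the four-stage "narrative inquiry" process:
# chronicle (sort), strands (group), plot (relate), narrative (tell).
# The events are invented examples, not data from the chapter.

from itertools import groupby

events = [
    {"year": 1817, "theme": "biography", "text": "Jane Austen dies"},
    {"year": 1811, "theme": "publishing", "text": "Sense and Sensibility published"},
    {"year": 1809, "theme": "biography", "text": "Austen moves to Chawton"},
]

# 1. Chronicle: organise relevant events into chronological order.
chronicle = sorted(events, key=lambda e: e["year"])

# 2. Strands: divide the chronicle by theme (or character, or event type).
strands = {
    theme: list(group)
    for theme, group in groupby(
        sorted(chronicle, key=lambda e: e["theme"]), key=lambda e: e["theme"]
    )
}

# 3. Plot: impose inferred causal relations between events.
plot = [("Austen moves to Chawton", "enables", "Sense and Sensibility published")]

# 4. Narrative: communicate a viewpoint on that period of history.
narrative = " -> ".join(e["text"] for e in chronicle)
```

The interesting commitments, as the authors note, are all in stages 2 and 3: which strands you choose to cut, and which causal relations you choose to infer.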

There’s another useful summary paragraph later in the document. “When experiencing a museum exhibition, the visitor draws relationships between the exhibits, reconstructing for themselves the exhibition story (Peponis 2003), whether those relationships are, for example, thematic or chronological. The physical structure of the museum can affect how visitors perceive the exhibition narrative. Tzortzi (2011) argues that the physical structure of the museum can serve to either present (i.e. give access to the exhibition in a way that is independent from its underlying logic) or re-present (i.e. have a physical structure that reinforces the conceptual structure of the exhibition).” Tzortzi there is another reference I’ve not yet discovered and may check out.

What the paper does not do, however, is make any reference to emotion in storytelling. The authors seem to leave any emotional context to the visitors’ own meaning-making. The chapter includes a survey of current uses of technology in museums, and academic experiments including virtual tour guides and opportunities for visitors to add their own interpretations and reminiscences, as well as web-based timelines and the like.

But digital technology gives us the opportunity (or need) to break down cultural heritage narratives even more, and an earlier (2012) paper by (mostly) the same authors, Curate and Storyspace: An Ontology and Web-Based Environment for Describing Curatorial Narratives, describes a system for deeper analysis. (Storyspace turns out to be a crowded name in the world of writing tools and hypertext, so eventually the ontology and Storyspace API became Storyscope). The first thing that the ontology brings to the table is that

a curatorial narrative should have the generic properties found in other types of narrative such as a novel or a film

So the authors add another structuralist tool, plot, to the story/narrative mix. “The plot imposes a network of relationships on the events of the story signifying their roles and importance in the overall story and how they are interrelated (e.g. a causal relationship between two events). The plot therefore turns a chronology of events into a subjective interpretation of those events.” But using the narrative inquiry process “the plot can be thought of as essentially a hypothesis that is tested against the story, being the data of the experiment.”

I like this idea. But it’s worth distinguishing between the two uses of the word “interpretation” in cultural heritage. The first use, familiar to my archaeologist colleagues, describes the process of building an understanding of an aspect of the past from the available evidence. The second, more familiar to my museum and heritage site colleagues, describes the process of explaining the evidence to non-professional visitors. At its very best, the museum/heritage site form of interpretation will resemble, and guide visitors through, the process of inquiry that builds an understanding of the evidence on display. But most of the time the second form of interpretation more closely resembles storytelling. That’s not a fault or failure of my museum/heritage site colleagues; most visitors are time-poor in story-rich environments. But digital technology has the potential to allow museum and heritage site interpretation to more closely resemble the first use of the word.

What digital technology offers, is the opportunity for brave curators to offer alternative plots, or theses, and test them in a public arena, rather than just through a peer review process. Or even to create plots procedurally by following the visitors’ path of attention between objects, maybe discovering plots the curator had not imagined.

The two experiments that the authors describe go some way towards this, but their dry ontology misses an emotional component. The event ontology could surely include an authorial opinion on whether the narrative element suggests a simple emotional response (even as simple as hope or fear), but instead “If the tag represents an artist, then events are used to represent, for example, artworks they have created, exhibitions of their work, where they have lived, and their education history.” Dry, dry facts… There is the tiniest nod towards, if not emotion per se, then some sort of value, in their brief discussion of theme:

Theme is also related to the moral point of the story. This could be a more abstract concept, such as good winning through in the end, which serves to bind together all events of the story.

Given that they say “Narratives are employed by museums for a number of purposes, including entertainment”, they haven’t given much time to what makes narratives engaging. There is hope, however. In their conclusion, they do say “Other narrative features such as characterisation and authorial intent could potentially be foregrounded in tools to support interpretation.”
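To show how small an extension the emotional component I’m asking for might be, here is my own purely illustrative sketch (not the authors’ ontology): an event record that carries an optional, author-supplied emotional cue alongside the dry facts.

```python
# My own illustrative sketch, not the Curate/Storyscope ontology: an event
# record extended with an optional authorial emotional cue.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    subject: str
    description: str
    year: int
    emotion: Optional[str] = None  # e.g. "hope" or "fear" - the authorial nod the ontology lacks

# The dry facts the paper describes...
dry = Event("artist", "exhibition of their work", 1968)

# ...and the same structure with an emotional cue an author could supply.
warm = Event("artist", "first exhibition after years of obscurity", 1972, emotion="hope")
```

One optional field, and a tool built on the ontology could start foregrounding the emotional arc of a narrative rather than leaving it entirely to the visitor’s own meaning-making.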