@HeritageJam – the other live projects

I promised to write about the other projects that were created in the twenty-eight hours (sleeping included) in which we created Happy Gods. Of course we weren’t working solidly on Happy Gods for all that time. Though the Heritage Jam team kept us supplied with water, sugary snacks, and lunch, we all had to take a break or two to eat some proper food and to sleep, and I even took time out to contribute a little to one of the other projects.

That was Jo Pugh‘s coded bookshelf. We had our briefing from Natalie in the Yorkshire Museum’s reading room (it features in the short film of her above), surrounded by old books caged in secure bookshelves. More than one person initially thought that doing something with those would be fun, but it was Jo who actually did something. We weren’t allowed to handle the books themselves, so he took a number of photos of ranges of books on the shelves, and then sourced their contents from the museum’s own limited digitization project and other sources. (He wasn’t very complimentary about Google’s own efforts in this regard, but did even find some workable versions of the contents there.) He then set about presenting some of those contents on visitors’ phones, through QR codes that pointed to a choice of text or recorded excerpts, and, for those who hate the faff of QR (like me), short web addresses to type in. I helped by reading a couple of those excerpts for him.

Jo was the only in-person Jammer not to work in a team, which is fine of course, but I really enjoyed being thrown in with a bunch of talented people and having to get creative with them. And next year, I told Jo (somewhat arrogantly on reflection), he should do the same. It was fun working with him for the short time I did, and I’d definitely be in a team with him.

Luke Botham and Mathew Fisher used a technology I’d seen before – manipulating virtual 3D objects by pointing a web-cam at a black and white icon. But they drew their 3D objects from the ADS Amarna Archive, and they also had an idea I hadn’t seen before – to make the icons wearable, so that visitors might virtually wear some of the finds!

And so to Stephen Elliot, who, with Laura Valeria and others, used a piece of software that Stephen’s company is developing. It is essentially a content management system that creates mobile applications, using GPS outdoors and BLE beacons indoors to provide location-based interpretation. The team wove what looked to be a set of intriguing stories connecting the museum collection with outdoor locations.

All were excellent projects, and the teams (and Jo) had worked really hard to pull them all together in the limited time. Everyone deserved to win, and in fact the competition was so close that the judges couldn’t choose between the three other projects, which ended up sharing the Highly Commended spot.

So how come we won? Well, I think it was a close-run thing, and I’ve been waiting to see if the judges’ comments on the Heritage Jam website explain more. None of the in-person entries have been put up on the site at the time of writing, but a blog post does explain a little of the judges’ thinking:

The judges were blown away by the scope and quality of the work and commended the team for the innovative way they had blended game mechanics across a stunning art-style to create a new and exciting way to engage with the collections.

Privileged @HeritageJam 2015

I arrived at Kings Manor, York, about ten past ten on Friday. Izzy helped me find a place to keep my rucksack safe and then we followed the others (who had been leaving just as I arrived) to the nearby Yorkshire Museum. Sara introduced us to Natalie McCaul, Curator of Archaeology, who talked to us about the collections, both on display and in store, recent efforts to put the catalog on-line, and the lack of money to spend on technology. Afterwards, Tara introduced me to my team-mates. More of them anon.

We went off to explore the collection and see if we could find inspiration for a Jam project. First of all we headed to the Richard III exhibition to have a look at the remains of a soldier. We had some thoughts about creating something that might allow visitors to literally put flesh on the bones of his story, imagining what he might have looked like, whether he’d been a professional soldier or a conscript. These were good ideas, but the spark wasn’t there, so we went off to look at the Roman collection.

Here we talked about creating a way for visitors to paint the stonework, perhaps using projection. But then we wandered among the gods.

Here were stone altars and figurines and other representations of, not just the “Roman” gods, but also the other local gods that Roman culture had assimilated as it marched across Europe. There was a stone altar there that, the label explained, was a “cheap” altar, carved with simple generic icons, which a hard-up Roman might then paint to dedicate it to gods of their choice. I had to take a phone call about my insurance claim, but when I came back Juan had had “the idea.” And it was brilliant!

Let’s take a break to introduce my amazing team-mates. I need to say now that this was the very best thing about joining the Heritage Jam. It was a real privilege to get to work with such talented people. That’s the sort of thing people always say, but I mean it. I was humbled by the opportunity to work with them.

Let’s kick off with Juan, as the idea was his. Juan is a game designer, artist and PhD researcher, studying Historical Representation in Games at the University of Salford, where he also teaches computer and video game development. His amazing talents as an illustrator meant that even our earliest concept designs looked stunning. Then, his amazing talents as a 3D modeller made the final version look like … well, wait and see.

Sam is a post-doctoral researcher at the University of York, currently working on a project called New Economic Models and Opportunities for digital Games (NEMOG). Sam could wrangle Unity like it was putty in his hands (mixed metaphor I know). And get this, he has also been a competitive Mixed Martial Artist!

Edwige is an Associate Professor of Digital Arts at Universite de Versailles. An expert Unity wrangler herself, she also brought excellent artistic, project management and games mechanics skills to the group. Frankly, there was very little left for me to do! 

Back to the idea. Juan pointed out that the Roman people making regular offerings to the gods were like Tamagotchi players, and their altars were the Roman equivalent of our digital interfaces. Not only that, but Roman society had a tendency to collect gods, like Pokémon. Right there we had two mechanics to teach modern-day families about ancient religious observance and ritual, and from there, other aspects of everyday life. We even had a name, “Happy Gods”, and a tagline – “You gotta keep ’em happy”.

On the way back to our Kings Manor base, we joked about how, if the Romans had smartphones, a serious version of ours would be a killer app. “Are you a Busy Centurion on the Move? Do you struggle to find the time, or the altar, to make your offerings? Are you worried you’ll upset local gods in the country you’ve just invaded? Download HappyGods from your appstore!”

We very quickly solidified our ideas. We’d been inspired by a case containing three figurines, including a very happy-looking Genius Loci, so he would be our “first level god”. We’d keep our ambition modest and achievable though – just create that first level, with his two companions in the case, Venus and Vulcan, as the promise of levels two and three. Not being a Roman expert, I found a paper (by Mark Robinson) on sieving and flotation analysis of biological remains from excavations below the AD 79 destruction levels in Pompeii, to discover some of what was being burned on household altars. We settled on giving the player an unlimited supply of five offerings: grapes, figs, cypress wood, grain and cockerels. But the challenge would be knowing what the god wanted at any particular time. You had to select two offerings – choose the right combination and the god would get happier, but get it wrong and the god would get unhappy. Make the god 100% happy and not only would you get to make a wish with a prayer offering, you’d get another god to add to your collection. You could ask the friendly Roman woman Lucia (based on a popular item in the museum, the remains of a resident of Roman York known locally (and online) as the Ivory Bangle Lady) for help. But before she’d tell you exactly what the god wanted, you’d have to answer a question, the answers to which could be found in the museum, and importantly in its on-line catalog.
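That two-offering mechanic is simple enough to sketch in a few lines of code. Here’s a rough Python outline of the core loop as I’ve described it – the names and numbers (the 20-point happiness step, for instance) are my own illustration, not what actually went into the Unity build:

```python
import random

# The five offerings we settled on, after Mark Robinson's Pompeii paper
OFFERINGS = ["grapes", "figs", "cypress wood", "grain", "cockerels"]

class God:
    """One level's god, who secretly wants a particular pair of offerings."""

    def __init__(self, name, step=20):
        self.name = name
        self.happiness = 50   # start the meter half-way
        self.step = step      # how far each guess moves the meter
        self.wants = set(random.sample(OFFERINGS, 2))

    def offer(self, first, second):
        """Make a two-item offering; returns True if the god is pleased."""
        pleased = {first, second} == self.wants
        delta = self.step if pleased else -self.step
        self.happiness = max(0, min(100, self.happiness + delta))
        if pleased:
            # The god moves on to a new secret desire for the next round
            self.wants = set(random.sample(OFFERINGS, 2))
        return pleased

    @property
    def fully_happy(self):
        # At 100% the player makes a wish and unlocks the next god
        return self.happiness == 100
```

Wire that up to a happiness bar and a “collection” of unlocked gods and you have the bones of level one.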

As we talked Juan sketched out how it might look.

The other three got out their high-powered lap-tops and started modelling, and coding the interface and the gameplay. Not having the skills or the equipment to join in, I got to work creating the questions for Lucia to ask. We weren’t going to need many, just for this demo, so it shouldn’t have taken very long. But as I worked I began to get quite frustrated with the on-line database. I was working through the circa 7,000 Roman entries one by one. I soon realised it was very difficult to search for what I wanted if I didn’t know what it was. Not only that, but the text was straight out of the museum’s own catalog. It was dry and academic, and nothing like the well-written labels in the museum itself. So after I tried (and failed) to craft questions that included enough key words to identify the right item in a search, I realised that I needed to create a whole other layer of on-line interpretation. And so the happygods project blog that I’d started to record our work was repurposed to become a support for players of the game. I wrote a new layer of interpretation that the game would refer to if you asked for help, and which would in turn link to the relevant on-line catalog entry.

Actually, that was the idea I had as I was going to bed around midnight on Friday. I resolved to actually do the work on Saturday. In truth, I was a little overwhelmed by the talents of my team-mates, and worried that I had little to add to the project. They were working so efficiently, turning Juan’s sketches into an animated, modelled interface. I could offer very little technical help, except to tidy up a crop of one of Juan’s sketches to create the Lucia interface, and to edit Edwige’s (very good) English for the game’s intro panel and our submission materials and Paradata.

But as it turned out that extra layer of interpretation was a very positive addition to the finished product.

Because we did finish it, and at four in the afternoon on Saturday we were able to present a finished, playable version of the first level of the game to our fellow Jammers and the judges. Yup, judges. I’d not realized when I first signed up that the Heritage Jam was a friendly competition. And even though I twigged that before I actually attended, I’d not gone with any hope of winning.

But win we did.

Next time I’ll write about the other projects, all of them brilliant and stiff competition.


Wow, that was a good couple of days. But I’m going to post about them later.


Things had not gone well earlier in the week. After my struggles with Twine last week I still had plenty to do, but events conspired to make sure my evenings were otherwise engaged. Wednesday especially, when I spent the evening beside the A322 awaiting recovery after a van crashed into the rear of my car.

This was the car I had planned to drive up to my father’s house in Yorkshire on Thursday evening, for a social visit but also to get me close to York for Heritage Jam. So instead I spent all day Thursday sorting out my insurance claim, trying (and failing) to get my car picked up and a courtesy car delivered soon enough to get up to Dad’s. Then, when that was ruled out, working out how to get to York in time for the Jam. Oh, and in-between trying to get my Twine game into a presentable state.

The Team Info-pointers project was ruled out. I didn’t get a chance to even look at the data I’d collected the previous week. I had to let my team-mates down, but I didn’t even get to send them an apologetic email until half-way through Friday, from York. I had taken the Info-point unit along, in case we got a chance to do something related to it.

Eventually I worked out that if I got the first train out of Farnham on Friday morning, I could get to York in time to miss only registration and the meet-and-greet elements. I emailed my plans to the organisers (the lovely and brilliant Tara and Izzy) and told them to save a place on a team for me.

Then I cracked on with the Twine. I got all the links working and debugged all the weirdnesses that were left over from early experiments. Cat sent me alternative (.MP3) versions of her sound files, in case people’s browsers couldn’t cope with the original .WAV files. I tried to host a build on my personal web-server so that people could play directly on-line, but again, couldn’t get the sound files to work. So, running short of time, I scrapped that idea and packaged everything into a “download this folder” package in my Dropbox’s Public folder. People would have to download it before playing. On reflection, I realize I should have compressed it first.

But I didn’t have time to think of that last Thursday. I had to enter it into the on-line stream of Heritage Jam and get an early night. Oh! I had to write a “paradata” too, up to a thousand words on the whys and wherefores of our creation. I’d forgotten about that. So, I scrapped the early night and set to work. Luckily, I’d already asked Cat for a paragraph or two to include in a coda at the end of the game. So that went straight into the paradata. The rest flowed pretty easily; I was too tired to care if it was good or not.

I attached it all to the email and pressed “send”. And here it is, the very last entry, you’ll note, that was put on Heritage Jam’s website.

@HeritageJam 2015 diary 5 – Twine, how do I hate thee? Let me count the ways.

Short post today, to scream my frustration into the aether. You recall that when Team Oakleaf got together, we had a bit of confusion over Twine 1.4, which I’d written in, and Twine 2, the shiny new version, which my teammate had downloaded. I hadn’t done much in my version, so we had the chance to choose which we should use.

Cat left the decision to me and, as the more experienced Twiner (!), I plumped for 1.4 because the image and (particularly) sound handling seemed better in that version. So on we went. Cat turned in a couple of brilliant aural models, one of which has to be heard to be believed. I got most of the way with the Twine, until all I had to do was create a way for readers to change a variable without linking to a new passage.

Two days I’ve spent on that. Using up all the time I’d planned to work on my Team Infopointers project. 

Can’t be done – except in Twine 2
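For the record, the way Twine 2 manages it (in its default Harlowe story format, as far as I understand it) is that a (link:) macro can wrap a (set:): clicking runs the macros in the attached hook and replaces the link text, and the reader never leaves the passage. Something like this – the variable name is just for illustration:

```
You stand before the altar.
(link: "Make an offering")[(set: $happiness to it + 10)The god seems a little happier.]
```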


@HeritageJam 2015 diary 4 – two projects.

Not as much progress as I’d hoped for this week. On Team Oakleaf, I spent last Sunday with a deck of index cards and a sharpie, breaking the story for a Twine game. The cards are still scattered across my dining room table, though more chaotically than they were. I had agreed with Cat, my Oakleaf team-mate, that once the job was done I’d type it all up in Prezi to share with her. But as I sat at my lap-top (which no longer seems to be charging – I need to look into that), I thought that the time I spent typing text into Prezi would be duplicated when I came to type it into Twine.

So, I spent the evening typing it into Twine instead; Cat would have to download Twine sooner or later anyway, I thought. Job done, I saved it all into a Dropbox folder (optimistically titled “Phoenix”) and shared the folder with her.

On Tuesday evening I turned to my other project, for Team Info-pointers. My teammates had sent me a demo infopoint – a Raspberry Pi, a vicious-looking high-end wi-fi adapter, and other things. So I spent the evening preparing a system card with Raspbian OS and Wireshark (including dumpcap), then firing up the Pi and (step one) trying to put the wifi adapter into Monitor mode. Not all wifi dongles are capable of this, and Windows machines especially don’t build in the ability. Despite having a GUI, Wireshark is not the most intuitive software, so much of the evening was spent finding my way around it. What I couldn’t work out how to do was have any impact on the wifi adapter itself. It sat there, glowing blue but refusing to respond to anything I clicked on. All the while the Monitor Mode checkbox sat ghosted and unclickable.

So I spurned the comforts of the GUI interface and turned to the Command Line. Still no luck. Just the smug blue glow of the adapter. I emailed my troubles to team-mate Paul and went to bed.

When Paul replied with the correct CLI syntax, I went straight there and typed it in.

wlan0: ERROR while getting interface flags: No such device

Aha, the (blue) light was on, but no-one was home. I swapped out the adapter for the cheap one from my boy’s Pi. As I booted up the project Pi, the little blue LED blinked and chattered away. The baleful blue of the fancy dongle was not smugness after all, but a lonely, dumb plea for someone to talk to. I was missing a driver. So I spent the morning working out what was missing. All I needed to do was “apt-get” firmware-linux-free. No problem!

Problem! Can’t get on-line! So, I swap the dongles again, download firmware-linux-free, and then restart with the fancy one. Success! I work out how to change the Time column to record UTC, in hours, minutes and hundred-millionths of a second, set the Source MAC as the next column, and then I create a new column to record RSSI (which, interestingly, is a negative two-digit number). I set it sniffing.
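If it helps anyone repeating this, the same three columns can also be pulled straight out of a capture on the command line and post-processed. This is a sketch of how I might do it in Python rather than what we actually ran on the Pi – I believe the tshark field names (frame.time_epoch, wlan.sa, radiotap.dbm_antsignal) are the standard ones, but check them against your own Wireshark version:

```python
# A capture like ours can be produced headlessly with something like:
#   tshark -i wlan0 -I -T fields \
#       -e frame.time_epoch -e wlan.sa -e radiotap.dbm_antsignal
# which prints one tab-separated line per sniffed frame.
from datetime import datetime, timezone

def parse_capture_line(line):
    """Split one tshark 'fields' line into (UTC time, source MAC, RSSI dBm)."""
    epoch, mac, rssi = line.rstrip("\n").split("\t")
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return when, mac, int(float(rssi))  # RSSI arrives as a negative dBm figure

# A made-up sample line, roughly the shape of what the Pi was recording
sample = "1443260000.123456\taa:bb:cc:dd:ee:ff\t-67"
when, mac, rssi = parse_capture_line(sample)
```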

Now with the adapter working in Monitor mode, I turn back to Team Oakleaf. Cat has downloaded Twine, but can’t load my draft. I try to do the same, downloading Twine onto my Mac and indeed, I can’t import my own draft story into it either. Slight panic as I fear the story saved on Sunday evening may have been corrupted. Then I realise – a new version has come out since I last used it. I wrote the story in Twine 1.4 and I’ve downloaded Twine 2. And the two are incompatible. I assume Cat has done the same thing, and email her. 1.4 is still available; she can use that to take a peek at what I’ve done.

But now a conundrum, do I replicate my work in a new Twine 2 story or, as cooperative working seems slightly easier in Twine 1.4, should we stay in that?

Then, back to Team Infopointers, I take a look at what Wireshark has been sniffing. I can see my own Wifi router, my neighbour’s, their Apple TV too (I’m assuming – I don’t have one). But not (I think after much watching and switching on and off of devices) any mobile devices…

If there’s one thing you do in September…

…  visit Lightscape at Houghton Hall (Norfolk)

Why? Houghton Hall is a place like many of those the National Trust looks after, but still family-owned and managed. David, the current Marquess of Cholmondeley, is, like generations before him, a patron of the arts. Houghton Hall gives us a glimpse of what some National Trust places might look like if patronage and collecting had continued up to the present day. The gardens are a place of surprise and delight, with contemporary sculptures including a Richard Long (above) and, from Jeppe Hein, Waterflame, a burning fountain. But of special interest this year is a retrospective of works by James Turrell, including two permanent commissions for the Houghton Hall landscape. Turrell’s deceptively simple works are incredibly powerful, and you’ll likely never again have a chance to see so many together in the same place.

But it’s not just the art; the service is exemplary. And for the late-night openings on Fridays and Saturdays, there’s a pop-up café, in keeping with the spirit of place, that we can all learn from.

Yes, it’s a little bit out of the way for those of us who don’t live in Norfolk, but make a weekend of it – it is worth it.


I’ve signed up for #HeritageJam

The theme for Heritage Jam 2015 was announced today, and booking opened. The theme is Museums and Collections, which is so far up my street, it’s on my front step, knocking on the door!

So I signed up for the In Person element at University of York (maybe showing too much eagerness, I think I may have been the first). And while I’ve not yet signed up for the On-line element (which runs until the eve of the York event, 24th September), I do have a great idea for it.

I can’t share that idea yet, until I’ve checked a couple of things out. But if those things do check out OK I’ll share more later. 😉

By the way if you want to sign up, you have until the 24th August, which isn’t what I said in my earlier post.

Interpretive Planning – Part 1

Having put together the first draft of my literature review a few weeks back, I’m plugging some of the gaps. One of the biggest is interpretive planning, where much of what I know comes from my first degree, back in the early nineties. Interpretive planning is more of an art than a science, and one can argue very well that you learn about it by doing it rather than reading about it. But I still feel I need to catch up with what people have written on the subject more recently than (it seems) I last read anything about it. So the next few posts will be looking at the last ten years or so of writing on that subject.

Hugh AD Spencer runs Museum Planning Partners, and once worked at Lord Cultural Resources as their Principal in Charge of Exhibition Development. While he was there he wrote a chapter on Interpretive Planning for The Manual of Museum Exhibitions, so that’s where my efforts to catch up on the theory start.

Spencer describes the interpretive plan as “a component-by-component description of all exhibition and programme components in terms of:

  • Thematic area
  • Communication objectives and experience aims
  • Exhibit media options
  • Special requirements and opportunities”

In the book, he gives an example of a plan he developed with client S.Y. Yim, curator of the Hong Kong Heritage Museum, for a gallery of artefacts associated with Cantonese Opera. The gallery comprises four components: Introduction and Entrance; Heritage and Study Precinct; Performance and Participation Precinct; and The Living Experience. The introductory component’s objectives include:

  • To make a powerful first impression of the cultural impact of Cantonese Opera, its creative diversity and its expressive character
  • To transport visitors into the traditional setting for Cantonese Opera
  • To communicate that the study of Cantonese Opera is a rich and informative means of studying change and continuity in Chinese cultures worldwide – with particular relevance to the Hong Kong region,
  • To communicate [how the gallery is organised]

Two of the “media and means of expression” that he outlines for that particular component are:

  • “Fushan Theatre Lobby/gathering place – modelled after a traditional venue for Cantonese Opera in the region. The theatre theming for the ceiling, floors and walls will provide an environmental context for all exhibits within the hall.
  • The Great Mask – a large-scale monochromatic face relief. Constantly changing images of faces in Cantonese Opera make-up (based on the characters made famous by important Cantonese artists) are continually projected onto the mask – illustrating the range of design, expression and human character of this medium.”

He also presents, as a case study, the Earth Galleries at the Natural History Museum, London. In particular he focuses on the impressive threshold created and opened in 1996, with the expressed intention of meeting the “need to place the new galleries on the London visitor map of ‘must-see destinations’ aspiring to the status of the blue whale of the Natural History Museum, or the mummies of the British Museum.”

Spencer doesn’t dwell much on how the story fits into the spaces, but he does offer an example schema (from the American Royal Museum and Visitor Center in Kansas City). It feels as though his illustration of the themes of that exhibition was intended to accompany text that didn’t make the final edit, but the Venn diagram of three sub-themes intersecting around a central theme feels familiar. Broadly speaking, it’s my starter for ten when anyone asks me (as somebody did a couple of weeks ago) how they might start breaking the story for an exhibition.

The important thing to note is that his thematic approach tends towards a non-linear model of interpretation, and that in turn relies on the introductory component of the exhibition to do most of the heavy emotive lifting, as the “wow” moments in his examples (the journey through the centre of the earth at the Natural History Museum, and the Great Mask in Hong Kong) illustrate.

But that’s not the end of the story – there’s a new edition of the Manual of Museum Exhibitions out, and next time, I’ll look at what’s changed.

The Magazzini Realised #buildyourownportus

Yesterday, I got the Lego bricks I’d ordered last week. So I set about building, to see if I’d got my LDD (Lego Digital Designer) design right. After I’d ordered them, I’d already spotted a few bricks I hadn’t put into the LDD model, and which thus weren’t on my order list. But I was disappointed to find that there were a number of pieces – the corner tiles, the 1×4 bricks – that I’d entirely missed when I was ordering.

So I had to raid my boy’s collection – luckily he had plenty of the right sort of bricks, plus some others (tiles especially) that weren’t available from Lego’s brick order service. So what I’ve ended up with isn’t exactly the model I designed.


In the building of it, I discovered weaknesses in the construction – for example, the solid wall can be pushed down off the model too easily when fixing the upper storey onto the ground floor. But of course, the advantage of working with your hands, building with bricks instead of bits, is that structural improvements are somehow more immediately apparent. The concept of learning styles has been pretty effectively debunked over the last few years, but there does remain the idea that you can learn about different things in different ways. My hands could “see” the model better than my eyes looking at the computer model.

One thing I wanted to check, which I’d found very difficult to measure, was the height of the two units stacked on top of each other. The archaeological evidence suggests the brick walls of the building were 11 metres high (the roof of course was higher still). Using my rudimentary 1 stud = 1 metre scale, my model should stand 11 studs high. Measuring height is very difficult in LDD, because one standard brick is more than one stud high, and especially because the LDD environment does not come with a vertical scale. Comparing my model with other CAD models of the building, it looks shorter, more squat, less elegant than the CAD ones. However, I was pleased to see that, when measured with a twelve-stud tile, my physical model is just about eleven studs high.
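For anyone checking my sums: the standard Lego dimensions are an 8 mm stud pitch and a 9.6 mm brick height, so a brick is 1.2 studs “tall” – which is exactly why vertical measurement in LDD is so awkward. A quick sanity check of the 1 stud = 1 metre scale:

```python
STUD_PITCH_MM = 8.0     # horizontal distance between stud centres
BRICK_HEIGHT_MM = 9.6   # a standard brick is 1.2 studs "tall"

def model_height_mm(real_metres, studs_per_metre=1):
    """Height the model should be, at the 1 stud = 1 metre scale."""
    return real_metres * studs_per_metre * STUD_PITCH_MM

wall_mm = model_height_mm(11)              # 11 "studs" of height = 88 mm
bricks_needed = wall_mm / BRICK_HEIGHT_MM  # not a whole number of bricks!
```

88 mm works out at about 9.17 bricks – which is why a built model can only ever be “just about” eleven studs high.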


So I’m going to deconstruct the model and rebuild it, physically and in LDD, with the bricks beside my screen. My aim is to make it stronger, and to use fewer bricks. I’m not sure we are going to be able to build an entire model during the Festival of Archaeology (especially at standard list prices), but I still want to build the most efficient model I can.

Which I’m sure was the aim of the Roman builders of the Grandi Magazzini, nigh-on a couple of millennia ago.

Low Friction Augmented Reality

So I read this the day after attending our PostGrad conference, wherein PhD candidates must present their work annually (or, for part-timers like me, every other year). While I was there I said to a colleague, “I wonder if I could make my presentation a location-aware game next year?” – and here’s how to do it. 🙂

But my arms get tired.

Maybe you’ve thought, ‘Augmented reality – meh’. I’ve thought that too. Peeping through my tablet or phone’s screen at a 3D model displayed on top of the viewfinder… it can be neat, but as Stu wrote years ago,

[with regard to ‘Streetmuseum’, a lauded AR app overlaying historic London on modern London] …it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk buying an old paper postcard of, say, St. Paul’s Cathedral and then holding it up as you walk around the cathedral grounds.

I’ve said before that, as historians and archaeologists, we’re maybe missing a trick by messing around with visual augmented reality. The past is aural. (If you want an example of how affecting an aural experience can be, try Blindside).
