Having wrestled with the open source QGIS package a few weeks ago, during my first attempt at modelling Portus in Minecraft, I decided it couldn’t hurt to give myself the introduction to GIS I so sorely needed. By happy circumstance, Esri, developers of the ArcGIS packages, had just started a MOOC in conjunction with Udemy. So I signed up and, for the last couple of weeks, I’ve been catching up (I started four weeks late) and completing the course.
It made for a brilliant introduction to GIS (for GIS virgins like me but also, it seems from the comments, for more experienced users), taught (mostly) by Linda Beale, with introductions from David DiBiase. I noted with interest that the Udemy MOOC engine (not really MOOC software, of course, as most of Udemy’s courses are paid for) incorporates a time-stamped comments feature a bit like Synote, the one my colleagues are developing, though not quite as capable.
David and Linda introduce the course while I play with the notes function
There were song titles to look out for, smuggled into Linda’s lectures, and quizzes that were the right level of challenging to help review your learning. The songs and some trick questions in the quizzes betrayed a mischievous sense of humour, which I enjoyed. Some students didn’t – upset, I guess, at spoiling a 100% record – but these were quizzes, not exams.
Each week included one or two case studies, wherein we got to use an online version of Esri’s ArcGIS to solve data analysis problems: where to locate a distribution centre, monitor mountain lions, or build mixed-use accommodation, for example. These case studies were great fun… to begin with. But as I caught up with my fellow students, and we all started working on the ArcGIS servers on the same day, the software couldn’t cope, and timed out or returned errors on analysis. So in fact I haven’t done the three case studies, which I found very frustrating.
I’ve got a few weeks to go back and try them again when it’s not so busy, but I’ve spent the greater part of the last couple of months studying MOOCs and not getting on with my own work, so I was hoping to call it quits today. Next week, I’m going to experiment with Twine.
We had a great meeting yesterday for our funding application, though everyone has so many great ideas that the biggest challenge is going to be scoping those ideas into something achievable. Barring a couple of extra questions, everybody seems reasonably happy with the survey I drafted, so all we’re waiting for now is the green light from ERGO, the university’s ethics monitoring system.
My team mate Mark showed me a game I hadn’t seen before, Ingress, from Google and currently only available on Android phones. At its heart is a reasonably simple geocaching mechanic based on public artworks, but around that is a territory-capture mechanic that smacks of the Feng Shui RPG and The Invisibles, and on top of it all, what appears to be a captivating storyline. Beyond that, players are playing the game in a way that I don’t think was intended, using the mechanics to create virtual two-colour artworks across the maps. All in all it’s something I want to play, but I only have an iOS phone 😦
Other location-based games came up in the conversation around the survey too. SCVNGR is a very commercial game, looking to more obviously monetise Foursquare’s behaviours. Chromaroma looks more fun: like Ingress, it’s a game of territory capture, and it looks well suited to Londoners or commuters with a bit of time on their hands.
Another game for Londoners is Magic in Modern London, an iPhone app produced by our old friends at the Wellcome Trust (an institution which also gave us High Tea). This is a scavenger hunt of a different sort, based upon an exhibition put on at the Wellcome Collection back in 2011. This isn’t something that came up in yesterday’s conversation, but was instead brought to my attention by an article in today’s Guardian, which also tells us that it’s not just Londoners getting all the fun. One of the brains behind that game is currently working on one for Oxford museums, called Box of Delights. I’m looking forward to giving it a try.
These are the challenges. And looking at them, it feels quite daunting. Can our project manage to produce a similar (or dare I say it, an even better) experience using only extant platforms?
Last Saturday I went to the inaugural conference of the Centre for Digital Heritage at the University of York. The first speaker was Professor Andrew Prescott, who gave us a salutary reminder that the so-called Industrial Revolution wasn’t quite as revolutionary to those living through it, and that some of what we now realise were world-changing developments were not seen as such at the time. Whether we’ll recognise what is/was important about the current so-called Digital Revolution remains to be seen. But don’t let me speak for him; if you like, through the power of digital, you can see his slideshow here:
It was a mature and sobering start to the conference, but also inspirational. Towards the end he mentioned conductive ink that was safe to touch (or to paint on your skin if you want a working circuit-board tattoo) and pointed us towards the work of Eduardo Kac as an example of how the digital and real worlds might collide in new ways:
I was particularly interested in the presentation from Louise Sorenson about a project to capture stories from families that emigrated from Norway to the US. The idea was to build a Second Life-style recreation of the journey many such emigrants took (from Norway to Hull first of all, then overland to Liverpool to catch the boat to America). This would work as an inter-generational learning tool, letting people explore their forefathers’ journeys and add to the world from their own family tales, photos or objects that might have been passed down the family from the original travellers. This experiment turned out to be one of those “a negative result is not a failure” types. They didn’t manage to capture much new data (though they did get some, shared on this blog), but learned a lot about why they didn’t, which Louise shared with us. For a start – Second Life? Remember when that was the “next big thing”? Early adopters got very excited and talked about it as though we’d all use it, like Neal Stephenson’s Metaverse. But us “norms”, if we logged on at all, realised pretty quickly that it was hard work modelling your world, the pioneers were profiteering, selling us land and other stuff that existed only as ones and noughts, and, most tragically, everywhere you looked there were avatars having kinky sex.
In fact Ola Nordmann Goes West, as Sorenson’s project was called, rejected Second Life as a platform for at least two of those reasons. Instead the team opted for an open source alternative, OpenSim. This allowed them to avoid the virtual property speculators and kinky sex, but didn’t solve the hard-work problem. The challenge of downloading the client, installing it, setting it up (with an IP address, rather than an easy-to-remember URL) and then signing up was an off-putting barrier to an audience used to just clicking on the next hypertext link. And all this is competing for online time with more established social networks like Facebook and Flickr, either of which might have more natural appeal to emigrant families, because both are natural tools for keeping in touch with distant relations. Then there’s the numbers problem.
The Ola project tells me around a million Norwegians emigrated to the US between 1825 and 1925, and that about four and a half million Americans are descended from those families. That feels like a large number. But when you slice it up to count the number of people who discover the project, the proportion of those who are interested in it, the number who get past the client barriers, and then the fraction who feel they have something to add to the story, you are going to end up with very few people.
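Just to make that funnel concrete, here is a back-of-the-envelope sketch in Python. Only the four-and-a-half-million starting figure comes from the project; every conversion rate below is a number I have invented purely to illustrate how quickly a big audience shrinks.

```python
# Purely illustrative funnel arithmetic: the starting population comes from
# the Ola Nordmann project, but every rate below is an invented guess.
population = 4_500_000  # Americans descended from Norwegian emigrants

stages = [
    ("hear about the project", 0.01),              # hypothetical
    ("are interested enough to visit", 0.25),       # hypothetical
    ("get past the client install barrier", 0.10),  # hypothetical
    ("feel they have a story to add", 0.05),        # hypothetical
]

remaining = population
for stage, rate in stages:
    remaining *= rate
    print(f"{stage}: roughly {remaining:,.0f} people")
```

Even with those fairly generous guesses, the funnel ends somewhere around fifty active contributors.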
I’ve spent a few paragraphs on this presentation because it’s particularly relevant to my original proposal, wherein I asked “What can real-world cultural heritage sites learn from the video games industry about presenting a coherent story while giving visitors freedom to explore and allowing them to become participants in the story making?” The Ola project is all about giving people freedom to explore and become participants in the story making, and so it’s a very useful example of some of the traps I might have fallen into. Given that the sites I work with have an annual visitorship numbering in the tens (or, if they are lucky, hundreds) of thousands, their chances of attracting even a tiny number of active community participants are even more limited than Ola Nordmann’s.
An alternative approach to public participation was shown by John Coburn. Tyne and Wear Archives and Museums put their collection online as many institutions have done, but online collections remain a connoisseur’s resource: as Coburn said, “it’s only engaging if you know what you are looking for.” With the Half Memory project, the museums service handed their online collection over to creative people of all sorts to create compelling digital experiences. “Designing digital heritage experiences to inspire curiosity and wonder is more important than facilitating learning,” Coburn insists.
PhoneBooth, from the LSE library
Ed Fay’s project, PhoneBooth, for the LSE Library, had an even smaller intended audience: geography students sent out by their lecturers at the LSE to explore the London described by Charles Booth’s survey of 1898-9. Booth colour-coded every street according to the evidence he witnessed and recorded on the streets, classifying them with one of seven colours ranging from Black (Vicious, semi-criminal) to Yellow (Upper-Middle and Upper Class). It reminded me, as he spoke, of the MOSAIC classification from Experian that the National Trust uses. The library digitised both his published results and all his notes years ago, but PhoneBooth is an app that lets you take that data with you and walk the streets just as Booth did. It even lets you overlay the data with the modern equivalent – no, not MOSAIC, but the Multiple Deprivation Index.
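I haven’t seen PhoneBooth’s code, but the overlay it offers – a historical street classification joined to a modern index by location – is the sort of thing that can be sketched in a few lines of geopandas. The file names and column names below are hypothetical placeholders, not the LSE Library’s actual data.

```python
# A minimal sketch (not PhoneBooth's actual code) of overlaying a historical
# street classification with a modern deprivation index, using geopandas.
# File and column names are hypothetical placeholders.
import geopandas as gpd

booth = gpd.read_file("booth_streets_1898.geojson")  # hypothetical export of Booth's colour-coded streets
imd = gpd.read_file("imd_areas_2010.geojson")        # hypothetical Multiple Deprivation Index polygons

# Put both layers in the same coordinate reference system before joining.
booth = booth.to_crs(imd.crs)

# Attach the deprivation score of whichever IMD polygon each street falls in.
joined = gpd.sjoin(booth, imd[["imd_score", "geometry"]],
                   how="left", predicate="intersects")

# Plot the streets coloured by Booth's class; the IMD score travels with each row.
ax = joined.plot(column="booth_class", legend=True, figsize=(10, 10))
ax.set_title("Booth 1898-9 classes over a modern deprivation index (illustrative)")
```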
Ceri Higgins shared her experiences working with the BBC and other academics to create a documentary about Montezuma. As the programme was being put together, she grew more and more excited: this was a film that was going beyond the old tropes of gold, sacrifice, and invasion by the Spanish to reveal a broader representation of Aztec society. However, by the time it came out of the editing suite, it had become, in her opinion at least, all about the old tropes of gold, sacrifice, and invasion by the Spanish. The bad guys here were the narrativists who, using tried and tested Aristotelian principles of drama, needed a protagonist, an antagonist and plenty of conflict to sell the programme. They didn’t think the more nuanced interpretation that Higgins had hoped for (and which, I understand, was filmed) would connect emotionally with the audience. Hmmmm.
Pause for a moment of self reflection.
I wish I’d managed to chat with Ceri during one of the breaks. It strikes me, given all the footage which told different, more nuanced stories, that this is a case for The Narrative Braid!
Another presentation that grabbed me was from a team led by Helen Petrie, presenting their efforts to interpret (and then evaluate the interpretation of) “Shakespeare’s church”, Holy Trinity in Stratford-upon-Avon. The interpretation, a smartphone app, was nothing special, using techniques that a myriad of other developers are also trying to push on cultural heritage institutions. But the evaluation was something new. According to Petrie, “surprisingly little empirical research is available on the effects of using [smartphone app] guides on the visitor experience.” It’s not so surprising actually, considering how difficult it is to record emotional responses without participants intellectualising them. Anyway, they started from a clean slate, creating a psychometric toolset that includes the Museum Experience Scale (and of course the Church Experience Scale). The presentation was a top-line summary, of course, but I’m keen to read more about it, as I’m pretty sure I saw at least one bar chart with an “emotional engagement” label.
Another sort of guide, and one long imagined, was described by Adrian Clark. Ten years ago he started working on a 3D augmented reality model of parts of Roman Colchester, but the technology required at the time was at the limits of what was wearable, and by no means cheap. Now that the Raspberry Pi is on the scene, he has started work again, and hopes soon to have a viable commercial model.
We also saw a presentation from Arno Knobbe, who showed us ChartEx, a piece of software that can mine medieval texts (in this case, property charters) and pull out names and places and titles. The program will then also algorithmically suggest relationships between the people and places mentioned in the charters, and thus suggest where the same John Goldsmith (for example) appears in more than one charter. Jenna Ng analysed the use of modern Son et Lumière shows in historic spaces. Valerie Johnson and David Thomas explained how the National Archives are gearing up for collecting the digital records that will soon be flooding in as the “30 year rule” becomes the “20 year rule.”

My supervisor, Graeme Earl, introduced a section on the history of multi-light imaging, in honour of English Heritage’s guide on the subject. The subsequent papers covered RTI, as well as combining free-range photography with laser scanning to create accurate texture maps and very readable 3D models. One fascinating aside (for me) was that the inventor of the original technique, Tom Malzbender, originally thought its main use would be in creating more realistic textures for computer games. We also looked at: the digitisation of human skeletal remains (which makes putting them together a lot easier, apparently); the 3D modelling of the hidden city walls of Durham (though personally I’m more excited by the Durham Cathedral Lego Build, which started today, first brick laid by Jonathan Foyle); and the digital recording, and multiple reconstructions, of medieval wall paintings.
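Going back to ChartEx for a moment: I don’t know what algorithm it actually uses, but the “same John Goldsmith in two charters” problem is, at heart, fuzzy record linkage. Here is a toy sketch of that idea using nothing more sophisticated than string similarity plus a bonus for a shared place name; the charter data is invented.

```python
# A toy illustration of the record-linkage idea behind ChartEx (not its actual
# algorithm): score pairs of people mentioned in different charters on name
# similarity and shared places, and flag likely matches. The data is invented.
from difflib import SequenceMatcher
from itertools import combinations

mentions = [
    {"charter": "C101", "name": "John Goldsmith", "place": "Micklegate"},
    {"charter": "C214", "name": "Johannes Goldsmyth", "place": "Micklegate"},
    {"charter": "C307", "name": "John Goldsmith", "place": "Fossgate"},
    {"charter": "C307", "name": "William Tanner", "place": "Fossgate"},
]

def name_similarity(a, b):
    """Rough similarity between two name spellings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for m1, m2 in combinations(mentions, 2):
    if m1["charter"] == m2["charter"]:
        continue  # only link mentions across different charters
    score = name_similarity(m1["name"], m2["name"])
    if m1["place"] == m2["place"]:
        score += 0.2  # a shared place name makes the match more plausible
    if score >= 0.8:
        print(f"Possible same person: {m1['name']} ({m1['charter']}) "
              f"<-> {m2['name']} ({m2['charter']}), score {score:.2f}")
```

A real system would of course weigh many more clues – titles, dates, co-occurring witnesses – but the shape of the problem is the same.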
There were poster presentations too. Two that leaped out for me were Katrina Foxton’s exploration of “organic engagement” with cultural heritage on the internet, and Joao and Maria Neto’s experiments with virtual agents as historic characters.
My wife sent me a link to Historypin the other day. Historypin hopes to collect and geo-locate photos from all of us, to create a massive database linked to Google’s Street View recording how places have looked, and developed, over time.
It’s a lovely idea, but it is (on random inspection of a number of places I know) short on scanned photos. Which is to say, there are relatively recent digital photos, which of course are easy to upload, but few historical photos, which would require people to dig them out of albums and boxes, scan them into digital form and then upload them.
There are some older photos, though. I wondered if, given the number of recent photos, there might be some of the Olympic cycle race on Box Hill last year. There weren’t, but I did discover my first historical image, this one uploaded by the (endangered, at the time of writing) National Media Museum. Still searching for Olympic photos, I widened the area, and found another, this time uploaded by Dorking Museum. It turned out the area around Dorking and Box Hill is rich with historical rather than more recent photos, but interestingly they are mostly uploaded by institutions rather than individuals.
Now this isn’t quite in the spirit of Historypin’s founders, We Are What We Do, a London-based not-for-profit that, according to the website, “creates ways for millions of people to do more small, good things.” Of course, I don’t know who at these two museums was responsible for uploading the photos; it could be work entirely done by volunteers. And one could argue that the publicly funded (for the time being at least) National Media Museum should make every effort to broaden its audience, but I’m intrigued to know what the business case was for devoting time to adding pictures to this database.
I’m not saying it’s wrong; it’s great that they did so. But I know that a local National Trust place in the area, Polesden Lacey, has a good collection of historic photos, and some of them would benefit from geo-location, but I’m not sure I could make a case for people to spend time uploading and accurately locating the photos on Historypin.
Imagine, though, not just photos but other media – audio, video and text interpretation – pinned to geo-locations, that could be pulled down by a user directly to their phone. Polesden Lacey already has an audio tour that features specially recorded audio, as well as contemporary and historic photos and video. That content is currently accessed via iPods that people can borrow. As iPods don’t have GPS chips, it’s not properly geo-located, but each bit of content is “pinned” by a map to a specific area of the grounds. Imagine if that same content was available to every smartphone user, via something like Historypin. (Of course, the Polesden area would need a decent data-carrying mobile signal, which it doesn’t have, so this is all somewhat academic.)
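The mechanics of “pull down the content pinned near me” are simple enough to sketch. This is not Historypin’s API, nor the Polesden Lacey tour; it’s a hypothetical example of returning the items pinned within a given radius of a visitor’s position, using the haversine distance.

```python
# A hypothetical sketch of serving geo-pinned content: given a visitor's
# position, return the interpretation items pinned within a radius.
# The content items and coordinates are invented; this is nobody's real API.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

pinned_content = [  # invented examples of media pinned to locations
    {"title": "Historic photo: the rose garden, c.1910", "lat": 51.2530, "lon": -0.3580},
    {"title": "Audio: Edwardian house-party gossip", "lat": 51.2528, "lon": -0.3575},
    {"title": "Video: brewing in the estate yard", "lat": 51.2560, "lon": -0.3620},
]

def nearby(lat, lon, radius_m=100):
    """Return the content pinned within radius_m metres of the visitor."""
    return [item for item in pinned_content
            if haversine_m(lat, lon, item["lat"], item["lon"]) <= radius_m]

for item in nearby(51.2529, -0.3578):
    print(item["title"])
```

Everything hard about the idea – the content itself, the narrative, the mobile signal – sits outside those few lines.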
It’s not hard to imagine all that data being accessed from the web in all sorts of exciting ways, or by all sorts of mobile apps, so that a man interested in the history of brewing can stand next to another interested in Edwardian gossip and a woman interested in nature walks, and all three can access the bits of data most relevant to their interests, woven into an emotionally compelling narrative (that last part is the difficult bit – but more on that another time). That is, after all, the whole point of the world wide web.
But the challenge is the huge variety of databases competing with one another. Historypin is competing for attention in two ways: it is competing for content with any number of photo servers, including Flickr and Instagram, and it’s competing for attention at the location with companies like Foursquare and Yelp. Of course you can argue that a well-made story app will be server-agnostic and seek out the best content from all the servers, but then, is that a sustainable model for anybody?
This 2011 paper from my Southampton colleague Michael Jewell and Clare Hooper of Eindhoven spins an enticing tale of fiction drawn from your location. But the first two services it mentions, Wanderlust Stories and Broadcastr, no longer exist. How can cultural institutions be confident of placing the right bet when it comes to making their content available geo-spatially?
Historypin has the backing of Google, so it may be around for some time. But Google isn’t a charity; it’s there to make a profit, and it has been seen to be pretty ruthless with its own services when it needed to be. How can the likes of the National Media Museum and Dorking Museum be confident that the time they spent uploading and locating their photos will still bear fruit in five or ten years’ time?
While I’m looking at (broadly) how narratives can be told across space, I gatecrashed an interesting seminar today looking at how spaces (that’s places, not the space between the words) can be pulled out of narratives and mapped. It’s all part of the Spatial Humanities project at Lancaster University. Patricia Murrieta-Flores visited Southampton (her alma mater) today to share some of the work she has been doing as a proof of concept for the idea.
In the first example, Patricia explained how the team had processed the nineteenth-century records of the Registrar General to co-locate clusters of deaths from cholera, diarrhoea and dysentery. The idea (as I understand it) is that they input digitised versions of the historical texts (which they call the corpus) and the system parses them, pulls out the place names, matches them against gazetteers, and maps them in GIS. The output shows the clusters on a map of the UK. This isn’t easy to automate, and it’s still quite a handcrafted process, because of different historical names for places, different spellings, different gazetteers, and disambiguation (does a mention of Lancaster, for example, mean the city in the UK, the county in Pennsylvania, or Mr Lancaster?).
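I don’t have access to the Lancaster team’s pipeline, but the shape of the process as Patricia described it – pull place names out of the corpus, resolve variant spellings against a gazetteer, and hand the coordinates to a GIS – can be sketched in a few lines of Python. The gazetteer, the spelling variants and the sample text below are all invented; a real system would use proper NLP tools and historical gazetteers.

```python
# An illustrative (and heavily simplified) version of the pipeline described
# above: find place names in a text, resolve variant spellings against a
# gazetteer, and emit coordinates ready for mapping in a GIS.
# The gazetteer entries, variants, and sample text are all invented.
import re

gazetteer = {  # canonical place name -> (latitude, longitude)
    "Lancaster": (54.047, -2.801),
    "Liverpool": (53.408, -2.992),
    "Hull": (53.745, -0.336),
}

variants = {  # historical / variant spellings -> canonical name
    "Lancastre": "Lancaster",
    "Leverpool": "Liverpool",
    "Kingston upon Hull": "Hull",
}

text = ("Deaths from cholera were reported at Leverpool and at Kingston upon Hull, "
        "with further cases noted near Lancastre.")

# Match the longest names first so multi-word places win over their parts.
names = sorted(list(gazetteer) + list(variants), key=len, reverse=True)
pattern = re.compile("|".join(re.escape(n) for n in names))

for match in pattern.finditer(text):
    canonical = variants.get(match.group(), match.group())
    lat, lon = gazetteer[canonical]
    print(f"{match.group()} -> {canonical} ({lat}, {lon})")
```

The sketch also shows where the handcrafting comes in: nothing here could tell you whether “Lancaster” means the city, the Pennsylvania county, or Mr Lancaster – that disambiguation still needs context and human judgement.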
What they discovered wasn’t surprising: the largest spikes of co-incident deaths from the three diseases corresponded with three of the occurrences of a cholera epidemic. Patricia’s story, though, had an interesting resonance with the story of John Snow, the “legendary” epidemiologist. During the first spike highlighted in Patricia’s work, Snow suggested that the disease might be water-borne and not, as previously thought, miasmic. The second spike, in 1854, occurs as Snow is analysing data himself to identify a particular water pump in Broad Street as the centre of an outbreak. By the time of the third spike, in 1866, the authorities had begun to base their advice to citizens upon John Snow’s learning, and the fourth and largest spike, two years later, is not co-incident with an epidemic but a result of better reporting because of Snow’s work.
In the second example, Patricia touched on literature rather than historical record, charting mentions of Lake District places in the work of 18th-century writers. The output showed how what began as a stopping-off point on the way to Scotland became a destination in its own right as the century (and the railways) developed.
Of course, this process has revealed nothing particularly new, but both these experiments were always meant as proof of concept. The exciting work, discovering new truths from less well known historical and literary narratives is about to begin…
Here’s a report a colleague linked me to, about a website that maps WWII bomb locations onto modern London, and links to more details and images. The images are only local to the area – I guess it’s too much to hope that every bombsite was photographed.
The project was part-funded by JISC. Something for me to look into later.
Today I joined Southampton’s Web Science DTC students and Digital Economy USG colleagues for four lunchtime presentations. The one I was immediately drawn to was headstream. Julius Duncan told us how they arrived at the Social Brands 100 report, monitoring a company’s website, blogs and social networking presence to see how much of the output was of value to followers, how well they responded to followers, how much video and photography they linked to (worth far more to followers than boring status updates, it seems), and so on. The talk was about process, not results, so I had to go online to see that Innocent was the top social brand, which isn’t surprising: they are one of the few “brands” in my Facebook feed, giving me value with their funny posts, and selling through me to my social circle (though I discovered this weekend that it annoys my sister).
As Julius was talking, I wondered if the National Trust had thought of working with them. According to this blog post, yes. But this sort of stuff is interesting at a national, or international, scale – I’m interested in how any one cultural heritage site, with a smaller, more local following, can leverage the social.