Versailles 1685

I ran a session for a group of Master’s students yesterday, part of the 3D Recording, Modelling and Interpretation module. It was a great afternoon, with a really responsive group of students, who ended up planning a game around the Mayan city state of Tulum.

I talked for about an hour beforehand, riffing off Red Dead Redemption (of course) to discuss Tynan Sylvester’s Engines of Emotion, looking in more detail at Game Narratives, and finishing off with the idea of Kernels and Satellites.

On the way, I mentioned Versailles 1685, which I suddenly recalled while pulling my notes together. Twenty years ago, at University, one of my final year projects was a proposal for House of Delight, a game exploring the social and sexual mores of late seventeenth-century England. It didn’t get anywhere – well, it got me an interview with a video games company, but not a job. So I was very jealous when this game came out, but never got a chance to play it. That said, the company that didn’t give me a job did go bust a short while afterwards, and the interview led (in a roundabout way) to meeting my wife, so maybe it’s all good.

Versailles 1685, also known as A Game of Intrigue, was one of the first games commissioned by a museum authority, in this case the Réunion des Musées Nationaux, for the purpose of heritage interpretation. Created in 1996, it sold itself with:

  • 25 hours of gameplay, set in history’s most beautiful palace
  • Featuring OMNI 3D that lets you look and move around freely in an entirely 3D environment
  • Over 30 characters modelled in 3D from period portraits, bringing actual historical figures back to life
  • A stunning recreation of Versailles in 3D, just as it was in 1685
  • Over 200 paintings that you can study close up
  • A soundtrack of 40 minutes of Baroque music, true to the period.

Taking the role of Leland, a junior servant of the King, the player discovers and eventually foils a plot to burn down the Palace. The narrative gives the player an opportunity to explore a 3D model of what the palace (possibly) looked like in 1685, and an accompanying encyclopedia detailing Louis XIV’s court and collection.

It was pretty well received, spawned a sequel, and it’s now almost twenty years old. So, looking back, I wonder why we’re not inundated with games from heritage organisations, with specific interpretive objectives.

My introduction to GIS

Having wrestled with the open source QGIS package a few weeks ago, during my first attempt at modelling Portus in Minecraft, I decided it couldn’t hurt to give myself the introduction to GIS I so sorely needed. By happy circumstance, Esri, developers of the ArcGIS packages, had just started a MOOC in conjunction with Udemy. So I signed up for that and, for the last couple of weeks, I’ve been catching up (I started four weeks late) and completing the course.

It made for a brilliant introduction to GIS (for GIS virgins like me but also, it seems from the comments, for more experienced users), taught (mostly) by Linda Beale, with introductions from David DiBiase. I noted with interest that the Udemy MOOC engine (of course not really MOOC software, as most of Udemy’s courses are paid for) incorporated a time-stamped comments feature a bit like Synote, the one my colleagues are developing, but not quite as capable.

David and Linda introduce the course while I play with the notes function

There were song titles to look out for, smuggled into Linda’s lectures, and quizzes pitched at the right level of challenge to help review your learning. The songs, and some trick questions in the quizzes, betrayed a mischievous sense of humour, which I enjoyed. Some students didn’t – upset, I guess, at spoiling a 100% record – but these were quizzes, not exams.

Each week included one or two case studies, wherein we got to use an online version of Esri’s ArcGIS to solve data analysis problems: where to locate a distribution centre, monitor mountain lions, or build mixed-use accommodation, for example. These case studies were great fun… to begin with. But, as I caught up with my fellow students and we all started working on the ArcGIS servers on the same day, the software couldn’t cope, and timed out or returned errors on analysis. So in fact I haven’t done those three case studies, which I found very frustrating.

I’ve got a few weeks to go back and try them again when it’s not so busy, but I’ve spent the greater part of the last couple of months studying MOOCs and not getting on with my own work, so I was hoping to call it quits today. Next week, I’m going to experiment with Twine.

Synote, video and distance learning

I’ve been a bit quiet on this blog of late, partly because of devoting my time to two very interesting but concurrent MOOCs. Both of them came from the University of Southampton via FutureLearn, and they started in the same week. One, Shipwrecks and Submerged Worlds: Maritime Archaeology, was only four weeks long, though, so having completed it, and this week’s work on Web Science: How the Web is Changing the World, I have a little more time to catch up with the blog.

Of course, one of the ways in which the web is changing the world is the provision of this sort of education. And for the duration of these courses I’m always getting distracted by the learning experience itself. Last time, it was participation on the forums that sparked my interest. This time it’s video. The videos on FutureLearn seem short: three, four, or at the most seven minutes long. Contrast this with the ones on the Coursera course I did on statistics: they were 20 to 30 minutes long. Looking at the guidance FutureLearn offers for partners creating course content, the recommendation is no more than ten minutes.

I’d prefer something longer. To be honest, what I really wanted was an audio-only podcast, to listen to as I drive for work. My gold standard is In Our Time, the discussion programme hosted by Melvyn Bragg on BBC Radio Four. But that’s by the by; the video content on FutureLearn seems to be the briefest of introductions to concepts, the shallowest of discussions, not a developing and involving narrative (though I don’t recall thinking that with the Portus MOOC, which is interesting).

I guess one of the reasons why they keep the videos short is that they want to get people quickly discussing the subject on the forums. It would be difficult to retain an interesting thought you had during the video if you had to wait 20 minutes for the video to end. Then there are the short quizzes, which give participants an opportunity to reflect on what they’ve learned. Coursera had a system where they could include these in the video itself. Indeed, if I recall correctly, you couldn’t continue with the video until you’d had a go at the quiz. FutureLearn treats the quizzes as separate elements, normally towards the end of the week, and only occasionally during the week’s content, but always on a separate page. The Coursera system, in a crude way, lets you interact with the video. FutureLearn treats the video as a discrete element.

Don’t get me wrong, I’m not saying I’d prefer longer videos to the text articles that FutureLearn offers. I’m just as happy to learn by reading as by watching. It’s just that I feel the short video format doesn’t use the medium to its full potential. Video has a great ability to compress or expand time, overlay the real with the imaginary, and explore distance, but those abilities need room to breathe.

Last week I was invited to have a look at a technology that might reconcile my desire for longer videos with the didactic need to discuss what we’re watching. Synote is an application developed by Mike Wald at the University of Southampton to make “multimedia resources such as video and audio easier to access, search, manage, and exploit. Learners, teachers and other users can create notes, bookmarks, tags, links, images and text captions synchronised to any part of a recording, such as a lecture.”

Mike and PhD student Yunjia Li showed us a new version of the application, currently in development, with a view to making it usable for MOOC learners as well as others. They showed us how easy it is to play a video through Synote and, while it’s playing, make comments that are timecoded to particular parts of the video; comments can even be attached to particular areas of the screen. Comments can link to other web-based resources, anything with a URI in fact. And as every comment has a URI of its own, you can link from one section of the video to another section of a related video, effectively making your own “mash-up” (although with buffering it won’t be quite as slick as something edited together).

Adam, a colleague from the University’s Winchester School of Art was also (virtually) at the meeting, and soon set up a group of his students to help design a better user interface. You can read about their exciting and efficient workshop here.

So as I’ve worked through this week’s content for the Web Science MOOC, I’ve been thinking about Synote and how it might be used. To be honest, the main course content videos seem too short to reward the effort of running them through a different web viewer just to be able to tag your comments to a particular place in the video. And reading the comments, just one commenter (at the time of writing) seems to have felt any need to refer to a particular point in the video. It seems the brevity of the videos might actually contribute to the general nature of the comments.

However, the MOOC has sent us off to the TED website to look at a couple of longer videos there. Often the “See also” links at the end of an article point to videos. And these videos are often longer (the TED ones run just under fifteen minutes), and on these videos I think it would be good, from a learning point of view, to be able to tag comments to particular sections of the video. For example, a couple of commenters included links to videos that weren’t part of the “see also” course-related material. They might have preferred to have the ability to point their fellow students to the particularly relevant section of each video. One such video was a TED talk by Daniel Dennett, always a favourite of mine. He quoted a lovely reference about five minutes 40 seconds in, about how “‘real magic’ doesn’t exist. Conjuring, the magic that does exist, is not ‘real magic’”. Now I’d like to point you, dear reader, to that moment, but I’ve taken two lines of text linking you to the video and telling you where to find the bit that I thought was particularly funny. It would have been so much easier if I’d been using Synote.
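
For what it’s worth, the W3C Media Fragments syntax already offers a crude version of this kind of deep link: appending #t=<seconds> to a video URL asks a compatible player to start playback at that moment. Here’s a minimal sketch of building such a link; the talk URL is a placeholder of my own, not the real TED address:

```python
def timestamp_url(url, minutes=0, seconds=0):
    """Build a link that starts playback at a given moment, using the
    W3C Media Fragments temporal syntax (#t=<seconds>), which HTML5
    video players and some hosting sites understand."""
    total = minutes * 60 + seconds
    return f"{url}#t={total}"

# Pointing readers at 5m40s into a (hypothetical) talk URL:
print(timestamp_url("https://example.org/dennett-talk", 5, 40))
```

Synote’s per-comment URIs go much further than this, of course, but the fragment trick works anywhere today.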

So, imagine a MOOC assignment that said “watch these through Synote and share/mash up the bits that are most relevant to what we’ve been discussing”. Imagine participants setting up a Synote playlist of all the bits of TED talks most relevant to the subject they are discussing. Imagine, in the Daniel Dennett talk above where he asks the audience to spot changes in a series of short videos, participants actually being able to mark exactly where on screen and in which frame they first noticed the change.

All of these are things that Synote is capable of.

More about forum participation in education

Another random set of notes about forums I’m afraid this week, but we’re close to finishing the paper that I’m co-authoring so normal service will be resumed shortly.

Initially, on-line forums were offered in addition to print-based correspondence courses, and were, alongside email and web-based articles, considered optional “so as not to reduce access to students without internet or computer facilities” (Bates, 2008).

The 2002 paper by Wu and Hiltz sets out possible benefits of forum participation in education:

Online discussions that persist throughout the week should motivate students to be more engaged in their course on a continuous basis […] Secondly, active participation in online discussions, which are student-dominated rather than instructor-dominated, should be enjoyable for the students. It should make learning more active and “fun.”

To test these hypotheses, they surveyed 116 participants in three face-to-face courses (two undergraduate and one graduate) for which active participation in forums was a requirement of the course. It’s important to note the trial was observational, not a randomised, controlled trial, and the surveys tested perceptions of learning rather than testing learning itself. However, they concluded that students did find that the asynchronous discussion afforded by forums made the course more enjoyable and increased motivation. They also discovered that the amount of previous experience with distance learning courses didn’t appear to affect how enjoyable or motivating students found the on-line discussions on the observed courses. The importance of the instructors’ involvement in setting topics for discussion, offering feedback and guiding discussion was highlighted by the students’ responses, with one saying that instructors should be “online for two to three hours every day.”

Two years later, Biesenbach-Lucas (2004) put forward her interpretation of the benefits of forum participation (particularly in teacher training) :

Positive interdependence: Students organize themselves by assuming roles which facilitate their collaboration.

Promotive interaction: Students take responsibility for the group’s learning by sharing knowledge as well as questioning and challenging each other.

Individual accountability: Each student is held responsible for taking an active part in the group’s activities, completing his/her own designated tasks, and helping other students in their learning.

Social skills: Students use leadership skills, including making decisions, developing consensus, building trust, and managing conflicts.

Self-evaluation: Students assess individual and collective participation to ensure productive collaboration.

Her paper only expects the instructor to act as “Observer/evaluator, perhaps some participation”; however, she admits that, over the course of her five-semester experiment with forums, the instructor carried more and more outputs from the forum over into face-to-face sessions.

Vonderwell, Liang and Alderman (2007) explored asynchronous online discussions, assessment processes, and the meaning students derived from their experiences in five online graduate courses, and concluded:

Educators need to look more carefully into the notions of “assessment for learning” as well as “assessment of learning.” Online learning pedagogy can benefit from a notion of “assessment as inquiry” and “assessment of constructed knowledge” in asynchronous discussions.

Kearns (2012) offers a reasonable summary of the challenges including the sheer number of posts that might need the instructors’ attention:

One problem arising from the asynchronous nature of online discussion is the impact of late posting. For a discussion that runs from Monday to Sunday, for example, students in the discussion group may miss the opportunity to fully engage if some wait until Saturday to begin. On the other hand, even in classes where discussion is sometimes less than robust, students may face the challenge of having to keep up with voluminous postings across multiple groups and discussion forums. As one of the participants pointed out, “Sometimes it’s hundreds of entries.” […] A recurring theme among instructors who participated in this phase of the study was the amount of time and effort involved in providing effective feedback to online students. One source of demand was online discussion. Several instructors reported being overwhelmed with the amount of reading this required. As one instructor remarked, the discussion board became “cumbersome when done every week.” Another demand on an instructor’s time that was raised was having to enter comments on student papers using Microsoft Word rather than being able to handwrite in the margins. For one instructor, this was “time consuming” and “more tedious” than annotating the hard-copy assignment. One instructor mentioned needing a greater number of smaller assessments to oblige students to complete activities that might otherwise be completed during F2F class time. In her words, “If there is not a grade associated with an assignment, it is completely eliminated.” Finally, several instructors commented on having to answer the same questions more than once in the absence of a concurrent gathering of students.

Mentioning peer assessment as a useful strategy in coping with this challenge, she cites Yang and Tsai (2010), which relates an interesting experiment with peer assessment, finding peer-reviewed marks reasonably comparable with those of an external assessor, and measuring the impacts on students’ perceptions of, and approaches to, peer review. Though in the context of MOOCs I’m not looking at this stage for a robust marking procedure, I am interested in peer review as a way of tagging posts so that they can be used to create procedural or semi-random narratives.

So there may be lots of analysis of that challenge, and some useful ideas in the packed paper by Meyer (2006). For example:

Many research studies do not use multiple raters to code the content in online discussions. This may be owing to a number of factors, including the instructor’s preference for working alone or a lack of interested colleagues to help with coding. Researchers may not have the time to train other coders or the money to pay them, or perhaps the aim is simply to collect data about the learning of a given set of students rather than to produce reliable findings. In other words, there may be understandable reasons for not using multiple raters, despite the greater reliability that might result from their use.

Crowdsourcing the rating from other students, in peer review, may be an answer.

Reading about forum participation as a component of on-line learning

I’ve participated in two MOOCs so far, one through Coursera and one through FutureLearn. One difference between the two platforms is the use of Forums.

In the Coursera course on statistics, the forum was presented as an add-on: a tool available to students who wished to interact with other students, discuss concepts raised, offer feedback on the course and, especially, seek help with the weekly assignments that were the main form of assessment during the course. But the forum didn’t feel part of the course, and there was no evidence that my participation on the forum was being evaluated, either formally or informally.

On the FutureLearn course, each learning element came with its own forum built in, and students were actively encouraged to submit work on the forum, and to comment on each other’s submissions.

Neither course ended with any form of certification, so any evaluation of the students’ work on the FutureLearn forums was informal, but there was more of a sense of the course team taking an active interest in how students participated in forum discussions than on the Coursera course.

With a growing number of courses delivered wholly or partly on-line, and in particular the expansion of Massive Open Online Courses (MOOCs), new models of student participation and evaluation have developed. One such model is the use of discussion forums. A development of the Bulletin Board Systems of the early internet, forums can be described as an asynchronous form of conversation that uses type. Forums are often archived, at least temporarily, and the course of the whole conversation can be viewed at any time, which distinguishes discussion forums from other typed conversations such as Internet Relay Chat. Discussion forums are often a component of Virtual Learning Environments (VLEs) like Blackboard, etc.

Moodle is an open source virtual learning environment, first released in 2002 and used by a number of institutions worldwide (including, for example, the Open University) to deliver on-line education. It was originally developed by Martin Dougiamas, who (for example in Dougiamas and Taylor, 2002) is a proponent of social constructionist pedagogy. Lewis (2002) is a much-cited study that was one of the first to use a randomised trial to evaluate the effectiveness of discussion forums as a learning tool. Although inconclusive on the main question, one new hypothesis raised was that “online group discussion activities must reach a certain level of intensity and engagement by the participants in order to result in effective learning.”

Indeed, Hrastinski (2008) is concerned that asynchronous online conversations can be difficult to get going if too few students participate. However, when they do succeed, Hrastinski offers evidence that asynchronous conversations stay on-topic for longer, give students more time to reflect on complex issues, and allow students from different time zones, and with different time commitments, to participate.

Given these advantages, it’s no wonder that on-line course designers want to include discussion forums in the toolset that they offer to students. But if the forums are to be an effective learning tool, students must be incentivised to participate. One obvious incentive is to make participation in discussion forums part of the student’s assessment. Morrison (2012) offers an example rubric that makes clear to students how their participation could be assessed. In her example, the quality of the initial post is measured according to relevance, clarity and depth of understanding. Follow-up posts are graded according to frequency and supportiveness. Word count and timeliness are also factors that affect grading.

This is just one example, but it demonstrates the effort required by instructors to properly assess each student’s work. An active and vibrant forum may have dozens or, especially likely with MOOCs, hundreds of posts. Automated tools, especially those that enable supportive peer review, are required if the full learning potential of asynchronous discussion forums is to be realised.

Changing direction?

I’ve been doing a lot of thinking around my participation in the Portus MOOC a few weeks back. This post is an attempt to get my thoughts in order, so I apologise in advance for any disjointedness.

First of all, let me edit in some thoughts on locative gaming, prompted by a Guardian article on social gaming I read today while I should have been bashing this post into shape. Describing Destiny, the new game from Bungie (which is of course a console game, not a location-based one), the article’s author says

On a practical level, though, “social” is a business model. It means content engineered to be “liked” or shared. It means fundamentally we spend anxious time doing free labour for social infrastructures, providing our personal lives, disseminating links, making those platform-holders wealthy with our exhibitionism and interaction. When it comes to games, it’s increasingly on the player to create the meaning in their experience.

And passionate players provide unpaid labor to games development, too: games are being released in beta and updated in public, so that the end product will better meet their needs. Thus the eager front-line beta testers mitigate the expensive risk of developing a commercial tech product, just through the fuel of their social behavior.

[…]

It is social in that business sense: you must collaborate with and keep up with your friends, ensure that your statistics and equipment – your fitness for competition – are ever increasing. You participate excitedly in this capitalistic metaphor.

Having “played” Ingress for a couple of weeks now, I’m beginning to feel the same frustrations as the author. I simply don’t have the time and dedication to labour on behalf of Google and for the benefit of my fellow players. I know what I ought to do to have an enjoyable experience is recruit friends and family into the game so that we can play as a team, or build relationships with other players to do the same. But it’s too much bother. It’s not for my generation, I’ve concluded.

In the Guardian, Leigh Alexander concludes:

I believe in the potential for games to create incredible collaborative environments for play. But let’s think about what a “social” play experience would look like if it served us, the users, and not the platform, whose only real desire is to have us use it, to have us serve and propagate it, to lend hours of our time to its cold lunar ecosystem.

What would a locative game look like that served the users rather than the platform? We’ll have to wait and see.

Right now though, I’ve been thinking about how the Portus MOOC might better serve its users. I’ve been looking at all the comments that were posted by students on the MOOC, and though I’ve not yet done any proper text analysis, my impression is that the Portus team received great praise from participants, but there are two apparent challenges for web-based learning:

  • Spatial and contextual awareness. Comments from participants consistently highlight the difficulty of understanding the spaces involved, their relationship to each other, and their scale. Efforts to understand spaces were further undermined by the struggle to understand the context as the topography and use of space changed during the 500-year period of occupation. Copious maps, plans, 360°/spherical panoramas and references to Google Earth and Bing Maps failed adequately to mitigate this challenge.
  • A preference for didactic learning over investigation. Though many participants relished the more autodidactic optional activities, a considerable number expressed discomfort when faced with interpretation tasks where users generated their own content. Peer review was especially daunting.

My supervisor, Graeme Earl, has already addressed the first point in his post on the Portus MOOC blog. Therein he says:

Some of you have already used ingenious methods, such as pacing out the size of a canal on your driveway or finding household objects similar to those we find at Portus. This is fabulous and please keep sharing these ideas – it is really helpful for us and for other learners.

But what do we do if we want to immerse you in the site as it is today, and as it was in the past? I would like you to imagine the buildings towering above you, to feel as though you are walking the streets and avenues in the footsteps of the Roman sailors, warehouse workers, slaves and traders that walked there two thousand years ago. You did this in textual form fantastically already in the First Century discussion in week one and in the Summary of the Week in week five, and it would be great if you continued to produce image or audio versions and share them on the Flickr group pool.

We’ve been thinking about how, for the next run of the Portus MOOC, we might lift our model of Portus off the page, take it out of the tiny window of the average computer monitor.

Imagine this.

Armed with a smartphone (loaded with a simple app that we create), one of our MOOC participants takes a walk, wherever they live, and finds a piece of ground of a reasonable size: a park perhaps, or a school playing field, or even a parking lot. As long as it’s reasonably clear of obstructions it should be fine. They walk around the field, pacing out as large a rectangle as they can, using the smartphone’s GPS function to define and log each of the four corners. The app (or maybe it’s an HTML5 web app, so they (and we) don’t have to worry about app stores) tells them how the area they’ve measured out compares to the area of the Portus site.
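
As a sketch of how that area comparison might work: the app could project the four logged corners onto a local flat plane and apply the shoelace formula. Everything below is illustrative – the corner coordinates are an arbitrary example, and SITE_AREA_M2 is a placeholder constant, not a measured figure for Portus:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def enclosed_area_m2(corners):
    """Approximate area of a small polygon given as (lat, lon) pairs
    in degrees, using a local equirectangular projection followed by
    the shoelace formula. Fine at playing-field scale."""
    lat0 = math.radians(sum(lat for lat, _ in corners) / len(corners))
    pts = [(math.radians(lon) * EARTH_R * math.cos(lat0),
            math.radians(lat) * EARTH_R) for lat, lon in corners]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Placeholder for the site footprint -- not a real Portus figure.
SITE_AREA_M2 = 1_000_000

# A roughly 100 m x 100 m rectangle paced out in a park (example values):
corners = [(50.9360, -1.3970), (50.9369, -1.3970),
           (50.9369, -1.3956), (50.9360, -1.3956)]
ratio = enclosed_area_m2(corners) / SITE_AREA_M2
```

The app would then simply report the ratio to the participant (“your field is about 1% of the site”).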

Then (and here is the clever bit) the app scales everything we know about the real Portus to the area they’ve described. Using the app, and maybe some physical markers of their own, they can locate the intersections of the streets, and the locations and sizes of the buildings that we’ve excavated. The app would allow them to map the changes that took place over time too, so that they could plan out each phase. Then they can walk those streets, and the app can help them visualise the buildings they are walking past, and how goods (and people) moved from one space to another on their journeys in and out of the Port.
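
The scaling step itself is simple enough; a sketch, with illustrative names and numbers (nothing here comes from a real Portus dataset):

```python
def scale_to_field(site_xy, site_size, field_size):
    """Map a point given in site coordinates (metres from the site's
    SW corner) into the participant's paced-out rectangle, preserving
    the aspect ratio by using the tighter of the two scale factors."""
    scale = min(field_size[0] / site_size[0], field_size[1] / site_size[1])
    return (site_xy[0] * scale, site_xy[1] * scale)

# e.g. a 1000 m x 800 m site squeezed into a 100 m x 60 m playing field:
corner_of_building = scale_to_field((500, 400), (1000, 800), (100, 60))
```

Every excavated street intersection and building corner goes through the same transform, so the whole plan shrinks together.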

When I say visualise, I bet you are thinking they hold their phone up and, looking through the screen, see 3D models that we’ve made of the buildings in AR. I guess it’s a possibility, but we’re beginning to push at the limits of the technology here: smartphone GPS has been getting better, but most phones are likely to deliver something accurate only to between three and nine metres, and what with level changes both on the site where they are doing this and at Portus itself, I fear that an AR presentation might end up with so many visual glitches that it becomes off-putting rather than insightful and inspiring.

So actually I’m thinking there is a better learning outcome in making them do it all in their heads. I like the idea of learning that visualisation starts in the imagination, not at the 3D modelling interface. Grant Morrison, who wrote the challenging comic The Invisibles, coined the term Fictionsuit to describe a method by which an author interacts with the characters in his (or her) diegesis by becoming a character in the diegesis. In a way, this is what the MOOC asked students to do in the “First century discussion” that Graeme referred to in his post. Some participants (according to the comments) were more comfortable than others with this exercise, but I’m convinced it’s a vital tool for interpreting archaeological evidence and exploring a world that can, in a very literal sense, only be a creation of our collective imaginations.

In another way, game avatars are fictionsuits too, whether they are created by the authors of the game, like John Marston in Red Dead Redemption, customisable creations of the player as in Skyrim, or held entirely within the imagination of the player as in Dear Esther.

But there’s a dichotomy between the exercise of imagination, and the “truth” of an academic paper or computer model. And the evidence of the comments betrays, among participants on the MOOC, a preference for passive acceptance of an expert’s model over willingness to imagine a model of their own.

So I’m thinking about how we might use game mechanics to:

  • Immerse participants in the geo-spatial relationships of different parts of the site. Exploring it by moving from place to place on a map (or even a scale recreation of that map in a real-word space) to access different content.
  • Encourage the creation of fictionsuits to explore the possibilities of how the site might have worked
  • Share (and even evaluate) interpretations of the site and the evidence

That last, the sharing and evaluation of interpretation, is a particular challenge, the game-mechanic solution to which might be in some work I looked at yesterday. Without wanting to reveal too much about the project of a colleague I only just met, I was introduced to a team which is working on using game mechanics to create Linked Data for an enormous corpus, and also to evaluate learning. It strikes me that this methodology could be incredibly useful for MOOCs. Yes, it uses game-play to source unpaid labour, just like the social games that Leigh Alexander was berating in today’s Guardian, but it does offer the intrinsic reward of actual, real learning.

I’m still trying to synthesize all this into a coherent project, but I do think I’m getting somewhere.

Please do comment if this all feels like nonsense though.

A Lego Magazzini #buildyourownportus

My post a couple of weeks back on the Portus MOOC, and trying to model Building Five in Lego, attracted some visits from my fellow students, a few comments in the MOOC itself, and at least one other attempt to use Lego Digital Designer as an archaeological tool.

It so encouraged Graeme Earl that he wrote about it on Southampton’s MOOCs blog. He also provides a link there to some plans and drawings of the Grandi Magazzini di Settimio Severo that he persuaded Grant Cox (he of the astounding computer models) and Christina Triantafillou to create.

The challenge is evident. Can we, the MOOC’s students, rise to it and build our own models of this enormous building?

One could, of course, use the drawings themselves as building blocks, reproducing them to the correct scale, sticking multiple copies to card, and assembling them with glue.

Or, of course, I could turn to Lego again.

This is another huge building, bigger even than building five. In videos from week one and week three of the MOOC, Simon Keay strides down the remains of the corridor as though it’s a street. (EDIT: or am I confusing that with the Portico di Claudio?) So in the end I’ll resort to Digital Designer again. But first, let me get a feel for the shape by getting my hands around some real bricks.

Looking at the plans, it’s apparent that many of the storerooms are the same structure, repeated again and again across each wing and two floors. There are other spaces, stairwells etc., that don’t conform to the pattern. But to begin with, I’m looking for a modular design for the storerooms.

First of all I lay out a simple version of the design on a baseboard:

20140624-213243-77563942.jpg

Here, I’ve made a very unscientific decision about scale. After my attempt at building five, I’m less interested in building it to minifigure scale. I’m not sure even LDD has enough virtual bricks (!) and anyway I don’t want to place them all. So instead I’m experimenting with the smallest scale possible, and here I’ve decided I can get away with one stud = one metre. Of course the plans show varied decimal fractions of a metre in the metrics, so I’m rounding up and down arbitrarily. Romans – if you find this blog through some sort of temporal anomaly – do not scale up my Lego measurements. You’ll be very disappointed.

20140624-214314-78194641.jpg

Even without the correct scale, the act of modelling makes one think about how the spaces go together, and really interrogate the plans. The picture above shows one of the arches which looked out over the Claudian Basin and the sea beyond. Now though I’m wondering – is it open to the floor? Or does it have a sill? For the time being I’m leaving it open to the floor.

20140624-214912-78552795.jpg

20140624-214913-78553040.jpg

Then there are decisions to make that aren’t about an absence of information, but rather the limitations of the Lego System. The plans show domed interior ceilings, almost like the vaulted ceilings of medieval cellars, but with Lego I can only have arches. So should I put them across the room, or down its length? As the images above show, it could work both ways. In the end I decide to put them across the room, and fake the vaulting with some inverse roof tiles. Like so:

20140624-215338-78818256.jpg

“Minifigure scale” is well known among adult fans of Lego, but there is a smaller scale, based on the pieces used in some of Lego’s board games such as Heroica. Sadly these “microfigures” are still too big to populate my building, so I resort to a minifigure film star’s Oscar statuette to give the building a sense of scale. Talking of which, I know the width of this interior doorway but the plans don’t show the height:

20140624-221014-79814849.jpg

Finally, I want to get rid of the baseboard. Having got this far in plastic, and got an idea of the size of pieces I need, I’ll be moving on to virtual bricks. Then I’ll need to create a repeatable module that clicks together, so I’m better off creating a “baseboard” that goes on top of the structure.

20140624-221817-80297521.jpg

20140624-221817-80297296.jpg

That’s enough for tonight. Next time, the virtual model.

The Portus MOOC and modelling

Shipshed render - Grant Cox - http://www.artasmedia.com/

Having been focussing on my Symposium presentation, and then taking a week camping with the family in France, I’ve finally caught up with the other students on the Portus MOOC, which I’d had a sneak preview of some months back. We’re two thirds of the way through the course, which is a bit late for catching up, but never mind. The debate in the user forums below each activity has been better than I’ve seen in any other MOOC I’ve tried – rather than simple requests for help with an issue, the Portus learners have been contributing their own knowledge and experience and asking probing questions which have elicited very full and illuminating replies from the Project team.

This week, the course has turned to two aspects of the Portus Project which are of particular interest: the mysteries of “building five” and computer modelling. Building five is a large building almost 250 metres long, and (according to evidence which I won’t repeat here – if you want to know, do the course) it may have been a boatbuilder’s shed, although no evidence has been found as to how exactly ships built in the shed might have been launched. The course also explored the activity of modelling the building, showing how computer models are used to hypothesize the shape of parts for which evidence is missing.

There was some debate in the forum comments about the power of CGI models to convince the viewer that this might be the ONLY truth, while the Portus team are using CGI to explore all the possibilities. There is an argument that the sketchier archaeological drawing is inherently less “final” and more pregnant with possibilities, but I’m not sure I’m convinced by that argument. I am convinced, though, that the act of modelling is more informative than the finished model, whether it’s created by computer or pen and ink.

So how easy would it be to give all the MOOC learners an opportunity to get their hands dirty with modelling? The sort of software used by the Portus team comes with expensive licenses and a steep learning curve, which likely puts it out of the reach of most home learners. But there are alternative 3D modelling packages. My colleague Javier Pereda suggests two on-line 3D modelling programs that come with their own tutorials: Tinkercad – a Web-based 3D system which is a simplified version of 123D from Autodesk; or 3DTin – also Web-based, and a little bit more complex, but with the difference that this app can export models to other sites like Thingiverse, where users might even 3D print them.

Could the Portus team create 3D models of some of the actual finds from Portus, standing remains, and the other evidence they’ve discovered about the shape of the buildings’ footprint, and enable their on-line students to start creating their own 3D models? I must admit I took a more playful route, which resonates with my recent reblog.

After a discussion about trying to build Portus in Lego minifigure scale, I quickly worked out that even my son doesn’t have enough bricks for that. So I turned to Lego Digital Designer (which I guess is a 3D modelling package – but, for me at least, one with a less steep learning curve), and after spending a day and a half (!) creating one corner of Building Five to virtual minifigure scale, I’ve produced this:

Portus5.1

Not as impressive as Grant Cox’s render at the top of the post. But I’ve also learned two things: that the engagement of playing with the evidence and the Lego System to try and model Portus is a valid educational activity; and, that minifigure scale definitely does use too many bricks.

The CHESS Experience

Why haven’t I discovered this before? Last week an email from the Guardian Cultural network pointed me to a headline reading “Tell me a story: augmented reality technology in museums”. Now augmented reality articles are two a penny, but “tell me a story” had me intrigued. So I clicked through, and got very excited reading the standfirst, which said “Storytelling is key to the museum experience, so what do you get when you add tech? Curator-led, non-linear digital tales.”

“Non-linear”?! That’s a phrase very close to my heart, so I’ve spent all morning reading about the CHESS Experience project.

(Well, I spent some of my morning thinking “Oh no, that’s what I wanted to do! How come Southampton University isn’t part of that project? Why didn’t I do my PhD at Nottingham? That’s where all the cool kids hang out, apparently.” But having wallowed in a bit of self-pity, I got back to reading.)

CHESS stands for Cultural Heritage Experiences through Socio-personal interactions and Storytelling, which sounds right up my street. And the project summary says “An approach for cultural heritage institutions (e.g. museums) would be to capitalize on the pervasive use of interactive digital content and systems in order to offer experiences that connect to their visitors’ interests, needs, dreams, familiar faces or places; in other words, to the personal narratives they carry with them and, implicitly or explicitly, build when visiting a cultural site.” This is all good stuff.

But actually the reality of the project so far doesn’t seem quite as exciting as I’d hoped. The “personalised” story in A Digital Look at Physical Museum Exhibits: Designing Personalized Stories with Handheld Augmented Reality in Museums seems rather to be just two presentations of story, one for children (in which, for example, the eyes of the remnant head of a statue of Medusa glow scarily) and one for adults (wherein the possible shape of the whole statue fills in the gaps between the pieces). A Life of Their Own: Museum Visitor Personas Penetrating the Design Lifecycle of a Mobile Experience discusses visitors preparing for their visit by completing a short quiz on the museum’s website. When they arrive, their mobile device will offer them a story designed for one of a limited list of “personas.” This isn’t personalisation but rather profiling, as we discussed at The Invisible Hand. And the abstract for Controlling and Filtering Information Density with Spatial Interaction Techniques via Handheld Augmented Reality describes “displaying seamless information layers by simply moving around a Greek statue or a miniature model of an Ariane-5 space rocket.” This doesn’t seem to be offering the dynamic, on-the-fly adaptive narrative I was hoping for.

But it’s good stuff, nonetheless, and there’s a great-looking list of references which I want to explore. There’s also project participant Professor Steve Benford (who does little to disprove the theory that all the cool kids go to Nottingham). He’s a banjo-pickin’, guitar-playin’ musician and Professor of Collaborative Computing who, among many, many other things, has published a bunch of papers on pervasive games and performance, which I think my Conspiracy 600 colleagues might want to (need to) read.

Steve also provides the soundtrack for this post, which I hope you enjoy.

A first look at my Bodiam data

Last week, I had a look at the developing script for the new Bodiam Castle interpretive experience (for want of a better word). It’s all looking very exciting. But what I should have been doing is what I’m doing now: running the responses from the on-site survey I did last year through R, to see what it tells me about the experience without the new … thing, but also what it tells me about the questions I’m trying out.

A bit of a recap first. One thing we’ve learned from the regular visitor survey that the National Trust runs at most of its sites is that there is a correlation between “emotional impact” and how much visitors enjoy their visit. But what is emotional impact? And what drives it? In the Trust, we can see that some places consistently have more emotional impact than others. But those places that do well are so very different from each other that it’s very hard to learn anything about emotional impact that is useful to those who score less well.

I was recently involved in a discussion with colleagues about whether we should even keep the emotional impact question in the survey, as I (and some others) think that now we know there’s a correlation, there doesn’t seem to be anything more we can learn by continuing to ask the question. Others disagree, saying the question’s presence in the survey reminds properties to think about how to increase their “emotional impact.”

So my little survey at Bodiam also includes the question, but I’m asking some other questions too to see if they might be more useful in measuring and helping us understand what drives the emotional impact.

First of all though, I ask R to describe the data. I got 33 responses, though it appears that one or two people didn’t answer some of the questions. There are two questions that appear on the National Trust survey. The first (“Overall, how enjoyable was your visit to Bodiam Castle today?”) gives categorical responses, and according to R only three categories were ever selected. Checking out the data, I can see that the three responses selected are mostly “very enjoyable”, with a very few “enjoyable” and a couple of “acceptable”. Which is nice for Bodiam, because nobody selected “disappointing” or “not enjoyable”, even though the second day was cold and rainy (there’s very little protection from the weather at Bodiam).
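For anyone curious how that looks in R, a frequency table does the job. This is just a sketch with stand-in data: the real column name and exact counts aren’t published here, so the figures below are invented to match the description above.

```r
# Stand-in for the survey responses: counts invented to be consistent with
# the description (mostly "very enjoyable", a very few "enjoyable",
# a couple of "acceptable"), 33 responses in all.
enjoyable <- factor(
  c(rep("very enjoyable", 27), rep("enjoyable", 4), rep("acceptable", 2)),
  levels = c("not enjoyable", "disappointing", "acceptable",
             "enjoyable", "very enjoyable")
)

table(enjoyable)              # counts per category, empty categories included
table(droplevels(enjoyable))  # only the three categories actually selected
```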

The second National Trust question was the one we were debating last week: “The visit had a real emotional impact on me.” Visitors are asked to indicate the strength of their agreement (or, of course, disagreement) with the statement on a seven-point Likert scale. Checking out the data in R, I can see everybody responded to this question, and the range of responses goes all the way from zero to six, with a median of 3 and a mean of 3.33. There’s a relatively small negative skew to the responses (-0.11), and the kurtosis (“peakiness”) is -0.41. All of which suggests a seductively “normal” curve. Let’s look at a histogram:

hist(ghb$emotion)
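For anyone following along at home: skewness and kurtosis aren’t in base R (packages like e1071 provide them), but they’re easy enough to define by hand. Here’s a sketch with stand-in data, since the actual survey responses aren’t published here:

```r
# Stand-in Likert responses on the 0-6 scale; the real data isn't published.
emotion <- c(0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6)

# Hand-rolled skewness and excess kurtosis (base R has neither):
skewness <- function(x) mean((x - mean(x))^3) / sd(x)^3
kurtosis <- function(x) mean((x - mean(x))^4) / sd(x)^4 - 3  # excess kurtosis

median(emotion)     # the middle response
mean(emotion)
skewness(emotion)   # negative => a longer tail of low scores
kurtosis(emotion)   # negative => flatter ("less peaky") than normal
hist(emotion)       # draws the histogram
```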

Looks familiar, huh? I won’t correlate emotional impact with the “Enjoyable” question; you’ll have to take my word for it. Instead I’m keen to see what the answers to some of my other questions look like. I asked a few questions about learning, all different ways of asking the same thing, to see how visitors’ responses compare (I’ll be looking for some strong correlation between these):

  • I didn’t learn very much new today
  • I learned about what Bodiam Castle was like in the past
  • What I learned on the visit challenged what I thought I knew about medieval life, and
  • If this were a test on the history of Bodiam, what do you think you might score, out of 100?

The first three use the same 7-point Likert scale, and the last is a variable from 1 to 100. Let’s go straight to some histograms:

hist(ghb$learning2x2)

What do these tell us? Well, first of all, a perfect demonstration of how Likert scale questions tend to “clumpiness” at one end or the other. The only vaguely “normal” one is the hypothetical test scores. The Didn’t Learn data looks like the opposite of the Learned data which, given that these questions ask opposite things, is what I expected. I’m sure I’ll see a strong negative correlation. What is more surprising is that so many people disagreed that they’d learned anything that challenged what they thought they knew about medieval life.
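To check that hunch, a rank correlation is the natural tool: Spearman’s suits ordinal Likert data better than Pearson’s. A sketch with invented data and invented column names (the post doesn’t give either); with real responses you’d expect something strongly negative rather than the perfect -1 this toy data produces:

```r
# Invented Likert responses for illustration; with the real data these would
# be columns of ghb, something like cor(ghb$didntLearn, ghb$learned, ...).
didnt_learn <- c(1, 0, 2, 1, 3, 0, 1, 2, 0, 1)
learned     <- c(5, 6, 4, 5, 3, 6, 5, 4, 6, 5)

# Spearman's rank correlation handles ordinal scales and ties sensibly.
cor(didnt_learn, learned, method = "spearman")
```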

An educational psychologist might suggest that this shows that few people had, in fact, learned anything new. Or it might mean that I asked a badly worded question.

I wonder which?