What PhD supervisors are for

I had a great chat with my supervisor on Thursday, after helping out with a Masters seminar. As regular readers may have worked out, I’ve been having a great deal of trouble trying to get a coherent, testable design out of my half-formed ideas and lofty ideals.

The problem was trying to think of a cheap way to test some of the theory I’ve come up with. I’d got hung up on trying to think of a way to track visitors round a site and test their reactions to that. Until I solved that, I was handwaving the issues of breaking the story into natoms, and of balancing the conflicting needs of multiple visitors in the same space. Those two problems both felt more within my comfort zone. The trouble is that I’m not a technologist: the tracking is so far out of my comfort zone that I’d need to enlist (or pay for) one. On top of that, the tech itself isn’t that cheap – getting a wifi network into some of the heritage places I know, with their thick stone walls and sheer scale, isn’t about buying just one wifi router.

I’d mentioned the other problems (particularly the one of negotiating conflicting needs) in the seminar. (The students had been reading about a variety of museum interpretation experiments for their “homework”, and we discussed the common issue that many of the experiments focussed on a visitor in isolation, and hadn’t thought enough about multiple users in the same space.) Afterwards I spent twenty minutes with Graeme, my supervisor, in his office. I felt he’d finally got what I’d been trying to say about a “responsive” environment, and his interest was particularly focused on the two issues I’d handwaved. We talked about low-tech ways of exploring both of those, and of course THAT’S what I should be doing, not worrying about the tech. These are both things I can do (I think!) rather than something I can’t.

So by the end of our chat, when Graeme had to return to his students, we’d worked out the rudiments of a simple experiment.

  • What I need is a relatively small heritage site, but with lots of choices about routes and lots of intersections between spaces – what Hillier calls a low depth configuration. (That last link is to a fancy new on-line edition of the book, by the way. It’s worth a read.)
  • I need to work with the experts/curators of that site to “break” the stories. Break is a script-writing term, but it feels particularly appropriate when thinking about cutting the stories up into the smallest possible narrative atoms. (Although maybe “natomise” is better!)
  • Then I need to set up the site to simulate some of the responsiveness that a more complex system might offer – concealed Bluetooth speakers, for example, or switches like these that can be controlled by Bluetooth.
  • Finally, rather than try and create the digital system that tracks visitors and serves them ephemeral natoms, I can do a limited experiment with two or more humans following visitors around and remotely throwing the switches that might light particular areas of the room, play sounds, or whatever other interventions we can come up with. The humans take the place of the server and, when they come together, negotiate which of their visitors gets priority. Graeme suggested a system of tokens that the human followers could show each other – but the beauty of this concept is that the methods of negotiating could become part of the results of the experiment! The key thing is to explain to the participants that the person following them around isn’t giving them a guided tour: they can ask questions of him/her, but s/he isn’t going to lead their experience.

So, now I have a thing that it is possible to do, with minimal help and a minimal budget. And it’s a thing that I can clearly see has aims that come out of the research I’ve done, and results that inform the platonic ideal responsive environment I have in my head. If it works, it will hopefully inspire someone else to think about automating it.

That’s what supervisors are for!


Music in Interpretation

Jeanice asked me before Christmas about academic study of how music impacts heritage interpretation. My first response was “there is none” (and I stand by that), but it did make me dig out a couple of papers that I’d found and not included in my literature review. And on reflection, I think I may indeed go back and add one of them in.

The first was Musical Technology and the Interpretation of Heritage, a conference keynote speech given by Keith Swanwick, and published in 2001 by the International Journal of Music Education. That publication is the clue that this isn’t really about heritage interpretation (as I’m defining it) at all, but rather about cultural transmission through music, and especially through music teaching. I left it in my notes because it references information and digital technology but, re-reading it for Jeanice, I realise it doesn’t do anything with that reference, apart from equating music itself with ICT as a mode of cultural transmission.

There’s some discussion of compositions created with cultural transmission as an intent, which may be interesting for some later study, but it doesn’t give the overview of music as museum/cultural heritage-site media that the title promised.

More interesting is this paper, from the V&A’s on-line research journal. The paper explores the development of a collaborative project between the Royal College of Music and the V&A, involving new recordings of period music for the Medieval & Renaissance Galleries. The first thing that strikes me is that the galleries opened in 2009, and yet the project was conceived in 2002. Sometimes I wish I worked for an organisation that gave such projects a similar amount of time to gestate.

Drawing on all their front end evaluation, and the debates on learning styles and segmentation that have taken place over the years, the Medieval & Renaissance Galleries team were keen to offer visitors a “multi-sensory framework […] incorporating opportunities for tactile experience, active hands-on learning and varied strategies for helping visitors to decode medieval and Renaissance art actively.” This included audio as well as film and other digital media.

This is where I found the quote that, on reflection, I think I should at the very least include in my literature review. It comes from a footnote, and usefully sums up my fruitless search for literature on music in cultural heritage sites:

Music in museums has not been the focus of detailed study or writing.

The article goes on to round up various ways in which music is used in (general, not museums of music) interpretation, for example, places where popular music of the twentieth century is used to help immerse visitors in a particular decade. Of particular interest is The Book of the Dead: Journey to the Afterlife, a British Museum exhibition for which, “…a musical soundtrack was commissioned to heighten emotional effect at a key moment in the exhibition narrative.” I might have to try and find out more about that commission.

The V&A actually included two exhibitions in their music making. While the Medieval & Renaissance Galleries were being developed, the museum ran a temporary exhibition called At Home In Renaissance Italy, for which 24 pieces were recorded and played ambiently in rotation. “Evaluations demonstrated an overwhelmingly positive response to the music from the visitor’s point of view” but also highlighted some of the problems, not least of which was that some people (especially staff who have to hear it non-stop) really don’t like ambient music. This evaluation informed how the music project developed for the permanent gallery.

The plan had been to use pre-existing recordings “that could help visitors to imagine the medieval and Renaissance worlds and to convey emotion and feeling.” But as the curatorial research developed, it became apparent that there were opportunities to use music that hadn’t previously been recorded, but that was directly connected to the objects and stories of the exhibition. Because “evaluation of audio provision in the V&A’s British Galleries demonstrated that audio-tracks were less effective without a strong connection to immediately adjacent objects or displays”, the museum decided upon benches equipped with touch-screens and good quality headphones as “audio-points”, where a user could sit and browse music related to what they could see in front of them. Each piece faded out after a minute or two, to ensure a reasonable rate of churn of listeners, but the complete pieces were available from the V&A website for those who wished to listen to them in full.

Sadly the evaluation had too wide a remit to explore in depth visitors’ responses to the music. All they could say was that it “showed that a high percentage of users engage with the audio-points, a strong indication that they are valued by visitors.” I would have liked to have discovered how well the music achieved its aims of conveying emotion and feeling. They do conclude however that “The increasing ownership of smartphones and MP3 players is rapidly increasing the options for museums to deliver music in gallery spaces and the number of ways in which visitors can choose to engage with it.”

So we need to see some more research about how it’s used and its impact on the visitor experience.


Participate in research and (maybe) win!

Fleur Schinning is currently writing her master’s thesis as a part of her specialisation in Heritage Management at Leiden University in the Netherlands. Her research focuses on the use of blogs and social media and how they contribute to the accessibility of archaeology. Public archaeology has been developing considerably in the Netherlands for the last couple of years, but much can still be improved concerning public outreach activities. This is why she has decided to focus her research on social and digital communication methods that might make archaeology more accessible for a wider public.

Her research is looking at several blogs from both the UK and USA; in these countries blogging seems widely accepted and used a lot as a tool in creating support for archaeology. To be able to explore how blogging in archaeology contributes to public archaeology, she would like to question the bloggers and (this is where you come in) the readers of these blogs. She has created a questionnaire for you, dear reader, which can be accessed here: http://goo.gl/forms/z3BAUTyYUL.

All participants also have a chance to win a small prize: six issues of Archaeology Magazine!

Go on, you know you want to!

Reading about forum participation as a component of on-line learning

I’ve participated in two MOOCs so far, one through Coursera and one through FutureLearn. One difference between the two platforms is the use of Forums.

In the Coursera course on Statistics, the forum is presented as an add-on: a tool available to students who wished to interact with other students, discuss concepts raised, offer feedback on the course and, especially, seek help with the weekly assignments that were the main form of assessment during the course. But the forum didn’t feel part of the course, and there was no evidence that my participation on the forum was being evaluated, either formally or informally.

On the FutureLearn course, each learning element came with its own forum built in, and students were actively encouraged to submit work on the forum, and to comment on each other’s submissions.

Neither course ended with any form of certification, so any evaluation of the students’ work on the FutureLearn forums was informal, but there was more of a sense of the course team taking an active interest in how students participated in forum discussions than on the Coursera course.

With a growing number of courses delivered wholly or partly on-line, and in particular the expansion of Massively Open On-line Courses (MOOCs), new models of student participation and evaluation have developed. One such model is the use of discussion forums. A development of the Bulletin Board Systems of the early internet, forums can be described as an asynchronous form of typed conversation. Forums are often archived, at least temporarily, and the course of the whole conversation can be viewed at any time, which distinguishes discussion forums from other typed conversations such as Internet Relay Chat. Discussion forums are often a component of Virtual Learning Environments (VLEs) like Blackboard, etc.

Moodle is an open source virtual learning environment first released in 2002 and used by a number of institutions worldwide (including, for example, the Open University) to deliver on-line education. It was originally developed by Martin Dougiamas, who (for example in Dougiamas and Taylor, 2002) is a proponent of social constructionist pedagogy. Lewis (2002) is a much cited study that was one of the first to use a randomised trial to evaluate the effectiveness of discussion forums as a learning tool. Although inconclusive on the main question, one new hypothesis raised was that “online group discussion activities must reach a certain level of intensity and engagement by the participants in order to result in effective learning.”

Indeed, Hrastinski (2008) is concerned that asynchronous online conversations can be difficult to get going if too few students participate. However, when they do succeed, Hrastinski offers evidence that asynchronous conversations stay on-topic for longer, give students more time to reflect on complex issues, and allow students from different time-zones, and with different time commitments, to participate.

Given these advantages, it’s no wonder that on-line course designers want to include discussion forums in the toolset that they offer to students. But if the forums are to be an effective learning tool, students must be incentivised to participate. One obvious incentive is to make participation in discussion forums part of the student’s assessment. Morrison (2012) offers an example rubric that makes clear to students how their participation could be assessed. In her example, the quality of the initial post is measured according to relevance, clarity and depth of understanding. Follow-up posts are graded according to frequency and supportiveness. Word count and timeliness are also factors that affect grading.

This is just one example, but it demonstrates the effort required by instructors to properly assess each student’s work. An active and vibrant forum may have dozens or, especially likely with MOOCs, hundreds of posts. Automated tools, especially those that enable supportive peer review, are required if the full learning potential of asynchronous discussion forums is to be realized.

Gamer data – the heart of the matter (oh dear)

Enough pussyfooting around with ludic.interest. We conducted this survey because we wanted to get an idea of what would encourage people to play a game that used mobile technology to engage players in stories taking place in a number of cultural heritage locations around Southampton. (Though in other news, our funding bid for that project fell at the second hurdle, so this research is likely to now be, both literally and metaphorically, “academic.”)

We already know that awareness of these sorts of games is relatively low, but by seeing if there’s a relationship between any of the four Fun preferences described by Lazzaro and the respondents’ stated interest in location based gaming (which itself wasn’t explicitly covered in Lazzaro’s work), we might have a better idea of what sort of game mechanics appeal best to the audience we’d need to attract.

We asked four questions suggested by our technology provider in this section, where the response was recorded on a 101 point Likert scale:

  • If a mobile game used location would this be of interest to you?
  • Use this slider to show how much you’d prefer working in a team or as an individual, in a location based game
  • Would you be interested in a mobile game that uses digital artefacts / objects alongside real world locations? and
  • How much does the game featured in this video interest you? [we used a promotional video for Chromarama]

So, the first thing I did in R was to change the variable names (which were sucked into R in the form of the whole question plus instructions about using the slider), into something that would be easier to read on plots, then create a scatterplot matrix ordered by degree of correlation:
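That matrix step can be sketched in base R. A minimal sketch on invented data – the short column names, the number of respondents and the ordering rule are all stand-ins, not the real survey columns:

```r
# Invented stand-in data: 94 respondents, four locative-gaming questions
# and four fun-preference scores (the real columns came in with whole
# question texts as names, hence the renaming step).
set.seed(1)
vars <- as.data.frame(matrix(runif(94 * 8), ncol = 8))
names(vars) <- c("Location", "Teamwork", "Artefacts", "Video",
                 "Hard", "Easy", "Serious", "People")

# Order the variables by average absolute correlation with the others,
# then draw the scatterplot matrix in that order.
r <- abs(cor(vars))
diag(r) <- NA
ord <- order(rowMeans(r, na.rm = TRUE), decreasing = TRUE)
pairs(vars[ord])
```

With the real data, the most inter-correlated variables end up in the top-left corner of the matrix.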


So what does that show us? That the locative gaming questions all seem to correlate with each other. As do the Fun preferences. But the correlation between fun preferences and locative gaming responses is weaker. A preference for People fun, for example, only correlates with responses to the Teamwork question (and frankly I’d be worried if it didn’t!).

More worryingly, only the Hard fun preference correlates at all with the core measure of interest in a mobile game that uses location. Let’s look at that in more detail:


Oh dear. There’s the positive slope which, at first glance, suggests some correlation, but check out the confidence curve (I used the R package ggplot2 to create this). There’s space within that to flatten out the regression line: I “cannot reject the null hypothesis”. Blast! And check out all those dots down on zero, expressing no interest at all in a locative game! Double blast! Maybe it’s just as well we didn’t get the funding 😦
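In ggplot2 the line and its band come from `geom_smooth(method = "lm")`; for the record, here is a base-R sketch of the same confidence curve, on invented data (the slope and noise levels are made up):

```r
# Invented data standing in for the Hard fun and location-interest scores.
set.seed(2)
hard <- runif(94, 0, 100)
location <- 0.1 * hard + rnorm(94, sd = 30)

fit <- lm(location ~ hard)
grid <- data.frame(hard = seq(0, 100, length.out = 100))
ci <- predict(fit, grid, interval = "confidence")  # columns: fit, lwr, upr

plot(hard, location, xlab = "Hard fun preference", ylab = "Locative interest")
lines(grid$hard, ci[, "fit"])            # regression line
lines(grid$hard, ci[, "lwr"], lty = 2)   # lower 95% confidence band
lines(grid$hard, ci[, "upr"], lty = 2)   # upper 95% confidence band
```

If a horizontal line fits between `lwr` and `upr` across the whole range, the slope could plausibly be zero – which is exactly the “cannot reject the null hypothesis” problem above.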

The matrix also shows some correlation between Easy fun preference and interest in searching for digital artifacts in real-world locations. And on closer examination, the plot looks more attractive, with greater interest overall, but again, I cannot reject the null hypothesis:


And the so-called correlation between Serious Fun preference and interest in the example video has even worse confidence intervals (and a massive p value of 0.36):


The only positive result I’ve found so far is the apparent correlation between working in a team and finding digital artefacts, wherein I can (just about) reject the null hypothesis.


I guess these two mechanics have some history together in the world of scavenger hunts. But it seems I can’t yet claim from this research that the world is ready and waiting for locative games.

More maths


Last time I finished with this matrix of scatter-plots, ordered by the magnitude of correlation. But what does it actually mean? Let’s take a step back and look at those derived variables. I ask R to describe the table of variables that I created previously, which includes the notional ludic.interest variable and the Hard, Serious, Easy and People fun preference variables. These are handily appended by R as additional columns on the end of the table of original data, so I ask R to describe just those columns:

> describe(newdata[90:94])

This gives me a little table describing the variables. It’s where the mean values I quoted last week came from. Looking at it again this week, it’s interesting to note the ranges of some of the scores, but the first thing I notice is that the Standard Deviation (SD) of the ludic.interest variable is noticeably lower than that of the fun preference variables. Those range between 15.31 for the Hard fun variable and 16.75 for the Serious fun variable, while the SD of ludic.interest is 11 (actually 0.11, but remember that the other fun variables are on a 0–100 scale and ludic.interest on 0–1). The range of scores for ludic.interest is tighter too:

ludic.interest 51
H 66
E 76
S 89
P 76

The Serious fun preference questions thus showed the most division among gamers. What’s particularly interesting is that the lowest score in that range is zero, so at least one respondent vehemently disagreed with all the statements associated with that preference. The same is true of the People fun variable.
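(For completeness: `describe()` comes from the psych package, but the ranges above can be checked in base R too. A sketch on simulated columns, so these numbers won’t match the real table:)

```r
# Simulated stand-ins for the five derived columns (so these numbers
# won't match the post's table).
set.seed(3)
newdata <- data.frame(ludic.interest = runif(94),
                      H = runif(94, 0, 100), E = runif(94, 0, 100),
                      S = runif(94, 0, 100), P = runif(94, 0, 100))

# Range (max minus min) of each derived variable.
ranges <- sapply(newdata, function(x) diff(range(x)))
round(ranges, 2)
```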

That matrix at the top of the post suggests that despite (or because of?) the wide range of the Serious fun variable, it’s one that shows some correlation with all the other variables. Stronger correlation, in fact, than the People fun variable, which correlates poorly with all the variables except Serious fun.

Let’s look at that in more detail. The Serious fun variable correlates most with the Easy fun variable, with a correlation coefficient (r) of 0.52. Plot the two variables with a regression line and it looks like this:

Not bad – a, shall we say, “moderate” relationship. For every point up the Easy fun preference scale somebody scores, they are likely to score 0.54 higher on the Serious fun scale. With a standard error of 0.09, the t value for this relationship is 5.9, and the corresponding p value is very low at 0.00000006. So this appears to be a statistically valid relationship.

(You can see the respondent who disagreed with all the Serious fun statements in the bottom left; they weren’t that keen on Easy fun either, but at least scored 22 for that. Looking at the table of data, I find it’s the same respondent who also disagreed with all the People fun preferences, and scored 43.5 for Hard fun and 33 (0.33) for ludic.interest.)
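All of those numbers – slope, standard error, t and p – come off one row of R’s `lm` summary. A sketch on simulated scores built with a similar slope (not the survey data):

```r
# Simulated Easy and Serious fun scores, built with a similar slope
# (these are not the survey data).
set.seed(4)
easy <- runif(94, 0, 100)
serious <- 0.54 * easy + rnorm(94, sd = 15)

fit <- summary(lm(serious ~ easy))
fit$coefficients["easy", ]  # Estimate, Std. Error, t value, Pr(>|t|)
```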

Let’s compare how that looks with the plot for the relationship between preferences for Hard fun and People fun, where the correlation coefficient is just 0.02:


Hardly any relationship at all then.


Gamer data: Fun preferences

After last week’s hair-pulling day of frustration, I’ve made a bit more progress. The survey contained seventeen questions which were based on the theory of four types of fun set out by Nicole Lazzaro. These were 101 point Likert scales, wherein the participant indicated their agreement with a statement using a slider with no scale, and with the slider “handle” position set randomly to reduce systematic bias. Of course, these being Likert disagree/agree scales, I was still expecting clumping at one end or the other, despite my attempts to reduce that by making them 101 point scales. And so it proved, in many cases, as these histograms of the four questions I used as indicators of a preference for “Serious Fun” show.


I never intended to do any correlations with the responses to the individual questions though. Instead my plan was to average out each individual’s responses to the indicator questions to create something more like a continuous variable which I could correlate with other responses. Doing that for the responses to the Serious Fun indicator questions, for example, turns the four clumpy histograms above into something a lot more like a “normal” curve.
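The averaging step is a one-liner in R. Sketched here with hypothetical column names (`SF1`–`SF4`) and simulated responses standing in for the four Serious Fun indicator questions (the real columns carried the whole question text):

```r
# Hypothetical columns SF1..SF4 stand in for the four Serious Fun
# indicator questions; responses simulated on the 0-100 slider scale.
set.seed(5)
ghb <- data.frame(SF1 = sample(0:100, 94, replace = TRUE),
                  SF2 = sample(0:100, 94, replace = TRUE),
                  SF3 = sample(0:100, 94, replace = TRUE),
                  SF4 = sample(0:100, 94, replace = TRUE))

# Average each respondent's four indicator responses into one score.
ghb$serious.fun <- rowMeans(ghb[c("SF1", "SF2", "SF3", "SF4")])
hist(ghb$serious.fun, main = "Serious Fun preference")
```

Averaging several clumpy indicators like this is what pulls the distribution towards the “normal”-looking curve above.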



And the distributions of all four “Fun preferences” look like this (as curve plots this time, in case you were getting bored of histograms):


You’ll note straight away that the “Hard Fun” curve is the one that most resembles a “normal” bell curve. Easy Fun has a distinct negative skew, and in fact all three others have a slight negative skew. And there’s a distinct preference apparent in this sample for Hard and Easy Fun over Serious and People Fun. In fact, the most popular preference in this sample is for Easy Fun, where the mean stands at 70.8 and the median (in this most skewed of the four distributions) at 73.7. The mean of the Hard Fun distribution is 66.61; in third place is Serious Fun with a mean of 54.06, and trailing behind is People Fun with a mean of 42.22.

I was a bit surprised that People Fun scores so poorly in this sample, but I guess I shouldn’t be because one of the questions I used to indicate a preference for People Fun was “I don’t actually like playing games all that much” which I don’t suppose is going to find much agreement among gamers after all.

Which raises the question: “would People Fun preference correlate negatively with the Ludic Interest vector I created last week?” But rather than look at that relationship on its own, let’s see how all the derived variables I’ve created relate to each other.


So, People Fun correlates a bit with Serious Fun, but little else. Ludic Interest correlates less well with Hard Fun than I might have expected, though the Ludic Interest variable was admittedly an afterthought, and the selection of games from which it was derived was by no means scientific. I might rethink that whole section next time. The Serious Fun vector correlates with the other variables more than I expected, and the little scatter plots look interesting, so next time I’ll investigate some of these relationships more deeply.



My Gimbal beacons arrived yesterday. These are three tiny Bluetooth LE devices, not much bigger than the watch battery that powers them. They do very little more than send out a little radio signal that says “I’m me!” twice a second.

There are three very different ways of using them that I can immediately think of:

I’ve just tried leaving one in each of three different rooms, then walking around the house with the simple Gimbal manager app on my iPhone. It seems their range is about three metres, and the walls of my house cause some obstruction. So with careful placing, they could tell my phone, very simply, which room it is in. And it could then serve me media, like a simple audio tour.

Alternatively, as they are designed like key-fobs, they could be carried around by the user, and interpretive devices in a heritage space could identify each user as they approach, and serve tailored media to that user. Straight away I’m thinking that a user might, for example, be assigned a character visiting, say, a house party at Polesden Lacey, and the house could react to the user as though they were that character. Or perhaps the user could identify their particular interests when they start their visit. If they said, for example, “I’m particularly interested in art”, then they could walk around a house like Polesden Lacey, and when they pick up a tablet kiosk in one of the rooms, it would serve them details of the art first. Such an application wouldn’t hide the non-art content of course, it would just make it a lower priority so that the art appears at the top of the page. Or, more cleverly, the devices around the space could communicate with each other, sharing details of the user’s movements and adapting their offer according to presumed interest. So for example, device A might send a signal saying “User 1x413d just spent a long time standing close to me, so we might presume they are interested in my Chinese porcelain.” Device B might then think to itself (forgive my anthropomorphism) “I shall make the story of the owner’s travels to China the headline of what I serve User 1x413d.”

But the third option, and the one I want to experiment with, is this. I distributed my three Gimbals around the perimeter of a single room. Then, when I stood by different objects of interest in the room, I read off the signal strength I was getting from each beacon. It looks like I should be able to triangulate the signal strengths to map the location of my device within the room to within about a metre, which I think is good enough to identify which object of interest I’m looking at.
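The triangulation idea can be sketched in R (this is not Gimbal’s API: the path-loss constants, beacon positions and RSSI readings below are all invented for illustration):

```r
# Convert a beacon's RSSI (dBm) to a rough distance with a log-distance
# path-loss model. tx_power (measured RSSI at 1 m) and the path-loss
# exponent n are guesses, not Gimbal's calibration.
rssi_to_dist <- function(rssi, tx_power = -59, n = 2) {
  10 ^ ((tx_power - rssi) / (10 * n))
}

# Least-squares the three distance circles to a single (x, y) position.
trilaterate <- function(beacons, d) {
  loss <- function(p) {
    implied <- sqrt(rowSums((t(t(beacons) - p)) ^ 2))
    sum((implied - d) ^ 2)
  }
  optim(colMeans(beacons), loss)$par
}

beacons <- rbind(c(0, 0), c(5, 0), c(0, 5))   # beacon positions in metres
d <- rssi_to_dist(c(-65, -75, -75))           # three made-up RSSI readings
trilaterate(beacons, d)                       # estimated (x, y) position
```

In practice RSSI is noisy enough (those thick walls again) that the estimate would need smoothing over several readings, but the geometry is no more than this.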

What I want to do is create a “simple” proof of concept program that uses the proximity of the three beacons to serve me two narratives: one about the objects I might be looking at, and a second, more linear narrative which manages to adapt to the objects I’m by, and which I’ve seen.

I’ve got the tech, now “all” I need to do is learn to code!

Unless anybody wants to help me…?

Another look at my gamer data


I’m still wrestling with R and wishing I was a natural (or maybe just a more experienced) coder. Everything takes so long to work out and to actually do. Last time I shared the results, I was just looking at the top-line data that iSurvey shares. This time I’ve downloaded the data and sucked it into R, the command line based stats language.

I start off looking at the basics. What is the size of my DataFrame (as it’s called in R)?

> dim(ghb)
[1] 193 89
> nrow(ghb)
[1] 193
> ncol(ghb)
[1] 89

There we go, it’s 193 by 89, or 193 rows by 89 columns. Now, more than 200 people actually responded to the survey, but not everybody completed it, so to keep things simple, I only downloaded those who had completed it. But I discovered there were still gaps in the data, and here’s a case in point:

The first question I asked was a list of games, against which respondents could select from six categories:

When I composed this question I had two intentions in mind. Firstly, to offer a simple question to ease people into doing the survey, so they would be less challenged by the more esoteric questions I attempted later. Secondly, I just wanted to get an idea of the participants’ awareness of a number of different games and types of games. Thus the list of games was somewhat eclectic, with games I knew were popular, and games I’d only come across through my study. This is how that list appears in R:

[12] “Minecraft”
[13] “Red.Dead.Redemption”
[14] “Papa.Sangre”
[15] “I.Love.Bees”
[16] “Elder.Scrolls..Skyrim”
[17] “Cut.the.Rope”
[18] “Zombie.Run”
[19] “World.of.Warcraft”
[20] “The.Sims”
[21] “Just.Dance”
[22] “Ingress”
[23] “Dear.Esther”

I mentioned how the games compared in my earlier post. But since composing the survey I realized it should be quite easy to convert the categories into numbers and total up individuals’ awareness of these games into a notional continuous numerical “game awareness score.” That might prove a statistically useful measure of a question I purposefully didn’t ask (which might have been: How interested in games are you? Not at all—–>Pro Gamer), against which I might be able to correlate certain play preferences, maybe even proving or disproving the oft-heard cry “Real gamers don’t play Angry Birds“! (An aside – I like this comic representation of a similar argument).

So after some frustration I came up with these two lines of code for R:

ghb$ludic.interest <- round(rowSums(ghb[12:23])/72, 2)
hist(ghb$ludic.interest, col = "firebrick3", xlab = "Notional score", main = "Ludic Interest")

Which creates a new vector of values (rounded to two decimal places) between zero and one (where one = “true gamer”), then plots the results in a histogram thus:


Not entirely “normal” but getting there, with a positive skew, but nothing too dreadful. A set of data I can work with.

Or can I? Because when I look at the values in the vector itself, I find that a small number of them are coming up “NA”. What’s going on? It turns out that some respondents didn’t select any of the categories for some of the games. And if they missed out just one game, their Ludic Interest value is screwed. It’s not too bad for this vector, but I can only assume there are other questions where other respondents have chosen not to select an answer. And if I try to correlate those vectors with this one, more and more answers will come up “NA”.

What should I do? The easiest thing to do would be to remove any respondent who has any missing data:

> newdata <- na.omit(ghb)
> dim(newdata)
[1] 94 90

And bamm! At a stroke my sample size tumbles down from 193 to 94. How badly will that affect my analysis? Let’s redraw that histogram with the reduced dataset:


Hmmm, a bit more comb-like, almost bi-modal. Worrying.

So, can I deal with the missing data in other ways, changing it to zero for example? That might be (just about) acceptable for converting the categorical data in this particular question into a Ludic interest score, but may not be acceptable for the other instances of missing data. Ohhhhh maths is hard!
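The zero-filling option would look something like this – a tiny invented example rather than the real data frame, and a judgement call that is only really defensible for this one question (for the real data the divisor is 72, i.e. 12 games with a maximum category value of 6):

```r
# A tiny invented example: two game columns with some missing answers.
ghb <- data.frame(Minecraft = c(3, NA, 6), The.Sims = c(NA, 2, 5))

# Zero-fill only the game-awareness columns, then rebuild the score.
# The divisor is 12 here because this toy example has 2 games with a
# maximum category value of 6 (the real survey divided by 72).
game_cols <- c("Minecraft", "The.Sims")
ghb[game_cols][is.na(ghb[game_cols])] <- 0
ghb$ludic.interest <- round(rowSums(ghb[game_cols]) / 12, 2)
```

Restricting the fill to named columns matters: blindly running `ghb[is.na(ghb)] <- 0` over the whole data frame would silently turn every other unanswered question into a zero too.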

Oh, curse you, respondents! Why couldn’t you just have answered all the questions properly? And why didn’t iSurvey remove you when I asked it to strip out incomplete surveys?

This post on Stack Exchange is the most useful introduction I’ve discovered so far to the mysteries of imputation. But I’ll leave that for another day. In the meantime, I’ll work with my 94 complete responses.

International, interdisciplinary and “on the move”

Today, I’ve been at Southampton University’s interdisciplinary week, for a session on the World University Network, of which Southampton is a part. WUN sponsored my trip last year to the US to attend and speak at the Decoding the Digital conference at the University of Rochester.

After a brief introduction to the session from my supervisor Graeme Earl, and another one to the WUN from Elanora Gandolfini, Professor Leslie Carr, of the University’s Web Science Institute, kicked off by trying to claim that universities are older and more sustainable than the countries in which they are based. (I’m not going to agree or disagree.) He does make a compelling case, however, that there were attempts to make things like the World Wide Web before this academic and open initiative actually succeeded and was given free to the world.

He contrasts this with the rise of for-profit academic publishing since the war, and recognizes the tension between the two methods of distributing and sharing knowledge. But he concludes that universities are more than places to learn: they are a vital engine for better worlds, woven into the social fabric, and more sustainable than the Johnny-come-lately technology companies.

Then Chris Phethern, a third year PhD candidate, talked about a couple of exchange trips he has made alongside other Southampton students to Tromso and Korea, facilitated by WUN. Graeme Earl explained a little about the Research Mobility Programme (which got me to Rochester) and another programme that makes awards to specific projects.

He then went on to challenge us on various methods of interdisciplinary work, making me realize that though I work collaboratively on all sorts of written work, I do it by sharing multiple copies of the work on email, not by working on a single shared document like GoogleDocs.

I was on more comfortable ground when the discussion turned to social networking and blogging. The two fellow PhD candidates I was sitting next to turned out to be far more nervous than I am about sharing this sort of stuff. Partly, I think, because they felt very few other people would be interested in their area. I countered that, in the great scheme of things, I don’t expect VERY many people to be interested in this blog. But I feel I’ve already made useful contacts out of sharing my work here and on Twitter. However, just as we turned back to the front, one of them highlighted the concern he had about opening himself up to abuse on social networks. I think this is a very real concern for many, especially (it seems) women, as we transition from a pseudonymous internet society to a real-name one.

I have an action to take away from this session: to find out more about the University’s Internal Communications Network and SMuRF (and CalIT2). As someone who doesn’t spend much time on campus, I do feel I still rely too much on face-to-face real-world networking with my university cohort, and I might be missing the person also working at Southampton on a project that might perfectly complement my own research.

Overall though, I left the session feeling very excited about the digital future of universities. We may still be feeling our way nervously through the digital forest, but when we “find it” we’ll look back and realize that we changed the world.