The challenges of evaluating in-visit digitally enabled heritage interpretation

I am reading a paper which will help me better present my Chawton study. The paper is brand new: Nikolakopoulou, V. & Koutsabasis, P. 2019. Methods and Practices for Assessing the User Experience of Interactive Systems for Cultural Heritage. In: Pavlidis, G., ed. Applying Innovative Technologies in Heritage Science. IGI Global. It is another literature review, but more a sort of meta-study of evaluations, called by the authors a “fortiori.”

The methodology of this literature search is exemplary. The authors searched (via Google Scholar, the ACM Digital Library, IEEE Xplore, ScienceDirect, and SpringerLink) and found 350 papers, plus another 150 in a second search, that looked as though they might be evaluation studies of digital in-visit heritage content. They screened out the majority, for not actually being accessible or not actually being scientific (I think my thesis would have fallen at this hurdle), leaving just 73. These were screened a second time: a number of duplicates were found, and then a number which did not actually write up the evaluation, leaving just 29 to study in depth.

In summary, these papers discussed applications to “explore a virtual or physical space or place” and/or “play a game”. The authors noted that the proportion of games had gone down since a previous survey, which likely corresponds with the time of my change of focus away from a ludic system towards something more to do with narrative. They were still roughly 20% of the papers looked at. Overall, the systems used a variety of digital technologies including 3D game engines, mobile devices, mobile AR, VR, the web, multi-touch displays, location-based audio, and physical and kinaesthetic (responding to body movement) interfaces.

There were a lot of studies that looked at usability of the technologies, but the paper points out that using only “user satisfaction” as a metric is a dangerous trap to fall into. Only a few do a thorough investigation of the user experience in the way that modern commercial companies test their websites. “A considerable number of systems are evaluated upon learning effects on users”, this despite the fact that learning is often not the primary reason for days out (though it may be a validation for the museum), but again few do that in a properly scientific way. There is an interesting paper mentioned (Falvo, P. G. (2015, September). Exhibit design with multimedia and cognitive technologies: impact assessment on Luca Giordano, Raphael and the Chapel of the Magi in Palazzo Medici Riccardi, Florence. In 2015 Digital Heritage (Vol. 2, pp. 525-532). IEEE.) that I may want to take a look at.

The paper highlights the difference between empirical evaluations “in the lab”, as it were, and in the field. “Field evaluations are contextual, which enhanced the validity of the process and results.” But “recruiting visitors to experience or assess CH content on purpose changes their original purpose of visit, which is something inherently connected with the visitor experience discussed in several museum and visitor studies […]”, which suggests a paradox to me: either test random users, and change their reason for visiting, and potentially their responses to the visit; or invite visitors especially for the test and thus make it more like a lab experiment. The reason for visiting a cultural heritage experience is actually part of the experience.

One issue they highlight is that despite being “peer-reviewed and published in important journals and conferences, it was possible to identify several failures in the quality of reporting on empirical evaluations”, in particular reporting on the number of users that participated in the evaluations. I expect that is because, given the limitations of evaluation in cultural heritage sites and the time taken to observe a visit, the number of participants is statistically small – small enough to put into question whether the studies are “empirical” at all. I know that after a week on site I had fewer than ten samples, and so I tried very hard not to present what I learned as empirical evidence. Indeed maybe even too hard. That bit needs a re-write post viva.
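To give myself a feel for just how shaky conclusions drawn from a handful of participants are, here is a quick back-of-the-envelope sketch (my own illustration, not from the paper: the 70% satisfaction figure and the sample sizes are made up) using the standard normal-approximation margin of error for a reported proportion:

```python
# Rough illustration: how wide the uncertainty is when an evaluation reports,
# say, 70% visitor satisfaction from a small sample.
# Margin of error (95%) for a proportion: 1.96 * sqrt(p * (1 - p) / n)
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion p from n participants."""
    return z * sqrt(p * (1 - p) / n)

for n in (10, 30, 100):
    moe = margin_of_error(0.7, n)
    print(f"n={n:3d}: an observed 70% satisfaction could plausibly sit anywhere in "
          f"{(0.7 - moe) * 100:.0f}%-{(0.7 + moe) * 100:.0f}%")
```

With ten participants the plausible range is so wide (roughly 42% to 98% in this sketch) that almost any conclusion fits inside it, which is exactly why I hesitated to present my Chawton observations as empirical evidence.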

The conclusion is a reiteration that digital interpretation evaluation should involve more cultural heritage professionals and field studies. It points out that there is more work in newer methodologies “like physical computing and tangible user interfaces” (objects that are the interface). It also highlights ongoing issues of a lack of systematic approaches to evaluation, organised “in distinct identifiable categories”, which I imagine would make doing meta-studies like this more meaningful. Also, “Aspects of cultural value, awareness, and presence could only be recognized in very few empirical evaluations. Evaluation studies that consider more qualitative dimensions and more related to general purposes of CH like learning effectiveness have been increased, and they are usually accompanied by comparative evaluations. Comparative evaluations represent a small number compared to the overall number of studies reviewed.”
