I’ve participated in two MOOCs so far, one through Coursera and one through FutureLearn. One difference between the two platforms is the use of forums.
In the Coursera course on statistics, the forum was presented as an add-on: a tool available to students who wished to interact with other students, discuss concepts raised, offer feedback on the course and, especially, seek help with the weekly assignments that were the main form of assessment. But the forum didn’t feel like part of the course, and there was no evidence that my participation in the forum was being evaluated, either formally or informally.
On the FutureLearn course, each learning element came with its own forum built in, and students were actively encouraged to submit work on the forum and to comment on each other’s submissions.
Neither course ended with any form of certification, so any evaluation of students’ work on the FutureLearn forums was informal, but there was more of a sense of the course team taking an active interest in how students participated in forum discussions than on the Coursera course.
With a growing number of courses delivered wholly or partly on-line, and in particular the expansion of Massive Open On-line Courses (MOOCs), new models of student participation and evaluation have developed. One such model is the use of discussion forums. A descendant of the Bulletin Board Systems of the early internet, forums can be described as an asynchronous form of typed conversation. Forums are often archived, at least temporarily, and the course of the whole conversation can be viewed at any time, which distinguishes discussion forums from other typed conversations such as Internet Relay Chat. Discussion forums are often a component of Virtual Learning Environments (VLEs) such as Blackboard.
Moodle is an open source virtual learning environment first released in 2002 and used by a number of institutions worldwide (including, for example, the Open University) to deliver on-line education. It was originally developed by Martin Dougiamas, who (for example in Dougiamas and Taylor, 2002) is a proponent of social constructionist pedagogy. Lewis (2002) is a much-cited study, one of the first to use a randomised trial to evaluate the effectiveness of discussion forums as a learning tool. Although inconclusive on the main question, it raised a new hypothesis: that “online group discussion activities must reach a certain level of intensity and engagement by the participants in order to result in effective learning.”
Indeed, Hrastinski (2008) is concerned that asynchronous online conversations can be difficult to get going if too few students participate. However, when they do succeed, Hrastinski offers evidence that asynchronous conversations stay on-topic for longer, give students more time to reflect on complex issues, and allow students from different time zones, and with different time commitments, to participate.
Given these advantages it’s no wonder that on-line course designers want to include discussion forums in the toolset that they offer to students. But if the forums are to be an effective learning tool, students must be incentivised to participate. One obvious incentive is to make participation in discussion forums part of the student’s assessment. Morrison (2012) offers an example rubric that makes clear to students how their participation could be assessed. In her example, the quality of the initial post is measured according to relevance, clarity and depth of understanding. Follow-up posts are graded according to frequency and supportiveness. Word count and timeliness are also factors that affect grading.
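To make the idea of such a rubric concrete, here is a minimal sketch of how criteria like those described above might be combined into a single participation grade. To be clear, this is not Morrison’s actual rubric: the scales, weights and thresholds below are all my own illustrative assumptions.

```python
def grade_participation(initial_post, follow_ups, word_count, days_before_deadline):
    """Combine rubric-style criteria into a score out of 100 (illustrative weights)."""
    # Initial post: relevance, clarity and depth each scored 0-5 by the instructor.
    initial_score = sum(initial_post[k] for k in ("relevance", "clarity", "depth"))  # 0-15

    # Follow-up posts: frequency (capped) and average supportiveness (0-5 per post).
    frequency = min(len(follow_ups), 5)  # cap credit at 5 replies
    supportiveness = sum(p["supportiveness"] for p in follow_ups) / max(len(follow_ups), 1)

    # Word count and timeliness as simple pass/fail bonuses.
    length_ok = 1 if word_count >= 150 else 0
    timely = 1 if days_before_deadline >= 1 else 0

    # Weighted combination (weights are assumptions, not from any published rubric).
    return round(
        initial_score / 15 * 50    # 50% initial post quality
        + frequency / 5 * 20       # 20% reply frequency
        + supportiveness / 5 * 20  # 20% supportiveness of replies
        + length_ok * 5            # 5% meets minimum word count
        + timely * 5               # 5% posted in good time
    )
```

Even a toy version like this makes the instructor’s burden visible: every initial post still needs three human judgements, and every reply a supportiveness score, before anything can be totted up automatically.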
This is just one example, but it demonstrates the effort required of instructors to properly assess each student’s work. An active and vibrant forum may have dozens or, especially with MOOCs, hundreds of posts. Automated tools, especially those that enable supportive peer review, are required if the full learning potential of asynchronous discussion forums is to be realized.
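One small building block of such tooling would be automatically assigning each student a set of peers whose posts they must review, so that the load is spread evenly and nobody reviews their own work. A minimal round-robin sketch (the function name and scheme are my own illustration, not a feature of any particular VLE):

```python
def assign_reviewers(students, k=2):
    """Return {student: [peers whose posts they review]}, k reviews per student.

    Uses a round-robin offset so every post receives exactly k reviews
    and no student is assigned their own post.
    """
    n = len(students)
    if k >= n:
        raise ValueError("each student needs k distinct peers to review")
    return {
        students[i]: [students[(i + offset) % n] for offset in range(1, k + 1)]
        for i in range(n)
    }
```

For example, with four students and `k=2`, each student reviews the next two in the rotation, and each post is reviewed exactly twice. Real peer-review tools (Moodle’s Workshop activity, for instance) add calibration and moderation on top, but the allocation problem at the core is this simple.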