Evaluating the effectiveness of a science communication MOOC

Author: Anusuya Chinsamy-Turan – University of Cape Town, South Africa

Andrew Deacon – University of Cape Town
Eric Jensen – University of Warwick
Janet Small – University of Cape Town

With a longstanding commitment to the public communication of her scientific research, the lead author has explored various methods over the years: popular-level talks, articles, and books. She became interested in the potential of Massive Open Online Courses (MOOCs) to reach wider audiences, to allow researchers to share their work globally and, most importantly, to broaden access to knowledge. Together with colleagues at UCT, we developed a MOOC called ‘Extinctions: Past and Present’, which launched in 2017. The course includes lecture-style videos, conversational interviews with scientists, and site visits to locations of scientific interest. The Extinctions MOOC ran three times in 2017 and attracted over 6,500 enrolments from 130 countries. While nearly half of the participants were from Europe, there was also significant enrolment in Africa (19.5%) and Asia (16.4%). The MOOC was very well received – so much so that it currently features on the public MOOC aggregator Class Central’s 2017 list of the Top 50 MOOCs in the world, selected from over 7,400 MOOCs offered by universities worldwide.

It is well recognised that improving communication between scientists and the public calls for better evaluation to understand what is working and why, not simply more science promotion and engagement initiatives for their own sake. Measuring the impact of science communication initiatives on target audiences requires a careful process of developing clear objectives and a linked evaluation design focused on course-relevant outcomes to inform practice. In contrast, standard evaluations of MOOCs are driven by generic questions about audience characteristics, motivation to learn online and the online experience. Dissatisfied with the limitations of this standard approach, we developed a repeated-measures evaluation (pre-, intermediate and post-experience surveys) to gauge the impact of the science engagement experience in the MOOC, and to determine whether course-relevant attitudes had changed through the learning experience. We were thus interested in understanding the impact of the science communication itself: what knowledge do people come with, and what do they take away after the learning engagement? The evaluation assessed attitudes, interests and awareness relating to the topic of extinctions, both past and future. Our repeated-measures instrument was designed to track effects on outcomes at the individual level (i.e. individual participant responses were tracked across surveys). While not all participants responded to every survey, over 400 respondents completed both the pre- and post-surveys. These data have allowed us to evaluate shifts in participants’ perceptions and attitudes, and to interrogate the impact of the MOOC at a level of depth not previously achieved with this type of science engagement initiative.

Without effective evaluation methods, the real value of science communication practice for its intended beneficiaries cannot be properly assessed. Good impact evaluation requires early planning, clear objectives from practitioners, relevant research skills and a commitment to improving practice based on the outcomes of the evaluation.

The author has not yet submitted a copy of the full paper.

Presentation type: Individual paper
Theme: Science
Area of interest: Applying science communication research to practice