1 Introduction
ICTs have revolutionised both traditional classroom teaching and distance education. Advances in educational technology have helped to enhance students’ learning experiences through self-paced and self-regulated learning activities in which they can exert better control over their learning (JUNG, 2001). Several perceived advantages are commonly cited in favour of online learning. These include, but are not limited to, widening access to new groups of students, ubiquitous access and increased flexibility, learner self-regulation, and improved learning support (cf. GOODYEAR, 2001; OLSON & ROBERT, 2002). At the same time, however, there are some concerns about the effectiveness of online instruction. Previous research has reported that online delivery sometimes fails to meet learners’ actual needs in terms of instructional effectiveness, ease of locating information on the platform, learner support, or even the design layout of the online learning environment (HRC, 2009). These deficits can lead to information overload or information anxiety (BAWDEN & ROBINSON, 2009) and hence to frustration, which may ultimately result in high dropout rates (RODRIGUEZ, 2012).
Learning Analytics is an emerging field of research in technology-enhanced learning. It is based on the principle of using large educational datasets from online student activities and other data sources to identify behaviours, attitudes, learning paths, and trends that can highlight potential issues and areas for improvement in the educational design, delivery, and administration of student learning (GRELLER & DRACHSLER, 2012). Learning Analytics refers to the collection and compilation of data, which is then analysed to assess the progress of learners, as well as to judge their past and anticipate their future performance. Such assessment may draw on different variables, such as learners’ participation patterns, their responses, and their academic achievements (SIEMENS, 2013). In an online learning environment, learning data can be compiled by gathering information on learners’ performance in assignment submissions, in online tests or quizzes, and through their engagement with learning resources and discussion fora, as well as the reflective comments and feedback they post (cf. ERADZE, 2016).
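As an illustration only, the short sketch below shows how such engagement indicators could be compiled from a VLE activity log export. The file name, event labels and column names are assumptions made for this example and do not correspond to the schema of any particular platform.

```python
# Sketch: aggregating a hypothetical VLE event log into per-student
# engagement indicators (all names are illustrative assumptions).
import pandas as pd

# Assumed export: one row per logged event with user_id, event_type, timestamp
logs = pd.read_csv("vle_activity_log.csv", parse_dates=["timestamp"])

engagement = (
    logs.groupby("user_id")
        .agg(
            total_events=("event_type", "size"),
            active_days=("timestamp", lambda ts: ts.dt.date.nunique()),
            forum_posts=("event_type", lambda e: (e == "forum_post").sum()),
            quiz_attempts=("event_type", lambda e: (e == "quiz_attempt").sum()),
            resource_views=("event_type", lambda e: (e == "resource_view").sum()),
        )
        .reset_index()
)
print(engagement.head())
```

A per-student table of this kind can then be set against assessment records, which is the general approach taken in the analysis later in this paper.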
In this paper, we aim to use Learning Analytics datasets from the virtual learning environment of the University of Mauritius to measure the level of interaction of students in online course provisions. This research is of great importance in the local context of the Mauritius higher education sector, as public universities face severe cuts in their government grant-in-aid support. Universities have therefore aligned their strategic plans to place e-learning and online education at the heart of their delivery. There is, however, still some resistance from academics and some degree of scepticism in the student community with respect to the new educational models that are being proposed.
From our analysis, we draw conclusions on how the online elements of a course relate to student achievement or (potential) failure, and what this means for course design.
2 Previous work on the effectiveness of online learning
Enrolment in online courses has increased considerably in the higher education sector. According to a survey by ALLEN & SEAMAN (2011), the rate of increase in online enrolment has slowed to about 10%, but still far exceeds the overall growth in higher education enrolment of approximately 1%. Despite this trend, the study showed that one third of professors still believe that face-to-face (F2F) education is superior to online education in providing students with quality instruction. This proportion has remained nearly constant since 2003 (ALLEN & SEAMAN, ibid.).
A survey of the literature reveals that the discussion around the effectiveness of online learning has somewhat calmed down after much activity in the late 1990s and early 2000s. In one of the early studies on the effectiveness of online education, PICCOLI et al. (2001) found no significant difference in student performance between face-to-face and online delivery. In that study, a virtual learning environment (VLE) was used for daily instruction, but all course exams were conducted in person. Students in the online sections of the course reported higher self-efficacy than their face-to-face peers, but also lower satisfaction with their course.
The effectiveness of online education has largely been investigated and reported in settings where no stark contrast in results was found between face-to-face and online courses. In a teacher education course, traditional, online, and classroom-in-a-box delivery were compared and no significant difference in student performance was found between the three modes (SKYLAR, 2005); student satisfaction was likewise shown to be roughly equal. However, in a Thai business statistics course, students taking the online course performed significantly better than their peers in an equivalent F2F course (SUANPANG, 2006). LAMERES & PLUMB (2014) compared the performance of online and F2F students in a classroom- and lab-based electronics course and again found no significant difference in their achievements.
From these and other comparative studies, we can conclude that the evidence on whether online learning is better or worse than classroom tuition in terms of pedagogical outcomes is inconclusive and highly context-specific. For this reason, we decided to use the evidence from our own data and analytics to compare and evaluate student engagement and achievement. The motivation for this came from the desire and need to bring arguments that influence course design and delivery mode into the local debate and to make them specific to Mauritius and our students.
3 Background work on Learning Analytics
Online environments hold a great deal of information about student interactions that can be set against their learning outcomes and the grades awarded by their lecturers. Additionally, universities possess other student data such as schooling background, prior grades, or personal data including gender. The exploitation of this “big data” in education through educational data mining and Learning Analytics has gathered pace in recent years, with many universities around the world using the information they hold on students to investigate, reflect on, and improve their services. National education authorities like SURF in the Netherlands (cf. ENGELFRIET et al., 2015) or JISC in the UK (SCLATER & BAILEY, 2015) are keenly trying to advance the implementation of Learning Analytics in higher education institutions and to alleviate potential barriers, such as data privacy concerns. Recently, policies and good-practice guides have been emerging to give further guidance on how to apply Learning Analytics at an institutional level (cf. OPEN UNIVERSITY UK, 2014; SCLATER, 2016; JRC, 2016).
Our rationale for applying Learning Analytics to investigate student participation and success lies mainly in its anticipated power to detect students in danger of falling behind or dropping out of a course early, so that dedicated remedial actions can be taken to keep them on track (GRELLER & DRACHSLER, 2012).
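To make this rationale concrete, the following minimal sketch shows what such an early-warning step could look like, assuming a hypothetical table of early-semester engagement features and a completion flag; it is an illustrative outline, not the model used in this study.

```python
# Sketch: flagging potentially at-risk students from early engagement data
# (file, feature and column names are hypothetical).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

data = pd.read_csv("engagement_features.csv")           # one row per student
X = data[["logins_first_6_weeks", "forum_posts", "quiz_attempts"]]
y = data["did_not_complete"]                             # 1 = failed or dropped out

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```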
4 Issues and setting
Since 2009, the University of Mauritius (UoM) has been offering the “Diploma in Web and Multimedia Development” programme to secondary school leavers, online through its e-learning platform. Two modules in the first year are, however, offered face-to-face, given that they form the core of the subject area. The course runs on a full-time basis over four semesters, with exams for each module normally held at the end of the academic year. Throughout the year, there are also a number of assignments and practical exercises that students have to submit as part of the continuous assessment of the course. Enrolment numbers in the programme have risen continuously over the years. Then, in 2012, it was noticed that some 30% of students failed the course in their first year. This issue triggered a data investigation through a Learning Analytics approach to try to establish the main causes of the drop-out.
In 2012, some 120 freshmen were enrolled in the programme. After the first-year exams, many of them either had at least one re-sit, had to repeat the year, or were terminated from the course. There have been sporadic claims, especially from a few of the failed students, that the online mode of delivery was to their disadvantage, despite them knowing prior to registration that the course was offered in an online distance education mode. Given that student numbers had been steadily increasing due to the policy of widening access to tertiary education, a number of questions arose in the challenge of explaining this relatively high dropout:
- Is there a correlation between the HSC grades of first-year students and their performance in the course?
- Is there a significant difference between the performance of the same students in the online modules when compared to modules offered in a face-to-face mode?
- Do we find gender-related issues?
- Is there a link between student engagement in an online module and the performance of that same student in the exams?
Statistical analysis of interaction data from the VLE log files was the main method adopted for this investigation of two...
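As an illustration of how the four questions above map onto standard statistical tests, the sketch below assumes a hypothetical merged table of anonymised student records and VLE activity counts; the column names are placeholders and the tests shown are indicative rather than the exact procedure applied in this study.

```python
# Sketch: indicative statistical tests for the four research questions
# (column names are hypothetical placeholders for the merged dataset).
import pandas as pd
from scipy import stats

df = pd.read_csv("merged_student_data.csv")

# Q1: association between HSC entry grades and overall course performance
rho, p = stats.spearmanr(df["hsc_points"], df["overall_mark"])
print(f"HSC grades vs. overall mark: rho={rho:.2f}, p={p:.3f}")

# Q2: same students in online vs. face-to-face modules (paired comparison)
w, p = stats.wilcoxon(df["mark_online_modules"], df["mark_f2f_modules"])
print(f"Online vs. F2F marks (Wilcoxon): W={w:.1f}, p={p:.3f}")

# Q3: gender-related differences in overall performance
male = df.loc[df["gender"] == "M", "overall_mark"]
female = df.loc[df["gender"] == "F", "overall_mark"]
u, p = stats.mannwhitneyu(male, female, alternative="two-sided")
print(f"Gender difference (Mann-Whitney U): U={u:.1f}, p={p:.3f}")

# Q4: link between VLE engagement and exam performance in an online module
rho, p = stats.spearmanr(df["vle_events"], df["mark_online_modules"])
print(f"Engagement vs. online exam mark: rho={rho:.2f}, p={p:.3f}")
```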