In thinking about ways that we can evaluate the effectiveness of learning via our multimedia course redesigns, the following comes to mind:
- Web analytics offers an unobtrusive way to learn more about our students’ learning, and it is particularly suited to the medium.
- Analytics can tell us what and how: what students are doing online (while logged into the content) and how they are using that content. We can “see” what they do without directly observing them as they click.
- Because our work is for required courses, students have an incentive to master the content in order to pass the course as well as end-of-program qualifying/certifying exams. “Intent,” as currently discussed in the analytics literature, is therefore not an issue for us. Likewise, “conversion rates” here mean achievement of learning outcomes on short- and longer-term exam/program measures, not short-term purchasing decisions.
- We can use the Trinity framework to assess our multimedia course designs and possibly (I hope) student learning outcomes.
- If we can provide students with the ability to monitor their learning through analytics reporting, they may be more open to the idea of pre- and post- “quiz” assessments. To be credible, this research must include pre/post measurements as baseline evidence of learning.
- Qualitative data is essential to complement analytics. We need to determine the best strategy for collecting students’ reactions to the multimedia as well as their own assessments of their learning. I would prefer to capture that feedback anonymously and use it in aggregate.
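To make the pre/post idea concrete, here is a minimal sketch of how we might summarize paired “quiz” scores as baseline evidence of learning. All scores, the 100-point scale, and the variable names are hypothetical; it reports the mean raw gain and the mean normalized gain ((post − pre) / (max − pre)), a common way to adjust for students who start near the ceiling.

```python
# Minimal sketch: summarizing hypothetical pre/post quiz scores.
# Scores, scale, and names are illustrative, not real data.
from statistics import mean

MAX_SCORE = 100  # hypothetical quiz scale

# Hypothetical paired scores per student: (pre, post)
paired_scores = [(40, 70), (55, 80), (60, 75), (35, 65), (70, 90)]

raw_gains = [post - pre for pre, post in paired_scores]
# Normalized gain: fraction of the *possible* improvement each student achieved
norm_gains = [(post - pre) / (MAX_SCORE - pre) for pre, post in paired_scores]

print(f"Mean raw gain:        {mean(raw_gains):.1f} points")
print(f"Mean normalized gain: {mean(norm_gains):.2f}")
```

Reporting a number like this back to students (“your gain this unit”) is one way the analytics reporting above could make the assessments feel worthwhile rather than evaluative.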
In summary, clickstream, time-on-task and other data gleaned from logs will tell us how students are using the multimedia (which we need to know). Linking behavior to assessment scores (learning outcomes) will help us understand correlations between behavior and achievement (maybe). Connecting behavior, outcomes, and students’ own assessments of the impact of multimedia on their learning can help us develop “actionable insights” to change/improve the media and educational impact for future students.
Next step to figure out: how to design reports for students and reports for instructors, with the aim of helping students target their studying toward mastery and helping instructors make changes/improvements in classroom instruction, respectively.