Want to eliminate cheating? Ask better questions
There was an excellent piece in The Conversation a week or so ago written by Beverley Oliver (@DVCEdDeakin) entitled, Proving knowledge by degrees: MOOCs and the challenge of assessment. The reference to MOOCs in the title is a useful device for attracting attention, but the content of the article focuses on the quality of assessment design, and has broader applicability beyond the context of MOOCs.
The design of assessment tasks is critical to the creation of authentic learning environments. Student engagement, and the deep learning that ensues, is far more likely if students can see the point of what they are doing (see, for example, Lizzio & Wilson, 2013). As Professor Oliver notes:
Perhaps instead of focusing on how we test students, a more purposeful question might be: presuming we know what outcomes we need students to achieve, and at what standard, what evidence will enable us to judge that this student is ready to graduate? In other words, assessment tasks are opportunities for students to create evidence of learning achievements in an array of formats.
In other words, if we look upon assessment as learning, rather than assessment of learning, the approach taken by the learner (and faculty) takes on a completely different complexion, as Dr Adele Flood describes in this video clip.
A commitment to authentic assessment, while not completely eliminating the prospect of cheating, provides the opportunity for students to demonstrate what they know rather than what they don't know. With some imagination, and by harnessing the power of technology, assessment tasks can be created that draw on either real-world problems or situations that simulate them.
In summary, if we ask better questions, the scope for unethical practice will be limited, because the evidence of learning will be highly personalised (with an intrinsic motivation not to cheat) and presented in a format where it can be exhibited to people who have an interest in that evidence of learning (through a learning portfolio shared with prospective employers, for example). A useful criterion to guide assessment design would be to ask whether the final deliverable is sufficiently authentic to be deemed 'curatable' by a student as a digital artefact that showcases their learning.
This is a shift in thinking that, as Trish McCluskey (@trilia) has pointed out recently, requires us to re-evaluate how we define cheating in a default digital world. It also requires us to move beyond traditional text-based assessment, which has been the dominant paradigm in educational institutions for centuries. In a digital age, the ability to manipulate text to convey meaning and understanding is a necessary but insufficient condition for demonstrating competency in any given domain. That ability is what it meant to be literate in the analogue industrial age, but now multiple digital literacies are required of graduates if they are to participate fully in society and meet the expectations and demands of employers.
For this to happen, of course, education institutions also need digitally literate staff.