Maintaining Assessment Integrity in the Age of ChatGPT
Humans have always been drawn to the idea of an omniscient, ultimate intelligence, which could explain our fascination with ChatGPT, the generative AI tool released in November 2022. ChatGPT drew around 1.6 billion visits in March 2023 and is the fastest-growing online service to date. The fact that TikTok took about nine months to reach 100 million users puts the extremely rapid rise of ChatGPT in perspective. ChatGPT can be used for content generation, step-by-step solutions to mathematical problems, code generation, research, language translation, and summarising content. It retains the context of earlier exchanges within a conversation and allows follow-up questions on the same topic, much like a human conversation. According to an Insider report, GPT-4 answered more than 90% of the questions on the US medical licensing exam correctly.
As with any new creation that has the power to disrupt the way things are done, there are huge benefits and equally worrisome risks. One of the detrimental impacts is ChatGPT’s potential to undermine academic and assessment integrity. Assessments are conducted across all kinds and levels of learning, educational and professional, and their type, duration, and frequency are customised to the objective of the assessment. Assessment integrity is the merit and credibility of the assessment process and its outcome. It consists of the guidelines and safeguards that ensure fairness, accuracy, and validity in assessing knowledge, skills, and abilities. Maintaining assessment integrity is crucial, as decisions based on assessment results can have a far-reaching impact.
ChatGPT’s manifold abilities are particularly suited to scholastic tasks like essays, research papers, summarising text, step-by-step solutions, and writing code. These capabilities have set off considerable deliberation in educational institutions across the globe. A January 2023 Study.com survey of 1,000 students aged over 18 revealed that 89% of students had used ChatGPT for homework assignments, 48% for at-home quizzes, and 53% for essays. Academics have realised that disregarding the challenges ChatGPT poses to assessment integrity will imperil the credibility of the educational system. Let’s look at the problems caused by generative AI tools’ multi-faceted competencies, which educators must overcome to ensure genuine learning and assessment of knowledge.
Plagiarism
Copying another person’s work and presenting it as your own without attribution is probably as old as academia itself, but the advent of ChatGPT has democratised the practice and made it mainstream. Noam Chomsky, the celebrated linguist, has described ChatGPT as high-tech plagiarism and a way to avoid learning. ChatGPT has advanced natural language processing capabilities, comprehends input prompts thoroughly, and produces persuasive content. There are valid concerns regarding the infringement of intellectual property in the content it generates, and the infractions extend to facilitating academic misconduct in performing research and reporting results. Such misuse defeats the whole purpose of enrolling in a course: students waste the money and time spent on their education with no gains or improvement in return.
Cheating
Plagiarism is just one aspect of using AI-generated content to complete assignments. Submitting readymade material with no work put in by the student, and then being graded for it, falls under the category of cheating. The student has not contributed to the output and does not deserve the awarded grade. Moreover, the grade does not reflect the learning the student actually achieved: the objective of an assessment or assignment is to test the student’s understanding, not their ChatGPT prompt-writing skills. Cheating on remote quizzes is also made easier, as ChatGPT can provide answers to questions on most subjects.
False information
ChatGPT does not create content based on verified facts. It produces text by predicting which words are most likely to follow one another, based on statistical patterns learned from its training data. Depending on the input query, its response may be fabricated, biased, discriminatory, or obsolete. The veracity of its output depends on the authenticity and credibility of the data it was trained on. ChatGPT’s training data extends only to 2021, and since the model is not updated with current information, its answers to some queries may be outdated. When asked for references, ChatGPT may invent citations that look legitimate but do not lead to real sources, complete with made-up URLs. Learners may be unable to distinguish such concocted content from genuine information and accept the misleading material as accurate. This limitation compromises the student’s learning.
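To make this concrete, the short Python sketch below prints the probabilities a language model assigns to possible next words. It uses the freely available GPT-2 model from the Hugging Face transformers library as a stand-in, since ChatGPT’s own model is not publicly inspectable; the prompt, the model choice, and the top-5 cut-off are illustrative assumptions. The point is simply that the model ranks continuations by statistical likelihood, not by factual accuracy.

```python
# A minimal sketch of next-word prediction, assuming the open-source GPT-2
# model as a stand-in for ChatGPT's (much larger) underlying model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"            # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]        # scores for the next token
probs = torch.softmax(logits, dim=-1)

# Print the five most likely continuations: the ranking reflects learned
# word statistics, not a lookup of verified facts.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>12}  {float(p):.2%}")
```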
Decline of Cognitive Skills in Students
It is a human tendency to choose the path of least effort, and using ChatGPT to complete assignments diminishes students’ inclination to research a concept, form their own opinions, and reach their own conclusions. Once students get into the habit of completing their work with minimum effort, the pattern is hard to break. Getting the task done becomes so simple that intellectual apathy sets in: the student expends next to no thinking or reasoning to produce the assignment output. The purpose of the assignment, in terms of learning and skill development, is defeated, and over the long term creativity, critical thinking, and problem-solving skills gradually degrade.
As schools and learning centres grapple with the disruption caused by ChatGPT, they have realised the need for a comprehensive strategy to ensure assessment integrity. With AI technology progressing by leaps and bounds, banning ChatGPT is not a practical solution. The debate needs to centre on preventing misuse, boosting learning, and simultaneously redesigning assessments to measure the actual knowledge gained by students. Let’s look at a few approaches that help do so.
Re-defining Assessments
Most assessments have a generic problem statement and involve critical analysis. AI tools produce excellent content on general topics, and critical analyses usually lack personalised elements that draw on the student’s own experience. Redesigning assessments to apply to specific real-life or hypothetical scenarios, and introducing reflective components that capture the students’ original opinions on the subject, would mitigate the impact of ChatGPT. The reflective element should align with the course module’s subject matter and learning objectives. It could take the form of weekly entries or blog posts on the steps students are taking for the assessment, the impact of those activities, and their thoughts about the effects. Students could also list their reference material with a few lines on how each piece helped them complete the assessment. Hypothetical case studies that are detailed, and that require students to thoroughly understand the concepts being taught before applying them to the scenario, will demand genuine effort; here, ChatGPT can serve only as a learning aid. Frequent and timely refreshes of assessment questions will also make it harder for students to obtain readymade solutions.
Tools
AI-generated content can be identified, with varying reliability, by AI detection tools that use machine learning and natural language processing. These tools are trained to identify patterns in the content: the more predictable the writing, the more likely the tool is to categorise it as AI-generated (a simplified illustration of this idea follows below). There are also AI-enabled solutions that check for plagiarism and contract cheating. Such a tool compares the submission with websites, publications, and academic papers and lists possible instances of plagiarism; some include originality scrutiny that compares the submission with the author’s previous writing to detect contract cheating.
Browser extensions that integrate ChatGPT with most browsers, and display its output directly in the browser window, are widely available. Using secure browsers to conduct online exams prevents cheating through such extensions: a safe browser can restrict internet access, disable switching between windows and opening new tabs or windows, and block copy-paste functions. When a secure browser is not an option, intelligent proctoring solutions can prevent cheating with ChatGPT. These tools monitor the candidate’s screen and surroundings in real time, and any suspicious activity, such as switching windows, the candidate leaving the seat, or using other devices like phones or tablets, can be detected and flagged immediately. Smart solutions can even detect, through eye-movement analysis, whether the candidate is reading from a book.
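As a rough illustration of the predictability idea mentioned above, the sketch below scores a piece of text with the open-source GPT-2 model and flags unusually low perplexity (highly predictable text) as worth reviewing. This is only a toy heuristic under assumed choices of scoring model and threshold; commercial detectors combine many more signals, and no detector is definitive.

```python
# A toy perplexity check: low perplexity (very predictable text) is one
# signal that a passage may be machine-generated. The GPT-2 scoring model
# and the threshold value are illustrative assumptions only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the scoring model's perplexity for the given text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy
        # loss over the sequence; perplexity is its exponential.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

THRESHOLD = 30.0  # illustrative cut-off, not a validated value

submission = "Assessment integrity refers to the fairness and validity of the assessment process."
score = perplexity(submission)
verdict = "flag for review (low perplexity)" if score < THRESHOLD else "no flag"
print(f"perplexity = {score:.1f} -> {verdict}")
```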
ChatGPT Usage Guidelines
Educational institutes need to organise a dialogue with all stakeholders regarding the impact of AI tools like ChatGPT and collectively formulate a strategy that incorporates their use while upholding academic and assessment integrity. The outcome of these discussions should be unambiguous policies on the use of ChatGPT and similar AI aids, with clear guidelines on the citation and attribution of AI-generated content. Open and in-depth discussions about assessment integrity should be held with students, as student engagement and agency are key to the constructive adoption of ChatGPT. Students should also be educated about the loss of learning and the decline in cognitive skills that over-dependence on ChatGPT can cause.
Generative AI will continue to evolve and grow smarter as AI research progresses. Academics worldwide need to heed the urgency of the threat that unregulated use of generative AI tools poses to learning. It is time to analyse and develop the next generation of assessments, ones that go beyond content-creation tasks and successfully evaluate learning and original thought.