The ChatGPT challenge: regulate or liberate


Since its launch in late 2022, ChatGPT has become the topic of conversation, with scholars quickly weighing in with concerns over the authentication of assessment submissions (was it written by an AI?), how ChatGPT could be used in new assessments, and how it might change student careers in the future.

Stepping away from these bigger picture concerns, what should we be doing right now, on the verge of a new semester, to protect against or perhaps encourage ChatGPT in our 2023 assessments?

In some ways, a student asking ChatGPT to write their assignment is no different from them asking a fellow student to write it for a fee. Under TEQSA legislation both approaches could be considered contract cheating. Yet, assuming students will start to use ChatGPT – perhaps in much greater numbers than they currently use contract cheating services – how might academics adjust their practice, and reframe assessment tasks to assume the tool will be used, rather than trying to identify and regulate its use?

ChatGPT and existing legislation

The first question is whether or not ChatGPT (and other AI text generation tools) is already regulated by existing legislation. Regulation to fine and incarcerate contract cheaters who promote, write and sell student assessment solutions for a fee is well established in Australia and the United Kingdom, but should this apply to multipurpose AI Chatbots, such as ChatGPT?

ChatGPT falls within the broad definition of an “academic cheating service” under the 2020 amendments to the Tertiary Education Quality and Standards Agency Act 2011 (Cth) s 5 if it provides, or undertakes, work for students that forms a substantial part of an assessment task they are required to complete personally. Under s 114A, a person (which includes a corporation) commits an offence if that person provides, or arranges for a third party to provide, an “academic cheating service” to a student undertaking a course of study with an Australian higher education provider, for a “commercial purpose” (i.e. the derivation of financial gain or reward).

If provided free to students for non-commercial purposes, ChatGPT avoids regulation. If there is a commercial purpose, for example licensing and incorporation into other software available to students for a fee, liability may apply. For corporations, the penalty can be $687,500. TEQSA also has powers under s 127A to apply to the Federal Court for an injunction requiring a carriage service provider to disable access to services contravening the Act.

In the higher education space, TEQSA has exercised this power numerous times, but it faces a unique challenge with ChatGPT because of the tool's legitimate public benefits. Legitimate uses include language translation and correction, general writing assistance and editing, idea generation, text summarising, job interview and application guidance, simple explanations of difficult concepts, guided support for solving maths problems, interpersonal advice, computer code generation and debugging, automation of workflows, development of viral social media and marketing materials, creation of essays, games and legal agreements, linking with other AI software such as DALL·E 2 to create artwork and even interior designs, and arguably better Internet searching than Google, to name but a few.

Attempting to regulate tools with legitimate public benefits places potential TEQSA regulation at odds with the wider community, and requires academic institutions to find a solution.

The role of academic institutions

For academic institutions, ChatGPT poses immediate integrity problems, some perhaps of their own making, born of an overreliance on inexpensive assessment and the quality assurance models that support it.

ChatGPT challenges many assessment strategies, particularly those based on low-level understanding and application of facts and concepts: multiple choice, true-false, essays, assignments, and simple programming. These approaches are no longer viable for validating knowledge and its application outside fully controlled, supervised environments such as examination halls or remote proctoring (solutions not without their own problems).

This means that for any unsupervised assessment, academics have to anticipate the use of ChatGPT. Software designed to detect ChatGPT output is currently easily defeated. More importantly, do tertiary institutions want to fund an ongoing war between AI-generated output and AI detection systems – a lucrative new business for plagiarism detection vendors?

Adjusting teaching practice

For academic institutions, a better approach may be to embrace ChatGPT as another tool, though this may mean jettisoning some assessment practices and encouraging more sophisticated, and perhaps more expensive, alternatives. TEQSA advises fostering personalised relationships with students as a means of avoiding contract cheating. This would include reflections on practicum, oral presentations (vivas) and classroom-based assessments (with its fourth suggestion, individualised assessment, already challenged by ChatGPT).

ChatGPT can enhance learning when combined with critical thinking and more authentic, holistic assessment. Tools such as ChatGPT, Google, database searches and Grammarly all assist with learning and assessment processes, but by themselves will not be able to complete the assessment.

This allows assessment to remain authentic and useful in the context of future workplaces, in contrast to the suggestion that institutions should return to high-stakes invigilated assessment, an approach the academy has worked hard to move past.

Changes in assessment practice should also be made in conjunction with staff and student training that reinforces academic integrity through both academic misconduct prevention strategies and student-centred approaches.

The jury is still out on regulating ChatGPT, but the reality is that it has entered our institutions whether we like it or not. As educators, we need to work out how best to deal with it. ChatGPT, and variants such as WebGPT that produce footnoted sources, are tools that should be embraced rather than regulated. We should be focusing on assessing the higher-order thinking and critical analysis through which humans add value beyond the capabilities of the AI environment we now inhabit. AI has the potential to change how we interact with and apply knowledge now and into the future, which suggests an immediate need to welcome AI into the education process.

by STEPHEN COLBRAN, COLIN BEER and MICHAEL COWLING
