Shardlow, Matthew ORCID: https://orcid.org/0000-0003-1129-2750 and Latham, Annabel ORCID: https://orcid.org/0000-0002-8410-7950 (2023) ChatGPT in computing education: a policy whitepaper. Discussion Paper. Council of Professors and Heads of Computing, UK.
Published Version. Available under License In Copyright.
Abstract
On Monday 17 July 2023, 65 academics from 33 universities across the UK joined forces for a workshop to explore the effect of generative AI tools (such as ChatGPT) on Computing Higher Education, and to co-develop guidelines for university assessment policy. The ‘ChatGPT in Computing Education: A workshop to Co-Develop Guidelines for Assessment Policy’ workshop was funded by a Council of Professors and Heads of Computing (CPHC) 2023 Special Project Fund grant, and hosted by Dr Annabel Latham and Dr Matthew Shardlow at Manchester Metropolitan University, UK. The day started with a talk on how Large Language Models (LLMs) work, to give context for the discussions. In the first workshop task, six modes of assessment common in computing-related higher education courses were evaluated in terms of the threat posed by LLMs and the opportunities for redesign. A group feedback session explored findings and thoughts about the six traditional assessment types, along with ideas for future assessments. The group noted that assessment types are already evolving away from traditional knowledge-based assessments (factual recall, closed exams) towards skills-based assessments (coursework, practical activities). Whilst knowledge-based assessment may be threatened by LLM-based plagiarism, skills-based assessments require the learner to demonstrate a practical ability. If such a skill is assessed through a written piece, it may be vulnerable to academic misconduct; however, there are many alternative ways of assessing skills, such as practical exercises and vivas. The group examined a number of forms of written assessment (knowledge recall, critical analysis, long essay and experiential) as well as two code-based assessment formats (code production and code analysis). The first part of this report gives a summary of the findings for each of these types of assessment. The final workshop activity was a World Café activity, whereby each table group was assigned a policy topic and a host who remained with the table to lead discussions. Each group of attendees spent 10 minutes at each table discussing and shaping policy guidelines for universities. These groups discussed a wide range of policy topics, including how to incorporate LLMs into HE practices, mitigation of academic misconduct, delivery strategies and appropriate timescales for adoption. These discussions, along with the relevant policy points, are summarised in the second section of this report. This report is intended for policy makers in Computing HE settings and beyond. Our findings demonstrate the need for informed decisions to be made within our university settings. LLMs are here and their power is increasing. As educators, we must stay ahead of this curve, incorporating this technology into our teaching practices to the benefit of our students’ education and future employment prospects.