Chat Generative Pre-Trained Transformer (ChatGPT) is an artificial intelligence tool that has been trained using deep learning algorithms to generate conversational interactions in response to user prompts. It is a conversational chatbot that answers questions and provides information, rapidly generating a typed response. According to the OpenAI website, the trained model can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

I became aware of ChatGPT in early December 2022, although I didn’t get a chance to properly play with it until the day before my holidays, when I had a quieter inbox. The technology was released for public use on November 30, 2022, and during this research preview, ChatGPT is free to use. In recent days, the number of users has reached capacity, so it has not always been available.

I am an advocate of incorporating technology in learning. I use polling tools such as Mentimeter in the classroom to promote active learning. I have flipped the student role using PeerWise, where students create multiple-choice questions, write explanations, and evaluate the work of their peers. My research evidence supports this technology-enhanced learning. My curiosity was naturally piqued by ChatGPT. As an academic, I had questions: Can ChatGPT answer assessment questions? If yes, what is the quality like? Can it be detected by the plagiarism software Turnitin? If several users use the same prompt, will ChatGPT generate very similar responses?

Here’s my perspective from investigating ChatGPT. I wanted to better understand its functionality. My discipline is chemistry, and the examples I share are chemistry assessment questions.

Can ChatGPT answer assessment questions?

In some instances, yes, but not for all assessment questions. For questions that focus on knowledge and understanding, framed with verbs such as “describe” and “discuss”, ChatGPT can generate responses. For questions that focus on application of knowledge and interpretation, ChatGPT reaches a limitation. Questions that refer to information presented in, for example, a figure or graph cannot be answered.

Its knowledge cut-off is 2021, which became apparent in a question that referred to a medicine approved in 2021.

In my testing, generated responses did not exceed 600 words. A prompt asking for a 2,000-word answer did not produce the requested length: two different requests generated 519 and 549 words.

What is the quality of ChatGPT responses?

Generated responses tend to have a good structure and are well written. The quality of answers varies, and one response was found to contain a factual error. This was a surprise, as a Google search for the same information returned the correct answer. Questions that required more complex analysis or interpretation were poorly answered. The ChatGPT response shown would not meet the pass criteria.

Can ChatGPT include academic references?

Yes, it can when the prompt requests references. The examples show the iterations in ChatGPT where “give appropriate references” is included in the prompt, and this produces weblinks as the reference sources. In the third iteration, “do not use Wikipedia” and a specific reference style format were included in the prompt, and the quality of references improved from weblinks to academic texts.

Using it well does require the user to understand the objectives of the task and to critically evaluate the outputs.

Will Turnitin detect ChatGPT responses?

Turnitin is a web-based text-matching service that compares assessment work submitted by students against electronic sources, databases and other student submissions to identify duplication or cheating. Turnitin did not produce a high percentage matching score for the ChatGPT-generated responses. For questions that requested structures to be drawn, only text answers were generated, which is not the typical format for some disciplinary topics, and this would alert me as the assessor. Another interesting outcome is that responses generated from different user accounts were unique.


My Thoughts

There has been a lot of ChatGPT hype in recent days. I don’t fear this technology. I can foresee educational benefits with ChatGPT, particularly as a developmental tool for learners. The most important aspect to consider as educators is our assessment design. What exactly are we assessing? This is something we have questioned, particularly during the COVID lockdowns and the pivot to online teaching and assessment. We needed to produce “take away” exams that could be completed online, in open-book format and during an appropriate time window. Reframing our assessments from recall-based tasks to questions that required students to demonstrate how they use information was key.

ChatGPT does not process application and interpretation of knowledge well. Problem-solving, data-interpretation or case-study-based questions are ways to redesign assessment beyond knowledge-based questions. This disruptive technology will help educators to question why we are doing things this way. That has to be a good thing in my book! Or maybe I will ask ChatGPT 🙂

(Date 19th January 2023)