Paige Morrow: My first ChatGPT hallucination experience
I’ve heard many times that chatbots such as ChatGPT can sometimes make things up, leading to inaccurate information. I just had not experienced it firsthand…until now.
I recently finished reading “Against Decolonisation: Taking African Agency Seriously” by Olúfẹ́mi Táíwò and wanted to write down my thoughts on it. I was not aiming to write a book review, just some reflections on how it (mis)aligns with my stance on decolonization narratives/approaches. Since it would be a quick blog, I thought I would ask ChatGPT for a summary of the book, which I would include as a screenshot along with my thoughts. I entered the prompt: ‘What is the summary of the book Against Decolonisation: Taking African Agency Seriously’ (yes, I admit it was not the best prompt). ChatGPT provided the output below in Figure 1.
At first glance, the output looks okay, and it gets many points of the book correct. However, if you have read the book (or the second paragraph of this blog), you will immediately notice one glaring issue. So, I followed up with another prompt, as seen in Figure 2 below.
Yup, ChatGPT gave a fairly okay summary and made up an author. What was also surprising was that it appeared to offer a justification for the “confusion caused.” This response had me wondering about the possibility of AI becoming sentient in the future…but that’s not for this blog.
What ChatGPT did is commonly called hallucination. IBM defines AI hallucination as “a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate” (https://www.ibm.com/topics/ai-hallucinations). Some have made a big deal about this, as if we have forgotten that, naantu oha ya fundju, yes, even people lie and make things up. Perhaps we should expect that a tool made by humans will also have these not-so-nice human characteristics. Plus, not everything on the internet is factual either. So the issue should not just be about the inaccurate outputs (lies); it is about how we deal with these hallucinations.
It is about literacies
Literacy is often narrowly defined as a person’s ability to read and write. In today’s digital and information age, however, we have multiple literacies. When it comes to ChatGPT, we need to emphasise information literacy, which means that “a person must be able to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information” (https://www.ala.org/acrl/publications/whitepapers/presidential). In other words, it is not just about having a question and knowing that you can ask that question in a chatbot. It is also critical to have the ability to evaluate and effectively use what the chatbot puts out. The question then becomes: how do we develop these skills?
AI tools can give some of us a false sense of knowing things we do not understand. With chatbots like ChatGPT, one enters a query and a response is given. Receiving an answer is different from knowing, understanding, or having expertise in something. To benefit from the age of AI, we still need to know things. We still need to read so that we can identify when systems hallucinate. We still need to ensure that more people are able not only to locate answers but also to evaluate and use what they are given effectively.
My hope is that once the euphoric dust around AI settles and we have moved beyond the dualistic arguments of doom and gloom vs. best thing since oxygen, we can focus on information literacy, or a new form of AI literacy.