Understanding the capabilities and limitations of generative AI


In a webinar, Mandy Henk and Dr Sarah Bickerton of Tohatoha shared their insights into how generative AI is already playing a role in students’ lives, and how teachers can explicitly teach students the limitations and potential of generative AI in their classrooms.

What is generative AI?

Until the launch of generative AI towards the end of 2022, the most commonly used form of AI was algorithmic AI, as in voice assistants such as Siri and Alexa. Algorithmic AI analyses data and performs tasks using a set of rules or instructions, and works within a relatively narrow set of parameters. By contrast, generative AI is trained on a large data set to identify the probabilistic connections between items and develop a network map of all the items within its data set. The responses it generates are based on the strength of the connections between items. Algorithmic moderator bots act as guardrails to ensure that certain limitations are in place around the kinds of responses that generative AI provides.

Students are likely to be interacting with generative AI outside of school, as these tools are well integrated into digital products that they use every day. One of the most common is My AI in Snapchat. 'Help me write' in Google is another AI feature that they may be using in their personal Google account.
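For teachers who want a concrete way to illustrate what 'responses based on the strength of connections' means, the short Python sketch below shows the core idea of choosing the next word in proportion to learned probabilities. It is purely illustrative: the tiny hand-made word table stands in for the vast probabilistic map a real generative AI learns from its training data, and the word choices and weights are invented for the example.

import random

# Illustrative only: a tiny, hand-made table standing in for the
# probabilistic map a real generative AI builds from its training data.
# Each word maps to possible next words and the strength of the connection.
next_word_probabilities = {
    "the": {"cat": 0.5, "dog": 0.3, "teacher": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "teacher": {"explained": 1.0},
    "sat": {"quietly": 1.0},
    "ran": {"quickly": 1.0},
}

def generate(start_word, length=4):
    """Build a short phrase by repeatedly choosing the next word
    in proportion to the strength of its connection."""
    words = [start_word]
    for _ in range(length):
        options = next_word_probabilities.get(words[-1])
        if not options:
            break  # no known connections from this word
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"

Because the choice is probabilistic rather than rule-based, the output can differ each time the sketch is run, which mirrors why generative AI produces fluent text without any guarantee that what it says is accurate.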

While AI can and will play a role in teaching and learning, it will not render teachers redundant or irrelevant. Rather, it is important for teachers to adapt their pedagogy appropriately. A significant risk for teachers is that students will use AI to generate answers to questions without acquiring or understanding the knowledge required to complete tests and assignments. Another potential risk is that students are no longer taught the skills of composition. Rather than assuming that students will use AI, or alternatively simply banning its use, it is important for teachers to be clear about their pedagogical objectives, to consider the roles that AI tools may play, and to design assignments and learning tasks accordingly. For example, in order to support students’ understanding of how well AI can write an essay, teachers can use ChatGPT or a similar tool to generate an essay on a topic being studied in class, and then work with students to identify the strengths and weaknesses of that essay. Students can be tasked with verifying the evidence the essay purports to draw on, or assigned certain sentences to correct or improve.

It is important that teachers are aware of AI’s limitations as well as its capabilities, and that they explicitly teach their students about both. There is a mounting body of evidence that generative AI has serious limitations in its ability to perform mathematical tasks. It is also deeply flawed in its ability to provide citations for factual information. Common issues are that it generates fabricated citations, or supplies a reference that does not contain the information being cited. Similarly, generative AI may sound authoritative when presenting factual information, but there is no guarantee that the purported facts are accurate. One of the most critical things to teach students about generative AI tools is that they confabulate, and are therefore not infallible. If students are using AI to generate text, they need to understand that they must fact-check the generated text very critically.

Teachers should also teach students to be aware of AI’s potential for bias. Because generative AI creates probabilistic maps of commonalities between items, dominant perspectives are more likely to be identified and repeated by the AI. This means that nuanced perspectives, and the knowledge and perspectives of smaller groups, are much less likely to be surfaced by AI, or less likely to be reported as true.

It is important to teach students that generative AI tools are not search engines, nor should they be used as search engines. It is also worth talking to students about the fact that, since the advent of generative AI, search engine results may be less reliable, given that some of the results returned in response to a search query will now be AI-generated. This provides a good opportunity to refer students to the many resources and services offered through school and community libraries, and to encourage them to take advantage of these when doing research and completing assignments.

There are certain ethical implications related to the use of generative AI, one of which is that the data collected to train these tools may have been gathered in violation of copyright law. Similarly, there is the question of the copyright status of what is generated. There have been instances of generative AI tools producing responses that contain authors’ copyrighted material, and not all writers and artists have access to legal recourse when their copyright is breached.

There is also some research currently being done around the potential risk of self-radicalisation by users of generative AI, due to the tools being programmed to provide pleasant, affirming responses regardless of the questions or views being entered. It is important to ensure that students understand this characteristic of generative AI. Similarly, it is valuable to remind students that, while generative AI is skilled at mimicking human emotions and human connections, it is not human.   
