Teaching AI Auditing: Empowering Students to Evaluate AI Data Biases

Written By:

Julie M. Smith, PhD, Senior Education Researcher

Monica McGill, EdD, Founder & CEO

Institute for Advancing Computing Education in Partnership with the Computer Science Teachers Association

With the shift to AI as a prominent tool, it’s inevitable that the way we teach subjects, including computer science, will start to change, even in K-12.

Let’s take a brief look at one concept: algorithm auditing. Algorithm auditing, or testing algorithms to assess whether they do what the programmer intended, is not a new concept.

Often, the error messages a programmer receives from the compiler give students clues about whether the program is syntactically correct, which can contribute to the algorithm audit process.

However, we don’t stop there. When we teach students to program, we stress the importance of testing, through both ad hoc testing and unit testing.

Having written the algorithms and the program, students can question their steps, run simple checks between various lines of code, and work with a debugger to watch the variables change.
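To make that habit concrete, here is a minimal sketch of the kind of unit test students might write, assuming a simple, hypothetical average() function as the code under test (the function and the test cases are illustrative, not drawn from any particular curriculum):

```python
import unittest

def average(numbers):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(numbers) / len(numbers)

class TestAverage(unittest.TestCase):
    def test_typical_values(self):
        # An ad hoc check turned into a repeatable unit test.
        self.assertAlmostEqual(average([2, 4, 6]), 4.0)

    def test_single_value(self):
        self.assertEqual(average([10]), 10)

    def test_empty_list_raises(self):
        # Edge case: students must decide and then test the intended behavior.
        with self.assertRaises(ZeroDivisionError):
            average([])

if __name__ == "__main__":
    unittest.main()
```

The point is not the arithmetic; it is the practice of stating what the code is supposed to do and checking it automatically.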

So, what happens when programming is left to AI, that is, when AI is the programmer?

Research has shown that, when students participate in AI algorithm audits, they become aware of bias and design-related issues; they also gain knowledge to improve their own models (Morales-Navarro et al., 2024). Thus, AI algorithm auditing appears to be a promising way to improve students’ AI literacy and skills.

How do we shift our teaching practice to adapt to, and lean into, AI auditing? And, given the uncertainty that comes with its relative newness, how do we embed ethical and critical thinking into this process? Ethics in AI auditing includes ensuring that emphasis is given to fairness, bias, transparency, and accountability. Beyond these technical aspects, ethics also includes the investigation of social aspects (see Table 1).

Table 1. Adapted from Laine, Minkkinen, & Mäntymäki (2024)

Process-oriented approach
- Social aspects: AI auditing as interrogating how complex social groups and contexts are represented in algorithmic systems
- Technical aspects: AI auditing as technically governing algorithmic processing

Outcome-oriented approach
- Social aspects: AI auditing as avoiding social discrimination, prejudice, and harm in predictions and decisions
- Technical aspects: AI auditing as technically assuring appropriate outputs

All of that is nice, but practically speaking, what can teachers do in the classroom to bring these concepts and practices to students?

One lesson might involve having students develop an AI-based image classifier in groups. In this activity, students use photos, such as images of different types of workers. Students work in teams to manually classify the photos into groups (such as preschool teachers, lawyers, and auto mechanics), and this classification is used to train an AI model.

Once trained, the AI model is then presented with new photos and tasked with classifying them. A different team of students is then assigned the role of algorithm auditors, checking the model for accuracy in general as well as for any possible biases in the output. For example, the model may have an overall accuracy near 90%, but only 30% accuracy for pictures of auto mechanics who are women or preschool teachers who are men.
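As an illustration of what the auditing team’s analysis might look like, here is a minimal sketch in Python, assuming the auditors record the true label, the model’s prediction, and the attribute they are checking for bias (the column names and the tiny sample are hypothetical):

```python
import pandas as pd

# Hypothetical audit log: true occupation, model prediction, and the
# attribute the student auditors are checking (here, gender presentation).
audit = pd.DataFrame({
    "true_label": ["auto mechanic", "auto mechanic", "preschool teacher",
                   "preschool teacher", "lawyer", "lawyer"],
    "predicted":  ["auto mechanic", "preschool teacher", "preschool teacher",
                   "lawyer", "lawyer", "lawyer"],
    "gender":     ["woman", "woman", "man", "man", "woman", "man"],
})

audit["correct"] = audit["true_label"] == audit["predicted"]

# Overall accuracy across all photos.
print("Overall accuracy:", audit["correct"].mean())

# Accuracy broken down by occupation and gender presentation,
# where a bias can hide behind a decent overall number.
print(audit.groupby(["true_label", "gender"])["correct"].mean())
```

With a table like this, students can see at a glance how a reasonable overall accuracy can mask much weaker performance for particular subgroups.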

Another lesson might expand students’ data science skills through an algorithm audit of a large language model such as ChatGPT. In this lesson, students might repeatedly query ChatGPT with fictitious patient data and ask for a diagnosis. Then, using their data science skills, they might analyze whether there is a pattern of racial or gender bias in the diagnoses. Or, fictitious loan or rental application data might be used to ask whether a loan should be issued or a rental agreement should be approved, and the results analyzed for various forms of bias.
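A sketch of that analysis might look like the following, where query_model() is a hypothetical stand-in for however the class actually queries the LLM (web interface, API, or otherwise), and the symptom description, group labels, and repetition count are all illustrative:

```python
import random

import pandas as pd
from scipy.stats import chi2_contingency

def query_model(prompt: str) -> str:
    """Placeholder for however the class queries the LLM. It returns a
    random diagnosis so the analysis below runs end to end; students
    would replace it with a real query to their chosen tool."""
    return random.choice(["heart attack", "anxiety", "indigestion"])

# Fictitious patient descriptions that differ only in one demographic detail.
symptoms = "45 years old, chest pain radiating to the left arm, shortness of breath."
groups = ["a man", "a woman"]

records = []
for group in groups:
    for _ in range(50):  # repeat the query; we care about patterns, not single answers
        prompt = (f"A patient, {group}, {symptoms} "
                  "What is the most likely diagnosis? Answer in a few words.")
        records.append({"group": group, "diagnosis": query_model(prompt)})

results = pd.DataFrame(records)

# Cross-tabulate diagnoses by demographic group and test for independence.
table = pd.crosstab(results["group"], results["diagnosis"])
print(table)
chi2, p_value, _, _ = chi2_contingency(table)
print("p-value for association between group and diagnosis:", p_value)
```

A low p-value would suggest the diagnoses are not independent of the demographic detail, which is exactly the kind of pattern students would then investigate and discuss.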

At this point, it’s safe to say that we’re in the experimentation stage not only with AI, but also with how to teach it in ways that meet the needs of students.

Resources

https://ai4k12.org/

https://www.teachai.org/

References

Laine, J., Minkkinen, M., & Mäntymäki, M. (2024). Ethics-based AI auditing: A systematic literature review on conceptualizations of ethical principles and knowledge contributions to stakeholders. Information & Management, 103969.

 

Morales-Navarro, L., Kafai, Y. B., Konda, V., & Metaxa, D. (2024). Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications. arXiv preprint arXiv:2404.05874.