Technical Practices for Detecting Bias in AI: Building Fair and Ethical Models

In an era where Generative AI (Gen AI) shapes realities, it's crucial to recognize the inherent biases this technology may carry. Large language models (LLMs), developed using vast and diverse datasets, may reflect the biases inherent in the human-generated content on which they were trained.

According to Dr. Joy Buolamwini, the founder of the Algorithmic Justice League, an organization tracking the harms of artificial intelligence, AI-powered tools are “determining who gets hired, who gets medical insurance, who gets a mortgage, and even who gets a date.” Buolamwini goes on to say, “When AI systems are used as the gatekeeper of opportunities, it is critical that the oversight of the design, development, and deployment of these systems reflect the communities that will be impacted by them.” [1]

But AI should benefit everyone, right? Even OpenAI (the maker of ChatGPT) publicly states its mission is to ensure that artificial general intelligence “benefits all of humanity.” And the opening of OpenAI's GPT Store and similar venues has ushered in a new era of Gen AI apps, which may introduce new ethical challenges. How will we govern this technology to ensure ethical practices and benefits for all of humanity?

During this presentation, participants will gain insights into using AI Observability, AI Governance, and other key concepts to ensure responsible management of Gen AI systems in compliance with policies and standards.

Target Audience

This talk is for anyone using Generative AI (Gen AI), whether a curious consumer or a seasoned technologist, who shares the common goal that Gen AI should operate ethically and fairly.

Speaker
Bill Allen
Date & Time
Thursday, July 25, 2024, 10:45 AM - 12:00 PM
Location Name
Texas C
Session Type
Technology for All
Learning Objectives
After attending this talk, participants will be able to identify bias in AI platforms by applying three key practices:

• Evaluate training data for bias

• Document performance disparities

• Test the LLM for biases
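As one concrete illustration of documenting performance disparities, a common first check is to compare favorable-outcome rates across demographic groups and compute their ratio (the "four-fifths rule" heuristic used in disparate-impact analysis). The sketch below is a minimal, hypothetical example with made-up data; the group names, outcome values, and threshold are assumptions for illustration, and a real audit would use actual model outputs and a fairness toolkit.

```python
# Hypothetical sketch: comparing favorable-outcome rates across groups.
# All data below is fabricated for illustration only.

def selection_rates(outcomes):
    """Return the favorable-outcome rate for each group."""
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group rate to the highest (four-fifths rule heuristic)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# 1 = favorable model decision (e.g., resume advanced), 0 = unfavorable
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375
}

ratio = disparate_impact_ratio(outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparity: ratio falls below the 0.8 (four-fifths) threshold")
```

A ratio well below 0.8 does not prove bias on its own, but it flags a disparity worth documenting and investigating, which is the spirit of the second objective above.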