Unboxing Biases in AI Resume Screening

Christina Huang | M.S. Computational Design Practices

Hypothesis, Research Question, or Provocation

Interactive visualizations of AI-powered hiring algorithms can help users identify biases in recruitment processes, and integrating skill-based data and algorithms into these systems can make hiring decisions more equitable and transparent.

Project Description

This project explores how AI-powered hiring algorithms can both improve recruitment efficiency and perpetuate bias. Through an interactive system map, the project visualizes the AI hiring process, highlighting the key points where biases, such as gender or racial disparities, may enter automated decisions. The visualization allows users to interactively explore these biases, deepening their understanding of how AI algorithms score and rank resumes. Additionally, the project proposes integrating skill-based data and algorithms into the AI model as a potential way to refocus hiring practices and increase fairness. The project emphasizes the need for careful oversight of AI hiring tools to ensure transparency, inclusivity, and equitable outcomes in real-world hiring decisions.
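The scoring-and-ranking step described above can be sketched in code. The following is a minimal, hypothetical illustration of a skill-based resume scorer of the kind the project proposes; all names and the skill list are invented for demonstration and do not represent any real hiring system.

```python
# Hypothetical sketch of skill-based resume scoring and ranking.
# REQUIRED_SKILLS and the candidate data are illustrative assumptions.

REQUIRED_SKILLS = {"python", "sql", "data visualization"}

def score_resume(resume_skills):
    """Score a resume by its overlap with the job's required skills (0.0-1.0)."""
    matched = REQUIRED_SKILLS & {s.lower() for s in resume_skills}
    return len(matched) / len(REQUIRED_SKILLS)

def rank_resumes(resumes):
    """Rank candidate resumes by skill score, highest first."""
    return sorted(resumes, key=lambda r: score_resume(r["skills"]), reverse=True)

candidates = [
    {"name": "A", "skills": ["Python", "SQL"]},
    {"name": "B", "skills": ["Python", "SQL", "Data Visualization"]},
]
print([c["name"] for c in rank_resumes(candidates)])  # ['B', 'A']
```

Because the score depends only on declared skills, not on demographic attributes or their proxies, a scorer of this shape is one concrete way to "refocus" ranking on qualifications, which is the intervention the project proposes to visualize.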

Computational Methods

The project combines two computational methods: (1) interactive data visualization, used to build the system map of the AI hiring pipeline, and (2) algorithmic modeling of resume scoring and ranking, used to demonstrate where biased outcomes can arise and how skill-based inputs can shift them.

Design Methods

This project adopts a critical and interactive design approach to investigate AI biases in recruitment systems: prototypes, wireframes, and interactive demos of the system map let users trace the hiring pipeline and examine where bias enters.

Precedents

The concept of AI-driven recruitment and its potential for bias has been explored in multiple studies, such as "Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval" by [Author], which examines how language models used in resume screening can perpetuate gender and racial biases.

Proof of Concept

The proof of concept for this project involves demonstrating an interactive visualization of the AI-powered hiring process. The system map will visually represent how resumes are processed by AI algorithms, highlighting the points where biases may occur. Users will be able to adjust variables like gender, race, and skill sets, observing how these changes influence AI decisions. Additionally, the integration of skill-based data will be showcased, illustrating how this shift can help reduce bias and improve fairness. Prototypes, wireframes, and interactive demos will be used to visualize these concepts, ensuring that users can experience firsthand the impact of these interventions on the hiring process.
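The core interaction above, toggling a candidate attribute and watching the model's score change, can be sketched as follows. The scorer below is deliberately biased for demonstration: it penalizes an employment gap, a proxy feature often correlated with gender. The function names, weights, and penalty are invented assumptions, not a description of any deployed system.

```python
# Hypothetical sketch of the proof-of-concept interaction: toggle a
# candidate variable and observe how a deliberately biased scorer reacts.
# All weights below are invented for illustration only.

def biased_score(candidate):
    """Toy scorer: skill match minus a biased penalty on a demographic proxy."""
    skill_score = len(candidate["skills"]) / 5           # counts up to 5 skills
    penalty = 0.2 if candidate["employment_gap"] else 0  # proxy penalty (the bias)
    return round(skill_score - penalty, 2)

candidate = {"skills": ["python", "sql", "ml"], "employment_gap": False}
print(biased_score(candidate))      # 0.6
candidate["employment_gap"] = True  # user toggles the variable in the demo
print(biased_score(candidate))      # 0.4 — same skills, lower score
```

The visualization would surface exactly this kind of before/after comparison: identical qualifications, different score, with the responsible feature highlighted on the system map.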

Audience

The primary audience for this project includes HR professionals, recruiters, AI researchers, and diversity and inclusion advocates. By providing an interactive experience, the project aims to raise awareness about biases in AI-driven hiring systems and offer actionable insights into how they can be mitigated. The project will also appeal to job seekers who want to better understand the AI hiring processes affecting them. To make the project more inclusive, it could be adapted for different audiences, such as HR managers in smaller organizations or educational institutions, by tailoring the visualizations and insights to their specific hiring practices and needs.

Data

The data used in this project will include publicly available resume datasets, as well as AI model inputs and outputs from research studies on biases in hiring algorithms. For example, the dataset used in the research paper "Gender, Race, and Intersectional Bias in Resume Screening" will be analyzed to observe how different demographic factors influence AI decision-making.
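One standard way to analyze how demographic factors influence screening outcomes is the adverse-impact ratio used in the "four-fifths rule" from U.S. employment-selection guidelines: the selection rate of one group divided by that of the highest-rated group, with values below 0.8 conventionally flagging disparity. The sketch below uses synthetic outcomes, not data from the cited study.

```python
# Hedged sketch: auditing demographic disparity in screening outcomes
# via the four-fifths rule. Outcome data here is synthetic (1 = advanced
# by the screener, 0 = rejected).

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [1, 1, 0, 1, 0]  # 60% advanced
group_b = [1, 0, 0, 0, 1]  # 40% advanced
ratio = adverse_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.67 — below the 0.8 threshold, flagging disparity
```

Applied to a real resume dataset with model outputs, the same ratio could drive the visualization's bias indicators, marking pipeline stages where one demographic group's pass-through rate falls disproportionately.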

Research

Next Steps

Incorporating Feedback from Critiques on December 10, 2024