The modern focus on Artificial Intelligence (AI) began decades ago, largely with a paper Alan Turing published in 1950. In that article, “Computing Machinery and Intelligence,” he posed the question, “Can machines think?” and proposed what became known as the Turing Test as a benchmark for machine intelligence.
Modern AI is often described as objective, data-driven, mathematical, and precise. Yet today’s AI systems have repeatedly been shown to carry social, racial, and gender biases: because they learn from human-generated data, they inherit human-made patterns such as unequal representation, historic discrimination, and stereotypes embedded in text, images, and records. When these systems are used for hiring, housing, healthcare, education, surveillance, or research, biased outputs can translate into real-world harm. A widely cited study, “Gender Shades,” found that commercial gender-classification tools had far higher error rates for darker-skinned women than for lighter-skinned men, revealing how “high accuracy” overall can still dramatically fail certain groups.
Against this reality, three women stand out for doing more than pointing out problems. Timnit Gebru, Rediet Abebe, and Ayanna Howard are helping reshape what progress in AI should look like by building tools, theories, and technologies that push AI toward transparency, equity, and trustworthiness.
Timnit Gebru: Turning AI Ethics into Standards and Accountability
Timnit Gebru is best known for turning ethical AI into concrete practice. One of her most influential contributions is “Datasheets for Datasets,” a proposal that every dataset be accompanied by standardized documentation explaining how the data was collected, what it contains, its intended uses, its limitations, and its potential risks. The idea is simple but powerful: if AI is trained on data, then documenting that data is a first step toward understanding and reducing harm.
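To make the idea tangible, a datasheet can be thought of as a small structured record attached to a dataset. The sketch below is illustrative only: the field names loosely follow the question categories in the “Datasheets for Datasets” proposal (motivation, composition, collection process, uses, limitations), while the class name and example values are hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch of a dataset datasheet, loosely following the
# question categories in "Datasheets for Datasets". The class name
# and the example values below are hypothetical.
@dataclass
class Datasheet:
    name: str
    motivation: str          # why the dataset was created
    composition: str         # what the instances contain and represent
    collection_process: str  # how and from where the data was gathered
    recommended_uses: list   # tasks the dataset is suited for
    limitations: list        # known gaps, skews, and risks

sheet = Datasheet(
    name="example-faces-v1",
    motivation="Benchmark facial analysis across demographic groups.",
    composition="Portrait images labeled with self-reported attributes.",
    collection_process="Volunteer submissions with documented consent.",
    recommended_uses=["fairness auditing", "model evaluation"],
    limitations=["underrepresents some skin tones", "adults only"],
)

# A reader (or an automated check) can inspect limitations before training.
assert len(sheet.limitations) > 0
```

The point is less the code than the discipline: whoever ships the data must answer these questions explicitly, so downstream users can see the risks before they build on it.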
Timnit also helped the public and research communities see bias as intersectional, meaning systems can fail in compounded ways at the overlap of race and gender. A 2018 research paper, “Gender Shades,” co-authored with Joy Buolamwini, revealed significant racial and gender biases in commercial facial-analysis technologies, pushing organizations, researchers, and journalists to ask better questions, such as “Who is the ‘average user,’ and who gets excluded when we optimize for that average?”
Timnit also founded the Distributed AI Research Institute (DAIR), an independent organization focused on community-rooted research and the real-world impacts of AI. In a field dominated by corporate labs, DAIR represents an alternative approach in pushing research agendas shaped by public interest, not just product priorities.
Rediet Abebe: Building the Mathematics of Algorithmic Justice
Where Timnit is often associated with accountability and documentation, Rediet Abebe is known for bringing rigorous theory to one of AI’s hardest questions: How do algorithms interact with inequality? Rediet’s work treats inequality not as a side issue but as a technical design constraint. If algorithms rank candidates, allocate resources, or match people to opportunities, then it matters, mathematically and ethically, who benefits and who is left behind.
Her research, frequently framed as “computing for social good,” aims to ensure that optimization, prediction, and automated decision systems do not simply scale existing inequities. Rediet has helped shape the conversation around what counts as “good” machine learning by centering outcomes such as education access, public health, and economic mobility rather than accuracy metrics alone. Her approach insists that AI researchers ask not only “Can we build it?” but also “What does it do to opportunity and fairness when deployed?”
Ayanna Howard: Human-Centered Robotics and the Dangers of “Overtrust”
Ayanna Howard brings AI into the physical world through robotics and human-centered autonomous systems where issues of bias and trust become tangible. When a system interacts with people directly, “mostly accurate” is not good enough. Errors can create unequal safety outcomes, exclude users with disabilities, or cause people to rely on technology beyond its capabilities.
A key theme in Ayanna’s work is “trust calibration”: designing systems so that human confidence matches what the technology can actually do. She emphasizes the danger of “overtrust,” in which people assume an AI system is more capable than it is and make decisions that increase risk. Ayanna has written and spoken about overtrust as a defining challenge of the robotics age, especially as AI tools become more embedded in daily life.
Together, Timnit, Rediet, and Ayanna represent three pillars of trustworthy AI: transparency (knowing what data and assumptions power systems), justice (ensuring algorithms do not widen inequality), and human-centered design (building systems people can use safely and inclusively). As AI moves from novelty to infrastructure, their work offers a better standard for progress: not just AI that is powerful, but AI that is fair, accountable, and worthy of public trust.

Courtesy, Karen Clay
