AI Fairness for People with Disabilities       




IBM Accessibility Research has been exploring the topic of fair treatment for people with disabilities in artificial intelligence (AI) systems. Much has been written about the potential for AI methods to encode, perpetuate and even amplify discrimination against marginalized groups, especially people of color and women. Might there be similar risks of bias against people with disabilities in AI systems, and if so, how can we change this?

Artificial intelligence (AI) is everywhere.  One very successful form of AI is machine learning (ML). Machine learning is being applied by many organizations to classification tasks that once needed human judgment. It is useful in helping people to handle large volumes of data by supporting decision making, finding interesting patterns, or interpreting human speech or behavior. ML models are already helping doctors to spot melanoma, recruiters to find promising candidates, and banks to decide who to extend a loan to.  They are used in product recommendations, targeted advertising, essay grading, employee promotion and retention, image labelling, video surveillance, self-driving cars and a host of other applications.

ML is also transforming the way we interact with machines, as they learn to translate our words to text, interpret our gestures, and recognize us and our emotions. Speech recognition has reached near-parity with human performance on some data sets, and AI methods are able to identify people and objects in photos, video or sensor data. Self-driving cars are already out there.

As AI and machine learning become pervasive, it is essential that ML models uphold society's moral and legal obligations to treat all people fairly, especially with respect to protected groups that have historically experienced discrimination. However, researchers have found gender, racial, and age discrimination in ML models. Where there has been historical bias, models may be learning from biased training data. In other situations, the training data under-represents a group of people, and the resulting model doesn't perform as well for them.

Biased human attitudes and wrong assumptions can lead to unfair treatment for people with disabilities in the world today. We believe that the introduction of machine learning offers a real opportunity to improve this situation, but this will only happen with conscious attention to fairness.

In September 2018, IBM Research released AI Fairness 360, an open-source toolkit for testing data, models and outcomes for bias against protected groups, along with techniques for improving fairness when data about group membership is available.
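To make this concrete, here is a minimal sketch of what such a check might look like with the AI Fairness 360 Python package. The hiring records, the column names, and the single binary 'disability' attribute are hypothetical illustrations, not data or methods from this project.

```python
# A minimal sketch: measuring group fairness in a (hypothetical) hiring dataset
# with AI Fairness 360. All data and column names are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical hiring decisions with a binary disability flag.
df = pd.DataFrame({
    'years_experience': [2, 5, 3, 7, 1, 4, 6, 2],
    'disability':       [1, 0, 1, 0, 1, 0, 0, 1],   # 1 = has a disability
    'hired':            [0, 1, 0, 1, 0, 1, 1, 1],   # favorable label = 1
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=['hired'],
    protected_attribute_names=['disability'],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{'disability': 0}],
    unprivileged_groups=[{'disability': 1}],
)

# Ratio of favorable-outcome rates (1.0 means parity; values well below 1.0 flag a disparity).
print('disparate impact:', metric.disparate_impact())
# Difference in favorable-outcome rates (0.0 means parity).
print('statistical parity difference:', metric.statistical_parity_difference())
```

In this toy example the disparate impact is well below 1.0, which would flag that people in the unprivileged group are receiving favorable outcomes at a much lower rate.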

In the AI Fairness for People with Disabilities project, we are exploring the use of these tools to reveal bias and improve fairness for people with disabilities. People with disabilities are not a homogeneous group and each individual may have unique characteristics, so even with diverse, unbiased training data, inequalities could still exist.  Machine learning models are optimized for good performance on typical cases, often at the expense of unusual or ‘outlier’ cases.
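As a purely illustrative sketch with synthetic data (not data from this project), the following scikit-learn example shows how a model with high overall accuracy can still perform poorly for a small group whose relationship between features and labels differs from the majority.

```python
# Illustrative sketch with synthetic data: high overall accuracy can coexist
# with poor accuracy for a small, atypical group of cases.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 950 "typical" samples follow one decision rule...
X_typical = rng.normal(0, 1, size=(950, 5))
y_typical = (X_typical[:, 0] > 0).astype(int)
# ...while 50 "outlier" samples follow a different one.
X_outlier = rng.normal(3, 1, size=(50, 5))
y_outlier = (X_outlier[:, 1] > 3).astype(int)

X = np.vstack([X_typical, X_outlier])
y = np.concatenate([y_typical, y_outlier])

model = LogisticRegression(max_iter=1000).fit(X, y)

print("overall accuracy:", accuracy_score(y, model.predict(X)))
print("typical accuracy:", accuracy_score(y_typical, model.predict(X_typical)))
print("outlier accuracy:", accuracy_score(y_outlier, model.predict(X_outlier)))
```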

The IBM Accessibility Research team is:

  1. Exploring key use cases and questions. In October 2018 we hosted a Workshop on AI Fairness for People with Disabilities, bringing together individuals with disabilities, advocates, AI experts and accessibility researchers. Read an overview of the workshop.
  2. Exploring the use of bias detection and mitigation techniques in this domain. Disability is not a simple variable with a small number of discrete values. It has many dimensions and people can have multiple disabilities. How best to apply statistical bias detection methods in this situation is an open research question; the sketch after this list illustrates part of the challenge.
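For example, a hypothetical audit of outcomes across several (non-exclusive) disability dimensions might look like the pandas sketch below; the column names and data are invented for illustration. The difficulty shows up immediately: as dimensions are added, each subgroup shrinks.

```python
# A minimal sketch (hypothetical data and column names) of why treating
# "disability" as one binary attribute is too coarse: auditing outcome rates
# across several disability dimensions quickly produces many small subgroups.
import pandas as pd

# Hypothetical screening outcomes with several non-exclusive disability flags.
df = pd.DataFrame({
    'vision':   [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
    'hearing':  [0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
    'mobility': [0, 0, 1, 1, 0, 0, 0, 0, 1, 0],
    'selected': [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],
})

overall_rate = df['selected'].mean()

# Selection rate and subgroup size for every combination of the three flags.
audit = (df.groupby(['vision', 'hearing', 'mobility'])['selected']
           .agg(selection_rate='mean', n='size')
           .reset_index())
audit['ratio_vs_overall'] = audit['selection_rate'] / overall_rate
print(audit)
# With more dimensions, many subgroups contain only a handful of people,
# so simple rate comparisons become statistically unreliable.
```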

The goal of this work is to contribute to the development of methods to prevent, identify and address disability discrimination in AI solutions, leading to solutions that are demonstrably fair and trustworthy for individuals of all abilities.

For more information, read our point-of-view paper, VentureBeat article, or this interview with Shari Trewin. See also Jutta Treviranus' post on Sidewalk Toronto and Why Smarter is not Better.