AI Fairness for People with Disabilities     


Overview


Artificial Intelligence (AI) is increasingly being used in decision-making that directly impacts people’s lives. Much has been written about the potential for AI methods to encode, perpetuate, and even amplify discrimination against marginalized groups in society. Like age, gender, and race, disability status is a protected characteristic. Disability status has many dimensions, varies in intensity and impact, and often changes over time; yet, today’s methods for bias testing tend to simply split individuals into members of a protected group and “others.” Disability information is also highly sensitive and not always shared, precisely because of the potential for discrimination; AI systems may not have explicit information about disability that can be used to apply established fairness tests and corrections. Finally, some disabilities have relatively low rates of occurrence; in current algorithmic processes, these individuals can appear as data outliers rather than part of a recognizable subgroup. These are just three examples of how the theory and practice of AI require scrutiny to ensure fair treatment of people with disabilities.

IBM Accessibility Research has been exploring the topic of fair treatment for people with disabilities in artificial intelligence (AI) systems. In October 2019, we sponsored a workshop at the ASSETS 2019 conference to build community across disciplines in the area of AI fairness, accountability, transparency, and ethics (FATE) for the specific situations of people with disabilities. We aim to develop new research directions, collaborations, and strategic action plans for increasing impact in research, industry, and policy. Workshop presentation abstracts are available online, and selected position papers are published in the SIGACCESS Newsletter October 2019 Issue.

We are also co-editing a special issue on AI Fairness and People with Disabilities in the ACM Transactions on Accessible Computing.

As AI and machine learning become pervasive, it is essential that ML models uphold society's moral and legal obligations to treat all people fairly, especially with respect to protected groups that have historically experienced discrimination. Biased human attitudes and wrong assumptions can lead to unfair treatment of people with disabilities in the world today. We believe that the introduction of machine learning offers a real opportunity to improve this situation. To this end, in September 2018, IBM Research released AI Fairness 360, an open-source toolkit for testing data, models, and outcomes for bias against protected groups, with techniques for improving fairness when data about group membership is available.
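As a rough illustration of the kind of group-based check such a toolkit supports, the sketch below uses AI Fairness 360 to test a toy dataset for unequal outcome rates across a protected attribute. The column names and the synthetic data are our own assumptions for illustration, not an example drawn from IBM's documentation.

```python
# Minimal sketch: testing a toy dataset for group-level bias with AI Fairness 360.
# The 'disability' and 'hired' columns and the data values are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'disability' is the protected attribute, 'hired' is the outcome.
df = pd.DataFrame({
    "disability":       [1, 1, 1, 0, 0, 0, 0, 0],
    "years_experience": [5, 3, 7, 4, 6, 2, 8, 5],
    "hired":            [0, 0, 1, 1, 1, 0, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["disability"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"disability": 1}],
    privileged_groups=[{"disability": 0}],
)

# Disparate impact is the ratio of favorable-outcome rates (1.0 = parity);
# values well below 1.0 suggest the unprivileged group is favored less often.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Note that a check like this presumes disability status is recorded in the data and that the group can be treated as a single category, which, as discussed above and below, is often not the case.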

However, people with disabilities are not a homogeneous group, and each individual may have unique characteristics, so even with diverse, unbiased training data, inequalities could still exist. Machine learning models are optimized for good performance on typical cases, often at the expense of unusual or ‘outlier’ cases. Fair outcomes will require conscious attention to fairness, and may need new or hybrid methods that better accommodate outlier individuals, as the sketch below illustrates.
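To make the outlier concern concrete, the following sketch shows how a model can report high aggregate accuracy while performing near chance on a small, atypical subgroup. The data, the subgroup mask, and the subgroup size are purely illustrative assumptions.

```python
# Minimal sketch: aggregate accuracy can hide poor performance on a small subgroup.
# All data here is synthetic and the 'is_outlier_group' mask is hypothetical.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulated test set: the model is correct on the majority, but effectively
# guesses at random for a small subgroup (3% of individuals).
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()
is_outlier_group = np.zeros(1000, dtype=bool)
is_outlier_group[:30] = True
y_pred[is_outlier_group] = rng.integers(0, 2, size=30)

print("Overall accuracy: ", accuracy_score(y_true, y_pred))
print("Subgroup accuracy:", accuracy_score(y_true[is_outlier_group],
                                           y_pred[is_outlier_group]))

# The overall figure stays high while the subgroup figure can be near chance,
# which is why fairness evaluation needs to disaggregate results by subgroup
# rather than rely on a single aggregate metric.
```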

For more information:

  1. Read an overview, position papers and abstracts from the ASSETS 2019 Workshop on AI Fairness for People with Disabilities
  2. Our workshop attendees produced a report outlining practical steps towards accommodating people with diverse abilities throughout the AI development lifecycle, published in the ACM SIGAI Newsletter - AI Matters.
  3. Key use cases and questions. In October 2018 we hosted a Workshop on AI Fairness for People with Disabilities, bringing together individuals with disabilities, advocates, AI experts and accessibility researchers. Read an overview of the workshop.
  4. Our Point-of-view paper discusses the attributes of people with disabilities as a protected group.
  5. VentureBeat article: How to tackle AI Bias for People with Disabilities (Dec 2018)
  6. Interview with Shari Trewin in the MIT Tech Review (Nov 2018) (Please know that Shari did not choose this depressing wheelchair photo!)

See also Jutta Treviranus' post on Sidewalk Toronto and Why Smarter is not Better.