Experts Offer Ideas to Ensure AI Is Disability-Inclusive


Written and published by Julia Edinger, April 9, 2025

A report from the Center for Democracy and Technology offers suggestions to help government build an inclusive artificial intelligence ecosystem and ensure its tools serve people equitably.

A human hand and a robot hand both reach out to touch a brain depicted in graphic form.

As the artificial intelligence ecosystem advances, there are steps government agencies can take to ensure it is disability-inclusive.

A March report, Building A Disability-Inclusive AI Ecosystem: A Cross-Disability, Cross-Systems Analysis Of Best Practices, examines exactly that. The report is co-authored by the Center for Democracy and Technology’s (CDT) Disability Rights in Technology Policy Project Lead Ariana Aboulafia and the American Association of People with Disabilities’ (AAPD) Technology Policy Consultant Henry Claypool.

This report follows previous collaborations between CDT and AAPD to inform inclusive AI development strategies, including a memorandum of understanding between the two organizations and the federal government that resulted in recommendations informed by members of the disability community.

It provides specific recommendations, not only for members of the disability community, but for government agencies, private-sector AI practitioners, and disability rights and justice advocates.

“These technologies are only going to become more prevalent, not less, and so their impacts on people with disabilities are only going to become more prevalent — not less,” Aboulafia said. The ethos of CDT and AAPD’s collaborative work, she underlined, focuses on the importance of centering people with disabilities in the creation, auditing, procurement and integration of these technologies.

As one recommendation example, the report suggests government agencies should issue guidance for employers addressing the use of AI-integrated employment tools, including their potential for violating the Americans with Disabilities Act, the need for employers to disclose to workers any automated decision-making tools being used, and employees’ rights to opt out of using them.

Notably, the report makes recommendations to federal agencies but specifically underlines that some of those may also be applicable to state and local agencies.

This report builds on a 2021 release, Centering Disability in Technology Policy, which identified areas in which AI technologies were being used. The new report goes further, to offer steps stakeholders can take to prevent AI from harming people who are disabled, said Claypool, who was also an author on the 2021 document.

Ensuring that data sets are disability-inclusive is another major piece of building effective AI systems. This year, the federal government has removed public access to certain data sets. However, Aboulafia argued that the issues with inclusive disability data collection are not a new phenomenon, and best practices outlined in another CDT report can help.

“If you really want this technology to work for this population, you need to understand the issues that we identify about data,” said Claypool. “And if you don’t address it, then you’re going to build technology that doesn’t really work for this population.”

Claypool also noted the market impact for building technologies that do not consider people with disabilities, who comprise more than a quarter of the population.

Some technologies may pose specific risks for people with disabilities, including biometric recognition technology and AI-powered chatbots. However, Aboulafia said the report is intentionally organized by the various systems into which AI technologies may be integrated, rather than by specific technology: “I don’t think it’s necessarily that any particular technology is the issue, but rather how these technologies — whether you’re talking about biometrics or AI tools or algorithmic systems — how these tools are being incorporated into various systems.”

The report addresses several systems in which AI incorporation may pose risks for this population: benefits determinations, education, employment, health care, information and communication technology, the criminal justice system, and transportation.

Organizations should conduct pre- and post-deployment audits when incorporating an AI or algorithmic system into a process or agency, Aboulafia said.

“That technology should be audited for disability inclusion, but also for potential bias on the basis of race and gender and other things,” she said. An April 3 report from the Pew Research Center found the majority of survey respondents — including AI experts — were highly concerned about bias in decisions made by AI.

Despite a federal executive order banning activities related to inclusion and accessibility, Claypool indicated accessibility work is “still as relevant and important as ever.”

Greater industry collaboration may occur as a result, Claypool said, noting his hope that developers will engage the disability population and its stakeholders to implement best practices in the technology development process. He said he also foresees the potential for state-level legislation to fill the gaps and safeguard this population through law.

Notably, the report acknowledges that AI can also positively impact people with disabilities; it aims to be a resource on both the possible risks and benefits, informing stakeholders so they can advance AI responsibly.

A disability-inclusive AI ecosystem includes many stakeholders, Aboulafia emphasized, not just those in the federal government. It also includes buy-in from state and local government agencies, advocates, and disability community members themselves: “We are all stakeholders in building out this system.”