Key takeaways
The most widespread documented failures involve facial recognition technology used by law enforcement and federal agencies.
Research conducted by MIT computer scientist Joy Buolamwini revealed that commercial facial recognition systems show dramatic disparities in accuracy based on race and gender.
"I literally had to put on a white mask to have my dark skin detected," Buolamwini said, describing her experience with facial recognition software that prompted her groundbreaking research at MIT.
Her 2018 study, which tested how accurately commercial systems classified gender from face photographs, found error rates as high as 35% for darker-skinned women, while lighter-skinned men were classified correctly more than 99% of the time.
The research analyzed systems from major tech companies that supply technology to government agencies, including Microsoft, IBM, and Amazon.
A 2019 test by the National Institute of Standards and Technology, a federal agency, confirmed these findings, concluding that the technology works best on middle-aged white men while showing significantly lower accuracy for people of color, women, children, and elderly people.
The study examined 189 algorithms and found that facial recognition technologies were substantially more error-prone on darker skin tones.
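The core method behind these audits is simple to illustrate. The sketch below is a hypothetical toy example, not code from either study: the data, subgroup labels, and sample sizes are invented, but it shows how computing error rates per demographic subgroup exposes disparities that a single overall accuracy number can hide.

```python
# Hypothetical sketch of a disaggregated accuracy audit, in the spirit of
# the studies described above. Records and labels are illustrative only.
from collections import defaultdict

# Each record: (subgroup label, ground truth, model prediction)
results = [
    ("darker-skinned women", "match", "no match"),
    ("darker-skinned women", "match", "match"),
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "no match", "no match"),
    # a real audit would use thousands of labeled examples per subgroup
]

errors = defaultdict(int)
totals = defaultdict(int)
for subgroup, truth, prediction in results:
    totals[subgroup] += 1
    if prediction != truth:
        errors[subgroup] += 1

# Reporting per-subgroup error rates, rather than one aggregate
# accuracy figure, is what surfaces the disparity.
for subgroup in totals:
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {rate:.1%} error rate ({totals[subgroup]} samples)")
```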
"Power shadows are cast when the biases or systemic exclusion of a society are reflected in the data," Buolamwini explained in her book "Unmasking AI."
These technical failures have real-world consequences.
Multiple Black men have been wrongfully arrested after facial recognition systems misidentified them as criminal suspects, including Robert Williams in Detroit, who was arrested in front of his daughters based on a flawed match.
In London, activist Shaun Thompson was wrongfully identified by live facial recognition technology as a criminal suspect in 2024 and subjected to what he described as an aggressive police stop.
An independent review of London's Metropolitan Police facial recognition trials found that of 42 matches, only eight could be verified as correct with absolute confidence, a hit rate of roughly one in five.
Healthcare AI denials spark federal litigation
Beyond law enforcement, incomplete training data is causing failures in healthcare AI systems.
UnitedHealthcare faces an ongoing federal class action lawsuit alleging the company used a flawed artificial intelligence algorithm to systematically deny elderly Medicare Advantage patients medically necessary care.
The lawsuit, filed by the Estate of Gene Lokken and other plaintiffs in Minnesota federal court, claims UnitedHealthcare deployed an AI tool called nH Predict that had a 90% error rate, meaning nine out of ten appealed denials were ultimately reversed.
"Despite the high error rate, Defendants continue to systemically deny claims using their flawed AI model because they know that only a tiny minority of policyholders (roughly 0.2%) will appeal denied claims," the lawsuit states.
In February 2025, Judge John Tunheim allowed key portions of the lawsuit to proceed, ruling that UnitedHealthcare must answer allegations that it breached its insurance contracts by using AI instead of clinical professionals to make coverage decisions.
The judge waived the normal requirement that plaintiffs exhaust all administrative appeals, calling the company's process "futile" and noting it could cause "irreparable injury."
The lawsuit describes how elderly patients were prematurely discharged from care facilities or forced to deplete family savings after the AI system overrode physicians' medical judgments.
One 74-year-old stroke patient was repeatedly denied coverage despite his doctor's insistence that he needed continued care, ultimately forcing his family to pay over $70,000 out of pocket.
Federal agencies acknowledge training data vulnerabilities
Government agencies are beginning to acknowledge the severity of training data problems. In July 2024, the Department of Commerce announced new guidance addressing vulnerabilities in AI systems.
"One of the vulnerabilities of an AI system is the model at its core," the Commerce Department stated in its announcement.
"By exposing a model to large amounts of training data, it learns to make decisions. But if adversaries poison the training data with inaccuracies, the model can make incorrect, potentially disastrous decisions."
The Commerce Department's guidance recommends analyzing training data for signs of poisoning, bias, homogeneity, and tampering.
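The guidance does not prescribe specific tooling. As a rough illustration of the kind of check it describes, the hypothetical sketch below flags a training set dominated by a single demographic group; the function name, field name, and threshold are assumptions made for this example, not part of the Commerce guidance.

```python
# Hypothetical illustration of a basic training-data homogeneity check.
# The field name and threshold are assumptions for this sketch; the
# Commerce guidance does not prescribe a specific implementation.
from collections import Counter

def audit_homogeneity(records, field="demographic", max_share=0.5):
    """Warn if any single value of `field` dominates the training set."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    for value, count in counts.most_common():
        share = count / total
        if share > max_share:
            print(f"WARNING: '{value}' makes up {share:.0%} of training data")
    return counts

training_data = [
    {"demographic": "lighter-skinned men"},
    {"demographic": "lighter-skinned men"},
    {"demographic": "lighter-skinned men"},
    {"demographic": "darker-skinned women"},
]
audit_homogeneity(training_data)
# Prints: WARNING: 'lighter-skinned men' makes up 75% of training data
```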
However, no federal law currently regulates the use of facial recognition technology, despite bipartisan concerns about constitutional violations.
Industry data reveals widespread AI project failures
The problems extend beyond specific applications to fundamental challenges in how AI systems are developed and deployed.
According to a 2025 survey by S&P Global Market Intelligence, 42% of companies had abandoned most of their AI initiatives, a dramatic spike from just 17% the year before.
A RAND Corporation analysis found that over 80% of AI projects fail, twice the failure rate of non-AI technology projects.
The research identified inadequate training data as a primary cause.
"Data quality and readiness" was cited as the top obstacle to AI success by 43% of respondents in Informatica's CDO Insights 2025 survey.
Industry experts now recommend dedicating 50-70% of AI project budgets and timelines to data preparation, extraction, and quality control.
Toju Duke, founder of the nonprofit Diverse AI and former manager of Google's Responsible AI program, said the facial recognition problems identified in 2019 persist today.
"If there's been any changes, it's not more than 5%, and it's not measurable," Duke told The Record in February 2024.