Racial Bias In Medical Algorithms

A recently released study found that a formula used by UnitedHealth subsidiary Optum to predict the health care needs of 200 million patients across the nation discriminates against black people by counting health care costs as an indicator of illness.

The algorithm is used to predict which patients will benefit from extra medical care, and it dramatically underestimates the health needs of the sickest black patients. The algorithm specifically excluded race and used a seemingly race-blind metric: how much patients would cost the health-care system in the future. But cost isn't a race-neutral measure of health-care need. Black patients incurred about $1,800 less in medical costs per year than white patients with the same number of chronic conditions; thus the algorithm scored white patients as equally at risk of future health problems as black patients who carried a much heavier burden of disease.
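The mechanism can be seen in a toy simulation (a hypothetical illustration, not the study's actual model or data): give two groups identical illness distributions, but let one group incur roughly $1,800 less in annual cost at the same illness level. A risk model trained on cost effectively ranks patients by predicted cost, so it flags fewer members of the lower-cost group, and only that group's sickest members make the cut.

```python
import random

random.seed(0)

def simulate_patient(group):
    # Both groups draw from the same illness distribution.
    conditions = random.randint(0, 8)
    # Cost tracks illness, but group "B" incurs about $1,800 less
    # per year at the same illness level (the gap the study reported).
    cost = 2000 * conditions + random.gauss(0, 500)
    if group == "B":
        cost -= 1800
    return {"group": group, "conditions": conditions, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# A cost-trained risk score amounts to ranking patients by expected
# cost; flag the top 10% as "high risk", mirroring how such tools
# select patients for extra care programs.
patients.sort(key=lambda p: p["cost"], reverse=True)
flagged = patients[: len(patients) // 10]

def mean_conditions(group):
    vals = [p["conditions"] for p in flagged if p["group"] == group]
    return sum(vals) / len(vals)

print("flagged A:", sum(p["group"] == "A" for p in flagged))
print("flagged B:", sum(p["group"] == "B" for p in flagged))
print("mean conditions, flagged A:", mean_conditions("A"))
print("mean conditions, flagged B:", mean_conditions("B"))
```

In this sketch, group B patients need to be sicker than group A patients to reach the same cost-based "risk" ranking, so fewer of them are flagged and the ones who are flagged carry more chronic conditions, which is exactly the skew the researchers measured.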

Machines and artificial intelligence increasingly make decisions that affect human life, and large corporations, especially those in health care, are working to use massive data sets to improve operations. The data may not appear racist or biased, but it is collected and coded by humans, and as such has been heavily influenced by longstanding social, cultural and institutional biases, as the use of health-care costs as a proxy shows.

Ziad Obermeyer, a UC Berkeley researcher and lead author of the study dissecting the Optum algorithm, says its results should trigger a broad reassessment of how such technology is used in health care. “The bias we identified is bigger than one algorithm or one company—it’s a systematic error in the way we as a health sector have been thinking about risk prediction,” he says. The algorithm’s skew sprang from the way it used health costs as a proxy for a person’s care requirements, making its predictions reflect economic inequality as much as health needs.

But the issues go further than health care. Similar formulas are used to determine which job candidates qualify for interviews and who qualifies for bank loans. All of these algorithms run the risk of automating racism or other human biases. An algorithm used to determine prison sentences was found to be racially biased, incorrectly predicting a higher recidivism risk for black defendants and a lower risk for white defendants. Facial recognition software has been shown to carry racial and gender-based bias, identifying gender reliably only for white men. Google's advertising algorithm has been found to show high-income job listings to men far more often than to women.

The software was an outgrowth of the Affordable Care Act, which created financial incentives for health systems to keep people well instead of waiting to treat them once they became sick. The idea was that it would be possible to simultaneously contain costs and keep people healthier by identifying the patients at greatest risk of becoming very sick and directing more resources to them. But because wealthy, white people use more health care services, such tools could also lead health systems to focus on them, missing an opportunity to help some of the sickest people. According to Obermeyer, "we shouldn't be blaming the algorithm, we should be blaming ourselves, because the algorithm is just learning from the data we give it."

Correcting the bias would more than double the number of black patients flagged as at risk of complicated medical needs within the health system the researchers studied, and the researchers are already working with Optum on a fix. When the company replicated the analysis on a national data set of 3.7 million patients, it found that black patients whom the algorithm ranked as being in equal need of extra care as white patients were much sicker: they collectively suffered from 48,772 additional chronic diseases.

Biases like these are unintentionally built into the software we use at many different stages, said Ruha Benjamin, author of Race After Technology and associate professor of African American studies at Princeton University. "Pre-existing social processes shape data collection, algorithm design and even the formulation of problems that need addressing by technology," she said. "The design of different kinds of systems, whether we're talking about legal systems or computer systems, can create and reinforce hierarchies precisely because the people who create them are not thinking about how social norms and structures shape their work. Indifference to social reality is, perhaps, more dangerous than outright bigotry."

January 16, 2020
