Actuarial Outpost
 
Go Back   Actuarial Outpost > Actuarial Discussion Forum > Health - Disability
  #1  
Old 10-28-2019, 10:49 AM
New York Actuary New York Actuary is offline
Member
SOA
 
Join Date: Jul 2012
College: Alumni
Posts: 588
Default Study finds racial bias in Optum algorithm

This is interesting:

https://www.healthcarefinancenews.com/node/139338
  #2  
Old 10-28-2019, 11:24 AM
campbell's Avatar
campbell campbell is offline
Mary Pat Campbell
SOA AAA
 
Join Date: Nov 2003
Location: NY
Studying for duolingo and coursera
Favorite beer: Murphy's Irish Stout
Posts: 90,920
Blog Entries: 6
Default

https://www.healthcarefinancenews.com/node/139338

For convenience:
Quote:
Study finds racial bias in Optum algorithm
Optum calls the study's conclusions "misleading" as health systems use many data elements other than cost to select patients for clinical engagement.
A new study finds racial bias in an algorithm from Optum that is widely used by health systems.

The algorithm helps hospitals identify high-risk patients, such as those who have chronic conditions, to help providers know who may need additional resources to manage their health.

The authors of the study, published in Science, the journal of the American Association for the Advancement of Science, said black patients assigned the same level of risk by the algorithm are actually sicker than white patients; the bias arises because the algorithm uses health costs as the measure of health needs.

The algorithm predicts healthcare costs, rather than illness, the study said.

Less money is spent caring for black patients than for white patients, the authors said.

The authors estimate that this racial bias reduces the number of black patients identified for extra care by more than half.

Hospitals and health insurers use the algorithm to identify high risk patients by looking at their medical histories and how much was spent treating them. The algorithm can then predict those individuals who will likely have high costs in the future.

Optum is the business services arm of UnitedHealth Group.

"We appreciate the researchers' work, including their validation that the cost model was highly predictive of cost," Optum said by statement. "However, the use of the model by the health system customer and the conclusions of the researchers are misleading. The cost model is just one of many data elements intended to be used to select patients for clinical engagement programs, including, most importantly, the doctor's expertise and knowledge of his or her patient's individual needs."

THE STUDY

Because less money is spent on black patients who have the same level of need as white patients, the algorithm falsely concludes this means black patients are healthier than equally sick white patients, the study said. At a given risk score, black patients are considerably sicker than white patients, as evidenced by signs of uncontrolled illnesses.

The literature broadly suggests two main potential channels for disparities in cost, the authors said. First, poor patients face substantial barriers to accessing healthcare, even when enrolled in insurance plans.

Although the population studied is entirely insured, there are many other mechanisms by which poverty can lead to disparities in the use of healthcare from geography and differential access to transportation, jobs or child care, or knowledge of reasons to seek care. To the extent that race and socioeconomic status are correlated, these factors will differentially affect black patients, the study said.

Secondly, race could affect costs directly via several channels such as direct discrimination and changes to the doctor–patient relationship. A recent trial randomly assigned black patients to a black or white primary care provider and found significantly higher uptake of recommended preventive care when the provider was black, the authors said.

THE IMPACT

Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs.

This widely used algorithm is typical of the industry-wide approach and affects millions of patients, the authors said. Reformulating the algorithm so that it no longer uses costs as a proxy for need would eliminate the racial bias in predicting who needs extra care.

Remedying this disparity would increase the percentage of black patients receiving additional help from 17.7% to 46.5%, the study said.

THE LARGER TREND

The authors said this is one of the largest and most typical examples of a class of commercial risk-prediction tools that, by industry estimates, are applied to roughly 200 million people in the United States each year.

Large health systems and payers rely on this algorithm to target patients for high-risk care management programs to improve the care of patients with complex health needs by providing additional resources.

These programs are a cornerstone of population health management efforts, and are widely considered effective at improving outcomes and satisfaction while reducing costs.

Health systems are increasingly relying on algorithms and analytics to improve health outcomes and to lower the cost of care.

The study's lead author is Ziad Obermeyer, acting associate professor of Health Policy and Management and a physician and researcher at Berkeley Public Health. Co-authors are Brian Powers, Christine Vogeli and Sendhil Mullainathan.

ON THE RECORD

"Thus, despite healthcare cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise," the study said. "We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts."

Optum said in an earlier statement: "Data and analytics help clinicians provide more effective patient care every day, and we are only scratching the surface of how they will help improve healthcare in the future. Predictive algorithms that power these tools should be continually reviewed and refined, and supplemented by information such as socio-economic data, to help clinicians make the best informed care decisions for each patient. As we advise our customers, these tools should never be viewed as a substitute for a doctor's expertise and knowledge of their patients' individual needs."



https://science.sciencemag.org/conte...64/447/tab-pdf
Quote:
Dissecting racial bias in an algorithm used to manage the health of populations
Ziad Obermeyer, Brian Powers, Christine Vogeli, Sendhil Mullainathan

Science 25 Oct 2019:
Vol. 366, Issue 6464, pp. 447-453
DOI: 10.1126/science.aax2342

Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7% to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
[full paper at the link]
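The mechanism in the abstract is easy to sketch. Below is a toy simulation (made-up numbers and group labels, not the study's data or Optum's model): illness is identically distributed in two groups, but group "B" incurs less cost at the same level of illness, so ranking patients by cost under-selects group B for extra help.

```python
# Toy illustration of the abstract's mechanism (synthetic numbers, not the
# study's data): equal illness, unequal spending, so a cost-based risk score
# under-selects the lower-spending group for the care program.
import random

random.seed(0)

def patient(group):
    illness = random.gauss(5, 2)                  # same distribution in both groups
    spending_rate = 1.0 if group == "A" else 0.7  # group B costs less when equally sick
    cost = illness * spending_rate + random.gauss(0, 0.5)
    return group, illness, cost

patients = [patient(g) for g in "AB" * 5000]

# The "algorithm": rank by cost (a perfect cost predictor) and flag the top 25%.
cutoff = sorted(p[2] for p in patients)[int(0.75 * len(patients))]
flagged = [p for p in patients if p[2] >= cutoff]

share_b = sum(p[0] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.2f}")  # far below 0.5
```

Re-ranking the same patients by `illness` instead of `cost` puts group B back at roughly half of the flagged pool, which is the paper's point: the fix is the choice of label, not the model.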
__________________
It's STUMP

LinkedIn Profile
  #3  
Old 10-28-2019, 11:27 AM
campbell's Avatar
campbell campbell is offline
Mary Pat Campbell
SOA AAA
 
Join Date: Nov 2003
Location: NY
Studying for duolingo and coursera
Favorite beer: Murphy's Irish Stout
Posts: 90,920
Blog Entries: 6
Default

oh, clickbait

https://www.governing.com/news/headl...m_medium=email
Quote:
Racist Data Decreases Health-Care Quality
An algorithm set up to analyze health-care needs in patients “was incorrectly steering some black patients away.” The error was rooted in faulty assumptions that black and white patients are treated equally.
(TNS) People may be biased, even without realizing it, but computer programs shouldn’t have any reason to discriminate against black patients when predicting their health care needs. Right?

Wrong, new research suggests.

Scientists studying a widely used algorithm typical of the kind health insurers use to make crucial care decisions for millions of people have discovered significant evidence of racial bias when it comes to predicting the health risks of black patients.

The findings, described Thursday in the journal Science, have far-reaching implications for the health and welfare of Americans as we become increasingly reliant on computers to turn raw data into useful information. The results also point to the root of the problem — and it isn’t the computer program.

“We shouldn’t be blaming the algorithm,” said study leader Dr. Ziad Obermeyer, a machine learning and health researcher at the University of California, Berkeley. “We should be blaming ourselves, because the algorithm is just learning from the data we give it.”

An algorithm is a set of instructions that describe how to perform a certain task. A recipe for brownies is an algorithm. So is the list of turns to make to drive to your friend’s party.

A computer algorithm is no different, except that it’s written in code instead of words. Today, they’re used to target online ads, recognize faces and find patterns in large-scale data sets — hopefully turning the world into a more efficient, comprehensible place.

But while algorithms have become more powerful and ubiquitous, evidence has mounted that they reflect and even amplify real-world biases and racism.

An algorithm used to determine prison sentences was found to be racially biased, incorrectly predicting a higher recidivism risk for black defendants and a lower risk for white defendants. Facial recognition software has been shown to have both race and gender bias, accurately identifying a person’s gender only among white men. Online advertisements that appear with Google search results have been found to show high-income jobs to men far more often than to women.

Obermeyer said it was almost by accident that he and his colleagues stumbled across the bias embedded in the health care algorithm they were studying.

The algorithm is used to identify patients with health conditions that are likely to lead to more serious complications and higher costs down the line. A large academic hospital had purchased it to help single out patients who were candidates for a care coordination program, which provides access to services such as expedited doctors’ appointments and a team of nurses who may make house calls or refill prescriptions.

“It’s kind of like a VIP program for people who really need extra help with their health,” Obermeyer said.

The goal is to take care of these patients before their condition worsens. Not only does that keep them healthier in the long run, it keeps costs down for the health care system.

These kinds of algorithms are often proprietary, “making it difficult for independent researchers to dissect them,” the study authors wrote. But in this case, the health system willingly provided it, along with data that would allow researchers to see whether the algorithm was accurately predicting the patients’ needs.

The researchers noticed something strange: Black patients who had been assigned the same high-risk score as white patients were far more likely to see their health deteriorate over the following year.

“At a given level of risk as seen by the algorithm, black patients ended up getting much sicker than white patients,” Obermeyer said.

This didn’t make sense, he said, so the scientists homed in on the discrepancy. They analyzed the health data from 6,079 black patients and 43,539 white patients and realized that the algorithm was doing exactly what it had been asked to do.

The problem was that the people who designed it had asked it to do the wrong thing.

The system evaluated patients based on the health costs they incurred, assuming that if their costs were high, it was because their needs were high. But the assumption that high costs were an indicator of high need turned out to be wrong, Obermeyer said, because black patients typically get fewer health care services than white patients, even when they’re equally unwell.

That meant the algorithm was incorrectly steering some black patients away from the care coordination program.

Remedying that racial disparity could cause the percentage of black patients enrolled in the specialized care program to jump from 17.7% to 46.5%, the scientists realized.

Having identified the problem — a faulty human assumption — the scientists set about fixing it. They developed one alternative model that zeroed in on “avoidable costs,” such as emergency visits and hospitalizations. Another model focused on health, as measured by the number of flare-ups of chronic conditions over the year.

The researchers shared their discovery with the manufacturer of the algorithm, which then analyzed its national dataset of nearly 3.7 million commercially insured patients, confirming the results. Together, they experimented with a model that combined health prediction with cost prediction, ultimately reducing the bias by 84%.

Dr. Karen Joynt Maddox, a cardiologist and health policy researcher at Washington University in St. Louis, praised the work as “a thoughtful way to look at this really important emerging problem.”

“We’re increasingly putting a lot of trust on these algorithms and these black-box prediction models to tell us what to do, how to behave, how to treat patients, how to target interventions,” said Joynt Maddox, who was not involved in the study. “It’s unsettling, in a way, to think about whether or not these models that we just take for granted and are using are systematically disadvantaging particular groups.”

The fault in this case was not with the algorithm itself, but with the assumptions made while designing it, she was quick to add.

Obermeyer said they chose not to single out the company that made the algorithm or the health system that used it. He said they hoped to emphasize the role of an entire group of risk-prediction algorithms that, by industry estimates, are used to evaluate roughly 200 million people a year.

Some people have reacted to discoveries of algorithmic bias by suggesting the algorithms be scrapped altogether — but the algorithms aren’t the problem, said Sendhil Mullainathan, a computational behavioral scientist at the University of Chicago and the study’s senior author.

In fact, when properly studied and addressed, they can be part of the solution.

“They reflect the biases in the data that are our biases,” Mullainathan said. “Now if you can figure out how to fix it … the potential that it has to de-bias us is really strong.”

A better algorithm may help to diagnose and treat the effects of racial disparities in care, but it cannot “cure” the disparity at the root of the problem: the fact that fewer dollars are spent on care of black patients, on average, than on white patients, he acknowledged.

These cost disparities likely arise for a number of reasons, the study authors said. Race and discrimination may play a role, and even when they have insurance, poorer patients face “substantial barriers to accessing health care.”

“To the extent that race and socioeconomic status are correlated, these factors will differentially affect black patients,” the researchers wrote.

Fixing those real-world sources of disparity presents a deeper and far more complicated challenge.

Ultimately, Obermeyer said, “it’s a lot easier to fix bias in algorithms than in humans.”


  #4  
Old 10-28-2019, 11:28 AM
WhosOnFirst WhosOnFirst is offline
Member
Non-Actuary
 
Join Date: Mar 2015
Posts: 212
Default

Thanks for the article. This has been a personal concern of mine for a while now. Bias should be expected in any sort of AI model that you develop. That bias may be acceptable in a predictive model designed to sell you more books or recommend a song; the stakes are much higher in healthcare. Because these predictive models are probabilistic, they carry an inherent error rate, and that error can mean the difference between life and death in some cases, so it must be respected. Too often, when these models are implemented, no safeguards are put in place to prevent adverse decisions, nor is any analysis done to determine what the adverse effects might be.

I'll get off my soapbox now.
  #5  
Old 10-28-2019, 12:11 PM
DyalDragon's Avatar
DyalDragon DyalDragon is offline
Member
SOA
 
Join Date: Apr 2009
Location: Here
Studying for the hell of it...
College: AASU
Favorite beer: This one...
Posts: 33,211
Default

Breaking news: Predictive model used by insurance companies to aid in cutting costs focuses on higher cost patients.


Coming up tomorrow: Items dropped into a bucket of water tend to come out dampened.
__________________
P FM MFE MLC C Predictive Analytics
VEE FAP FAP FAP FAP FAP FAP FAP FAP APC
  #6  
Old 10-28-2019, 12:15 PM
DyalDragon's Avatar
DyalDragon DyalDragon is offline
Member
SOA
 
Join Date: Apr 2009
Location: Here
Studying for the hell of it...
College: AASU
Favorite beer: This one...
Posts: 33,211
Default

I'm curious what would be the best and/or easiest correction for this. Factors to increase costs for groups to bring those with similar chronic conditions in line with the higher cost groups?

Separate trigger thresholds for different groups?

Seems like the most straightforward way to work around the cost issue is to treat the groups differently, but wouldn't that raise its own problems?
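For what it's worth, the "separate trigger thresholds" idea can be sketched mechanically. The scores below are made up; the point is only that each group gets ranked against its own distribution:

```python
# Sketch of "separate trigger thresholds": flag each group's own top quartile,
# so a group whose cost-based scores run lower is not crowded out.
# Scores below are made up for illustration.
from collections import defaultdict

def per_group_cutoffs(scored, top_frac=0.25):
    """scored: iterable of (group, score) -> {group: cutoff for its own top_frac}."""
    by_group = defaultdict(list)
    for group, score in scored:
        by_group[group].append(score)
    return {g: sorted(s)[int((1 - top_frac) * len(s))] for g, s in by_group.items()}

scored = [("A", s) for s in range(100)] + [("B", s / 2) for s in range(100)]
print(per_group_cutoffs(scored))  # {'A': 75, 'B': 37.5}
```

As the post suspects, though, this amounts to using group membership directly in the decision rule, which raises its own legal and fairness questions; the study's authors instead changed the training label from cost to measures of health.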
  #7  
Old 10-28-2019, 12:26 PM
The_Polymath The_Polymath is offline
Member
CAS SOA
 
Join Date: Jun 2016
Posts: 3,202
Default

Quote:
Originally Posted by WhosOnFirst View Post
Thanks for the article. This has been a personal concern of mine for a while now. Bias should be expected in any sort of AI model that you develop. That bias may be acceptable in a predictive model designed to sell you more books or recommend a song; the stakes are much higher in healthcare. Because these predictive models are probabilistic, they carry an inherent error rate, and that error can mean the difference between life and death in some cases, so it must be respected. Too often, when these models are implemented, no safeguards are put in place to prevent adverse decisions, nor is any analysis done to determine what the adverse effects might be.

I'll get off my soapbox now.
Have to adopt more sophisticated learning models, such as advanced neural networks.
  #8  
Old 10-28-2019, 01:19 PM
campbell's Avatar
campbell campbell is offline
Mary Pat Campbell
SOA AAA
 
Join Date: Nov 2003
Location: NY
Studying for duolingo and coursera
Favorite beer: Murphy's Irish Stout
Posts: 90,920
Blog Entries: 6
Default

It's not necessarily the structure of the model, but which variables you decide to concentrate on.

They used cost of care as a proxy for severity of condition -- that's the output they were aiming for. Using a neural network (no matter how complicated a structure) with a focus on cost would end up with the same result. When there's already a racial disparity in cost.... guess what?

Or, as DD mentions, given the insurance company is on the hook for the cost, they concentrated on cost. They -said- they were using it as a proxy for severity of condition.
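To make that concrete with a stylized example (the 30% spending gap is an assumption for illustration, not a figure from the study): the best possible cost model of any architecture returns expected cost itself, and it still scores the lower-spending group as lower risk at identical illness.

```python
# The label is the problem: even the *ideal* cost predictor (which no neural
# network trained on cost can systematically beat) reproduces the disparity.
# The 30% spending gap is a stylized assumption, not a figure from the study.
def ideal_cost_score(illness, group):
    spending_rate = 1.0 if group == "A" else 0.7
    return illness * spending_rate

for illness in (4.0, 8.0):
    a, b = ideal_cost_score(illness, "A"), ideal_cost_score(illness, "B")
    assert b < a  # equally sick, yet a lower "risk" score for group B
```

Swapping in a deeper model only approximates this function more closely; the disparity is inherited from the target.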
  #9  
Old 10-28-2019, 02:09 PM
New York Actuary New York Actuary is offline
Member
SOA
 
Join Date: Jul 2012
College: Alumni
Posts: 588
Default

Quote:
Originally Posted by campbell View Post
It's not necessarily the structure of the model, but which variables you decide to concentrate on.

They used cost of care as a proxy for severity of condition -- that's the output they were aiming for. Using a neural network (no matter how complicated a structure) with a focus on cost would end up with the same result. When there's already a racial disparity in cost.... guess what?

Or, as DD mentions, given the insurance company is on the hook for the cost, they concentrated on cost. They -said- they were using it as a proxy for severity of condition.
That's along the lines of what I was thinking. That is, the model is doing exactly what it is being asked to do, and the actual bias was introduced earlier at the provider level.
  #10  
Old 10-29-2019, 11:26 AM
WhosOnFirst WhosOnFirst is offline
Member
Non-Actuary
 
Join Date: Mar 2015
Posts: 212
Default

Quote:
Originally Posted by campbell View Post
It's not necessarily the structure of the model, but which variables you decide to concentrate on.

They used cost of care as a proxy for severity of condition -- that's the output they were aiming for. Using a neural network (no matter how complicated a structure) with a focus on cost would end up with the same result. When there's already a racial disparity in cost.... guess what?

Or, as DD mentions, given the insurance company is on the hook for the cost, they concentrated on cost. They -said- they were using it as a proxy for severity of condition.
One clarification on the insurance angle. I believe this is an algorithm used by hospitals to triage patients: while Optum is owned by UnitedHealth Group, the algorithm itself is deployed by hospitals, which end up giving preference to (presumably) higher-paying patients. Still, this shouldn't be a surprising outcome, and it is financially beneficial to the hospital.

I also agree that a more sophisticated model would have produced the same outcome. None of the machine learning techniques I've learned are able to magically remove bias and distortion. The patterns found in the data are sometimes (often?) due to human bias and not some part of natural law that hadn't been discovered yet. Hence, I think these models need to be closely examined and not automatically assumed to be good even if the model generates a desired outcome.
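One concrete form that examination can take, loosely following the study's own method: bin patients by model score and compare average realized illness by group within each bin. Matched scores should mean matched sickness; if one group is consistently sicker at the same score, the score is biased against it. The data below is synthetic, constructed to contain the study's pattern.

```python
# Audit sketch (synthetic data, not the study's): within each risk-score band,
# compare mean realized illness by group. A consistent gap at matched scores
# is exactly the bias the study reports.
import random

random.seed(1)

def audit_by_band(records, n_bands=4):
    """records: (group, score, illness) -> {band: {group: mean illness}}."""
    scores = sorted(r[1] for r in records)
    cuts = [scores[len(scores) * i // n_bands] for i in range(1, n_bands)]
    bands = {}
    for group, score, illness in records:
        band = sum(score >= c for c in cuts)  # which band this score falls in
        bands.setdefault(band, {}).setdefault(group, []).append(illness)
    return {b: {g: sum(v) / len(v) for g, v in gs.items()}
            for b, gs in sorted(bands.items())}

# Synthetic version of the study's pattern: score tracks cost, and group B's
# cost understates its illness.
records = []
for _ in range(4000):
    group = random.choice("AB")
    illness = random.gauss(5, 2)
    score = illness * (1.0 if group == "A" else 0.7) + random.gauss(0, 0.3)
    records.append((group, score, illness))

for band, means in audit_by_band(records).items():
    print(band, {g: round(m, 2) for g, m in means.items()})
# group B's mean illness exceeds group A's in every band
```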