#11 · 10-29-2019, 02:08 PM
New York Actuary (SOA)

Quote:
Originally Posted by WhosOnFirst
One clarification on the insurance angle. I believe this is an algorithm used by hospitals to triage patients. While Optum is owned by United Healthcare, this algorithm is used by hospitals. They are giving preference to higher paying (presumably) patients. Still, this shouldn't be a surprising outcome and is beneficial to the hospital financially.

I also agree that a more sophisticated model would have produced the same outcome. None of the machine learning techniques I've learned are able to magically remove bias and distortion. The patterns that are found in the data are sometimes (often?) due to human bias and not some part of natural law that hadn't been discovered yet. Hence why I think these models need to be closely examined and not automatically assumed to be good even if the model generates a desired outcome.
It may be as simple as physicians being more likely to refer patients with insurance, particularly those with richer benefits, to other providers within their provider group. This would certainly result in equally sick individuals receiving varying numbers of diagnosis codes.

It would be interesting to see the actual study. I am curious as to why they singled out "black" and "white".
#12 · 10-29-2019, 06:37 PM
Pamela Wells (SOA AAA)

Quote:
Originally Posted by DyalDragon
I'm curious what would be the best and/or easiest correction for this. Factors to increase costs for groups to bring those with similar chronic conditions in line with the higher cost groups?

Separate trigger thresholds for different groups?

Seems like the most straightforward way to work around the cost issue is to treat the groups differently, but wouldn't that raise its own problems?
At a guess, I would think they put more emphasis on diagnosis and procedure codes than on the cost of care previously received. It sounds like they're using multiple socioeconomic factors as well, so they may also have tweaked the weightings among them, or may even have added race as a factor.
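
Mechanically, the relabeling piece of that is simple: keep the features, swap the target from predicted cost to a direct health measure. A minimal sketch in Python -- every file and column name here is hypothetical, not taken from the study or from Optum:

Code:
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical member-level extract; all feature columns assumed numeric.
df = pd.read_csv("member_history.csv")
features = ["age", "prior_year_cost", "dx_count", "rx_count"]

# Original-style target: next-year cost (the proxy the study faulted).
cost_model = GradientBoostingRegressor().fit(df[features], df["next_year_cost"])

# Relabeled target: active chronic conditions, a more direct measure of
# health need that is less distorted by unequal access to care.
need_model = GradientBoostingRegressor().fit(
    df[features], df["active_chronic_conditions"])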
#13 · 10-29-2019, 06:43 PM
Pamela Wells (SOA AAA)

Quote:
Originally Posted by New York Actuary
I am curious as to why they singled out "black" and "white".
There's a known treatment bias with respect to race. Black people are less likely to be prescribed pain management medications, and are likely to be given fewer physical therapy sessions. It's quite common for practitioners to assume that they are healthier than they actually are. There are also several treatment discrepancies with respect to gender. In particular, providers are less likely to order further investigation when a woman has a non-specific complaint, and women are more likely to get a diagnosis of "idiopathic" when the cause isn't readily apparent.

Those are reflections of human social biases. In general, black people are assumed to be stronger (both physically and emotionally) and more mature. Women are generally assumed to be more sensitive and emotional, and more likely to exaggerate their experiences and conditions.
#14 · 10-30-2019, 08:27 AM
campbell (Mary Pat Campbell, SOA AAA)

https://www.businessinsider.com/an-a...edsXhOhw3_yvIA

Quote:
New York is investigating UnitedHealth's use of a medical algorithm that steered black patients away from getting higher-quality care
New York State is investigating UnitedHealth Group over its use of an algorithm that researchers found to be racially biased.
A study in the journal Science found that a widely used algorithm prioritized healthier white patients over sicker black patients for complex care.
The study's findings point to one of the many risks of implementing more AI in healthcare. Business Insider Intelligence projects that spending on healthcare AI will grow at an annualized 48% between 2017 and 2023.
Algorithms also strip doctors and nurses of the autonomy to diagnose and treat patients individually.



UnitedHealth Group used technology that may have kept sick black patients from receiving high-quality care.


New York's state departments of financial services and health sent a letter to UnitedHealth Group over its use of an algorithm that researchers found to be racially biased. Per the Wall Street Journal, the letter is an initial step in a larger investigation.

The algorithm in question, Impact Pro, which identifies which patients would benefit from complex health procedures, favored treating healthier white patients over sicker black ones between 2013 and 2015, according to a study published in the journal Science.

New York regulators deemed the use of this discriminatory technology "unlawful," and asked UnitedHealth either to demonstrate that the algorithm is not biased or to stop using Impact Pro immediately.

"New York will not allow racial bias, especially where it results in discriminatory effects that could mean the difference between life and death for an individual patient and the overall health of an already-underserved community," Linda Lacewell, superintendent of New York's department of financial services, and Howard Zucker, commissioner of the department of health, wrote in the letter.

Why Impact Pro's algorithm may have discriminated against black patients
The algorithm predicted that black patients would cost less, which signaled to medical providers that their illnesses must not be that bad. In reality, black patients cost less because, on average, they use healthcare services less than white patients do.

The study stated that black patients don't seek out healthcare due to a lack of access and a general mistrust of the system. These barriers to access, in turn, indirectly drive down the projected "cost" of illness in black patients.

Health systems use this algorithm on 200 million people each year across the US, the report states. If the racial bias were eliminated, the study predicts, the share of black patients among those flagged for additional help would increase from 17.7% to 46.5%.

Optum, UnitedHealth's multibillion-dollar business that sells Impact Pro, previously touted its use of AI to provide better care.


Business Insider reached out to UnitedHealth Group for comment.

The trouble with algorithmic healthcare
AI and algorithms are on the rise in the health industry. Business Insider Intelligence projects that spending on healthcare AI will grow at an annualized 48% between 2017 and 2023.

Yet experts and researchers have long called out the bias algorithms can perpetuate. Amazon built a hiring tool that discriminated against women. Tweets from black people were more likely to be dubbed "toxic" by a Google-funded AI tool. Facial recognition tools used by the US government have been shown to misidentify black people much more often than white people.

An algorithm is also at risk of reflecting structural inequality, as with zip codes. In the 1930s, the US explicitly segregated African-American neighborhoods from white ones through policies known as "redlining." Today, those policies have a lasting impact on the racial makeup of zip codes — which means using zip codes in algorithms perpetuates racial inequality.


Algorithms, along with other technology like difficult-to-use electronic medical record systems, are a factor contributing to doctor and nurse burnout.

Gerard Brogan, a registered nurse and the director of nursing practice at National Nurses United and its California branch, says algorithms take autonomy away from clinicians. Algorithms derive treatment from average patient outcomes, while most nurses and doctors prefer to provide treatment tailored to each individual.

"Traditionally, both nurses and doctors are independent professionals, but because it's now an industry, we're looking at care where algorithms are dictating care rather than professional judgment," Brogan said. "Bill Gates a few years ago said in 15 years time there will be nurses, there will be no doctors, because no one can out-think a computer," Brogan said. "Algorithms may beat people at chess, but they don't hold peoples' hands."
#15 · 10-30-2019, 08:55 AM
campbell (Mary Pat Campbell, SOA AAA)

https://www.wsj.com/articles/new-yor...as-11572087601
Quote:
New York Regulator Probes UnitedHealth Algorithm for Racial Bias
Financial Services Department is investigating whether algorithm violates state antidiscrimination law

New York's insurance regulator said it is launching an investigation into a UnitedHealth Group Inc. algorithm that a study found prioritized care for healthier white patients over sicker black patients.

The state's Department of Financial Services said it sent a letter Friday to UnitedHealth Chief Executive David Wichmann asking for the company to either show the algorithm isn't racially discriminatory or stop using it. The New York Department of Health joined in sending the letter.

A UnitedHealth spokesman didn't comment about the letter or a New York state investigation, but said the algorithm was only one part of a far broader tool it offers to help analyze patients' health needs.

The letter is the initial step in the Financial Services Department's investigation, and the agency is seeking an explanation "of what appears to be a discriminatory business practice," New York Superintendent of Financial Services Linda Lacewell said.

New York state law prohibits insurers from relying on, producing or promoting a discriminatory algorithm, the letter said.

The company could face enforcement action if the investigation finds violation of New York antidiscrimination laws, a person familiar with the matter said. Under state law, each violation can trigger a fine of up to $1,000.

UnitedHealth's insurance unit, UnitedHealthcare, is the biggest U.S. health insurer by revenue.

The probe follows a study, published in the journal Science on Thursday, that found that an algorithm sold by UnitedHealth's Optum unit ranked white patients with fewer chronic diseases and healthier vital signs the same as sicker black patients.

The reason, the study's authors said, was that the algorithm used past medical use and spending to predict patients' future health-care costs, then used cost to rank their medical needs. But spending for black patients was less than that for similar white patients, giving healthier white patients an edge.

Dozens of hospitals and insurers in the U.S. use the Optum tool to identify diabetes and other chronic-disease patients who should receive extra assistance such as home-care visits, help managing medicines and support coordinating doctor appointments.

"This compounds the already unacceptable racial biases that black patients experience, and reliance on such algorithms appears to effectively codify racial discrimination as health providers' and insurers' policy," the New York agencies' letter said.

The UnitedHealth spokesman said the study looked at a part of the company's Impact Pro tool that predicts the costs patients could incur. The study, the spokesman said, validated "that the cost model within Impact Pro was highly predictive of cost, which is what it was designed to do."

However, the spokesman said, the Impact Pro tool also looks at more than 600 clinical measures to identify gaps in patients' care, including gaps caused by socioeconomic factors, which doctors and health systems can use to "ensure people, especially in underserved populations, get effective, individualized care."


#16 · 10-30-2019, 10:22 AM
New York Actuary (SOA)

Quote:
Originally Posted by Pamela Wells
There's a known treatment bias with respect to race. Black people are less likely to be prescribed pain management medications, and are likely to be given fewer physical therapy sessions. It's quite common for practitioners to assume that they are healthier than they actually are. There are also several treatment discrepancies with respect to gender. In particular, providers are less likely to order further investigation when a woman has a non-specific complaint, and women are more likely to get a diagnosis of "idiopathic" when the cause isn't readily apparent.

Those are reflections of human social biases. In general, black people are assumed to be stronger (both physically and emotionally) and more mature. Women are generally assumed to be more sensitive and emotional, and more likely to exaggerate their experiences and conditions.
This sounds a lot more like provider bias than model bias.
#17 · 10-30-2019, 11:00 AM
campbell (Mary Pat Campbell, SOA AAA)

Yes, but these treatment biases were known before this specific model was built.

I've pointed out at plenty of these "RAH! RAH! Machine Learning!" talks -- insurance is a highly regulated industry, and there are all sorts of correlations with illegal underwriting variables. Some can be legit -- but some really just reflect illegally biased behavior. You can't do much about that (in a class-action way) when it's individuals making the choices, but codify it in an all-encompassing model? Nice juicy target for regulators (not to mention tort lawyers).

"But the model fits the data!" is not going to be an adequate support for its use.

People chose the data, chose the variables, and built the model. People, as in specific individuals, are responsible. Not the model.
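
One concrete audit follows from that: even when the protected class is never an input, check whether the model's scores differ by class, and whether the remaining features can reconstruct the class. A minimal sketch -- the file and column names are hypothetical, and the protected attribute is assumed binary for the AUC check:

Code:
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical audit extract: model features, model output, and a protected
# attribute collected for testing only (never used in scoring).
df = pd.read_csv("audit_extract.csv")
features = ["prior_cost", "rx_count", "er_visits", "zip_median_income"]

# 1. Outcome check: score distribution by protected class.
print(df.groupby("protected_class")["risk_score"].describe())

# 2. Proxy check: if the features predict the protected class well above
#    chance (AUC >> 0.5), the model can discriminate without ever seeing it.
auc = cross_val_score(LogisticRegression(max_iter=1000),
                      df[features], df["protected_class"],
                      cv=5, scoring="roc_auc").mean()
print(f"AUC for predicting protected class from model features: {auc:.2f}")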
#18 · 10-30-2019, 11:51 AM
WhosOnFirst (Non-Actuary)

Quote:
Originally Posted by campbell
Yes, but these treatment biases were known before this specific model was built.

I've pointed out at plenty of these "RAH! RAH! Machine Learning!" talks -- insurance is a highly regulated industry, and there are all sorts of correlations with illegal underwriting variables. Some can be legit -- but some really just reflect illegally biased behavior. You can't do much about that (in a class-action way) when it's individuals making the choices, but codify it in an all-encompassing model? Nice juicy target for regulators (not to mention tort lawyers).

"But the model fits the data!" is not going to be an adequate support for its use.

People chose the data, chose the variables, and built the model. People, as in specific individuals, are responsible. Not the model.


I don't think this can be emphasized enough. If something is found to be discriminatory, it doesn't matter whether the cause was unintentional. United is in a world of hurt here, I think, because they trusted the machine and didn't look for adverse impact. In their defense, the data used to train the model may not have included race, because race isn't captured in claims data. However, if they were using EMR data of some sort, they have no excuse. It would have been one of the first things I checked when building the model.

AI is neither artificial nor intelligent. It replicates human bias in any system where the bias exists.
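
A minimal version of that adverse-impact check is just a selection-rate comparison against the EEOC's four-fifths rule of thumb; all counts below are invented for illustration:

Code:
# Compare group-level referral rates; flag if the ratio falls below 0.8.
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selected_a / total_a, selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = adverse_impact_ratio(selected_a=820, total_a=10_000,   # group A: 8.2%
                             selected_b=430, total_b=10_000)   # group B: 4.3%
print(f"adverse impact ratio: {ratio:.2f}",
      "FLAG for review" if ratio < 0.8 else "ok")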
#19 · 10-30-2019, 01:14 PM
DyalDragon (SOA)

Quote:
Originally Posted by campbell
Algorithms ... don't hold people's hands
Not really following the context of the article, but the quote at the end is moronic. No shit, Sherlock. You don't need a medical degree and a massive salary to hold someone's hand, either. Way to justify the existence of your profession, buddy.
#20 · 10-30-2019, 01:58 PM
New York Actuary (SOA)

Quote:
Originally Posted by campbell
Yes, but these treatment biases were known before this specific model was built.

I've pointed out at plenty of these "RAH! RAH! Machine Learning!" talks -- insurance is a highly regulated industry, and there are all sorts of correlations with illegal underwriting variables. Some can be legit -- but some really just reflect illegally biased behavior. You can't do much about that (in a class-action way) when it's individuals making the choices, but codify it in an all-encompassing model? Nice juicy target for regulators (not to mention tort lawyers).

"But the model fits the data!" is not going to be an adequate support for its use.

People chose the data, chose the variables, and built the model. People, as in specific individuals, are responsible. Not the model.
Good point, but what about the providers? It sounds like they are responsible for a significant portion of the bias, if not most of it.