Actuarial Outpost

#41  11-01-2019, 08:43 PM
Pamela Wells (Member, SOA AAA)
Join Date: May 2019 | Location: Phoenix, Arizona | Posts: 105

Quote:
Originally Posted by The Obese Dog
When I say the target variable, I mean the literal thing they put into the model and said "predict this", not what someone had in their head as what they were trying to actually achieve. I doubt that was "need". At least I can't think of any way to represent that as a numerical vector. Whether or not it was allowed cost, it was almost undoubtedly a proxy. We never observe and don't know "need" - either before or after the fact. We only have better and worse proxies.

I'll grant that removing the connection to allowed cost may make for a less biased model - I don't think I've contested that point. What I am protesting is the idea that doing so means the model will no longer have bias. It is easy to say models shouldn't have biases, but in reality it is very hard to keep them from creeping in, possibly intractable. If you collect race data and modify the model to achieve parity on race, you might achieve parity across a visible, measurable dimension but still not true parity. That doesn't mean we shouldn't try, but it is a point worth appreciating, and we shouldn't feel that our models are beyond such influences. In fact, I haven't seen an actuarial model yet that incorporates race. We just completely ignore it (like UHC did) and hope that's okay. That makes sense where we legally have to, but surely there are models out there where that's not true.
I agree that there will be bias in most models. At a minimum, the future will not be exactly like the past. Most bias either isn't material to the model's purpose or doesn't have a deleterious effect on the people the algorithm is applied to. In cases where the bias does meaningfully affect outcomes, however, it's appropriate to account for it.

Most actuarial models don't use race as a variable because it's considered illegal and discriminatory to do so. Similarly, many actuarial models aren't allowed to use gender as a variable. The ACA disallows rating based on sex, even though there are known cost differences associated with it.

I'm curious how this situation will ultimately resolve. The articles sound like UHC did not include race as a feature in their model. I would guess that the best way to adjust for the skew in the model (and the data) is to add race as a feature, but that runs the risk of being viewed as discriminatory too.
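
To make that concrete, here is a minimal sketch of the comparison I have in mind: fitting the same cost model with and without a race indicator. Every file and column name below is invented for illustration; this is not UHC's model.

Code:
# Illustration only: all file and column names are hypothetical.
# Fit the same cost model with and without a race indicator and
# compare average residuals by group.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("members.csv")  # hypothetical member-level data

y = df["allowed_cost"]  # the cost proxy the articles describe
X_base = df[["age", "chronic_count"]]
X_race = pd.get_dummies(df[["age", "chronic_count", "race"]],
                        columns=["race"])

for label, X in [("without race", X_base), ("with race", X_race)]:
    model = LinearRegression().fit(X, y)
    resid = y - model.predict(X)
    # Systematic under-prediction for a group shows up as a positive
    # mean residual; with race dummies in an OLS fit, the per-group
    # mean residuals are zero by construction.
    print(label)
    print(resid.groupby(df["race"]).mean().round(1))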
#42  11-04-2019, 01:38 PM
WhosOnFirst (Member, Non-Actuary)
Join Date: Mar 2015 | Posts: 211

Quote:
Originally Posted by The Obese Dog
I don't think this is true. They are getting slammed precisely because they reached out to different proportions of people, not because of the error rate. Hence, reaching out to all 60 would be biased. Also, trying to equalize proportions on accuracy would guarantee differing rates of reach-out and vice-versa (unless the underlying base rates are the same, which they aren't).
Per the article, "At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%." This isn't about parity. This is about the risk scores for black people being set lower than risk scores for white people.
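
That check is straightforward to run if you have the data. A rough sketch, using a hypothetical extract in which a chronic-condition count stands in for "sicker" (file and column names are made up):

Code:
# Hypothetical sketch of the disparity check described above: within
# each risk-score decile, compare a sickness measure (here a
# chronic-condition count) across groups. Column names are invented.
import pandas as pd

df = pd.read_csv("risk_scores.csv")  # score, race, chronic_count
df["score_decile"] = pd.qcut(df["score"], 10, labels=False)

sickness = (df.groupby(["score_decile", "race"])["chronic_count"]
              .mean()
              .unstack())
print(sickness.round(2))
# If one group is consistently sicker at the same score, the score
# understates that group's need, which is the pattern the study found.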

Quote:
Originally Posted by The Obese Dog
I am specifically questioning whether this is possible. If there are true underlying differences in how different segments utilize healthcare (black people are more likely to have missing coded conditions because they get less care), I am saying there DOES NOT exist a dataset that isn't biased, and there can't be. Even if you gather race data and force parity on whatever statistic you decree represents bias, this won't go away, because you still won't be equally effective across subsets at getting care to the people who need it most (your true goal). In other words, I am saying I believe the proposed "fix" is still biased... maybe much less so than the original model, but biased nonetheless (at least to the extent I understand the technical details). Obviously, there are still better and worse answers.
It may not be possible to build a model that doesn't produce adverse impact. If that is the case, then a company shouldn't use the model. You may be right that the proposed fix still produces adverse impact. It is very possible to test that, and it should be tested. If it does produce adverse impact, then it shouldn't be used. This is not an opinion thing; this is a legal issue. I encourage you to read up on the history of red-lining in the insurance industry. Modern modeling can provide a justification for this exact same sort of behavior. It doesn't matter whether it was intentional, or even whether the modeling is correct (like if blacks really did need less care than whites*); the practice is illegal any time adverse impact is created on a protected class.
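
And testing for it doesn't need to be fancy. A common first screen, borrowed from US employment-discrimination guidance, is the four-fifths rule: flag the model if any group's selection rate falls below 80% of the highest group's rate. A sketch on invented data, assuming a hypothetical top-3% outreach cutoff:

Code:
# Four-fifths rule screen on invented data: compare each group's
# selection rate to the highest group's rate. The file, the columns,
# and the top-3% outreach cutoff are all hypothetical.
import pandas as pd

df = pd.read_csv("outreach.csv")  # needs 'race' and 'score' columns
df["selected"] = df["score"] >= df["score"].quantile(0.97)

rates = df.groupby("race")["selected"].mean()
impact_ratio = rates / rates.max()
print(impact_ratio.round(3))
if (impact_ratio < 0.8).any():
    print("Potential adverse impact: investigate before deployment.")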

Quote:
Originally Posted by The Obese Dog
It is not clear that the gain comes at the expense of a protected class, depending on what sense you mean it (I mean in the sense of harming them relative to what would have happened in the absence of a model). It could just be codifying what was already happening without a model. It could be that this model has in fact helped them, just not as much as it could have without the bias, and so they proportionally suffered while gaining absolutely. I realize I'm being a bit absurd here, but my point is that it's a much easier task to ask models not to create bias. It's a much harder task to ask them to undo the bias that already exists throughout the system (although I firmly believe that, used appropriately, they are the best tool we have for the task).

I am not disputing that including allowed costs in a model, for the use they intended, was a bad idea, nor am I claiming that they did a "good job". I am trying to point out that creating truly unbiased models is MUCH HARDER than is being implied in this thread.
In concept I agree; a rising tide floats all boats, as it were. And UHG may be able to persuade NY that this is the case and not be found guilty of any wrongdoing. However, prima facie, assigning lower risk scores to blacks is going to be a hard argument to overcome. We can look at some history to see what might actually happen. Do you remember UHG's Ingenix? They were the gold standard of aggregated data for the health insurance industry. Everybody used their data to determine what appropriate out-of-network charges were. NY sued insurers, saying that the Ingenix database artificially lowered what the insurance companies should have been paying to doctors. To me, as an industry insider, the entire case seemed without merit, because how else are you going to set a usual-and-customary (U&C) rate? Regardless, UHG lost the case and paid something like $400 million in penalties (don't quote me on that number). The final outcome was that a non-profit was created to perform the exact same function that Ingenix previously did.

Maybe that is the final solution: if you want to sell a product that discriminates, it has to be sold via a different type of company, one that doesn't have to adhere to the same regulations as an insurer.

I have no illusions about how difficult it is to build models that don't have an adverse impact. Sometimes it is bordering on the impossible. When it is, those models can't be used in practice. I've built and/or tested plenty of models that needed to be shelved for this exact reason, and each of them was. Or we went back to the drawing board to see if the concept could be modified to stay on the correct side of the law.

* Not a position I'm advocating
#43  11-04-2019, 04:31 PM
The Obese Dog (Member)
Join Date: Jan 2008 | Posts: 432

Quote:
Originally Posted by WhosOnFirst
[WhosOnFirst's post #42, quoted in full above]
Thanks, that was informative.