Actuarial Outpost > Actuarial Discussion Forum > Careers - Employment
#31
02-17-2018, 12:16 AM
MathStatFin
Member
Non-Actuary
Join Date: Jul 2014
Posts: 4,703
Quote:
Originally Posted by Revis Island View Post
I don't think I follow what you're saying here. Can you restate it, particularly the bold?

What I'm arguing is that the real value is in building systems. The algorithms generally are a part of that system, but they aren't the only part. Also, in some domains they aren't even close to the most important part. Deep learning will provide far better results for image recognition or training Alexa. It's probably not going to be at all useful for a claims model with limited data. I'm merely pointing out that a failure of complex algorithms in one area does not imply a failure of complex algorithms generally because:

1) In some cases that failure is due to a failure in other parts of the system (e.g. data, implementation)

2) In some cases that failure is due to the nature of the problem domain
You wrote above that the model was pretty useless and not that important. I can see why you would write that, but it's simply not true in many areas. The existence of good models and algos is sometimes the driving force for entire processes. Nobody in their right mind would invest billions of dollars in self-driving cars unless they had a pretty damn good reason to trust the relevant algorithms. I also wouldn't start a quant hedge fund -- with an extremely expensive IT investment -- unless I had reason to believe that my algorithms were superior in some sense. There are other examples like FDA testing, surveys, etc.
#32
02-17-2018, 09:47 PM
AbedNadir
Member
CAS SOA
Join Date: Mar 2014
Studying for enlightenment
Posts: 2,139

Quote:
Originally Posted by MathStatFin View Post
You wrote above that the model was pretty useless and not that important. I can see why you would write that, but it's simply not true in many areas. The existence of good models and algos is sometimes the driving force for entire processes. Nobody in their right mind would invest billions of dollars in self-driving cars unless they had a pretty damn good reason to trust the relevant algorithms. I also wouldn't start a quant hedge fund -- with an extremely expensive IT investment -- unless I had reason to believe that my algorithms were superior in some sense. There are other examples like FDA testing, surveys, etc.
As a matter of specialty, people who try to use the same tool for everything are bound to run into its limitations... it's like trying to use xgboost or blockchain methodology to cook a good meal.
#33
02-18-2018, 02:42 PM
Heywood J
Member
CAS
Join Date: Jun 2006
Posts: 4,063

I have not read this article in detail, but one thing in general to keep in mind is that the vast majority of such articles are highly intellectually dishonest. There are countless ways to evaluate and compare models, some far more potentially misleading than others. You can rest assured that any time someone prepares to publish an article, they will go down the list of measures until they find one that is misleading enough to give a desired answer, or can be plied with verbiage to give the desired conclusion.

Judging by the convoluted way the authors have stated their results, I suspect that this article is not an exception, though it's impossible to tell confidently without looking into the work deeply. The fact that they beat around the bush with a whole bunch of numbers, rather than looking at a single measure like AUROC, makes me wonder whether AUROC is telling a story they don't like.

The key finding seems to be that white people who re-offended were more likely to be misclassified as low-risk compared to black people who re-offended. They also show that white people were classified as low-risk much more often than black people. They don't seem to grasp that the second fact will lead to the first, even if the model is perfectly calibrated. If this is the level of misleading bullshit they have to resort to, then there is probably nothing to worry about for AI or Data Scientists, other than the fact that a non-technical audience is judging them uncritically on fallacious arguments.
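The calibration point above can be checked with a quick simulation. This is a minimal sketch with made-up score distributions (the `group_a`/`group_b` values are hypothetical, not COMPAS data): each person's predicted score equals their true reoffense probability, so the model is perfectly calibrated by construction, yet the group labelled low-risk more often ends up with a higher rate of re-offenders "misclassified" as low-risk.

```python
import random

random.seed(0)

def simulate(group_scores, threshold=0.5, n=100_000):
    """Each individual's true reoffense probability equals their score
    (perfect calibration).  Scores below `threshold` are labelled
    low-risk.  Returns (share labelled low-risk, share of actual
    re-offenders who were labelled low-risk)."""
    low_risk = 0
    reoffenders = 0
    missed = 0  # re-offenders labelled low-risk
    for _ in range(n):
        p = random.choice(group_scores)
        labelled_low = p < threshold
        low_risk += labelled_low
        if random.random() < p:  # this person actually re-offends
            reoffenders += 1
            missed += labelled_low
    return low_risk / n, missed / reoffenders

# Hypothetical score distributions: group A skews lower-risk than group B.
group_a = [0.2, 0.3, 0.4, 0.6]
group_b = [0.4, 0.6, 0.7, 0.8]

low_a, fnr_a = simulate(group_a)
low_b, fnr_b = simulate(group_b)
print(f"Group A: {low_a:.0%} labelled low-risk; "
      f"{fnr_a:.0%} of its re-offenders were labelled low-risk")
print(f"Group B: {low_b:.0%} labelled low-risk; "
      f"{fnr_b:.0%} of its re-offenders were labelled low-risk")
```

Both error rates are read off the same perfectly calibrated scores; the gap comes entirely from the different score distributions, which is exactly the "second fact leads to the first" point.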
#34
02-20-2018, 06:13 PM
Revis Island
Member
CAS
Join Date: Oct 2010
Posts: 985

Quote:
Originally Posted by MathStatFin View Post
You wrote above that the model was pretty useless and not that important. I can see why you would write that, but it's simply not true in many areas. The existence of good models and algos is sometimes the driving force for entire processes. Nobody in their right mind would invest billions of dollars in self-driving cars unless they had a pretty damn good reason to trust the relevant algorithms. I also wouldn't start a quant hedge fund -- with an extremely expensive IT investment -- unless I had reason to believe that my algorithms were superior in some sense. There are other examples like FDA testing, surveys, etc.
That wasn't what I meant to convey (although I see how it came across that way). Certainly the model itself isn't useless, and I fully agree that different models perform better on different problems. I think the entirety of my comments reflects that (where I compare deep learning vs. linear models). What I was trying to say is that the model-building piece itself is being increasingly automated (e.g. DataRobot, H2O Driverless AI), and I think this will only continue in the future. I see the art of building good AI (to use the buzzword) as itself being increasingly subsumed by AI. So I don't think we have a disagreement on the importance of using the right model for the job. I just think that the model-building piece itself is going to become less and less important.
Tags
data science, predictive analytics