Actuarial Outpost
 
Go Back   Actuarial Outpost > Actuarial Discussion Forum > Careers - Employment
#21 | 02-08-2018, 05:31 PM
OddSox (Member, CAS; Join Date: Oct 2007; Posts: 11,692)

Quote:
Originally Posted by Colonel Smoothie
Okay so I've got a question. The last time I tried clicking on random squares just to see if I could proceed, the webpage stopped me and asked me to please click on the correct squares. If these quizzes are being used to tune AI, what are my answers being compared against to determine that I am wrong?
Not sure how those work, but all of those two-word CAPTCHAs usually have one word that they check your answer against, and one word that they'll use your answer to build up the dataset for (assuming you answered the other one correctly).
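For illustration, that one-known/one-unknown scheme could be sketched roughly like this. This is a toy sketch of the idea only, not reCAPTCHA's actual implementation, and all names here are hypothetical:

```python
# Toy sketch of a two-word CAPTCHA: one "control" word with a known
# answer gates the attempt, and the response to the unknown word is
# collected as a candidate label only when the control word is correct.

def grade_captcha(control_answer, unknown_answer, known_label, dataset):
    """Accept the attempt only if the control word matches; if it does,
    record the user's reading of the unknown word as labeling data."""
    if control_answer.strip().lower() != known_label.lower():
        return False  # user failed the word we can actually check
    dataset.append(unknown_answer.strip())  # harvest the unchecked label
    return True

votes = []
grade_captcha("Street", "Grover", "street", votes)   # accepted, label kept
grade_captcha("Strxet", "Grover", "street", votes)   # rejected, label dropped
print(votes)  # ['Grover']
```

Answers to the unknown word from many accepted attempts can then be aggregated (e.g., by majority vote) to produce a usable label.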
#22 | 02-08-2018, 05:48 PM
2pac Shakur (Member, SOA; Join Date: Jun 2003; Posts: 93,600)

bias-variance tradeoff imo
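For anyone who wants to see that tradeoff concretely, here is a toy numpy sketch: a line underfits a sine (high bias), while a high-degree polynomial chases the noise (high variance). The function, noise level, and degrees are all chosen arbitrarily for illustration:

```python
# Toy illustration of the bias-variance tradeoff: fit polynomials of
# increasing degree to noisy samples of sin(2*pi*x) and measure error
# against the noise-free truth on a dense grid.
import numpy as np

def true_f(x):
    return np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 20)
x_test = np.linspace(0.0, 1.0, 200)
y_train = true_f(x_train) + rng.normal(0.0, 0.3, x_train.shape)

test_mse = {}
for degree in (1, 4, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares fit
    preds = np.polyval(coeffs, x_test)
    test_mse[degree] = float(np.mean((preds - true_f(x_test)) ** 2))
    print(f"degree {degree}: test MSE {test_mse[degree]:.3f}")
```

The degree-1 fit is badly biased; somewhere in the middle the tradeoff bottoms out.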
#23 | 02-08-2018, 06:17 PM
MathStatFin (Member, Non-Actuary; Join Date: Jul 2014; Posts: 4,703)

Quote:
Originally Posted by Colonel Smoothie
Okay so I've got a question. The last time I tried clicking on random squares just to see if I could proceed, the webpage stopped me and asked me to please click on the correct squares. If these quizzes are being used to tune AI, what are my answers being compared against to determine that I am wrong?


I also had this thought.
#24 | 02-09-2018, 10:41 AM
HuskerCAS (Member, CAS AAA; Join Date: Mar 2011; Posts: 483)

Quote:
Originally Posted by Woodrow
Maybe it depends on the topic, but for a lot of AI, being "as good as untrained people" would be an amazing success.

Think about AI tasks like "recognize a picture of a cat"
And yet it's still the best option for "recognize all pictures of cats on this 1 TB hard drive"

One example I heard was censoring social security numbers freely given in phone calls without prompting.
#25 | 02-13-2018, 05:15 PM
Revis Island (Member, CAS; Join Date: Oct 2010; Posts: 983)

I follow these stories closely and they are always a bit annoying. Finding one area where models perform poorly does not mean the entire field is doomed to failure. The success of models and the required complexity will vary greatly by domain and available data.

I constantly argue that the real power in models is not the algorithm but the process. Setting up data -> model -> predictions -> validation/feedback is incredibly time-consuming but can have a huge impact when done correctly. Models do nothing in isolation. They must be used within this larger process to be effective. I suspect that a lot of their value in insurance is in bringing a systematic approach rather than algorithm magic.
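As a rough sketch of that data -> model -> predictions -> validation/feedback loop, assuming scikit-learn is available and using synthetic data (all modeling choices here are illustrative, not a recommendation):

```python
# Sketch of the data -> model -> predictions -> validation/feedback
# process on synthetic data. The algorithm inside the pipeline is
# swappable; the surrounding process is the point.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# model -- any estimator could be dropped in here
model = make_pipeline(StandardScaler(), LogisticRegression())

# predictions + validation/feedback via 5-fold cross-validation
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

In a real deployment the feedback arrow matters most: validation results feed back into data collection and model revision, which is where the time goes.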

Pure model building is also being automated to the point that it's useless to be concerned about what algorithm is ultimately selected. You can build thousands of models in an hour with personal computers on reasonably large data. In an insurance context I believe it is useful to find the best fitting model as well as the most easily explained model. Show the business users the trade-off between the best fitting model and the easiest to understand model (e.g. a GLM or decision tree).
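A minimal sketch of that fit-vs-explainability comparison, again assuming scikit-learn and synthetic data (the two candidate models are illustrative stand-ins for "easy to explain" and "best fitting"):

```python
# Score a simple GLM-style model and a more complex ensemble on the
# same synthetic data, showing the tradeoff side by side.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

candidates = {
    "logistic GLM (easy to explain)": LogisticRegression(max_iter=1000),
    "random forest (harder to explain)": RandomForestClassifier(random_state=1),
}
results = {}
for name, model in candidates.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {results[name]:.3f}")
```

Presenting both numbers lets the business users decide whether the extra lift (if any) is worth the loss of interpretability.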

It should not come as a surprise that simple models perform comparably to more complex models in certain domains. However in other domains, complex algorithms are demonstrably better than linear models (image recognition, voice recognition, natural language processing). For some use cases image recognition algorithms are now exceeding what can be done by human experts not just in terms of speed but also accuracy.
#26 | 02-13-2018, 07:42 PM
MathStatFin (Member, Non-Actuary; Join Date: Jul 2014; Posts: 4,703)

Quote:
Originally Posted by Revis Island
I follow these stories closely and they are always a bit annoying. Finding one area where models perform poorly does not mean the entire field is doomed to failure. The success of models and the required complexity will vary greatly by domain and available data.

I constantly argue that the real power in models is not the algorithm but the process. Setting up data -> model -> predictions -> validation/feedback is incredibly time-consuming but can have a huge impact when done correctly. Models do nothing in isolation. They must be used within this larger process to be effective. I suspect that a lot of their value in insurance is in bringing a systematic approach rather than algorithm magic.

Pure model building is also being automated to the point that it's useless to be concerned about what algorithm is ultimately selected. You can build thousands of models in an hour with personal computers on reasonably large data. In an insurance context I believe it is useful to find the best fitting model as well as the most easily explained model. Show the business users the trade-off between the best fitting model and the easiest to understand model (e.g. a GLM or decision tree).

It should not come as a surprise that simple models perform comparably to more complex models in certain domains. However in other domains, complex algorithms are demonstrably better than linear models (image recognition, voice recognition, natural language processing). For some use cases image recognition algorithms are now exceeding what can be done by human experts not just in terms of speed but also accuracy.
What has impact is the combination of interesting data and a model/algo that can convert information into actionable insights/predictions/decisions. You can't really separate these things. In fact, a bunch of researchers can come up with new predictive algorithms/methods, and your data collection processes might be set up specifically to feed those algorithms. The latter case basically contradicts what you wrote: in that situation, the quality of the algorithms actually drives the entire process. This is basically what is going on right now with companies scrambling to get high-quality training data sets. There was an interview with Jeff Bezos in which he talks about the need for good training sets to feed the new generation of AI algorithms.
#27 | 02-14-2018, 01:03 PM
Revis Island (Member, CAS; Join Date: Oct 2010; Posts: 983)

Quote:
Originally Posted by MathStatFin
What has impact is the combination of interesting data and a model/algo that can convert information into actionable insights/predictions/decisions. You can't really separate these things. In fact, a bunch of researchers can come up with new predictive algorithms/methods, and your data collection processes might be set up specifically to feed those algorithms. The latter case basically contradicts what you wrote: in that situation, the quality of the algorithms actually drives the entire process. This is basically what is going on right now with companies scrambling to get high-quality training data sets. There was an interview with Jeff Bezos in which he talks about the need for good training sets to feed the new generation of AI algorithms.
I don't think I follow what you're saying here. Can you restate it, particularly the bolded part?

What I'm arguing is that the real value is in building systems. The algorithms generally are a part of that system, but they aren't the only part. Also, in some domains they aren't even close to the most important part. Deep learning will provide far better results for image recognition or training Alexa. It's probably not going to be at all useful for a claims model with limited data. I'm merely pointing out that a failure of complex algorithms in one area does not imply a failure of complex algorithms generally because:

1) In some cases that failure is due to a failure in other parts of the system (e.g. data, implementation)

2) In some cases that failure is due to the nature of the problem domain
#28 | 02-15-2018, 11:57 AM
Locrian (Member; Join Date: Apr 2008; Posts: 1,740)

Quote:
Originally Posted by Revis Island
Pure model building is also being automated to the point that it's useless to be concerned about what algorithm is ultimately selected.
Quote:
Originally Posted by Revis Island
What I'm arguing is that the real value is in building systems. The algorithms generally are a part of that system, but they aren't the only part. Also, in some domains they aren't even close to the most important part.
I definitely agree with the gist of the first quote ("useless" is a strong word, but I get the point).

I also agree with the second point. After all, forecasts are ultimately always wrong. Sometimes how the larger system manages that fact can be critical, and a smartly designed system tolerates forecast error.
#29 | 02-16-2018, 08:46 AM
ThereIsNoSpoon (Member, CAS SOA; Join Date: Sep 2014; Posts: 453)

Quote:
Originally Posted by whoanonstop
If the definition of AI for the majority is different from my definition of AI, does that mean I don't know what AI is?

I don't even know what passes as a complex model anymore.

Is it a regression with an interaction term in it?

Or a deep network mutated with some genetic algorithm scheme?

Not even sure I understand what the term "model" means by itself.

Is it a few numbers entered into Excel? Are spreadsheets alone models?

Or does it actually require something more than that?

I just don't know anymore.

-Riley
It's true that the line's gotten really blurry, to the point of being pixelated, Riley, but perhaps that's a matter of perspective? A model can be either too complex or too simplistic depending on the situation it's used in and the people it's presented to. There's also the question of how to define "model" in different contexts, I suppose.
#30 | 02-16-2018, 08:50 AM
ThereIsNoSpoon (Member, CAS SOA; Join Date: Sep 2014; Posts: 453)

Quote:
Originally Posted by Revis Island
I follow these stories closely and they are always a bit annoying. Finding one area where models perform poorly does not mean the entire field is doomed to failure. The success of models and the required complexity will vary greatly by domain and available data.

I constantly argue that the real power in models is not the algorithm but the process. Setting up data -> model -> predictions -> validation/feedback is incredibly time-consuming but can have a huge impact when done correctly. Models do nothing in isolation. They must be used within this larger process to be effective. I suspect that a lot of their value in insurance is in bringing a systematic approach rather than algorithm magic.

Pure model building is also being automated to the point that it's useless to be concerned about what algorithm is ultimately selected. You can build thousands of models in an hour with personal computers on reasonably large data. In an insurance context I believe it is useful to find the best fitting model as well as the most easily explained model. Show the business users the trade-off between the best fitting model and the easiest to understand model (e.g. a GLM or decision tree).

It should not come as a surprise that simple models perform comparably to more complex models in certain domains. However in other domains, complex algorithms are demonstrably better than linear models (image recognition, voice recognition, natural language processing). For some use cases image recognition algorithms are now exceeding what can be done by human experts not just in terms of speed but also accuracy.
It's understandably (but regrettably) common, though, right? Sweeping generalizations about these relatively new techniques without examining where they actually perform better or worse.
Tags
data science, predictive analytics
