Actuarial Outpost
 
  #11  
02-14-2019, 12:05 PM
Actuarially Me
Member
CAS
Join Date: Jun 2013
Posts: 62

Quote:
Originally Posted by MoralHazard
Relevant paper: https://www.casact.org/pubs/forum/17...ross-Evans.pdf

The authors present a case study of a claims severity model, which is commonly modeled using a gamma GLM. They compare that approach to a minimum bias model, which is equivalent to a Poisson GLM, and conclude that the min bias aka Poisson model validated similarly to (or maybe slightly better than) the gamma model. So at least based on this I'd say that yes, you're probably OK going with the Poisson model for PP.

Ha! We actually use the software created by Chris Gross, MultiRate. I knew it used minimum bias iterations instead of MLE, but I wasn't sure what distributions it used in the background. It creates models with a good amount of lift, but it just doesn't have the flexibility of R. Either that, or I'm just not familiar enough with it, and that's why I still get better performance in R.

We had him in to give a tutorial on new features of MultiRate and he had mentioned this paper coming out and it totally slipped my mind. Thanks!
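
For anyone who wants to try the comparison the paper describes, here is a minimal R sketch: the same severity model fit as a gamma GLM and as a Poisson-style (minimum bias) GLM. The data frame claims and its columns (avg_severity, claim_count, territory, class) are placeholders, not anything from the paper; quasipoisson is used so the non-integer severity response doesn't throw warnings, and it gives the same coefficient estimates as a plain Poisson fit.

Code:
# Severity modeled two ways on hypothetical data; compare validation/lift afterwards
sev_gamma <- glm(avg_severity ~ territory + class,
                 family  = Gamma(link = "log"),
                 weights = claim_count,
                 data    = claims)

sev_pois  <- glm(avg_severity ~ territory + class,
                 family  = quasipoisson(link = "log"),  # same point estimates as the min-bias / Poisson approach discussed above
                 weights = claim_count,
                 data    = claims)

# With a log link, exponentiated coefficients are the multiplicative relativities
round(cbind(gamma = exp(coef(sev_gamma)), poisson = exp(coef(sev_pois))), 3)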
  #12  
02-14-2019, 12:09 PM
Heywood J
Member
CAS
Join Date: Jun 2006
Posts: 4,080

In general, GLMs are simplistic enough that they are fairly robust to the choice of distribution, within reason. With more sophisticated models, that may no longer be the case, but then you'd probably want to use more realistic distributions than the ones from the exponential family.
  #13  
02-14-2019, 12:26 PM
Actuarially Me
Member
CAS
Join Date: Jun 2013
Posts: 62

Quote:
Originally Posted by Heywood J
In general, GLMs are simplistic enough that they are fairly robust to the choice of distribution, within reason. With more sophisticated models, that may no longer be the case, but then you'd probably want to use more realistic distributions than the ones from the exponential family.
Yeah, right now I'm looking for interpretability over predictive power. I have to get people comfortable with machine learning over underwriting intuition first, so I'll be sticking with GLM / elastic net for rating models for at least the next year. Once I build confidence, I'll slowly move into more advanced models.

I have a feeling claim triage and client retention models are going to be a lot easier. Reserving models were much easier than pricing lol.
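
If it helps anyone following along, a rough sketch of the GLM / elastic net setup in R with glmnet (an alpha between 0 and 1 gives the elastic net penalty). The data frame policies and its columns are hypothetical; because the link is log, the fitted coefficients stay multiplicative.

Code:
library(glmnet)

# Hypothetical frequency-style response with exposure as an offset
X <- model.matrix(~ territory + class, data = policies)[, -1]   # drop the intercept column

fit <- cv.glmnet(X, policies$claim_count,
                 family = "poisson",
                 offset = log(policies$exposure),
                 alpha  = 0.5)          # 0 = ridge, 1 = lasso, in between = elastic net

# Exponentiated coefficients are multiplicative relativities
exp(as.matrix(coef(fit, s = "lambda.1se")))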
  #14  
Yesterday, 06:15 AM
juandeoyar
Join Date: Feb 2018
College: ISEG Actuarial Science
Favorite beer: Leffe Blond
Posts: 28

You can also try modelling severity with a normal distribution and a log link (in case you have some large claims).
With penalized regression you get better scores because you probably have many risk classes with sparse data.
You can use a credibility approach after you fit the GLM, or try a GLMM, where you get shrunken coefficients (ML algorithms are based purely on performance).
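
A hedged sketch of the GLMM suggestion: a Poisson frequency model with a random intercept per risk class, fit with lme4, where the class effects are shrunk toward the overall mean much like credibility-weighted factors. All of the data frame and column names here (policies, claim_count, exposure, risk_class, vehicle_age) are made up for illustration.

Code:
library(lme4)

fit_glmm <- glmer(claim_count ~ vehicle_age + (1 | risk_class),
                  family = poisson(link = "log"),
                  offset = log(exposure),
                  data   = policies)

# Shrunken class effects on the log scale; exponentiate for relativities
exp(ranef(fit_glmm)$risk_class[["(Intercept)"]])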
  #15  
Yesterday, 06:55 AM
FactuarialStatement
Member
CAS AAA
Join Date: Oct 2012
Studying for 5
Favorite beer: Beer
Posts: 2,087

Well I thought I did help, but I’ll be more explicit for you.

Reduce the number of factor levels so that your design matrix is not large p. Or introduce hierarchy to the covariance matrix if possible. The double exponential prior (lasso) shrinks the coefficients well but if they all shrink toward the global mean then you didn’t accomplish much.

Don't use Poisson to model pure premium. If you think little segmentation comes from the severity component, then just model frequency.

H2O parallelizes GLM fitting by rows, so this is even worse for your high-cardinality features. With so few rows, large factor levels, and sparse data, you should not be splitting your data further.
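
One concrete way to act on the "reduce the number of factor levels" point: collapse sparse levels into an "Other" bucket before fitting. The data frame, column name, and the threshold of 100 records below are hypothetical.

Code:
# Collapse rarely observed territory levels into a single "Other" level
counts <- table(policies$territory)
rare   <- names(counts)[counts < 100]

policies$territory_grp <- as.character(policies$territory)
policies$territory_grp[policies$territory_grp %in% rare] <- "Other"
policies$territory_grp <- factor(policies$territory_grp)

# The grouped factor then replaces the raw one in the GLM, e.g.:
# glm(claim_count ~ territory_grp, family = poisson, offset = log(exposure), data = policies)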
__________________
P | FM | 3F | 3ST | 3LC | C | 5 | 6 |
OC1 | OC2 | COP
Econ | Stats | Corp Fin
ACAS

7
8
9
  #16  
Yesterday, 10:11 AM
Actuarially Me
Member
CAS
Join Date: Jun 2013
Posts: 62

Quote:
Originally Posted by juandeoyar
You can also try modelling severity with a normal distribution and a log link (in case you have some large claims).
With penalized regression you get better scores because you probably have many risk classes with sparse data.
You can use a credibility approach after you fit the GLM, or try a GLMM, where you get shrunken coefficients (ML algorithms are based purely on performance).
Thanks! Yeah, I do have a lot of buckets with very little credibility. For my first iteration, I can't change any of the bucketing due to how the original model was set up: they put the coefficients in SQL and pipe them into a web service. I'm on a time constraint, so I'm just readjusting their current factors, which means I need models with multiplicative coefficients, so I'm pretty much stuck with GLM and elastic net. And with that, Poisson has been performing better. I just wasn't sure if there's anything I'm overlooking in choosing Poisson over Tweedie, given that Poisson is more developed and has better support in R.
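
For comparison, a minimal sketch of the Tweedie alternative next to a Poisson-style pure premium fit, using the tweedie family from the statmod package. The data frame policies, its columns, and the variance power of 1.5 are assumptions for illustration only.

Code:
library(statmod)

pp_tweedie <- glm(pure_premium ~ territory + class,
                  family  = tweedie(var.power = 1.5, link.power = 0),  # link.power = 0 is the log link
                  weights = exposure,
                  data    = policies)

pp_poisson <- glm(pure_premium ~ territory + class,
                  family  = quasipoisson(link = "log"),  # Poisson-style fit on a continuous response
                  weights = exposure,
                  data    = policies)

# Both give multiplicative factors; compare them on holdout data
round(cbind(tweedie = exp(coef(pp_tweedie)), poisson = exp(coef(pp_poisson))), 3)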
  #17  
Yesterday, 10:50 AM
Actuarially Me
Member
CAS
Join Date: Jun 2013
Posts: 62

Quote:
Originally Posted by FactuarialStatement
Well I thought I did help, but I'll be more explicit for you.

Reduce the number of factor levels so that your design matrix is not large p. Or introduce hierarchy to the covariance matrix if possible. The double exponential prior (lasso) shrinks the coefficients well but if they all shrink toward the global mean then you didn't accomplish much.

Don't use Poisson to model pure premium. If you think little segmentation comes from the severity component, then just model frequency.

H2O parallelizes GLM fitting by rows, so this is even worse for your high-cardinality features. With so few rows, large factor levels, and sparse data, you should not be splitting your data further.
No, you didn't help. You're condescending in every post of yours I've read (saying things like "I'll be more explicit"), and you never answer the initial question (and still haven't), which leads me to believe you're more interested in sounding intelligent than actually contributing. You still didn't answer the question at hand:

I have no control over the data and implementation process for my first iteration of the model. The factors are set in SQL and piped into a web service that I can't make changes to yet. Given this, I've found Poisson performs slightly better than Tweedie. Is there anything inherently wrong with using the Poisson over the Tweedie? I'm not asking how to improve my model. I'm not asking you to rate my model and tell me why it sucks.

I'm also not asking why H2O is worse. Someone else suggested it and I said I'd look into it. Then you pipe in and, given your tone, say something critical like "For 75k records you can use your cellphone. I would not be considering h20 when there are much better R packages." How on earth is that helpful? Maybe state why that's the case, or list the better R packages?

Since we're giving unsolicited advice, here's some for you. If how you interact on this site is an indication of how you interact with coworkers, you're going to be the guy who wonders why others pass him up for opportunities even though he believes he's much better qualified. It's because your personality sucks and no one enjoys working with you. You're the AO equivalent of /r/iamverysmart, which leads me to believe you have the emotional insecurity of a 15-16 year old who needs constant validation.

You may be a brilliant modeler and actuary, but no one is going to take you seriously on here or in life if you act like a dick. You'll be the guy people roll their eyes at when you start walking over toward them, because all you do is complain about how you could do things better. If you're the type who complains about other coworkers' work quality to every other coworker, they're going to resent you. Instead of criticizing, teach them how to be better. Everyone started somewhere, and not everyone is as gifted with actuarial genius as you.

I really only feel the need to call you out because I want to encourage open discussion on here. I don't want lurkers to be afraid to ask questions (no matter how basic they may be to your mind) for fear of being criticized. I know I'm not a great pricing modeler, nor do I claim to be. I've only been doing it for 3 months and have to work with the data and tasks I'm given. I come here to ask for advice from people who have experience and can offer it, not to be criticized.
Tags
glm, poisson, tweedie
