Actuarial Outpost > Actuarial Discussion Forum > Property - Casualty / General Insurance
  #1
Old 05-11-2016, 07:40 PM
candlejack

Confidence Intervals from combined GLM outputs

Hi everyone, long time lurker, first time poster.

Recently at work we have been debating how to go about creating confidence intervals from GLM output. Specifically the confidence interval around the predicted risk premium.

Say for instance we had:
  • 4 Perils (Theft, Fire, Windscreen, Accidental Damage)
  • A Frequency and Severity model for each

To get the risk premium estimate, we are combining the parameter estimates produced. To get 95% confidence intervals, can we use the "Wald 95% Confidence Limits" from Proc GENMOD?

One of my colleagues is trying to use the compound Poisson distribution as a proof that we can combine the model parameters (implying each "segment" in a regression is its own Poisson/Gamma variable with parameters equal to the combined model coefficients - no idea how we get shape and scale parameters this way for the gamma models).

I've found the literature on this topic really sparse, so I would appreciate it if anyone could point me in a particular direction, or has had some experience with this.

Also, due to the CLT, are the regression coefficients approximately normally distributed? Would it be a better approach to simulate the risk premium and assume that the coefficients across all the models are independent?

Cheers for any help

cj
  #2
Old 05-11-2016, 09:23 PM
Vorian Atreides

What distributions did you assume for frequency and severity?

Are you assuming any correlation between perils?
  #3
Old 05-11-2016, 10:50 PM
candlejack

Using Poisson and Gamma for frequency and severity. We have assumed the perils are independent (and I think they generally are). I was wondering, though, whether correlation might arise in (say) the Sum Insured parameter estimates across all the models, which would mean the combined 95% Wald CI understates the true range.
  #4
Old 05-12-2016, 01:24 PM
AMedActuary

Just so I understand better, is this how you're approaching it?

Z = S1 + S2 + S3 + S4

where each Si is a compound sum of Ni ~ Poisson(lambda_i) claim severities, each severity distributed Gamma(alpha_i, beta_i).

You have lambdas, alphas, and betas estimated from your data on each peril. Do you want a confidence interval around Z given these estimates?
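
If that framing is right, one way to get a feel for Z is to simulate it directly. A minimal numpy sketch, with made-up lambda/alpha/theta values standing in for the fitted parameters (scale parameterization for the Gamma):

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up per-peril parameters: Poisson rate lam, Gamma shape alpha,
# Gamma scale theta (so mean severity = alpha * theta).
perils = {
    "theft":             (0.05, 2.0, 500.0),
    "fire":              (0.01, 1.5, 8000.0),
    "windscreen":        (0.10, 3.0, 100.0),
    "accidental_damage": (0.04, 2.0, 1200.0),
}

def simulate_risk_premium(n_sims):
    """Simulate Z = sum over perils of a compound Poisson-Gamma loss,
    treating the perils as independent."""
    total = np.zeros(n_sims)
    for lam, alpha, theta in perils.values():
        n = rng.poisson(lam, size=n_sims)  # claim counts per policy
        # A sum of n iid Gamma(alpha, theta) severities is Gamma(n*alpha, theta);
        # mask out the zero-claim policies afterwards.
        sev = rng.gamma(np.maximum(n, 1) * alpha, theta)
        total += np.where(n > 0, sev, 0.0)
    return total

z = simulate_risk_premium(200_000)
# Theoretical mean: sum of lam * alpha * theta over perils = 296
print(z.mean())
```

Note that percentiles of z give a prediction interval for actual losses, which is a different (much wider) thing than a confidence interval on the expected risk premium.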

  #5
Old 05-12-2016, 02:43 PM
GreenPea

Bootstrapping can be a useful tool when your estimator's variance is too complicated to derive or does not have a closed form solution.
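
For concreteness, here is a minimal nonparametric bootstrap sketch for a CI on the mean pure premium; the per-policy losses below are made up. In practice you would refit the frequency and severity GLMs on each resample and recombine the predictions, which is the same idea with more machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up per-policy annual losses standing in for the real book:
# ~8% of policies have a claim, severities drawn from a Gamma.
n_policies = 5000
has_claim = rng.random(n_policies) < 0.08
losses = np.where(has_claim, rng.gamma(2.0, 1500.0, size=n_policies), 0.0)

# Nonparametric bootstrap: resample policies with replacement,
# recompute the mean loss (pure premium) each time.
n_boot = 2000
boot_means = np.array([
    rng.choice(losses, size=n_policies, replace=True).mean()
    for _ in range(n_boot)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])  # 95% percentile CI
print(f"pure premium: {losses.mean():.1f}  95% CI: ({lo:.1f}, {hi:.1f})")
```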
  #6
Old 05-12-2016, 03:07 PM
whoanonstop

Quote:
Originally Posted by GreenPea View Post
Bootstrapping can be a useful tool when your estimator's variance is too complicated to derive or does not have a closed form solution.


Exactly what I would have done myself.

-Riley
  #7
Old 05-12-2016, 06:43 PM
AMedActuary

I agree. If you want the uncertainty in your estimated parameters (lambda, alpha, beta) to be incorporated in the confidence interval and if there's prior information on these parameters you want to use, you would have to use a Bayesian model. If not, then bootstrapping is the way to go.
  #8
Old 05-17-2016, 02:47 PM
mxpx=1/2

Quote:
Originally Posted by candlejack View Post
Recently at work we have been debating how to go about creating confidence intervals from GLM output. Specifically the confidence interval around the predicted risk premium.
If I'm understanding correctly what you're looking for, this is pretty easy with a Tweedie distribution.
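
For reference, a Tweedie with variance power 1 < p < 2 is exactly a compound Poisson-Gamma, so the per-peril frequency/severity pair can be collapsed into a single pure-premium model. A numpy sketch of the standard parameter mapping, checked by simulation (the lam/alpha/theta values are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up compound Poisson-Gamma parameters for one peril:
# Poisson rate lam, Gamma shape alpha, Gamma scale theta.
lam, alpha, theta = 0.3, 2.0, 1000.0

# Standard mapping to Tweedie (mean mu, variance power p, dispersion phi),
# under which Var(Z) = phi * mu**p.
p   = (alpha + 2.0) / (alpha + 1.0)
mu  = lam * alpha * theta
phi = lam**(1.0 - p) * (alpha * theta)**(2.0 - p) / (2.0 - p)

# Simulate the compound Poisson-Gamma directly and compare moments.
n = rng.poisson(lam, size=500_000)
z = np.where(n > 0, rng.gamma(np.maximum(n, 1) * alpha, theta), 0.0)

print(z.mean(), mu)          # both near 600
print(z.var(), phi * mu**p)  # both near 1.8e6
```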
  #9
Old 05-24-2016, 09:29 PM
magillaG

I believe you are talking about a confidence interval on the expected loss, and not a prediction interval around the actual losses.

In that case, if the standard errors on the frequency and severity are small relative to their values, and if frequency and severity are independent, then you can use the following approximations:

1) Pure Premium = frequency * severity: the relative standard errors add in quadrature. (error in pure premium / pure premium) = sqrt((error in frequency / frequency)^2 + (error in severity / severity)^2)

2) Pure Premium = Pure Premium of Peril 1 + Pure Premium of Peril 2: the absolute errors add in quadrature. (error in total) = sqrt((error in peril 1)^2 + (error in peril 2)^2)

This is not as accurate as bootstrapping or simulation, but it can be a useful check.
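
Those quadrature formulas are the usual delta-method (propagation-of-errors) approximations. A quick simulation check, treating the estimators as independent normals with made-up point estimates and standard errors:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Made-up point estimates and standard errors
freq, freq_se = 0.12, 0.006    # 5% relative error
sev,  sev_se  = 2500.0, 100.0  # 4% relative error

f = rng.normal(freq, freq_se, size=n)
s = rng.normal(sev, sev_se, size=n)

# 1) product: relative errors add in quadrature
pp = f * s
rel_approx = np.hypot(freq_se / freq, sev_se / sev)  # sqrt(.05^2 + .04^2)
print(pp.std() / pp.mean(), rel_approx)              # should nearly match

# 2) sum of perils: absolute errors add in quadrature
pp2 = rng.normal(180.0, 12.0, size=n)                # a second, independent peril
abs_approx = np.hypot(pp.std(), 12.0)
print(np.std(pp + pp2), abs_approx)                  # should nearly match
```

The small mismatch in the product case is the second-order term the delta method drops; it grows with the relative errors, which is why the "small relative to their values" caveat matters.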