Actuarial Outpost
 
Go Back   Actuarial Outpost > Exams - Please Limit Discussion to Exam-Related Topics > CAS > CAS Exams > Exam 7 (new) - Estimation of Policy Liability & ERM
#1001
Old 03-16-2019, 12:31 AM
hjacjswo
Member
Join Date: Dec 2009
College: About to be a super senior
Posts: 1,507

Quote:
Originally Posted by redearedslider View Post
Hardest what
Get ur mind out of the gutter

#1002
Old 03-16-2019, 09:50 AM
Marcie
Member, CAS
Join Date: Feb 2015
Posts: 9,592

Quote:
Originally Posted by redearedslider View Post
Hardest what
They're comparing hardness of carapaces and plastrons, of course.

#1003
Old 03-16-2019, 10:20 AM
trueblade
Member, CAS
Join Date: Jul 2012
Location: Castlevania
Studying for Freedom
Posts: 961

have a stupid question for Meyers

Page 17. Table 6 - Illustrative MCMC simulations

I don't get why the Betas in that table are between -0.5125 and 0.0518. These numbers do not follow the uniform(-5,5) distribution in the model setup. What am I missing?
__________________
ACAS 7 8 9

#1004
Old 03-16-2019, 07:18 PM
AbedNadir
Member, CAS SOA
Join Date: Mar 2014
Studying for FCAS
Posts: 2,308

Quote:
Originally Posted by trueblade View Post
have a stupid question for Meyers

Page 17. Table 6 - Illustrative MCMC simulations

I don't get why the Betas in that table are between -0.5125 and 0.0518. These numbers do not follow the uniform(-5,5) distribution in the model setup. What am I missing?
I believe it's describing the prior distribution for the Bayesian methodology, which encodes your belief about where the parameter values should lie before the posteriors are calculated. These are the "relatively wide prior distributions" the paper describes before listing the values. The method then homes in on the most likely values.
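A toy sketch of what's going on (this is NOT Meyers's actual CCL/CSR model, just a one-parameter stand-in): put a deliberately wide uniform(-5,5) prior on a parameter, run a simple random-walk Metropolis sampler, and watch the posterior draws concentrate in a narrow band inside the prior range, much like the betas in Table 6.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical data: observations y_i = beta + Gaussian noise (sd = 0.5).
true_beta = -0.2
data = [true_beta + random.gauss(0, 0.5) for _ in range(50)]

def log_post(beta):
    # Uniform(-5, 5) prior: log-density constant inside, -inf outside.
    if not -5.0 < beta < 5.0:
        return float("-inf")
    # Gaussian log-likelihood up to a constant (sd = 0.5, so variance = 0.25).
    return -sum((y - beta) ** 2 for y in data) / (2 * 0.25)

# Random-walk Metropolis: a minimal stand-in for the MCMC behind Table 6.
beta, samples = 0.0, []
for step in range(20000):
    prop = beta + random.gauss(0, 0.2)
    # Accept with probability min(1, posterior ratio); exp(-inf) = 0 rejects
    # any proposal outside the prior range.
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(beta))):
        beta = prop
    if step >= 2000:  # discard burn-in
        samples.append(beta)

# The prior allows anything in (-5, 5), but the retained draws sit in a
# narrow band around the value the data supports.
print(round(min(samples), 3), round(max(samples), 3),
      round(statistics.fmean(samples), 3))
```

The narrow spread of the printed min/max relative to (-5, 5) is the whole point: the wide prior barely constrains the answer once the data speaks.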

#1005
Old 03-17-2019, 05:46 PM
hjacjswo
Member
Join Date: Dec 2009
College: About to be a super senior
Posts: 1,507

What the hell. For Mack 94, one of the chain ladder assumptions is that the variance of the next cumulative losses, given losses to date, is proportional to the cumulative losses to date.
Mack tests out 3 different selected LDF calculations:
fk0 = C^2 weighted average LDF
fk1 = C weighted average LDF (volume weighted LDF)
fk2 = unweighted average LDF (simple average LDF)
(C = cumulative losses)

And then, states variance assumption behind each method:
fk0 = Variance is proportional to 1
fk1 = Variance is proportional to C
fk2 = Variance is proportional to C^2

Maybe I'm very dumb. But, I feel like the variance assumptions of fk0 and fk2 should be flipped to match how they are calculated?? Do you guys understand the logic behind this?

#1006
Old 03-17-2019, 05:47 PM
AbedNadir
Member, CAS SOA
Join Date: Mar 2014
Studying for FCAS
Posts: 2,308

Quote:
Originally Posted by hjacjswo View Post
What the hell. For Mack 94, one of the chain ladder assumptions is that the variance of the next cumulative losses, given losses to date, is proportional to the cumulative losses to date.
Mack tests out 3 different selected LDF calculations:
fk0 = C^2 weighted average LDF
fk1 = C weighted average LDF (volume weighted LDF)
fk2 = unweighted average LDF (simple average LDF)
(C = cumulative losses)

And then, states variance assumption behind each method:
fk0 = Variance is proportional to 1
fk1 = Variance is proportional to C
fk2 = Variance is proportional to C^2

Maybe I'm very dumb. But, I feel like the variance assumptions of fk0 and fk2 should be flipped to match how they are calculated?? Do you guys understand the logic behind this?
one is the variance of C_{i,k+1} and the other is the variance of the link ratio C_{i,k+1}/C_{i,k}. A weighted average is the minimum-variance estimate when each weight is inversely proportional to the variance of the quantity being averaged, and Var(C_{i,k+1}/C_{i,k}) = Var(C_{i,k+1})/C_{i,k}^2. So C^2 weights on the link ratios (fk0) correspond to Var(C_{i,k+1}) proportional to 1, and equal weights (fk2) correspond to Var(C_{i,k+1}) proportional to C^2.

the variance assumptions are stated for the next cumulative amount C_{i,k+1}, not for the link ratio.
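A small numerical sketch of the weight/variance flip (toy numbers, not Mack's triangle): each of the three averages is the weighted least squares solution when the weight on each link ratio is the inverse of its assumed variance.

```python
# One development column of a hypothetical triangle.
C = [100.0, 250.0, 400.0]        # cumulative losses at age k
C_next = [180.0, 400.0, 700.0]   # cumulative losses at age k+1
F = [cn / c for cn, c in zip(C_next, C)]  # observed link ratios

def wls_factor(weights):
    # Minimizing sum w_i (F_i - f)^2 over f gives f = sum(w*F) / sum(w).
    return sum(w * f for w, f in zip(weights, F)) / sum(weights)

# Var(C_next | C) prop. to 1   => Var(F) prop. to 1/C^2 => weight C^2 -> fk0
fk0 = wls_factor([c ** 2 for c in C])
# Var(C_next | C) prop. to C   => Var(F) prop. to 1/C   => weight C   -> fk1
fk1 = wls_factor(C)
# Var(C_next | C) prop. to C^2 => Var(F) prop. to 1     => equal wts  -> fk2
fk2 = wls_factor([1.0] * len(C))

# Sanity checks against the familiar closed forms:
# fk1 is the volume-weighted (chain ladder) factor, column sum over column sum;
# fk2 is the simple average of the link ratios.
assert abs(fk1 - sum(C_next) / sum(C)) < 1e-12
assert abs(fk2 - sum(F) / len(F)) < 1e-12
print(fk0, fk1, fk2)
```

So the C^2 weighting in fk0 is not a C^2 variance assumption on C_{i,k+1}; it is the inverse of the link ratio's variance when Var(C_{i,k+1}) is constant, which is exactly the flip the post above is asking about.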

Last edited by AbedNadir; 03-17-2019 at 05:54 PM.

#1007
Old 03-17-2019, 05:55 PM
trueblade
Member, CAS
Join Date: Jul 2012
Location: Castlevania
Studying for Freedom
Posts: 961

Quote:
Originally Posted by AbedNadir View Post
I believe it's describing the prior distribution for the Bayesian methodology, which encodes your belief about where the parameter values should lie before the posteriors are calculated. These are the "relatively wide prior distributions" the paper describes before listing the values. The method then homes in on the most likely values.
Thanks, that makes sense, and after all, -0.5125 and 0.0518 are still within (-5,5) lol
__________________
ACAS 7 8 9

#1008
Old 03-17-2019, 06:02 PM
AbedNadir
Member, CAS SOA
Join Date: Mar 2014
Studying for FCAS
Posts: 2,308

Quote:
Originally Posted by trueblade View Post
Thanks, that makes sense, and after all, -0.5125 and 0.0518 are still within (-5,5) lol
yea, I'm not an expert with Bayesian statistics or anything, but note that with a hard uniform(-5,5) prior the posterior can't actually put any mass outside that range; the data just concentrates it into a much narrower band inside it

#1009
Old 03-17-2019, 08:20 PM
AbedNadir
Member, CAS SOA
Join Date: Mar 2014
Studying for FCAS
Posts: 2,308

based on what I know about it, it's an iterative process where the parameters keep updating and shifting to maximize the likelihood of the posterior. So if your random data was produced from a normal distribution with mean 0 and sd 4, and you gave the mean a prior range of 100-150 and the sd a prior range of 3-7, then if you run it long enough the mean will shift below 100. The range is just there to help with convergence of the posterior if you can give a close estimate.

#1010
Old Yesterday, 10:01 AM
Glenn Meyers
Member
Join Date: Mar 2002
Posts: 265

Quote:
Originally Posted by AbedNadir View Post
based on what I know about it, it's an iterative process where the parameters keep updating and shifting to maximize the likelihood of the posterior, so if your random data was produced from a normal distribution with mean 0 and sd 4 and then you gave it prior mean with range of 100-150, and sd with prior 3-7, then if you run it enough then the mean will shift below the value of 100, the range is just to help with convergence of the posterior if you can give a close estimate
The way to view Table 6 is as a random sample from the posterior distribution of the parameters. There is no "maximizing the likelihood of the posterior." The point of the discussion around Table 6 was to show how to go from a random sample from the posterior distribution of the parameters to a random sample of the posterior "outcomes" (loosely speaking).
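A toy sketch of that last step (hypothetical parameters, not the models in the paper): treat each "row of Table 6" as one posterior draw of the parameters, push each draw through the data model once, and you get a random sample of outcomes whose spread reflects both parameter risk and process risk.

```python
import random
import statistics

random.seed(7)

# Stand-in for rows of Table 6: 1,000 posterior draws of (mu, sigma).
# These draws are invented for illustration, not taken from the paper.
posterior_rows = [(random.gauss(10.0, 0.3), abs(random.gauss(1.0, 0.1)))
                  for _ in range(1000)]

# One simulated outcome per posterior row: outcome ~ Normal(mu, sigma).
# Varying (mu, sigma) across rows injects parameter risk; the Gaussian
# draw given the row injects process risk.
outcomes = [random.gauss(mu, sigma) for mu, sigma in posterior_rows]

print(round(statistics.fmean(outcomes), 1),
      round(statistics.stdev(outcomes), 2))
```

Note there is no optimization anywhere: the outcome sample is built directly from the posterior parameter sample, which is the point of the discussion around Table 6.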