Actuarial Outpost > CAS CAS Exam 7 Spring 2019 Progress Thread

#1001
03-16-2019, 12:31 AM
 hjacjswo Member Join Date: Dec 2009 College: About to be a super senior Posts: 1,507

Quote:
 Originally Posted by redearedslider Hardest what
Get ur mind out of the gutter
#1002
03-16-2019, 09:50 AM
 Marcie Member CAS Join Date: Feb 2015 Posts: 9,592

Quote:
 Originally Posted by redearedslider Hardest what
They're comparing hardness of carapaces and plastrons, of course.
#1003
03-16-2019, 10:20 AM

have a stupid question for Meyers

Page 17. Table 6 - Illustrative MCMC simulations

I don't get why the Betas in that table are between -0.5125 and 0.0518. These numbers don't seem to follow the uniform(-5,5) distribution in the model setup. What am I missing?
__________________
ACAS 7 8 9
#1004
03-16-2019, 07:18 PM
 AbedNadir Member CAS SOA Join Date: Mar 2014 Studying for FCAS Posts: 2,308

Quote:
 Originally Posted by trueblade have a stupid question for Meyers Page 17. Table 6 - Illustrative MCMC simulations I don't get why Betas in that table is between -0.5125 and 0.0518. These numbers do not follow the uniform distribution (-5,5) in the model setup. What am I missing?
I believe it's describing the prior distribution for the Bayesian methodology, which is your belief about where the values should lie before the posteriors are calculated. These are the "relatively wide prior distributions" it describes before listing the values. The method then homes in on the most likely values.
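A toy sketch may help. This is not Meyers' actual model (his paper fits CCL/CIT-type models with MCMC software); it's just a minimal random-walk Metropolis sampler for one parameter with a wide Uniform(-5, 5) prior, showing how the posterior draws end up concentrated well inside the prior range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (not Meyers' model): 50 observations from Normal(mu, 1),
# with a "relatively wide" Uniform(-5, 5) prior on mu.
data = rng.normal(0.3, 1.0, size=50)

def log_post(mu):
    if not (-5 < mu < 5):  # prior density is zero outside (-5, 5)
        return -np.inf
    return -0.5 * np.sum((data - mu) ** 2)  # log-likelihood up to a constant

# Random-walk Metropolis: produces draws from the posterior of mu
mu, draws = 0.0, []
for _ in range(5000):
    prop = mu + rng.normal(0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    draws.append(mu)

samples = np.array(draws[1000:])  # discard burn-in
```

With 50 data points the likelihood dominates the wide prior, so the posterior draws sit in a narrow band around the sample mean — analogous to the betas in Table 6 landing between -0.5125 and 0.0518 despite the (-5, 5) prior.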
__________________
#1005
03-17-2019, 05:46 PM
 hjacjswo Member Join Date: Dec 2009 College: About to be a super senior Posts: 1,507

What the hell. For Mack 94, one of the chain ladder assumptions is that the variance of the next cumulative losses, given losses to date, is proportional to the cumulative losses to date.
Mack tests out 3 different selected LDF calculations:
fk0 = C^2 weighted average LDF
fk1 = C weighted average LDF (volume weighted LDF)
fk2 = unweighted average LDF (simple average LDF)
(C = cumulative losses)

And then states the variance assumption behind each method:
fk0 = Variance is proportional to 1
fk1 = Variance is proportional to C
fk2 = Variance is proportional to C^2

Maybe I'm very dumb. But, I feel like the variance assumptions of fk0 and fk2 should be flipped to match how they are calculated?? Do you guys understand the logic behind this?
#1006
03-17-2019, 05:47 PM
 AbedNadir Member CAS SOA Join Date: Mar 2014 Studying for FCAS Posts: 2,308

Quote:
 Originally Posted by hjacjswo What the hell. For Mack 94, one of the chain ladder assumptions is that the variance of the expected losses is proportional to the cumulative losses to date. Mack tests out 3 different selected LDF calculations: fk0 = C^2 weighted average LDF fk1 = C weighted average LDF (volume weighted LDF) fk2 = unweighted average LDF (simple average LDF) (C = cumulative losses) And then, states variance assumption behind each method: fk0 = Variance is proportional to 1 fk1 = Variance is proportional to C fk2 = Variance is proportional to C^2 Maybe I'm very dumb. But, I feel like the variance assumptions of fk0 and fk2 should be flipped to match how they are calculated?? Do you guys understand the logic behind this?
one is the variance of C_{i,j+1} and the other is the variance of the link ratio (C_{i,j+1}/C_{i,j})

the variance assumptions are stated for C_{i,j+1} given C_{i,j}, not for the link ratio. Each fk is a weighted average of link ratios with weights inversely proportional to the variance of the link ratio, and Var(link ratio) = Var(C_{i,j+1})/C_{i,j}^2. So if Var(C_{i,j+1}) is proportional to 1, the optimal weights are proportional to C^2 (fk0), and if it's proportional to C^2, the weights are equal (fk2). That's why they look flipped.
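One way to see the pairing: each of Mack's three estimators is the weighted-least-squares slope through the origin of C_{j+1} on C_j, with weights inversely proportional to the assumed variance of C_{j+1}. A quick numpy check (the loss figures below are made up, not from Mack's paper):

```python
import numpy as np

# Hypothetical cumulative losses at development ages j and j+1
C_j = np.array([1000.0, 2000.0, 4000.0])
C_j1 = np.array([1500.0, 2900.0, 6200.0])
F = C_j1 / C_j  # individual link ratios

def wls_slope(alpha):
    """WLS slope through the origin of C_{j+1} on C_j,
    assuming Var(C_{j+1} | C_j) proportional to C_j**alpha."""
    w = C_j ** -alpha  # weights are inverse to the assumed variance
    return np.sum(w * C_j * C_j1) / np.sum(w * C_j ** 2)

fk0 = np.sum(C_j ** 2 * F) / np.sum(C_j ** 2)  # C^2-weighted average LDF
fk1 = np.sum(C_j1) / np.sum(C_j)               # volume-weighted LDF
fk2 = F.mean()                                  # simple average LDF

assert np.isclose(wls_slope(0), fk0)  # Var prop. to 1   -> C^2 weights
assert np.isclose(wls_slope(1), fk1)  # Var prop. to C   -> C weights
assert np.isclose(wls_slope(2), fk2)  # Var prop. to C^2 -> equal weights
```

So the "flip" is exactly the inverse-variance weighting: the smaller the assumed variance of C_{j+1}, the more weight the big triangles get.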
__________________

Last edited by AbedNadir; 03-17-2019 at 05:54 PM..
#1007
03-17-2019, 05:55 PM

Quote:
 Originally Posted by AbedNadir I believe it's describing the prior distribution for the Bayesian methodology, which is your belief for where the values should belong before the posteriors are calculated. These are the "relatively wide prior distributions" it describes before listing the values. the method then homes in on the most likely values
Thanks, that makes sense, and after all, -0.5125 and 0.0518 are still within (-5,5) lol
__________________
ACAS 7 8 9
#1008
03-17-2019, 06:02 PM
 AbedNadir Member CAS SOA Join Date: Mar 2014 Studying for FCAS Posts: 2,308

Quote:
 Originally Posted by trueblade Thanks, that makes sense, and after all, -0.5125 and 0.0518 are still within (-5,5) lol
yea, I'm not an expert with Bayesian statistics or anything, but with a bounded uniform prior like (-5,5) the posterior draws can't actually fall outside that range, since the prior density is zero there
__________________
#1009
03-17-2019, 08:20 PM
 AbedNadir Member CAS SOA Join Date: Mar 2014 Studying for FCAS Posts: 2,308

based on what I know about it, it's an iterative process where the parameters keep updating and shifting to maximize the likelihood of the posterior, so if your random data was produced from a normal distribution with mean 0 and sd 4 and then you gave it prior mean with range of 100-150, and sd with prior 3-7, then if you run it enough then the mean will shift below the value of 100, the range is just to help with convergence of the posterior if you can give a close estimate
__________________
#1010
Yesterday, 10:01 AM
 Glenn Meyers Member Join Date: Mar 2002 Posts: 265

Quote:
 Originally Posted by AbedNadir based on what I know about it, it's an iterative process where the parameters keep updating and shifting to maximize the likelihood of the posterior, so if your random data was produced from a normal distribution with mean 0 and sd 4 and then you gave it prior mean with range of 100-150, and sd with prior 3-7, then if you run it enough then the mean will shift below the value of 100, the range is just to help with convergence of the posterior if you can give a close estimate
The way to view Table 6 is as a random sample from the posterior distribution of the parameters. There is no "maximizing the likelihood of the posterior." The point of the discussion around Table 6 was to show how to go from a random sample from the posterior distribution of the parameters to a random sample of the posterior "outcomes" (loosely speaking).
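A schematic of that last step, with made-up numbers and a made-up lognormal outcome model rather than Meyers' actual ones: each row of the posterior parameter sample generates one simulated outcome, and the collection of those outcomes is itself a random sample from the posterior distribution of the outcome.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for Table 6: each row is one MCMC draw
# (mu_i, sigma_i) from the posterior distribution of the parameters.
mu_draws = rng.normal(10.0, 0.2, size=1000)
sigma_draws = rng.gamma(20.0, 0.05, size=1000)

# One simulated outcome per parameter draw: together these form a
# random sample from the posterior distribution of the outcome.
# No likelihood is being maximized anywhere in this step.
outcomes = rng.lognormal(mean=mu_draws, sigma=sigma_draws)

# Summaries of `outcomes` (percentiles, mean) then describe the
# predictive distribution of, e.g., ultimate losses.
```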

 Tags brehm is my daddy, cfrf needs a forum, cs was here, firsttag, hi cs, mack 69