

ShortTerm Actuarial Math Old Exam C Forum 

#2




Years 1 and 2 are treated as one period. There is no split between claims that happened in year 1 and claims that happened in year 2, so there is no way to treat it as 2 separate years. From there on, each period is 2 years. So when you apply the credibility factor you found, you are actually estimating the number of claims for years 3 and 4 combined. Since you want the number for just the 3rd year, you divide by 2. I'm not sure if that makes sense, but I tried.
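That "divide by 2" step can be sketched numerically. All the numbers below are made-up illustrative values, not from any specific exam problem:

```python
# Hedged sketch of the "divide by 2" step, with made-up illustrative
# numbers (z, x_bar, mu are NOT from any specific exam problem).
z = 0.2       # hypothetical credibility factor for n = 1 observation
x_bar = 2.0   # hypothetical claims observed in the single 2-year period
mu = 1.5      # hypothetical overall mean claims per 2-year period

# The credibility estimate is on the same basis as the data: a 2-year count.
est_years_3_and_4 = z * x_bar + (1 - z) * mu

# Year 3 alone is half of that 2-year figure.
est_year_3 = est_years_3_and_4 / 2
print(est_year_3)  # 0.8
```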

#4





#5




I'm having trouble understanding n for this problem. In fact, I'm having trouble seeing consistency in the use of n across credibility problems. Here, isn't the exposure unit an insured-2-years? So you have 100 two-year exposures, and n = 100, not 1?

For instance, in problem 263 you are given: (i) The number of claims incurred in a month by any insured follows a Poisson distribution with mean λ.

Month   Number of Insureds   Number of Claims
  1            100                  10
  2            150                  11
  3            250                  14

So the exposure unit is insured-months: you have 100 insured-months for month 1, 150 insured-months for month 2, and 250 insured-months for month 3, for a total of 500 insured-months = n. Here, n doesn't equal 3, the number of months. So, going back to problem 197, why does n equal 1 (the number of periods of data) as opposed to 100 (the number of insured-2-years)?
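The exposure bookkeeping for problem 263 can be sketched like this, using the data quoted above (a minimal illustration of the counting only, not a full credibility solution):

```python
# Sketch of the insured-month exposure count for the quoted problem 263 data.
insureds = [100, 150, 250]   # insureds observed in months 1, 2, 3
claims = [10, 11, 14]

n = sum(insureds)            # total exposure in insured-months
total_claims = sum(claims)
lam_hat = total_claims / n   # estimated Poisson mean per insured-month

print(n, total_claims, lam_hat)  # 500 35 0.07
```

The point of the question is exactly here: in this problem n = 500 insured-months, while in problem 197 n is taken as 1 period.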
#6




Just thinking aloud here. I have similar issues finding N too.

1) These were my first thoughts. Part of the issue is that we use the data for two purposes: one to find the missing parameter of the Poisson, the other to find Z. We have data for 2 years, taken as the sample. One way of solving it treats it as a sample of one two-year period (N = 1). Another is to treat it as 2 years of data (the sample values would need to be reduced proportionally to 1/2 the original values), giving N = 2. We are trying to figure out the number of claims per policy per year. In one case we figure it for 2 years and then divide by 2 to get one year. They could have given the amount for year 1 and the amount for year 2 separately. Or they could have given data for 5 years, and then N = 5; the sample would have increased proportionally. The number of policies is used merely to find the average (and the sample standard deviation) of the number of claims per policy per year, and that's what we are trying to find out. You have yearly samples of the number of claims per policy. That's why it's N = 1, or 2, or 5, depending on the number of years sampled, and not N = 100 (for 2 years). Sample 197 says that the number of claims per year follows a Poisson. Sample 263 says "The number of claims incurred in a month by any insured follows a Poisson distribution with mean λ."

2) After reading about this issue I came up with something that may shed new light on it. Basically we are trying to figure out how to partition the weights between 2 different means, based on how credible each value is. For the sample, the more data points there are, the more credible the sample mean, x-bar, will be relative to the mean of the model, mu. In the semiparametric case, sample 197, we have data that we use to figure out the lambda of the Poisson frequency. In the sample we basically have one 2-year period for one policyholder, so N = 1. That is 1 exposure, not the 100 policies shown before, which are used only to estimate the parameter.

These 100 policies are not the sample. In the Bühlmann–Straub case, sample 263, the data sample has many more exposures. We have to be careful how it's read, as the number of insureds changes month to month. Basically there are 500 insured-months of data with 35 claims total. We find the estimated value PC of claims per insured-month, so the exposure is insured-months, i.e. N = 500. Then the result of the credibility weighting has to be multiplied by the number of months (12) and insureds (300), that is, by 3600.

I think that explains the difference in how the N in Z is calculated for the two sample problems. Something I could not find explained anywhere is that the model, in a way, needs to be connected to this N; it's like giving a weight of 1 to the model and N to the sample. What I did find is an alternate way to express it that perhaps sheds additional light:

Z = VHM / (VHM + EPV/N)

where the second term of the denominator is the variance of the sample mean. The whole denominator is then the total variance of the estimator x-bar.
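The alternate expression for Z mentioned above can be checked numerically against the usual Z = N/(N + k) form with k = EPV/VHM. The EPV and VHM values below are arbitrary illustrative numbers:

```python
# Numerical check that Z = N/(N + k) and Z = VHM/(VHM + EPV/N) agree,
# using arbitrary illustrative values for EPV, VHM, and N.
epv, vhm, n = 0.76, 0.09, 1.0

k = epv / vhm                         # Buhlmann's k
z_form1 = n / (n + k)                 # usual form
z_form2 = vhm / (vhm + epv / n)       # alternate form from the post

assert abs(z_form1 - z_form2) < 1e-12
print(z_form1)
```

Multiplying numerator and denominator of the first form by VHM/N turns it into the second, so the two are algebraically identical.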
__________________
German
#7




n in Bühlmann (and m in Bühlmann–Straub) and n in full credibility is always the number of observations of the variable of interest.
__________________
Jim Daniel
Jim Daniel's Actuarial Seminars
www.actuarialseminars.com
jimdaniel@actuarialseminars.com
#8




Quote:
What was difficult to pin down exactly was the variable of interest in each of these problems, and it may be similar in many others I found. In the semiparametric case, the first table of 100 policies confused me and the other student. We understand it's exposures for frequency, but in one problem the variable was the number of claims in one 2-year period (to add some pain to the example), so the exposure unit was the 2-year period: N = 1 (2-year period). In the other it was insured-months, which was kind of hidden too: N = 500 (insured-months).
__________________
German
#9




Just to add to this problem: the original table tells us about groups of policies with 0 to 4 claims, and the example asks about one of the policies that had 1 claim in the period. They could have asked about the others, e.g. what the expected number of claims in year 3 would be for somebody who had no claims, or somebody who had 3 claims. Since Z is small, the weight given to the actual number of claims is small, and most of the weight goes to the model mean of 0.76. The slope in this case is 1/2 of Z, due to the 2 years used in the sample. So the values of E[N3] are:
N1 + N2    E[N3]
   0       0.34
   1       0.39
   2       0.45
   3       0.50
   4       0.55
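That table can be reproduced with a short script. This is a sketch assuming the claims-experience data from sample problem 197 (100 policies observed for one 2-year period, with 50, 30, 15, 4, and 1 policies reporting 0, 1, 2, 3, and 4 total claims respectively); that assumed data is consistent with the 0.76 mean quoted above:

```python
# Sketch reproducing the E[N3] table, assuming the claims data from
# sample problem 197: 100 policies observed for one 2-year period, with
# 50, 30, 15, 4, 1 policies reporting 0, 1, 2, 3, 4 total claims.
counts = {0: 50, 1: 30, 2: 15, 3: 4, 4: 1}
n_pol = sum(counts.values())                            # 100 policies

x_bar = sum(x * c for x, c in counts.items()) / n_pol   # sample mean, 0.76
s2 = sum(c * (x - x_bar) ** 2 for x, c in counts.items()) / (n_pol - 1)

epv = x_bar                # Poisson: expected process variance = mean
vhm = s2 - epv             # semiparametric estimate of Var(lambda)
z = 1 / (1 + epv / vhm)    # N = 1 two-year observation per policyholder

# Credibility estimate for years 3-4 combined, halved to get year 3 alone.
table = {x: round((z * x + (1 - z) * x_bar) / 2, 2) for x in range(5)}
print(table)
```

This gives z of roughly 0.107 and reproduces the 0.34, 0.39, 0.45, 0.50, 0.55 column, including the slope of Z/2 per observed claim.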
__________________
German

