Actuarial Outpost - Conditional expectation problem

#1
08-02-2008, 04:46 PM
 xsimpx Member SOA Join Date: Jul 2008 Favorite beer: Lagunitas IPA Posts: 118
Conditional expectation problem

An insurance company designates 10% of its customers as high risk and 90% as low risk. The number of claims made by a customer in a calendar year is Poisson distributed with mean theta and is independent of the number of claims made by that customer in the previous calendar year. For high risk customers theta = 0.6, while for low risk customers theta = 0.1. Calculate the expected number of claims made in calendar year 1998 by a customer who made one claim in calendar year 1997.

Could someone explain how to arrive at the solution?
#2
08-02-2008, 07:21 PM
 atomic Member CAS Join Date: Jul 2006 Posts: 4,088

Let H be the event that the customer is high risk, and L be the event that the customer is low risk. Then let N(y) be the number of claims made by a given customer in CY y. So

$\Pr[H | N(97) = 1] = \frac{\Pr[H {\small \cap} N(97) = 1]}{\Pr[N(97) = 1]} = \frac{\Pr[N(97) = 1 | H] \Pr[H]}{\Pr[N(97) = 1]}$.

This is simply Bayes' theorem. Now using the law of total probability to expand the denominator,

$\Pr[H | N(97) = 1] = \frac{\Pr[N(97) = 1 | H]\Pr[H]}{\Pr[N(97) = 1 | H]\Pr[H] + \Pr[N(97) = 1 | L]\Pr[L]}$.

Since we are given

$\Pr[N(y) = k | H] = e^{-.6} \frac{(0.6)^k}{k!}$,

$\Pr[N(y) = k | L] = e^{-.1} \frac{(0.1)^k}{k!}$,

and furthermore, Pr[H] = 0.1, Pr[L] = 0.9, we easily find

$\Pr[H | N(97) = 1] = \frac{(0.6) e^{-.6} (0.1)}{(0.6) e^{-.6} (0.1) + (0.1) e^{-.1} (0.9)} = 0.287929$.

Then we find

Pr[L | N(97) = 1] = 1 - 0.287929 = 0.712071,

and it follows that

E[N(98) | N(97) = 1] = E[N(98) | H] Pr[H | N(97) = 1] + E[N(98) | L] Pr[L | N(97) = 1].

Since the expected value of a Poisson distribution is equal to its parameter, we see that the above simplifies to

E[N(98) | N(97) = 1] = (0.6)(0.287929) + (0.1)(0.712071) = 0.243964.
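The whole calculation can be checked numerically. Here is a short Python sketch (not from the original post) that follows the same steps: Poisson likelihoods, Bayes' theorem for the posterior risk-class probabilities, then the mixture of the class means.

```python
from math import exp, factorial

def poisson_pmf(k, theta):
    """P(N = k) for a Poisson random variable with mean theta."""
    return exp(-theta) * theta**k / factorial(k)

# Prior risk-class probabilities and Poisson means from the problem.
prior = {"H": 0.1, "L": 0.9}
theta = {"H": 0.6, "L": 0.1}

# Posterior P(class | N(97) = 1) via Bayes' theorem.
likelihood = {c: poisson_pmf(1, theta[c]) for c in prior}
evidence = sum(likelihood[c] * prior[c] for c in prior)
posterior = {c: likelihood[c] * prior[c] / evidence for c in prior}

# E[N(98) | N(97) = 1]: mix the class means by the posterior weights.
expected_claims = sum(theta[c] * posterior[c] for c in prior)

print(round(posterior["H"], 6))   # 0.287929
print(round(expected_claims, 6))  # 0.243964
```

Both printed values match the hand calculation above.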

Last edited by atomic; 08-02-2008 at 07:36 PM..
#3
08-03-2008, 11:08 AM
 xsimpx Member SOA Join Date: Jul 2008 Favorite beer: Lagunitas IPA Posts: 118

Thanks that makes a lot more sense now.
#4
08-04-2008, 03:35 PM
 cmichener Join Date: Jun 2008 Studying for Exam MLC Posts: 23

Wait, why is this conditional if the number of claims from year to year is independent?
#5
08-04-2008, 03:42 PM
 Gandalf Site Supporter Site Supporter SOA Join Date: Nov 2001 Location: Middle Earth Posts: 26,464

There are many customers. atomic's claims in one year are independent of atomic's claims in another year. xsimpx's claims in one year are independent of xsimpx's claims in another year.

In this problem, you are looking at the same customer in both years. This means the number of claims in the first year influences the probability that this particular customer is high risk, and thus influences the expected number of claims in the second year.

To take a more extreme example, suppose low risk always has 1 or 2 claims, p=.5 each; high risk always has 10 or 20, p=.5 each. If you see 10 claims in year 1, then you know you have a high risk customer. So in year 2, p(10)=p(20)=.5. But the fact that this high risk customer had 10 rather than 20 in year 1 doesn't change his likelihood of 10 vs 20 in year 2.
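The extreme example above can be sketched in a few lines of Python. The 10%/90% prior is borrowed from the original problem (an assumption; the post itself doesn't fix one):

```python
# Low risk makes 1 or 2 claims (p = .5 each); high risk makes 10 or 20.
prior = {"H": 0.1, "L": 0.9}
pmf = {"H": {10: 0.5, 20: 0.5}, "L": {1: 0.5, 2: 0.5}}

def posterior_high(n1):
    """P(H | N1 = n1) by Bayes' theorem over the two risk classes."""
    num = pmf["H"].get(n1, 0.0) * prior["H"]
    den = num + pmf["L"].get(n1, 0.0) * prior["L"]
    return num / den

print(posterior_high(10))  # 1.0 -- 10 claims can only come from high risk
# Given high risk, year 2 is just P(10) = P(20) = 0.5, and whether year 1
# showed 10 or 20 claims makes no difference:
print(posterior_high(20) == posterior_high(10))  # True
```

So the posterior on the risk class is what the year-1 observation updates; within a class, the years really are independent.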
#6
08-04-2008, 08:07 PM
 atomic Member CAS Join Date: Jul 2006 Posts: 4,088

Quote:
 Originally Posted by cmichener Wait, why is this conditional if the number of claims from year to year is independent?
Because knowledge of past claims experience provides partial information about the insured's risk class, which is implicitly a fixed property of the insured rather than something redrawn each year. The expectation is therefore conditional: the more prior observations you have, the more confident you can be about which risk class the insured belongs to, improving on the 10% high / 90% low distribution that applies only in the absence of any information.

A simple example is tossing a die that may be either biased or fair. You inspect it closely, and prior to any trial you have no reason to favor either possibility. As you roll the die and record the results, you can begin to infer with greater confidence whether the die is biased (though you can never be truly certain). This notion of revising one's prior belief based on observed outcomes is the foundation of Bayesian probability theory.
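The die example can be made concrete with a toy sketch. The specific numbers below are assumptions (the example names no particular bias): suppose the die is either fair or biased toward six with P(6) = 1/2, starting from a 50/50 belief.

```python
# Hypothetical setup: "biased" means P(rolling a six) = 1/2.
p_six = {"fair": 1 / 6, "biased": 1 / 2}
belief = {"fair": 0.5, "biased": 0.5}

def update(belief, rolled_six):
    """One Bayesian update of the belief after observing a single roll."""
    like = {d: (p_six[d] if rolled_six else 1 - p_six[d]) for d in belief}
    total = sum(like[d] * belief[d] for d in belief)
    return {d: like[d] * belief[d] / total for d in belief}

# Three sixes in a row push the belief strongly toward "biased",
# but never all the way to certainty.
for _ in range(3):
    belief = update(belief, rolled_six=True)
print(round(belief["biased"], 4))  # 0.9643
```

This is exactly the same machinery as the risk-class problem above, with "fair"/"biased" playing the role of low/high risk.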
