Actuarial Outpost Developing layered losses - which order?


#1
01-03-2018, 04:20 PM
 TDH CAS Non-Actuary Join Date: Dec 2016 Posts: 23
Developing layered losses - which order?

If you had individual claims with AOC limits and excess, and aggregate limits and excess, at what point do you develop?

I thought the order would be:

1) Trend individual losses
2) Layer individual losses
3) Aggregate by year
4) Apply aggregate loss modifiers
5) Develop the losses in the layer
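For concreteness, here is a minimal sketch of that order in Python. All claim amounts, the trend rate, the layer terms, the aggregate limit, and the LDFs are hypothetical illustration values, not selections:

```python
# Sketch of the proposed order: trend -> layer -> aggregate -> aggregate mods -> develop.
# Every number below is a made-up illustration value.

def layer_loss(loss, attachment, limit):
    """Portion of a single claim falling in the layer (limit xs attachment)."""
    return min(max(loss - attachment, 0.0), limit)

claims = [  # (accident_year, reported_loss) -- hypothetical
    (2015, 300_000), (2015, 80_000), (2016, 600_000), (2016, 150_000),
]
trend = 1.05                          # annual severity trend, hypothetical
target_year = 2018
attachment, limit = 250_000, 250_000  # 250k xs 250k
agg_limit = 400_000                   # aggregate limit, hypothetical

# Steps 1 and 2: trend, then layer, each individual claim
layered = [(ay, layer_loss(loss * trend ** (target_year - ay), attachment, limit))
           for ay, loss in claims]

# Step 3: aggregate by accident year
by_year = {}
for ay, amt in layered:
    by_year[ay] = by_year.get(ay, 0.0) + amt

# Step 4: apply aggregate loss modifiers (here, just an aggregate limit)
capped = {ay: min(total, agg_limit) for ay, total in by_year.items()}

# Step 5: develop the layered losses.  The LDFs would need to be built on
# the same trended, layered basis -- which is the crux of the question.
ldf = {2015: 1.10, 2016: 1.25}        # hypothetical layered-loss LDFs
ultimate = {ay: total * ldf[ay] for ay, total in capped.items()}
```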

However, http://www.cii.co.uk/knowledge/resou...-process/43681

states at step 10 that we apply aggregate modifiers after developing. I don't see the logic here: the author seems to be saying to apply only the individual modifiers, then develop, then apply the aggregate modifiers. What's the reasoning?

Last edited by TDH; 01-03-2018 at 04:43 PM..
#2
01-03-2018, 11:11 PM
 Harbinger Notes Contributor CAS Join Date: Feb 2005 Posts: 2,334

Can you provide some background on what you’re pricing/analyzing?
__________________
har·bin·ger (här'bin-jer): One that indicates or foreshadows what is to come; a forerunner.
#3
01-04-2018, 05:11 AM
 TDH CAS Non-Actuary Join Date: Dec 2016 Posts: 23

a large account
#4
01-04-2018, 08:19 AM
 Maphisto's Sidekick Member CAS Join Date: Nov 2001 Location: South Park Genetics Lab College: Ardnox Favorite beer: The kind with alcohol Posts: 2,192

The nice thing about mathematics is that you can do things in a different order, and arrive at the correct answer (or an approximately equally correct answer) provided the intermediate steps use information that is crafted with assumptions that are compatible with where you are in your overall process.

I've done quite a bit with large account pricing in property-casualty. I think I minimize my headaches by trending individual claims, dividing the individual claims into layers, aggregating the losses in the layers, and then developing the aggregations of the layers.

However, the development factors I apply have to themselves be created using trended, layered losses.

I prefer to do this because it neatly handles the questions of what to do about claims not yet reported, claims that develop into a higher layer, and claims that fall out of a layer as they close. The downside, however, is that if you are developing excess layer(s), the data can become so sparse as to require judgment in coming up with suitable development factors.

Oh, and when coming up with trend to apply to individual uncapped claims, keep in mind that some of the more popular industry trend metrics (at least in the US) are based on capped losses, and therefore some modification may be required before you use them.
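The consistency requirement above (LDFs built from trended, layered losses) amounts to building the development triangle on the same basis as the losses being developed. A rough sketch, where the triangle cells are hypothetical trended, layered losses and a no-tail assumption is made purely for illustration:

```python
# Hypothetical triangle of trended, layered losses.
# Rows = accident years; columns = development ages 12/24/36 months.
triangle = [
    [100.0, 160.0, 176.0],   # AY 2015, assumed fully developed at 36
    [120.0, 198.0],          # AY 2016
    [150.0],                 # AY 2017
]

def age_to_age(tri, col):
    """Volume-weighted age-to-age factor from column col to col + 1."""
    num = sum(row[col + 1] for row in tri if len(row) > col + 1)
    den = sum(row[col] for row in tri if len(row) > col + 1)
    return num / den

ata = [age_to_age(triangle, c) for c in range(2)]  # 12-24 and 24-36 factors

# Cumulative age-to-ultimate factors, assuming no tail beyond 36 months.
cdf_24 = ata[1]
cdf_12 = ata[0] * ata[1]
```

Because the cells are already layered, sparsity shows up directly in the triangle, which is where the judgment on factor selection comes in.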
#5
01-04-2018, 12:39 PM
 TDH CAS Non-Actuary Join Date: Dec 2016 Posts: 23

Quote:
 Originally Posted by Maphisto's Sidekick The nice thing about mathematics is that you can do things in a different order, and arrive at the correct answer (or an approximately equally correct answer) provided the intermediate steps use information that is crafted with assumptions that are compatible with where you are in your overall process. I've done quite a bit with large account pricing in property-casualty. I think I minimize my headaches by trending individual claims, dividing the individual claims into layers, aggregating the losses in the layers, and then developing the aggregations of the layers. However, the development factors I apply have to themselves be created using trended, layered losses. I prefer to do this because it neatly handles the questions of what to do about claims not yet reported, claims that develop into a higher layer, and claims that fall out of a layer as they close. The downside, however, is that if you are developing excess layer(s), the data can become so sparse as to require judgment in coming up with suitable development factors. Oh, and when coming up with trend to apply to individual uncapped claims, keep in mind that some of the more popular industry trend metrics (at least in the US) are based on capped losses, and therefore some modification may be required before you use them.
Thanks for this.

The data I have right now is fairly sparse. The only triangle I have is for the line of business, rather than the specific risk. I'll see if I can get more data; otherwise I'll go with what I have for now.

Thanks again.
#6
01-05-2018, 12:11 PM
 Maphisto's Sidekick Member CAS Join Date: Nov 2001 Location: South Park Genetics Lab College: Ardnox Favorite beer: The kind with alcohol Posts: 2,192

Quote:
 Originally Posted by TDH Thanks for this. The data I have right now is fairly sparse. The only triangle I have is for the line of business, rather than the specific risk.
That's not uncommon.

If you have the time and data, and depending on what it is that you're looking at, a few more points to consider:
• For accounts with large retentions, the delay in getting notice of loss to the carrier probably depends more on the account (and/or its TPA) than on the carrier; but once a claim is on the books, the carrier's reserving practices may not be that different from other similarly mature claims.
• There was a paper recently that discussed techniques for credibility-weighting thin loss development data. (Don't have the details off the top of my head, unfortunately.) It hasn't proven worth the effort with the kinds of accounts I typically look at, but for something huge....
• Try to sensitivity-test the assumptions and judgment calls you make. That will give you a taste for how material your selections might be, as well as some insight into how much to load your indication for parameter risk.
#7
02-12-2018, 02:35 PM
 DeepPurple Member Join Date: Jun 2004 Posts: 3,974

Without going into specific techniques that are likely to be proprietary, I'd like the OP to consider the similarities/differences among these four trend assumptions, each representing 10% combined trend.

1) Severity is trending at 10% and frequency is trending at 0%
2) Sev = 0%, Freq = 10%
3) Sev = 5%, Freq = 5%
4) Sev = 20%, Freq = -10%
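The reason the split matters for layered losses: severity trend pushes claims across the attachment point, while frequency trend only scales claim counts. A small illustration comparing assumptions 1) and 2) for a 250k xs 250k layer, with made-up losses chosen to sit just below the attachment:

```python
# Same 10% combined trend, very different layered result.
# All figures are hypothetical.

def layer_loss(loss, attachment=250_000, limit=250_000):
    """Portion of a single claim falling in the 250k xs 250k layer."""
    return min(max(loss - attachment, 0.0), limit)

losses = [240_000, 240_000]   # two claims just below the attachment

# Assumption 1: severity +10%, frequency flat --
# each claim is pushed across the attachment and pierces the layer.
layer_sev = sum(layer_loss(x * 1.10) for x in losses)

# Assumption 2: frequency +10%, severity flat --
# expected counts scale by 1.10, but per-claim layered loss stays zero.
layer_freq = 1.10 * sum(layer_loss(x) for x in losses)
```

Under assumption 1 the layer picks up loss dollars; under assumption 2 it stays empty, despite the identical combined trend.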
__________________
Come on. Let's go space truckin'. Come on!
#8
02-12-2018, 02:48 PM
 DeepPurple Member Join Date: Jun 2004 Posts: 3,974

Quote:
 Originally Posted by Maphisto's Sidekick The nice thing about mathematics is that you can do things in a different order, and arrive at the correct answer (or an approximately equally correct answer) provided the intermediate steps use information that is crafted with assumptions that are compatible with where you are in your overall process.

I think this might be rule-of-thumb worthy AT BEST. I do not believe it to be true in all cases.

Consider a situation where the layer attaches above the largest untrended loss but NOT above the largest trended loss.

If you do not apply trend, there is NO FACTOR large enough to multiply against $0 in the layer to get you an answer that is commensurate with the loss dollars that emerge when the layer is applied to losses trended for severity.
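A tiny numeric illustration of that failure mode, with hypothetical figures:

```python
# The largest untrended loss sits below the 500k attachment, so the
# untrended layered loss is zero; trended, the same claim pierces the
# layer.  All numbers are hypothetical.

def layer_loss(loss, attachment=500_000, limit=500_000):
    """Portion of a single claim falling in the 500k xs 500k layer."""
    return min(max(loss - attachment, 0.0), limit)

largest_loss = 480_000
trend = 1.08

untrended_in_layer = layer_loss(largest_loss)         # 0: below attachment
trended_in_layer = layer_loss(largest_loss * trend)   # positive: pierces layer

# No development factor applied to 0 can reproduce the trended result,
# so trend has to be applied before layering in this situation.
```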
__________________
Come on. Let's go space truckin'. Come on!

Last edited by DeepPurple; 02-12-2018 at 03:11 PM..
#9
02-12-2018, 04:03 PM
 r8ingStuff CAS AAA Join Date: Apr 2016 Location: Philadelphia Studying for Nothin' Favorite beer: Urban Village Brewery Posts: 10

Has anyone used the difference of two limited ultimate trended losses - say, losses limited at $500k minus losses limited at $250k - to evaluate a $250k xs $250k layer? What would be the difference in assumptions there? I'm thinking of the case where excess LDFs aren't available.
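For reference, the identity being relied on here is that a claim's contribution to a 250k xs 250k layer equals its loss limited to 500k minus its loss limited to 250k. A quick check with made-up losses (the open question, as the next reply notes, is what happens once different LDFs are applied to the two limited totals):

```python
# Per-claim identity: layer(250k xs 250k) = min(loss, 500k) - min(loss, 250k).
# Losses below are hypothetical.

def limited(loss, limit):
    return min(loss, limit)

def layer_loss(loss, attachment=250_000, limit=250_000):
    return min(max(loss - attachment, 0.0), limit)

losses = [100_000, 300_000, 800_000]

diff = sum(limited(x, 500_000) - limited(x, 250_000) for x in losses)
direct = sum(layer_loss(x) for x in losses)
# diff and direct agree before development; applying separate LDFs to the
# two limited totals is where additional assumptions creep in.
```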
#10
02-13-2018, 10:37 AM
 DeepPurple Member Join Date: Jun 2004 Posts: 3,974

Quote:
 Originally Posted by r8ingStuff Has anyone used the difference of two limited ultimate trended losses - say, losses limited at $500k minus losses limited at $250k - to evaluate a $250k xs $250k layer? What would be the difference in assumptions there? I'm thinking of the case where excess LDFs aren't available.
Would you use the same ldf's on the two sets?
__________________
Come on. Let's go space truckin'. Come on!



