

#1




Paid bootstrapping and Incurred LDF methods
Normally we use the bootstrap method on paid triangles. If the central estimates are based on the incurred LDF method, how do we explain the disconnect?
A second question: if we use an incurred bootstrap, what needs to be done to bring the prediction error of the IBNR down to the unpaid level, and what assumptions go with it? If you can point out some good references, that would be great. Thanks a lot. 
#2




Does your incurred method allow for the possibility of negative IBNR? If it does, then aren't your IBNR predictions equivalent to ultimate predictions, and therefore unpaid as well? You've got less prediction error because the case reserves really do provide valuable information.

#4




Unpaid = case + IBNR. Case is what it is. IBNR is the variable.
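A toy numeric illustration of that identity (all figures hypothetical): since the case reserves are a known constant at the valuation date, adding them to a simulated IBNR distribution shifts the mean but contributes nothing to the prediction error.

```python
import numpy as np

rng = np.random.default_rng(42)

case_reserves = 500.0                                        # known at valuation date
ibnr_draws = rng.gamma(shape=4.0, scale=50.0, size=10_000)   # simulated IBNR

unpaid_draws = case_reserves + ibnr_draws                    # Unpaid = case + IBNR

# The constant case reserves shift the mean by exactly 500 but
# add no variability: the unpaid and IBNR standard deviations match.
assert np.isclose(unpaid_draws.std(), ibnr_draws.std())
```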

#5




I prefer to use paid because it is cleaner; however, you should be able to derive the variability of the unpaid from an incurred bootstrap because (hopefully) the variability around the case reserves as of the date of your analysis is 0. (I'm sure we can all give examples of situations where we might not be comfortable making that assumption; let's ignore that for now.)
When you simulate the future incurred incrementals to get the IBNR runoff distribution, you are simultaneously simulating the final ultimates {current incurred to date + sum(future incurred incrementals)}. You could take each of those ultimates, subtract the paid to date, and get bootstrapped estimates of the unpaid based on the incurred triangle. However, you had better have a darn good solution for realistically simulating negative incrementals when there is a positive expected incremental (and vice versa) if you are doing incurreds. I personally use a form of an extreme value distribution [CDF is F(x) = exp(−exp((μ − x)/β))], which is kind of gamma-like in the middle but has a domain of (−∞, +∞), so it can pick up negative values on a positive mean.
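That recipe can be sketched in a few lines of Python. All numbers and parameters here are hypothetical, and the Gumbel (an extreme value distribution with support on the whole real line) stands in for the distribution described above: its mean is μ + γβ (γ ≈ 0.5772), so we can match any expected incremental while still allowing negative draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state of one accident year at the valuation date.
incurred_to_date = 900.0
paid_to_date = 700.0

# Assumed expected future incurred incrementals for the remaining
# development periods (incurred incrementals can be negative in expectation).
expected_incrementals = np.array([60.0, 25.0, -10.0])

# Match the Gumbel mean (mu + gamma*beta) to each expected incremental.
beta = 30.0                       # assumed scale parameter
euler_gamma = 0.57721566
mu = expected_incrementals - euler_gamma * beta

n_sims = 10_000
future = rng.gumbel(loc=mu, scale=beta, size=(n_sims, len(mu)))

# Ultimate = incurred to date + simulated future incrementals;
# unpaid = ultimate - paid to date.
ultimates = incurred_to_date + future.sum(axis=1)
unpaid = ultimates - paid_to_date
```

Because the Gumbel's support is (−∞, +∞), some simulated incrementals come out negative even where the expected incremental is positive, which is exactly the behavior incurred data demand.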
__________________
Follow along with Secret Hitler: The Liberals Try again Sign up for Time Wars V: After the Peace  Signups end 7/31 1 
#6




That's what I said, only much better, of course.

#7




There is a good discussion to be had here. If I have my history right, bootstrapping originated with the well-known papers of England and Verrall. The first step of their method is to fit a GLM with a log link to the incremental losses. This has problems when negative incremental losses are common, which rules out almost all incurred loss triangles. To me, bootstrapping looks like a complicated (once you look into the details) add-on that appeals to the actuarial psyche.
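A stripped-down sketch of the resampling core, on a hypothetical toy paid triangle (this is a simplification of the England–Verrall procedure, not their full method): fit the chain ladder, back out fitted incrementals, form Pearson residuals, and resample. The square root in the residual step is exactly where negative fitted incrementals break things.

```python
import numpy as np

# Toy paid incremental triangle (hypothetical numbers):
# rows = accident years, columns = development periods, nan = future.
inc = np.array([[100., 50., 20.],
                [110., 60., np.nan],
                [120., np.nan, np.nan]])

cum = np.nancumsum(inc, axis=1)
cum[np.isnan(inc)] = np.nan
n = cum.shape[1]

# Volume-weighted chain-ladder development factors.
f = [np.nansum(cum[:n - 1 - k, k + 1]) / np.nansum(cum[:n - 1 - k, k])
     for k in range(n - 1)]

# Fitted cumulative triangle: back each latest diagonal value out
# through the development factors.
fit_cum = np.full_like(cum, np.nan)
for i in range(n):
    last = n - 1 - i                      # latest observed period for row i
    fit_cum[i, last] = cum[i, last]
    for k in range(last - 1, -1, -1):
        fit_cum[i, k] = fit_cum[i, k + 1] / f[k]

fit_inc = np.diff(fit_cum, axis=1, prepend=0.0)

# Pearson residuals of the over-dispersed Poisson model. The square root
# is the sticking point: a fitted incremental <= 0 breaks this step, which
# is why triangles with common negative incrementals (i.e. most incurred
# triangles) are ruled out.
mask = ~np.isnan(fit_inc)
resid = (inc[mask] - fit_inc[mask]) / np.sqrt(fit_inc[mask])

# One pseudo-triangle: resample residuals with replacement and invert.
rng = np.random.default_rng(1)
sampled = rng.choice(resid, size=resid.shape, replace=True)
pseudo_inc = fit_inc[mask] + sampled * np.sqrt(fit_inc[mask])
```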
In contrast, the Mack model works with cumulative triangles, where negatives are not a problem, and it works equally well with paid and incurred losses. However, the formula for its standard error is derived in an 82-page paper. The R ChainLadder package relieves much of the pain involved with both of these models.

In my retirement, I have been playing around with the data in the CAS Loss Reserve Database, approaching the problem with two ideas in mind. First, we should check the predictions of our models against the outcomes. The CAS Loss Reserve Database makes this possible with its hundreds of paid and incurred loss squares: you can check predictions based on the upper triangle against the results in the lower triangle. Second, we should not be constrained by the artificial limitations of models such as the GLM. This has led me to working with R and Bayesian MCMC models, which have far fewer artificial limitations. I have been describing the results of this effort in a number of papers and Actuarial Review articles. Here is a high-level summary of what I have found so far.

• On paid triangles, the Mack and bootstrap models produce very similar results.
• Both models underpredict the variability of outcomes.
• Models based on paid data produce results that are often too large. This is probably a feature of the particular time period in the CAS Loss Reserve Database, and it tends to support Mary Frances' contention that there is real information in the case reserves.
• Using Bayesian MCMC models, I have been able to significantly improve on the variability predictions with incurred data by allowing for correlations between accident years.

One advantage of incremental models is that incorporating trends along the diagonal makes sense to me. The problem with incremental losses is that they can be negative and still positively skewed, and the common positively skewed distributions (e.g. gamma and lognormal) are defined only on positive values. I have been playing around with the skew-Student-t distribution (suggested to me by Frank Schmid) to deal with this problem. So far, it looks like there is a significant negative trend along the diagonal in the CAS Loss Reserve Database. I will report the results if I make any headway. 
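To make the contrast concrete, here is a rough sketch (toy cumulative triangle, hypothetical numbers) of Mack's development factors and per-period variance parameters. Negatives cause no trouble because only cumulative values appear; the full standard-error formula from the 82-page derivation is omitted.

```python
import numpy as np

# Toy cumulative triangle (hypothetical; paid or incurred both work):
# rows = accident years, columns = development periods, nan = future.
cum = np.array([[150., 310., 340., 350.],
                [180., 370., 400., np.nan],
                [210., 430., np.nan, np.nan],
                [250., np.nan, np.nan, np.nan]])
n = cum.shape[1]

f, sigma2 = [], []
for k in range(n - 1):
    rows = n - 1 - k                      # accident years with both columns
    c0, c1 = cum[:rows, k], cum[:rows, k + 1]
    fk = c1.sum() / c0.sum()              # volume-weighted development factor
    f.append(fk)
    if rows > 1:
        # Mack's variance estimator for development period k.
        sigma2.append((c0 * (c1 / c0 - fk) ** 2).sum() / (rows - 1))
    else:
        # Mack's extrapolation rule for the last period, where only
        # one observation remains.
        sigma2.append(min(sigma2[-1] ** 2 / sigma2[-2], min(sigma2[-2:])))
```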
#8




Glenn,
Have you looked at the Gumbel distribution at all? ( http://en.wikipedia.org/wiki/Gumbel_distribution ) That's what I've used when I needed something that can go negative. It isn't the standard use for it, but it seems to work reasonably well for our data.
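A quick sanity check (parameters hypothetical) that the Gumbel can carry a positive mean while still producing a meaningful share of negative draws:

```python
import numpy as np

rng = np.random.default_rng(7)

mu, beta = 10.0, 30.0
x = rng.gumbel(loc=mu, scale=beta, size=100_000)

# Gumbel mean is mu + gamma*beta (gamma ~ 0.5772), so the expected value
# here is about 27, yet the support covers the whole real line and a
# nontrivial fraction of draws comes out negative.
print(x.mean() > 0, (x < 0).mean())
```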
#9





#10




That said, the method doesn't produce realistic results at all (with positive or negative development), so it's of no use IMO.
__________________
Jason Scandopolous Williams de la Hoya 