Actuarial Outpost
#1
06-01-2015, 01:05 PM
campbell (Mary Pat Campbell)

The Backtesting Crisis

http://www.ai-cio.com/channel/RISK-M...esting-Crisis/

Quote:
Most backtests published in finance journals are wrong, a quantitative research specialist from Guggenheim Partners has argued.

The tool is as popular with finance academics as it is misused, according to Marcos López de Prado, a senior managing director, and the misuse “may invalidate a large portion of the work done over the past 70 years.”

.....
In a recent presentation, he singled out a common and critical error he said undermines the majority of research based on historical simulations: multiple testing.

Running multiple configurations of, for example, a theoretical asset allocation through a set of past market data and then picking one to highlight poisons the results with selection bias, he said.

The American Statistical Association has voiced the same concern in its ethical guidelines.

"Selecting the one ‘significant' result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion,” the guidelines stated. "Failure to disclose the full extent of tests and their results in such a case would be highly misleading.”

.....
“Backtesting: I hate it–it's just optimizing over history,” California State Teachers’ Retirement System Investment Officer Josh Diedesch told CIO last year. “You never see a bad backtest. Ever. In any strategy.”
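
To make the selection-bias point concrete, here is a minimal sketch (my own illustration, not from the article or the paper; it assumes numpy, and the strategy count and return scale are arbitrary). Simulate a pile of strategies with zero real skill, then "select" the best in-sample Sharpe ratio: the winner looks respectable purely by chance.

Code:
import numpy as np

rng = np.random.default_rng(0)
n_strategies = 1000          # configurations tried (published or not)
n_days = 252 * 5             # five years of daily returns

# every strategy is pure noise: zero expected return, no skill at all
daily_returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))

# annualized in-sample Sharpe ratio of each strategy
sharpe = daily_returns.mean(axis=1) / daily_returns.std(axis=1) * np.sqrt(252)

print(f"best in-sample Sharpe out of {n_strategies} noise strategies: {sharpe.max():.2f}")
print(f"median Sharpe (closer to the true skill of zero): {np.median(sharpe):.2f}")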




http://papers.ssrn.com/sol3/papers.c...act_id=2606462

Quote:
Backtesting


Marcos Lopez de Prado
Guggenheim Partners, LLC; Lawrence Berkeley National Laboratory; Harvard University - RCC

May 14, 2015


Abstract:
Empirical Finance is in crisis: Our most important "discovery" tool is historical simulation, and yet, most backtests published in leading Financial journals are flawed.

The problem is well-known to professional organizations of Statisticians and Mathematicians, who have publicly criticized the misuse of mathematical tools among Finance researchers. In particular, reported results are not corrected for multiple testing. To this day, standard Econometrics textbooks seem oblivious to the issue of multiple testing. This may invalidate a large portion of the work done over the past 70 years.

We present practical solutions to this problem.

This presentation is related to papers http://ssrn.com/abstract=2308659, http://ssrn.com/abstract=2326253, http://ssrn.com/abstract=2460551, http://ssrn.com/abstract=2507040 and http://ssrn.com/abstract=2597421.

Number of Pages in PDF File: 33

Keywords: backtest, historical simulation, probability of backtest over-fitting, investment strategy, optimization, Sharpe ratio, minimum backtest length, performance degradation

JEL Classification: G0, G1, G2, G15, G24, E44

#2
06-08-2015, 08:15 AM
ScottKelly

Officially, what is "backtesting"? Is it similar to running a simulation with variables based on probability distributions taken from history?
#3
06-08-2015, 09:49 AM
campbell (Mary Pat Campbell)

Yes, it's running one's model through historical data.


The problem, though, is that often one used that historical data to build the model in the first place. When you fit a model, you should "withhold" some of the data so you can test the model on it later.
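
Here's a rough sketch of what I mean, in Python. All of it is made up for illustration: fake random-walk prices, a toy moving-average rule, and an arbitrary 70/30 split. The only point is that the number you report comes from data the fitting step never saw.

Code:
import numpy as np

rng = np.random.default_rng(1)
# fake daily prices: a random walk with a small drift, ten years' worth
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, size=2520)))

split = int(len(prices) * 0.7)                    # chronological split, no shuffling
fit_prices, holdout_prices = prices[:split], prices[split:]

def sharpe_of_ma_strategy(p, window):
    # long the next day whenever today's price is above its trailing moving average
    ma = np.convolve(p, np.ones(window) / window, mode="valid")
    returns = np.diff(np.log(p[window - 1:]))
    signal = (p[window - 1:-1] > ma[:-1]).astype(float)   # uses only information up to today
    strat = signal * returns
    return strat.mean() / (strat.std() + 1e-12) * np.sqrt(252)

# "fitting": pick the window that looked best on the early data
best_window = max(range(5, 101, 5), key=lambda w: sharpe_of_ma_strategy(fit_prices, w))

# the honest number to report: performance on the withheld data
print("chosen window:", best_window)
print("holdout Sharpe:", round(sharpe_of_ma_strategy(holdout_prices, best_window), 2))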
#4
06-08-2015, 09:51 AM
campbell (Mary Pat Campbell)

from Investopedia:
http://www.investopedia.com/terms/b/backtesting.asp

Quote:
DEFINITION of 'Backtesting'
The process of testing a trading strategy on prior time periods. Instead of applying a strategy for the time period forward, which could take years, a trader can do a simulation of his or her trading strategy on relevant past data in order to gauge its effectiveness.

Most technical-analysis strategies are tested with this approach.

INVESTOPEDIA EXPLAINS 'Backtesting'
When you backtest a theory, the results achieved are highly dependent on the movements of the tested period. Backtesting a theory assumes that what happened in the past will happen again in the future, and this assumption can create risks for the strategy.

For example, say you want to test a strategy based on the notion that Internet IPOs outperform the overall market. If you were to test this strategy during the dotcom boom years in the late 90s, the strategy would outperform the market significantly. However, trying the same strategy after the bubble burst would result in dismal returns. As you'll frequently hear: "past performance does not necessarily guarantee future returns".



Read more: http://www.investopedia.com/terms/b/...#ixzz3cTj9aqXh
#5
06-09-2015, 11:09 AM
PaulGH

This seems somewhat related to overfitting, or failing to withhold a test set. Even with a test set and a historical sample, it isn't easy to avoid overfitting completely. You sometimes end up overfitting both the historical data and the test set to some degree.

Back testing is reasonably okay if you already have a strategy and just want to test it. If you are trying to search for a strategy by way of history, the best method I have off the top of my head is to split the same historical period into multiple training and testing blocks, randomly sampled (this is much harder than it sounds when there is serial correlation to work with). For each split, come up with the optimal strategy, then test it on its own block and on all the other blocks. It's still biased by the time period selected, but better.
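
Something like this rough sketch, maybe. It's my own toy version: it uses contiguous blocks instead of random sampling precisely because of the serial-correlation problem, and the lookback rule, block count, and data are all made up.

Code:
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.0003, 0.01, size=2520)     # stand-in for ten years of daily returns

def block_score(r, lookback):
    # toy "strategy": hold the next day only if the trailing lookback-day mean is positive
    signal = np.array([r[max(0, t - lookback):t].mean() > 0 for t in range(1, len(r))])
    strat = signal * r[1:]
    return strat.mean() / (strat.std() + 1e-12) * np.sqrt(252)

n_blocks = 5
blocks = np.array_split(returns, n_blocks)        # contiguous blocks keep the serial structure
lookbacks = [5, 10, 21, 63]

for i, train in enumerate(blocks):
    # "come up with the optimal strategy" on this block...
    best = max(lookbacks, key=lambda lb: block_score(train, lb))
    # ...then see how that choice fares on every other block
    others = [round(block_score(b, best), 2) for j, b in enumerate(blocks) if j != i]
    print(f"block {i}: best lookback {best:>2}, Sharpe on the other blocks: {others}")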

I'm not sure machine learning on market history, or some other volatile data set, will ever truly do more than point out history to you. Since the reality of the economy changes so quickly, it will be better at pointing out "Gee, you really shouldn't invest in internet stocks around the turn of the century" than predicting "The next big thing on the market will be adult underpants from India."

I'm not even close to an expert on these things though, and people with a lot more money than I will ever see seem to be betting against me.
#6
06-09-2015, 11:23 AM
Bathory

Wait a minute: after reading that research paper, I thought the problem with backtesting was mining through various models until one fits and then saying "see, significant factors," when in reality the "stat" tests have to be adjusted for the number of models tried.

Cross-validation is a different issue.
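
For a rough sense of how big that adjustment is, here's a minimal sketch. It uses a standard Šidák correction, not anything from the paper itself, and the trial count is made up.

Code:
from statistics import NormalDist

n_trials = 100          # model configurations actually tried, published or not
alpha = 0.05            # nominal single-test false-positive rate

# chance of at least one spurious "discovery" across all independent trials
family_wise = 1 - (1 - alpha) ** n_trials
print(f"P(at least one false positive) ~ {family_wise:.2f}")

# Sidak-adjusted per-test threshold that keeps the family-wise rate at 5%
alpha_adj = 1 - (1 - alpha) ** (1 / n_trials)
z_adj = NormalDist().inv_cdf(1 - alpha_adj)
z_naive = NormalDist().inv_cdf(1 - alpha)
print(f"per-test alpha needed: {alpha_adj:.5f} (one-sided z of {z_adj:.2f} instead of {z_naive:.2f})")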
#7
06-09-2015, 01:40 PM
PaulGH

At the point where you are using the data to develop the model, is it still testing or is it now model development, and the problem is that people are thinking you can call it a test in spite of that?
#8
06-16-2015, 08:55 AM
sweetiepie

This is the 800lb gorilla with all statistics imo.
#9
06-16-2015, 09:43 AM
sweetiepie

Quote:
Originally Posted by PaulGH
At the point where you are using the data to develop the model, is it still testing or is it now model development, and the problem is that people are thinking you can call it a test in spite of that?
I think it's that people are 'testing' several models and publishing the one that generates results. Basically like this.

Ideally you have three sets of data: one that you use for training models, one that you use to pick which model you like, and one last one that you use to test that final model.
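
A toy sketch of that three-set discipline (my own, with made-up returns, a deliberately dumb trailing-mean "model", and arbitrary split fractions):

Code:
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(0.0003, 0.01, size=2520)                 # stand-in for daily returns
n = len(data)
train, validate, test = np.split(data, [int(0.6 * n), int(0.8 * n)])

def trailing_mean(r, lookback):
    return np.array([r[max(0, t - lookback):t].mean() for t in range(1, len(r))])

def sharpe(strat):
    return strat.mean() / (strat.std() + 1e-12) * np.sqrt(252)

def strategy_returns(r, lookback, threshold):
    signal = (trailing_mean(r, lookback) > threshold).astype(float)
    return signal * r[1:]

# set 1 (train): estimate each candidate model's signal threshold
models = {lb: np.median(trailing_mean(train, lb)) for lb in (5, 21, 63)}

# set 2 (validate): pick the model you like
chosen = max(models, key=lambda lb: sharpe(strategy_returns(validate, lb, models[lb])))

# set 3 (test): touched exactly once, for the model already chosen
print("chosen lookback:", chosen,
      "| test Sharpe:", round(sharpe(strategy_returns(test, chosen, models[chosen])), 2))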

#10
06-16-2015, 10:37 AM
JMO (Carol Marler)

Quote:
Originally Posted by PaulGH
At the point where you are using the data to develop the model, is it still testing or is it now model development, and the problem is that people are thinking you can call it a test in spite of that?
People who don't understand statistics are more likely to quote them when they seem to support a preconceived notion.
Also sweetiepie is so right!
I want to quote this from the xkcd caution language: "Warning: this comic occasionally contains strong language (which may be unsuitable for children), unusual humor (which may be unsuitable for adults), and advanced mathematics (which may be unsuitable for liberal-arts majors)."