Crude Oil Strategy Mining Study (Part 2)
Posted by Mark on August 14, 2020 at 07:36 | Last modified: July 13, 2020 10:12

Last time, I detailed specific actions taken with the software. Today I will start with some software suggestions before continuing to discuss my latest study on crude oil.
This backtesting took over 30 hours. For studies like this, implementing some of the following suggestions could save substantial time:
- Allow execution (e.g. Monte Carlo analysis, Randomized OOS) on multiple strategies from the Results window at once.
- Offer the option to close all Results windows together* since each Results exit requires separate confirmation.
- Alternatively, allow multiple strategy selections to be retested on a particular time interval, creating one new Results window with all associated data. This would save re-entering the time interval for each retest (at least two entries for year in my case, and sometimes 3-4 entries when a month and/or date got inexplicably changed, which occurred 4-8 times per page of 34 strategies during my testing). It would also mean closing just one Results window per page rather than one per strategy.
- Include an option in Settings to have “Required” boxes automatically checked or, perhaps even better, add a separate “Re-run strategy on different time interval” function altogether. Retesting a specific 4-rule strategy involves checking “Required” for each rule, and testing the same strategy on a different time interval implies that every rule is required.
- Offer the option to close all open windows (or same-type windows like “close all Monte Carlo Analysis windows?” “Close all Randomized OOS windows?”) when at least n (customizable?) windows are already open. Exiting out of non-Results windows can take noticeable time when enough (80-90 in my case) need to be consecutively closed.
My general approach to this study is very similar to that described in Part 6:
- Train over 2011-2015 or 2007-2011 with random entry signals and simple exit criteria.
- Test OOS from 2007-2011 or 2011-2015, respectively (two full sets of strategies).
- Identify 34 best and 34 worst performers over the whole 8-year period for each set.
- Retest over 2015-2019 (incubation).
- Re-randomize signals and run simulation two more times.
- Apply the above process to 2-rule and 4-rule strategies.
- Apply the above process to long and short positions.
- Include slippage of $30/trade and commission of $3.50/trade.
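The per-trade cost assumption in the last bullet amounts to a simple adjustment of gross PnL. A minimal sketch follows; the function name and sample figures are my own illustration, not output from the backtesting software:

```python
# Per-trade transaction costs stated in the study (round-trip).
SLIPPAGE = 30.00     # dollars per trade
COMMISSION = 3.50    # dollars per trade

def net_pnl(gross_pnl, num_trades):
    """Return strategy PnL after deducting slippage and commission."""
    return gross_pnl - num_trades * (SLIPPAGE + COMMISSION)

# Hypothetical example: a strategy grossing $5,000 over 40 trades
# surrenders 40 * $33.50 = $1,340 to costs.
print(net_pnl(5000.00, 40))  # 3660.0
```

With 40 trades, costs alone consume over a quarter of the gross profit here, which is why including them matters for incubation results.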
In total, I recorded incubation data for 2 * 34 * 2 * 3 * 2 * 2 = 1,632 strategies in this study: 816 each were long/short, 2-rule/4-rule, best/worst strategies, and OOS beginning/end (the two options within each category are mutually exclusive, but the categories overlap one another). I enter data with relative speed and accuracy, but mistakes can definitely be made. As another study improvement over the last, I therefore ran some quality control checks:
- Compare NetPNL and PNLDD for sign alignment (e.g. both should be positive, negative, or zero).
- Compare NetPNL and Avg Trade for sign alignment.
- Compare NetPNL and PF for alignment (if NetPNL < 0 then PF < 1; if NetPNL > 0 then PF > 1).
- Compare PNLDD and Avg Trade for sign alignment.
- Compare PNLDD and PF for alignment.
- Compare Avg Trade and PF for alignment.
- Verify that Avg Trade ~ (NetPNL / # trades).
- Screen PF for gross typos (e.g. 348 instead of .48; extremes for all occurrences ended up being 0.22 and 2.37).
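The checks above can be automated rather than eyeballed. Below is a hedged sketch of such a consistency screen; the field names (net_pnl, pnl_dd, avg_trade, pf, num_trades) and thresholds are my own labels and assumptions, not the software's:

```python
def sign(x):
    """Return -1, 0, or 1 for the sign of x."""
    return (x > 0) - (x < 0)

def qc_errors(net_pnl, pnl_dd, avg_trade, pf, num_trades, tol=0.01):
    """Return a list of inconsistencies among one strategy's recorded metrics."""
    errors = []
    # Sign alignment: NetPNL, PNLDD, and Avg Trade should share a sign.
    if not (sign(net_pnl) == sign(pnl_dd) == sign(avg_trade)):
        errors.append("sign mismatch among NetPNL / PNLDD / Avg Trade")
    # PF alignment: NetPNL < 0 implies PF < 1; NetPNL > 0 implies PF > 1.
    if (net_pnl < 0 and pf >= 1) or (net_pnl > 0 and pf <= 1):
        errors.append("PF inconsistent with NetPNL sign")
    # Avg Trade should approximately equal NetPNL / # trades.
    if num_trades and abs(avg_trade - net_pnl / num_trades) > tol:
        errors.append("Avg Trade != NetPNL / # trades")
    # Gross-typo screen on PF (observed extremes were 0.22 and 2.37,
    # so an upper bound of 10 is a generous assumption).
    if not 0 < pf < 10:
        errors.append("PF outside plausible range")
    return errors

print(qc_errors(3660.0, 1.85, 91.5, 1.40, 40))  # [] — record is consistent
print(qc_errors(3660.0, 1.85, 91.5, 348, 40))   # flags the PF typo
```

A transcription error such as entering 348 instead of .48 for PF is caught by the range screen, while sign and ratio mismatches catch most other slips.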
I will continue next time.
* This may be difficult because I want only the re-run Results windows, not the whole simulation Results window, closed. Perhaps this could be offered in Settings. I have written elsewhere (paragraphs 4-5 here) about the potential utility of retesting strategies on separate time intervals; this might be a feature widely appreciated by algo traders.