Trading System Development 101 (Part 2)
Posted by Mark on December 20, 2019 at 07:42 | Last modified: April 25, 2020 15:36

Today I am discussing unknowns in the feasibility testing phase.
Variable range selection can significantly affect results and may have no correct answer. If I have a short-term signal and I test a strategy over a short-term range (e.g. 10-30), then I am more likely to hit the critical value of 70% profitable than if I test over a mixed range (e.g. 10-90) or a longer-term range (e.g. 30-90).
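To make the range effect concrete, here is a minimal Python sketch. `backtest_profit` is a hypothetical stand-in for a real backtest (not anything from my actual platform), rigged so the short-term signal only pays off for lookbacks under 40:

```python
def percent_profitable(profits):
    """Percent of iteration results that are net profitable."""
    wins = sum(1 for p in profits if p > 0)
    return 100.0 * wins / len(profits)

def backtest_profit(lookback):
    """Hypothetical stand-in for a real backtest: the short-term signal
    is assumed to work for lookbacks under 40 and degrade beyond that."""
    return 1.0 if lookback < 40 else -1.0

# One iteration per lookback value in each tested range:
short_range = [backtest_profit(lb) for lb in range(10, 31)]  # 10-30
mixed_range = [backtest_profit(lb) for lb in range(10, 91)]  # 10-90

print(percent_profitable(short_range))  # clears the 70% bar
print(percent_profitable(mixed_range))  # falls short of it
```

Under these assumptions the short-term range passes easily while the mixed range fails, even though the underlying signal is identical in both tests.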
Number of iterations can also significantly affect results and may have no correct answer. In terms of granularity, success or failure of one iteration contributes 2% of the total for 50 iterations vs. 0.2% of the total for 500 iterations (target = 70% profitable).
What segment of the data to use for feasibility is another important detail that may have no right answer. Doing feasibility testing on one 2-year period that represents a particular market environment may generate significantly different results than feasibility testing on another 2-year period. In trying to find a strategy for any given futures market, feasibility testing over multiple environments would be ideal.
Testing over multiple market environments first requires a listing of said environments. This is a subjective task (vulnerable to hindsight bias) that also has no right answer. Were I to pursue this, I would also need to determine how often the different environments occur. This might feed back to help determine whether the exercise is worth doing at all (e.g. is it worth testing on exceedingly rare market conditions?).
In my testing thus far, I have eschewed all this and simply chosen to rotate the 2-year feasibility period. How I do the rotation probably doesn’t matter as long as I do it to give strategies suited to different market environments a chance. If I end up testing 100 strategies on a futures market, then maybe I test 10 on each 2-year period within the full 10 years. I will have false negatives, as I discussed in Part 1, but such remains an inevitable reality of this approach.
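One simple way to rotate the feasibility period can be sketched as follows. The year labels and the five-way non-overlapping split are illustrative assumptions on my part, not a prescription:

```python
from collections import Counter

# Illustrative 2-year windows carving up a 10-year history:
PERIODS = ["2010-2011", "2012-2013", "2014-2015", "2016-2017", "2018-2019"]

def feasibility_period(strategy_index):
    """Rotate candidate strategies through the 2-year windows in turn,
    so strategies suited to different environments each get a chance."""
    return PERIODS[strategy_index % len(PERIODS)]

# 100 candidate strategies spread evenly across the windows:
counts = Counter(feasibility_period(i) for i in range(100))
print(counts)
```

With this split, 100 strategies works out to 20 per window; the exact allocation matters less than ensuring every window gets its turn.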
I have to be a bit lucky to get a strategy to pass feasibility testing, which brings to mind two possibilities. First, it’s okay to rely on some luck and incur some false negatives since I have an infinite number of potential strategies to test. Because feasibility failure decreases the possibility that I’m dealing with a viable strategy (see last long paragraph in Part 1), I should feel good about moving on to the next candidate and minimizing wasted time.
Alternatively, perhaps some method exists that eliminates false negatives (and any reliance on luck) by testing everything. I think this would require an enormous amount of processing power (and programming), though. I already encounter processing delays with my mediocre hardware: a 10-year backtest with 70 iterations takes up to 25 minutes. Many people backtest with hundreds (or even thousands) of iterations, which would take my computer all night to run.
Whether or not to feasibility test comes down to how time will be spent. With feasibility testing, extra time goes to testing additional strategies because some viable ones are inadvertently dismissed (false negatives). Without feasibility testing, extra time goes to running full backtests, over a much longer period, on strategies that feasibility would have quickly dismissed.
I will continue next time.
Categories: System Development