In a quant trading environment, framing decisions as stochastic optimization problems – even with imperfect probabilities – transforms forecasts into robust trading strategies. For us, robustness in parameter estimation was a primary focus, which in practice meant continually monitoring our models, especially with respect to regime shifts and changing market conditions. But as I noted in comments to the site a few months ago, the joint hypothesis problem always comes into play: tests of risk-adjusted outperformance rely on an explicit pricing model, but the true market equilibrium model is unobservable. That's why continually stress-testing the assumptions built into our time-series models was so critical.
Ultimately, this approach – treating forecasts as probabilities and embedding them in an optimization framework – is what makes the methodology both practical and powerful, even when the probabilities aren't perfect. While realized P&L is the ultimate judge, it's an incomplete measure because it doesn't capture opportunity costs, especially those arising from misspecified parameters or estimation errors. And evaluating opportunity costs in a trading world where there is no terminal time "T" is #ReallyHard.
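To make the idea concrete, here is a minimal sketch of embedding a forecast probability in an optimization: grid-search the stake fraction that maximizes simulated expected log growth of an even-money bet. Everything in it (payoffs, probabilities, grid) is made up for illustration and is not the author's actual model; the point is simply that a small error in the forecast probability shifts the "optimal" position materially.

```python
import numpy as np

def best_fraction(p_up, win=1.0, loss=1.0, n=100_000, seed=7):
    """Grid-search the stake fraction maximizing simulated expected log growth.

    All inputs here (even-money payoffs, forecast probability p_up) are
    illustrative placeholders, not outputs of any real forecasting model.
    """
    u = np.random.default_rng(seed).random(n)      # shared random draws
    outcomes = np.where(u < p_up, win, -loss)      # +1 on a win, -1 on a loss
    grid = np.linspace(0.0, 0.99, 100)             # candidate stake fractions
    growth = [np.mean(np.log1p(f * outcomes)) for f in grid]
    return float(grid[int(np.argmax(growth))])

# Sensitivity of the "optimal" decision to the forecast probability:
for p in (0.52, 0.55, 0.60):
    print(p, best_fraction(p))
```

For this even-money setup the analytic optimum is the Kelly fraction 2p − 1, so a forecast error of a few percentage points in p moves the position by several times that – one way the opportunity cost of a misspecified parameter shows up even when each individual trade looks fine.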
Then again, demanding full specificity in model formulation, estimation, and ultimately optimized decisions for live markets is a fool's errand (unless you're Jim Simons 😊). In practice, we fell back on the Herb Simon concept of "satisficing" rather than optimizing.
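Satisficing, in code, just means accepting the first candidate that clears an aspiration level instead of exhaustively searching for the global optimum. The sketch below is purely synthetic – the "backtest" is a made-up scoring function and the parameter names are hypothetical – but it shows the shape of the idea.

```python
import random

def backtest_score(params):
    """Stand-in for an expensive backtest; returns a noisy quality score.

    Purely synthetic -- a real evaluation would run the strategy on data.
    """
    lookback, threshold = params
    base = 1.0 - abs(lookback - 60) / 100 - abs(threshold - 0.5)
    return base + random.gauss(0, 0.05)

def satisfice(candidates, good_enough=0.8):
    """Return the first candidate meeting the aspiration level,
    rather than scanning every candidate for the global optimum."""
    for params in candidates:
        if backtest_score(params) >= good_enough:
            return params
    return None  # no satisfactory candidate found

random.seed(1)
candidates = [(lb, th) for lb in range(20, 120, 10) for th in (0.3, 0.5, 0.7)]
print(satisfice(candidates))
```

The payoff is that you stop paying evaluation cost (and overfitting risk) the moment a "good enough" configuration appears, rather than squeezing the last basis point out of an objective whose parameters you can't trust anyway.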