During August 2007 a peculiar thing happened to almost all statistical arbitrage (Stat Arb) funds, the funds that look to exploit inefficiencies in the market. They lost money, and many lost big. The losses ranged from 5% to 30% and were notable for their ubiquity. The onset of the global financial crisis had forced many market participants to deleverage their positions, and this particularly affected firms that relied on heavy leverage to exploit what are often small market imbalances.
Firms that lost money in these difficult market conditions no longer had access to the leverage required to take advantage of corrections and rebounds as they occurred. It was a reminder, if one were needed, that financial markets are not the efficient mechanism that many economic models suppose them to be, but rather a collection of people making decisions; when those decisions are highly correlated, unintended consequences can follow.
Trying to analyse the mechanics of situations like this can be challenging for traditional modeling approaches, particularly when there is no historical precedent. We train our models and algorithms on historical data, which assumes both that the past is a good guide to the future (it often isn't) and that there have been no structural changes to the market (there obviously have been).
The vast majority of market 'simulation' today is no more than a stream of historical data run at high speed. And most believe that more data equals better results. But what use is 30 years of market data when the growth of execution algorithms and high-speed trading means that the rules of the game have fundamentally changed? It is equivalent to training an algorithm to play Chess and then asking it to win at Super Mario Bros. If we consistently fail to model the underlying market dynamics that are responsible for price formation and that lead to flash crashes, liquidity squeezes, and runs on stocks, then perhaps it is time to look for an alternative way of modeling markets.
One alternative is the approach adopted by Harry Markowitz, often referred to as the father of modern portfolio theory, when he set out to analyse the 1987 stock market crash. As the rest of the world was struggling to understand what had caused the crash, Markowitz and Kim published a paper explaining how to use an agent-based approach to model the interaction between portfolio rebalancers and portfolio insurers. Even with a very simplified model of these two groups, Markowitz showed that high margin levels could lead to explosive results. In a nutshell, understanding the market dynamics allowed you to model how each individual's logical, self-interested decision could add up to emergent phenomena that created a costly and illogical outcome.
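To make that mechanism concrete, the sketch below is a hypothetical, heavily simplified two-population simulation in Python. It is not the Kim and Markowitz model itself: the agent classes, the linear price-impact rule, and every parameter are illustrative assumptions, chosen only to show how stabilising rebalancers and trend-following insurers can interact.

```python
import random

# Toy two-population market in the spirit of (but far simpler than) the
# Kim-Markowitz rebalancer/insurer setup. All parameters are illustrative.

class Rebalancer:
    """Targets a fixed price level: buys below it, sells above it (stabilising)."""
    def __init__(self, target):
        self.target = target

    def order(self, price):
        # Demand proportional to the gap between the target and the current price.
        return (self.target - price) / self.target


class Insurer:
    """Portfolio-insurance style: sells into falls, buys into rises (destabilising)."""
    def __init__(self, floor):
        self.floor = floor
        self.last_price = None

    def order(self, price):
        if self.last_price is None:
            self.last_price = price
            return 0.0
        move = (price - self.last_price) / self.last_price
        self.last_price = price
        # Trade with the last move, more aggressively as the floor approaches.
        urgency = 2.0 if price < 1.1 * self.floor else 1.0
        return 8.0 * urgency * move


def simulate(n_rebalancers, n_insurers, steps=250, impact=0.08, seed=7):
    """Return the simulated price path for a given mix of agents."""
    random.seed(seed)
    agents = [Rebalancer(100.0) for _ in range(n_rebalancers)]
    agents += [Insurer(random.uniform(85.0, 95.0)) for _ in range(n_insurers)]
    price, path = 100.0, [100.0]
    for _ in range(steps):
        net_flow = sum(a.order(price) for a in agents) / len(agents)
        shock = random.gauss(0.0, 0.003)          # exogenous news
        price *= 1.0 + impact * net_flow + shock  # simple linear price impact
        path.append(price)
    return path


if __name__ == "__main__":
    calm = simulate(n_rebalancers=90, n_insurers=10)
    fragile = simulate(n_rebalancers=30, n_insurers=70)
    print("mostly rebalancers: min price %.2f" % min(calm))
    print("mostly insurers:    min price %.2f" % min(fragile))
```

With these assumed parameters, a larger share of insurers strengthens the positive-feedback term, so the simulated path tends to become markedly more volatile even though every agent is following a perfectly reasonable individual rule; that qualitative emergence, rather than any particular number, is the point the original work makes.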
Markowitz's work laid the foundations for the Santa Fe Institute's Artificial Stock Market Project in the 1990s, where agent-based simulations of competing trading strategies produced a dazzling array of complexity. And more recently, the failure of equilibrium models to adequately describe market conditions during the financial crisis has brought agent-based models back into fashion, particularly among central banks and policy institutions: the ECB, the IMF, the Fed and the Bank of England have all been active in this area.
So, why have agent-based models not been used more widely in trading and investment? A lack of expertise is certainly a factor: these models are much more likely to be taught in the biological sciences than on economics courses. What has also held many agent-based approaches back is the computational cost of scaling models up to describe the full complexity of the real world. But given the advances in scalable compute power, and technologies like Simudyne, these challenges are starting to fade into the background. Running high-fidelity simulations involving billions of calculations is now relatively simple. And so the question becomes: is now the right time for agent-based models to finally take their place in every modeler's toolkit?