A 2017 report by Investment Technology Group projected that electronic equity execution would grow to around 57% of the US market and 55% of the European market by 2019, up from 48% and 50% respectively in 2015: a sizeable portion of flow. As the market for execution grows, so does competition. The sell side is under constant pressure to provide services that attract clients, from trading analytics tools to unbundled commissions to high-touch execution advisory.
But first and foremost is the highest-quality execution their algorithms can provide, and that means constantly testing and tweaking them to adapt to dynamic market conditions. Yet if every sell-side firm is using the same historical data to develop the trading decisions throughout its suite of algorithms, could they be unintentionally arriving at similar execution outcomes, leading to herding and the potential for future flash crashes?
We Are Hard-Wired to Herd
Take, for example, eating at a restaurant when you are away on a business trip or holiday. You see two Japanese restaurants next to each other. Which one do you choose? The empty one with lots of available tables, or the crowded one full of people? Most of us would choose the latter as we feel more comfortable amongst a group.
Though 2010 was a long time ago, the Flash Crash that wiped US$1 trillion of market cap off the US market isn't forgotten amongst the trading community. According to the SEC, a large order of E-mini futures triggered the event, resulting in the largest intraday point swing ever for the venerable Dow Jones. The herd was in full panic. Procter & Gamble briefly lost roughly a third of its value, and Accenture traded for a penny that day, likely the result of an algorithm. Who would want to report that fill to a client?
You certainly don't want your participation algo slicing out orders and crossing the spread during situations like the Flash Crash. Not only will your algo's performance be poor, but selling into such panic or capitulation scenarios will further exacerbate an already bad situation. Child orders competing with the lowest-latency players for bids in fast markets where everyone is selling rarely produce favorable fills for clients. The Flash Crash was a once-in-a-decade occurrence and no one could have predicted it, so how could you have back-tested your algorithm models to perform under such stress?
Agent-Based Modeling
Agent-based modeling (ABM) allows us to study the behavior of complex systems by simulating the individual behavior of heterogeneous agents and the ways they interact with each other. Rather than modeling the outcome directly, the outcome emerges from the agent-based simulation.
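As a minimal sketch of the idea, consider a toy market in which heterogeneous agents each submit demand and the price path emerges from their interaction rather than from a top-down price model. The agent classes, parameters, and linear price-impact rule below are illustrative assumptions, not any particular vendor's implementation:

```python
import random

class Agent:
    """Base class for a heterogeneous market participant."""
    def decide(self, price):
        """Return demand: +1 to buy one unit, -1 to sell one unit."""
        raise NotImplementedError

class NoiseTrader(Agent):
    """Trades randomly, supplying uninformed order flow."""
    def decide(self, price):
        return random.choice([-1, 1])

class Fundamentalist(Agent):
    """Buys below its fair-value estimate, sells above it."""
    def __init__(self, fair_value):
        self.fair_value = fair_value

    def decide(self, price):
        return 1 if price < self.fair_value else -1

def simulate(agents, steps=100, start_price=100.0, impact=0.01, seed=42):
    """Run the simulation; the price path is the emergent outcome of
    the agents' aggregate demand, not something modeled directly."""
    random.seed(seed)
    price, path = start_price, [start_price]
    for _ in range(steps):
        net_demand = sum(agent.decide(price) for agent in agents)
        # Assumed linear price impact of net demand, scaled by population.
        price *= 1 + impact * net_demand / len(agents)
        path.append(price)
    return path
```

With, say, ten fundamentalists anchored at 100 and five noise traders, the emergent price oscillates around fair value; swapping agents or parameters changes the emergent dynamics, which is precisely the point of the approach.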
For example, with agent-based modeling you could modify the behavior of Participant agents to simulate a fast market driven by a news event, such as the US Federal Reserve unexpectedly changing rates. In response, specialist or market-maker agents might quote wider spreads, retail agents might stop trading, and sell-side agents might send smaller child orders.
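That kind of regime switch can be sketched with a single `news_shock` flag that changes how hypothetical market-maker and sell-side agents behave. The class names, spread values, and child-order sizes here are made-up illustrations:

```python
class MarketMakerAgent:
    """Quotes a two-sided market; widens its spread when news hits."""
    def __init__(self, normal_half_spread=0.02, stressed_half_spread=0.20):
        self.normal = normal_half_spread
        self.stressed = stressed_half_spread

    def quote(self, mid, news_shock):
        half = self.stressed if news_shock else self.normal
        return mid - half, mid + half  # (bid, ask)

class SellSideAgent:
    """Slices a parent order; sends smaller child orders under stress."""
    def __init__(self, parent_qty=10_000,
                 normal_child=500, stressed_child=100):
        self.remaining = parent_qty
        self.normal_child = normal_child
        self.stressed_child = stressed_child

    def next_child(self, news_shock):
        size = self.stressed_child if news_shock else self.normal_child
        size = min(size, self.remaining)
        self.remaining -= size
        return size
```

Flipping `news_shock` on for a window of the simulation is then enough to reproduce a fast-market episode and observe how an execution algorithm responds to it.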
Equally, you could modify a Participant agent's behavior to simulate rapid changes in order book depth. This allows you to study how your opportunistic algorithm performs, or helps the surveillance team identify spoofing or quote stuffing.
By running large numbers of identical simulations, a quantifiable distribution of likely market impact can be estimated, ultimately helping firms optimize every single trade they execute.
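That Monte Carlo step could be sketched as follows. The execution model below (simple slicing, a random-walk mid price, a flat temporary impact in basis points) is a deliberately simplified assumption, but repeating the identical run under many seeds yields a distribution of implementation shortfall rather than a single number:

```python
import random
import statistics

def simulate_execution(parent_qty=5_000, child_qty=500, impact_bps=2.0,
                       vol=0.001, arrival=100.0, seed=0):
    """One run: slice a parent buy order into child orders, each filled
    at the drifting mid plus a fixed temporary impact. Returns the
    implementation shortfall vs the arrival price, in basis points."""
    rng = random.Random(seed)
    mid, cost, filled = arrival, 0.0, 0
    while filled < parent_qty:
        mid *= 1 + rng.gauss(0.0, vol)             # market moves between children
        fill_price = mid * (1 + impact_bps / 1e4)  # pay impact on each child
        qty = min(child_qty, parent_qty - filled)
        cost += qty * fill_price
        filled += qty
    avg_price = cost / parent_qty
    return (avg_price - arrival) / arrival * 1e4

def impact_distribution(n_runs=200, **kwargs):
    """Repeat the identical simulation across seeds; summarize the spread."""
    shortfalls = [simulate_execution(seed=s, **kwargs) for s in range(n_runs)]
    return statistics.mean(shortfalls), statistics.pstdev(shortfalls)
```

The mean answers "what impact should we expect?", while the standard deviation (or any percentile of the runs) answers "how bad could it plausibly get?", which is the quantity a pre-trade cost model needs.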
Forward Testing for Performance Enhancement
Among the benefits of ABM is its forward-looking nature: predicting possible futures of markets or executions. With the flexibility of each agent, you can granularly modify specific inputs, observe how other inputs are influenced, and see what effect it has on the whole environment. On 22 March 2019, the US yield curve inverted for the first time since 2007. How would you back-test your algorithms under a scenario that has happened only once in 12 years?
Outdated back-testing models can only reconstruct a past event to tweak your algorithms against, but that event may never happen again, or at least not in exactly the way it previously unfolded. Agent-based market simulation offers a new paradigm of forward testing algorithms against any plausible scenario, giving insight into the robustness of algos under different stressed conditions and enabling sell-side execution desks to tune their solutions.
Electronic trading will continue to grow, and so will markets. There may not be another Flash Crash, but you can bet there will be more extreme trading events for which back-testing with historical data will always fall short. Do you really want to be part of the herd?