Will Deep Learning Transform the Trading Desk?
Much of the buzz around artificial intelligence has been driven by advances in deep learning. Machines have bested humans at our most complicated games and, in the process, developed heuristics, or rules of thumb, that mimic how humans deal with complexity. This has reduced the amount of brute-force calculation required to complete tasks.[1]
Deep learning techniques have driven significant advances in facial and speech recognition, but does this mean you should be adopting AI on the trading desk, and if so, which type?
Toby Walsh[2] describes two styles of researcher: Neats and Scruffies. Neats search for precise systems in which machines can learn, while Scruffies favor complex behaviors that emerge from the interaction of robots operating without explicit logical controls.
Both styles contain proponents of various schools, including learners, reasoners, roboticists and linguists. Learners build machines that develop like humans, eliminating the need to code explicit knowledge. Reasoners equip machines with explicit rules of thought. Roboticists build machines that act in the real world, while linguists derive machine thought from an understanding and manipulation of language.
The learners are currently in the ascendancy because of the progress being made with deep learning, which requires huge amounts of data. In rules-based games, that data can be generated by machines playing themselves, rapidly surpassing the 10,000 hours of deliberate practice[3] required to become world-class.
Structured practice works best when there is a clear set of rules, something that multi-participant trading clearly lacks. Studies have shown that in professions, as opposed to sports and pastimes, the advantages from practice are more limited[4]. Deep learning is also a black box, which may clash with the demands for transparency raised by regulators, managers and customers.
Traders seek certainty of outcome, for which rules-based systems are required. Thus, even if traders move away from manual intervention, their expertise and adaptation will continue to be required as gatekeepers of data, creators of rules and guardians of success. Tune out the buzz around deep learning and seek the solutions that are tailored to your needs.
Simon Maughan, Product Management
[1] The Deep Blue chess program required specialist hardware to assess 200 million positions per second. Twenty years later, AlphaGo evaluates only 60,000 positions per second to master the far more complex game of Go.
[2] Toby Walsh, “Machines That Think”, Prometheus Books, 2018
[3] Ericsson, Prietula and Cokely, “The Making of an Expert”, Harvard Business Review, 2007
[4] Macnamara, Hambrick and Oswald, “Deliberate Practice and Performance in Music, Games, Sports, Education and Professions: A Meta-Analysis”, Association for Psychological Science, 2014