The volumes of data generated online are barely imaginable and certainly more than humans can handle: more than 500 million tweets are posted on Twitter every day, and some three billion search queries are entered into Google. Yet when you compare these figures, hard to fathom in their own right, with the amount of information disseminated by stock exchanges, they suddenly appear in a different light.
Value of financial information not measured in volume alone
Deutsche Börse alone disseminates up to four billion price data points a day; worldwide, the figure is likely to be as high as 25 billion. The priority is, and always has been, speed and accuracy; the informational advantage the Rothschilds gained by using carrier pigeons after the Battle of Waterloo is legendary. Today, the sheer volume of data means that buzzwords and cutting-edge data processing techniques such as “big data” and “machine learning” have become commonplace in the financial sector. But one thing must not be forgotten: the value of financial information is not measured in volume alone. The reason lies in the transformation that the financial industry has undergone in recent years and will continue to undergo at an even faster pace in the years ahead: the radical automation of traditional securities trading. Software developers and data scientists are increasingly replacing traditional traders; one example is the decision of a major US investment bank to replace 600 equity traders with 200 software developers. This change is not confined to securities trading but also affects the investment industry, where new robo-advisory business models have become established, enabling investment decisions to be made on a completely automated basis.
All of these automated trading models and investment algorithms depend on data. However, there are significant differences in the data available, differences that often only become apparent at second glance but that play an important role in today’s world of optimised trading.
Firstly, price data is not generated at exactly the same time on all trading venues. Price formation often takes place on just one market, with the other venues replicating the movements of the original. Secondly, aside from variations in speed, differences in data quality, i.e. the reliability and granularity of the data, are also key for professional market participants. Insight into the entire order book adds value for professional traders that goes far beyond the latest bid/ask price.
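To make this difference concrete, here is a minimal sketch in Python (hypothetical class and method names, invented prices and sizes, not any exchange’s actual feed format): two order books that look identical at the top of the book, i.e. in the plain bid/ask quote, can differ sharply in the depth behind it.
```python
from dataclasses import dataclass

@dataclass
class Level:
    price: float   # limit price of this order book level
    size: int      # aggregated quantity resting at this price

@dataclass
class OrderBookSnapshot:
    bids: list[Level]  # sorted from the best (highest) bid downwards
    asks: list[Level]  # sorted from the best (lowest) ask upwards

    def top_of_book(self) -> tuple[float, float]:
        """All that a plain bid/ask quote conveys."""
        return self.bids[0].price, self.asks[0].price

    def depth_within(self, pct: float) -> int:
        """Liquidity resting within +/- pct of the mid price,
        visible only with full order book data."""
        bid, ask = self.top_of_book()
        mid = (bid + ask) / 2
        bid_qty = sum(l.size for l in self.bids if l.price >= mid * (1 - pct))
        ask_qty = sum(l.size for l in self.asks if l.price <= mid * (1 + pct))
        return bid_qty + ask_qty

# Two books with the same quote but very different depth behind it.
thin = OrderBookSnapshot(bids=[Level(99.9, 50)], asks=[Level(100.1, 40)])
deep = OrderBookSnapshot(
    bids=[Level(99.9, 50), Level(99.8, 900), Level(99.7, 1200)],
    asks=[Level(100.1, 40), Level(100.2, 800), Level(100.3, 1500)],
)
print(thin.top_of_book() == deep.top_of_book())             # True: identical quote
print(thin.depth_within(0.005), deep.depth_within(0.005))   # 90 vs 4490: very different liquidity
```
A trader seeing only the bid/ask quote cannot distinguish the two situations; a trader with access to the full order book can.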
Thirdly, traditional, pure stock exchange data is increasingly being supplemented by two additional categories that serve as a basis for trading algorithms. On the one hand, so-called “alternative data” can indicate trading and investment opportunities: a much-cited example is the prediction of retail sales based on automated analyses of satellite images showing how full the car parks at shopping centres are; sentiment analyses drawn from social networks and other public data sources are another. On the other hand, the quantitative analysis and aggregation of (mostly) large volumes of data is on the rise; progress in artificial intelligence and related methods enables better pattern recognition and therefore a better understanding of market structures. These analytics and alternative data supplement the raw exchange data and often provide deeper and better insight into current market structures.
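As a purely illustrative sketch (invented posts, invented prices, and a deliberately naive bag-of-words score; the word lists and function below are hypothetical, and production systems use far richer models), the following shows the basic shape of such a sentiment signal placed alongside price data.
```python
POSITIVE = {"beat", "growth", "upgrade", "strong"}   # toy word lists, purely illustrative
NEGATIVE = {"miss", "downgrade", "weak", "loss"}

def naive_sentiment(posts: list[str]) -> float:
    """Crude bag-of-words score in [-1, 1] over a batch of public posts."""
    pos = sum(word in POSITIVE for post in posts for word in post.lower().split())
    neg = sum(word in NEGATIVE for post in posts for word in post.lower().split())
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# Invented posts and prices, only to show how such a signal sits next to market data.
posts = [
    "Strong quarter, earnings beat expectations",
    "Analyst upgrade after solid growth figures",
]
price_yesterday, price_today = 100.0, 101.5

signal = naive_sentiment(posts)                    # +1.00 here: every keyword hit is positive
daily_return = price_today / price_yesterday - 1   # +1.5 %
print(f"sentiment={signal:+.2f}, return={daily_return:+.2%}")
```
Whether such a signal actually has predictive value is exactly the kind of question the quantitative analysis described above is meant to answer.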
Stock exchanges and specialist providers are responding to these trends and requirements by expanding their offerings. They have reacted to soaring demand for data and metrics and are investing in new services that enable trading participants to analyse market developments and structures more quickly and in more detail, an option previously available only to a few highly specialised market participants. This translates into two demands on us as a market infrastructure provider. Firstly, we need to be able to process and disseminate the sheer volume of information with low latency at all times, independently of the market situation. This requires a complex and redundant technical infrastructure to be developed and operated, because we have to ensure that price data can flow unimpeded, at a rate of thousands of updates per second, even on days with high volumes and high volatility. Secondly, we need to generate added value in the form of analytics in order to offer our clients the corresponding services. This is the only way to turn big data into smart data, to the benefit of investors.
It should be noted that, as described, market participants benefit from data in very different ways, from automated trading to general information about current developments on the stock exchange. Differentiated fee schedules reflect this broad range of uses and thus the very different forms of value added. For example, in order to improve the general information function for private investors, Deutsche Börse recently cut prices drastically for these end users.
Quality is key
Data and the analysis thereof (analytics) are the tools used to make predictions about the future based on the past and present. The important elements here are the high quality and continuous availability of raw data, as well as robust models to extract the relevant information from the huge volume of data. These market requirements are the benchmark by which market infrastructure providers like Deutsche Börse must measure their service offering, so that raw data can become valuable data.
This article was first published in the Börsen-Zeitung (in German) on 2 March 2018.