In our third installment of the TrendSpotters Thought Leadership Series, I interviewed Frank Piasecki from ACTIV Financial about trends in high-speed market data.
We talked about the evolving needs of firms of varying size and business focus – from large multinational sell sides to mid-sized buy sides to high-frequency proprietary trading shops. In the past few years, most firms have had to focus on reducing market data latency to the lowest possible level within their budgetary constraints. However, latency is not the only consideration. Throughput is also a major issue, as is normalizing market data from multiple sources and managing it within the consuming firm’s infrastructure.
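To make the normalization point concrete, here is a minimal sketch of what mapping two vendor-specific tick formats into one common record might look like. The field names and vendor formats are illustrative assumptions, not any particular vendor's actual wire format:

```python
# Hypothetical sketch of feed normalization: two made-up vendor tick
# formats are mapped into one common record so that downstream
# applications see a single schema regardless of the source feed.

def normalize_vendor_a(raw: dict) -> dict:
    # Assumed Vendor A format: {"sym": "IBM.N", "px": 100.5, "sz": 200}
    return {"symbol": raw["sym"].split(".")[0],
            "price": raw["px"],
            "size": raw["sz"]}

def normalize_vendor_b(raw: dict) -> dict:
    # Assumed Vendor B format: {"ticker": "IBM", "last": "100.50", "qty": "200"}
    return {"symbol": raw["ticker"],
            "price": float(raw["last"]),
            "size": int(raw["qty"])}

a = normalize_vendor_a({"sym": "IBM.N", "px": 100.5, "sz": 200})
b = normalize_vendor_b({"ticker": "IBM", "last": "100.50", "qty": "200"})
assert a == b  # both feeds now produce identical normalized records
```

In a real ticker plant this mapping also covers symbology, trade conditions, and timestamps, which is where much of the complexity (and cost) lives.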
Frank points out in the interview that firms need to consider several key issues in thinking through market data infrastructure:
- Coverage – what asset class, data breadth and regional feeds are needed?
- Applications – will the data be used to support click trading, high-touch trading, or algorithmic black boxes? Each has very different criteria for how it consumes and uses market data
- Location – will the consuming system be co-located with the market data source?
- Volume – what is the volume of the feed, and what demands will this place on infrastructure?
Low latency has different definitions based on the type of data and asset class. For example, equity options data is very different in character, speed, and throughput from fixed income data. Different use cases also demand different types of data. For example, click trading may need substantial conflation and filtering to show the trader what they need, while an HFT strategy needs to see every quote and every tick.
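The conflation idea above can be sketched in a few lines: a display application that cannot keep pace with the raw feed keeps only the latest quote per symbol, while an HFT consumer would instead subscribe to every update. All class and field names here are illustrative assumptions, not any vendor's API:

```python
# Hypothetical sketch of quote conflation for a slow consumer
# (e.g. a click-trading screen). At most one pending quote is
# retained per symbol; newer quotes overwrite older unconsumed ones.

from dataclasses import dataclass

@dataclass
class Quote:
    symbol: str
    bid: float
    ask: float

class ConflatingQueue:
    """Stores at most one pending quote per symbol."""
    def __init__(self):
        self._latest = {}  # symbol -> most recent unconsumed Quote

    def publish(self, quote: Quote):
        # A later quote for the same symbol supersedes the earlier one.
        self._latest[quote.symbol] = quote

    def drain(self):
        # The consumer takes a snapshot at its own pace.
        pending, self._latest = self._latest, {}
        return list(pending.values())

q = ConflatingQueue()
q.publish(Quote("IBM", 100.0, 100.1))
q.publish(Quote("IBM", 100.2, 100.3))   # supersedes the first IBM quote
q.publish(Quote("MSFT", 30.0, 30.1))
snapshot = q.drain()
# snapshot holds one quote per symbol: the latest IBM quote and the MSFT quote
```

The trade-off is exactly the one Frank describes: conflation caps the load on the consumer at the cost of dropping intermediate ticks, which is acceptable for a human trader's screen but unacceptable for a strategy that must see every update.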
We also discussed the increasingly global nature of trading, even for smaller shops. Frank pointed out that even very modest shops are trading in multiple regions across Europe, Asia, North America, and Latin America. These smaller shops are now able to take advantage of infrastructure that has been built up by the industry over time to gain cost-effective access to global data streams.
Large institutions have broader needs. They need to source an aggregated set of global data plus internally generated data, and then distribute it to a wide group of applications, each of which requires a very different view of the data. In addition, as competition increases in Europe and Asia, more data feeds are available from the various ATSs, ECNs, and MTFs. Trading volume has grown around the globe, causing market data volumes to grow dramatically and putting pressure on existing infrastructure.
We discussed some of the trends in Europe and Asia. In the EU, the lack of a consolidated tape persists in spite of MiFID. Issues driving this include disparate trading rules, a lack of standardization in trade conditions, and proprietary symbology. In Asia, the problem is different. There, the sheer distance between venues is enormous. Other issues affecting aggregation of Asian data include language barriers, regulatory differences, limited transparency, and a high cost for connectivity infrastructure.
We also talked about hardware acceleration. Whether through FPGAs or other strategies, many firms are now starting to adopt hardware acceleration not only as a means of reducing latency, but more importantly as a way to reduce cost. These methods use less horsepower, less space, and less electricity, making them extremely attractive for managing skyrocketing market data volumes and the associated infrastructure costs.
Hardware acceleration is a hot topic, and we’re going to do a TrendSpotters installment specifically on this trend. So stay tuned.
We welcome your comments. Please join us in the TrendSpotters discussion community, where you can share your opinions, ask Frank questions, and debate the issues with other community members. We’d love to have you involved in the discussion. You can also join the conversation on Twitter by using the hashtag #TrendSpotters.
TrendSpotters by PropelGrowth is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.