Top Takeaways from the 2015 FIX Americas Conference


Candyce Edelen shares takeaways from the 2015 FIX Americas Trading Briefing

Last week, on April 15, 2015, the FIX Trading Community put on an excellent half-day event for the Americas Conference. It was a substantially smaller event than in past years, in an effort to make the content more focused and the event profitable. The team accomplished both goals.

Here are some of the highlights of the event.

How Regulation is Like Debugging Code

My favorite session was the keynote by Gregg E. Berman, who until recently served as Associate Director of the Office of Analytics and Research in the SEC's Division of Trading and Markets (he's stepping down this month). Throughout his tenure at the SEC, he focused on using data-driven analytics to inform policy.

He compared developing regulations and altering market structure to the process of developing and debugging code. This was an interesting and very relevant metaphor. As Berman pointed out, there’s a difference between debugging and tossing out code. You have to know what the program is actually doing versus what it’s supposed to do before you can debug issues. You can’t just delete a line of code and expect things to work properly. Following this approach, his department has been very focused on understanding the implications of changing regulations.

For example, the JOBS Act included an item about decimalization. The legislators posited that if they widened tick sizes for small caps, market making would become more profitable for brokers, which would lead to more liquidity for small-cap firms, which would create more jobs. This seems like a reasonable assumption.

But Berman's team dug into the data and found that small- and mid-cap stocks are not homogeneous. There are wide variations in levels of liquidity, average daily spread, and volume at depth. In 2013, approximately 41% of small caps were trading at an average daily spread of 4.5¢, 14% had a spread smaller than 1.5¢, and another 14% traded at a 15.0¢ spread (I was scribbling madly, so I might have recorded these numbers incorrectly, but you get the point). So you can't just raise tick sizes across the board and expect a positive result. A tick pilot needs to be very specific.

Berman also addressed Maker/Taker changes. He talked about a Nasdaq pilot that lowered maker/taker fees on 14 stocks. The change reduced Nasdaq's overall market share, but the impact varied widely by stock: Bank of America (BAC) saw a very slight increase in market share, while several stocks experienced as much as a 12% reduction. Liquidity moved less at depth than at top of book. Berman urged the markets to do more testing on this topic.

I hope that Berman's influence over the SEC will outlast his tenure. Several people expressed hopes that his work will help prevent unintended consequences like those caused by Reg NMS.

Order Execution Transparency

The buy-side working group is still pushing to get more transparency from the sell-side about the routing decisions made as buy-side orders are executed. I wrote about this topic after last year's regional briefing.

The primary goal of the initiative is understanding how the buy-sides' orders are being represented to the markets. They're seeking information about execution venue selection and routing decisions, but they want to know more than just where the order was executed. They also want to identify venues where they don't get executed. I think part of this has to do with their desire to understand the sell-side's motivations in routing. For example, how much is routing influenced by maker/taker pricing incentives? Are routing decisions motivated by rebates resulting in fills, or are those orders getting routed away? But the panelists insisted that it's not just about transparency in routing decisions; it's also about consistency. They've been getting very different results across firms, and they want to be better at predicting outcomes.

Issues making this more complex include timestamp inconsistency, the lack of maker/taker flags from the exchanges, and inconsistency in what different buy-sides are asking for. The committee is trying to make the requirements as consistent as possible to make it easier for the sell-side to comply. They appear to be making progress: one panelist, a member of the buy-side working group, said he's now getting venue information from close to 90% of his counterparties in the US and 70-80% globally.
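
To make the venue question concrete, here's a minimal sketch of how a buy-side might tally fills by venue and liquidity flag from FIX execution reports. The tag numbers (30 = LastMkt, 851 = LastLiquidityInd) are standard FIX fields; the aggregation logic is my own illustration, not anything the working group has specified.

```python
# Minimal sketch: extracting venue and liquidity-flag data from FIX
# execution reports. Tags 30 (LastMkt) and 851 (LastLiquidityInd) are
# standard FIX; the summary logic is illustrative only.

SOH = "\x01"  # standard FIX field delimiter

LIQUIDITY_FLAGS = {
    "1": "added liquidity (maker)",
    "2": "removed liquidity (taker)",
    "3": "liquidity routed out",
}

def parse_fix(raw: str) -> dict:
    """Split a raw FIX message into a tag -> value dict."""
    return dict(
        field.split("=", 1) for field in raw.strip(SOH).split(SOH) if "=" in field
    )

def venue_summary(execution_reports: list[str]) -> dict:
    """Tally executed quantity per (venue, liquidity flag) pair to see
    where and how orders actually got filled."""
    summary: dict[tuple[str, str], int] = {}
    for raw in execution_reports:
        msg = parse_fix(raw)
        if msg.get("35") != "8":  # only ExecutionReport messages
            continue
        venue = msg.get("30", "UNKNOWN")                     # LastMkt
        flag = LIQUIDITY_FLAGS.get(msg.get("851", ""), "unflagged")
        qty = int(msg.get("32", "0"))                        # LastQty
        key = (venue, flag)
        summary[key] = summary.get(key, 0) + qty
    return summary
```

Note that a report like this still can't show the venues where an order rested and never executed, which is part of what the buy-side is asking for.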

Automating the IPO Process

I was very surprised to hear how little automation is in place for running IPOs. Buy-sides generate orders based on their desired participation levels, and the trading desks PHONE these orders into the syndicate of brokers running the IPO (yes…they still phone them in). After the deal is priced, the buy-side gets a portion of the allocation they asked for at a particular price. The broker CALLS the buy-side trader with the allocation, which the trader manually enters into his system. The entire process is opaque, with many opportunities for miscommunication or human error as traders talk to multiple brokers in the syndicate. These are typically the largest orders on a trader's book, there are frequent modifications, the orders live over several days, and there is substantial risk of error at every step.

The buy-side committee has diagrammed the current IPO workflow and will work on a model for replicating it electronically, which should substantially cut down on errors. But the industry will likely take a long time to adopt this approach, even after the committee finishes the initial design and specifications. (After all, how many firms are still using FIX 4.2?)
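
For illustration only, here's a rough sketch of how such an electronic workflow might be modeled. None of these states or field names come from FIX or from the committee's design; they simply trace the steps a phoned-in order passes through today, with the audit trail the phone-based process lacks.

```python
# Hypothetical sketch of an electronic IPO indication lifecycle. The
# message names and states are invented for illustration; they are not
# from the FIX spec or the committee's work.

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class IndicationState(Enum):
    SUBMITTED = "submitted"    # buy-side communicates desired participation
    MODIFIED = "modified"      # frequent over the multi-day book-build
    PRICED = "priced"          # deal priced by the syndicate
    ALLOCATED = "allocated"    # broker returns the allocation
    BOOKED = "booked"          # trader books it into his own system

@dataclass
class IPOIndication:
    indication_id: str
    account: str
    symbol: str
    requested_qty: int
    state: IndicationState = IndicationState.SUBMITTED
    allocated_qty: int = 0
    price: Optional[float] = None
    audit_trail: list = field(default_factory=list)

    def transition(self, new_state: IndicationState, note: str = "") -> None:
        # Every state change is recorded, which is exactly what a
        # phone-and-rekey process cannot guarantee.
        self.audit_trail.append(f"{self.state.value} -> {new_state.value} {note}")
        self.state = new_state
```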

Automating the Post-Trade Process

Allocations were automated in FIX back in 1997, but automation stopped there instead of covering the full lifecycle of a trade. Post-trade processes run much the same today as they did 20 years ago; there is still a lack of straight-through processing for confirmations and clearing.

The buy-side working group has been encouraging brokers to start using FIX for allocations and confirmations. According to an article Scott Atwell wrote for FIX Global, the benefits of this automation include efficiency gains, improved straight-through processing, and quicker identification of issues, all of which provide significant risk reduction and cost savings.
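
For readers who haven't seen one, here's a minimal sketch of the kind of message involved: a FIX 4.4 AllocationInstruction (35=J), the electronic replacement for a phoned or faxed allocation. The tag numbers are standard FIX; the values, and the omission of session-level header and trailer fields, are for illustration only.

```python
# Minimal sketch of a FIX 4.4 AllocationInstruction (35=J) splitting a
# block of 100,000 shares across two sub-accounts. Tag numbers are
# standard FIX 4.4; all values are invented. Session-level fields
# (8, 9, 49, 56, 34, 52, 10) are omitted for brevity.

SOH = "\x01"

allocation_fields = [
    ("35", "J"),          # MsgType: AllocationInstruction
    ("70", "ALLOC-001"),  # AllocID
    ("71", "0"),          # AllocTransType: New
    ("626", "1"),         # AllocType: Calculated
    ("55", "BAC"),        # Symbol
    ("54", "1"),          # Side: Buy
    ("53", "100000"),     # Quantity: total block quantity
    ("6", "15.43"),       # AvgPx: average price for the block
    ("78", "2"),          # NoAllocs: number of sub-account allocations
    ("79", "FUND-A"),     # AllocAccount
    ("80", "60000"),      # AllocQty
    ("79", "FUND-B"),
    ("80", "40000"),
]

body = SOH.join(f"{tag}={value}" for tag, value in allocation_fields) + SOH
print(body.replace(SOH, "|"))  # print with visible delimiters
```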

One of the panelists talked about how his firm adopted FIX for these processes. He said 60% of trade breaks happen in settlement instructions, which is particularly problematic in emerging markets. Now, their system lets them identify exceptions in real time, know which item had an exception, and pinpoint the precise cause (e.g., commission, instructions) without hunting through data. The panelist said his firm has seen substantial ROI from the deployment.
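
As a purely hypothetical illustration of that kind of real-time break detection, the sketch below compares what a buy-side expected against what the broker confirmed and labels the precise cause. The field names are invented; the point is that a labeled exception beats a generic "trade break."

```python
# Hypothetical illustration of real-time break classification: compare
# the expected economics of a trade against the broker's confirmation
# and label the cause. Field names are invented for illustration.

def classify_breaks(expected: dict, confirmed: dict) -> list:
    """Return a list of labeled exceptions instead of a generic 'break'."""
    breaks = []
    if expected["commission"] != confirmed["commission"]:
        breaks.append(f"commission: expected {expected['commission']}, "
                      f"got {confirmed['commission']}")
    if expected["settlement_instructions"] != confirmed["settlement_instructions"]:
        breaks.append("settlement instructions mismatch")  # ~60% of breaks
    if expected["net_money"] != confirmed["net_money"]:
        breaks.append(f"net money: expected {expected['net_money']}, "
                      f"got {confirmed['net_money']}")
    return breaks

# Usage: an empty list means the confirmation matches; anything else is
# an exception the ops desk can act on immediately.
```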

Cyber Security

I must admit that the cyber security panel puzzled me. Based on what the panel discussed, I was not convinced there is a role for the FIX Trading Community in this topic, at least if they restrict the discussion to how it relates to the FIX protocol.

They talked about their effort to categorize threat actors and the need to develop layers of protection. The panel pointed out that FIX networks rely heavily on network access protocols to protect against malicious attacks. One speaker talked at length about the brittleness of FIX engines, describing how, in his testing, FIX engines fall over if he stuffs too many characters into the client order ID tag (ClOrdID, Tag 11).
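
Here's a rough sketch of the kind of robustness test the speaker described: assemble an otherwise well-formed NewOrderSingle with an absurdly long ClOrdID and see how the counterparty's engine copes. The message assembly (BodyLength and CheckSum computation) follows the FIX spec; the session details are placeholders, and a real test would of course go through a proper Logon first.

```python
# Sketch of a robustness test: an otherwise-valid NewOrderSingle (35=D)
# with a 10,000-character ClOrdID (Tag 11). BodyLength (9) and CheckSum
# (10) are computed per the FIX spec; SenderCompID/TargetCompID and
# sequence numbers are placeholders.

SOH = "\x01"

def fix_message(fields: list) -> bytes:
    """Assemble a FIX 4.2 message with correct BodyLength and CheckSum."""
    body = SOH.join(f"{t}={v}" for t, v in fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    msg = (head + body).encode("ascii")
    checksum = sum(msg) % 256          # sum of bytes before Tag 10, mod 256
    return msg + f"10={checksum:03d}{SOH}".encode("ascii")

oversized_order = fix_message([
    ("35", "D"),                   # NewOrderSingle
    ("49", "FUZZER"),              # SenderCompID (placeholder)
    ("56", "TARGET"),              # TargetCompID (placeholder)
    ("34", "2"),                   # MsgSeqNum
    ("52", "20150415-12:00:00"),   # SendingTime
    ("11", "A" * 10_000),          # ClOrdID: 10,000 chars instead of ~20
    ("55", "BAC"),                 # Symbol
    ("54", "1"),                   # Side: Buy
    ("38", "100"),                 # OrderQty
    ("40", "1"),                   # OrdType: Market
])
```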

Admittedly, there is not much security built into the protocol itself; most firms and most FIX engine vendors prioritize low latency over security. The panel pointed out that while a FIX-to-FIX session is authenticated, the FIX engine also talks to multiple internal, unauthenticated systems via APIs. The risk is that an attacker who gets in via the FIX routing network can gain access to the rest of the trading infrastructure fairly easily.

The panel made a big deal about the vulnerability of the protocol and the FIX engines to hackers. But when I pressed them during the Q&A session, they would not (or could not) point to any instance where hackers had gained control of a FIX engine. The fact is that most hackers will gain access through a “front door.” Someone in the organization receives an email, clicks a malicious link, and malware worms its way into the network.

In my admittedly under-informed opinion, focusing our attention on the FIX engine is like tightening the bolts on the bank vault while leaving the teller passwords taped to their workstations.

But as I was writing this post, I decided to check out a document they mentioned during the panel. Back in 2012, the SANS Institute published a detailed white paper that talks about how a firm’s FIX infrastructure could be exploited. This is critically important reading for anyone in electronic trading with responsibility for security.

Firms have to assume that they’re going to be compromised eventually. Usually, by the time a firm discovers a breach, the hacker has been inside for months or even years. Based on that SANS report, there are significant vulnerabilities that need to be addressed.

A Job Well Done

Congratulations to the organizers and committee members who planned and executed this event. The team clearly made some really good decisions that served the conference and the community well. You all did a great job this year!! Thanks also to Thomson Reuters for hosting and to all the sponsors for contributing to this event.