Tuesday, June 8, 2010

HFT and the May 6 volatility

It's now been more than a month since the stock markets went crazy on May 6, 2010, an event often referred to as the "Flash Crash". Various people have attempted to explain what happened during those 15 minutes; I recently read two such descriptions, one in Newsweek and one in the Financial Times. Both offered interesting perspectives, but neither seemed to break any new ground in its conclusions.

Newsweek points out that there is a lot of money at stake:

While most long-term investors lost their shirts during the Great Panic of 2008, high-frequency traders posted huge profits. "That was the Golden Goose era," says Narang, whose HFT shop launched in March 2009 and just finished its most profitable month.


And Newsweek also provides a nice history lesson, explaining some of what's been going on over the last decade:

You have to go back a decade, to the birth of HFT in September 2000. That month, then-SEC chairman Arthur Levitt, eager to push the market into the digital age, ordered exchanges to implement "decimalization" -- i.e., allowing stocks and options to be listed in one-cent increments rather than 12.5-cent ones.

...

But the combination of decimalization and the advent of new electronic exchanges where buyers and sellers could meet one another directly made life difficult for market makers. Some traditional Wall Street firms folded their market-making desks, while the survivors doubled down on technology and speed.

As a result, trade volume exploded. The average daily volume of equity shares traded in the U.S. zoomed from about 970 million shares in 1999 to 4.1 billion in 2005 to 9.8 billion last year.


And Newsweek also makes a very specific observation about the Flash Crash:

The sell-off gained speed as stop-loss orders were triggered once prices fell a certain amount, and many large institutional investors dumped stocks by the truckload.


It's interesting to me that, in many cases, these "stop-loss" orders may have actually created a loss. For example, suppose you were holding shares of Accenture that you had purchased at $35 earlier in the year, and you were feeling good about them, since Accenture was trading above $40. Worried that the stock might fall, you had placed a "stop-loss" order to sell at $30. Then, during those crazy 15 minutes, Accenture dropped from $40 all the way down to $0.01 before rebounding to close at $39. But when your "stop-loss" order saw the price fall below $30, it issued an automated sell, which was finally executed at, say, $20. You ended up selling a stock that was worth $40 at the start of the day, and $39 at the end of it, for $20, creating a loss where none had actually existed.
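To make the mechanics concrete, here is a minimal sketch, in Python, of how a stop-loss can fill far below its stop price. The function and the numbers are hypothetical, but the idea matches the scenario above: once the stop is touched, the order becomes a market sell and takes whatever bids are actually left in a thinned-out book.

```python
# A minimal sketch (all names and numbers hypothetical) of why a stop-loss
# order can fill far below its stop price: once the stop triggers, it becomes
# a market order and takes whatever liquidity remains on the book.

def execute_stop_loss(price_path, stop_price, bids_at_trigger):
    """Walk a sequence of prices; when the price touches the stop,
    sell at the best bid available at that moment (which, in a
    vanishing order book, may be far below the stop)."""
    for price in price_path:
        if price <= stop_price:
            return max(bids_at_trigger)  # fill at the best remaining bid
    return None  # stop never triggered, shares still held

# Roughly the Accenture scenario described above: the print collapses
# toward $0.01, and the only bids left are "stub" quotes far below $30.
path = [40.00, 38.50, 33.00, 29.90, 5.00, 0.01, 39.00]
fill = execute_stop_loss(path, stop_price=30.00,
                         bids_at_trigger=[20.00, 5.00, 0.01])
print(f"Stop at $30 filled at ${fill:.2f}")   # -> filled at $20.00
```

The point is that the stop price is a trigger, not a guaranteed sale price; in a book where only stub quotes remain, the fill can land anywhere below it.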

I believe that many of these trades were the ones that the exchanges tracked down and cancelled, which seems like reasonable behavior, even though I still don't understand how the exchanges can arbitrarily cancel such trades. Although you lost big on your stop-loss sell, the counterparty actually made money, so how was the exchange justified in taking that money away?

The Financial Times article focuses on an upcoming SEC event, scheduled to be held tomorrow, I believe:

On Wednesday, the regulator is hosting a day-long Washington debate on the topic, involving many leading industry participants. "All market reform will be looked at through the prism of what happened on May 6," says William O'Brien, chief executive of DirectEdge, one of the four main public exchanges for US shares.


The article goes on to note that the markets can be, and are, quite mysterious:

"The real shocker is that it was nothing nefarious that caused the crash," says David Weild, senior advisor to Grant Thornton and former vice-chairman at Nasdaq. "It was acceptable investor behavior -- people trying to put on hedge transactions," he believes.


After 30 years of designing and building systems software, I'm quite accustomed to this sort of thing. It's almost routine to encounter software that is running correctly, exactly as designed, yet does something completely bizarre and unexpected under just the right circumstances. Relatively simple algorithms, when encoded into modern computers and executed zillions of times at lightning speed, can exhibit strikingly unusual behaviors. People who try to understand such events use terms like "emergent systems" or "unintended consequences" to describe what's going on, but what is really happening is just that our systems are more complex and intricate than we comprehend.
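As a toy illustration of that point (and only a toy; this is not a model of the actual May 6 order flow), consider a handful of perfectly sensible stop-loss rules interacting: each triggered stop sells, each sale pushes the price a little lower, and the lower price trips the next tier of stops. The names and numbers below are made up for the sketch.

```python
# A hypothetical toy model of how simple, individually sensible rules can
# interact into a dramatic collective move: each triggered stop-loss sells,
# each sale nudges the price down, and the lower price trips the next stop.

def cascade(start_price, stop_levels, impact_per_sale=0.50):
    """Return the price path as resting stop orders trigger one another."""
    price = start_price
    path = [price]
    pending = sorted(stop_levels, reverse=True)  # highest stops trip first
    while pending and price <= pending[0]:
        pending.pop(0)                 # this stop fires and sells
        price -= impact_per_sale       # its sale pushes the price lower
        path.append(round(price, 2))
    return path

# A small shock takes the price to $39.50; stops clustered just below it
# then knock the price down step by step, far beyond the original shock.
stops = [39.50, 39.25, 39.00, 38.75, 38.50, 38.00, 37.50]
print(cascade(start_price=39.50, stop_levels=stops))
# -> [39.5, 39.0, 38.5, 38.0, 37.5, 37.0, 36.5, 36.0]
```

A small initial dip cascades into a slide several dollars deeper, even though no individual rule misbehaved; that is the sort of "emergent" surprise these systems can produce.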

Still, just as computers have enabled us to create these systems, computers can also help us tame them. So long as the systems are open to all, and fairly and transparently operated, people will design software that works reliably and acceptably with them. The FT article makes the point that the regulatory agencies have some technological catching up to do:

Collecting data by fax may seem hopelessly 20th-century in an age when trading is conducted hundreds of times faster than the blink of an eye, writes Jeremy Grant.

But that is how Scott O'Malia, a commissioner at the US Commodity Futures Trading Commission, recently said his agency was still gathering certain kinds of information from traders.

If the federal government wants to help improve the markets, it should fund the regulatory agencies appropriately so that they can at least keep pace with the organizations they are attempting to oversee.


In a way, I find this discussion quite similar to the debate going on over "Network Neutrality" and the Internet. The great success of the Internet is often, and correctly, attributed to something known as the End-to-End principle: the Internet works best when it is simply a fair, open, and unbiased provider of basic services, and the development and use of applications that take advantage of those services happens entirely at the "endpoints".

The debate over market operations, and HFT, seems to me metaphorically quite similar to the debates that occur in the Internet world over traffic routing. Quite frequently, an organization will show up with an idea for "fixing" the Internet by adding some new routing protocol, traffic prioritization scheme, or other in-network mechanism. But time and time again it has become clear that these methods actually worsen the problems they are intended to fix. Only the end users of the Internet actually have the information and tools needed to design applications that adapt properly and behave as expected under a variety of network conditions.
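For a small, hypothetical example of what "adapting at the endpoints" looks like in code: rather than asking the network to prioritize its traffic, an application can simply retry with an exponentially growing, jittered delay when things go wrong. The helper below is an illustrative sketch, not a prescription; the function name and parameters are my own.

```python
# A minimal, hypothetical illustration of end-to-end adaptation: the network
# itself stays simple and neutral, while the endpoint backs off and retries
# when conditions degrade. Exponential backoff with jitter is just one such
# endpoint-side strategy.

import random
import time

def send_with_backoff(send_fn, max_attempts=5, base_delay=0.1):
    """Try send_fn(); on failure, wait an exponentially growing, jittered
    interval before retrying, so the endpoint adapts to congestion rather
    than expecting the network to prioritize its traffic."""
    for attempt in range(max_attempts):
        try:
            return send_fn()
        except IOError:
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
    raise IOError("gave up after repeated failures")
```

The intelligence about how to behave under stress lives in the application at the edge, while the network in the middle stays simple, which is exactly the End-to-End argument.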

It seems counter-intuitive that the way to make a network, or market, more effective, fair, efficient, and reliable is to make it simpler; most people's intuition when something is not working is to add complexity, not to remove it. But the End-to-End principle has 40 years of success behind it, and is well worth study and appreciation.
