NYSE trading system architecture

The market data that is received typically informs the system of the latest state of the orderbook. Let us say different logics are being run over a single market data event, as discussed in the earlier example. It makes one shudder to think of the expected losses in case of a ten-minute downtime when daily trade runs into crores of rupees. For firms, especially those using high frequency trading systems, it has become a necessity to innovate on technology in order to compete in the world of algorithmic trading, making the field a hotbed for advances in computer and network technologies.

Security

This should be a vital and integral part of the design architecture.

“NYSE has implemented large network pipes across data centers and trading systems. We can move data around very quickly. Data needs to move in and out of analytics systems (like Netezza) fast.” NYSE Technologies makes its systems available for purchase and installation behind a firewall or as a service.


All the applications and OSs are hardened periodically for safety.

Backup and recovery

This has emerged as one of the vital aspects of business continuity. When online exchanges were designed a few years ago, perhaps as much emphasis was not placed on this aspect as it is today. However, it's not difficult to add business continuity processes to an existing network. Shenoy says, "As a backup to our VSAT network, a terrestrial-based trading network was also deployed. Leased lines connect our nationwide locations."

"We are the only stock exchange in the country to have a fully-redundant business continuity site in Chennai," he adds.

Availability

Ideally, online exchanges should have 'five-nines' availability.
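As a quick back-of-the-envelope check on what 'five-nines' means in practice, the short sketch below (Python, illustrative only) works out the yearly downtime budget implied by different numbers of nines.

# Downtime budget implied by "N nines" of availability over one year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in (3, 4, 5):
    availability = 1 - 10 ** (-nines)          # e.g. five nines -> 0.99999
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability:.5f}): ~{downtime_min:.1f} minutes of downtime per year")

Five nines works out to only about five minutes of downtime in a whole year, which is why it is treated as the ideal rather than the norm.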

Exchanges usually prefer to host their infrastructure in-house rather than use the services of an external data center. NSE claims to achieve very high uptime.

Applications

It's difficult to deploy out-of-the-box applications at exchanges, as each has a unique architecture based on factors like operations flow, trading volumes, number of members, number of users, and number of locations.

Applications like trading, clearing, risk management, surveillance, index computation, listing, membership, and accounts may be developed in-house or by external software developers.

The 'big two' architectures

NSE and BSE, the 'big two' exchanges, believe in updating and upgrading their technology systems to keep delivering on the commitments and promises made to their members, partners, and customers.

NEAT stores all trading information in an in-memory database at the server end to achieve minimum response time and maximum system availability for users. The telecommunications network uses the X.25 protocol. Each trading member trades on the NSE with other members through a PC located in the trading member's office.

These leased lines are multiplexed using dedicated 2 Mbps optical-fiber links. The WDM participants connect to the trading system through dial-up links. The systems claim to handle up to two million trades a day. BOLT has a two-tier architecture: the trader workstations are connected directly to the backend server, which acts as a communication server and a Central Trading Engine (CTE). Other services like information dissemination, index computation, and position monitoring are also provided by the system.

A transaction-monitoring facility in the Tandem architecture helps maintain data integrity through NonStop SQL. Access to market-related information through the trader workstations is essential for market participants to act on a real-time basis and take instantaneous decisions.

Market information is fed to news agencies in real time. The exchange plans to enhance these capabilities further to achieve an integrated two-way information flow.

Online trading portals

Online trading is investment activity that takes place over the Internet without the physical presence of a broker. An end-user investor has to register with an online trading portal like ICICIdirect.

The investor thus enters into an agreement with the firm to trade in different securities according to the terms and conditions laid down in the agreement. Since the servers of the online trading portal are connected all the time to the stock exchanges and designated banks, order processing is done in real time.

Investors can also get updates on their trading and check the status of their orders either through e-mail or through the interface.

Portal design

Harish Malhotra, Chief Technology Officer, Motilal Oswal Securities Limited, says, "The portal should be simple to navigate, full of useful and relevant information which is available with the lowest number of clicks, and should be personalized." Users are usually given options to link their bank accounts, Demat accounts, and brokerage accounts into a single interface.

There is also a single window for all exchanges and a single screen for the complete order routing mechanism. The hardware used comprises Web and application servers, switches, routers, firewalls and security devices, and specialized appliances.

The systems have been customized by the firm's in-house team, while the trading applications are outsourced.

Portal success

The success of a trading portal will largely depend on the bouquet of services it offers the end-user. Most portals charge a small registration fee and brokerage based on various conditions. However, it's important for the organization to stay focused on customer-centric services and delivery models to win the most attention.

Some risk checks may be particular to certain strategies, while others might need to be applied across all strategies; a small sketch of such a split follows below. Since the new architecture was capable of scaling to many strategies per server, the need to connect to multiple destinations from a single server emerged.
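The split between strategy-specific and system-wide risk checks can be sketched roughly as follows; the limits, order fields, and class names here are illustrative assumptions rather than any particular platform's API.

# Sketch: per-strategy risk checks versus checks applied across all strategies.
# Limits and order fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Order:
    strategy: str
    symbol: str
    qty: int
    price: float

# A check owned by one strategy (e.g. a cap on its own order size).
def strategy_specific_check(order: Order, max_qty: int = 500) -> bool:
    return order.qty <= max_qty

# A check the order manager applies to every strategy (e.g. a gross notional cap).
class GlobalRiskCheck:
    def __init__(self, max_gross_notional: float) -> None:
        self.max_gross_notional = max_gross_notional
        self.gross_notional = 0.0

    def allow(self, order: Order) -> bool:
        notional = order.qty * order.price
        if self.gross_notional + notional > self.max_gross_notional:
            return False
        self.gross_notional += notional
        return True

global_check = GlobalRiskCheck(max_gross_notional=1_000_000)

def submit(order: Order) -> bool:
    # An order goes out only if it clears both layers.
    return strategy_specific_check(order) and global_check.allow(order)

print(submit(Order("momentum", "ABC", 100, 99.5)))   # True: passes both layers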

To serve these destinations, the order manager hosted several adaptors to send orders to multiple destinations and receive data from multiple exchanges. Each adaptor acts as an interpreter between the protocol understood by the exchange and the protocol used for communication within the system. Multiple exchanges therefore mean multiple adaptors. However, to add a new exchange to the system, a new adaptor has to be designed and plugged into the architecture, since each exchange follows its own protocol, optimized for the features that the exchange provides.
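To make the adaptor idea concrete, here is a minimal sketch in which two adaptors translate the same internal order into two invented destination formats; real exchange protocols are of course far richer than this.

# Sketch of the adaptor layer: each adaptor translates between an exchange's
# native message format and the system's internal order representation.
# Both wire formats below are invented for illustration.
from abc import ABC, abstractmethod

INTERNAL_ORDER = {"symbol": "ABC", "side": "BUY", "qty": 100, "price": 99.5}

class ExchangeAdaptor(ABC):
    @abstractmethod
    def to_exchange(self, order: dict) -> str:
        """Encode an internal order into the destination's protocol."""

class AlphaExchangeAdaptor(ExchangeAdaptor):
    def to_exchange(self, order: dict) -> str:
        # Hypothetical pipe-delimited protocol used by "exchange Alpha".
        return f"NEW|{order['symbol']}|{order['side']}|{order['qty']}|{order['price']}"

class BetaExchangeAdaptor(ExchangeAdaptor):
    def to_exchange(self, order: dict) -> str:
        # Hypothetical key=value protocol used by "exchange Beta".
        return ";".join(f"{k}={v}" for k, v in order.items())

# The order manager simply picks the adaptor for the destination in question.
adaptors = {"ALPHA": AlphaExchangeAdaptor(), "BETA": BetaExchangeAdaptor()}
for venue, adaptor in adaptors.items():
    print(venue, adaptor.to_exchange(INTERNAL_ORDER))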

To avoid this hassle of adaptor addition, standard protocols have been designed. These not only make it manageable to connect to different destinations on the fly, but also drastically reduce the go-to-market time when it comes to connecting with a new destination (see also the detailed tutorial on connecting FXCM over FIX). The presence of standard protocols makes it easy to integrate with third-party vendors, for analytics or market data feeds as well.
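As a concrete illustration of such a standard protocol, the sketch below assembles a minimal FIX-style NewOrderSingle by hand, including the body-length and checksum fields; the session identifiers are placeholders, several mandatory FIX header fields are omitted for brevity, and a real deployment would use a proper FIX engine rather than hand-built strings.

# Minimal FIX-style NewOrderSingle (35=D) assembled by hand, to illustrate the
# tag=value structure of a standard protocol. Session values are placeholders;
# production systems use a FIX engine instead of hand-built strings.
from typing import List, Tuple

SOH = "\x01"

def build_fix(body_fields: List[Tuple[int, str]], begin_string: str = "FIX.4.2") -> str:
    body = SOH.join(f"{tag}={value}" for tag, value in body_fields) + SOH
    head = f"8={begin_string}{SOH}9={len(body)}{SOH}"
    checksum = sum((head + body).encode()) % 256          # standard FIX checksum
    return head + body + f"10={checksum:03d}{SOH}"

msg = build_fix([
    (35, "D"),            # MsgType = NewOrderSingle
    (49, "MYDESK"),       # SenderCompID (placeholder)
    (56, "VENUE"),        # TargetCompID (placeholder)
    (55, "ABC"),          # Symbol
    (54, "1"),            # Side = Buy
    (38, "100"),          # OrderQty
    (40, "2"),            # OrdType = Limit
    (44, "99.50"),        # Price
])
print(msg.replace(SOH, "|"))   # print with visible delimiters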

In addition, simulation becomes very easy: receiving data from the real market and sending orders to a simulator is just a matter of using the FIX protocol to connect to the simulator. The simulator itself can be built in-house or procured from a third-party vendor. Similarly, recorded data can simply be replayed, with the adaptors agnostic to whether the data is being received from the live market or from a recorded data set.
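A small sketch of that source-agnostic idea: the handler below consumes ticks without knowing whether they come from a live adaptor or from a recorded file, because both would yield the same internal tick structure (the CSV layout here is an assumption).

# Sketch: the strategy-facing handler is agnostic to whether ticks are live or
# replayed from a recording. The tick format and file layout are assumptions.
import csv
import io
from typing import Dict, Iterator

RECORDED = io.StringIO(
    "timestamp,symbol,price,qty\n"
    "1.000,ABC,100.00,50\n"
    "1.005,ABC,100.05,20\n"
)

def replay_ticks(source) -> Iterator[Dict[str, float]]:
    """Yield internal ticks from any file-like CSV source (recorded or otherwise)."""
    for row in csv.DictReader(source):
        yield {"ts": float(row["timestamp"]),
               "price": float(row["price"]),
               "qty": float(row["qty"])}

def handle_tick(tick: Dict[str, float]) -> None:
    # Strategy code never needs to know where the tick came from.
    print(f"tick at t={tick['ts']}: {tick['qty']} @ {tick['price']}")

# A live feed would be consumed through exactly the same handle_tick() call.
for tick in replay_ticks(RECORDED):
    handle_tick(tick)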

Emergence of low latency architectures

With the building blocks of an algorithmic trading system in place, strategies now competed on the ability to process huge amounts of data in real time and make quick trading decisions. With the advent of standard communication protocols like FIX, the technology entry barrier to set up an algorithmic trading desk became lower, and the field hence more competitive.

As servers got more memory and higher clock frequencies, the focus shifted towards reducing the latency of decision making. Over time, reducing latency became a necessity for many reasons: some strategies make sense only in a low-latency environment, and in a survival-of-the-fittest market, competitors will pick you off if you are not fast enough. Latency, however, has many components, and quantifying all of them in one generic term may not make much sense.

Although latency is very easily understood, it is quite difficult to quantify. It therefore becomes increasingly important how the problem of reducing latency is approached. Consider the basic life cycle: a market data packet is published by the exchange; the packet travels over the wire; it arrives at a router on the server side; the router forwards the packet over the network on the server side; the packet arrives at the Ethernet port of the server; the adaptor then parses the packet and converts it into a format internal to the algorithmic trading platform; and the packet travels through the several modules of the system (the CEP, the tick store, and so on). The CEP analyses the event and sends an order request, and the order request goes back through the reverse of the cycle that the market data packet followed. High latency at any of these steps ensures high latency for the entire cycle.
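Because latency is hard to reason about without measurement, a common first step is simply to timestamp the event at each stage of the cycle described above; the sketch below shows the idea with hypothetical stage names.

# Sketch: timestamp a market data event at each stage of the cycle so the
# per-stage latency can be broken down. Stage names are illustrative.
import time

class LatencyTrace:
    def __init__(self) -> None:
        self.marks = []

    def mark(self, stage: str) -> None:
        self.marks.append((stage, time.perf_counter_ns()))

    def report(self) -> None:
        for (prev_stage, prev_t), (stage, t) in zip(self.marks, self.marks[1:]):
            print(f"{prev_stage} -> {stage}: {(t - prev_t) / 1_000:.1f} us")

trace = LatencyTrace()
trace.mark("packet_received")
# ... decode the packet into the internal format ...
trace.mark("decoded")
# ... CEP evaluates the strategy logic ...
trace.mark("decision_made")
# ... order encoded and handed to the exchange adaptor ...
trace.mark("order_sent")
trace.report()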

Hence latency optimization usually starts with the first step in this cycle that is in our control, namely the packet's journey over the wire. The easiest thing to do here is to shorten the distance to the destination as much as possible. Colocations are facilities provided by exchanges to host the trading server in close proximity to the exchange.

The following diagram illustrates the gains that can be made by cutting the distance. For any kind of high-frequency strategy involving a single destination, colocation has become a de facto must. However, strategies that involve multiple destinations need some careful planning. Several factors, such as the time taken by the destination to reply to order requests compared with the ping time between the two destinations, must be considered before making such a decision. The decision may also depend on the nature of the strategy.
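The multi-destination trade-off can be roughed out with a little arithmetic: for each candidate hosting site, add up the round trips and the venue response times for every destination the strategy must reach. All figures below are purely illustrative.

# Back-of-the-envelope comparison of where to host a server that must trade on
# two venues, A and B. All latency figures (in milliseconds) are illustrative.
rtt = {
    ("colo_A", "A"): 0.05, ("colo_A", "B"): 4.0,   # hosted at venue A's colocation
    ("colo_B", "A"): 4.0,  ("colo_B", "B"): 0.05,  # hosted at venue B's colocation
}
order_reply_time = {"A": 0.3, "B": 1.2}            # time each venue takes to acknowledge

for site in ("colo_A", "colo_B"):
    # Total time to fire one order at each venue and hear back from both.
    total = sum(rtt[(site, venue)] + order_reply_time[venue] for venue in ("A", "B"))
    print(f"{site}: {total:.2f} ms to complete a round trip to both venues")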

Network latency is usually the first target when reducing the overall latency of an algorithmic trading system. However, there are plenty of other places where the architecture can be optimized.

Propagation latency

Propagation latency signifies the time taken to send the bits along the wire, constrained by the speed of light of course. Several optimizations have been introduced to reduce propagation latency beyond simply reducing the physical distance. For example, Spread Networks built a dedicated low-latency fiber route between Chicago and New York that brought the estimated roundtrip time noticeably below that of ordinary cable routes, and microwave communication, adopted further by firms such as Tradeworx, brought the estimated roundtrip time down to roughly 8 milliseconds.

Note that the theoretical minimum is about 7 milliseconds. Continuing innovations are pushing the boundaries of science and fast approaching the theoretical limit set by the speed of light. The latest developments in laser communication, earlier adopted in defense technologies, have shaved further nanoseconds off an already thin latency over short distances.
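The speed-of-light bound is easy to check: the sketch below computes roundtrip propagation time over an assumed Chicago-New York straight-line distance of roughly 1,150 km, both in vacuum (the microwave and laser case) and in fiber, where light travels at roughly two-thirds of c and routes are longer than the straight line.

# Propagation time over an assumed ~1,150 km Chicago-New York straight line.
# Distance and velocity factors are approximations for illustration only.
C = 299_792_458.0            # speed of light in vacuum, m/s
DISTANCE_M = 1_150_000.0     # approximate straight-line distance

def roundtrip_ms(distance_m: float, velocity_factor: float) -> float:
    return 2 * distance_m / (C * velocity_factor) * 1_000

print(f"vacuum / microwave:       {roundtrip_ms(DISTANCE_M, 1.00):.1f} ms roundtrip")
print(f"fiber (straight line):    {roundtrip_ms(DISTANCE_M, 0.67):.1f} ms roundtrip")
print(f"fiber (20% longer route): {roundtrip_ms(DISTANCE_M * 1.2, 0.67):.1f} ms roundtrip")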

Network processing latency

Network processing latency signifies the latency introduced by routers, switches, and similar devices along the path. The next level of optimization in the architecture of an algorithmic trading system would be in the number of hops that a packet takes to travel from point A to point B.

For example, a packet could travel the same distance via two different paths, but have two hops on the first path versus three on the second. Assuming the propagation delay is the same, the routers and switches on each path introduce their own latency, so as a rule of thumb, the more hops, the more latency is added.
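Written out as arithmetic, the rule of thumb looks like this; the per-device latencies are illustrative assumptions.

# Compare two paths of equal length that differ only in the number of hops.
# Per-device latencies (in microseconds) are illustrative assumptions.
propagation_us = 500.0                      # same wire distance on both paths
path_1_devices = [5.0, 5.0]                 # two switch hops
path_2_devices = [5.0, 5.0, 30.0]           # two switches plus a slower router

for name, devices in (("path 1", path_1_devices), ("path 2", path_2_devices)):
    total = propagation_us + sum(devices)
    print(f"{name}: {len(devices)} hops, {total:.0f} us one-way")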

Network processing latency may also be affected by what we refer to as microbursts. A microburst is a sudden increase in the rate of data transfer that does not necessarily affect the average rate of data transfer. Since algorithmic trading systems are rule-based, all such systems will react to the same event in the same way. As a result, many participating systems may send orders at the same time, leading to a sudden flurry of data transfer between the participants and the destination, which is what produces a microburst.

The following diagram represents what a microburst is. The first figure shows a 1-second view of the data transfer rate; we can see that the average rate is well below the available bandwidth of 1 Gbps. However, if we dive deeper and look at the second image (the 5-millisecond view), we see that the transfer rate has spiked above the available bandwidth several times each second. As a result, the packet buffers on the network stack, both in the network endpoints and in the routers and switches, may overflow.
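One way to see microbursts in practice is to re-bucket a packet log into very small windows and compare each window's rate against the link capacity, as in the sketch below (the packet log format and figures are assumptions).

# Sketch: detect microbursts by measuring throughput over small windows.
# Input is a list of (timestamp_seconds, bytes) records; format is assumed.
from collections import defaultdict

LINK_BPS = 1_000_000_000          # 1 Gbps link
WINDOW_S = 0.005                  # 5 millisecond buckets

packets = [(0.0010, 200_000), (0.0012, 900_000), (0.2500, 1_000), (0.6000, 400_000)]

bytes_per_window = defaultdict(int)
for ts, size in packets:
    bytes_per_window[int(ts / WINDOW_S)] += size

avg_bps = sum(size for _, size in packets) * 8 / 1.0     # averaged over a 1 s view
print(f"average rate: {avg_bps / 1e6:.1f} Mbps (well under capacity)")

for window, nbytes in sorted(bytes_per_window.items()):
    rate_bps = nbytes * 8 / WINDOW_S                     # instantaneous rate in the bucket
    if rate_bps > LINK_BPS:
        print(f"microburst in window {window}: {rate_bps / 1e9:.2f} Gbps instantaneous")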

To avoid such overflows, a bandwidth much higher than the observed average rate is usually allocated for an algorithmic trading system.

Serialization latency

Serialization latency signifies the time taken to put the bits on, and pull them off, the wire. For example, a full-size Ethernet packet transmitted on a T1 line (1.544 Mbps) incurs a serialization delay of about 8 milliseconds.

However, the same packet over a 56 Kbps modem line would take on the order of a couple of hundred milliseconds, while a 1 Gbps Ethernet line reduces this latency to about 11 microseconds.
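These serialization figures follow directly from packet size divided by line rate; the sketch below reproduces them for an assumed 1,500-byte packet, so the exact numbers shift slightly with the packet size chosen.

# Serialization delay = packet size / line rate, for an assumed 1,500-byte packet.
PACKET_BITS = 1_500 * 8

links_bps = {
    "T1 (1.544 Mbps)": 1_544_000,
    "56 Kbps modem": 56_000,
    "1 Gbps Ethernet": 1_000_000_000,
}

for name, bps in links_bps.items():
    delay_s = PACKET_BITS / bps
    print(f"{name}: {delay_s * 1e3:.3f} ms ({delay_s * 1e6:.1f} us)")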

Interrupt latency

Interrupt latency signifies the latency introduced by interrupts while receiving packets on a server. It is defined as the time elapsed between when an interrupt is generated and when the source of the interrupt is serviced. When is an interrupt generated? Interrupts are signals to the processor, emitted by hardware or software, indicating that an event needs immediate attention.
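On Linux, one crude way to observe how much interrupt activity a NIC generates is to diff its counters in /proc/interrupts over an interval, as in the sketch below; the interface name eth0 and the reliance on that file's layout are assumptions specific to this illustration.

# Rough sketch (Linux only): count how many interrupts lines mentioning a given
# NIC accumulate over one second, by diffing /proc/interrupts. The interface
# name "eth0" is an assumption; adjust for the actual NIC queues on the host.
import time

def nic_interrupts(interface: str) -> int:
    total = 0
    with open("/proc/interrupts") as f:
        for line in f:
            if interface in line:
                # Tokens after the IRQ number are per-CPU counters; sum the numeric ones.
                total += sum(int(tok) for tok in line.split()[1:] if tok.isdigit())
    return total

before = nic_interrupts("eth0")
time.sleep(1)
after = nic_interrupts("eth0")
print(f"eth0 interrupts in the last second: {after - before}")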


This was a detailed post on algorithmic trading system architecture which, we are sure, gave an insightful view of the components involved and of the various challenges that architecture developers need to handle in order to build robust automated trading systems.

High-Level Architecture

Increased competition, higher market data volume, and new regulatory demands are some of the driving forces behind industry changes. Figure 1 depicts the high-level architecture of a trading environment. The ticker plant and the algorithmic trading engines are located in the high-performance trading cluster in the firm's data center or at the exchange. The human traders are located in the end-user applications area.