Overclocking and Low Latency: Why It Is Mission Critical for High-Frequency Trading
By James Lupton, CTO of Blackcore Technologies
In the fast-paced world of high-frequency trading (HFT), every microsecond counts. With trades executed in fractions of a second, even minor improvements in processing speed can translate into significant advantages. In this context, overclocking becomes indispensable, offering traders the ability to extract maximum performance from their hardware, ultimately enhancing competitiveness and efficiency in executing lightning-fast trades.
How overclocking can optimize high-frequency trading strategies
For latency-sensitive trading strategies, every optimization along the tick-to-trade path must be considered. For very simple strategies, field-programmable gate arrays (FPGAs) may be used to minimize latency – the time delay between an order being placed and its execution – but as algorithms become more complex, or require software-level coordination alongside the FPGA, a higher clock speed on the host server becomes another smart way to optimize latency.
It’s important to understand what overclocking means in this context. Overclocking is the practice of pushing hardware components beyond their standard operating speeds – for example, running the CPU (Central Processing Unit) at a higher clock rate, or tuning memory parameters for the lowest latency rather than maximum bandwidth. The process is complex, and if performed incorrectly it can lead to overheating, server instability, and hardware damage. Overclocking is something of an art: it often takes years of experience and expertise to safely overclock the latest processors with any sort of production-grade reliability.
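To make the idea concrete, a higher clock rate directly shortens the time each cycle takes, which is where the latency gain comes from. The sketch below illustrates that relationship; the frequencies and function names are hypothetical examples for illustration, not figures for any specific product:

```python
# Illustrative sketch: how a higher clock rate shortens each cycle.
# The frequencies below are hypothetical, chosen only to illustrate the math.

def overclock_gain_pct(base_ghz: float, achieved_ghz: float) -> float:
    """Percentage clock-speed increase over the stock frequency."""
    return (achieved_ghz - base_ghz) / base_ghz * 100

def cycle_time_ns(freq_ghz: float) -> float:
    """Duration of a single clock cycle in nanoseconds."""
    return 1.0 / freq_ghz

base, tuned = 3.2, 4.64  # hypothetical stock vs. overclocked frequency (GHz)
print(f"clock gain: {overclock_gain_pct(base, tuned):.0f}%")  # 45%
print(f"cycle time: {cycle_time_ns(base):.3f} ns -> {cycle_time_ns(tuned):.3f} ns")
```

Every instruction the algorithm retires is bounded by that cycle time, so shaving it down shortens the whole tick-to-trade path.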
However, the gains a smart overclocking manufacturer can make are huge for specific workflows – particularly those found in electronic trading. Our own experience has shown that up to a 45% increase in clock speed is achievable. The challenge, then, is doing this reliably across thousands of systems deployed in colocation facilities globally – while also providing the enterprise-grade tooling and remote management that financial services institutions expect.
Achieving low latency and why it matters in electronic trading
Not all trading strategies are latency-critical, but most are latency-sensitive. In the days of trading pits, traders would physically edge closer to the price source simply to react to the market more quickly. Nowadays, low latency is achieved with co-location, low-latency wireless networking between trading venues, high-performance networking, FPGAs, and compute technology. Using an overclocked server with tuned components can provide a 38% increase in IPC (Instructions Per Cycle), meaning a trading algorithm can execute up to 38% more instructions in each clock cycle than the same processor at stock settings, making the algorithm faster.
In addition, a typical specialized overclocked server can reduce RAM (Random Access Memory) latency by 34% and cache access latency by 30% compared to a standard server with the same processor. All of these improvements reduce compute time and increase algorithm performance, which means trading software can execute strategies and react to market events faster.
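As a rough first-order model, instruction throughput scales with clock frequency times IPC, so the two headline gains above compound. This is a simplification – real workloads are memory- and branch-bound to varying degrees and will fall short of the ideal – but it shows why the combined effect matters more than either number alone:

```python
# First-order model: instruction throughput ~ frequency x IPC.
# Gains are fractions (0.45 = 45%). Real workloads will fall short of
# this ideal because memory stalls do not scale with the core clock.

def throughput_speedup(clock_gain: float, ipc_gain: float) -> float:
    """Combined speedup, assuming the two gains compound multiplicatively."""
    return (1 + clock_gain) * (1 + ipc_gain)

def compute_time_reduction_pct(speedup: float) -> float:
    """Compute-time reduction implied by a given throughput speedup."""
    return (1 - 1 / speedup) * 100

s = throughput_speedup(0.45, 0.38)  # the headline figures from the text
print(f"throughput speedup: {s:.2f}x")                         # 2.00x
print(f"compute-time reduction: {compute_time_reduction_pct(s):.0f}%")
```

Under these assumptions, a 45% clock gain and a 38% IPC gain together roughly double instruction throughput, halving the compute portion of the tick-to-trade path.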
The future of overclocked servers, financial exchange data centers, and the overarching fintech industry
Over the next five years, we expect a large focus on density, cooling, and power efficiency in the data center space. This will center largely on better, hyperscale-appropriate cooling technologies, such as immersion and rack-level liquid cooling, as well as on improving efficiency and costs via waste-energy reuse programs – for example, using excess heat to warm adjacent facilities. This is unlikely to become mainstream in the raw-latency-focused co-location space, due to the disruption incurred by such an upgrade, the rules and regulations surrounding exchange interaction, and the additional cost and complexity of rack-level or immersion cooling.
For trading firms that care about latency, transitioning to the cloud has never really been an option – so data centers in the trading space tend to be physical, located close to exchanges or similar strategic locations, and filled with the most innovative technology in the market. While some financial firms are moving towards cloud-based infrastructures, these tend to be outposts of major cloud providers with dedicated hardware for the financial firm. Cloud infrastructure will continue to be used for non-latency-sensitive activities, but for strategies where tick-to-trade time is important, physical data centers and co-location will continue to be the path to the lowest latency.
The future of high-performance computing in the fintech industry?
While the last 15 years have seen a trend towards hardware-based solutions leveraging FPGAs, and more recently GPUs, software will continue to play an important role in the decision-making process for electronic trading, and high-performance compute via overclocking will allow traders to have an edge over their competitors.
There are certain finance applications that have typically always run on multi-socket systems due to core-count or PCIe (Peripheral Component Interconnect Express) requirements; however, this has usually meant lower clock speeds due to the nature of multi-socket platforms. With recent improvements in CPU architecture and the trend toward higher core counts on a single chip, the industry is now able to offer systems with 56 or more cores in a single socket, with clock speeds significantly above those achieved in traditional multi-socket or non-overclocked systems. This not only means more performance but can also be easier to manage at an application level. I think we’ll see larger migrations to single-socket platforms for these applications in the near future.
The finance industry is a fantastic market segment because there’s always something new and exciting being tested or deployed - someone looking to gain an advantage or exploit some new technology or strategy. Today, one of those technologies is overclocked servers, tomorrow it could be AI. Next week, it’s something no one else has even considered yet. Like all good tech stories, there’s also a good chance it’ll come from someone working in their garage. For now, however, it’s clear that leveraging overclocked machines is an integral part of the market, with more and more firms realizing that without them, they’ll be left behind.