Financial computing continuously demands higher performance, which can no longer be delivered simply by increasing clock speed. As cluster and grid infrastructures grow, their cost of ownership explodes.
Over the past twenty years, financial computing has emerged as a strategic discipline for the finance industry, and as a scientific domain in its own right. Today it is probably one of the fastest growing areas in scientific computing. This remarkable growth has been driven by the continuous development of innovative financial products, a dramatic increase in transaction volumes, and the requirement to process transactions in ever shorter time frames.
Researchers and practitioners have developed a thorough understanding of the mechanics of the financial markets. The modern models they use to price and hedge complex financial products are computationally very demanding, making improved numerical schemes and substantial computing power indispensable. To supply such large amounts of processing power, the standard approach is to connect many servers and desktop machines into clusters and grids. This approach was pioneered in academia and is now heavily used not only in the financial world but also in defence, engineering, life sciences, biotechnology and medicine, to name a few.
Because higher computing performance can no longer be achieved by increasing clock speed, clusters and grids must grow in size. The resulting infrastructures are likely to hit new barriers, such as bandwidth limitations, network latency, power consumption, cooling, floor space and maintenance. Robust operation, a key requirement in any financial services environment, cannot be achieved without paying attention to all of these issues. As a consequence, building and managing traditional clusters and grids is costly.
To pack more processing power onto a single chip, the CPU semiconductor industry has recently started moving to multi-core designs with two, four and more processor cores. The graphics hardware industry made this transition more than ten years ago; NVIDIA, for instance, released its first dual-core chip as early as 1998. Today, specialized processors on graphics cards bring realistic real-time 3D visualization and high-definition video processing to the desktop, and they have remarkable built-in general-purpose computing capabilities for data-parallel problems. These commodity graphics chips, known as GPUs (Graphics Processing Units), are probably today's most powerful computational hardware per dollar.
In this article we investigate how this power can be harnessed for financial computing. Several typical financial applications can be significantly accelerated with specialized massively parallel algorithms running on GPUs. We illustrate the capabilities of GPUs with an implementation of a market-standard local volatility model, which we apply to the pricing of structured equity basket derivatives. Our pricing algorithms run 25 to 50 times faster on a GPU than a serial implementation on a high-end CPU. This performance gain renders many previously infeasible tasks practical, including real-time pricing of path-dependent multi-asset options. We therefore believe that GPUs will have a disruptive impact on the financial services industry: already a single GPU provides a cheap and simple alternative to a smaller cluster or grid, brings high performance computing to traders' and risk managers' desks, and stimulates financial product development.
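The pricing workloads described above are naturally data-parallel: in Monte Carlo pricing, each simulated path is independent of every other, which is exactly the structure GPUs exploit by mapping one path to one thread. As a minimal illustration of that structure (a hypothetical serial sketch under constant volatility, the simplest special case of a local volatility surface — not the article's implementation, and all parameter values are invented), the following prices a European call by Monte Carlo:

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=42):
    """Monte Carlo price of a European call under geometric Brownian
    motion (constant volatility). Each path is independent, so on a
    GPU every thread could simulate one path; here we loop serially
    for clarity."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)              # one standard normal draw per path
        st = s0 * math.exp(drift + vol * z)  # terminal asset price
        total += max(st - k, 0.0)            # call payoff
    return math.exp(-r * t) * total / n_paths  # discounted average payoff

# Example: at-the-money call, 5% rate, 20% volatility, one year to expiry.
price = mc_european_call(s0=100.0, k=100.0, r=0.05, sigma=0.2,
                         t=1.0, n_paths=100_000)
# The result should lie near the Black-Scholes value of roughly 10.45.
```

On a GPU, the loop body would typically become a kernel executed by tens of thousands of threads in parallel, each with its own random number stream; speedups of the magnitude reported above generally come from this one-thread-per-path mapping combined with the GPU's high memory bandwidth.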