Pricing Services for a Real-time World
Blog: The TIBCO Blog
TIBCO Spotfire lets you visualize the results of high-performance computing runs in a TIBCO GridServer environment: what-if analyses, Monte Carlo simulations, and advanced statistical models for financial risk analysis executed across hundreds of computers. This is a critical capability for financial services companies. And here’s why:
The financial industry uses many types of data analyses that can benefit from tools that help users run computations, interact with the results, posit new questions, and rerun the computations with alternative inputs—all in real time (or at least near real time). These interactive capabilities help users to gain insight and react faster in increasingly competitive markets and industries.
One type of analysis, pricing forecasts, tries to predict the future price of an investment. These forecasts are a key component in market risk measurements (both value-at-risk and expected shortfall) as well as option pricing. Because of the algorithms used to produce them, pricing forecasts are not like most other data types. As a result, there are significant challenges to integrating this data into modern analytics tools.
Why pricing services are hard to scale
Pricing services are much more CPU intensive than most other application services. The Monte Carlo simulations that typically underlie these calculations are usually repeated at least 10,000 times per investment.
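To make the workload concrete, here is a minimal sketch of the kind of Monte Carlo pricing forecast described above, assuming prices follow geometric Brownian motion (GBM). The model, parameter values, and risk-measure conventions are illustrative assumptions, not taken from the article or any TIBCO product:

```python
import math
import random

def simulate_terminal_prices(spot, mu, sigma, horizon_years, n_paths=10_000, seed=7):
    """Simulate terminal prices under GBM:
    S_T = S_0 * exp((mu - sigma^2 / 2) * T + sigma * sqrt(T) * Z)."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma ** 2) * horizon_years
    vol = sigma * math.sqrt(horizon_years)
    return [spot * math.exp(drift + vol * rng.gauss(0.0, 1.0)) for _ in range(n_paths)]

def value_at_risk(prices, spot, confidence=0.99):
    """Loss threshold exceeded in only (1 - confidence) of simulated scenarios."""
    losses = sorted(spot - p for p in prices)
    return losses[int(confidence * (len(losses) - 1))]

def expected_shortfall(prices, spot, confidence=0.99):
    """Average loss across the worst (1 - confidence) tail of scenarios."""
    losses = sorted(spot - p for p in prices)
    cutoff = int(confidence * (len(losses) - 1))
    tail = losses[cutoff:]
    return sum(tail) / len(tail)

# 10,000 simulated 10-day price paths for one hypothetical investment.
prices = simulate_terminal_prices(spot=100.0, mu=0.05, sigma=0.20, horizon_years=10 / 252)
var_99 = value_at_risk(prices, spot=100.0)
es_99 = expected_shortfall(prices, spot=100.0)
```

Even this toy version makes the cost profile obvious: every call to `math.exp` and `rng.gauss` is pure CPU work, repeated 10,000 times per investment, and a real book contains thousands of investments repriced many times a day.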
It turns out that modern analytics tools are not particularly well suited to this type of work. It’s not that they are not powerful tools. It’s that, in general, they are designed to overcome the challenges of large-scale (i.e. big data) data aggregation, where disk and network speeds, not computational power, are the primary limiting factors.
We could substantiate this point further by diving into the design of various analytics platforms and comparing their algorithmic efficiencies and scaling architectures. But a more practical starting point is to implement pricing forecasts as an external service layer, one that can be optimized and scaled independently of the analytics layer.
Even with pricing services running on a separate host, that host will still require a lot of expensive vertical scaling. In addition, there must be enough of these expensive hosts to meet peak-load and fault-tolerance requirements. On top of this, to complete the calculations in a reasonable timeframe, developers must implement complex multithreaded logic that fully exploits all of the available cores. This logic is hard to create and even harder to debug when things go wrong.
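A minimal sketch of the single-host fan-out logic described above, using Python’s standard `concurrent.futures` module. The chunk sizes, seeds, and pricing kernel are illustrative assumptions; real pricing engines add work stealing, failure handling, and careful per-chunk seeding on top of this:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(seed, n_paths, spot=100.0, mu=0.05, sigma=0.20, t=0.25):
    """Run one chunk of GBM terminal-price simulations with its own seed."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    return [spot * math.exp(drift + vol * rng.gauss(0.0, 1.0)) for _ in range(n_paths)]

# Split 10,000 paths into 8 chunks, one per worker; each chunk gets a
# distinct seed so its paths are independent and the run is reproducible.
chunks = [(seed, 1_250) for seed in range(8)]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = pool.map(lambda c: simulate_chunk(*c), chunks)
prices = [p for chunk in results for p in chunk]

# Note: CPython threads share one interpreter lock, so CPU-bound chunks
# like these need ProcessPoolExecutor (or native code) to actually use
# all cores -- one example of the subtlety the paragraph above alludes to.
```

Even this tidy version hides the hard parts: deadlocks, uneven chunk runtimes, and partial failures only surface under load, which is exactly why pushing this complexity into a framework is attractive.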
Modern analytics tools create new scaling and performance challenges for pricing. New challenges often lead to new solutions. But in this case, the root of the problem is one that has been resolved before through a technology that some might have forgotten about: grid computing.
What is grid computing?
Grid computing is a concept that does not get a lot of attention these days, and some readers may have a less-than-precise idea of what it means.
In the early days of cloud computing, the IT community explored many alternative ideas for how to organize and exploit large pools of transient, commoditized hardware. Eventually most of the good ideas were mapped into the now familiar concepts of containers and anything-as-a-service (XaaS). However, a few good ideas that did not map cleanly into these two new paradigms fell through the cracks. One was grid computing.
The basic premise of grid computing is to distribute large computations across a pool of cheap, commoditized hardware that acts collectively like a virtual supercomputer. The central component of this model is a workload manager that distributes individual tasks and aggregates the results.
Grid computing provides the following relevant benefits:
Reduced server costs—Individual tasks can be run on a pool of cheap commoditized servers instead of costly high-end vertically scaled servers. While the total number of servers might increase, the aggregate total cost of ownership (TCO) is typically lower.
Increased server utilization—So that idle services do not consume hardware, grid computing frameworks often use just-in-time deployment. Some frameworks also implement capabilities such as “CPU scavenging,” which lets them harvest slack capacity across the enterprise, down to and including idle desktops. More recent advancements include support for hybrid clouds and cloud bursting, allowing for tighter capacity planning.
Simplified service development—Service developers are only responsible for implementing the individual tasks. These implementations are really just lightweight microservices. The grid computing framework shields the service developer from the complexities of parallel processing and response aggregation.
One important caveat is that the original workload must be one that can easily be broken down into parallel tasks. Not every workload is a good candidate for grid computing. However, pricing services, with their heavy reliance on Monte Carlo simulations, are an ideal fit.
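The division of labor described above can be sketched in a few lines. Everything here is hypothetical: the task function, the partial-sum aggregation, and the in-process loop standing in for the workload manager are illustrative assumptions, not the API of TIBCO GridServer or any real grid framework:

```python
import math
import random

def pricing_task(seed, n_paths, spot=100.0, mu=0.05, sigma=0.20, t=0.25):
    """One independently schedulable unit of Monte Carlo work: simulate
    GBM terminal prices and value a call option with strike 105."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoffs = [max(spot * math.exp(drift + vol * rng.gauss(0.0, 1.0)) - 105.0, 0.0)
               for _ in range(n_paths)]
    # Return a partial sum and count: cheap to ship over a network
    # and trivial for the workload manager to combine.
    return sum(payoffs), n_paths

def aggregate(partials):
    """Combine partial results from all tasks into the final estimate."""
    total, count = (sum(vals) for vals in zip(*partials))
    return total / count

# Stand-in for the workload manager: a real grid would distribute these
# task invocations across many hosts and gather the results for us.
partials = [pricing_task(seed, 1_000) for seed in range(10)]
estimate = aggregate(partials)
```

The service developer writes only `pricing_task` and `aggregate`; scheduling, retry, and host management belong to the framework. Monte Carlo simulations decompose this cleanly because each chunk of paths is independent, which is why pricing services fit the model so well.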
Does grid computing really boost pricing services?
Using grid computing to accelerate pricing services is not just a theory. Grid systems were installed at a number of major banks and trading companies more than a decade ago. At the time, grid computing was a hot new technology, and these deployments received a fair amount of attention. Today, with almost no attention, these systems not only continue to operate; their usage is actually expanding.
Wachovia (since merged into Wells Fargo) was one of the earliest adopters. Upon initial implementation, they were immediately able to increase the number of simulations they could execute by 25x. This in turn allowed them to increase their trade in complex instruments four-fold.
Other early adopters like Credit Suisse and Bank of America saw comparable improvements. Over time, grid computing has been increasingly used in demanding environments, including high frequency trading (HFT) platforms. And, individual implementations have been scaled to tens of thousands of servers and hundreds of thousands of cores.
Adoption of grid computing grew steadily through the 2000s. By the time TIBCO bought the grid computing company DataSynapse in 2009, their product was installed at over 100 major financial institutions. Around 2011, a consensus started to form that PaaS would be the future, and it would eventually displace grid. That consensus was wrong.
PaaS most certainly has taken off. However, the grid footprint has also continued to grow. For a time, this growth did seem limited to the organic expansion of existing implementations. But the increased demand for pricing services, in particular the value-at-risk (VaR) and expected shortfall services mandated by Basel III and now FRTB, has forced organizations to revisit how these services are deployed and scaled. TIBCO has seen renewed interest, with a number of new customers moving to grid in just the last 18 months.
This new adoption wave is supported by the simple fact that grid computing continues to provide a cost-effective means of scaling and accelerating pricing services. This holds true whether the increased demand is driven by new regulatory mandates or by the changing behavior of clients, as with interactive analytics.
Grid computing and TIBCO Spotfire
Continuing advancements in analytics have enormous potential value. Scaling pricing services to support analytics users could have an equally enormous cost. Grid computing is a proven solution for reducing the costs of accelerating and scaling pricing services, and TIBCO Spotfire inherently supports it. Give Spotfire a free trial to see for yourself.