
NIMo Target Applications

Demo 1: Run all aggregate USD GSIB balance sheets in a static 5-year monthly simulation with standardized regression-fit models based on FRB historical balance and market/econometric data. For example, see http://www.federalreserve.gov/datadownload/, which offers the Assets and Liabilities of Commercial Banks in the U.S., and see as well the market and scenario data at http://www.federalreserve.gov/bankinforeg/ccar.htm, which gives the Baseline, Adverse, and Severely Adverse configurations in Excel. We fit the regression models to the historical balance and returns data and simulate forward with the implied FRB levels. There is no code optimization for the Reference Server build; we use the gcc compiler with the open source code available at github.com/jsandber/finding_nimo.
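To make the regression-fit step concrete, here is a minimal C++ sketch: fit a single-factor ordinary-least-squares model to historical balance levels, then project forward with scenario factor levels (e.g. the FRB Baseline). The single-factor form and the data are illustrative assumptions, not the production models, which would be multi-factor fits to the FRB series above.

```cpp
#include <numeric>
#include <vector>

// Fit y = a + b*x by ordinary least squares to historical (factor, balance) pairs.
struct Fit { double a, b; };

Fit ols(const std::vector<double>& x, const std::vector<double>& y) {
    double n = static_cast<double>(x.size());
    double sx = std::accumulate(x.begin(), x.end(), 0.0);
    double sy = std::accumulate(y.begin(), y.end(), 0.0);
    double sxx = 0.0, sxy = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) { sxx += x[i] * x[i]; sxy += x[i] * y[i]; }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double a = (sy - b * sx) / n;
    return {a, b};
}

// Project balances forward with the fitted model over a scenario factor path
// (e.g. the 60 monthly implied FRB levels of the Baseline scenario).
std::vector<double> project(const Fit& f, const std::vector<double>& scenario) {
    std::vector<double> out;
    for (double x : scenario) out.push_back(f.a + f.b * x);
    return out;
}
```

The same fit/project split is what lets the scenario configs (Baseline, Adverse, Severely Adverse) be swapped without refitting.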

We would like to do the first demo for the FRB with the Reference Server deployed on AWS or Cray. We may not bother with a multi-million security count for the FRB demo; it is enough to show the data flow and provide benchmarks/timings that show how it will scale to real accrual portfolios.

Demo 2: Pick a multi-factor stochastic market model that simulates the factors used in the standard regression-fit models. For the purpose of the demo we control the selection of the factors used to fit the regressions. In one way this is easy, because whatever we pick will be deemed inadequate by each of the banks as well as the FRB. But viewed as a computational prototype, the performance will be reasonably representative of whatever models they deem appropriate. The GSIBs do not have to share their models with us; they can take the gcc open source code and make it their own simulator. We will use the multi-factor market model to generate paths for a 5Y horizon Monte Carlo simulation of the aggregated FRB GSIB balance sheets. We would like to work with the FRB on devising a standard set of balance and rate models as well as a generic stochastic market model for simulation. To the GSIBs and the FRB we will offer code optimization services to speed their specific code on a specific piece of x86 hardware (they pick the hardware). The optimized code is not open source and is the property of the respective Firms.
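A minimal sketch of the kind of multi-factor path generation involved, assuming two mean-reverting (Vasicek-style) factors correlated through a Cholesky factor of a 2x2 correlation matrix. The dynamics and parameters here are placeholders; as noted above, the point of the demo is the computational shape, not the model choice.

```cpp
#include <cmath>
#include <random>
#include <vector>

// One mean-reverting factor: dx = kappa*(theta - x)*dt + sigma*sqrt(dt)*dW.
struct Factor { double kappa, theta, sigma, x0; };

// Generate one monthly path per factor out to the horizon; the two Brownian
// shocks are correlated with coefficient rho via a 2x2 Cholesky factor.
std::vector<std::vector<double>> simulate_paths(
    const Factor& f1, const Factor& f2, double rho,
    int months, std::mt19937& rng)
{
    std::normal_distribution<double> z(0.0, 1.0);
    double dt = 1.0 / 12.0, sdt = std::sqrt(dt);
    double c = std::sqrt(1.0 - rho * rho);  // Cholesky off-diagonal term
    std::vector<double> p1{f1.x0}, p2{f2.x0};
    for (int m = 0; m < months; ++m) {
        double z1 = z(rng);
        double z2 = rho * z1 + c * z(rng);  // correlated second shock
        p1.push_back(p1.back() + f1.kappa * (f1.theta - p1.back()) * dt + f1.sigma * sdt * z1);
        p2.push_back(p2.back() + f2.kappa * (f2.theta - p2.back()) * dt + f2.sigma * sdt * z2);
    }
    return {p1, p2};
}
```

Each call yields one path; the Monte Carlo balance sheet simulation would repeat this per path and feed the factor levels into the regression-fit balance models.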

Demo 3: Run new-business NLP optimization on top of stochastic simulations of the runoff portfolio and the prospective new business portfolio. This is typically where it gets harder for people to follow along; you need to find GSIB Treasury folks with quantitative backgrounds. The FRB probably is not going to fall over themselves with excitement initially over running NLP new-business optimization on top of the stochastic programming balance sheet simulation; it is not currently something they are mandated to monitor. Any standard set of models will have a residual fit error in forecasting that works against the NLP optimizer. You will see this when you backtest the optimizer and attempt to explain the differences between realized and projected values. On the other hand, once the GSIBs see there are billions in revenue on the table if they control the operational and market expectations modeling, they will warm up to the idea (read: the balance and rate forecasting error will get smaller as they develop better models, like MBS prepayment). Same open source deal as before: Firms can take the gcc C++ code directly from github and use it however they want internally. We provide the option of code optimization and commodity x86 hardware selection to get one, or possibly two, orders of magnitude better performance over the Reference Server running on a Haswell. The optimized code similarly remains the property of the sponsoring Firms.

Three basic analytics layers

This is a three-layer cake consisting of the following:

  1. Static Balance Sheet simulation
  2. Stochastic Programming Balance Sheet simulation
  3. NLP Capital Planning optimization

There are several more analytics applications we could build on top of these three analytics layers. For the purpose of simplifying the code development we just focus on these layers.

Static balance sheet simulation means the user supplies their accrual portfolio mapped to their balance sheet accounts and to a set of security/product models known to the Reference Server. The user also supplies the complete set of market and econometric levels out to the simulation horizon. The Reference Server computes the simulated balance sheet evolution to the simulation horizon using the product and rate models. The output is the balance sheet balances and returns for each period out to the simulation horizon.
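A minimal sketch of the static evolution for a single account, assuming a fixed monthly runoff rate and a user-supplied deterministic annual rate path. Both are hypothetical placeholders for the real product and rate models; the point is the deterministic period-by-period loop.

```cpp
#include <vector>

// One simulated period's output for a single balance sheet account.
struct Period { double balance, interest; };

// Evolve one account: the balance runs off at a fixed monthly rate while
// accruing interest at the user-supplied deterministic rate path.
std::vector<Period> simulate_account(double b0, double monthly_runoff,
                                     const std::vector<double>& annual_rate) {
    std::vector<Period> out;
    double b = b0;
    for (double r : annual_rate) {
        double interest = b * r / 12.0;  // simple monthly accrual on opening balance
        b *= (1.0 - monthly_runoff);     // deterministic runoff to closing balance
        out.push_back({b, interest});
    }
    return out;
}
```

A 5-year monthly simulation would call this with a 60-element rate path per account and assemble the per-account outputs into the balance sheet matrix.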

Stochastic Programming balance sheet simulation means the user supplies the same inputs as the above static simulation. The Reference Server computes a set of paths consisting of market and econometric levels out to the simulation horizon. The number of paths generated is determined by theory and by previous runs demonstrating the Monte Carlo algorithm's convergence behavior (the rate of MC convergence). For each path the Reference Server simulates the balance sheet. For each element of the balance sheet matrix the Reference Server maintains a running sum which, at the completion of all the paths, becomes an average by dividing the sum by the total number of paths. The output is the average balances and returns for each period out to the simulation horizon.
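The running-sum-then-average step can be sketched directly. Here each inner vector is one path's per-period balances for a single balance sheet element; the names are illustrative.

```cpp
#include <vector>

// Accumulate per-period balances across Monte Carlo paths, then divide by
// the path count to produce the expected balance per period.
std::vector<double> mc_average(const std::vector<std::vector<double>>& paths) {
    std::vector<double> sum(paths[0].size(), 0.0);
    for (const auto& p : paths)
        for (std::size_t t = 0; t < p.size(); ++t)
            sum[t] += p[t];                      // running sum per period
    for (double& s : sum) s /= paths.size();     // sum -> average at completion
    return sum;
}
```

Keeping only the running sum (not the individual paths) is what keeps memory flat as the path count grows.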

NLP Capital Planning optimization means the user supplies the same inputs as in the Stochastic Programming case plus: a portfolio of potential new investments available out to the simulation horizon, an operational model of the probability of a branch successfully making a new investment of a given size at a given time, and a capital plan denoting when and how much of each new investment is expected to be added to the accrual portfolio. The Reference Server solves the NLP for the maximum weighted annual Net Interest Margin. The output is a schedule of new investment positions funded by the expected excess capital from the stochastic simulation of the runoff portfolio and of the new investments made in each simulation period. This new investment schedule maximizes the objective function containing the expected Net Interest Margin.
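As a toy stand-in for the NLP solve (emphatically not the actual optimizer), a greedy allocation of one period's expected excess capital to the highest-margin available new investments illustrates the shape of the objective. A real capital plan adds timing across periods, the operational probability-of-success model, and constraint structure that a single greedy pass cannot capture.

```cpp
#include <algorithm>
#include <vector>

// A candidate new investment: its expected net interest margin and max size.
struct Investment { double margin, size; };

// Greedy placeholder: fill the period's excess capital with the
// highest-margin investments first, returning the NIM contribution.
double plan_nim(std::vector<Investment> inv, double capital) {
    std::sort(inv.begin(), inv.end(),
              [](const Investment& a, const Investment& b) { return a.margin > b.margin; });
    double nim = 0.0;
    for (const auto& i : inv) {
        double take = std::min(i.size, capital);
        nim += take * i.margin;
        capital -= take;
        if (capital <= 0.0) break;
    }
    return nim;
}
```

The genuine NLP formulation replaces this with a constrained maximization over the full schedule of periods, which is where the solver and the residual forecasting error discussed in Demo 3 interact.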

Target user interface

We understand at the outset that there is a vanishingly small probability that GSIBs will line up for the standard generic product and rate models of their Accrual Portfolio inventory. So the interface takes as much state as possible from the user. Thereby we minimize the state kept by the Reference Server, and the user simply decodes the meaning of the simulation. The user maintains the information required to make sense of the simulation results in their environment. By being generic, the Reference Server can be used by multiple Firms and Agencies.

We don’t interpolate market data; the user does. We don’t know what the market data means; we just match the user-supplied variable names to balance and rate model expressions. We do not know the details of the Accrual Portfolio securities; the user maps them to our existing security models (or provides their own models to be integrated into the Reference Server). We do not know the legal entity structure, the geographical structure, or even the logical structure of the securities on the balance sheet; the user provides all the mapping information. Accrual positions in securities are mapped to a balance sheet account id 1, …, k. The user can aggregate and slice-and-dice as appropriate.
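The mapping contract can be sketched as a plain name-to-account-id lookup supplied entirely by the user; the position names below are made up for illustration. The server joins on the ids and never interprets the names.

```cpp
#include <string>
#include <unordered_map>

// User-supplied mapping state: position name -> balance sheet account id (1..k).
// The Reference Server stores none of the meaning behind these names.
std::unordered_map<std::string, int> account_map = {
    {"retail_mortgage_pool_A", 1},  // hypothetical position names
    {"commercial_loan_book",   2},
};

// Resolve a position to its account id; -1 signals an unmapped position,
// which the server would reject back to the user rather than guess.
int account_of(const std::string& name) {
    auto it = account_map.find(name);
    return it == account_map.end() ? -1 : it->second;
}
```

Because the mapping lives with the user, the same Reference Server binary serves multiple Firms and Agencies without carrying any Firm-specific state.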

GSIB performance capacity in the simulation space varies from Firm to Firm. Some run HPC groups that tinker with FPGA supercomputers and variable-precision floating-point computations. Others run simulations through Java BigDecimal software emulation and focus on end-to-end programming language uniformity. The Reference Server integrates into the Firm’s desired development and production environment while maintaining competitive performance levels for x86 commodity floating-point simulation.

There are certain types of computations where the Reference Server (even when optimized) cannot really do anything to improve performance. For example, if a Firm mandates that a serial quant library be used in the Reference Server, that is generally enough to kill any optimization possibilities, unless you rewrite the library. Typical serial quant libraries generate code that runs x86 microprocessors at 5% efficiency or less. Oddly, the processor performance monitors indicate the microprocessor is 100% utilized, which is true; it is just 100% utilized accomplishing things not directly related to finishing the computation. We have similar caveats for external servers. If your prepayment model for your AFS/HTM MBS position balances comes from an external server running serial code, there is not much we can do performance-wise. You can try to parallelize, but of course Amdahl’s Law will keep your performance quite far from competitive. The Reference Server interface provides a means for supplying externally computed balances directly, so you can compute your AFS/HTM prepayment model balances and input them into the simulation. The Reference Server will not interface with Yield Book, for example; the user does. We don’t call a Firm’s Quant Library for input; the user supplies all the inputs.
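The externally-computed-balance interface might look like a simple override keyed by (account, period): if the user supplied a balance for that slot (say, from a vendor prepayment system), it wins; otherwise the internal model value is used. The keying scheme here is an assumption for illustration.

```cpp
#include <map>
#include <utility>

// Externally computed balances (e.g. AFS/HTM MBS prepayment output from an
// external system) supplied by the user, keyed by (account id, period index).
double resolve_balance(const std::map<std::pair<int, int>, double>& external,
                       int account, int period, double model_value) {
    auto it = external.find({account, period});
    return it == external.end() ? model_value : it->second;  // override if supplied
}
```

This keeps the slow external computation outside the simulation loop: the user runs it on their own schedule and hands the Reference Server finished numbers.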





