Note that the variables the banking-book security models must handle are interest rate, credit, and FX risk. These quantities must be modeled at the security (or aggregated-security) level so that the LP/NLP optimization can make the combinatorial choice among all the prospective new-business assets and liabilities. Reinvestment, execution, liquidity, and gap risks are portfolio-composition issues that the optimizer can be programmed to address. The optimization problem for a buy-and-hold banking book can exploit the fact that there is no option to sell the majority of the positions on the books. There are exceptions, of course, such as the Available For Sale portfolios that grew out of the credit crisis, but for the most part the banking-book inventory changes only through implementation of the capital plan plus interest rate, credit, and FX market variations (and counterparty behavior).
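The combinatorial choice among prospective new-business assets and liabilities can be sketched as a small LP. The instrument names, rates, and capacity limits below are invented for illustration, and the formulation (maximize net interest contribution subject to a self-funding constraint) is a toy version of the problem, not the actual model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical candidate new-business instruments (names and numbers invented).
# Positive rate = asset yield earned; negative rate = liability cost paid.
names = ["loan_5y", "bond_10y", "deposit_1y", "wholesale_2y"]
rates = np.array([0.055, 0.045, -0.020, -0.035])

# Maximize NIM contribution rates . x  ->  linprog minimizes, so negate.
c = -rates

# Balance-sheet constraint: new assets must be funded by new liabilities.
# assets (first two) minus liabilities (last two) = 0.
A_eq = np.array([[1.0, 1.0, -1.0, -1.0]])
b_eq = np.array([0.0])

# Capacity limits per instrument (notional, e.g. $bn) -- assumed figures.
bounds = [(0, 10), (0, 8), (0, 12), (0, 6)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
notionals = dict(zip(names, res.x))
nim_contribution = -res.fun
```

The LP fills the highest-yielding asset capacity first and funds it with the cheapest liabilities; realistic versions would add duration-gap, liquidity, and capital constraints, and an NLP or mixed-integer formulation where behavioral options make the problem nonlinear.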
In general, NIM optimization could be applied to any corporate balance sheet. The banking application is interesting because of the timing, and because quantitative modeling in secondary-market trading is so advanced that the cash-flow model error is likely to be reasonably low simply by transferring the focus from the trading book to the banking book. The banking book is highly regulated, so the rules are unlikely to change quickly. Moreover, there are tens of trillions of USD of relatively easy-to-model assets and liabilities held mostly by six banks. The US Fed forces regular disclosure for those six banks and another 44 US Bank Holding Companies, so backing out where assets and liabilities are likely to be held over time is aided by the very skewed distribution. Obviously, calibrating the model to the historical data will be very important for getting good actionable results. That is roughly the thinking behind making the -02 code open source. Net Interest Margin optimization is similar to Paul Garabedian's situation back in the day with supersonic fluid flow: lots of people can look at the code and run it, but only ten people in the world know how to calibrate the model so it fits reality. So the bet is that controlling the data-calibration knobs and the optimization level of the code is sufficient for some time.