Scope: C/C++ optimizing compilers (ICC, XLC), MKL, MASS, Java, tools, floating-point standards, implementation, and benchmarking; algorithm benchmarking. Anticipated architecture: server side coded in C, data collection coded in Java, front end coded in Java + WordPress.
Multicore Programming: Intel Guide. …
Random Number Generators: Hellekalek Software. …
Java: Oracle v8, …
- Enumerate the Monte Carlo runtime optimization tradeoffs: GPU vs. vectorized FP.
- What are the limits for NIMo optimized vector code on an iMac?
- Produce good estimates of the runtime and approximation error for a daily 5Y simulation/optimization of US bank ALM.
Bibliography v 1.3
These are the main floating-point references used in this financial code optimization work. The bet being fleshed out here is that in 2017 there is an embarrassing abundance of computational power available for numerical optimization and Monte Carlo simulation. For example, all the US bank balance sheets can be simulated forward 1 to 5 years on my iMac on a security-by-security basis, and there are LP codes with 1MM variables that converge in an hour on my iMac for capital plan simulation. The reason the bet seems worth the setup time is that the information asymmetry is massive. The bank with the smartest people and the most money purchased a million-dollar FPGA supercomputer in 2012 to run a Gaussian copula model on their CDX portfolio and still ended up losing 6 billion dollars, producing possibly the greatest technology White Elephant of all time. Fintech folks could step in to run NIMo applications, or the banks may have an epiphany; in any case, if the theory and the code pan out, the window is there to get this running on my iMac before these folks can pull themselves together.
Appel, A. W. (1997). Modern Compiler Implementation in C. Cambridge University Press.
Bindel, D. (2001). CS 279 Annotated Course Bibliography. Retrieved from https://cims.nyu.edu/~dbindel/class/cs279/dsb-bib.pdf
Fourer, R., Gay, D. M., & Kernighan, B. W. (2003). AMPL: A Modeling Language for Mathematical Programming (2nd ed.). Duxbury Press/Thomson.
Goldberg, D. (1991). What Every Computer Scientist Should Know About Floating-Point Arithmetic. ACM Computing Surveys, 23(1), 5–48.
Higham, N. J. (1996). Accuracy and Stability of Numerical Algorithms. Philadelphia, PA: SIAM.
Intel. (2016, June). Intel® 64 and IA-32 Architectures Optimization Reference Manual. Retrieved from https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
Intel. (2017, June). Intel® 64 and IA-32 Architectures Software Developer’s Manual. Retrieved from https://software.intel.com/en-us/articles/intel-sdm
Intel. (2017). MKL Development Reference C Bibliography. Retrieved from Intel Developer Zone: https://software.intel.com/en-us/mkl-developer-reference-c-bibliography
Kahan, W. (1997). IEEE Standard 754 for Binary Floating-Point Arithmetic. Retrieved from Kahan’s UCB homepage: https://people.eecs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
Markstein, P. (2000). IA-64 and Elementary Functions: Speed and Precision. Upper Saddle River, NJ: Prentice Hall.
Muller, J.-M. (1997). Elementary Functions: Algorithms and Implementation. Boston: Birkhäuser.
Muller, J.-M., et al. (2009). Handbook of Floating-Point Arithmetic. Birkhäuser.
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (2007). Numerical Recipes: The Art of Scientific Computing (3rd ed.). Cambridge University Press.