Tian and Benkrid, Fixed-Point Arithmetic Error Estimation in Monte Carlo Simulations, here. The proposal is that the variance of Monte Carlo convergence dominates the precision error introduced by dropping below double precision. The argument is offered to justify FPGA simulation, but it could just as well justify using EP (Enhanced Performance) accuracy mode in MKL/VML for Black-Scholes: our benchmark shows roughly 50x better performance with EP/AVX/VML versus icc -O2.
Abstract—As Field Programmable Gate Arrays (FPGAs) get faster and denser, the scope of their applications is getting wider. High performance computing applications, for instance, are an example of such application expansion driven by FPGAs’ increasing computational power coupled with their relatively low power consumption compared to state-of-the-art microprocessor technology. However, one major hurdle facing FPGAs in the high performance computing arena, in addition to their low level programming model, is their low efficiency in implementing double precision floating-point arithmetic, which is often considered essential in many high performance applications. This paper attempts to dispel the latter perceived limitation in the area of Monte-Carlo based stochastic process simulation through a rigorous estimation of fixed-point arithmetic error in a hardware implementation of the Monte-Carlo based European option pricing model. Representations of the mean and variance of quantisation and rounding-off errors due to fixed-point arithmetic show this error to be negligible when compared to the variance of the Monte-Carlo simulation method itself. Not only does this allow us to avoid full double precision arithmetic implementation, but also to minimise the fixed-point wordlength used without practically affecting the precision of the final result. This in turn results in considerable area savings and throughput increases.
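The core claim — that Monte Carlo sampling variance swamps reduced-precision arithmetic error — is easy to check numerically. A minimal sketch in Python/NumPy (my own illustration, not code from the paper; the parameters are arbitrary): price a European call at float64 and float32, and compare the precision gap against the Monte Carlo standard error.

```python
import numpy as np

# Illustrative parameters (hypothetical, not from the paper):
# spot, strike, risk-free rate, volatility, maturity.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 100_000

rng = np.random.default_rng(42)
z = rng.standard_normal(n)  # common random numbers for both precisions

def mc_price(dtype):
    """Monte Carlo European call price under GBM at the given dtype.

    Returns the discounted mean payoff and its standard error.
    """
    zc = z.astype(dtype)
    st = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * zc)
    payoff = np.exp(-r * T) * np.maximum(st - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n)

p64, se64 = mc_price(np.float64)
p32, _ = mc_price(np.float32)

print(f"float64 price: {p64:.6f} +/- {se64:.6f} (MC std err)")
print(f"float32 price: {p32:.6f}")
print(f"precision gap {abs(p64 - p32):.2e}  vs  MC std err {se64:.2e}")
```

On a run like this the float64/float32 gap comes out orders of magnitude below the Monte Carlo standard error, which is the same qualitative result the paper derives analytically for fixed-point wordlengths.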