Pink Iguana

More Spectre and Meltdown

GLL, Timing Leaks Everything, 12 Jan, here. Manual loop unrolling is gonna become popular.

Paul Kocher is the lead author on the second of two papers detailing a longstanding class of security vulnerability that was recognized only recently, and an author on the first. Both papers credit his CRYPTO 1996 paper, “Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems,” as originating the broad kind of attack that exploits the vulnerability. Last week, the world learned that timing attacks can jeopardize entire computing systems: smartphones, the Cloud, everything.

Today we give a high-level view of the Meltdown and Spectre security flaws described in these papers.

Both flaws are at the processor level. They are ingrained in the way modern computers operate, and they are not the kind of software vulnerabilities that we have discussed several times before. Both allow attackers to read any memory location that can be mapped to the kernel, which on most computers allows targeting any desired memory contents. Meltdown, at least as we know it, can be prevented by software patches, but apparently no magic bullet can take out Spectre.
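Kocher’s 1996 observation still applies even at the plain software level: any comparison whose running time depends on secret data leaks information. As a minimal illustration (my own, not taken from either paper; the token value is invented for the demo), the early-exit comparison below rejects a wrong guess faster the earlier the first mismatching byte occurs, while `hmac.compare_digest` takes the same time regardless of where the mismatch sits:

```python
import hmac
import timeit

SECRET = b"s3cr3t_token_value_0123456789abcdef"

def early_exit_eq(a, b):
    # Returns as soon as a byte differs, so runtime depends on the
    # length of the common prefix -- that is the timing leak.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_eq(a, b):
    # hmac.compare_digest examines every byte regardless of where
    # the first mismatch occurs, so runtime carries no prefix signal.
    return hmac.compare_digest(a, b)

# A guess wrong only in the last byte takes longer to reject with the
# early-exit comparison than a guess wrong in the first byte.
wrong_early = b"X" + SECRET[1:]
wrong_late = SECRET[:-1] + b"X"
t_early = timeit.timeit(lambda: early_exit_eq(SECRET, wrong_early), number=200_000)
t_late = timeit.timeit(lambda: early_exit_eq(SECRET, wrong_late), number=200_000)
print(f"first-byte mismatch: {t_early:.3f}s, last-byte mismatch: {t_late:.3f}s")
```

An attacker who can repeat the measurement enough times to average out noise recovers the secret one byte at a time; Spectre and Meltdown use the same measure-the-timing idea against the cache rather than a string comparison.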


Spectre and Meltdown

Rich Brueckner, insideHPC, Radio Free HPC Looks at Diverging Chip Architectures in the Wake of Spectre and Meltdown, here. The SiFive Tech Talk video by Paul Kocher, 31 Jan, is not bad.

The direction that we need to go as an industry though is … We need to stop trying to build one processor architecture that is great for playing video games and doing wire transfers. We need to build architectures where there are cores and software stacks designed for security that can be slower, that can be simpler, and we need separate ones that are optimized for performance.

Lucian Armasu, tom’s Hardware, Intel Expands Bug Bounty Program To Include Side-Channel Attacks, 14 Feb, here.

  • Shifting from an invitation-only program to a program that is open to all security researchers, significantly expanding the pool of eligible researchers.
  • Offering a new program focused specifically on side-channel vulnerabilities through Dec. 31, 2018. The award for disclosures under this program is up to $250,000.
  • Raising bounty awards across the board, with awards of up to $100,000 for other areas.


Ryan Whitwam, ExtremeTech, Emergency Windows Update Removes Intel’s Buggy Spectre Patch, 3-Jan, here.

The fallout from the Spectre and Meltdown CPU vulnerabilities continues to send ripples through the technology industry, and Intel is suffering more than most. Its chips were vulnerable to all three variants of these attacks, and its fixes have been heavily criticized for introducing new bugs and doing a poor job of protecting users. Now, Microsoft has issued a rare out-of-cycle patch for Windows systems that removes Intel’s Spectre patch. That has to be embarrassing for Intel.

When we talk about the attack “variants” we’re referring to specific vulnerabilities. Variant 3 is Meltdown, and Variant 1 and Variant 2 are Spectre. Of these three, Variant 2 (CVE-2017-5715) is proving to be quite difficult to pin down for Intel. This Spectre variant is what’s known as a branch target injection, which could allow an attacker to execute arbitrary code on a system. Needless to say, that’s a very bad thing.

When Spectre was originally discovered, researchers feared the only way to mitigate it would be to disable CPUs’ “speculative execution” features, which allow a CPU to work ahead and do calculations that may be needed in the future. This would come with a big performance hit. Google managed to work out an alternative called “Retpoline,” but Intel went its own way.

According to Microsoft, the Intel patch for Spectre Variant 2 has been causing unexpected system glitches, corrupted data, and unexpected reboots. It’s shocking Intel’s patch could be this bad considering it was given advance notice of the vulnerabilities months ago and had plenty of time to develop a fix. Intel also ran into problems with the Linux patches, which Linus Torvalds called “complete and utter garbage” last week. It even made the patches optional on Linux systems in apparent acknowledgment of how shabby they were.



Thalesians’ Data Science & ML Workshop

I’m excited to tell you about the Intensive Full-Day Workshop on Python, Data Science, and Machine Learning that we are organising next week on Thursday, 22 February, at Level39, One Canada Square, Canary Wharf, London E14 5AB.

You can view more information on

or sign up directly through


The content will be delivered by me and my colleagues from Thalesians Ltd. We’ll cover the mathematical foundations of machine learning and practical techniques, and there will be hands-on Jupyter tutorials. This workshop will be particularly useful for algorithmic traders interested in generating alpha, data scientists working in predictive data analytics, and anyone who is interested in doing serious data science and machine learning in Python and other languages.

The cost of attendance is only £299, with a 50% discount available for students and academics.

The programme is as follows:

08:00 – 08:45 – Introduction to Python (optional)

09:00 – 09:30 – Lecture 1: interpretations of probability – classical, frequentist, Bayesian, axiomatic
09:30 – 10:00 – Lecture 2: statistical inference and estimation theory
10:00 – 10:30 – Tutorial 1: statistical inference and estimation theory

10:30 – 11:00 – coffee break

11:00 – 11:30 – Lecture 3: introduction to linear regression: a geometric perspective
11:30 – 12:00 – Lecture 4: interpreting the linear regression, multicollinearity
12:00 – 12:30 – Tutorial 2: linear regression

12:30 – 13:30 – lunch

13:30 – 14:00 – Lecture 5: from statistics to machine learning: bias-variance trade-off, under- and overfitting
14:00 – 14:30 – Lecture 6: cross-validation and shrinkage methods
14:30 – 15:00 – Tutorial 3: demo of cross-validation and shrinkage methods

15:00 – 15:30 – coffee break

15:30 – 16:00 – Lecture 7: optimisation, gradient descent
16:00 – 16:30 – Lecture 8: neural networks and deep learning
16:30 – 17:00 – Tutorial 4: neural networks

We very much hope that you and your colleagues will be able to join us next Thursday!

Best wishes,




Dr. Paul A. Bilokon, Founder and CEO

Thalesians Ltd

Level39, One Canada Square

Canary Wharf

London E14 5AB


Tel.:       +44 (0)20 796 57587



Twitter: @thalesians


VAT Registration Number: 228 5299 80

Company Number: 06843387 (Registered in England and Wales)

Date of Incorporation: 11th March, 2009

Quantum Computing in the NISQ era and beyond

John Preskill, Quantum Computing in the NISQ era and beyond, arXiv, Jan. 2018, here.  Aaronson says read it.

Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today’s classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away — we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.


Scope: Artificial Intelligence, Machine Learning, and Databases as relevant to Statistical Machine Learning and model calibration for retail banking ALM simulation. Includes data sources such as market data, econometric data, accrual portfolio data, balance sheet data, and external sources/packages/tools.

People: DeWitt, Stonebraker, Andrew Ng, Norvig, Andrew Lo, …

Organizations: FDIC, SIFMA, US Fed CCAR, BankRegData.com, Apache Spark, …

Courses: Stanford Machine Learning, Stanford CS221: Artificial Intelligence: Principles and Techniques, …

Open Problems Jan 2018:

  1. Selection of market, econometric, and historical account data for cashflow model calibration.
  2. Calibrating a credit card cashflow model – single account vs. aggregate.
  3. Calibrating a deposit cashflow model – single account vs. aggregate.
  4. Quantify the expected error going from aggregate to single security.

Bib Evaluation Stack:


Bibliography v1.0:

The bank capital allocation process transforms an initial bank balance sheet, composed of a set of ALM securities that retain and produce cashflows, into a new target bank balance sheet. These ALM securities might be as simple as U.S. Treasury bonds, bank deposits, or credit card accounts. The characteristic cashflow quantities, timing, and duration depend on the securities, market data levels, and the behavior of the retail bank customer holding the securities. For example, a UST floater costs par/market value upfront, returns par at a predetermined maturity date, and periodically (semiannually) pays a contractually dictated coupon whose size depends on market data levels. A deposit account may incrementally accept cash deposits and withdrawals starting at the inception of the account. The deposit account may have no contractual maturity, but as long as the account is active the bank must contractually pay interest on the funds held in it. Moreover, U.S. deposit cash is typically protected from default by FDIC insurance up to a predetermined level (250K USD). The credit card account represents a credit line offered to a customer from the inception of the account. The credit card cashflows are the drawdowns against the line of credit and the customer’s paydowns against previous drawdowns. The drawdown capacity of the credit line may be periodically adjusted to account for the interest accrued to the bank on previous drawdowns. There may be no specific maturity date for the credit card account. Under adverse circumstances the customer may be late on scheduled paydowns or even default on the entire cash amount previously drawn on the credit line.
So broadly, the capital allocation process tracks the largely deterministic UST cashflows; the variable and possibly perpetual deposit cashflows; and the variable, non-maturing credit card cashflows with possible default, to transform an inception balance sheet plus new business into a target balance sheet. For a large US bank balance sheet there could be 100MM deposit accounts and another 100MM credit card accounts. That is the data set.

The first Machine Learning problem is calibrating a series of security level cashflow models to calculate the expected cashflows on 1. the inception balance sheet and 2. the new investment business. The second Machine Learning problem is to identify the required market data, econometric data, and customer data to fit the realized cashflow timing with minimized error. We want a current fit as well as a forward expected fit for the purpose of Monte Carlo simulation.
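The UST floater is the easy, largely deterministic end of the spectrum described above, and its cashflow pattern can be sketched in a few lines. This is my own illustrative sketch, not drawn from any of the cited references; the function name, the semiannual layout, and the reference rates are all hypothetical:

```python
def floater_cashflows(par, years, ref_rates, spread=0.0):
    """Semiannual cashflows of a stylized UST floater: pay par at
    inception, receive a coupon each period based on that period's
    annualized reference rate, and receive par plus the final coupon
    at maturity. ref_rates holds one rate per semiannual period."""
    periods = 2 * years
    assert len(ref_rates) == periods
    flows = [-par]  # upfront purchase at par
    for i, rate in enumerate(ref_rates):
        coupon = par * (rate + spread) / 2.0  # semiannual accrual
        flows.append(coupon + (par if i == periods - 1 else 0.0))
    return flows

# Two-year floater on 100 par, reference rate resetting each period.
flows = floater_cashflows(100.0, 2, [0.01, 0.0125, 0.015, 0.0175])
# flows -> [-100.0, 0.5, 0.625, 0.75, 100.875]
```

Deposit and credit card accounts replace the contractual `ref_rates` schedule with customer behavior (withdrawals, drawdowns, paydowns, default), which is exactly what the two calibration problems above have to learn from data.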

Federal Reserve. (2017, Feb.). 2017 Supervisory Scenarios for Annual Stress Tests Required under the Dodd-Frank Act Stress Testing Rules and the Capital Plan Rule. Retrieved from Press Releases:

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

Pearl, J. (2000). Causality: Models, Reasoning, and Inference. UK: Cambridge University Press.

Raschka, S., & Mirjalili, V. (2017). Python Machine Learning (Second ed.). Packt.

Sugiyama, M. (2015). Introduction to Statistical Machine Learning. Morgan Kaufmann.

Witten, I., Frank, E., & Hall, M. (2011). Data Mining: Practical Machine Learning Tools and Techniques (Third ed.). Morgan Kaufmann.

FDIC Credit Card Securitization, Hayre, Inside Yield Book, and Bank Cashflow Simulation

FDIC, Credit Card Securitization Manual, here.

Lakhbir Hayre, Salomon Smith Barney, Guide to Mortgage-Backed and Asset-Backed Securities, here.

Modeling of Mortgage Payments and Defaults, 2006, here.

Leibowitz, Homer, et al., Inside the Yield Book, here.

Cui, Wang, and Ning, Algorithms, Time Series Prediction Method of Bank Cash Flow and Simulation Comparison, 2014, here.

Abstract: In order to improve the accuracy of all kinds of information in the cash business and enhance the linkage between cash inventory forecasting and cash management information in the commercial bank, the first-order moving average prediction method, the second-order moving average prediction method, and the first-order and second-order exponential smoothing prediction methods are adopted to realize time series prediction of bank cash flow. The prediction accuracy of the cash flow time series is improved by optimizing the algorithm parameters. The simulation experiments are carried out on real commercial bank cash flow data, and the predictive performance comparison results show the effectiveness of the proposed methods.
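A rough sketch of the first-order exponential smoothing predictor the abstract mentions, plus the kind of parameter optimization it describes (here, picking the smoothing constant by one-step-ahead squared forecast error). Function names and the candidate grid are my own, not from the paper:

```python
def simple_exp_smoothing(series, alpha):
    """First-order exponential smoothing: s[t] = alpha*x[t] + (1-alpha)*s[t-1].
    Returns the smoothed level after the last observation, which serves
    as the one-step-ahead forecast."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def best_alpha(series, candidates):
    """Pick the alpha minimizing one-step-ahead squared forecast error,
    mirroring the parameter optimization the abstract describes."""
    def sse(alpha):
        s, err = series[0], 0.0
        for x in series[1:]:
            err += (x - s) ** 2  # forecast for this step is the prior level
            s = alpha * x + (1 - alpha) * s
        return err
    return min(candidates, key=sse)

daily_cash = [1.0, 2.0, 3.0, 4.0, 5.0]  # toy cash flow series
alpha = best_alpha(daily_cash, [0.1, 0.5, 0.9])
forecast = simple_exp_smoothing(daily_cash, alpha)
```

On a trending series like the toy one above, the error criterion favors a large alpha, since heavier weighting of recent observations reduces the forecast lag.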


Intel® Architecture Instruction Set Extensions Programming Reference and Vector Math in MKL

Intel® Architecture Instruction Set Extensions Programming Reference, here. Embarrassingly exciting.

This document describes the software programming interfaces of Intel® architecture instruction extensions for future Intel processor generations. The instruction set extensions cover a diverse range of application domains and programming usages. There are 512-bit SIMD vector instruction extensions, instruction set extensions targeting memory protection issues such as buffer overruns, and extensions targeting secure hash algorithm (SHA) accelerations such as SHA1 and SHA256.
The 512-bit SIMD vector extensions, referred to as Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions, deliver a comprehensive set of functionality and higher performance than the AVX and AVX2 families of instructions. AVX and AVX2 are covered in the Intel® 64 and IA-32 Architectures Software Developer’s Manual sets; the reader can refer to them for basic and background information on features referenced in this document.
The base of the 512-bit SIMD instruction extensions is referred to as the Intel® AVX-512 Foundation instructions. They include extensions of the AVX and AVX2 family of SIMD instructions but are encoded using a new encoding scheme with support for 512-bit vector registers, up to 32 vector registers in 64-bit mode, and conditional processing using opmask registers.
Chapters 2 through 6 are devoted to the programming interfaces of the AVX-512 Foundation instruction set, additional 512-bit instruction extensions in the Intel AVX-512 family targeting broad application domains, and instruction set extensions encoded using the EVEX prefix encoding scheme to operate at vector lengths smaller than 512 bits.
Chapter 7 covers additional 512-bit SIMD instruction extensions that target specific application domains: Intel AVX-512 Exponential and Reciprocal instructions for certain transcendental mathematical computations, and Intel AVX-512 Prefetch instructions for specific prefetch operations.
Chapter 8 covers instruction set extensions targeted for SHA acceleration. Chapter 9 describes instruction set extensions that offer software tools the capability to address memory protection issues such as buffer overruns. For an overview and detailed descriptions of hardware-accelerated SHA extensions and Intel® Memory Protection Extensions (Intel® MPX), see the respective chapters.
Chapter 10 covers instructions operating on general purpose registers in future Intel processors. Chapter 11 describes the architecture of Intel® Processor Trace, which allows software to capture data packets with low overhead and to reconstruct detailed control flow information of program execution.

Intel Math Kernel Library 2018, Vector Mathematics – Performance and Accuracy Data, here.