Buying an Apt. and Selling a House

Busy with other stuff, but there are a ton of things to review. Starting with Patterson, who went to Google: In-Datacenter Performance Analysis of a Tensor Processing Unit looks interesting. Things should settle down in a month or so, and this will get moving.



Gavin Finch and Liam Vaughan, Bloomberg Markets, The Man Who Invented the World’s Most Important Number, Nov. 2016, here.

The product of a big survey and a little math, Libor helps set interest rates worldwide, affecting the price of more than $300 trillion in mortgages, loans, and derivatives. Despite its ubiquity, few outside the world of finance had heard of Libor until 2012, when regulators found that a dozen banks—Barclays, UBS, and Citigroup among them—had colluded to manipulate the benchmark interest rate and fined them $9 billion. In this excerpt from The Fix (Wiley, December 2016), a forthcoming book about the Libor scandal, Bloomberg reporters Gavin Finch and Liam Vaughan trace the roots of this mysterious number. 

Rachel Courtland, IEEE Spectrum, Leading Chipmakers Eye EUV Lithography to Save Moore’s Law, Oct 2016, here.

GlobalFoundries and IBM are not the only ones that have poured money into EUV. In 2012, Intel, Samsung, and Taiwan Semiconductor Manufacturing Co. (TSMC) committed a total of €1.38 billion in R&D funding to ASML for next-generation lithography research (the same deal garnered ASML €3.85 billion for nonvoting shares in the company). ASML’s Meiling estimates about 4,000 people work on EUV for the company, a figure that does not include the researchers at leading chipmakers and research institutions with EUV programs of their own.

The reason for all this investment is not only that EUV is hard but that chipmakers are coming around to the idea that, soon, they may not be able to move forward without it. If you ask Anthony Yen, who leads EUV lithography development at TSMC, how critical EUV is to Moore’s Law, he won’t beat around the bush: “Totally critical. 100 percent critical. Very, very critical.” TSMC expects to adopt EUV in 2020, when the company aims to begin producing chips on its 5-nm manufacturing line.

Joel Hruska, Extreme Tech, Why one small company thinks it has the key to extending Moore’s Law, Oct. 2016, here.

One of the major differences between the various merchant foundries (GlobalFoundries, TSMC, Samsung, SMIC, UMC) and Intel is the process nodes they focus on. Intel has a fast-moving model that emphasizes moving to new nodes fairly quickly. It depends on the latest nodes for the bulk of its revenue, and it transitions older plants to newer nodes on a fairly regular basis. This model has taken a beating over the last few years, due to 14nm difficulties and the cancellation of 450mm wafers, but it’s still the basic way that Intel does business. TSMC and its merchant foundry competitors, however, tend to derive significant amounts of revenue from older hardware.

Matt Porter, PC Gamer, AMD confirms Zen processors launching early 2017, Oct. 2016, here.

AMD has fallen behind Intel in the processor space in recent years; however, in testing, its Zen architecture has been showing drastic improvement. So far we know AMD will have an 8-core, 16-thread chip on sale. Tested against an Intel Broadwell-E processor with the same specs, both clocked to 3.0GHz, the AMD product came out slightly on top. Of course, we’ll have to see how they perform in the real world before jumping to any conclusions. We should expect the clock speeds to be higher on the hardware when it eventually ships, though.

Matt Levine, Bloomberg, Boring Banks and Silly CDs, here.  Goldman and Morgan Stanley do Deposits, here.

Here is a big article about how boring banking has gotten, how much it is just a business of dumb pipes, and how, in the words of Sir Philip Hampton, formerly of Royal Bank of Scotland, “banks look increasingly like competitive utilities.”

Jarred Walton, Maximum PC, Intel Core i7-6950X Review, Sep. 2016, here.

But if money is no object and you simply have to have the best, the 6950X is currently as good as it gets. It’s also the first time Intel has veered away from the $1,000 price point for one of their Extreme Edition launches, which go all the way back to 2003’s Pentium 4 Extreme Edition. You could try and argue that the additional CPU cores and L3 cache justify the price, but I think it’s pretty clear that lack of competition from AMD—not to mention an unwillingness to cannibalize sales of more expensive Xeon parts, which cost more money for slightly lower clocks in most cases (e.g., the Xeon E5-4627 v4)—is the real reason. Here’s hoping AMD’s Zen processor line can provide some much-needed competition in 2017.

George Leopold, HPC Wire, AWS Embraces FPGAs, ‘Elastic’ GPUs, here.

A new instance type rolled out this week by Amazon Web Services is based on customizable field programmable gate arrays that promise to strike a balance between performance and cost as emerging workloads create requirements often unmet by general-purpose processors.

The public cloud giant (NASDAQ: AMZN) announced the new “F1” instance along with a separate GPU service on Wednesday (Nov. 30) during its annual partner event. The new FPGA instance is available now as a developer preview on the U.S. East region of the AWS cloud computing service and is scheduled to be generally available by the end of the year. Pricing was not announced.

Kristen Manke, US DOE, Supercomputers’ Pit Crews, here.

Like the world’s best pit crews, groups of highly trained scientists make sure everything works together at the supercomputers available through the Department of Energy’s Office of Science. Like their NASCAR counterparts, these crews get the best possible performance from the custom-built supercomputers. Their machines are fast and powerful, but they aren’t intuitive. Add to that the fact that researchers using the computers are racing against the clock, too – to get answers before their time on the machine expires.

Unwieldy data files, slow programs, and hardware crashes all eat into that time. For example, working with data files that are 10 times larger than scientists have used before can slow the code to a crawl or cause a crash. The team works with scientists long before their time starts on the supercomputer to optimize codes and improve workflows.

Nichole Hemsoth, The Next Platform, Intel Declares War on GPUs at Disputed HPC, AI Border, Nov. 2016, here.

But outside of a few announcements at this year’s SC related to beefed-up SKUs for high performance computing and Skylake plans, the real emphasis back in Portland seemed to ring far fainter for HPC and much louder for the newest server tech darlings, deep learning and machine learning. Far from the HPC crowd last week was Intel’s AI Day, an event in San Francisco chock full of announcements on both the hardware and software fronts during a week that has historically emphasized Intel’s evolving efforts in supercomputing.

As we have noted before, there is a great deal of overlap between these two segments, so it is not fair to suggest that Intel is ditching one community for the other. In fact, it is quite the opposite—or more specifically, these areas are merging to a greater degree (and far faster) than most could have anticipated. At first, and this was all just a year or so ago, deep learning communities were making HPC sexy again because model training platforms relied on GPU computing and other tricks honed by the supercomputing set. In just six months, however, that story evolved as it became clear HPC centers were thinking about ways their compute-centric simulations could get a machine learning boost, even on the same system (versus the web scale deep learning centers that require separate training and inference clusters). In short, investments in HPC are tied to AI platforms and vice versa, with just enough general purpose Xeon and specialized Knights Landing/Knights Hill thrown in to keep the CPU-only supers humming along.

Morgan Stanley Blue Papers, Banks Need to Optimize Their Resources, Jan. 2015, here.

Strategic investment in technology represents another potential area of optimization. Many banks project an image of cutting-edge technology to their clients, but, in reality, struggle with old and complex legacy systems that are often difficult and expensive to maintain, and even harder to replace because of potential operational disruptions and regulatory hurdles.

Updating and optimizing technology can yield operational efficiencies and reduce risks by improving the speed and reliability of internal operations. An evaluation of in-house IT processing and support may also determine that some technology needs can be better met by external specialists. Spinning some of the technology support and infrastructure costs out of the banks could release billions of dollars in new value.

To be sure, the remaining super-global banks seeking to partner with clients in every region will have the advantage of scale that could serve them well; at the same time, they risk getting caught flatfooted by disruptive change. Midsize banks, on the other hand, will need to reshape themselves and specialize along regional or product lines, aligning their core infrastructure strengths and franchise objectives with internal wealth and corporate clients, while letting go of clients who promise little in return.

Ginni Rometty, Wall Street Journal, How Blockchain Will Change Your Life, Nov. 2016, here.

Financial institutions are becoming early adopters: The World Economic Forum estimates that 80% of banks are working on blockchain projects. CLS, the world’s largest multicurrency cash-settlement system, is implementing blockchain in the foreign-exchange market. The Bank of Tokyo-Mitsubishi UFJ has developed a smart-contract prototype for multiparty business transactions. China UnionPay is using blockchain for loyalty programs that operate across multiple banks.


Federal Funds

Federal Funds: Final Settlement,  here.

Carpenter, The Repo Market, here.

NY Fed Quarterly Trends, here.

Coleman, 2002, The Evolution of the Federal Reserve’s Intraday Credit Policies, here. Guide to the Federal Reserve’s Payment System Risk Policy on Intraday Credit, 2012, here. Federal Reserve Policy on Payment System Risk, 2016, here.





Machine Learning

Foundations of Machine Learning, Mohri, 2012, with links to course material.

Data Mining: Practical Machine Learning Tools and Techniques, Witten and Frank, 2011.

Stanford CS229 – Machine Learning, Andrew Ng.

High Performance Linear Algebra, Sam Halliday, Dec 2014. github 

References – Stanford

Scalability! But at what COST?

Have Abstraction and Eat Performance, Too: Optimized Heterogeneous Computing with Parallel Patterns

Loop-Level Parallelism and OpenMP – Lecture 14.

Pervasive Parallelism Lab 

Open MPI: Open Source High Performance Computing

PBC References


Avinash Sodani, Intel KNL

Per Hammarlund, Intel; Joel Emer – look at his MIT website for more current papers.



Brunnermeier more likely Mulvey

Shapiro Stochastic Programming

The Kelly Capital Growth Investment Criterion

Berlekamp review of Fortune’s Formula

Oracle white paper

Robert E. Bixby, Gurobi.


Bendheim Center for Finance

Bendheim Center for Finance, here.

Established in 1997 at the initiative of Ben Bernanke, former Chair of Economics

  • Interdisciplinary Center for Excellence in research on finance and economics with special focus on
    • Financial frictions, behavioral finance, interaction with monetary policy
    • Quantitative methods derived from economics, operations research, psychology…
    • Policy and real world impact