Pink Iguana


Intel 10nm delayed 2019, L1 & L2, and LLVM


Paul Alcorn, 26 Apr 2018, Tom's Hardware, Intel's 10nm Is Broken, Delayed Until 2019, here.

Intel announced its financial results today, and although it posted yet another record quarter, the company unveiled serious production problems with its 10nm process. As a result, Intel announced that it is shipping yet more 14nm iterations this year. They’ll come as Whiskey Lake processors destined for the desktop and Cascade Lake Xeons for the data center.

The 10nm Problems

Overall, Intel had a stellar quarter, but it originally promised that it would deliver the 10nm process back in 2015. After several delays, the company said it would deliver 10nm processors to market in 2017. That target was further refined to the second half of this year.

On the earnings call today, Intel announced that it had delayed high-volume 10nm production to an unspecified time in 2019. Meanwhile, its competitors, like TSMC, are beginning high-volume manufacturing of 7nm alternatives.

Recent semiconductor node naming conventions aren’t based on traditional measurements, so they’re more of a marketing exercise than a science-based metric. That means that TSMC’s 7nm isn’t entirely on par with Intel’s 10nm process. However, continued process node shrinks at other fabs show that other companies are successfully outmaneuvering the production challenges of smaller lithographies.

Intel’s CEO Brian Krzanich repeatedly pressed the point that the company is shipping Cannon Lake in low volume, but the company hasn’t pointed to specific customers or products. And we’ve asked. As we pointed out earlier this year, the delay may seem a minor matter, but Intel has sold processors based on the underlying Skylake microarchitecture since 2015, and it’s been stuck at the 14nm process since 2014. That means Intel is on the fourth (or fifth) iteration of the same process, which has hampered its ability to bring new microarchitectures to market. That doesn’t bode well for a company that regularly claims its process node technology is three years ahead of its competitors.

Krzanich explained that the company “bit off a little too much on this thing” by increasing 10nm density 2.7X over the 14nm node. By comparison, Intel increased density by only 2.4X when it moved to 14nm. Although the difference may be small, Krzanich pointed out that the industry average for density improvements is only 1.5-2X per node transition. Because of the production difficulties with 10nm, Intel has revised its density target back to 2.4X for the transition to the 7nm node. Intel will also lean more on heterogeneous architectures with its EMIB technology (which we covered here).

Joel Hruska, 17 May 2018, ExtremeTech, How L1 and L2 caches work, and why they’re an essential part of modern chips, here.

This chart shows the relationship between an L1 cache with a constant hit rate and an increasingly large L2 cache. Note that the total hit rate goes up sharply as the size of the L2 increases. A larger, slower, cheaper L2 can provide all the benefits of a large L1 — but without the die size and power consumption penalty. Most modern L1 caches have hit rates far above the theoretical 50 percent shown here — Intel and AMD both typically field cache hit rates of 95 percent or higher.
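A quick sketch of the arithmetic behind that chart: the combined hit rate of a two-level hierarchy is the L1 rate plus the fraction of L1 misses that the L2 catches, and the average access time is the corresponding weighted sum of latencies. The C snippet below is a minimal illustration of both formulas; the 95/90 percent hit rates and the 4-, 12-, and 200-cycle latencies are placeholder assumptions for illustration, not measurements of any particular CPU.

#include <stdio.h>

/* Rough model of a two-level cache hierarchy.
 * All rates and latencies are assumed placeholder values,
 * not measurements of any real processor. */
int main(void) {
    double l1_hit  = 0.95;   /* fraction of accesses that hit in L1   */
    double l2_hit  = 0.90;   /* fraction of L1 misses that hit in L2  */
    double l1_lat  = 4.0;    /* L1 access latency, cycles (assumed)   */
    double l2_lat  = 12.0;   /* L2 access latency, cycles (assumed)   */
    double mem_lat = 200.0;  /* main-memory latency, cycles (assumed) */

    /* Combined hit rate: an access is serviced on-chip if it hits
     * in L1, or misses L1 but hits in L2. */
    double combined = l1_hit + (1.0 - l1_hit) * l2_hit;

    /* Average memory access time: every access pays the L1 latency;
     * L1 misses also pay the L2 latency; L2 misses also pay the trip
     * to main memory. */
    double amat = l1_lat
                + (1.0 - l1_hit) * (l2_lat + (1.0 - l2_hit) * mem_lat);

    printf("combined hit rate: %.1f%%\n", combined * 100.0);
    printf("average access time: %.1f cycles\n", amat);
    return 0;
}

With those assumed numbers the combined hit rate works out to 99.5 percent and the average access time to 5.6 cycles, which is the quantitative version of the point above: a cheap, slower L2 soaks up most of what the L1 misses.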

The next important topic is set associativity. Every CPU contains a specific type of RAM called tag RAM. The tag RAM is a record of all the memory locations that can map to any given block of cache. If a cache is fully associative, it means that any block of RAM data can be stored in any block of cache. The advantage of such a system is that the hit rate is high, but the search time is extremely long — the CPU has to look through its entire cache to find out if the data is present before searching main memory.
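To make the tag comparison concrete, the sketch below models a set-associative lookup, the usual middle ground between the direct-mapped and fully associative extremes: the address's index bits select one set, and only the tags stored in that set's ways have to be checked. The 64-byte lines, 64 sets, and 8 ways are assumed illustrative parameters, not the geometry of any specific processor.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative geometry (assumed, not any specific CPU):
 * 64-byte lines, 64 sets, 8 ways => 32 KiB of cache. */
#define LINE_BYTES 64   /* low bits of the address: offset within a line */
#define NUM_SETS   64   /* next bits: which set the line must live in    */
#define NUM_WAYS   8    /* tags compared within the selected set         */

typedef struct {
    bool     valid;
    uint64_t tag;       /* the "tag RAM" entry for this way */
} cache_line;

static cache_line cache[NUM_SETS][NUM_WAYS];

/* Returns true on a hit: the address's tag matches a valid way
 * in the set chosen by its index bits. */
static bool lookup(uint64_t addr) {
    uint64_t index = (addr / LINE_BYTES) % NUM_SETS;  /* set selector */
    uint64_t tag   = (addr / LINE_BYTES) / NUM_SETS;  /* stored tag   */

    for (int way = 0; way < NUM_WAYS; way++) {
        if (cache[index][way].valid && cache[index][way].tag == tag)
            return true;   /* hit: only NUM_WAYS tags were searched */
    }
    return false;          /* miss: fall through to the next level  */
}

int main(void) {
    uint64_t addr  = 0x12345678;                      /* arbitrary example address */
    uint64_t index = (addr / LINE_BYTES) % NUM_SETS;
    uint64_t tag   = (addr / LINE_BYTES) / NUM_SETS;

    cache[index][0].valid = true;   /* pretend the line was filled earlier */
    cache[index][0].tag   = tag;

    printf("hit: %s\n", lookup(addr) ? "yes" : "no");
    return 0;
}

A fully associative cache would have to compare the incoming tag against every line it holds (the slow search described above), while this scheme compares it against only NUM_WAYS entries per lookup.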

LLVM, The LLVM Compiler Infrastructure, here. If you open-sourced Reversing the Biosphere, it would look like this, no?

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Despite its name, LLVM has little to do with traditional virtual machines. The name “LLVM” itself is not an acronym; it is the full name of the project.

LLVM began as a research project at the University of Illinois, with the goal of providing a modern, SSA-based compilation strategy capable of supporting both static and dynamic compilation of arbitrary programming languages. Since then, LLVM has grown to be an umbrella project consisting of a number of subprojects, many of which are being used in production by a wide variety of commercial and open source projects as well as being widely used in academic research. Code in the LLVM project is licensed under the “UIUC” BSD-Style license.
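For a concrete taste of the SSA-based strategy described above, the easiest experiment is to ask clang (LLVM's C/C++ front end) to emit textual IR for a trivial function. The file name add1.c below is just a hypothetical example; the exact IR you get back varies with the clang version and optimization level.

/* add1.c -- a trivial function for inspecting LLVM IR with:
 *
 *   clang -S -emit-llvm -O1 add1.c -o add1.ll
 *
 * The generated add1.ll is textual LLVM IR in SSA form: every value,
 * including the result of the addition, is defined exactly once.
 * Exact output depends on the clang version and optimization level. */
int add1(int a, int b) {
    return a + b;
}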
