
“One chord is fine. Two chords are pushing it. Three chords and you’re into jazz.”

Georg Hager, Georg Hager’s Blog, Blaze library version 1.4 released, here.

Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic based on Smart Expression Templates. Right on time for SC’13, the fifth release of the Blaze library is now available. Blaze 1.4 introduces subvector and submatrix views, which in combination with rows and columns provide great flexibility for accessing vector and matrix data.
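As a rough illustration of the new views (a minimal sketch, assuming the blaze::subvector( v, index, size ) and blaze::submatrix( A, row, column, rows, columns ) free functions from the Blaze documentation and the single <blaze/Math.h> header; exact signatures may differ between releases):

#include <blaze/Math.h>
#include <iostream>

int main()
{
   blaze::DynamicVector<double> x( 100UL, 1.0 );
   blaze::DynamicMatrix<double> A(  50UL, 100UL, 2.0 );
   blaze::DynamicMatrix<double> B(   8UL,  20UL, 3.0 );

   // Scale elements 10..29 of x in place via a subvector view; no data is copied.
   blaze::subvector( x, 10UL, 20UL ) *= 2.0;

   // Add B to the 8x20 block of A that starts at row 4, column 10.
   blaze::submatrix( A, 4UL, 10UL, 8UL, 20UL ) += B;

   // Row and column views address further parts of A in the same spirit.
   std::cout << blaze::row( A, 4UL ) << std::endl;

   return 0;
}

Like rows and columns, the views are lightweight proxies into the original data, so the usual expression-template machinery applies to them unchanged.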

Blaze is unique in that it does not simply rely on compiler magic but utilizes highly tuned libraries whenever feasible. In many cases, Blaze can thus achieve “best possible performance” as defined by suitable performance models.

Check out the details and download the library at the developer site: http://code.google.com/p/blaze-lib/

Google, blaze-lib project, here. Daxpy benchmark, here. Sandy Bridge/AVX, meh – where are the Haswell numbers? The height and slope of the curve for vectors of length less than 100 are particularly interesting for real code. That Blaze MFlop/s slope is pretty attractive for code that cannot assume nice long 1K vectors.
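To get a feel for that short-vector behaviour on your own machine, a rough sweep along the following lines would do (a hypothetical timing harness, not the code behind the published plots; it assumes a C++11 compiler for std::chrono):

#include <blaze/Math.h>
#include <chrono>
#include <cstddef>
#include <cstdio>

int main()
{
   typedef std::chrono::high_resolution_clock clk;

   // Sweep the vector length; the short-vector end is where the slope of the
   // MFlop/s curve matters most.
   for( std::size_t n = 8UL; n <= 4096UL; n *= 2UL )
   {
      blaze::DynamicVector<double> x( n, 1.0 ), y( n, 2.0 );
      const double alpha = 3.0;

      // Keep the total amount of work roughly constant across sizes.
      const std::size_t reps = 100000000UL / n + 1UL;

      const clk::time_point start = clk::now();
      for( std::size_t r = 0UL; r < reps; ++r ) {
         y += alpha * x;   // daxpy: 2*n flops per repetition
      }
      const double secs = std::chrono::duration<double>( clk::now() - start ).count();

      std::printf( "n = %6zu   %10.1f MFlop/s\n",
                   n, 2.0 * double( n ) * double( reps ) / secs / 1.0e6 );
   }

   // A real benchmark would also keep the compiler from eliding the inner loop
   // (e.g. by consuming y afterwards) and would report the best of several runs.
   return 0;
}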

The following selected benchmarks give an impression of the performance of the Blaze library. In these benchmarks, Blaze is compared against a number of third-party libraries, among them MTL and Eigen (as the compiler flags below indicate).

The benchmark system is an Intel Xeon E3-1280 (“Sandy Bridge”) CPU at 3.5 GHz base frequency with 8 MByte of shared L3 cache. Due to the “Turbo Mode” feature, the processor can increase the clock speed by up to 400 MHz, depending on load and temperature. Since only a single core is used in the benchmarks, the CPU ran continuously at 3.9 GHz.

The maximum achievable memory bandwidth (as measured by the STREAM benchmark) is about 18.5 GByte/s. In contrast to other x86 processors, this limit can be hit by a single thread if the code is strongly memory bound. Each core has a theoretical peak performance of eight flops per cycle in double precision (DP) using AVX (“Advanced Vector Extensions”) vector instructions. A single core of the Xeon CPU can execute one AVX add and one AVX multiply operation per cycle. Full in-cache performance can only be achieved with SIMD-vectorized code. This includes loads and stores, which exist in full-width (AVX) vectorized, half-width (SSE) vectorized, and “scalar” variants. A maximum of one 256-bit wide AVX load and one 128-bit wide store can be sustained per cycle. 256-bit wide AVX stores thus have a two-cycle throughput.
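These figures pin down the “best possible performance” mentioned above for a memory-bound kernel such as daxpy ( y = alpha*x + y ). A back-of-the-envelope estimate, assuming x and y stream from main memory and no extra write-allocate traffic for y (it is read before it is written):

peak, single core : 3.9 GHz x 8 flops/cycle = 31.2 GFlop/s
daxpy traffic     : 8 bytes (load x) + 8 bytes (load y) + 8 bytes (store y) per 2 flops = 12 bytes/flop
bandwidth limit   : 18.5 GByte/s / 12 bytes/flop ≈ 1.5 GFlop/s

So for long, out-of-cache vectors the best any daxpy implementation can reach on this machine is roughly 1.5 GFlop/s, about 5% of peak; vectorization only pays off once the operands fit in cache, which is exactly the regime probed by the short-vector end of the plots.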

The GNU g++ 4.6.1 compiler was used with the following compiler flags:

g++ -Wall -Wshadow -Woverloaded-virtual -ansi -pedantic -O3 -mavx -DNDEBUG -DMTL_HAS_BLAS -DEIGEN_USE_BLAS ...


