
LLVM 8 shines on WebAssembly, machine learning workloads

The LLVM project has officially released LLVM 8. LLVM is the compiler framework that powers the Clang C/C++ compiler, as well as the compilers for languages such as Rust and Swift.

This latest release moves WebAssembly code generation out of LLVM’s experimental status and enables it by default. Compilers have already been provisionally using LLVM’s WebAssembly code generation tools; Rust, for instance, can compile to WebAssembly, although deploying the result still takes some extra work.
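For a sense of what that looks like in practice, here is a minimal sketch of a Rust function exported for WebAssembly. The build commands in the comments assume the standard `wasm32-unknown-unknown` target installed via rustup; the same code also compiles and runs natively.

```rust
// Minimal sketch: a Rust function exported for WebAssembly.
// To build for the browser (assumes the target is installed):
//   rustup target add wasm32-unknown-unknown
//   cargo build --release --target wasm32-unknown-unknown
// `#[no_mangle]` and `extern "C"` keep the symbol name stable
// so the function is callable from JavaScript host code.

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // The same code runs natively, handy for quick checks.
    println!("{}", add(2, 3)); // prints 5
}
```

From JavaScript, the compiled module’s `add` export would then be invoked through the browser’s `WebAssembly` instantiation APIs.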

With this change, compilers are being given the green light to use LLVM for WebAssembly in production. WebAssembly itself is still in the early stages, but this marks another milestone toward using it to freely compile code from languages other than JavaScript to run in the browser.

Also new to LLVM 8 is support for compiling to Intel’s Cascade Lake processors, enabled by way of a command-line flag. It’s essentially the same as the existing support for Intel’s Skylake processors, but adds support for emitting Vector Neural Network Instructions (VNNI), part of the AVX-512 instruction set available in Intel Xeon Phi and Xeon Scalable processors. VNNI, as the name implies, is intended to boost the speed of deep-learning workloads on Intel systems in circumstances where GPU acceleration isn’t available.
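The kind of inner loop VNNI targets is an integer dot product: 8-bit multiplies accumulated into 32-bit sums. The sketch below is plain, portable Rust; the comment about enabling Cascade Lake code generation via `target-cpu` is an assumption about how a Rust toolchain built on LLVM 8 would surface the new CPU name, and the code behaves identically either way.

```rust
// Sketch of the integer dot product VNNI accelerates:
// u8 × i8 products accumulated into i32 sums. With a toolchain
// built on LLVM 8, RUSTFLAGS="-C target-cpu=cascadelake" may let
// the compiler vectorize this loop using VNNI instructions
// (an assumption about the toolchain; the code is portable).
fn dot_u8_i8(a: &[u8], b: &[i8]) -> i32 {
    a.iter()
        .zip(b)
        .map(|(&x, &y)| i32::from(x) * i32::from(y))
        .sum()
}

fn main() {
    let a = [1u8, 2, 3, 4];
    let b = [10i8, -10, 10, -10];
    // 10 - 20 + 30 - 40 = -20
    println!("{}", dot_u8_i8(&a, &b)); // prints -20
}
```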

LLVM code generation isn’t limited to CPUs. LLVM 8 also improves code generation for the AMDGPU back-end, which allows LLVM code to be generated for the open source Radeon graphics stack. New AMD GPUs, like the Vega series, will benefit most from the AMDGPU support.

Other changes include improved code generation for IBM Power processor targets, particularly Power9; support for LLVM’s just-in-time (JIT) compiler on MIPS/MIPS64 processors; cache prefetching guided by information gleaned from software profiles; and improved support for OpenCL and OpenMP 5.0 in the Clang C/C++ compiler project.
