Advanced Vector Extensions
Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD. They were proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor, shipping in Q1 2011, and later by AMD with the Bulldozer processor, shipping in Q3 2011. AVX provides new features, new instructions, and a new coding scheme.
AVX2 expands most integer instructions to 256 bits and introduces fused multiply-accumulate operations. It was first supported by Intel with the Haswell processor, which shipped in 2013.
AVX-512 expands AVX to 512-bit support using a new EVEX prefix encoding proposed by Intel in July 2013 and first supported by Intel with the Knights Landing processor, which shipped in 2016.
Advanced Vector Extensions
AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (SIMD). Each YMM register can hold and do simultaneous operations on:
- eight 32-bit single-precision floating-point numbers, or
- four 64-bit double-precision floating-point numbers.
Bits 511–256 | 255–128 | 127–0 |
ZMM0 | YMM0 | XMM0 |
ZMM1 | YMM1 | XMM1 |
ZMM2 | YMM2 | XMM2 |
ZMM3 | YMM3 | XMM3 |
ZMM4 | YMM4 | XMM4 |
ZMM5 | YMM5 | XMM5 |
ZMM6 | YMM6 | XMM6 |
ZMM7 | YMM7 | XMM7 |
ZMM8 | YMM8 | XMM8 |
ZMM9 | YMM9 | XMM9 |
ZMM10 | YMM10 | XMM10 |
ZMM11 | YMM11 | XMM11 |
ZMM12 | YMM12 | XMM12 |
ZMM13 | YMM13 | XMM13 |
ZMM14 | YMM14 | XMM14 |
ZMM15 | YMM15 | XMM15 |
ZMM16 | YMM16 | XMM16 |
ZMM17 | YMM17 | XMM17 |
ZMM18 | YMM18 | XMM18 |
ZMM19 | YMM19 | XMM19 |
ZMM20 | YMM20 | XMM20 |
ZMM21 | YMM21 | XMM21 |
ZMM22 | YMM22 | XMM22 |
ZMM23 | YMM23 | XMM23 |
ZMM24 | YMM24 | XMM24 |
ZMM25 | YMM25 | XMM25 |
ZMM26 | YMM26 | XMM26 |
ZMM27 | YMM27 | XMM27 |
ZMM28 | YMM28 | XMM28 |
ZMM29 | YMM29 | XMM29 |
ZMM30 | YMM30 | XMM30 |
ZMM31 | YMM31 | XMM31 |
AVX itself provides YMM0–YMM15; the ZMM registers and registers 16–31 are only available with AVX-512.
AVX introduces a three-operand SIMD instruction format, where the destination register is distinct from the two source operands. For example, an SSE instruction using the conventional two-operand form a = a + b can now use a non-destructive three-operand form c = a + b, preserving both source operands. AVX's three-operand format is limited to instructions with SIMD operands and does not include instructions with general-purpose registers. Such support first appeared in AVX2.
The alignment requirement of SIMD memory operands is relaxed: most VEX-encoded instructions accept unaligned memory operands, with only the explicitly aligned load and store instructions still requiring alignment.
The new VEX coding scheme introduces a new set of code prefixes that extends the opcode space, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need for VZEROUPPER and VZEROALL.
The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful for improving old code without needing to widen the vectorization, and for avoiding the penalty of transitioning between SSE and AVX; they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.
New instructions
These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands.

Instruction | Description |
VBROADCASTSS , VBROADCASTSD , VBROADCASTF128 | Copy a 32-bit, 64-bit or 128-bit memory operand to all elements of a XMM or YMM vector register. |
VINSERTF128 | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |
VEXTRACTF128 | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |
VMASKMOVPS , VMASKMOVPD | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. On the AMD Jaguar processor architecture, this instruction with a memory source operand takes more than 300 clock cycles when the mask is zero, in which case the instruction should do nothing. This appears to be a design flaw. |
VPERMILPS , VPERMILPD | Permute In-Lane. Shuffle the 32-bit or 64-bit vector elements of one input operand. These are in-lane 256-bit instructions, meaning that they operate on all 256 bits with two separate 128-bit shuffles, so they cannot shuffle across the 128-bit lanes. |
VPERM2F128 | Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |
VZEROALL | Set all YMM registers to zero and tag them as unused. Used when switching between 128-bit use and 256-bit use. |
VZEROUPPER | Set the upper half of all YMM registers to zero. Used when switching between 128-bit use and 256-bit use. |
CPUs with AVX
- Intel
- * Sandy Bridge processors, Q1 2011
- * Sandy Bridge E processors, Q4 2011
- * Ivy Bridge processors, Q1 2012
- * Ivy Bridge E processors, Q3 2013
- * Haswell processors, Q2 2013
- * Haswell E processors, Q3 2014
- * Broadwell processors, Q4 2014
- * Skylake processors, Q3 2015
- * Broadwell E processors, Q2 2016
- * Kaby Lake processors, Q3 2016/Q1 2017
- * Skylake-X processors, Q2 2017
- * Coffee Lake processors, Q4 2017
- * Cannon Lake processors, Q2 2018
- * Whiskey Lake processors, Q3 2018
- * Cascade Lake processors, Q4 2018
- * Ice Lake processors, Q3 2019
- * Comet Lake processor, Q3 2019
- * Tiger Lake processor, 2020
- AMD:
- * Jaguar-based processors and newer
- * Puma-based processors and newer
- * "Heavy Equipment" processors
- ** Bulldozer-based processors, Q4 2011
- ** Piledriver-based processors, Q4 2012
- ** Steamroller-based processors, Q1 2014
- ** Excavator-based processors and newer, 2015
- * Zen-based processors, Q1 2017
- * Zen+-based processors, Q2 2018
- * Zen 2-based processors, Q3 2019
- * Zen 3 processors, 2020
- VIA:
- * Nano QuadCore
- * Eden X4
- Zhaoxin:
- * WuDaoKou-based processors
Compiler and assembler support
- Absoft supports AVX with the -mavx flag.
- The Free Pascal compiler supports AVX and AVX2 with the -CfAVX and -CfAVX2 switches from version 2.7.1.
- The GNU Assembler inline assembly functions support these instructions, as do Intel primitives and the Intel inline assembler.
- GCC starting with version 4.6 and the Intel Compiler Suite starting with version 11.1 support AVX.
- The Open64 compiler version 4.5.1 supports AVX with the -mavx flag.
- PathScale supports AVX via the -mavx flag.
- The Vector Pascal compiler supports AVX via the -cpuAVX32 flag.
- The Visual Studio 2010/2012 compiler supports AVX via intrinsics and the /arch:AVX switch.
- Other assemblers also support AVX, including the MASM version shipped with Visual Studio 2010, YASM, FASM, NASM, and JWASM.
Operating system support
- DragonFly BSD: support added in early 2013.
- FreeBSD: support added in a patch submitted on January 21, 2012, which was included in the 9.1 stable release.
- Linux: supported since kernel version 2.6.30, released on June 9, 2009.
- macOS: support added in 10.6.8 update released on June 23, 2011.
- OpenBSD: support added on March 21, 2015.
- Solaris: supported in Solaris 10 Update 10 and Solaris 11
- Windows: supported in Windows 7 SP1, Windows Server 2008 R2 SP1, Windows 8, Windows 10
- * Windows Server 2008 R2 SP1 with Hyper-V requires a hotfix to support AMD AVX processors.
Advanced Vector Extensions 2
AVX2 makes the following additions:
- expansion of most vector integer SSE and AVX instructions to 256 bits
- three-operand general-purpose bit manipulation and multiply
- gather support, enabling vector elements to be loaded from non-contiguous memory locations
- DWORD- and QWORD-granularity any-to-any permutes
- vector shifts
- three-operand fused multiply-accumulate support
New instructions
CPUs with AVX2
- Intel
- * Haswell processor, Q2 2013
- * Haswell E processor, Q3 2014
- * Broadwell processor, Q4 2014
- * Broadwell E processor, Q3 2016
- * Skylake processor, Q3 2015
- * Kaby Lake processor, Q3 2016/Q1 2017
- * Skylake-X processor, Q2 2017
- * Coffee Lake processor, Q4 2017
- * Cannon Lake processor, Q2 2018
- * Cascade Lake processor, Q2 2019
- * Ice Lake processor, Q3 2019
- * Comet Lake processor, Q3 2019
- * Tiger Lake processor, 2020
- AMD
- * Excavator processor and newer, Q2 2015
- * Zen processor, Q1 2017
- * Zen+ processor, Q2 2018
- * Zen 2 processor, Q3 2019
- * Zen 3 processor, 2020
- VIA:
- * Nano QuadCore
- * Eden X4
AVX-512
AVX-512 instructions are encoded with the new EVEX prefix. It allows four operands, eight new 64-bit opmask registers (of which seven can be used for masking), scalar memory mode with automatic broadcast, explicit rounding control, and a compressed displacement memory addressing mode. The width of the register file is increased to 512 bits and the total register count is increased to 32 in x86-64 mode.
AVX-512 consists of multiple extensions, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following:
- AVX-512 Foundation (F): adds several new instructions and expands most 32-bit and 64-bit floating-point SSE-SSE4.1 and AVX/AVX2 instructions with the EVEX coding scheme to support the 512-bit registers, operation masks, parameter broadcasting, and embedded rounding and exception control
- AVX-512 Conflict Detection Instructions (CD): efficient conflict detection to allow more loops to be vectorized, supported by Knights Landing
- AVX-512 Exponential and Reciprocal Instructions (ER): exponential and reciprocal operations designed to help implement transcendental operations, supported by Knights Landing
- AVX-512 Prefetch Instructions (PF): new prefetch capabilities, supported by Knights Landing
- AVX-512 Vector Length Extensions (VL): extends most AVX-512 operations to also operate on XMM and YMM registers
- AVX-512 Byte and Word Instructions (BW): extends AVX-512 to cover 8-bit and 16-bit integer operations
- AVX-512 Doubleword and Quadword Instructions (DQ): enhanced 32-bit and 64-bit integer operations
- AVX-512 Integer Fused Multiply Add (IFMA): fused multiply add for 512-bit integers
- AVX-512 Vector Byte Manipulation Instructions (VBMI): adds vector byte permutation instructions which are not present in AVX-512BW
- AVX-512 Vector Neural Network Instructions Word variable precision (4VNNIW): vector instructions for deep learning
- AVX-512 Fused Multiply Accumulation Packed Single precision (4FMAPS): vector instructions for deep learning
- VPOPCNTDQ: count of bits set to 1
- VPCLMULQDQ: carry-less multiplication of quadwords
- AVX-512 Vector Neural Network Instructions (VNNI): vector instructions for deep learning
- AVX-512 Galois Field New Instructions (GFNI): vector instructions for calculating Galois field
- AVX-512 Vector AES instructions (VAES): vector instructions for AES coding
- AVX-512 Vector Byte Manipulation Instructions 2 (VBMI2): byte/word load, store and concatenation with shift
- AVX-512 Bit Algorithms (BITALG): byte/word bit manipulation instructions expanding VPOPCNTDQ
The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers and byte, word, doubleword and quadword integer operands.
CPUs with AVX-512
As of 2020, there are no AMD CPUs that support AVX-512, and AMD has not yet released plans to support AVX-512.

Compilers supporting AVX-512
- GCC 4.9 and newer
- Clang 3.9 and newer
- ICC 15.0.1 and newer
- Microsoft Visual Studio 2017 C++ Compiler
- Java 9
- Go 1.11
- Julia
Applications
- Suitable for floating point-intensive calculations in multimedia, scientific and financial applications.
- Increases parallelism and throughput in floating point SIMD calculations.
- Reduces register load due to the non-destructive instructions.
- Improves Linux RAID software performance
Software
- Blender uses AVX2 in the Cycles render engine.
- Botan uses both AVX and AVX2 when available to accelerate some algorithms, like ChaCha.
- Crypto++ uses both AVX and AVX2 when available to accelerate some algorithms, like Salsa and ChaCha.
- OpenSSL uses AVX- and AVX2-optimized cryptographic functions since version 1.0.2. This support is also present in various clones and forks, like LibreSSL.
- Prime95/MPrime, the software used for GIMPS, has used AVX instructions since version 27.x.
- dav1d AV1 decoder can use AVX2 on supported CPUs.
- dnetc, the software used by distributed.net, has an AVX2 core available for its RC5 project and will soon release one for its OGR-28 project.
- Einstein@Home uses AVX in some of their distributed applications that search for gravitational waves.
- Folding@home uses AVX on calculation cores implemented with GROMACS library.
- RPCS3, an open source PlayStation 3 emulator, uses AVX2 and AVX-512 instructions to emulate PS3 games.
- Network Device Interface, an IP video/audio protocol developed by NewTek for live broadcast production, uses AVX and AVX2 for increased performance.
- TensorFlow since version 1.6 requires a CPU supporting at least AVX.
- Xenia requires the AVX instruction set in order to run.
- x264, x265 and VTM video encoders can use AVX2 or AVX-512 to speed up encoding.
- Various CPU-based cryptocurrency miners use AVX and AVX2 for various cryptography-related routines, including SHA-256 and scrypt.
- libsodium uses AVX in the implementation of scalar multiplication for Curve25519 and Ed25519 algorithms, AVX2 for BLAKE2b, Salsa20, ChaCha20, and AVX2 and AVX-512 in implementation of Argon2 algorithm.
- libvpx open source reference implementation of VP8/VP9 encoder/decoder, uses AVX2 or AVX-512 when available.
- FFTW can utilize AVX, AVX2 and AVX-512 when available.
- LLVMpipe, a software OpenGL renderer in Mesa using Gallium and LLVM infrastructure, uses AVX2 when available.
- glibc uses AVX2 for optimized implementation of various mathematical functions in libc.
- Linux kernel can use AVX or AVX2, together with AES-NI as optimized implementation of AES-GCM cryptographic algorithm.
- Linux kernel uses AVX or AVX2 when available, in optimized implementation of multiple other cryptographic ciphers: Camellia, CAST5, CAST6, Serpent, Twofish, MORUS-1280, and other primitives: Poly1305, SHA-1, SHA-256, SHA-512, ChaCha20.
- POCL, a portable computing language that provides an implementation of OpenCL, makes use of AVX, AVX2 and AVX-512 when possible.
- .NET Core and .NET Framework can utilize AVX and AVX2 through the generic System.Numerics.Vectors namespace.
- .NET Core, starting from version 2.1 and more extensively after version 3.0, can directly use all AVX and AVX2 intrinsics through the System.Runtime.Intrinsics.X86 namespace.
- EmEditor 19.0 and above uses AVX2 to speed up processing.
- Native Instruments' Massive X softsynth requires AVX.
- Microsoft Teams uses AVX2 instructions to create a blurred or custom background behind video chat participants.
- simdjson a JSON parsing library uses AVX2 to achieve improved decoding speed.
Downclocking
Since AVX instructions are wider and generate more heat, some Intel processors lower their turbo frequency limit while such instructions execute. Three frequency levels are defined:
- L0 : The normal turbo boost limit.
- L1 : The "AVX boost" limit. Soft-triggered by 256-bit "heavy" instructions. Hard-triggered by "light" 512-bit instructions.
- L2 : The "AVX-512 boost" limit. Soft-triggered by 512-bit heavy instructions.
Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty despite it being faster in a "pure" context. Avoiding the use of wide and heavy instructions helps minimize the impact in these cases. AVX-512VL allows AVX-512 instructions to be used on only 256-bit operands, making it a sensible default for mixed loads.