Is CLMUL constant time?

Does the carry-less multiplication instruction (PCLMULQDQ) run in constant time? Said differently, is the time it takes to execute independent of its arguments?
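(For concreteness: "carry-less" multiplication is ordinary binary long multiplication with the partial products combined by XOR instead of addition, i.e. multiplication of polynomials over GF(2). A plain-C sketch of what the 64x64 => 128-bit operation computes; the function name is just for illustration:)

    #include <stdint.h>

    /* Reference carry-less multiply: 64x64 -> 128 bits, partial products
       combined with XOR instead of ADD (polynomial multiplication over GF(2)). */
    void clmul64_ref(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
    {
        uint64_t rlo = 0, rhi = 0;
        for (int i = 0; i < 64; i++) {
            if ((b >> i) & 1) {
                rlo ^= a << i;
                rhi ^= (i == 0) ? 0 : (a >> (64 - i));
            }
        }
        *lo = rlo;
        *hi = rhi;
    }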
According to https://agner.org/optimize/, PCLMULQDQ has fixed latency on any given CPU. (http://www.uops.info/table.html doesn't list a latency for it, but has good stuff for most instructions.) There's no reason to expect it to be data-dependent; typically only division / sqrt has data-dependent performance in modern high-performance CPUs. Regular multiply doesn't: instead, they just make it fast for the general case with lots of hardware parallelism inside the execution unit.
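If you want a rough empirical check, you can time a long serial dependency chain of PCLMULQDQ with different inputs and compare. A minimal sketch using the _mm_clmulepi64_si128 intrinsic and __rdtsc (the trip count, the zero/all-ones input choice, and the lack of warm-up or frequency control are my simplifications, not a rigorous methodology):

    /* Build with something like: gcc -O2 -mpclmul clmul_timing.c */
    #include <stdio.h>
    #include <stdint.h>
    #include <immintrin.h>   /* _mm_clmulepi64_si128, _mm_set_epi64x */
    #include <x86intrin.h>   /* __rdtsc (GCC/Clang) */

    /* noinline so the seed isn't constant-propagated away */
    __attribute__((noinline))
    static uint64_t time_chain(uint64_t seed)
    {
        __m128i x = _mm_set_epi64x(0, (long long)seed);
        __m128i k = _mm_set_epi64x(0, (long long)(seed | 1)); /* keep k nonzero */
        uint64_t t0 = __rdtsc();
        for (int i = 0; i < 1000000; i++)
            x = _mm_clmulepi64_si128(x, k, 0x00);  /* each op depends on the last */
        uint64_t t1 = __rdtsc();
        volatile uint64_t sink = (uint64_t)_mm_cvtsi128_si64(x); /* keep loop alive */
        (void)sink;
        return t1 - t0;
    }

    int main(void)
    {
        printf("seed 0  : %llu ref cycles\n", (unsigned long long)time_chain(0));
        printf("seed ~0 : %llu ref cycles\n", (unsigned long long)time_chain(~0ULL));
        /* Expect the two numbers to be essentially equal, consistent with the
           fixed-latency data from Agner Fog / uops.info. */
        return 0;
    }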
Out-of-order instruction scheduling is a lot easier when uops have fixed latency, and so is building fully-pipelined execution units for them. The scheduler (reservation station) can avoid having 2 operations finish at the same time on the same port and create a write-back conflict, or worse, finish in the same execution unit and cause stalls within it. This is why fixed latency is very common.
(A microcoded multi-uop pclmulqdq with branching could potentially have variable latency, or more plausibly latency that depends on the immediate operand: maybe an extra shuffle uop or two when the immediate is non-zero. So the fixed-latency argument for a single uop doesn't necessarily apply to a micro-coded instruction, but pclmulqdq is still simple enough that you wouldn't expect it to actually branch internally the way rep movsb has to.)

As @fuz points out, PCLMUL was made for crypto, so data-dependent performance would make it vulnerable to timing attacks. That's a very strong reason to make PCLMUL constant time. (Or at worst, dependent on the immediate, but not on the register/memory sources; e.g. an immediate other than 0 could cost extra shift uops to get the high halves of the sources fed to a 64x64 => 128-bit carryless-multiply unit.)
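For reference, here's what the immediate operand selects, written with the C intrinsic (the helper name is just for illustration): bit 0 of the immediate picks the low or high 64-bit half of the first source, bit 4 picks the half of the second source, and the result is always a full 128 bits.

    #include <immintrin.h>   /* _mm_clmulepi64_si128; build with -mpclmul */

    /* All four 64x64 => 128-bit carry-less products of the halves of a and b. */
    void clmul_all_halves(__m128i a, __m128i b, __m128i out[4])
    {
        out[0] = _mm_clmulepi64_si128(a, b, 0x00);  /* lo(a) * lo(b) */
        out[1] = _mm_clmulepi64_si128(a, b, 0x01);  /* hi(a) * lo(b) */
        out[2] = _mm_clmulepi64_si128(a, b, 0x10);  /* lo(a) * hi(b) */
        out[3] = _mm_clmulepi64_si128(a, b, 0x11);  /* hi(a) * hi(b) */
    }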
Numbers from Agner Fog's tables

On Intel since Broadwell, pclmulqdq is 1 uop. On Skylake, it's 7 cycle latency, 1 per clock throughput. (So you need to keep 7 independent PCLMUL operations in flight to saturate the execution unit on port 5; see the sketch after these per-CPU numbers.) Broadwell has 5 cycle latency. With a memory source operand, it's 1 extra uop.

On Haswell, it's 3 uops (2p0 p5) with 7 cycle latency and one per 2 clock throughput.
On Sandybridge/IvyBridge it's 18 uops, 14c latency, one per 8 clock throughput.
On Westmere (2nd Gen Nehalem) it's 12c latency, one per 8c throughput. (Unknown number of uops; neither Agner Fog nor uops.info has it, but we can safely assume it's microcoded.) This was the first generation to support the instruction, one of the very few differences from Nehalem to Westmere.
On Ryzen it's 4 uops, 4c latency, one per 2 clock throughput. http://instlatx64.atw.hu/ shows 4.5 cycle latency; I'm not sure what the difference is between their testing and Agner's.
On Piledriver it's 5 uops, 12c latency, one per 7 clock throughput.
On Jaguar it's 1 uop, 3c latency, one per 1 clock throughput!
On Silvermont it's 8 uops, 10c latency/throughput. Goldmont = 3 uops, 6c lat / 3c tput.
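To make the latency-vs-throughput point above concrete, here's a sketch of one serial chain versus seven independent chains, using the Skylake figures quoted earlier purely as an example (the function names and the over-simplified combine step are mine, not a correct CRC/GHASH reduction):

    #include <immintrin.h>   /* build with -mpclmul */

    /* One serial chain: each multiply waits for the previous result, so the
       loop runs at roughly (PCLMULQDQ latency + XOR) cycles per element. */
    __m128i fold_serial(const __m128i *v, int n, __m128i k)
    {
        __m128i acc = v[0];
        for (int i = 1; i < n; i++)
            acc = _mm_xor_si128(_mm_clmulepi64_si128(acc, k, 0x00), v[i]);
        return acc;
    }

    /* Seven independent chains: the out-of-order core overlaps them, so the
       multiplier on port 5 can approach 1 op per clock (assumes n % 7 == 0
       just to keep the sketch short). */
    __m128i fold_interleaved(const __m128i *v, int n, __m128i k)
    {
        __m128i acc[7];
        for (int j = 0; j < 7; j++)
            acc[j] = v[j];
        for (int i = 7; i < n; i += 7)
            for (int j = 0; j < 7; j++)
                acc[j] = _mm_xor_si128(_mm_clmulepi64_si128(acc[j], k, 0x00), v[i + j]);
        /* Plain XOR of the partial accumulators is NOT a correct reduction for
           CRC/GHASH (a real fold multiplies each by a distinct constant first);
           it's only here so the function returns one value. */
        __m128i r = acc[0];
        for (int j = 1; j < 7; j++)
            r = _mm_xor_si128(r, acc[j]);
        return r;
    }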
See also What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand? and Agner Fog's optimization guide to understand how latency and throughput (and front-end bottlenecks) matter for performance on out-of-order CPUs, depending on the surrounding code.