| Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description |
|---|---|---|---|---|
| EVEX.128.F2.MAP6.W0 56 /r VFCMADDCPH xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from xmm2 and the complex conjugate of xmm3/m128/m32bcst, add to xmm1 and store the result in xmm1 subject to writemask k1. |
| EVEX.256.F2.MAP6.W0 56 /r VFCMADDCPH ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from ymm2 and the complex conjugate of ymm3/m256/m32bcst, add to ymm1 and store the result in ymm1 subject to writemask k1. |
| EVEX.512.F2.MAP6.W0 56 /r VFCMADDCPH zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from zmm2 and the complex conjugate of zmm3/m512/m32bcst, add to zmm1 and store the result in zmm1 subject to writemask k1. |
| EVEX.128.F3.MAP6.W0 56 /r VFMADDCPH xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from xmm2 and xmm3/m128/m32bcst, add to xmm1 and store the result in xmm1 subject to writemask k1. |
| EVEX.256.F3.MAP6.W0 56 /r VFMADDCPH ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from ymm2 and ymm3/m256/m32bcst, add to ymm1 and store the result in ymm1 subject to writemask k1. |
| EVEX.512.F3.MAP6.W0 56 /r VFMADDCPH zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from zmm2 and zmm3/m512/m32bcst, add to zmm1 and store the result in zmm1 subject to writemask k1. |
Instruction Operand Encoding

| Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
|---|---|---|---|---|---|
| A | Full | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A |
Description

This instruction performs a complex multiply and accumulate operation. There are normal and complex conjugate forms of the operation. Each complex value is a pair of FP16 elements, with the real part in the even-indexed element and the imaginary part in the odd-indexed element.

The broadcasting and masking for this operation are done on 32-bit quantities representing a pair of FP16 values.

Rounding is performed at every FMA (fused multiply and add) boundary. Execution occurs as if all MXCSR exceptions are masked. MXCSR status bits are updated to reflect exceptional conditions.
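As a worked example (following the operation below), take dest = 0, src1 = 1 + 2i, and src2 = 3 + 4i in one element pair. The non-conjugate form (VFMADDCPH) produces (1*3 - 2*4) + (1*4 + 2*3)i = -5 + 10i, while the conjugate form (VFCMADDCPH) produces (1*3 + 2*4) + (2*3 - 1*4)i = 11 + 2i.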
Operation

VFMADDCPH dest{k1}, src1, src2 (AVX512)

VL = 128, 256, 512
KL := VL / 32

FOR i := 0 to KL-1:
    IF k1[i] or *no writemask*:
        IF broadcasting and src2 is memory:
            tsrc2.fp16[2*i+0] := src2.fp16[0]
            tsrc2.fp16[2*i+1] := src2.fp16[1]
        ELSE:
            tsrc2.fp16[2*i+0] := src2.fp16[2*i+0]
            tsrc2.fp16[2*i+1] := src2.fp16[2*i+1]

FOR i := 0 to KL-1:
    IF k1[i] or *no writemask*:
        tmp[2*i+0] := dest.fp16[2*i+0] + src1.fp16[2*i+0] * tsrc2.fp16[2*i+0]
        tmp[2*i+1] := dest.fp16[2*i+1] + src1.fp16[2*i+1] * tsrc2.fp16[2*i+0]

FOR i := 0 to KL-1:
    IF k1[i] or *no writemask*:
        // non-conjugate version subtracts even term
        dest.fp16[2*i+0] := tmp[2*i+0] - src1.fp16[2*i+1] * tsrc2.fp16[2*i+1]
        dest.fp16[2*i+1] := tmp[2*i+1] + src1.fp16[2*i+0] * tsrc2.fp16[2*i+1]
    ELSE IF *zeroing*:
        dest.fp16[2*i+0] := 0
        dest.fp16[2*i+1] := 0

DEST[MAXVL-1:VL] := 0
VFCMADDCPH dest{k1}, src1, src2 (AVX512)

VL = 128, 256, 512
KL := VL / 32

FOR i := 0 to KL-1:
    IF k1[i] or *no writemask*:
        IF broadcasting and src2 is memory:
            tsrc2.fp16[2*i+0] := src2.fp16[0]
            tsrc2.fp16[2*i+1] := src2.fp16[1]
        ELSE:
            tsrc2.fp16[2*i+0] := src2.fp16[2*i+0]
            tsrc2.fp16[2*i+1] := src2.fp16[2*i+1]

FOR i := 0 to KL-1:
    IF k1[i] or *no writemask*:
        tmp[2*i+0] := dest.fp16[2*i+0] + src1.fp16[2*i+0] * tsrc2.fp16[2*i+0]
        tmp[2*i+1] := dest.fp16[2*i+1] + src1.fp16[2*i+1] * tsrc2.fp16[2*i+0]

FOR i := 0 to KL-1:
    IF k1[i] or *no writemask*:
        // conjugate version subtracts odd final term
        dest.fp16[2*i+0] := tmp[2*i+0] + src1.fp16[2*i+1] * tsrc2.fp16[2*i+1]
        dest.fp16[2*i+1] := tmp[2*i+1] - src1.fp16[2*i+0] * tsrc2.fp16[2*i+1]
    ELSE IF *zeroing*:
        dest.fp16[2*i+0] := 0
        dest.fp16[2*i+1] := 0

DEST[MAXVL-1:VL] := 0
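For reference, here is a minimal scalar C sketch of one complex lane of both forms. It is not part of the manual: the function name is illustrative, and it uses float instead of FP16, so the per-FMA rounding behavior of the hardware is not modeled.

```c
#include <stdio.h>

/* One complex lane of VFMADDCPH (conjugate = 0) or VFCMADDCPH
   (conjugate = 1). (dre, dim) models the even/odd FP16 pair of dest;
   (sre, sim) is src1 and (tre, tim) is src2. */
static void fmaddc(float *dre, float *dim, float sre, float sim,
                   float tre, float tim, int conjugate) {
    float re, im;
    if (!conjugate) {      /* dest += src1 * src2       */
        re = *dre + sre * tre - sim * tim;
        im = *dim + sim * tre + sre * tim;
    } else {               /* dest += src1 * conj(src2) */
        re = *dre + sre * tre + sim * tim;
        im = *dim + sim * tre - sre * tim;
    }
    *dre = re;
    *dim = im;
}

int main(void) {
    float dre = 0.0f, dim = 0.0f;
    fmaddc(&dre, &dim, 1.0f, 2.0f, 3.0f, 4.0f, 0);
    printf("non-conjugate: %g + %gi\n", dre, dim);  /* -5 + 10i */
    dre = dim = 0.0f;
    fmaddc(&dre, &dim, 1.0f, 2.0f, 3.0f, 4.0f, 1);
    printf("conjugate:     %g + %gi\n", dre, dim);  /* 11 + 2i */
    return 0;
}
```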
Intel C/C++ Compiler Intrinsic Equivalent

VFCMADDCPH __m128h _mm_fcmadd_pch (__m128h a, __m128h b, __m128h c);
VFCMADDCPH __m128h _mm_mask_fcmadd_pch (__m128h a, __mmask8 k, __m128h b, __m128h c);
VFCMADDCPH __m128h _mm_mask3_fcmadd_pch (__m128h a, __m128h b, __m128h c, __mmask8 k);
VFCMADDCPH __m128h _mm_maskz_fcmadd_pch (__mmask8 k, __m128h a, __m128h b, __m128h c);
VFCMADDCPH __m256h _mm256_fcmadd_pch (__m256h a, __m256h b, __m256h c);
VFCMADDCPH __m256h _mm256_mask_fcmadd_pch (__m256h a, __mmask8 k, __m256h b, __m256h c);
VFCMADDCPH __m256h _mm256_mask3_fcmadd_pch (__m256h a, __m256h b, __m256h c, __mmask8 k);
VFCMADDCPH __m256h _mm256_maskz_fcmadd_pch (__mmask8 k, __m256h a, __m256h b, __m256h c);
VFCMADDCPH __m512h _mm512_fcmadd_pch (__m512h a, __m512h b, __m512h c);
VFCMADDCPH __m512h _mm512_mask_fcmadd_pch (__m512h a, __mmask16 k, __m512h b, __m512h c);
VFCMADDCPH __m512h _mm512_mask3_fcmadd_pch (__m512h a, __m512h b, __m512h c, __mmask16 k);
VFCMADDCPH __m512h _mm512_maskz_fcmadd_pch (__mmask16 k, __m512h a, __m512h b, __m512h c);
VFCMADDCPH __m512h _mm512_fcmadd_round_pch (__m512h a, __m512h b, __m512h c, const int rounding);
VFCMADDCPH __m512h _mm512_mask_fcmadd_round_pch (__m512h a, __mmask16 k, __m512h b, __m512h c, const int rounding);
VFCMADDCPH __m512h _mm512_mask3_fcmadd_round_pch (__m512h a, __m512h b, __m512h c, __mmask16 k, const int rounding);
VFCMADDCPH __m512h _mm512_maskz_fcmadd_round_pch (__mmask16 k, __m512h a, __m512h b, __m512h c, const int rounding);
VFMADDCPH __m128h _mm_fmadd_pch (__m128h a, __m128h b, __m128h c);
VFMADDCPH __m128h _mm_mask_fmadd_pch (__m128h a, __mmask8 k, __m128h b, __m128h c);
VFMADDCPH __m128h _mm_mask3_fmadd_pch (__m128h a, __m128h b, __m128h c, __mmask8 k);
VFMADDCPH __m128h _mm_maskz_fmadd_pch (__mmask8 k, __m128h a, __m128h b, __m128h c);
VFMADDCPH __m256h _mm256_fmadd_pch (__m256h a, __m256h b, __m256h c);
VFMADDCPH __m256h _mm256_mask_fmadd_pch (__m256h a, __mmask8 k, __m256h b, __m256h c);
VFMADDCPH __m256h _mm256_mask3_fmadd_pch (__m256h a, __m256h b, __m256h c, __mmask8 k);
VFMADDCPH __m256h _mm256_maskz_fmadd_pch (__mmask8 k, __m256h a, __m256h b, __m256h c);
VFMADDCPH __m512h _mm512_fmadd_pch (__m512h a, __m512h b, __m512h c);
VFMADDCPH __m512h _mm512_mask_fmadd_pch (__m512h a, __mmask16 k, __m512h b, __m512h c);
VFMADDCPH __m512h _mm512_mask3_fmadd_pch (__m512h a, __m512h b, __m512h c, __mmask16 k);
VFMADDCPH __m512h _mm512_maskz_fmadd_pch (__mmask16 k, __m512h a, __m512h b, __m512h c);
VFMADDCPH __m512h _mm512_fmadd_round_pch (__m512h a, __m512h b, __m512h c, const int rounding);
VFMADDCPH __m512h _mm512_mask_fmadd_round_pch (__m512h a, __mmask16 k, __m512h b, __m512h c, const int rounding);
VFMADDCPH __m512h _mm512_mask3_fmadd_round_pch (__m512h a, __m512h b, __m512h c, __mmask16 k, const int rounding);
VFMADDCPH __m512h _mm512_maskz_fmadd_round_pch (__mmask16 k, __m512h a, __m512h b, __m512h c, const int rounding);
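A minimal usage sketch of the unmasked 512-bit intrinsics, assuming the a, b, and c operands map to src1, src2, and the accumulator as in the operation above. The wrapper names are illustrative; building requires a compiler with AVX512-FP16 support (e.g., GCC/Clang with -mavx512fp16) and a CPU that enumerates the feature.

```c
#include <immintrin.h>

/* Multiply-accumulate 16 complex FP16 values per 512-bit vector.
   Real parts occupy even FP16 elements, imaginary parts odd ones. */

/* acc += x * y (VFMADDCPH) */
__m512h complex_fma(__m512h acc, __m512h x, __m512h y) {
    return _mm512_fmadd_pch(x, y, acc);
}

/* acc += x * conj(y) (VFCMADDCPH) */
__m512h complex_fma_conj(__m512h acc, __m512h x, __m512h y) {
    return _mm512_fcmadd_pch(x, y, acc);
}
```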
SIMD Floating-Point Exceptions

Invalid, Underflow, Overflow, Precision, Denormal.
Other Exceptions

EVEX-encoded instructions, see Table 2-49, "Type E4 Class Exception Conditions."

Additionally:

#UD    If (dest_reg == src1_reg) or (dest_reg == src2_reg).