LL rt, offset(rs) | nanoMIPS, availability varies by format.
LLE rt, offset(rs) | nanoMIPS, availability varies by format.
LLWP rt, ru, (rs) | nanoMIPS, availability varies by format.
LLWPE rt, ru, (rs) | nanoMIPS, availability varies by format.
Load Linked Word / Load Linked Word using EVA addressing / Load Linked Word Pair / Load Linked Word Pair using EVA addressing. For LL/LLE, load a word for an atomic read-modify-write (RMW) sequence into register $rt from address $rs + offset (register plus immediate). For LLWP/LLWPE, load words for an atomic RMW sequence into registers $rt and $ru from address $rs. For LLE/LLWPE, translate the virtual address as though the core were in user mode, although it is actually in kernel mode.
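The four formats share the operation pseudocode given later and differ only in the parameters it consumes. As a quick reference, here is a small Python sketch (not part of the manual; the VARIANTS name and the has_offset flag are invented for illustration) collecting the values from the offset/nbytes/is_eva assignments listed under each encoding below:

# Per-variant parameters consumed by the shared operation pseudocode.
# LL/LLE take a 9-bit signed, word-aligned offset; LLWP/LLWPE take none.
VARIANTS = {
    'LL':    dict(nbytes=4, is_eva=False, has_offset=True),
    'LLE':   dict(nbytes=4, is_eva=True,  has_offset=True),
    'LLWP':  dict(nbytes=8, is_eva=False, has_offset=False),
    'LLWPE': dict(nbytes=8, is_eva=True,  has_offset=False),
}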
LL encoding:
101001 | rt | rs | s[8] | 1010 | 0 | 01 | s[7:2] | 00
6 | 5 | 5 | 1 | 4 | 1 | 2 | 6 | 2   (field widths in bits)
offset = sign_extend(s, from_nbits=9)
nbytes = 4
is_eva = False
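For illustration, a minimal Python sketch (not from the manual) of how the scattered offset bits reassemble: the encoding supplies s[8] and s[7:2], and the trailing constant 00 field is assumed to provide s[1:0], so the offset is always a multiple of 4. The helpers mirror the sign_extend used in the pseudocode; the function names are invented.

def sign_extend(value, from_nbits):
    # Interpret the low from_nbits bits of value as a two's-complement integer.
    sign_bit = 1 << (from_nbits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

def ll_offset(s8, s7_2):
    # Rebuild the 9-bit offset from the s[8] and s[7:2] encoding fields.
    s = (s8 << 8) | (s7_2 << 2)
    return sign_extend(s, from_nbits=9)

# Example: s[8]=1, s[7:2]=0b111111 encodes offset -4.
assert ll_offset(1, 0b111111) == -4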
LLE encoding:
101001 | rt | rs | s[8] | 1010 | 0 | 10 | s[7:2] | 00
6 | 5 | 5 | 1 | 4 | 1 | 2 | 6 | 2   (field widths in bits)
offset = sign_extend(s, from_nbits=9)
nbytes = 4
is_eva = True
LLWP encoding:
101001 | rt | rs | x | 1010 | 0 | 01 | ru | x | 01
6 | 5 | 5 | 1 | 4 | 1 | 2 | 5 | 1 | 2   (field widths in bits)
offset = 0
nbytes = 8
is_eva = False
LLWPE encoding:
101001 | rt | rs | x | 1010 | 0 | 10 | ru | x | 01
6 | 5 | 5 | 1 | 4 | 1 | 2 | 5 | 1 | 2   (field widths in bits)
offset = 0
nbytes = 8
is_eva = True
if nbytes == 8 and C0.Config5.XNP:
    raise exception('RI', 'LLWP[E] requires word-paired support')
if is_eva and not C0.Config5.EVA:
    raise exception('RI')
va = effective_address(GPR[rs], offset, 'Load', eva=is_eva)
# Linked access must be aligned.
if va & (nbytes-1):
    raise exception('ADEL', badva=va)
pa, cca = va2pa(va, 'Load', eva=is_eva)
if (cca == 2 or cca == 7) and not C0.Config5.ULS:
    raise UNPREDICTABLE('uncached CCA not synchronizable when Config5.ULS=0')
    # (Preferred behavior for a non-synchronizable address is Bus Error.)
# Indicate that there is an active RMW sequence on this processor.
C0.LLAddr.LLB = 1
# Save target address of active RMW sequence.
record_linked_address(va, pa, cca, nbytes=nbytes)
data = read_memory(va, pa, cca, nbytes=nbytes)
if nbytes == 4:
    # LL/LLE
    GPR[rt] = sign_extend(data, from_nbits=32)
else:
    # LLWP/LLWPE
    word0 = data[63:32] if C0.Config.BE else data[31:0]
    word1 = data[31:0] if C0.Config.BE else data[63:32]
    if rt == ru:
        raise UNPREDICTABLE()
    GPR[rt] = sign_extend(word0, from_nbits=32)
    GPR[ru] = sign_extend(word1, from_nbits=32)
The LL/LLE/LLWP/LLWPE instructions are used to initiate an atomic read-modify-write (RMW) sequence. C0.LLAddr.LLB is set to 1, indicating that there is an active RMW sequence on the current processor, and an implementation-dependent set of state is saved that records the address and access type of the active RMW sequence. There can be only one active RMW sequence per processor.
The RMW sequence is completed by a matching SC/SCE/SCWP/SCWPE instruction. The store-conditional instruction completes only if the system can guarantee that the accessed memory location has not been modified since the load-linked instruction executed, as discussed in more detail in the SC/SCE/SCWP/SCWPE instruction description.
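As an illustration of the link semantics described above, here is a hypothetical Python model (not from the manual; the LinkMonitor class and its method names are invented) of one processor's RMW link state: a load-linked records the address and sets LLB, an external write to the linked location clears the link, and a matching store-conditional succeeds only while the sequence is still active.

class LinkMonitor:
    # Toy model of one processor's link state: C0.LLAddr.LLB plus the
    # recorded link address. Real hardware tracks a block and clears the
    # link on external modification; external_write() models that.
    def __init__(self):
        self.llb = 0           # 1 while an RMW sequence is active
        self.link_addr = None  # address recorded by the load-linked access

    def load_linked(self, memory, addr):
        self.llb = 1
        self.link_addr = addr
        return memory[addr]

    def external_write(self, addr):
        # Another processor or I/O device modified the linked location,
        # so any active RMW sequence on this processor must fail.
        if self.llb and addr == self.link_addr:
            self.llb = 0

    def store_conditional(self, memory, addr, value):
        success = (self.llb == 1 and addr == self.link_addr)
        if success:
            memory[addr] = value
        self.llb = 0           # the RMW sequence ends whether or not it succeeded
        return 1 if success else 0

# Usage: retry until the store-conditional succeeds (an atomic increment).
mem = {0x1000: 41}
mon = LinkMonitor()
while True:
    v = mon.load_linked(mem, 0x1000)
    if mon.store_conditional(mem, 0x1000, v + 1):
        break
assert mem[0x1000] == 42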
The address and CCA targeted by LL/LLE/LLWP/LLWPE must be synchronizable by all processors and I/O devices sharing the location; if they are not, the result is UNPREDICTABLE. Which storage is synchronizable is a function of both the CPU and system implementations; see the SC/SCE/SCWP/SCWPE instruction description for the formal definition. The preferred behavior for a load-linked instruction that attempts to access an address that is not synchronizable is a Bus Error exception.
If Config5.ULS is set, the system supports uncached load-linked/store-conditional accesses; otherwise, the result of an uncached access is UNPREDICTABLE.
An LL/LLE/LLWP/LLWPE instruction on one processor must not take an action that, by itself, causes a store-conditional instruction for the same block on another processor to fail. For example, if an implementation depends on retaining the data in the cache during the RMW sequence, cache misses caused by a load-linked instruction must not fetch data in the exclusive state, since that would remove it from another core’s cache if it were present.
The execution of a load-linked instruction does not have to be followed by the execution of a store-conditional instruction; a program is free to abandon the RMW sequence without attempting a write.
Support for the paired-word instructions LLWP/LLWPE is indicated by the Config5.XNP bit. Paired-word support is required for nanoMIPS™ cores, except for NMS cores, where it is optional.
The result of LLWP/LLWPE is unpredictable if $rt and $ru are the same register.
Address Error. Bus Error. Coprocessor Unusable for LLE/LLWPE. Reserved Instruction for LLE/LLWPE if EVA is not implemented. Reserved Instruction for LLWP/LLWPE if load-linked pair is not implemented.
TLB Invalid. TLB Read Inhibit. TLB Refill. Watch.