ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
Currently, we always invalidate the TLBs entirely after making any modification to the page tables. Now that we have introduced strict memory permissions in quite a number of places, such modifications occur much more often, and it is better for performance to flush only those TLB entries that are actually affected by the changes.

At the same time, relax some system wide data synchronization barriers to non-shared. When running in UEFI, we don't share virtual address translations with other masters, unless we are running under virt, but in that case, the host will upgrade them as appropriate (by setting an override at EL2).

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
@@ -124,15 +124,15 @@ ASM_FUNC(ArmSetMAIR)
 // IN  VOID  *MVA            // X1
 // );
 ASM_FUNC(ArmUpdateTranslationTableEntry)
   dc    civac, x0            // Clean and invalidate data line
-  dsb   sy
+  dsb   nshst
   lsr   x1, x1, #12
   EL1_OR_EL2_OR_EL3(x0)
 1: tlbi  vaae1, x1           // TLB Invalidate VA , EL1
   b     4f
 2: tlbi  vae2, x1            // TLB Invalidate VA , EL2
   b     4f
 3: tlbi  vae3, x1            // TLB Invalidate VA , EL3
-4: dsb   sy
+4: dsb   nsh
   isb
   ret