Some Intel features need to use the processor extended information
in the CPU feature InitializeFunc(), so add code to support it:
this patch adds CPU_V2_EXTENDED_TOPOLOGY to retrieve the processor
extended information.
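Illustrative sketch (not part of the patch): how a caller might request the
extended topology through MpInitLibGetProcessorInfo(), assuming the
CPU_V2_EXTENDED_TOPOLOGY bit (from Protocol/MpService.h) OR'ed into the
processor number is honored, as the MP services definition describes.

  #include <Uefi.h>
  #include <Protocol/MpService.h>   // CPU_V2_EXTENDED_TOPOLOGY, EFI_PROCESSOR_INFORMATION
  #include <Library/MpInitLib.h>    // MpInitLibGetProcessorInfo()

  EFI_STATUS
  GetExtendedTopologyInfo (
    IN  UINTN                      ProcessorNumber,
    OUT EFI_PROCESSOR_INFORMATION  *ProcessorInfo
    )
  {
    //
    // On success, ProcessorInfo->ExtendedInformation.Location2 carries the
    // Package/Module/Tile/Die/Core/Thread coordinates.
    //
    return MpInitLibGetProcessorInfo (
             ProcessorNumber | CPU_V2_EXTENDED_TOPOLOGY,
             ProcessorInfo,
             NULL
             );
  }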
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
This patch removes the legacy SmBase relocation from the
PiSmmCpuDxeSmm driver. The responsibility for SmBase
relocation has been transferred to the SmmRelocationInit
interface, which now handles the following tasks:
1. Relocate the SmBase for each processor.
2. Generate the gSmmBaseHobGuid HOB.
As a result of this change, the PiSmmCpuDxeSmm driver's
role in SMM environment setup is simplified to (sketched below):
1. Utilize the gSmmBaseHobGuid HOB to determine the SmBase.
2. Perform ExecuteFirstSmiInit() for early SMM
initialization.
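A minimal sketch of step 1 above; the HOB body layout shown here is a
simplified assumption (the real gSmmBaseHobGuid HOB data may split the CPUs
across several HOB instances), and step 2 is then just a call to
ExecuteFirstSmiInit().

  #include <PiSmm.h>
  #include <Library/HobLib.h>
  #include <Library/DebugLib.h>

  extern EFI_GUID  gSmmBaseHobGuid;

  //
  // Simplified HOB body for illustration only.
  //
  typedef struct {
    UINT64    ProcessorIndex;        // first CPU covered by this HOB
    UINT64    NumberOfProcessors;    // number of CPUs covered by this HOB
    UINT64    SmBase[1];             // relocated SmBase per covered CPU
  } SMM_BASE_HOB_DATA_SKETCH;

  UINT64
  LookupSmBaseFromHob (
    IN UINTN  CpuIndex
    )
  {
    EFI_HOB_GUID_TYPE         *GuidHob;
    SMM_BASE_HOB_DATA_SKETCH  *Data;

    for (GuidHob = GetFirstGuidHob (&gSmmBaseHobGuid);
         GuidHob != NULL;
         GuidHob = GetNextGuidHob (&gSmmBaseHobGuid, GET_NEXT_HOB (GuidHob)))
    {
      Data = GET_GUID_HOB_DATA (GuidHob);
      if ((CpuIndex >= Data->ProcessorIndex) &&
          (CpuIndex < Data->ProcessorIndex + Data->NumberOfProcessors))
      {
        return Data->SmBase[CpuIndex - Data->ProcessorIndex];
      }
    }

    //
    // The platform is expected to produce the HOB for every CPU.
    //
    ASSERT (FALSE);
    return 0;
  }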
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Since SMM relocation is performed serially, one CPU at a time,
there is no need to allocate buffers for all CPUs to store the
SmBase address in mSmBase and the Rebased flag in mRebased. A
single global variable for each is sufficient.
This patch turns mSmBase and mRebased into plain global
variables to avoid the unnecessary per-CPU memory allocation.
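A sketch of the resulting shape, under the assumption that relocation is
driven one CPU at a time (SendSmiIpiToCpu() is a hypothetical helper, not
the library's real function):

  #include <Base.h>
  #include <Library/BaseLib.h>   // CpuPause()

  //
  // Hypothetical helper that triggers the relocation SMI on one CPU.
  //
  VOID
  SendSmiIpiToCpu (
    IN UINTN  CpuIndex
    );

  //
  // Relocation runs for one CPU at a time, so a single pair of globals is
  // enough; no per-CPU arrays need to be allocated.
  //
  UINTN             mSmBase;    // SmBase for the CPU currently being relocated
  volatile BOOLEAN  mRebased;   // set by that CPU's relocation SMI handler

  VOID
  RelocateOneCpu (
    IN UINTN  CpuIndex,
    IN UINTN  NewSmBase
    )
  {
    mSmBase  = NewSmBase;
    mRebased = FALSE;

    SendSmiIpiToCpu (CpuIndex);

    //
    // Wait until the SMI handler on that CPU reports completion before
    // moving on to the next CPU.
    //
    while (!mRebased) {
      CpuPause ();
    }
  }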
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
This patch just separates the SmBase relocation logic from the
PiSmmCpuDxeSmm driver and moves it to the SmmRelocationInit
interface. It keeps the original implementation of most
functions and leaves the definitions of the global variables
intact. Further refinements to the code are planned for
subsequent patches.
Platforms that need SMM support shall consume this interface
for the SmBase relocation.
Note:
Before using SmmRelocationLib, the PiSmmCpuDxeSmm driver
allocated the SMRAM used for the SMI handler and the save
state area of each processor via Smst->AllocatePages().
With SmmRelocationLib, the SMRAM allocation for the SMI
handlers and save state areas moves to the early PEI phase,
where the Smst->AllocatePages() service is not available.
Instead, the allocation is done by splitting the SMRAM out of
the SMRAM regions reported by gEfiSmmSMramMemoryGuid.
Therefore, the platform must produce the gEfiSmmSMramMemoryGuid
HOB for SmmRelocationLib usage (see the sketch below).
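Illustrative sketch of the PEI-phase carve-out (not the library's actual
code); it assumes the EFI_SMRAM_HOB_DESCRIPTOR_BLOCK layout from MdePkg,
where the GUID symbol is spelled gEfiSmmSmramMemoryGuid:

  #include <PiPei.h>
  #include <Guid/SmramMemoryReserve.h>   // gEfiSmmSmramMemoryGuid, EFI_SMRAM_HOB_DESCRIPTOR_BLOCK
  #include <Library/HobLib.h>

  //
  // Carve SMRAM for the SMI handlers / save state areas out of the regions
  // reported by the gEfiSmmSmramMemoryGuid HOB, since Smst->AllocatePages()
  // does not exist in PEI. Returns 0 if no suitable region is found.
  //
  EFI_PHYSICAL_ADDRESS
  CarveSmramFromHob (
    IN UINT64  Size
    )
  {
    EFI_HOB_GUID_TYPE               *GuidHob;
    EFI_SMRAM_HOB_DESCRIPTOR_BLOCK  *Block;
    EFI_SMRAM_DESCRIPTOR            *Desc;
    UINTN                           Index;

    GuidHob = GetFirstGuidHob (&gEfiSmmSmramMemoryGuid);
    if (GuidHob == NULL) {
      return 0;     // the platform did not produce the mandatory HOB
    }

    Block = GET_GUID_HOB_DATA (GuidHob);
    for (Index = 0; Index < Block->NumberOfSmmReservedRegions; Index++) {
      Desc = &Block->Descriptor[Index];
      if (((Desc->RegionState & EFI_ALLOCATED) == 0) && (Desc->PhysicalSize >= Size)) {
        //
        // Take Size bytes from the top of this region and shrink it.
        //
        Desc->PhysicalSize -= Size;
        return Desc->CpuStart + Desc->PhysicalSize;
      }
    }

    return 0;
  }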
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Intel plans to separate the SmBase relocation logic from the
PiSmmCpuDxeSmm driver; the related behavior will be moved to
the new interface defined by the SmmRelocationLib class.
The SmmRelocationLib class provides the SmmRelocationInit()
interface for the platform to do the SmBase relocation, which
shall provide the following two functionalities:
1. Relocate the SmBase for each processor.
2. Create the gSmmBaseHobGuid HOB.
With SmmRelocationLib, the PiSmmCpuDxeSmm driver (which runs at
a later phase) shall:
1. Consume the gSmmBaseHobGuid HOB to get the relocated SmBase
for each processor.
2. Execute the early SMM initialization.
This patch just provides the SmmRelocationLib class; a sketch
of the interface is shown below.
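A sketch of the class interface (presumably
UefiCpuPkg/Include/Library/SmmRelocationLib.h); the exact parameter list is
an assumption here, so consult the actual header:

  #include <PiPei.h>
  #include <Ppi/MpServices2.h>

  /**
    Perform the SmBase relocation for all processors and build the
    gSmmBaseHobGuid HOB describing the relocated SmBase of each CPU.

    @param[in] MpServices2   Pointer to the EDKII_PEI_MP_SERVICES2_PPI used
                             to run the relocation on every processor.
                             (Assumed parameter; check the real header.)
  **/
  EFI_STATUS
  EFIAPI
  SmmRelocationInit (
    IN EDKII_PEI_MP_SERVICES2_PPI  *MpServices2
    );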
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
On a multi-processor system, if the BSP does not know how many APs are
online, or cannot wake up the APs via broadcast, it can collect the AP
resources before waking up the APs and add a new HOB to save the
processor resources.
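A hedged sketch of the general idea only; the HOB layout and GUID below are
hypothetical and not the structure added by this patch:

  #include <PiPei.h>
  #include <Library/HobLib.h>

  //
  // Hypothetical HOB body recording the APs the BSP has already discovered.
  //
  typedef struct {
    UINT32    CpuCount;
    UINT32    ApicId[64];    // APIC IDs collected before the APs are woken
  } PROCESSOR_RESOURCE_DATA_SKETCH;

  VOID
  SaveCollectedProcessorResources (
    IN CONST PROCESSOR_RESOURCE_DATA_SKETCH  *Data,
    IN CONST EFI_GUID                        *ResourceHobGuid   // hypothetical GUID
    )
  {
    //
    // Publish the collected resources so a later phase can consume them
    // without re-probing or broadcasting to the APs.
    //
    BuildGuidDataHob (ResourceHobGuid, (VOID *)Data, sizeof (*Data));
  }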
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Move the WaitLoopExecutionMode and StartupSignalValue fields to a
separate HOB with a new struct.
WaitLoopExecutionMode and StartupSignalValue are independent of
processor index ranges; they are global to MpInitLib (i.e., the entire
system). Therefore they shouldn't be repeated in every MpHandOff GUID
HOB.
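A sketch of the separated HOB; the field names follow the commit text, while
the struct and GUID names are assumptions:

  #include <PiPei.h>
  #include <Library/HobLib.h>

  //
  // System-global hand-off configuration, published once rather than being
  // repeated in every per-CPU-range MpHandOff GUID HOB.
  //
  typedef struct {
    UINT32    WaitLoopExecutionMode;   // 32-bit vs 64-bit AP wait loop
    UINT32    StartupSignalValue;      // value written to wake a parked AP
  } MP_HAND_OFF_CONFIG;

  extern EFI_GUID  mMpHandOffConfigGuid;   // assumed GUID variable name

  VOID
  BuildMpHandOffConfigHob (
    IN UINT32  WaitLoopExecutionMode,
    IN UINT32  StartupSignalValue
    )
  {
    MP_HAND_OFF_CONFIG  Config;

    Config.WaitLoopExecutionMode = WaitLoopExecutionMode;
    Config.StartupSignalValue    = StartupSignalValue;

    //
    // BuildGuidDataHob() copies the data into the HOB list.
    //
    BuildGuidDataHob (&mMpHandOffConfigGuid, &Config, sizeof (Config));
  }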
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20240228114855.1615788-1-kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Oliver Steffen <osteffen@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
[lersek@redhat.com: turn the "Cc:" message headers from Gerd's on-list
posting into "Cc:" tags in the commit message, in order to pacify
"PatchCheck.py"]
Loop over all MP_HAND_OFF HOBs instead of expecting a single HOB
covering all CPUs in the system.
Add a new FirstMpHandOff variable, which caches the first HOB body for
faster lookups. It is also used to check whether MP_HAND_OFF HOBs are
present at all. Using the MpHandOff pointer for that does not work any
more because that variable is NULL at the end of the HOB loops.
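Roughly, the resulting loop shape (a sketch; the MP_HAND_OFF field names in
the comment are only an example):

  #include <PiPei.h>
  #include "MpLib.h"    // MP_HAND_OFF, GetNextMpHandOffHob() (per this series)

  VOID
  ProcessAllMpHandOffHobs (
    VOID
    )
  {
    MP_HAND_OFF  *FirstMpHandOff;
    MP_HAND_OFF  *MpHandOff;

    //
    // Cache the first HOB body: non-NULL means MP_HAND_OFF HOBs are present.
    // The loop variable cannot serve as that flag, since it is NULL once the
    // loop has finished.
    //
    FirstMpHandOff = GetNextMpHandOffHob (NULL);
    if (FirstMpHandOff == NULL) {
      return;    // no hand-off HOBs: cold boot path
    }

    for (MpHandOff = FirstMpHandOff;
         MpHandOff != NULL;
         MpHandOff = GetNextMpHandOffHob (MpHandOff))
    {
      //
      // Handle only the CPU index range covered by this HOB
      // (e.g. ProcessorIndex .. ProcessorIndex + CpuCount - 1).
      //
    }
  }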
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Message-Id: <20240222160106.686484-5-kraxel@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Rename the function to GetNextMpHandOffHob() and add an MP_HAND_OFF
parameter. When called with a NULL pointer, return the body of the first
HOB; otherwise return the next one in the chain.
Also add the function prototype to the MpLib.h header file.
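A minimal sketch of the renamed helper, assuming the HOBs are identified by
a GUID variable named mMpHandOffGuid:

  #include <PiPei.h>
  #include <Library/HobLib.h>
  #include "MpLib.h"                  // MP_HAND_OFF

  extern EFI_GUID  mMpHandOffGuid;    // assumed name of the MP_HAND_OFF HOB GUID

  MP_HAND_OFF *
  GetNextMpHandOffHob (
    IN CONST MP_HAND_OFF  *MpHandOff
    )
  {
    EFI_HOB_GUID_TYPE  *GuidHob;

    if (MpHandOff == NULL) {
      //
      // First call: return the body of the first MP_HAND_OFF GUID HOB.
      //
      GuidHob = GetFirstGuidHob (&mMpHandOffGuid);
    } else {
      //
      // The HOB body directly follows its EFI_HOB_GUID_TYPE header, so step
      // back to the header and continue the search right after that HOB.
      //
      GuidHob = (EFI_HOB_GUID_TYPE *)((UINTN)MpHandOff - sizeof (EFI_HOB_GUID_TYPE));
      GuidHob = GetNextGuidHob (&mMpHandOffGuid, GET_NEXT_HOB (GuidHob));
    }

    if (GuidHob == NULL) {
      return NULL;
    }

    return (MP_HAND_OFF *)GET_GUID_HOB_DATA (GuidHob);
  }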
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20240222160106.686484-2-kraxel@redhat.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4614
Regarding IsModified: the current function does not consider that hardware
may also change the page table. The issue is that, in the first call of
the internal function PageTableLibMapInLevel, the function assumes the
page table has not changed and ASSERTs on that. But hardware may change
the page table (for example by setting the Accessed/Dirty bits), which
makes the ASSERT fire.
Fix the issue by adding an additional condition so that the page table is
only checked for changes when the software wants to modify it.
Also, add more comments to explain this behavior.
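The shape of the fix, as a simplified sketch (names and types are
stand-ins, not the literal CpuPageTableLib code):

  #include <Base.h>

  typedef union {
    UINT64    Uint64;
  } IA32_PAGING_ENTRY_SKETCH;   // simplified stand-in for the paging entry type

  VOID
  UpdateIsModified (
    IN     BOOLEAN                         Modify,          // did software intend to write?
    IN     IA32_PAGING_ENTRY_SKETCH        OriginalEntry,   // snapshot taken before the update
    IN     CONST IA32_PAGING_ENTRY_SKETCH  *PagingEntry,    // entry as it is now
    IN OUT BOOLEAN                         *IsModified
    )
  {
    //
    // Only treat the entry as modified when software wanted to modify it and
    // actually wrote a different value. When Modify is FALSE, any difference
    // can only come from hardware (Accessed/Dirty bits) and must not be
    // counted or ASSERTed on.
    //
    if (Modify && (OriginalEntry.Uint64 != PagingEntry->Uint64)) {
      *IsModified = TRUE;
    }
  }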
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Crystal Lee <CrystalLee@ami.com.tw>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
The only purpose of writing CR3 in ConvertMemoryPageToNotPresent is to
flush the TLB, because CR3 itself is not changed by
ConvertMemoryPageToNotPresent.
Every call to ConvertMemoryPageToNotPresent is already followed by a
TLB flush. Also, because ConvertMemoryPageToNotPresent is called in a
loop, there is no need to flush the TLB inside
ConvertMemoryPageToNotPresent; to improve performance, flushing the TLB
once after the loop is enough.
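A sketch of the resulting pattern; the ConvertMemoryPageToNotPresent()
prototype is simplified here for illustration:

  #include <Base.h>
  #include <Library/CpuLib.h>   // CpuFlushTlb()

  //
  // Simplified prototype for illustration; the real function takes more
  // parameters (page table root, address, length, ...) and no longer
  // writes CR3 or flushes the TLB itself.
  //
  VOID
  ConvertMemoryPageToNotPresent (
    IN UINT64  Address
    );

  VOID
  ConvertRangesToNotPresent (
    IN CONST UINT64  *Addresses,
    IN UINTN         Count
    )
  {
    UINTN  Index;

    for (Index = 0; Index < Count; Index++) {
      //
      // Only the paging entries are edited here; no per-iteration CR3 write.
      //
      ConvertMemoryPageToNotPresent (Addresses[Index]);
    }

    //
    // One TLB flush after the whole loop is sufficient, and much cheaper
    // than reloading CR3 on every iteration.
    //
    CpuFlushTlb ();
  }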
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Zhiguang Liu <zhiguang.liu@intel.com>
This patch checks BspIndex first, before the lock cmpxchg operation.
If BspIndex has not been set, do the lock cmpxchg; otherwise the APs
do not need to lock-cmpxchg the BspIndex value, since the BSP election
has already been done. This optimization lowers the resource contention
caused by the atomic compare-exchange operation, so as to improve the
SMI performance of the BSP election.
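A sketch of the optimization, assuming the election slot is initialized to
MAX_UINT32 and claimed with InterlockedCompareExchange32():

  #include <Base.h>
  #include <Library/SynchronizationLib.h>   // InterlockedCompareExchange32()

  VOID
  ElectBsp (
    IN volatile UINT32  *BspIndex,    // shared election slot, MAX_UINT32 = not elected
    IN UINT32           CpuIndex
    )
  {
    //
    // Cheap read first: once some CPU has won the election, every other CPU
    // sees a valid index here and skips the LOCK CMPXCHG entirely, reducing
    // contention on the shared cache line during the SMI.
    //
    if (*BspIndex == MAX_UINT32) {
      InterlockedCompareExchange32 ((UINT32 *)BspIndex, MAX_UINT32, CpuIndex);
    }
  }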
Cc: Ray Ni <ray.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Kinney Michael D <michael.d.kinney@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4682
Fixes: 725acd0b9c
Before commit 725acd0b9c ("UefiCpuPkg: Avoid assuming only one
smmbasehob", 2023-12-12), PiCpuSmmEntry() used to look up
"gSmmBaseHobGuid", and allocate "mCpuHotPlugData.SmBase" regardless of the
GUID's presence:
> - mCpuHotPlugData.SmBase = (UINTN *)AllocatePool (sizeof (UINTN) * mMaxNumberOfCpus);
> - ASSERT (mCpuHotPlugData.SmBase != NULL);
After commit 725acd0b9c, PiCpuSmmEntry() -> GetSmBase() would allocate
"mCpuHotPlugData.SmBase" only on the success path, and no allocation would
be performed on *any* of the error paths.
This caused a problem: if "mCpuHotPlugData.SmBase" was left NULL because
the GUID HOB was missing, PiCpuSmmEntry() would still be supposed to
allocate "mCpuHotPlugData.SmBase", just like earlier. However, because
commit 725acd0b9c conflated the two possible error modes (out of SMRAM,
and no GUID HOB), PiCpuSmmEntry() could not decide whether it should
allocate "mCpuHotPlugData.SmBase", or not. Currently, we never allocate if
GetSmBase() fails -- for any reason --, which means that on platforms that
don't produce the GUID HOB, "mCpuHotPlugData.SmBase" is left NULL, leading
to null pointer dereferences later, in PiCpuSmmEntry().
Now that a prior patch in the series distinguishes the two error modes
from each other, we can tell exactly when the GUID HOB is not found, and
reinstate the earlier "mCpuHotPlugData.SmBase" allocation for that case.
(With an actual error check thrown in, in addition to the original
"assertion".)
Cc: Dun Tan <dun.tan@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Reported-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4682
Commit 725acd0b9c ("UefiCpuPkg: Avoid assuming only one smmbasehob",
2023-12-12) introduced a helper function called GetSmBase(), replacing
the lookup of the first and only "gSmmBaseHobGuid" GUID HOB and
unconditional "mCpuHotPlugData.SmBase" allocation, with iterated lookups
plus conditional memory allocation.
This introduced a new failure mode for setting "mCpuHotPlugData.SmBase".
Namely, before commit 725acd0b9c, "mCpuHotPlugData.SmBase" would be
allocated regardless of the GUID HOB being absent. After the commit,
"mCpuHotPlugData.SmBase" could remain NULL if the GUID HOB was absent,
*or* one of the memory allocations inside GetSmBase() failed; and in the
former case, we'd even proceed to the rest of PiCpuSmmEntry().
In relation to this conflation of distinct failure modes, commit
725acd0b9c actually introduced a NULL pointer dereference. Namely, a
NULL "mCpuHotPlugData.SmBase" is not handled properly at all now. We're
going to fix that NULL pointer dereference in a subsequent patch; however,
as a pre-requisite for that we need to tell apart the failure modes of
GetSmBase().
For memory allocation failures, return EFI_OUT_OF_RESOURCES. Move the
"assertion" that SMRAM cannot be exhausted out to the caller
(PiCpuSmmEntry()). Strengthen the assertion by adding an explicit
CpuDeadLoop() call. (Note: GetSmBase() *already* calls CpuDeadLoop() if
(NumberOfProcessors != MaxNumberOfCpus).)
For the absence of the GUID HOB, return EFI_NOT_FOUND.
For good measure, make GetSmBase() STATIC (it should have been STATIC from
the start).
This is just a refactoring, no behavioral difference is intended (beyond
the explicit CpuDeadLoop() upon SMRAM exhaustion).
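In outline, the distinction looks something like this (heavily simplified
sketch; the real parameter list and HOB walk are omitted):

  STATIC
  EFI_STATUS
  GetSmBase (
    IN UINTN  MaxNumberOfCpus    // parameter list simplified for the sketch
    )
  {
    EFI_HOB_GUID_TYPE  *GuidHob;
    UINTN              *SmBaseBuffer;

    GuidHob = GetFirstGuidHob (&gSmmBaseHobGuid);
    if (GuidHob == NULL) {
      //
      // Failure mode 1: the platform produced no gSmmBaseHobGuid HOB at all.
      // The caller may legitimately continue and allocate SmBase itself.
      //
      return EFI_NOT_FOUND;
    }

    SmBaseBuffer = AllocatePool (sizeof (UINTN) * MaxNumberOfCpus);
    if (SmBaseBuffer == NULL) {
      //
      // Failure mode 2: out of (SM)RAM. The caller asserts and dead-loops;
      // this is never survivable.
      //
      return EFI_OUT_OF_RESOURCES;
    }

    //
    // ... walk the gSmmBaseHobGuid HOB(s), fill SmBaseBuffer, publish it ...
    //
    return EFI_SUCCESS;
  }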
Cc: Dun Tan <dun.tan@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Rahul Kumar <rahul1.kumar@intel.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
This patch maps SMRAM at 4 KiB page granularity during SMM page
table initialization (SmmInitPageTable), so as to avoid changing
the SMRAM paging-structure layout when an SMI happens
(PerformRemainingTasks). The point is to avoid the impact of
paging-structure changes on the other processors; refer to SDM
sections 4.10.4 and 4.10.5.
Currently, the SMM BSP needs to update the paging attributes of
the SMRAM ranges in the SMM page table according to the
SmmMemoryAttributesTable when SMM ready-to-lock happens. If the
SMRAM ranges are not mapped with 4 KiB pages, the page table
update process may split 1G/2M paging entries into 4 KiB ones.
Meanwhile, all APs are still running in SMI and might access the
affected linear-address range between the time of modification
and the time of invalidation, which is a potential problem that
can lead to exceptions.
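A sketch of the intent (the helper names and parameters here are
hypothetical; the real patch drives the page table library with 4 KiB
granularity requests):

  #include <PiSmm.h>

  //
  // Hypothetical primitive that installs a single 4 KiB mapping.
  //
  VOID
  MapOne4KPage (
    IN UINTN   PageTableBase,
    IN UINT64  Address
    );

  //
  // Pre-map every SMRAM range with 4 KiB pages while building the initial
  // SMM page table, so later attribute updates at SMM ready-to-lock only
  // touch leaf attributes and never split 1 GiB / 2 MiB entries while APs
  // are executing inside SMI.
  //
  VOID
  MapSmramRangesAs4KPages (
    IN UINTN                 PageTableBase,
    IN EFI_SMRAM_DESCRIPTOR  *SmramRanges,
    IN UINTN                 SmramRangeCount
    )
  {
    UINTN   Index;
    UINT64  Address;

    for (Index = 0; Index < SmramRangeCount; Index++) {
      for (Address = SmramRanges[Index].CpuStart;
           Address < SmramRanges[Index].CpuStart + SmramRanges[Index].PhysicalSize;
           Address += SIZE_4KB)
      {
        MapOne4KPage (PageTableBase, Address);
      }
    }
  }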
Signed-off-by: Dun Tan <dun.tan@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
Reviewed-by: Jiaxin Wu <jiaxin.wu@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>