Add a bool parameter to ScanOrAdd64BitE820Ram to explicitly specify
whether ScanOrAdd64BitE820Ram should add HOBs for high memory (above
4G) or only scan.
Also add a lowmem parameter so ScanOrAdd64BitE820Ram
can report the memory size below 4G.
This allows more flexible usage of ScanOrAdd64BitE820Ram;
a follow-up patch will use it for all memory detection.
No functional change.
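As an illustration, the reworked interface could look roughly like
this (a sketch only; parameter names are placeholders, not necessarily
the ones used in the patch):

  STATIC
  EFI_STATUS
  ScanOrAdd64BitE820Ram (
    IN  BOOLEAN AddHighHobs,  // add HOBs for RAM above 4G, or scan only
    OUT UINT64  *LowMemory    OPTIONAL,  // RAM size below 4G, if requested
    OUT UINT64  *MaxAddress   OPTIONAL   // highest RAM address found
    );

Scan-only usage would then pass FALSE and a non-NULL LowMemory pointer
to learn the memory size below 4G without producing any HOBs.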
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=3593
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
FdtClient is unhappy without a device tree, so add an empty fdt
which we can use in case etc/fdt is not present in fw_cfg.
On ARM machines a device tree is mandatory for hardware detection,
which is why FdtClient fails hard.
On microvm the device tree is only used to detect virtio-mmio devices
(this patch series) and the pcie host (future series). So edk2 can
continue with limited functionality in case no device tree is present:
no storage, no network, but serial console and direct kernel boot
work.
QEMU releases 6.2 and newer will provide a device tree for microvm.
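A minimal sketch of the fallback idea, assuming the usual QemuFwCfgLib
calls; mEmptyFdt is a hypothetical stand-in for whatever built-in empty
device tree blob the driver carries:

  FIRMWARE_CONFIG_ITEM  FdtItem;
  UINTN                 FdtSize;
  VOID                  *DeviceTreeBase;

  if (EFI_ERROR (QemuFwCfgFindFile ("etc/fdt", &FdtItem, &FdtSize))) {
    //
    // No device tree in fw_cfg: fall back to a built-in empty fdt so
    // FdtClient can still come up (no virtio-mmio or PCIe detection).
    //
    DeviceTreeBase = (VOID *)mEmptyFdt;
  } else {
    DeviceTreeBase = AllocatePages (EFI_SIZE_TO_PAGES (FdtSize));
    ASSERT (DeviceTreeBase != NULL);
    QemuFwCfgSelectItem (FdtItem);
    QemuFwCfgReadBytes (FdtSize, DeviceTreeBase);
  }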
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=3689
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
When SEV-SNP is active, a memory region mapped encrypted in the page
table must be validated before access. There are two approaches that
can be taken to validate the system RAM detected during the PEI phase:
1) Validate on-demand
OR
2) Validate before access
On-demand
=========
If memory is not validated before access, it will cause a #VC
exception with the page-not-validated error code. The #VC exception
handler can perform the validation steps.
The pages that have been validated will need to be tracked to avoid
double-validation scenarios. The range of memory that has not
been validated will need to be communicated to the OS through the
recently introduced unaccepted memory type
https://github.com/microsoft/mu_basecore/pull/66, so that the OS can
validate those ranges before using them.
Validate before access
======================
Since the PEI phase detects all the available system RAM, use the
MemEncryptSevSnpValidateSystemRam() function to pre-validate the
system RAM in the PEI phase.
For now, choose option 2 due to the dependency and the complexity
of the on-demand validation.
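A sketch of option 2, under the assumption that the accompanying
MemEncryptSevLib changes expose the validation function with the name
and shape described above (MemEncryptSevSnpIsEnabled() is likewise
assumed from that library):

  STATIC
  VOID
  SevSnpValidateRamRange (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINT64                Length
    )
  {
    if (MemEncryptSevSnpIsEnabled ()) {
      //
      // Pre-validate the detected system RAM so that later accesses do
      // not trigger #VC exceptions with the page-not-validated code.
      //
      MemEncryptSevSnpValidateSystemRam (
        BaseAddress,
        EFI_SIZE_TO_PAGES ((UINTN)Length)
        );
    }
  }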
Cc: Michael Roth <michael.roth@amd.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3108
Protect the GHCB backup pages used by an SEV-ES guest when S3 is
supported.
Regarding the lifecycle of the GHCB backup pages:
PcdOvmfSecGhcbBackupBase
(a) when and how it is initialized after first boot of the VM
If SEV-ES is enabled, the GHCB backup pages are initialized when a
nested #VC is received during the SEC phase
[OvmfPkg/Library/VmgExitLib/SecVmgExitVcHandler.c].
(b) how it is protected from memory allocations during DXE
If S3 and SEV-ES are enabled, then InitializeRamRegions()
[OvmfPkg/PlatformPei/MemDetect.c] protects the ranges with an AcpiNVS
memory allocation HOB, in PEI.
If S3 is disabled, then these ranges are not protected. PEI switches to
the GHCB backup pages in permanent PEI memory and DXE will use these
PEI GHCB backup pages, so we don't have to preserve
PcdOvmfSecGhcbBackupBase.
(c) how it is protected from the OS
If S3 is enabled, then (b) reserves it from the OS too.
If S3 is disabled, then the range needs no protection.
(d) how it is accessed on the S3 resume path
It is rewritten the same way as in (a), which is fine because (b) reserved it.
(e) how it is accessed on the warm reset path
It is rewritten the same way as in (a).
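For reference, the protection in (b) boils down to a memory allocation
HOB along these lines (a sketch; PcdOvmfSecGhcbBackupSize and the
mS3Supported flag are assumed to exist alongside the base PCD named
above):

  if (mS3Supported && MemEncryptSevEsIsEnabled ()) {
    //
    // Keep DXE allocations and the OS away from the SEC GHCB backup
    // pages, so the S3 resume path can reuse them.
    //
    BuildMemoryAllocationHob (
      PcdGet32 (PcdOvmfSecGhcbBackupBase),
      PcdGet32 (PcdOvmfSecGhcbBackupSize),
      EfiACPIMemoryNVS
      );
  }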
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <119102a3d14caa70d81aee334a2e0f3f925e1a60.1610045305.git.thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3108
In order to be able to issue messages or make interface calls that cause
another #VC (e.g. GetLocalApicBaseAddress () issues RDMSR), add support
for nested #VCs.
In order to support nested #VCs, GHCB backup pages are required. If a #VC
is received while currently processing a #VC, a backup of the current GHCB
content is made. This allows the #VC handler to continue processing the
new #VC. Upon completion of the new #VC, the GHCB is restored from the
backup page. The #VC recursion level is tracked in the per-vCPU variable
area.
Support is added to handle up to one nested #VC (or two #VCs total). If
a second nested #VC is encountered, an ASSERT will be issued and the vCPU
will enter CpuDeadLoop ().
For SEC, the GHCB backup pages are reserved in the OvmfPkgX64.fdf memory
layout, with two new fixed PCDs to provide the address and size of the
backup area.
For PEI/DXE, the GHCB backup pages are allocated as boot services pages
using the memory allocation library.
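The backup/restore idea in outline; VcLevel, GhcbBackup and
InternalVmgExitHandleVc are illustrative names, not the actual
identifiers, and the recursion counter really lives in the per-vCPU
variable area:

  VcLevel++;
  if (VcLevel > 1) {
    ASSERT (VcLevel == 2);                      // at most one nested #VC
    CopyMem (GhcbBackup, Ghcb, sizeof (GHCB));  // save outer #VC state
  }

  Status = InternalVmgExitHandleVc (Ghcb, ExceptionType, SystemContext);

  if (VcLevel > 1) {
    CopyMem (Ghcb, GhcbBackup, sizeof (GHCB));  // restore outer state
  }
  VcLevel--;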
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <ac2e8203fc41a351b43f60d68bdad6b57c4fb106.1610045305.git.thomas.lendacky@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
After having transitioned from UEFI to the OS, the OS will need to boot
the APs. For an SEV-ES guest, the APs will have been parked by UEFI using
GHCB pages allocated by UEFI. The hypervisor will write to the GHCB
SW_EXITINFO2 field of the GHCB when the AP is booted. As a result, the
GHCB pages must be marked reserved so that the OS does not attempt to use
them and experience memory corruption because of the hypervisor write.
Change the GHCB allocation from the default boot services memory to
reserved memory.
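The gist of the change, as a MemoryAllocationLib sketch (GhcbPageCount
stands for whatever per-vCPU page count the platform computes):

  //
  // Previously: AllocatePages() returned EfiBootServicesData memory,
  // which the OS reclaims after ExitBootServices() even though the
  // hypervisor keeps writing SW_EXITINFO2 into it when booting APs.
  //
  // Now: reserved memory, which the OS leaves alone.
  //
  GhcbBase = AllocateReservedPages (GhcbPageCount);
  ASSERT (GhcbBase != NULL);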
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
Protect the SEV-ES work area memory used by an SEV-ES guest.
Regarding the lifecycle of the SEV-ES memory area:
PcdSevEsWorkArea
(a) when and how it is initialized after first boot of the VM
If SEV-ES is enabled, the SEV-ES area is initialized during
the SEC phase [OvmfPkg/ResetVector/Ia32/PageTables64.asm].
(b) how it is protected from memory allocations during DXE
If SEV-ES is enabled, then InitializeRamRegions()
[OvmfPkg/PlatformPei/MemDetect.c] protects the ranges with either
an AcpiNVS (S3 enabled) or BootServicesData (S3 disabled) memory
allocation HOB, in PEI.
(c) how it is protected from the OS
If S3 is enabled, then (b) reserves it from the OS too.
If S3 is disabled, then the range needs no protection.
(d) how it is accessed on the S3 resume path
It is rewritten the same way as in (a), which is fine because (b) reserved it.
(e) how it is accessed on the warm reset path
It is rewritten the same way as in (a).
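A sketch of (b), with the memory type keyed to S3 support (PCD and
variable names follow the PlatformPei style but should be treated as
placeholders):

  if (MemEncryptSevEsIsEnabled ()) {
    //
    // AcpiNVS preserves the work area for the S3 resume path;
    // BootServicesData merely keeps DXE allocations away until the OS
    // takes over.
    //
    BuildMemoryAllocationHob (
      PcdGet32 (PcdSevEsWorkAreaBase),
      PcdGet32 (PcdSevEsWorkAreaSize),
      mS3Supported ? EfiACPIMemoryNVS : EfiBootServicesData
      );
  }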
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Julien Grall <julien@xen.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
The SEV support will clear the C-bit from non-RAM areas. The early GDT
lives in a non-RAM area, so when an exception occurs (like a #VC) the GDT
will be read as un-encrypted even though it is encrypted. This results
in a failure to handle the exception.
Move the GDT into RAM so it can be accessed without error when running as
an SEV-ES guest.
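The fix amounts to copying the GDT into allocated RAM and reloading
GDTR, roughly as follows (BaseLib and MemoryAllocationLib calls):

  IA32_DESCRIPTOR  Gdtr;
  VOID             *Gdt;

  //
  // Move the early GDT out of the non-RAM area into RAM, so GDT reads
  // during #VC handling see correctly encrypted memory.
  //
  AsmReadGdtr (&Gdtr);
  Gdt = AllocatePages (EFI_SIZE_TO_PAGES ((UINTN)Gdtr.Limit + 1));
  ASSERT (Gdt != NULL);
  CopyMem (Gdt, (VOID *)Gdtr.Base, (UINTN)Gdtr.Limit + 1);
  Gdtr.Base = (UINTN)Gdt;
  AsmWriteGdtr (&Gdtr);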
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
Allocate memory for the GHCB pages and the per-CPU variable pages during
SEV initialization for use during the PEI and DXE phases. The GHCB page(s)
must be shared pages, so clear the encryption mask from the current page
table entries. Upon successful allocation, set the GHCB PCDs (PcdGhcbBase
and PcdGhcbSize).
The per-CPU variable page needs to be unique per AP. Using the page after
the GHCB ensures that it is unique per AP. Only the GHCB page is marked as
shared, keeping the per-CPU variable page encrypted. The same logic is
used in DXE using CreateIdentityMappingPageTables() before switching to
the DXE page tables.
The GHCB pages (one per vCPU) will be used by the PEI and DXE #VC
exception handlers. The #VC exception handler will fill in the necessary
fields of the GHCB and exit to the hypervisor using the VMGEXIT
instruction. The hypervisor then accesses the GHCB associated with the
vCPU in order to perform the requested function.
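In outline, the allocation and sharing could look like this; two pages
per vCPU (GHCB first, per-CPU variable page second), with mMaxCpuCount
standing for the platform's vCPU count and the
MemEncryptSevClearPageEncMask() parameter list assumed from the
MemEncryptSevLib interface of that era:

  UINTN                 GhcbPageCount;
  UINTN                 Page;
  VOID                  *GhcbBase;
  EFI_PHYSICAL_ADDRESS  GhcbBasePa;
  RETURN_STATUS         Status;

  GhcbPageCount = mMaxCpuCount * 2;
  GhcbBase      = AllocatePages (GhcbPageCount);
  ASSERT (GhcbBase != NULL);
  GhcbBasePa    = (EFI_PHYSICAL_ADDRESS)(UINTN)GhcbBase;

  //
  // Clear the encryption mask (C-bit) for the GHCB pages only; the
  // per-CPU variable page following each GHCB stays encrypted.
  //
  for (Page = 0; Page < GhcbPageCount; Page += 2) {
    Status = MemEncryptSevClearPageEncMask (
               0,                                     // current CR3
               GhcbBasePa + EFI_PAGES_TO_SIZE (Page),
               1,                                     // one page: the GHCB
               TRUE                                   // flush caches
               );
    ASSERT_RETURN_ERROR (Status);
  }

  ZeroMem (GhcbBase, EFI_PAGES_TO_SIZE (GhcbPageCount));

  Status = PcdSet64S (PcdGhcbBase, GhcbBasePa);
  ASSERT_RETURN_ERROR (Status);
  Status = PcdSet64S (PcdGhcbSize, EFI_PAGES_TO_SIZE (GhcbPageCount));
  ASSERT_RETURN_ERROR (Status);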
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
Protect the memory used by an SEV-ES guest when S3 is supported. This
includes the page table used to break down the 2MB page that contains
the GHCB so that it can be marked un-encrypted, as well as the GHCB
area.
Regarding the lifecycle of the GHCB-related memory areas:
PcdOvmfSecGhcbPageTableBase
PcdOvmfSecGhcbBase
(a) when and how it is initialized after first boot of the VM
If SEV-ES is enabled, the GHCB-related areas are initialized during
the SEC phase [OvmfPkg/ResetVector/Ia32/PageTables64.asm].
(b) how it is protected from memory allocations during DXE
If S3 and SEV-ES are enabled, then InitializeRamRegions()
[OvmfPkg/PlatformPei/MemDetect.c] protects the ranges with an AcpiNVS
memory allocation HOB, in PEI.
If S3 is disabled, then these ranges are not protected. DXE's own page
tables are first built while still in PEI (see HandOffToDxeCore()
[MdeModulePkg/Core/DxeIplPeim/X64/DxeLoadFunc.c]). Those tables are
located in permanent PEI memory. After CR3 is switched over to them
(which occurs before jumping to the DXE core entry point), we don't have
to preserve PcdOvmfSecGhcbPageTableBase. PEI switches to GHCB pages in
permanent PEI memory and DXE will use these PEI GHCB pages, so we don't
have to preserve PcdOvmfSecGhcbBase.
(c) how it is protected from the OS
If S3 is enabled, then (b) reserves it from the OS too.
If S3 is disabled, then the range needs no protection.
(d) how it is accessed on the S3 resume path
It is rewritten the same way as in (a), which is fine because (b) reserved it.
(e) how it is accessed on the warm reset path
It is rewritten the same way as in (a).
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Julien Grall <julien@xen.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
The previous patch has no effect -- i.e., it cannot stop the tracking of
BS Code/Data in MemTypeInfo -- if the virtual machine already has a
MemoryTypeInformation UEFI variable.
In that case, our current logic allows the DXE IPL PEIM to translate the
UEFI variable to the HOB, and that translation is verbatim. If the
variable already contains records for BS Code/Data, the issues listed in
the previous patch persist for the virtual machine.
For this reason, *always* install PlatformPei's own MemTypeInfo HOB. This
prevents the DXE IPL PEIM's variable-to-HOB translation.
In PlatformPei, consume the records in the MemoryTypeInformation UEFI
variable as hints:
- Ignore all memory types for which we wouldn't by default install records
in the HOB. This hides BS Code/Data from any existing
MemoryTypeInformation variable.
- For the memory types that our defaults cover, enable the records in the
UEFI variable to increase (and *only* to increase) the page counts.
This lets the MemoryTypeInformation UEFI variable function as designed,
but it eliminates a reboot when such a new OVMF binary is deployed (a)
that has higher memory consumption than tracked by the virtual machine's
UEFI variable previously, *but* (b) whose defaults also reflect those
higher page counts.
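The hinting rule can be sketched as a merge over PlatformPei's default
records (names are hypothetical; the real code reads the variable via
the ReadOnlyVariable2 PPI):

  STATIC
  VOID
  MergeMemoryTypeInfo (
    IN OUT EFI_MEMORY_TYPE_INFORMATION        *Defaults,
    IN     UINTN                              DefaultCount,
    IN     CONST EFI_MEMORY_TYPE_INFORMATION  *FromVariable,
    IN     UINTN                              VariableCount
    )
  {
    UINTN  Di;
    UINTN  Vi;

    for (Di = 0; Di < DefaultCount; Di++) {
      for (Vi = 0; Vi < VariableCount; Vi++) {
        if (FromVariable[Vi].Type == Defaults[Di].Type &&
            FromVariable[Vi].NumberOfPages > Defaults[Di].NumberOfPages) {
          //
          // The variable may only raise a default page count, never
          // lower it; memory types absent from our defaults (BS
          // Code/Data) are never visited here, i.e. they are ignored.
          //
          Defaults[Di].NumberOfPages = FromVariable[Vi].NumberOfPages;
        }
      }
    }
  }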
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=2706
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20200508121651.16045-3-lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
When OVMF runs in a SEV guest, the initial SMM Save State Map is
(1) allocated as EfiBootServicesData type memory in OvmfPkg/PlatformPei,
function AmdSevInitialize(), for preventing unintended information
sharing with the hypervisor;
(2) decrypted in AmdSevDxe;
(3) re-encrypted in OvmfPkg/Library/SmmCpuFeaturesLib, function
SmmCpuFeaturesSmmRelocationComplete(), which is called by
PiSmmCpuDxeSmm right after initial SMBASE relocation;
(4) released to DXE at the same location.
The SMRAM at the default SMBASE is a superset of the initial Save State
Map. The reserved memory allocation in InitializeRamRegions(), from the
previous patch, must override the allocating and freeing in (1) and (4),
respectively. (Note: the decrypting and re-encrypting in (2) and (3) are
unaffected.)
In AmdSevInitialize(), only assert the containment of the initial Save
State Map, in the larger area already allocated by InitializeRamRegions().
In SmmCpuFeaturesSmmRelocationComplete(), preserve the allocation of the
initial Save State Map into OS runtime, as part of the allocation done by
InitializeRamRegions(). Only assert containment.
These changes only affect the normal boot path (the UEFI memory map is
untouched during S3 resume).
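The containment checks reduce to assertions of this shape (the macro
and variable names are placeholders for the default-SMBASE SMRAM area
and the Save State Map allocation):

  //
  // The initial Save State Map must lie entirely within the SMRAM at
  // the default SMBASE, which InitializeRamRegions() already covers.
  //
  ASSERT (SMM_DEFAULT_SMBASE <= MapPagesBase);
  ASSERT (
    MapPagesBase + EFI_PAGES_TO_SIZE (MapPagesCount) <=
    SMM_DEFAULT_SMBASE + MCH_DEFAULT_SMBASE_SIZE
    );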
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Message-Id: <20200129214412.2361-9-lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The 128KB SMRAM at the default SMBASE will be used for protecting the
initial SMI handler for hot-plugged VCPUs. After platform reset, the SMRAM
in question is open (and looks just like RAM). When BDS signals
EFI_DXE_MM_READY_TO_LOCK_PROTOCOL (and so TSEG is locked down), we're
going to lock the SMRAM at the default SMBASE too.
For this, we have to reserve said SMRAM area as early as possible, from
components in PEI, DXE, and OS runtime.
* QemuInitializeRam() currently produces a single resource descriptor HOB,
for exposing the system RAM available under 1GB. This occurs during both
normal boot and S3 resume identically (the latter only for the sake of
CpuMpPei borrowing low RAM for the AP startup vector).
But, the SMRAM at the default SMBASE falls in the middle of the current
system RAM HOB. Split the HOB, and cover the SMRAM with a reserved
memory HOB in the middle. CpuMpPei (via MpInitLib) skips reserved memory
HOBs.
* InitializeRamRegions() is responsible for producing memory allocation
HOBs, carving out parts of the resource descriptor HOBs produced in
QemuInitializeRam(). Allocate the above-introduced reserved memory
region in full, similarly to how we treat TSEG, so that DXE and the OS
avoid the locked SMRAM (black hole) in this area.
(Note that these allocations only occur on the normal boot path, as they
matter for the UEFI memory map, which cannot be changed during S3
resume.)
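In HOB terms the split looks roughly like this; SmramAtDefaultSmbaseBase
and SmramAtDefaultSmbaseSize are symbolic stand-ins for the 128KB window
at the default SMBASE, Attributes for the usual system-RAM resource
attributes, and LowMemorySize for the end of the original HOB:

  //
  // RAM below the default-SMBASE SMRAM remains ordinary system memory.
  //
  BuildResourceDescriptorHob (
    EFI_RESOURCE_SYSTEM_MEMORY,
    Attributes,
    0,
    SmramAtDefaultSmbaseBase
    );
  //
  // The SMRAM window itself becomes reserved memory, which CpuMpPei
  // (via MpInitLib) skips.
  //
  BuildResourceDescriptorHob (
    EFI_RESOURCE_MEMORY_RESERVED,
    Attributes,
    SmramAtDefaultSmbaseBase,
    SmramAtDefaultSmbaseSize
    );
  //
  // RAM above the window, up to the original HOB's end, is system
  // memory again; InitializeRamRegions() later allocates the reserved
  // window in full, keeping DXE and the OS out.
  //
  BuildResourceDescriptorHob (
    EFI_RESOURCE_SYSTEM_MEMORY,
    Attributes,
    SmramAtDefaultSmbaseBase + SmramAtDefaultSmbaseSize,
    LowMemorySize - (SmramAtDefaultSmbaseBase + SmramAtDefaultSmbaseSize)
    );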
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Message-Id: <20200129214412.2361-8-lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The permanent PEI RAM that is published on the normal boot path starts
strictly above MEMFD_BASE_ADDRESS (8 MB -- see the FDF files), regardless
of whether PEI decompression will be necessary on S3 resume due to
SMM_REQUIRE. Therefore the normal boot permanent PEI RAM never overlaps
with the SMRAM at the default SMBASE (192 KB).
The S3 resume permanent PEI RAM is strictly above the normal boot one.
Therefore the no-overlap statement holds true on the S3 resume path as
well.
Assert the no-overlap condition commonly for both boot paths in
PublishPeiMemory().
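The common assertion is just (symbolic names for the default-SMBASE
SMRAM window again; MemoryBase is the start of the permanent PEI RAM):

  ASSERT (
    SmramAtDefaultSmbaseBase + SmramAtDefaultSmbaseSize <= MemoryBase
    );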
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Message-Id: <20200129214412.2361-7-lersek@redhat.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
MaxCpuCountInitialization() currently handles the following options:
(1) QEMU does not report the boot CPU count (FW_CFG_NB_CPUS is 0)
In this case, PlatformPei makes MpInitLib enumerate APs up to the
default PcdCpuMaxLogicalProcessorNumber value (64) minus 1, or until
the default PcdCpuApInitTimeOutInMicroSeconds (50,000) elapses.
(Whichever is reached first.)
Time-limited AP enumeration has never been reliable on QEMU/KVM, which
is why commit 45a70db3c3 started handling case (2) below, in OVMF.
(2) QEMU reports the boot CPU count (FW_CFG_NB_CPUS is nonzero)
In this case, PlatformPei sets
- PcdCpuMaxLogicalProcessorNumber to the reported boot CPU count
(FW_CFG_NB_CPUS, which exports "PCMachineState.boot_cpus"),
- and PcdCpuApInitTimeOutInMicroSeconds to practically "infinity"
(MAX_UINT32, ~71 minutes).
That causes MpInitLib to enumerate exactly the present (boot) APs.
With CPU hotplug in mind, this method is not good enough, because,
using QEMU terminology, UefiCpuPkg expects
PcdCpuMaxLogicalProcessorNumber to provide the "possible CPUs" count
("MachineState.smp.max_cpus"), which includes both present and
not-present CPUs (with not-present CPUs being subject to hot-plugging).
FW_CFG_NB_CPUS does not include not-present CPUs.
Rewrite MaxCpuCountInitialization() for handling the following cases:
(1) The behavior of case (1) does not change. (No UefiCpuPkg PCDs are set
to values different from the defaults.)
(2) QEMU reports the boot CPU count ("PCMachineState.boot_cpus", via
FW_CFG_NB_CPUS), but not the possible CPUs count
("MachineState.smp.max_cpus").
In this case, the behavior remains unchanged.
The way MpInitLib is instructed to do the same differs however: we now
set the new PcdCpuBootLogicalProcessorNumber to the boot CPU count
(while continuing to set PcdCpuMaxLogicalProcessorNumber identically).
PcdCpuApInitTimeOutInMicroSeconds becomes irrelevant.
(3) QEMU reports both the boot CPU count ("PCMachineState.boot_cpus", via
FW_CFG_NB_CPUS), and the possible CPUs count
("MachineState.smp.max_cpus").
We tell UefiCpuPkg about the possible CPUs count through
PcdCpuMaxLogicalProcessorNumber. We also tell MpInitLib the boot CPU
count for precise and quick AP enumeration, via
PcdCpuBootLogicalProcessorNumber. PcdCpuApInitTimeOutInMicroSeconds is
irrelevant again.
This patch is a pre-requisite for enabling CPU hotplug with SMM_REQUIRE.
As a side effect, the patch also enables S3 to work with CPU hotplug at
once, *without* SMM_REQUIRE.
(Without the patch, S3 resume fails if a CPU is hot-plugged at OS
runtime prior to suspend: the FW_CFG_NB_CPUS increase seen during resume
causes PcdCpuMaxLogicalProcessorNumber to increase as well, which is not
permitted.
With the patch, PcdCpuMaxLogicalProcessorNumber stays the same, namely
"MachineState.smp.max_cpus". Therefore, the CPU structures allocated
during normal boot can accommodate the CPUs at S3 resume that have been
hotplugged prior to S3 suspend.)
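The end result, in code form: BootCpuCount is read from fw_cfg, while
MaxCpuCount stands in for the possible-CPUs value, however the platform
obtains it (a sketch for case (3) only):

  UINT16         BootCpuCount;
  UINT32         MaxCpuCount;
  RETURN_STATUS  PcdStatus;

  QemuFwCfgSelectItem (QemuFwCfgItemSmpCpuCount);
  BootCpuCount = QemuFwCfgRead16 ();   // FW_CFG_NB_CPUS ("boot_cpus")

  // MaxCpuCount = ...;                // "max_cpus", source not shown

  //
  // Size the CPU structures for all possible CPUs, and enumerate
  // exactly the boot CPUs -- no AP enumeration timeout involved.
  //
  PcdStatus = PcdSet32S (PcdCpuMaxLogicalProcessorNumber, MaxCpuCount);
  ASSERT_RETURN_ERROR (PcdStatus);
  PcdStatus = PcdSet32S (PcdCpuBootLogicalProcessorNumber, BootCpuCount);
  ASSERT_RETURN_ERROR (PcdStatus);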
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Julien Grall <julien.grall@arm.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1515
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Message-Id: <20191022221554.14963-4-lersek@redhat.com>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
(This is a replacement for commit 39b9a5ffe6 ("OvmfPkg/PlatformPei: fix
MTRR for low-RAM sizes that have many bits clear", 2019-05-16).)
Reintroduce the same logic as seen in commit 39b9a5ffe6 for the pc
(i440fx) board type.
For q35, the same approach doesn't work any longer, given that (a) we'd
like to keep the PCIEXBAR in the platform DSC a fixed-at-build PCD, and
(b) QEMU expects the PCIEXBAR to reside at a lower address than the 32-bit
PCI MMIO aperture.
Therefore, introduce a helper function for determining the 32-bit
"uncacheable" (MMIO) area base address:
- On q35, this function behaves statically. Furthermore, the MTRR setup
exploits that the range [0xB000_0000, 0xFFFF_FFFF] can be marked UC with
just two variable MTRRs (one at 0xB000_0000 (size 256MB), another at
0xC000_0000 (size 1GB)).
- On pc (i440fx), the function behaves dynamically, implementing the same
logic as commit 39b9a5ffe6 did. The PciBase value is adjusted to the
value calculated, similarly to commit 39b9a5ffe6. A further
simplification is that truncating the UC32 area size to a whole power
of two automatically guarantees a >=2GB base address.
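A sketch of the helper's two branches; GetPowerOfTwo32() from BaseLib
does the power-of-two truncation, and the q35 constant mirrors the
fixed PCIEXBAR address mentioned above:

  STATIC UINT32  mQemuUc32Base;

  STATIC
  VOID
  QemuUc32BaseInitialization (
    IN BOOLEAN  IsQ35,
    IN UINT32   LowerMemorySize   // RAM size below 4GB
    )
  {
    UINT32  Uc32Size;

    if (IsQ35) {
      //
      // Static: PCIEXBAR at 0xB000_0000, 32-bit PCI MMIO above it; the
      // whole [0xB000_0000, 4GB) range is marked UC with two variable
      // MTRRs (256MB at 0xB000_0000, 1GB at 0xC000_0000).
      //
      mQemuUc32Base = 0xB0000000;
      return;
    }

    //
    // pc (i440fx): truncate the UC32 area size to a whole power of two
    // so that a single variable MTRR covers it; the resulting base is
    // >= 2GB and >= LowerMemorySize, and PciBase is adjusted to it.
    //
    Uc32Size      = GetPowerOfTwo32 ((UINT32)(SIZE_4GB - LowerMemorySize));
    mQemuUc32Base = (UINT32)(SIZE_4GB - Uc32Size);
  }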
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1859
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This reverts commit 60e95bf509.
The original fix for <https://bugzilla.tianocore.org/show_bug.cgi?id=1814>
triggered a bug / incorrect assumption in QEMU.
QEMU assumes that the PCIEXBAR is below the 32-bit PCI window, not above
it. When the firmware doesn't satisfy this assumption, QEMU generates an
\_SB.PCI0._CRS object in the ACPI DSDT that does not reflect the
firmware's 32-bit MMIO BAR assignments. This causes OSes to re-assign
32-bit MMIO BARs.
Working around the problem in the firmware looks less problematic than
fixing QEMU. Revert the original changes first, before implementing an
alternative fix.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1859
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
This reverts commit 9a2e8d7c65.
The original fix for <https://bugzilla.tianocore.org/show_bug.cgi?id=1814>
triggered a bug / incorrect assumption in QEMU.
QEMU assumes that the PCIEXBAR is below the 32-bit PCI window, not above
it. When the firmware doesn't satisfy this assumption, QEMU generates an
\_SB.PCI0._CRS object in the ACPI DSDT that does not reflect the
firmware's 32-bit MMIO BAR assignments. This causes OSes to re-assign
32-bit MMIO BARs.
Working around the problem in the firmware looks less problematic than
fixing QEMU. Revert the original changes first, before implementing an
alternative fix.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1859
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This reverts commit 75136b2954.
The original fix for <https://bugzilla.tianocore.org/show_bug.cgi?id=1814>
triggered a bug / incorrect assumption in QEMU.
QEMU assumes that the PCIEXBAR is below the 32-bit PCI window, not above
it. When the firmware doesn't satisfy this assumption, QEMU generates an
\_SB.PCI0._CRS object in the ACPI DSDT that does not reflect the
firmware's 32-bit MMIO BAR assignments. This causes OSes to re-assign
32-bit MMIO BARs.
Working around the problem in the firmware looks less problematic than
fixing QEMU. Revert the original changes first, before implementing an
alternative fix.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=1859
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>