Following the Hardware Info library, create the DxeHardwareInfoLib
which implements the complete API, capable of parsing heterogeneous
hardware information. The list-like API gives callers a flexible and
common pattern to retrieve the data. Moreover, the initial source is
a BLOB, which generalizes the host-to-guest transmission mechanism.
The Hardware Info library's main objective is to provide a way to
describe non-discoverable hardware so that the host can share the
available resources with the guest on OVMF platforms. This change
embodies the main idea behind the library by providing an API that
parses a BLOB into a linked list, so that hardware data can be
retrieved from any source. Additionally, list-like APIs are provided
so that the hardware info list can be traversed conveniently.
Similarly, results can be filtered by specific hardware types, while
heterogeneous elements can still be added to the list, increasing
flexibility. This way, a single source, for example a fw-cfg file,
can describe several instances of multiple types of hardware.
This part of the Hardware Info library makes use of dynamic memory
and is intended for stages in which memory services are available.
A motivating example is PciHostBridgeLib. This library, part of the
PCI driver stack, populates the list of PCI root bridges during the
DXE stage so that later steps can discover the resources under them.
The Hardware Info library can be used to obtain the detailed
description of the available host bridges, for instance in the form
of a fw-cfg file, and parse that information into a dynamic list that
allows, first, verifying the consistency of the data and, second,
discovering the resources available for each root bridge.
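As an illustrative sketch, a DXE consumer might use the library as
below; the function and type names are hypothetical and only mirror
the pattern described above, not the exact API:

  LIST_ENTRY  HwInfoList;

  //
  // Parse the BLOB (e.g. read from a fw-cfg file) into a dynamic
  // linked list, then walk only the PCI host bridge entries.
  //
  Status = HardwareInfoParseBlob (Blob, BlobSize, &HwInfoList);
  ASSERT_EFI_ERROR (Status);

  for (HwInfo = HardwareInfoGetFirstByType (&HwInfoList, HardwareInfoTypeHostBridge);
       HwInfo != NULL;
       HwInfo = HardwareInfoGetNextByType (&HwInfoList, HwInfo, HardwareInfoTypeHostBridge)) {
    //
    // Verify consistency, then discover this root bridge's resources.
    //
    ProcessHostBridge (HwInfo);
  }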
Cc: Alexander Graf <graf@amazon.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Nicolas Ojeda Leon <ncoleon@amazon.com>
Define the HardwareInfoLib API and create the PeiHardwareInfoLib
which implements it, specifically for PEI usage, supporting
only static accesses to parse data directly from a fw-cfg file.
All list-like APIs are implemented as unsupported; only a
fw-cfg wrapper to read hardware info elements is provided.
The Hardware Info library is intended to describe non-discoverable
hardware information and share it from the host to the guest on OVMF
platforms. The QEMU fw-cfg extension for this library provides a first
variation that parses hardware info by reading it directly from a
fw-cfg file. This library offers a wrapper function around the plain
QemuFwCfgReadBytes which, specifically, parses header-data pairs out
of the binary values in the file. For this purpose, the approach is
incremental, reading the file block by block and outputting the values
only for a specific known hardware type (e.g. PCI host bridges). One
element is returned in each call until the end of the file is reached.
Considering fw-cfg as the first means to transport hardware info from
the host to the guest, this wrapping library offers the possibility
to statically, and in steps, read a specific type of hardware info
elements out of the file. This method reads one hardware element of a
specific type at a time, without the need to pre-allocate memory and
read the whole file, or to dynamically allocate memory for each new
element found.
As a usage example, the static approach followed by this library
enables early UEFI stages to use and read hardware information
supplied by the host. For instance, early in the PEI stage, hardware
information can be parsed out of a fw-cfg file without relying on
memory services, which may not yet be available, and avoiding dynamic
memory allocations.
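The usage pattern might look like the following sketch; the wrapper
name, payload type and fw-cfg file name are hypothetical, illustrating
the static, step-wise access rather than the exact API:

  FIRMWARE_CONFIG_ITEM  FwCfgItem;
  UINTN                 FwCfgSize;
  HOST_BRIDGE_INFO      HostBridge;    // hypothetical payload type
  UINTN                 ReadIndex;

  Status = QemuFwCfgFindFile ("etc/hardware-info", &FwCfgItem, &FwCfgSize);
  if (EFI_ERROR (Status)) {
    return Status;
  }
  QemuFwCfgSelectItem (FwCfgItem);

  //
  // Read one element of the requested type per call, on the stack,
  // with no dynamic memory allocation.
  //
  ReadIndex = 0;
  while (ReadIndex < FwCfgSize) {
    Status = HardwareInfoReadNextByType (   // hypothetical wrapper
               HardwareInfoTypeHostBridge,
               &HostBridge,
               sizeof (HostBridge),
               &ReadIndex
               );
    if (EFI_ERROR (Status)) {
      break;
    }
    // ... consume HostBridge ...
  }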
Cc: Alexander Graf <graf@amazon.de>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Nicolas Ojeda Leon <ncoleon@amazon.com>
We can have multiple [LibraryClasses] sections, so we can place
all TPM-related library configuration in a single include file.
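For instance (the include path is illustrative; the actual file name
may differ):

  [LibraryClasses]
  !include OvmfPkg/Include/Dsc/OvmfTpmLibs.dsc.inc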
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
The KVM FSB clock is 1 GHz, not 100 MHz, so timings are off by a
factor of 10. Fix all affected build configurations. Not changed:
Microvm and Cloudhw (they already have the correct value), and Xen
(it has no fixed frequency; the PCD is configured at runtime by
platform initialization code).
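The change boils down to updating the PCD in the affected .dsc files,
roughly:

  [PcdsFixedAtBuild]
    # 1 GHz instead of the previous 100000000 (100 MHz)
    gEfiMdePkgTokenSpaceGuid.PcdFSBClock|1000000000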
Fixes: c37cbc030d ("OvmfPkg: Switch timer in build time for OvmfPkg")
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
ConSplitterDxe will then pick the highest available resolution,
thereby making better use of the available display space.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3711
Discussion in https://bugzilla.tianocore.org/show_bug.cgi?id=1496 shows
that 8254TimerDxe was not written for OVMF. It was moved over from
PcAtChipsetPkg to OvmfPkg in 2019, probably because OVMF was the only
user left.
Most likely the reason OVMF used 8254TimerDxe initially was that it
could just use the existing driver in PcAtChipsetPkg. And it simply
has never been changed since.
CSM support was moved over in 2019 too (CSM support depends on the
8254/8259 drivers), so 8254TimerDxe will be used when CSM_ENABLE=TRUE.
There are four .dsc files which include the 8254 timer:
- OvmfPkg/AmdSev/AmdSevX64.dsc
- OvmfPkg/OvmfPkgIa32.dsc
- OvmfPkg/OvmfPkgIa32X64.dsc
- OvmfPkg/OvmfPkgX64.dsc
The three OvmfPkg* configs use 8254TimerDxe with CSM_ENABLE=TRUE
and LapicTimerDxe otherwise.
For the AmdSev config it doesn't make sense to support a CSM, so use
the LAPIC timer unconditionally.
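In the three OvmfPkg* .dsc files the selection then looks roughly
like this:

  !if $(CSM_ENABLE) == TRUE
    OvmfPkg/8254TimerDxe/8254Timer.inf
  !else
    OvmfPkg/LocalApicTimerDxe/LocalApicTimerDxe.inf
  !endif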
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Suggested-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
The IOMMU protocol driver provides capabilities to set a DMA access
attribute and methods to allocate, free, map and unmap the DMA memory
for the PCI Bus devices.
The current IoMmuDxe driver supports DMA operations inside an SEV
guest. To support DMA operations in a TDX guest,
CC_GUEST_IS_XXX (PcdConfidentialComputingGuestAttr) is used to
determine whether it is an SEV guest or a TDX guest.
For security reasons all DMA operations inside the SEV/TDX guest must
be performed on shared pages. The IOMMU protocol driver for the
SEV/TDX guest uses a bounce buffer to map the guest DMA buffer to
shared pages in order to provide support for DMA operations inside
the SEV/TDX guest.
Whether the SEV- or TDX-specific function is called to set/clear the
EncMask/SharedBit is determined by CC_GUEST_IS_XXX
(PcdConfidentialComputingGuestAttr).
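The dispatch pattern looks roughly like the sketch below; the
SetBufferShared* helpers are illustrative placeholders for the
SEV/TDX specific functions:

  UINT64  CcAttr;

  CcAttr = PcdGet64 (PcdConfidentialComputingGuestAttr);
  if (CC_GUEST_IS_SEV (CcAttr)) {
    //
    // Clear the SEV encryption mask so the bounce buffer pages
    // become shared with the host.
    //
    Status = SetBufferSharedSev (BaseAddress, NumPages);  // illustrative
  } else if (CC_GUEST_IS_TDX (CcAttr)) {
    //
    // Set the TDX shared bit to the same effect.
    //
    Status = SetBufferSharedTdx (BaseAddress, NumPages);  // illustrative
  }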
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
This commit contains the following major changes:
1. SecEntry.nasm
In TDX, the BSP and APs go to the same entry point in SecEntry.nasm.
The BSP initializes the temporary stack and then jumps to SecMain,
just as legacy OVMF does.
APs spin in a modified mailbox loop using the initial mailbox
structure. Its structure definition is in
OvmfPkg/Include/IndustryStandard/IntelTdx.h.
Each AP waits for a command, checks whether the command is addressed
to it, and if so executes the command.
2. Sec/SecMain.c
When the host VMM creates the TD guest, the system memory information
is stored in the TdHob, which is a memory region described in the Tdx
metadata. The system memory regions in the TdHob must be accepted
before they can be accessed. So the major task of this patch is to
process the TdHobList to accept the memory (see the sketch below).
After that, TDVF follows the standard OVMF flow and jumps to the PEI
phase.
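A simplified sketch of the TdHob walk; AcceptMemory is a placeholder
for the actual TDCALL-based page-accept logic, and the resource-type
checks are abridged:

  EFI_PEI_HOB_POINTERS  Hob;

  Hob.Raw = (UINT8 *)(UINTN)TdHobList;
  while (!END_OF_HOB_LIST (Hob)) {
    if ((Hob.Header->HobType == EFI_HOB_TYPE_RESOURCE_DESCRIPTOR) &&
        (Hob.ResourceDescriptor->ResourceType == EFI_RESOURCE_SYSTEM_MEMORY)) {
      //
      // Accept this system memory region before it is accessed.
      //
      AcceptMemory (
        Hob.ResourceDescriptor->PhysicalStart,
        Hob.ResourceDescriptor->ResourceLength
        );
    }
    Hob.Raw = GET_NEXT_HOB (Hob);
  }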
PcdUse1GPageTable is set to FALSE by default in OvmfPkgX64.dsc, which
leaves Intel TDX no way to use 1G page tables. To support 1G page
tables this PCD is set to TRUE in OvmfPkgX64.dsc.
TDX_GUEST_SUPPORTED is defined in OvmfPkgX64.dsc. This macro wraps the
Tdx specific code.
TDX only works on X64, so the code is valid only for the X64 arch.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3863
There are 3 variants of PlatformPei in OvmfPkg:
- OvmfPkg/PlatformPei
- OvmfPkg/XenPlatformPei
- OvmfPkg/Bhyve/PlatformPei/PlatformPei.inf
These PlatformPeis can share a lot of common code, such as
CMOS-, HOB-, memory- and platform-related functions. This commit
(and the several patches that follow) creates a PlatformInitLib
which wraps the common code called in the above PlatformPeis.
In this initial version of PlatformInitLib, the following CMOS-related
functions are introduced:
- PlatformCmosRead8
- PlatformCmosWrite8
- PlatformDebugDumpCmos
They correspond to the functions in OvmfPkg/PlatformPei:
- CmosRead8
- CmosWrite8
- DebugDumpCmos
Considering that this PlatformInitLib will be used in the SEC phase,
global variables and dynamic PCDs are avoided. We use PlatformInfoHob
to exchange information between functions.
EFI_HOB_PLATFORM_INFO is the data struct which contains the platform
information, such as HostBridgeDevId, BootMode, S3Supported,
SmmSmramRequire, etc.
After PlatformInitLib is created, OvmfPkg/PlatformPei is refactored
to use this library.
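For reference, a minimal sketch of the classic CMOS access pattern
these functions wrap (the actual implementation may differ in detail):

  UINT8
  EFIAPI
  PlatformCmosRead8 (
    IN UINT8  Index
    )
  {
    //
    // Select the CMOS register via port 0x70, read it via port 0x71.
    //
    IoWrite8 (0x70, Index);
    return IoRead8 (0x71);
  }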
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
The SNP patch series updated the OvmfPkgX64 build but forgot the AmdSev
variant, resulting in a broken OvmfSevMetadata table.
Fixes: cca9cd3dd6 ("OvmfPkg: reserve CPUID page")
Fixes: 707c71a01b ("OvmfPkg: reserve SNP secrets page")
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Jiewen Yao <jiewen.yao@intel.com>
It's a UINT8 (enum) PCD telling where the PcdVideoHorizontalResolution
and PcdVideoVerticalResolution values come from. It can be:
0 (unset, i.e. the default from the dsc file), or
1 (from PlatformConfig), or
2 (set by a Video Driver).
It will be used by video drivers to avoid overriding PlatformConfig
values, or overriding each other's values in case multiple display
devices are present.
The underlying problem this tries to solve is that the GOP protocol has
no way to indicate the preferred video mode. On physical hardware this
isn't much of a problem, because using the highest resolution available
works just fine, as that is typically the native display resolution.
But in a virtual machine you don't want to come up with a huge 4k
window by default just because the virtual vga is able to handle that.
Cutting down the video mode list isn't a great solution either, as
that would also remove the modes from the platform configuration, so
the user wouldn't be able to pick a resolution higher than the default
any more.
So with this patch drivers can use PcdVideoHorizontalResolution and
PcdVideoVerticalResolution to indicate what the preferred display
resolution is, without overwriting the user preferences from
PlatformConfig if present.
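A driver-side sketch of the intended check follows; the name of the
source PCD is illustrative, since the text above doesn't spell it out:

  //
  // Override the resolution only if nothing with higher priority
  // (PlatformConfig) has set it yet, then mark it as driver-set (2).
  //
  if (PcdGet8 (PcdVideoResolutionSource) == 0) {
    PcdSet32S (PcdVideoHorizontalResolution, PreferredHorizontal);
    PcdSet32S (PcdVideoVerticalResolution, PreferredVertical);
    PcdSet8S (PcdVideoResolutionSource, 2);
  }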
A possible alternative approach would be to extend the GOP protocol,
but I'm not sure that is a good plan, given this is mostly a problem
for virtual machines, and using PCDs allows us to keep this local to
OvmfPkg.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
The OVMF default display resolution is 800x600. This is rather small
for modern guests. QEMU used 1024x768 as default for a long time and
recently switched to 1280x800 [1] for the upcoming 7.0 release.
This patch brings OVMF in sync with the recent QEMU update and
likewise switches the default to 1280x800.
[1] de72c4b7cd
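The change amounts to updating the resolution PCDs in the .dsc files,
roughly:

  gEfiMdeModulePkgTokenSpaceGuid.PcdVideoHorizontalResolution|1280
  gEfiMdeModulePkgTokenSpaceGuid.PcdVideoVerticalResolution|800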
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
With this in place the TPM configuration is not duplicated for each of
our four OVMF config variants (ia32, ia32x64, x64, amdsev), and it is
easier to keep them all in sync when updating the TPM configuration.
No functional change.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
In the FvbInitialize function, PcdFlashNvStorageVariableBase64,
PcdFlashNvStorageFtwWorkingBase and PcdFlashNvStorageFtwSpareBase
cannot exceed 0x100000000, due to truncation and variable type
limitations. As a result, NV variables cannot be saved to memory
above 4G.
Modify as follows:
1. Remove the forced type conversion to UINT32.
2. Use UINT64 type variables.
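An illustrative before/after (variable names are hypothetical; the
actual code differs in detail):

  //
  // Before: the UINT32 cast truncates addresses above 4G.
  //
  UINT32  NvStorageBase32 = (UINT32) PcdGet64 (PcdFlashNvStorageVariableBase64);

  //
  // After: keep the full 64-bit address.
  //
  UINT64  NvStorageBase = PcdGet64 (PcdFlashNvStorageVariableBase64);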
Signed-off-by: xianglai li <lixianglai@loongson.cn>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Don't make the package QEMU-centric, so that we can introduce
alternative support for other VMMs not using the fw_cfg mechanism.
This patch is purely about renaming existing files with no functional
change.
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3275
Commit 85b8eac59b added support to ensure
that MMIO is only performed against un-encrypted memory. If MMIO
is performed against encrypted memory, a #GP is raised.
The AmdSevDxe uses the functions provided by the MemEncryptSevLib to
clear the memory encryption mask from the page table. If the
MemEncryptSevLib is extended to include VmgExitLib then the dependency
chain will look like this:
OvmfPkg/AmdSevDxe/AmdSevDxe.inf
-----> MemEncryptSevLib class
-----> "OvmfPkg/BaseMemEncryptSevLib/DxeMemEncryptSevLib.inf" instance
-----> VmgExitLib class
-----> "OvmfPkg/VmgExitLib" instance
-----> LocalApicLib class
-----> "UefiCpuPkg/BaseXApicX2ApicLib/BaseXApicX2ApicLib.inf" instance
-----> TimerLib class
-----> "OvmfPkg/AcpiTimerLib/DxeAcpiTimerLib.inf" instance
-----> PciLib class
-----> "OvmfPkg/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf" instance
-----> PciExpressLib class
-----> "MdePkg/BasePciExpressLib/BasePciExpressLib.inf" instance
The LocalApicLib provides a constructor that gets called before the
AmdSevDxe can clear the memory encryption mask from the MMIO regions.
When running under the Q35 machine type, the call chain looks like this:
  AcpiTimerLibConstructor ()     [AcpiTimerLib]
    PciRead32 ()                 [DxePciLibI440FxQ35]
      PciExpressRead32 ()        [PciExpressLib]
The PciExpressRead32 () reads the MMIO region. The MMIO regions are
not yet mapped un-encrypted, so the check introduced in commit
85b8eac59b raises a #GP.
The AmdSevDxe driver does not require access to the extended PCI
config space. Accessing the normal PCI config space via IO ports
should be sufficient. Use a module-scope override to make AmdSevDxe
use the BasePciLib instead of BasePciExpressLib so that PciRead32 ()
uses the IO ports instead of the extended config space.
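In .dsc terms the module-scope override looks roughly like this (the
exact library instance path may differ):

  OvmfPkg/AmdSevDxe/AmdSevDxe.inf {
    <LibraryClasses>
      PciLib|MdePkg/Library/BasePciLibCf8/BasePciLibCf8.inf
  }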
Cc: Michael Roth <michael.roth@amd.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Jiewen Yao <Jiewen.yao@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3345
During PEI, the MMIO range for the TPM is marked as encrypted when
running as an SEV guest. While this isn't an issue for an SEV guest,
because of the way the nested page fault is handled, it does result in
an SEV-ES guest terminating because of a mitigation check in the #VC
handler that prevents MMIO to an encrypted address. For an SEV-ES
guest, this range must be marked as unencrypted.
Create a new x86 PEIM for TPM support that will map the TPM MMIO range as
unencrypted when SEV-ES is active. The gOvmfTpmMmioAccessiblePpiGuid PPI
will be unconditionally installed before exiting. The PEIM will exit with
the EFI_ABORTED status so that the PEIM does not stay resident. This new
PEIM will depend on the installation of the permanent PEI RAM, by
PlatformPei, so that in case page table splitting is required during the
clearing of the encryption bit, the new page table(s) will be allocated
from permanent PEI RAM.
Update all OVMF Ia32 and X64 build packages to include this new PEIM.
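A sketch of the new PEIM's entry point (helper and PPI descriptor
names are approximate, not the exact code):

  EFI_STATUS
  EFIAPI
  TpmMmioSevDecryptPeimEntryPoint (
    IN       EFI_PEI_FILE_HANDLE  FileHandle,
    IN CONST EFI_PEI_SERVICES     **PeiServices
    )
  {
    if (MemEncryptSevEsIsEnabled ()) {
      //
      // Clear the encryption bit on the TPM MMIO range; any page
      // table splitting draws from permanent PEI RAM (hence the
      // dependency on PlatformPei's memory installation).
      //
      DecryptTpmMmioRange ();                  // placeholder helper
    }
    //
    // Unconditionally signal that the TPM MMIO range is accessible.
    //
    PeiServicesInstallPpi (mTpmMmioAccessiblePpiList);
    //
    // Return EFI_ABORTED so the PEIM does not stay resident.
    //
    return EFI_ABORTED;
  }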
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <42794cec1f9d5bc24cbfb9dcdbe5e281ef259ef5.1619716333.git.thomas.lendacky@amd.com>
[lersek@redhat.com: refresh subject line]
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
In NOOPT and DEBUG builds, if "PcdMaximumLinkedListLength" is nonzero,
then several LIST_ENTRY *node* APIs in BaseLib compare the *full* list
length against the PCD.
This turns the time complexity of node-level APIs from constant to linear,
and that of full-list manipulations from linear to quadratic.
As an example, consider the EFI_SHELL_FILE_INFO list, which is a data
structure that's widely used in the UEFI shell. I randomly extracted 5000
files from "/usr/include" on my laptop, spanning 1095 subdirectories out
of 1538, and then ran "DIR -R" in the UEFI shell on this tree. These are
the wall-clock times:
             PcdMaximumLinkedListLength  PcdMaximumLinkedListLength
             =1,000,000                  =0
             --------------------------  ---------------------------
  FAT        4 min 31 s                  18 s
  virtio-fs  5 min 13 s                  1 min 33 s
Checking list lengths against an arbitrary maximum (default: 1,000,000)
seems useless even in NOOPT and DEBUG builds, while the cost is
significant; so set the PCD to 0.
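The .dsc change is simply (sketch):

  [PcdsFixedAtBuild]
    gEfiMdePkgTokenSpaceGuid.PcdMaximumLinkedListLength|0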
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Julien Grall <julien@xen.org>
Cc: Leif Lindholm <leif@nuviainc.com>
Cc: Peter Grehan <grehan@freebsd.org>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: Rebecca Cran <rebecca@bsdio.com>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=3152
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
Message-Id: <20210113085453.10168-10-lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=3108
In order to be able to issue messages or make interface calls that cause
another #VC (e.g. GetLocalApicBaseAddress () issues RDMSR), add support
for nested #VCs.
In order to support nested #VCs, GHCB backup pages are required. If a #VC
is received while currently processing a #VC, a backup of the current GHCB
content is made. This allows the #VC handler to continue processing the
new #VC. Upon completion of the new #VC, the GHCB is restored from the
backup page. The #VC recursion level is tracked in the per-vCPU variable
area.
Support is added to handle up to one nested #VC (or two #VCs total). If
a second nested #VC is encountered, an ASSERT will be issued and the vCPU
will enter CpuDeadLoop ().
For SEC, the GHCB backup pages are reserved in the OvmfPkgX64.fdf memory
layout, with two new fixed PCDs to provide the address and size of the
backup area.
For PEI/DXE, the GHCB backup pages are allocated as boot services pages
using the memory allocation library.
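A simplified sketch of the backup/restore flow in the #VC handler
(variable and field names are illustrative):

  VcLevel = ++PerCpuData->VcCount;     // recursion level, per vCPU
  ASSERT (VcLevel <= 2);               // at most one nested #VC

  if (VcLevel > 1) {
    //
    // Nested #VC: preserve the in-use GHCB before reusing it.
    //
    CopyMem (GhcbBackup, Ghcb, sizeof (GHCB));
  }

  // ... process the #VC event using Ghcb ...

  if (VcLevel > 1) {
    //
    // Restore the interrupted #VC's GHCB state on the way out.
    //
    CopyMem (Ghcb, GhcbBackup, sizeof (GHCB));
  }
  PerCpuData->VcCount--;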
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <ac2e8203fc41a351b43f60d68bdad6b57c4fb106.1610045305.git.thomas.lendacky@amd.com>
It is anticipated that this part of the code will work for both Intel
TDX and AMD SEV, so remove the SEV-specific naming and change to
ConfidentialComputing as a more architecture-neutral prefix. Apart
from the symbol rename, there are no code changes.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Message-Id: <20201216014146.2229-3-jejb@linux.ibm.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>