BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The SVSM specification relies on a specific register calling convention to
hold the parameters that are associated with the SVSM request. The SVSM is
invoked by requesting the hypervisor to run the VMPL0 VMSA of the guest
using the GHCB MSR Protocol or a GHCB NAE event.
Create a new version of the VMGEXIT invocation that adheres to this
calling convention and loads the SVSM function arguments into the
proper registers before executing the VMGEXIT instruction. On return,
perform the
atomic exchange on the SVSM call pending value as specified in the SVSM
specification.
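A minimal sketch of the resulting call flow (SVSM_CAA, SvsmVmgExitSketch
and AsmVmgExitSvsmSketch are illustrative names and layouts, not the
actual patch):

  // Illustrative calling-area layout; only the pending byte is shown.
  typedef struct {
    UINT8  SvsmCallPending;    // cleared by the SVSM once it handles the call
  } SVSM_CAA;

  VOID
  AsmVmgExitSvsmSketch (
    IN UINT64  Rax             // loads the argument registers, then VMGEXIT
    );

  UINT8
  SvsmVmgExitSketch (
    IN OUT SVSM_CAA  *Caa,
    IN     UINT64    Rax       // SVSM protocol/function id; other args elided
    )
  {
    Caa->SvsmCallPending = 1;
    AsmVmgExitSvsmSketch (Rax);
    // Atomically read-and-clear the pending byte; a nonzero result means
    // the SVSM did not process the call and the request must be retried.
    return __atomic_exchange_n (&Caa->SvsmCallPending, 0, __ATOMIC_SEQ_CST);
  }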
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
In the scenario of
HandOffToDxeCore()->SwitchStack(DxeCoreEntryPoint)->
InternalSwitchStack()->LongJump(), the variable HobList.Raw
is passed (from *Context1 into register a0) to
DxeMain() as the *HobStart parameter.
However, LongJump() overwrites register a0 with a1 (-1) since
commit ea628f28e5 ("RISCV: Fix InternalLongJump to return correct
value"), which causes a hang.
Replace the LongJump() call with a new InternalSwitchStackAsm() that
moves the address held in register s0 into register a0, fixing the
issue (mirroring the solution in
MdePkg/Library/BaseLib/AArch64/SwitchStack.S).
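A sketch of the intended hand-off (field usage assumed from the RISC-V
jump buffer; treat it as illustrative rather than the literal patch):

  VOID
  EFIAPI
  InternalSwitchStack (
    IN SWITCH_STACK_ENTRY  EntryPoint,
    IN VOID                *Context1  OPTIONAL,
    IN VOID                *Context2  OPTIONAL,
    IN VOID                *NewStack,
    IN VA_LIST             Marker
    )
  {
    BASE_LIBRARY_JUMP_BUFFER  JumpBuffer;

    JumpBuffer.RA = (UINTN)EntryPoint;  // jump target
    JumpBuffer.SP = (UINTN)NewStack;    // new stack pointer
    JumpBuffer.S0 = (UINTN)Context1;    // moved into a0 by the asm stub
    JumpBuffer.S1 = (UINTN)Context2;    // moved into a1 by the asm stub

    InternalSwitchStackAsm (&JumpBuffer);
  }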
Signed-off-by: Yang Wang <wangyang@bosc.ac.cn>
Cc: Bamvor Jian ZHANG <zhangjian@bosc.ac.cn>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Ran Wang <wangran@bosc.ac.cn>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Implement Cache Management Operations (CMO) defined by
RISC-V spec https://github.com/riscv/riscv-CMOs.
Notes:
1. CMO only supports block-based operations, meaning cache
flush/invd/clean operations are not available for the entire
cache; in that case we fall back on the fence.i instruction.
2. Operations are implemented using raw opcodes to make them
compiler-independent; binutils 2.39+ supports the CMO instructions.
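For illustration, one block operation rendered as a raw opcode (the
encoding is assumed to be cbo.flush with the base address in a0; the
function name is hypothetical):

  // Sketch: flush the cache block containing Address.
  VOID
  RiscVCpuCacheFlushSketch (
    IN UINTN  Address
    )
  {
    // ".word 0x0025200f" is believed to encode "cbo.flush (a0)"
    // (MISC-MEM major opcode, funct3=010, imm=2), so pin Address to a0.
    register UINTN  A0 __asm__ ("a0") = Address;

    __asm__ __volatile__ (".word 0x0025200f" : : "r" (A0) : "memory");
  }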
Test:
1. Ensured the correct instructions are reflected in the generated asm.
2. QEMU implements basic support for CMO operations, in that it allows
the instructions without raising exceptions. Verified that it works
properly in that sense.
3. SG2042Pkg implements CMO-like instructions. It was verified that
CpuFlushCpuDataCache works fine, which more or less confirms that
the framework is sound.
4. TODO: once silicon with the exact instructions is available, we
will verify further.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Sunil V L <sunilvl@...>
Reviewed-by: Jingyu Li <jingyu.li01@...>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4609
The current CalculateCrc16Ansi implementation does the following:
1) Invert the passed checksum
2) Calculate the new checksum by going through data and using the
lookup table
3) Invert it back again
This mirrored my design for CalculateCrc32c, where 0 is
passed as the initial checksum and the result is inverted at the end.
However, CRC16 does not invert the checksum on input or output,
so this is incorrect.
Fix the problem by not inverting the input or the output checksum.
Callers should now pass CRC16ANSI_INIT as the initial value instead of
"0". This is a breaking change.
This problem was found out-of-list when older ext4 filesystems
(that use crc16 checksums) failed to mount with "corruption".
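With the fix, callers start a fresh calculation like this (data values
are arbitrary):

  #include <Library/BaseLib.h>

  VOID
  ExampleCrc16Usage (
    VOID
    )
  {
    UINT8   Data[] = { 0x31, 0x32, 0x33 };
    UINT16  Crc;

    // Previously 0 was passed here; CRC16ANSI_INIT (0xFFFF) is now the
    // correct initial seed.
    Crc = CalculateCrc16Ansi (Data, sizeof (Data), CRC16ANSI_INIT);
  }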
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Pedro Falcato <pedro.falcato@gmail.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
The ARM implementation of InternalLongJump always returned Value
unmodified - but it is not supposed to ever return 0. Add a check to
prevent that, returning 1 if Value is 0, as AArch64 already does.
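In C terms, the added guard amounts to:

  // 0 is reserved for the direct return from SetJump(), so LongJump()
  // must never deliver it.
  return (Value != 0) ? Value : 1;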
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
Both in SetJump and in InternalLongJump, 32-bit w register views were
used for the UINTN return value. In SetJump, this did not cause errors;
it was only counterintuitive. But in InternalLongJump, it meant the top
32 bits of Value were stripped off.
Change all of these to use the 64-bit x register views.
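The effect of the bug, expressed in C for illustration:

  UINT64  Value = 0x100000001ULL;
  UINT32  W     = (UINT32)Value;  // the w-register view keeps only the
                                  // low 32 bits: W == 1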
Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reanimated-by: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Sami Mujawar <sami.mujawar@arm.com>
Reviewed-by: Andrei Warkentin <andrei.warkentin@intel.com>
Implement SpeculationBarrier using fence instructions, which provide
finer-grained memory ordering.
Perform the data barrier on RISC-V with: fence rw,rw
Perform the instruction barrier on RISC-V with: fence.i; fence r,r
More detail is in Appendix A: RVWMO Explanatory Material in
https://github.com/riscv/riscv-isa-manual
This API was first introduced for IA32 and X64 in commits
d9f1cac51b and e83d841fdc, and for ARM and AArch64 in commit
c0959b4426.
This commit adds the RiscV64 implementation, which will be used by the
variable service under Variable/RuntimeDxe
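One possible rendering of the barrier pair in GCC inline assembly (a
sketch; the actual implementation may live in an assembly source file):

  VOID
  EFIAPI
  SpeculationBarrierSketch (
    VOID
    )
  {
    // Data barrier: order prior loads/stores before subsequent ones.
    __asm__ __volatile__ ("fence rw,rw" : : : "memory");
    // Instruction barrier: synchronize the instruction stream, then
    // order subsequent loads.
    __asm__ __volatile__ ("fence.i" : : : "memory");
    __asm__ __volatile__ ("fence r,r" : : : "memory");
  }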
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Evan Chai <evan.chai@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Tuan Phan <tphan@ventanamicro.com>
Signed-off-by: Yong Li <yong.li@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
Update the BaseLib host-based unit test INF file to only list
VALID_ARCHITECTURES of IA32 and X64, aligning with all other
host-based unit test INF files. The UnitTestFrameworkPkg only
provides support for building host-based unit tests as OS
applications for IA32 and X64.
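For reference, the conventional INF header comment then reads
(abridged):

  #
  # The following information is for reference only and not required by
  # the build tools.
  #
  #  VALID_ARCHITECTURES           = IA32 X64
  #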
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Oliver Smith-Denny <osde@linux.microsoft.com>
Fixes CodeQL alerts for CWE-457:
https://cwe.mitre.org/data/definitions/457.html
Note that this change affects the actual return value of the
following functions. The functions' documentation states that
MAX_UINTN (or MAX_UINT64 for the 64-bit variants) is returned if an
integer overflow occurs, yet they were implemented to actually return
an undefined value from the stack.
This change makes the functions follow their descriptions (see the
sketch after the function list below). However, this is technically
different from what callers may have previously expected.
MdePkg/Library/BaseLib/String.c:
- StrDecimalToUintn()
- StrDecimalToUint64()
- StrHexToUintn()
- StrHexToUint64()
- AsciiStrDecimalToUintn()
- AsciiStrDecimalToUint64()
- AsciiStrHexToUintn()
- AsciiStrHexToUint64()
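A minimal sketch of the corrected pattern, simplified from the decimal
case (not the literal patch):

  UINTN
  StrDecimalToUintnSketch (
    IN CONST CHAR16  *String
    )
  {
    UINTN  Result;

    Result = 0;   // defined value even if no digits are consumed
    while ((*String >= L'0') && (*String <= L'9')) {
      if (Result > ((MAX_UINTN - (UINTN)(*String - L'0')) / 10)) {
        return MAX_UINTN;   // the documented overflow result
      }
      Result = Result * 10 + (*String - L'0');
      String++;
    }
    return Result;
  }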
Cc: Erich McMillan <emcmillan@microsoft.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Michael Kubacki <mikuback@linux.microsoft.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Co-authored-by: Erich McMillan <emcmillan@microsoft.com>
Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
Add the BTI instructions and the associated note to make the AArch64 asm
objects compatible with BTI enforcement.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
Currently, the AArch64 implementation of LongJump() avoids using the RET
instruction to perform the jump, even though the target address is held
in the link register X30, as the nature of a long jump implies that the
ordinary return address prediction machinery will not be able to make a
correct prediction.
However, LongJump() is rarely used, and the return stack will be out of
sync in any case, so this optimization has little value in practice.
Given that indirect branches other than function returns require a BTI
landing pad at the branch target (here, the return site of the
corresponding SetJump() call), the optimization is also incompatible
with BTI. So let's just use RET instead.
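For illustration, the tail of LongJump() changes roughly as follows
(AArch64 assembly, trimmed to the relevant instruction):

  // before:
  //   br   x30   // indirect branch: the target needs a BTI landing pad
  // after:
  ret             // function return: exempt from BTI checks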
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Oliver Smith-Denny <osd@smith-denny.com>
__FUNCTION__ is a pre-standard extension that gcc and Visual C++ among
others support, while __func__ was standardized in C99.
Since it's more standard, replace __FUNCTION__ with __func__ throughout
MdePkg.
Visual Studio versions before VS 2015 don't support __func__ and so
will fail to compile. A workaround is to define __func__ as
__FUNCTION__:
#define __func__ __FUNCTION__
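Typical usage after the switch (__func__ expands to a CHAR8 string, so
it is printed with %a):

  DEBUG ((DEBUG_INFO, "%a: entry\n", __func__));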
Signed-off-by: Rebecca Cran <rebecca@quicinc.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Sunil V L <sunilvl@ventanamicro.com>
RVCT is obsolete and no longer used.
Remove support for it.
Signed-off-by: Rebecca Cran <quic_rcran@quicinc.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3325
1. AsmReadMsr64() in X64/GccInlinePriv.c
AsmReadMsr64() can return an uninitialized value if FilterBeforeMsrRead
returns FALSE. This causes a build error with the CLANG toolchain.
2. AsmWriteMsr64() in X64/GccInlinePriv.c
If FilterBeforeMsrWrite changes Value and returns TRUE,
the original Value, not the changed Value, is written to the MSR.
This behavior differs from that of AsmWriteMsr64() in
X64/WriteMsr64.c for the MSFT toolchain.
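A sketch of the corrected flow (simplified; FilterBeforeMsrRead/Write
and FilterAfterMsrRead/Write come from RegisterFilterLib):

  UINT64
  EFIAPI
  AsmReadMsr64Sketch (
    IN UINT32  Index
    )
  {
    UINT32  LowData;
    UINT32  HighData;
    UINT64  Value;

    Value = 0;   // defined result even if the filter suppresses the read
    if (FilterBeforeMsrRead (Index, &Value)) {
      __asm__ __volatile__ ("rdmsr" : "=a" (LowData), "=d" (HighData) : "c" (Index));
      Value = ((UINT64)HighData << 32) | LowData;
    }
    FilterAfterMsrRead (Index, &Value);
    return Value;
  }

  VOID
  EFIAPI
  AsmWriteMsr64Sketch (
    IN UINT32  Index,
    IN UINT64  Value
    )
  {
    if (FilterBeforeMsrWrite (Index, &Value)) {
      // Write the possibly filter-modified Value, not the original.
      __asm__ __volatile__ ("wrmsr" : : "c" (Index),
                            "a" ((UINT32)Value), "d" ((UINT32)(Value >> 32)));
    }
    FilterAfterMsrWrite (Index, &Value);
  }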
Signed-off-by: Takuto Naito <naitaku@gmail.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
CpuPause() might allow the CPU to go into a lower power state
while we spin.
On X86, CpuPause() executes a PAUSE instruction which the Intel
and AMD specs describe as follows:
Intel:
"PAUSE: An additional function of the PAUSE instruction is to reduce
the power consumed by a processor while executing a spin loop. A
processor can execute a spin-wait loop extremely quickly, causing the
processor to consume a lot of power while it waits for the resource it
is spinning on to become available. Inserting a pause instruction in a
spin-wait loop greatly reduces the processor's power consumption."
AMD:
"PAUSE: Improves the performance of spin loops, by providing a hint to
the processor that the current code is in a spin loop. The processor
may use this to optimize power consumption while in the spin loop.
Architecturally, this instruction behaves like a NOP instruction."
On RISC-V and ARM64, CpuPause() executes a NOP, which is no worse than
the tight loop we have.
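The intended usage pattern, as a minimal sketch (SpinUntilSet and Flag
are hypothetical):

  #include <Library/BaseLib.h>

  VOID
  SpinUntilSet (
    IN volatile BOOLEAN  *Flag
    )
  {
    while (!*Flag) {
      CpuPause ();   // PAUSE on x86; NOP on RISC-V and ARM64
    }
  }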
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>