BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4654
The SVSM specification relies on a specific register calling convention to
hold the parameters that are associated with the SVSM request. The SVSM is
invoked by requesting the hypervisor to run the VMPL0 VMSA of the guest
using the GHCB MSR Protocol or a GHCB NAE event.
Create a new version of the VMGEXIT invocation that adheres to this
calling convention and loads the SVSM function arguments into the proper
registers before issuing the VMGEXIT instruction. On return, perform the
atomic exchange on the SVSM call pending value as specified in the SVSM
specification.
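A rough sketch of the intended shape, in C for illustration (the type,
field, and variable names below are hypothetical; the real helper is
implemented in assembly and follows the register assignments in the SVSM
specification):

  //
  // Hypothetical call data handed to the SVSM-aware VMGEXIT helper.
  // The input fields are loaded into RAX/RCX/RDX/R8 before VMGEXIT and
  // the register values are written back on return.
  //
  typedef struct {
    UINT8   *CallPending;   // points into the SVSM calling area
    UINT64  Rax;
    UINT64  Rcx;
    UINT64  Rdx;
    UINT64  R8;
  } SVSM_CALL_DATA_SKETCH;

  //
  // After the VMGEXIT returns, atomically swap 0 into the call-pending
  // byte and capture its previous value, as the SVSM specification
  // requires. __atomic_exchange_n() stands in for the XCHG instruction
  // used by the assembly implementation.
  //
  Pending = __atomic_exchange_n (CallData->CallPending, 0, __ATOMIC_SEQ_CST);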
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Implement Cache Management Operations (CMO) defined by
RISC-V spec https://github.com/riscv/riscv-CMOs.
Notes:
1. CMO only supports block-based operations, meaning cache
flush/invd/clean operations are not available for an entire range; in
that case we fall back on fence.i instructions (see the loop sketch
after these notes).
2. Operations are implemented using opcodes to make them compiler
independent; binutils 2.39 and later supports the CMO instructions.
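To illustrate how the block-based operations are meant to be used (the
helper name and block-size constant below are hypothetical; the real
plumbing lives in the library implementation), the range-based flush
boils down to a loop over cache-block-aligned addresses:

  //
  // Illustrative only: flush a byte range one cache block at a time.
  // RiscVCpuCacheFlushBlock() is assumed to emit the cbo.flush opcode
  // for the given address; CACHE_BLOCK_SIZE is assumed to be known or
  // discovered elsewhere.
  //
  VOID
  FlushDataCacheRange (
    IN UINTN  Start,
    IN UINTN  Length
    )
  {
    UINTN  Address;
    UINTN  End;

    End = Start + Length;
    for (Address = Start & ~((UINTN)CACHE_BLOCK_SIZE - 1);
         Address < End;
         Address += CACHE_BLOCK_SIZE) {
      RiscVCpuCacheFlushBlock (Address);
    }
  }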
Test:
1. Ensured the correct instructions are reflected in the generated
assembly.
2. QEMU implements basic support for CMO operations, in that it allows
the instructions to execute without exceptions. Verified it works
properly in that sense.
3. SG2042Pkg implements CMO-like instructions. It was verified that
CpuFlushCpuDataCache works fine, which more or less confirms that the
framework is sound.
4. TODO: Once silicon is available with the exact instructions, we will
verify this further.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Pedro Falcato <pedro.falcato@gmail.com>
Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Sunil V L <sunilvl@...>
Reviewed-by: Jingyu Li <jingyu.li01@...>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4609
The current CalculateCrc16Ansi implementation does the following:
1) Inverts the passed checksum
2) Calculates the new checksum by going through the data using the
lookup table
3) Inverts it back again
This mirrored my design for CalculateCrc32c, where 0 is passed as the
initial checksum and the result is inverted at the end. However, CRC16
does not invert the checksum on input and output, so this behavior is
incorrect.
Fix the problem by inverting neither the input checksum nor the output
checksum. Callers should now pass CRC16ANSI_INIT as the initial value
instead of "0". This is a breaking change.
This problem was found out-of-list when older ext4 filesystems
(that use crc16 checksums) failed to mount with "corruption".
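A minimal usage sketch under the new convention (assuming the existing
(Buffer, Length, InitialValue) parameter order; the buffers and lengths
are illustrative):

  //
  // Seed the calculation with CRC16ANSI_INIT rather than 0. Chained
  // calls keep passing the running value through unmodified, since the
  // function no longer inverts it on entry or exit.
  //
  UINT16  Crc;

  Crc = CalculateCrc16Ansi (Buffer1, Length1, CRC16ANSI_INIT);
  Crc = CalculateCrc16Ansi (Buffer2, Length2, Crc);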
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Pedro Falcato <pedro.falcato@gmail.com>
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2198
VMGEXIT is a new instruction used for Hypervisor/Guest communication when
running as an SEV-ES guest. A VMGEXIT will cause an automatic exit (AE)
to occur, resulting in a #VMEXIT with an exit code value of 0x403.
Since SEV-ES is only supported in X64, provide the necessary X64 support
to execute the VMGEXIT instruction, which is coded as "rep vmmcall". For
IA32, since "vmmcall" is not supported in NASM 32-bit mode and VMGEXIT
should never be called, provide a stub implementation that is identical
to CpuBreakpoint().
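A minimal sketch of the X64 flavor (the BaseLib version is written in
NASM; GCC-style inline assembly is used here only for illustration):

  VOID
  EFIAPI
  AsmVmgExit (
    VOID
    )
  {
    //
    // "rep vmmcall" is the byte encoding of VMGEXIT.
    //
    __asm__ __volatile__ ("rep; vmmcall" ::: "memory");
  }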
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Regression-tested-by: Laszlo Ersek <lersek@redhat.com>
Introduce two public functions, CharToUpper and AsciiCharToUpper.
They provide the same functionality as InternalCharToUpper and
InternalBaseLibAsciiToUpper. Since the internal functions will be
removed, their names are simply changed to the new public ones.
https://bugzilla.tianocore.org/show_bug.cgi?id=1369
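The resulting public prototypes mirror the internal functions they
replace (sketch):

  CHAR16
  EFIAPI
  CharToUpper (
    IN CHAR16  Char
    );

  CHAR8
  EFIAPI
  AsciiCharToUpper (
    IN CHAR8  Chr
    );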
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
Reviewed-by: Ray Ni <ray.ni@intel.com>
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1417
X86 specific BaseLib API AsmLfence() was introduced to address the Spectre
Variant 1 (CVE-2017-5753) issue. The purpose of this API is to insert
barriers to stop speculative execution. However, the API is highly
architecture (X86) specific, so its use should be avoided in generic
code.
To address this issue, this patch will add a new BaseLib API called
SpeculationBarrier(). Different architectures will have different
implementations for this API.
For IA32 and x64, the implementation of SpeculationBarrier() will
directly call AsmLfence().
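So for IA32 and x64 the new API is little more than a thin wrapper; a
minimal sketch:

  VOID
  EFIAPI
  SpeculationBarrier (
    VOID
    )
  {
    //
    // LFENCE stops speculative execution from proceeding past this
    // point on X86.
    //
    AsmLfence ();
  }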
For ARM and AARCH64, this patch will add a temporary empty implementation
as a placeholder. We hope experts in ARM can help to contribute the actual
implementation.
For EBC, similar to the ARM and AARCH64 cases, a temporary empty
implementation is added.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Removal rules for IPF source files:
* Remove source files whose path contains "ipf" and which are also
listed in the [Sources.IPF] section of an INF file.
* Remove source files that are listed in the [Components.IPF] section
of a DSC file and not listed in any other [Components] section.
* Remove embedded IPF code guarded by MDE_CPU_IPF.
Removal rules for INF files:
* Remove IPF from the VALID_ARCHITECTURES comments.
* Remove DXE_SAL_DRIVER from LIBRARY_CLASS in the [Defines] section.
* Remove INFs that are only listed in the [Components.IPF] section of a
DSC.
* Remove statements from [BuildOptions] that provide IPF-specific flags.
* Remove any IPF-specific sections.
Removal rules for DEC files:
* Remove the [Includes.IPF] section from the DEC.
Removal rules for DSC files:
* Remove IPF from SUPPORTED_ARCHITECTURES in the [Defines] section of
the DSC.
* Remove any IPF-specific sections.
* Remove statements from [BuildOptions] that provide IPF-specific flags.
Cc: Liming Gao <liming.gao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Chen A Chen <chen.a.chen@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Hopefully this tidies up the conversion warnings.
----
Add 32-bit and 64-bit functions that count the number of set bits in a
bit field using a divide-and-count method.
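The divide-and-count idea, sketched for the 32-bit case (the function
name below is illustrative rather than the final API):

  UINT32
  CountSetBits32 (
    UINT32  Value
    )
  {
    //
    // Sum adjacent bits, then 2-bit fields, then 4-bit fields, and so
    // on, until the whole word holds the population count.
    //
    Value = (Value & 0x55555555) + ((Value >> 1) & 0x55555555);
    Value = (Value & 0x33333333) + ((Value >> 2) & 0x33333333);
    Value = (Value & 0x0F0F0F0F) + ((Value >> 4) & 0x0F0F0F0F);
    Value = (Value & 0x00FF00FF) + ((Value >> 8) & 0x00FF00FF);
    Value = (Value & 0x0000FFFF) + ((Value >> 16) & 0x0000FFFF);
    return Value;
  }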
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Tomas Pilar <tpilar@solarflare.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
Reviewed-by: Michael Kinney <michael.d.kinney@intel.com>
1. Do not use tab characters
2. No trailing white space at the end of a line
3. All files must end with CRLF
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
"#endif" preprocessing directives near the top of "BaseLib.h" helpfully
repeat the preprocessing conditions from their matching "#if", "#ifdef",
and "#ifndef" directives. This practice has been less followed recently;
supplement the missing comments.
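For example (the condition shown is illustrative), the convention looks
like this:

  #if defined (MDE_CPU_IA32) || defined (MDE_CPU_X64)

  //
  // IA32/X64 specific declarations...
  //

  #endif // defined (MDE_CPU_IA32) || defined (MDE_CPU_X64)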
Cc: Liming Gao <liming.gao@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=866
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
When compiling with any ARM toolchain and -Os, registers can get
trashed when returning for the second time from SetJump, because GCC
only handles this correctly when standard names like 'setjmp' or
'getcontext' are used. When a different name is used, you have to add
the attribute 'returns_twice' to tell GCC to be extra careful.
Example:
  extern int FN_NAME(void*);

  void jmp_buf_set(void *jmpb, void (*f)(void))
  {
    if (!FN_NAME(jmpb))
      f();
  }
With -Os, this code produces the following incorrect code:
00000000 <jmp_buf_set>:
0: e92d4010 push {r4, lr}
4: e1a04001 mov r4, r1
8: ebfffffe bl 0 <nonstandard_setjmp>
c: e3500000 cmp r0, #0
10: 01a03004 moveq r3, r4
14: 08bd4010 popeq {r4, lr}
18: 012fff13 bxeq r3
1c: e8bd4010 pop {r4, lr}
20: e12fff1e bx lr
The generated code pushes backups of r4 and lr to the stack and then
saves all registers using nonstandard_setjmp.
Then it pops the stack and jumps to the function in r3. This is the
main problem, because the called function can now overwrite our
register backups on the stack.
When we return a second time from the call to nonstandard_setjmp, the
stack pointer is back at its original (pushed) position, and when the
code pops r4 and lr from the stack the values are not guaranteed to be
the same.
When a standard name such as setjmp or getcontext is used, or when
'__attribute__((returns_twice))' is added to nonstandard_setjmp's
declaration, the code looks different:
00000000 <jmp_buf_set>:
0: e92d4007 push {r0, r1, r2, lr}
4: e58d1004 str r1, [sp, #4]
8: ebfffffe bl 0 <setjmp>
c: e3500000 cmp r0, #0
10: 059d3004 ldreq r3, [sp, #4]
14: 01a0e00f moveq lr, pc
18: 012fff13 bxeq r3
1c: e28dd00c add sp, sp, #12
20: e49de004 pop {lr} ; (ldr lr, [sp], #4)
24: e12fff1e bx lr
Here the problem is solved by restoring r3 from the stack without
popping it.
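A sketch of the declaration-side fix (assuming RETURNS_TWICE expands to
'__attribute__((returns_twice))' on GCC-compatible toolchains and to
nothing elsewhere):

  RETURNS_TWICE
  UINTN
  EFIAPI
  SetJump (
    OUT BASE_LIBRARY_JUMP_BUFFER  *JumpBuffer
    );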
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Michael Zimmermann <sigmaepsilon92@gmail.com>
Reviewed-by: Liming Gao <liming.gao@intel.com>
The new definitions include two structures:
  IA32_TASK_STATE_SEGMENT
  IA32_TSS_DESCRIPTOR
two macros:
  IA32_GDT_TYPE_TSS
  IA32_GDT_ALIGNMENT
and one API:
  VOID
  EFIAPI
  AsmWriteTr (
    IN UINT16 Selector
    );
They are needed to set up the task gate and interrupt stack table for
stack switching.
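Typical usage once the TSS descriptor has been installed in the GDT (the
selector value below is only a hypothetical example):

  //
  // Load the task register with the selector of the TSS descriptor
  // written into the GDT; the real selector depends on the GDT layout.
  //
  AsmWriteTr (0x40);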
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Reviewed-by: Jeff Fan <vanjeff_919@hotmail.com>
Reviewed-by: Jiewen.yao@intel.com
CalculateCrc32() relies on the initialized mCrcTable. When
CalculateCrc32() is used, mCrcTable takes 1KB in the image. When
CalculateCrc32() is not used, mCrcTable is not built into the image, so
there is no size impact.
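Usage is unchanged; a minimal sketch (assuming the existing
(Buffer, Length) prototype):

  //
  // Only modules that actually call CalculateCrc32() pull mCrcTable
  // into the final image.
  //
  UINT32  Crc;

  Crc = CalculateCrc32 (Buffer, BufferSize);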
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>