The QEMU Bochs display driver and the QEMU Firmware Configuration
interface code (in the qemu-i440fx mainboard dir) were written for x86.
These devices are available in QEMU VMs of other architectures as well,
so we want to port them to be independent of x86.
The main problem is that the drivers use x86 port I/O functions to
communicate with devices over PCI I/O space. These are currently not
available for ARM* and RISC-V, although it is often still possible to
access PCI I/O ports over MMIO through a translator.
Add implementations of port I/O functions that work with PCI I/O space
on these architectures as well, assuming there is such a translator at a
known address configured at build-time.
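For illustration, the approach boils down to something like the
following sketch; the translator base used here is a placeholder, not
necessarily the symbol or address the patch uses:

  /* Sketch only: port I/O emulated as MMIO accesses relative to a
   * build-time translator base. PCI_IO_TRANSLATOR_BASE is hypothetical
   * and board-specific. */
  #include <stdint.h>

  #define PCI_IO_TRANSLATOR_BASE 0x3eff0000UL /* hypothetical example */

  static inline uint8_t inb(uint16_t port)
  {
      return *(volatile uint8_t *)(PCI_IO_TRANSLATOR_BASE + port);
  }

  static inline void outb(uint8_t value, uint16_t port)
  {
      *(volatile uint8_t *)(PCI_IO_TRANSLATOR_BASE + port) = value;
  }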
Change-Id: If7d9177283e8c692088ba8e30d6dfe52623c8cb9
Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/80372
Reviewed-by: Nico Huber <nico.h@gmx.de>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Julius Werner <jwerner@chromium.org>
Arm SoCs may implement FEAT_CCIDX starting with Armv8.3. The field
layout of CCSIDR_EL1 differs when FEAT_CCIDX is implemented. If NumSets
and Associativity are decoded from CCSIDR_EL1 with the wrong layout, the
system hangs during mmu_disable().
Rather than assuming that FEAT_CCIDX is not implemented, this patch
adds a check to dcache_apply_all to use the right register format.
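Roughly, the decode difference looks like this (illustrative only;
field positions per the Arm ARM, the helper names are not the ones used
in the tree):

  /* Sketch: CSSELR_EL1 must already select the cache of interest. */
  #include <stdint.h>

  static inline uint64_t read_ccsidr_el1(void)
  {
      uint64_t v;
      asm volatile("mrs %0, ccsidr_el1" : "=r"(v));
      return v;
  }

  static inline int have_feat_ccidx(void)
  {
      uint64_t mmfr2;
      asm volatile("mrs %0, id_aa64mmfr2_el1" : "=r"(mmfr2));
      return (mmfr2 >> 20) & 0xf;  /* ID_AA64MMFR2_EL1.CCIDX */
  }

  static void decode_ccsidr(uint32_t *num_sets, uint32_t *assoc)
  {
      uint64_t ccsidr = read_ccsidr_el1();

      if (have_feat_ccidx()) {
          *assoc    = ((ccsidr >> 3) & 0x1fffff) + 1;   /* bits [23:3] */
          *num_sets = ((ccsidr >> 32) & 0xffffff) + 1;  /* bits [55:32] */
      } else {
          *assoc    = ((ccsidr >> 3) & 0x3ff) + 1;      /* bits [12:3] */
          *num_sets = ((ccsidr >> 13) & 0x7fff) + 1;    /* bits [27:13] */
      }
  }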
Reference:
- https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/12770
BUG=b:317015456
TEST=mmu_disable works on the FEAT_CCIDX supported SoC.
TEST=manually add mmu_disable to emulation/qemu-aarch64/bootblock.c and
verify with the command
qemu-system-aarch64 -bios \
./coreboot-builds/EMULATION_QEMU_AARCH64/coreboot.rom -M \
virt,secure=on,virtualization=on -cpu max -cpu cortex-a710 \
-nographic -m 8192M
Change-Id: Ieadd0d9dfb8911039b3d36c9419af4ae04ed814c
Signed-off-by: Yidi Lin <yidilin@chromium.org>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/82635
Reviewed-by: Eric Lai <ericllai@google.com>
Reviewed-by: Paul Menzel <paulepanter@mailbox.org>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Julius Werner <jwerner@chromium.org>
Reviewed-by: Yu-Ping Wu <yupingso@google.com>
The <stdio.h> header is used for input/output operations (such as
printf, scanf and fopen). Although some of these functions operate on
strings, code that only uses <stdio.h> does not need to include
<string.h> directly, because the two headers declare their functions
independently.
Change-Id: Ibe2a4ff6f68843a6d99cfdfe182cf2dd922802aa
Signed-off-by: Elyes Haouas <ehaouas@noos.fr>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/82665
Reviewed-by: Yidi Lin <yidilin@google.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Both acpi_create_madt_sci_override() and acpi_sci_int() have special
handling for the ACPI_NO_PCAT_8259 case, but the reason for it wasn't
exactly obvious, so add a comment explaining it.
Signed-off-by: Felix Held <felix-coreboot@felixheld.de>
Change-Id: Ia6dcf59d5ab9226c61e9c4af95a73a07771b71d1
Reviewed-on: https://review.coreboot.org/c/coreboot/+/82643
Reviewed-by: Nico Huber <nico.h@gmx.de>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
This patch removes all instances of the `protected_mode_jump` API and
its associated header file.
The API is no longer used by any code within the tree.
BUG=b:332759882
TEST=Built and booted 64-bit coreboot with 32-bit payload successfully.
Change-Id: I3eb31b09c92512338ccc540f60289960bd6bf439
Signed-off-by: Subrata Banik <subratabanik@google.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/82372
Reviewed-by: Patrick Rudolph <patrick.rudolph@9elements.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
The payload execution process has been updated to use
protected_mode_call_1arg in order to guarantee proper handling of
function parameters.
The previous protected_mode_jump used a "jmp" instruction, which did
not set up the stack for argument passing as required by the System V
ABI calling convention.
This patch ensures that calls into the libpayload entry point in
protected mode follow the System V ABI calling convention. This
resolves an issue where the "pointer to coreboot tables" could not be
retrieved from within the libpayload entry point due to incorrect
argument passing.
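As a rough sketch (the exact prototype lives in coreboot's mode-switch
code and may differ), the call-style helper is used like this:

  /* Illustrative only; the real prototype may differ. */
  #include <stdint.h>

  void protected_mode_call_1arg(uint32_t func_ptr, uint32_t arg1);

  static void start_payload(uintptr_t entry, void *coreboot_tables)
  {
      /* The helper drops to protected mode, pushes arg1 onto the
       * 32-bit stack and CALLs the entry point, so the payload sees
       * the pointer to the coreboot tables as its first parameter,
       * exactly where the System V i386 ABI expects it. A plain
       * "jmp" leaves the stack unprepared, and the entry point reads
       * garbage instead. */
      protected_mode_call_1arg((uint32_t)entry,
                               (uint32_t)(uintptr_t)coreboot_tables);
  }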
BUG=b:332759882
TEST=Built and booted 64-bit coreboot with 32-bit payload successfully.
Change-Id: Ibd522544ad1e9deed6a11015b0c0e95265bda8eb
Signed-off-by: Subrata Banik <subratabanik@google.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/82294
Reviewed-by: Patrick Rudolph <patrick.rudolph@9elements.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Nick Vaccaro <nvaccaro@google.com>
TF-A migrates the default linker choice to GCC in order to enable
LTO [1]. Change BL31_LDFLAGS from `--emit-relocs` to `-Wl,--emit-relocs`
so that GCC passes `--emit-relocs` through to the linker.
[1]: https://review.trustedfirmware.org/c/26703
BUG=b:338420310
TEST=emerge-geralt coreboot
TEST=./util/abuild/abuild -t google/geralt -b geralt -a
TEST=./util/abuild/abuild -t google/oak -b elm -a
Change-Id: I65b96aaa052138592a0f57230e1140a1bb2f07ac
Signed-off-by: Yidi Lin <yidilin@chromium.org>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/82189
Reviewed-by: Eric Lai <ericllai@google.com>
Reviewed-by: Yu-Ping Wu <yupingso@google.com>
Reviewed-by: Julius Werner <jwerner@chromium.org>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
This change prepares for an upcoming arm-trusted-firmware uprev.
TF-A refactors the toolchain detection in [1][2]. After that, `AR`,
`CC`, `LD` and other toolchain variables take precedence over
`CROSS_COMPILE`.
Since the ChromeOS build system also sets those toolchain variables
when building coreboot, TF-A ends up using the CrOS GCC instead of the
coreboot SDK. Unset those variables so that `CROSS_COMPILE` takes
effect.
TF-A upstream changes the default linker from BFD to GCC in [3].
Therefore, temporarily override LD with $(LD_arm64) to fix the build
error below.
aarch64-elf-gcc: error: unrecognized command-line option '--emit-relocs'
In addition, TF-A wraps LD in single quotes to work around a Windows
path issue [4]. On the MT8173 platform, `--fix-cortex-a53-843419` is
appended to $(LD_arm64) for ERRATA_A53_843419, which results in the
build error below.
/bin/sh: 1: --fix-cortex-a53-843419: not found
Since `--fix-cortex-a53-843419` is never passed to TF-A, simply extract
the LD command from $(LD_arm64) with $(word 1, $(LD_arm64)).
[1]: https://review.trustedfirmware.org/c/24921
[2]: https://review.trustedfirmware.org/c/25333
[3]: https://review.trustedfirmware.org/c/26703
[4]: https://review.trustedfirmware.org/c/26737
BUG=b:338420310
TEST=emerge-geralt coreboot
TEST=./util/abuild/abuild -t google/geralt -b geralt -a -x
TEST=./util/abuild/abuild -t google/oak -b elm -a -x
TEST=./util/abuild/abuild -t google/cherry -x -a
Change-Id: Ieac9f96e81e574b87e20cd2df335c36abcb8bb5c
Signed-off-by: Yidi Lin <yidilin@chromium.org>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/82187
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Yu-Ping Wu <yupingso@google.com>
Reviewed-by: Julius Werner <jwerner@chromium.org>
Reviewed-by: Eric Lai <ericllai@google.com>
Currently, arch/arm64 requires coreboot to run in EL3 because it
accesses EL3 registers. This might be an issue when, for example, one
boots into TF-A first and drops into EL2 for coreboot afterwards.
This patch aims at making arch/arm64 more versatile by removing the
current EL3 constraint and allowing arm64 coreboot to run in EL1, EL2
or EL3.
The strategy here is to add a Kconfig option (ARM64_CURRENT_EL) which
lets us specify coreboot's EL upon entry. Based on that, we access the
appropriate ELx registers. For example, when running coreboot in EL1,
we would access vbar_el1 instead of vbar_el2 or vbar_el3. This way, we
don't generate faults by accessing higher-EL registers.
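The idea can be sketched roughly as follows (illustrative only;
coreboot's actual register accessors differ):

  /* Sketch: pick the ELx system register at build time from the
   * Kconfig value (1, 2 or 3). */
  #include <stdint.h>

  #if CONFIG_ARM64_CURRENT_EL == 1
  #define VBAR_CURRENT "vbar_el1"
  #elif CONFIG_ARM64_CURRENT_EL == 2
  #define VBAR_CURRENT "vbar_el2"
  #else
  #define VBAR_CURRENT "vbar_el3"
  #endif

  static inline void write_vbar_current(uint64_t value)
  {
      asm volatile("msr " VBAR_CURRENT ", %0" :: "r"(value) : "memory");
  }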
Currently only tested on the qemu-aarch64 target. Exceptions were
tested by enabling FATAL_ASSERTS.
Signed-off-by: David Milosevic <David.Milosevic@9elements.com>
Change-Id: Iae1c57f0846c8d0585384f7e54102a837e701e7e
Reviewed-on: https://review.coreboot.org/c/coreboot/+/74798
Reviewed-by: Werner Zeh <werner.zeh@siemens.com>
Reviewed-by: ron minnich <rminnich@gmail.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Julius Werner <jwerner@chromium.org>
When switching back and forth between 32-bit and 64-bit mode, for
example to call a 32-bit FSP or to call the payload, page tables linked
into the respective stage will be used.
The advantages of this approach are:
- No need to determine a good place for page tables in CBFS that does
  not overlap.
- Works with non-memory-mapped flash (although all current coreboot
  targets do support memory-mapped flash).
- Later stages can use their own page tables, which fits better with
  the vboot RO/RW flow.
A disadvantage is that it increases the stage size. This could be
improved upon by using 1G pages and generating the page tables at
runtime.
Note: QEMU cannot keep the page tables in the read-only boot medium and
needs to relocate them at runtime. This is why the existing code with
page tables in CBFS is kept for now.
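For reference, the runtime alternative mentioned above (1G pages built
on the fly) would look roughly like this sketch, which identity-maps
the first 512 GiB; it is not the code in this patch:

  #include <stdint.h>

  #define PTE_P  (1ULL << 0)  /* present */
  #define PTE_RW (1ULL << 1)  /* writable */
  #define PTE_PS (1ULL << 7)  /* 1 GiB page (in a PDPT entry) */

  static uint64_t pml4[512] __attribute__((aligned(4096)));
  static uint64_t pdpt[512] __attribute__((aligned(4096)));

  static void build_identity_map(void)
  {
      for (int i = 0; i < 512; i++)  /* 512 x 1 GiB = 512 GiB */
          pdpt[i] = ((uint64_t)i << 30) | PTE_P | PTE_RW | PTE_PS;

      pml4[0] = (uint64_t)(uintptr_t)pdpt | PTE_P | PTE_RW;
      /* Loading %cr3 with pml4 is left to the mode-switch code. */
  }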
TEST: Booted to payload on google/vilboz and qemu/q35
Signed-off-by: Arthur Heymans <arthur@aheymans.xyz>
Change-Id: Ied54b66b930187cba5fbc578a81ed5859a616562
Reviewed-on: https://review.coreboot.org/c/coreboot/+/80337
Reviewed-by: Patrick Rudolph <patrick.rudolph@9elements.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Testing on the Unmatched shows the code no longer works completely
correctly; Linux has taken over the handling of misalignment
anyway, because handling it in firmware, with the growing
complexity of the ISA and the awkward way in which it
has to be handled, is more trouble than it's worth.
Plus, we don't WANT misalignment handled, magically, in
firmware: the cost of getting it wrong is high (as I've
spent a month learning); the performance is terrible (350x
slowdown); and most toolchains now know to avoid unaligned
load/store on RISC-V anyway.
But, mostly, if alignment problems exist, *we need to know*,
and if they're handled invisibly in firmware, we don't.
The problem with invisible handling was shown a while back
in the Go toolchain: the runtime had a small error, such that
many misaligned loads/stores were happening, and it was
not discovered for some time. Had a trap been directed
to kernel or user on misalignment, the problem would
have been known immediately, not after many months.
(The error, btw, was masking the address with 3,
not 7, to detect misalignment; an easy mistake!)
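For illustration (not the actual Go runtime code), the mistake is the
difference between these two checks for a 64-bit access:

  #include <stdbool.h>
  #include <stdint.h>

  static bool misaligned_buggy(uintptr_t addr)
  {
      return (addr & 3) != 0;  /* wrong: only checks 4-byte alignment */
  }

  static bool misaligned_correct(uintptr_t addr)
  {
      return (addr & 7) != 0;  /* a 64-bit access needs 8-byte alignment */
  }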
But the coreboot code does not work any more anyway,
and it's not worth fixing. Remove it.
Tested by booting Linux to runlevel 1; before,
it would hang on an alignment fault, as the
alignment code was failing (somewhere).
This takes the coreboot SBI code much closer to
revival.
Change-Id: I84a8d433ed2f50745686a8c109d101e8718f2a46
Signed-off-by: Ronald G Minnich <rminnich@gmail.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/81416
Reviewed-by: Maximilian Brune <maximilian.brune@9elements.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Instead of using some arithmetic that only sometimes works, use the
largest alignment necessary (that of the page tables) and align
downwards in the linker script.
This fixes link failures when page tables are linked into the
bootblock.
This can result in a slight increase in bootblock size of at most 4096 -
512 bytes.
Signed-off-by: Arthur Heymans <arthur@aheymans.xyz>
Change-Id: I78c6ba6e250ded3f04b12cd0c20b18cb653a1506
Reviewed-on: https://review.coreboot.org/c/coreboot/+/80346
Reviewed-by: Martin L Roth <gaumless@gmail.com>
Reviewed-by: Jérémy Compostella <jeremy.compostella@intel.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Older parts do not have the menvcfg CSR.
Provide a Kconfig variable, default y, to enable it.
Check the variable in the payload code when coreboot's SBI is used, and
print whether it is enabled.
The SiFive FU540 and FU740 do not support this register;
set the variable to n for those parts.
Add constants for this new CSR.
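A minimal sketch of the payload-side check (the Kconfig symbol name
here is an assumption; printk() and read_csr() are used as coreboot
normally provides them):

  #include <console/console.h>
  #include <arch/encoding.h>

  static void report_menvcfg(void)
  {
  #if CONFIG(RISCV_HAS_MENVCFG)  /* assumed symbol name */
      printk(BIOS_INFO, "menvcfg: %lx\n", read_csr(menvcfg));
  #else
      printk(BIOS_INFO, "menvcfg: not implemented on this part\n");
  #endif
  }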
Change-Id: I6ea302a5acd98f6941bf314da89dd003ab20b596
Signed-off-by: Ronald G Minnich <rminnich@gmail.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/81425
Reviewed-by: Nico Huber <nico.h@gmx.de>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Martin L Roth <gaumless@gmail.com>
Introduce a function to determine whether the number of cache sets is
a power of two. This aligns with common cache design practices that
favor power-of-two counts for efficient indexing and addressing.
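The usual bit trick looks like this (illustrative; the function name
used in the tree may differ):

  #include <stdbool.h>
  #include <stdint.h>

  static inline bool is_power_of_two(uint32_t n)
  {
      /* A power of two has exactly one bit set, so n & (n - 1)
       * clears that bit and yields zero. */
      return n != 0 && (n & (n - 1)) == 0;
  }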
BUG=b:306677879
BRANCH=firmware-rex-15709.B
TEST=Verified functionality on google/ovis and google/rex (including
a non-power-of-two Ovis configuration).
Change-Id: I819e0d1aeb4c1dbe1cdf3115b2e172588a6e8da5
Signed-off-by: Subrata Banik <subratabanik@google.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/81268
Reviewed-by: Stefan Reinauer <stefan.reinauer@coreboot.org>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
This patch moves commonlib/stdlib.h -> commonlib/bsd/stdlib.h, since
all code is BSD licensed anyway.
It also moves some code from libpayload's stdlib.h to
commonlib/bsd/stdlib.h so that it can be shared with coreboot. This is
useful for a subsequent commit that adds devicetree.c into commonlib.
Also, coreboot does not support DMA on Arm platforms (only libpayload
does), therefore `dma_malloc()` has been removed and `dma_coherent()`
has been moved to architecture-specific functions. Any architecture
that tries to use `dma_coherent()` now will get a compile-time error.
In order not to break current platforms like mb/google/herobrine, which
use the commonlib/storage/sdhci.c controller which in turn uses
`dma_coherent()`, a stub has been added to arch/arm64/dma.c.
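A minimal sketch of such a stub, assuming the libpayload-style
prototype int dma_coherent(void *); the real arch/arm64/dma.c stub may
look different:

  /* Without a separate uncached DMA region, report all memory as
   * coherent so callers like sdhci.c skip bounce buffering. */
  int dma_coherent(void *ptr)
  {
      return 1;
  }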
Signed-off-by: Maximilian Brune <maximilian.brune@9elements.com>
Change-Id: I3a7ab0d1ddcc7ce9af121a61b4d4eafc9e563a8a
Reviewed-on: https://review.coreboot.org/c/coreboot/+/77969
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Julius Werner <jwerner@chromium.org>
PMP (Physical Memory Protection) is a feature of the RISC-V
Privileged Architecture spec that allows defining region(s) of
the address space to be protected in a variety of ways: ranges
for M mode can be protected against access from lower privilege
levels, and M mode can be locked out of accessing memory
reserved for lower privilege levels. Limits on Read, Write, and
Execute are allowed. In coreboot, we protect the M-mode code
against Write and Execute from lower levels, but allow Reading,
so as to ease data structure access. PMP is not a security
boundary; it is an accident-prevention device.
PMP is used here to protect persistent ramstage code that is
used to support SBI, e.g. printk and some data structures. It
also protects the SBI stacks. Note that there is one stack per
hart. There are 512- and 1024-hart SoCs being built today, so
the stack should be kept small.
PMP is not a general-purpose protection mechanism, and it is easy
to get around. For example, S mode can stage a DMA that
overwrites all the M mode code. PMP is, rather, a way to avoid
simple accidents. It is understood that PMP depends on proper OS
behavior to implement true SBI security (personal conversation
with a RISC-V architect). Think of PMP as "Protection Minus
Protection".
PMP is also a very limited resource, as defined in the
architecture. This language is instructive: "PMP entries are
described by an 8-bit configuration register and one XLEN-bit
address register. Some PMP settings additionally use the address
register associated with the preceding PMP entry. Up to 16 PMP
entries are supported. If any PMP entries are implemented, then
all PMP CSRs must be implemented, but all PMP CSR fields are
WARL and may be hardwired to zero. PMP CSRs are only accessible
to M-mode."
In other words, if you implement PMP even a little, you have to
implement it all; but you can implement it in part by simply
returning 0 for a pmpcfg. Also, PMP address registers (pmpaddr)
don't have to implement all the bits. On a SiFive FU740, for
example, PMP only implements bits 33:0, i.e. a 34-bit address.
PMPs are just packed with all kinds of special cases. There is no
requirement that you read back what you wrote to the pmpaddr
registers. The earlier PMP code would die if the read did not
match the write, but since pmpaddr registers are WARL, that was
not correct. An SoC can just decide it only does 4096-byte
granularity on TOR PMP types, and that is your problem if you
wanted finer granularity. SoCs don't have to implement all the
high-order bits either.
And, to reiterate, there is no requirement about which of the pmpcfg
are implemented. Implementing just pmpcfg15 is allowed.
The coreboot SBI code was written before PMP existed. In order
for coreboot SBI code to work, this patch is necessary.
With this change, a simple S-mode payload that calls SBI putchar
works:
1:
        li      a7, 1   /* legacy SBI extension 1: console putchar */
        li      a0, 48  /* character '0' */
        ecall
        j       1b
Without this change, it will not work.
Getting this to build on RV32 required changes to the API, as it
was incorrect. In RV32, PMP addresses are 34 bits wide. Hence,
setup_pmp needed to accept u64. So uintptr_t cannot be used, as
on 32-bit it is only a 32-bit number. The internal API uses
uintptr_t, but the exported API uses u64, so external code does
not have to think about right shifts on base and size.
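Sketched with illustrative names (not the exact coreboot prototypes),
the split looks like this:

  #include <stdint.h>

  /* Hypothetical exported API: u64 lets RV32 callers express 34-bit
   * physical addresses. */
  void setup_pmp(uint64_t base, uint64_t size, uint8_t pmpcfg_flags);

  /* Internal detail: pmpaddr registers hold physical address bits
   * [XLEN+3:2], so the right shift stays hidden from callers. */
  static inline uintptr_t pmpaddr_from(uint64_t addr)
  {
      return (uintptr_t)(addr >> 2);
  }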
Errors are detected: an error in base or size will result in a
BIOS_EMERG print, but not a panic. The goal is to boot, not
brick, whenever possible.
There are small changes to the internal API to reduce
stack pressure: there's no need to have two pmpcfg_t
on the stack when one will do.
TEST: Linux now boots partly on the SiFive unmatched. There are
changes in flight on the coreboot SBI that will allow Linux to
boot further, but they are out of scope for this patch.
Currently, clk_ignore_unused is required; removing that need
requires a separate patch.
Change-Id: I6edce139d340783148cbb446cde004ba96e67944
Signed-off-by: Ronald G Minnich <rminnich@gmail.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/81153
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Philipp Hug <philipp@hug.cx>
Starting with Intel CPX there is a bug in the reference code during
Pipe init, the code that synchronises the CAR between sockets in FSP-M.
That code implicitly assumes that the FSP heap sits right above the
RC heap, with both located at the bottom of CAR.
Work around this issue by making that implicit assumption in FSP
explicit in the coreboot linker script and allocation.
TEST=intel/archercity CRB
Signed-off-by: Arthur Heymans <arthur@aheymans.xyz>
Signed-off-by: Shuo Liu <shuo.liu@intel.com>
Change-Id: I38a4f4b7470556e528a1672044c31f8bd92887d4
Reviewed-on: https://review.coreboot.org/c/coreboot/+/80579
Reviewed-by: Lean Sheng Tan <sheng.tan@9elements.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Nico Huber <nico.h@gmx.de>
Reviewed-by: Shuo Liu <shuo.liu@intel.com>
Current versions of QEMU raise an exception when accessing invalid
memory. Modify the probing code to temporarily redirect the exception
handler, like on the ARM platform.
Also move saving of the stack frame out to trap_util.S so that
everything is in one place for a future rewrite.
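The probing idea, sketched with hypothetical helper names (not the
actual coreboot RISC-V API):

  #include <stdbool.h>
  #include <stdint.h>

  static volatile bool probe_faulted;

  /* hypothetical helpers around the trap vector */
  extern uintptr_t trap_handler_swap(uintptr_t new_handler);
  extern void probe_trap_stub(void); /* sets probe_faulted, skips the insn */

  static bool memory_present(uintptr_t addr)
  {
      uintptr_t saved = trap_handler_swap((uintptr_t)probe_trap_stub);

      probe_faulted = false;
      (void)*(volatile uint8_t *)addr;  /* may trap on invalid memory */

      trap_handler_swap(saved);
      return !probe_faulted;
  }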
TEST=boots to ramstage
Change-Id: I25860f688c7546714f6fdbce8c8f96da6400813c
Signed-off-by: Philipp Hug <philipp@hug.cx>
Signed-off-by: Arthur Heymans <arthur@aheymans.xyz>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/36486
Reviewed-by: ron minnich <rminnich@gmail.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
This simply adds a comment to indicate to the reader that the
RISCV_PAYLOAD_MODE_S option causes OpenSBI to switch to Supervisor
mode. Otherwise it could be read as coreboot switching to Supervisor
mode before starting OpenSBI (which is not the case).
Signed-off-by: Maximilian Brune <maximilian.brune@9elements.com>
Change-Id: Ib62be0c2ff59361200df4c65f9aca5f7456a0ada
Reviewed-on: https://review.coreboot.org/c/coreboot/+/79949
Reviewed-by: ron minnich <rminnich@gmail.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Philipp Hug <philipp@hug.cx>