As noted by Quentin, in CI we should at least be pinning the version of
pytest that we install. To avoid problems later, install from the whole
requirements file instead. Furthermore, the documentation build for
readthedocs also needs pytest, so install the requirements file there
as well.
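A minimal sketch of that install step, assuming the requirements file
lives at test/requirements.txt (the exact path is an assumption):

pip install -r test/requirements.txt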
Reported-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Tom Rini <trini@konsulko.com>
This adds the vexpress_fvp and vexpress_fvp_bloblist platforms to the
list of platforms we test via emulator in CI. To do this we first need
our container to include TF-A builds for the vexpress_fvp platform,
both with and without transfer list support, as well as to have
"telnet" installed so that we can access the console. In the CI files
we check for the existence of /opt/tf-a/${TEST_PY_BD} and, if found,
copy bl1.bin and fip.bin to /tmp and set the variables needed to run
FVP later.
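Roughly, the CI check amounts to the following sketch (the exported
variable names are hypothetical):

if [ -d "/opt/tf-a/${TEST_PY_BD}" ]; then
    cp /opt/tf-a/${TEST_PY_BD}/bl1.bin /opt/tf-a/${TEST_PY_BD}/fip.bin /tmp/
    export FVP_BL1=/tmp/bl1.bin FVP_FIP=/tmp/fip.bin  # hypothetical names
fi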
Note that we currently disable the hostfs (semihosting) tests as they
trigger a bug in FVP. This has been reported upstream, and can be
enabled when fixed.
Reviewed-by: Harrison Mutai <harrison.mutai@arm.com>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Now that U-Boot can boot this quickly using KVM, add a test that the
installer starts up correctly.
Use the qemu-x86_64 board in the SJG lab.
Signed-off-by: Simon Glass <sjg@chromium.org>
Binman includes a good set of tests covering all of its functionality.
This includes a code-coverage test.
However, to date the code-coverage test has not been checked
automatically by CI, relying on people to run 'binman test -T'
themselves.
Plug the gap to avoid bugs creeping in in future.
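For reference, the coverage check can be run locally from the U-Boot
tree with:

./tools/binman/binman test -T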
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
It seems that the new approach doesn't go far enough, since when there
are multiple independent runners on a machine, they end up sharing the
same git directory.
Add $CI_RUNNER_ID as well, so they are kept separate.
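A sketch of the resulting per-runner clone path (the exact layout is an
assumption; the variables are GitLab's predefined ones):

GIT_CLONE_PATH="$CI_BUILDS_DIR/$CI_RUNNER_ID/$CI_PROJECT_NAME"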
Signed-off-by: Simon Glass <sjg@chromium.org>
Execution time varies widely with the existing tests. Provide a way to
produce a summary of the time taken for each test, along with a
histogram.
This is enabled with the --timing flag.
Enable it for sandbox in CI.
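A hedged example of a local invocation, assuming the flag is accepted
by test/py/test.py:

./test/py/test.py --bd sandbox --build --timing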
Example:
Duration : Number of tests
======== : ========================================
<1ms : 1
<8ms : 1
<20ms : # 20
<30ms : ######## 127
<50ms : ######################################## 582
<75ms : ####### 102
<100ms : ## 39
<200ms : ##### 86
<300ms : # 29
<500ms : ## 42
<750ms : # 16
<1.0s : # 15
<2.0s : # 23
<3.0s : 13
<5.0s : 9
<7.5s : 1
<10.0s : 6
<20.0s : 12
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
When running multiple runners on the same machine, each git repo should
be in its own place to avoid them interfering with each other.
Link to docs:
https://docs.gitlab.com/ee/ci/runners/configure_runners.html#custom-build-directories
Series-to: u-boot
Series-cc: trini
Series-version: 2
Series-links: 442655
Series-changes: 3
- Use GIT_CLONE_PATH instead
Series-changes: 2
- Put this under the global 'default' part
END
Signed-off-by: Simon Glass <sjg@chromium.org>
This is a global default, so put it under 'default' like the tags.
Series-changes: 2
- Add new patch to move default image under global defaults
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Tom Rini <trini@konsulko.com>
I have one of these 2GB boards with Ubuntu 24.10 loaded. Add an entry
for it so that it can be used for testing.
Signed-off-by: Simon Glass <sjg@chromium.org>
The xtensa architecture is interesting in that the platforms we support
are only valid with the binary-only toolchains, as the DC233C
instruction set requires those toolchains (rather than the FSF
instruction set). Only install the binary toolchain on amd64 hosts and
only run the tests on them as well.
Signed-off-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
These Radxa boards use similar Rockchip SoCs and unfortunately have a
lot of secret code. Add entries for these so that they can be used for
testing.
For some reason the 5b hangs (for me) with any significant amount of
output (e.g. dm tree). The board currently uses this for DDR init:
rk3588_ddr_lp4_2112MHz_lp5_2400MHz_v1.18.bin
Signed-off-by: Simon Glass <sjg@chromium.org>
This board has two variants, one of which boots with tpl. Labgrid gets
upset when they both try to claim the same hardware, so we might need to
see if we can handle that cleanly, e.g. by making the job wait.
Signed-off-by: Simon Glass <sjg@chromium.org>
I have one of these boards with Armbian_22.02.2_Rockpro64_bullseye
loaded. Add an entry for it so that it can be used for testing.
Signed-off-by: Simon Glass <sjg@chromium.org>
I have one of these boards loaded with Linux 6.6.2 and a video demo. Add
an entry for it so that it can be used for testing.
Unfortunately quite a lot of work-arounds are needed to make this board
work, but I am working with the supplier on this.
Signed-off-by: Simon Glass <sjg@chromium.org>
I have one of these boards loaded with Ubuntu 24.10 (64-bit). Add an
entry for it so that it can be used for testing.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-By: Heinrich Schuchardt <xypron.glpk@gmx.de>
I have an original rpi installed now, loaded with OS Lite (32-bit).
Add an entry for it so that it can be used for testing.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Tom Rini <trini@konsulko.com> says:
Hey all,
This is picking up Simon's v5 of the above-named series and making a few
more changes so that the follow-up series I have leads to arm64 being
supported for almost all jobs. To quote Simon's cover letter:
All gitlab runners are currently amd64 machines. This series attempts to
create a docker image which can also support arm64 so that sandbox tests
can be run on it.
The TARGET_... environment variables for grub could perhaps be adjusted,
using the new variables, but I have not done that for now.
Adding to what Simon said, we now build grub for all architectures. The
reason to install it was to be able to use the binaries in QEMU, but
that won't provide us with amd64 binaries on arm64 hosts, so we can't
use that shortcut anymore.
Link: https://lore.kernel.org/r/20241127172247.1488685-1-trini@konsulko.com
For consistency now, and future ease of testing with non-amd64 hosts,
build grub for all architectures rather than relying on host binaries
for i386/x86_64.
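As a rough, hedged sketch (the exact configure and grub-mkimage options
and output name may differ from what CI uses), building an x86_64 EFI
grub binary from a release tarball looks something like:

./configure --target=x86_64 --with-platform=efi
make
./grub-mkimage -O x86_64-efi -d grub-core -o grub_x86.efi -p / normal part_gpt fat ext2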
Signed-off-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Some of the boards have developed problems since the last passing run.
Disable these while diagnosis is ongoing.
Signed-off-by: Simon Glass <sjg@chromium.org>
The rules part of the template makes sure that this doesn't run until
specifically requested. Drop the check in the script itself, so it is
possible to trigger a run manually without re-pushing the tree.
Signed-off-by: Simon Glass <sjg@chromium.org>
Most gitlab runners are mostly idle at present and could happily run
several jobs at once.
There are a few exceptions, such as the world builds.
Add a 'single' tag for the exceptions so that they are dealt with
separately, with no concurrency. This is not quite perfect, since it
means that a 'single' job can be combined with a number of normal jobs,
but in practice this doesn't seem to matter.
To use this, for each existing machine, add a new, separate runner, with
the tag 'single'. Set 'limit = 1' in its config.toml file and set
'concurrent = 10' (for example) in the global section. The machine will
then pick up one 'single' job and up to 9 other jobs at once.
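For example, registering the extra runner might look roughly like this
(the URL and token are placeholders):

gitlab-runner register --non-interactive \
    --url https://source.denx.de/ --registration-token <token> \
    --executor docker --tag-list single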
In my testing this reduces the time to complete a CI pipeline.
My choice of which jobs to mark as 'single' is mostly down to how much
CPU they need. The 'docs' job runs for a short time, so it doesn't seem
worth marking it as 'single'.
Signed-off-by: Simon Glass <sjg@chromium.org>
Simon Glass <sjg@chromium.org> says:
Labgrid provides access to a hardware lab in an automated way. It is
possible to boot U-Boot on boards in the lab without physically touching
them. It relies on relays, USB UARTs and SD muxes, among other things.
By way of background, about 4 years ago I wrote a thing called Labman[1]
which allowed my lab of about 30 devices to be operated remotely, using
tbot for the console and build integration. While it worked OK and I
used it for many bisects, I didn't take it any further.
It turns out that there was already an existing program, called Labgrid,
which I did not know about at the time (thank you Tom for telling me). It is
more rounded than Labman and has a number of advantages:
- does not need udev rules, mostly
- has several existing users who rely on it
- supports multiple machines exporting their devices
It lacks a 'lab check' feature and a few other things, but these can be
remedied.
On and off over the past several weeks I have been experimenting with
Labgrid. I have managed to create an initial U-Boot integration (this
series) by adding various features to Labgrid[2] and the U-Boot test
hooks[3].
I hope that this might inspire others to set up boards and run tests
automatically, rather than relying on infrequent, manual testing. Perhaps
it may even be possible to have a number of labs available.
Included in the integration are a number of simple scripts which make it
easy to connect to boards and run tests:
ub-int <target>
Build and boot on a target, starting an interactive session
ub-cli <target>
Build and boot on a target, ensure U-Boot starts and provide an interactive
session from there
ub-smoke <target>
Smoke test U-Boot to check that it boots to a prompt on a target
ub-bisect <target>
Bisect a git tree to locate a failure on a particular target
ub-pyt <target> <testspec>
Run U-Boot pytests on a target
Some of these help to provide the same tbot[4] workflow which I have
relied on for several years, albeit in much simpler form.
The goal here is to create some sort of script which can collect
patches from the mailing list, apply them and test them on a selection
of boards. I suspect that script already exists, so please let me know
what you suggest.
I hope you find this interesting and take a look!
[1] https://github.com/sjg20/u-boot/tree/lab6a
[2] https://github.com/labgrid-project/labgrid/pull/1411
[3] https://github.com/sjg20/uboot-test-hooks/tree/labgrid
[4] https://tbot.tools/index.html
Link: https://lore.kernel.org/r/20241112141326.643128-1-sjg@chromium.org
[trini: Move the sjg-lab job to prior to world build, to fix pipeline
status]
Signed-off-by: Tom Rini <trini@konsulko.com>
Add a way to run tests on a real hardware lab. This is in the very early
experimental stages. There are only 23 boards and 3 of those are broken!
(bob, ff3399, samus). A fourth fails due to problems with the TPM tests.
To try this, assuming you have gitlab access, set SJG_LAB=1, e.g.:
git push -o ci.variable="SJG_LAB=1" dm HEAD:try
This relies on the two previous series targeted at -next as well as the
bugfix series for -master
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Andrejs Cainikovs <andrejs.cainikovs@toradex.com>
Patrick Rudolph <patrick.rudolph@9elements.com> says:
Based on the existing work done by Simon Glass, this series adds
support for booting aarch64 devices using ACPI only.
As a first target, QEMU SBSA support is added, which relies on ACPI
only to boot an OS. As a secondary target the Raspberry Pi 4 was used,
which is broadly available and allows easy testing of the proposed
solution.
The series is split into ACPI cleanups and code movements, adding
Arm-specific ACPI tables and finally SoC and mainboard related
changes to boot Linux on the QEMU SBSA and RPi4. Currently only the
mandatory ACPI tables are supported, allowing Linux to boot
without errors.
The QEMU SBSA support is feature complete and provides the same
functionality as the EDK2 implementation.
The changes were tested on real hardware as well as on QEMU v9.0:
qemu-system-aarch64 -machine sbsa-ref -nographic -cpu cortex-a57 \
-pflash secure-world.rom \
-pflash unsecure-world.rom
qemu-system-aarch64 -machine raspi4b -kernel u-boot.bin -cpu cortex-a72 \
-smp 4 -m 2G -drive file=raspbian.img,format=raw,index=0 \
-dtb bcm2711-rpi-4-b.dtb -nographic
Tested against FWTS V24.03.00.
Known issues:
- The QEMU rpi4 support is currently limited as it doesn't emulate PCI,
USB or ethernet devices!
- The SMP bringup doesn't work on RPi4, but works in QEMU (possibly
  cache related).
- PCI on RPI4 isn't working on real hardware since the pcie_brcmstb
Linux kernel module doesn't support ACPI yet.
Link: https://lore.kernel.org/r/20241023132116.970117-1-patrick.rudolph@9elements.com
Add QEMU's SBSA ref board to azure pipelines and gitlab CI to run tests on it.
TEST: Run on Azure pipelines and confirmed that tests succeed.
Signed-off-by: Patrick Rudolph <patrick.rudolph@9elements.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Build and run qemu_arm64_lwip_defconfig in CI. This tests the lightweight
IP (lwIP) implementation of the dhcp, tftpboot and ping commands.
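For reference, the board can be built locally with something like (the
toolchain prefix is an assumption):

make qemu_arm64_lwip_defconfig
make -j$(nproc) CROSS_COMPILE=aarch64-linux-gnu-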
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
When we have platforms being emulated by QEMU we cannot rely on the
"sleep" command running for the expected wall-clock amount of time. Even
with our current allowance for deviation from expected time, it will
still fail from time to time. Exclude the sleep test here.
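A hedged sketch of excluding it locally via a pytest keyword expression
(the board name is just an example; extra arguments are assumed to be
passed through to pytest):

./test/py/test.py --bd qemu_arm64 --build -k 'not sleep'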
Signed-off-by: Tom Rini <trini@konsulko.com>
Jiaxun Yang <jiaxun.yang@flygoat.com> says:
Hi all,
This series enables the qemu-xtensa board.
For dc232b CPU it needs to be built with toolchain[1].
This is a side product of my investigating architectures where the
physical address != virtual address in U-Boot. Now we can get it
covered under CI and regular tests.
VirtIO devices are not working as expected due to U-Boot's assumption
that VA == PA everywhere; I'm going to get this fixed later.
My Xtensa knowledge is pretty limited, Xtensa people please
feel free to point out if I got anything wrong.
Thanks
[1]: https://github.com/foss-xtensa/toolchain/releases/download/2020.07/x86_64-2020.07-xtensa-dc232b-elf.tar.gz
Both GitLab and Azure (and other CI systems) have native support for
displaying JUnitXML test report results. The pytest framework that we
use can generate these reports. Change our CI tests so that they will
generate these reports and then have the respective CI platform pick
them up. We write to different locations because of where each CI is
(and isn't) able to easily pass things along.
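For reference, report generation uses pytest's built-in option; a
hedged local example, assuming extra arguments are passed through to
pytest:

./test/py/test.py --bd sandbox --build --junitxml=/tmp/results.xml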
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Instead of downloading coreboot binaries from a Google drive location,
use the ones we have built ourselves.
Signed-off-by: Tom Rini <trini@konsulko.com>
The primary motivation for having a sandbox-without-LTO build in CI is
to ensure that this option does not break. We now have the ability to
run tests of specific options being enabled/disabled, so drop the parts
of CI that build and test that configuration specifically and add a
build test instead. We still test that "NO_LTO=1" (rather than editing
the config file) works, via the ftrace tests.
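For reference, a sandbox build without LTO can still be produced with:

make sandbox_defconfig
NO_LTO=1 make -j$(nproc)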
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
At this point we have all of the defconfigs maintained again, so
re-enable the check to prevent further regressions.
Signed-off-by: Tom Rini <trini@konsulko.com>