New VBE Boot Method: Decoupling Your OS and Devicetrees

In the world of embedded systems, a Flattened Image Tree (FIT) is the standard way to package a bootable OS, typically bundling the kernel, a ramdisk, and the necessary devicetree (FDT) into a single, verifiable file. While convenient, this approach tightly couples the OS with its hardware description. But what if the OS and the devicetree could have independent lifecycles?

A new patch series introduces an enhancement to U-Boot’s Verified Boot for Embedded (VBE) flow that does just that, adding significant flexibility for system integrators and distributors.

The Challenge: Separate Lifecycles

For a Linux distribution aiming to support a wide range of hardware, it’s often desirable to separate the OS from the OEM-controlled devicetrees. This allows the OEM to update the devicetree to fix hardware-specific issues or enable new features without requiring a full OS update from the distro. Conversely, the OS can be updated without touching the OEM’s hardware configuration.

This series addresses this challenge by introducing a new boot method, CONFIG_BOOTMETH_VBE_ABREC_OS, which allows a devicetree to be loaded from a separate, “load-only” FIT before the main OS FIT is processed.


How It Works: A Two-Step Boot Process

The new VBE boot method splits the boot into two stages, first loading an OEM devicetree FIT and then the main OS FIT. It relies on a state file and enhancements to mkimage and the bootm command.

  1. State-Driven Boot Selection: The process starts by looking for a vbe-state file in the boot partition. This file, which is a simple devicetree blob, tells U-Boot which OS slot to boot next: A, B, or recovery. This maintains the robust A/B update scheme that VBE is known for.
  2. The OEM Devicetree FIT: After selecting a slot (e.g., slot ‘A’), U-Boot checks for an OEM-provided FIT, such as a/oem.fit. Thanks to a new --load-only option in mkimage, this FIT can be created to contain only devicetrees, without a kernel image.
  3. Restartable bootm: If an OEM FIT is found, U-Boot loads it using bootm. Since there’s no OS to boot, bootm simply loads the best-matching devicetree into memory and exits. The key innovation here is that the bootm process can now be restarted.
  4. Booting the OS: U-Boot then proceeds to load the main OS FIT (e.g., as specified in an extlinux.conf file). It calls bootm again, but this time with a flag indicating it’s a restart. This tells bootm to skip loading a devicetree from the OS FIT and instead use the one already loaded from the OEM FIT.

The end result is that the OS boots using the devicetree provided by the OEM, achieving a clean separation of concerns.
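
To make the flow a little more concrete, here is a heavily simplified sketch of the two bootm calls. Every name in it (the helpers and the restart flag) is illustrative only; the real logic lives in the VBE bootmeth and the bootm code.

#include <errno.h>

/* Hypothetical helpers and flag, standing in for the real VBE/bootm code */
#define BOOTM_FLAG_RESTART	1
int load_oem_fit(const char *slot, unsigned long *fdt_addr);	/* load-only FIT */
int boot_os_fit(const char *slot, int flags);			/* e.g. via extlinux.conf */

/* Sketch of the two-stage flow for the selected slot (A, B or recovery) */
int vbe_boot_slot_sketch(const char *slot)
{
	unsigned long oem_fdt = 0;
	int ret;

	/* Stage 1: run bootm on the load-only OEM FIT, if one exists */
	ret = load_oem_fit(slot, &oem_fdt);
	if (ret && ret != -ENOENT)
		return ret;

	/*
	 * Stage 2: boot the OS FIT.  If an OEM devicetree was loaded, pass
	 * the restart flag so bootm keeps it rather than taking a devicetree
	 * from the OS FIT.
	 */
	return boot_os_fit(slot, oem_fdt ? BOOTM_FLAG_RESTART : 0);
}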


Under the Hood

This powerful new feature is enabled by a series of changes:

  • mkimage Enhancement: The mkimage tool can now create load-only FITs, which are essential for packaging the devicetrees separately.
  • PXE/Extlinux Integration: The PXE and extlinux boot methods have been updated to support restarting a boot sequence, allowing the devicetree to be preserved across the two bootm calls.
  • Refactoring and Cleanup: The series includes numerous cleanups, such as improving FIT information display and refactoring the PXE parsing logic for better maintainability.
  • Comprehensive Testing: A new set of unit tests for the VBE OS flow has been added for sandbox, ensuring the feature is robust and reliable.
  • Documentation: The new feature is accompanied by detailed documentation, which you can find in doc/develop/bootstd/vbe_os.rst.

This series is a great example of how U-Boot continues to evolve to meet the complex demands of modern embedded systems. By decoupling the OS and devicetree, it provides a more flexible and maintainable boot architecture for product developers and OEMs alike.




Virtio-SCSI Arrives, Backed by a Major SCSI Overhaul

We’re excited to announce a significant new feature in U-Boot: a virtio-scsi driver. While U-Boot has long supported virtio-blk for block device access in virtualized environments, virtio-scsi offers greater flexibility, allowing a single virtio device to host multiple disks (LUNs) and supporting features like hotplug.

This comprehensive 27-patch series does more than just add a new driver. To make virtio-scsi a reality, U-Boot’s entire SCSI subsystem has received a much-needed modernization, resulting in a faster, more robust, and more maintainable implementation for all SCSI devices.


Smarter Scanning with REPORT LUNS

One of the most significant improvements is in how U-Boot discovers SCSI devices. Previously, a scsi scan would blindly iterate through every possible target and Logical Unit Number (LUN). In a QEMU environment, this could mean checking up to 256 targets, each with 16,384 LUNs—a time-consuming process for finding just one or two disks.

The SCSI subsystem now uses the REPORT LUNS command. Instead of guessing, U-Boot simply asks each target which LUNs it actually has. This can dramatically speed up the scanning process and reduce unnecessary bus traffic, providing a much snappier user experience.
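
To illustrate the idea (this is a sketch, not U-Boot’s actual implementation), the command is pleasantly simple: a 12-byte CDB goes out, and the response is a big-endian LUN-list length followed by one 8-byte entry per LUN, as defined in the SPC specification.

#include <stdint.h>
#include <string.h>

#define SCSI_REPORT_LUNS	0xa2	/* SPC opcode */

/* Build the 12-byte REPORT LUNS CDB, asking for up to alloc_len bytes back */
static void build_report_luns_cdb(uint8_t cdb[12], uint32_t alloc_len)
{
	memset(cdb, 0, 12);
	cdb[0] = SCSI_REPORT_LUNS;
	cdb[6] = alloc_len >> 24;
	cdb[7] = alloc_len >> 16;
	cdb[8] = alloc_len >> 8;
	cdb[9] = alloc_len;
}

/*
 * Parse the response: a 4-byte big-endian LUN-list length, 4 reserved bytes,
 * then one 8-byte entry per LUN.  Returns how many LUNs were reported, so the
 * scan only probes those instead of guessing at thousands of possibilities.
 */
static unsigned int parse_report_luns(const uint8_t *resp, unsigned int resp_len,
				      uint8_t *luns, unsigned int max_luns)
{
	uint32_t list_len = ((uint32_t)resp[0] << 24) | (resp[1] << 16) |
			    (resp[2] << 8) | resp[3];
	unsigned int count = list_len / 8, found = 0, i;

	for (i = 0; i < count && found < max_luns; i++) {
		unsigned int off = 8 + i * 8;

		if (off + 8 > resp_len)
			break;
		/* flat (peripheral) addressing: the LUN number is in byte 1 */
		luns[found++] = resp[off + 1];
	}

	return found;
}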


Better Error Reporting and Code Quality

A key focus of this series was improving the robustness and maintainability of the SCSI and partition code.

No More Silent Failures

Have you ever tried to list partitions on a disk and received a cryptic “unsupported partition type” message, even when you knew the disk was valid? This often happened because a lower-level read error (e.g., from a bad LUN or an inaccessible disk) was silently ignored by the partition drivers.

This has now been fixed. The partition probing functions now correctly propagate I/O errors, so if a disk read fails, you will see a clear “Error reading from device” message instead of being led down a confusing diagnostic path.
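
The pattern is simple but worth spelling out. Here is a minimal sketch with hypothetical helper names; it is not the actual partition-driver code.

#include <errno.h>
#include <stdint.h>

/* Hypothetical helpers, standing in for the real block-layer calls */
int read_block(void *dev, uint64_t lba, void *buf);	/* <0 on I/O error */
int table_is_valid(const void *buf);

/*
 * Sketch of the fix: a failed read is reported as -EIO rather than being
 * folded into a generic "unsupported partition type" result.
 */
int probe_partition_table(void *dev)
{
	uint8_t buf[512];

	if (read_block(dev, 0, buf) < 0)
		return -EIO;	/* the read itself failed: say so */

	if (!table_is_valid(buf))
		return -ENOENT;	/* the read worked, but no table was found */

	return 0;
}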

Cleaning Up the Code

The core scsi_read() and scsi_write() functions have been thoroughly refactored. The changes make the logic much easier to follow (see the sketch after this list) by:

  • Improving the logic for handling read/write loops
  • Replacing magic numbers with proper constants and inquiry response structures
  • Ensuring compliance with modern SCSI specifications, which handle LUN addressing differently
  • Fixing subtle bugs, like off-by-one errors in the device scan loop
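
As a rough illustration of the shape this takes, here is a simplified, hypothetical read loop. It is not the real code, but it shows the idea of splitting a large request into chunks that fit in a single READ(10) command, with named constants instead of magic numbers.

#include <stdint.h>
#include <stddef.h>

/* Illustrative limit: READ(10) carries a 16-bit block count */
#define MAX_BLKS_PER_CMD	0xffff

/* Hypothetical transport hook, standing in for the host driver's exec call */
int issue_read10(void *dev, uint64_t lba, uint16_t blks, void *buf);

/* Sketch of a chunked read loop: advance the LBA and buffer after each piece */
int scsi_read_sketch(void *dev, uint64_t lba, uint32_t blkcnt, uint32_t blksz,
		     void *buffer)
{
	uint8_t *buf = buffer;

	while (blkcnt) {
		uint16_t todo = blkcnt > MAX_BLKS_PER_CMD ?
				MAX_BLKS_PER_CMD : blkcnt;
		int ret = issue_read10(dev, lba, todo, buf);

		if (ret)
			return ret;
		lba += todo;
		buf += (size_t)todo * blksz;
		blkcnt -= todo;
	}

	return 0;
}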

The New virtio-scsi Driver 🚀

With these foundational improvements in place, the new virtio-scsi driver integrates smoothly into U-Boot. It registers itself as a SCSI host, allowing the standard scsi commands to work transparently with virtio-based disks.

To make testing and usage easier, the QEMU build scripts have also been updated to support booting from a disk attached via virtio-scsi, allowing developers and users to immediately take advantage of this new capability.

In summary, this series is a good example of holistic development. It not only delivers a powerful new feature but also strengthens the core infrastructure it’s built upon, benefiting all users of U-Boot’s SCSI subsystem.




Giving FIT-loading a Much-Needed Tune-Up

The U-Boot boot process relies heavily on the Flattened Image Tree (FIT) format to package kernels, ramdisks, device trees, and other components. At the heart of this lies the fit_image_load() function, which is responsible for parsing the FIT, selecting the right images, and loading them into memory.

Over the years, as more features like the “loadables” property were added, this important function grew in size and complexity. While it was a significant improvement over the scattered code it replaced, it had become a bit unwieldy—over 250 lines long! Maintaining and extending such a large function can be challenging.

Recognizing this, U-Boot developer Simon Glass recently undertook a refactoring effort to improve its structure and maintainability.


A Classic Refactor: Divide and Conquer

The core strategy of this patch series was to break down the monolithic fit_image_load() function into a collection of smaller, more focused helper functions. This makes the code easier to read and debug, and paves the way for future feature development.

The refactoring splits the loading process into logical steps, each now handled by its own function:

  • Image Selection: A new select_image() function now handles finding the correct configuration and image node within the FIT.
  • Verification and Checks: The print_and_verify() and check_allowed() functions centralize image verification and checks for things like image type, OS, and CPU architecture.
  • Loading and Decompression: The actual data loading and decompression logic were moved into handle_load_op() and decomp_image(), respectively.

Along with this restructuring, the series includes several smaller cleanups, such as removing unused variables and tidying up conditional compilation (#ifdef) directives for host builds.
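
To give a feel for the new shape, here is an illustrative outline. The helper names come from the series, but the signatures and the context structure are invented for this sketch.

struct load_ctx;	/* hypothetical context bundling the existing parameters */
int select_image(struct load_ctx *ctx);
int print_and_verify(struct load_ctx *ctx, int node);
int check_allowed(struct load_ctx *ctx, int node);
int handle_load_op(struct load_ctx *ctx, int node);
int decomp_image(struct load_ctx *ctx, int node);

/* Rough shape of the refactored flow; not the actual code */
int fit_image_load_sketch(struct load_ctx *ctx)
{
	int node, ret;

	node = select_image(ctx);		/* pick the config and image node */
	if (node < 0)
		return node;

	ret = print_and_verify(ctx, node);	/* signature / hash checks */
	if (!ret)
		ret = check_allowed(ctx, node);	/* image type, OS, arch checks */
	if (!ret)
		ret = handle_load_op(ctx, node); /* copy data to the load address */
	if (!ret)
		ret = decomp_image(ctx, node);	/* decompress if needed */

	return ret;
}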


Test Suite Improvements ⚙️

Good code changes are always backed by solid tests. This effort also included several improvements to the FIT test suite:

  • The test_fit() routine was renamed to test_fit_base() to prevent naming conflicts with other tests.
  • The test was updated to no longer require a full U-Boot restart, significantly speeding up test execution.
  • A new check was added to ensure U-Boot correctly reports an error when a required kernel image is missing from the FIT.

For a detailed look at all the changes, you can check out the merge commit or patches.




The pytest / board Integration

The integration of pytest with real boards (test.py) was written by Stephen Warren of Nvidia, some 9 years ago. It has certainly stood the test of time. The original code has been tweaked for various purposes over the years, but considering the number of tests added in that time, the changes are very small. Here is a diffstat for the changes up until a recent rename:

 test/py/multiplexed_log.css           |  11 +-
 test/py/multiplexed_log.py            | 133 ++++++++++---
 test/py/test.py                       |  31 ++--
 test/py/u_boot_console_base.py        | 341 ++++++++++++++++++++++++++++------
 test/py/u_boot_console_exec_attach.py |  40 ++--
 test/py/u_boot_console_sandbox.py     |  54 ++++--
 test/py/u_boot_spawn.py               | 212 ++++++++++++++++++---
 test/py/u_boot_utils.py               | 197 ++++++++++++++++++--
 8 files changed, 848 insertions(+), 171 deletions(-)

When Stephen wrote the code, there was no Gitlab system in U-Boot (it used Travis). Tom Rini added Gitlab in 2019: test.py mostly just worked in that environment. One of the reasons the code has proven so stable is that it deals with boards at the console level, simply relying on shell-script hooks to start up and communicate with boards. These scripts can be made to do a lot of different things, such as powering boards on and off, sending U-Boot over USB, etc.

But perhaps it might be time to make a few changes. Let me give a bit of background first.

In 2020 I decided to try to get my collection of boards into some sort of lab, since picking out a board to test with manually was quite annoying. I wrote Labman, a Python program which creates various files based on a yaml description of the lab: udev rules, an /etc/fstab file and small Python programs which know how to build U-Boot and write it to a board, including dealing with the reset/recovery sequences, SD-wire, etc. With all that in place, Tbot provides a way to get an interactive session on a board, as well as a way to run U-Boot tests.

Early last year I decided to take another look at this. The best things about Labman were its unified lab description (including understanding how many ports each USB hub has and the address of each) and a ‘labman check’ option which quickly pointed to connection problems. The bad thing about Labman was…well, everything else. It was annoying to re-run the scripts and restart udev after each lab change. The Python code-generation was a strange way of dealing with the board-specific logic.

Tom Rini suggested looking at Labgrid. After a bit of investigation, it looked good to me. The specification of hubs is somewhat primitive and the split between the exporter and the environment is confusing. But the structure of it (coordinator, exporters and clients) is much better than Labman. The approach to connecting to boards (ssh) is better as well, since it starts ser2net automatically. Labgrid is a thin layer of code over some existing services and, overall, it is much better designed.

So overall I was pretty enthusiastic and set to work on creating an integration for U-Boot. Now I can again build U-Boot, write it to a board and start it up with a simple command:

ellesmere:~/u$ ub-int rock5b
Building U-Boot in sourcedir for rock5b-rk3588
Bootstrapping U-Boot from dir /tmp/b/rock5b-rk3588
Writing U-Boot using method rockchip
DDR 9fa84341ce typ 24/09/06-09:51:11,fwver: v1.18

<...much unfortunate spam from secret binaries here...>

U-Boot Concept 2025.01-rc3-01976-g290829cc0d20 (Jul 20 2025 - 20:10:36 -0600)

Model: Radxa ROCK 5B
SoC:   RK3588
DRAM:  4 GiB
Core:  362 devices, 34 uclasses, devicetree: separate
MMC:   mmc@fe2c0000: 1, mmc@fe2d0000: 2, mmc@fe2e0000: 0
Loading Environment from nowhere... OK
In:    serial@feb50000
Out:   serial@feb50000
Err:   serial@feb50000
Model: Radxa ROCK 5B
SoC:   RK3588
Net:   No ethernet found.
Hit any key to stop autoboot:  0 
=> 

I’ve also used this integration to make my lab accessible to Gitlab, so that any branch or pull request can be tested in the lab, to make sure it has not broken U-Boot.

So, back to the topic. The Labgrid integration supports test.py and it works fine. A minor improvement is ‘lab mode’, where Labgrid handles getting U-Boot to a prompt, making it work with boards like the Beagleplay, which has a special autoboot message.

But the test.py interface is (at last) showing its age. Its only real interface to Labgrid is via the u-boot-test-console script, which just runs the Labgrid client. Some tests restart the board, perhaps because they boot an OS or do something destructive to the running U-Boot. This results in U-Boot being built again, flashed to the board again and started again. When something breaks, it could be a lab failure or a test failure, but all we can do is show the output and let the user figure it out. The current lab works remarkably well given its fairly basic setup, but it is certainly not reliable: sometimes a board will fail a test but pass when it is run again.

So I am thinking that it might make sense to integrate test.py and Labgrid a little more closely. Both are written in Python, so test.py could import some Labgrid modules, get the required target, start up the console and then let the tests run. If a test wants to restart, a function can do this in the most efficient and reliable way possible.

This might be more efficient and it might also provide better error messages. We would then not need the hook functions for the Labgrid case.




New U-Boot CI Lab Page

U-Boot has a new continuous integration (CI) lab page that provides a real-time look at the status of various development boards. The page, located at https://lab.u-boot.org/, offers a simple and clean interface that allows developers and curious people to quickly check on the health and activity of each board in the lab.

When you first visit the page, you’ll see a grid of all the available boards. Each board’s card displays its name and current status, making it easy to see which boards are online and which are not. A single click on any board will show a console view, taken from the last health check. This allows you to see why boards are failing, for example.

This new lab page is a nice resource for the U-Boot community. It provides a transparent and accessible way to monitor this part of the CI system.

Check it out and get in touch if you have any suggestions or feedback! 🧪




QEMU improvements

Since 2018 U-Boot has had a good selection of features for running on top of QEMU, including:

  • virtio using PCI devices (legacy and modern)
  • virtio using memory-mapped I/O
  • block devices, to support filesystems, etc. (virtio-blk)
  • network devices (virtio-net)
  • random-number device (virtio-rng)

Most of this was written by Bin Meng. It uses driver model and is nicely implemented.

What’s new?

More recently a few more features have been added:

  • SCSI devices, for more flexible discovery with multiple disks (virtio-scsi)
  • Filesystem devices, for access to host files (virtio-fs). See the separate post about this.
  • Visibility into the available virtio devices (virtio list)
  • Additions to the qfw command to improve visibility

The `virtio list` command can be useful for seeing what paravirtualised devices are available and whether U-Boot has a driver for them. Here you can see U-Boot running on an x86 host.

=> virtio scan
=> virtio list
Name                  Type            Driver
--------------------  --------------  ---------------
virtio-pci.m#0         5: balloon     (none)
virtio-pci.m#1         4: rng         virtio-rng#1
virtio-pci.m#2        12: input-host  (none)
virtio-pci.m#3        12: input-host  (none)
virtio-pci.m#4        13: vsock       (none)
virtio-pci.m#5         3: serial      (none)
virtio-pci.m#6         8: scsi        virtio-scsi#6
virtio-pci.m#7         9: 9p          (none)
virtio-pci.m#8        1a: fs          virtio-fs#8
virtio-pci.m#9        10: gpu         (none)
virtio-pci.m#10        1: net         virtio-net#a
=>

Here you can see how the random-number driver can be used:

=> random 1000 10
16 bytes filled with random data
=> md.b 1000 10
00001000: 00 3f e2 f8 a1 70 4e 5f 8c 19 19 ba 18 76 32 bc  .?...pN_.....v2.
=> 

SCSI devices are accessed by scanning the bus first. Note that standard boot does this automatically if you are just booting an OS.

=> scsi scan
scanning bus for devices...
  Device 0: (0:1) Vendor: QEMU Prod.: QEMU HARDDISK Rev: 2.5+
            Type: Hard Disk
            Capacity: 10240.0 MB = 10.0 GB (20971520 x 512)
=> part list scsi 0

Partition Map for scsi device 0  --   Partition Type: EFI

Part	Start LBA	End LBA		Name
	Attributes
	Type GUID
	Partition GUID
  1	0x00200800	0x013fffde	""
	attrs:	0x0000000000000000
	type:	0fc63daf-8483-4772-8e79-3d69d8477de4
		(linux)
	guid:	e53def26-b3a7-4227-8175-b933282b824f
  e	0x00000800	0x000027ff	""
	attrs:	0x0000000000000000
	type:	21686148-6449-6e6f-744e-656564454649
		(21686148-6449-6e6f-744e-656564454649)
	guid:	a52718a3-62a4-483a-b8fa-38cefacad2fd
  f	0x00002800	0x000377ff	""
	attrs:	0x0000000000000000
	type:	c12a7328-f81f-11d2-ba4b-00a0c93ec93b
		(EFI System Partition)
	guid:	077b8491-26d1-4984-a86f-2c8674c438ee
 10	0x00037800	0x00200000	""
	attrs:	0x0000000000000000
	type:	bc13c2ff-59e6-4262-a352-b275fd6f7172
		(bc13c2ff-59e6-4262-a352-b275fd6f7172)
	guid:	3b088c0d-2f7b-4d92-b7c0-561d0e2cdd30
=> 

You can also inspect some of the qfw tables directly. The qfw list command has been around for a while, although some minor updates were added recently. It shows the files that QEMU presents to U-Boot:

=> qfw list 
    Addr     Size Sel Name
-------- -------- --- ------------
       0        0  20 bios-geometry                                           
       0       6d  21 bootorder                                               
3fcab000       14  22 etc/acpi/rsdp                                           
3fcad000    20000  23 etc/acpi/tables                                         
       0        4  24 etc/boot-fail-wait                                      
       0       28  25 etc/e820                                                
       0        8  26 etc/msr_feature_control                                 
       0       18  27 etc/smbios/smbios-anchor                                
       0      169  28 etc/smbios/smbios-tables                                
       0        1  29 etc/smi/features-ok                                     
       0        8  2a etc/smi/requested-features                              
       0        8  2b etc/smi/supported-features                              
       0        6  2c etc/system-states                                       
       0     1000  2d etc/table-loader                                        
       0        0  2e etc/tpm/log                                             
       0        8  2f etc/vmgenid_addr                                        
3fcac000     1000  30 etc/vmgenid_guid                                        
       0     2400  31 genroms/kvmvapic.bin 

Low-level features

The new qfw table command can be useful for seeing exactly how the ACPI tables are provided:

=> qfw table
  0 alloc: align 10 zone fseg name 'etc/acpi/rsdp'
  1 alloc: align 1000 zone high name 'etc/vmgenid_guid'
  2 alloc: align 40 zone high name 'etc/acpi/tables'
  3 add-chksum offset 49 start 40 length 30f7 name 'etc/acpi/tables'
  4 add-ptr offset 315b size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
  5 add-ptr offset 315f size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
  6 add-ptr offset 31c3 size 8 dest 'etc/acpi/tables' src 'etc/acpi/tables'
  7 add-chksum offset 3140 start 3137 length f4 name 'etc/acpi/tables'
  8 add-chksum offset 3234 start 322b length 120 name 'etc/acpi/tables'
  9  10 add-ptr offset 3375 size 4 dest 'etc/acpi/tables' src 'etc/vmgenid_guid'
 11 add-chksum offset 3354 start 334b length ca name 'etc/acpi/tables'
 12 add-chksum offset 341e start 3415 length 38 name 'etc/acpi/tables'
 13 add-chksum offset 3456 start 344d length 208 name 'etc/acpi/tables'
 14 add-chksum offset 365e start 3655 length 3c name 'etc/acpi/tables'
 15 add-chksum offset 369a start 3691 length 28 name 'etc/acpi/tables'
 16 add-ptr offset 36dd size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 17 add-ptr offset 36e1 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 18 add-ptr offset 36e5 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 19 add-ptr offset 36e9 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 20 add-ptr offset 36ed size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 21 add-ptr offset 36f1 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 22 add-ptr offset 36f5 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 23 add-chksum offset 36c2 start 36b9 length 40 name 'etc/acpi/tables'
 24 add-ptr offset 10 size 4 dest 'etc/acpi/rsdp' src 'etc/acpi/tables'
 25 add-chksum offset 8 start 0 length 14 name 'etc/acpi/rsdp'

Since QEMU can also provide SMBIOS tables, these can be inspected using the smbios command:

=> smbios
SMBIOS 3.0.0 present.
9 structures occupying 361 bytes
Table at 0x3fcdc018

Handle 0x0100, DMI type 1, 27 bytes at 0x3fcdc018
System Information
	Manufacturer: QEMU
	Product Name: Standard PC (Q35 + ICH9, 2009)
	Version: pc-q35-8.2
	Serial Number: 
	UUID: 8967d155-8cbb-484f-9246-d2c4eeeedff1
	Wake-up Type: Power Switch
	SKU Number: 
	Family: 

Handle 0x0200, DMI type 2, 15 bytes at 0x3fcdc063
Baseboard Information
	Manufacturer: Canonical Ltd.
	Product Name: LXD
	Version: pc-q35-8.2
	Serial Number: 
	Asset Tag: 
	Feature Flags: 0x01
	Chassis Location: 
	Chassis Handle: 0x0300
	Board Type: Motherboard
	Number of Contained Object Handles: 0x00

Handle 0x0300, DMI type 3, 22 bytes at 0x3fcdc091
Baseboard Information
	Manufacturer: QEMU
	Type: 0x01
	Version: pc-q35-8.2
	Serial Number: 
	Asset Tag: 
	Boot-up State: Safe
	Power Supply State: Safe
	Thermal State: Safe
	Security Status: Unknown
	OEM-defined: 0x00000000
	Height: 0x00
	Number of Power Cords: 0x00
	Contained Element Count: 0x00
	Contained Element Record Length: 0x00
	SKU Number: 

Handle 0x0400, DMI type 4, 48 bytes at 0x3fcdc0b8
Processor Information:
	Socket Designation: CPU 0
	Processor Type: Central Processor
	Processor Family: Other
	Processor Manufacturer: QEMU
	Processor ID word 0: 0x000a06a4
	Processor ID word 1: 0x0f8bfbff
	Processor Version: pc-q35-8.2
	Voltage: 0x00
	External Clock: 0x0000
	Max Speed: 0x07d0
	Current Speed: 0x07d0
	Status: 0x41
	Processor Upgrade: Other
	L1 Cache Handle: 0xffff
	L2 Cache Handle: 0xffff
	L3 Cache Handle: 0xffff
	Serial Number: 
	Asset Tag: 
	Part Number: 
	Core Count: 0x16
	Core Enabled: 0x16
	Thread Count: 0x16
	Processor Characteristics: 0x0002
	Processor Family 2: Other
	Core Count 2: 0x0016
	Core Enabled 2: 0x0016
	Thread Count 2: 0x0016
	Thread Enabled: 0x5043

Handle 0x1000, DMI type 16, 23 bytes at 0x3fcdc0ff
Header and Data:
	00000000: 10 17 00 10 01 03 06 00 00 10 00 fe ff 01 00 00
	00000010: 00 00 00 00 00 00 00

Handle 0x1100, DMI type 17, 40 bytes at 0x3fcdc118
Header and Data:
	00000000: 11 28 00 11 00 10 fe ff ff ff ff ff 00 04 09 00
	00000010: 01 00 07 02 00 00 00 02 00 00 00 00 00 00 00 00
	00000020: 00 00 00 00 00 00 00 00
Strings:
	String 1: DIMM 0
	String 2: QEMU

Handle 0x1300, DMI type 19, 31 bytes at 0x3fcdc14d
Header and Data:
	00000000: 13 1f 00 13 00 00 00 00 ff ff 0f 00 00 10 01 00
	00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Handle 0x2000, DMI type 32, 11 bytes at 0x3fcdc16e
Header and Data:
	00000000: 20 0b 00 20 00 00 00 00 00 00 00

Handle 0x7f00, DMI type 127, 4 bytes at 0x3fcdc17b
End Of Table

You can see above that these are created by QEMU, under control of LXD. Thus the environment which U-Boot sees can be controlled by QEMU or by tools which integrate QEMU.

What’s next?

U-Boot has a fairly solid set of QEMU features at this point. It provides an alternative to OVMF in some cases, with a faster boot, while still using EFI. Future work may include looking at booting without EFI, thus saving time and reducing complexity.




Streamlining the Final Leap: Unifying U-Boot’s Pre-OS Cleanup

What happens in the final moments before U-Boot hands control over to the operating system? Until recently, the answer was, “it’s complicated.” Each architecture (ARM, x86, RISC-V and so on) had its own way of handling the final pre-boot cleanup, leading to a maze of slightly different functions and duplicated code. It was difficult to know what was really happening just before the kernel started.

Thanks to a recent series of commits in Concept, this critical part of the boot process has been significantly cleaned up and unified.

A Simpler, Centralized Approach

The core of this effort is the introduction of a new generic function: bootm_final(). This function’s purpose is to consolidate all the common steps that must happen right before booting an OS. By moving to this centralized model, we’ve replaced various architecture-specific functions, like bootm_announce_and_cleanup(), with a single, unified call.

This new approach has been adopted across the x86, RISC-V, and ARM architectures, as well as for the EFI loader.
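
As a rough illustration only (the real bootm_final() is defined by the series and its exact contents may differ), the consolidation gathers steps like these, which previously lived in per-architecture code:

/* Conceptual sketch; ordering and argument list are illustrative */
void bootm_final_sketch(void)
{
	/* record the hand-off point for the bootstage report */
	bootstage_mark_name(BOOTSTAGE_ID_BOOTM_HANDOFF, "start_kernel");

	board_quiesce_devices();	/* let board code shut its devices down */
	disable_interrupts();		/* no interrupts once the OS takes over */
	cleanup_before_linux();		/* arch-specific cache/MMU cleanup */
}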

Key Improvements in This Series

  • Unified Cleanup: Common tasks like disabling interrupts, quiescing board devices, and calling cleanup_before_linux() are now handled in one place, reducing code duplication and increasing consistency.
  • Better Bootstage Reporting: The EFI boot path now benefits from bootstage processing. If enabled, U-Boot will produce a bootstage report, offering better insights into boot-time performance when launching an EFI application. The report is emitted when exit-boot-services is called, which allows timing of GRUB and the kernel’s EFI stub, if present.
  • Code Simplification: With the new generic function in place, redundant architecture-specific functions have been removed. We also took the opportunity to drop an outdated workaround for an old version of GRUB (EFI_GRUB_ARM32_WORKAROUND).

This cleanup makes the boot process more robust, easier to understand, and simpler to maintain. While there is still future work to be done in this area, this is a major step forward in standardizing the final hand-off from U-Boot to the OS.




A boot logo for EFI

U-Boot Concept now supports the EFI Boot Graphics Resource Table (BGRT) feature. This enhancement allows for a more seamless and branded boot experience on devices that use EFI_LOADER, U-Boot’s implementation of the Unified Extensible Firmware Interface (UEFI).

What is BGRT?

The BGRT is an ACPI (Advanced Configuration and Power Interface) table that allows the firmware to pass a logo or image to the operating system during the boot process. This means that instead of a generic boot screen, users can be greeted with a custom logo, such as a company or product brand, creating a more professional and polished user experience.
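
For reference, the table itself is small. The sketch below follows the layout given in the ACPI specification; U-Boot has its own structure definitions, so this is purely to show what gets handed to the OS.

#include <stdint.h>

/* Standard ACPI table header, shared by all ACPI tables */
struct acpi_table_header {
	char signature[4];		/* "BGRT" */
	uint32_t length;
	uint8_t revision;
	uint8_t checksum;
	char oem_id[6];
	char oem_table_id[8];
	uint32_t oem_revision;
	char creator_id[4];
	uint32_t creator_revision;
} __attribute__((packed));

/* BGRT body, per the ACPI spec */
struct acpi_bgrt {
	struct acpi_table_header header;
	uint16_t version;		/* currently 1 */
	uint8_t status;			/* bit 0: image is being displayed */
	uint8_t image_type;		/* 0 = BMP */
	uint64_t image_address;		/* physical address of the image */
	uint32_t image_offset_x;	/* where the logo sits on screen */
	uint32_t image_offset_y;
} __attribute__((packed));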

Why is this important for U-Boot?

By supporting BGRT, U-Boot can now provide a more consistent and visually appealing boot experience on a wider range of devices, particularly those running operating systems like Windows or Linux that support UEFI. This is especially valuable in embedded systems and custom hardware where branding and a unique user experience are important.

This new feature further solidifies U-Boot’s position as a leading bootloader for a diverse range of applications, from embedded systems to servers. It demonstrates the community’s commitment to keeping U-Boot up-to-date with the latest industry standards and providing developers with the tools they need to create modern and user-friendly products.




Host-file Access with New virtio-fs

What is virtio-fs?

For those unfamiliar, virtio-fs is a modern shared filesystem designed specifically for virtualised environments. It allows a virtual machine (the “guest”) to access a directory on the host system, but it does so with a focus on performance and providing local filesystem semantics.

Unlike traditional methods like network filesystems (e.g., NFS, Samba) or even the older virtio-9p protocol, virtio-fs is engineered to take advantage of the fact that the guest and host are running on the same machine. By leveraging shared memory and a design based on FUSE (Filesystem in Userspace), it bypasses much of the communication overhead that can slow down other solutions. The result is a faster, more seamless file sharing experience that is ideal for development, testing, and booting from a root filesystem located on the host.

virtio-fs arrives in U-Boot Concept

A recent merge request in U-Boot Concept introduces a new virtio-fs driver. This initial implementation enables two key functions:

  • List directories on the host
  • Read files from the host

This is made possible by a new filesystem driver that integrates with U-Boot’s new FS, DIR, and FILE uclasses. A compatibility layer is included so that existing command-line functionalities continue to work as expected.
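
Since the FS, DIR and FILE uclasses are themselves new, the exact driver interface is best read from the source. Purely as an illustration, a driver plugging into uclasses like these might expose operations along the following lines; every name here is hypothetical, not the actual Concept API.

/* Hypothetical shape of directory/file operations; not the real uclass API */
struct fs_ops_sketch {
	int (*open_dir)(void *priv, const char *path, void **dirp);
	int (*read_dir)(void *dirp, char *name, int name_len);	/* next entry */
	int (*close_dir)(void *dirp);
	int (*read_file)(void *priv, const char *path, void *buf,
			 long offset, long len, long *actual);
};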

This new capability in U-Boot opens up more flexible and efficient workflows. For example, developers can now more easily load kernels, device tree blobs, or other artifacts directly from their development workstation into a QEMU guest running U-Boot, streamlining the entire test and debug cycle. For cloud use cases, reading configuration files from the host via virtio-fs is a common requirement.

Overall this lays a strong foundation for future enhancements to virtio-fs support within U-Boot, promising even tighter integration between guest environments and the host system.




Keeping Our Linker Lists in Line

U-Boot makes extensive use of linker-generated lists to discover everything from drivers to commands at runtime. This clever mechanism allows developers to add new features with a single macro, and the linker automatically assembles them into a contiguous array. The C code can then iterate through this array by finding its start and end markers, which are also provided by the linker.

For this to work, there’s a critical assumption: the array of structs is perfectly contiguous, with each element having the exact same size. But what happens when the linker, in its quest for optimisation, breaks this assumption?
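
In U-Boot this iteration is wrapped up in the linker_lists.h macros, which compute the element count from the distance between the start and end symbols. A short sketch of how a list of drivers is walked:

#include <stdio.h>
#include <dm/device.h>		/* struct driver */
#include <linker_lists.h>

/*
 * Sketch: walk the 'driver' linker list.  The count comes from pointer
 * subtraction of the end/start symbols, so it is only correct if every
 * entry is exactly sizeof(struct driver) apart; any padding between
 * entries silently breaks this arithmetic.
 */
static void list_drivers(void)
{
	struct driver *drv = ll_entry_start(struct driver, driver);
	const int n_ents = ll_entry_count(struct driver, driver);
	int i;

	for (i = 0; i < n_ents; i++, drv++)
		printf("%s\n", drv->name);
}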

A Little Wrinkle

We have known for a while about a subtle issue where the linker, in certain cases, would insert a few bytes of padding between elements in these lists. This is usually done to align the next element to a more efficient memory boundary (like 8 or 16 bytes).

While this is often harmless, it breaks U-Boot’s C code, which expects to find the next element by simply adding a fixed size to the address of the current one. This unexpected padding can lead to misaligned memory access, corrupted data, and hard-to-debug crashes.

Here is an example of what this looks like in the symbol table. Notice the gap between virtio_fs and virtio_fs_dir is 0x80 bytes, while the expected size is 0x78:

...
00000000011d0070 D _u_boot_list_2_driver_2_virtio_blk
00000000011d0160 D _u_boot_list_2_driver_2_virtio_fs
00000000011d01e0 D _u_boot_list_2_driver_2_virtio_fs_dir
...

This 8-byte padding (0x80 - 0x78) is the source of the problem.

A Script to the Rescue

To catch these alignment problems automatically, we’ve developed a new Python script, check_list_alignment.py, now in U-Boot Concept (merge).

The script works as follows (the essence of the check is sketched after the list):

  1. Runs nm -n on the final u-boot ELF file to get all symbols sorted by address.
  2. Automatically discovers all the different linker lists in use (e.g., driver, cmd, uclass_driver).
  3. For each list, calculates the gap between every consecutive element.
  4. Determines the most common gap size, assuming this is the correct sizeof(struct).
  5. Flags any gap that doesn’t match this common size.
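
The real check is, of course, the Python script itself; restated here as a small C illustration, the core idea is to take the most common gap as the struct size and flag anything else.

#include <stdint.h>
#include <stdio.h>

/* Illustration only: the real check is scripts/check_list_alignment.py */
static int check_gaps(const uint64_t *addr, const char *const *name, int count)
{
	uint64_t size[64], expect = 0;
	int freq[64], nsizes = 0, best = 0, problems = 0, i, j;

	/* tally the gap sizes and remember the most common one */
	for (i = 1; i < count; i++) {
		uint64_t gap = addr[i] - addr[i - 1];

		for (j = 0; j < nsizes && size[j] != gap; j++)
			;
		if (j == nsizes) {
			if (nsizes == 64)
				continue;	/* unrealistically many sizes */
			size[nsizes] = gap;
			freq[nsizes++] = 0;
		}
		if (++freq[j] > best) {
			best = freq[j];
			expect = gap;
		}
	}

	/* flag every entry whose gap differs from the expected struct size */
	for (i = 1; i < count; i++) {
		uint64_t gap = addr[i] - addr[i - 1];

		if (gap != expect) {
			printf("  - Bad gap (%#llx) before symbol: %s\n",
			       (unsigned long long)gap, name[i]);
			problems++;
		}
	}

	return problems;
}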

Now, if the linker introduces any unexpected padding, the build will fail immediately with a clear error message:

$ ./scripts/check_list_alignment.py -v u-boot
List Name           # Symbols   Struct Size (hex)
-----------------   -----------   -----------------
...
driver                       65              0x78
  - Bad gap (0x80) before symbol: _u_boot_list_2_driver_2_virtio_fs_dir
...

FAILURE: Found 1 alignment problems

This simple check provides a powerful guarantee. It ensures the integrity of our linker lists, prevents a whole class of subtle bugs, and allows developers to continue using this powerful U-Boot feature with confidence.