New U-Boot CI Lab Page

U-Boot has a new continuous integration (CI) lab page that provides a real-time look at the status of various development boards. The page, located at https://lab.u-boot.org/, offers a simple, clean interface that lets developers and anyone curious quickly check the health and activity of each board in the lab.

When you first visit the page, you’ll see a grid of all the available boards. Each board’s card displays its name and current status, making it easy to see which boards are online and which are not. A single click on any board shows a console view taken from the last health check, letting you see, for example, why a board is failing.

This new lab page is a nice resource for the U-Boot community. It provides a transparent and accessible way to monitor this part of the CI system.

Check it out and get in touch if you have any suggestions or feedback! 🧪




QEMU improvements

Since 2018 U-Boot has had a good selection of features for running on top of QEMU, including:

  • virtio using PCI devices (legacy and modern)
  • virtio using memory-mapped I/O
  • block devices, to support filesystems, etc. (virtio-blk)
  • network devices (virtio-net)
  • random-number device (virtio-rng)

Most of this was written by Bin Meng. It uses driver model and is nicely implemented.

What’s new?

More recently a few more features have been added:

  • SCSI devices, for more flexible discovery with multiple disks (virtio-scsi)
  • Filesystem devices, for access to host files (virtio-fs). See the separate post about this.
  • Visibility into the available virtio devices (virtio list)
  • Additions to the qfw command to improve visibility

The `virtio list` command can be useful for seeing what paravirtualised devices are available and whether U-Boot has a driver for them. Here you can see U-Boot running on an x86 host.

=> virtio scan
=> virtio list
Name                  Type            Driver
--------------------  --------------  ---------------
virtio-pci.m#0         5: balloon     (none)
virtio-pci.m#1         4: rng         virtio-rng#1
virtio-pci.m#2        12: input-host  (none)
virtio-pci.m#3        12: input-host  (none)
virtio-pci.m#4        13: vsock       (none)
virtio-pci.m#5         3: serial      (none)
virtio-pci.m#6         8: scsi        virtio-scsi#6
virtio-pci.m#7         9: 9p          (none)
virtio-pci.m#8        1a: fs          virtio-fs#8
virtio-pci.m#9        10: gpu         (none)
virtio-pci.m#10        1: net         virtio-net#a
=>

Here you can see how the random-number driver can be used:

=> random 1000 10
16 bytes filled with random data
=> md.b 1000 10
00001000: 00 3f e2 f8 a1 70 4e 5f 8c 19 19 ba 18 76 32 bc  .?...pN_.....v2.
=> 

SCSI devices are accessed by scanning the bus first. Note that standard boot does this automatically if you are just booting an OS.

=> scsi scan
scanning bus for devices...
  Device 0: (0:1) Vendor: QEMU Prod.: QEMU HARDDISK Rev: 2.5+
            Type: Hard Disk
            Capacity: 10240.0 MB = 10.0 GB (20971520 x 512)
=> part list scsi 0

Partition Map for scsi device 0  --   Partition Type: EFI

Part	Start LBA	End LBA		Name
	Attributes
	Type GUID
	Partition GUID
  1	0x00200800	0x013fffde	""
	attrs:	0x0000000000000000
	type:	0fc63daf-8483-4772-8e79-3d69d8477de4
		(linux)
	guid:	e53def26-b3a7-4227-8175-b933282b824f
  e	0x00000800	0x000027ff	""
	attrs:	0x0000000000000000
	type:	21686148-6449-6e6f-744e-656564454649
		(21686148-6449-6e6f-744e-656564454649)
	guid:	a52718a3-62a4-483a-b8fa-38cefacad2fd
  f	0x00002800	0x000377ff	""
	attrs:	0x0000000000000000
	type:	c12a7328-f81f-11d2-ba4b-00a0c93ec93b
		(EFI System Partition)
	guid:	077b8491-26d1-4984-a86f-2c8674c438ee
 10	0x00037800	0x00200000	""
	attrs:	0x0000000000000000
	type:	bc13c2ff-59e6-4262-a352-b275fd6f7172
		(bc13c2ff-59e6-4262-a352-b275fd6f7172)
	guid:	3b088c0d-2f7b-4d92-b7c0-561d0e2cdd30
=> 

You can also inspect some of the qfw tables directly. The `qfw list` command has been around for a while, although it recently received some minor updates. It shows the files that QEMU presents to U-Boot:

=> qfw list 
    Addr     Size Sel Name
-------- -------- --- ------------
       0        0  20 bios-geometry                                           
       0       6d  21 bootorder                                               
3fcab000       14  22 etc/acpi/rsdp                                           
3fcad000    20000  23 etc/acpi/tables                                         
       0        4  24 etc/boot-fail-wait                                      
       0       28  25 etc/e820                                                
       0        8  26 etc/msr_feature_control                                 
       0       18  27 etc/smbios/smbios-anchor                                
       0      169  28 etc/smbios/smbios-tables                                
       0        1  29 etc/smi/features-ok                                     
       0        8  2a etc/smi/requested-features                              
       0        8  2b etc/smi/supported-features                              
       0        6  2c etc/system-states                                       
       0     1000  2d etc/table-loader                                        
       0        0  2e etc/tpm/log                                             
       0        8  2f etc/vmgenid_addr                                        
3fcac000     1000  30 etc/vmgenid_guid                                        
       0     2400  31 genroms/kvmvapic.bin 

Low-level features

The new `qfw table` command is useful for seeing exactly how the ACPI tables are provided:

=> qfw table
  0 alloc: align 10 zone fseg name 'etc/acpi/rsdp'
  1 alloc: align 1000 zone high name 'etc/vmgenid_guid'
  2 alloc: align 40 zone high name 'etc/acpi/tables'
  3 add-chksum offset 49 start 40 length 30f7 name 'etc/acpi/tables'
  4 add-ptr offset 315b size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
  5 add-ptr offset 315f size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
  6 add-ptr offset 31c3 size 8 dest 'etc/acpi/tables' src 'etc/acpi/tables'
  7 add-chksum offset 3140 start 3137 length f4 name 'etc/acpi/tables'
  8 add-chksum offset 3234 start 322b length 120 name 'etc/acpi/tables'
  9  10 add-ptr offset 3375 size 4 dest 'etc/acpi/tables' src 'etc/vmgenid_guid'
 11 add-chksum offset 3354 start 334b length ca name 'etc/acpi/tables'
 12 add-chksum offset 341e start 3415 length 38 name 'etc/acpi/tables'
 13 add-chksum offset 3456 start 344d length 208 name 'etc/acpi/tables'
 14 add-chksum offset 365e start 3655 length 3c name 'etc/acpi/tables'
 15 add-chksum offset 369a start 3691 length 28 name 'etc/acpi/tables'
 16 add-ptr offset 36dd size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 17 add-ptr offset 36e1 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 18 add-ptr offset 36e5 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 19 add-ptr offset 36e9 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 20 add-ptr offset 36ed size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 21 add-ptr offset 36f1 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 22 add-ptr offset 36f5 size 4 dest 'etc/acpi/tables' src 'etc/acpi/tables'
 23 add-chksum offset 36c2 start 36b9 length 40 name 'etc/acpi/tables'
 24 add-ptr offset 10 size 4 dest 'etc/acpi/rsdp' src 'etc/acpi/tables'
 25 add-chksum offset 8 start 0 length 14 name 'etc/acpi/rsdp'
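Each add-chksum entry asks the firmware to patch the byte at `offset` so that the bytes in [start, start + length) sum to zero modulo 256, which is how ACPI table checksums are defined. Here is a minimal sketch of that operation (illustrative Python, not U-Boot code):

```python
# Illustrative sketch (not U-Boot code) of what an "add-chksum" entry
# asks the firmware to do: patch the byte at `offset` so that the bytes
# in [start, start + length) sum to zero modulo 256.
def apply_add_chksum(table: bytearray, offset: int, start: int, length: int):
    table[offset] = 0  # clear any stale checksum first
    total = sum(table[start:start + length]) % 256
    table[offset] = (256 - total) % 256

# A dummy 8-byte "table" whose checksum byte lives at offset 1
table = bytearray([0x10, 0x00, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77])
apply_add_chksum(table, offset=1, start=0, length=8)
assert sum(table) % 256 == 0  # region now sums to zero, as ACPI requires
```

The add-ptr entries are the other half of the scheme: they patch absolute pointers into the tables once the firmware has decided where to place them.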

Since QEMU can also provide SMBIOS tables, these can be inspected using the `smbios` command:

=> smbios
SMBIOS 3.0.0 present.
9 structures occupying 361 bytes
Table at 0x3fcdc018

Handle 0x0100, DMI type 1, 27 bytes at 0x3fcdc018
System Information
	Manufacturer: QEMU
	Product Name: Standard PC (Q35 + ICH9, 2009)
	Version: pc-q35-8.2
	Serial Number: 
	UUID: 8967d155-8cbb-484f-9246-d2c4eeeedff1
	Wake-up Type: Power Switch
	SKU Number: 
	Family: 

Handle 0x0200, DMI type 2, 15 bytes at 0x3fcdc063
Baseboard Information
	Manufacturer: Canonical Ltd.
	Product Name: LXD
	Version: pc-q35-8.2
	Serial Number: 
	Asset Tag: 
	Feature Flags: 0x01
	Chassis Location: 
	Chassis Handle: 0x0300
	Board Type: Motherboard
	Number of Contained Object Handles: 0x00

Handle 0x0300, DMI type 3, 22 bytes at 0x3fcdc091
Baseboard Information
	Manufacturer: QEMU
	Type: 0x01
	Version: pc-q35-8.2
	Serial Number: 
	Asset Tag: 
	Boot-up State: Safe
	Power Supply State: Safe
	Thermal State: Safe
	Security Status: Unknown
	OEM-defined: 0x00000000
	Height: 0x00
	Number of Power Cords: 0x00
	Contained Element Count: 0x00
	Contained Element Record Length: 0x00
	SKU Number: 

Handle 0x0400, DMI type 4, 48 bytes at 0x3fcdc0b8
Processor Information:
	Socket Designation: CPU 0
	Processor Type: Central Processor
	Processor Family: Other
	Processor Manufacturer: QEMU
	Processor ID word 0: 0x000a06a4
	Processor ID word 1: 0x0f8bfbff
	Processor Version: pc-q35-8.2
	Voltage: 0x00
	External Clock: 0x0000
	Max Speed: 0x07d0
	Current Speed: 0x07d0
	Status: 0x41
	Processor Upgrade: Other
	L1 Cache Handle: 0xffff
	L2 Cache Handle: 0xffff
	L3 Cache Handle: 0xffff
	Serial Number: 
	Asset Tag: 
	Part Number: 
	Core Count: 0x16
	Core Enabled: 0x16
	Thread Count: 0x16
	Processor Characteristics: 0x0002
	Processor Family 2: Other
	Core Count 2: 0x0016
	Core Enabled 2: 0x0016
	Thread Count 2: 0x0016
	Thread Enabled: 0x5043

Handle 0x1000, DMI type 16, 23 bytes at 0x3fcdc0ff
Header and Data:
	00000000: 10 17 00 10 01 03 06 00 00 10 00 fe ff 01 00 00
	00000010: 00 00 00 00 00 00 00

Handle 0x1100, DMI type 17, 40 bytes at 0x3fcdc118
Header and Data:
	00000000: 11 28 00 11 00 10 fe ff ff ff ff ff 00 04 09 00
	00000010: 01 00 07 02 00 00 00 02 00 00 00 00 00 00 00 00
	00000020: 00 00 00 00 00 00 00 00
Strings:
	String 1: DIMM 0
	String 2: QEMU

Handle 0x1300, DMI type 19, 31 bytes at 0x3fcdc14d
Header and Data:
	00000000: 13 1f 00 13 00 00 00 00 ff ff 0f 00 00 10 01 00
	00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Handle 0x2000, DMI type 32, 11 bytes at 0x3fcdc16e
Header and Data:
	00000000: 20 0b 00 20 00 00 00 00 00 00 00

Handle 0x7f00, DMI type 127, 4 bytes at 0x3fcdc17b
End Of Table

You can see above that these are created by QEMU, under control of LXD. Thus the environment which U-Boot sees can be controlled by QEMU or by tools which integrate QEMU.

What’s next?

U-Boot has a fairly solid set of QEMU features at this point. It provides an alternative to OVMF in some cases, with a faster boot, while still using EFI. Future work may include looking at booting without EFI, thus saving time and reducing complexity.




Streamlining the Final Leap: Unifying U-Boot’s Pre-OS Cleanup

What happens in the final moments before U-Boot hands control over to the operating system? Until recently, the answer was, “it’s complicated.” Each architecture (ARM, x86, RISC-V) had its own way of handling the final pre-boot cleanup, leading to a maze of slightly different functions and duplicated code. It was difficult to know what was really happening just before the kernel started.

Thanks to a recent series of commits in Concept, this critical part of the boot process has been significantly cleaned up and unified.

A Simpler, Centralized Approach

The core of this effort is the introduction of a new generic function: bootm_final(). This function’s purpose is to consolidate all the common steps that must happen right before booting an OS. By moving to this centralized model, we’ve replaced various architecture-specific functions, like bootm_announce_and_cleanup(), with a single, unified call.

This new approach has been adopted across the x86, RISC-V, and ARM architectures, as well as for the EFI loader.

Key Improvements in This Series

  • Unified Cleanup: Common tasks like disabling interrupts, quiescing board devices, and calling cleanup_before_linux() are now handled in one place, reducing code duplication and increasing consistency.
  • Better Bootstage Reporting: The EFI boot path now benefits from bootstage processing. If enabled, U-Boot will produce a bootstage report, offering better insight into boot-time performance when launching an EFI application. The report is emitted when exit-boot-services is called, thus allowing timing of GRUB and the kernel's EFI stub, if present.
  • Code Simplification: With the new generic function in place, redundant architecture-specific functions have been removed. We also took the opportunity to drop an outdated workaround for an old version of GRUB (EFI_GRUB_ARM32_WORKAROUND).

This cleanup makes the boot process more robust, easier to understand, and simpler to maintain. While there is still future work to be done in this area, this is a major step forward in standardizing the final hand-off from U-Boot to the OS.




A boot logo for EFI

U-Boot Concept now supports the EFI Boot Graphics Resource Table (BGRT) feature. This enhancement allows for a more seamless and branded boot experience on devices that use EFI_LOADER, U-Boot's implementation of the Unified Extensible Firmware Interface (UEFI).

What is BGRT?

The BGRT is an ACPI (Advanced Configuration and Power Interface) table that allows the firmware to pass a logo or image to the operating system during the boot process. This means that instead of a generic boot screen, users can be greeted with a custom logo, such as a company or product brand. This creates a more professional and polished user experience.
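The table itself is small. Here is a sketch of how a BGRT might be assembled, following the 56-byte layout from the ACPI specification; the OEM identifiers and addresses below are made-up values for illustration, and this is not U-Boot's actual implementation:

```python
import struct

# Sketch of building a BGRT table per the ACPI spec. Layout: the
# standard 36-byte ACPI header, then Version (u16), Status (u8),
# Image Type (u8), Image Address (u64), Image Offset X (u32),
# Image Offset Y (u32) -- 56 bytes in total. OEM IDs are invented.
def build_bgrt(image_addr: int, off_x: int, off_y: int) -> bytes:
    body = struct.pack('<HBBQII',
                       1,           # Version: must be 1
                       1,           # Status: bit 0 set = image displayed
                       0,           # Image Type: 0 = bitmap (BMP)
                       image_addr, off_x, off_y)
    length = 36 + len(body)
    header = struct.pack('<4sIBB6s8sI4sI',
                         b'BGRT', length, 1, 0,   # sig, len, revision, checksum (patched below)
                         b'UBOOT ', b'UBOOTBGR', 1,
                         b'UBOT', 1)
    table = bytearray(header + body)
    table[9] = (256 - sum(table) % 256) % 256    # checksum over the whole table
    return bytes(table)

bgrt = build_bgrt(0x3fc00000, 100, 100)
assert len(bgrt) == 56 and sum(bgrt) % 256 == 0
```

The OS simply reads the image address and offsets from this table and keeps the logo on screen during the hand-off.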

Why is this important for U-Boot?

By supporting BGRT, U-Boot can now provide a more consistent and visually appealing boot experience on a wider range of devices, particularly those running operating systems like Windows or Linux that support UEFI. This is especially valuable in embedded systems and custom hardware where branding and a unique user experience are important.

This new feature further solidifies U-Boot’s position as a leading bootloader for a diverse range of applications, from embedded systems to servers. It demonstrates the community’s commitment to keeping U-Boot up-to-date with the latest industry standards and providing developers with the tools they need to create modern and user-friendly products.




Host-file Access with New virtio-fs

What is virtio-fs?

For those unfamiliar, virtio-fs is a modern shared filesystem designed specifically for virtualised environments. It allows a virtual machine (the “guest”) to access a directory on the host system, but it does so with a focus on performance and providing local filesystem semantics.

Unlike traditional methods like network filesystems (e.g., NFS, Samba) or even the older virtio-9p protocol, virtio-fs is engineered to take advantage of the fact that the guest and host are running on the same machine. By leveraging shared memory and a design based on FUSE (Filesystem in Userspace), it bypasses much of the communication overhead that can slow down other solutions. The result is a faster, more seamless file sharing experience that is ideal for development, testing, and booting from a root filesystem located on the host.

virtio-fs arrives in U-Boot Concept

The recent merge request in U-Boot Concept introduces a new virtio-fs driver within U-Boot. This initial implementation enables two key functions:

  • List directories on the host
  • Read files from the host

This is made possible by a new filesystem driver that integrates with U-Boot’s new FS, DIR, and FILE uclasses. A compatibility layer is included so that existing command-line functionalities continue to work as expected.

This new capability in U-Boot opens up more flexible and efficient workflows. For example, developers can now more easily load kernels, device tree blobs, or other artifacts directly from their development workstation into a QEMU guest running U-Boot, streamlining the entire test and debug cycle. For cloud use cases, reading configuration files from the host via virtio-fs is a common requirement.

Overall this lays a strong foundation for future enhancements to virtio-fs support within U-Boot, promising even tighter integration between guest environments and the host system.




Keeping Our Linker Lists in Line

U-Boot makes extensive use of linker-generated lists to discover everything from drivers to commands at runtime. This clever mechanism allows developers to add new features with a single macro, and the linker automatically assembles them into a contiguous array. The C code can then iterate through this array by finding its start and end markers, which are also provided by the linker.

For this to work, there’s a critical assumption: the array of structs is perfectly contiguous, with each element having the exact same size. But what happens when the linker, in its quest for optimisation, breaks this assumption?

A Little Wrinkle

We have known for a while about a subtle issue where the linker, in certain cases, would insert a few bytes of padding between elements in these lists. This is usually done to align the next element to a more efficient memory boundary (like 8 or 16 bytes).

While this is often harmless, it breaks U-Boot’s C code, which expects to find the next element by simply adding a fixed size to the address of the current one. This unexpected padding can lead to misaligned memory access, corrupted data, and hard-to-debug crashes.

Here is an example of what this looks like in the symbol table. Notice the gap between virtio_fs and virtio_fs_dir is 0x80 bytes, while the expected size is 0x78:

...
00000000011d0070 D _u_boot_list_2_driver_2_virtio_blk
00000000011d0160 D _u_boot_list_2_driver_2_virtio_fs
00000000011d01e0 D _u_boot_list_2_driver_2_virtio_fs_dir
...

This 8-byte padding (0x80 - 0x78) is the source of the problem.
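The failure mode is easy to model. This toy sketch (Python, not U-Boot code) walks records by a fixed stride, the way the C iteration does, and shows how inserted padding derails it:

```python
import struct

# Toy model of linker-list iteration: the C code walks the list by
# adding sizeof(struct) to the current address. Pack three 12-byte
# records contiguously and the walk works; insert 4 bytes of padding
# between elements (as the linker did above, with 8 bytes on a
# 0x78-byte struct) and the walk reads junk.
REC = struct.Struct('<III')          # a pretend 12-byte list element

def walk(blob: bytes, count: int, stride: int):
    return [REC.unpack_from(blob, i * stride)[0] for i in range(count)]

packed = b''.join(REC.pack(i, 0, 0) for i in (1, 2, 3))
padded = (b'\x00' * 4).join(REC.pack(i, 0, 0) for i in (1, 2, 3))

assert walk(packed, 3, REC.size) == [1, 2, 3]      # contiguous: fine
assert walk(padded, 3, REC.size) != [1, 2, 3]      # padded: misreads
assert walk(padded, 3, REC.size + 4) == [1, 2, 3]  # the real stride is bigger
```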

A Script to the Rescue

To catch these alignment problems automatically, we’ve developed a new Python script, check_list_alignment.py, now in U-Boot Concept (merge).

The script works as follows:

  1. Runs nm -n on the final u-boot ELF file to get all symbols sorted by address.
  2. Automatically discovers all the different linker lists in use (e.g., driver, cmd, uclass_driver).
  3. For each list, calculates the gap between every consecutive element.
  4. Determines the most common gap size, assuming this is the correct sizeof(struct).
  5. Flags any gap that doesn’t match this common size.
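In essence, steps 3 to 5 boil down to the following (a simplified sketch; the real check_list_alignment.py also parses the nm -n output and repeats the check for every list):

```python
from collections import Counter

# Simplified sketch of the script's core check: given the sorted symbol
# addresses of one linker list, treat the most common gap as
# sizeof(struct) and flag any element whose gap deviates from it.
def find_bad_gaps(addrs):
    gaps = [b - a for a, b in zip(addrs, addrs[1:])]
    expected = Counter(gaps).most_common(1)[0][0]
    return [(addr, gap) for addr, gap in zip(addrs[1:], gaps) if gap != expected]

# Addresses modelled on the nm output above: one element sits 0x80
# bytes after its predecessor instead of the expected 0x78
addrs = [0x11d0070, 0x11d00e8, 0x11d0160, 0x11d01e0, 0x11d0258]
assert find_bad_gaps(addrs) == [(0x11d01e0, 0x80)]
```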

Now, if the linker introduces any unexpected padding, the build will fail immediately with a clear error message:

$ ./scripts/check_list_alignment.py -v u-boot
List Name           # Symbols   Struct Size (hex)
-----------------   -----------   -----------------
...
driver                       65              0x78
  - Bad gap (0x80) before symbol: _u_boot_list_2_driver_2_virtio_fs_dir
...

FAILURE: Found 1 alignment problems

This simple check provides a powerful guarantee. It ensures the integrity of our linker lists, prevents a whole class of subtle bugs, and allows developers to continue using this powerful U-Boot feature with confidence.




Streamlining Emulation in U-Boot: A Kconfig Cleanup 🧹

In the world of software development, consistency is key. A recent update to U-Boot Concept takes a solid step in that direction by restructuring how it handles emulation targets. This change makes life easier for developers working across different processor architectures.

Previously there were inconsistencies in the configuration system (Kconfig). For example, enabling QEMU emulation for ARM systems used the ARCH_QEMU symbol, while x86 systems used VENDOR_EMULATION for a similar purpose. This could create confusion and added complexity when managing board configurations.

To resolve this, a new, architecture-neutral symbol, MACH_QEMU, has been introduced. This single, unified option replaces the separate symbols for both ARM and x86 emulation targets.

This small but important merge tidies up the codebase, creating a more consistent and intuitive developer experience. It also sets the stage for future work, with the potential to extend this unified approach to other architectures. It’s a great example of the continuous effort to keep U-Boot clean, efficient, and easy to maintain for everyone involved.




Filesystems in U-Boot

U-Boot supports a fairly wide variety of filesystems, including ext4, ubifs, fat, exfat, zfs and btrfs. These are an important part of bootloader functionality, since reading files from bare partitions or disk offsets is neither scalable nor convenient.

The filesystem API is functional but could use an overhaul. The main interface is in fs/fs.c, which looks like this:

struct fstype_info {
	int fstype;
	char *name;
	/*
	 * Is it legal to pass NULL as .probe()'s  fs_dev_desc parameter? This
	 * should be false in most cases. For "virtual" filesystems which
	 * aren't based on a U-Boot block device (e.g. sandbox), this can be
	 * set to true. This should also be true for the dummy entry at the end
	 * of fstypes[], since that is essentially a "virtual" (non-existent)
	 * filesystem.
	 */
	bool null_dev_desc_ok;
	int (*probe)(struct blk_desc *fs_dev_desc,
		     struct disk_partition *fs_partition);
	int (*ls)(const char *dirname);
	int (*exists)(const char *filename);
	int (*size)(const char *filename, loff_t *size);
	int (*read)(const char *filename, void *buf, loff_t offset,
		    loff_t len, loff_t *actread);
	int (*write)(const char *filename, void *buf, loff_t offset,
		     loff_t len, loff_t *actwrite);
	void (*close)(void);
	int (*uuid)(char *uuid_str);
	/*
	 * Open a directory stream.  On success return 0 and directory
	 * stream pointer via 'dirsp'.  On error, return -errno.  See
	 * fs_opendir().
	 */
	int (*opendir)(const char *filename, struct fs_dir_stream **dirsp);
	/*
	 * Read next entry from directory stream.  On success return 0
	 * and directory entry pointer via 'dentp'.  On error return
	 * -errno.  See fs_readdir().
	 */
	int (*readdir)(struct fs_dir_stream *dirs, struct fs_dirent **dentp);
	/* see fs_closedir() */
	void (*closedir)(struct fs_dir_stream *dirs);
	int (*unlink)(const char *filename);
	int (*mkdir)(const char *dirname);
	int (*ln)(const char *filename, const char *target);
	int (*rename)(const char *old_path, const char *new_path);
};

At first glance this seems like a reasonable API. But where is the filesystem specified? The API seems to assume that this is already present somehow.

In fact there is a pair of separate functions responsible for selecting which filesystem the API acts on:

int fs_set_blk_dev(const char *ifname, const char *dev_part_str, int fstype)
int fs_set_blk_dev_with_part(struct blk_desc *desc, int part)

When you want to access a file, call either of these functions. It sets three ‘global’ variables: fs_dev_desc, fs_dev_part and fs_type. After each operation, a call to fs_close() resets things, which means you must select the block device again before each operation. For example, see this code in bootmeth-uclass.c:

	if (IS_ENABLED(CONFIG_BOOTSTD_FULL) && bflow->fs_type)
		fs_set_type(bflow->fs_type);

	ret = fs_size(path, &size);
	log_debug("   %s - err=%d\n", path, ret);

	/* Sadly FS closes the file after fs_size() so we must redo this */
	ret2 = bootmeth_setup_fs(bflow, desc);
	if (ret2)
		return log_msg_ret("fs", ret2);

It is a bit clumsy. Obviously this interface is not set up to support caching. In fact the filesystem is mounted afresh each time it is accessed. In a bootloader this is normally not too much of a problem. Since the OS and associated files are normally packaged in a FIT, a single read is enough to obtain everything that is needed. But if multiple directories need to be searched to find that FIT, or if there are multiple files to read, the repeated mounting does slow things down.
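A toy model makes the pattern clear (illustrative Python, not the actual C code):

```python
# Toy model of the global-state pattern in fs/fs.c: selecting a device
# sets module-wide state, and the implicit fs_close() after every
# operation wipes it, so the device must be re-selected each time.
_state = {'desc': None}

def fs_set_blk_dev(desc):
    _state['desc'] = desc

def fs_close():
    _state['desc'] = None              # mount state is thrown away

def fs_size(path):
    if _state['desc'] is None:
        raise RuntimeError('no device selected: call fs_set_blk_dev() first')
    size = len(_state['desc'][path])
    fs_close()                         # every operation ends with a close
    return size

disk = {'/vmlinuz': b'\x7fELF'}
fs_set_blk_dev(disk)
assert fs_size('/vmlinuz') == 4        # works once...
fs_set_blk_dev(disk)                   # ...but must be redone each time,
assert fs_size('/vmlinuz') == 4        # as in bootmeth-uclass.c above
```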

If you have sharp eyes you might have seen another problem. The two functions above assume that they are dealing with a block device. In fact, struct blk_desc is the uclass-private data for a block device. What about when the filesystem is on the network? Also, with sandbox it is possible to access host files:

=> ls hostfs 0 /tmp/gimp
DIR    1044480 ..
DIR       4096 .
DIR       4096 2.10
=> 

Clearly, the files on the host system are not accessed at the block level. How does that work?

The key to this is null_dev_desc_ok, which is true for the hostfs filesystem. There is a special case in the code to handle this:

int blk_get_device_part_str(const char *ifname, const char *dev_part_str,
			     struct blk_desc **desc,
			     struct disk_partition *info, int allow_whole_dev)
{
...
#if IS_ENABLED(CONFIG_SANDBOX) || IS_ENABLED(CONFIG_SEMIHOSTING)
	/*
	 * Special-case a pseudo block device "hostfs", to allow access to the
	 * host's own filesystem.
	 */
	if (!strcmp(ifname, "hostfs")) {
		strcpy((char *)info->type, BOOT_PART_TYPE);
		strcpy((char *)info->name, "Host filesystem");

		return 0;
	}
#endif

It isn’t great. I’ve been looking at virtio-fs lately, which also doesn’t use a block device.

There are other things that could be improved, too:

  • Filesystems must be specified explicitly by their device and partition number. It would be nice to have a unified ‘VFS’ like Linux (and Barebox), so filesystems could be mounted within a unified space.
  • Files cannot be accessed from a device, nor is there any way to maintain a reference to a file you are working with.
  • Reading a file must be done all at once, in most cases. It would be nice to have an interface to open, read and close a file.

Instead of adding yet more special cases, it may be time to overhaul the code a little.




Verified Boot for Embedded on RK3399

VBE has been a long-running project to create a smaller and faster alternative to EFI. It was originally introduced as a concept in 2022, along with a sandbox implementation and a simple firmware updater for fwupd.

In the intervening period an enormous amount of effort has gone into getting this landed in U-Boot for a real board. This has resulted in 10 additional series, on top of the sandbox work:

  • A – Various MMC and SPL tweaks (14 patches, series)
  • B – binman: Enhance FIT support for firmware (20 patches, series)
  • C – binman: More patches to support VBE (15 patches, series)
  • D – A collection of minor tweaks in MMC and elsewhere (18 patches, series)
  • E – SPL improvements and other tweaks (19 patches, series)
  • F – VBE implementation itself, with SPL ‘relocating jump’ (22 patches, series)
  • G – VBE ‘ABrec’ implementation in TPL/SPL/VPL (19 patches, series)
  • H – xPL-stack cleanup (4 patches, series)
  • I – Convert rockchip to use Binman templates (7 patches, series), kindly taken over and landed by Jonas Karlman
  • J – Implementation for RK3399 (25 patches, series)

That’s a total of 163 patches!

The Firefly RK3399 board was chosen, since it has (just) enough SRAM and is fully open source.

The final series has not yet landed in the main tree and it is unclear whether it will. For now I have put it in the Concept tree. You can see a video of it booting below.

I have been thinking about why this took so long to (almost) land. Here is my list, roughly in order from most important to least:

  1. Each series had to land before the next could be sent, with it taking at least one release cycle (3 months) to land each one
  2. Some of the new features were difficult to implement, particularly the relocating SPL jump and the new Binman features
  3. Many of the patches seemed aimless or irrelevant when sent, since they had no useful purpose before VBE could fully land. This created resistance in review
  4. On the other hand, sending too many patches at once would cause people to ignore the series

Overall it was a very difficult process, even for someone who knows U-Boot well. It concerns me that it has become considerably harder to introduce major new things in U-Boot, compared to the days of sandbox or driver model. I don’t have much of a comparison with other firmware projects, but I’m interested in hearing other people’s point of view. Please add a comment if you have thoughts on this.

Anyway, I am pleased to be done with it. The only thing missing at present is ‘ABrec’ updates in fwupd. It should be fairly easy to do, but for the signature checking. Since fwupd has its own implementation of libfdt, that might be non-trivial.





Booting into Linux in 100ms

A few weeks ago I took a look at Qboot, a minimal x86 firmware for QEMU which can boot in milliseconds. Qboot, written by Paolo Bonzini, dates from 2015; there is an LWN article with the original announcement.

I tried it on my machine and it booted in QEMU (with kvm) in about 20ms, from entering Qboot to entering Linux. Very impressive! I was intrigued as to what makes it so fast.

There is another repo, qemu-boot-time by Stefan Garzarella, which provides an easy means to benchmark the boot. It uses perf events in Linux to detect the start and end of Qboot.

Using x86 post codes, I added the same to U-Boot. Initially the boot time was 2.9 seconds! Terrible. I used a script, which works on my machine, to measure the time taken for the U-Boot boot with the qemu-x86_64 target.

It turned out that almost two of the seconds were the U-Boot boot delay. Another 800ms was the PXE menu delay. With those removed the time dropped to 210ms, which is not bad.

Enabling CONFIG_NO_NET and dropping CONFIG_VIDEO each shaved off about 50ms. I then tried passing the kernel and initrd through QEMU using the QFW interface. That only saved 15ms, but it is something.

I figured that command-line processing would be quite slow. With CONFIG_CMDLINE disabled, another 5ms was saved. A final 7ms came from disabling filesystems and the EFI loader. Small gains.
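The individual reductions are consistent with the final figure (approximate numbers from the measurements above):

```python
# Recap of the measured savings, starting from the 210ms reached after
# removing the boot delay and the PXE menu delay (approximate figures)
start_ms = 210
savings_ms = {
    'CONFIG_NO_NET': 50,
    'CONFIG_VIDEO dropped': 50,
    'kernel/initrd via QFW': 15,
    'CONFIG_CMDLINE disabled': 5,
    'filesystems and EFI loader disabled': 7,
}
final_ms = start_ms - sum(savings_ms.values())
assert final_ms == 83   # matches the ~83ms result reported in the post
```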

In the end, my result is about 83ms (the u_boot_do_boot delta below):

$ ./contrib/qemu-boot-timer.sh
starting perf
building U-Boot
running U-Boot in QEMU
waiting for a bit
qemu-system-x86_64: terminating on signal 15 from pid 2775874 (bash)
parsing perf results
1) pid 2779434
qemu_init_end: 51.924873
u_boot_start: 51.962744 (+0.037871)
u_boot_do_boot: 134.781048 (+82.818304)

One final note: the qemu-x86_64 target actually boots by starting SPL in 16-bit mode and then moving to 64-bit mode to start U-Boot proper. This was partly to avoid calling 16-bit video ROMs from 64-bit code. Now that bochs is used for the display, it should be easy enough to drop SPL for this target. I’m not sure how much time that would save.

Note: Some final hacks are tagged here.