Silencing the Sphinx: Cleaner Documentation Builds

If you have ever run make htmldocs in U-Boot, you are likely familiar with the “wall of text” it produces. Between the standard Sphinx output, sub-make messages, and custom progress indicators, the build process has traditionally been very noisy.

While verbose output can be useful for debugging the toolchain itself, it is a hindrance when you are just trying to write documentation. The sheer volume of text makes it difficult to spot legitimate warnings and errors, which often get buried in the scroll.

A recent 5-part series in Concept addresses this. The goal is simple: if the build prints something, it should be something that requires your attention.

What changed?

The series cleans up the output in three specific ways:

  1. Enabling Quiet Mode: We now pass the -q flag to SPHINXOPTS and -s to the sub-make invocations. This suppresses the standard “reading sources” and “picking up dependencies” log lines.
  2. Removing Informational Clutter: We dropped the explicit print statement regarding CJK (Chinese/Japanese/Korean) font support in conf.py. The functionality remains, but we don’t need to be told about it on every run.
  3. Dropping Custom Progress Bars: The SPHINX and PARSE progress messages defined in the Makefiles have been removed.
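In Makefile terms, the changes amount to something like this simplified sketch (not the exact U-Boot diff; the real rules live in the doc build Makefiles):

```make
# Sketch only: pass -q so Sphinx reports warnings and errors, nothing else
SPHINXOPTS += -q

htmldocs:
	$(MAKE) -s -C doc html   # -s silences the sub-make's command echo
```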

The Result

The documentation build is now silent by default. This aligns the htmldocs target with the philosophy of the rest of the Kbuild system: silence is golden.

Now, when you run the docs build, you can be confident that any output appearing in your terminal is a warning or an error that needs to be fixed. This should make it significantly easier to maintain clean documentation and spot regressions in CI.




Modernising Allocation: U-Boot Upgrades to dlmalloc 2.8.6

For over two decades—since 2002—U-Boot has relied on version 2.6.6 of Doug Lea’s malloc (dlmalloc) to handle dynamic memory allocation. While reliable, the codebase was showing its age.

In a massive 37-patch series, we have finally updated the core allocator to dlmalloc 2.8.6. This update brings modern memory efficiency algorithms, better security checks, and a cleaner codebase, all while fighting to keep the binary size footprint minimal for constrained devices.

Why the Update?

The move to version 2.8.6 isn’t just about bumping version numbers. The new implementation offers specific technical advantages:

  • Improved Binning: The algorithm for sorting free chunks is more sophisticated, leading to better memory efficiency and less fragmentation.
  • Overflow Protection: Robust checks via MAX_REQUEST prevent integer overflows during allocation requests, improving security.
  • Reduced Data Usage: The old pre-initialised av_[] array (which lived in the data section) has been replaced with a _sm_ struct in the BSS (Block Started by Symbol) section. This change reduces usage in the data section by approximately 1.5KB, a significant win for boards with constraints on initialised data.
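The overflow protection works by rejecting any request so large that padding and aligning it could wrap around. A minimal sketch of the idea, with simplified constants that stand in for the allocator's real definitions:

```c
#include <stddef.h>

/* Illustrative sketch of dlmalloc 2.8.6's request guard; these constants
 * are simplified stand-ins, not U-Boot's exact definitions. */
#define MALLOC_ALIGNMENT ((size_t)8)
#define CHUNK_OVERHEAD   (sizeof(size_t))
#define MAX_REQUEST      (((size_t)-1) - (MALLOC_ALIGNMENT * 3) - CHUNK_OVERHEAD)

/* Reject requests that would overflow once padded and aligned */
static int request_ok(size_t bytes)
{
	return bytes < MAX_REQUEST;
}
```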

The Battle for Code Size

One of the biggest challenges in embedded development is code size. Out of the box, the newer dlmalloc was significantly larger than the old version—a non-starter for SPL (Secondary Program Loader) where every byte counts.

To combat this, the patch series adds a Kconfig option to strip down the allocator for space-constrained SPL builds (specifically via CONFIG_SYS_MALLOC_SMALL).

Key optimisations include:

  1. NO_TREE_BINS: Disables complex binary-tree bins for large allocations (>256 bytes), falling back to a simple doubly-linked list. This trades O(log n) performance for O(n) but saves ~1.25KB of code.
  2. SIMPLE_MEMALIGN: Simplifies the logic for aligned allocations, removing complex retry fallback mechanisms that are rarely needed in SPL. This saves ~100-150 bytes.
  3. NO_REALLOC_IN_PLACE: Disables the logic that attempts to resize a memory block in place. Instead, it always allocates a new block and copies data. This saves ~500 bytes.

With these adjustments enabled, the new implementation is actually smaller than the old 2.6.6 version on architectures like Thumb2.

Keeping U-Boot Features Alive

This wasn’t a clean upstream import. Over the last 20 years, U-Boot had accrued many custom modifications to dlmalloc. This series carefully ports these features to the new engine:

  • Pre-relocation Malloc: Support for malloc_simple, allowing memory allocation before the main heap is initialised.
  • Valgrind Support: Annotations that allow developers to run U-Boot sandbox under Valgrind to detect memory leaks and errors.
  • Heap Protection: Integration with mcheck to detect buffer overruns and heap corruption.

New Documentation and Tests

To ensure stability, a new test suite (test/common/malloc.c) was added to verify edge cases, realloc behaviours, and large allocations. Additionally, comprehensive documentation has been added to doc/develop/malloc.rst, explaining the differences between pre- and post-relocation allocation and how to tune the new Kconfig options.

Next Steps

The legacy allocator is still available via CONFIG_SYS_MALLOC_LEGACY for compatibility testing, but new boards are encouraged to use the default 2.8.6 implementation.




Cleaning up the Nulls: Introducing ofnode Stubs for Non-DT Builds

In the world of U-Boot, the Device Model (DM) and Device Tree (DT) are the standard for hardware description. However, U-Boot runs on a massive variety of hardware, including constrained systems where full Device Tree support (OF_REAL) might be disabled.

A recent patch cleans up how the core handles these “no-Device-Tree” scenarios, ensuring that code remains clean, compilable, and safe even when the DT is missing.

The Problem: When the Tree Falls

When OF_REAL is disabled, there is logically no point in trying to find nodes, read properties, or traverse a tree—everything is effectively null.

Previously, handling this required scattering #ifdef guards throughout driver code or dealing with linking errors if a driver attempted to call an ofnode function that wasn’t compiled in. This made the codebase harder to read and harder to maintain.

The Solution: Static Inline Stubs

The patch, “dm: core: Create ofnode stubs when OF_REAL is disabled”, introduces a comprehensive set of static inline “stub” functions in include/dm/ofnode.h.

Here is the logic:

  1. Check Configuration: The header now checks #if CONFIG_IS_ENABLED(OF_REAL).
  2. Provide Stubs: If OF_REAL is off, it defines dummy versions of functions like ofnode_read_u32 or ofnode_next_subnode.
  3. Safe Returns: These stubs return safe default values—typically NULL, false, -ENOSYS, or -EINVAL—allowing the compiler to optimise the code paths without breaking the build.

This allows drivers to call ofnode functions blindly. If the DT is missing, the function simply returns an error code, and the driver handles it gracefully rather than the build failing.
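The stub pattern can be illustrated with a self-contained sketch. U-Boot's real header uses CONFIG_IS_ENABLED(OF_REAL) and its own ofnode type; here the "no device tree" branch is hard-coded for demonstration:

```c
#include <errno.h>
#include <stdint.h>

/* Simplified illustration of the stub pattern; the real header tests
 * CONFIG_IS_ENABLED(OF_REAL). Here we hard-code the disabled case. */
#define OF_REAL 0

typedef struct { const void *np; } ofnode;

#if OF_REAL
int ofnode_read_u32(ofnode node, const char *propname, uint32_t *outp);
#else
/* Stub: resolves at compile time, so callers link and run without a DT */
static inline int ofnode_read_u32(ofnode node, const char *propname,
				  uint32_t *outp)
{
	(void)node; (void)propname; (void)outp;
	return -ENOSYS;
}
#endif
```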

The Cost: Pushing the Limits

While this approach significantly cleans up driver code, it comes with a trade-off. include/dm/ofnode.h has grown significantly with this patch.

As noted in the commit, the file is becoming “a little convoluted” due to the sheer volume of inline implementations and dual definitions. We are likely reaching the limit of how many static inlines can reasonably live in this single header! While this solution is better than the alternative (broken builds or messy #ifdefs in drivers), future work may require splitting the header file to keep the core API definitions digestible and organised.

Key Refinements

The patch also includes some “plumbing” adjustments:

  • Macro Relocation: Iterator macros like ofnode_for_each_compatible_node() were moved outside the OF_REAL condition. Since they rely on the newly created stubs, these loops will now simply do nothing on a non-DT system, rather than causing compiler errors.
  • Edge-Case Support: A specific adjustment in drivers/Makefile ensures ofnode.o is compiled when OF_REAL is enabled but the full DM is not, preserving necessary support for boards like the kontron-sl-mx6ul.



Introducing Codman: A Deep Dive into U-Boot Build Analysis

U-Boot is a massive project. With thousands of files, nearly endless configuration possibilities, and complex Kconfig dependencies, a single board configuration often only compiles a small fraction of the total source tree.

For developers and maintainers, this complexity often leads to difficult questions:

  • “I just enabled CONFIG_CMD_NET; how much code did that actually add?”
  • “How much bloat would I remove by disabling CONFIG_CMDLINE?”
  • “Which specific lines of this driver are active for my board?”

Simply searching for CONFIG_ macros or header inclusions is rarely enough. The build logic takes many forms—Makefile rules, #ifdefs, IS_ENABLED(), and static inlines—making static analysis tricky.

Enter Codman (Code Manager), a new tool designed to cut through this complexity by analysing the actual build artefacts generated by the compiler.

What is Codman?

Codman is a Python-based tool located in tools/codman/. It determines exactly which source files and lines of code are compiled and used for a specific board. It works by:

  1. Building the specified board (or using an existing build).
  2. Parsing .cmd files to find which source files were compiled.
  3. Analysing the source code (using a specialized unifdef) or object files (using DWARF tables) to figure out exactly which lines made it into the final binary.

Feature Highlight: Impact Analysis

One of Codman’s most powerful features is Impact Analysis. This allows you to explore “what if” scenarios without manually editing defconfig files or running menuconfig.

Using the -a (adjust) flag, you can modify the Kconfig configuration on the fly before the analysis runs. This is perfect for seeing exactly how much code a specific feature adds.

Example: Checking the impact of USB

To enable the USB subsystem on the sandbox board and see how the code stats change:

./tools/codman/codman.py -b sandbox -a CMD_USB stats

Example: Disabling Networking

To see what code remains active when networking is explicitly disabled:

./tools/codman/codman.py -b sandbox -a ~NET,NO_NET stats

Visualising the Build

Codman provides several ways to view your build data.

1. High-Level Statistics

The stats command gives you a bird’s-eye view of your build size.

$ codman -b qemu-x86 stats
======================================================================
FILE-LEVEL STATISTICS
======================================================================
Total source files:    14114
Used source files:      1046 (7.4%)
Unused source files:   13083 (92.7%)

Total lines of code:  3646331
Used lines of code:    192543 (5.3%)

2. Directory Breakdown

Use dirs to see which subsystems are contributing the most weight to your board.

$ codman dirs
BREAKDOWN BY TOP-LEVEL DIRECTORY
Directory        Files    Used  %Used  %Code     kLOC    Used
-------------------------------------------------------------
arch               234     156     67     72     12.3     8.9
board              123      45     37     25      5.6     1.4
cmd                 89      67     75     81      3.4     2.8
...

You can also break down the information by showing subdirectories (-s) or even individual files (-f).

$ codman -n -b qemu-x86 dirs --subdirs  -f
=======================================================================================
BREAKDOWN BY TOP-LEVEL DIRECTORY
=======================================================================================
Directory                                  Files    Used  %Used  %Code     kLOC    Used
---------------------------------------------------------------------------------------
arch/x86/cpu                                  20      15     75     85      3.8     3.2
  start.S                                    318     190   59.7     128
  cpu.c                                      399     353   88.5      46
  mp_init.c                                  902     877   97.2      25
  turbo.c                                    103      92   89.3      11
  lapic.c                                    158     156   98.7       2
  resetvec.S                                  18      18  100.0       0
  pci.c                                      100     100  100.0       0
  mtrr.c                                     455     455  100.0       0
  start16.S                                  123     123  100.0       0
  sipi_vector.S                              215     215  100.0       0
  ioapic.c                                    36      36  100.0       0
  call32.S                                    61      61  100.0       0
  qfw_cpu.c                                   86      86  100.0       0
  irq.c                                      366     366  100.0       0
  cpu_x86.c                                   99      99  100.0       0
arch/x86/cpu/i386                              4       4    100     98      1.4     1.4
  cpu.c                                      649     630   97.1      19
  interrupt.c                                630     622   98.7       8
  call64.S                                    92      92  100.0       0
  setjmp.S                                    65      65  100.0       0
arch/x86/cpu/intel_common                     18       6     33     23      3.3     0.8
  microcode.c                                187     183   97.9       4
  pch.c                                       23      23  100.0       0
  lpc.c                                      100     100  100.0       0

3. Line-by-Line Detail

Perhaps the most useful feature for debugging configuration issues is the detail view. It shows you exactly which lines are active or inactive within a file.

$ codman -b qemu-x86 detail common/main.c
...
    24 | static void run_preboot_environment_command(void)
    25 | {
    26 | 	char *p;
    27 | 
    28 | 	p = env_get("preboot");
    29 | 	if (p != NULL) {
    30 | 		int prev = 0;
    31 | 
-   32 | 		if (IS_ENABLED(CONFIG_AUTOBOOT_KEYED))
-   33 | 			prev = disable_ctrlc(1); /* disable Ctrl-C checking */
    34 | 
    35 | 		run_command_list(p, -1, 0);
    36 | 
-   37 | 		if (IS_ENABLED(CONFIG_AUTOBOOT_KEYED))
-   38 | 			disable_ctrlc(prev);	/* restore Ctrl-C checking */
    39 | 	}
    40 | }
    41 | 
...

(Lines marked with - are not included in the build and show in a different colour)

Under the Hood: Unifdef vs. DWARF

Codman supports two methods for analysis:

  • Unifdef (Default): Simulates the C preprocessor to determine which lines are active based on CONFIG_ settings. It is fast (leveraging multiprocessing) and provides a great preprocessor-level view. It uses a patched version of unifdef that supports U-Boot’s IS_ENABLED() macros.
  • DWARF (-w): Rebuilds the project with debug info and analyses the DWARF line number tables. This is highly accurate for executable code but won’t count declarations or comments.

Getting Started

# Basic stats for sandbox
./tools/codman/codman.py -b sandbox stats

# Find unused files
./tools/codman/codman.py -b sandbox unused

# Extract only used sources to a new directory (great for minimal distributions)
./tools/codman/codman.py -b sandbox copy-used /tmp/minimal-tree

Check the documentation for more details!




Tidying up the FIT: Refactoring, Testing, and Shrinking U-Boot

Flattened Image Trees (FIT) are a cornerstone of modern U-Boot booting, offering a flexible way to package kernels, device trees, ramdisks, and firmware. However, the code responsible for printing information about these images—the output you see when running mkimage -l or iminfo—has been around for a long time.

As with any legacy code, it had become difficult to maintain. It lacked unit tests, relied on ad-hoc printing logic, and was cluttered within the massive boot/image-fit.c file.

This week, I submitted a 30-patch series titled “fit: Improve and test the code to print FIT info” to address these issues. Here is a look at what changed and why.

The Problem: Spaghetti Printing

Previously, the logic for printing FIT details was scattered. Functions manually handled indentation strings, and there were inconsistent printf calls for every property. If we wanted to change how a property was displayed or correct alignment, we had to touch multiple places in the code. Furthermore, there was no safety net; modifying the printing logic risked breaking the output parsing for users or scripts.

The Solution: Context and Helpers

The refactoring process followed a structured approach:

  1. Test First: Before touching the core logic, I added comprehensive tests. This includes a Python test (test_fit_print.py) that generates a FIT with various components (kernels, FDTs, signatures, loadables) and asserts the output is exactly as expected. This ensured that subsequent refactoring didn’t break existing functionality.
  2. Separation of Concerns: The printing code was moved out of boot/image-fit.c and into its own dedicated file, boot/fit_print.c.
  3. Context Structure: Instead of passing the FIT pointer and indentation strings recursively through every function, a new struct fit_print_ctx was introduced.
  4. Helper Functions: I introduced helpers like emit_label_val, emit_timestamp, and emit_addr. This replaced manual formatting with standardized calls, ensuring consistent alignment and handling of “unavailable” optional properties automatically.
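An emit_label_val()-style helper might look like the following. The name comes from the series, but this signature, the column width, and writing into a caller-supplied buffer are guesses made for illustration:

```c
#include <stdio.h>
#include <stddef.h>

/* Sketch of an emit_label_val()-style helper (hypothetical signature and
 * widths): writes one aligned "label : value" line, substituting
 * "unavailable" for missing optional properties. */
static int emit_label_val(char *buf, size_t size, int indent,
			  const char *label, const char *val)
{
	return snprintf(buf, size, "%*s%-12s: %s\n", indent * 2, "",
			label, val ? val : "unavailable");
}
```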

AI-Assisted Development

An interesting aspect of this series is the use of AI assistance. You might notice the Co-developed-by: Claude tags in the commit log. The AI assisted in generating boilerplate, suggesting refactoring patterns, and ensuring coding standards were met, speeding up the iterative process of cleaning up the codebase.

The Results

The refactoring didn’t just make the code cleaner; it made it more efficient.

  • Binary Size Reduction: By removing duplicate printing logic and streamlining the flow, we see a binary-size reduction of approximately 320 bytes on aarch64 builds.
  • Better Output: The output columns are now strictly aligned, making it easier to read visually.
  • Maintainability: With the printing logic isolated and heavily tested, future changes to FIT reporting can be made with confidence.

The series is currently available on the mailing list for review.




Measuring expo performance

Expo is U-Boot’s menu- and GUI-layout system. It provides a set of objects like text, images and menu items. Expo allows these objects to be positioned on the display. Most importantly, it can render the objects onto the display.

Input delays

The typical loop polls the expo for input (keyboard or mouse), does any required updates and then renders the expo on the display. This works fine, but is it efficient?

Recent work to add mouse support to expo exposed some shortcomings in the polling process. For example, keyboard input was waiting for up to 100ms for a key to be pressed in the EFI app. The original goal of this delay was to allow escape sequences (such as those used by arrow keys) to be fully captured in a single poll. But it slows the process down considerably.

This problem was fixed by commit a5c5b3b2fb6 (“expo: Speed up polling the keyboard”).

When to sync?

Another problem that came up was that the video system has its own idea of when to ‘sync’ the display. On most hardware this is a manual process. For sandbox, it involves drawing the contents of the U-Boot framebuffer onto the SDL surface / display. On x86 devices it involves updating the framebuffer copy, used to improve performance with write-combining memory. On ARM devices it involves flushing the cache. Once these operations are completed, the changes become visible to the user.

In most cases syncing happens based on a timer, e.g. a few times a second. In normal use this is fine since we don’t want to waste time updating the display when nothing has changed. A sync is also performed when U-Boot is idle, but in a tight polling loop it may never actually become idle.

In reality, expo should know whether anything has changed and some recent work has started the process of implementing that, making use of the video-damage work. In any case, when a mouse is present, expo wants to be as responsive as possible, so the mouse moves smoothly, rather than jerking from place to place 5 times a second.

Expo mode and manual sync

To resolve this a recent series in Concept adds support for an ‘expo mode’, where expo is in charge of syncing. It allows expo to initiate a video sync when it decides it wants to. The video sync is then done completely under expo control and the timer is not used.

Checking the frame rate

As it turns out, these changes exposed various problems with expo. To help with this, a new ‘expo test’ mode was added in another series. This shows the frame rate and other useful information at the top right of the display.

To enable it, set the expotest environment variable to 1. From then on, your expo will show this information at the top right:

  • Frame number
  • Frames per second (rolling average over 5 seconds)
  • Render time in milliseconds
  • Sync time (i.e. display update)
  • Poll time (i.e. capturing keyboard and mouse input)

The information should help you track down any slow-downs in your drivers and expo itself!
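A rolling frames-per-second figure over a fixed window could be computed along these lines. This is a hypothetical sketch in the spirit of the overlay, not the actual U-Boot code:

```c
/* Hypothetical sketch: rolling FPS over a 5-second window */
#define FPS_WINDOW_MS 5000UL

struct fps_ctx {
	unsigned long frames;    /* frames counted in the current window */
	unsigned long start_ms;  /* time the current window started */
	unsigned long fps;       /* most recently computed rate */
};

/* Call once per rendered frame with the current time in milliseconds */
static void fps_tick(struct fps_ctx *ctx, unsigned long now_ms)
{
	ctx->frames++;
	if (now_ms - ctx->start_ms >= FPS_WINDOW_MS) {
		ctx->fps = ctx->frames * 1000 / (now_ms - ctx->start_ms);
		ctx->frames = 0;
		ctx->start_ms = now_ms;
	}
}
```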




New video command and unified embedded image handling

U-Boot has long supported embedding a graphical image directly into the binary – like the boot logo and the recently added BGRT (Boot Graphics Resource Table) image for EFI systems. But the way these images were handled was a bit of a mixed bag, with different patterns for different images and custom boilerplate for each one.

A new 6-patch series cleans this up, introducing a unified infrastructure for embedded images along with new commands to work with them.

What’s new

Unified image infrastructure

Previously, images were handled through ad-hoc Makefile rules that looked for specific patterns like _logo.bmp or _image.bmp. Each image required custom accessor macros and boilerplate code.

The new approach moves all embedded images into a single drivers/video/images/ directory and automatically generates linker list entries for them. This makes it trivial to add new images – just add obj-y += myimage.o to the Makefile and reference it using video_image_get(myimage, &size) or video_image_getptr(myimage).

The linker list infrastructure ensures that all images are discoverable at runtime, which enables the new video images command described below.
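A portable sketch of the registry idea is shown below. The real series builds the table with U-Boot's linker lists rather than a static array, and exposes it via the video_image_get()/video_image_getptr() macros; this version approximates the runtime lookup only:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of the image registry; U-Boot generates this table with linker
 * lists rather than a static array. */
struct video_image {
	const char *name;
	const unsigned char *data;
	size_t size;
};

static const unsigned char bgrt_data[] = { 0x42, 0x4d }; /* stub payload */

static const struct video_image video_images[] = {
	{ "bgrt", bgrt_data, sizeof(bgrt_data) },
};

/* Runtime lookup, as a command like 'video images' might use it */
static const unsigned char *video_image_find(const char *name, size_t *sizep)
{
	for (size_t i = 0; i < sizeof(video_images) / sizeof(video_images[0]); i++) {
		if (!strcmp(video_images[i].name, name)) {
			*sizep = video_images[i].size;
			return video_images[i].data;
		}
	}
	return NULL;
}
```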

New video command

A new video command has been added with four subcommands:

  • video setcursor <col> <row> – Set cursor position (equivalent to existing setcurs command)
  • video puts <string> – Write string at current position (equivalent to existing lcdputs command)
  • video write -p [<col>:<row> <string>...] – Write string at a given position, either a character or a pixel position
  • video images – List all images compiled into U-Boot

The existing standalone setcurs and lcdputs commands remain available for backward compatibility.

Example usage

=> video images
Name                       Size
-------------------- ----------
bgrt                      43926
u_boot                     6932

Total images: 2

=> video setcursor 10 5
=> video puts "Hello U-Boot!"
=> video write -p a3:34 "Pixels"

The payoff

This series removes 46 lines of duplicate accessor code while adding about 500 lines total (mostly documentation and tests). But the real win is in maintainability:

  • Simpler to extend: Adding a new embedded image now requires just a single line in a Makefile
  • Discoverable: The video images command shows what’s available at runtime
  • Better organized: All images live in drivers/video/images/ rather than scattered across the tree
  • Consistent API: One pair of macros works for all images

The series also brings comprehensive documentation for the video commands (which previously had none) and adds tests to ensure everything works correctly.

If you’ve ever wanted to add a custom boot logo or wondered what images are built into your U-Boot binary, this series makes both much easier!




Enhancing EFI Boot and Developer Experience

We’ve just rolled out a series of updates aimed at improving the U-Boot EFI application, with a special focus on streamlining the testing and debugging process, particularly for ARM platforms. This batch of 24 patches introduces several quality-of-life improvements, from better debugging tools to more robust boot procedures. Let’s dive into the key changes.


Streamlining the Boot Process with ‘Fake Go’ 🚀

One of the standout features in this release is the introduction of a ‘fake go’ option for the boot process. Previously available only for tracing, this feature is now a standalone debugging tool enabled by CONFIG_BOOTM_FAKE_GO.

When you initiate a boot with the ‘fake go’ flag (e.g., bootflow boot -f or bootm fake), U-Boot performs all the necessary steps to prepare for booting an OS—loading the kernel, setting up the device tree, and preparing memory—but stops just short of jumping to the OS. This allows you to inspect the system’s state at the final moment before handoff, which is invaluable for debugging complex boot issues without needing a full OS boot cycle.


Pager Improvements for Better Interaction 📄

The console pager is a useful tool, but it can be cumbersome when you’re working without a serial console or need to quickly bypass lengthy output. We’ve introduced two new ways to control the pager on the fly:

  • Quit and Suppress (q): Pressing q at the pager prompt will now immediately stop all further output for the current command. This is perfect for when you’ve seen what you need and want to return to the prompt without sitting through pages of text.
  • Bypass Session (Q): Pressing Q will put the pager into bypass mode for the rest of your U-Boot session, allowing all subsequent commands to print their full output without interruption.

These small changes make console interaction much more fluid and give you greater control over command output.
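The q/Q behaviour amounts to a tiny state machine, sketched here with invented names (the actual pager code is structured differently):

```c
/* Hypothetical sketch of the pager's key handling: 'q' drops the rest of
 * the current command's output, 'Q' bypasses paging for the session */
enum pager_state {
	PAGER_PAGING,   /* normal page-at-a-time output */
	PAGER_QUIET,    /* q: suppress remaining output for this command */
	PAGER_BYPASS,   /* Q: print everything, no prompts, for the session */
};

static enum pager_state pager_handle_key(enum pager_state state, int key)
{
	if (state == PAGER_BYPASS)
		return state;           /* sticky for the whole session */
	if (key == 'q')
		return PAGER_QUIET;
	if (key == 'Q')
		return PAGER_BYPASS;
	return PAGER_PAGING;            /* any other key: show next page */
}
```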


Key Fixes and Enhancements 🛠️

Alongside these major features, this series includes a number of other smaller updates:

  • Safer Image Relocation on ARM: We’ve improved how kernel images are relocated on ARM. Instead of moving the image to a static offset, which could risk overwriting other critical data like the device tree, U-Boot now uses the LMB (Logical Memory Block) library to safely allocate a new, unused region of memory.
  • Improved Debugging Output: We’ve added more detailed debug messages throughout the boot process, especially in FIT image handling and device tree selection, making it easier to trace the boot flow and diagnose issues.
  • Cleaner ATAGs Messaging: The often-confusing “FDT and ATAGS support not compiled in” error has been clarified. U-Boot will now correctly report when a device tree is missing, preventing developers from going down the wrong path when debugging.
  • CI and Build Fixes: A few patches have been included to fix a bug in our automated release script that was causing CI failures, ensuring our development and release processes remain smooth.

These updates continue the development of the EFI app, while benefiting other boards as well.




Spring Cleaning: Refactoring the U-Boot Test Suite for a Brighter Future


A robust and efficient test suite is the backbone of a healthy open-source project. It gives developers the confidence to add new features and refactor code without causing regressions. Recently, we’ve merged a significant 19-patch series that begins a much-needed cleanup of our Python test infrastructure, paving the way for faster, more reliable, and more parallelizable tests.

The Old Way: A Monolithic Setup

For a long time, many of our tests, particularly those for the bootstd (boot standard) commands, have relied on a collection of disk images. These images simulate various boot scenarios, from Fedora and Ubuntu systems to ChromiumOS and Android partitions.

Our previous approach was to have a single, monolithic test called test_ut_dm_init_bootstd() that would run before all other unit tests. Its only job was to create every single disk image that any of the subsequent tests might need.

While this worked, it had several drawbacks:

  • Inefficiency: Every test run created all the images, even if you only wanted to run a single test that didn’t need any of them.
  • Hidden Dependencies: The relationship between a test and the image it required was not explicit. If an image failed to generate, a seemingly unrelated test would fail later, making debugging confusing.
  • No Parallelism: This setup made it impossible to run tests in parallel (make pcheck). Many tests implicitly depended on files created by other tests, a major barrier to parallel execution.
  • CI Gaps: Commands like make qcheck (quick check) and make pcheck were not being tested in our CI, which meant they could break and remain broken for long periods.

A New Direction: Embracing Test Fixtures

The long-term goal is to move away from this monolithic setup and towards using proper test fixtures. In frameworks like pytest (which we use), a fixture is a function that provides a well-defined baseline for tests. For us, this means a fixture would create a specific disk image and provide it directly to the tests that need it, and only those tests.

This 19-patch series is the first major step in that direction.


The Cleanup Process: A Three-Step Approach

The series can be broken down into three main phases of work.

1. Stabilization and Bug Squashing

Before making big changes, we had to fix the basics. The first few patches were dedicated to getting make qcheck to pass reliably. This involved:

  • Disabling Link-Time Optimization (LTO), which was interfering with our event tracing tools (Patch 1/19).
  • Fixing a memory leak in the VBE test code (Patch 2/19).
  • Standardizing how we compile device trees in tests to fix path-related issues (Patch 3/19).

2. Decoupling Dependent Tests

A key requirement for parallel testing is that each test must be self-contained. We found a great example of a dependency where test_fdt_add_pubkey() relied on cryptographic keys created by an entirely different test, test_vboot_base().

To fix this, we first moved the key-generation code into a shared helper function (Patch 6/19). Then, we updated test_fdt_add_pubkey() to call this helper itself, ensuring it creates all the files it needs to run (Patch 7/19). This makes the test independent and ready for parallel execution.

3. Preparing for Fixtures by Refactoring

The bulk of the work in this series was a large-scale refactoring of all our image-creation functions. Previously, functions like setup_fedora_image() took a ubman object as an argument. This ubman is a function-scoped fixture, meaning it’s set up and torn down for every single test. This is not suitable for creating images, which we’d prefer to do only once per test session.

The solution was to change the signature of all these setup functions. Instead of:

def setup_fedora_image(ubman):

They now accept the specific dependencies they actually need:

def setup_fedora_image(config, log, ...):

This was done for every image type: Fedora, Ubuntu, Android, ChromiumOS, EFI, and more. This change decouples the image creation logic from the lifecycle of an individual test run, making it possible for us to move this code into a session-scoped fixture in the future.

What’s Next?

This series has laid the groundwork. The immediate bugs are fixed, tests are more independent, and the code is structured correctly. The next step will be to complete the transition by creating a session-scoped pytest fixture that handles all this image setup work once at the start of a test run.

This investment in our test infrastructure will pay dividends in the form of faster CI runs, a more pleasant developer experience, and a more stable and reliable U-Boot. Happy testing! 🌱




Giving FIT-loading a Much-Needed Tune-Up

The U-Boot boot process relies heavily on the Flattened Image Tree (FIT) format to package kernels, ramdisks, device trees, and other components. At the heart of this lies the fit_image_load() function, which is responsible for parsing the FIT, selecting the right images, and loading them into memory.

Over the years, as more features like the “loadables” property were added, this important function grew in size and complexity. While it was a significant improvement over the scattered code it replaced, it had become a bit unwieldy—over 250 lines long! Maintaining and extending such a large function can be challenging.

Recognizing this, U-Boot developer Simon Glass recently undertook a refactoring effort to improve its structure and maintainability.


A Classic Refactor: Divide and Conquer

The core strategy of this patch series was to break down the monolithic fit_image_load() function into a collection of smaller, more focused helper functions. This makes the code easier to read, debug, and paves the way for future feature development.

The refactoring splits the loading process into logical steps, each now handled by its own function:

  • Image Selection: A new select_image() function now handles finding the correct configuration and image node within the FIT.
  • Verification and Checks: The print_and_verify() and check_allowed() functions centralize image verification and checks for things like image type, OS, and CPU architecture.
  • Loading and Decompression: The actual data loading and decompression logic were moved into handle_load_op() and decomp_image(), respectively.

Along with this restructuring, the series includes several smaller cleanups, such as removing unused variables and tidying up conditional compilation (#ifdef) directives for host builds.
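The divide-and-conquer structure can be pictured as a chain of small steps, each advancing the state or aborting with an error. In this toy skeleton the helper names come from the series, but the context struct and signatures are invented for illustration:

```c
#include <errno.h>

/* Toy skeleton of the refactored flow; not the real signatures */
struct load_ctx {
	int selected, verified, loaded;
};

static int select_image(struct load_ctx *c)
{
	c->selected = 1;   /* find the configuration and image node */
	return 0;
}

static int print_and_verify(struct load_ctx *c)
{
	if (!c->selected)
		return -EINVAL;
	c->verified = 1;   /* check type, OS, architecture, signatures */
	return 0;
}

static int handle_load_op(struct load_ctx *c)
{
	if (!c->verified)
		return -EINVAL;
	c->loaded = 1;     /* copy (and possibly decompress) the data */
	return 0;
}

/* Each step either advances the state or aborts with an error */
static int fit_image_load_sketch(struct load_ctx *c)
{
	int ret = select_image(c);

	if (!ret)
		ret = print_and_verify(c);
	if (!ret)
		ret = handle_load_op(c);
	return ret;
}
```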


Test Suite Improvements ⚙️

Good code changes are always backed by solid tests. This effort also included several improvements to the FIT test suite:

  • The test_fit() routine was renamed to test_fit_base() to prevent naming conflicts with other tests.
  • The test was updated to no longer require a full U-Boot restart, significantly speeding up test execution.
  • A new check was added to ensure U-Boot correctly reports an error when a required kernel image is missing from the FIT.

For a detailed look at all the changes, you can check out the merge commit or patches.