Towards a better way to hack and test your system components

There are similarities to packages, sure.

A huge benefit of sysext over a package manager is that the changes are easily reversible. A sysext doesn’t modify the OS in place; it layers changes on top. You can just as easily un-layer the changes to return to a functional configuration. You can even put the sysexts into /run so that if your changes completely fubar the system you can just hard-reset to get back to work. By contrast, a package manager outright replaces system components with the potentially broken ones you just built, and if the breakage is severe, recovering becomes very difficult very quickly (especially if you don’t have a second device with SSH available, or a bootable USB for a chroot environment, or you are offline, etc.).
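As a rough sketch of what that looks like in practice (the extension name and staged file are illustrative placeholders, not a fixed convention), building a throwaway sysext is just assembling a directory tree plus a release file:

```shell
# Hypothetical example: stage a sysext for a locally built gnome-shell.
# We build in /tmp here; on the target you'd place it under /run/extensions
# so a hard reset wipes it along with the rest of /run.
EXT=/tmp/hacked-gnome-shell
mkdir -p "$EXT/usr/bin" "$EXT/usr/lib/extension-release.d"

# Whatever you want layered over /usr goes into the tree, e.g.:
#   cp _build/src/gnome-shell "$EXT/usr/bin/"
touch "$EXT/usr/bin/gnome-shell"   # placeholder for the freshly built binary

# Every sysext needs a release file whose suffix matches the extension name;
# ID=_any skips the host OS match check.
cat > "$EXT/usr/lib/extension-release.d/extension-release.hacked-gnome-shell" <<'EOF'
ID=_any
EOF

# Then, as root on the target system:
#   mv /tmp/hacked-gnome-shell /run/extensions/
#   systemd-sysext merge      # overlay the tree onto /usr
#   systemd-sysext unmerge    # ...or revert; a reboot also clears /run entirely
```

The merge/unmerge pair is the whole recovery story: un-layering never touches the underlying image.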

Another motivation here is that GNOME OS doesn’t have a package manager, and traditional package management is pretty much incompatible with the dm-verity and TPM-based security model we’re building with GNOME OS. sysexts work on image-based OSes; packages do not. Developers should be able to benefit from the state of the art in boot and data security.

“Favorite package format” is also an issue with packages: we cannot expect every CI workflow to produce RPMs, debs, and (insert infinite other package formats here) simultaneously, so we’d need to settle on one distro. Pretty much all downstream distros diverge from GNOME’s upstream ideal/recommended configuration in some way (or are out of date, heavily patched, whatever), so none is perfectly suited for this role. Meanwhile, we already have GNOME OS for QA, and it’s already developed as a golden reference image and testing environment for integrating the whole GNOME stack together. GNOME OS already has nightly builds with git versions of the entire stack for development. Apps are already developed via Flatpak, which uses GNOME OS as the platform the apps run on top of. So if we have to standardize on something, it makes a lot of sense to standardize on GNOME OS. And the closest thing to a package manager GNOME OS has is sysext.

Ultimately you can mix and match approaches here. You can always use sysext on a package-based distro where you do ad-hoc builds from a git checkout just for the easy-to-revert benefits.

Hey @matthiasc :wave: the goal of the work presented here is to produce testing artifacts in the form of system extensions.

These extensions are:

  • A good fit for immutable systems like GNOME OS, through a mechanism that’s already available there.
  • Not permanent. The recovery story is as simple as restarting the system when things go really bad. No additional steps, and less pain overall.

Additionally:

  • Can be used to speed up end-to-end testing, as we don’t need to rebuild the entire system, only specific components.
  • Can provide a more approachable experience for non-programmers to test changes to components. For example, an extension is automatically published for every MR; a non-programmer can then use GNOME Boxes with an existing GNOME OS image, apply the extension with one command, and test it, similar to how we test apps with Flatpak.
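For a sense of what that tester flow could look like end to end (everything here is hypothetical: the artifact URL, file name, and directory layout are placeholders, and the real CI naming scheme isn’t decided by this post):

```shell
# Download the sysext artifact a CI job produced for some MR...
#   curl -LO https://gitlab.gnome.org/.../artifacts/gnome-shell-mr1234.sysext.raw
# ...drop it where systemd-sysext looks for extensions, and merge it:
#   sudo mv gnome-shell-mr1234.sysext.raw /run/extensions/
#   sudo systemd-sysext merge
# Play around with the MR build, then go back to stock GNOME OS Nightly:
#   sudo systemd-sysext unmerge
```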

I have to say I don’t really get why/how this is supposed to replace a package manager either.

It seems to me like sysexts are meant to cover a different field, and that the main purpose is “batching a bunch of changes to the file system into an atomically applicable and reversible format”. Package managers effectively end up covering the same field, but that’s more of a side-effect and not their main purpose. Their main purpose is providing context which file belongs to which software project, and to resolve dependencies between those, and they do that very well.

I don’t see why a change applied to the file system applied by a package manager like dnf could not be applied as a sysext? That would allow having the benefits of both sysexts and package managers, and in addition it would mean we don’t end up building yet another package manager?

I don’t see why a change applied to the file system applied by a package manager like dnf could not be applied as a sysext

It absolutely can be.

If you’re running (insert distro here), then you absolutely can use (insert package manager here)’s bespoke functionality to install packages into a sysext, then apply the sysext. It’s just that such tooling would have to be built per-distro (which we don’t have the interest or bandwidth to do), and those sysexts would only apply per-distro (different distros have different sets of packages, different ABIs, etc.; you don’t want to build a franken-distro). Even then, they’re going to be version-locked to the build of the distro in ways that the distro wasn’t built to support: unlike image-based OSes, traditional distros don’t have an overarching OS version that determines the version of every single package. The package manager’s (again: bespoke) database isn’t designed for sysext and can’t really handle it. On top of that, if we go the package-manager route we will explicitly never support GNOME OS here, because GNOME OS does not and cannot have a package manager.
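To make the per-distro and version-locking point concrete, here’s a sketch of what that would look like on Fedora with dnf (the package, release number, and extension name are illustrative; the privileged dnf step is shown as a comment):

```shell
# Hypothetical per-distro flow: install packages into a sysext staging tree
# instead of the live system.
EXT=/tmp/sysext-from-rpms
mkdir -p "$EXT/usr/lib/extension-release.d"

# As root, on Fedora, dnf can resolve and install into a staging root:
#   dnf install --installroot="$EXT" --releasever=42 gnome-shell

# The resulting tree only matches that exact release's package set and ABI,
# so the release file has to pin it to that specific host OS version:
cat > "$EXT/usr/lib/extension-release.d/extension-release.sysext-from-rpms" <<'EOF'
ID=fedora
VERSION_ID=42
EOF
```

Note how this sysext can only ever merge onto a matching Fedora 42 host, which is exactly the version-lock problem described above.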

it would mean we don’t end up building yet another package manager

I think we need to take a step back and look at what we’re solving here, because it’s not entirely clear to me what exactly a package manager has to do with it.

First, we want to make it possible to develop multiple system components at a time, and then integrate them into an OS and test/dogfood them. It should be easy to check out a couple of projects, iterate on them, and then test them. This solution must be flexible, to support various developer workflows: ad-hoc builds from git checkouts, or robust distro-quality builds via BuildStream. It should support installing into your own currently-running system, or booting into a VM with all your changes applied. It should support GNOME OS or other distros (but hopefully, eventually, everyone will be developing GNOME on GNOME OS). And all of this should be fast, to enable quick iteration and testing cycles. Right now this kind of workflow is sorta possible on traditional package-based distros, and it’s what we’ve all been doing for all these years, but we want to make it possible on image-based distros too, and we want to make everything atomic, reversible, and overall better.

Second, we want to make the various GNOME CIs produce artifacts that you can download and apply on top of a reference OS (i.e. GNOME OS) to test a branch of a repo. GNOME app developers can do the exact same thing with apps: their CIs produce Flatpaks that you can download and install to test a given branch of a given app. Right now, testing system components is very difficult. We want to enable the designers to go to the gnome-shell repo, download a CI artifact, overlay it onto their GNOME OS Nightly VM or onto their GNOME OS Nightly host system, and then play around with it.

Neither of these situations that we’re addressing here looks like package management. Neither has been solved by package management. So we’re not building a package manager here.

What you described is something a few of us built around 2009 at VMware; it was called CVP. Its purpose was checking VMs out from a broker onto laptops (and then checking them back in later), but the principles are very similar.

It really was an operating system of its own in terms of maintenance. The number of gotchas you run into is considerable. Doing pass-through for WiFi, GPU, sound, etc. is extremely brittle, to the point that you end up having to provide an allow-list of what hardware works, especially when device resets are in play.

If you don’t want to do pass-through, there is the option of emulating very simple device drivers and para-virtualizing them. This has its own set of nastiness and maintenance burden, though.

But you are right, it is possible.

You can also do other incredibly fun things from the hypervisor like deny page-table mappings if they match a virus signature.

What you described a few of us built around 2009 at VMware, it was called CVP

But you are right, it is possible

Unlike VMware, we won’t be implementing our own hypervisor for this kind of thing, though. QEMU + KVM + the Linux kernel already provide us with lots of para-virtualized devices: networking, sound, input (keyboard/mouse, and maybe even touch/pen). Filesystems, arbitrary sockets, and even Wayland too. And, of course, graphics and even 3d-accelerated graphics, with DRM and all that jazz.

So the question becomes more of what devices are relevant to pass through for GNOME developers. I think the most common answer is GPU and touchpads. GPU is a proven concept - lots of people pass their single GPU in and out of VMs for gaming, across both NVIDIA and AMD. I wouldn’t expect touchpads to be too difficult either - they’re usually just HID as far as I know. Everything else doesn’t really need to be “real” in the VM - input, networking, sound, etc can all work through paravirtualization.

Of course some developers may need to hack on NetworkManager or something like that, but then I’d expect the workflow for that to be a bit more custom. Presumably whatever we come up with would be re-configurable, so we could let developers who really do need direct access to the WiFi chipset try to pass that through.

So really what I’m proposing is GNOME Boxes, with a special mode where instead of giving a virtualized GPU it gives the VM the real GPU, touchpad, and any other device the user lists, and lets everything else be paravirtualized like it is today.
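Under the hood, that special mode boils down to a couple of QEMU options (illustrative only: the PCI address and evdev path are machine-specific placeholders, and Boxes/libvirt would generate the real invocation):

```shell
# Sketch of the relevant QEMU flags; requires the GPU to be bound to vfio-pci
# on the host first, and root or appropriate device permissions.
#   qemu-system-x86_64 -enable-kvm ... \
#     -device vfio-pci,host=0000:01:00.0 \                  # real GPU into the VM
#     -object input-linux,id=tp0,evdev=/dev/input/event5    # real touchpad
# Everything else (networking, sound, filesystems) stays paravirtualized
# via the usual virtio devices.
```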

I suggest that this tooling specifically avoids dependency management, for that reason. What we have instead is Meson subproject support, which can take care of simple cases like ‘I need an unmerged branch of GLib’ - but indeed, if you need to express complex dependency graphs then there are plenty of package managers already :slight_smile:
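For the simple case mentioned, a Meson wrap file is all it takes; this is a sketch where the branch name is a made-up example:

```ini
# subprojects/glib.wrap — pull an unmerged GLib branch as a subproject
[wrap-git]
url = https://gitlab.gnome.org/GNOME/glib.git
revision = my-unmerged-branch
depth = 1
```

With that in place, `meson setup` falls back to the subproject when the system GLib doesn’t satisfy the dependency, with no package manager involved.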