Changing the way GNOME modules are released

But see also: Even/odd versioning is confusing, let's stop doing it?

Would this proposed policy apply to all GNOME modules, or just libraries? For example, Calendar and Settings are leaf modules that don’t block other modules’ releases; would the release team still release them?

I mean, unless you want to stop mentioning translations entirely, not mentioning individual translations doesn’t solve much - maintainers still need to add the

- Translation updates

line, and since translations land independently you’d have to check new commits in case there’s a translation.

Moreover, if a module is unmaintained but received translations, nobody would add that line, even if other maintainers are tracking their commits.

Unless there is indeed a script adding it.

Considering that some apps are also providers of an API for plugins, the line is somewhat blurred.

I’d rather have the release team control all releases, to be quite honest with you. As I said, existing maintainer can do more releases if they so choose—just like they can do more than one alpha/beta/rc release during development, or they can release more often than once a month during the stable cycle.

The sneaky bit of the straw man is to ensure that the release team can switch over to a tag-based release model, instead of a tarball-based one; this requires figuring out which tags to trust, and using a GNOME release team key to sign a tag would help ensure that the release team has at least verified that a tag is safe. I am not sure if Git allows two people to sign the same tag.

They already get credit in the About dialog.

The main problem, as I mentioned in my reply to @alexm, is that the author of the translation (especially when using Damned Lies) is not preserved in the commit log; this means that you cannot easily credit the proper translators in your NEWS file.

That’s fine. I’m okay either way, just will need some time and mental bandwidth to eventually adapt to this new workflow.

I’m a big fan of this idea. :slight_smile:

@pwithnall has already written this script! The gitlab-changelog script looks at commits since the last annotated tag and, among other things, emits output like this:

* Translation updates:
 - Czech
 - Dutch
 - French
 - Friulian
 - Georgian
 - Greek, Modern (1453-)
 - Icelandic
 - Latvian
 - Vietnamese

I think you could approximate the “whodunnit” bit by looking at the Last-Translator field in the po file header (including all changes to this field since the last annotated tag).

Well, it’s a strawman proposal, so everything is subject to change. We could exclude leaf modules. But that wouldn’t really solve any grand problems. Calendar is part of core, so it does block the overall GNOME release. Some hypothetical examples of leaf modules causing problems:

  • Library you depend on breaks API, fix is committed to gnome-calendar git master a day or two later, but not released. Release team does not notice until trying to build release, when we’re in headless chicken mode.
  • Meson gets a bit stricter. Fix is committed to gnome-calendar git master, but not released. Release team again does not notice until headless chicken mode.
  • Vala or GCC changes something. Fix is committed to gnome-calendar git master, but not released. More release team headless chickens.

These scenarios should be familiar. Pretty sure Calendar has failed to build in releases more than once. There’s just no way to predict what will go wrong when modules miss the tarball deadline; we can’t notice problems in advance, because what breaks only becomes apparent once we see what has been released and what has not. We could set up a “tarball CI” to build the latest tarball of everything and verify that it works, but that would probably be useless: it will work fine until release week, then everything will break all at once and we’ll wind up exactly where we are right now.

I think automating as much as possible is a great idea, and maintainers probably need help with this. Not everyone has made the jump into the CI/containers/reproducibility world.

Case in point: of the accessibility modules, only atk had CI for the last major release; pyatspi2 and Orca still don’t. Yelp doesn’t have CI. I haven’t checked other core-desktop modules.

The CI schemes vary widely, and impose different responsibilities on maintainers and contributors. For librsvg and at-spi2-core I’m really happy with freedesktop ci-templates, but for example, GTK doesn’t use them (its container image must be built by hand), so it’s hard for me as a contributor to make an MR that needs an updated image.

For librsvg, which unfortunately still uses autotools, the CI pipeline runs a “distcheck” job, and when I do the merge request to update NEWS/etc., I get the tarball out of the distcheck job’s artifacts. I copy that to, etc. I’d really like to be able to tag a commit by hand, and have the “make a tarball → upload it to FTP” step be automatic and triggered by the act of tagging a release. I think we could have a good session during GUADEC or soon enough to brainstorm all the supply chain and security implications of enabling that :slight_smile:

I suppose things are a lot easier for Meson modules, where there is no “make distcheck” step that potentially needs a lot of babysitting.

If the release team embarks on a “CI all the things” task, maybe we could look for things to standardize / automate a bit? Publishing generated docs to Gitlab pages (which destination path?), publishing other generated stuff (coverage reports, what else?). All those would be easier for maintainers to add if there were a list of “cool CI toys and what to cut&paste to enable them”.

We can totally go to the model of “keep everything in a release-worthy state all the time”, but the tooling needs plenty of work. I think it’s worthwhile.


At the same time, while it reduces work for individual maintainers, in my opinion it also makes releases harder to do, both for maintainers and the release team.

See for example this article that I liked:

It basically explains that the more frequently we do something, the more we will want to automate it.

A basic example is the NEWS file: if I have a huge backlog of commits to summarize, it’s harder. So your suggestion to regularly update the NEWS file outside of releases is a good idea for everybody, I think, especially if there are fewer releases (it’s something I was already doing).


By the way, a more general comment about trying to standardize more and more things between GNOME modules: it can be hard to achieve, and it’s more work for developers to keep reminding themselves “oh, I also need to do this that way”.

As already said, this is where automation is useful (here, for releasing), along with code re-use for CI (and for other global initiatives that aim for homogeneity).

I think in the end this could be a watershed moment for GNOME much like our 6 month release cadence. It will take a bit of getting used to, but in the end I think it’s very much worth it.

I know there are things as a maintainer I need to do better at, and I’ve always felt keeping NEWS up to date incrementally was the right thing to do (even though I generally don’t). But it also means we can push that work off into the merge request, so that contributors are in the habit of both summarizing their changes in NEWS and documenting them as part of MRs (when necessary). Even less work for the maintainer.

Of course, this effort seems like a lot of work to implement, but I would very much love to be doing less repetitive work as a maintainer.

Yes, yes and YES!

For gnome-shell, we have dist scripts that pre-generate some resources, namely stylesheets(*) and man pages(**), so I’d still like to generate those. This is currently also done in a CI job, but the "download locally, upload somewhere else, run ftpadmin install" dance is quite annoying.

*) so RHEL doesn’t need a sassc package
**) so Ubuntu doesn’t need their python2-based asciidoc package

I do consider post-release version bumps to be required, though.

I did try that when adopting the new version scheme, but went back to pre-release bumps when starting to include “meson dist” in the CI pipeline.

The current workflow is as follows:

  1. update NEWS, metainfo, …, bump version in
  2. open a merge request
  3. there is a “dist” job that is triggered by changes to the toplevel
  4. after merging, tag the release
  5. the tag pipeline triggers the “dist” job again, followed by a job that publishes the tarball as artifact

That makes sure that “meson dist” passes before tagging the release, but is also tied to pre-release bumps.
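A sketch of what that setup might look like in .gitlab-ci.yml (job names, stages, and rules are illustrative, not gnome-shell’s actual configuration):

```yaml
dist:
  stage: build
  script:
    - meson setup _build
    - meson dist -C _build
  artifacts:
    paths:
      - _build/meson-dist/
  rules:
    # Run on merge requests so "meson dist" is checked before merging,
    # and again on tag pipelines so the tarball can be published.
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_TAG
```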

There’s the additional complication that “meson dist” checks that appstream metainfo has current release information (I used to forget that all the time).

Adding an empty release tag during a post-release version bump pretty much defeats the purpose of the test; also, appstream-util only validates release tags with a timestamp, which must not be in the future.
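For reference, the check is about the releases element in the metainfo file; a minimal (hypothetical) entry that satisfies it looks something like:

```xml
<releases>
  <!-- appstream-util wants a date, and rejects dates in the future -->
  <release version="42.1" date="2022-07-15"/>
</releases>
```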

While I had switched to post-release bumps, I “solved” that by only adding the tag at the time of the release, and ignoring the broken distcheck in the meantime. Not great, and no longer an option once dist was hooked up in CI.

That’s not meant as stop energy by any means. I’m happy to give post-release bumps another go, but I need suggestions on how to adjust the current setup to make them work (hopefully without losing the checks).

Maybe only include the distchecks when using the “release” buildtype? Is it possible to have a git hook that enforces that a corresponding pipeline must succeed before accepting a new tag?

I would very much like to point out that the core of this proposal is to side-step the entire “generate a tarball” approach.

Yes, having a CI pipeline running distcheck is good and fine, and we can automate it; the main issue is taking the tarball and uploading it somewhere: we cannot add the SSH keys to the GNOME FTP share to the CI runners.

One option is to have the CI pipeline store the release archive as an artifact for, say, 24 hours; when the pipeline ends, we could use something like:

curl -X POST \
  -F project=${CI_PROJECT_NAME} \
  -F branch=${CI_COMMIT_BRANCH} \
  -F archive=${TARBALL_FILE}

And have a simple service answering that request (if it’s coming from an authorised project, of course) that will download the tarball from the CI artifacts of the given branch. Both GTK and libadwaita use a similar approach to download the API references generated on various branches, in order to publish them.

This approach would also eliminate the need to hand out SSH access to maintainers in order to upload archives to

From a maintainer’s perspective (including the release team’s), the tarball is just a byproduct; it’s just not important.

The less convoluted approach is to have people download the release artifact associated with a tag; for instance, GLib’s 2.73.2 release:

[screenshot of GLib’s 2.73.2 release page on GitLab, showing the auto-generated source assets]

GitLab generates these assets automatically; releases are tied to the lifetime of a tag, and since we don’t allow tags to be deleted, releases cannot be deleted either. This would move archive management from to—though we can likely set up redirects. For projects still using Autotools, it would mean saying goodbye to self-hosting release archives, but every project that has switched to Meson or CMake can already cope with it. As far as I know, GitLab source assets are not generated on the fly, and thus won’t change if the Git archive format changes, unlike with GitHub; but I’d have to double check.

Since releases can be created as part of the CI pipeline, we could automate that step as well, and tie it to a tag.
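GitLab’s release keyword makes that step fairly small; a sketch (job and stage names are placeholders, and this is untested):

```yaml
release:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    # Only run when a tag is pushed.
    - if: $CI_COMMIT_TAG
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: "See the NEWS file for this release."
```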

In short: there are alternatives to building our own tarballs. They depend on maintainers setting up a CI pipeline, which is another reason why maintainers who don’t have the time or knowledge to do so should ask for help, instead of waiting for the perfect moment to learn. We have a large community, but we cannot go around asking people if they need help; please, be proactive.

I started writing templates for publishing API references; they need some cleanup, and ideally it would be great if I could depend on the GNOME runtime images instead of using a Fedora container. Still, if you either have non-bleeding-edge dependencies, or you have your dependencies listed as Meson subprojects to fall back to, then you can already copy-paste the YAML into your project’s CI pipeline.

The GTK and libadwaita CI setup is a bit more complicated, as it allows publishing multiple references from different projects or branches, and it requires a per-project token which can only be generated by a maintainer.

I am also very much in favour of releasing from tags and automating this. But having investigated this in the past, I suspect it’s going to be a fair amount of work.

One issue with keeping NEWS up to date is that we will end up with a bunch of merge conflicts on the NEWS file; if that becomes a big problem, we can do something similar to what GitLab started doing recently to work around it.

Having a NEWS/${release} directory would be an interesting solution, yes. If we need to run a script anyway to add the translations line item, it would at least justify getting things in order.
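As a sketch of that idea (the directory layout and function name are made up): each MR adds one small fragment file, and a release-time script folds them into a NEWS section, so concurrent MRs never touch the same file:

```python
from pathlib import Path

def collect_news(fragments_dir: str, version: str) -> str:
    """Fold per-MR fragment files into a NEWS section for one release."""
    entries = sorted(Path(fragments_dir).glob("*.txt"))
    lines = [f"Version {version}", "=" * (len("Version ") + len(version)), ""]
    # One bullet per fragment file, in filename order.
    lines += [f"* {p.read_text().strip()}" for p in entries]
    return "\n".join(lines) + "\n"
```

At release time the script would prepend the result to NEWS and delete the fragment files.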

Another idea that could work for projects using Marge-Bot would be to have a separate ‘News’ section in the MR description. Then, when Marge-Bot is assigned and merges the MR, she could take the text from that section and add it to the NEWS file. The MR description can already be edited by the maintainers, so this might be easier than having to push some additional files.

Back to the straw man proposal, quoting @mcatanzaro’s latest blog post:

Remember, upstreams know their software better than downstreams do, so the hard thinking should happen upstream.

(although the blog post is for build options, I think that quote is relevant here as well).

Here, the upstreams are individual maintainers, and the “downstream” is the release team. The release team has a good overview of all the modules, but when zooming in on an individual module, it is the module’s maintainer(s) who know best what to release.

So, I think, when building things with gnome-build-meta, it’s important to have a good visualization of what’s going on: which modules build fine, and at which point it fails (if it fails).

Another good practice when doing integration work is to integrate one module at a time (I don’t know if it’s already done that way), starting at the “bottom” (for example GLib), then building the direct reverse dependencies, and so on until reaching the apps. I think it’s actually better to build the modules sequentially, to see at which point things break.

Edit: note that I’m not a member of the release team, perhaps all the tools are already well in place, and it’s just the modules’ releases (if missing) that are mostly problematic.

Edit 2: from my point of view, we can also distinguish two different things here, although related: making the release team’s life easier, versus making modules maintainers’ life easier. Both can be improved independently.

What I currently do is: when I have a screen full of commits (with git log --oneline), I write the NEWS entries so far.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.