Changing the way GNOME modules are released

I would very much like to point out that the core of this proposal is to side-step the entire “generate a tarball” approach.

Yes, having a CI pipeline running distcheck is good and fine, and we can automate it; the main issue is taking the tarball and uploading it somewhere: we cannot hand the SSH keys for the GNOME FTP share to the CI runners.

One option is to have the CI pipeline store the release archive as an artifact for, say, 24 hours; when the pipeline ends, we could use something like:

```shell
# Note: the endpoint was not specified; RELEASE_SERVICE_URL is a placeholder.
curl -X POST \
  -F project="${CI_PROJECT_NAME}" \
  -F branch="${CI_COMMIT_BRANCH}" \
  -F archive=@"${TARBALL_FILE}" \
  "${RELEASE_SERVICE_URL}"
```

And have a simple service answering that request (assuming it comes from an authorised project, of course) that downloads the tarball from the CI artifacts of the given branch. Both GTK and libadwaita use a similar approach to download the API references generated on various branches, in order to publish them.
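A sketch of what such a service could do on its side, using GitLab's documented job-artifacts API; the variable names (`PROJECT_ID`, `BRANCH`, `JOB_NAME`, the token) are illustrative, not part of any existing GNOME service:

```shell
# Hypothetical sketch: fetch the newest release artifact for a branch via
# GitLab's job-artifacts API. All variable names and defaults are illustrative.
GITLAB_TOKEN="${GITLAB_TOKEN:-changeme}"
PROJECT_ID="${PROJECT_ID:-1234}"
BRANCH="${BRANCH:-main}"
JOB_NAME="${JOB_NAME:-dist}"

# GET /projects/:id/jobs/artifacts/:ref/download?job=:name is the documented
# endpoint for downloading the artifacts of the latest successful job on a ref.
url="https://gitlab.gnome.org/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${BRANCH}/download?job=${JOB_NAME}"
echo "would fetch: ${url}"
# curl --fail --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" -o artifacts.zip "$url"
```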

This approach would also eliminate the need to hand out SSH access to maintainers just to upload release archives to the download server.
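The "store the release archive as an artifact for 24 hours" part mentioned above maps directly onto GitLab's artifact expiry; a hypothetical dist job could look like this (job name, stage, and paths are illustrative, and assume a Meson project):

```yaml
# Hypothetical dist job; keeps the tarball around just long enough
# for the release service to pick it up.
dist:
  stage: build
  script:
    - meson setup _build
    - meson dist -C _build
  artifacts:
    paths:
      - _build/meson-dist/*.tar.xz
    expire_in: 24 hours
```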

From a maintainer’s perspective, including the release team’s, the tarball is just a byproduct; it’s simply not important.

The less convoluted approach is to have people download the release artifact associated with a tag; for instance, GLib’s 2.73.2 release:

*(Screenshot: GLib 2.73.2 release page on GitLab, showing the auto-generated source assets; 2022-07-15)*

GitLab generates these assets automatically; releases are tied to the lifetime of a tag, and since we don’t allow tags to be deleted, releases cannot be deleted either. This would move archive management from the download server to GitLab, though we can likely set up redirects. For projects still using Autotools, it would mean saying goodbye to self-hosting release archives, but every project that has switched to Meson or CMake can already cope with it. As far as I know, GitLab source assets are not generated on the fly, and thus won’t change if the Git archive format changes, unlike on GitHub; but I’d have to double-check.
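Since those assets follow GitLab's fixed archive URL route, fetching a release can be scripted; a small sketch using the GLib 2.73.2 example (the `.tar.gz` suffix is one of several formats GitLab offers):

```shell
# GitLab's auto-generated source archive for a tag follows a fixed pattern:
#   https://<host>/<group>/<project>/-/archive/<tag>/<project>-<tag>.tar.gz
project=glib
tag=2.73.2
url="https://gitlab.gnome.org/GNOME/${project}/-/archive/${tag}/${project}-${tag}.tar.gz"
echo "$url"
# curl -LO "$url"   # uncomment to actually download
```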

Since releases can be created as part of the CI pipeline, we could automate that step as well, and tie it to a tag.
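GitLab CI has a `release` keyword for exactly this; a hypothetical job that runs only on tag pipelines could look like the following (job name and description text are illustrative):

```yaml
# Hypothetical: create a GitLab release whenever a tag is pushed.
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release $CI_COMMIT_TAG'
```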

In short: there are alternatives to building our own tarballs. They depend on maintainers setting up a CI pipeline, which is another reason why maintainers who don’t have the time or knowledge to do so should ask for help, instead of waiting for the perfect time to learn how to do it. We have a large community, but we cannot go around asking people if they need help; please, be proactive.

I started writing templates for publishing API references; they need some cleanup, and ideally I would like to depend on the GNOME runtime images instead of using a Fedora container. Still, if you either have non-bleeding-edge dependencies, or you have your dependencies listed as Meson subprojects to fall back to, then you can already copy-paste the YAML into your project’s CI pipeline.
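For a rough idea of the kind of YAML involved, here is a generic sketch, not the actual templates; the build option, documentation path, and job layout are all assumptions that vary per project:

```yaml
# Hypothetical pages job for a Meson project that builds its API reference.
pages:
  stage: deploy
  script:
    - meson setup _build -Dgtk_doc=true   # option name is project-specific
    - meson compile -C _build
    - mkdir -p public
    - cp -r _build/docs/reference public/ # docs path is project-specific
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```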

The GTK and libadwaita CI setup is a bit more complicated, as it allows publishing multiple references from different projects or branches, and it requires a per-project token which can only be generated by a maintainer.

I am also very much in favour of releasing from tags and automating this. But having investigated this in the past, I suspect it’s going to be a fair amount of work.

One issue with keeping the NEWS file up to date is that we will end up with a bunch of merge conflicts on it. If that becomes a big problem, we can do something similar to what GitLab started doing recently to work around this issue.

Having a NEWS/${release} directory would be an interesting solution, yes. If we need to run a script anyway to add the translations line item, it would at least justify getting things in order.
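As a sketch of how per-release fragments could be folded into the NEWS file at release time; the directory layout (`NEWS.d/`), file names, and version are all hypothetical:

```shell
# Hypothetical: collect per-MR news fragments from NEWS.d/ into the top of
# the NEWS file, then delete the fragments. The sample fragments below are
# created here only to make the sketch self-contained.
mkdir -p NEWS.d
printf 'Fix crash on startup (!123)\n' > NEWS.d/123-crash.txt
printf 'Translation updates\n'         > NEWS.d/124-i18n.txt

version="41.1"
{
  echo "Version ${version}"
  echo "============"
  cat NEWS.d/*.txt
  echo
  [ -f NEWS ] && cat NEWS   # keep the existing entries below the new ones
} > NEWS.new
mv NEWS.new NEWS
rm -f NEWS.d/*.txt
```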

Another idea that could work for projects using Marge-Bot would be to have a separate ‘News’ section in the MR description. Then when Marge-Bot is assigned and merges the MR she could take the text from that section and add it to the NEWS file. The MR description can already be edited by the maintainers so this might be easier than having to push some additional files.
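A sketch of the extraction step, assuming the section is literally titled “## News”; the section name and file names are hypothetical, not an existing Marge-Bot feature:

```shell
# Hypothetical: pull a "## News" section out of an MR description so it can
# be prepended to NEWS. The sample description is created here only to make
# the sketch self-contained.
cat > description.md <<'EOF'
Fixes the frobnicator.

## News
- Fixed a crash when frobnicating (#456)

## Checklist
- [x] Tests pass
EOF

# Print everything between "## News" and the next "## " heading.
awk '/^## News$/ {flag=1; next} /^## / {flag=0} flag' description.md > news-entry.txt
cat news-entry.txt
```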

Back to the straw man proposal, quoting @mcatanzaro’s latest blog post:

> Remember, upstreams know their software better than downstreams do, so the hard thinking should happen upstream.

(Although the blog post is about build options, I think that quote is relevant here as well.)

Here, the upstreams are individual maintainers, and the “downstreams” are the release team. The release team has a good overview of all the modules, but when zooming in on an individual module, it is the module’s maintainer(s) who know best what to release.

So, I think, when building things with gnome-build-meta, it’s important to have a good visualization of what’s going on: which modules build fine, and at which point it fails (if it fails).

Another good practice when doing integration work is to integrate one module at a time (I don’t know if it’s already done that way), starting at the “bottom” (for example GLib), then building the direct reverse dependencies, and so on until reaching the apps. I think it’s actually better to build the modules sequentially, to see at which point things break.
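As a trivial illustration of the “stop at the first blocker” idea; the module list and the commented-out build command are placeholders, not the release team’s actual tooling:

```shell
# Illustrative only: build modules bottom-up and stop at the first failure,
# so the blocking module is obvious from the log.
modules="glib pango gtk gnome-shell"   # placeholder dependency order
: > build-order.log
for m in $modules; do
  echo "building $m" >> build-order.log
  # bst build "$m" || { echo "integration blocked at $m"; exit 1; }
done
```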

Edit: note that I’m not a member of the release team; perhaps all the tools are already well in place, and it’s mainly missing module releases that are the problem.

Edit 2: from my point of view, we can also distinguish two different, although related, things here: making the release team’s life easier versus making module maintainers’ life easier. Both can be improved independently.

What I currently do: whenever I have a screen full of commits (from `git log --oneline`), I write the NEWS entries for them.
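That manual pass over `git log --oneline` can be partly mechanised; a sketch that drafts NEWS bullets from the commits since the last tag, demonstrated on a throwaway repository so it is self-contained:

```shell
# Hypothetical helper: list commits since the last tag as NEWS-style bullets.
# A throwaway repository is created here only for demonstration purposes.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "initial"
git tag 1.0.0
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "Fix a crash"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "Update translations"

last_tag=$(git describe --tags --abbrev=0)
git log --reverse --format='- %s' "${last_tag}..HEAD" > news-draft.txt
cat news-draft.txt
```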
