GTK4 - GtkGLArea breakdown - Fedora 40

Hello,
Going back to Linux after a few days on OSX, I returned to my Fedora 40 workstation, GTK 4.14.4 (same as on OSX Sonoma), only to face an unexpected issue after a dnf update (too bad I can’t even remember what was updated).

There is no way to create a GTK4 GtkGLArea widget without a fatal error and a crash of the application, with the following message:

No provider of glDepthRange found.  Requires one of:
    Desktop OpenGL 1.0

Same code: no issue whatsoever with GTK3.

I am using the Nvidia drivers (470xx, rpmfusion); removing them does not help.
Note that I had noticed flickering issues with the Nvidia proprietary drivers, and that running a program with GSK_RENDERER=gl would fix them (for GTK4; no issues with GTK3), but that still does not help with the fatal error above …

I also noticed that in gtk4-demo the Shadertoy demo cannot create a GL context; not sure if this is related.

I tried to downgrade to GTK 4.14.2 but that does not change anything, so I am not even sure that this is a GTK4 issue … this is kind of confusing; if anyone has any idea of what is going on, I would really appreciate it.

GTK 4.14 prefers OpenGL ES, not OpenGL.

This is likely going to be the issue: the driver is too old.

Try exporting:

GSK_RENDERER=gl
GDK_DEBUG=gl-prefer-gl

in your environment.

With the following it works:

GDK_DEBUG=gl-glx

I found this out here; any idea what it implies for the future development of my atomes program?!

In particular, because I packaged it, and it is now distributed in the Fedora and Debian repositories, should I package it and run it through a script like:

#!/bin/sh

export GSK_RENDERER=gl
export GDK_DEBUG=gl-glx
exec atomes-bin "$@"

with the script named atomes, placed in the PATH, and the binary renamed atomes-bin.

Thanks in advance for your thoughts

It implies that, if you want to support older nVidia hardware that is not covered by newer versions of the binary nVidia driver, you’ll have to export those environment variables in your execution environment prior to initialising GTK; you can do that by calling g_setenv() first thing in your main function.
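
As a minimal sketch of that approach, setting both variables unconditionally before anything GTK-related runs (the body after gtk_init() is just a placeholder, not atomes code):

#include <gtk/gtk.h>

int
main (int argc, char *argv[])
{
  /* Must happen before any GTK/GDK call initialises the GL backend. */
  g_setenv ("GSK_RENDERER", "gl", TRUE);
  g_setenv ("GDK_DEBUG", "gl-prefer-gl", TRUE);

  gtk_init ();

  /* ... the usual application setup (GtkApplication, windows, GtkGLArea)
     would continue here unchanged ... */
  g_print ("GSK_RENDERER is now: %s\n", g_getenv ("GSK_RENDERER"));
  return 0;
}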


Thank you for your help @ebassi (again).
Now, one last question (for today): do you think I should always use the g_setenv() function (which I discovered today, thank you, always learning here) to set both GSK_RENDERER=gl and GDK_DEBUG=gl-prefer-gl, or should I test whether the installed driver is OK? If yes, is that even possible, and how? And after the test, adjust the values of both environment variables accordingly … man, that’s 3 questions, sorry :stuck_out_tongue:

No, you can’t test the driver without initialising GTK, at which point it’s too late to set the environment variables.

You have three options:

  1. always set those variables unconditionally, and live with the fact that you’re using GLX and an older renderer when using GTK
  2. bail out of your application if the GLArea cannot be used, which means losing support for older nVidia GPUs and drivers
  3. create a “launcher” binary that initialises GTK, tries to realize a GdkGLContext, and if it fails, sets the environment variables before spawning your actual application, using GSubprocessLauncher

The third option is the most complex, but it allows you to conditionally set up the execution environment.
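
A rough, untested sketch of option 3, assuming the real binary is installed as atomes-bin as in the wrapper script above, and assuming that a GdkGLContext that realizes successfully is a good enough test:

#include <gtk/gtk.h>

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  gboolean gl_ok = FALSE;

  gtk_init ();

  /* Try to realize a GL context on the default display to see whether the
     default (GLES-preferring) code path works with the installed driver. */
  GdkDisplay *display = gdk_display_get_default ();
  GdkGLContext *context = gdk_display_create_gl_context (display, &error);
  if (context != NULL)
    {
      gl_ok = gdk_gl_context_realize (context, &error);
      g_object_unref (context);
    }
  if (error != NULL)
    {
      g_message ("GL check failed: %s", error->message);
      g_clear_error (&error);
    }

  /* Spawn the real application, forcing the legacy GL path only if needed. */
  GSubprocessLauncher *launcher =
    g_subprocess_launcher_new (G_SUBPROCESS_FLAGS_NONE);
  if (!gl_ok)
    {
      g_subprocess_launcher_setenv (launcher, "GSK_RENDERER", "gl", TRUE);
      g_subprocess_launcher_setenv (launcher, "GDK_DEBUG", "gl-prefer-gl", TRUE);
    }

  GSubprocess *proc =
    g_subprocess_launcher_spawn (launcher, &error, "atomes-bin", NULL);
  if (proc == NULL)
    {
      g_printerr ("Failed to spawn atomes-bin: %s\n", error->message);
      g_clear_error (&error);
      g_object_unref (launcher);
      return 1;
    }

  g_subprocess_wait (proc, NULL, NULL);
  int status = 0;
  if (g_subprocess_get_if_exited (proc))
    status = g_subprocess_get_exit_status (proc);
  g_object_unref (proc);
  g_object_unref (launcher);
  return status;
}

The check here is deliberately simple; a stricter test could exercise the same code path that actually fails in the application.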


Thank you for the detailed explanation; unfortunately, I have a hard time figuring out how to implement the third option:

The code breaks with a fatal error at runtime because a function, glDepthRange (and more, actually, if I comment that one out), cannot be found; yet I can build the same program on the same system, because the function is found at compilation time, since it is declared in the appropriate libraries.

So I should test whether that function is available at runtime, so that I get a result during initialization and know what to do next … is this even possible?

Since I lack the appropriate vocabulary to look it up, could you please let me know what this process of checking at runtime is called?

And I guess that’s even more questions …

Still putting ideas in words just to try to clarify the process:

I checked the GIO subprocess page and, if I understand it correctly, to implement this check I need to build another executable that I will package and install alongside atomes, and that I will use (at least on the first run; afterwards I could save the configuration somewhere) to determine the appropriate options: is that correct? Is there no way to do this in atomes's code without an external command?

[EDIT]
Ok I did it using a subprocess and an external binary that I built and installed separately.
The subprocess is launched by the main application if needed, then the parameters are adjusted using the test results.
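
In rough outline it looks something like the sketch below; the helper name atomes-glcheck and the application id are placeholders rather than the real names, the helper is assumed to exit with status 0 when the default GL setup works, and here the check runs unconditionally instead of only when needed:

#include <gtk/gtk.h>

/* Run the external test binary before GTK is initialised in this process;
   if it reports a failure, force the legacy GL path via the environment. */
static void
maybe_force_legacy_gl (void)
{
  GError *error = NULL;
  GSubprocess *check = g_subprocess_new (G_SUBPROCESS_FLAGS_STDERR_SILENCE,
                                         &error,
                                         "atomes-glcheck", NULL);
  gboolean gl_ok = FALSE;

  if (check != NULL)
    {
      gl_ok = g_subprocess_wait_check (check, NULL, NULL);
      g_object_unref (check);
    }
  g_clear_error (&error);

  if (!gl_ok)
    {
      g_setenv ("GSK_RENDERER", "gl", TRUE);
      g_setenv ("GDK_DEBUG", "gl-prefer-gl", TRUE);
    }
}

static void
on_activate (GtkApplication *app, gpointer user_data)
{
  GtkWidget *window = gtk_application_window_new (app);
  gtk_window_set_child (GTK_WINDOW (window), gtk_gl_area_new ());
  gtk_window_present (GTK_WINDOW (window));
}

int
main (int argc, char *argv[])
{
  /* The check must run before g_application_run() initialises GTK. */
  maybe_force_legacy_gl ();

  GtkApplication *app = gtk_application_new ("org.example.Atomes",
                                             G_APPLICATION_DEFAULT_FLAGS);
  g_signal_connect (app, "activate", G_CALLBACK (on_activate), NULL);
  int status = g_application_run (G_APPLICATION (app), argc, argv);
  g_object_unref (app);
  return status;
}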

Not sure if there is any other way of doing this, but it works now, so thank you @ebassi for pointing me in the right direction :wink:
[/EDIT]

Some time ago I also ran into renderer issues with GTK 4.14+; it wasn't nVidia, but it looks similar in some way (I asked here: GTK4.14: Which GSK renderer is safe to use in general?). gtk4-demo was used as a reference app to be sure the problem was not connected with my own code. After getting that demo to display normally with the gl and opengl renderers, I put GSK_RENDERER into the code of my own apps at the beginning of the main() function, like this:

/* Only override the renderer if the user has not chosen one already;
   getenv()/putenv() come from <stdlib.h>, the version checks from <gtk/gtk.h>. */
if (!getenv("GSK_RENDERER")) {
  int major = gtk_get_major_version(), minor = gtk_get_minor_version();
  if ((major == 4) && ((minor == 14) || (minor == 15)))
    putenv("GSK_RENDERER=gl");
}

and it still works in my case.
