How to embed a foreign surface/window in a GTK4 window?

Hi everyone,

I’m working on a Python + GTK4 frontend for an emulator called mupen64plus. The frontend was originally written for GTK 3, and now I’m trying to port it to GTK 4, but I’m currently struggling with it.

First of all, a bit of context may be needed here: this emulator doesn’t come with a GUI, but it does have an API that allows any third-party frontend to communicate with it. One subset of this API provides callbacks for the 3D render output, which the frontend can redefine to supply its own implementations instead (https://mupen64plus.org/wiki/index.php?title=Mupen64Plus_v2.0_Core_Video_Extension).

The purpose of this override is to let the frontend embed the render output in its own window. But there are limits to what the frontend can and cannot do; for the most part it doesn’t issue the OpenGL calls itself, the emulator still does that.

With that said, I have to redefine the VidExt_GL_SetAttribute and VidExt_GL_GetAttribute callbacks to set the pixel format (https://github.com/mupen64plus/mupen64plus-core/blob/53c20cd18c57bdc4f39ff0aba6dc7510dc57045f/src/api/m64p_types.h#L408), which then has to be passed back to the emulator, so that I can use it to draw on a dedicated area of the window.
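
To make the shape of those two callbacks concrete, here is a minimal ctypes sketch (my own illustration, not official mupen64plus code): it simply records whatever attribute/value pairs the core asks for in a dict, so they can later be translated into EGL config attributes. The prototypes follow the Video Extension documentation; the translation table and the CoreOverrideVidExt struct wiring are left out.

```python
import ctypes

M64ERR_SUCCESS = 0  # m64p_error success code

# Documented prototypes:
#   m64p_error VidExt_GL_SetAttribute(m64p_GLattr Attr, int Value);
#   m64p_error VidExt_GL_GetAttribute(m64p_GLattr Attr, int *pValue);
SETATTR_PROTO = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)
GETATTR_PROTO = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int,
                                 ctypes.POINTER(ctypes.c_int))

# Whatever the core requests (e.g. M64P_GL_DEPTH_SIZE -> 24) gets stored here
# and is later mapped onto EGL config attributes.
requested_attrs = {}

@SETATTR_PROTO
def vidext_gl_set_attribute(attr, value):
    requested_attrs[attr] = value
    return M64ERR_SUCCESS

@GETATTR_PROTO
def vidext_gl_get_attribute(attr, p_value):
    p_value[0] = requested_attrs.get(attr, 0)
    return M64ERR_SUCCESS
```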

With GTK 3, I could use EGL to set the OpenGL attributes for the pixel format, then call eglCreateWindowSurface (see here) and pass it the window handle (XID) as a parameter. In the window, all I had to do was put a DrawingArea in it and let the magic happen.
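
For reference, a rough sketch of that GTK 3-era flow in Python, assuming an X11 session and PyOpenGL’s OpenGL.EGL bindings (the exact pointer/array plumbing may need adjusting depending on the binding you use); here `xid` stands for the X window id obtained from the realized GdkWindow:

```python
import ctypes
from OpenGL import EGL

dpy = EGL.eglGetDisplay(EGL.EGL_DEFAULT_DISPLAY)
major, minor = EGL.EGLint(), EGL.EGLint()
EGL.eglInitialize(dpy, ctypes.pointer(major), ctypes.pointer(minor))

# Pick a config suitable for an on-screen window surface.
attribs = (EGL.EGLint * 11)(
    EGL.EGL_SURFACE_TYPE, EGL.EGL_WINDOW_BIT,
    EGL.EGL_RENDERABLE_TYPE, EGL.EGL_OPENGL_BIT,
    EGL.EGL_RED_SIZE, 8, EGL.EGL_GREEN_SIZE, 8, EGL.EGL_BLUE_SIZE, 8,
    EGL.EGL_NONE)
configs = (EGL.EGLConfig * 1)()
num_configs = EGL.EGLint()
EGL.eglChooseConfig(dpy, attribs, configs, 1, ctypes.pointer(num_configs))

# 'xid' is the X11 window handle the drawing area's GdkWindow maps to.
surface = EGL.eglCreateWindowSurface(dpy, configs[0], xid, None)
```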

But with GTK 4 this no longer seems possible: subwindows have been removed and there apparently is no substitute for them, so EGL just draws all over the window. Could a subsurface be a good replacement? If so, how exactly can I implement that in a DrawingArea or any other suitable widget?

If not, I can try to think of other alternatives. For example, I could use eglCreatePbufferSurface (see here) instead of eglCreateWindowSurface and draw into the DrawingArea from that, but sadly I have yet to understand how to set up the pixel buffer with GTK 4, and I’m not even sure whether this whole idea can work. Other than that, I can’t come up with anything…
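
For what it’s worth, the pbuffer creation itself is only a couple of calls; a hedged sketch under the same assumptions as above (PyOpenGL’s OpenGL.EGL, with `dpy` and `configs` reused from the previous snippet, and the config chosen with EGL_PBUFFER_BIT in EGL_SURFACE_TYPE). Getting the rendered pixels from the pbuffer into a GTK widget is the part the replies below are about.

```python
from OpenGL import EGL

# Offscreen surface the emulator can render into instead of a window.
pbuffer_attribs = (EGL.EGLint * 5)(
    EGL.EGL_WIDTH, 640,
    EGL.EGL_HEIGHT, 480,
    EGL.EGL_NONE)
pbuffer = EGL.eglCreatePbufferSurface(dpy, configs[0], pbuffer_attribs)
```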

Many thanks for any possible help or advice.

Hello, you may be able to use pbuffers; you can also try EGLImages or dmabufs to move textures between EGL contexts, depending on your requirements. I don’t have any sample code in Python, but you can find a lot of EGL examples in C if you look around on GitHub. You don’t set up the buffer in GTK 4: you set it up in your alternate context, and then in GTK you import the texture by drawing it into a GtkGLArea or a GtkSnapshot.

The old method of drawing into a reparented window is mostly obsolete; EGL has enough support for offscreen rendering that programs no longer need to render directly to the screen via the window system.
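
To illustrate the EGLImage idea in Python terms (a hedged sketch, not code from this thread): once the emulator has rendered into a texture in its own context, that texture can be wrapped in an EGLImage, which the GTK-side context can later import. This calls libEGL directly through ctypes and assumes EGL 1.5’s eglCreateImage; on older stacks the eglCreateImageKHR extension plays the same role with EGLint attributes.

```python
import ctypes

libegl = ctypes.CDLL("libEGL.so.1")
libegl.eglCreateImage.restype = ctypes.c_void_p
libegl.eglCreateImage.argtypes = [
    ctypes.c_void_p,                   # EGLDisplay
    ctypes.c_void_p,                   # EGLContext (the emulator's context)
    ctypes.c_uint,                     # target
    ctypes.c_void_p,                   # EGLClientBuffer (the texture id)
    ctypes.POINTER(ctypes.c_ssize_t),  # const EGLAttrib *attrib_list
]

EGL_GL_TEXTURE_2D = 0x30B1  # value from the EGL headers

# 'dpy', 'emulator_ctx' and 'texture_id' belong to the context the emulator
# renders with; the texture must stay alive while the image is in use.
egl_image = libegl.eglCreateImage(dpy, emulator_ctx, EGL_GL_TEXTURE_2D,
                                  ctypes.c_void_p(texture_id), None)
```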

Hi Jason, thank you! I’m sorry for not responding immediately, but I had to be out of town for a while.
Returning to the topic: this is what I was afraid of, it was much simpler with direct rendering via the window system. Anyway, I’m exploring the approaches you mentioned, but so far I have been unsuccessful, sadly. (I’d write about the issue I’m having with eglCreateImage and glEGLImageTargetTexture2DOES, but I suppose that would be off topic in this thread.)

Maybe there’s something I still don’t quite grasp. After all, the only programming experience I have is with Python, and I don’t know much about OpenGL/EGL.

Is it enough to write OpenGL commands inside the ‘render’ callback of GtkGLArea, or is there something in particular I also have to do? Is it possible to obtain the GtkGLArea’s EGL context, e.g. for sharing objects between contexts?
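
The ‘render’ part on its own is indeed that simple; a minimal PyGObject sketch (assuming PyOpenGL for the GL calls) is shown below. GTK makes its own GL context current before emitting ‘render’, so plain GL commands work there; as far as I know GDK does not expose the underlying EGL context handle, which is why the EGLImage/dmabuf route is suggested instead of classic context sharing.

```python
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk
from OpenGL import GL

def on_render(area, gdk_context):
    # GTK has already made its context current and bound the widget's
    # framebuffer; here we just clear it to prove the callback runs.
    GL.glClearColor(0.0, 0.0, 0.0, 1.0)
    GL.glClear(GL.GL_COLOR_BUFFER_BIT)
    return True  # True = frame handled

gl_area = Gtk.GLArea()
gl_area.connect("render", on_render)
```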

Thank you very much for your time.

I am not sure if this is helpful, but I have a small C demo that renders with EGL in a remote process and sends the result to a GTK4 program to be displayed: https://gitlab.gnome.org/jf/remote-egl-test

No context sharing is necessary. For your purposes you may want to look at the way EGLImages are used there; if this is all happening in a single process, you only need to share the EGLImage and you can ignore the dmabuf-passing parts. The main extension you need is GL_OES_EGL_image. Porting all of this to Python may be somewhat more involved.
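
A hedged sketch of what that import might look like in Python: the extension function is resolved through eglGetProcAddress with ctypes (the helper name texture_from_image is just illustrative), and it must run while GTK’s context is current, i.e. from inside the GtkGLArea ‘render’ handler.

```python
import ctypes
from OpenGL import GL

_libegl = ctypes.CDLL("libEGL.so.1")
_libegl.eglGetProcAddress.restype = ctypes.c_void_p
_proc = _libegl.eglGetProcAddress(b"glEGLImageTargetTexture2DOES")
# void glEGLImageTargetTexture2DOES(GLenum target, GLeglImageOES image)
_PROTO = ctypes.CFUNCTYPE(None, ctypes.c_uint, ctypes.c_void_p)
glEGLImageTargetTexture2DOES = _PROTO(_proc)

def texture_from_image(egl_image):
    # Must be called while GTK's GL context is current (e.g. in 'render').
    tex = GL.glGenTextures(1)
    GL.glBindTexture(GL.GL_TEXTURE_2D, tex)
    glEGLImageTargetTexture2DOES(GL.GL_TEXTURE_2D, egl_image)
    return tex
```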

Thank you! It’s a neat demo and, thanks to it, I have managed to create an EGLImage. Since this is a single process (the emulator really is just a library that can be integrated into a frontend), I only need to share the image within the frontend, right? So I tried calling glEGLImageTargetTexture2DOES inside GtkGLArea’s render callback, pointing it at a variable holding the image, but it doesn’t seem to work. I saw that you used GTK snapshots to embed the rendering in the demo; I tried to make sense of the relevant code in main.c, but sadly I couldn’t do much with it.

I made a simpler single process demo that is able to render a stream of EGLImages from any context: https://gitlab.gnome.org/jf/egl-image-widget

One thing to note, this still needs some additional code to support GLX.
