I have the code of an existing GTK4 (using gtkmm) application.
I am looking for a way to change the code so that the window, exactly as it would appear on screen, is rendered to an offscreen buffer (preferably an OpenGL texture or buffer).
The idea is to render this texture/buffer in another application.
I looked into offscreen windows, but is there a better way for something that is not a one-off screenshot?
Any examples would be appreciated.
You can use WidgetPaintable for this.
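A rough sketch of what that can look like in gtkmm (assumptions on my part: gtkmm 4, the widget lives in a realized toplevel, and `capture_widget()` is just an illustrative name; the snapshot/GSK calls go through the C API since gtkmm may not wrap them):

```cpp
#include <gtkmm.h>
#include <gtk/gtk.h> // C API for the snapshot/GSK parts

// Illustrative helper: rasterize the current contents of a widget
// into a GdkTexture using the renderer of its toplevel window.
static GdkTexture* capture_widget(Gtk::Widget& widget)
{
  // A WidgetPaintable always reflects the widget's current contents.
  auto paintable = Gtk::WidgetPaintable::create(widget);
  const int w = paintable->get_intrinsic_width();
  const int h = paintable->get_intrinsic_height();
  if (w <= 0 || h <= 0)
    return nullptr; // widget not allocated yet

  // Record the paintable into a snapshot and turn it into a render node.
  GtkSnapshot* snapshot = gtk_snapshot_new();
  gdk_paintable_snapshot(GDK_PAINTABLE(paintable->gobj()),
                         GDK_SNAPSHOT(snapshot), w, h);
  GskRenderNode* node = gtk_snapshot_free_to_node(snapshot);
  if (!node)
    return nullptr;

  // Rasterize the node with the GSK renderer of the widget's toplevel.
  GtkNative* native = gtk_widget_get_native(widget.gobj());
  GskRenderer* renderer = gtk_native_get_renderer(native);
  graphene_rect_t bounds;
  graphene_rect_init(&bounds, 0, 0, w, h);
  GdkTexture* texture = gsk_renderer_render_texture(renderer, node, &bounds);
  gsk_render_node_unref(node);
  return texture; // caller owns the reference
}
```

Driving something like that from a tick callback (`gtk_widget_add_tick_callback()`) or a Glib timeout gives you a continuously updated texture rather than a one-off screenshot, and `gdk_texture_download()` copies the pixels into CPU memory if you need them there. But note the caveat below about the texture staying inside the process.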
If you’re thinking of replacing GtkPlug/GtkSocket, then your approach won’t work: texture objects are local to a process.
Your “another application” must be a GTK4 application, otherwise the widget you’re drawing off screen won’t be usable because it won’t know when to redraw itself.
I’m afraid you’ll have to provide a bit more context as to:
- what you’re really trying to achieve
- what kind of application you’re using
My personal suggestion would be to write a small Wayland compositor embedded in the “main” application, and then have real GTK applications connect to it.
I have 2 applications:
- An application showing a live image from a camera. This application is highly sensitive to latency and therefore cannot go through the compositor. It renders the live image in full-screen mode to bypass composition.
- A GTK UI application which is supposed to be overlaid on top of the live image.
Since I want to avoid using the OS compositor, I thought it would be best to use an image of the UI application (which can be refreshed at 15-30fps) and display it as part of the live image rendering.
I can take an image from memory or a texture and render it from the live image application without affecting latency.
What I am asking is: how do I change the UI application to render its image to memory, or even better to a texture or GPU memory, so I can access it from the live image application?
In addition, how do I maintain user interaction (mainly mouse) with the UI, given that it is no longer displayed on screen?
Those can’t be done without implementing a Wayland compositor, or the equivalent of one; that’s essentially what the Wayland protocol is. I am not sure you would see any significant latency improvement that way, since you’re asking about compositing a window on top of another window. That’s roughly the same thing the OS compositor would be doing.
Thanks for the reply @jfrancis .
I do not actually want to overlay a window on top of another window.
I plan to have a single full screen window displaying an OpenGL texture.
What I want to do is render the GTK window to some off-screen buffer (in memory or on the GPU).
Then, in my live image application, I would take this memory/buffer, convert it to an OpenGL texture, and overlay it on top of the live image.
Overlaying an additional texture doesn’t affect the latency.
Since I lack the knowledge in GTK, I would like to understand the recommended approach for continuously rendering a GTK application to an off-screen location.
And since it is off-screen, how can I continue to get mouse events for that application?
It’s unclear to me how that is different from letting the OS compositor do it. The OS compositor does the same thing: it converts windows to OpenGL textures and then overlays windows on top of other windows using OpenGL.
If you really want to do this yourself, and you have real measurements to demonstrate that there would be a benefit from doing it this way, I would have to agree with @ebassi and say you probably want to implement a Wayland compositor. But I am skeptical of whether you would see any latency improvements from that.
It is different from the OS compositor, as the OS compositor waits for all the windows to render, then does the compositing and sends the result to the screen.
My live image application does not wait for the UI (GTK) image to be ready. It just takes the last available image (using a round list of buffers and some form of synchronization) and overlays it whenever the camera image is ready.
This way there is no need to lose a frame during composition as the live image application doesn’t wait for GTK to have an image but takes the latest available.
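The handoff is roughly the following (just a sketch; the names, the three-slot layout and the plain std::vector pixel storage are illustrative — in my real setup the slots are buffers shared between the two processes, with the UI side as the single producer and the camera renderer as the single consumer):

```cpp
#include <array>
#include <atomic>
#include <cstdint>
#include <vector>

// "Latest frame wins" exchange between the UI (producer) and the
// live-image renderer (consumer). Three slots: one being written,
// one being read, one pending. Neither side ever waits for the other.
struct FrameExchange
{
  struct Frame { std::vector<std::uint8_t> pixels; int width = 0, height = 0; };

  std::array<Frame, 3> slots;

  // Index of the slot holding the most recently published frame;
  // bit 2 (value 4) is set while that frame has not been consumed yet.
  std::atomic<unsigned> pending{0};
  unsigned write_slot = 1; // only touched by the producer
  unsigned read_slot  = 2; // only touched by the consumer

  // Producer (UI side): fill a free slot, then publish it.
  Frame& begin_write() { return slots[write_slot]; }
  void   end_write()
  {
    // Swap our slot with whatever was pending; mark it as fresh.
    unsigned old = pending.exchange(write_slot | 4u, std::memory_order_acq_rel);
    write_slot = old & 3u;
  }

  // Consumer (camera renderer): grab the newest frame if one was
  // published, otherwise keep the frame we already have. Never blocks.
  const Frame& latest()
  {
    unsigned p = pending.load(std::memory_order_acquire);
    if (p & 4u) { // a fresh frame is available
      unsigned old = pending.exchange(read_slot, std::memory_order_acq_rel);
      read_slot = old & 3u;
    }
    return slots[read_slot];
  }
};
```

The point is that latest() never blocks: if the UI has not produced a new frame, the camera side simply reuses the previous one, so the live path never waits on GTK.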
That is already how a Wayland compositor should work, though: it will not wait for windows that don’t render. If you go the route of using an internal Wayland compositor you will have to implement that same logic at the Wayland level anyway. Any other method will be very similar, as this is exactly what Wayland does (share OpenGL buffers one way, receive input the other way).
Can I use an internal Wayland compositor in my application and still use the X compositor (mutter) as the OS compositor?
The internal Wayland compositor will be used to get the GTK UI output, and I will implement the composition logic, whilst the X compositor will be used for the output of my live image application, except for those windows that run full screen and bypass OS composition.