Improve performance of gdk_texture_download

I am using gdk_texture_download to transfer an image of the UI (obtained from a snapshot) from one process to another.
Currently the snapshot is for a window size of 4K (3840x2160). On a high-end PC (NVIDIA Quadro RTX 5000) the download operation takes around 100 ms. This is very slow compared to similar operations I do (a CUDA upload of an image of the same size takes ~4 ms).
Most of the texture is empty, so if compression were possible it would produce a very small image. However, I cannot downscale the snapshot, as that degrades quality (mainly for text).
Is there any way to speed the download of the texture to memory?
Any way to access the texture from another process while it remains on the GPU?
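
For context, this is roughly what the download side looks like today (a simplified sketch of my code; the helper name is just for illustration):

```c
/* Sketch of what I do today: render the snapshot node to a GdkTexture,
 * then pull the pixels back to the CPU with gdk_texture_download(). */
#include <gdk/gdk.h>

static guchar *
download_texture (GdkTexture *texture)
{
  gsize stride = (gsize) gdk_texture_get_width (texture) * 4;
  guchar *data = g_malloc (stride * gdk_texture_get_height (texture));

  /* This call is the ~100 ms hotspot at 3840x2160. */
  gdk_texture_download (texture, data, stride);
  return data;
}
```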

Yes, but you must transfer it as a dmabuf using a pair of EGL extensions:

EGL_MESA_image_dma_buf_export

EGL_EXT_image_dma_buf_import

These are somewhat complex to use; there is an example of how to do it in gstreamer: gsteglimage.c

Most of the logic for sending a dmabuf stream is already implemented by gstreamer and pipewire, so using those two may be the easiest option.
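
For reference, the export side with those extensions looks roughly like this (an untested sketch: it assumes you already have an EGL display/context and the GL texture id, only handles a single-plane format, and omits error checking):

```c
/* Sketch: export an existing GL texture as a dmabuf via
 * EGL_MESA_image_dma_buf_export.  Assumes a current EGL context. */
#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>

static int
export_gl_texture_as_dmabuf (EGLDisplay dpy, EGLContext ctx, GLuint tex_id,
                             int *out_fourcc, EGLint *out_stride, EGLint *out_offset)
{
  PFNEGLCREATEIMAGEKHRPROC create_image =
      (PFNEGLCREATEIMAGEKHRPROC) eglGetProcAddress ("eglCreateImageKHR");
  PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC query_dmabuf =
      (PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC) eglGetProcAddress ("eglExportDMABUFImageQueryMESA");
  PFNEGLEXPORTDMABUFIMAGEMESAPROC export_dmabuf =
      (PFNEGLEXPORTDMABUFIMAGEMESAPROC) eglGetProcAddress ("eglExportDMABUFImageMESA");

  /* Wrap the GL texture in an EGLImage. */
  EGLImageKHR image = create_image (dpy, ctx, EGL_GL_TEXTURE_2D_KHR,
                                    (EGLClientBuffer) (uintptr_t) tex_id, NULL);

  int num_planes = 0;
  EGLuint64KHR modifiers[4];
  query_dmabuf (dpy, image, out_fourcc, &num_planes, modifiers);

  /* Single-plane RGBA case; multi-planar formats need arrays here. */
  int fd = -1;
  export_dmabuf (dpy, image, &fd, out_stride, out_offset);

  /* fd can now be passed to another process (e.g. over a unix socket)
   * and re-imported there with EGL_EXT_image_dma_buf_import. */
  return fd;
}
```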

Thanks @jfrancis
I am missing the part which connects the GdkTexture to the EGLImage.
How do I create an EGLImage based on an existing GdkTexture?

Alternatively, I could use a CUDA-mapped GL texture to share the GPU memory.
Is that possible, and how do I convert a GdkTexture to a GL texture?

Sorry, I forgot to mention: this only works for textures you created from an OpenGL texture id. GDK has no way to get the texture id back out of a GLTexture. Maybe it makes sense to file a feature request for an API to get it.
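
If you do create the GL texture yourself, wrapping it is the easy direction, e.g. something like this (a minimal sketch):

```c
/* Sketch: if you created the GL texture yourself, wrap it in a GdkTexture
 * with gdk_gl_texture_new() and keep the id around for EGL/CUDA use. */
#include <gtk/gtk.h>

static GdkTexture *
wrap_gl_texture (GdkGLContext *context, guint tex_id, int width, int height)
{
  /* The destroy notify runs when GDK is done with the texture,
   * so the GL texture can be deleted there. */
  return gdk_gl_texture_new (context, tex_id, width, height,
                             NULL /* destroy */, NULL /* user data */);
}
```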

A workaround could be to put the texture in a texture node and then draw it to a framebuffer texture that you created. This still technically does a copy, but it will probably be faster than a CPU copy.

Is there an example code or specific API I should use to “draw it to a framebuffer texture that you created”?

The closest thing I can think of would be the source code of GtkGLArea.

Hi @jfrancis , I’m also interested in this subject.

I understand how to create a GdkTexture from a GLTexture as GLArea does.
The problem is that I don’t know how to make it represent standard GTK widgets (e.g. GtkLabel, GtkButton, etc.).
My workflow so far:

  1. I have all my widgets contained in a GtkBox.
  2. I create a snapshot of my GtkBox using gtk_widget_snapshot_child().
  3. I get a GskRenderNode from the snapshot using gtk_snapshot_free_to_node().
  4. I get a GskRenderer using the box’s native and gtk_native_get_renderer().
  5. I get a GdkTexture from the GskRenderer and the GskRenderNode using gsk_renderer_render_texture().

So of course I have no access to a GLTexture.
My question is, at which point could I have created my own GLTexture and have GTK render into it (if at all)?
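
In code, the workflow above looks roughly like this (simplified, no error handling; the helper name is just illustrative):

```c
/* Simplified version of the workflow above: snapshot the GtkBox and
 * render the resulting node into a GdkTexture. */
#include <gtk/gtk.h>

static GdkTexture *
snapshot_box_to_texture (GtkWidget *box)
{
  graphene_rect_t bounds;
  GtkSnapshot *snapshot = gtk_snapshot_new ();

  /* Steps 1-3: snapshot the box and turn the snapshot into a render node. */
  gtk_widget_snapshot_child (gtk_widget_get_parent (box), box, snapshot);
  GskRenderNode *node = gtk_snapshot_free_to_node (snapshot);

  /* Steps 4-5: render the node with the renderer of the box's native surface. */
  GskRenderer *renderer = gtk_native_get_renderer (gtk_widget_get_native (box));
  gtk_widget_compute_bounds (box, gtk_widget_get_parent (box), &bounds);
  GdkTexture *texture = gsk_renderer_render_texture (renderer, node, &bounds);

  gsk_render_node_unref (node);
  return texture;
}
```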

I am looking at this now, and I got it wrong before: it looks like there is actually no easy way to draw into a framebuffer using a GskRenderer.

So you may have to file a feature request, maybe for making the gdk_gl_texture_get_id function public. That would not always work, though, because gsk_renderer_render_texture does not always return a GdkGLTexture. Another idea could be to ask for a new function that turns any GdkTexture into an EGLImage.

Also note that, last I heard, the extensions I mentioned above do not work with the proprietary NVIDIA driver: EGL_EXT_image_dma_buf_import broken - EGL_BAD_ALLOC with tons of free RAM - OpenGL - NVIDIA Developer Forums

For those drivers I believe you will have to fall back to a CPU memfd approach, or use an NVIDIA-specific extension which I believe is called EGL Streams.
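
The memfd fallback would look roughly like this (a sketch only; the fd still has to be passed to the other process over a unix socket, and the GPU-to-CPU download remains the slow part):

```c
/* Sketch of the CPU fallback: download the texture into an anonymous
 * memfd so the receiving process can mmap the same pages. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <gdk/gdk.h>

static int
download_texture_to_memfd (GdkTexture *texture)
{
  gsize stride = (gsize) gdk_texture_get_width (texture) * 4;
  gsize size = stride * gdk_texture_get_height (texture);

  int fd = memfd_create ("ui-snapshot", MFD_CLOEXEC);
  ftruncate (fd, size);

  guchar *map = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  gdk_texture_download (texture, map, stride);   /* still the slow GPU->CPU copy */
  munmap (map, size);

  /* Send fd via SCM_RIGHTS; the peer mmaps it read-only. */
  return fd;
}
```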

I believe there is a way to do it on NVIDIA by mapping the texture to a CUDA resource and then using the CUDA IPC functions to get a device pointer which can be shared.
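
Something along these lines (a rough sketch; as far as I understand, CUDA IPC handles only work on linear cudaMalloc allocations, so this still does one on-GPU copy):

```c
/* Sketch: register the GL texture with CUDA, copy it into a linear
 * device buffer, and share that buffer with another process via CUDA IPC. */
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

static cudaIpcMemHandle_t
share_gl_texture (unsigned int gl_tex_id, int width, int height)
{
  cudaGraphicsResource_t res;
  cudaArray_t array;
  void *dev_buf;
  size_t pitch = (size_t) width * 4;
  cudaIpcMemHandle_t handle;

  cudaGraphicsGLRegisterImage (&res, gl_tex_id, GL_TEXTURE_2D,
                               cudaGraphicsRegisterFlagsReadOnly);
  cudaGraphicsMapResources (1, &res, 0);
  cudaGraphicsSubResourceGetMappedArray (&array, res, 0, 0);

  /* Copy the texture into a linear allocation that IPC can export. */
  cudaMalloc (&dev_buf, pitch * height);
  cudaMemcpy2DFromArray (dev_buf, pitch, array, 0, 0,
                         pitch, height, cudaMemcpyDeviceToDevice);
  cudaGraphicsUnmapResources (1, &res, 0);

  /* The handle can be sent to another process, which opens it with
   * cudaIpcOpenMemHandle() and reads the buffer without a CPU round trip. */
  cudaIpcGetMemHandle (&handle, dev_buf);
  return handle;
}
```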

What controls the type of texture gsk_renderer_render_texture creates?
Is this texture being reused when calling this function multiple times?

I don’t know about that; anything that uses a memory-mapped pointer is probably going through CPU memory. This was the NVIDIA extension I heard about: EGL_KHR_stream_cross_process_fd

But I have not actually heard of anything that supports this extension. Gstreamer and pipewire, for example, do not use it and only support dmabuf.

You can see here: gskglrenderer.c

It seems that currently, a new texture is always made, and the memory texture is only used if the texture is too large to fit in GPU memory.
