GtkGLArea on macOS appears to be reading pixels back from the GPU

I’m seeing a performance issue when I build and run the Vice C64 emulator with a Gtk3 UI on macOS. It might be relevant that I am using Gtk3 as supplied by Homebrew (`brew install gtk+3`).

The core issue is that gtk_gl_area_draw calls gdk_cairo_draw_from_gl, which spends a lot of time inside glReadPixels, and then cairo_paint, which in turn spends a lot of time inside CGContextDrawImage.

So it appears that, having rendered the C64 screen texture to a quad on the GPU, the result is then read back into system memory and re-rendered via CoreGraphics.

I see the same issue when running the following GtkGLArea example:
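The example referenced above is not reproduced here; a minimal GtkGLArea program along the same lines (a sketch, not the exact example from the original post, assuming GTK 3 and libepoxy are installed) looks like this:

```c
/* Minimal GtkGLArea sketch: clear the area to a solid colour each frame.
 * Build with something like:
 *   cc demo.c $(pkg-config --cflags --libs gtk+-3.0 epoxy) -o demo
 */
#include <gtk/gtk.h>
#include <epoxy/gl.h>

/* "render" signal handler: called whenever the area needs redrawing. */
static gboolean
on_render (GtkGLArea *area, GdkGLContext *context, gpointer user_data)
{
    glClearColor (0.2f, 0.3f, 0.3f, 1.0f);
    glClear (GL_COLOR_BUFFER_BIT);
    return TRUE;  /* TRUE = we handled the draw */
}

static void
on_activate (GtkApplication *app, gpointer user_data)
{
    GtkWidget *window = gtk_application_window_new (app);
    GtkWidget *gl_area = gtk_gl_area_new ();

    g_signal_connect (gl_area, "render", G_CALLBACK (on_render), NULL);
    gtk_container_add (GTK_CONTAINER (window), gl_area);
    gtk_widget_show_all (window);
}

int
main (int argc, char *argv[])
{
    GtkApplication *app = gtk_application_new ("org.example.GLAreaDemo",
                                               G_APPLICATION_FLAGS_NONE);
    g_signal_connect (app, "activate", G_CALLBACK (on_activate), NULL);
    int status = g_application_run (G_APPLICATION (app), argc, argv);
    g_object_unref (app);
    return status;
}
```

Profiling even a trivial render callback like this on macOS shows the same glReadPixels / CGContextDrawImage path in gdk_cairo_draw_from_gl.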

Is this expected behaviour for GtkGLArea on macOS?

I’m also seeing messages like:

Gdk-WARNING **: 19:57:35.505: GL implementation doesn't support any form of non-power-of-two textures

which makes me wonder whether something is not quite right with the Gtk+3 / GDK / Cairo installation generally.

The macOS support for OpenGL is still very unstable and experimental; it was only recently added, and needs more debugging from somebody who is familiar with both GL and Darwin/CoreGraphics.

Thanks for the response. If someone not familiar with Gtk wanted to spend some time on this, where should they start looking?

One thing I did notice is that under macOS I had to add a call to glFlush() or glFinish() at the end of the render callback, which isn’t needed for the same code running under Linux or Windows. With glFlush() the screen doesn’t update reliably; glFinish() is required to display the output correctly. If neither call is added, nothing is displayed at all.
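For reference, the workaround described above amounts to ending the render callback like this (a sketch; the callback name and drawing code are illustrative):

```c
#include <gtk/gtk.h>
#include <epoxy/gl.h>

static gboolean
on_render (GtkGLArea *area, GdkGLContext *context, gpointer user_data)
{
    glClearColor (0.0f, 0.0f, 0.0f, 1.0f);
    glClear (GL_COLOR_BUFFER_BIT);
    /* ... draw the emulator screen texture to a quad here ... */

    /* macOS-specific workaround: without one of these calls nothing is
     * displayed. glFlush() alone still updates erratically, so glFinish()
     * is needed for correct output. Neither is required on Linux/Windows. */
    glFinish ();

    return TRUE;
}
```

That glFinish() is needed at all suggests a synchronization problem between the GL context and the CoreGraphics compositing path in the macOS GDK backend.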

The first place to look into is the macOS implementation of GdkGLContext.

The base class is available here; this is the X11 implementation as a comparison.

Excellent, thank you.
