Rather off topic - Looking to get a better understanding of the low(ish) level of creating UIs

I’ve got a reasonable amount of coding experience, and I am looking to get a better understanding of the low(ish) level of creating UIs. I appreciate this is vastly off topic for this forum, but I feel this community will have exactly the insights I’m looking for, so any thoughts / comments / suggestions would be most appreciated. If anyone has a more appropriate forum to suggest posting on, that would be great too.

My Google search results get flooded with “beginner’s guide on how to use existing framework X”, which is not what I’m looking for; hence switching to human-aided search.

These are the types of questions I would like to get a better working mental model of:

At some point program Q is deciding that pixel x,y should be set to RGB value something. Is it reasonable to say at that point - the data is basically doing the same as what a JPG does? (not exactly the same encoding; but from the perspective that is only setting a pixel to a value)

Any pretty diagrams giving an overview of how different frameworks interact might be helpful. Something like: on Linux, using C++, you have a, b, c.

Maybe there are some great tutorials like “How to build your own Java Swing UI framework from scratch in 2 months”. (I have stumbled across a few “build your own X” guides, which are really helpful for understanding some of the important points in other domains.)

I’ve started looking a little bit at OpenGL, and again, anything that helps me build the correct mental model of the “architecture” type stuff going on there would help. I’m working through some tutorials, but there is still handwaving, and I would at least like to know what it’s covering. For example: is there actually a step after doing a batch of logic on the GPU to get it to the screen, or are GPUs and OpenGL “hard coded” to send things to the screen straight away?

When I’m just using my OS to open Chrome to read Reddit, is there anything going on with my GPU? (Assuming there is no WebGL running on the page.) If the answer is “pretty much no”, then why is my HDMI cable plugged into my graphics card? That kind of feels like a stupid question, but then, how does my graphics card work before I install its drivers when I’m re-installing an OS? It seems like there must be quite a lot of industry standards involved, so it’s likely somewhat non-trivial.

Thanks all

All you are looking for is Cairo graphics, which you can use to make custom widgets, UIs, and any fancy stuff.

We do have a lot of standards. If your aim is just to draw elements on screen, then you can start simple. You could start by reading about the evolution of GUIs and related topics. We have abstractions, protocols, and layers which make it easy for graphical toolkits to work without worrying about hardware differences and the like. The deeper you dig, the more “a lot” it becomes. :wink:

This is an interesting discussion, but I’m not sure “Language bindings”, or possibly this Discourse instance, is the best place to discuss it.

At some point program Q is deciding that pixel x,y should be set to RGB value something. Is it reasonable to say at that point - the data is basically doing the same as what a JPG does? (not exactly the same encoding; but from the perspective that is only setting a pixel to a value)

Basically yes, except really not JPEG: a JPEG file stores compressed data, while at that point you just have raw pixel values in memory.

This is a wonderful explanation of how Firefox/Servo translate web concepts to the GPU: https://hacks.mozilla.org/2017/10/the-whole-web-at-maximum-fps-how-webrender-gets-rid-of-jank/

WebKit is somewhat different, but for example https://www.slideshare.net/ariyahidayat/understanding-webkit-rendering and https://www.slideshare.net/joone/hardware-acceleration-in-webkit will give you an idea of how it works. Both Firefox and WebKit ultimately work by translating “web stuff” into “layers that can be composited by the GPU”, but each takes a different approach to this.

As for GNOME and GTK… GTK3 comes from the days of X11 and Cairo, where the windowing system sends client applications an expose event (“redraw this window”), and GTK uses a callback into the program’s code to ask it to draw itself to a Cairo context. This context can be backed by an X11 surface, or maybe a Cairo image surface (RGBA buffer in RAM). Under Wayland, I think one may get a Cairo context backed by a GL surface.

In GTK4, things are different, as it is moving to a model friendlier to GPUs. GTK4 expects the calling code to be able to provide a Paintable for things that can draw themselves. A Paintable can be asked to provide a Snapshot, which is basically the “draw yourself” call. Your code creates a Snapshot by building a graph of render nodes. See https://blog.gtk.org/2020/04/24/custom-widgets-in-gtk-4-drawing/ for an introduction.


Thanks for those pointers & thoughts; very helpful & exactly the type of info I was looking for to help me get going.

Is anyone aware of a more appropriate forum to post this kind of open-ended discussion on?

Thanks

Here is fine! Especially if questions like yours help us know what newcomers to the platform are looking for in terms of documentation :slight_smile:


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.