Thanks again for the answers in the initial thread, which closed just 6 hours ago.
I have two follow-up questions on the same topic, if I may, after doing some research in the meantime:
I assume that some of the integration hooks an accessibility service would need, as I currently see it, would actually have to live inside the Mutter codebase (the compositor and window-management core that GNOME Shell is built on, which implements the Wayland compositor side, as I understand it). Examples: knowing the active window and querying its on-screen geometry, and finding the topmost window under the mouse pointer.
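To make the kind of hook I mean concrete, here is a rough sketch against Mutter's C API, based only on my reading of the headers so far. The function name `report_focus_geometry` is my own invention, and I may well be misreading how `MetaDisplay` is meant to be used from inside the tree; please correct me if so:

```c
/* Hypothetical in-tree hook: report the focused window's geometry.
 * The accessors below are the ones I found in meta/display.h and
 * meta/window.h; the function itself is made up for illustration. */
#include <meta/display.h>
#include <meta/window.h>

static void
report_focus_geometry (MetaDisplay *display)
{
  /* The window that currently has keyboard focus, if any. */
  MetaWindow *window = meta_display_get_focus_window (display);
  if (window == NULL)
    return;

  /* The window's frame rectangle in logical screen coordinates. */
  MetaRectangle rect;
  meta_window_get_frame_rect (window, &rect);

  g_message ("focused: '%s' at %d,%d (%dx%d)",
             meta_window_get_title (window),
             rect.x, rect.y, rect.width, rect.height);
}
```

I did not find an obvious public accessor for "topmost window under the pointer", which is partly why I suspect this needs compositor-side support.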
Is it the case that when an input device generates a key or pointer event, the kernel delivers it to GNOME Shell, which then forwards it to the currently focused ("active") window for keyboard events, or to the topmost window under the pointer for mouse clicks? How does this work for windows that do not speak the Wayland client protocol (X11 applications running under XWayland, say)? Could you clarify the input-handling architecture, please?
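To illustrate my current mental model of the client half of this: a Wayland client cannot read input globally; it only registers listeners on a `wl_keyboard`, and the compositor decides which surface is focused, announcing that decision via the `enter`/`leave` callbacks before delivering any key events. A minimal sketch of that client side (handler names are mine; the listener struct is the standard `wayland-client` one):

```c
#include <wayland-client.h>

static void
on_enter (void *data, struct wl_keyboard *kbd, uint32_t serial,
          struct wl_surface *surface, struct wl_array *keys)
{
  /* The compositor has granted this surface keyboard focus. */
}

static void
on_key (void *data, struct wl_keyboard *kbd, uint32_t serial,
        uint32_t time, uint32_t key, uint32_t state)
{
  /* Key events arrive only while our surface holds focus. */
}

/* Real code must also fill in keymap, leave, modifiers and
 * repeat_info; they are omitted here for brevity. */
static const struct wl_keyboard_listener keyboard_listener = {
  .enter = on_enter,
  .key   = on_key,
};
```

What I am unsure about, and am asking here, is the compositor half that makes the routing decision, and how non-Wayland windows fit into it.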
Thanks in advance. I will post future questions in a separate thread once I have gone further into prototyping my technical design, building on the really helpful and extensive documentation and your earlier input. I took a tour of the API but have so far remained unsure about the points above.