Guidance for working on an accessibility service | Part II

Thanks again for the answers in the initial thread, which closed just six hours ago.
I have two follow-up questions on the same topic, if I may, based on the research I have done in the meantime:

  1. I assume that some of the conceivable integration hooks for the accessibility service I envision would actually need to happen inside the Mutter codebase (GNOME Shell's compositor, which implements the Wayland protocol, as I understand it): things like knowing the active window and getting its display geometry, and finding the topmost window under the mouse pointer.

    The newly reworked docs do seem to mention that overriding Mutter behaviour from an extension is possible; what techniques are available for accomplishing such overrides robustly, if at all? Or can that active-window information be obtained by simply querying Mutter from the JavaScript portion of an extension? (Two sketches of what I have in mind follow below this list.)

  2. Is it the case that when an input device generates a key or pointer event, the event travels from the kernel to GNOME Shell (through libinput, as I understand), and the Shell then delivers it to the “active” window for keyboard events, or to the topmost window under the pointer for mouse clicks? How does this process work for windows that are not native Wayland clients, such as X11 applications running under XWayland? Could you clarify the input-handling architecture? (A small probe sketch follows after this list.)
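For the override half of question 1, here is a minimal sketch of what I currently imagine, assuming the `InjectionManager` helper from the extensions docs is the sanctioned technique. `_someShellMethod` and the target prototype are hypothetical placeholders, and I realize this only reaches the Shell's JavaScript layer, not Mutter's C code, which is part of what I am asking about:

```js
import * as Main from 'resource:///org/gnome/shell/ui/main.js';
import {Extension, InjectionManager} from 'resource:///org/gnome/shell/extensions/extension.js';

export default class OverrideSketchExtension extends Extension {
    enable() {
        this._injectionManager = new InjectionManager();
        // Hypothetical target: wrap some method on a Shell class.
        // The original implementation is preserved and can be chained.
        this._injectionManager.overrideMethod(
            Main.panel.constructor.prototype, '_someShellMethod',
            originalMethod => function (...args) {
                // ...custom accessibility logic before/after...
                return originalMethod.call(this, ...args);
            });
    }

    disable() {
        // Restores every patched method.
        this._injectionManager.clear();
        this._injectionManager = null;
    }
}
```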
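For the querying half, this is the kind of read-only lookup I have in mind, based on what I found in the Meta bindings (`logPointerAndFocus` is my own name, and I am not sure the window-under-pointer part is the idiomatic approach):

```js
// Runs inside the GNOME Shell process, where `global` is available.
function logPointerAndFocus() {
    // The focused Meta.Window, or null if nothing has focus.
    const focus = global.display.focus_window;
    if (focus) {
        const rect = focus.get_frame_rect();
        console.log(`active: "${focus.get_title()}" at ` +
            `${rect.x},${rect.y} (${rect.width}x${rect.height})`);
    }

    // global.get_window_actors() is ordered bottom-to-top, so the
    // last matching entry is the topmost window under the pointer.
    const [px, py] = global.get_pointer();
    const topmost = global.get_window_actors()
        .map(actor => actor.meta_window)
        .filter(win => {
            const r = win.get_frame_rect();
            return !win.minimized &&
                px >= r.x && px < r.x + r.width &&
                py >= r.y && py < r.y + r.height;
        })
        .pop();
    if (topmost)
        console.log(`under pointer: "${topmost.get_title()}"`);
}
```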
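And for question 2, to check my mental model I was planning to probe the event flow with something like the following, on the assumption that input events surface on the Clutter stage before the compositor dispatches them to client windows:

```js
import Clutter from 'gi://Clutter';
import {Extension} from 'resource:///org/gnome/shell/extensions/extension.js';

export default class InputProbeExtension extends Extension {
    enable() {
        // 'captured-event' fires during the capture phase, so the
        // handler sees events as the Shell processes them.
        this._handlerId = global.stage.connect('captured-event',
            (_actor, event) => {
                const type = event.type();
                if (type === Clutter.EventType.KEY_PRESS) {
                    console.log(`key press: keysym ${event.get_key_symbol()}`);
                } else if (type === Clutter.EventType.BUTTON_PRESS) {
                    const [x, y] = event.get_coords();
                    console.log(`button press at ${x}, ${y}`);
                }
                // Never swallow events; this only observes.
                return Clutter.EVENT_PROPAGATE;
            });
    }

    disable() {
        global.stage.disconnect(this._handlerId);
        this._handlerId = null;
    }
}
```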


Thanks in advance. I will raise future questions in a separate thread after getting further into prototyping my technical design, guided by the really helpful and extensive documentation suite and your prior input. I took a tour of the API and was still unsure about the points above.
