Feature Request / Question: Direct Speech API for Orca (similar to NVDA Controller Client)

Hello everyone,

I hope you are doing well.

I would like to ask if there is currently a way (or any plans) to provide an API in Orca similar to what NVDA offers on Windows through nvdaControllerClient.dll.

In NVDA, developers can send text directly to the screen reader to be spoken immediately, regardless of what is currently focused on screen. NVDA temporarily pauses normal screen reading and speaks the provided text instead. This is extremely useful for accessibility-focused applications.


My use case:

I am currently developing an educational application designed for blind users to learn typing skills.

In this type of application, I need to send very specific instructional messages to the screen reader, such as:

  • “Press the letter A using your left pinky”
  • “Press the letter J using your right index finger”
  • “Press the letter Q above the left pinky”

These instructions are not necessarily tied to UI elements or focus changes, so relying on standard accessibility events is not sufficient.


What I am looking for:

Ideally, I would like:

  • A way to send arbitrary text directly to Orca’s speech output
  • The ability to interrupt current speech
  • Consistent behavior regardless of the currently focused UI element

Questions:

  1. Is there an existing API, D-Bus interface, or method in Orca that allows this?
  2. If not, what would be the recommended approach to achieve similar behavior?
  3. Are there any plans to support something like this in the future?

Additional notes:

I am aware of tools like spd-say (Speech Dispatcher), but they do not integrate tightly with Orca’s speech management: they cannot interrupt or be prioritized against Orca’s own speech, and they do not respect Orca’s voice settings.
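For anyone hitting the same limitation, here is a minimal sketch of the spd-say workaround (flag names are from the spd-say man page); it talks to Speech Dispatcher directly, which is exactly why it bypasses Orca's queue and settings:

```shell
# Hypothetical helper: cancel whatever Speech Dispatcher is currently
# saying, then speak the new instruction. This bypasses Orca entirely,
# so Orca's voice settings and message prioritization do not apply.
speak_instruction() {
  spd-say --cancel          # cancel queued/active Speech Dispatcher messages
  spd-say --wait "$1"       # speak the text, return when finished
}
```

Usage would be something like: speak_instruction "Press the letter A using your left pinky". It works, but the speech comes out in Speech Dispatcher's default voice, not Orca's, and the two can talk over each other.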

What I am looking for is deeper integration with Orca itself, similar to how NVDA exposes control through its controller client.


Thank you very much for your time and for the amazing work on Orca.

Hi,

There is an announce feature for that.
Some examples are available here: README-APPLICATION-DEVELOPERS.md · main · GNOME / orca · GitLab

Thanks a lot, this is extremely helpful!

I see that the “notification” signal (with politeness levels) seems to be the modern approach, especially with assertive priority for interrupting speech.

Also, the new PresentMessage D-Bus API in Orca v49 looks very promising and much closer to what NVDA provides.

I will experiment with both approaches.

Thanks again for pointing me to this — this is exactly what I needed.


Hi again,

I wanted to share some follow-up findings after testing the newer D-Bus API (PresentMessage) mentioned in the documentation.

Summary of my findings

I am currently testing on:

  • Orca 49.1
  • Ubuntu 25.10 (GNOME, Wayland session)

While Orca is running and functioning correctly as a screen reader, I am unable to access the PresentMessage D-Bus interface.

What I tested

Calling the method directly:

gdbus call --session \
  --dest org.gnome.Orca \
  --object-path /org/gnome/Orca \
  --method org.gnome.Orca.PresentMessage \
  "test"

Results in:

GDBus.Error: org.freedesktop.DBus.Error.ServiceUnknown:
The name org.gnome.Orca was not provided by any .service files

I also verified the available D-Bus names, and I can see:

  • org.gnome.Orca.Service
  • org.gnome.Orca.KeyboardMonitor

But org.gnome.Orca is not present at all.
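For reference, the name check above can be reproduced with busctl (part of systemd); this is just one way to list the session-bus names:

```shell
# List well-known names on the session bus and keep only Orca's.
list_orca_names() {
  busctl --user list --no-pager | grep -i orca
}
```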

Interpretation

From this, it appears that:

  • Orca is running normally
  • But the org.gnome.Orca D-Bus name (used for PresentMessage) is not being registered

This suggests that the new D-Bus API may not be available in the Ubuntu build of Orca, may require additional configuration or build options, or may be exposed under a different bus name than the one used in the documentation example.
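Since org.gnome.Orca.Service is present on the bus, one obvious next step is to introspect that name rather than org.gnome.Orca. I do not know the real object path, so the sketch below defaults to starting at / with --recurse, which walks the whole exported tree:

```shell
# Sketch: walk the object tree under the bus name that actually exists,
# looking for PresentMessage or an equivalent method.
introspect_orca_service() {
  gdbus introspect --session --recurse \
    --dest org.gnome.Orca.Service \
    --object-path "${1:-/}"
}
```

Running introspect_orca_service (or passing a narrower path once one is known) should show every exported interface and method, which would confirm whether PresentMessage, or something equivalent, exists under this name.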

Questions

  • Is the PresentMessage D-Bus interface expected to be available in all Orca 49+ builds?
  • Does it require specific build flags or runtime conditions to be enabled?
  • Is this interface currently considered experimental or optional?

Context

My goal is still to achieve something similar to NVDA’s controller client:

  • Send arbitrary text to be spoken
  • Interrupt current speech
  • Work independently of focus or UI events

The announcement/notification APIs are helpful, but they depend on accessible objects and UI context, which is not always suitable for my use case (e.g. typing trainer instructions).

Suggestion

If this API is intended to be available, it might be helpful to:

  • Clarify in the documentation whether it is guaranteed to be present in packaged builds
  • Or provide guidance on how to enable it when missing

Thanks again for your help and for all the work on Orca — this feature would be extremely valuable for accessibility-focused applications.

Best regards,
Ahmed Bakr (MesterPerfect)