[Question] Sending audio from a different app to gnome-calls

Hello!
I’m experimenting with using an app’s output audio as a source for call audio, instead of the mic.

The device I’m using is a PinephonePro running PMOS (postmarketOS) Edge.

I have been able to send audio from an app (so far I’ve tested with Amberol) to Sound Recorder no problem, with two different methods:

  1. Destroy the existing links for Amberol and Sound Recorder, and then create links between them:
# Delete existing Sound Recorder links
pw-link -l -I | grep "Sound Recorder" | grep "|->" | awk '{print $1}' | xargs -n 1 pw-link -d
# Delete existing Amberol links
pw-link -l -I | grep "Amberol" | grep "|<-" | awk '{print $1}' | xargs -n 1 pw-link -d
# Link Amberol's FL to Sound Recorder's FL
pw-link -L Amberol:output_FL "Sound Recorder:input_FL"
# Link Amberol's FR to Sound Recorder's FR
pw-link -L Amberol:output_FR "Sound Recorder:input_FR"
  2. Create a virtual mic (from here):

pw-loopback --capture-props='node.target=Amberol' --playback-props='media.class=Audio/Source node.name=virtmic node.description="VirtualMic"'

This method is pretty neat, as I can then just select “VirtualMic” as the input device in settings, open up Sound Recorder, and it will record the music instead of the mic.
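To double-check the routing after either method, I list the links and sources (assuming the node names above; they can differ depending on how the apps register themselves):

# Method 1: Amberol's output ports should now be linked to Sound Recorder's inputs
pw-link -l | grep -A 2 "Amberol"
# Method 2: the loopback's "virtmic" source should show up alongside the real mic
pactl list short sources | grep virtmic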

The problem I’m having is that, while the methods above work for Sound Recorder (and other apps), they don’t work for gnome-calls.

When a phone call is started, the input will get switched to the real mic.

If I select “Virtual Mic” as the input during the call, it doesn’t affect the call, which continues to use the real mic.

I also don’t understand why gnome-calls doesn’t show up in qpwgraph like other apps (see screenshot below).

Is it possible to make gnome-calls receive audio from another app?

If so, how?

Let me know if you have any questions! TIA.

[Screenshot of qpwgraph] See how Amberol and Sound Recorder are shown? gnome-calls never shows up in qpwgraph.


Most likely because, while /usr/bin/gnome-calls on Fedora is linked with EVERYTHING (including tons of both Wayland and Xorg libraries), it isn’t linked with a single audio library. No codecs, no stream interfaces, nothin’.

What it IS linked with is the separate libcallaudio-0.1.so.0 library, which is part of the separate callaudiod project. Its dependencies are (…wait for it…):

  • libasound2-dev
  • libglib2.0-dev
  • libpulse-dev

…libasound2 is better known as the ALSA client library, and if it’s directly manipulating ALSA devices then it’s definitely not on PipeWire’s radar. (Also, it doesn’t sound like it actually streams the audio at all, so much as it just… configures the backend device routing.)

So if it’s connecting, say, your microphone’s input to your speaker output… directly in ALSA… then there’s nothing to (hi)jack in a patchbay. And definitely nothing that would make gnome-calls show up, since all of that’s being handled by a completely separate callaudiod process.
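(If you want to verify that on your own postmarketOS image rather than take my Fedora reading at face value, ldd shows it directly; adjust the library path for wherever libcallaudio lives on your distro:)

$ ldd /usr/bin/gnome-calls | grep -E 'asound|pulse|pipewire|callaudio'
$ ldd /usr/lib64/libcallaudio-0.1.so.0 | grep -E 'asound|pulse'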

(callaudiod has a companion CLI tool, BTW:)

$ callaudiocli --help
Usage:
  callaudiocli [OPTION…] - A helper tool for callaudiod

Help Options:
  -h, --help               Show help options

Application Options:
  -m, --select-mode        Select mode
  -s, --enable-speaker     Enable speaker
  -u, --mute-mic           Mute microphone
  -S, --status             Print status

$ callaudiocli -S
Selected mode: CALL_AUDIO_MODE_DEFAULT
Speaker enabled: CALL_AUDIO_SPEAKER_OFF
Mic muted: CALL_AUDIO_MIC_OFF

Thanks for the reply!
I’ll start looking into doing it with ALSA.

I tried doing the stuff mentioned on this page, but I wasn’t able to make any progress. I’m still learning about how the audio system works as far as PipeWire and ALSA coexisting on a system. If anyone has any info, please let me know!

Well, it turns out I steered you wrong there: while callaudiod does indeed link with the ALSA library, that’s only because PulseAudio devices are mapped to ALSA hardware devices and therefore carry references to ALSA device information that callaudiod wants access to. It does fully support PipeWire (well, in its PulseAudio-emulating form, since it still uses the PulseAudio library to manage PipeWire devices).
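(A quick way to see the PulseAudio-emulation layer that callaudiod is actually talking to, if pipewire-pulse is running:)

$ pactl info | grep 'Server Name'
# should report something like "PulseAudio (on PipeWire …)" when pipewire-pulse is in use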

The real issue is that callaudiod is extremely particular about what devices it will interact with. There are numerous reports of it not finding devices, or automagically selecting the wrong device, or configuring it in the wrong way.

But there’s also a reason it’s so particular, which is that (as I surmised earlier, but have now confirmed by reading more of the code) call audio on a device like a Pinephone isn’t routed through Linux at all. It’s wired directly into the hardware.

When a call begins, callaudiod ensures (among other things) that the person on the other end of the call can hear your voice. It does this by unmuting the microphone on the default hardware device.

That’s it. That’s all that’s done, and all that NEEDS to be done, because the Pinephone’s “telephone” hardware is physically wired to the microphone built into the phone. Nothing is streamed or connected in PipeWire because, just like in a non-smart telephone, the microphone is simply one of the physical hardware components of the phone itself.
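(If you’re curious what kind of ALSA mixer switches it’s flipping, amixer will list them; card index 0 is just an assumption here, substitute whatever index your PinephonePro’s codec shows up as:)

$ amixer -c 0 scontrols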

The difference in a smartphone is that the microphone is also accessible from within Linux. But that doesn’t mean that call audio is routed through Linux. It’s not. (AFAICT, without having access to one of those devices.) The Pinephone’s actual-phone functionality appears to be implemented at the hardware level; it’s not a “softphone” that would stream audio through Linux devices in order to do capture/playback during a call.

To route audio from a different device into a phone call, you’d need an audio sink that represents the “audio in” for the telephony circuitry — the one the microphone is normally connected to for the call. But there isn’t any such sink, because that side of things ISN’T exposed to Linux. The audio capture device for a phone call is the microphone, and only the microphone, because it’s the (audio-input) device that’s physically connected to the telephony hardware.
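(Easy enough to confirm: list everything PipeWire knows about, and you won’t find a node whose media.class represents the modem side of the call path:)

$ wpctl status
$ pw-cli list-objects Node | grep -E 'node.name|media.class'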

Thanks for the info!
I’ll have to have a go at it again, maybe (hopefully) there’s still a way to get audio from an app to the modem. I’ll post again when I’ve tried some more.

I just stumbled across this page: “Audio on PinePhone”. Here’s a quote from it:

The diagram shows multiple different paths, but only a few are important for connecting audio from the mic1 to the modem (bb) and from modem to the earphone on the top of the phone.
The remaining 4 routes are meant for the CPU to be able to record me, record the other caller, play back audio to me, and to play back audio to the other caller. Each of these routes can be used separately by sending/reading audio data to/from the left (me) and the right (other caller) channels of the playback and capture PCM devices. (see above)
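Based on that quote, here’s a rough sketch of the kind of thing I want to try: an ALSA “route” plugin that sends a mono stream to only the right (“other caller”) channel of the playback PCM. The hw:0,0 device name is a guess on my part; I still need to work out which playback device actually feeds the modem path on the PinephonePro.

# ~/.asoundrc (sketch)
pcm.to_caller {
    type route
    slave {
        pcm "hw:0,0"     # assumed: the playback device that carries the call routes
        channels 2
    }
    ttable.0.1 1         # mono input channel 0 -> output channel 1 (right, "other caller")
}

Then, during a call, something like “aplay -D to_caller message.wav” should (in theory) come out on the far end.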

I still have a lot to read, but I think it’s possible. Let me know if you find anything else interesting. Thanks again!