Splitting A Matroska Video (And Audio) Into Streams For Libretro

How does the libretro-backend API work for providing audio/video data? I assume it’s this one here?

Do those functions block? I see that video can only be uploaded once per frame, so presumably you’d call these in some kind of event/rendering loop? It doesn’t look like you can call them from arbitrary threads anyway.

In that case I would probably implement this by keeping the last frame around at all times and passing it in every time a frame has to be uploaded. For the audio, I’d keep a queue of audio frames and implement dropping/skipping as needed (I’m not sure how that would work with this minimal API from libretro-backend…).

So you’d basically share a single-element “channel” for video with that appsink callback, e.g. an Arc<Mutex<Option<gst_video::VideoFrame<Readable>>>>. For the audio you’d use a multi-element queue instead of just an Option. I assume there is some way to wake up your event/rendering loop from another thread, and that’s what you would do whenever the appsink callbacks are called.
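To make that a bit more concrete, here is a minimal sketch of the shared state and the producer/consumer sides, using only the standard library. The VideoFrame and AudioChunk aliases, the function names and the MAX_QUEUED_AUDIO dropping policy are all my own placeholders; in real code the video side would hold a gst_video::VideoFrame<Readable> coming out of the appsink callback:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Hypothetical stand-ins for the real decoded-media types.
// In practice this would be gst_video::VideoFrame<Readable> and
// whatever audio representation the appsink hands out.
type VideoFrame = Vec<u8>;
type AudioChunk = Vec<i16>;

// Arbitrary dropping policy: discard the oldest audio beyond this.
const MAX_QUEUED_AUDIO: usize = 8;

#[derive(Default)]
struct Shared {
    last_video: Option<VideoFrame>,
    audio: VecDeque<AudioChunk>,
}

// Producer side: called from the appsink callbacks.
fn push_video(shared: &Arc<Mutex<Shared>>, frame: VideoFrame) {
    // Only the most recent frame matters, so just overwrite it.
    shared.lock().unwrap().last_video = Some(frame);
}

fn push_audio(shared: &Arc<Mutex<Shared>>, chunk: AudioChunk) {
    let mut s = shared.lock().unwrap();
    s.audio.push_back(chunk);
    // If the consumer falls behind, drop the oldest chunks.
    while s.audio.len() > MAX_QUEUED_AUDIO {
        s.audio.pop_front();
    }
}

// Consumer side: called once per frame from the rendering loop.
fn take_for_frame(shared: &Arc<Mutex<Shared>>) -> (Option<VideoFrame>, Vec<AudioChunk>) {
    let mut s = shared.lock().unwrap();
    // Clone the last frame so it stays around and can be uploaded
    // again next frame if no new one has arrived in the meantime.
    let video = s.last_video.clone();
    let audio = s.audio.drain(..).collect();
    (video, audio)
}
```

The rendering loop would call take_for_frame once per iteration, upload the video frame (if any) and feed all drained audio chunks to the backend; the appsink callbacks would additionally wake the loop up however libretro-backend allows.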

But in the end this all depends on how the libretro-backend API works, and I don’t know that API.