Splitting A Matroska Video (And Audio) Into Streams For Libretro

Essentially what I want to do is play a video (and audio) through libretro.
If it helps, I will be using a Matroska container with H.265 video and 24-bit FLAC audio.

I need to use gstreamer to split the file into two streams.

The call to .upload_video_frame() needs an &[u8], as shown here.
The call to .upload_audio_frame() needs an &[i16], as shown in the same file, on line 138.

I have deduced that I need a pipeline, something like this:
filesrc location=/path/to/file.mkv ! decodebin ! appsink
Where the appsink is responsible for splitting and uploading the audio and video frames.

Do I have that correct thus far?

Also, I need some way to track the timestamp or frames played, because I want to use conditional logic like this:

if current_time < 4 seconds {
    return;
} else if current_time >= 4 seconds && current_time < 5 seconds {
    play_different_video()
} else {
    play_another_video()
}

I found this, but it was not entirely clear to me if that is what I need.

I have read a good deal through the gstreamer-rs examples, and I have the very beginnings of a function to do this:

fn output_audio_and_video_streams(file_path: &str) -> Result<(), Error> {
    gstreamer::init()?;

    let pipeline: gstreamer::Pipeline = gstreamer::Pipeline::new(None);
    let src: gstreamer::Element = gstreamer::ElementFactory::make("filesrc", None).map_err(|_| MissingElement("filesrc"))?;
    let decodebin: gstreamer::Element =
        gstreamer::ElementFactory::make("decodebin", None).map_err(|_| MissingElement("decodebin"))?;

    src.set_property("location", &file_path)?;

    pipeline.add_many(&[&src, &decodebin])?;
    // I have gathered that I do not add the appsink here in this link_many() call
    gstreamer::Element::link_many(&[&src, &decodebin])?;

    let pipeline_weak: glib::object::WeakRef<gstreamer::Pipeline> = pipeline.downgrade();


    // Things to figure out here

    Ok(())
}

Thank you so much in advance :slight_smile:

You need to add two appsinks, one for audio and one for video. You’d do that similarly to here, from the pad-added signal on the decodebin.

You probably also want audio/video in a very specific format, so you should configure the correct caps on the two appsinks. See e.g. here.

From the appsink’s new-sample callback you would get a gst::Sample. That contains a gst::Buffer, which has a PTS (presentation timestamp). In addition, the sample has a gst::Segment, which, together with the PTS, allows you to compute the stream time of each buffer.

The stream time is what you would use to know, for each buffer, where it is logically in the timeline of the input file.
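
Roughly, inside the new-sample callback that would look something like this (only a sketch — the method names follow the same gstreamer-rs version as your snippet, and the exact signatures may differ a little):

// Sketch only: obtaining the stream time of each buffer in a new-sample callback.
let sample = appsink.pull_sample().map_err(|_| gstreamer::FlowError::Eos)?;
let buffer = sample.get_buffer().ok_or(gstreamer::FlowError::Error)?;
let segment = sample.get_segment().ok_or(gstreamer::FlowError::Error)?;

// The segment plus the buffer's PTS give the stream time, i.e. the buffer's
// logical position in the timeline of the input file.
let pts: gstreamer::ClockTime = buffer.get_pts();
let stream_time = segment.to_stream_time(pts);

// stream_time is what you would compare against your 4 s / 5 s thresholds.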

Alternatively you could query the position on the whole pipeline but that would require you to poll regularly.

See the links above :slight_smile: If you have any more specific questions at that point, let me know.

Thank you so much for the help :slight_smile:

Focusing on the appsink mechanics, I now have the following for my function:

fn output_audio_and_video_streams(file_path: &str) -> Result<(), Error> {
    gstreamer::init()?;

    let pipeline: gstreamer::Pipeline = gstreamer::Pipeline::new(None);
    let filesrc: gstreamer::Element = gstreamer::ElementFactory::make("filesrc", None).map_err(|_| MissingElement("filesrc"))?;
    let decodebin: gstreamer::Element =
        gstreamer::ElementFactory::make("decodebin", None).map_err(|_| MissingElement("decodebin"))?;
    let sink_audio: gstreamer::Element = gstreamer::ElementFactory::make("appsink", None).map_err(|_| MissingElement("appsink"))?;
    let sink_video: gstreamer::Element = gstreamer::ElementFactory::make("appsink", None).map_err(|_| MissingElement("appsink"))?;

    filesrc.set_property("location", &file_path)?;

    pipeline.add_many(&[&filesrc, &decodebin])?;
    gstreamer::Element::link_many(&[&filesrc, &decodebin])?;

    let appsink_audio: gstreamer_app::AppSink = sink_audio
        .dynamic_cast::<gstreamer_app::AppSink>()
        .expect("Sink element is expected to be an appsink!");


    let audio_caps: gstreamer::Caps = gstreamer::Caps::builder("audio/x-raw")
        .field("format", &gstreamer_audio::AudioFormat::S16le.to_str())
        .field("layout", &"interleaved")
        .field("channels", &(1i32))
        .field("rate", &gstreamer::IntRange::<i32>::new(1, i32::MAX))
        .build();

    appsink_audio.set_caps(Some(&audio_caps));

    appsink_audio.connect_pad_added(move |audio_sink, audio_pad| {
        // Fun audio bits go here
    });

    let appsink_video: gstreamer_app::AppSink = sink_video
        .dynamic_cast::<gstreamer_app::AppSink>()
        .expect("Sink element is expected to be an appsink!");

    let video_caps: gstreamer::Caps = gstreamer::Caps::builder("video/x-raw")
        .features(&[&gstreamer_gl::CAPS_FEATURE_MEMORY_GL_MEMORY])
        .field("format", &gstreamer_video::VideoFormat::Rgba.to_str())
        .field("texture-target", &"2D")
        .build();

    appsink_video.set_caps(Some(&video_caps));

    appsink_video.connect_pad_added(move |video_sink, video_pad| {
        // Fun video bits go here
    });


    let pipeline_weak: glib::object::WeakRef<gstreamer::Pipeline> = pipeline.downgrade();

    Ok(())
}

I do not quite see what I need to do with the decodebin variable, nor do I know exactly what to do within the .connect_pad_added() closures.

Is there something I am missing from the appsink example?

No, but from the decodebin example. You need to connect to pad-added on the decodebin, and from the callback you would link to the correct appsink, like in that example (except that the example links to autoaudiosink / autovideosink). See this documentation for some more details about it.

Also you will have to add a new-sample callback to the appsinks so that you can be notified whenever data is available.

Okay, I feel like I have made some solid progress connecting all of the wires :slight_smile:

This is what I have now:

fn output_audio_and_video_streams(file_path: &str) -> Result<(), Error> {
    gstreamer::init()?;

    let pipeline: gstreamer::Pipeline = gstreamer::Pipeline::new(None);
    let filesrc: gstreamer::Element = gstreamer::ElementFactory::make("filesrc", None).map_err(|_| MissingElement("filesrc"))?;
    let decodebin: gstreamer::Element =
        gstreamer::ElementFactory::make("decodebin", None).map_err(|_| MissingElement("decodebin"))?;

    filesrc.set_property("location", &file_path)?;

    pipeline.add_many(&[&filesrc, &decodebin])?;
    gstreamer::Element::link_many(&[&filesrc, &decodebin])?;

    let pipeline_weak: glib::object::WeakRef<gstreamer::Pipeline> = pipeline.downgrade();

    decodebin.connect_pad_added(move |decode_element, decode_pad| {
        let pipeline: gstreamer::Pipeline = match pipeline_weak.upgrade() {
            Some(pipeline) => pipeline,
            None => return,
        };

        let (is_audio, is_video): (bool, bool) = {
            let media_type: Option<(bool, bool)> = decode_pad.get_current_caps().and_then(|caps| {
                caps.get_structure(0).map(|s| {
                    let name = s.get_name();
                    (name.starts_with("audio/"), name.starts_with("video/"))
                })
            });

            match media_type {
                None => {
                    gst_element_warning!(
                        decode_element,
                        gstreamer::CoreError::Negotiation,
                        ("Failed to get media type from pad {}", decode_pad.get_name())
                    );

                    return;
                }
                Some(media_type) => media_type,
            }
        };

        let insert_sink = |is_audio, is_video| -> Result<(), Error> {
            if is_audio {
                let queue: gstreamer::Element = gstreamer::ElementFactory::make("queue", None)
                    .map_err(|_| MissingElement("queue"))?;
                let convert: gstreamer::Element = gstreamer::ElementFactory::make("audioconvert", None)
                    .map_err(|_| MissingElement("audioconvert"))?;
                let resample: gstreamer::Element = gstreamer::ElementFactory::make("audioresample", None)
                    .map_err(|_| MissingElement("audioresample"))?;
                let sink_audio: gstreamer::Element = gstreamer::ElementFactory::make("appsink", None)
                    .map_err(|_| MissingElement("appsink"))?;

                let elements: &[&gstreamer::Element; 4] = &[&queue, &convert, &resample, &sink_audio];
                pipeline.add_many(elements)?;
                gstreamer::Element::link_many(elements)?;

                for element in elements {
                    element.sync_state_with_parent()?;
                }

                let appsink_audio: gstreamer_app::AppSink = sink_audio
                    .dynamic_cast::<gstreamer_app::AppSink>()
                    .expect("Sink element is expected to be an appsink!");

                let audio_caps: gstreamer::Caps = gstreamer::Caps::builder("audio/x-raw")
                    .field("format", &gstreamer_audio::AudioFormat::S16le.to_str())
                    .field("layout", &"interleaved")
                    .field("channels", &(1i32))
                    .field("rate", &gstreamer::IntRange::<i32>::new(1, i32::MAX))
                    .build();

                appsink_audio.set_caps(Some(&audio_caps));

                let callbacks = gstreamer_app::AppSinkCallbacks::builder()
                    .new_sample(|appsink| {
                        let sample: gstreamer::Sample = appsink.pull_sample().map_err(|_| gstreamer::FlowError::Eos)?;
                        let buffer: &gstreamer::BufferRef = sample.get_buffer().ok_or_else(|| {
                            gst_element_error!(
                                appsink,
                                gstreamer::ResourceError::Failed,
                                ("Failed to get buffer from appsink")
                            );

                            gstreamer::FlowError::Error
                        })?;

                        let map: gstreamer::BufferMap<gstreamer::buffer::Readable> = buffer.map_readable().map_err(|_| {
                            gst_element_error!(
                                appsink,
                                gstreamer::ResourceError::Failed,
                                ("Failed to map buffer readable")
                            );

                            gstreamer::FlowError::Error
                        })?;

                        let audio_samples: &[i16] = map.as_slice_of::<i16>().map_err(|_| {
                            gst_element_error!(
                                appsink,
                                gstreamer::ResourceError::Failed,
                                ("Failed to interpret buffer as S16 PCM")
                            );

                            gstreamer::FlowError::Error
                        })?;

                        Ok(gstreamer::FlowSuccess::Ok)
                    })
                    .build();

                appsink_audio.set_callbacks(callbacks);

                let sink_pad: gstreamer::Pad = queue.get_static_pad("sink").expect("queue has no sinkpad");
                decode_pad.link(&sink_pad)?;
            } else if is_video {
                let queue: gstreamer::Element = gstreamer::ElementFactory::make("queue", None)
                    .map_err(|_| MissingElement("queue"))?;
                let convert: gstreamer::Element = gstreamer::ElementFactory::make("videoconvert", None)
                    .map_err(|_| MissingElement("videoconvert"))?;
                let scale: gstreamer::Element = gstreamer::ElementFactory::make("videoscale", None)
                    .map_err(|_| MissingElement("videoscale"))?;
                let sink_video: gstreamer::Element = gstreamer::ElementFactory::make("appsink", None)
                    .map_err(|_| MissingElement("appsink"))?;

                let elements: &[&gstreamer::Element; 4] = &[&queue, &convert, &scale, &sink_video];
                pipeline.add_many(elements)?;
                gstreamer::Element::link_many(elements)?;

                for element in elements {
                    element.sync_state_with_parent()?;
                }

                let appsink_video: gstreamer_app::AppSink = sink_video
                    .dynamic_cast::<gstreamer_app::AppSink>()
                    .expect("Sink element is expected to be an appsink!");

                let video_caps: gstreamer::Caps = gstreamer::Caps::builder("video/x-raw")
                    // .features(&[&gstreamer_gl::CAPS_FEATURE_MEMORY_GL_MEMORY])
                    .field("format", &gstreamer_video::VideoFormat::Rgba.to_str())
                    .field("texture-target", &"2D")
                    .build();

                appsink_video.set_caps(Some(&video_caps));

                let callbacks = gstreamer_app::AppSinkCallbacks::builder()
                    .new_sample(|appsink| {
                        let sample: gstreamer::Sample = appsink.pull_sample().map_err(|_| gstreamer::FlowError::Eos)?;
                        let buffer: &gstreamer::BufferRef = sample.get_buffer().ok_or_else(|| {
                            gst_element_error!(
                                appsink,
                                gstreamer::ResourceError::Failed,
                                ("Failed to get buffer from appsink")
                            );

                            gstreamer::FlowError::Error
                        })?;

                        let map: gstreamer::BufferMap<gstreamer::buffer::Readable> = buffer.map_readable().map_err(|_| {
                            gst_element_error!(
                                appsink,
                                gstreamer::ResourceError::Failed,
                                ("Failed to map buffer readable")
                            );

                            gstreamer::FlowError::Error
                        })?;

                        let video_samples: &[u8] = map.as_slice_of::<u8>().map_err(|_| {
                            gst_element_error!(
                                appsink,
                                gstreamer::ResourceError::Failed,
                                ("Failed to interpret buffer as RGBA")
                            );

                            gstreamer::FlowError::Error
                        })?;

                        Ok(gstreamer::FlowSuccess::Ok)
                    })
                    .build();

                appsink_video.set_callbacks(callbacks);

                let sink_pad: gstreamer::Pad = queue.get_static_pad("sink").expect("queue has no sinkpad");
                decode_pad.link(&sink_pad)?;
            }

            Ok(())
        };

        if let Err(err) = insert_sink(is_audio, is_video) {
            #[cfg(feature = "v1_10")]
            gst_element_error!(
                decode_element,
                gstreamer::LibraryError::Failed,
                ("Failed to insert sink"),
                details: gstreamer::Structure::builder("error-details")
                            .field("error",
                                   &ErrorValue(Arc::new(Mutex::new(Some(err)))))
                            .build()
            );

            #[cfg(not(feature = "v1_10"))]
            gst_element_error!(
                decode_element,
                gstreamer::LibraryError::Failed,
                ("Failed to insert sink"),
                ["{}", err]
            );
        }
    });

    pipeline.set_state(gstreamer::State::Playing)?;

    let bus: gstreamer::Bus = pipeline
        .get_bus()
        .expect("Pipeline without bus. Shouldn't happen!");

    for msg in bus.iter_timed(gstreamer::CLOCK_TIME_NONE) {
        match msg.view() {
            MessageView::Eos(..) => break,
            MessageView::Error(err) => {
                pipeline.set_state(gstreamer::State::Null)?;

                #[cfg(feature = "v1_10")]
                    {
                        match err.get_details() {
                            // This bus-message of type error contained our custom error-details struct
                            // that we sent in the pad-added callback above. So we unpack it and log
                            // the detailed error information here. details contains a glib::SendValue.
                            // The unpacked error is the converted to a Result::Err, stopping the
                            // application's execution.
                            Some(details) if details.get_name() == "error-details" => details
                                .get::<&ErrorValue>("error")
                                .unwrap()
                                .and_then(|v| v.0.lock().unwrap().take())
                                .map(Result::Err)
                                .expect("error-details message without actual error"),
                            _ => Err(ErrorMessage {
                                src: msg
                                    .get_src()
                                    .map(|s| String::from(s.get_path_string()))
                                    .unwrap_or_else(|| String::from("None")),
                                error: err.get_error().to_string(),
                                debug: err.get_debug(),
                                source: err.get_error(),
                            }
                                .into()),
                        }?;
                    }
                #[cfg(not(feature = "v1_10"))]
                    {
                        return Err(ErrorMessage {
                            src: msg
                                .get_src()
                                .map(|s| String::from(s.get_path_string()))
                                .unwrap_or_else(|| String::from("None")),
                            error: err.get_error().to_string(),
                            debug: err.get_debug(),
                            source: err.get_error(),
                        }
                            .into());
                    }
            }
            MessageView::StateChanged(s) => {
                println!(
                    "State changed from {:?}: {:?} -> {:?} ({:?})",
                    s.get_src().map(|s| s.get_path_string()),
                    s.get_old(),
                    s.get_current(),
                    s.get_pending()
                );
            }
            _ => (),
        }
    }

    pipeline.set_state(gstreamer::State::Null)?;

    Ok(())
}

The issue I have not been able to figure out is how to export/return the audio_samples and video_samples variables from within the nested closures/callbacks.

A working example of what I need is here (I mentioned this in my first post).

A quick example of what I need to do is this:

let (audio, video): (&[i16], &[u8]) = output_audio_and_video_streams("/path/to/file.mkv");
handle.upload_audio_frame(audio);
handle.upload_video_frame(video);

Where the return type of my function is changed to -> (&[i16], &[u8])

Do you have any thoughts on how to do this?

You need to pass something into the closures that allows you to get the data elsewhere (e.g. some kind of channel). Or, instead of using the new-sample callback directly, you could call appsink.pull_sample() from any other place, but that would then block until a sample is available. If you do that, you could use the new-sample callback to wake up the code that pulls the samples.
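
For the channel variant, a rough sketch (purely illustrative: the names audio_tx / audio_rx are made up, it assumes the appsink and its caps are created up front as in your earlier version so the sender can be moved straight into the new-sample closure, and it uses the same API vintage as your code; if the sender first has to travel through other closures, their Send/Sync bounds apply too):

use std::sync::mpsc;

// One channel per stream; the receiver stays with whatever consumes the data.
let (audio_tx, audio_rx) = mpsc::channel::<Vec<i16>>();

let callbacks = gstreamer_app::AppSinkCallbacks::builder()
    .new_sample(move |appsink| {
        let sample = appsink.pull_sample().map_err(|_| gstreamer::FlowError::Eos)?;
        let buffer = sample.get_buffer().ok_or(gstreamer::FlowError::Error)?;
        let map = buffer.map_readable().map_err(|_| gstreamer::FlowError::Error)?;
        let audio_samples: &[i16] = map.as_slice_of::<i16>().map_err(|_| gstreamer::FlowError::Error)?;

        // The slice only borrows the mapped buffer, so copy it to move it across the channel.
        audio_tx.send(audio_samples.to_vec()).map_err(|_| gstreamer::FlowError::Error)?;

        Ok(gstreamer::FlowSuccess::Ok)
    })
    .build();
appsink_audio.set_callbacks(callbacks);

// Elsewhere (another thread, or your main loop), consume the chunks:
while let Ok(chunk) = audio_rx.recv() {
    println!("received {} audio samples", chunk.len());
}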

I added the lazy_static crate, and wrote this:

lazy_static! {
    static ref AUDIO_STREAM: Arc<RwLock<&'static[i16]>> = Arc::new(RwLock::new(&[]));
    static ref VIDEO_STREAM: Arc<RwLock<&'static[u8]>> = Arc::new(RwLock::new(&[]));
}

And then used it inside the appsink callback like this (repeated for audio):

let video_stream_clone = VIDEO_STREAM.clone();
let mut video_stream_write: std::sync::RwLockWriteGuard<&[u8]> = video_stream_clone.write().expect("Could not lock VIDEO_STREAM.");

*video_stream_write = video_samples;

However, I have run into a curious borrow-checker issue:

error[E0597]: `sample` does not live long enough
   --> src/main.rs:224:61
    |
224 |                         let buffer: &gstreamer::BufferRef = sample.get_buffer().ok_or_else(|| {
    |                                                             ^^^^^^-------------
    |                                                             |
    |                                                             borrowed value does not live long enough
    |                                                             argument requires that `sample` is borrowed for `'static`
...
260 |                     })
    |                     - `sample` dropped here while still borrowed

error[E0597]: `map` does not live long enough
   --> src/main.rs:244:52
    |
244 |                         let video_samples: &[u8] = map.as_slice_of::<u8>().map_err(|_| {
    |                                                    ^^^--------------------
    |                                                    |
    |                                                    borrowed value does not live long enough
    |                                                    argument requires that `map` is borrowed for `'static`
...
260 |                     })
    |                     - `map` dropped here while still borrowed

Do you have a best way in mind to remedy this?

Depends on the overall context of your application, but something like this doesn’t seem right. Especially because this is a global variable, and it also requires a &'static reference to the data, which you don’t get. The latter could be solved by storing a gst::Buffer inside the RwLock, but that still looks rather wrong.

Generally you would either

  • Pass some kind of channel into the callback, which sends the data to whatever other part of the code consumes it. See std::sync::mpsc or any of the other channel implementations for examples.
  • Don’t use the callback (except maybe as a notification that new data is available), and call the blocking appsink.pull_sample() from somewhere else in your code to directly get access to the data where you need it (a rough sketch of this follows below).
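
For that second option, a rough sketch (only illustrative; it assumes you keep the appsink_audio handle around rather than creating it inside the pad-added callback, and it uses the same API vintage as the rest of the thread):

// Pull-based variant: no new-sample callback required. pull_sample() blocks until a
// sample is available and returns Err at EOS or when the appsink shuts down.
loop {
    let sample = match appsink_audio.pull_sample() {
        Ok(sample) => sample,
        Err(_) => break,
    };

    let buffer = sample.get_buffer().expect("sample without buffer");
    let map = buffer.map_readable().expect("failed to map buffer readable");
    let audio_samples: &[i16] = map.as_slice_of::<i16>().expect("buffer is not S16 PCM");

    // Hand audio_samples (or a copy of it) to whatever consumes the data here.
}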

All this doesn’t really have much to do with GStreamer but is a more general Rust question.

I see.
I amended my program as follows:

fn output_audio_and_video_streams(file_path: &str, sender: std::sync::mpsc::Sender<()>, audio_stream: Arc<RwLock<&'static[i16]>>) -> Result<(), Error> {
    gstreamer::init()?;

    let pipeline: gstreamer::Pipeline = gstreamer::Pipeline::new(None);
    let filesrc: gstreamer::Element = gstreamer::ElementFactory::make("filesrc", None).map_err(|_| MissingElement("filesrc"))?;
    let decodebin: gstreamer::Element =
        gstreamer::ElementFactory::make("decodebin", None).map_err(|_| MissingElement("decodebin"))?;

    filesrc.set_property("location", &file_path)?;

    pipeline.add_many(&[&filesrc, &decodebin])?;
    gstreamer::Element::link_many(&[&filesrc, &decodebin])?;

    let pipeline_weak: glib::object::WeakRef<gstreamer::Pipeline> = pipeline.downgrade();

    // let (audio_stream_clone, sender_clone) = (Arc::clone(&audio_stream), sender.clone());

    let sender_clone = sender.clone();

    decodebin.connect_pad_added(move |decode_element, decode_pad| {

        sender_clone.send(()).unwrap();

        ...
    });
}


fn main() {
    let audio_stream: Arc<RwLock<&'static[i16]>> = Arc::new(RwLock::new(&[]));

    let (sender, receiver): (std::sync::mpsc::Sender<()>, std::sync::mpsc::Receiver<()>) = channel();

    output_audio_and_video_streams(&get_file_path(), sender, audio_stream).unwrap();
}

However, when I try to use the sender inside of the .connect_pad_added() closure, I am met with this error:

error[E0277]: `std::sync::mpsc::Sender<()>` cannot be shared between threads safely
   --> src/main.rs:86:15
    |
86  |       decodebin.connect_pad_added(move |decode_element, decode_pad| {
    |  _______________^^^^^^^^^^^^^^^^^_-
    | |               |
    | |               `std::sync::mpsc::Sender<()>` cannot be shared between threads safely
87  | |
88  | |         sender_clone.send(()).unwrap();
89  | |
...   |
296 | |         }
297 | |     });
    | |_____- within this `[closure@src/main.rs:86:33: 297:6]`
    |
    = help: within `[closure@src/main.rs:86:33: 297:6]`, the trait `std::marker::Sync` is not implemented for `std::sync::mpsc::Sender<()>`
    = note: required because it appears within the type `[closure@src/main.rs:86:33: 297:6]`

I am able to call sender_clone.send(()).unwrap() outside of that closure happily though.
I am a little confused, because that closure does have the move keyword on it, so I thought it should take ownership of it.

What am I missing here?

std::sync::mpsc::Sender is not Sync, but the closure requires that. Basically what the compiler error says :slight_smile:

You can use any kind of mutex (not read-write mutex!) to make something non-Sync become Sync, or you can reorganize the code so it’s not needed.

That you still have a &'static [i16] in here suggests that your problems start somewhere else already, though. It probably makes sense to take a step back and reconsider what you’re actually trying to do here and how that could fit together with the various Rust design patterns and the given APIs.

Okay, I figured a good bit more out :slight_smile:

I changed the function signature to this:

fn output_audio_and_video_streams(file_path: &str, audio_sender: Arc<Mutex<std::sync::mpsc::Sender<Vec<i16>>>>, video_sender: Arc<Mutex<std::sync::mpsc::Sender<Vec<u8>>>>) -> Result<(), Error> {}

And I have a few .clone() calls where needed for each sender.

In each callback, I have this:

println!("Audio length: {}", audio_samples.len());
audio_sender_clone.lock().unwrap().send(audio_samples.to_vec()).unwrap();

The thing is, I see that the callback is executed in chunks; however, I have not figured out how to make the receiver keep pace with it. I know that .recv() blocks until a value is available, so is there a way to do this:

println!("{}", audio_receiver.recv().unwrap().len());

elsewhere in my program, so that it consumes chunks at the same pace as the callback produces them?

Thank you again for all of the help!

That depends on your application design. There are many ways. One way would be to use a Futures-enabled channel and spawn a separate task for each of the two receivers.
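
Purely as an illustration (names made up, assuming the futures crate; whether Futures fit your application at all is something only you can judge), the consuming side could look roughly like this:

// Illustrative only: one task per receiver, both driven by a simple executor.
// The channels would be created with futures::channel::mpsc::unbounded::<Vec<i16>>()
// (and ::<Vec<u8>>() for video); the sending halves are cloneable, can be moved into
// the appsink callbacks, and are used there via unbounded_send().
use futures::channel::mpsc;
use futures::executor::block_on;
use futures::future::join;
use futures::stream::StreamExt;

fn consume_streams(
    mut audio_rx: mpsc::UnboundedReceiver<Vec<i16>>,
    mut video_rx: mpsc::UnboundedReceiver<Vec<u8>>,
) {
    let audio_task = async move {
        while let Some(chunk) = audio_rx.next().await {
            println!("audio chunk: {} samples", chunk.len());
        }
    };
    let video_task = async move {
        while let Some(frame) = video_rx.next().await {
            println!("video frame: {} bytes", frame.len());
        }
    };

    // Run both tasks to completion; they finish when the senders are dropped.
    block_on(join(audio_task, video_task));
}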

Do you think this section would be what I need?

Not really, that explains how all those things work under the hood. It doesn’t explain how to use them, which is what you need. Also, I don’t know if using Futures is the correct approach in your case; that depends on your whole application structure and is something only you can know.

Really, I have no preference, as long as it functions how it should (at the moment).
Currently, I just want to connect the wires and be able to play an audio/video stream from gstreamer to the libretro-backend crate.
If you were writing it, given what we have thus far, where would you go from here?

How does the libretro-backend API work for providing audio/video data? I assume it’s this one here?

Do those functions block? I see that video can only be uploaded once per frame, so presumably you’d call these in some kind of event/rendering loop? It doesn’t look like you can call them from arbitrary threads anyway.

In that case I would probably implement this by keeping the last frame around all the time and then pass that in there every time a frame has to be uploaded, and for the audio keep a queue of audio frames around and implement dropping/skipping as needed (not sure how that would work with this minimal API from libretro-backend…).

So you’d basically share a single-element “channel” for video with that appsink callback, e.g. an Arc<Mutex<Option<gst_video::VideoFrame<Readable>>>>. And for the audio, a multi-element queue instead of just an Option in there. And I assume there is some way to wake up your event/rendering loop from another thread; that’s what you would do whenever the appsink callbacks are called.
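
To make that concrete, a rough sketch (all names are made up; VideoFrame is replaced by a plain Vec<u8> to keep it minimal, and UploadApi is only a hypothetical stand-in for whatever libretro-backend hands you each iteration):

use std::collections::VecDeque;
use std::sync::Mutex;

// Shared between the appsink callbacks (producers) and the libretro run loop (consumer);
// typically kept in an Arc so both sides can hold a reference to it.
struct SharedStreams {
    // Single-element "channel": the most recently decoded video frame.
    last_video_frame: Mutex<Option<Vec<u8>>>,
    // Queue of decoded audio chunks, drained once per iteration.
    audio_queue: Mutex<VecDeque<Vec<i16>>>,
}

// Hypothetical stand-in for the upload half of the libretro-backend API.
trait UploadApi {
    fn upload_video_frame(&mut self, data: &[u8]);
    fn upload_audio_frame(&mut self, data: &[i16]);
}

// Video new-sample callback: overwrite the previous frame.
//     *shared.last_video_frame.lock().unwrap() = Some(video_samples.to_vec());
// Audio new-sample callback: push a chunk.
//     shared.audio_queue.lock().unwrap().push_back(audio_samples.to_vec());

// Per-iteration (retro_run-style) code: upload whatever is currently there.
fn run_one_iteration(shared: &SharedStreams, handle: &mut impl UploadApi) {
    if let Some(frame) = shared.last_video_frame.lock().unwrap().as_deref() {
        handle.upload_video_frame(frame);
    }
    while let Some(chunk) = shared.audio_queue.lock().unwrap().pop_front() {
        handle.upload_audio_frame(&chunk);
    }
}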

But in the end this all depends on how the libretro-backend API works, and I don’t know that API.

In Libretro, the frontend completely controls the backend (also known as a Libretro core): it gives the core callbacks to handle inputs and outputs, and it decides when each iteration of the core runs.

The core must implement several functions that will be called by the frontend, in roughly this order (IIRC):

  • the frontend calls retro_set_*() to set various callback functions (for extra API, video output, audio output, controller input, logging…)
  • the frontend calls retro_init()
  • the frontend calls retro_load_game() if relevant, to set a file to load: game file, video file… (standalone games don’t need to support it)
  • during initialization or when loading a game, the core uses callbacks to tell the frontend the interval between each iteration
  • the frontend loops over retro_run(), running one iteration per call
  • during retro_run(), the core can run a frame and use the callbacks to retrieve controller inputs and to output audio and video
  • the frontend calls retro_unload_game() (if a game has been loaded)
  • the frontend calls retro_deinit()

I simplified it a lot; the best thing is to read https://github.com/libretro/RetroArch/blob/master/libretro-common/include/libretro.h, and if you need more help, let me know. :slight_smile:

Thank you so much!
You must have heard me hacking, because I actually started writing my own wrapper for libretro over the weekend.
I will certainly ask if/when I run into issues :slight_smile:

Thank you for the insight. I am actually looking into the details of libretro, so hopefully this will all help :slight_smile:
