Determining whether container elements are modifiable from GI annotations

To me that indicates that the caller, i.e. the user, provides the buffer. The buffer may be allocated by the user on the heap, or the user can provide the address of an array living on the stack. In no case is there a reason to allocate a buffer more than once: we can use the same buffer again, and gio will fill it with new data on each call.
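A minimal sketch of that pattern in C, assuming an already-open GInputStream:

```c
#include <gio/gio.h>

/* Minimal sketch: the caller owns the buffer (stack or heap),
 * and the same buffer is refilled on every call. */
static void
read_all (GInputStream *stream)
{
  guint8 buffer[4096];   /* a stack array provided by the caller */
  gssize n;

  /* gio writes new data into the same caller-provided buffer each time */
  while ((n = g_input_stream_read (stream, buffer, sizeof buffer,
                                   NULL, NULL)) > 0)
    {
      /* ... process the n bytes just read ... */
    }
}
```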

Sure, and maybe it’s entirely okay to change the annotation of g_socket_receive() to be ‘out caller-allocates’ to match the change in GInputStream. After all: yes, the bytes buffer is allocated by the caller.
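For illustration, a sketch of what that annotation change might look like in the gtk-doc comment (the surrounding array annotations are paraphrased rather than copied from the GLib source):

```c
/**
 * g_socket_receive:
 * @socket: a #GSocket
 * @buffer: (out caller-allocates) (array length=size) (element-type guint8):
 *     a buffer to read data into
 * @size: the number of bytes you want to read from the socket
 *
 * ...
 */
```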

The performance angle matters a lot less if you’re using a high-level language; after all, you’re allocating a lot of stuff anyway, and it’s up to the language to deal with memory fragmentation.

As I said above: either constness or an out direction can be used to determine whether an argument is a mutable reference or not.
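Both markers are visible in existing GLib prototypes, for example:

```c
/* constness marks an immutable (in) argument: the callee only reads it */
void   g_checksum_update   (GChecksum    *checksum,
                            const guchar *data,
                            gssize        length);

/* an (out caller-allocates) buffer is a mutable reference: the callee
 * writes into caller-provided storage */
gssize g_input_stream_read (GInputStream *stream,
                            void         *buffer,
                            gsize         count,
                            GCancellable *cancellable,
                            GError      **error);
```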

@StefanSalewski, that suggests to me that you don’t treat ‘in’ any differently from ‘out caller-allocates’. That’s the case in C and seems reasonable for bindings with a similar level of abstraction in that respect. For the SML bindings I’m developing, there is a higher level of abstraction: ‘out’ means a value is passed from the callee to the caller, and nothing is passed in the other direction regardless of who allocates the parameter.
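To make that distinction concrete in C terms, here is a hypothetical wrapper (the name stream_read_new is invented for illustration) matching the higher-level ‘out’ view: the caller passes nothing in for the buffer, and a freshly allocated one comes back from the callee:

```c
#include <gio/gio.h>

/* Hypothetical sketch: nothing is passed in for the buffer; the callee
 * allocates it and hands it to the caller as a pure 'out' value. */
static guint8 *
stream_read_new (GInputStream *stream, gsize count,
                 gssize *n_read, GError **error)
{
  guint8 *buffer = g_malloc (count);   /* allocation hidden from the caller */

  *n_read = g_input_stream_read (stream, buffer, count, NULL, error);
  if (*n_read < 0)
    {
      g_free (buffer);
      return NULL;
    }
  return buffer;
}
```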

Yes, though in situations where most work is done by the library calls, e.g. computing a checksum of a file, I wouldn’t expect performance to suffer much due to use of a high-level language.

Allocating a new buffer on each read introduces scope for inefficiency that reusing the same buffer would avoid. Perhaps this is just the price to pay for a higher level of abstraction, and I have found that the price isn’t too bad, at least in the checksum example: a small performance loss and the need to manually trigger GC in the application code. I compared two examples that compute a checksum:

1. using a binding to g_input_stream_read where the caller supplies the same buffer on each call;
2. using a binding to g_input_stream_read that allocates a new buffer on each call.

For 1, I found the average time for a SHA-256 sum of a 3.9 GiB file to be 23.85 s. For 2, the performance was terrible due to virtual memory use, but by explicitly triggering GC in the application code when each buffer is no longer required, the average time was 25.35 s (with ~115 % CPU use due to the parallelized GC). That’s acceptable performance, but requiring memory-management hints in the application code has its downsides.
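For reference, a minimal sketch of example 1 in C (untested; error handling trimmed): one buffer is reused across every read while the checksum is accumulated.

```c
#include <gio/gio.h>

/* Sketch of example 1: SHA-256 of a file, with a single buffer
 * reused across every g_input_stream_read() call. */
static gchar *
sha256_of_file (GFile *file, GError **error)
{
  GFileInputStream *stream = g_file_read (file, NULL, error);
  if (stream == NULL)
    return NULL;

  GChecksum *sum = g_checksum_new (G_CHECKSUM_SHA256);
  guint8 buffer[65536];            /* the single, reused buffer */
  gssize n;

  while ((n = g_input_stream_read (G_INPUT_STREAM (stream), buffer,
                                   sizeof buffer, NULL, error)) > 0)
    g_checksum_update (sum, buffer, n);

  /* n == 0 means clean end of stream; n < 0 means *error is set */
  gchar *digest = (n == 0) ? g_strdup (g_checksum_get_string (sum)) : NULL;

  g_checksum_free (sum);
  g_object_unref (stream);
  return digest;
}
```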

I presume the constness information here comes from the c:type attribute of the GIR file, and that it isn’t available via the girepository API.
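As an illustration of where that lives in the GIR XML (element and attribute names as typically emitted by g-ir-scanner; the parameter itself is invented), the const qualifier survives only in the c:type attribute, not in the introspected type name:

```xml
<parameter name="str" transfer-ownership="none">
  <type name="utf8" c:type="const char*"/>
</parameter>
```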

The C ABI has no concept of constness, so it cannot be available to libraries that wrap that C ABI via libffi, like libgirepository.
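A small self-contained sketch of that point using libffi directly (the library libgirepository uses to make calls):

```c
#include <ffi.h>
#include <stdio.h>

/* Sketch: at the libffi level an argument is described only by its machine
 * type. ffi_type_pointer carries no constness, so a libffi-based wrapper
 * such as libgirepository cannot tell 'const char *' from 'char *'. */
int
main (void)
{
  ffi_cif     cif;
  ffi_type   *args[1]   = { &ffi_type_pointer };   /* no const anywhere */
  const char *s         = "hello from libffi";
  void       *values[1] = { &s };
  ffi_arg     result;

  /* puts() is declared as int puts(const char *), but libffi only sees
   * "a pointer in, an int out". */
  if (ffi_prep_cif (&cif, FFI_DEFAULT_ABI, 1, &ffi_type_sint, args) == FFI_OK)
    ffi_call (&cif, FFI_FN (puts), &result, values);

  return 0;
}
```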
