To me that indicates that the caller, i.e. the user, provides the buffer. The buffer may be allocated by the user on the heap, or the user may provide the address of an array living on the stack. In no case is there a reason to allocate the buffer more than once: we can reuse the same buffer, and GIO will fill it with new data on each call.
Sure, and maybe it's entirely okay to change the annotation of g_socket_receive() to be (out caller-allocates) to match the change in GInputStream. After all: yes, the bytes buffer is allocated by the caller.
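For context, an (out caller-allocates) annotation on the buffer parameter would sit in the gtk-doc comment roughly like this; the surrounding array/element-type annotations and parameter descriptions here are illustrative, not a quote of the actual GIO source:

```c
/**
 * g_socket_receive:
 * @socket: a #GSocket
 * @buffer: (array length=size) (element-type guint8) (out caller-allocates):
 *     a buffer to read data into; must be at least @size bytes long
 * @size: the number of bytes to read from the socket
 * ...
 */
```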
The performance angle matters a lot less if you're using a high-level language; after all, you're allocating a lot of stuff anyway, and it's up to the language to deal with memory fragmentation.
As I said above: either constness or an out direction can be used to determine whether an argument is a mutable reference or not.
@StefanSalewski, that suggests to me that you don't treat "in" any differently from "out caller-allocates". That's the case in C and seems reasonable for bindings with a similar level of abstraction in that respect. For the SML bindings I'm developing, there is a higher level of abstraction: "out" means a value is passed from the callee to the caller, and nothing is passed in the other direction regardless of who allocates the parameter.
Yes, though in situations where most work is done by the library calls, e.g. computing a checksum of a file, I wouldn't expect performance to suffer much due to use of a high-level language.
Allocating a new buffer on each read introduces scope for inefficiency that reuse of the same buffer would avoid. Perhaps this is just the price to pay for a higher level of abstraction, and I have found that the price isn't too bad, at least in the checksum example: a small performance loss and the need to manually trigger GC in the application code.

I compared two examples that compute a checksum:

1. using a binding to g_input_stream_read where the caller supplies the same buffer on each call, and
2. using a binding to g_input_stream_read that allocates a new buffer on each call.

For 1, I found the average time for a SHA-256 sum of a 3.9 GiB file to be 23.85 s. For 2, the performance was initially terrible due to virtual memory use, but by explicitly triggering GC in the application code when each buffer is no longer required, the average time was 25.35 s (with ~115 % CPU use due to the parallelized GC). That's acceptable performance, but requiring memory-management hints in the application code has its downsides.
I presume the constness information here comes from the c:type attribute in the GIR file, and this isn't available via the girepository API.
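To illustrate what I mean: in the GIR, constness appears only inside the c:type attribute, which the introspected type itself doesn't carry. A simplified sketch (from memory, so the exact attributes may differ from the generated Gio-2.0.gir) contrasting a read-only buffer with a writable one:

```xml
<!-- g_output_stream_write: the callee only reads the buffer -->
<parameter name="buffer" transfer-ownership="none">
  <type name="guint8" c:type="const void*"/>
</parameter>

<!-- g_input_stream_read: the callee writes into the buffer -->
<parameter name="buffer" transfer-ownership="none">
  <type name="guint8" c:type="void*"/>
</parameter>
```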