GTask thread pool is not big enough for a large number of real-time tasks

Each horizontal bar is a task that represents a pair of turn-on and turn-off operations on a valve. A task does the following: wait for Y ms, turn on valve 1, wait for X ms, then turn off valve 1. This is a real-time system, where the duration of each task varies, as does when (and whether) a new task arrives.

Since each task is a blocking operation that can run in a background thread, I used GTask to run each task in a worker thread. The problem occurs when a large number of tasks arrive in short succession: the thread pool runs out of worker threads very quickly, and incoming tasks are delayed, which leads to the valves being turned on/off at the wrong times. The thread pool runs out of worker threads even faster when there is more than one valve in the system.

I want to ask whether the use case described above is actually unsuitable for GTask. And if GTask is not the answer, what approach would you recommend?

Hi, welcome to GNOME’s Discourse :slight_smile:

Is this the same question as asked on StackOverflow?

I wonder if, rather than having one thread job for each horizontal bar, you should have one worker thread per valve, which polls some shared state to tell when it should turn on/off the valve. Each worker thread could sit in a tight poll loop waiting for state changes, and could run with realtime privileges if needed. That would mean you have O(valves) threads rather than O(parallel operations) threads.

Would that fit your use case?

1 Like

Hi @pwithnall,

Thank you for your suggestions!

The question on StackOverflow is about my implementation using GTask, while the question posted here is to check whether GTask is the right tool for the problem before switching to lower-level tools provided by GLib.

I thought about using a thread for each valve as well, but I’m still sorting out the details. My idea is that each thread keeps track of two time values: t-on, the time at which its valve should be turned on, and t-off, the time at which its valve should be turned off. The two time values are updated each time a new task arrives, as follows:

if t-on-new <= t-off:  # overlap
    t-on = t-on-new
    t-off = t-off + (t-off-new - t-off)
else:  # no overlap
    t-on = t-on-new
    t-off = t-off-new

t-on and t-off can be countdown timers with callbacks attached to them, which are executed once the time runs out. And since t-on and t-off are only accessed by a single thread, there is no need to lock them, which simplifies the threading code.

So for the implementation, each thread will have a GMainContext and GMainLoop for polling the two GSources (t-on, t-off).

Is what I describe similar to what you have in mind? Do you recommend using GThreadPool or GThread in this case? Please also correct me if I’m wrong: GTask is a form of multi-threaded async, and you suggest using single-threaded async instead?

It’s hard for me to suggest an approach with complete confidence, since I don’t know all of your use cases and requirements. However, on the basis of what you’ve said, I think you should use one thread per valve, and that thread should read incoming messages off a GAsyncQueue. Other threads can put messages onto the GAsyncQueue. Each message would contain an on-time, an off-time, and any callbacks or other closure state you want to pass to the valve thread.

I think GAsyncQueue is more appropriate than GTask because you’re dealing with a sequence of closures which are all of the same type, and which don’t (necessarily) all need their own completion callback to be invoked (although this is an area where I don’t really understand your requirements). There are realistically no limits on the size or queue/dequeue speed of a GAsyncQueue.

On the other hand, running a GTask asynchronously triggers a new thread to be spawned in a global thread pool, and that pool has limits (as you have found). It has various features, and those features require more allocations and more synchronisation. From what you’ve said, it seems like you don’t really need all the features of a GTask. Essentially all you need is a closure and a method for passing it from one thread to another and notifying the second thread to wake up and process it. GAsyncQueue does exactly that.

Note that running callbacks would have to be deferred back to a thread other than the valve thread, otherwise the valve thread may end up blocking in the callback and hence would no longer be realtime. You can do that by creating a GIdleSource and attaching it to a GMainContext which is running in the thread where you want the callback to be executed.

1 Like

Hi @pwithnall,

Thank you for the clarifications! I was about to go for the custom GSource route if not for your suggestion on using GAsyncQueue.

My apologies for the unclear explanation; let me try again:

The appsink element runs in a separate thread (say, thread A). Appsink emits a signal when a new buffer arrives, which causes thread A to call the corresponding signal handler. The signal handler must process a buffer as fast as possible to avoid blocking the whole pipeline.

A buffer contains metadata for multiple frames, and each frame contains metadata for multiple objects; i.e., a Batch-metadata contains a linked list of Frame-metadatas, each Frame-metadata contains a linked list of Object-metadatas, and each Object-metadata contains info on the type and location of the object. The end goal is to use the buffer metadata to control the turning on and off of multiple valves. Specifically, we use a given Object-metadata to decide which valve should be turned on for that object and how long to wait before turning it on; then, after a fixed amount of time, we turn the valve off.

My understanding of your suggestion in this case is: each valve thread consumes a separate GAsyncQueue, so if we have 2 valves, there will be 2 GAsyncQueues. When a buffer arrives at appsink, thread A calls the signal handler, whose job is to pick the appropriate valve and forward the necessary data to the corresponding GAsyncQueue. Then, the valve thread responsible for that GAsyncQueue will process the new message (i.e., compute the wait time until turn-on, update t-on/t-off, wait until t-on, turn on, wait until t-off, turn off). Does what I describe seem reasonable to you? I’m not sure what you mean by deferring the callback to another thread though; would you please elaborate?

Seems reasonable from what you’ve said, but bear in mind I’m not doing consultancy for you :slight_smile:

You said “t-on and t-off can … have callbacks attached to them”. If those callbacks need to be invoked in a thread other than the valve thread, they need to be passed to another thread’s context. Otherwise the code in the callback will execute on the wrong CPU. See

1 Like

Hi @pwithnall,

Of course I understand, thank you so much for your suggestions and clarifications.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.