Wait for condition from multiple resources in pthread

I have a number of data containers that can send a signal when there are updates in them. The structure looks similar to this:
typedef struct {
    int data;
    /* ... */
    pthread_cond_t *onHaveUpdate;
} Container;
The onHaveUpdate member is a pointer to a global condition variable that is shared by all the containers.
In my application, I have a number of these structures, and they can be updated concurrently by different threads.
Now, is it possible for me to have a thread that listens to the condition and can perform something on whichever container sent the notification?
I know that this can be solved by using one thread per container, but that feels like a waste of resources, so I was wondering if it can be done using only one thread for all containers?

The problem is that your condition variable is shared by all containers, so when a container signals that something has changed, the waiter has no idea what changed. Since the architecture of your application is not 100% clear from your description, why not implement a queue that holds events (pointers to the updated containers are pushed onto it) and a worker thread that takes events from that queue and performs the work? That way, your worker thread only needs to wait on the condition that the queue is non-empty (rather than spinning in a busy while(true) loop), and you can remove the condition variable from your container altogether.
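A minimal sketch of that idea, using a fixed-capacity ring buffer protected by one mutex/condition pair. Note that Container no longer needs its onHaveUpdate pointer; QUEUE_CAP, EventQueue, queue_push(), and handle_update() are illustrative names, not from the question.

#include <pthread.h>

#define QUEUE_CAP 64                     /* capacity chosen for the example */

typedef struct {
    int data;
    /* ... no condition variable needed anymore ... */
} Container;

void handle_update(Container *c);        /* hypothetical per-container work */

typedef struct {
    Container *slots[QUEUE_CAP];         /* pointers to updated containers */
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t nonEmpty;             /* signaled when an event is pushed */
} EventQueue;

/* Called by any updater thread after it modifies a container. */
void queue_push(EventQueue *q, Container *c)
{
    pthread_mutex_lock(&q->lock);
    if (q->count < QUEUE_CAP) {          /* simplification: drop events when full */
        q->slots[q->tail] = c;
        q->tail = (q->tail + 1) % QUEUE_CAP;
        q->count++;
        pthread_cond_signal(&q->nonEmpty);
    }
    pthread_mutex_unlock(&q->lock);
}

/* The single worker thread: blocks until some container has an update. */
void *worker(void *arg)
{
    EventQueue *q = arg;
    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)            /* loop guards against spurious wakeups */
            pthread_cond_wait(&q->nonEmpty, &q->lock);
        Container *c = q->slots[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        handle_update(c);                /* do the work outside the lock */
    }
    return NULL;
}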

Related

Mutex the right way

What is the right way to synchronize a function that mutates shared state using a Mutex? Using Mutex within a coroutine started with launch() or async() works as expected, but if I start a coroutine with runBlocking(), the thread appears to be blocked (locked) for a very long time. The thing is that the function might be called from multiple threads, and I cannot solve this with thread confinement. What's the right way to use Mutex in this scenario?
The right way is to design your software so that it avoids Mutex and other forms of shared mutable state altogether. If you have some resource or data structure that needs to be shared, you can always encapsulate that data structure inside a separate coroutine and communicate with this coroutine whenever you need to do anything with the data structure. This design pattern is known as an actor. An actor is a pair of a coroutine and a channel it reads incoming messages from.
The advantage of this approach is that you can communicate asynchronously with an actor. If you send a message to an actor and don't wait for a response, then you can continue the work without having to wait for an actor to finish processing your message.
You can read a bit more about actors in the guide to kotlinx.coroutines.
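For readers who want to see the shape of an actor outside of coroutines, here is a rough sketch of the same structure in C: a thread that exclusively owns the state, plus a channel (a pipe here) that it reads messages from. The Msg type, the pipe-based mailbox, and send_increment() are illustrative assumptions; in Kotlin you would use the actor coroutine builder instead.

#include <pthread.h>
#include <unistd.h>

/* Hypothetical message type understood by the actor. */
typedef struct {
    int op;         /* 0 = increment; extend as needed */
    int amount;
} Msg;

static int chan[2]; /* mailbox; create once at startup with pipe(chan) */

/* The actor: the only code that ever touches `counter`, so no Mutex is
   needed. Start it with pthread_create(&tid, NULL, actor_main, NULL). */
static void *actor_main(void *arg)
{
    int counter = 0;  /* state confined to this thread */
    Msg m;
    while (read(chan[0], &m, sizeof m) == (ssize_t)sizeof m) {
        if (m.op == 0)
            counter += m.amount;
        /* ... handle other message kinds ... */
    }
    return NULL;
}

/* Senders communicate asynchronously: write() returns without waiting for
   the actor to process the message (writes of <= PIPE_BUF bytes are atomic). */
static void send_increment(int amount)
{
    Msg m = { 0, amount };
    write(chan[1], &m, sizeof m);
}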

blocking call on two Queues?

I have an algorithm (a task in VxWorks) that reads data from multiple queues so that it can manage priorities accordingly. Now, the msgQReceive() function can be set to WAIT_FOREVER, which makes it a blocking call until something is available to receive and process. But how can I do this if I have multiple queues? Currently I check in a while(1) loop whether any of the queues has contents and receive them if so, but if nothing is there, my algorithm just spins and spins and eats CPU resources for nothing. How can I best prevent this?
You should be able to use VxWorks events coupled with a Message Queue.
See the msgQEvStart() function and the Kernel Programmer's Guide, section 7.9.
This is akin to using select() for I/O operations.
You do a blocking eventReceive(), which returns a bitmask indicating which queue has content, and you then do a non-blocking msgQReceive() to retrieve the data.
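A sketch of that pattern (this assumes the VxWorks 6.x events API; verify the exact signatures and option flags of msgQEvStart() and eventReceive() against the Kernel Programmer's Guide):

#include <vxWorks.h>
#include <msgQLib.h>
#include <eventLib.h>

#define EVT_Q1  VXEV01           /* event bit assigned to queue 1 */
#define EVT_Q2  VXEV02           /* event bit assigned to queue 2 */
#define BUF_LEN 256              /* message size chosen for the example */

void serviceTask(MSG_Q_ID q1, MSG_Q_ID q2)
{
    char   buf[BUF_LEN];
    UINT32 received;

    /* Ask each queue to raise its event bit when it has content
       (check the option flags for the exact semantics you need). */
    msgQEvStart(q1, EVT_Q1, EVENTS_OPTIONS_NONE);
    msgQEvStart(q2, EVT_Q2, EVENTS_OPTIONS_NONE);

    for (;;)
    {
        /* Block until at least one queue has content. */
        eventReceive(EVT_Q1 | EVT_Q2, EVENTS_WAIT_ANY, WAIT_FOREVER, &received);

        /* Drain the ready queues with non-blocking receives,
           higher-priority queue first. */
        if (received & EVT_Q1)
            while (msgQReceive(q1, buf, BUF_LEN, NO_WAIT) != ERROR)
                ;   /* process high-priority message in buf */

        if (received & EVT_Q2)
            while (msgQReceive(q2, buf, BUF_LEN, NO_WAIT) != ERROR)
                ;   /* process low-priority message in buf */
    }
}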
Or you can look at How can a task wait on multiple vxworks Queues?, which I wrote a while ago.
As already mentioned, you could use events. Alternatively, if you can use a pipe instead of a msgQ, you could potentially use select().
As another alternative, perhaps consider having multiple tasks, each servicing a single msgQ.

Synchronizing dependent asynchronous functions in Objective-C

So I am running into a race condition and I have a few ideas about how to fix the issue. I am new to threading, so obviously my experience and research are limited. I have a large number of asynchronous calls that can happen if a user receives certain messages from the server. Thus, my design is poor due to the dependent nature of my objects.
Let's say I have two functions:
addUser:(NSString *)s {
    // does some asynchronous activity
}

messageUser:(NSString *)s {
    // does some more asynchronous activity
}
If a user were to receive a message telling it to addUser "Ryan", it would then create a thread and proceed with looking up Ryan and storing him. However, if the user has the application in suspended mode, and the buffer of messages waiting to be received contains both an addUser request and a messageUser request, a race condition occurs because it takes longer to complete addUser than it does to complete messageUser. Thus, if messageUser is called and (in our example) "Ryan" has not been fully added yet, it throws an error.
What would be a possible solution to this issue? I looked into locks and semaphores, and what I am trying to do is: when messageUser receives a call, check to make sure there is no thread currently processing addUser. If there is none, proceed; otherwise, wait and proceed after it has finished.
Well it depends on how the messages are being issued in the first place and what the async response events are.
If the operations have dependencies (ordering requirements) then perhaps a background serial queue would be appropriate? That is a simple way to ensure the messages are processed in order.
If the async operations take completion blocks, then you could have the completion block issue the request for the next operation to be performed, though you may not know about that ahead of time.
If you need to solve this in a more general way then you need some kind of system for tracking prerequisites so you can skip work items that don't have their prerequisites met yet. That probably means your own background thread that monitors a list of waiting tasks and receives notification of all task completions so it can scan for items waiting on that completion and issue them.
It seems really complicated though... I suspect you don't really have such strong async parallel processing requirements and a much simpler design would be just as effective. Given your situation where you are receiving messages from a server, I think a serial queue would be the best option. Then you can process messages in the order the server sent them and keep things simple.
// do this once at app startup
dispatch_queue_t queue = dispatch_queue_create("com.example.myapp", NULL);

// handle server responses
dispatch_async(queue, ^{
    // handle server message here, one at a time
});
In reality, depending on how you connect to your server, you might be able to move the entire connection handling onto the background queue and communicate with it via messages from the UI, updating the UI by dispatching to dispatch_get_main_queue(), which runs on the UI thread.
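For instance (connection, Message, read_message(), and update_ui() are hypothetical stand-ins for your own connection and UI code):

// Sketch only: connection handling stays on the background queue;
// only the UI update hops back to the main queue.
dispatch_async(queue, ^{
    Message *msg = read_message(connection);    // hypothetical blocking read
    dispatch_async(dispatch_get_main_queue(), ^{
        update_ui(msg);                         // hypothetical UI update
    });
});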

using multiple io_service objects

I have an application that listens for and processes messages from both Internet sockets and Unix domain sockets. Now I need to add SSL to the Internet sockets. I was using a single io_service object for all the sockets in the application; it seems I now need separate io_service objects for the network sockets and the Unix domain sockets. I don't have any threads in my application, and I use async_send, async_receive, and async_accept to process data and connections. Please point me to any examples using multiple io_service objects with async handlers.
It is not certain that multiple io_service objects are actually required; I could not locate anything in the reference documentation, or in the overviews for SSL and UNIX domain sockets, that mandates separate io_service objects. Regardless, here are a few options:
Single io_service:
Try to use a single io_service.
If you do not have a direct handle to the io_service object, but you have a handle to a Boost.Asio I/O object, such as a socket, then a handle to the associated io_service object can be obtained by calling socket.get_io_service().
Use a thread per io_service:
If multiple io_service objects are required, then dedicate a thread to each io_service. This approach is used in Boost.Asio's HTTP Server 2 example.
boost::asio::io_service service1;
boost::asio::io_service service2;
boost::thread_group threads;
threads.create_thread(boost::bind(&boost::asio::io_service::run, &service1));
service2.run();
threads.join_all();
One consequence of this approach is that it may require thread-safety guarantees to be made by the application. For example, if service1 and service2 both have completion handlers that invoke message_processor.process(), then message_processor.process() needs to either be thread-safe or be called in a thread-safe manner.
Poll io_service:
io_service provides non-blocking alternatives to run(). Whereas io_service::run() will block until all work has finished, io_service::poll() will run handlers that are ready to run and will not block. This allows a single thread to execute the event loop on multiple io_service objects:
while (!service1.stopped() &&
       !service2.stopped())
{
    std::size_t ran = 0;
    ran += service1.poll();
    ran += service2.poll();

    // If no handlers ran, then sleep.
    if (0 == ran)
    {
        boost::this_thread::sleep_for(boost::chrono::seconds(1));
    }
}
To prevent a tight-busy loop when there are no ready-to-run handlers, it may be worth adding in a sleep. Be aware that this sleep may introduce latency in the overall handling of events.
Transfer handlers to a single io_service:
One interesting approach is to use a strand to transfer completion handlers to a single io_service. This allows for a thread per io_service, while preventing the need to have the application make thread-safety guarantees, as all completion handlers will post through a single service, whose event loop is only being processed by a single thread.
boost::asio::io_service service1;
boost::asio::io_service service2;
// strand2 will be used by service2 to post handlers to service1.
boost::asio::strand strand2(service1);
boost::asio::io_service::work work2(service2);
socket.async_read_some(buffer, strand2.wrap(read_some_handler));
boost::thread_group threads;
threads.create_thread(boost::bind(&boost::asio::io_service::run, &service1));
service2.run();
threads.join_all();
This approach does have some consequences:
It requires handlers that are intended to be run by the main io_service to be wrapped via strand::wrap().
The asynchronous chain now runs through two io_service objects, creating an additional level of complexity. It is important to account for the case where the secondary io_service no longer has work, causing its run() to return.
It is common for an asynchronous chain to occur within the same io_service. Thus, the service never runs out of work, as each completion handler posts additional work onto the io_service:
  .--------------------------------------------------.
  V                                                   |
read_some_handler()                                   |
{                                                     |
    socket.async_read_some(..., read_some_handler) ---'
}
On the other hand, when a strand is used to transfer work to another io_service, the wrapped handler is invoked within service2, causing it to post the completion handler into service1. If the wrapped handler was the only work in service2, then service2 no longer has work, causing service2.run() to return:
    service1                        service2
=========================================================
           .------------------ wrapped(read_some_handler)
           |                               .
           V                               .
   read_some_handler                    NO WORK
           |                               .
           |                               .
           '------------------> wrapped(read_some_handler)
To account for this, the example code uses an io_service::work for service2 so that run() remains blocked until explicitly told to stop().
Looks like you are writing a server and not a client. Don't know if this helps, but I am using ASIO to communicate with 6 servers from my client. It uses TCP/IP with SSL/TLS. You can find a link to the code here
You should be able to use just one io_service object with multiple socket objects. But, if you decide that you really want to have multiple io_service objects, then it should be fairly easy to do so. In my class, the io_service object is static. So, just remove the static keyword along with the logic in the constructor that only creates one instance of the io_service object. Depending on the number of connections expected for your server, you would probably be better off using a thread pool dedicated to handling socket I/O rather than creating a thread for each new socket connection.

Alternatives to dispatch_get_current_queue() for completion blocks in iOS 6?

I have a method that accepts a block and a completion block. The first block should run in the background, while the completion block should run on whatever queue the method was called on.
For the latter I always used dispatch_get_current_queue(), but it seems like it's deprecated in iOS 6 or higher. What should I use instead?
The pattern of "run on whatever queue the caller was on" is appealing, but ultimately not a great idea. That queue could be a low priority queue, the main queue, or some other queue with odd properties.
My favorite approach to this is to say "the completion block runs on an implementation defined queue with these properties: x, y, z", and let the block dispatch to a particular queue if the caller wants more control than that. A typical set of properties to specify would be something like "serial, non-reentrant, and async with respect to any other application-visible queue".
** EDIT **
Catfish_Man put an example in the comments below; I'm just adding it to his answer:
- (void)aMethodWithCompletionBlock:(dispatch_block_t)completionHandler
{
    dispatch_async(self.workQueue, ^{
        [self doSomeWork];
        dispatch_async(self.callbackQueue, completionHandler);
    });
}
This is fundamentally the wrong approach to take for the API you are describing. If an API accepts a block and a completion block to run, the following facts need to be true:
The "block to run" should be run on an internal queue, e.g. a queue which is private to the API and hence entirely under that API's control. The only exception to this is if the API specifically declares that the block will be run on the main queue or one of the global concurrent queues.
The completion block should always be expressed as a tuple (queue, block) unless the same assumptions as for #1 hold true, e.g. the completion block will be run on a known global queue. The completion block should furthermore be dispatched async on the passed-in queue.
These are not just stylistic points, they're entirely necessary if your API is to be safe from deadlocks or other edge-case behavior that WILL otherwise hang you from the nearest tree someday. :-)
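A sketch of the (queue, block) tuple shape from point #2, written against the plain C GCD API (do_work_async() and the queue label are hypothetical names, not an existing API):

#include <dispatch/dispatch.h>

/* Hypothetical API: the caller supplies the work block, plus the completion
   block AND the queue the completion should run on (the "tuple"). */
void do_work_async(dispatch_block_t work,
                   dispatch_queue_t completionQueue,
                   dispatch_block_t completion)
{
    static dispatch_queue_t internalQueue;  /* private to this API */
    static dispatch_once_t  once;
    dispatch_once(&once, ^{
        internalQueue = dispatch_queue_create("com.example.api.work", NULL);
    });

    dispatch_async(internalQueue, ^{
        work();
        /* Always dispatch the completion async on the caller-supplied queue. */
        dispatch_async(completionQueue, completion);
    });
}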
The other answers are great, but for me the answer is structural. I have a method like this on a singleton:
- (void)dispatchOnHighPriorityNonMainQueue:(simplest_block)block forceAsync:(BOOL)forceAsync {
    if (forceAsync || [NSThread isMainThread])
        dispatch_async_on_high_priority_queue(block);
    else
        block();
}
which has two dependencies:
static void dispatch_async_on_high_priority_queue(dispatch_block_t block) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), block);
}
and
typedef void (^simplest_block)(void); // also could use dispatch_block_t
That way I centralize my calls to dispatch on the other thread.
You should be careful about your use of dispatch_get_current_queue in the first place. From the header file:
Recommended for debugging and logging purposes only: the code must not make any assumptions about the queue returned, unless it is one of the global queues or a queue the code has itself created. The code must not assume that synchronous execution onto a queue is safe from deadlock if that queue is not the one returned by dispatch_get_current_queue().
You could do either one of two things:
Keep a reference to the queue you originally posted on (if you created it via dispatch_queue_create), and use that from then on.
Use system defined queues via dispatch_get_global_queue, and keep a track of which one you're using.
Effectively, whereas previously you relied on the system to keep track of the queue you are on, you now have to do it yourself.
Apple has deprecated dispatch_get_current_queue(), but left a hole in another place, so we are still able to get the current dispatch queue:
if let currentDispatch = OperationQueue.current?.underlyingQueue {
    print(currentDispatch)
    // Do stuff
}
This works for the main queue, at least.
Note that the underlyingQueue property is available since iOS 8.
If you need to perform the completion block on the original queue, you may also use OperationQueue directly, i.e. without GCD.
For those who still need to compare queues, you can compare them by their label or by their queue-specific data.
Check this https://stackoverflow.com/a/23220741/1531141
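As a sketch of the queue-specific approach (the key, queue label, and run_on_my_queue() are arbitrary names for illustration):

#include <dispatch/dispatch.h>

static void *kMyQueueKey = &kMyQueueKey;   // unique address used as the key
static dispatch_queue_t myQueue;

// Call once at setup: create the queue and tag it with queue-specific data.
static void setup_queue(void) {
    myQueue = dispatch_queue_create("com.example.myqueue", NULL);
    dispatch_queue_set_specific(myQueue, kMyQueueKey, (void *)1, NULL);
}

// Later, from arbitrary code: are we already running on myQueue?
static void run_on_my_queue(dispatch_block_t block) {
    if (dispatch_get_specific(kMyQueueKey) != NULL)
        block();                        // already on the queue, run directly
    else
        dispatch_async(myQueue, block); // otherwise hop onto it
}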
This is a me too answer. So I will talk about our use case.
We have a services layer and the UI layer (among other layers). The services layer runs tasks in the background (data manipulation tasks, Core Data tasks, network calls, etc.). It has a couple of operation queues to satisfy the needs of the UI layer.
The UI layer relies on the services layer to do its work and then run a success completion block. This block can have UIKit code in it. A simple use case is to get all messages from the server and reload the collection view.
Here we guarantee that the blocks passed into the services layer are dispatched on the queue on which the service was invoked. Since dispatch_get_current_queue is a deprecated method, we use NSOperationQueue.currentQueue to get the caller's current queue. An important note on this property:
Calling this method from outside the context of a running operation typically results in nil being returned.
Since we always invoke our services on a known queue (our custom queues and the main queue), this works well for us. We do have cases where serviceA can call serviceB, which can call serviceC, but since we control where the first service call is made from, we know the rest of the services will follow the same rules.
So NSOperationQueue.currentQueue will always return one of our queues or the main queue.