I need to lock a mutex before making an asynchronous request, and then unlock the mutex in the callback of that request, which runs on another thread.
Apple's documentation says:
Warning: The NSLock class uses POSIX threads to implement its locking behavior. When sending an unlock message to an NSLock object, you must be sure that message is sent from the same thread that sent the initial lock message. Unlocking a lock from a different thread can result in undefined behavior.
How can I avoid this "undefined behaviour" and make it work as expected?
Better yet, use an NSOperationQueue or a GCD queue as your synchronization primitive.
Locks are expensive, and semaphores are, more or less, a lock with a counter.
Queue-based coding is far more efficient, especially when using the built-in queuing mechanisms.
Use an NSCondition for this, to signal other threads that they can safely pass now.
Don't use a mutex for this. Use a semaphore initialized to 1 or some other lock mechanism that allows cross-thread communication/locking.
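A GCD semaphore has no thread-ownership requirement, so one thread can wait on it while the callback's thread signals it. A minimal sketch, assuming a hypothetical client object with a startRequestWithCompletion: method:

dispatch_semaphore_t sema = dispatch_semaphore_create(0);

[client startRequestWithCompletion:^(NSData *data, NSError *error) {
    // Runs on another thread; semaphores have no owner, so signaling here is safe.
    // (`client` and `startRequestWithCompletion:` are hypothetical.)
    dispatch_semaphore_signal(sema);
}];

// Block the current thread until the callback signals.
dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);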
Related
A lock variable and a semaphore appear to be identical. Explain the one significant difference between them
I've looked through all my notes and all I can find are the similarities between them, such as that both involve atomic actions and a shared resource. I can't seem to find that "one significant difference".
Do you suspect your teacher is trying to trick you? If not, there is an old adage: "if you have checked everything and still can't find the answer, then something you know just isn't so".
The differences between synchronization mechanisms are often subtle, and may seem insignificant.
For example, it might not seem important that a semaphore and a condition variable are stateless -- anybody can wake them up (post, release, signal, ...) -- whereas a mutex is strictly stateful: only the last actor (process, thread, task, ...) that claimed it (lock, enter, ...) can release it.
If you need to answer the question "who owns this resource?", there is no answer for a semaphore or condition variable, but there is for a mutex.
I assume that by "lock variable" you mean a mutex.
Yes, semaphores and mutexes seem similar, and some people use a binary semaphore as a mutex.
But they are not the same, for two major reasons.
Intention: A mutex is meant to be used around a critical section of code. It is used mainly to make sure that a resource is used by only one thread in a program at a time. If a thread is able to lock the mutex, that means it has exclusive access to that resource.
On the other hand, semaphores are meant for the producer-consumer case, where a producer is producing data and a consumer is consuming it. If you consider the container where the data is stored as a resource, the producer and consumer can work simultaneously on different parts of the data in the container. If there are multiple consumers, then the number of consumers accessing the container (the resource) needs to be limited by the amount of data present in the container.
In semaphore terms: if producers call sem_post after producing each piece of data and putting it in the container, and consumers call sem_wait before accessing data in the container, you are controlling the number of users accessing the container. Semaphores are not meant to provide exclusive access to a resource for one user; the intention is to limit the number of users of a resource.
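A minimal sketch of that producer-consumer pattern using GCD semaphores, where dispatch_semaphore_signal and dispatch_semaphore_wait play the roles of sem_post and sem_wait (the container and item values are illustrative assumptions):

dispatch_semaphore_t items = dispatch_semaphore_create(0);      // counts data in the container
NSMutableArray *container = [NSMutableArray array];             // the shared container
NSLock *containerLock = [[NSLock alloc] init];                  // protects the array itself

// Producer: put data in the container, then post.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (int i = 0; i < 5; i++) {
        [containerLock lock];
        [container addObject:@(i)];
        [containerLock unlock];
        dispatch_semaphore_signal(items);                       // sem_post
    }
});

// Consumer: wait until data is available, then take it.
for (int i = 0; i < 5; i++) {
    dispatch_semaphore_wait(items, DISPATCH_TIME_FOREVER);      // sem_wait
    [containerLock lock];
    NSNumber *value = container.firstObject;
    [container removeObjectAtIndex:0];
    [containerLock unlock];
    NSLog(@"consumed %@", value);
}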
Usage: A mutex should be unlocked by the thread that locked it. In C, if a thread tries to unlock a mutex that it has not locked, the behavior is undefined. In the case of a semaphore, one thread can do semaphore_wait and another thread can do semaphore_post -- this is how it is normally used. (The "one significant difference"?? :D)
Some developers use binary semaphores as mutexes. That is risky because of point 2 (Usage) above. It is also, in my opinion, a workaround: it is almost like replacing a (counting) semaphore with a mutex, a counter, a flag, and a polling mechanism in place of sem_wait. The code will work, but it is overkill.
There are more resources on this:
Difference between binary semaphore and mutex
https://www.geeksforgeeks.org/mutex-vs-semaphore/
I want to use an NSOperationQueue to dispatch Core Data operations. However, operation queue behavior is not always the same (e.g. on iOS 4.0/OS X 10.6 it dispatches using libdispatch, which uses thread pools), and a queue might not always use the same thread (as NSManagedObjectContext requires).
Can I force a serial NSOperationQueue to execute on a single thread?
Or do I have to create my own simple queuing mechanism for that?
Can I force a serial NSOperationQueue to execute on a single thread?
Or do I have to create my own simple queuing mechanism for that?
You shouldn't need to do either of those. What Core Data really requires is that you don't have two pieces of code making changes to a managed object context at the same time. There's even a note on this at the very beginning of Concurrency with Core Data:
Note: You can use threads, serial operation queues, or dispatch queues for concurrency.
For the sake of conciseness, this article uses “thread” throughout to refer to any of these.
What's really required is that you serialize operations on a given context. That happens naturally if you use a single thread, but NSOperationQueue also serializes its operations if you set maxConcurrentOperationCount to 1, so you don't have to worry about ensuring that all operations take place on the same thread.
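A minimal sketch of that setup (the managed object context here is an assumption for illustration):

NSOperationQueue *coreDataQueue = [[NSOperationQueue alloc] init];
coreDataQueue.maxConcurrentOperationCount = 1;  // operations now run one at a time

[coreDataQueue addOperationWithBlock:^{
    // All access to `context` is funneled through this serial queue, so no
    // two operations touch it concurrently, even if the queue happens to run
    // them on different underlying threads.
    NSError *error = nil;
    [context save:&error];  // `context` is an assumed NSManagedObjectContext
}];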
Apple decided to bind managed objects to real threads; it isn't that safe anymore to access a context on different threads. A context without any objects might be safe, but its objects are not.
I just created a singleton method, and I would like to know what the @synchronized() directive does, as I use it frequently but do not know what it means.
It declares a critical section around the code block. In multithreaded code, @synchronized guarantees that only one thread can be executing that code in the block at any given time.
If you aren't aware of what it does, then your application probably isn't multithreaded, and you probably don't need to use it (especially if the singleton itself isn't thread-safe).
Edit: Adding some more information that wasn't in the original answer from 2011.
The @synchronized directive prevents multiple threads from entering any region of code that is protected by a @synchronized directive referring to the same object. The object passed to the @synchronized directive is the object that is used as the "lock." Two threads can be in the same protected region of code if a different object is used as the lock, and you can also guard two completely different regions of code using the same object as the lock.
Also, if you happen to pass nil as the lock object, no lock will be taken at all.
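A minimal sketch of those rules (the token objects are illustrative assumptions):

static NSObject *tokenA;  // assume both tokens are created once at startup
static NSObject *tokenB;

- (void)workA {
    @synchronized (tokenA) { /* region 1 */ }
}

- (void)workB {
    @synchronized (tokenA) { /* region 2: same lock, so mutually exclusive with region 1 */ }
}

- (void)workC {
    @synchronized (tokenB) { /* region 3: different lock, so it can run alongside regions 1 and 2 */ }
}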
From the Apple documentation here and here:
The @synchronized directive is a convenient way to create mutex locks on the fly in Objective-C code. The @synchronized directive does what any other mutex lock would do—it prevents different threads from acquiring the same lock at the same time.
The documentation provides a wealth of information on this subject. It's worth taking the time to read through it, especially given that you've been using it without knowing what it's doing.
The @synchronized directive is a convenient way to create mutex locks on the fly in Objective-C code.
The @synchronized directive does what any other mutex lock would do—it prevents different threads from acquiring the same lock at the same time.
Syntax:
@synchronized(key)
{
    // thread-safe code
}
Example:
- (void)appendExisting:(NSString *)val
{
    @synchronized (oldValue) {
        // stringByAppendingFormat: returns a new string, so assign the result
        oldValue = [oldValue stringByAppendingFormat:@"-%@", val];
    }
}
Now the above code is thread-safe: multiple threads can change the value without interfering with each other.
The above is just a simple example, of course...
The @synchronized block automatically handles locking and unlocking for you. With @synchronized, you have an implicit lock associated with the object you are using to synchronize. Here is a very informative discussion on this topic: How does @synchronized lock/unlock in Objective-C?
Excellent answer here:
Help understanding class method returning singleton
with further explanation of the process of creating a singleton.
@synchronized is a thread-safety mechanism. The piece of code written inside the block becomes part of a critical section that only one thread can execute at a time.
@synchronized applies the lock implicitly, whereas NSLock applies it explicitly.
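A minimal sketch of that contrast (the counter and lock properties are illustrative assumptions):

// @synchronized takes and releases the lock for you:
- (void)incrementImplicit {
    @synchronized (self) {
        self.counter += 1;
    }
}

// NSLock requires explicit lock/unlock calls:
- (void)incrementExplicit {
    [self.lock lock];    // `lock` is an assumed NSLock property
    self.counter += 1;
    [self.lock unlock];
}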
It only assures thread safety; it doesn't guarantee it. What I mean is: you hire an expert driver for your car, yet that still doesn't guarantee the car won't meet with an accident. However, the probability stays very slight.
I'm doing this with MacRuby, but I don't think that should matter much here.
I've got a model which stores its state in a dictionary data structure. I want concurrent operations to be updating this data structure sporadically. It seems to me like GCD offers a few possible solutions to this, including these two:
wrap any code that accesses the data structure in a block sent to some serial queue
use a GCD semaphore, with client code sending wait/signal calls as necessary when accessing the structure
When the blocks in the first solution are dispatched synchronously, it seems pretty much equivalent to the semaphore solution. Does either of these solutions have clear advantages that I'm missing? Is there a better alternative I'm missing?
Also: would it be straightforward to implement a read-write (shared-exclusive) lock with GCD?
Serial Queue
Pros
no lock is involved
Cons
tasks can't run concurrently in a serial queue
GCD Semaphore
Pros
tasks can run concurrently
Cons
it uses a lock, even though it is a lightweight one
We can also use atomic operations instead of a GCD semaphore; they would be lighter than a semaphore in some situations.
Synchronization Tools - Atomic Operations
Guarding access to the data structure with dispatch_sync on a serial queue is semantically equivalent to using a dispatch semaphore, and in the uncontended case they should both be very fast. If performance is important, benchmark and see whether there's any significant difference.
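A minimal sketch of the serial-queue approach (the queue label and dictionary are illustrative assumptions):

dispatch_queue_t stateQueue = dispatch_queue_create("com.example.model.state", NULL); // serial
NSMutableDictionary *state = [NSMutableDictionary dictionary];

// Writers can be asynchronous:
dispatch_async(stateQueue, ^{
    state[@"count"] = @42;
});

// Readers dispatch synchronously so they can hand a value back:
__block NSNumber *count = nil;
dispatch_sync(stateQueue, ^{
    count = state[@"count"];
});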
As for the readers-writer lock, you can indeed construct one on top of GCD—at least, I cobbled something together the other day here that seems to work. (Warning: there be dragons/not-well-tested code.) My solution funnels the read/write requests through an intermediary serial queue before submitting to a global concurrent queue. The serial queue is suspended/resumed at the appropriate times to ensure that write requests execute serially.
I wanted something that would simulate a private concurrent dispatch queue that allowed for synchronisation points—something that's not exposed in the public GCD api, but is strongly hinted at for the future.
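For reference, GCD's barrier functions (mentioned in the warning below) give another way to build readers-writer behavior on top of a concurrent queue; a minimal sketch, with the queue label and dictionary as illustrative assumptions:

dispatch_queue_t rwQueue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
NSMutableDictionary *sharedDict = [NSMutableDictionary dictionary];

// Readers run concurrently with one another:
__block id value = nil;
dispatch_sync(rwQueue, ^{
    value = sharedDict[@"key"];
});

// A writer takes the queue exclusively via a barrier:
dispatch_barrier_async(rwQueue, ^{
    sharedDict[@"key"] = @"newValue";
});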
Adding a warning (which ends up being a con for dispatch queues) to the previous answers.
You need to be careful about how the dispatch queues are called, as there are some hidden scenarios that were not immediately obvious to me until I ran into them.
I replaced NSLock and @synchronized on a number of critical sections with dispatch queues, with the goal of having lightweight synchronization. Unfortunately, I ran into a situation that results in a deadlock, and I have traced it back to my use of the dispatch_barrier_async / dispatch_sync pattern. It would seem that dispatch_sync may opportunistically call its block on the main queue (if already executing there), even when you created a concurrent queue. This is a problem, since dispatch_sync on the current dispatch queue causes a deadlock.
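The simplest form of that trap, as a sketch (the queue name is an illustrative assumption; this inner block never runs):

dispatch_queue_t q = dispatch_queue_create("com.example.q", NULL); // serial

dispatch_async(q, ^{
    // We are already executing on q; synchronously targeting q again deadlocks:
    dispatch_sync(q, ^{
        NSLog(@"never reached");
    });
});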
I guess I'll be moving backwards and using another locking technique in these areas.
I have some code like this:
- (void)doDatabaseFetch {
    ...
    @synchronized (self) {
        ...
    }
}
and many objects that call doDatabaseFetch as the user uses the view.
My problem is, I have an operation (navigating to the next view) that also requires a database fetch, and it hits the same synchronized block and waits its turn! I would ideally like this operation to kill all the waiting threads, or to give this thread a higher priority so that it can execute immediately.
Apple says that
The recommended way to exit a thread is to let it exit its entry point routine normally. Although Cocoa, POSIX, and Multiprocessing Services offer routines for killing threads directly, the use of such routines is strongly discouraged.
So I don't think I should kill the threads... but how can I let them exit normally if they're waiting on a synchronized block? Will I have to write my own semaphore to handle this behavior?
The first question to ask here: do you need a critical section so big that many threads end up waiting to enter it? What you are doing here is serializing parallel execution, i.e. making your program single-threaded again (but slower). Reduce the lock scope as much as possible, think about reducing contention at the application level, and use appropriate synchronization tools (wait/signal) -- you'll find that you pretty much never need to kill threads. I know it's very general advice, but it really helps to think that way.
Typically you cannot terminate a thread that is waiting on a synchronized block. If you need that sort of behavior, you should use a timed wait-and-signal paradigm, so that threads sleep while waiting and can be interrupted. Moreover, with a timed wait, each time the wait expires the thread has the opportunity not to go back to sleep, but rather to exit or take some other path (even if you don't choose to terminate it).
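A minimal sketch of such a timed wait-and-signal loop using NSCondition (the condition, workReady, and shouldExit properties are illustrative assumptions):

- (void)waitForWork {
    [self.condition lock];
    while (!self.workReady) {
        // Sleep at most half a second at a time...
        [self.condition waitUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.5]];
        // ...so the thread can periodically check a cancellation flag and
        // exit its entry point normally, as Apple recommends.
        if (self.shouldExit) {
            [self.condition unlock];
            return;
        }
    }
    // ... proceed with the fetch ...
    [self.condition unlock];
}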
Synchronized blocks are designed for uncontested locks: on an uncontested lock, the synchronization should be pretty close to a no-op. But as soon as the lock becomes contested, they have a very detrimental effect on application performance, beyond even the slowdown from serializing your parallel program.
I'm not an Objective-C expert by any means, but I'm sure there are some more advanced synchronization patterns available, such as barriers, conditions, and atomics.