I have a problem regarding the behaviour of the pthread function pthread_rwlock_wrlock. The specification linked above states that when one thread has locked the lock for writing and the same thread locks it again, the result is undefined behaviour (I could actually observe this: on x86 Linux the second call is a no-op, while on PowerPC Linux it stalls the thread).
The behaviour I need would be a read-write lock with the following characteristics:
read-locking by a thread succeeds if:
the lock is not held by any thread
the lock is read-locked by zero or more other threads, and possibly also read- or write-locked by the calling thread
write-locking by a thread succeeds if:
the lock is not held by any other thread
(i.e. either the lock is free, or only the calling thread already holds it, for reading or writing)
With a pthread_mutex_t, the recursiveness of the lock can be controlled via an initialization attribute, but there is no equivalent for pthread_rwlock_t.
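For comparison, this is roughly the mutex initialization I mean (a minimal sketch):

#include <pthread.h>

/* A mutex made recursive via its init attribute, so the owning
 * thread may lock it again without undefined behaviour. */
void make_recursive_mutex(pthread_mutex_t *mutex)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}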
What are my options? I've never actually had to implement this kind of concurrency primitive in C, and I think I'm missing some obvious solution here.
To be honest, recursive locking does have some uses but generally it's a hack. I can't seem to find the article right now, but Butenhof has a nice rant on this.
Back to the question. You could keep a thread-specific flag that signals: "I have the lock". Set it right after locking and unset it just before unlocking. Since only the owning thread ever accesses it, you should be safe. So when trying to lock you simply check first: "Hey, do I hold this thing already?".
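A minimal sketch of that idea, assuming C11 _Thread_local and a single global lock (all names are mine); a per-thread counter rather than a plain flag lets nested lock/unlock pairs balance:

#include <pthread.h>

static pthread_rwlock_t g_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Per-thread recursion depth: nonzero while this thread holds g_lock. */
static _Thread_local int g_depth = 0;

void my_wrlock(void)
{
    if (g_depth == 0)                     /* first acquisition: really lock */
        pthread_rwlock_wrlock(&g_lock);
    g_depth++;                            /* "I have the lock" */
}

void my_unlock(void)
{
    g_depth--;                            /* unset before unlocking */
    if (g_depth == 0)                     /* last release: really unlock */
        pthread_rwlock_unlock(&g_lock);
}

Note this only handles recursive write-locking; mixing in recursive read-locking needs separate bookkeeping, and a read-to-write upgrade still cannot be made safe this way.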
As a side note: are you sure the design is okay if a thread tries to lock twice?
EDIT
Found the article.
But if that's all that's necessary, why does POSIX have recursive mutexes?
Because of a dare.
Related
A lock variable and a semaphore appear to be identical. Explain the one significant difference between them
I've looked through all my notes and all I can find is the similarities between them, such as that both are atomic actions and both involve a shared resource. I can't seem to find that "one significant difference".
Do you suspect your teacher is trying to trick you? If not, there is an old adage: "if you have checked everything and still can't find the answer, then something you 'know' isn't so".
The difference between synchronization mechanisms is often subtle; and may seem insignificant.
For example, it might not seem important that a semaphore and a condition variable have no notion of ownership -- anybody can wake them up (post, release, signal, ...); whereas a mutex is strictly owned -- only the last actor (process, thread, task, ...) that claimed it (lock, enter, ...) can release it.
If you need to answer the question "who owns this resource", there is no answer for a semaphore or condition variable; but there is for a mutex.
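A small C illustration of that ownership difference (a sketch; error checking omitted):

#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

sem_t sem;
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void *thread_a(void *arg)
{
    sem_wait(&sem);            /* block until somebody, anybody, posts */
    return NULL;
}

void *thread_b(void *arg)
{
    sem_post(&sem);            /* legal: a semaphore has no owner */
    /* pthread_mutex_unlock(&mtx) here would be undefined behaviour
     * unless this thread had locked mtx itself */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&sem, 0, 0);
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}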
I'll assume "lock variable" means a mutex.
Yes, semaphores and mutexes seem similar; some people even use a binary semaphore as a mutex.
But they are not the same, for two major reasons.
Intention: a mutex is meant to be used with a critical section of code, mainly to make sure that a resource is used by only one thread of a program at a time. If a thread is able to lock the mutex, that means it has exclusive access to that resource.
Semaphores, on the other hand, are meant for the producer-consumer case: a producer produces data and a consumer consumes it. If you consider a container where the data is stored as the resource, the producer and consumers can work simultaneously on different parts of the data in the container. If there are multiple consumers, the number of consumers accessing the container (the resource) is limited by the amount of data present in it.
In semaphore terms: if producers call sem_post after producing each piece of data and putting it in the container, and consumers call sem_wait before accessing data in the container, you are controlling the number of users accessing the container. Semaphores are not meant to provide exclusive access to a resource; the intention is to limit the number of simultaneous users of a resource.
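As a sketch of that pattern (a classic bounded buffer; names are made up, and the container itself still needs a mutex for its own consistency):

#include <pthread.h>
#include <semaphore.h>

#define CAPACITY 16

static int buffer[CAPACITY];
static int head, tail;
static sem_t items;   /* counts data waiting in the container */
static sem_t slots;   /* counts free space left in the container */
static pthread_mutex_t buf_mtx = PTHREAD_MUTEX_INITIALIZER;

void container_init(void)
{
    sem_init(&items, 0, 0);
    sem_init(&slots, 0, CAPACITY);
}

void produce(int value)
{
    sem_wait(&slots);                /* wait for a free slot */
    pthread_mutex_lock(&buf_mtx);    /* brief exclusive access to the buffer */
    buffer[tail] = value;
    tail = (tail + 1) % CAPACITY;
    pthread_mutex_unlock(&buf_mtx);
    sem_post(&items);                /* announce one more item */
}

int consume(void)
{
    int value;
    sem_wait(&items);                /* sleep until an item exists */
    pthread_mutex_lock(&buf_mtx);
    value = buffer[head];
    head = (head + 1) % CAPACITY;
    pthread_mutex_unlock(&buf_mtx);
    sem_post(&slots);                /* announce one more free slot */
    return value;
}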
Usage: a mutex should be unlocked by the thread that locked it. In C, if a thread tries to unlock a mutex it has not locked, the behavior is undefined. With a semaphore, one thread can do sem_wait and another thread can do sem_post (this is how it is normally used). ("One significant difference"?? :D )
Some developers use binary semaphores as mutexes. This is risky because of point 2 (Usage) above. Also, in my opinion it is just a workaround for a mutex: it is almost like replacing a (counting) semaphore with a mutex, a counter, a flag, and a polling mechanism in place of sem_wait. The code will work, but it is overkill.
There are more resources on this:
Difference between binary semaphore and mutex
https://www.geeksforgeeks.org/mutex-vs-semaphore/
I have a situation where a session of background processing can finish by timing out, by the user asynchronously cancelling, or by the session completing. Any of those completion events can run a single-shot completion method, which must only be run once. Assume that the session is an instance of an object, so any synchronisation must use instance constructs.
Currently I'm using an Atomic Compare and Swap operation on a completion state variable so that each event can test and set the completion state when it runs. The first completion event to fire gets to set the completed state and run the single shot method and the remaining events fail. This works nicely.
However I can't help feeling that I should be able to do this in a higher level way. I tried using a Lock object (NSLock as I'm writing this with Cocoa) but then got a warning that I was releasing a lock that was still in the locked state. This is what I want of course. The lock gets locked once and never unlocked but I was afraid that system resources representing the lock might get leaked.
Anyway, I'm just interested as to whether anyone knows of a higher-level way to achieve a single-shot method like this.
sample code for any of the completion events:
if (OSAtomicCompareAndSwapInt(0, 1, &completed))
{
    /* completed is a volatile int instance variable, initially 0; only
       the one thread that flips it 0 -> 1 gets to run the callback. */
    self.completionCallback();
}
Doing a CAS is almost certainly the right thing to do. Locks are not designed for what you need, they are likely to be much more expensive and are semantically a poor match anyway -- the completion is not "locked". It is "done". A boolean flag is the right representation, and doing a CAS ensures that it is manipulated safely in concurrent scenarios. In C++, I'd use std::atomic_flag for this, maybe check whether Cocoa has anything similar (this just wraps the CAS in a nicer interface, so that you never accidentally use a non-CAS test on the variable, which would be racy).
(edit: in pthreads, there's a function called pthread_once which does what you want, but I wouldn't know about Cocoa; the pthread_once interface is quite unwieldy anyway, in my opinion...)
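For reference, a plain-pthreads sketch of pthread_once (I can't speak to the Cocoa analogue):

#include <pthread.h>

static pthread_once_t completion_once = PTHREAD_ONCE_INIT;

/* pthread_once requires a no-argument function, which is part of what
 * makes the interface unwieldy: any state must come from globals or TLS. */
static void run_completion(void)
{
    /* ... the single-shot completion work ... */
}

void on_completion_event(void)
{
    /* Safe to call from any number of threads; run_completion runs once. */
    pthread_once(&completion_once, run_completion);
}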
If I have a thread that first manipulates a data structure and therefore holds a pthreads write lock on it, can I let that thread change the lock to a read-locked state without a race condition that might allow another thread to acquire a write lock at some point during the switch?
Unfortunately, as far as I know, the pthreads standard does not allow for "downgrading" from a writer lock to a reader lock on a pthread_rwlock_t. Some pthreads implementations might allow extensions that let you transition from holding a writer lock to holding a reader lock without releasing the lock, but this is outside the scope of the SuS / POSIX spec for pthreads. And I don't believe that the most common case, the Linux/glibc pthreads implementation, allows for this operation.
So the short answer to your question is "No." You'll need to implement your own reader/writer locks on top of pthread_mutex_t/pthread_cond_t to get the behaviour you want.
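As a rough illustration, here is a minimal sketch of such a lock with a downgrade operation (all names hypothetical; writer preference, error handling, and recursion are omitted):

#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cond;   /* waiters for both roles sleep here */
    int readers;            /* number of active readers */
    int writer;             /* 1 while a writer holds the lock */
} my_rwlock_t;

#define MY_RWLOCK_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

void rw_rdlock(my_rwlock_t *l)
{
    pthread_mutex_lock(&l->m);
    while (l->writer)
        pthread_cond_wait(&l->cond, &l->m);
    l->readers++;
    pthread_mutex_unlock(&l->m);
}

void rw_wrlock(my_rwlock_t *l)
{
    pthread_mutex_lock(&l->m);
    while (l->writer || l->readers > 0)
        pthread_cond_wait(&l->cond, &l->m);
    l->writer = 1;
    pthread_mutex_unlock(&l->m);
}

/* The operation pthreads lacks: trade the write lock for a read lock
 * without ever letting another writer in between. */
void rw_downgrade(my_rwlock_t *l)
{
    pthread_mutex_lock(&l->m);
    l->writer = 0;
    l->readers = 1;                    /* we continue as a reader */
    pthread_cond_broadcast(&l->cond);  /* let other waiting readers in */
    pthread_mutex_unlock(&l->m);
}

void rw_unlock(my_rwlock_t *l)
{
    pthread_mutex_lock(&l->m);
    if (l->writer)
        l->writer = 0;
    else
        l->readers--;
    pthread_cond_broadcast(&l->cond);
    pthread_mutex_unlock(&l->m);
}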
I am trying to set up a multi-threaded app. One of the threads runs in the background to do some data transfer; right now this thread kills itself automatically after its job is done.
Somehow I need to kill this thread from another thread in order to stop its job immediately. Is there an API or method for making this happen?
In short, you can't. Or, more precisely, you should not. Not ever and not under any circumstances.
There is absolutely no way for thread A to know the exact state of thread B when A kills B. If B is holding any locks or in the middle of a system call or calling into a system framework when A kills it, then the resulting state of your application is going to be nondeterministic.
Actually -- it will be somewhat deterministic in that you are pretty much guaranteed that a crash will happen sometime in the near future.
If you need to terminate thread B, you need to do so in a controlled fashion. The most common way is to have a cancel flag or method that can be set/called. Thread B then needs to periodically check this flag or check whether the method has been called, clean up whatever it is doing, and then exit.
That is, you are going to have to modify the logic in thread B to support this.
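A minimal sketch of that cancel-flag approach in C (names are made up):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool cancel_requested;

/* Thread B: checks the flag between bounded chunks of work. */
void *worker(void *arg)
{
    (void)arg;
    while (!atomic_load(&cancel_requested)) {
        /* ... do one bounded chunk of work ... */
    }
    /* ... clean up whatever we were doing ... */
    return NULL;   /* exit in a controlled fashion */
}

/* Thread A: asks B to stop, then waits for it to finish. */
void cancel_and_join(pthread_t worker_thread)
{
    atomic_store(&cancel_requested, true);
    pthread_join(worker_thread, NULL);
}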
bbum is correct, you don't want to simply kill a thread. You can more safely kill a process, because it is isolated from the rest of the system. Because a thread shares memory and resources with the rest of the process, killing it would likely lead to all sorts of problems.
So, what are you supposed to do?
The only correct way of handling this is to have a way for your main thread to send a message to the worker thread telling it to quit. The worker thread must check for this message periodically and voluntarily quit.
An easy way to do this is with a flag, a boolean variable accessible by both threads. If you have multiple worker threads, you might need something more sophisticated, though.
Isn't that a bad idea? (If the other thread is in the middle of doing something in a critical section, it could leave stuff in an inconsistent state.) Couldn't you just set some shared flag variable, and have the other thread check it periodically to see if it should stop?
One thing you could do would be pass messages between the front thread and the background thread, potentially using something like this to facilitate message passing.
If you are using pthreads, you can try pthread_kill. I tried it a while back and it did not work for me; basically, if the thread is in some blocking call, it won't work.
It is true that killing a thread is not a good option, but if you are looking for a quick fix for some issue, you can try it.
In my personal view it is best to let a thread run its course naturally. It's difficult to make guarantees about the effect of trying to kill a thread.
I have some code like this:
- (void)doDatabaseFetch {
    ...
    @synchronized(self) {
        ...
    }
}
and many objects that call doDatabaseFetch as the user uses the view.
My problem is, I have an operation (navigating to the next view) that also requires a database fetch, and it hits the same synchronized block and waits its turn! I would ideally like this operation to kill all the waiting threads, or to give this thread a higher priority so that it can execute immediately.
Apple says that
The recommended way to exit a thread is to let it exit its entry point routine normally. Although Cocoa, POSIX, and Multiprocessing Services offer routines for killing threads directly, the use of such routines is strongly discouraged.
So I don't think I should kill the threads... but how can I let them exit normally if they're waiting on a synchronized block? Will I have to write my own semaphore to handle this behavior?
Thanks!
Nick.
The first question to ask here: do you really need a critical section so big that many threads are waiting to enter it? What you are doing here is serializing parallel execution, i.e. making your program single-threaded again (but slower). Reduce the lock scope as much as possible, think about reducing contention at the application level, and use appropriate synchronization tools (wait/signal) -- you'll find that you pretty much never need to kill threads. I know it's very general advice, but it really helps to think that way.
Typically you cannot terminate a thread that is waiting on a synchronized block. If you need that sort of behavior, you should use a timed wait-and-signal paradigm, so that threads sleep while waiting but can be interrupted. Moreover, each time the timed wait expires, your threads have the opportunity not to go back to sleep but rather to exit or take some other path (i.e. even if you don't choose to terminate them).
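To illustrate, a plain-pthreads sketch of that timed wait-and-signal pattern (names are made up; the same idea applies whatever primitives Cocoa offers):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static bool work_ready;
static bool shutting_down;

void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    while (!shutting_down) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 1;          /* wake at least once per second */

        while (!work_ready && !shutting_down) {
            if (pthread_cond_timedwait(&cv, &m, &deadline) == ETIMEDOUT)
                break;                 /* timed out: re-check the flags */
        }
        if (work_ready) {
            work_ready = false;
            /* ... handle the work ... */
        }
        /* on shutdown, the outer loop exits instead of sleeping again */
    }
    pthread_mutex_unlock(&m);
    return NULL;
}

/* Another thread can hand over work, or ask the waiters to exit: */
void post_work(void)
{
    pthread_mutex_lock(&m);
    work_ready = true;
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&m);
}

void shutdown_waiters(void)
{
    pthread_mutex_lock(&m);
    shutting_down = true;
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&m);
}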
Synchronized blocks are designed for uncontested locks; on an uncontested lock, the synchronization should be pretty close to a no-op. But as soon as the lock becomes contested, they have a very detrimental effect on application performance, more so than simply serializing your parallel program.
I'm not an Objective-C expert by any means, but I'm sure there are more advanced synchronization patterns available, such as barriers, conditions, and atomics.