Doing an atomic read in Objective-C

I have a thread-safe class, a cancel token, that transitions from an unstable mutable state (not cancelled) to a stable immutable state (cancelled). Once an instance has become immutable, I'd like to stop paying the cost of acquiring a lock before checking the state.
Here's a simplification of what things look like now:
-(bool) isCancelled {
    @synchronized(self) {
        return _isCancelled;
    }
}

-(bool) tryCancel {
    @synchronized(self) {
        if (_isCancelled) return false;
        _isCancelled = true;
    }
    return true;
}
and what I want to try:
-(bool) isCancelled {
    bool result;
    // is the following correct?
    // can the two full barriers be reduced to a single read-acquire barrier somehow?
    OSMemoryBarrier();
    result = _isCancelled != 0;
    OSMemoryBarrier();
    return result;
}

-(bool) tryCancel {
    return OSAtomicCompareAndSwap32Barrier(0, 1, &_isCancelled);
}
Is using two memory barriers the correct approach? How should I expect it to compare to the cost of acquiring a lock (insert standard refrain about profiling here)? Is there a cheaper way to do it?

Edit: this sounds like possible premature optimization. Is this lock acquisition actually slowing things down?
Edit 2: it's possible that compiler optimization will defeat this. Be aware.
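One hedged way to address that caveat, sketched under the assumption that the ivar is declared as volatile int32_t _isCancelled: the volatile qualifier keeps the compiler from caching or eliding the read, and a single OSAtomic read-with-barrier replaces the pair of OSMemoryBarrier() calls. This is an illustration, not a verified drop-in:

#import <libkern/OSAtomic.h>   // OSAtomicAdd32Barrier

// Sketch only: assumes the ivar is declared as
//     volatile int32_t _isCancelled;
// volatile stops the compiler from optimizing the read away, but by
// itself it is NOT a hardware memory barrier.
-(bool) isCancelled {
    // OSAtomicAdd32Barrier(0, ...) atomically reads the current value
    // with a full barrier, so no separate OSMemoryBarrier() calls are needed.
    return OSAtomicAdd32Barrier(0, &_isCancelled) != 0;
}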
If you are concerned about the gotchas of double-checked locking, perhaps dispatch_once() could be useful for you.
Would double-checked locking work in this case?
-(void) doSomething {
    if (!_isCancelled) { // only attempt to acquire the lock if not cancelled already
        @synchronized(self) {
            if (!_isCancelled) // now check again (the double-check part)
                doSomethingElse();
        }
    }
}
Read the Wikipedia entry on double-checked locking for more info.
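For reference, a minimal dispatch_once sketch as mentioned above. The CancelToken class name and the shared-instance use case are assumptions for illustration; dispatch_once fits one-time initialization rather than a resettable flag:

+ (instancetype)sharedToken {
    static CancelToken *shared = nil;     // hypothetical class name
    static dispatch_once_t onceToken;
    // dispatch_once runs the block exactly once, process-wide, and is
    // very cheap on every call after the first.
    dispatch_once(&onceToken, ^{
        shared = [[CancelToken alloc] init];
    });
    return shared;
}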

Related

Understanding semaphores with pthreads (code included)

So in class we learned about semaphores, and our professor let us know that the code below would be handy to learn for our exam. Unfortunately our exam is on Friday, and (whole list of excuses) I just need to be able to understand this code for the exam and for future cases. I understand that the mutex_t is a lock and the cond_t is a condition variable through which signals get passed in sema_P and sema_V (if the value is 0, a race condition could occur, so the thread is held by cond_wait until another thread increases the value and releases it with cond_signal), but why does a lock need to get passed around? Why is there a mutex_lock and mutex_unlock in both the decrementer P() and the incrementer V()? How does this work with the threads and the condition (cond_t)?
typedef struct
{
    pthread_mutex_t lock;
    pthread_cond_t wait;
    int value;
} sema;

void pthread_sema_init(sema *s, int count)
{
    s->value = count;
    pthread_cond_init(&(s->wait), NULL);
    pthread_mutex_init(&(s->lock), NULL);
    return;
}

void pthread_sema_P(sema *s)
{
    pthread_mutex_lock(&(s->lock));
    s->value--;
    if (s->value < 0) {
        pthread_cond_wait(&(s->wait), &(s->lock));
    }
    pthread_mutex_unlock(&(s->lock));
    return;
}

void pthread_sema_V(sema *s)
{
    pthread_mutex_lock(&(s->lock));
    s->value++;
    if (s->value <= 0) {
        pthread_cond_signal(&(s->wait));
    }
    pthread_mutex_unlock(&(s->lock));
}
The mutex sema.lock is there to protect the shared variable sema.value, ensuring that only one thread accesses that value at a time. Both pthread_sema_P() and pthread_sema_V() must take the lock because they both access sema.value.
That implementation of semaphores is buggy, by the way - it doesn't handle spurious wakeups (a "spurious wakeup" is where pthread_cond_wait() wakes up despite not being signalled - this is allowed by the spec).
A more traditional implementation might be:
void pthread_sema_P(sema *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->value < 1) {
        pthread_cond_wait(&s->wait, &s->lock);
    }
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void pthread_sema_V(sema *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->wait);
    pthread_mutex_unlock(&s->lock);
}
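A hedged usage sketch may make the locking clearer: two threads end up touching s->value (one in P(), one in V()), which is exactly why both functions must take s->lock first. The thread roles and starting count here are illustrative:

#include <pthread.h>
#include <stdio.h>

static sema s;   /* the sema type from above */

static void *consumer(void *arg) {
    pthread_sema_P(&s);                 /* blocks until another thread calls V() */
    printf("consumer: woke up\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_sema_init(&s, 0);           /* start at 0 so the consumer must wait */
    pthread_create(&t, NULL, consumer, NULL);
    pthread_sema_V(&s);                 /* increments value and signals the waiter */
    pthread_join(t, NULL);
    return 0;
}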

How to enforce parameters of anonymous blocks to be unused in Objective-C?

I've run into a situation while using a library called TransitionKit (it helps you write state machines) where I want to supply entry and exit actions in the form of callbacks.
Sadly, the callbacks include two completely useless parameters. A typical block has to look like this:
^void (TKState *state, TKStateMachine *stateMachine) {
    // I TOTALLY don't want parameters `state` or `stateMachine` used here
};
(this is an anonymous code block. Read up on blocks here if you're unclear)
As I've noted in the comment, I really don't want those parameters even mentioned in the body there. I've tried simply removing the parameter names, as suggested in this question, like so:
^void (TKState *, TKStateMachine *) {
    // I foobar all I like here
};
but sadly the code won't compile then :(.
How can I enforce this non-usage of parameters in code?
This is what I could come up with. It's quite a hack and relies on the GCC poison pragma, which is not standard but a GNU extension - although, given that you are probably compiling this with clang anyway, that should not be a problem.
#define _state state
#define _stateMachine stateMachine
#pragma GCC poison state stateMachine
Then this compiles:
^(TKState *_state, TKStateMachine *_stateMachine) {
    do_something();
}
But this doesn't:
^(TKState *_state, TKStateMachine *_stateMachine) {
    do_something(state, stateMachine);
}
You could just have a function that took one kind of block, and returned another, like this:
@class TKState, TKStateMachine; // here so this will compile

typedef void (^LongStateBlock)(TKState *state, TKStateMachine *stateMachine);

static inline LongStateBlock Adapter(void(^block)())
{
    void(^heapBlock)() = [block copy]; // forces block to be on heap rather than stack, a one-time expense
    LongStateBlock longBlock = ^(TKState *s __unused, TKStateMachine *sm __unused) {
        heapBlock();
    };
    // this is the non-ARC, MRR version; I'll leave ARC for the interested observer
    [heapBlock release];
    return [[longBlock copy] autorelease];
}
And in practice:
// this represents a library method
- (void)takesLongStateBlock:(LongStateBlock)longBlock
{
    // which hopefully wouldn't look exactly like this
    if (longBlock) longBlock(nil, nil);
}

- (void)yourRandomMethod
{
    [self takesLongStateBlock:^(TKState *state, TKStateMachine *stateMachine) {
        NSLog(@"Gratuitous parameters, AAAAHHHH!");
    }];

    [self takesLongStateBlock:Adapter(^{
        NSLog(@"So, so clean.");
    })];
}
The whole thing is gisted, and should compile inside any class. It does what you expect when you call -yourRandomMethod.
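For the "interested observer" mentioned in the comment, a possible ARC version of Adapter() might look like the sketch below; under ARC the retain/release bookkeeping is handled by the compiler, so only the copies remain:

static inline LongStateBlock AdapterARC(void (^block)(void))
{
    // Copying still moves the block to the heap so it outlives the caller's stack frame.
    void (^heapBlock)(void) = [block copy];
    LongStateBlock longBlock = ^(TKState *s __unused, TKStateMachine *sm __unused) {
        heapBlock();
    };
    // The explicit copy is belt-and-braces; recent compilers copy returned blocks automatically.
    return [longBlock copy];
}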
AFAIK there is no way to do what you want when you are creating a block; you can only omit the parameter names when you are declaring a block variable (a reference to a block, to avoid misunderstandings).
So here you can omit the param names:
void (^myBlock)(SomeClass *);
But not when you create a block:
myBlock = ^(SomeClass *o)
{
};
I'd write
^void (TKState *unused_state, TKStateMachine *unused_stateMachine) {
    // Anyone using unused_state or unused_stateMachine gets what they deserve.
};
Of course someone can use the parameters. But then whatever you do, they can change the code. If someone is intent on shooting themselves in the foot, there is no stopping them.

@synchronized not working on NSMutableArray

I am trying to remove an object from a mutable array - an array which is iterated over every frame (see the tick: method).
I am getting
*** Collection <__NSArrayM: 0xaa99cb0> was mutated while being enumerated.
exceptions.
So I added @synchronized() to lock it against being touched by other threads, but it's still failing.
- (void)addEventSubscriber:(id <EventSubscriber>)eventSubscriber
{
    [_eventSubscribers addObject:eventSubscriber];
}

- (void)removeEventSubscriber:(id <EventSubscriber>)eventSubscriber
{
    @synchronized(_eventSubscribers) // Not working.
    {
        [_eventSubscribers removeObject:eventSubscriber];
    }
}

- (void)tick:(ccTime)dt
{
    for (id <EventSubscriber> subscriber in _eventSubscribers)
    {
        if ([subscriber respondsToSelector:@selector(tick:)])
        {
            [subscriber tick:dt];
        }
    }
}
You need to block updates to the array entirely while iterating. Adding @synchronized blocks to addEventSubscriber: and removeEventSubscriber: alone will not work, because the iteration itself is not synchronized, so the array can still change while it is being enumerated. Simply put, only one of those three methods should be running at a time.
You can use @synchronized or an NSLock to manually lock out array updates while it is being iterated.
Alternatively, you could use GCD with a serial dispatch queue to ensure that only one method is executing at a time. Here's how that would work:
You could also store the queue as a property of the object in which you're doing this processing.
// Create the queue
dispatch_queue_t myQueue = dispatch_queue_create("myQueue", NULL);

- (void)addEventSubscriber:(id <EventSubscriber>)eventSubscriber
{
    dispatch_sync(myQueue, ^{
        [_eventSubscribers addObject:eventSubscriber];
    });
}

- (void)removeEventSubscriber:(id <EventSubscriber>)eventSubscriber
{
    dispatch_sync(myQueue, ^{
        [_eventSubscribers removeObject:eventSubscriber];
    });
}

- (void)tick:(ccTime)dt
{
    dispatch_sync(myQueue, ^{
        for (id <EventSubscriber> subscriber in _eventSubscribers)
        {
            if ([subscriber respondsToSelector:@selector(tick:)])
            {
                [subscriber tick:dt];
            }
        }
    });
}
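A hedged sketch of the "queue as a property" variant mentioned above (the class and queue names are assumptions, and storing a dispatch object in a strong property like this assumes ARC on iOS 6 / OS X 10.8 or later):

@interface EventManager : NSObject   // hypothetical class name
@property (nonatomic, strong) dispatch_queue_t subscriberQueue;
@end

@implementation EventManager
{
    NSMutableArray *_eventSubscribers;
}

- (instancetype)init
{
    if ((self = [super init])) {
        _eventSubscribers = [NSMutableArray array];
        // A serial queue, so add/remove/tick never run concurrently.
        _subscriberQueue = dispatch_queue_create("com.example.subscribers", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}
@end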
You are only obtaining a lock while removing items from your array, not while enumerating them. The error suggests that you are trying to remove an item from within an enumeration, which your locking permits but enumeration does not.
Simply locking the array before enumerating may not work either. The same thread can lock an object recursively, but if your enumeration and your remove are on different threads, then trying to remove from within an enumeration would cause a deadlock. If you are in this situation you'll need to rethink your model.
I run into this problem a lot. I have no experience with thread handling / synchronization beyond an undergraduate OS course, so this is what I came up with.
Every time you iterate over the list of objects and want to remove something, instead add that object to a separate "objectsToRemove" array. In your update method, after the iteration, remove everything listed in objectsToRemove from the main array, then empty objectsToRemove so you don't try to remove the same object again on the next update (see the sketch below).
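A sketch of that deferred-removal idea. The _subscribersToRemove ivar is an assumption, and this only addresses mutation during enumeration, not cross-thread access:

- (void)removeEventSubscriber:(id <EventSubscriber>)eventSubscriber
{
    // Don't touch _eventSubscribers here; just remember what to remove.
    [_subscribersToRemove addObject:eventSubscriber];
}

- (void)tick:(ccTime)dt
{
    for (id <EventSubscriber> subscriber in _eventSubscribers)
    {
        if ([subscriber respondsToSelector:@selector(tick:)])
        {
            [subscriber tick:dt];
        }
    }
    // Apply the removals only after the enumeration has finished,
    // then empty the pending list so nothing is removed twice.
    [_eventSubscribers removeObjectsInArray:_subscribersToRemove];
    [_subscribersToRemove removeAllObjects];
}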
Cocos2D has a CCArray which is essentially an NSMutableArray with some added functionality– like being able to remove an item while iterating. I haven't read through the code myself, so I'm not sure how it is implemented and therefore I don't use it.
You need to add @synchronized in this function too:
- (void)tick:(ccTime)dt
{
    @synchronized(_eventSubscribers) {
        for (id <EventSubscriber> subscriber in _eventSubscribers)
        {
            if ([subscriber respondsToSelector:@selector(tick:)])
            {
                [subscriber tick:dt];
            }
        }
    }
}

How to dispatch on main queue synchronously without a deadlock?

I need to dispatch a block on the main queue, synchronously. I don’t know if I’m currently running on the main thread or not. The naive solution looks like this:
dispatch_sync(dispatch_get_main_queue(), block);
But if I’m currently inside of a block running on the main queue, this call creates a deadlock. (The synchronous dispatch waits for the block to finish, but the block does not even start running, since we are waiting for the current one to finish.)
The obvious next step is to check for the current queue:
if (dispatch_get_current_queue() == dispatch_get_main_queue()) {
    block();
} else {
    dispatch_sync(dispatch_get_main_queue(), block);
}
This works, but it’s ugly. Before I at least hide it behind some custom function, isn’t there a better solution for this problem? I stress that I can’t afford to dispatch the block asynchronously – the app is in a situation where the asynchronously dispatched block would get executed “too late”.
I need to use something like this fairly regularly within my Mac and iOS applications, so I use the following helper function (originally described in this answer):
void runOnMainQueueWithoutDeadlocking(void (^block)(void))
{
    if ([NSThread isMainThread])
    {
        block();
    }
    else
    {
        dispatch_sync(dispatch_get_main_queue(), block);
    }
}
which you call via
runOnMainQueueWithoutDeadlocking(^{
    // Do stuff
});
This is pretty much the process you describe above, and I've talked to several other developers who have independently crafted something like this for themselves.
I used [NSThread isMainThread] instead of checking dispatch_get_current_queue(), because the caveats section for that function once warned against using this for identity testing and the call was deprecated in iOS 6.
For syncing on the main queue or on the main thread (which is not the same thing) I use:
import Foundation

private let mainQueueKey = UnsafeMutablePointer<Void>.alloc(1)
private let mainQueueValue = UnsafeMutablePointer<Void>.alloc(1)

public func dispatch_sync_on_main_queue(block: () -> Void)
{
    struct dispatchonce { static var token: dispatch_once_t = 0 }
    dispatch_once(&dispatchonce.token,
    {
        dispatch_queue_set_specific(dispatch_get_main_queue(), mainQueueKey, mainQueueValue, nil)
    })

    if dispatch_get_specific(mainQueueKey) == mainQueueValue
    {
        block()
    }
    else
    {
        dispatch_sync(dispatch_get_main_queue(), block)
    }
}

extension NSThread
{
    public class func runBlockOnMainThread(block: () -> Void)
    {
        if NSThread.isMainThread()
        {
            block()
        }
        else
        {
            dispatch_sync(dispatch_get_main_queue(), block)
        }
    }

    public class func runBlockOnMainQueue(block: () -> Void)
    {
        dispatch_sync_on_main_queue(block)
    }
}
I recently began experiencing a deadlock during UI updates. That led me to this Stack Overflow question, which led to me implementing a runOnMainQueueWithoutDeadlocking-type helper function based on the accepted answer.
The real issue, though, was that when updating the UI from a block I had mistakenly used dispatch_sync rather than dispatch_async to get onto the main queue for UI updates. That is easy to do with code completion, and perhaps hard to notice after the fact.
So, for others reading this question: if synchronous execution is not required, simply using dispatch_async will avoid the deadlock you may be intermittently hitting.
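A minimal sketch of that asynchronous variant (the statusLabel property is an assumption): dispatch_async returns immediately, so it cannot deadlock even when called from the main queue itself.

dispatch_async(dispatch_get_main_queue(), ^{
    // Runs on the main queue at some later point; the caller does not wait.
    self.statusLabel.text = @"Done";   // hypothetical UI update
});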

Double-checked locking - Objective-C

I realise double-checked locking is flawed in Java due to the memory model, but it is usually associated with the singleton pattern and optimizing the creation of the singleton.
What about this case in Objective-C:
I have a boolean flag to determine whether my application is streaming data or not. I have three methods, startStreaming, stopStreaming, and streamingDataReceived, and I protect them against access from multiple threads using:
- (void) streamingDataReceived:(StreamingData *)streamingData {
    if (self.isStreaming) {
        @synchronized(self) {
            if (self.isStreaming) {
                // ...

- (void) stopStreaming {
    if (self.isStreaming) {
        @synchronized(self) {
            if (self.isStreaming) {
                // ...

- (void) startStreaming:(NSArray *)watchlistInstrumentData {
    if (!self.isStreaming) {
        @synchronized(self) {
            if (!self.isStreaming) {
                // ...
Is this double check unnecessary? Does the double check have the same problems in Objective-C as in Java? What are the alternatives to this pattern (anti-pattern)?
Thanks
It is equally flawed - you have a race condition. You have to enter your synchronized section and then check the flag.
That looks like premature optimisation to me. What's wrong with (for example)
- (void) startStreaming:(NSArray *)watchlistInstrumentData {
    @synchronized(self) {
        if (!self.isStreaming) {
            ...
        }
    }
}