How to get hold of the currently executing NSOperation? - objective-c

Is there an equivalent to [NSOperationQueue currentQueue] or [NSThread currentThread] for NSOperation?
I have a fairly complex domain model where the heavy processing happens quite deep down in the call stack. To cancel an operation in a timely manner, I would need to pass the NSOperation as a parameter to every method until I reach the point where I want to interrupt a long-running loop. With threads I could use [[NSThread currentThread] isCancelled], so it would seem convenient if there were an equivalent for NSOperation; unfortunately there is only the seemingly useless [NSOperationQueue currentQueue].

Came up with an extension in Swift that returns the running operations:
extension NSOperationQueue {
    public var runningOperations: [NSOperation] {
        return operations.filter { $0.executing && !$0.finished && !$0.cancelled }
    }
}
You can then pick up the first one:
if let operation = aQueue.runningOperations.first {}

No, there's no method to find the currently executing operation.
Two ways to solve your problem:
Operations are objects. If you need object A to talk to object B, you'll need to arrange for A to have a reference to B. There are lots of ways to do that. One way is to pass the operation along to each object that needs to know about it. Another is to use delegation. A third is to make the operation part of some larger "context" that's passed along to each method or function. If you find that you need to pass a reference from one object through several others just to get it to the object that will finally use it, that's a clue that you should think about rearranging your code.
Have the "heavy lifting" method return some value that gets passed up the call chain. You don't necessarily need the heavy lifting method to call [currentOperation cancel] to accomplish your goal. In fact, it would be better to have it return some value that the operation will understand to mean "work is done, stop now" because it can check that return value and exit immediately rather than having to call -isCancelled once in a while to find out whether it has been cancelled.

This isn't a good idea. Operations are usually canceled by their queue. Within the operation's main() method, you can periodically check if self is cancelled (say, every n trips through a loop, or at the start of every major block of commands) and abort if so.
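For example, a main might check roughly like this (itemCount and processItemAtIndex: are placeholders for your own work):
- (void)main {
    for (NSUInteger i = 0; i < self.itemCount; i++) {
        if (self.isCancelled) {
            return;                      // abort as soon as the queue (or anyone else) cancels us
        }
        [self processItemAtIndex:i];     // one chunk of the heavy work
    }
}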
To respond to a cancellation (say, some UI element tied to the operation's or queue's status), you use key-value observing (KVO) to have your controller observe the operations' isExecuting, isFinished, and isCancelled properties (as needed), then set your UI's state (always on the main thread) when those keys change. Per JeremyP's comments, it's important to note that the KVO notifications come from the op's thread, and UI should (almost) always be manipulated on the main thread, so you'll need to use the -performSelectorOnMainThread... methods to update your actual UI when you receive a state-change KVO note about your operations.
What are you really trying to do? That is, why do you feel other parts of your app need to know directly about the current operation?

You could store the current operation in the thread dictionary. Just remember to get rid of it before you exit. You can safely use the thread dict if you created the object.
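For example, inside the operation's main (the key is arbitrary, just something your deeper code also knows about):
- (void)main {
    NSMutableDictionary *threadDict = [[NSThread currentThread] threadDictionary];
    threadDict[@"CurrentOperation"] = self;
    // ... do the work; deep code can look the operation up via the same key ...
    [threadDict removeObjectForKey:@"CurrentOperation"];   // clean up before the thread is reused
}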

You can use a combination of [NSOperationQueue currentQueue] & [NSThread currentThread] to accomplish this.
Essentially, you need to loop through the operations on the currentQueue and find the operation running on the currentThread.
NSOperation doesn't provide access to the thread it is running on, so you need to add that property yourself and assign it.
You're probably already subclassing NSOperation and providing a main, so add a 'thread' property to that subclass:
@interface MyOperation : NSOperation
@property (nonatomic, strong) NSThread *thread;
@end
Then, in your main, assign the current thread to that property:
self.thread = [NSThread currentThread];
You can then add a 'currentOperation' method:
+ (MyOperation *)currentOperation
{
    NSOperationQueue *opQueue = [NSOperationQueue currentQueue];
    NSThread *currentThread = [NSThread currentThread];
    for (MyOperation *op in opQueue.operations) {
        if ([op isExecuting] && [op respondsToSelector:@selector(thread)]) {
            if (op.thread == currentThread) {
                return op;
            }
        }
    }
    return nil;
}

How do you know which operation you want to cancel?
When you get to the point that you want to cancel, just call [myQueue operations] and go through the operations until you find the ones that you now want to cancel. I guess if you have thousands (or millions) of operations this might not work well.
[myQueue operations] is thread safe - a snapshot of the queue's contents. You can dive through it pretty quickly, cancelling at will.
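For example (isObsolete: stands in for whatever test identifies the operations you no longer want):
for (NSOperation *op in [myQueue operations]) {
    if (!op.isFinished && [self isObsolete:op]) {   // hypothetical "no longer needed" test
        [op cancel];
    }
}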
Another way:
NSOperationQueue is not a singleton, so you can create a queue that has, say, 200 jobs on it, and then cancel all 200 by just getting that queue and cancelling everything on it. Store the queues in a dictionary on the main thread, and then you can get the queue whose jobs you want cancelled from the dict and cancel them all. I.e., if you have 1000 kinds of operations, then at the point in the code where you realize you don't need a certain kind of task, you just get the queue for that kind and look through it for jobs to cancel.
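A sketch of that idea (queuesByKind and the kind keys are made up):
// One queue per kind of work, created lazily on the main thread:
NSOperationQueue *queue = self.queuesByKind[kind];
if (queue == nil) {
    queue = [[NSOperationQueue alloc] init];
    self.queuesByKind[kind] = queue;
}
[queue addOperation:operation];

// Later, when work of that kind is no longer needed:
[self.queuesByKind[kind] cancelAllOperations];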

Related

How to ensure FIFO execution in a concurrent NSOperationQueue?

I'm working on a framework, and in order to ensure non-blocking public methods, I'm using an NSOperationQueue that puts all the public method calls into an operation queue and returns immediately.
There are no relations or dependencies between different operations; the only thing that matters is that the operations are started in FIFO order, that is, in the same order as they were added to the queue.
Here is an example of my current implementation (sample project here):
@implementation Executor

- (instancetype)init {
    self = [super init];
    if (self) {
        _taskQueue = [[NSOperationQueue alloc] init];
        _taskQueue.name = @"com.d360.tasks";
    }
    return self;
}

- (void)doTask:(NSString *)taskName
{
    NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"executing %@", taskName);
    }];
    [self.taskQueue addOperation:operation];
}

@end
I realised, though, that the order in which the operations are started is not necessarily the order in which they were added to the queue. For instance, if I call
[self.executor doTask:@"Task 1"];
[self.executor doTask:@"Task 2"];
sometimes Task 2 is started before Task 1.
The question is how can I ensure a FIFO execution start?
I could achieve this using _taskQueue.maxConcurrentOperationCount = 1;, but that would allow only one operation at a time, which I don't want. One operation should not block any other operation, and they can run concurrently as long as they are started in the correct order.
I also looked into the NSOperationQueuePriority property, which would work if I knew the priorities of the calls, which I don't. In fact, even if I set the earlier-added operation to NSOperationQueuePriorityHigh and the second to NSOperationQueuePriorityNormal, the order is not guaranteed either.
[self.executor doTask:@"Task 1" withQueuePriority:NSOperationQueuePriorityHigh];
[self.executor doTask:@"Task 2" withQueuePriority:NSOperationQueuePriorityNormal];
Output is sometimes
executing Task 2
executing Task 1
Any ideas?
thanks,
Jan
When you create each task you could add a dependency on the previous task with NSOperation's -addDependency:. The complication is that a dependency isn't satisfied until the operation it depends on completes, which probably isn't what you want. You could work around that by creating another NSOperation inside each task, and make the next queued task depend on that. This inner operation can just set a flag or something that says "hey, I've started!". Then when that inner operation completes, it will satisfy the dependency for the next task in the queue and allow it to start.
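A rough sketch of that workaround (previousStartMarker is a hypothetical property that remembers the last marker operation; this assumes doTask: is always called from the same thread):
- (void)doTask:(NSString *)taskName
{
    // Marker operation: when it finishes, it means "the task below has started".
    NSOperation *started = [NSBlockOperation blockOperationWithBlock:^{}];

    NSOperation *task = [NSBlockOperation blockOperationWithBlock:^{
        // Run the marker first; once it finishes, the *next* task's dependency
        // is satisfied without waiting for this task to finish.
        [started start];
        NSLog(@"executing %@", taskName);
    }];

    if (self.previousStartMarker != nil) {
        [task addDependency:self.previousStartMarker];
    }
    self.previousStartMarker = started;

    [self.taskQueue addOperation:task];
}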
Seems like a convoluted way to do things, though, and I'm not sure the benefit is worth the extra complication - why does it matter what order the operations are started in, if they truly are independent operations? Once they've started, the OS decides which task gets CPU time, and you don't have much control over it anyway, so why not just queue them up and let the OS manage the start order?

Return value from asynchronous SQL method

I have this code:
- (NSString *)obtenerDatosUsuario
{
    __block NSString *result = @"";
    [self obtenerDatosUsuarioSQL:^(NSString *resultadoSQL) {
        result = resultadoSQL;
    }];
    return result;
}
And I want the return value to be the content of resultadoSQL.
If my guess is correct about what happens inside your method -obtenerDatosUsuarioSQL: (i.e., it performs a lengthy operation in a background thread, and gives the result to the passed block argument), then your code runs in the following order:
You call -obtenerDatosUsuario
You call -obtenerDatosUsuarioSQL:, passing a completion handler block.
Execution proceeds forward and reaches the return statement at the end of -obtenerDatosUsuario, and exits the method body. The returned variable result hasn't been set yet!
Sometime later, the SQL query completes and the block is executed. But it is too late to return the result because execution already exited the method -obtenerDatosUsuario.
There are ways to make this asynchronous method behave synchronously (e.g. semaphores), but it generally is a very, very bad idea. Most likely, obtenerDatosUsuarioSQL is asynchronous because there is a chance (even if only a small chance) that the result won't be returned immediately. Maybe it's possible that the SQL will be slow. Or maybe you'll eventually be doing queries from multiple threads, so this query might have to wait for queries in other threads to finish. Or there might be other reasons. But whatever the reason, this method was implemented as asynchronous method, and you should embrace that, rather than fight it. If you change obtenerDatosUsuario to return synchronously, you open yourself to a wide variety of possible problems.
Instead, you should just adopt asynchronous pattern in your code. For example, let's imagine that you have some code that was planning on using the result of obtenerDatosUsuario for some other purpose, e.g.:
NSString *resultadoSQL = [self obtenerDatosUsuario];
// use `resultadoSQL` here
Just change that to:
[self obtenerDatosUsuarioSQL:^(NSString *resultadoSQL){
// use `resultadoSQL` here
}];
// but not here
And, if you're using obtenerDatosUsuarioSQL in some method from which you're currently trying to return the value immediately, then change that method to behave asynchronously, too. For example, let's assume you had something like:
- (NSString *)someOtherMethod {
    NSString *resultadoSQL = [self obtenerDatosUsuario];
    // let's assume you're doing something else with `resultadoSQL` to build some other string
    NSString *string = ... // some expression using `resultadoSQL`
    return string;
}
Then, you'd change that to also adopt asynchronous pattern:
- (void)someOtherMethod:(void (^)(NSString *))completionHandler {
    [self obtenerDatosUsuarioSQL:^(NSString *resultadoSQL) {
        NSString *string = ... // some expression using `resultadoSQL`
        completionHandler(string);
    }];
}
When you first encounter this, it may seem unnecessarily complicated, but asynchronous programming is so critical, such a fundamental part of Cocoa programming, that one really must gain some familiarity with these common asynchronous patterns, such as blocks. Personally, I use block syntax so much that I've created code snippets in Xcode's "Code Snippet Library" for typical block patterns, which simplifies life a lot and gets you out of the world of memorizing the unintuitive block syntax.
But don't be tempted to wrap an asynchronous method in another method that makes it behave synchronously. You open yourself up to many types of problems if you do that.

objective-c How to prevent an action while a thread is being executed

I've been using multithreading for a while and I thought I had it down, but my program is crashing now.
I have a method that has to download data from the server and access memory depending on that data. That process takes a long time, so I execute it from a secondary thread like this:
- (void)showPeople {
    dispatch_queue_t pintaOcupantes = dispatch_queue_create("Pinta Ocupantes", NULL);
    dispatch_async(pintaOcupantes, ^{
        // BUNCH OF CODE
        [self isPersonIn:jid];
        // MORE CODE that includes methods calling isPersonIn
    });
}
Inside that block there's isPersonIn. It crashes if I press the button that executes showPeople too fast. isPersonIn is something like:
- (int)isPersonIn:(XMPPJID *)jid {
    int i = 0;
    for (NSDictionary *card in self.listaGente) {
        NSLog(@"la jid es: %@", [card objectForKey:@"jid"]);
        NSLog(@"la jid del usuario es: %@", jid.user);
        if ([[card objectForKey:@"jid"] isEqualToString:jid.user]) {
            return i;
        }
        i++;
    }
    return -1;
}
It compares an XMPPJID with an array that is an instance variable.
isPersonIn is called several times from different methods but all the methods that call it belong to the block, so as I understand it, all the executions of isPersonIn should be serialized, FIFO, right?
But if I press the button that executes showPeople, the one containing the block, many times very fast, the app crashes in isPersonIn, sometimes without any message. I can see the threads when it crashes, and I see at least 2 threads with isPersonIn last in the stack, which doesn't make sense, since the block should be executed one at a time, not on several threads at the same time, right?
Any help will be very much appreciated.
Thanks!
[EDIT]
Also the instance array, self.listaGente, is modified outside the block.
I'm not a GCD expert, but I suspect the reason you're getting multiple threads is that you're creating a new dispatch queue each time showPeople is called.
So rather than having a single serial queue with multiple blocks, I think you are ending up with multiple queues each executing a single block.
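If that's what is happening, one way to fix it is to create the serial queue once and reuse it for every call, e.g. (pintaOcupantesQueue is a hypothetical property backing the ivar below):
- (instancetype)init {
    self = [super init];
    if (self) {
        // One serial queue for the lifetime of the object, instead of a new one per call.
        _pintaOcupantesQueue = dispatch_queue_create("Pinta Ocupantes", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)showPeople {
    dispatch_async(self.pintaOcupantesQueue, ^{
        // BUNCH OF CODE
        [self isPersonIn:jid];
        // MORE CODE that includes methods calling isPersonIn
    });
}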
[EDIT] If the collection is modified outside of the block but during execution of the block, this could be the source of your crash. From Fast Enumeration Documentation:
Enumeration is “safe”—the enumerator has a mutation guard so that if you attempt to modify the collection during enumeration, an exception is raised.
In this case, protecting the array, which was what was causing my app to crash, fixed the problem. I used:
@synchronized(theArray) {
    // CODE THAT WILL ACCESS OR WRITE TO THE ARRAY
}
This way a thread will wait if there's another thread already executing that code, like a mutex or semaphore.

How do I determine if a thread has a lock?

I am writing an Objective-C class that I want to be thread safe. To do this I am using pthreads and a pthread_rwlock (using @synchronized is overkill, and I want to learn a bit more about pthreads). The lock is initialized in the object's designated init method and destroyed in dealloc. I have three methods for manipulating the lock: readLock, writeLock, and unlock. These three methods simply invoke the related pthread functions and currently nothing else.
Here are two of the object's methods, both of which require a write lock:
- (void)addValue:(const void *)buffer
{
    [self writeLock];
    NSUInteger lastIndex = self.lastIndex;
    [self setValue:buffer atIndex:(lastIndex == NSNotFound) ? 0 : lastIndex + 1];
    [self unlock];
}

- (void)setValue:(const void *)buffer atIndex:(NSUInteger)index
{
    [self writeLock];
    // do work here
    [self unlock];
}
Invoking addValue: will first obtain a write lock and then invoke setValue:atIndex:, which will also attempt to obtain a write lock. The documentation states that the behaviour is undefined when this occurs. Therefore, how do I check whether a thread already holds a lock before attempting to obtain it?
(I could ensure that critical sections make no invocations that trigger another lock request, but that would mean code repetition, and I want to keep my code DRY.)
It's not entirely clear what kind of lock you're using. You indicate you're using pthreads and a read/write lock, so I'm concluding that you're using a pthread_rwlock.
If that's true, then you should be able to use pthread_rwlock_trywrlock on the lock. From the man page,
If successful, the pthread_rwlock_wrlock() and pthread_rwlock_trywrlock() functions will return zero. Otherwise, an error number will be returned to indicate the error.
And one of the errors is:
[EDEADLK] The calling thread already owns the read/write lock (for reading or writing).
Therefore, I believe you should be able to call pthread_rwlock_trywrlock(): either it will succeed, it will return EBUSY if another thread has the lock, or it will return EDEADLK if the current thread already has the lock.
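In code, that check might look roughly like this (a sketch; it assumes the lock was initialized with pthread_rwlock_init(), and note that whether EDEADLK is actually reported for a lock the calling thread already owns is implementation-dependent, as the next answer points out):
#include <pthread.h>
#include <errno.h>

void with_write_lock(pthread_rwlock_t *lock)
{
    int err = pthread_rwlock_trywrlock(lock);
    if (err == 0) {
        // We just acquired the write lock; do the work, then pthread_rwlock_unlock(lock).
    } else if (err == EDEADLK) {
        // This thread already holds the write lock; do the work, but do not unlock here.
    } else if (err == EBUSY) {
        // Another thread holds the lock; block with pthread_rwlock_wrlock() or bail out.
    }
}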
First, a critical section containing only one operation is useless. The point is to synchronize different things relative to each other. (You do effectively make the integer atomic, but that is probably not the full intent.)
Second, you already know you hold the write lock inside the latter critical section, so there is no need to check whether you hold it. Simply do not attempt to take another lock while already writing.
The solution is probably to move the readLock and writeLock calls up into the calling functions, but without knowing more it's impossible to say.
(This will also likely reduce the performance cost of locking by reducing the number of total operations, as you will not be locking and then immediately unlocking. Probably you do not need to work directly at the pthreads level.)
A portable program cannot rely on the implementation to tell the caller it already holds the write lock. Instead, you need to do something like this to wrap rwlocks with a recursive write lock:
// Take the write lock only on the first (outermost) request; afterwards just bump the count.
int wrlock_wrap(pthread_rwlock_t *l, int *cnt)
{
    int r = *cnt ? 0 : pthread_rwlock_wrlock(l);
    if (!r) ++*cnt;
    return r;
}

// Release the lock only when the outermost hold is released.
int wrunlock_wrap(pthread_rwlock_t *l, int *cnt)
{
    return --*cnt ? 0 : pthread_rwlock_unlock(l);
}
You can keep the count beside the pthread_rwlock_t wherever it's stored, e.g. as a member of your struct/class/whatever.

How to "break" out of dispatch_apply()?

Is there a way to simulate a break statement in a dispatch_apply() block?
E.g., every Cocoa API I've seen dealing with enumerating blocks has a "stop" parameter:
[array enumerateObjectsUsingBlock:^(id obj, NSUInteger i, BOOL *stop) {
    if ([obj isNotVeryNice]) {
        *stop = YES; // No more enumerating!
    } else {
        NSLog(@"%@ at %zu", obj, i);
    }
}];
Is there something similar for GCD?
By design, the dispatch_*() APIs have no notion of cancellation. The reason is that it is almost universally true that your code already maintains the notion of when to stop, and thus also supporting that in the dispatch_*() APIs would be redundant (and with redundancy come errors).
Thus, if you want to "stop early" or otherwise cancel the pending items in a dispatch queue (regardless of how they were enqueued), you do so by sharing some bit of state with the enqueued blocks that allows you to cancel.
if (is_canceled()) return;
Or:
__block BOOL keepGoing = YES;
dispatch_*(someQueue, ^{
    if (!keepGoing) return;
    // ... do a unit of work ...
    if (weAreDoneNow) keepGoing = NO;
});
Note that enumerateObjectsUsingBlock: and enumerateObjectsWithOptions:usingBlock: both support cancellation because that API is in a different role. The call to the enumeration method is synchronous, even if the actual execution of the enumerating blocks may be fully concurrent depending on the options.
Thus, setting *stop = YES tells the enumeration to stop. It does not, however, guarantee that it will stop immediately in the concurrent case. The enumeration may, in fact, execute a few more already-enqueued blocks before stopping.
(One might briefly think that it would be more reasonable to return BOOL to indicate whether the enumeration should continue. Doing so would have required that the enumerating block be executed synchronously, even in the concurrent case, so that the return value could be checked. This would have been vastly less efficient.)
I don't think dispatch_apply supports this. The best way I can think of to imitate it would be to make a __block boolean variable, and check it at the beginning of the block. If it's set, bail out quickly. You'd still have to run the block through the rest of the iterations, but it would be faster.
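A sketch of that idea (count, queue, and the per-index calls are placeholders):
__block BOOL stop = NO;
dispatch_apply(count, queue, ^(size_t i) {
    if (stop) {
        return;                           // later iterations bail out quickly
    }
    if ([self shouldStopAtIndex:i]) {     // hypothetical "break" condition
        stop = YES;                       // remaining iterations still run, but do nothing
        return;
    }
    [self processIndex:i];                // hypothetical per-iteration work
});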
You can't break a dispatch_apply since it's illogical.
In -enumerateObjectsUsingBlock: a break is well-defined because the functions are run sequentially. But in dispatch_apply the functions are run in parallel. That means at the i=3rd invocation of the "block", the i=4th call could have been started. If you break at i=3, should the i=4 call still run?
@BJ's answer is the closest you can get, but there will always be some "spill-over".