How to wait for tasks to finish in Objective-C

I have a question: how can I wait for several tasks inside a method to finish, even when that method is called a few times from different threads?
For example:
When I call method:
1:
2:
I want to see the following:
1:
STEP 1, STEP 2;
2:
STEP 1, STEP 2;
but often I see the following:
1:
2:
STEP 1, STEP 1,
STEP 2, STEP 2,
See the code below; maybe it helps to understand the problem better.
// called many times per second
- (void)update:(UpdateObjectClass *)updateObject {
    // Step 1: update common data (for example, an array);
    // a long process (about 1-2 seconds)
    [self updateData:updateObject];
    // Step 2: update the table
    [self updateTableView];
}
I have tried to use dispatch_barrier_async, but I don't understand how to use it properly.
Thank you for any help ;)

I'm borrowing from @remus's answer.
Assuming that -[update:] is being called on the same instance of an object (and not a whole bunch of objects), you can use @synchronized to enforce that your code is only performed one at a time.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
    @synchronized(self) {
        // Run your updates
        // [self updateData:updateObject];
        dispatch_async(dispatch_get_main_queue(), ^(void) {
            // Async thread callback:
            [self updateTableView];
        });
    }
});
However
I am going to go out on a limb and guess that the reason you need this code to be performed synchronously is that your -[updateData:] method is doing something that is not thread-safe, such as modifying an NSMutableDictionary or NSMutableArray. If this is the case, you should really use that @synchronized trick on the mutable thing itself.
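For illustration, a minimal sketch of that, assuming -[updateData:] appends to an NSMutableArray ivar (the property name `items` is made up here, not from the question's code):

```objc
- (void)updateData:(UpdateObjectClass *)updateObject {
    // Lock on the mutable collection itself, not on self, so only the
    // code that actually touches the array is serialized.
    @synchronized(self.items) {
        [self.items addObject:updateObject];
    }
}
```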
I highly recommend that you post the code to -[updateData:] if it is not too long.

You are trying to solve the problem at the wrong level and with the information in the question it is unlikely that any solution can be provided.
Given the output reported we know that updateData and updateTableView are asynchronous and use one or more tasks. We don't know anything about what queue(s) they use, how many tasks they spawn, whether they have an outer task which does not complete until sub tasks have, etc., etc.
If you look at the standard APIs you will see async methods often take a completion block. Internally such methods may use multiple tasks on multiple queues, but they are written such that all such tasks are completed before they call the completion block. Can you redesign updateData so it takes a completion block (which you will then use to invoke updateTableView)?
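As a hedged sketch of that redesign (the completion-block signature and queue choice here are assumptions, not code from the question):

```objc
// Hypothetical redesign: updateData:completion: guarantees that the
// completion block runs only after all of its internal work has finished.
- (void)updateData:(UpdateObjectClass *)updateObject
        completion:(void (^)(void))completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... long-running update of the common data (1-2 seconds) ...

        // Call back on the main queue once everything is done.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion();
        });
    });
}

// update: then becomes trivial:
- (void)update:(UpdateObjectClass *)updateObject {
    [self updateData:updateObject completion:^{
        [self updateTableView];
    }];
}
```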
The completion-block model doesn't by itself address all the ways you might need to schedule a task based on the completion of other task(s); there are other mechanisms, including serial queues, dispatch groups, and dispatch barriers.
Serial queues enable a task to be scheduled after the completion of all other tasks previously added to the queue. Dispatch groups enable multiple tasks scheduled on multiple queues to be tagged as belonging to a group, and a task scheduled to run after all tasks in a group have completed. Dispatch barriers enable a task to be scheduled after all previous tasks scheduled on a concurrent queue.
You need to study these methods and then embed the appropriate ones for your needs into your design of updateData, updateTableView, and ultimately update itself. You can use a bottom-up approach, essentially the opposite of what your question is attempting: start at the lowest level and ask whether one or more tasks should be a group, have a barrier, be sequential, or need a completion block. Then move upward.
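To make those three mechanisms concrete, here is a minimal sketch of each (queue labels are illustrative):

```objc
// Serial queue: block2 will not start until block1 has completed.
dispatch_queue_t serialQ = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQ, ^{ /* block1 */ });
dispatch_async(serialQ, ^{ /* block2 */ });

// Dispatch group: the notify block runs after every task in the group has
// completed, even if those tasks were scheduled on different queues.
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t globalQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_async(group, globalQ, ^{ /* task A */ });
dispatch_group_async(group, globalQ, ^{ /* task B */ });
dispatch_group_notify(group, dispatch_get_main_queue(), ^{ /* all done */ });

// Dispatch barrier: the barrier block waits for all previously submitted
// blocks. Note it must be a concurrent queue you created yourself;
// barriers have no special effect on the global queues.
dispatch_queue_t concurrentQ = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrentQ, ^{ /* reader 1 */ });
dispatch_async(concurrentQ, ^{ /* reader 2 */ });
dispatch_barrier_async(concurrentQ, ^{ /* exclusive writer */ });
```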
Probably not the answer you were hoping for! HTH

Consider using dispatch_async to run the array updates and then update your tableView. You can do this inside of a single method:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
    // Run your updates
    // [self updateData:updateObject];
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        // Async thread callback:
        [self updateTableView];
    });
});
Edit
I'd consider modifying your updateData method to run inside the async queue; when it is done, call [self updateTableView]; directly from that method. If it's not too long, can you add the [self updateData:updateObject] code (or a portion of it) to your question?

Related

Track down a block/deadlock that is causing a freeze when doing executeFetchRequest

I have an app that is freezing frequently and permanently. When this happens, I click pause in Xcode and see that on the main thread it's always stopping at a line of code that executes a fetch request on the MOC. I also see the output __psynch_mutexwait + 17 in the thread list on the left. This is making me assume that the app is hitting deadlock or for some reason the MOC is blocked.
My first instinct was that I might be executing a fetch request on a non-main thread so I put in logs to check, but this isn't the case. All fetches are happening on the main thread.
How can I go about tracking down what might be blocking here? Is there something more I should be looking for in the stack traces?
Is it a problem that I am setting properties of objects fetched on the main thread on other threads? ie, fetch objectA on main but then pass it to another thread and do something like objectA.someNumber = [NSNumber numberWithInt:2] ?
Is it a problem that I am setting properties of objects fetched on the main thread on other threads? ie, fetch objectA on main but then pass it to another thread and do something like objectA.someNumber = [NSNumber numberWithInt:2] ?
Yes! I've run into this.
When you fetch ObjA on ThreadA and then pass it to ThreadB for some operations, it can deadlock.
It is precisely because your fetch requests are running on the main thread that your app is blocking. Remember that the main thread is a serial queue, and no other block (or event) will run until your fetch request is done (even if in theory it could, because your block is in a waiting state). This explains why, when you break, you always hit __psynch_mutexwait.
You should run your fetch requests on another queue and if necessary use the result on the main queue. One way to achieve this is with the following pattern:
- (void)fetchRequest1
{
    // 'not_the_main_queue' stands for any background queue.
    dispatch_async(not_the_main_queue, ^(void) {
        // Code the request here.
        // Then use the result on the main queue if necessary.
        dispatch_async(dispatch_get_main_queue(), ^(void) {
            // Use the result on the main queue.
        });
    });
}
Also note that it's often not necessary to run anything on the main queue. In fact your app will usually run more smoothly if you run as little as possible on that thread. Of course there are some things that must be done there, and in those cases you could use the following pattern to ensure that they are:
- (void)runBlockOnMainThread:(void (^)(void))block
{
    // Note: dispatch_get_current_queue() is deprecated; checking
    // [NSThread isMainThread] is the safer test in newer code.
    dispatch_queue_t thisQ = dispatch_get_current_queue();
    dispatch_queue_t mainQ = dispatch_get_main_queue();
    if (thisQ != mainQ)
        dispatch_sync(mainQ, block);
    else
        block();
}

What to do when users generate the same action several times while waiting for a download?

I am designing an iPhone application. The user searches for something, we grab data from the net, and then we update the table.
The pseudocode would be:
[DoThisAtbackground ^{
    LoadData();
    [DoThisAtForeground ^{
        UpdateTableAndView();
    }];
}];
What if, before the first search is done, the user searches for something else?
What's the industry-standard way to solve this issue?
Keep track of which thread is still running and only update the table when ALL threads have finished?
Or update the view every time a thread finishes?
How exactly do we do this?
I suggest you take a look at the iOS Human Interface Guidelines. Apple thinks it's pretty important all application behave in about the same way, so they've written an extensive document about these kind of issues.
In the guidelines there are two things that are relevant to your question:
Make Search Quick and Rewarding: "When possible, also filter remote data while users type. Although filtering users' typing can result in a better search experience, be sure to inform them and give them an opportunity to opt out if the response time is likely to delay the results by more than a second or two."
Feedback: "Feedback acknowledges people’s actions and assures them that processing is occurring. People expect immediate feedback when they operate a control, and they appreciate status updates during lengthy operations."
Although there is of course a lot of nonsense in these guidelines, I think the above points are actually a good idea to follow. As a user, I expect something to happen when searching, and when you update the view every time a thread is finished, the user will see the fastest response. Yes, it might be results the user doesn't want, but something is happening! For example, take the Safari web browser in iOS: Google autocomplete displays results even when you're typing, and not just when you've finished entering your search query.
So I think it's best to go with your second option.
If you're performing a REST request for data to your remote server, you can always cancel the request and start the new one without updating the table, which is one way to go. Requests that have time to finish will update the UI and the others won't. For example, using ASIHTTPRequest:
- (void)serverPerformDataRequestWithQuery:(NSString *)query andDelegate:(__weak id <ServerDelegate>)delegate {
    [currentRequest setFailedBlock:nil];
    [currentRequest cancel];
    currentRequest = [[ASIHTTPRequest alloc] initWithURL:kHOST];
    [currentRequest startAsynchronous];
}
Let me know if you need an answer for the local SQLite databases too as it is much more complicated.
You could use NSOperationQueue to cancel all pending operations, but it still would not cancel the existing operation. You would still have to implement something to cancel the existing operation... which also works to early-abort the operations in the queue.
I usually prefer straight GCD, unless there are other benefits in my use cases that are a better fit for NSOperationQueue.
Also, if your loading has an external cancel mechanism, you want to cancel any pending I/O operations.
If the operations are independent, consider a concurrent queue, as it will allow the newer request to execute simultaneously as the other(s) are being canceled.
Also, if they are all I/O, consider if you can use dispatch_io instead of blocking a thread. As Monk would say, "You'll thank me later."
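For reference, a rough sketch of a dispatch_io read (the path and queue choice are illustrative assumptions; O_RDONLY comes from <fcntl.h>):

```objc
// Open a channel for streamed reads; the cleanup handler runs when the
// channel is closed or fails to open.
dispatch_queue_t ioQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_io_t channel = dispatch_io_create_with_path(DISPATCH_IO_STREAM,
                                                     "/tmp/example.dat", O_RDONLY, 0,
                                                     ioQueue,
                                                     ^(int error) { /* channel closed */ });

// Read the whole file without tying up a thread while it waits on the disk;
// the handler may be called several times with partial chunks of data.
dispatch_io_read(channel, 0, SIZE_MAX, ioQueue,
                 ^(bool done, dispatch_data_t data, int error) {
    if (data) { /* accumulate or process this chunk */ }
    if (done) { dispatch_io_close(channel, 0); }
});
```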
Consider something like this:
- (void)userRequestedNewSearch:(SearchInfo *)searchInfo {
    // Assign this operation a new token that uniquely identifies it.
    uint32_t token = [self nextOperationToken];
    // If your "loading" API has an external abort mechanism, you want to keep
    // track of the in-flight I/O so any existing I/O operations can be canceled
    // before dispatching new work.
    dispatch_async(myQueue, ^{
        // Try to load your data in small pieces, so you can exit as early as
        // possible. If you have to do a monolithic load, that's OK, but this
        // block will not exit until that stops.
        while (!loadIsComplete) {
            if ([self currentToken] != token) return;
            // Load some data; set loadIsComplete when loading completes.
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // One last check before updating the UI...
            if ([self currentToken] != token) return;
            // Do your UI update operations.
        });
    });
}
It will early-abort any operation that is not the last one submitted. If you used NSOperationQueue you could call cancelAllOperations but you would still need a similar mechanism to early-abort the one that is currently executing.

How to run functions in order in Objective-C

I have a problem with queued functions. I want to run my function, and when my first function finishes running, I want to start the other one.
- (void)firstFunct
{
    // sending and getting information from the server;
    // doing something and creating data to use in my second function
}
and my second function is:
- (void)secondFunct
{
    // using data coming from the first function
}
I am now using these two functions like this:
- (void)ThirdFunct
{
    [self firstFunct];
    [self performSelector:@selector(secondFunct) withObject:nil afterDelay:0.5];
}
But there is a problem: this method is not good to use. I want to learn whether there is an efficient way to run the functions one after the other.
You can simply call one function after the other:
- (void)thirdFunct
{
    [self firstFunct];
    [self secondFunct];
}
If you want to run this whole block in the background, not blocking the UI thread, use Grand Central Dispatch:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self firstFunct];
    [self secondFunct];
});
And if the first function contains some asynchronous call, that call should offer some kind of interface to run code after the call finishes, like a delegate call or a completion block.
Well, yes, zoul's spot on for the normal case.
However, you mentioned a server was involved. In that case, you probably have an asynchronous request. What you want to do is read the documentation of the class you use to make the network request, and learn what callbacks it uses to notify you when it is complete.
Cocoa offers nice concurrency management classes like NSOperation and NSOperationQueue. You could use them to simplify the logic behind chaining your asynchronous calls without creating explicit connections in code to call method 3 after work 2 completes, and method 2 after work 1 completes, and so on. Using dependency management, you could easily specify those dependencies between your operations.
Here's a simple example of how you would use that approach. Suppose you are downloading some data asynchronously in all three methods, and you've created an NSOperation subclass called DownloadOperation.
First, create an NSOperationQueue.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
Then, create the three download operations.
DownloadOperation *a = [[DownloadOperation alloc] init];
DownloadOperation *b = [[DownloadOperation alloc] init];
DownloadOperation *c = [[DownloadOperation alloc] init];
Then, specify the dependencies between your operations. Dependencies simply say that an operation can run only if all the operations it depends upon are complete. In your example, the dependencies look like c <- b <- a, which can be broken down into two steps:
[b addDependency:a];
[c addDependency:b];
Now add these operations to the queue.
[queue addOperations:@[ a, b, c ] waitUntilFinished:NO];
The queue will automatically start processing all operations at this point, but since we've created this chaining sort of dependency, they will be executed one after the other in our particular example.
I've created a sample project on github demonstrating it with a simple example at https://github.com/AnuragMishra/ChainedOperationQueue. It fakes an asynchronous operation by creating a timer, and when the timer finishes, the operation is marked complete. It's written using Xcode 4.5, so let me know if you have issues compiling it in older versions.
If you write your method like this:
- (void)ThirdFunct
{
    [self firstFunct];
    [self secondFunct];
}
secondFunct is called after firstFunct returns. Your problem almost certainly comes from the network request, which is asynchronous. To be sure that your secondFunct is executed after an asynchronous request, you have to execute it from a delegate callback or a block.
Check out NSURLConnectionDelegate if you use NSURLConnection.

dispatch_sync vs. dispatch_async on main queue

Bear with me, this is going to take some explaining. I have a function that looks like the one below.
Context: "aProject" is a Core Data entity named LPProject with an array named 'memberFiles' that contains instances of another Core Data entity called LPFile. Each LPFile represents a file on disk and what we want to do is open each of those files and parse its text, looking for #import statements that point to OTHER files. If we find #import statements, we want to locate the file they point to and then 'link' that file to this one by adding a relationship to the core data entity that represents the first file. Since all of that can take some time on large files, we'll do it off the main thread using GCD.
- (void)establishImportLinksForFilesInProject:(LPProject *)aProject {
    dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    for (LPFile *fileToCheck in aProject.memberFiles) {
        if (/* some condition is met */) {
            dispatch_async(taskQ, ^{
                // Here, we do the scanning for #import statements.
                // When we find a valid one, we put the whole path to the
                // imported file into an array called 'verifiedImports'.

                // Go back to the main thread and update the model
                // (Core Data is not thread-safe).
                dispatch_sync(dispatch_get_main_queue(), ^{
                    NSLog(@"Got to main thread.");
                    for (NSString *import in verifiedImports) {
                        // Add the relationship to the Core Data LPFile entity.
                    }
                }); // end block
            }); // end block
        }
    }
}
Now, here's where things get weird:
This code works, but I'm seeing an odd problem. If I run it on an LPProject that has a few files (about 20), it runs perfectly. However, if I run it on an LPProject that has more files (say, 60-70), it does NOT run correctly. We never get back to the main thread: the NSLog(@"Got to main thread.") never appears and the app hangs. BUT (and this is where things get REALLY weird), if I run the code on the small project FIRST and THEN run it on the large project, everything works perfectly. It's ONLY when I run the code on the large project first that the trouble shows up.
And here's the kicker, if I change the second dispatch line to this:
dispatch_async(dispatch_get_main_queue(), ^{
(That is, use async instead of sync to dispatch the block to the main queue), everything works all the time. Perfectly. Regardless of the number of files in a project!
I'm at a loss to explain this behavior. Any help or tips on what to test next would be appreciated.
This is a common issue related to disk I/O and GCD. Basically, GCD is probably spawning one thread for each file, and at a certain point you've got too many threads for the system to service in a reasonable amount of time.
Every time you call dispatch_async() and in that block you attempt to do any I/O (for example, it looks like you're reading some files here), it's likely that the thread in which that block of code is executing will block (get paused by the OS) while it waits for the data to be read from the filesystem. The way GCD works is such that when it sees that one of its worker threads is blocked on I/O and you're still asking it to do more work concurrently, it'll just spawn a new worker thread. Thus if you try to open 50 files on a concurrent queue, it's likely that you'll end up causing GCD to spawn ~50 threads.
This is too many threads for the system to meaningfully service, and you end up starving your main thread for CPU.
The way to fix this is to use a serial queue instead of a concurrent queue to do your file-based operations. It's easy to do. You'll want to create a serial queue and store it as an ivar in your object so you don't end up creating multiple serial queues. So remove this call:
dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
Add this in your init method:
taskQ = dispatch_queue_create("com.yourcompany.yourMeaningfulLabel", DISPATCH_QUEUE_SERIAL);
Add this in your dealloc method:
dispatch_release(taskQ);
And add this as an ivar in your class declaration:
dispatch_queue_t taskQ;
I believe Ryan is on the right path: there are simply too many threads being spawned when a project has 1,500 files (the amount I decided to test with.)
So, I refactored the code above to work like this:
- (void)establishImportLinksForFilesInProject:(LPProject *)aProject
{
    dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(taskQ, ^{
        // Create a new Core Data context on this thread using the same
        // persistent store as the main thread. Pass the objectID of aProject
        // to access the managed object for that project in this thread's context:
        NSManagedObjectID *projectID = [aProject objectID];
        for (LPFile *fileToCheck in [[backgroundContext objectWithID:projectID] memberFiles])
        {
            if (/* some condition is met */)
            {
                // Here, we do the scanning for #import statements.
                // When we find a valid one, we put the whole path to the
                // imported file into an array called 'verifiedImports'.

                // Pass this ID to the main thread in the dispatch call below
                // to access the same file in the main thread's context.
                NSManagedObjectID *fileID = [fileToCheck objectID];

                // Go back to the main thread and update the model
                // (Core Data is not thread-safe).
                dispatch_async(dispatch_get_main_queue(), ^{
                    for (NSString *import in verifiedImports)
                    {
                        LPFile *targetFile = [mainContext objectWithID:fileID];
                        // Add the relationship to targetFile.
                    }
                }); // end block
            }
        }
        // Easy way to tell when we're done processing all files: we could add a
        // dispatch_async(main_queue) call here to do UI updates, etc.
    }); // end block
}
So, basically, we're now spawning one thread that reads all the files instead of one-thread-per-file. Also, it turns out that calling dispatch_async() on the main_queue is the correct approach: the worker thread will dispatch that block to the main thread and NOT wait for it to return before proceeding to scan the next file.
This implementation essentially sets up a "serial" queue as Ryan suggested (the for loop is the serial part of it), but with one advantage: when the for loop ends, we're done processing all the files and we can just stick a dispatch_async(main_queue) block there to do whatever we want. It's a very nice way to tell when the concurrent processing task is finished and that didn't exist in my old version.
The disadvantage here is that it's a bit more complicated to work with Core Data on multiple threads. But this approach seems to be bulletproof for projects with 5,000 files (which is the highest I've tested.)
I think it is easier to understand with a diagram.
For the situation the author described, with dispatch_sync to the main queue (`***` is work, `---` is blocked waiting for the main queue):
taskQ start:
    dispatch_1  ***********|---------
    dispatch_2  *************|---------
    ...
    dispatch_n  ***************************|----------
main queue (sync):
    |--dispatch_1--|--dispatch_2--|--dispatch_3--| ... |--dispatch_n--|
Each worker thread blocks until its block has finished on the main queue, which makes the sync'd main queue so busy that the task finally fails.

How can I consolidate deferred/delayed calls in Objective-C?

I'd like to ensure that certain maintenance tasks are executed "eventually". For example, after I detect that some resources might no longer be used in a cache, I might call:
[self performSelector:@selector(cleanupCache) withObject:nil afterDelay:0.5];
However, there might be numerous places where I detect this, and I don't want to be calling cleanupCache continuously. I'd like to consolidate multiple calls to cleanupCache so that we only periodically get ONE call to cleanupCache.
Here's what I've come up with to do this. Is this the best way?
[NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(cleanupCache) object:nil];
[self performSelector:@selector(cleanupCache) withObject:nil afterDelay:0.5];
There's no real built-in support for what you want. If this is common in your program, I would create a trampoline class that keeps track of whether it's already scheduled to send a message to a given object. It shouldn't take more than 20 or so lines of code.
Rather than canceling the pending request, how about just keeping track? Set a flag when you schedule the request, and clear it when the cleanup runs.
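That flag-based approach might look like this (assuming a BOOL property named cleanupScheduled; the name is made up):

```objc
// Coalesces repeated cleanup requests: only the first request in any idle
// period schedules the call; the rest are no-ops until it fires.
- (void)scheduleCleanup {
    if (self.cleanupScheduled) return;   // a cleanup is already pending; coalesce
    self.cleanupScheduled = YES;
    [self performSelector:@selector(cleanupCache) withObject:nil afterDelay:0.5];
}

- (void)cleanupCache {
    self.cleanupScheduled = NO;          // allow the next request to schedule again
    // ... actual cache cleanup work ...
}
```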