AFNetworking - Why does it spawn a network request thread? - objective-c

I'm trying to understand Operations and Threads better, and looked to AFNetworking's AFURLConnectionOperation subclass for example, real-world, source code.
My current understanding is that when instances of NSOperation are added to an operation queue, the queue, among other things, manages the thread responsible for executing the operation. Apple's documentation of NSOperation points out that, even if a subclass returns YES for -isConcurrent, the queue will still start the operation on a separate thread (as of 10.6).
Based on Apple's strong language throughout the Thread Programming Guide and Concurrency Programming Guide, it seems like managing a thread is best left up to the internal implementation of NSOperationQueue.
However, AFNetworking's AFURLConnectionOperation subclass spawns a new NSThread, and the execution of the operation's -main method is pushed off onto this network request thread. Why? Why is this network request thread necessary? Is this a defensive programming technique because the library is designed to be used by a wide audience? Is it just less hassle for consumers of the library to debug? Is there a (subtle) performance benefit to having all networking activity on a dedicated thread?
(Added Jan 26th)
In a blog post by Dave Dribin, he illustrates how to move an operation back onto the main thread using the specific example of NSURLConnection.
My curiosity comes from the following section in Apple's Thread Programming Guide:
Keep Your Threads Reasonably Busy.
If you decide to create and manage threads manually, remember that threads consume precious system resources. You should do your best to make sure that any tasks you assign to threads are reasonably long-lived and productive. At the same time, you should not be afraid to terminate threads that are spending most of their time idle. Threads use a nontrivial amount of memory, some of it wired, so releasing an idle thread not only helps reduce your application’s memory footprint, it also frees up more physical memory for other system processes to use.
It seems to me that AFNetworking's network request thread isn't being "kept reasonably busy"; it's running an infinite while-loop for handling networking I/O. But, see, that's the point of this question: I don't know and I am only guessing.
Any insight or deconstruction of AFURLConnectionOperation with specific regards to operations, threads (run loops?), and / or GCD would be highly beneficial to filling in the gaps of my understanding.

It's an interesting question, and the answer is all about the semantics of how NSOperation and NSURLConnection interact and work together.
An NSURLConnection is itself an asynchronous task. It all happens in the background and calls its delegate periodically with the results. When you start an NSURLConnection it schedules the delegate callbacks using the runloop it is scheduled on, so a runloop must always be running on the thread you are executing an NSURLConnection on.
Therefore the method -start on our AFURLConnectionOperation will always have to return before the operation finishes so it can receive the callbacks. This requires that AFURLConnectionOperation be an asynchronous operation.
From the NSOperation class reference: https://developer.apple.com/library/mac/documentation/Cocoa/Reference/NSOperation_class/index.html
The value of this property is YES for operations that run asynchronously with respect to the current thread or NO for operations that run synchronously on the current thread. The default value of this property is NO.
AFURLConnectionOperation overrides this (via -isConcurrent) and returns YES, as we would expect. Then we see from the class description:
When you call the start method of an asynchronous operation, that method may return before the corresponding task is completed. An asynchronous operation object is responsible for scheduling its task on a separate thread. The operation could do that by starting a new thread directly, by calling an asynchronous method, or by submitting a block to a dispatch queue for execution. It does not actually matter if the operation is ongoing when control returns to the caller, only that it could be ongoing.
AFNetworking creates a single network thread, via a class method, on which it schedules all of the NSURLConnection objects (and their resulting delegate callbacks). Here is that code from AFURLConnectionOperation:
+ (void)networkRequestThreadEntryPoint:(id)__unused object {
    @autoreleasepool {
        [[NSThread currentThread] setName:@"AFNetworking"];

        // Attach a port so the run loop has a source and doesn't exit immediately,
        // then run it forever so scheduled NSURLConnections can deliver their callbacks.
        NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
        [runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        [runLoop run];
    }
}
+ (NSThread *)networkRequestThread {
    static NSThread *_networkRequestThread = nil;
    static dispatch_once_t oncePredicate;
    dispatch_once(&oncePredicate, ^{
        // Created lazily, exactly once, and kept alive for the lifetime of the app.
        _networkRequestThread = [[NSThread alloc] initWithTarget:self
                                                        selector:@selector(networkRequestThreadEntryPoint:)
                                                          object:nil];
        [_networkRequestThread start];
    });

    return _networkRequestThread;
}
Here is code from AFURLConnectionOperation showing how the NSURLConnection is scheduled on the run loop of the AFNetworking thread, in all of the operation's run loop modes:
- (void)start {
    [self.lock lock];
    if ([self isCancelled]) {
        [self performSelector:@selector(cancelConnection) onThread:[[self class] networkRequestThread] withObject:nil waitUntilDone:NO modes:[self.runLoopModes allObjects]];
    } else if ([self isReady]) {
        self.state = AFOperationExecutingState;

        [self performSelector:@selector(operationDidStart) onThread:[[self class] networkRequestThread] withObject:nil waitUntilDone:NO modes:[self.runLoopModes allObjects]];
    }
    [self.lock unlock];
}
- (void)operationDidStart {
    [self.lock lock];
    if (![self isCancelled]) {
        self.connection = [[NSURLConnection alloc] initWithRequest:self.request delegate:self startImmediately:NO];

        NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
        for (NSString *runLoopMode in self.runLoopModes) {
            [self.connection scheduleInRunLoop:runLoop forMode:runLoopMode];
            [self.outputStream scheduleInRunLoop:runLoop forMode:runLoopMode];
        }
        //...
    }
    [self.lock unlock];
}
Here, [NSRunLoop currentRunLoop] retrieves the run loop of the AFNetworking thread rather than the main run loop, because -operationDidStart is called on that thread. As a bonus, the outputStream is scheduled on the background thread's run loop as well.
Now AFURLConnectionOperation waits for the NSURLConnection callbacks and updates its own NSOperation state variables (cancelled, finished, executing) itself as the network request progresses. The AFNetworking thread spins its runloop repeatedly so that as NSURLConnections from potentially many AFURLConnectionOperations schedule their callbacks they are called and the AFURLConnectionOperation objects can react to them.
If you always plan to use queues to execute your operations, it is simpler to define them as synchronous. If you execute operations manually, though, you might want to define your operation objects as asynchronous. Defining an asynchronous operation requires more work, because you have to monitor the ongoing state of your task and report changes in that state using KVO notifications. But defining asynchronous operations is useful in cases where you want to ensure that a manually executed operation does not block the calling thread.
Note that you can also use an NSOperation without an NSOperationQueue by calling -start and observing it until -isFinished returns YES. If AFURLConnectionOperation were implemented as a synchronous operation and blocked the current thread waiting for NSURLConnection to finish, it would never actually finish: NSURLConnection would schedule its callbacks on the current run loop, which wouldn't be running because we would be blocking it. Therefore, to support this valid way of using NSOperation, AFURLConnectionOperation has to be asynchronous.
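To make those mechanics concrete, here is a minimal sketch of an asynchronous NSOperation subclass of the kind described above. It is not AFNetworking's code; the class name MYAsyncOperation and the use of a global GCD queue for the work are assumptions for illustration. The essential parts are returning YES from -isConcurrent, returning from -start before the work finishes, and emitting the isExecuting/isFinished KVO notifications yourself:

// Minimal sketch of an asynchronous NSOperation subclass (hypothetical class name).
@interface MYAsyncOperation : NSOperation
@end

@implementation MYAsyncOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }   // "asynchronous" on newer SDKs
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start {
    if ([self isCancelled]) {
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }

    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // Kick the real work off somewhere else; -start returns immediately.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... do the actual work here ...
        [self finish];
    });
}

- (void)finish {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished  = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}

@end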
Answers to Questions
Yes, AFNetworking creates one thread which it uses to schedule all connections. Thread creation is expensive. (This is partly why GCD was created: GCD keeps a pool of threads running for you and dispatches blocks onto them as needed, without you having to create, destroy and manage threads yourself.)
The processing is not done on the background AFNetworking thread. AFNetworking uses the completionBlock property of NSOperation to do its processing which is executed when finished is set to YES.
The exact execution context for your completion block is not guaranteed but is typically a secondary thread. Therefore, you should not use this block to do any work that requires a very specific execution context. Instead, you should shunt that work to your application’s main thread or to the specific thread that is capable of doing it. For example, if you have a custom thread for coordinating the completion of the operation, you could use the completion block to ping that thread.
The post-processing of HTTP connections is handled in AFHTTPRequestOperation. This class creates a dispatch queue specifically for transforming response objects in the background and shunts the work off onto that queue; see here:
- (void)setCompletionBlockWithSuccess:(void (^)(AFHTTPRequestOperation *operation, id responseObject))success
                              failure:(void (^)(AFHTTPRequestOperation *operation, NSError *error))failure
{
    self.completionBlock = ^{
        //...
        dispatch_async(http_request_operation_processing_queue(), ^{
            //...
        });
    };
}
I guess this raises the question: could they have written AFURLConnectionOperation so that it doesn't create a thread? I think the answer is yes, because there is this API:
- (void)setDelegateQueue:(NSOperationQueue*) queue NS_AVAILABLE(10_7, 5_0);
This schedules your NSURLConnection delegate callbacks on a specific operation queue instead of on a run loop. But that API only became available in iOS 5 and OS X 10.7, and we're looking at the legacy part of AFNetworking. Looking at the blame view on GitHub for AFURLConnectionOperation, we can see that mattt actually wrote the +networkRequestThread method, coincidentally, on the very day the iPhone 4S and iOS 5 were announced back in 2011! So we can reason that the thread exists because, at the time it was written, creating a thread and scheduling your connections on it was the only way to receive callbacks from NSURLConnection in the background while running in an asynchronous NSOperation subclass.
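For comparison, here is a rough sketch of what the setDelegateQueue: route could look like on iOS 5 / OS X 10.7 and later. This is not AFNetworking's actual implementation, just an assumed alternative; callbackQueue is a hypothetical queue and self.request stands in for the operation's request:

// Hypothetical alternative (not what AFNetworking actually does): let
// NSURLConnection deliver its delegate callbacks on an operation queue,
// which removes the need for a dedicated thread and run loop.
NSOperationQueue *callbackQueue = [[NSOperationQueue alloc] init]; // would normally be shared
NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:self.request
                                                              delegate:self
                                                      startImmediately:NO];
[connection setDelegateQueue:callbackQueue]; // requires iOS 5 / OS X 10.7
[connection start];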
The thread is created using the dispatch_once function (see the extra code snippet I added, as you suggested). This function ensures that the enclosed block runs only once in the lifetime of the application. The AFNetworking thread is created when it is first needed and then persists for the lifetime of the application.
When I wrote NSURLConnectionOperation I meant AFURLConnectionOperation. I've corrected that, thanks for mentioning it :)

Related

Is an NSOperationQueue's thread its parent's thread or a new one?

If I instantiate an NSOperationQueue (Listing 1):
NSOperationQueue * operationQueue = [[NSOperationQueue alloc] init];
Then add an operation to it (Listing 2):
NSInvocationOperation* anOperation = [[NSInvocationOperation alloc] initWithInvocation:theInvocation];
[operationQueue addOperation:anOperation];
I know that anOperation will be run on the new NSOperationQueue, on a new thread. And one mechanism to communicate with it would be to call performSelector:onThread:withObject:waitUntilDone:
But that's the NSOperation on a queue and thread other than the main ones. What about the NSOperationQueue object? As part of the alloc/init does it move to a thread other than the one that created it? In other words, if the NSOperationQueue alloc/init code (Listing 1) was executed on the main queue and main thread, could I use performSelectorOnMainThread:withObject:waitUntilDone: or do I have to use performSelector:onThread:withObject:waitUntilDone: ?
I haven't been able to track down any info on this, or precisely what NSOperationQueue init does, which is why I'm thinking of writing code to log the various queues and threads that get created/used.
There is no specific thread associated with an NSOperationQueue. The code of NSOperationQueue runs on whatever thread calls it.
The object itself is thread-safe and can be accessed or manipulated from any thread. From the NSOperationQueue class reference:
Multicore Considerations
It is safe to use a single NSOperationQueue object from multiple threads without creating additional locks to synchronize access to that object.
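So, to answer the question with a sketch: the queue has no thread of its own, so from inside an operation you can message the main thread directly with performSelectorOnMainThread:withObject:waitUntilDone:, or enqueue a block on the main queue. The selector workFinished below is hypothetical:

// The queue object is just a thread-safe object; it does not live on any thread.
NSOperationQueue *operationQueue = [[NSOperationQueue alloc] init];
[operationQueue addOperationWithBlock:^{
    // ... background work ...
    [self performSelectorOnMainThread:@selector(workFinished)
                           withObject:nil
                        waitUntilDone:NO];
    // or, equivalently:
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // update UI here
    }];
}];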

Asynchronous Cocoa - Preventing "simple" (obvious) deadlocks in NSOperation?

When subclassing NSOperation to get a little chunk of work done, I've found out it's pretty easy to deadlock. Below I have a toy example that's pretty easy to understand why it never completes.
I can only seem to think through solutions that prevent the deadlock from the caller perspective, never the callee. For example, the caller could continue to run the run loop, not wait for finish, etc. If the main thread needs to be messaged synchronously during the operation, I'm wondering if there is a canonical solution that an operation subclasser can implement to prevent this type of deadlocking. I'm only just starting to dip my toe into async programming...
@interface ToyOperation : NSOperation
@end

@implementation ToyOperation

- (void)main
{
    // Lots of work
    NSString *string = @"Important Message";
    [self performSelector:@selector(sendMainThreadSensitiveMessage:) onThread:[NSThread mainThread] withObject:string waitUntilDone:YES];
    // Lots more work
}

- (void)sendMainThreadSensitiveMessage:(NSString *)string
{
    // Update the UI or something that requires the main thread...
}

@end
- (int)main
{
    ToyOperation *op = [[ToyOperation alloc] init];
    NSOperationQueue *opQ = [[NSOperationQueue alloc] init];
    [opQ addOperations:@[ op ] waitUntilFinished:YES]; // Deadlock
    return 0;
}
If the main thread needs to be messaged synchronously during the operation, I'm wondering if there is a canonical solution that an operation subclasser can implement to prevent this type of deadlocking.
There is. Never make a synchronous call to the main queue. And a follow-on: Never make a synchronous call from the main queue. And, really, it can be summed up as Never make a synchronous call from any queue to any other queue.
By doing that, you guarantee that the main queue is not blocked. Sure, there may be an exceptional case that tempts you to violate this rule and, even, cases where it really, truly, is unavoidable. But that very much should be the exception, because even a single dispatch_sync() (or NSOperationQueue waitUntilFinished:YES) has the potential to deadlock.
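Applied to the toy example above, a minimal sketch of that advice is to replace the synchronous performSelector: call with an asynchronous dispatch to the main queue (assuming the rest of -main does not depend on the main-thread work having completed):

// Sketch: ToyOperation's -main rewritten so it never blocks waiting on the
// main thread, so the operation can finish without deadlocking.
- (void)main
{
    // Lots of work
    NSString *string = @"Important Message";
    dispatch_async(dispatch_get_main_queue(), ^{
        [self sendMainThreadSensitiveMessage:string];
    });
    // Lots more work (must not depend on the main-thread work having run yet)
}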
Of course, data updates from queue to queue can be tricky. There are several options: a concurrency-safe data layer (very hard); passing only immutable objects or copies of the data, typically to the main queue for display purposes (fairly easy, but potentially expensive); or a UUID-based faulting model like the one Core Data uses. Regardless of how you solve this, the problem isn't anything new compared to any other concurrency model.
The one exception is when replacing locks with queues. For example, instead of using @synchronized() internally in a class, use a serial GCD queue and dispatch_sync() onto that queue anywhere a synchronized operation must take place; it is faster and more straightforward. But this isn't so much an exception as a solution to a completely different problem.
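A rough sketch of that lock-replacement pattern, with a hypothetical Account class and queue label for illustration:

@interface Account : NSObject
- (void)deposit:(double)amount;
@end

@implementation Account {
    dispatch_queue_t _syncQueue;
    double _balance;
}

- (instancetype)init {
    if ((self = [super init])) {
        // A private serial queue standing in for @synchronized(self).
        _syncQueue = dispatch_queue_create("com.example.account.sync", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)deposit:(double)amount {
    // Safe as long as this queue is used only as a lock and never
    // dispatches synchronously back onto itself.
    dispatch_sync(_syncQueue, ^{
        _balance += amount;
    });
}

@end

Because the queue is serial, only one block touches _balance at a time, which is exactly what @synchronized gave you, without the lock overhead.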

Does performBlockAndWait always run on the same thread? [duplicate]

I have an NSManagedObjectContext declared like so:
- (NSManagedObjectContext *)backgroundMOC {
    if (backgroundMOC != nil) {
        return backgroundMOC;
    }
    backgroundMOC = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    return backgroundMOC;
}
Notice that it is declared with a private queue concurrency type, so its tasks should be run on a background thread. I have the following code:
- (void)testThreading
{
    /* ok */
    [self.backgroundMOC performBlock:^{
        assert(![NSThread isMainThread]);
    }];

    /* CRASH */
    [self.backgroundMOC performBlockAndWait:^{
        assert(![NSThread isMainThread]);
    }];
}
Why does calling performBlockAndWait execute the task on the main thread rather than background thread?
Tossing in another answer, to try and explain why performBlockAndWait will always run in the calling thread.
performBlock is completely asynchronous. It will always enqueue the block onto the queue of the receiving MOC, and then return immediately. Thus,
[moc performBlock:^{
    // Foo
}];

[moc performBlock:^{
    // Bar
}];
will place two blocks on the queue for moc. They will always execute asynchronously. Some unknown thread will pull blocks off of the queue and execute them. In addition, those blocks are wrapped within their own autorelease pool, and also they will represent a complete Core Data user event (processPendingChanges).
performBlockAndWait does NOT use the internal queue. It is a synchronous operation that executes in the context of the calling thread. Of course, it will wait until the current operations on the queue have been executed, and then that block will execute in the calling thread. This is documented (and reasserted in several WWDC presentations).
Furthermore, performBlockAndWait is re-entrant, so nested calls all happen right in that calling thread.
The Core Data engineers have been very clear that the actual thread in which a queue-based MOC operation runs is not important. It's the synchronization by using the performBlock* API that's key.
So, consider 'performBlock' as "This block is being placed on a queue, to be executed at some undetermined time, in some undetermined thread. The function will return to the caller as soon as it has been enqueued"
performBlockAndWait is "This block will be executed at some undetermined time, in this exact same thread. The function will return after this code has completely executed (which will occur after the current queue associated with this MOC has drained)."
EDIT
Are you sure of "performBlockAndWait does NOT use the internal queue"? I think it does. The only difference is that performBlockAndWait will wait until the block's completion. And what do you mean by calling thread? In my understanding, [moc performBlockAndWait] and [moc performBlock] both run on its private queue (background or main). The important concept here is that moc owns the queue, not the other way around. Please correct me if I am wrong. – Philip007
It is unfortunate that I phrased the answer as I did, because, taken by itself, it is incorrect. However, in the context of the original question it is correct. Specifically, when calling performBlockAndWait on a private queue, the block will execute on the thread that called the function - it will not be put on the queue and executed on the "private thread."
Now, before I even get into the details, I want to stress that depending on internal workings of libraries is very dangerous. All you should really care about is that you can never expect a specific thread to execute a block, except anything tied to the main thread. Thus, expecting a performBlockAndWait to not execute on the main thread is not advised because it will execute on the thread that called it.
performBlockAndWait uses GCD, but it also has its own layer (e.g., to prevent deadlocks). If you look at the GCD code (which is open source), you can see how synchronous calls work - and in general they synchronize with the queue and invoke the block on the thread that called the function - unless the queue is the main queue or a global queue. Also, in the WWDC talks, the Core Data engineers stress the point that performBlockAndWait will run in the calling thread.
So, when I say it does not use the internal queue, that does not mean it does not use the data structures at all. It must synchronize the call with the blocks already on the queue, and those submitted in other threads and other asynchronous calls. However, when calling performBlockAndWait it does not put the block on the queue... instead it synchronizes access and runs the submitted block on the thread that called the function.
Now, SO is not a good forum for this, because it's a bit more complex than that, especially w.r.t the main queue, and GCD global queues - but the latter is not important for Core Data.
The main point is that when you call any performBlock* or GCD function, you should not expect it to run on any particular thread (except something tied to the main thread) because queues are not threads, and only the main queue will run blocks on a specific thread.
When calling the core data performBlockAndWait the block will execute in the calling thread (but will be appropriately synchronized with everything submitted to the queue).
I hope that makes sense, though it probably just caused more confusion.
EDIT
Furthermore, you can see the unspoken implications of this, in that the way in which performBlockAndWait provides re-entrant support breaks the FIFO ordering of blocks. As an example...
[context performBlockAndWait:^{
    NSLog(@"One");

    [context performBlock:^{
        NSLog(@"Two");
    }];

    [context performBlockAndWait:^{
        NSLog(@"Three");
    }];
}];
Note that strict adherence to the FIFO guarantee of the queue would mean that the nested performBlockAndWait ("Three") would run after the asynchronous block ("Two") since it was submitted after the async block was submitted. However, that is not what happens, as it would be impossible... for the same reason a deadlock ensues with nested dispatch_sync calls. Just something to be aware of if using the synchronous version.
In general, avoid sync versions whenever possible because dispatch_sync can cause a deadlock, and any re-entrant version, like performBlockAndWait will have to make some "bad" decision to support it... like having sync versions "jump" the queue.
Why not? Grand Central Dispatch's block concurrency paradigm (which I assume MOC uses internally) is designed so that only the runtime and operating system need to worry about threads, not the developer (because the OS can do it better than you can do to having more detailed information). Too many people assume that queues are the same as threads. They are not.
Queued blocks are not required to run on any given thread (the exception being blocks in the main queue must execute on the main thread). So, in fact, sometimes sync (i.e. performBlockAndWait) queued blocks will run on the main thread if the runtime feels it would be more efficient than creating a thread for it. Since you are waiting for the result anyway, it wouldn't change the way your program functioned if the main thread were to hang for the duration of the operation.
This last part I am not sure I remember correctly, but in the WWDC 2011 videos about GCD I believe it was mentioned that the runtime will make an effort to run sync operations on the main thread, if possible, because it is more efficient. In the end, though, I suppose the answer to "why" can only be answered by the people who designed the system.
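If you want to observe this behavior yourself, here is a quick sketch (the queue label is arbitrary) that compares threads inside a dispatch_sync block; on current GCD implementations the log typically prints 1, meaning the block ran on the calling thread rather than on a separate one:

// Sketch: dispatch_sync to a private serial queue usually runs the block
// on the calling thread instead of spinning up another one.
dispatch_queue_t queue = dispatch_queue_create("com.example.demo", DISPATCH_QUEUE_SERIAL);
NSThread *caller = [NSThread currentThread];
dispatch_sync(queue, ^{
    NSLog(@"same thread as caller? %d", [NSThread currentThread] == caller);
});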
I don't think that the MOC is obligated to use a background thread; it's just obligated to ensure that your code will not run into concurrency issues with the MOC if you use performBlock: or performBlockAndWait:. Since performBlockAndWait: is supposed to block the current thread, it seems reasonable to run that block on that thread.
The performBlockAndWait: call only makes sure that you execute the code in such a way that you don't introduce concurrency (i.e. on 2 threads performBlockAndWait: will not run at the same time, they will block each other).
The long and the short of it is that you basically can't ever depend on which thread a MOC operation runs on. I've learned the hard way that if you use GCD or just straight-up threads, you always have to create local MOCs for each operation and then merge them into the master MOC.
There is a great library (MagicalRecord) that makes that process very simple.

Core Data and Concurrency using NSOperationQueues

After using Instruments, I have found a spot in my code that is very long-running and blocking my UI: lots of Core Data fetches (it's part of a process of ingesting a large JSON packet and building up managed objects while making sure that objects have not been duplicated).
While my intentions are to break up this request into smaller pieces and process them serially, that only means I'll be spreading out those fetches - I anticipate the effect will be small bursts of jerkiness in the app instead of one long hiccup.
Everything I've read both in Apple's docs and online at various blog posts indicates that Core Data and concurrency is akin to poking a beehive. So, timidly I've sat down to give it the ol' college try. Below is what I've come up with, and I would appreciate someone wiser pointing out any errors I'm sure I've written.
The code posted below works. What I have read has me intimidated that I've surely done something wrong; I feel like I've pulled the pin out of a grenade and am just waiting for it to go off unexpectedly!
// Declared __block so the completion block below can see the result of the download.
__block NSArray *containers = nil;

NSBlockOperation *downloadAllObjectContainers = [NSBlockOperation blockOperationWithBlock:^{
    containers = [webServiceAPI findAllObjectContainers];
}];

[downloadAllObjectContainers setCompletionBlock:^{
    NSManagedObjectContext *backgroundContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    [backgroundContext setPersistentStoreCoordinator:[_managedObjectContext persistentStoreCoordinator]];

    [[NSNotificationCenter defaultCenter] addObserverForName:NSManagedObjectContextDidSaveNotification
                                                      object:backgroundContext
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *note) {
                                                      [_managedObjectContext mergeChangesFromContextDidSaveNotification:note];
                                                  }];

    Builder *builder = [[Builder alloc] init];
    [builder setManagedObjectContext:backgroundContext];

    for (ObjectContainer *objCont in containers) { // This is the long running piece, it's roughly O(N^2) yuck!
        [builder buildCoreDataObjectsFromContainer:objCont];
    }

    NSError *backgroundContextSaveError = nil;
    if ([backgroundContext hasChanges]) {
        [backgroundContext save:&backgroundContextSaveError];
    }
}];

NSOperationQueue *background = [[NSOperationQueue alloc] init];
[background addOperation:downloadAllObjectContainers];
Since you are using NSPrivateQueueConcurrencyType you must be targeting iOS 5, so you do not have to go through all the trouble of creating a context on a background thread and merging it on the main thread.
All you need is to create a managed object context with concurrency type NSPrivateQueueConcurrencyType on the main thread and do all operations on managed objects inside a block passed to the context's performBlock: method.
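As a sketch of what that could look like for the code in the question, assuming containers, Builder, ObjectContainer and _managedObjectContext are the same as above, and using a child context as one way to propagate changes without the did-save-notification dance (the parent/child wiring is my assumption, not something spelled out in this answer):

// Sketch of the iOS 5 pattern: a private-queue context, with all managed
// object work funnelled through performBlock:.
NSManagedObjectContext *backgroundContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
[backgroundContext setParentContext:_managedObjectContext];

[backgroundContext performBlock:^{
    Builder *builder = [[Builder alloc] init];
    [builder setManagedObjectContext:backgroundContext];

    for (ObjectContainer *objCont in containers) {
        [builder buildCoreDataObjectsFromContainer:objCont];
    }

    NSError *error = nil;
    if ([backgroundContext hasChanges] && ![backgroundContext save:&error]) {
        NSLog(@"Background save failed: %@", error);
    }
    // Saving the child pushes changes up into _managedObjectContext;
    // save that context on its own queue to persist to disk.
}];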
I recommend you take a look at WWDC2011 session 303 - What's New in Core Data on iOS.
Also, take a look at Core Data Release Notes for iOS5.
Here's a quote from the release notes:
NSManagedObjectContext now provides structured support for concurrent operations. When you create a managed object context using initWithConcurrencyType:, you have three options for its thread (queue) association
Confinement (NSConfinementConcurrencyType).
This is the default. You promise that context will not be used by any thread other than the one on which you created it. (This is exactly the same threading requirement that you've used in previous releases.)
Private queue (NSPrivateQueueConcurrencyType).
The context creates and manages a private queue. Instead of you creating and managing a thread or queue with which a context is associated, here the context owns the queue and manages all the details for you (provided that you use the block-based methods as described below).
Main queue (NSMainQueueConcurrencyType).
The context is associated with the main queue, and as such is tied into the application’s event loop, but it is otherwise similar to a private queue-based context. You use this queue type for contexts linked to controllers and UI objects that are required to be used only on the main thread.
Concurrency
Concurrency is the ability to work with the data on more than one queue at the same time. If you choose to use concurrency with Core Data, you also need to consider the application environment. For the most part, AppKit and UIKit are not thread-safe; in OS X in particular, Cocoa bindings and controllers are not thread-safe. See "Core Data, Multithreading, and the Main Thread" in the release notes.

Trouble with Cocoa Touch Threading

I'm writing a multi threaded application for iOS.
I am new to Objective-C, so I haven't played around with threads in iPhone before.
Normally when using Java, I create a thread and pass "self" as an object to the thread, and from the thread I can therefore call back to the main thread.
How is this done in Objective C?
How can I call the main thread from a thread? I have been trying with NSNotificationCenter, but I get a SIGABRT error :/
This is how the thread is started:
NSArray *extraParams = [NSArray arrayWithObjects:savedUserName, serverInfo, nil]; // Parameters to pass to thread object

NSThread *myThread = [[NSThread alloc] initWithTarget:statusGetter                // New thread with statusGetter
                                             selector:@selector(getStatusFromServer:) // run method in statusGetter
                                               object:extraParams];               // parameters passed as arraylist
[myThread start]; // Thread started

activityContainerView.hidden = NO;
[activityIndicator startAnimating];
Any help would be great!
You accomplish this by adding a message to the main thread's run loop.
Foundation provides some conveniences for this, notably -[NSObject performSelectorOnMainThread:withObject:waitUntilDone:] and NSInvocation.
Using the former, you can simply write something like:
[self performSelectorOnMainThread:@selector(updateUI) withObject:nil waitUntilDone:NO];
Note that NSNotificationCenter delivers a notification on the thread from which it is posted (often a secondary thread, i.e. the calling thread), which may be why your notification-based attempt crashed.
You can either use performSelectorOnMainThread:withObject:waitUntilDone: or, if you're targeting iOS 4 and later, Grand Central Dispatch, which doesn't require you to implement a method just to synchronize with the main thread:
dispatch_async(dispatch_get_main_queue(), ^ {
// Do stuff on the main thread here...
});
This often makes your code easier to read.
Though this is not a direct answer to your question I would highly recommend you take a look at Grand Central Dispatch. It generally gives better performance than trying to use threads directly.
As Justin pointed out, you can always perform a method on the main thread by calling performSelectorOnMainThread:withObject:waitUntilDone:, if you really need to.