NSOperationQueue designated thread - objective-c

I want to use an NSOperationQueue to dispatch Core Data operations. However, operation queue behavior is not always the same: on iOS 4.0 / OS X 10.6 and later it dispatches via libdispatch, which uses thread pools, so a queue might not always use the same thread (as NSManagedObjectContext requires).
Can I force a serial NSOperationQueue to execute on a single thread?
Or do I have to create my own simple queuing mechanism for that?

Can I force a serial NSOperationQueue to execute on a single thread?
Or do I have to create my own simple queuing mechanism for that?
You shouldn't need to do either of those. What Core Data really requires is that you don't have two pieces of code making changes to a managed object context at the same time. There's even a note on this at the very beginning of Concurrency with Core Data:
Note: You can use threads, serial operation queues, or dispatch queues for concurrency.
For the sake of conciseness, this article uses “thread” throughout to refer to any of these.
What's really required is that you serialize operations on a given context. That happens naturally if you use a single thread, but NSOperationQueue also serializes its operations if you set maxConcurrentOperationCount to 1, so you don't have to worry about ensuring that all operations take place on the same thread.
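A minimal sketch of that setup (the block contents are illustrative; `context` stands for whatever NSManagedObjectContext you are protecting):

```objective-c
#import <Foundation/Foundation.h>

// A serial operation queue: operations execute one at a time, in FIFO
// order, but not necessarily on the same thread each time.
NSOperationQueue *coreDataQueue = [[NSOperationQueue alloc] init];
coreDataQueue.maxConcurrentOperationCount = 1;

[coreDataQueue addOperationWithBlock:^{
    // Safe: no other operation on this queue can touch the context
    // concurrently, even if this block runs on a different thread
    // than the previous operation did.
    // ... mutate objects in `context`, then [context save:&error]; ...
}];
```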

Apple decided to bind managed objects to real threads. It isn't safe to access a context from different threads; a context without any registered objects might be safe, but its objects are not.

Related

What is NSOperation? How do I use it?

I am implementing a contacts module: basically adding, deleting, searching, and listing contacts.
I use a file to persist the data, storing all the contacts in the file (JSON format) and deserializing them back into objects.
Now my goal is to perform the serialization and deserialization in a background thread using NSOperation. How does a class subclass NSOperation, and what should that subclass do?
I am new to Mac OS, and I can't understand what exactly NSOperation means, how to use it in my module, or how to make operations run concurrently. I have seen a lot of tutorials, but it is still very unclear to me. I would really appreciate some help. Thanks in advance.
There is a lot to answer in your question.
What is NSOperation?
First, Apple's reference says:
The NSOperation class is an abstract class you use to encapsulate the
code and data associated with a single task. Because it is abstract,
you do not use this class directly but instead subclass or use one of
the system-defined subclasses (NSInvocationOperation or
NSBlockOperation) to perform the actual task. Despite being abstract,
the base implementation of NSOperation does include significant logic
to coordinate the safe execution of your task. The presence of this
built-in logic allows you to focus on the actual implementation of
your task, rather than on the glue code needed to ensure it works
correctly with other system objects.
Put more simply:
NSOperation represents a single unit of work. It’s an abstract class
that offers a useful, thread-safe structure for modeling state,
priority, dependencies, and management.
Do you need to run it concurrently?
What is Concurrency?
Doing multiple things at the same time.
Taking advantage of the number of cores available in multicore CPUs.
Running multiple programs in parallel.
Why NSOperationQueue?
For one-off tasks where it doesn't make sense to build out a custom
NSOperation subclass, Foundation provides the concrete
implementations NSBlockOperation and NSInvocationOperation.
Examples of tasks that lend themselves well to NSOperation include
network requests, image resizing, text processing, or any other
repeatable, structured, long-running task that produces associated
state or data.
But simply wrapping computation into an object doesn't do much without
a little oversight. That's where NSOperationQueue comes in.
What is NSOperationQueue?
NSOperationQueue regulates the concurrent execution of operations. It
acts as a priority queue, such that operations are executed in a
roughly First-In-First-Out manner, with higher-priority
(NSOperation.queuePriority) ones getting to jump ahead of
lower-priority ones. NSOperationQueue can also limit the maximum
number of concurrent operations to be executed at any given moment,
using the maxConcurrentOperationCount property.
NSOperationQueue itself is backed by a Grand Central Dispatch queue,
though that’s a private implementation detail.
To kick off an NSOperation, either call start, or add it to an
NSOperationQueue, to have it start once it reaches the front of the
queue. Since so much of the benefit of NSOperation is derived from
NSOperationQueue, it’s almost always preferable to add an operation to
a queue rather than invoke start directly.
Also
Operation queues usually provide the threads used to run their
operations. In OS X v10.6 and later, operation queues use the
libdispatch library (also known as Grand Central Dispatch) to initiate
the execution of their operations. As a result, operations are always
executed on a separate thread, regardless of whether they are
designated as concurrent or non-concurrent operations.
So your code would look like this:
NSOperationQueue *backgroundQueue = [[NSOperationQueue alloc] init];
[backgroundQueue addOperationWithBlock:^{
    // Your background work here
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // Your main-thread (UI) work here
    }];
}];

Adding an NSOperationQueue to an NSOperation

Is it safe to add an NSOperationQueue to an NSOperation, and then add this operation to another NSOperationQueue?
Here is some code to visualize what I am trying to do.
NSOperationQueue *mainQueue = [[NSOperationQueue alloc] init];
// Here I declare some NSBlockOperations, i.e. parseOperation1-2-3,
// and also another operation called zipOperation, which itself
// contains an NSOperationQueue. This queue takes the processed (parsed)
// files and writes them to a single zip file. Each operation's job is to
// write its data stream and add it to the zip file. After all operations
// are done, it closes the zip.
[zipOperation addDependency:parseOperation1];
[zipOperation addDependency:parseOperation2];
[zipOperation addDependency:parseOperation3];
[mainQueue addOperation:parseOperation1];
[mainQueue addOperation:parseOperation2];
[mainQueue addOperation:parseOperation3];
[mainQueue addOperation:zipOperation];
I have used this approach and have it running in live code deployed on the App Store. I haven't experienced any issues during development or in the last 2 months since the code has been live.
In my case I had a high level series of operations, some of which contained a set of sub operations. Rather than expose the detail of each sub operation into the high level code, I created NSOperations which themselves contained NSOperationQueues and enqueued their own sub operations. The code I ended up with was much cleaner and easier to maintain.
I read extensively into NSOperation and have not seen any commentary that warns against this approach. I reviewed a lot of information online, the Apple documentation, and WWDC videos.
The only possible "drawback" might be the added complexity of understanding and implementing a concurrent operation. Embedding an NSOperationQueue in an NSOperation means that operation becomes concurrent.
So that's a 'YES' from me.
Additional details about concurrent operations:
An NSOperationQueue calls the start method on a normal (non-concurrent) NSOperation and expects the operation to be finished by the time the start call returns. For instance, a piece of code you supplied to NSBlockOperation is complete at the end of the block.
If the work will not be finished by the time the start call returns, then you configure the NSOperation as a concurrent operation, so the NSOperationQueue knows that it has to wait until you tell it that the operation is finished at some later point in time.
For example, concurrent operations are often used to run asynchronous network calls; the start method only starts the network call, which then runs in the background and calls back to the operation when it's finished. You then change the isFinished property of the NSOperation to flag that the work is now complete.
So: normally, when you add operations to an NSOperationQueue, that queue runs those operations in the background. If you put an NSOperationQueue inside an NSOperation, then that operation's work will be done in the background. The operation is therefore concurrent, and you need to flag when the internal NSOperationQueue has finished processing all its operations.
Alternatively, there are methods on NSOperationQueue such as waitUntilAllOperationsAreFinished which could be used to ensure all the work is done before the start call returns. However, these involve blocking threads, so I avoided them; you may feel more comfortable with that approach, as long as you make sure you don't have any side effects from blocking threads.
In my case I was already familiar with Concurrent operations so it was straightforward just to set it up as a Concurrent operation.
Some documentation about concurrent operations:
Concurrency Programming Guide: Configuring Operations for Concurrent Execution
In this example they are detaching a thread to perform work in the background, in our case we would be starting the NSOperationQueue here.
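A simplified sketch of such a concurrent operation wrapping its own internal queue (the class name and the work inside are illustrative, and cancellation/error handling is omitted):

```objective-c
@interface ZipOperation : NSOperation
@end

@implementation ZipOperation {
    NSOperationQueue *_innerQueue;
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start {
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    _innerQueue = [[NSOperationQueue alloc] init];
    NSOperation *work = [NSBlockOperation blockOperationWithBlock:^{
        // the actual sub-operations' work goes here
    }];

    // A sentinel operation that depends on the real work and flags
    // this outer operation as finished once everything is done.
    __weak typeof(self) weakSelf = self;
    NSOperation *done = [NSBlockOperation blockOperationWithBlock:^{
        [weakSelf completeOperation];
    }];
    [done addDependency:work];
    [_innerQueue addOperations:@[work, done] waitUntilFinished:NO];
}

- (void)completeOperation {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished  = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}
@end
```

The manual willChangeValueForKey:/didChangeValueForKey: calls matter: the queue observes isExecuting and isFinished via KVO to know when a concurrent operation is done.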

Should my block based methods return on the main thread or not when creating an iOS cloud integration framework?

I am in the middle of creating a cloud integration framework for iOS. It lets you save, query, count, and remove objects, with both synchronous and asynchronous (selector/callback and block-based) implementations. What is the correct practice: running the completion blocks on the main thread or on a background thread?
For simple cases, I just parameterize it and do all the work I can on secondary threads:
By default, callbacks will be made on any thread (wherever it is most efficient and direct, typically as soon as the operation has completed). This is the default because messaging via the main thread can be quite costly.
The client may optionally specify that the message must be made on the main thread, so it only costs one extra line or argument. If safety is more important than efficiency, then you may want to invert the default.
You could also attempt to batch and coalesce some messages, or simply use a timer on the main run loop to vend.
Consider both joined and detached models for some of your work.
If you can reduce the task to a result (remove the capability for incremental updates, if not needed), then you can simply run the task, do the work, and provide the result (or error) when complete.
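A sketch of what that parameterization might look like (the method name and signature are hypothetical, not from any real framework):

```objective-c
// Completion runs on a background thread by default; the caller can
// opt in to main-thread delivery with a single extra argument.
- (void)saveObject:(id)object
      onMainThread:(BOOL)onMainThread
        completion:(void (^)(NSError *error))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSError *error = nil;
        // ... do the actual save work here ...
        if (completion) {
            if (onMainThread) {
                dispatch_async(dispatch_get_main_queue(), ^{ completion(error); });
            } else {
                completion(error); // deliver directly on the worker thread
            }
        }
    });
}
```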
Apple's NSURLConnection class calls back to its delegate methods on the thread from which it was initiated, while doing its work on a background thread. That seems like a sensible procedure. It's likely that a user of your framework will not enjoy having to worry about thread safety when writing a simple callback block, as they would if you created a new thread to run it on.
The two sides of the coin: If the callback touches the GUI, it has to be run on the main thread. On the other hand, if it doesn't, and is going to do a lot of work, running it on the main thread will block the GUI, causing frustration for the end user.
It's probably best to put the callback on a known, documented thread, and let the app programmer make the determination of the effect on the GUI.

Identify a GCD thread

I have written a Core Data abstraction class which holds the persistent store, object model, and object context. To make multithreading easier, I have written the accessor for the object context so that it returns an instance that is only available to the current thread, using [NSThread currentThread] to identify the threads.
This works perfectly as long as I don't use GCD, which I want to use as a replacement for the old NSThreads. So my question is: how do I identify a GCD thread? The question applies to both iOS and Mac OS X, but I guess it's the same on both platforms.
You could check whether dispatch_get_current_queue() returns anything. I like Jeremy's idea of transitioning to a CD-context-per-queue instead of CD-context-per-thread model using the queue's context storage though.
Perhaps you can store the Core Data context for each queue in the GCD context using dispatch_set_context().
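A rough sketch of that idea (manual reference counting assumed for brevity; the queue label is illustrative):

```objective-c
// One Core Data context per queue, stored in the queue's context pointer.
dispatch_queue_t queue = dispatch_queue_create("com.example.coredata", NULL);
NSManagedObjectContext *moc = [[NSManagedObjectContext alloc] init];
dispatch_set_context(queue, [moc retain]); // the queue now owns the context

dispatch_async(queue, ^{
    NSManagedObjectContext *ctx =
        (NSManagedObjectContext *)dispatch_get_context(queue);
    // use ctx only from blocks submitted to this queue
});
```

On newer systems, dispatch_queue_set_specific / dispatch_get_specific provide a keyed variant of the same idea.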
The contextForCurrentThread helper method in MagicalRecord is very similar to what you said (i.e. it keeps one context per thread). But a GCD block, while running on a single queue, can potentially run on any thread managed by GCD, which will cause random crashes. See this article: http://saulmora.com/2013/09/15/why-contextforcurrentthread-doesn-t-work-in-magicalrecord/

Grand Central Dispatch: Queue vs Semaphore for controlling access to a data structure?

I'm doing this with Macruby, but I don't think that should matter much here.
I've got a model which stores its state in a dictionary data structure. I want concurrent operations to be updating this data structure sporadically. It seems to me like GCD offers a few possible solutions to this, including these two:
wrap any code that accesses the data structure in a block sent to some serial queue
use a GCD semaphore, with client code sending wait/signal calls as necessary when accessing the structure
When the queue in the first solution is called synchronously, it seems pretty much equivalent to the semaphore solution. Does either of these solutions have clear advantages that I'm missing? Is there a better alternative I'm missing?
Also: would it be straightforward to implement a read-write (shared-exclusive) lock with GCD?
Serial Queue
Pros
no locks are needed
Cons
tasks can't run concurrently on a serial queue
GCD Semaphore
Pros
tasks can run concurrently
Cons
it uses a lock, even though it is a lightweight one
We can also use atomic operations instead of a GCD semaphore; they can be even lighter-weight in some situations.
Synchronization Tools - Atomic Operations
Guarding access to the data structure with dispatch_sync on serial queue is semantically equivalent to using a dispatch semaphore, and in the uncontended case, they should both be very fast. If performance is important, benchmark and see if there's any significant difference.
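For concreteness, here are the two approaches side by side, guarding a shared dictionary (shown in Objective-C rather than MacRuby; the key and value are illustrative):

```objective-c
NSMutableDictionary *state = [NSMutableDictionary dictionary];

// Option 1: serial queue. All access is funnelled through one queue.
dispatch_queue_t stateQueue = dispatch_queue_create("com.example.state", NULL);
dispatch_sync(stateQueue, ^{
    [state setObject:@"value" forKey:@"key"];
});

// Option 2: semaphore used as a mutex (initial count of 1).
dispatch_semaphore_t lock = dispatch_semaphore_create(1);
dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER);
[state setObject:@"value" forKey:@"key"];
dispatch_semaphore_signal(lock);
```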
As for the readers-writer lock, you can indeed construct one on top of GCD—at least, I cobbled something together the other day here that seems to work. (Warning: there be dragons/not-well-tested code.) My solution funnels the read/write requests through an intermediary serial queue before submitting to a global concurrent queue. The serial queue is suspended/resumed at the appropriate times to ensure that write requests execute serially.
I wanted something that would simulate a private concurrent dispatch queue that allowed for synchronization points, something that's not exposed in the public GCD API but is strongly hinted at for the future.
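As an aside, on systems that have dispatch barriers (iOS 4.3 / OS X 10.7 and later), a readers-writer pattern can be built more directly on a private concurrent queue. A minimal sketch, with an illustrative shared dictionary:

```objective-c
NSMutableDictionary *dict = [NSMutableDictionary dictionary];
dispatch_queue_t rwQueue =
    dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);

// Readers run concurrently with one another.
__block id value = nil;
dispatch_sync(rwQueue, ^{
    value = [dict objectForKey:@"key"];
});

// A writer's barrier block waits for in-flight readers, runs alone,
// then lets readers resume.
dispatch_barrier_async(rwQueue, ^{
    [dict setObject:@"newValue" forKey:@"key"];
});
```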
Adding a warning (which ends up being a con for dispatch queues) to the previous answers.
You need to be careful about how the dispatch queues are called, as there are some hidden scenarios that were not immediately obvious to me until I ran into them.
I replaced NSLock and @synchronized in a number of critical sections with dispatch queues, with the goal of having lightweight synchronization. Unfortunately, I ran into a situation that results in a deadlock, and I have traced it back to my use of the dispatch_barrier_async / dispatch_sync pattern. It would seem that dispatch_sync may opportunistically call its block on the main queue (if it is already executing there) even when you create a concurrent queue. This is a problem, since calling dispatch_sync on the current dispatch queue causes a deadlock.
I guess I'll be moving backwards and using another locking technique in these areas.