NSManagedObjectContext with different processes - objective-c

I have two processes that are talking to the same persistent store. I save the context in one process and post a distributed notification. The other process sees the distributed notification and fetches its data again, but still receives the old data. Is there some kind of "flushing" I need to do so that the other process gets the correct data from the store?
EDIT: So, it turns out that I was flushing the data correctly. NSManagedObjectContext has a "refreshObject:mergeChanges:" method that you use to do this. The issue appears to be timing-related. Let's say I have two processes, A and B. Process A is the main process and does a save to the database. Then Process B does a save to the database and sends a notification to Process A that it has done so, and Process A fetches the new data. I've found that if Process A's save and Process B's save are too close together, the old data is fetched by Process A even if I refresh. If I force there to be some time between the two saves, then it works out correctly.
Obviously this seems like some kind of race condition, where perhaps the notification is sent before the data is actually saved to the database. However, the notification is sent from the managed object's didSave method, at which point the context has already saved.
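For reference, the refresh I am doing looks roughly like this (the handler and context names are placeholders rather than my exact code):

    // Runs when the distributed notification from the other process arrives.
    - (void)otherProcessDidSave:(NSNotification *)note
    {
        for (NSManagedObject *object in [self.context registeredObjects]) {
            // Drop the cached row data so the next access faults back
            // to the persistent store instead of the old snapshot.
            [self.context refreshObject:object mergeChanges:NO];
        }
        // ...then re-fetch; this is where stale data still shows up
        // when the two saves happen close together.
    }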

You should look into the merge policy concept, which governs how different contexts attached to the same persistent store coordinator reconcile and exchange values.
Here -> http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdChangeManagement.html#//apple_ref/doc/uid/TP30001201-CJBDBHCB
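As a rough sketch (my names, not something from the linked article), configuring the context that re-fetches might look like this:

    // Let the store's version win when an in-memory snapshot conflicts
    // with what the other process has already saved.
    [context setMergePolicy:NSMergeByPropertyStoreTrumpMergePolicy];

    // Core Data may also serve cached row data instead of re-reading the
    // store; a staleness interval of 0 forces every fault to hit the store.
    [context setStalenessInterval:0.0];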
That should resolve the problem.
Hope this can help.

Related

Asynchronous Object Construction in Core Data

I'm currently working on an app with a reasonably complex Core Data model. The data model currently has 10 tables in it, with a bunch of relationships set between them. The data for the model is obtained piecemeal from a remote server. In order to minimize the amount of traffic to/from the server, the server API passes object IDs first, giving me a chance to discover whether I have already stored the objects. If not, I can ask the server for the full objects and store them. However, those objects can have references to other objects, for which I will need to follow the same process: check whether I have the object(s) and, if not, grab them from the server. The Core Data model includes fields for the server IDs, which I use to validate and construct Core Data's object graph.
This creates a situation where objects will have been instantiated in Core Data, but won't have been completely constructed as they may be waiting for referenced objects to be returned by the server (which may, in turn, need to wait for their own reference objects).
So my first attempt to deal with this was to create a semaphore that would not allow the object context to be saved (I only save the context in one place) until all objects are downloaded and the object graph is constructed. The problem I ran into was that the context was being saved anyway, without me asking. This results in a ton of changes propagating through NSFetchedResultsController as objects are downloaded from the server and the object graph is being constructed. Moreover, the propagated objects may not be complete.
Has anyone dealt with anything like this? I think this could all work if I could explicitly control when Core Data saves, but that does not appear to be possible. Or am I missing something?
UPDATE
I was missing something. I was under the impression that NSFetchedResultsController received updates when the context is saved. This is not true. It receives updates whenever processPendingChanges is called on the context, which occurs at the end of an event cycle. In the past, I've always used two contexts to keep updates separate from the UI, but this project had a deadline and existing code that kept me from refactoring. Given this new information, I think a separate context will fix my problem.
That is an extremely expensive way to sync with a server. Is there a reason your server can't respond to "changed since X" calls and give you everything? In your current design you are spending more time opening and closing sockets than you are receiving data.
Be that as it may, you want to do all of this processing in a secondary context that is connected directly to the NSPersistentStoreCoordinator. When it saves you want to capture the NSManagedObjectContextDidSaveNotification and then have your UI context consume that notification. That will update your UI when your server sync is complete.
This will keep your syncing 100% isolated from the UI and allow the UI to save or do whatever else it needs to do while you are working with the server. I would not use a parent/child design here. There is no reason to.
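A hedged sketch of that wiring (the property and selector names here are mine, not anything from the question):

    - (void)startSyncing
    {
        // A second context attached directly to the coordinator.
        self.syncContext = [[NSManagedObjectContext alloc] init];
        [self.syncContext setPersistentStoreCoordinator:self.persistentStoreCoordinator];

        // Observe saves from the sync context only.
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(syncContextDidSave:)
                                                     name:NSManagedObjectContextDidSaveNotification
                                                   object:self.syncContext];
    }

    - (void)syncContextDidSave:(NSNotification *)note
    {
        // Consume the notification on the main thread, where the UI context lives.
        [self.uiContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:)
                                         withObject:note
                                      waitUntilDone:NO];
    }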
You access a Core Data database via the NSManagedObjectContext class.
Each context object must belong to a single thread, and any NSManagedObjects that context creates belong to the same thread.
Do not read or write any managed object from a thread other than the one that created it. If you do, you'll end up with unpredictable and impossible to debug data corruption problems.
However, you can have multiple NSManagedObjectContext instances for a single core data database, each one on a different thread, and you can merge any changes made to the context in one thread over to a context on another thread.
So, basically, you have a "main" NSManagedObjectContext which is used on the main thread, and used for almost all your operations. And then when you need to do something on another thread you create a "child" context for that thread, make all your changes, then merge those changes back to the main context on the main thread.
You can find specific details how to implement this from Apple's official documentation. Start reading here:
https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreData/Articles/cdConcurrency.html#//apple_ref/doc/uid/TP40003385-SW1
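A minimal sketch of the background side of that pattern, assuming coordinator is the shared NSPersistentStoreCoordinator:

    // On the background thread: a context confined to this thread.
    NSManagedObjectContext *workerContext = [[NSManagedObjectContext alloc] init];
    [workerContext setPersistentStoreCoordinator:coordinator];

    // ...create and modify managed objects here, on this thread only...

    NSError *error = nil;
    if ([workerContext save:&error]) {
        // The save posts NSManagedObjectContextDidSaveNotification; the main
        // thread can merge it into the main context with
        // -mergeChangesFromContextDidSaveNotification:.
    } else {
        NSLog(@"Background save failed: %@", error);
    }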

Is there a way to infer the last time the local Core Data store was completely synced to iCloud?

I am wondering whether it is possible to determine at which time my local Core Data store was last synchronized with iCloud. The direction from iCloud is trivial: you can just take the time of the last NSPersistentStoreDidImportUbiquitousContentChangesNotification. However, I could not find any method to check when my local changes were completely transmitted to iCloud.
Any ideas?
You can't find this out. The only information available is the transaction logs, but those don't tell you when the data was actually synced. Transaction logs are created when you save changes. At some later time (probably soon, but there's no guarantee) they get synced to the iCloud service. You don't get notified of this.
It might be possible to infer sync timing via out-of-band communication from other devices. For example, when you receive the did-import notification, write the current time to NSUbiquitousKeyValueStore. Then monitor that key to see when changes are received at the other end. That would at least tell you that some changes had been received by some other device. At best though, it would notify you of when changes had been downloaded at the other end, not when they had been successfully uploaded to the iCloud service.
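For instance, a hedged sketch of that idea (the key format and deviceIdentifier are made up):

    // In the handler for NSPersistentStoreDidImportUbiquitousContentChangesNotification:
    // record when this device received changes, under a made-up per-device key.
    NSUbiquitousKeyValueStore *store = [NSUbiquitousKeyValueStore defaultStore];
    NSString *key = [NSString stringWithFormat:@"lastImport-%@", deviceIdentifier];
    [store setObject:[NSDate date] forKey:key];
    [store synchronize];

    // Other devices observe NSUbiquitousKeyValueStoreDidChangeExternallyNotification
    // and read that key to learn their changes reached at least one peer.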

iCloud Key Value Sync - first launch

I am using iCloud to sync user preferences between devices. On the device, these are stored in an array of 'Favorite Teams' in NSUserDefaults, and I am using MKiCloudSync to mirror them to the NSUbiquitousKeyValueStore. Changes on one device are propagating to the second device well.
But I am not sure how to prevent the cloud data from being wiped on the first launch after a new install. Here is what's happening:
Device A launches for the first time. App finds nothing in the cloud. User adds multiple items to the array in NSUserDefaults. Changes are synced immediately to the cloud.
Device B launches for the first time, but is offline. User adds a single item to the NSUserDefaults array, then remembers the app supports iCloud, so finds some wifi instead.
Device B pushes its version of the defaults to the cloud (with only one item). Device A pulls it, effectively wiping out all of the teams added on Device A.
Is this a limitation of iCloud, or is my implementation naive? The docs address a similar issue where a 'highest level' is synced, adding application logic to never overwrite this value with a smaller one. That's fine when there is some clear business logic to adhere to (the higher level is always the one to keep), but when the data is more arbitrary, I don't see how I can determine what to do.
Or is it because I am using an array in NSUserDefaults for 'Favorite Teams' and replacing it wholesale? If I used separate keys for each team, perhaps they will be synced independently, based on time code?
Any time you sync a value for a specific key, you run the risk that it will be changed by a different device using the same account. The iCloud service chooses the winning value for you, makes updates, and notifies you when it's done. This is a limitation of iCloud and of your app, and is a simple example of why syncing is hard. What if your step 2 above looked like this:
Device B launches for the first time, and iCloud is available. The app downloads the current data from device A. The user changes their mind and deletes all the data they created on device A. Then they add a single new item.
Well, what then? Step 3 still happens exactly as you describe it, except that this time the incoming data is what the user wants. You could refactor your data but the same kind of situation will still be possible.
One option is to keep a non-syncing local copy of the data, so that you can compare incoming changes with the previous local state. What to do when they're different is up to you. Just don't forget that even dramatic changes might well be exactly what the user wants, and not a syncing issue that needs to be fixed. Or, they might be something that would lose data the user wants to keep. Resolving this conflict is your job.
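One possible shape for that local copy (the key names are illustrative, and the resolution policy is deliberately left open):

    // When iCloud reports external changes, compare the incoming value
    // with the last state this device knew about before accepting it.
    - (void)cloudDidChange:(NSNotification *)note
    {
        NSArray *incoming = [[NSUbiquitousKeyValueStore defaultStore]
                                arrayForKey:@"FavoriteTeams"];
        NSArray *lastKnown = [[NSUserDefaults standardUserDefaults]
                                arrayForKey:@"FavoriteTeamsLocalShadow"];

        if ([lastKnown count] > 0 && [incoming count] < [lastKnown count]) {
            // The list shrank: maybe a fresh install overwrote it, or maybe
            // the user really did delete teams on another device. Deciding
            // which is application policy, not something iCloud can answer.
        }

        // ...apply whatever resolution you choose, then update the shadow...
        [[NSUserDefaults standardUserDefaults] setObject:incoming
                                                  forKey:@"FavoriteTeamsLocalShadow"];
    }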

Core Data: how to undo operations once managed objects are saved with the context

I am trying to implement downloading of bulk data from several tables on the server.
In my case there are 16 tables, for which I will be firing 10 requests to the server. That means I have done some logical grouping of related tables, but effectively every table is related to the others through one relationship or another.
I need to consider three cases while doing downloading:
Saving data to each table at local.
Managing relationships between inserted objects.
Handling the situation where one of the requests fails during download, say the 8th request.
I will be following this approach for each response:
Inserting data in managed object context.
Managing relationships by running fetch requests with NSPredicates and associating the related objects (a sketch follows this list).
Saving the context.
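A hedged sketch of step 2, with invented entity and attribute names (assuming each record carries a remoteID to match on):

    // Find the already-inserted Department for this Employee record,
    // then wire up the relationship.
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Department"];
    [request setPredicate:[NSPredicate predicateWithFormat:@"remoteID == %@",
                              [employeeJSON objectForKey:@"departmentID"]]];

    NSError *error = nil;
    NSArray *matches = [context executeFetchRequest:request error:&error];
    Department *department = [matches lastObject];
    if (department != nil) {
        employee.department = department; // associate the related objects
    }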
In case of a response failure, I have two options:
Next time continue from the failed response.
Revert all saved data to its previous state.
The first approach may lead to some data inconsistency, so I am going with the second.
I know that if a managed object context is not saved, we can revert the changes, but is it possible to revert the changes if the managed object context is saved?
Any useful suggestions from the community would be appreciated.
Is it possible to revert the changes if the managed object context is saved?
After saving? Maybe, but it could be tricky. If you set up a separate managed object context for your network operations, and give it an NSUndoManager, you could later on tell the undo manager to roll everything back to the previous state.
It would be simpler to just not save changes until you're finished, though. Using an undo manager doesn't really help much: the memory needed to store all the undo actions will at least match the memory used by keeping all of the unsaved changes around until you're finished. If you're working in a separate managed object context (whether a child context or a completely separate one), handling the error case is as simple as letting the MOC be deallocated without ever saving changes.
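A sketch of the no-save approach (assuming you hold a reference to the coordinator, and allRequestsSucceeded is a stand-in for your own bookkeeping):

    // Scratch context: all the responses are inserted here, unsaved.
    NSManagedObjectContext *scratch = [[NSManagedObjectContext alloc] init];
    [scratch setPersistentStoreCoordinator:coordinator];

    // ...run the requests, inserting objects and wiring relationships...

    if (allRequestsSucceeded) {
        NSError *error = nil;
        if (![scratch save:&error]) {
            NSLog(@"Save failed: %@", error);
        }
    }
    // On failure, simply discard the scratch context. Nothing was ever
    // written to the store, so there is nothing to revert.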

Persisting items being uploaded via web service to disk

I have a launchd daemon that every so often uploads some data via a web service using NSOperationQueue.
I need to be able to persist this data so that it can later be re-uploaded in the event of failure, even across sessions (in case of a computer shutdown, for example).
This is not a high-load application; it receives items intermittently, no more than 1 or 2 a minute, often with gaps of several hours in between.
My initial implementation without this persistence in place is as follows:
Daemon receives data.
Daemon parses data into an object of type MyDataObject.
Daemon creates instance of NSOperation subclass with MyDataObject as the object to upload and adds it to its NSOperationQueue.
NSOperationQueue goes through and uploads MyDataObject via web service as it is able.
This part all functions just fine. The part I now want to add is the persistence in case of web service failure, computer shut down, etc.
It seems like I could use an NSMutableArray of MyDataObjects along with NSKeyed(Un)archiver to hold all the items which had not yet been uploaded, observing the -isFinished key of each operation to remove items from the array. But it seems like there should be a simpler way to do this, with less room for things to go wrong, especially as far as thread safety goes.
Can somebody point me in the right direction?
You could add two operations per item. The first would store the item to local storage, and the second (the upload) would depend on the first and would remove the item from local storage on success.
Then, when you want to restore any items from local storage, you create only the store-to-the-cloud operations, not the store-locally operations. As before, they remove the items from local storage only if they succeed, and if they don't succeed, they leave the items in local storage for the next attempt.
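A hedged sketch of that pairing (the two operation classes are invented; MyDataObject is from the question):

    // First: write the item to disk so it survives failure or shutdown.
    PersistOperation *persist =
        [[PersistOperation alloc] initWithDataObject:dataObject];

    // Second: upload, and delete the local copy only when the upload
    // succeeds; on failure the file stays put for the next attempt.
    UploadOperation *upload =
        [[UploadOperation alloc] initWithDataObject:dataObject];
    [upload addDependency:persist];

    [queue addOperation:persist];
    [queue addOperation:upload];

    // On relaunch: enumerate the persisted files and enqueue only
    // UploadOperations for them, exactly as described above.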