Core data crash occurring since iOS 10 - objective-c

I've been struggling to resolve a crash inside the SQLite/Core Data layer that started with iOS 10 and that I can't reproduce locally. It's occurring very infrequently, somewhere in the realm of 0.2% of sessions in production.
What I know (or at least suspect):
It only happens on iOS 10 and above.
Most often occurs while saving the context, but may also be occurring during a Core Data fetch request.
Occurring fairly rarely (ballpark rate of 0.15% of sessions)
I have run stress tests with the concurrency debugging flag enabled, as well as some of the Xcode memory management tools. No issues were detected.
Tested for memory leaks.
I've never been able to reproduce this stacktrace in a development environment.
No exceptions are thrown prior to the crash, even though the entire code path is wrapped in exception handling.
This action is performed within a block, and the app is in the foreground.
Occurs seemingly at random during normal app operation. (Not at initialization time or anything special)
It's a SIGABRT crash:
libsystem_kernel.dylib   0x00000001841c3014 __pthread_kill + 4
libsystem_c.dylib        0x0000000184137400 abort + 136
libsystem_malloc.dylib   0x0000000184207a5c nanozone_error + 328
libsystem_malloc.dylib   0x0000000184209028 nano_realloc + 644
libsystem_malloc.dylib   0x00000001841fb240 malloc_zone_realloc + 176
libsqlite3.dylib         0x0000000185730c34 sqlite3_value_text + 1220
libsqlite3.dylib         0x0000000185777f38 sqlite3_rekey + 1564
libsqlite3.dylib         0x000000018578df78 sqlite3_rekey + 91740
libsqlite3.dylib         0x0000000185791c88 sqlite3_rekey + 107372
libsqlite3.dylib         0x000000018571df98 sqlite3_log + 86448
libsqlite3.dylib         0x0000000185757780 sqlite3_bind_int + 11992
libsqlite3.dylib         0x00000001856f1c80 sqlite3_exec + 35188
libsqlite3.dylib         0x00000001856eb608 sqlite3_exec + 8956
libsqlite3.dylib         0x00000001856ea838 sqlite3_exec + 5420
libsqlite3.dylib         0x00000001856e9f24 sqlite3_exec + 3096
libsqlite3.dylib         0x00000001856e9ae0 sqlite3_exec + 2004
CoreData                 0x00000001874f1284 -[NSSQLiteConnection prepareSQLStatement:] + 468
CoreData                 0x00000001876166f0 -[NSSQLiteConnection updateRow:forRequestContext:] + 496
CoreData                 0x00000001876c3430 _writeChangesForSaveRequest + 1596
CoreData                 0x00000001876c4958 _executeSaveChangesRequest + 312
CoreData                 0x00000001876ba7f4 -[NSSQLSaveChangesRequestContext executeRequestUsingConnection:] + 40
CoreData                 0x00000001875cdaf8 __52-[NSSQLDefaultConnectionManager handleStoreRequest:]_block_invoke + 256
libdispatch.dylib        0x000000018407e1bc _dispatch_client_callout + 12
libdispatch.dylib        0x000000018408b7f0 _dispatch_barrier_sync_f_invoke + 80
CoreData                 0x00000001875cd994 -[NSSQLDefaultConnectionManager handleStoreRequest:] + 204
CoreData                 0x0000000187693f80 -[NSSQLCoreDispatchManager routeStoreRequest:] + 284
CoreData                 0x00000001875fb7e4 -[NSSQLCore dispatchRequest:withRetries:] + 196
CoreData                 0x00000001875f7560 -[NSSQLCore processSaveChanges:forContext:] + 200
CoreData                 0x00000001874f8360 -[NSSQLCore executeRequest:withContext:error:] + 744
CoreData                 0x00000001875da2f4 __65-[NSPersistentStoreCoordinator executeRequest:withContext:error:]_block_invoke + 3248
CoreData                 0x00000001875d2bf0 -[NSPersistentStoreCoordinator _routeHeavyweightBlock:] + 272
CoreData                 0x00000001874f7f20 -[NSPersistentStoreCoordinator executeRequest:withContext:error:] + 404
CoreData                 0x00000001875195ac -[NSManagedObjectContext save:] + 2768
Here's what the code generally looks like:
// Inside MyManagedObject's init method, the entity is looked up and the designated
// initializer is called with a nil context:
NSEntityDescription *desc = [NSEntityDescription entityForName:NSStringFromClass([MyManagedObject class])
                                         inManagedObjectContext:context];
MyManagedObject *object = [[MyManagedObject alloc] initWithEntity:desc
                                    insertIntoManagedObjectContext:nil];

// later on...
[context performBlock:^{
    // Fetch another (different) object from Core Data
    NSError *error = nil;
    NSArray *fetchResults = [context executeFetchRequest:request error:&error];

    // Change some properties of object with values from the fetched results
    MyManagedObject *fetchedObject = fetchResults.firstObject;
    object.property = fetchedObject.property;

    // Insert the object
    [context insertObject:object];

    // Save the context
    [context save:&error];
}];
Any ideas would be greatly appreciated.
Update:
I found this entry in the iOS 10.2 security release notes, which describes a change that may have exposed some existing issue. It's not clear exactly what changed, or how it might cause problems, but it seems pretty likely that this is related somehow.
https://support.apple.com/en-us/HT207422
Impact: Processing malicious strings may lead to an unexpected application termination or arbitrary code execution
Description: A memory corruption issue existed in the processing of strings. This issue was addressed through improved bounds checking.
CVE-2016-7663

If the majority of your code base is, as you suggest, asynchronous, and you are trying to perform a synchronous save operation within an asynchronous block, there is every reason to suspect that this is why the crash report points at the NSPersistentStoreCoordinator.
The key is that the NSPersistentStoreCoordinator (PSC) is failing to properly coordinate the data save. Unless I'm mistaken, the crash report indicates that the PSC is locked (inside a dispatch barrier) when you ask it to respond to the save: call for that MOC.
In my humble opinion, your problem most likely stems from your call to performBlock:. In this code you're performing a fetch request, then updating a property, then inserting the object into the MOC, then saving, all in the same block. These are very different operations, taking different amounts of processing power and time, all dumped into one single concurrency block.
Also, how you instantiate a property when using concurrency and blocks is important. You may need to check where in your code it is most appropriate to instantiate your properties.
So some questions...
Do you need every line of this code in a performBlock? Consider that, unless you're blocking your UI, the fetch request and the update to your property may be OK outside the call to performBlock.
If you do need every line of this code in a core data concurrency block such as performBlock, have you considered instead embedding your call to save in a "block-within-a-block" and using performBlockAndWait?
The Apple developer website has an example of embedding a save call in a performBlockAndWait block, reproduced (in part) below:
NSManagedObjectContext *moc = …; // Our primary context on the main queue
NSManagedObjectContext *private = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
[private setParentContext:moc];

[private performBlock:^{
    // Complete your fetch request and update the managed object's property.
    NSError *error = nil;
    if (![private save:&error]) {
        NSLog(@"Error saving context: %@\n%@", [error localizedDescription], [error userInfo]);
        abort();
    }
    [moc performBlockAndWait:^{
        NSError *error = nil;
        if (![moc save:&error]) {
            NSLog(@"Error saving context: %@\n%@", [error localizedDescription], [error userInfo]);
            abort();
        }
    }];
}];
If you're able to update your question with a little more code and a more detailed description I might be able to provide a more accurate fix for your specific problem.
Also I'd recommend you do some research...
Despite the age of the book, the concepts of concurrency are still very well explained in "Core Data, 2nd Edition: Data Storage and Management for iOS, OS X, and iCloud" (Jan 2013, The Pragmatic Bookshelf) by Marcus S. Zarra, in particular Chapter 4, "Performance Tuning", and Chapter 5, "Threading".
Another valuable book on Core Data, from Apress, is "Pro iOS Persistence Using Core Data" by Michael Privat and Robert Warner.

The crash no longer occurs as of iOS 10.3. The root cause is still unknown; my assumption is that some iOS memory management regression was introduced in 10.2 and fixed in 10.3.

Related

How can a RLMRealm database grow from 25Mb to 11Gb in Objective C?

I've been using Realm.io in one of my projects. I've used it in a few iOS projects, but this is my first Objective-C desktop app using it, so this question is based on OS X usage. Before going any further, it's also worth mentioning that the test machine is running El Capitan, so I'm aware this is beta software.
Realm is loaded as a CocoaPod. Everything works fine and I'm able to run queries, with no problems initializing or anything, but something makes me think I'm not closing my calls correctly.
My objects are of two main types: one holds a 'group' and the other holds an actual 'object'. The app is a photo uploader, so it reads Apple's Photos library, indexes all the media objects and groups, then uploads them.
On the first run, or if I delete the Realm completely so we're starting from scratch, everything flies through really quickly. On the next run my queries seem to run slower, and the database first went from 25 MB to 50 MB; then, after an hour, when I checked again it was at around 11 GB.
My main use of Realm is in a singleton, but I'm using a background queue that queues a new job for every object: when it discovers a photo, it queues another job to check whether it exists in the database; if not, it creates it, and if it does, it updates the existing information. While this is running, unless I obtain the Realm inside the job I get thread errors, so it's obtained each time.
Below is an example of one of my calls. Can anyone suggest things I might be doing wrong, or is there any way to control or compact the size of the database now that it's so big?
- (void)saveMediaObject:(MLMediaObject *)mediaObject mediaGroup:(MLMediaGroup *)mediaGroup
{
    [jobQueue addOperationWithBlock:^{
        NSLog(@"saveMediaObject");

        lastScanResult = [NSDate date];

        RLMRealm *realm = [RLMRealm defaultRealm];

        SPMediaObject *spMediaObject = [SPMediaObject objectInRealm:realm forPrimaryKey:[mediaObject identifier]];
        if (spMediaObject == nil)
        {
            spMediaObject = [[SPMediaObject alloc] init];
            spMediaObject.identifier = [mediaObject identifier];
        }

        [realm beginWriteTransaction];

        spMediaObject.lastSeen = [NSDate date];

        spMediaObject.versionURL = [[mediaObject URL] path];
        spMediaObject.versionMimeType = [self mimeTypeForExtension:[[spMediaObject.versionURL pathExtension] lowercaseString]];

        spMediaObject.originalURL = [[mediaObject originalURL] path];
        spMediaObject.originalMimeType = [self mimeTypeForExtension:[[spMediaObject.originalURL pathExtension] lowercaseString]];

        if ([mediaObject name] != nil)
        {
            spMediaObject.caption = [mediaObject name];
        }
        else
        {
            spMediaObject.caption = @"";
        }

        [realm addOrUpdateObject:spMediaObject];
        [realm commitWriteTransaction];
    }];
}
Steve.
Realm's docs on file size should provide some insight as to what's happening here and how to mitigate the problem:
You should expect a Realm database to take less space on disk than an equivalent
SQLite database. If your Realm file is much larger than you expect, it may be because
you have a RLMRealm that is
referring to an older version of the data in the database.
In order to give you a consistent view of your data, Realm only updates the
active version accessed at the start of a run loop iteration. This means that if
you read some data from the Realm and then block the thread on a long-running
operation while writing to the Realm on other threads, the version is never
updated and Realm has to hold on to intermediate versions of the data which you
may not actually need, resulting in the file size growing with each write.
The extra space will eventually be reused by future writes, or may be compacted
— for example by calling writeCopyToPath:error:.
To avoid this issue, you may call invalidate
to tell Realm that you no longer need any of the objects that you've read from the Realm so far,
which frees us from tracking intermediate versions of those objects. The Realm will update to
the latest version the next time it is accessed.
You may also see this problem when accessing Realm using Grand Central Dispatch.
This can happen when a Realm ends up in a dispatch queue's autorelease pool as
those pools may not be drained for some time after executing your code. The intermediate
versions of data in the Realm file cannot be reused until the
RLMRealm object is deallocated.
To avoid this issue, you should use an explicit autorelease pool when accessing a Realm
from a dispatch queue.
If those suggestions don't help, the Realm engineering team would be happy to profile your project to determine ways to minimize file size growth. You can email code privately to help@realm.io.
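For illustration, here is a rough sketch of both mitigations applied to the job block from the question (jobQueue and the SPMediaObject update are as defined there; this is an assumption-laden sketch, not Realm's prescribed pattern):
[jobQueue addOperationWithBlock:^{
    // Explicit autorelease pool so the RLMRealm instance is released as soon as the job ends.
    @autoreleasepool {
        RLMRealm *realm = [RLMRealm defaultRealm];

        [realm beginWriteTransaction];
        // ... create or update the SPMediaObject exactly as in the original code ...
        [realm commitWriteTransaction];

        // Tell Realm this thread no longer needs the objects read so far,
        // so it does not have to keep intermediate versions alive.
        [realm invalidate];
    }
}];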

Handling errors in addPersistentStoreWithType

I am trying to find information on handling errors when creating a persistent store coordinator on the iPhone. I have implemented lightweight migration:
NSError *error = nil;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                         [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
_persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:options error:&error]) {
    /*
     Replace this implementation with code to handle the error appropriately.

     abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.

     Typical reasons for an error here include:
     * The persistent store is not accessible;
     * The schema for the persistent store is incompatible with the current managed object model.
     Check the error message to determine what the actual problem was.

     If the persistent store is not accessible, there is typically something wrong with the file path. Often, a file URL is pointing into the application's resources directory instead of a writeable directory.

     If you encounter schema incompatibility errors during development, you can reduce their frequency by:
     * Simply deleting the existing store:
       [[NSFileManager defaultManager] removeItemAtURL:storeURL error:nil]

     * Performing automatic lightweight migration by passing the following dictionary as the options parameter:
       @{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES}

     Lightweight migration will only work for a limited set of schema changes; consult "Core Data Model Versioning and Data Migration Programming Guide" for details.
     */
    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
    abort();
}
return _persistentStoreCoordinator;
This is based on the code from Apple with the added support for lightweight migration.
I can't find any information on handling errors if the application would still encounter an error here. It seems to me that if the database cannot be created the application can't be used at all.
Do I just ask the user to try reinstalling the application and display relevant information?
Can I keep the abort() statement while adding a prompt about the error or will this cause the application to be rejected by Apple ?
Calling abort() in this situation is out of the question. Any app that crashes will be rejected by Apple. And it does not solve the problem: starting the app again will find the same store file and therefore fail again.
For the same reason, reinstalling the app does not help, and it would be a bad user experience.
Of course, the situation should not occur if the migration has been tested. But if this fatal error occurs and your app cannot open the database, you have to create a fresh database.
The exact steps to take depend on what is stored in the database and whether (and how) you can recover the data. So you could (a rough sketch follows this list):
remove the old database file with [[NSFileManager defaultManager] removeItemAtURL:storeURL error:nil], or copy a default database file from your app's resources to storeURL;
call [_persistentStoreCoordinator addPersistentStoreWithType:...] again to open the new database;
perhaps fill the database again with data from a server, or do whatever else is needed to recreate the data.
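A rough sketch of how those steps might fit together, reusing storeURL, options, and _persistentStoreCoordinator from the question (the error handling and data re-import are app specific and only hinted at here):
if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:options error:&error]) {
    NSLog(@"Could not open store, recreating it: %@, %@", error, error.userInfo);

    // Remove the incompatible store (or copy a default store into place instead).
    [[NSFileManager defaultManager] removeItemAtURL:storeURL error:nil];

    error = nil;
    if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:options error:&error]) {
        // Still failing: inform the user instead of crashing.
        NSLog(@"Could not recreate store: %@, %@", error, error.userInfo);
    } else {
        // Fresh store is open: re-download or rebuild the data as needed.
    }
}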

UIManagedDocument with NSFetchedResultsController and background context

I am trying to get the following working.
I have a table view that displays data fetched from an API. For that purpose I am using an NSFetchedResultsController:
self.fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                                     managedObjectContext:self.database.managedObjectContext
                                                                       sectionNameKeyPath:nil
                                                                                cacheName:nil];
I create my entities in a background context like this:
NSManagedObjectContext *backgroundContext;
backgroundContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundContext.parentContext = document.managedObjectContext;

[backgroundContext performBlock:^{
    [MyAPI createEntitiesInContext:backgroundContext];
    NSError *error = nil;
    [backgroundContext save:&error];
    if (error) NSLog(@"error: %@", error.localizedDescription);
    [document.managedObjectContext performBlock:^{
        [document updateChangeCount:UIDocumentChangeDone];
        [document.managedObjectContext save:nil];
    }];
}];
Now, whenever I get new data (and insert/update entities like shown just above), my NSFetchedResultsController doesn't quite work as it should. In particular, I am always updating one entity (not creating a new one), but my table view shows two entities. Once I restart the app it shows up correctly.
If I perform the creation of the entities ([MyAPI createEntities]) in self.database.managedObjectContext, everything works fine.
Any idea what I am doing wrong? Looking through the existing threads here on SO makes me think that I'm doing it the right way. Again, if I do not do the core data saves in the background context (but on document.managedObjectContext) then it works fine...
I read about a similar problem on the Apple dev forums today. Perhaps this is the same problem as yours, https://devforums.apple.com/message/666492#666492, in which case perhaps there is a bug (or at least someone else with the same issue to discuss it with!).
Assuming it isn't, it sounds like what you want to do should be perfectly possible with nested contexts, and therefore assuming no bugs with UIManagedDocument.
My only reservation is that I've been trying to get batch loading working with UIManagedDocument and it seems that it does not work with nested contexts (https://stackoverflow.com/q/11274412/1347502). I would think one of the main benefits of NSFetchedResultsController is its ability to improve performance through batch loading, so if that can't be done with UIManagedDocument, perhaps NSFetchedResultsController isn't ready for use with UIManagedDocument; but I haven't got to the bottom of that issue yet.
That reservation aside, most of the instruction I've read or viewed about nested contexts and background work uses peer child contexts. What you have described is a parent, child, grandchild configuration. In the WWDC 2012 video "Session 214 - Core Data Best Practices" (around the 16:00 mark), Apple recommends adding another peer context to the parent context for this scenario, e.g.
backgroundContext.parentContext = document.managedObjectContext.parentContext;
The work is performed asynchronously in this context and then pushed up to the parent via a call to save on the background context. The parent would then be saved asynchronously and any peer contexts, in this case the document.managedObjectContext, would access the changes via a fetch, merge, or refresh. This is also described in the UIManagedDocument documentation:
If appropriate, you can load data from a background thread directly
to the parent context. You can get the parent context using
parentContext. Loading data to the parent context means you do not
perturb the child context’s operations. You can retrieve data loaded
in the background simply by executing a fetch.
[Edit: re-reading this it could just be recommending Jeffery's suggestion i.e. not creating any new contexts at all and just using the parent context.]
That being said, the documentation also suggests that typically you do not call save on child contexts but use the UIManagedDocument save methods. This may be an occasion when you do call save, or perhaps it is part of the problem. Calling save on the parent context is more strongly discouraged, as mentioned by Jeffery. Another answer I've read on Stack Overflow recommended only using updateChangeCount: to trigger UIManagedDocument saves, but I've not read anything from Apple on that, so perhaps in this case a call to the UIManagedDocument saveToURL:forSaveOperation:completionHandler: method would be appropriate to get everything in sync and saved.
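For illustration, a minimal sketch of that last option, assuming the document from the question and an overwriting save:
[document saveToURL:document.fileURL
   forSaveOperation:UIDocumentSaveForOverwriting
  completionHandler:^(BOOL success) {
      if (!success) {
          NSLog(@"UIManagedDocument save failed");
      }
  }];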
I guess the next obvious issue is how to notify the NSFetchedResultsController that changes have occurred. I would be tempted to simplify the setup as discussed above, then subscribe to the NSManagedObjectContextObjectsDidChangeNotification and save notifications on the different contexts, and see which, if any, fire when the UIManagedDocument saves, autosaves, or when background changes are saved to the parent (assuming that is allowable in this case). I assume the NSFetchedResultsController is wired to these notifications in order to keep in sync with the underlying data.
Alternatively perhaps you need to manually perform a fetch, merge, or refresh in the main context to get the changes pulled through and then somehow notify NSFetchedResultsController that it needs to refresh?
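For what it's worth, a minimal sketch of the notification idea, assuming the peer backgroundContext suggested above (whether merging is the right move with UIManagedDocument's nested contexts is exactly the open question here):
// Merge background saves into the document's main context so the
// NSFetchedResultsController can see the changes.
[[NSNotificationCenter defaultCenter] addObserverForName:NSManagedObjectContextDidSaveNotification
                                                  object:backgroundContext
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    [document.managedObjectContext mergeChangesFromContextDidSaveNotification:note];
}];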
Personally, I'm wondering whether UIManagedDocument is ready for general consumption; there was no mention of it at WWDC this year, and instead a lengthy discussion of how to build a much more complicated solution was presented: "Session 227 - Using iCloud with Core Data".
In my method that fetches data from the server, I first create the entities and then call these two methods to save the changes to the document:
[self.managedObjectContext performBlock:^{
    // create my entities
    [self.document updateChangeCount:UIDocumentChangeDone];
    [self.document savePresentedItemChangesWithCompletionHandler:^(NSError *errorOrNil) {
        ...
    }];
}];
Because you are updating the results on a different context, I think you will need to call [self.fetchedResultsController performFetch:&error] in your view controller's -viewWillAppear: method.
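A minimal sketch of that suggestion (the tableView outlet is an assumption about your view controller):
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];

    NSError *error = nil;
    if (![self.fetchedResultsController performFetch:&error]) {
        NSLog(@"performFetch error: %@", error.localizedDescription);
    }
    [self.tableView reloadData]; // hypothetical table view outlet
}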
After Updates
OK, you should not be calling [backgroundContext save:&error] or [document.managedObjectContext save:nil]. See: UIManagedDocument Class Reference
You should typically use the standard UIDocument methods to save the document.
If you save the child context directly, you only commit changes to the parent context and not to the document store. If you save the parent context directly, you sidestep other important operations that the document performs.
I had to use -insertedObjects and obtainPermanentIDsForObjects:error: to persist new objects created in a context.
Next, I don't think you need to create a new context to run in the background. document.managedObjectContext.parentContext should be an available background context to run updates in.
Finally, I don't call [document updateChangeCount:UIDocumentChangeDone] very often. This is taken care of by the document automatically. You can still do it any time you want, but it shouldn't be necessary.
Here is how I would call your -createEntitiesInContext: method.
[document.managedObjectContext.parentContext performBlock:^{
    [MyAPI createEntitiesInContext:document.managedObjectContext.parentContext];
    NSSet *objects = [document.managedObjectContext.parentContext insertedObjects];
    if (objects.count > 0) {
        NSError *error = nil;
        [document.managedObjectContext.parentContext obtainPermanentIDsForObjects:[objects allObjects] error:&error];
        if (error) NSLog(@"error: %@", error.localizedDescription);
    }
}];

Restkit-loaded nested Core Data entities cause NSObjectInaccessibleException

I'm using RestKit to grab objects from my RoR service and using Core Data to persist some of the objects (the more static, lookup-table-type objects). TasteTag is one of those persisted objects:
#ifdef RESTKIT_GENERATE_SEED_DB
    NSString *seedDatabaseName = nil;
    NSString *databaseName = RKDefaultSeedDatabaseFileName;
#else
    NSString *seedDatabaseName = RKDefaultSeedDatabaseFileName;
    NSString *databaseName = @"Model.sqlite";
#endif

RKObjectManager *manager = [RKObjectManager objectManagerWithBaseURL:kServerURL];
manager.objectStore = [RKManagedObjectStore objectStoreWithStoreFilename:databaseName usingSeedDatabaseName:seedDatabaseName managedObjectModel:nil delegate:self];

// .. lots of fun object mapping ..

RKManagedObjectMapping *tasteTagMapping = [RKManagedObjectMapping mappingForClass:[TasteTag class]];
[tasteTagMapping mapKeyPath:@"id" toAttribute:@"tasteTagID"];
[tasteTagMapping mapKeyPath:@"name" toAttribute:@"name"];
tasteTagMapping.primaryKeyAttribute = @"tasteTagID";
[[RKObjectManager sharedManager].mappingProvider setMapping:tasteTagMapping forKeyPath:@"taste_tags"];
[[RKObjectManager sharedManager].mappingProvider addObjectMapping:tasteTagMapping];

// .. some more mapping ..
I have the data coming back from the RoR server and it's getting mapped to objects as expected. The Core Data entity also seems mapped fine after RestKit gets the request back:
"<TasteTag: 0x6e87170> (entity: TasteTag; id: 0x6e85d60 <x-coredata://03E4A20A-21F2-4A2D-92B4-C4424893D559/TasteTag/p5> ; data: <fault>)"
The issue is that when I try to access properties on the objects, the fault can't seem to fire. At first I was just reading the properties, which always came back as nil (even though that should fire the fault):
for (TasteTag *tag in self.vintage.tasteTags) {
    [tagNames addObject:tag.name]; // get error of trying to add nil to array
}
After looking into manually triggering faults (http://www.mlsite.net/blog/?p=518) I tried calling [tag willAccessValueForKey:nil] which results in:
Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0x6e7b060 <x-coredata://03E4A20A-21F2-4A2D-92B4-C4424893D559/TasteTag/p5>''
Looking up the entity in the .sqlite based on the key (TasteTag/p5) does show it mapped to the one I'd expect.
Other posts relating to RestKit recommend disabling the object cache (which I'm not using) since this is usually caused by an entity being deleted. But at this stage I'm just reading, not deleting, and I have no cache in place.
If I just call [TasteTag allObjects] I'm able to get all the objects back fine and they load without issue. It's just in the case when they are faulted it seems.
I found a solution that worked for me (I am unsure of how applicable it will be to your situation, but I'm adding it as an answer since it solved this (or very similar) issue for me):
A couple days back, I ran the RKTwitterCoreData example and noticed it worked perfectly while mine, with very simple code at this point and doing nearly the same thing, didn't. I got a lot of unfulfilled faults. So I decided to modify all of my code dealing with RestKit to reflect how the RKTwitterCoreData example does it.
I'll split this into chunks to try and help you follow my line of thinking at the time (since I don't think our problems are identical).
My Original Implementation Assumption
Since RestKit can back objects to Core Data, I assumed that those managed objects could be used interchangeably. For example, I could use the objects from Core Data in the exact same way as the ones retrieved from a remote web service. I could even merge them together to get all the data.
I Was Wrong
I noticed that RKTwitterCoreData's code did not flow this way in the least. A decent chunk of my code matched up with theirs, but the largest difference was that they didn't treat these objects as interchangeable. In fact, they never used the objects they got from remote data stores. Instead, they just let that "fall through the cracks". I can only assume that means they're added to Core Data's data store since it works for them and, now, for me.
Details
My app worked after modifying my code to utilize this flow. I can only then surmise that the unfulfillable faults we are seeing are related to using the Core Data backed objects that we get back from the web service. If instead you just ignore those and then do a fetch, you will get everything back (including the most recent request) and you shouldn't get any unfulfillable faults.
To elaborate, if you look at RKTwitterViewController you will notice that lines 45-61 handle loading of objects:
- (void)loadObjectsFromDataStore {
    [_statuses release];
    NSFetchRequest *request = [RKTStatus fetchRequest];
    NSSortDescriptor *descriptor = [NSSortDescriptor sortDescriptorWithKey:@"createdAt" ascending:NO];
    [request setSortDescriptors:[NSArray arrayWithObject:descriptor]];
    _statuses = [[RKTStatus objectsWithFetchRequest:request] retain];
}

- (void)loadData {
    // Load the object model via RestKit
    RKObjectManager *objectManager = [RKObjectManager sharedManager];
    [objectManager loadObjectsAtResourcePath:@"/status/user_timeline/RestKit" delegate:self block:^(RKObjectLoader *loader) {
        // Twitter returns statuses as a naked array in JSON, so we instruct the loader
        // to use the appropriate object mapping
        loader.objectMapping = [objectManager.mappingProvider objectMappingForClass:[RKTStatus class]];
    }];
}
Everything looks normal (at least compared to how I was doing this loading initially). But take a look at the objectLoader:didLoadObjects: delegate method:
- (void)objectLoader:(RKObjectLoader *)objectLoader didLoadObjects:(NSArray *)objects {
    [[NSUserDefaults standardUserDefaults] setObject:[NSDate date] forKey:@"LastUpdatedAt"];
    [[NSUserDefaults standardUserDefaults] synchronize];
    NSLog(@"Loaded statuses: %@", objects);
    [self loadObjectsFromDataStore];
    [_tableView reloadData];
}
The sample doesn't even touch the objects parameter! (Aside from the NSLog of course...)
Conclusion/tl;dr
Don't use the managed objects you get back in objectLoader:didLoadObjects: as if they are fully backed by Core Data. Instead, ignore them and re-fetch from Core Data. All objects, including the ones from the last request, are there. Otherwise, you will get unfulfillable faults (at least I did).
Documenting my fix (read: hack) per Ryan's suggestion.
The error seems to be in how RestKit assumes you'll use the objects returned from its objectLoader:didLoadObjects: method. It seems to assume either that everything will be Core Data backed (following a flow similar to what Ryan described: let it sync to Core Data, then re-query) or that you'll be using entirely non-Core-Data-backed objects and just keep those results around.
In my case I had a mix: a root array of non-Core-Data-backed objects, each of which contained an array of Core Data backed entities. The top-level objects are ones I don't mind querying the server for, and I have no reason to persist them locally beyond the view they're shown in. It seems that once objectLoader:didLoadObjects: completes, the managed object context backing the Core Data entities within the objects parameter is disposed of (under the assumption that you'll be re-querying for them), causing any future access to those entities to be treated as a fault that cannot be fulfilled (resulting in NSObjectInaccessibleException).
I got around it with an ugly hack: within objectLoader:didLoadObjects: I access one of the Core Data entities' managed object context and copy it to a property within the view (self.context = [tag managedObjectContext];). This prevents the context from being released after objectLoader:didLoadObjects: completes, allowing me to access the entities without issue later in the view.
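For illustration, a rough sketch of that workaround; the Vintage class, its tasteTags set, and the vintages/context properties are assumptions based on the question, not RestKit API:
- (void)objectLoader:(RKObjectLoader *)objectLoader didLoadObjects:(NSArray *)objects {
    self.vintages = objects; // hypothetical property holding the non-persisted root objects

    // Keep a strong reference to the context backing the nested Core Data entities
    // so they remain fulfillable after RestKit's own context goes away.
    Vintage *firstVintage = [objects firstObject];          // hypothetical root class
    TasteTag *anyTag = [firstVintage.tasteTags anyObject];  // assumes tasteTags is an NSSet
    if (anyTag != nil) {
        self.context = [anyTag managedObjectContext];
    }
}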
Another solution would be to manually re-query for each entity using a new context and copy that back to the stored return objects. One could do this when one goes to display them, or possibly some post-processing in objectLoader:didLoadObjects:, using a new context. The entity ID is still around on the faulted object so one could use that to re-query without issue even after the original RestKit context disappears. But it seems silly to have to re-query for every entity in the object graph like that.

'No database channel is available'

I have an app which connects to the internet and stores data in an SQL database. I tested with iOS 4; it works completely as it should. When I upgrade to the new iOS version, though, I get an NSInternalInconsistencyException, with this as the reason:
'_obtainOpenChannel -- NSSQLCore 0x951a640: no database channel is available'
From what I can gather, my database is being accessed by something when it shouldn't be, though I can't understand where or why.
Can anyone help me locate and properly diagnose my problem?
I found something for this one:
I got the error (among some others that appeared seemingly at random) while accessing a managed object's relationships on a different thread than the one the managed object context was created on. There have been some changes with respect to concurrent access to managed objects in iOS 5 (see http://developer.apple.com/library/ios/#releasenotes/DataManagement/RN-CoreData/_index.html#//apple_ref/doc/uid/TP40010637), and although the doc states the default behaviour should be the same as pre-iOS 5, that apparently is not true; my code worked without problems on iOS 4.2.
For now, my workaround was to do all the relationship-access thingies in the main thread, store the data in an array, and access the data I need in the other thread via that array. No more errors at least. This is not the 'nice' solution I suppose, as I should (and will) change the way I concurrently access managed objects, but I'm not going to change that in a hurry now.
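For example, a rough sketch of that interim workaround; the managedObject, its items relationship, the name attribute, and processNames: are all hypothetical placeholders:
// From the background thread: snapshot the relationship data on the main thread first...
__block NSArray *itemNames = nil;
dispatch_sync(dispatch_get_main_queue(), ^{
    // valueForKey: on a to-many relationship returns a collection of the values
    itemNames = [[managedObject.items valueForKey:@"name"] allObjects];
});

// ...then work with the plain array, not the managed objects, on this thread.
[self processNames:itemNames];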
The default concurrency type for NSManagedObjectContext is NSConfinementConcurrencyType, which means it can only be used by a single thread. From the documentation:
You promise that context will not be used by any thread other than the
one on which you created it.
You can instead create a managed object context that is backed by a private queue for multithreaded use:
[[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]
To use the managed object context from a different thread, use performBlock: (asynchronous) or performBlockAndWait: (synchronous), e.g.
__block NSError *error = nil;
__block NSArray *results;
[[self managedObjectContext] performBlockAndWait:^{
    results = [[self managedObjectContext] executeFetchRequest:request error:&error];
}];
// do something with results
The documentation says you don't need to use the block API from the thread that created the managed object context.
Another option is to create separate managed object contexts for each thread.
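For completeness, a rough sketch of the per-thread approach (the persistentStoreCoordinator property is assumed to be your existing coordinator):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // A thread-confined context: use it only from within this block.
    NSManagedObjectContext *threadContext = [[NSManagedObjectContext alloc] init];
    threadContext.persistentStoreCoordinator = self.persistentStoreCoordinator;

    // ... fetch and modify objects using threadContext ...

    NSError *error = nil;
    if (![threadContext save:&error]) {
        NSLog(@"Background save error: %@", error.localizedDescription);
    }
    // The main thread's context can pick up the changes by observing
    // NSManagedObjectContextDidSaveNotification and merging them.
});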
See the iOS 5 release notes for more info.