I have an app which connects to the internet and stores data in an SQL database. Under iOS 4 it works completely as it should. When I upgrade to iOS 5, though, I get an NSInternalInconsistencyException, with this as the reason:
'_obtainOpenChannel -- NSSQLCore 0x951a640: no database channel is available'
From what I can gather, my database is being accessed by something when it shouldn't be, though I can't understand where or why.
Can anyone help me locate and properly diagnose my problem?
I found something for this one:
I got the error (among some other, seemingly random ones) while I was accessing a managed object's relationships on a different thread than the one the managed object context was created on. There have been some changes with respect to concurrent access to managed objects in iOS 5 (see here http://developer.apple.com/library/ios/#releasenotes/DataManagement/RN-CoreData/_index.html#//apple_ref/doc/uid/TP40010637) - and although the doc states the default behaviour should be the same as pre-iOS 5, that apparently isn't true: my code worked without problems on iOS 4.2.
For now, my workaround was to do all the relationship access on the main thread, store the data in an array, and access the data I need on the other thread via that array. No more errors, at least. This is not the 'nice' solution, I suppose, as I should (and will) change the way I concurrently access managed objects, but I'm not going to change that in a hurry now.
The default concurrency type for NSManagedObjectContext is NSConfinementConcurrencyType, which means it can only be used by a single thread. From the documentation:
You promise that context will not be used by any thread other than the
one on which you created it.
You can instead create a managed object context that is backed by a private queue for multithreaded use:
[[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]
To use the managed object context from a different thread, use performBlock: (asynchronous) or performBlockAndWait: (synchronous), e.g.
__block NSArray *results;
__block NSError *error = nil;
[[self managedObjectContext] performBlockAndWait:^{
    results = [[self managedObjectContext] executeFetchRequest:request error:&error];
}];
// do something with results
The documentation says you don't need to use the block API from the thread that created the managed object context.
Another option is to create separate managed object contexts for each thread.
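As a rough sketch of that option (assuming a stack that exposes a shared persistent store coordinator; the property and entity names here are illustrative, not from the question):

```objc
// Hypothetical per-thread confinement context sharing one coordinator.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSManagedObjectContext *threadContext =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSConfinementConcurrencyType];
    [threadContext setPersistentStoreCoordinator:self.persistentStoreCoordinator];

    // Use threadContext (and any objects fetched from it) only on this thread.
    NSError *error = nil;
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
    NSArray *results = [threadContext executeFetchRequest:request error:&error];
    // ... work with results here; pass objectIDs, not objects, to other threads ...
});
```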
See the iOS 5 release notes for more info.
I've been using Realm.io in one of my projects; well, I've used it in a few iOS projects, but this is my first Objective-C desktop app that I've used it in, so this question is based on OS X usage. Before going any further, I think it's also worth mentioning the test machine is running El Capitan, so I'm aware this is beta software.
Realm is loaded as a CocoaPod. Everything works fine and I'm able to run queries etc., no problems initiating or anything, but something makes me think I'm not closing my calls correctly, maybe?
My objects are 2 main different types, one to hold a 'group' and the other to hold an actual 'object'. The app is a photo uploader so it reads Apple's Photo app, indexes all the media objects and groups then uploads them.
On the first run, or if I delete the realm completely so we are starting from scratch, everything flies through and goes really quickly. On the next run, my queries seem to run slower, and the database first went from 25 MB to 50 MB; then, after an hour, when I checked again it was at around 11 GB.
My main use of Realm is in a singleton, but I'm executing a background queue which is queuing a new job for every object: when it discovers a photo it queues another job to check if it exists in the database; if not it creates it, and if it does it updates any existing information. As it's doing this, unless I declare Realm inside the job I get thread errors, so it's defined each time.
Below is an example of one of my calls. Can anyone suggest some things I might be doing wrong, or is there any way to control or compact the size of the database now that it's so big?
- (void)saveMediaObject:(MLMediaObject *)mediaObject mediaGroup:(MLMediaGroup *)mediaGroup
{
    [jobQueue addOperationWithBlock:^{
        NSLog(@"saveMediaObject");
        lastScanResult = [NSDate date];

        RLMRealm *realm = [RLMRealm defaultRealm];
        SPMediaObject *spMediaObject = [SPMediaObject objectInRealm:realm forPrimaryKey:[mediaObject identifier]];
        if (spMediaObject == nil)
        {
            spMediaObject = [[SPMediaObject alloc] init];
            spMediaObject.identifier = [mediaObject identifier];
        }

        [realm beginWriteTransaction];
        spMediaObject.lastSeen = [NSDate date];
        spMediaObject.versionURL = [[mediaObject URL] path];
        spMediaObject.versionMimeType = [self mimeTypeForExtension:[[spMediaObject.versionURL pathExtension] lowercaseString]];
        spMediaObject.originalURL = [[mediaObject originalURL] path];
        spMediaObject.originalMimeType = [self mimeTypeForExtension:[[spMediaObject.originalURL pathExtension] lowercaseString]];
        if ([mediaObject name] != nil)
        {
            spMediaObject.caption = [mediaObject name];
        }
        else
        {
            spMediaObject.caption = @"";
        }
        [realm addOrUpdateObject:spMediaObject];
        [realm commitWriteTransaction];
    }];
}
Steve.
Realm's docs on file size should provide some insight as to what's happening here and how to mitigate the problem:
You should expect a Realm database to take less space on disk than an equivalent
SQLite database. If your Realm file is much larger than you expect, it may be because
you have a RLMRealm that is
referring to an older version of the data in the database.
In order to give you a consistent view of your data, Realm only updates the
active version accessed at the start of a run loop iteration. This means that if
you read some data from the Realm and then block the thread on a long-running
operation while writing to the Realm on other threads, the version is never
updated and Realm has to hold on to intermediate versions of the data which you
may not actually need, resulting in the file size growing with each write.
The extra space will eventually be reused by future writes, or may be compacted
— for example by calling writeCopyToPath:error:.
To avoid this issue, you may call invalidate
to tell Realm that you no longer need any of the objects that you've read from the Realm so far,
which frees us from tracking intermediate versions of those objects. The Realm will update to
the latest version the next time it is accessed.
You may also see this problem when accessing Realm using Grand Central Dispatch.
This can happen when a Realm ends up in a dispatch queue's autorelease pool as
those pools may not be drained for some time after executing your code. The intermediate
versions of data in the Realm file cannot be reused until the
RLMRealm object is deallocated.
To avoid this issue, you should use an explicit autorelease pool when accessing a Realm
from a dispatch queue.
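A minimal sketch of that advice (the queue variable and the write body are placeholders, not taken from the question's code):

```objc
dispatch_async(queue, ^{
    @autoreleasepool {
        // The RLMRealm (and any objects read from it) are released when
        // this pool drains, so Realm can reclaim intermediate versions
        // instead of growing the file.
        RLMRealm *realm = [RLMRealm defaultRealm];
        [realm transactionWithBlock:^{
            // ... create or update objects here ...
        }];
    }
});
```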
If those suggestions don't help, the Realm engineering team would be happy to profile your project to determine ways to minimize file size growth. You can email code privately to help@realm.io.
I'm having some trouble dealing with Core Data+concurrency/nested MOCs (not sure which one I'm having problems with =P).
I have a method where I pass in a managed object ID (I checked that it's permanent) and that method has a child managed object context that is confined to a certain queue. I can retrieve the object from the child managed object context via [managedObjectContext objectWithID:moID] but when I try to access any of its properties (the managed object is still a fault), I get EXC_BAD_ACCESS with the stack trace showing _svfk_1 and objc_msgSend.
I know it's kind of difficult to figure out what the problem is without sample code, but I was hoping someone could shed some light on the possible causes. Thanks. =)
EDIT: I tried using existingObjectWithID:error: instead of objectWithID: as Tom Harrington suggested and now it works sometimes but doesn't work other times. I also experienced an EXC_BAD_ACCESS crash on mergeChangesFromContextDidSaveNotification:. I suspect this could be a synchronization issue. If I edit something in one context and save while something else is edited in my child context, would that cause an issue?
EDIT 2: I figured out why existingObjectWithID:error: was working sometimes but not always. The managed object ID was indeed a temporary ID (shouldn't mergeChangesFromContextDidSaveNotification: convert it to a permanent ID?), so I had to call obtainPermanentIDsForObjects:error: before passing the ID. But I'm still getting crashes sometimes in the child context's mergeChangesFromContextDidSaveNotification:. What could be the possible causes of this? Thanks.
EDIT 3: Here's what my MOC hierarchy looks like.
Persistent Store Coordinator
|
Persistent Store MOC
/ \
Main Queue MOC Child MOC (confinement)
I'm invoking a method from the main queue that uses the Child MOC (in another queue) to insert and update some managed objects and at the same time, I'm inserting and updating managed objects in the Persistent Store MOC. Managed objects can also be updated, deleted, and inserted in the Main Queue MOC at the same time. I merge any changes from the Persistent Store Coordinator to both the Main Queue MOC and the Child MOC.
Some more questions: Does saving an MOC automatically merge things? If there is a pending merge request for an MOC and you save before that merge request is processed, can that cause issues?
EDIT 4: Here's how I initialize the Child MOC.
dispatch_sync(_searchQueue, ^{
    _searchManagedObjectContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSConfinementConcurrencyType];
    [_searchManagedObjectContext setParentContext:_persistentStoreManagedObjectContext];
    [_searchManagedObjectContext setMergePolicy:NSMergeByPropertyStoreTrumpMergePolicy];
});
Btw, I notice that the merge only crashes (with EXC_BAD_ACCESS) when the notification contains deleted objects.
It looks like you're still working too hard. For your child MOC, since it's in a serial queue, use NSPrivateQueueConcurrencyType, and set its parent to your main MOC.
NSConfinementConcurrencyType is for legacy configurations.
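As a sketch of that suggestion (assuming `mainContext` stands in for your Main Queue MOC; the variable names are illustrative):

```objc
NSManagedObjectContext *childContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
childContext.parentContext = mainContext;

[childContext performBlock:^{
    // Work runs on the context's own private serial queue.
    NSError *error = nil;
    if (![childContext save:&error]) {
        NSLog(@"child save error: %@", error);
    }
    // Saving pushes changes up into mainContext (not yet to the store).
}];
```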
I found a fix. Before every save, I do [moc obtainPermanentIDsForObjects:[[moc insertedObjects] allObjects] error:&error]. Now I don't get any more crashes.
I'm still a bit murky on what exactly was going on, but here's my understanding. When you save newly inserted objects, they are only assigned a permanent ID when the MOC connected to the persistent store coordinator saves. Now, either mergeChangesFromContextDidSaveNotification: propagated the permanent IDs back down (as I had expected) and some other operation just happened to occur just before the merge or there's an Apple bug somewhere. In any case, obtaining the permanent IDs beforehand solved the issue.
TL;DR Core Data+concurrency is difficult.
We have a background thread that needs to do some fetching, but it doesn't need any data -- only the objectIDs.
Originally we did this using a specific, newly created blank managed object context just for this:
NSFetchRequest *request = [DKDocumentDetails requestAllWithPredicate:predicate inContext:ctx];
[request setResultType:NSManagedObjectIDResultType];
self.objectIDs = [DKDocumentDetails executeFetchRequest:request inContext:ctx];
...
But recently I found out I can also do this on the persistent store coordinator itself, without any context, as I don't want managed objects, only IDs:
NSFetchRequest *request = [DKDocumentDetails requestAllWithPredicate:predicate inContext:mainctx /*used in the wrong thread but only for getting entity description*/];
[request setResultType:NSManagedObjectIDResultType];
NSError *error = nil;
self.objectIDs = [pst executeRequest:request inContext:nil error:&error];
...
So in my tests it never crashed, and in the docs I don't see why it shouldn't work either... I mean, I don't get unsaved stuff and I cannot get objects, but used this way...
It is faster and looks elegant, but is it safe or not?
I've been thinking about your question all day. Here is what I've come up with. As others have pointed out, NSPersistentStoreCoordinator objects are not thread safe. When a bunch of NSManagedObjectContext objects on various threads use the same NSPersistentStoreCoordinator, they do so by locking and unlocking the NSPersistentStoreCoordinator.
However, you are worried about just reading data, and thread safe NSManagedObjectID data at that. Is that ok?
Well, the Apple documentation On Concurrency with Core Data mentions something similar to what you are doing:
For example, you can configure a fetch request to return just object IDs but also include the row data (and update the row cache)—this can be useful if you're just going to pass those object IDs from a background thread to another thread.
Ok, but do we need to lock the Coordinator?
There is typically no need to use locks with managed objects or managed object contexts. However, if you use a single persistent store coordinator shared by multiple contexts and want to perform operations on it (for example, if you want to add a new store), or if you want to aggregate a number of operations in one context together as if a virtual single transaction, you should lock the persistent store coordinator.
That seems to be pretty clear that if you are performing operations on a persistent store from more than one thread, you should lock it.
But wait - these are just read operations, shouldn't they be safe? Well, apparently not:
Core Data does not present a situation where reads are “safe” but changes are “dangerous”—every operation is “dangerous” because every operation has cache coherency effects and can trigger faulting.
It's the cache we need to worry about. That's why you need to lock - a read in one thread can cause data in another thread to get messed up through inadvertent cache changes. Your code never gave you problems because this is probably really rare. But it's those edge cases and 1-in-1,000,000 bugs that can do the most damage...
So, is it safe? My answer:
If nothing else is using your persistent store coordinator while you read, yes, you are safe.
If you have anything else using the same persistent store coordinator, then lock it before you get the object IDs.
Using a managed object context means the locking is automatically taken care of for you, so it's also a fine possibility, but it looks like you don't need to use it (and I agree it is nice not to make one just to get a few object IDs).
From the NSPersistentStoreCoordinator docs:
Note that if multiple threads work directly with a coordinator, they need to lock and unlock it explicitly.
I would say that if you were to properly lock the PSC:
[pst lock];
self.objectIDs = [pst executeRequest:request inContext:nil error:&error];
[pst unlock];
That would be considered "safe" according to my reading of the docs. That being said, the locking done internally by the MOC might be the most significant performance difference between the two approaches you have described, and if that's the case you might prefer to just use the blank MOC, as it would be less surprising when you or someone else encounters the code later.
Related question:
Is NSPersistentStoreCoordinator Thread Safe?
There is no good reason NOT to use a managed object context for this. The managed object context buys you a lot - it handles change management, threading, etc. Using the persistent store coordinator directly loses a lot of this functionality. For example, if you have changes that have not yet been persisted to this store, you may miss them by going through the persistent store coordinator directly.
Now, you say the reason this is attractive to you is that you only want the managed object IDs. What it seems you really want is to find managed objects but not fire faults on them. You can do this with either an NSManagedObjectResultType or an NSManagedObjectIDResultType on your fetch request. In the case of the NSManagedObjectResultType, you would just access the objectID on your fetched objects, which will not fire a fault - thus not "getting data". This can have some performance advantages if the row cache is already populated, etc.
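A sketch of the NSManagedObjectIDResultType variant (the entity and variable names are illustrative, not from the question):

```objc
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"DocumentDetails"];
request.resultType = NSManagedObjectIDResultType;
request.predicate = predicate;

NSError *error = nil;
// Returns NSManagedObjectID instances directly; no faults are fired.
NSArray *objectIDs = [context executeFetchRequest:request error:&error];
```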
With all of that said, why not just use parent-child contexts to solve this?
I am trying to get the following working.
I have a table view that is displaying data fetched from an API in a table view. For that purpose I am using a NSFetchedResultsController:
self.fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request
managedObjectContext:self.database.managedObjectContext
sectionNameKeyPath:nil
cacheName:nil];
I create my entities in a background context like this:
NSManagedObjectContext *backgroundContext;
backgroundContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundContext.parentContext = document.managedObjectContext;
[backgroundContext performBlock:^{
    [MyAPI createEntitiesInContext:backgroundContext];
    NSError *error = nil;
    [backgroundContext save:&error];
    if (error) NSLog(@"error: %@", error.localizedDescription);
    [document.managedObjectContext performBlock:^{
        [document updateChangeCount:UIDocumentChangeDone];
        [document.managedObjectContext save:nil];
    }];
}];
Now, whenever I get new data (and insert/update entities like shown just above), my NSFetchedResultsController doesn't quite work as it should. In particular, I am always updating one entity (not creating a new one), but my table view shows two entities. Once I restart the app it shows up correctly.
If I perform the creation of the entities ([MyAPI createEntities]) in self.database.managedObjectContext, everything works fine.
Any idea what I am doing wrong? Looking through the existing threads here on SO makes me think that I'm doing it the right way. Again, if I do not do the core data saves in the background context (but on document.managedObjectContext) then it works fine...
I read about a similar problem on the Apple dev forums today. Perhaps this is the same problem as yours, https://devforums.apple.com/message/666492#666492, in which case perhaps there is a bug (or at least someone else with the same issue to discuss it with!).
Assuming it isn't, it sounds like what you want to do should be perfectly possible with nested contexts, and therefore assuming no bugs with UIManagedDocument.
My only reservation is that I've been trying to get batch loading working with UIManagedDocument and it seems it does not work with nested contexts (https://stackoverflow.com/q/11274412/1347502). I would think one of the main benefits of NSFetchedResultsController is its ability to improve performance through batch loading. So if this can't be done in UIManagedDocument, perhaps NSFetchedResultsController isn't ready for use with UIManagedDocument, but I haven't got to the bottom of that issue yet.
That reservation aside, most of the instruction I've read or viewed about nested contexts and background work seems to be done with peer child contexts. What you have described is a parent, child, grandchild configuration. In the WWDC 2012 video "Session 214 - Core Data Best Practices" (+16:00 minutes), Apple recommends adding another peer context to the parent context for this scenario, e.g.
backgroundContext.parentContext = document.managedObjectContext.parentContext;
The work is performed asynchronously in this context and then pushed up to the parent via a call to save on the background context. The parent would then be saved asynchronously and any peer contexts, in this case the document.managedObjectContext, would access the changes via a fetch, merge, or refresh. This is also described in the UIManagedDocument documentation:
If appropriate, you can load data from a background thread directly
to the parent context. You can get the parent context using
parentContext. Loading data to the parent context means you do not
perturb the child context’s operations. You can retrieve data loaded
in the background simply by executing a fetch.
[Edit: re-reading this it could just be recommending Jeffery's suggestion i.e. not creating any new contexts at all and just using the parent context.]
That being said, the documentation also suggests that typically you do not call save on child contexts but use UIManagedDocument's save methods. This may be an occasion when you do call save, or perhaps it's part of the problem. Calling save on the parent context is more strongly discouraged, as mentioned by Jeffery. Another answer I've read on Stack Overflow recommended only using updateChangeCount to trigger UIManagedDocument saves. But I've not read anything from Apple, so perhaps in this case a call to the UIManagedDocument saveToURL:forSaveOperation:completionHandler: method would be appropriate to get everything in sync and saved.
I guess the next obvious issue is how to notify NSFetchedResultsController that changes have occurred. I would be tempted to simplify the setup as discussed above and then subscribe to the various NSManagedObjectContextObjectsDidChangeNotification or save notifications on the different contexts and see which, if any, are called when UIManagedDocument saves, autosaves, or when background changes are saved to the parent (assuming that is allowable in this case). I assume the NSFetchedResultsController is wired to these notifications in order to keep in sync with the underlying data.
Alternatively perhaps you need to manually perform a fetch, merge, or refresh in the main context to get the changes pulled through and then somehow notify NSFetchedResultsController that it needs to refresh?
Personally I'm wondering if UIManagedDocument is ready for general consumption, there was no mention of it at WWDC this year and instead a lengthy discussion of how to build a much more complicated solution was presented: "Session 227 - Using iCloud with Core Data"
In my method where I fetch data from server, I first create the Entities and after that I call these two methods to save the changes to the document :
[self.managedObjectContext performBlock:^{
// create my entities
[self.document updateChangeCount:UIDocumentChangeDone];
[self.document savePresentedItemChangesWithCompletionHandler:^(NSError *errorOrNil) {
...
}];
}];
Because you are updating the results on a different context, I think you will need to call [self.fetchedResultsController performFetch:&error] in your view controllers -viewWillAppear: method.
After Updates
OK, you should not be calling [backgroundContext save:&error] or [document.managedObjectContext save:nil]. See: UIManagedDocument Class Reference
You should typically use the standard UIDocument methods to save the document.
If you save the child context directly, you only commit changes to the parent context and not to the document store. If you save the parent context directly, you sidestep other important operations that the document performs.
I had to use -insertedObjects and obtainPermanentIDsForObjects:error: to persist new objects created in a context.
Next, I don't think you need to create a new context to run in the background. document.managedObjectContext.parentContext should be an available background context to run updates in.
Finally, I don't call [document updateChangeCount:UIDocumentChangeDone] very often. This is taken care of by the document automatically. You can still do it any time you want, but it shouldn't be necessary.
Here is how I would call your -createEntitiesInContext: method.
[document.managedObjectContext.parentContext performBlock:^{
    [MyAPI createEntitiesInContext:document.managedObjectContext.parentContext];
    NSSet *objects = [document.managedObjectContext.parentContext insertedObjects];
    if (objects.count > 0) {
        NSError *error = nil;
        // obtainPermanentIDsForObjects:error: takes an NSArray, not an NSSet
        [document.managedObjectContext.parentContext obtainPermanentIDsForObjects:[objects allObjects] error:&error];
        if (error) NSLog(@"error: %@", error.localizedDescription);
    }
}];
I have an NSManagedObject that has some of its properties initialized at the start of the program. When I refer to this object later, it appears to be faulted, and the properties are not accessible. I'm not sure what I need to do.
This is related to a new feature added to a program that has been operating smoothly with core-data in all other ways.
Here is a code snippet where it is initialized as a property value of a singleton. (That singleton is accessible by many parts of my code):
favoritesCollection = [[SearchTerms alloc] initWithEntity:[NSEntityDescription entityForName:@"SearchTerms" inManagedObjectContext:moc] insertIntoManagedObjectContext:moc];
favoritesCollection.keywords = @"Favorites List";
favoritesCollection.isFavoritesCollection = [NSNumber numberWithBool:YES];
favoritesCollection.dateOfSearch = [NSDate NSCExtendedDateWithNaturalLanguageString:@"4000"];
favoritesCollection.pinColorIndex = 0;
[moc save:&error];
NSLog(@"(favoritesCollection) = %@", favoritesCollection);
}
return favoritesCollection;
When I look at favoritesCollection with the NSLog, I see this (I added some newlines to make it easier to read):
(favoritesCollection) =
<SearchTerms: 0x5c28820>
(entity: SearchTerms; id: 0x5a6df90
<x-coredata://3936E19F-C0D0-4587-95B6-AA420F75BF78/SearchTerms/p33> ;
data: {
dateOfSearch = "4000-09-25 12:00:00 -0800";...*more things after this*
After the return, another NSLog shows that contents are intact.
When I refer to this instance later, I can see this in the debugger:
<SearchTerms: 0x5c28820>
(entity: SearchTerms; id: 0x5a6df90
<x-coredata://3936E19F-C0D0-4587-95B6-AA420F75BF78/SearchTerms/p33> ;
data: <fault>)
and that's all.
So I believe that the object is retained (I explicitly retain it where it is returned). I have zombies on and it doesn't look like a zombie.
I have only one managedObjectContext in the program, maintained in the singleton.
So what is happening, and how do I get to the properties that were saved?
There is nothing wrong with your object and I think you might be misinterpreting the meaning of "fault" here.
From Apple's documentation:
"Faulting is a mechanism Core Data employs to reduce your
application’s memory usage..."
Once you try and access any of the object's properties it will hit the database for all of the object's properties.
More details here http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdFaultingUniquing.html
Faults are Core Data's way of having loose links to other entities. Just access the values via properties or valueForKey: and you will see them populated just in time.
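In other words (a sketch reusing the question's SearchTerms entity; the variable name is illustrative):

```objc
// Logging the object may show "data: <fault>"...
NSLog(@"%@", favoritesCollection);

// ...but touching a persistent property fires the fault, and Core Data
// fetches the row from the store transparently.
NSLog(@"keywords = %@", favoritesCollection.keywords);
```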
I'm a little late getting back to this, but I found out that some steps in my program were out of order. Instead of deleting the database contents (something I do at startup every time, for now) and then creating and adding this entity, I had created and added the entity and then deleted the database contents.
The pointer to the favoritesCollection entity is held for the lifetime of the program, so I would have expected it be able to see its contents any time after it was created.
From the Core Data Programming Guide
Fault handling is transparent—you do not have to execute a fetch to
realize a fault. If at some stage a persistent property of a fault
object is accessed, then Core Data automatically retrieves the data
for the object and initializes the object (see NSManagedObject Class
Reference for a list of methods that do not cause faults to fire).
This process is commonly referred to as firing the fault.
Core Data automatically fires faults when necessary (when a persistent
property of a fault is accessed).
From what I can tell by reading the programming guide, seeing faults on relationships (links to other entities) is normal when looking at any particular entity. But seeing faults on the persistent property values is not mentioned. I believe that, in general, if the object is in memory, then its properties should not be faulted, but its relationships may be faulted.
The fact that the favoritesCollection entity was fully faulted (properties and relationships) and the fault did not get resolved revealed a problem. In this case it is consistent with the entity no longer existing in the database.