I am trying to authenticate my app with Twitter using the following code: pastebin
However, if I remove the (useless?) loop at lines 23ff
for (ACAccount *acc in arrayOfAccounts) {
[acc accountType].identifier;
//Otherwise the identifier get lost - god knows why -__-
}
then acc.accountType becomes (null) by the time it is used further on in
[AccountHandler checkAccountOf:acc]. If I leave the loop in, the type is set correctly.
I am pretty sure it has to do with the fact that I am inside a block and then move on to the main queue, but I am wondering whether I am doing something wrong. This loop does not look like something I should have to do.
Something kinda similar happened here.
ACAccounts are not thread safe. You should use them only on the thread on which they originated, and for this purpose you can read 'thread' as 'queue'.
While I've not seen formal documentation of that, if you NSLog an account you'll see that it's a Core Data object and the lack of thread safety on Core Data objects is well documented.
The specific behaviour is that a Core Data object can be a fault. That means that what you're holding is a reference to the object but not the actual object. When you try to access a property the object will be loaded into memory.
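To make the fault behaviour concrete, here is a minimal sketch (the earlier fetch and the 'name' attribute are assumptions; any NSManagedObject you hold behaves the same way):

NSManagedObject *object = [results objectAtIndex:0];      // 'results' from some earlier fetch (assumed)
NSLog(@"is fault? %d", object.isFault);                   // may log 1: only a placeholder is in memory
NSString *name = [object valueForKey:@"name"];            // accessing a property fires the fault...
NSLog(@"is fault? %d, name = %@", object.isFault, name);  // ...now 0: the row has been loaded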
What Core Data is doing underneath is caching things in memory and returning faults until it knows an object is really needed. Efficient coordination of that cache is what ties each instance of the coordinating object (the managed object context) to a single thread.
If you do the action that should bring the object into memory on the wrong thread — which is what happens when you access identifier here — then the behaviour is undefined. You could just get a nil result or you could crash your application.
(Aside: the reason Core Data works like this is that it stores an object graph, possibly thousands of interconnected objects, which you can traverse just like any other group of objects. However, you don't normally want to pay the cost of loading every single one of them into memory just to access the usually tiny subset of information you're actually going to use, so Core Data needs a way of providing a normal Objective-C interface while loading lazily.)
The code you've linked to skirts around that issue by ensuring that the objects are in the cache, and hence in memory, before queue hopping. So the 'fetch from store' step occurs on the correct queue. However the code is nevertheless entirely unsafe because objects may transition from being in memory back to being faults according to whatever logic Core Data cares to apply.
The author obviously thinks they've found some bug on Apple's part. They haven't, they've merely decided to assume something is thread safe when it isn't and have then found a way of relying on undefined behaviour that happened to work in their tests.
Moral of the story: keep the accounts themselves on a single thread. If you want to do some processing with the properties of an account then collect the relevant properties themselves as fundamental Foundation objects and post those off.
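A minimal sketch of that approach, assuming the arrayOfAccounts from the question's completion block; checkAccountsWithSummaries: is a hypothetical stand-in for the question's checkAccountOf: method:

// Collect plain Foundation values on the queue that owns the accounts...
NSMutableArray *accountSummaries = [NSMutableArray array];
for (ACAccount *acc in arrayOfAccounts) {
    [accountSummaries addObject:@{ @"username"       : acc.username ?: @"",
                                   @"identifier"     : acc.identifier ?: @"",
                                   @"typeIdentifier" : acc.accountType.identifier ?: @"" }];
}
// ...then hand only those copies across to the main queue.
dispatch_async(dispatch_get_main_queue(), ^{
    [AccountHandler checkAccountsWithSummaries:accountSummaries]; // hypothetical method
});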
I need to implement a bit of functionality that can be used from a few different places in an application. It's basically sending something over the network, but I don't need it to be attached to any particular view - I can communicate everything to the user by UIAlertViews.
What I would like to do is encapsulate the functionality in an object (?) that can maintain its own state for a while and then disappear all by itself. I've read in several similar topics that it's generally not advised to have an object retain and then release itself, but on the other hand you have singletons, which, apart from the fact that they never get released, are very similar in nature: you don't need to keep a reference to them just to use them properly. In my situation, however, I feel it would be somewhat wasteful to create a singleton and then keep it alive for something that takes a few seconds to execute.
What I came up with is a static dictionary, local to the class, that keeps unique references to instances of the class; when an instance is done with its task, it performs the selector removeObjectForKey: after a delay, which removes the only existing reference and effectively kills the object. This way I keep only a dictionary in memory, which is empty most of the time anyway (a rough sketch of what I mean is below).
The question is: are there any unexpected side effects of such a solution that I should be aware of, and are there any other good patterns for the described situation?
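For clarity, here is a rough sketch of the pattern I mean (class and method names are made up):

@interface TransientSender : NSObject
+ (void)sendPayload:(NSData *)payload;
@end

@implementation TransientSender

// Class-local dictionary: the only strong references to in-flight instances live here.
+ (NSMutableDictionary *)liveInstances {
    static NSMutableDictionary *instances;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ instances = [NSMutableDictionary new]; });
    return instances;
}

+ (void)sendPayload:(NSData *)payload {
    TransientSender *sender = [TransientSender new];
    NSString *key = [NSString stringWithFormat:@"%p", sender];
    [self liveInstances][key] = sender; // keeps the instance alive while it works
    [sender sendPayload:payload cleanUpKey:key];
}

- (void)sendPayload:(NSData *)payload cleanUpKey:(NSString *)key {
    // ... perform the network request, show UIAlertViews as needed ...
    // When done, drop the only strong reference after a delay, which deallocates the instance.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [[TransientSender liveInstances] removeObjectForKey:key];
    });
}

@end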
So basically instead of a persistent object of your own class, you've got a persistent object of type NSDictionary? How does that help matters? Is your object unusually large? If you are making your codebase more complicated for the sake of a few bytes, that's not a good tradeoff.
Especially now that ARC is commonplace, this kind of trickery is usually not a good idea. Have you measured how much memory a singleton approach takes and found it to be a problem? Unless you have done that, use a singleton. It's simpler code, and all other things being equal, simpler code is far better.
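For comparison, a minimal singleton sketch of the kind recommended above (names made up); the shared object itself is tiny, so keeping it alive costs next to nothing:

@interface SharedSender : NSObject
+ (instancetype)sharedSender;
- (void)sendPayload:(NSData *)payload;
@end

@implementation SharedSender

+ (instancetype)sharedSender {
    static SharedSender *shared;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [SharedSender new]; });
    return shared;
}

- (void)sendPayload:(NSData *)payload {
    // ... perform the network request; report progress or errors via UIAlertView ...
}

@end

Usage: [[SharedSender sharedSender] sendPayload:payload];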
I am having a few issues in my iPad app related to Core Data.
This time it is about deleting an object.
Looking at Instruments (Allocations template) I found that my deleted object stays in memory forever, or at least for the 7 hours I left Instruments running. The Leaks instrument doesn't show anything about it either.
After saving the context, I double-checked both the table view, which no longer displays the object among its rows, and the database itself, where the row is physically gone.
I am not able, with Instruments, to tell what is keeping a reference to it and preventing the object from being deallocated.
I am using ARC, and an NSFetchedResultsController is managing the UITableView.
Do you have any guess, or a suggestion about which instrument to use?
You shouldn't worry about that. If you try to access the object, Core Data will throw an exception telling you it couldn't fulfill a fault for that object. The object is gone from your application's database. Core Data's internal machinery might keep a reference to it for its own purposes, but those purposes are out of your control.
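If you do want to confirm from code that the row is really gone, without tripping over the stale reference, one option is to ask the context for the object by ID instead of touching its properties (a sketch; context and deletedObject are whatever you already have in hand):

NSError *error = nil;
// Returns nil (and sets the error) if no object with this ID exists in the store any more.
NSManagedObject *stillThere = [context existingObjectWithID:deletedObject.objectID error:&error];
if (stillThere == nil) {
    NSLog(@"Object is gone from the store: %@", error);
}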
This is a question I have always wanted to ask.
When running an iOS application in the profiler looking for allocation issues, I found that NSManagedObject instances stay in memory long after they have been used and displayed, and after the UIViewController that used them has been deallocated. Of course, when the UIViewController is allocated again the number does not increase, suggesting that there's no leak and that Core Data is doing some kind of object reuse.
If I have a MyManagedObject class which has been given 'mobjc' as its name, then in the profiler I can see an increasing number of:
MyManagedObject_mobjc_
The number may vary; for a small amount of data, for example 100 objects in SQLite, it grows to that limit and stays there.
But it also seems that sometimes during the application lifecycle the objects are deallocated, so I suppose that Core Data itself is doing some kind of memory optimization.
It also seems that it is not the whole object that is retained, but rather its 'fault' (please forgive my English :-) ), judging by the small live byte size.
Even though a lot of fault objects would still occupy some memory.
But at this point I would like some confirmation:
1. Is Core Data really managing and optimizing objects in memory?
2. Is there anything I can do to help my application retain as few objects as possible?
3. Related to the point above, do I really need to take care of this issue or not?
4. Do you have some link, possibly from Apple, where this specific subject is explained?
In case it is relevant, the app I used for testing relies on ARC and iOS 5.1.
Thanks
In this SO topic, Core Data Memory Management, you can find the info you are looking for.
This, instead, is the link to the Apple documentation on Core Data Memory Management.
A few tips here.
First, when you deal with Core Data you deal with an object graph. To reduce memory consumption (to prune your graph) you can call reset on the context you are using, or turn objects back into faults by passing NO to the refreshObject:(NSManagedObject *)object mergeChanges:(BOOL)flag method. If you pass NO, you can lose unsaved changes, so pay attention to that.
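In code, those two options look roughly like this (a sketch; context and managedObject are whatever you are already using):

// Turn one object back into a fault, discarding its in-memory values.
// Passing NO to mergeChanges: means any unsaved changes on that object are lost.
[context refreshObject:managedObject mergeChanges:NO];

// Or prune the whole graph managed by this context in one go.
// All managed objects previously obtained from this context become invalid afterwards.
[context reset];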
Furthermore, don't use Undo Management if you don't need it. It increases memory use (by default in iOS, no undo manager is created).
Hope that helps.
I was experimenting with ways to get rid of some memory leaks in my application the other day when I realized that I know virtually nothing about cleaning up my resources. I did some research, and hoped that just calling Dispose() would solve all of my problems. We have a table in our database that contains about 65,000 records. Obviously, when I fill my DataSet from the data adapter, the memory usage can get pretty high. When I called Dispose on the DataSet, I was surprised to find that NONE of the memory got released. Why did this happen? Clearing the DataSet doesn't help either.
IDisposable, and thus Dispose, is not used to reduce memory pressure (although in some cases it might), but rather for deterministic cleanup.
Consider this: you construct an object that maintains an active, open connection to your database server. This connection uses resources, both on your machine and on the server.
You could of course just leave the object be when you're done with it, and eventually it'll get picked up by the garbage collector, but suppose you want to make sure the resources get freed, and thus the connection closed, as soon as you're done with it. This is where IDisposable.Dispose comes into play.
It is used to clean up resources managed by the object.
It will, however, not free the managed memory allocated to the object. That is still left to the garbage collector, which will kick in at some later time to do it.
Do you actually have a memory problem, or do you just look at the memory usage in Task Manager or similar and think "that's a bit high"?
If the latter, then you should just leave it be for now. .NET will run garbage collection more often if you have less memory available, so unless you are in, or suspect you will soon be in, an out-of-memory condition, you're probably not going to have any problems.
Let me explain what I mean by the collector running more or less often.
If you have 8GB of memory in your machine, and only have Windows and Notepad running, most of that memory will be available. When you now run your program, even if it loads minor data blocks into memory, you can keep doing that for a long time, and memory usage will steadily grow. Exactly when the GC will kick in and try to reduce your memory footprint I don't know, but I can almost guarantee you that you will wonder why it gets so high.
Let's just for the sake of the argument say that your program will eventually use 2GB of memory.
Now, if you run your program on a machine that has less memory available, GC will occur more often, and will kick in on a lower limit, which might keep the memory usage below 500MB or possibly even less.
The important thing to note here is that in order to get an accurate picture of how much memory your application actually requires, you can't rely on Task Manager or similar tools to measure it; you need something more targeted.
Calling Dispose() will only release unmanaged resources, such as file handles, database connections, unmanaged memory, etc. It will not release garbage collected memory.
Garbage collected memory will only get released at the next collection, usually when the application's managed memory is deemed full.
I'm going to point out something here that hasn't been explicitly mentioned: calling Dispose() will only clean up (free) unmanaged resources if the developer of the component has written the code to do so.
What I mean is this: if you suspect you have a memory leak, calling Dispose() is not going to fix it if the original developer has done a lousy job and not correctly freed up unmanaged resources. For a bit more info, check this blog post. Take note of the statement "The behaviour of Dispose is defined by the developer."
Some objects will ask one or more other entities to do something on their behalf until further notice, to the detriment of other entities. If an object which did so were to disappear without informing the former entities that their services were no longer needed, those entities would continue to uselessly act on behalf of an object that no longer needed them, to the continuing detriment of other entities that would want to use them.
In many cases, for an object "George" to tell an outside entity "Joe" that Joe's services are no longer needed, George would have to know that those services are no longer needed. There are two normal means by which that can happen in .NET: finalization and IDisposable.
If an object overrides a method called Finalize, then when the object is created the .NET garbage collector will add it to a list of objects with registered finalizers. If the GC discovers that there exists no rooted reference to the object other than that list, the GC will remove the object from that list and add it to a strongly-rooted queue of objects which should have their Finalize method called as soon as possible. Such an object can then use its Finalize method to inform other entities that their services are no longer required.
Although finalization-based cleanup can sometimes work, there's no guarantee of timeliness. At one point during the design of .NET, Microsoft may have intended that finalization would be the primary cleanup method, but for a variety of reasons it cannot safely be relied upon.
The other cleanup approach, which should be the focus of one's efforts, is IDisposable. Basically, the idea behind IDisposable is simple: for every object that implements IDisposable, there should be one entity (generally either an object or a nested execution scope) which is responsible for ensuring that that object's IDisposable.Dispose method will get called sometime within the lifetime of the universe (which would imply sometime while a reference to the object still exists), and preferably as soon as code can tell that the object's services will no longer be required.
Note that IDisposable.Dispose generally promises that any outside entities which had been asked to do something on an object's behalf will be told that they no longer need to do so, but such a promise does not imply that the number of such entities is non-zero. If an object hasn't asked any outside entities to do anything on its behalf, then delivering a message to "all" such entities doesn't require doing anything at all. On the other hand, the fact that a Dispose method may do nothing in some cases doesn't mean that it's guaranteed never to do anything in any case, nor that failure to call it in those cases where it would do something won't have detrimental effects.
I'm getting random crashes when creating an inferred mapping model (with Core Data's lightweight migration) within my application. By the way, I have to do it programmatically in my application while it is running.
This is how I create this model (after I have made proper currentModel and newModel objects, of course):
NSMappingModel *mappingModel = [NSMappingModel inferredMappingModelForSourceModel:currentModel destinationModel:newModel error:&error];
The problem is this: This method is crashing randomly. When it works, it works just fine without issues. But when it crashes, it crashes my application (instead of returning nil to signify that the method failed, as it should). By randomly, I mean that sometimes it happens and sometimes not. It is unpredictable.
Now, here is the deal: I'm running this method on another thread. More precisely, it is located inside a block that is passed via GCD to run on a global (background) queue. I need to do this so that my UI appears crisp to the user, i.e. so that I can display a progress indicator while the work is underway.
The strange thing is that if I remove the GCD stuff and just let it run on the main thread, it seems to work fine and never crashes. Thus, could it be that this is crashing because I'm running it on a different thread?
I somehow find that weird, because I don't believe I'm breaking any Core Data rules regarding multi-threading. In particular, I'm not passing any managed objects around, and whenever I need access to the MOC, I create a new MOC, i.e. I'm not relying on any MOC (or, for that matter, anything) that has been created earlier on the main thread. Besides, the little MOC work that does occur happens after the mapping model creation call, i.e. after the point at which the app crashes, so it can't possibly be the cause of the crashes under consideration here.
All I'm doing is taking two MOMs and asking for a mapping model between them. That can't be wrong even under threading, now can it?
Any ideas on what could be going on?
First, what is the crash?
Second, Core Data is generally a single-threaded API. There are things you can do on multiple threads, but creating an NSMappingModel is most likely not one of them. Why must you create the mapping model dynamically? If the MOMs are a known quantity then the mapping can be a known quantity as well.
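If both model versions ship with the app, one way to avoid inferring at runtime is to load a pre-built mapping model instead (a sketch, assuming you have added an .xcmappingmodel for this pair of versions to the bundle):

NSMappingModel *mappingModel =
    [NSMappingModel mappingModelFromBundles:@[[NSBundle mainBundle]]
                             forSourceModel:currentModel
                           destinationModel:newModel];
if (mappingModel == nil) {
    // No explicit mapping model was found for these two models; fall back or report it.
}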
Update
First, the threading issue. Core Data is meant to be single-threaded. However, the NSManagedObjectContext knows how to lock the NSPersistentStoreCoordinator correctly, so you can have one NSManagedObjectContext per thread; they know how to lock correctly. This is not the case when you are working with and creating a mapping model.
However, the error you provided is not a Core Data error per se. That error indicates that somewhere in your code you are trying to stick a nil into a set. Without seeing the code that is generating the mapping model, though, it is difficult to guess exactly where.
Have you put a breakpoint on objc_exception_throw to see which line in your code is causing this crash? If it is somewhere non-obvious, then I would suggest there is some point in your building of the mapping model that is handing Core Data an unexpected nil.
One thing you can try is locking the NSPersistentStore and/or the NSManagedObjectContext yourself to see if that resolves the crash. However I suspect when you do that you are going to again deal with performance issues.
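One low-effort experiment along those lines (a sketch): keep the surrounding work on the background queue, but hop back to the main queue just for the inference call, since that is the step that appears thread-sensitive. This is safe as long as the main queue is not itself blocked waiting on this background work.

__block NSMappingModel *mappingModel = nil;
__block NSError *error = nil;
// From inside the background block: perform only the model inference on the main queue.
dispatch_sync(dispatch_get_main_queue(), ^{
    mappingModel = [NSMappingModel inferredMappingModelForSourceModel:currentModel
                                                     destinationModel:newModel
                                                                error:&error];
});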
I ended up giving up on this problem altogether and just created the damned mapping models myself.