I have a situation where I must generate a sequence of objects, and pass them back to an application one at a time (think block-based, or fast enumeration).
However, each object is going to be relatively expensive to generate, so I am looking for ways to avoid this cost.
It happens to be the case that, given one object of the sequence, the next one can be generated efficiently by a simple modification of the former. For this reason, it is tempting to "cheat" by only ever creating one object, passing that same object back to the application at every step, and carrying out the "cheap" modification behind the scenes between steps.
The problem is, of course, that the application may choose to (and it should be allowed to) store a reference to some, or all of the objects somewhere else. If it does that, the "illusion" of a true sequence of unique objects breaks down.
If Objective-C allows it, a neat way of solving this problem would be to detect when the application actually does store references elsewhere and, whenever that happens, replace the object with a copy of itself before applying the modification that produces the next element in the sequence.
I don't know what the official name of this idiom is, but we could call it "copy on write if leaked", "copy on write if shared", or simply "copy on write".
My question is then: Does Objective-C, with ARC enabled, allow for such an idiom to be implemented?
And, is it even the right way to solve this kind of problem in Objective-C?
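To make the idea concrete, here is a minimal sketch of the "cheat" I have in mind (all names are hypothetical, and a mutable string stands in for the expensive object):

    #import <Foundation/Foundation.h>

    @interface SequenceGenerator : NSObject
    - (void)enumerateObjectsUsingBlock:(void (^)(NSString *obj, BOOL *stop))block;
    @end

    @implementation SequenceGenerator
    - (void)enumerateObjectsUsingBlock:(void (^)(NSString *obj, BOOL *stop))block {
        // Create the expensive object only once...
        NSMutableString *current = [NSMutableString stringWithString:@"seed"];
        BOOL stop = NO;
        for (NSUInteger i = 0; i < 10 && !stop; i++) {
            // ...hand the SAME object to the application at every step...
            block(current, &stop);
            // ...and apply the cheap modification behind the scenes.
            [current appendString:@"+"];
        }
    }
    @end

This works perfectly until the application stores one of the vended objects, which is exactly the case I am asking about.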
I did notice that with ARC enabled, there is no way I can extract the reference count from an object, nor override the methods that increment and decrement it.
EDIT: I have noticed that there is a copy attribute that can be applied to properties, but I have a hard time figuring it out. Are any of you guys able to explain how it works?
http://clang.llvm.org/docs/AutomaticReferenceCounting.html#property-declarations
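For reference, this is the kind of declaration I am asking about (the class is hypothetical). As far as I can tell, copy only changes what the synthesized setter stores:

    @interface Record : NSObject
    // With the copy attribute, the synthesized setter stores [newValue copy]
    // instead of retaining newValue itself (my reading of the linked document).
    @property (nonatomic, copy) NSString *name;
    @end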
In Objective-C, strings, arrays and dictionaries are all reference types, while in Swift they are all value types.
I want to figure out the reason behind this change. To my understanding, whether something is a reference type or a value type, the objects themselves live on the heap in both Objective-C and Swift.
Was the change made to make coding easier? That is, with a reference type the pointer to the object might be nil, so you need to check that both the pointer and the object are not nil before accessing the object, while with a value type you only need to check the object itself?
But in terms of memory allocation, value types and reference types are the same, right? Both are allocated the same amount of memory?
Thanks
Arrays, dictionaries etc. in Objective-C are often mutable. That means that when I pass an array to another method, and that array is then modified behind that method's back, surprising (to put it gently) behaviour will happen.
By making arrays, dictionaries etc. value types, this surprising behaviour is avoided. When you receive a Swift array, you know that nobody is going to modify it behind your back. Objects that can be modified behind your back are a major source for problems.
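A small Objective-C illustration of that aliasing problem (the setup is contrived, but the behaviour is real):

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            NSMutableArray *shared = [NSMutableArray arrayWithObject:@"a"];

            // Another object stores the array it was handed. No copy is
            // made, so both names refer to the same object.
            NSArray *stored = shared;

            // Later, the original owner mutates the array...
            [shared addObject:@"b"];

            // ...and "stored" has changed behind its holder's back.
            NSLog(@"%@", stored); // logs both "a" and "b"
        }
        return 0;
    }

With a Swift array, the assignment has value semantics, so stored would still contain only "a".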
In reality, the Swift compiler tries to avoid unnecessary copying whenever possible. So even if it says that an array is officially copied, it doesn't mean that it is really copied.
The Swift team is very active on the official developer forums. So, I'm assuming that since you didn't ask there, you're more curious about the community's broader "sense" of what the change means, as opposed to the technical implementation details. If you want to understand exactly "why", just go ask them :)
The explanation that makes the most sense to me is that Objects should be responsible for reacting to, and updating the state of your application. Values should be the state of your application. In other words, an Array or a String or a Dictionary (and other value types) should never be responsible for responding to user input or network input or error conditions, etc. The Objects handle that and store the resulting data into those values.
One cool feature in Swift, which makes a complex value type (like a Dictionary, or a custom type like Person, as opposed to a simple Float) more viable, is that value types can encapsulate rules and logic, because they can have functions. If I write a value type Person as a struct, then the Person struct can have a function for updating a name due to marriage, etc. That is solely concerned with the data, not with /managing/ the state. The Objects will still decide WHEN and WHY to update a Person's name, but the business logic of how to go about doing so safely/testably can live in the value type itself. That gives you a nice way to increase isolation and reduce complexity.
In addition to the previous answers, there are also multi-threading issues to consider when sharing a reference-based collection type, issues we don't have to worry about as much when sharing an instance of a value-based type with copy-on-write behavior. Multi-core is becoming more and more prevalent, even on iOS devices, so this has become more of an issue for the Swift language developers to consider.
I do not know whether this is the real idea behind it, but here is a historical view on it:
At the beginning, an array copy behaved by reference when you changed an item in it, but by value when you changed the length of the array. They did it that way for performance reasons (fewer array copies). But of course this was, eh, how can I express that politely, eh, difficult, eh, let's call it a "do not care about good structure if you can win some performance you probably never need" approach. Some called that copy-on-write, which is not much more intelligent, because COW is transparent, while that behavior was not transparent. Typical Swift wording: use a buzzword, use it the way it fits Swift, don't care about correctness.
Later on, arrays got complete by-copy behavior, which is less confusing. (You remember, Swift was for readability. Obviously, in Swift's concept, readability means "fewer characters to read", but not "better understandable". Typical Swift wording: use a buzzword, use it the way it fits Swift, don't care about correctness. Did I already mention that?)
So, I guess it is still performance plus an understandable behavior that probably costs some performance. (You know better in your own code when a copy is needed, and you can still make one explicitly; Cocoa even gives you a 0-cost operation if the source array is immutable.) Of course, they could say: "Okay, the old behavior was a mistake, we changed that." But they will never say so.
However, arrays in Swift now behave consistently. A big step forward for Swift! Maybe one sunny day you can call it a programming language.
I need to implement a bit of functionality that can be used from a few different places in an application. It's basically sending something over the network, but I don't need it to be attached to any particular view - I can communicate everything to the user by UIAlertViews.
What I would like to do is encapsulate the functionality in an object that can maintain its own state for a while and then disappear all by itself. I've read in several similar topics that it's generally not advised to have an object retain and then release itself, but on the other hand you have singletons, which, apart from the fact that they never get released, are very similar in nature: you don't need to keep a reference to them just to use them properly. In my situation, however, I feel it would be somewhat wasteful to create a singleton and then keep it alive for something that takes a few seconds to execute.
What I came up with is a static dictionary local to the class that keeps unique references to the instances of the class. When an instance is done with its task, it performs the selector removeObjectForKey: after a delay, which removes the only existing reference and effectively kills the object. This way I keep only a dictionary in memory, which most of the time is empty anyway. A sketch of what I mean follows.
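A minimal sketch of the pattern (class name, key scheme and delay are made up for illustration):

    #import <Foundation/Foundation.h>

    @interface NetworkTask : NSObject
    + (void)startTaskWithCompletion:(void (^)(void))completion;
    @end

    @implementation NetworkTask

    // Class-local registry that keeps each running instance alive.
    static NSMutableDictionary *activeTasks;

    + (void)startTaskWithCompletion:(void (^)(void))completion {
        static dispatch_once_t once;
        dispatch_once(&once, ^{ activeTasks = [NSMutableDictionary new]; });

        NetworkTask *task = [NetworkTask new];
        NSString *key = [[NSUUID UUID] UUIDString];
        activeTasks[key] = task; // the only strong reference to the task

        // ...kick off the actual network work here...

        // When done, drop the reference after a delay; ARC then deallocates.
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            if (completion) completion();
            [activeTasks removeObjectForKey:key];
        });
    }

    @end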
The question is: are there any unexpected side effects of such a solution that I should be aware of and are there any other good patterns for described situation?
So basically instead of a persistent object of your own class, you've got a persistent object of type NSDictionary? How does that help matters? Is your object unusually large? If you are making your codebase more complicated for the sake of a few bytes, that's not a good tradeoff.
Especially now that ARC is commonplace, this kind of trickery is usually not a good idea. Have you measured how much memory a singleton approach takes and found it to be a problem? Unless you have done this, use a singleton. It's simpler code, and all other things being equal, simpler code is far better.
I have an array of NSDictionaries and an NSDictionary ivar (*selectedDictionary) that points to one of the array's objects. *selectedDictionary points to a different object every time the user selects a different row in an NSTableView. Several GUI controls are bound to the selectedDictionary instance (via IB).
I just want to make the NSDocument dirty (edited) every time the user alters the above controls. I think using Key-Value Observing for ALL the objects in the array and all their key paths is a bit inefficient. Any suggestions?
Thanks
NSDocument's support for marking a document as dirty comes directly from the NSUndoManager. The easiest way to mark the document dirty is to implement Undo, and that basically means doing the undo registration inside the model class that the document is using (or the subclass of NSDocument, if you choose to handle all storage directly in there).
Apple has documentation on this here:
http://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/UndoArchitecture/Articles/AppKitUndo.html
Since you indicate you have an array of dictionaries, that's going to make it a bit more work to implement, but once you have nailed that, you'll be in good shape.
Alternatively, if you don't want to go with the freebie support provided by NSDocument and NSUndoManager, you can manually handle undo and use the updateChangeCount: method to modify the internal understanding of whether changes have occurred. This takes some work, and likely is a lot less useful than just setting up undo correctly.
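A sketch of both approaches, assuming a hypothetical setName: setter on your model class and that the model has access to the document's undo manager (how it gets that reference is up to you):

    // Option 1: register undo in the model's setter. NSDocument watches its
    // NSUndoManager, so this also marks the document dirty automatically.
    - (void)setName:(NSString *)name {
        [[self.undoManager prepareWithInvocationTarget:self] setName:_name];
        _name = [name copy];
    }

    // Option 2: skip undo and adjust the change count by hand, from the
    // NSDocument subclass, whenever a control changes a value:
    [self updateChangeCount:NSChangeDone];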
As for the efficiency of observing all the objects in the array, I wouldn't worry about it unless you have profiled it and found it to be inefficient. KVO is really pretty darned efficient and we regularly observe multiple values in every element of arrays without seeing performance problems. You have to observe the array itself in order to handle adds and removes (assuming your arrays have this).
As far as I can tell, though, you have a selectedDictionary which is used to determine the other controls that are shown. In this case, you can use KVO to observe the value of selectedDictionary and when it changes, you can remove the observers from the previous selectedDictionary and add them to the keys in the current selectedDictionary. This is basically what bindings is doing in order to handle the display and setting, anyway.
One other consideration that I've used in the past is referenced in this StackOverflow post:
NSMutableDictionary KVO. If you look at my answer here, I outline a trick for getting notifications when a new key is added or an existing key is deleted. It also has the benefit of giving you a notification when there's any change. It's not always a great solution, but it does save some effort on coding the list of keys to observe.
Beyond that, you'll have to add every key you're expecting to have an effect on the saved state of the document.
I have read Apple's memory management guide, and think I understand the practices that should be followed to ensure proper memory management in my application.
At present it looks like there are no memory leaks in my code. But as my code grows more complex, I wonder whether there is any particular pattern I should follow to keep track of allocations and deallocations of objects.
Does it make sense to create some kind of global object that is present throughout the execution of the application and contains a count of the number of active objects of each type? Each object could increment the count of its type in its init method, and decrement it in dealloc. The global object could verify at appropriate times whether the count of a particular type is zero or not. Something like the sketch below.
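A minimal sketch of what I have in mind (the class name and API are made up):

    #import <Foundation/Foundation.h>

    // Global, thread-safe registry of live instance counts per class.
    @interface InstanceCounter : NSObject
    + (NSCountedSet *)liveInstances;
    + (void)noteInit:(id)object;
    + (void)noteDealloc:(id)object;
    @end

    @implementation InstanceCounter

    + (NSCountedSet *)liveInstances {
        static NSCountedSet *set;
        static dispatch_once_t once;
        dispatch_once(&once, ^{ set = [NSCountedSet new]; });
        return set;
    }

    + (void)noteInit:(id)object {
        @synchronized (self) {
            [[self liveInstances] addObject:NSStringFromClass([object class])];
        }
    }

    + (void)noteDealloc:(id)object {
        @synchronized (self) {
            [[self liveInstances] removeObject:NSStringFromClass([object class])];
        }
    }

    @end

Each tracked class would call [InstanceCounter noteInit:self] in its designated initializer and [InstanceCounter noteDealloc:self] in dealloc; a unit test could then assert that countForObject: is back to zero.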
EDIT: I am aware of how to use the Leaks tool, as well as how to analyze the project with Xcode. The reason for this post is to keep track of cases which may not be easily detected through Leaks or Analyze.
EDIT: Also, it seems to make sense to have something like this so that leaks can be detected early, by running unit tests that check the global object. I guess that, as an inexperienced Objective-C programmer, I would benefit from the views of others on this.
"Each object could increment the count of its type in its init method, and decrement it in dealloc."
To do that right, you'll have to do one of the following: 1) override behavior at some common point, such as NSObject's -init and -dealloc, or 2) add the appropriate code to the designated initializer of every single class. Neither seems simple.
"The global object could verify at appropriate times whether the count of a particular type is zero or not."
Sounds good, but can you elaborate a bit on "appropriate times"? How would you know at any given point in the life of your program which classes should have zero instances? You'd have a pretty good idea that there should be no objects at the end of the program, but Instruments could tell you the same thing in that case.
Objective-C has taken several steps to make memory management much simpler. Use properties and synthesized accessors where you can, as they essentially manage your objects for you. A more recent improvement is ARC, which goes even further toward automating most memory management tasks. You basically let the compiler figure out where to put the memory management calls -- it's like garbage collection without the garbage collector. Learn to use those tools well before you try to invent new ones.
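For instance, a single property declaration is enough to get correctly memory-managed accessors (the class is hypothetical):

    @interface Person : NSObject
    // The synthesized setter releases the old value and retains the new one
    // (under ARC, keeps a strong reference); no hand-written retain/release.
    @property (nonatomic, strong) NSDate *birthday;
    @end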
Don't go that route... it's a pain with single inheritance. Most importantly, there are excellent tools at your disposal which you should master before deciding you must create some global counter. The global counter already exists in a few tools. Learn them!
The way you combat it is to learn how to balance and manage everything correctly when it's written. It's really very simple in hindsight.
ARC is another option -- really that just postpones your understanding.
The first "design pattern" I recommend it to use release instead of autorelease where possible (although generally more useful for over-releases).
Next, run the leaks instrument/util regularly and fix all leaks/zombies immediately.
Third, learn the existing tools as you go! These tools can do really crazy stuff, like record the backtrace of every allocation and every reference count. You can pause your program's execution and view what allocations exist, alloc counts, backtraces, and all sorts of other stats.
I'm asking this because I'm relatively new to interpreter development and I wanted to know some basic concepts before reinventing the wheel.
My thought was that the values of all variables are stored in an array, which makes up the current scope. Upon entering a function, that array is swapped out and the original array is put on some sort of stack. When leaving the function, the top element of the "scope stack" is popped off and used again.
Is this basically right?
Isn't swapping arrays (which means moving around a lot of data) very slow, and therefore not used by modern interpreters?
Why swap the array? Just look at the top array on your stack. Furthermore, in most languages you don’t have to copy the array when you want to swap it, you can just swap references or pointers.
This is also what an interpreter might do. An alternative is having a special data structure for the current scope which holds a reference to its parent frame explicitly.
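A minimal sketch of that alternative, where each frame holds an explicit reference to its parent (Objective-C here, all names made up):

    #import <Foundation/Foundation.h>

    // One scope frame: a variable table plus a link to the enclosing frame.
    @interface Scope : NSObject
    @property (nonatomic, strong) NSMutableDictionary *variables;
    @property (nonatomic, strong) Scope *parent;
    - (id)lookup:(NSString *)name;
    @end

    @implementation Scope
    - (instancetype)init {
        if ((self = [super init])) {
            _variables = [NSMutableDictionary new];
        }
        return self;
    }
    // Walk outward through the enclosing scopes until the name is found;
    // messaging a nil parent simply returns nil.
    - (id)lookup:(NSString *)name {
        return self.variables[name] ?: [self.parent lookup:name];
    }
    @end

Entering a function allocates a fresh Scope whose parent is the current frame; leaving it makes the parent current again. Only pointers move, never the variable data itself.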
Python uses the C stack to keep track of its scope. Every time a new scope is entered, a new function call is made, so that the scope's data is always held in the local variables on the stack.
In some other interpreters, everything is kept on an explicit stack, somewhat like your suggestion. However, the interpreter acts on the top of the stack in place; there is no need to copy things back and forth, since there is only one copy.