I want to perform the same action over several objects stored in a NSSet.
My first attempt was to use fast enumeration:
for (id item in mySetOfObjects)
[item action];
which works fine. Then I thought of:
[mySetOfObjects makeObjectsPerformSelector:@selector(action)];
And now, I don't know what is the best choice. As far as I understand, the two solutions are equivalent. But are there arguments for preferring one solution over the other?
I would argue for using makeObjectsPerformSelector, since it allows the NSSet object to take care of its own indexing, looping and message dispatching. The people who wrote the NSSet code are most likely to know the best way to implement that particular loop.
At worst, they would simply implement the exact same loop, and all you gain is slightly cleaner code (no need for the enclosing loop). At best, they made some internal optimizations and the code will actually run faster.
The topic is briefly mentioned in Apple's Code Speed Performance document, in the section titled "Unrolling Loops".
If you're concerned about performance, the best thing to do is set up a quick program which performs some selector on the objects in a set. Have it run several million times, and time the difference between the two different cases.
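A minimal sketch of such a test might look like this, timing with CFAbsoluteTimeGetCurrent and assuming the -action method and mySetOfObjects from the question:

NSUInteger runs = 1000000;

CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
for (NSUInteger i = 0; i < runs; i++) {
    for (id item in mySetOfObjects) {
        [item action];
    }
}
NSLog(@"fast enumeration:           %f s", CFAbsoluteTimeGetCurrent() - start);

start = CFAbsoluteTimeGetCurrent();
for (NSUInteger i = 0; i < runs; i++) {
    [mySetOfObjects makeObjectsPerformSelector:@selector(action)];
}
NSLog(@"makeObjectsPerformSelector: %f s", CFAbsoluteTimeGetCurrent() - start);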
I too was presented with this question. I find in the Apple docs "Collections Programming Topics" under "Sets: Unordered Collections of Objects" the following:
The NSSet method objectEnumerator lets you traverse elements of the set one by one. And the makeObjectsPerformSelector: and makeObjectsPerformSelector:withObject: methods provide for sending messages to individual objects in the set. In most cases, fast enumeration should be used because it is faster and more flexible than using an NSEnumerator or the makeObjectsPerformSelector: method. For more on enumeration, see “Enumeration: Traversing a Collection’s Elements.”
This leads me to believe that Fast Enumeration is still the most efficient means for this application.
I would not use makeObjectsPerformSelector, for the simple reason that it is the kind of call you don't see very often. Here is an example of why: suppose I need to add debugging code as the collection is enumerated. You really can't do that with makeObjectsPerformSelector unless you change how the code works in Release mode, which is a real no-no.
for (id item in mySetOfObjects)
{
#if MY_DEBUG_BUILD
    if ([item isAllMessedUp])
        NSLog(@"we found that wily bug that has been haunting us");
#endif
    [item action];
}
--Tom
makeObjectsPerformSelector: might be slightly faster, but I doubt there's going to be any practical difference 99% of the time. It is a bit more concise and readable, though, so I would use it for that reason.
If pure speed is the only issue (i.e. you're creating some rendering engine where every tiny CPU cycle counts), the fastest possible way to iterate through any of the Foundation collection objects (as of iOS 5.0 ~ 6.0) is the various enumerateObjectsUsingBlock: methods. I have no idea why this is, but I tested it and this seems to be the case...
I wrote a small test creating collections of hundreds of thousands of objects, each of which has a method that sums a simple array of ints. Each of those collections was forced to perform the various types of iteration (for loop, fast enumeration, makeObjectsPerformSelector:, and enumerateObjectsUsingBlock:) millions of times, and in almost every case the enumerateObjectsUsingBlock: methods won handily over the course of the tests.
The only time when this wasn't true was when memory began to fill up (when I began to run it with millions of objects), after which it began to lose to "makeObjectsPerformSelector".
I'm sorry I didn't take a snapshot of the code, but it's a very simple test to run; I highly recommend giving it a try and seeing for yourself. :)
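For reference, the block-based variant on an NSSet looks like this (again assuming the hypothetical -action method from the question):

[mySetOfObjects enumerateObjectsUsingBlock:^(id obj, BOOL *stop) {
    [obj action];
}];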
There are tons of articles and blog posts all over the internet saying that mutable objects are bad, that we shouldn't use them, and that we should therefore make all our objects immutable.
I have nothing against this, except that the topic has gone so far that some people might be "tricked" into thinking that mutable objects should never be used at all.
When do we have to resort to use mutable objects? What are the common kinds of problems that are unsolvable without using mutable state?
As to your fear, it's common. Every concept gets taken by some people to mean that nothing else shall ever be done, for any reason.
These are the people who try to make requirements fit their ideology, rather than the other way around (a.k.a. they're not pragmatic).
When to use mutables? Basically when you feel like it, when you think it makes sense.
A prime example is low-memory and high-performance situations, where creating a new instance that is identical to the old one except for one little thing is too expensive in memory and/or CPU cycles.
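A trivial illustration: building a large string in a loop. The names here (linesToLog, report) are placeholders, not from any framework.

// Mutable: one buffer, appended to in place.
NSMutableString *report = [NSMutableString string];
for (NSString *line in linesToLog) {
    [report appendString:line];
    [report appendString:@"\n"];
}

// Immutable: every pass allocates a brand-new string and discards the old one,
// so the total copying grows quadratically with the number of lines.
NSString *report2 = @"";
for (NSString *line in linesToLog) {
    report2 = [[report2 stringByAppendingString:line] stringByAppendingString:@"\n"];
}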
Today I was asked how long an NSMutableDictionary insertion takes, were that dictionary to contain 1,000,000 elements. Not coming from a computer science background, I had absolutely no idea. I was surprised to learn that it completes in (what I now understand to be called) O(n) time. Great. Wonderful.
How could someone know that, definitively?
Obviously, one could just write dozens and dozens of tests against every single Cocoa class and chart out all the time data. I'll be sure to get around to that when I have a few weeks of free time. Barring all of that...
Is this just super obvious to someone with a computer science background?
Does Apple publish documentation that explains this?
Does his knowledge imply that he, being a computer science expert, did his own testing to discover this?
What you are asking about is called the "complexity" of an algorithm. It is language independent; NSDictionary's time complexity is no different from that of any other hash-based associative container, such as C++'s std::unordered_map. However, that doesn't mean an NSDictionary filled with some objects is guaranteed to perform insert, search, or delete operations as quickly as such a container; all it means is that, in the worst case, the time those operations take is linear (O(n)) in the number of elements (the n part of O(n)). Dictionary insertion could be O(1), which is constant time (the operation takes the same amount of time regardless of the number of elements in the dictionary), if there were no hash collisions.
The "algorithm" employed by an NSDictionary is called a Hash Table. A hash table does insertion by hashing the key input (a constant time operation), then resolving collisions, an O(n) operation. Hopefully you can see that, in the worst case, all of your insert operations will collide, which is O(n).
Hash tables and hashing algorithms can of course be specialized to reduce collisions within a specific set of data, but NSDictionary just uses the -hash method of your key objects, which you can override in your NSObject subclasses if you need some sort of specialization (you probably don't).
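As a sketch, such an override on a hypothetical value-like key class might look like this (dictionary keys also need to conform to NSCopying):

@interface GridPoint : NSObject <NSCopying>
@property (nonatomic) NSInteger x;
@property (nonatomic) NSInteger y;
@end

@implementation GridPoint

- (BOOL)isEqual:(id)other {
    if (![other isKindOfClass:[GridPoint class]]) return NO;
    GridPoint *point = other;
    return point.x == self.x && point.y == self.y;
}

- (NSUInteger)hash {
    // Objects that compare equal must return the same hash; a simple
    // combination of the fields keeps collisions low for value-like keys.
    return (NSUInteger)(self.x * 31 + self.y);
}

- (id)copyWithZone:(NSZone *)zone {
    GridPoint *copy = [[GridPoint allocWithZone:zone] init];
    copy.x = self.x;
    copy.y = self.y;
    return copy;
}

@end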
Since it is a general-purpose dictionary and not specialized for a specific set of data, we don't need to know the implementation details of NSDictionary (Apple's documentation for NSDictionary doesn't mention specifics) to know that these operations are O(n) in the worst case. Neither do we have to run "tests" to discover the complexity.
What are the differences between NSArray and CCArray? Also, in what cases will one be preferred to the other with respect to game programming?
CCArray emulates NSMutableArray. It is a wrapper around a C array (memory buffer). It was developed, and is used internally, by cocos2d because NSMutableArray was deemed too slow. However, the performance improvement is minimal. Any use cases (features) of CCArray that cocos2d itself doesn't use remain a potential source of issues, including weird, hard-to-debug problems and terrible performance characteristics.
The most important performance-critical aspect is reading the array sequentially. In my latest tests, that's an area where CCArray apparently no longer excels. Specifically, with fast enumeration, NSMutableArray is around 33 times faster!
CCArray is a perfect example why one should not reinvent the wheel, specifically when it comes to storage classes when there is already a stable, proven, and fast solution available (NSMutableArray). Any speed advantage it may have once had is long gone. What remains is a runtime behavior you will not want to deal with, including some extremely bad performance characteristics (insertion, fast enumeration).
Long story short: do not use CCArray in your own code! Treat CCArray like an internal, private class not to be used in user code (except where unavoidable, i.e. the children array).
NSMutableArray is THE array reference implementation everyone should be using because it's extremely well tested, documented, and stable (both in terms of runtime behavior and speed).
Check it out:
http://www.learn-cocos2d.com/2010/09/array-performance-comparison-carray-ccarray-nsarray-nsmutablearray/
Hope this helps. Enjoy programming.
CCArray
http://www.cocos2d-x.org/embedded/cocos2d-x/d9/d2e/classcocos2d_1_1_c_c_array.html
In cocos2d-x, CCArray is mutable, i.e. you can add elements to it. To create a CCArray instance without a capacity, you can use the CCArray::array() constructor. CCMutableArray is a template-based container that can store objects of the same type. CCArray stores objects as CCObject instances, so you have to cast them after getting them from a CCArray instance.
The NSArray class contains a number of methods specifically designed to ease the creation and manipulation of arrays within Objective-C programs.
I need to implement a bit of functionality that can be used from a few different places in an application. It's basically sending something over the network, but I don't need it to be attached to any particular view - I can communicate everything to the user by UIAlertViews.
What I would like to do is encapsulate the functionality in an object (?) that can maintain its own state for a while and then disappear all by itself. I've read in several similar topics that it's generally not advised to have an object that retains and then releases itself, but on the other hand you have singletons, which, apart from the fact that they never get released, are very similar in nature: you don't need to keep a reference to them just to use them properly. In my situation, however, I feel it would be somewhat wasteful to create a singleton and then keep it alive for something that takes a few seconds to execute.
What I came up with is a static dictionary, local to the class, that keeps unique references to the instances of the class. When an instance is done with its task, it performs the selector removeObjectForKey: after a delay, which removes the only existing reference and effectively kills the object. This way I keep only a dictionary in memory, which is empty most of the time anyway.
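Roughly, the pattern looks like this (a sketch with made-up names; the actual network work and UIAlertView handling are elided):

@interface OneShotSender : NSObject
+ (void)sendPayload:(NSData *)payload;
@end

@implementation OneShotSender

+ (NSMutableDictionary *)liveSenders {
    static NSMutableDictionary *senders = nil;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ senders = [[NSMutableDictionary alloc] init]; });
    return senders;
}

- (void)startWithPayload:(NSData *)payload key:(NSString *)key {
    // ... kick off the request, talk to the user via UIAlertViews ...
    // When the work is done, drop the only strong reference after a delay:
    [[OneShotSender liveSenders] performSelector:@selector(removeObjectForKey:)
                                      withObject:key
                                      afterDelay:5.0];
}

+ (void)sendPayload:(NSData *)payload {
    OneShotSender *sender = [[OneShotSender alloc] init];
    NSString *key = [NSString stringWithFormat:@"%p", sender];
    [[self liveSenders] setObject:sender forKey:key];   // the only strong reference
    [sender startWithPayload:payload key:key];
}

@end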
The question is: are there any unexpected side effects of such a solution that I should be aware of and are there any other good patterns for described situation?
So basically instead of a persistent object of your own class, you've got a persistent object of type NSDictionary? How does that help matters? Is your object unusually large? If you are making your codebase more complicated for the sake of a few bytes, that's not a good tradeoff.
Especially now that ARC is commonplace, this kind of trickery is usually not a good idea. Have you measured how much memory a singleton approach takes and found it to be a problem? Unless you have done this, use a singleton. It's simpler code, and all other things being equal, simpler code is far better.
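For comparison, the boring singleton version is only a few lines (names here are illustrative):

@interface Uploader : NSObject
+ (instancetype)sharedUploader;
- (void)sendPayload:(NSData *)payload;   // does the network work, shows UIAlertViews
@end

@implementation Uploader

+ (instancetype)sharedUploader {
    static Uploader *shared = nil;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ shared = [[Uploader alloc] init]; });
    return shared;
}

- (void)sendPayload:(NSData *)payload {
    // ... perform the request; the singleton simply stays around between calls ...
}

@end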
I'm attempting to piece together and run a list of tasks put together by a user. These task lists can be hundreds or thousands of items long.
From what I know, the easiest and most obvious way would be to build an array and then iterate through them:
NSArray *arrayOfTasks = ... // init and fill with thousands of tasks
for (id eachTask in arrayOfTasks)
{
    if (eachTask && [eachTask respondsToSelector:@selector(execute)]) [eachTask execute];
}
For a desktop, this may be no problem, but for an iPhone or iPad, it may be. Is this a good way to go about it, or is there a faster way to accomplish the same thing?
The reason I'm asking about how much overhead objc_msgSend incurs is that I could also do a straight C implementation. For example, I could put together a linked list and use a block to handle the next task. Will I gain anything from that, or is it really more trouble than it's worth?
I assume you're talking about objc_msgSend, in which case Bill Bumgarner has an excellent 4-part series that is worth a read.
In general, though, I would recommend simply using Obj-C. This is what all apps for the iDevices use, including Apple's, and hundreds of items is not going to kill the device.
What rynmrtn said...
Unless your -execute methods are exceedingly simplistic (incrementing or testing a small handful of scalar values), it is unlikely that objc_msgSend() will even show up as a significant percentage of your program's CPU time.
Measure first, optimize after.
Your code does raise a question: why are you putting things into arrayOfTasks that might not be able to execute? Assuming everything in arrayOfTasks is a subclass of your own making, you could add an execute method and skip the respondsToSelector: test. If you have a hierarchy of collection classes, you could use categories to add the methods; just put a prefix on them to be safe (e.g. pxl_execute or something).
Here is a nice benchmark comparison of common operations, including objc_msgSend. In general, you shouldn't worry about objc_msgSend performance, even on the iPhone. Message sending will always be slower than a straight C function call, but on a modern processor (remember, the iPhone processor still runs at about 500 MHz), the difference is trivial most of the time. If profiling shows that a lot of time is being spent in objc_msgSend, then it might be worth using straight C functions instead of Objective-C methods.
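If you do reach that point, one common middle ground is to cache the IMP once and call it as a C function pointer, which skips the dynamic dispatch without abandoning Objective-C objects. This sketch assumes every task in arrayOfTasks is the same class and uses the hypothetical -execute method from the question:

SEL sel = @selector(execute);
typedef void (*ExecuteIMP)(id, SEL);

if ([arrayOfTasks count] > 0) {
    ExecuteIMP execute = (ExecuteIMP)[[arrayOfTasks objectAtIndex:0] methodForSelector:sel];
    for (id task in arrayOfTasks) {
        execute(task, sel);   // plain C call; no objc_msgSend per iteration
    }
}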
For clarity, you can use -[NSArray makeObjectsPerformSelector:] or (on Mac) enumerateObjectsUsingBlock: instead of iterating through the objects, but I don't think it should make much performance difference.