I'm writing a series of anonymous functions for an Objective-C project (i.e. these functions are not class specific / implementation is hidden) and I came across an interesting issue...
I have a macro function:
#define div(c) ((CGFloat)(c) / 255.0f)
Its usage will almost always be something like div(0.0f), but others may not know that it takes a float, so div(0) is possible.
And the question I have is this: when a value is explicitly cast and it is already of the same type as the cast, is any performance lost to the cast?
A cast is a promise, not a data type, not a method, not an extension. You're just making the compiler comfortable about a type. Nothing changes execution-wise, therefore there is nothing to optimize execution-wise. If you're worried about the type of the parameter you've been given, you can always explicitly store it in a separate CGFloat before operating on it.
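A minimal sketch of that "store it first" idea, using a GNU statement expression (supported by Clang; the macro name comes from the question, the local _c is ours):

#import <CoreGraphics/CGBase.h>

// "Store it first": _c is a CGFloat no matter what the caller passed in.
// ({ ... }) is a GNU statement expression, which Clang supports.
#define div(c) ({ CGFloat _c = (c); _c / 255.0f; })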
The machine works with memory. In the end, all the variables you use are just raw bytes.
Then why does Objective-C have types? To protect the programmer at compile time, by showing errors and warnings.
At runtime all the operations you do are made on the memory, so you don't need to worry about the overhead of casts.
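As an illustration (a sketch; the function names are ours), these two compile to the same code, because casting a CGFloat to CGFloat is a no-op:

#import <CoreGraphics/CGBase.h>

CGFloat normalizeWithCast(CGFloat c) { return (CGFloat)c / 255.0f; } // redundant cast
CGFloat normalizeNoCast(CGFloat c)   { return c / 255.0f; }          // identical code

// Only a call like div(0) pays for an int-to-float conversion, and that
// happens with or without the cast: an int operand must become a float
// before the division anyway.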
Is there any gain in speed, memory usage, or anything else in Swift from defining as much as possible as constants rather than variables?
I mean, defining as much as possible with let instead of var?
In theory, there should be no difference in speed or memory usage - internally, the variables work the same. In practice, letting the compiler know that something is a constant might result in better optimisations.
However the most important reason is that using constants (or immutable objects) helps to prevent programmer errors. It's not by accident that method parameters and iterators are constant by default.
Using immutable objects is also very useful in multithreaded applications, because they prevent one class of synchronization problems.
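The same principle in Objective-C terms (a hedged sketch; the function name is ours): an immutable snapshot can be handed to another thread without locking.

#import <Foundation/Foundation.h>

void shareSnapshot(void) {
    NSMutableArray *scratch = [NSMutableArray arrayWithObjects:@"a", @"b", nil];
    // -copy on a mutable collection returns an immutable NSArray; once frozen,
    // it can be read from another thread without any synchronization.
    NSArray *snapshot = [scratch copy];
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        NSLog(@"%lu items", (unsigned long)snapshot.count);
    });
}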
In Objective-C, I often run into compiler warnings when I have methods that (for instance) return or take as an argument an NSInteger, and I instead supply, say, a long int/NSNumber/etc. value or variable. The code compiles fine, but I am always tempted to just add casts because the warnings make me uneasy. I understand that it probably does not make a big difference either way, but is casting the preferred way to handle these warnings or not?
int, NSInteger, and NSUInteger are scalars. NSNumber is an object. They have nothing to do with one another, and casting between them to hide that fact from the compiler will bring disaster.
Remember, casting means that you throw away the compiler's ability to check that you are doing the right thing. You can use casting to lie to the compiler, as if to say to it, "Don't worry, everything will be okay." Do not lie to the compiler! If you do, then when the app runs, you will get nonsense at worst, and a crash at best. (The crash is best, because it stops you dead; not crashing means you now have a big mistake that will be very difficult to track down later.)
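For instance (a sketch of the scalar/object distinction; the variable names are ours):

#import <Foundation/Foundation.h>

void boxing(void) {
    NSInteger i = 42;
    NSNumber *boxed = @(i);              // scalar -> object: box it
    NSInteger back = boxed.integerValue; // object -> scalar: unbox it
    // NSInteger lie = (NSInteger)boxed; // compiles, but stores the pointer
    //                                   // value, not 42 -- the lie in action
    NSLog(@"%ld %ld", (long)i, (long)back);
}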
If the warning is just a cast issue then by all means add the cast. Many times Xcode is very helpful with this.
Many times the best answer is to change the declared type so there is neither an error nor a warning. Example: You have declared a variable as an int. A method is expecting a NSInteger. Instead of a cast change the declaration of the variable to an NSInteger.
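A sketch of that fix (the function is ours, purely illustrative):

#import <Foundation/Foundation.h>

void fixByDeclaration(NSArray *array) {
    // Instead of: int i = 0; ... [array objectAtIndex:(NSUInteger)i];
    NSUInteger i = 0;                   // declare with the type the API expects
    id first = [array objectAtIndex:i]; // no cast, no warning
    NSLog(@"%@", first);
}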
But note that casting an NSNumber to an NSInteger, for example, will not fix anything; as @Josh points out, they are very different things.
Eliminate all warnings! Then when a new one occurs, it is readily apparent. If there are warnings, then the code is not "compiling fine."
Note: There are times when an individual warning will need to be silenced with a #pragma, but these are extremely rare and should only be done when the cause is completely understood. An example: I needed to be able to cause a crash in an application for testing; that code caused a warning, so a #pragma was added to eliminate the warning.
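A sketch of that pattern (the function and the specific warning are illustrative, not from the original code):

#import <Foundation/Foundation.h>

void logRawMessage(NSString *message) {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wformat-security"
    // A non-literal format string normally triggers -Wformat-security;
    // here the cause is understood, so the warning is silenced for this call.
    NSLog(message);
#pragma clang diagnostic pop
}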
Also run Analyzer and fix any warnings there.
The warnings are there for a reason; they help catch errors.
I'm just trying to get a deeper understanding of Objective C.
Why do I have to cast before the call to avoid a warning? Isn't this a piece of cake for the compiler? Are there any dynamic aspects that I'm missing?
if ([a.class conformsToProtocol:@protocol(P1)])
{
    [(id<P1>)a p1Message];
}
I mean, I understand it in a C/C++ point of view, but after all I'm using an Objective C compiler and I don't like casts. :)
If a is of a specific type that declares itself at compile time as implementing P1, then you shouldn't need to cast.
If a is of type id then you'll need to cast only if the return type is ambiguous and you're actually using it, or if the method takes parameters. That will generally mean that there are multiple method signatures for the method name p1Message, so the compiler doesn't know which to expect.
If a is of some type that doesn't declare itself as implementing P1 then — unless it separately (and repetitiously) declares p1Message — you'll get a warning because you're calling a method that the object may not implement.
If I had to guess, probably a is declared as being of type id rather than id <P1> (which is more normal for, say, delegates) and you have multiple p1Messages flying around. You might also put the cast in proactively because one day you might have multiple different messages with the same name and someone else that might implement p1Message shouldn't have to know every other place in the project that somebody uses that method name.
The compiler can't induce from the conformsToProtocol: check that it is safe to call p1Message exactly because it's a dynamic runtime. You may have substituted a different implementation of conformsToProtocol: either at compile time or at runtime, meaning that it isn't safe to assume that the compiler knows what it does. That call will be dynamically dispatched just like any other.
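A sketch of the pattern under discussion (the protocol and method names come from the question; the wrapper functions are ours):

#import <Foundation/Foundation.h>

@protocol P1 <NSObject>
- (void)p1Message;
@end

void sendP1Message(id a) {
    if ([a conformsToProtocol:@protocol(P1)]) {
        // The cast only picks a signature for the compiler;
        // the call itself is still dynamically dispatched at runtime.
        [(id<P1>)a p1Message];
    }
}

// Declaring the variable as id<P1> in the first place avoids the cast:
void sendP1MessageTyped(id<P1> a) {
    [a p1Message]; // no cast, no warning
}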
Maybe it's useful that calling a method MyClass doesn't understand on something typed as MyClass is an error rather than a warning, since it's probably either a mistake or going to cause mistakes in the future...
However, why is this error specific to ARC? ARC decides what it needs to retain/release/autorelease based on the Cocoa memory management conventions, which would suggest that knowing the selector's name is enough. So it makes sense that there are problems with passing a SEL variable to performSelector:, as it's not known at compile time whether the selector is an init/copy/new method or not. But why does seeing this in a class interface or not make any difference?
Am I missing something about how ARC works, or are the clang warnings just being a bit inconsistent?
"ARC decides what it needs to retain/release/autorelease based on the Cocoa memory management conventions, which would suggest that knowing the selector's name is enough."
This is just one way that ARC determines memory management. ARC can also determine memory management via attributes. For example, you can declare any typedef retainable using __attribute__((NSObject)) (never, ever do this, but it's legal). You can also use other attributes like __attribute__((ns_returns_retained)) and several others to override the naming conventions (these are things you might reasonably do if you couldn't fix the naming, but it's much better to fix the naming).
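For instance (an illustrative declaration; the class and method names are ours):

#import <Foundation/Foundation.h>

@interface Factory : NSObject
// The name doesn't begin with alloc/new/copy/mutableCopy/init, so under the
// naming conventions ARC would treat the result as autoreleased. The attribute
// overrides that and tells ARC the caller receives a retained (+1) reference.
- (NSString *)makeGreeting __attribute__((ns_returns_retained));
@end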
Now, imagine a case where some files include the header that declares these attributes and others don't. Some compile units (.m files) would memory-manage the method's result one way and some another. Hijinks ensue. This is much, much worse than the situation without ARC, and the resulting bugs would be mind-bending, because some ARC code would do one thing and other ARC code would do something different.
So, yeah, don't do that. (Of course you should never ignore warnings in Objective-C anyway, but this is a particularly nasty situation.)
It's an ounce of prevention, I'd assume. Incidentally, it's not foolproof in larger systems, because selectors do not need to match across translation units and matching is all done per translation unit, so it could still blow up on you if you are not writing your program in a way that introduces type safety. Something is better than nothing, though!
The compiler wants to know about parameter and return types, and potentially annotations and out parameters. ObjC has defaults to fall back on, but relying on them is a good source of multiple types of bugs now that the compiler does more for you.
There are a number of reasons you should introduce type safety and turn up the warning levels. With ARC, there are even more. Regardless of whether it is truly necessary, it's a good direction for an objc compiler to move towards (IMHO). You might consider C99 safer than ObjC 2.0 in this regard ;)
If there really is a restriction for codegen, I'd like to hear it.
I'm trying to interface Lua with Objective-C, and I think string conversion with NSSelectorFromString() has too big an overhead because Lua has to copy all strings to internalize them (although I'm not sure about this).
So I'm trying to find more lightweight way to represent a selector in Lua.
An Objective-C selector (SEL) is an opaque type, but it's defined as a pointer to something:
typedef struct objc_selector *SEL;
So it looks safe to handle it as a regular pointer, and I could pass it to Lua as a lightuserdata. Is this fine?
I don't believe it is safe to handle it as a pointer (even a void pointer), because that could change in a future implementation, or in a different implementation, of the language. I haven't seen a formal Objective-C spec that says what is implementation-defined, but often when opaque types like this are used, it means you shouldn't have to know what the underlying type is. In fact, the struct is forward-declared so that you can't access any of its members.
The other problem you might run into is implementing equality comparisons: are selectors references to a pool of constants, or is each selector mutable? Once again, implementation-defined.
Using C strings as suggested above is probably your best bet; Ruby manages to use symbols for selectors and doesn't take too much of a performance penalty. Since the strings are const, Lua doesn't need to copy them, but probably does anyway to be safe. If you can find a way to not copy the strings, you might not take much of a performance hit.
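A sketch of that round trip through the runtime's C-string APIs (the function is ours; the Lua call is shown only as a comment):

#import <objc/runtime.h>

void pushSelectorName(SEL sel) {
    const char *name = sel_getName(sel); // borrowed, constant C string
    SEL back = sel_registerName(name);   // the same selector comes back
    // lua_pushstring(L, name);          // Lua internalizes (copies) the bytes
    (void)back;                          // illustrative; unused otherwise
}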