Why does the Objective-C compiler need to know method signatures? - objective-c

Why does the Objective-C compiler need to know at compile-time the signature of the methods that will be invoked on objects when it could defer that to runtime (i.e., dynamic binding)? For example, if I write [foo someMethod], why is it necessary for the compiler to know the signature of someMethod?

Because of calling conventions at a minimum (with ARC, there are more reasons, but calling conventions have always been a problem).
You may have been told that [foo someMethod] is converted into a function call:
objc_msgSend(foo, @selector(someMethod))
This, however, isn't exactly true. It may be converted to a number of different function calls depending on what it returns (and what's returned matters whether you use the result or not). For instance, if it returns an object or an integer, it'll use objc_msgSend, but if it returns a structure (on both ARM and Intel) it'll use objc_msgSend_stret, and if it returns a floating point on Intel (but not ARM I believe), it'll use objc_msgSend_fpret. This is all because on different processors the calling conventions (how you set up the stack and registers, and where the result is stored) are different depending on the result.
It also matters what the parameters are and how many there are (the number can be inferred from ObjC method names unless they're varargs... right, you have to deal with varargs, too). On some processors, the first several parameters may be put in registers, while later parameters may be put on the stack. If your function takes varargs, then the calling convention may be different still. All of that has to be known in order to compile the function call.
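For illustration, here is a hedged sketch (hypothetical Shape class, Apple frameworks assumed) of how a single interface can force the compiler to pick among those dispatch functions; which entry point gets emitted depends entirely on the declared return type, so the declaration has to be visible at the call site:

#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

@interface Shape : NSObject
- (NSString *)name;   // object return: plain objc_msgSend
- (CGRect)bounds;     // struct returned in memory: objc_msgSend_stret on some Intel/32-bit ARM ABIs
- (long double)area;  // floating-point return: objc_msgSend_fpret on Intel
@end

The three call sites look identical in source ([shape name], [shape bounds], [shape area]), which is exactly why the compiler needs the signatures rather than just the selector names.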
ObjC could be implemented as a more pure object model to avoid all of this (as other, more dynamic languages do), but it would be at the cost of performance (both space and time). ObjC can make method calls surprisingly cheap given the level of dynamic dispatch, and can easily work with pure C machine types, but the cost of that is that we have to let the compiler know more specifics about our method signatures.
BTW, this can (and every so often does) lead to really horrible bugs. If you have a couple of methods:
- (MyPointObject *)point;
- (CGPoint)point;
Maybe they're defined in completely different files as methods on different classes. But if the compiler chooses the wrong definition (such as when you're sending a message to id), then the result you get back from -point can be complete garbage. This is a very, very hard bug to figure out when it happens (and I've had it happen to me).
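To make the failure mode concrete, here is a hedged sketch (hypothetical class and function names) of how such a call site goes wrong when only one of the declarations is visible and the receiver is typed id:

#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

@class MyPointObject;

// Only this declaration is visible in this compile unit; the other class's
// - (CGPoint)point lives in a header that was never imported here.
@interface MyShape : NSObject
- (MyPointObject *)point;   // object return: the compiler emits plain objc_msgSend
@end

MyPointObject *PointOf(id thing) {
    // If `thing` is actually an instance of the other class, whose -point returns a
    // CGPoint by value, the callee follows the struct-return convention while this
    // call site expects a pointer in a register, and the result is garbage.
    return [thing point];
}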
For a bit more background, you may enjoy Greg Parker's article explaining objc_msgSend_stret and objc_msgSend_fpret. Mike Ash also has an excellent introduction to this topic. And if you want to go deep down this rabbit hole, you can see bbum's instruction-by-instruction investigation of objc_msgSend. It's outdated now, pre-ARC, and only covers x86_64 (since every architecture needs its own implementation), but is still highly educational and recommended.

Thread safety of primitive value type properties - Objective C

I am working on a project where I am concerned about the thread safety of an object's properties. I know that when a property is an object such as an NSString, I can run into situations where multiple threads are reading and writing simultaneously. In this case you can get a corrupt read and the app will either crash or result in corrupted data.
My question is for primitive value type properties such as BOOLs or NSIntegers. I am wondering if I can get into a similar situation where I read a corrupt value when reading and writing from multiple threads (and the app will crash)? In either case, I am interested in why.
Clarification - 1/13/17
I am mostly interested in whether a primitive value type property is any differently susceptible to crashing when multiple threads access it at the same time than an object such as an NSMutableString, a custom object, etc. In addition, whether there is a difference when accessing memory on the stack vs. the heap with respect to multithreading.
Clarification - 12/1/17
Thank you to @Rob for pointing me to the answer here: stackoverflow.com/a/34386935/1271826! This answer has a great example that shows that depending on the type of architecture you are on (32-bit vs 64-bit), you can get an undefined result when using a primitive property.
Although this is a great step towards answering my question, I still wonder two things:
Whether there is a multithreading difference when accessing a primitive value property on the stack vs. the heap (as noted in my previous clarification)?
If you restrict a program to running on one architecture, can you still find yourself in an undefined state when accessing a primitive value property, and why?
I should note that there has been a lot of conversation around atomic vs nonatomic in response to this question. Although this is generally an important concept, this question has little to do with preventing undefined multithreading behavior by using the atomic property modifier or any other thread safety approach such as using GCD.
If your primitive value type property is atomic, then you're assured it cannot be corrupted because you're reading it from one thread while setting it from another (as long as you only use the accessor methods, and don't interact with the backing ivar directly). That's the entire purpose of atomic. And, as you suggest, this is only applicable to fundamental data types (or objects that are both immutable and stateless). But in these narrow cases, atomic can be useful.
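As a minimal sketch (hypothetical Settings class), the difference is just the property attribute; atomic is also the default when neither is written:

#import <Foundation/Foundation.h>

@interface Settings : NSObject
@property (atomic, assign) NSInteger retryCount;    // synthesized accessors guarantee no torn reads/writes
@property (nonatomic, assign) NSInteger cacheSize;  // no such guarantee under concurrent access
@end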
Having said that, this is a far cry from concluding that the app is thread-safe. It only assures you that the access to that one property is thread-safe. But often thread-safety must be considered within a broader context. (I know you assure us that this is not the case here, but I qualify this for future readers who too quickly jump to the conclusion that atomic is sufficient to achieve thread-safety. It often is not.)
For example, if your NSInteger property is "how many items are in this cache object", then not only must that NSInteger have its access synchronized, but it must also be synchronized in conjunction with all interactions with the cache object (e.g. the "add item to cache" and "remove item from cache" tasks, too). And, in these cases, since you'll synchronize all interaction with this broader object somehow (e.g. with a GCD queue, locks, the @synchronized directive, whatever), making the NSInteger property atomic becomes redundant and therefore modestly less efficient.
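Here is a hedged sketch (hypothetical Cache class and queue label) of that broader synchronization: a single serial GCD queue guards both the count and the mutations, which is why an atomic count property would add nothing:

#import <Foundation/Foundation.h>

@interface Cache : NSObject
- (void)addItem:(id)item;
- (NSInteger)count;
@end

@implementation Cache {
    NSMutableArray *_items;
    dispatch_queue_t _queue;   // serial queue: all access to _items is funneled through it
}

- (instancetype)init {
    if ((self = [super init])) {
        _items = [NSMutableArray array];
        _queue = dispatch_queue_create("com.example.cache", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)addItem:(id)item {
    dispatch_async(_queue, ^{ [self->_items addObject:item]; });
}

- (NSInteger)count {
    __block NSInteger result;
    dispatch_sync(_queue, ^{ result = (NSInteger)self->_items.count; });
    return result;
}

@end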
Bottom line, in limited situations, atomic can provide thread-safety for fundamental data types, but frequently it is insufficient when viewed in a broader context.
You later say that you don't care about race conditions. For what it's worth, Apple argues that there is no such thing as a benign race. See WWDC 2016 video Thread Sanitizer and Static Analysis (about 14:40 into it).
Anyway, you suggest you are merely concerned whether the value can be corrupted or whether the app will crash:
I am wondering if I can get into a similar situation where I read a corrupt value when reading and writing from multiple threads (and the app will crash)?
The bottom line is that if you're reading from one thread while mutating on another, the behavior is simply undefined. It could vary. You are simply well advised to avoid this scenario.
In practice, it's a function of the target architecture. For example, with a 64-bit type (e.g. long long) on a 32-bit x86 target, you can easily retrieve a corrupt value, where one half of the 64-bit value is set and the other is not. (See https://stackoverflow.com/a/34386935/1271826 for an example.) With primitive types this results in merely nonsensical, invalid numeric values; for pointers to objects, it would obviously have catastrophic implications.
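Here is a hedged sketch (hypothetical Counter class, purely to demonstrate the undefined behavior, not something to ship) of that scenario; on a 32-bit target the reader can observe a value that is half old and half new:

#import <Foundation/Foundation.h>

@interface Counter : NSObject
@property (nonatomic, assign) long long value;   // 64-bit value with no atomicity guarantee
@end

@implementation Counter
@end

static void DemonstrateTornReads(void) {
    Counter *counter = [Counter new];
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        for (;;) {   // writer alternates between two distinct 64-bit patterns
            counter.value = 0;
            counter.value = 0x7FFFFFFF7FFFFFFFLL;
        }
    });
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        for (;;) {   // reader looks for values that are neither pattern
            long long v = counter.value;
            if (v != 0 && v != 0x7FFFFFFF7FFFFFFFLL) {
                NSLog(@"torn read: %llx", v);   // can fire on 32-bit architectures
            }
        }
    });
}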
But even if you're in an environment where no problems are manifested, it's an incredibly fragile approach to eschew synchronization to achieve thread-safety. It could easily break when run on new, unanticipated hardware architectures or compiled under a different configuration. I'd encourage you to watch that Thread Sanitizer and Static Analysis video for more information.

objective-c memory management--how long is object guaranteed to exist?

I have ARC code of the following form:
NSMutableData* someData = [NSMutableData dataWithLength:123];
...
CTRunGetGlyphs(run, CFRangeMake(0, 0), someData.mutableBytes);
...
const CGGlyph *glyphs = [someData mutableBytes];
...
...followed by code that reads memory from glyphs but does nothing with someData, which isn't referenced anymore. Note that CGGlyph is not an object type but an unsigned integer.
Do I have to worry that the memory in someData might get freed before I am done with glyphs (which is actually just pointing inside someData)?
All this code is WITHIN the same scope (i.e., a single selector), and glyphs and someData both fall out of scope at the same time.
PS In an earlier draft of this question I referred to 'garbage collection', which didn't really apply to my project. That's why some answers below give it equal treatment with what happens under ARC.
You are potentially in trouble whether you use GC or, as others have recommended instead, ARC. What you are dealing with is an internal pointer which is not considered an owning reference in either GC or ARC in general - unless the implementation has special-cased NSData. Without that owning reference either GC or ARC might remove the object. The problem you face is peculiar to internal pointers.
As you describe your situation, the safest thing to do is to hang onto the real reference. You could do this by assigning the NSData reference to either an instance variable or a static (method-local if you wish) variable and then assigning nil to that variable when you're done with the internal pointer. In the case of a static, be careful about concurrency!
In practice your code will probably work in both GC and ARC, probably more likely in ARC, but either could conceivably bite you especially as compilers change. For the cost of one variable declaration and one extra assignment you avoid the problem, cheap insurance.
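A hedged sketch of what that looks like applied to the question's own snippet, assuming a hypothetical strong instance variable _glyphBuffer holds the owning reference:

_glyphBuffer = [NSMutableData dataWithLength:123];   // _glyphBuffer: hypothetical strong ivar
CTRunGetGlyphs(run, CFRangeMake(0, 0), _glyphBuffer.mutableBytes);
const CGGlyph *glyphs = [_glyphBuffer mutableBytes];
// ... read from glyphs ...
_glyphBuffer = nil;   // finished with the interior pointer; release the owner explicitly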
[See this discussion as an example of short lifetime under ARC.]
Under actual, real garbage collection that code is potentially a problem. Objects may be released as soon as there is no longer any reference to them and the compiler may discard the reference at any time if you never use it again. For optimisation purposes scope is just a way of putting an upper limit on that sort of thing, not a way of dictating it absolutely.
You can use NSAllocateCollectable to attach lifecycle calculation to C primitive pointers, though it's messy and slightly convoluted.
Garbage collection was never implemented in iOS and is now deprecated on the Mac (as referenced at the very bottom of this FAQ), in both cases in favour of automatic reference counting (ARC). ARC adds retains and releases where it can see that they're implicitly needed. Sadly it can perform some neat tricks that weren't previously possible, such as retrieving objects from the autorelease pool if they've been used as return results. So that has the same net effect as the garbage collection approach — the object may be released at any point after the final reference to it vanishes.
A workaround would be to create a class like:
@interface PFDoNothing : NSObject
+ (void)doNothingWith:(id)object;
@end
Which is implemented to do nothing. Post your autoreleased object to it after you've finished using the internal memory. Objective-C's dynamic dispatch means that it isn't safe for the compiler to optimise the call away — it has no way of knowing you (or the KVO mechanisms or whatever other actor) haven't done something like a method swizzle at runtime.
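Usage would look roughly like this (a sketch continuing the question's snippet):

const CGGlyph *glyphs = [someData mutableBytes];
// ... read from glyphs ...
[PFDoNothing doNothingWith:someData];   // opaque message send: someData must still be alive here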
EDIT: since NSData is a special case in that it offers direct C-level access to object-held memory, it's not difficult to find explicit discussions of the GC situation at least. See this thread on Cocoabuilder for a pretty good one, though the same caveat as above applies, i.e. garbage collection is deprecated and automatic reference counting acts differently.
The following is a generic answer that does not necessarily reflect Objective-C GC support. However, various GC implementations, including ref-counting, can be thought of in terms of Reachability, quirks aside.
In a GC language, an object is guaranteed to exist as long as it is Strongly-Reachable; the "roots" of these Strong-Reachability graphs can vary by language and executing environment. The exact meaning of "Strongly" also varies, but generally means that the edges are Strong-References. (In a manual ref-counting scenario each edge can be thought of as an unmatched "retain" from a given "owner".)
C# on the CLR/.NET is one such implementation where a variable can remain in scope and yet not function as a "root" for a reachability graph. See the System.Timers.Timer class and look for GC.KeepAlive:
If the timer is declared in a long-running method, use KeepAlive to prevent garbage collection from occurring [on the timer object] before the method ends.
As of summer 2012, things are in the process of change for Apple objects that return inner pointers of non-object type. In the release notes for Mountain Lion, Apple says:
NS_RETURNS_INNER_POINTER
Methods which return pointers (other than Objective C object type) have been decorated with the clang compiler attribute objc_returns_inner_pointer (when compiling with clang) to prevent the compiler from aggressively releasing the receiver expression of those messages, which no longer appear to be referenced, while the returned pointer may still be in use.
Inspection of the NSData.h header file shows that this also applies from iOS 6 onward.
Also note that NS_RETURNS_INNER_POINTER is defined as __attribute__((objc_returns_inner_pointer)) in the clang specification, which makes it such that the object's lifetime will be extended until at least the earliest of: the last use of the returned pointer, or any pointer derived from it, in the calling function; or the autorelease pool is restored to a previous state.
Caveats:
If you're using anything older than Mountain Lion or iOS 6 you will still need to use one of the methods discussed here (e.g., __attribute__((objc_precise_lifetime))) when declaring your local NSData or NSMutableData objects.
Also, even with the newest compiler and Apple libraries, if you use older or third-party libraries with objects that do not decorate their inner-pointer-returning methods with __attribute__((objc_returns_inner_pointer)), you will need to decorate your local variable declarations of such objects with __attribute__((objc_precise_lifetime)) or use one of the other methods discussed in the answers.
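For reference, a minimal sketch of the __attribute__((objc_precise_lifetime)) approach applied to the question's code; the annotation tells ARC to keep the variable alive for its whole lexical scope rather than only until its last visible use:

__attribute__((objc_precise_lifetime)) NSMutableData *someData =
    [NSMutableData dataWithLength:123];
CTRunGetGlyphs(run, CFRangeMake(0, 0), someData.mutableBytes);
const CGGlyph *glyphs = [someData mutableBytes];
// ... glyphs stays valid until someData goes out of scope ...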

Why is 'no known method for selector x' a hard error under ARC?

Maybe it's useful if calling a method that MyClass doesn't understand on something typed as MyClass is an error rather than a warning, since it's probably either a mistake or going to cause mistakes in the future...
However, why is this error specific to ARC? ARC decides what it needs to retain/release/autorelease based on the cocoa memory management conventions, which would suggest that knowing the selector's name is enough. So it makes sense that there are problems with passing a SEL variable to performSelector:, as it's not known at compile-time whether the selector is an init/copy/new method or not. But why does seeing this in a class interface or not make any difference?
Am I missing something about how ARC works, or are the clang warnings just being a bit inconsistent?
ARC decides what it needs to retain/release/autorelease based on the cocoa memory management conventions, which would suggest that knowing the selector's name is enough.
This is just one way that ARC determines memory management. ARC can also determine memory management via attributes. For example, you can declare any typedef retainable using __attribute__((NSObject)) (never, ever do this, but it's legal). You can also use other attributes like __attribute__((ns_returns_retained)) and several others to override naming conventions (these are things you might reasonably do if you couldn't fix the naming; but it's much better to fix the naming).
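For illustration, a hedged sketch (hypothetical function) of the kind of declaration being described; only the attribute, not the name, tells ARC that the caller receives a +1 reference it must balance:

#import <Foundation/Foundation.h>

__attribute__((ns_returns_retained)) NSString *MakeGreeting(void);   // not named copy/new, yet returns +1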
Now, imagine a case where you failed to include the header file that declares these attributes in some files but not others. Now some compile units (.m files) memory manage it one way and some memory manage it another. Hijinks ensue. This is much, much worse than the situation without ARC, and the resulting bugs would be mind-bending because some ARC code would do one thing and other ARC code would do something different.
So, yeah, don't do that. (Of course you should never ignore warnings in Objective-C anyway, but this is a particularly nasty situation.)
It's an ounce of prevention, I'd assume. Incidentally, it's not foolproof in larger systems, because selectors do not need to match and matching is all based on the translation unit, so it could still blow up on you if you are not writing your program such that it introduces type safety. Something is better than nothing, though!
The compiler wants to know about parameters and return types, potentially annotations and out parameters. ObjC has defaults to fall back on, but it's a good source of multiple types of bugs as the compiler does more for you.
There are a number of reasons you should introduce type safety and turn up the warning levels. With ARC, there are even more. Regardless of whether it is truly necessary, it's a good direction for an objc compiler to move towards (IMHO). You might consider C99 safer than ObjC 2.0 in this regard ;)
If there really is a restriction for codegen, I'd like to hear it.

Use of __attribute__'s in ARC-managed Code

When ARC came to Objective-C, I did my best to read through the Objective-C Automatic Reference Counting (ARC) guide posted on the Clang project website to get a better hang of what it was about. What I found there (and nowhere else) was mention of using __attribute__ declarations to signify to ARC whether certain code autoreleases its return value, for instance (__attribute__((ns_returns_autoreleased))), or whether it 'consumes' a parameter (__attribute__((ns_consumed))), and so on.
However, it seems that the guide gives very little word on the actual level of necessity these declarations hold. Excluding them seems to make no difference, either when running the static analyzer or when running the project itself. Do these even make a difference? Is there any advantage to labeling a method with __attribute__((objc_method_family(new)))? No article I've found on ARC makes mention of these specifiers at all; perhaps an ARC guru can give word on what these are used for.
(Personally, I include all relevant specifiers just in case, but find that they make code obfuscated and messy.)
These attributes are expressly for abnormal cases, such as:
A function or method parameter of retainable object pointer type may be marked as consumed, signifying that the callee expects to take ownership of a +1 retain count.
A function or method which returns a retainable object pointer type may be marked as returning a retained value, signifying that the caller expects to take ownership of a +1 retain count.
You don't normally do these things, so you don't normally use these attributes. With no attributes, the normal behavior—the NARC rule, or perhaps under ARC I should say CAN—is what the compiler implements and expects.
There are two reasons to use these attributes (a short sketch of both follows this list):
In order to violate the CAN rule; that is, to have a method not so named that returns a reference, or a method so named that doesn't. The attribute documents the violation in the method's prototype, and may even be necessary to implement it, if the implementation uses ARC.
Working with Core Foundation types, including Core Graphics types. These aren't ARCed, so you need to use the bridging attributes to aid conversion to and from “retainable object pointer” types.
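For illustration, a hedged sketch (hypothetical names) covering both cases: an attribute that documents a deliberate naming-convention violation, and a bridging cast for a Core Foundation type that ARC doesn't manage:

#import <Foundation/Foundation.h>
#import <CoreFoundation/CoreFoundation.h>

@interface StringVendor : NSObject
// Not named alloc/new/copy/mutableCopy, yet hands the caller a +1 reference.
- (NSString *)vendString __attribute__((ns_returns_retained));
@end

static CFStringRef CopyAsCFString(NSString *s) {
    // CFStringRef is not ARC-managed, so ownership must be transferred explicitly.
    return (__bridge_retained CFStringRef)s;   // the caller is responsible for CFRelease
}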
That's not necessary in most cases, since Clang knows the ObjC naming conventions. So if you follow the standard Cocoa naming conventions, the compiler automatically assumes the corresponding method family and return-value memory policy.
Namely, if you declare a method named initWith... it will automatically be considered part of the "init" family of methods; no need to specify __attribute__((objc_method_family(init))), Clang detects it automatically; same for the new family, etc.
In fact, you only need to use the __attribute__ specifiers when Clang can't guess such cases, which rarely occurs in practice (I have never had to use them), or when you don't respect the naming conventions:
Quoting Clang Language Extensions Documentation:
Many methods in Objective-C have conventional meanings determined by their selectors. For the purposes of static analysis, it is sometimes useful to be able to mark a method as having a particular conventional meaning despite not having the right selector, or as not having the conventional meaning that its selector would suggest. For these use cases, we provide an attribute to specifically describe the method family that a method belongs to.
So as long as you respect the naming conventions (which you should always do), you won't have anything to do.
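As a hedged example (hypothetical Document class) of the rare case where you do need it: the selector matches the copy family under the naming rules, but the method is just a getter, so the attribute opts it out:

#import <Foundation/Foundation.h>

@interface Document : NSObject
// "copyRights..." parses as copy-family ("copy" followed by an uppercase letter),
// but this accessor does not return a +1 reference, so we opt out of copy semantics.
- (NSString *)copyRightsStatement __attribute__((objc_method_family(none)));
@end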
You should definitely stick to naming conventions wherever possible.
It's clearer to read.
Attributes can introduce build errors if there is a conflict.
ARC semantics combined with attributes are relatively fragile.

Objective C two-phase construction of objects

I've been reading up on RAII and single vs. two-phase construction/initialization. For whatever reason, I was in the two-phase camp up until recently, because at some point I must have heard that it's bad to do error-prone operations in your constructor. However, I think I'm now convinced that single-phase is preferable, based on questions I've read on SO and other articles.
My question is: Why does Objective C use the two-phase approach (alloc/init) almost exclusively for non-convenience constructors? Is there any specific reason in the language, or was it just a design decision by the designers?
I have the enviable situation of working for the guy who wrote +alloc back in 1991, and I happened to ask him a very similar question a few months ago. The addition of +alloc was in order to provide +allocWithZone:, which was in order to add memory pools in NeXTSTEP 2.0 where memory was very tight (4M). This allowed the caller to control where objects were allocated in memory. It was a replacement for +new and its kin, which was (and continues to be, though no one uses it) a 1-phase constructor, based on Smalltalk's new. When Cocoa came over to Apple, the use of +alloc was already entrenched, and there was no going back to +new, even though actually picking your NSZone is seldom of significant value.
So it isn't a big 1-phase/2-phase philosophical question. In practice, Cocoa has single-phase construction, because you always do (and always should) call these back-to-back in a single call without a test on the +alloc. You can think of it as an elaborate way of typing "new".
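In other words (with a hypothetical MyObject class), the two phases are always written as a single expression, and the old single-phase spelling still exists alongside it:

#import <Foundation/Foundation.h>

@interface MyObject : NSObject
@end

@implementation MyObject
@end

static void Construct(void) {
    MyObject *a = [[MyObject alloc] init];   // two phases, but always written as one step
    MyObject *b = [MyObject new];            // the older one-phase spelling; equivalent to alloc/init
    (void)a; (void)b;
}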
My experience is with C++, but one downside of C++'s one-phase initialization is the handling of inheritance/virtual functions. In C++, you can't call virtual functions during construction or destruction (well, you can, it just won't do what you expect). A two-phase init could solve this, partially: from what I understand, the call would get routed to the right class, but that class's init might not have finished yet, so you could still get into trouble there. (I'm still in favor of one-phase.)
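By contrast, a hedged Objective-C sketch (hypothetical Base and Subclass) of how the alloc/init split interacts with overriding: -init uses ordinary dynamic dispatch, so the subclass override runs even though the subclass's own initialization hasn't finished yet:

#import <Foundation/Foundation.h>

@interface Base : NSObject
- (void)configure;
@end

@interface Subclass : Base
@end

@implementation Base
- (instancetype)init {
    if ((self = [super init])) {
        [self configure];   // dynamic dispatch reaches Subclass's override, unlike a C++ constructor
    }
    return self;
}
- (void)configure {}
@end

@implementation Subclass
- (void)configure {
    // Runs while Base's -init is still executing; Subclass-specific state is not yet set up.
}
@end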