Determining what a CFTypeRef is? - objective-c

I have a function which returns CFTypeRef. I have no idea what it really is. How do I determine that? For example it might be a CFStringRef.

CFGetTypeID():
if (CFGetTypeID(myObjectRef) == CFStringGetTypeID()) {
    // i haz a string
}
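
As a hedged sketch, the same pattern extends to any other Core Foundation type whose type-ID function you care about (myObjectRef is assumed to be the value your function returned):

CFTypeID typeID = CFGetTypeID(myObjectRef);
if (typeID == CFStringGetTypeID()) {
    // treat it as a CFStringRef
} else if (typeID == CFNumberGetTypeID()) {
    // treat it as a CFNumberRef
} else if (typeID == CFDictionaryGetTypeID()) {
    // treat it as a CFDictionaryRef
}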

The short answer is that you can (see Dave DeLong's answer). The long answer is that you can't. Both are true. A better question might be "Why do you need to know?" In my opinion, if you can arrange things so that you don't need to know, you're probably going to be better off.
I'm not saying that you can't do it, or even that you shouldn't. What I am saying is that there are some hidden gotchas when you start down this path, and sometimes you're not really aware of what all the unstated assumptions are. Unfortunately, programming correctly depends on knowing all the little details. Off the top of my head, here's a few of the potential gotchas:
To the best of my knowledge, the set of Core Foundation types has increased in each major OS release. Therefore each major OS release has a superset of the Core Foundation types of the previous releases, and likely a strict superset at that. This is "observed behavior", and not necessarily "guaranteed" behavior. The important thing to note is that "things can and do change", and all things being equal, the easier and simpler solutions tend not to take this into account. It is generally considered poor programming style to code something that breaks in the future, regardless of the reason or justification.
Because of Toll-Free Bridging between Core Foundation and Foundation, just because a CFTypeRef = CFStringRef does not mean that a CFTypeRef ≡ CFStringRef, where = means "equal to" and ≡ means "identical to". There is a distinction, which may or may not be important depending on context. As a warning, this tends to be where the bugs roam freely.
For example, a CFMutableStringRef can be used wherever a CFStringRef can be used, or CFStringRef = CFMutableStringRef. However, you cannot use a CFStringRef everywhere a CFMutableStringRef can be used, for obvious reasons. This means CFStringRef ≢ CFMutableStringRef. Again, depending on the context, they can be equal, but they are not identical.
It is very important to note that while there is a CFStringGetTypeID(), there is no corresponding CFMutableStringGetTypeID().
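A small sketch of why that matters (variable names are illustrative): both strings below report the same type ID, so CFGetTypeID() alone cannot tell you whether a string is mutable:

CFStringRef immutableStr = CFSTR("hello");
CFMutableStringRef mutableStr = CFStringCreateMutable(kCFAllocatorDefault, 0);
Boolean sameTypeID = (CFGetTypeID(immutableStr) == CFGetTypeID(mutableStr));  // true
CFRelease(mutableStr);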
Logically, CFMutableStringRef is a strict superset of CFStringRef. It would follow, then, that passing a bona fide immutable CFStringRef to a CFMutableString API call would cause "some kind of problem". While this may not be true now (i.e., 10.6), I know for a fact that the following was true in the past: The CFMutableString API calls did not verify that "the string argument" was actually mutable (this was actually true for all types that made a distinction between immutable and mutable). The checks were there, but they were in the form of debug assertions that were disabled on "Release" builds (in other words, the checks were never performed in practice).
This is (or possibly was) officially not considered to be a bug, and the (trivial) mutability checks were not done "for performance reasons". No "public" API is provided to tell the mutability of a CFString pointer (or the mutability of any type). Combined with Toll-Free Bridging, this meant that you could mutate immutable NSString objects, even though the NSMutableString APIs did perform a mutability check and caused "some kind of problem" when trying to mutate an immutable object. Flavor this with the fact that @"" constant strings in your source are mapped to read-only memory at run time.
The official line, as I recall, was "do not pass immutable objects, either CFStringRef or NSString, to CFMutableString APIs; furthermore, it is a bug to do so". When it was pointed out that there might be some security-related issues with this stance (never mind the fact that it was fundamentally impossible to enforce), say if anything ever made the mistake of critically depending on the immutability of a string, especially "well known" strings, the answer was "the problem is theoretical and nothing will be done at this time until a workable exploit can be demonstrated."
Update: I was curious to see what the current behavior is. On my machine, running 10.6.4, using the CFMutableString APIs on an immutable CFString causes the immutable string to become essentially @"", which is at least better than what it did before (<= 10.5), which was to actually mutate the string. Definitely not the ideal solution; it has that bitter real-world taste to it, where its only redeeming quality is that it is "the least worst solution".
So remember, be careful in your assumptions! You can do it, but if you do, it's more important that you not do it wrong. :) Of course, a lot of "wrong" solutions will work, so the fact that things are working is not necessarily proof that you're doing it right. Good times!
Also, in a Duck Typed system it is often considered bad form, and possibly even a bug, to "look too closely at the type of an object". Objective-C is definitely a Duck Typed system and this unquestionably bleeds over in to Core Foundation due to the tight coupling of Toll-Free bridging. CFTypeRef is a direct manifestation of this Duck Type ambiguity, and depending heavily on the context, may be an explicit way of saying "You are not supposed to be looking too closely at the types".

If you want to find out what type a CFTypeRef is during development, you can use the following snippet.
printf("CFTypeRef type is: %s\n",CFStringGetCStringPtr(CFCopyTypeIDDescription(CFGetTypeID(myObjectRef)),kCFStringEncodingUTF8));
This will print a human-readable name for the type so you know what it is. But Apple makes no guarantees that they'll keep these descriptions consistent, so don't use this in production code. (As is, the snippet will leak memory, but you should only use it during development anyway, so who cares.)
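If you want something a little less fragile for debugging (CFStringGetCStringPtr can legitimately return NULL), here is a sketch of an alternative that leans on toll-free bridging and releases the copied description; under ARC the cast needs __bridge:

CFStringRef typeDescription = CFCopyTypeIDDescription(CFGetTypeID(myObjectRef));
NSLog(@"CFTypeRef type is: %@", (__bridge NSString *)typeDescription);  // __bridge cast needed under ARC
if (typeDescription != NULL) CFRelease(typeDescription);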

Related

Why were string, array and dictionary in Swift changed to value types?

In Objective-C, string, array and dictionary are all reference types, while in Swift they are all value types.
I want to figure out the reason behind this. To my understanding, whether something is a reference type or a value type, the objects live on the heap in both Objective-C and Swift.
Was the change made to make coding easier? That is, if it is a reference type, then the pointer to the object might be nil, so you have to check that both the pointer and the object are not nil before accessing the object, whereas if it is a value type you only need to check the object itself?
But in terms of memory allocation, value types and reference types are the same, right? Both are allocated the same amount of memory?
thanks
Arrays, dictionaries etc. in Objective-C are often mutable. That means when I pass an array to another method, and then that array is modified behind the back of the other method, surprising (to put it gently) behaviour will happen.
By making arrays, dictionaries etc. value types, this surprising behaviour is avoided. When you receive a Swift array, you know that nobody is going to modify it behind your back. Objects that can be modified behind your back are a major source for problems.
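A sketch of that "behind your back" problem in Objective-C terms (someObject and rememberItems: are hypothetical; the point is only that caller and callee share one mutable object):

NSMutableArray *shared = [NSMutableArray arrayWithObjects:@"a", @"b", nil];
[someObject rememberItems:shared];   // the callee keeps a reference, not a copy
[shared removeAllObjects];           // the caller mutates it later...
// ...and whatever someObject "remembered" is now empty as well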
In reality, the Swift compiler tries to avoid unnecessary copying whenever possible. So even if it says that an array is officially copied, it doesn't mean that it is really copied.
The Swift team is very active on the official developer forums. So, I'm assuming that since you didn't ask there, you're more curious about the community's broader "sense" of what the change means, as opposed to the technical implementation details. If you want to understand exactly "why", just go ask them :)
The explanation that makes the most sense to me is that Objects should be responsible for reacting to, and updating the state of your application. Values should be the state of your application. In other words, an Array or a String or a Dictionary (and other value types) should never be responsible for responding to user input or network input or error conditions, etc. The Objects handle that and store the resulting data into those values.
One cool feature in Swift, which makes a complex Value Type (like a Dictionary or a custom type like Person, as opposed to a simple Float) more viable, is that value types can encapsulate rules and logic, because they can have functions. If I write a value type Person as a struct, then the Person struct can have a function for updating a name due to marriage, etc. That's solely concerned with the data, and not with /managing/ the state. The Objects will still decide WHEN and WHY to update a Person's name, but the business logic of how to go about doing so safely/testably can be included in the Value Type itself. Hence giving you a nice way to increase isolation and reduce complexity.
In addition to the previous answers, there are also multi-threading issues to consider when sharing a reference-based collection type, which we don't have to worry about as much when sharing an instance of a value-based type that has copy-on-write behavior. Multi-core is becoming more and more prevalent, even on iOS devices, so it has become more of an issue for the Swift language developers to consider.
I do not know whether this is the real idea behind it, but here is a historical view on it:
At the beginning, an array copy behaved by reference when you changed an item in it. It behaved by value when you changed the length of the array. They did it for performance reasons (fewer array copies). But of course this was, eh, how can I express that politely, eh, difficult with Swift at all, eh, let's call it a "do not care about a good structure if you can win some performance you probably never need" approach. Some called that copy-on-write, which is not much more intelligent, because COW is transparent, while that behavior was not transparent. Typical Swift wording: use a buzzword, use it the way it fits Swift, don't care about correctness.
Later on, arrays got complete by-copy behavior, which is less confusing. (You remember, Swift was for readability. Obviously in Swift's concept, readability means "fewer characters to read", but does not mean "better understandable". Typical Swift wording: use a buzzword, use it the way it fits Swift, don't care about correctness. Did I already mention that?)
So, I guess it is still performance plus understandable behavior, probably leading to less performance. (You will know better when a copy is needed in your code, and you can still do that, and you get a 0-operation from Cocoa if the source array is immutable.) Of course, they could say: "Okay, by value was a mistake, we changed that." But they will never say that.
However, now arrays in Swift behave consistently. A big progress in Swift! Maybe you can call it a programming language one sunny day.

Why does the Objective-C compiler need to know method signatures?

Why does the Objective-C compiler need to know at compile time the signature of the methods that will be invoked on objects when it could defer that to runtime (i.e., dynamic binding)? For example, if I write [foo someMethod], why is it necessary for the compiler to know the signature of someMethod?
Because of calling conventions at a minimum (with ARC, there are more reasons, but calling conventions have always been a problem).
You may have been told that [foo someMethod] is converted into a function call:
objc_msgSend(foo, @selector(someMethod))
This, however, isn't exactly true. It may be converted to a number of different function calls depending on what it returns (and what's returned matters whether you use the result or not). For instance, if it returns an object or an integer, it'll use objc_msgSend, but if it returns a structure (on both ARM and Intel) it'll use objc_msgSend_stret, and if it returns a floating point on Intel (but not ARM I believe), it'll use objc_msgSend_fpret. This is all because on different processors the calling conventions (how you set up the stack and registers, and where the result is stored) are different depending on the result.
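As a rough, hedged illustration (not the compiler's literal output): on Intel, the call site has to be lowered through different trampolines depending on the declared return type. Here foo, view, and -someMethod are assumed to exist, <objc/message.h> declares the msgSend family, and on arm64 objc_msgSend_stret does not exist at all (plain objc_msgSend handles struct returns there):

#import <objc/message.h>

// Object return: plain objc_msgSend.
id result = ((id (*)(id, SEL))objc_msgSend)(foo, @selector(someMethod));

// Large struct return on Intel: objc_msgSend_stret (the struct comes back
// through a hidden pointer argument, i.e. a different calling convention).
CGRect frame = ((CGRect (*)(id, SEL))objc_msgSend_stret)(view, @selector(frame));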
It also matters what the parameters are and how many there are (the number can be inferred from ObjC method names unless they're varargs... right, you have to deal with varargs, too). On some processors, the first several parameters may be put in registers, while later parameters may be put on the stack. If your function takes varargs, then the calling convention may be different still. All of that has to be known in order to compile the function call.
ObjC could be implemented as a more pure object model to avoid all of this (as other, more dynamic languages do), but it would be at the cost of performance (both space and time). ObjC can make method calls surprisingly cheap given the level of dynamic dispatch, and can easily work with pure C machine types, but the cost of that is that we have to let the compiler know more specifics about our method signatures.
BTW, this can (and every so often does) lead to really horrible bugs. If you have a couple of methods:
- (MyPointObject *)point;
- (CGPoint)point;
Maybe they're defined in completely different files as methods on different classes. But if the compiler chooses the wrong definition (such as when you're sending a message to id), then the result you get back from -point can be complete garbage. This is a very, very hard bug to figure out when it happens (and I've had it happen to me).
For a bit more background, you may enjoy Greg Parker's article explaining objc_msgSend_stret and objc_msgSend_fpret. Mike Ash also has an excellent introduction to this topic. And if you want to go deep down this rabbit hole, you can see bbum's instruction-by-instruction investigation of objc_msgSend. It's outdated now, pre-ARC, and only covers x86_64 (since every architecture needs its own implementation), but is still highly educational and recommended.

When using references to objects, is there a mechanism similar to "pass by value" so that the callee cannot make any change to the original data?

The point of the "pass by value" mechanism is that the callee cannot alter the original data. The callee can change the parameter variable in any way, but when the function returns, the original value in the argument variable is not changed.
But in Objective-C or Ruby, since all variables for objects are references to objects, when we pass the object to any method, the method can "send a message" to alter the object. After the method returns, the caller will continue with the argument already in a different state.
Or is there a way to guarantee that the passed-in object is not changed (its state not altered)? Is there such a mechanism?
You're somewhat misusing the terms "pass by value" and "pass by reference" here. What you really are discussing is const. In C++, you can refer to a const instance of a mutable class. There is no similar concept for ObjC objects (or in Ruby I believe, though I am much less familiar with Ruby than ObjC). ObjC does, via C, have the concept of const pointers, but these are a much weaker promise.
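A quick sketch of why the C-level const is the weaker promise: it only prevents reassigning the pointer, not mutating the object it points to:

void takeItems(NSMutableArray * const items) {
    // items = nil;            // compile error: `items` is a const pointer
    [items addObject:@"x"];    // still legal: the object itself is mutable
}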
The best solution to this in ObjC is to prefer value (immutable) classes whenever possible. See Immutability in Objective-C for more discussion on that.
The next-best solution is to, as a matter of design, avoid this situation. Avoid side effects in your methods that are not obvious from the name. By avoiding this as a matter of design, callers should not need to worry about it. Remember, the caller and the called are on the same team. Neither should be trying to protect itself from the other. Good naming and good API design help the developer avoid errors without compiler enforcement. ObjC has little compiler enforcement, so good naming and good API design are absolutely critical. I would say the same for Ruby, despite my limited experience there, in that it is also a highly dynamic language.
Finally, if you are dealing with a poorly behaved API that does modify your object when it shouldn't, you can resort to passing it a copy.
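For example (badAPI and processItems: are hypothetical), handing over a copy means the callee can do whatever it likes without touching your original:

NSMutableArray *mine = [NSMutableArray arrayWithObjects:@"a", @"b", nil];
[badAPI processItems:[mine copy]];   // the callee receives an immutable snapshot
// `mine` still contains @"a" and @"b" here, no matter what badAPI did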
But if you're designing this from scratch, think hard about using an immutable class whenever possible.
I'm not sure what you are getting at. Ruby is pass-by-value. You cannot "change the argument variable":
def is_ruby_pass_by_value?(foo)
  foo = 'No, Ruby is not pass-by-value.'
  return nil
end

bar = 'Yes, of course, Ruby *is* pass-by-value!'
is_ruby_pass_by_value?(bar)
p bar
# 'Yes, of course, Ruby *is* pass-by-value!'
I'm not sure about Objective-C, but I would be surprised if it were different.

Why is 'no known method for selector x' a hard error under ARC?

Maybe it's useful if calling a method that MyClass doesn't understand on something typed MyClass is an error rather than a warning, since it's probably either a mistake or going to cause mistakes in the future...
However, why is this error specific to ARC? ARC decides what it needs to retain/release/autorelease based on the Cocoa memory management conventions, which would suggest that knowing the selector's name is enough. So it makes sense that there are problems with passing a SEL variable to performSelector:, as it's not known at compile time whether the selector is an init/copy/new method or not. But why does it make any difference whether or not the compiler has seen the selector in a class interface?
Am I missing something about how ARC works, or are the clang warnings just being a bit inconsistent?
ARC decides what it needs to retain/release/autorelease based on the cocoa memory management conventions, which would suggest that knowing the selector's name is enough.
This is just one way that ARC determines memory management. ARC can also determine memory management via attributes. For example, you can declare any typedef retainable using __attribute__((NSObject)) (never, ever do this, but it's legal). You can also use other attributes like __attribute__((ns_returns_retained)) and several others to override naming conventions (these are things you might reasonably do if you couldn't fix the naming; but it's much better to fix the naming).
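For instance (a sketch, with a hypothetical method name), a declaration like this tells ARC that the return value comes back at +1 even though the name doesn't start with alloc/new/copy/mutableCopy; the compiler can only honor that if it has actually seen the declaration:

// In some header: ARC treats the result of this method as already retained.
- (NSString *)fetchGreeting __attribute__((ns_returns_retained));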
Now, imagine a case where you included the header file that declares these attributes in some files but failed to include it in others. Now, some compile units (.m files) memory manage it one way and some memory manage it another. Hijinks ensue. This is much, much worse than the situation without ARC, and the resulting bugs would be mind-bending, because some ARC code would do one thing and other ARC code would do something different.
So, yeah, don't do that. (Of course you should never ignore warnings in Objective-C anyway, but this is a particularly nasty situation.)
It's an ounce of prevention, I'd assume. Incidentally, it's not foolproof in larger systems, because selectors do not need to match and matching is all based on the translation unit, so it could still blow up on you if you are not writing your program such that it introduces type safety. Something is better than nothing, though!
The compiler wants to know about parameters and return types, potentially annotations and out parameters. ObjC has defaults to fall back on, but it's a good source of multiple types of bugs as the compiler does more for you.
There are a number of reasons you should introduce type safety and turn up the warning levels. With ARC, there are even more. Regardless of whether it is truly necessary, it's a good direction for an objc compiler to move towards (IMHO). You might consider C99 safer than ObjC 2.0 in this regard ;)
If there really is a restriction for codegen, I'd like to hear it.

Is asserting that every object creation succeeded necessary in Objective C?

I have recently read Apple's sample code for MVCNetworking, written by Apple's Developer Technical Support guru Quinn "The Eskimo!". The sample is a really nice learning experience with what I guess are best development practices for iOS development.
What surprised me, coming from JVM languages, are extremely frequent assertions like this:
syncDate = [NSDate date];
assert(syncDate != nil);
and this:
photosToRemove = [NSMutableSet setWithArray:knownPhotos];
assert(photosToRemove != nil);
and this:
photoIDToKnownPhotos = [NSMutableDictionary dictionary];
assert(photoIDToKnownPhotos != nil);
Is that really necessary? Is that coding style worth emulating?
If you're used to Java, this may seem strange. You'd expect an object creation message to throw an exception when it fails, rather than return nil. However, while Objective-C on Mac OS X has support for exception handling, it's an optional feature that can be turned on/off with a compiler flag. The standard libraries are written so they can be used without exception handling turned on: hence messages often return nil to indicate errors, and sometimes require you to also pass a pointer to an NSError* variable. (This is for Mac development; I'm not sure whether you can even turn exception handling support on for iOS, considering you also can't turn on garbage collection for iOS.)
The section "Handling Initialization Failure" in the document "The Objective-C Programming Language" explains how Objective-C programmers are expected to deal with errors in object initialization/creation: that is, return nil.
Something like [NSData dataWithContentsOfFile: path] may definitely return nil: the documentation for the method explicitly says so. But I'm honestly not sure whether something like [NSMutableArray arrayWithCapacity: n] ever returns nil. The only situation I can think of when it might is when the application is out of memory. But in that case I'd expect the application to be aborted by the attempt to allocate more memory. I have not checked this though, and it may very well be that it returns nil in this case. While in Objective-C you can often safely send messages to nil, this could then still lead to undesirable results. For example, your application may try to make an NSMutableArray, get nil instead, and then happily continue sending addObject: to nil and write out an empty file to disk rather than one with elements of the array as intended. So in some cases it's better to check explicitly whether the result of a message was nil. Whether doing it at every object creation is necessary, like the programmer you're quoting is doing, I'm not sure. Better safe than sorry perhaps?
Edit: I'd like to add that while checking that object creation succeeded can sometimes be a good idea, asserting it may not be the best idea. You'd want this to be also checked in the release version of your application, not just in the debug version. Otherwise it kind of defeats the point of checking it, since you don't want the application end user to, for example, wind up with empty files because [NSMutableArray arrayWithCapacity: n] returned nil and the application continued sending messages to the nil return value. Assertions (with assert or NSAssert) can be removed from the release version with compiler flags; Xcode doesn't seem to include these flags by default in the "Release" configuration though. But if you'd want to use these flags to remove some other assertions, you'd also be removing all your "object creation succeeded" checks.
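To make that distinction concrete (a sketch reusing the snippet above): assert() from <assert.h> is compiled out when NDEBUG is defined, and NSAssert is compiled out when NS_BLOCK_ASSERTIONS is defined, so neither check survives a Release configuration that defines those macros:

syncDate = [NSDate date];
assert(syncDate != nil);                                    // disappears if NDEBUG is defined
NSAssert(syncDate != nil, @"+[NSDate date] returned nil");  // disappears if NS_BLOCK_ASSERTIONS is defined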
Edit: Upon further reflection, it seems more plausible than I first thought that [NSMutableArray arrayWithCapacity: n] would return nil rather than abort the application when not enough memory is available. Basic C malloc also doesn't abort but returns a NULL pointer when not enough memory is available. But I haven't yet found any clear mention of this in the Objective-C documentation on alloc and similar methods.
Edit: Above I said I wasn't sure checking for nil is necessary at every object creation. But it shouldn't be. This is exactly why Objective-C allows sending messages to nil, which then return nil (or 0 or something similar, depending on the message definition): this way, nil can propagate through your code somewhat like an exception, so that you don't have to explicitly check for nil at every single message that might return it. But it's a good idea to check for it at points where you don't want it to propagate, like when writing files, interacting with the user and so on, or in cases where the result of sending a message to nil is undefined (as explained in the documentation on sending messages to nil). I'd be inclined to say this is like the "poor man's" version of exception propagation and handling, though not everyone may agree that the latter is better; but nil doesn't tell you anything about why an error occurred, and you can easily forget to check for it where such checks are necessary.
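A small sketch of checking at such a boundary rather than at every creation (path and expectedCount are assumed; messaging nil just returns nil/NO, which is exactly how the failure can travel this far unnoticed):

NSMutableArray *lines = [NSMutableArray arrayWithCapacity:expectedCount]; // might, in theory, be nil
// ... populate lines ...
NSString *joined = [lines componentsJoinedByString:@"\n"];   // nil if lines is nil
if (joined == nil || ![joined writeToFile:path atomically:YES
                                 encoding:NSUTF8StringEncoding error:NULL]) {
    // handle the failure here, instead of silently writing an empty file
}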
Yup. I think it's a good idea. It helps to filter out the edge cases (out of memory, input variables empty/nil) as soon as the variables are introduced. Although I am not sure about the impact on speed because of the overhead!
I guess it's a matter of personal choice. Usually asserts are used for debugging purpose so that the app crashes at the assert points if the conditions are not met. You'd normally like to strip them out on your app releases though.
I personally am too lazy to place asserts around every block of code as you have shown. I think it's close to being a bit too paranoid. Asserts might be pretty handy in conditions where some uncertainty is involved.
I have also asked this on Apple DevForums. According to Quinn "The Eskimo!" (author of the MVCNetworking sample in question) it is a matter of coding style and his personal preference:
I use lots of asserts because I hate debugging. (...)
Keep in mind that I grew up with traditional Mac OS, where a single rogue pointer could bring down your entire machine (similarly to kernel programming on current systems). In that world it was important to find your bugs sooner rather than later. And lots of asserts help you do that.
Also, even today I spend much of my life dealing with network programs. Debugging network programs is hard because of the asynchrony involved. Asserts help with this, because they are continually checking the state of your program as it runs.
However, I think you have a valid point with stuff like +[NSDate date]. The chances of that returning nil are low. The assert is there purely from habit. But I think the costs of this habit (some extra typing, learning to ignore the asserts) are small compared to the benefits.
From this I gather that asserting that every object creation succeeded is not strictly necessary.
Asserts can be valuable to document the pre-conditions in methods, during development, as design aid for other maintainers (including the future self). I personally prefer the alternative style - to separate the specification and implementation using TDD/BDD practices.
Asserts can be used to double-check runtime types of method arguments due to the dynamic nature of Objective C:
assert([response isKindOfClass:[NSHTTPURLResponse class]]);
I'm sure there are more good uses of assertions. All Things In Moderation...