How can casting a pointer to BOOL return 112? - Objective-C

Let's say I have a pointer to some object, called myObject, and I need to know whether it is really pointing to something. How can this code:
// assume MyObjectClass *myObject;
return (BOOL)myObject;
return 112? I know that I can always write
return (myObject != nil);
and everything will be fine. But until today I had always assumed that explicitly casting anything to bool will always yield true or false (as far as I know, 0 is always considered false and any other value true), and that BOOL, with its YES and NO values, is just a "renamed" bool. So basically, my questions are:
Why is it returning 112? :-)
Are results of explicit casting defined somewhere in C/Objective-C standard, or is it compiler-specific?

In Objective-C, BOOL is just a typedef for signed char with YES/NO defined, but bool is an actual boolean type, which can only be true or false.
It returns 112 because the cast truncates the pointer value to a signed char, keeping only its low-order byte.
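For illustration, here is a minimal plain-C sketch of that truncation (the address is made up; the real result depends on wherever your object happens to be allocated):

#include <stdio.h>
#include <stdint.h>

typedef signed char BOOL;   /* mirrors the Objective-C typedef */

int main(void) {
    /* Hypothetical address whose low byte happens to be 0x70 (112). */
    void *myObject = (void *)(uintptr_t)0x100234570;
    BOOL b = (BOOL)(uintptr_t)myObject;   /* conversion keeps only the low byte */
    printf("%d\n", b);                    /* prints 112 on typical implementations */
    return 0;
}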
Here is some discussion with good answers:
Objective-C : BOOL vs bool
Is there a difference between YES/NO,TRUE/FALSE and true/false in objective-c?

The definition of "true" in C is any non-zero number. Therefore 112 would be considered true as far as C is concerned. From the C99 standard:
6.3.1.2 Boolean type
When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1.
Your value is not converted to 1 because you are converting to BOOL, not _Bool. The conversion to 0/1 is technically handled inside the if (though in reality, the implementation of any if/while-style statement is more likely to test (myObject != 0)).

In C, bool is a stdbool.h macro for the boolean type _Bool.
And a conversion of a non-zero integer value to _Bool is guaranteed to yield 1.
That is, the result of 1 == (bool) 42 is 1.
If you are using a BOOL type as an alias for another integer type (like signed char), you can get a different result:
The result of 1 == (BOOL) 42 is 0.
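To see both conversions side by side, here is a small self-contained C sketch (BOOL is re-declared here the way Objective-C defines it):

#include <stdio.h>
#include <stdbool.h>

typedef signed char BOOL;   /* same definition Objective-C uses */

int main(void) {
    printf("%d\n", 1 == (bool)42);   /* 1: conversion to _Bool yields 0 or 1 */
    printf("%d\n", 1 == (BOOL)42);   /* 0: (BOOL)42 is just the signed char 42 */
    return 0;
}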

Related

What is the difference between compareTo and equals in Kotlin?

I want to fully understand the difference between compareTo and equals.
I have used this code while trying to understand the difference between them:
println("${'A'.compareTo('b')}")
println("${'A'.equals('b')}")
While using compareTo I get a negative number as a result. Nothing is wrong here.
The documentation also mentions that I will get a negative number as a result if the values are not the same:
Compares this object with the specified object for order. Returns zero if this object is equal to the specified other object, a negative number if it's less than other, or a positive number if it's greater than other.
And while using equals, the result I got was false; again this matches the documentation, which says the method returns a boolean:
Indicates whether some other object is "equal to" this one.
Maybe I am missing something really simple, but in the described case, what is the difference between those methods (other than the value that is coming from compareTo and equals)?
The difference between equals and compareTo comes from a few sources.
First, equals is inherited from the Any type in Kotlin, so it is a method available on every value in the language.
compareTo comes from the Comparable interface, so only the types that implement it (Boolean, Byte, Char, Double, Duration, enums, Float, Int, String, etc.) have the method.
Second, the return types differ.
equals returns a Boolean, so a call can only tell you directly whether the values are equal or not, with no extra information.
compareTo returns an Int whose sign encodes the ordering (and whose magnitude, for some types, reflects how far apart the values are). The comparison cannot be between different types.
A positive return value means that the Receiver value is greater than the input value being checked against.
To clarify, the Receiver is the variable or instance that the compareTo method is being called on.
For example:
val myValue: Boolean = false
val myCheck: Boolean = true
myValue.compareTo(myCheck) // Return: 1
In that code, the Receiver would be myValue because it is calling the method compareTo. Kotlin interprets true as a greater value than false, so myValue.compareTo(myCheck) returns 1.
A return of 0 means that the Receiver value is the same as the input parameter value.
val myValue: Boolean = true
val otherValue: Boolean = true
myValue.compareTo(otherValue) // Return: 0
A negative return value means the Receiver is considered less than the input parameter; for some types, the magnitude reflects how far apart the two values are.
val myString = "zza"
val otherString = "zzz"
myString.compareTo(otherString) // Return: -25
The exact value takes a little explaining: the strings have the same length and differ in only one Char position, so the result is the difference between those Char values ('a' - 'z' = -25), returned as an Int.
val myString = "zz"
val otherString = "zzz"
myString.compareTo(otherString) // Return: -1
In this case the difference is literally the existence of one extra Char, so there is no Char-value difference to report; the result is the length difference.
For equals, the argument can be of Any type; it does not need to be the same type as the Receiver, as it does with compareTo.
The equals method is also an operator function, so it can be written using the == operator:
val myString: String = "Hello World"
val otherString: String = "Hello World"
myString == otherString // Return: true
Any non-null value cannot be equal to null:
val myString = "Hello World"
val maybeNull: String? = null
myString == maybeNull // Return: false
Equality is specific to each type and has its own documentation clarifying the nuances: Kotlin Equality

comparison between pointer and integer ('int *' and 'int')

I am confused as to why I get this warning:
I initialize matchObsFlag with:
int *matchObsFlag=0;
but when I run this line:
if (matchObsFlag == 1)
I get this warning. Any ideas?
You get the warning because you did not cast 1 to a pointer, so you are testing equality between different things: an address and an int.
It should be either if (matchObsFlag == (int*)1) or if (*matchObsFlag == 1), depending on what you want to do.
int *matchObsFlag=0;
The type of matchObsFlag is int* while the constant literal 1 is of type int. Comparing these unrelated types causes the warning.
matchObsFlag is a NULL pointer; it needs to point to a valid memory location before you can compare the value it points to.
int number = 1;
matchObsFlag = &number;
Now, to compare the value, you need to dereference the pointer. So try:
if (*matchObsFlag == 1)
{
    // ...
}
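Putting the pieces together, a minimal complete example (names follow the question):

#include <stdio.h>

int main(void) {
    int number = 1;
    int *matchObsFlag = &number;   /* point at valid storage first */

    if (*matchObsFlag == 1)        /* dereference to compare the pointed-to value */
    {
        printf("flag is set\n");
    }
    return 0;
}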

bool versus BOOL [duplicate]

Possible Duplicates:
Objective-C : BOOL vs bool
Is there any difference between BOOL and Boolean in Objective-C?
I noticed from the autocomplete in Xcode that there is a bool and a BOOL in Objective-C. Are these different? Why are there two different kinds of bool?
Are they interchangeable?
Yes they are different.
C++ has bool, and it is a true Boolean type. It is guaranteed to be 0 or 1 within integer context.
C99 has _Bool as a true Boolean type, and if <stdbool.h> is included, then bool becomes a preprocessor macro for _Bool (this header also defines true and false as preprocessor macros for 1 and 0 respectively).
Cocoa has BOOL as a type, but it is just a typedef for signed char. It can represent more values than just 0 or 1.
Carbon has Boolean as a type, but it is just a typedef for unsigned char. Like Cocoa's BOOL, it can represent more values than just 0 or 1.
For Cocoa and Carbon's “Boolean” types, they should be thought of as zero meaning false, and any non-zero value meaning true.
You should use bool unless you need to interoperate with code that uses BOOL, because bool is a real Boolean type and BOOL isn't. What do I mean by "real Boolean type"? I mean that code like this does what you expect it to:
#include <stdbool.h>

#define FLAG_A 0x00000001
#define FLAG_B 0x00000002
...
#define FLAG_F 0x00000020

struct S
{
    // ...
    unsigned int flags;
    struct S* next;   // needed by the loop below
};

void doSomething(struct S* sList, bool withF)
{
    for (struct S* s = sList; s; s = s->next)
    {
        if ((bool)(s->flags & FLAG_F) != withF)
            continue;
        // actually do something
    }
}
because (bool)(s->flags & FLAG_F) can be relied upon to evaluate to either 0 or 1. If that were a BOOL instead of a bool in the cast, it wouldn't work, because withF evaluates to 0 or 1, and (BOOL)(s->flags & FLAG_F) evaluates to 0 or the numeric value of FLAG_F, which in this case is not 1.
This example is contrived, yeah, but real bugs of this type can and do happen all too often in old code that doesn't use the C99/C++ genuine boolean types.
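The failure mode is easy to reproduce in isolation; here is a sketch mirroring the flag values above:

#include <stdio.h>
#include <stdbool.h>

typedef signed char BOOL;   /* as in Cocoa */

int main(void) {
    unsigned int flags = 0x20;        /* FLAG_F set */
    bool b  = (bool)(flags & 0x20);   /* 1: a true boolean normalizes */
    BOOL b2 = (BOOL)(flags & 0x20);   /* 32: a signed char keeps the raw value */
    printf("%d %d\n", b, b2);         /* prints: 1 32 */
    printf("%d\n", b2 == 1);          /* prints: 0 -- the comparison bug */
    return 0;
}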
BOOL is defined in Objective-C as typedef signed char BOOL, while bool is the datatype defined in C99.
BOOL is actually a signed char (thanks Yuji), while bool is a true boolean from the ISO C99 standard.
See here: http://iosdevelopertips.com/objective-c/of-bool-and-yes.html

Is comparing a BOOL against YES dangerous?

I found a comment today in a source file:
// - no longer compare BOOL against YES (dangerous!)
Is comparing BOOL against YES in Objective-C really that dangerous? And why is that?
Can the value of YES change during runtime? Maybe NO is always 0, but YES can be 1, 2 or 3, depending on the runtime, the compiler, or your linked frameworks?
The problem is that BOOL is not a native type, but a typedef:
typedef signed char BOOL;
#define YES (BOOL)1
#define NO (BOOL)0
As a signed char, its values aren't constrained to YES and NO. What happens with another value?
BOOL b = 42;
if (b)
{
    // true
}
if (b != YES)
{
    // also true
}
You should never compare booleans against anything in any of the C-based languages. The right way to do it is to use either:
if (b)
or:
if (!b)
This makes your code much more readable (especially if you're using intelligently named variables and functions like isPrime(n) or childThreadHasFinished) and safe. The reason something like:
if (b == TRUE)
is not so safe is that there are actually a large number of values of b which will evaluate to true, and TRUE is only one of them.
Consider the following:
#define FALSE 0
#define TRUE 1
int flag = 7;
if (flag) printf ("number 1\n");
if (flag == TRUE) printf ("number 2\n");
You would expect both lines to be printed if it worked as you assume, but you only get the first. That's because 7 is true when treated correctly (0 is false, everything else is true), but the explicit test for equality with TRUE (that is, 1) evaluates to false.
Update:
In response to your comment that you thought there'd be more to it than coder stupidity: yes, there is (but I still wouldn't discount coder stupidity as a good enough reason - defensive programming is always a good idea).
I also mentioned readability, which is rather high on my list of desirable features in code.
A condition should either be a comparison between objects or a flag (including boolean return values):
if (a == b) ...
if (c > d) ...
if (strcmp (e, "Urk") == 0) ...
if (isFinished) ...
if (userPressedEsc (ch)) ...
If you use (what I consider) an abomination like:
if (isFinished == TRUE) ...
where do you stop:
if (isFinished == TRUE) ...
if ((isFinished == TRUE) == TRUE) ...
if (((isFinished == TRUE) == TRUE) == TRUE) ...
and so on.
The right way to do it for readability is to just use appropriately named flag variables.
All this is true, but there are valid counterarguments that might be considered:
— Maybe we want to check a BOOL is actually YES or NO. Really, storing any other value than 0 or 1 in a BOOL is pretty incorrect. If it happens, isn't it more likely because of a bug somewhere else in the codebase, and isn't not explicitly checking against YES just masking this bug? I think this is way more likely than a sloppy programmer using BOOL in a non-standard way. So, I think I'd want my tests to fail if my BOOL isn't YES when I'm looking for truth.
— I don't necessarily agree that "if (isWhatever)" is more readable, especially when evaluating long, but otherwise readable, function calls.
e.g. compare
if ([myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]]) {}
with:
if (![myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]]) {}
The first is comparing against true, the second against false and it's hard to tell the difference when quickly reading code, right?
Compare this to:
if ([myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]] == YES) {}
and
if ([myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]] == NO) {}
…and isn't it much more readable?
Again, I'm not saying one way is correct and the other's wrong, but there are some counterpoints.
When code uses a BOOL variable, it is supposed to use the variable as a boolean. The compiler doesn't check whether a BOOL variable is assigned a different value, in the same way it doesn't check whether a variable you pass to a method is initialized with one of an expected set of constants.
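If you do need a canonical YES/NO out of a BOOL that may hold another value, one common defensive idiom (not from the answers above, just a standard C trick) is double negation:

#include <stdio.h>

typedef signed char BOOL;   /* as in Objective-C */
#define YES ((BOOL)1)
#define NO  ((BOOL)0)

int main(void) {
    BOOL raw = 42;            /* out of range, but representable in a signed char */
    BOOL normalized = !!raw;  /* double negation collapses any non-zero value to 1 */
    printf("%d %d\n", raw == YES, normalized == YES);   /* prints: 0 1 */
    return 0;
}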

How do I get the int value from object_getIvar(self, myIntVar) as it returns a pointer

If the variable in object_getIvar is a basic data type (e.g. float, int, BOOL), how do I get the value, since the function returns a pointer (id) according to the documentation? I've tried casting to an int and to int*, but when I try to NSLog the result, I get an error about an incompatible pointer type.
Getting:
myFloat = 2.34f;
float myFloatValue;
object_getInstanceVariable(self, "myFloat", (void **)&myFloatValue);
NSLog(@"%f", myFloatValue);
Outputs:
2.340000
Setting:
float newValue = 2.34f;
// The float's bit pattern is smuggled in as if it were a pointer value;
// this trick assumes pointers and unsigned int have the same size (32-bit runtimes).
unsigned int addr = (unsigned int)&newValue;
object_setInstanceVariable(self, "myFloat", *(float**)addr);
NSLog(@"%f", myFloat);
Outputs:
2.340000
For ARC:
Inspired by this answer: object_getIvar fails to read the value of BOOL iVar.
You have to cast the function pointer for object_getIvar to read basic-type ivars.
typedef int (*XYIntGetVariableFunction)(id object, Ivar ivar);
XYIntGetVariableFunction intVariableFunction = (XYIntGetVariableFunction)object_getIvar;
int result = intVariableFunction(object, intIvar);  // intIvar is an Ivar, e.g. from class_getInstanceVariable
I have made a small useful macro for fast definition of such function pointers:
#define GET_IVAR_OF_TYPE_DEFINITION(type, capitalized_type) \
typedef type (*XY ## capitalized_type ## GetVariableFunctionType)(id object, Ivar ivar); \
XY ## capitalized_type ## GetVariableFunctionType XY ## capitalized_type ## GetVariableFunction = (XY ## capitalized_type ## GetVariableFunctionType)object_getIvar;
Then, for each basic type you need, invoke the macro (params such as (long long, LongLong) will fit):
GET_IVAR_OF_TYPE_DEFINITION(int, Int)
And after that, a function for reading an int (or whichever type you specified) variable becomes available:
int result = XYIntGetVariableFunction(object, ivar)
The value that is returned is the value from the right place in the object, just not the right type. For int and BOOL (but not float), you could just cast the returned pointer to an int or BOOL, since the integer value is carried in the pointer's bits (on 32-bit systems, pointers and ints are even the same size):
(int)object_getIvar(obj, myIntVar)
It's probably boxing the value in an NSNumber. You can verify this by NSLogging the returned id's className, like so:
id returnedValue = object_getIvar(self, myIntVar);
NSLog(@"Class: %@", [returnedValue className]);
Edit: I found another question just like this one here: Handling the return value of object_getIvar(id object, Ivar ivar)
From my own experimentation, it would appear that my original assumption was incorrect. int and float and other primitives appear to be returned as the actual value. However, it would be appropriate to use ivar_getTypeEncoding to verify that the returned value is the type that you're expecting it to be.
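A sketch of that check (non-ARC; the ivar name is a placeholder, and under ARC you would prefer the function-pointer cast from the previous answer, since object_getIvar's id return value would otherwise be retained):

#import <objc/runtime.h>

Ivar ivar = class_getInstanceVariable([obj class], "myInt");
const char *encoding = ivar_getTypeEncoding(ivar);
if (encoding != NULL && encoding[0] == 'i') {               // 'i' is the type encoding for int
    int value = (int)(intptr_t)object_getIvar(obj, ivar);   // the bits are carried in the pointer
    NSLog(@"myInt = %d", value);
}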
You can use object_getInstanceVariable directly (untested in the original answer; note that the ivar's bits are written into the pointer variable itself, so reinterpret its storage rather than dereferencing it):
void *ptr_to_result;
object_getInstanceVariable(obj, "intvarname", &ptr_to_result);
float result = *(float *)&ptr_to_result;  // reinterpret the returned bits as a float