I am confused as to why I get this warning:
I initialize matchObsFlag with:
int *matchObsFlag=0;
but when I run this line:
if (matchObsFlag == 1)
I get this warning. Any ideas?
You get the warning because you did not cast 1 to a pointer, as in (int*)1, so you are comparing two different kinds of things: an address and an int.
So it is either if (matchObsFlag == (int*)1) or if (*matchObsFlag == 1), depending on what you want to do.
int *matchObsFlag=0;
The type of matchObsFlag is int*, while the constant literal 1 is of type int. Comparing these unrelated types is what causes the warning.
matchObsFlag is a NULL pointer. It needs to point to a valid memory location before you can compare the value it points to.
int number = 1;
matchObsFlag = &number;
Now, to compare the value, you need to dereference the pointer. So try -
if (*matchObsFlag == 1)
{
// ...
}
Let's say I have a pointer to some object, called myObject, and I need to know, whether it is really pointing to something. How can this code:
// assume MyObjectClass *myObject;
return (BOOL)myObject;
return 112? I know that I can always write
return (myObject == nil);
and everything will be fine. But until today I had always assumed that explicitly casting anything to bool will always return true or false (as far as I know, 0 is always considered false and any other value true), and that BOOL with its YES and NO values is just a "renamed" bool. So basically, my questions are:
Why is it returning 112? :-)
Are results of explicit casting defined somewhere in C/Objective-C standard, or is it compiler-specific?
In Objective-C, BOOL is just a typedef for signed char with YES/NO defined, while bool is an actual boolean type that can only be true or false.
It returns 112 because the cast truncates the pointer value to a signed char, keeping only its low byte.
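For illustration, here is a minimal sketch of the difference (this assumes the 32-bit runtime where BOOL is a signed char; the exact BOOL value depends on the object's address, so 112 is just one possible outcome):
id myObject = [[NSObject alloc] init];
BOOL asBOOL = (BOOL)(intptr_t)myObject; // signed char: keeps only the low byte of the address
bool asBool = (bool)myObject;           // C99 _Bool: any non-nil pointer converts to 1
NSLog(@"asBOOL = %d, asBool = %d", asBOOL, asBool);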
Here is some discussion with good answers:
Objective-C : BOOL vs bool
Is there a difference between YES/NO,TRUE/FALSE and true/false in objective-c?
The definition of "true" in C is any non-zero number. Therefore 112 would be considered true as far as C is concerned. From the C99 standard:
6.3.1.2 Boolean type
When any scalar value is converted to _Bool, the result is 0 if the value compares equal
to 0; otherwise, the result is 1.
Your value is not converted to 1 because you are converting to BOOL, not _Bool. The conversion to 0/1 would technically happen inside the if (though in practice the compiler is more likely to just test (myObject != 0) in any if/while condition).
In C, bool is a macro from stdbool.h that expands to the boolean type _Bool.
And a conversion of a non-zero integer value to _Bool is guaranteed to yield 1.
That is, the result of 1 == (bool) 42 is 1.
If you are using a BOOL type as an alias for another integer type (like signed char), you can get a different result:
The result of 1 == (BOOL) 42 is 0.
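A tiny standalone check of both claims (this sketch re-declares BOOL as signed char to stand in for the Objective-C typedef on the older runtimes):
#include <stdbool.h>
#include <stdio.h>

typedef signed char BOOL;            /* stand-in for the Objective-C typedef */

int main(void) {
    printf("%d\n", 1 == (bool)42);   /* prints 1: _Bool normalizes 42 to 1 */
    printf("%d\n", 1 == (BOOL)42);   /* prints 0: (BOOL)42 is just 42 */
    return 0;
}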
I am trying to perform logic based on the values of two integers. Here I am defining my integers, and I also have NSLog so I can see if the values are correct when I run the code:
int theHikeFlag = (int)[theNewHikeFlag objectAtIndex:(theID-1)];
NSLog(#"theHikeFlag: %#",theHikeFlag);
int fromTheDB = [self.detailItem hikeFlag];
NSLog(#"fromTheDB: %d",fromTheDB);
And here is the logic:
if (theHikeFlag==1) {
hikeString=#"You have";
}
else if (theHikeFlag==0) {
hikeString=#"You have not";
}
else {
if (fromTheDB==1) {
hikeString=#"You have";
}
else {
hikeString=#"You have not";
}
}
As an example of how this code behaves: when theHikeFlag=1 and fromTheDB=0, the code bypasses both the if and the else if and goes straight to the else, setting hikeString to "You have not". In other words, my result ignores theHikeFlag entirely and is driven only by the fromTheDB integer.
Since you cannot store plain ints in an NSArray, the line
(int)[theNewHikeFlag objectAtIndex:(theID-1)];
is not doing what you think it should. You need to pull the data from NSNumber, not cast to int.
int theHikeFlag = [[theNewHikeFlag objectAtIndex:(theID-1)] intValue];
The reason the log output looks correct is a bit funny: you made two mistakes in a row! First you reinterpreted a pointer as an int, and then you let NSLog reinterpret it as an object again by using the format specifier %@, which is incompatible with int but works fine with pointers. Since the int value actually holds a pointer to an NSNumber, NSLog produces the "correct" output.
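Put together, the retrieval and the log might look like this (a sketch using the same names as in the question):
NSNumber *flagNum = [theNewHikeFlag objectAtIndex:(theID - 1)];
int theHikeFlag = [flagNum intValue];   // unbox the NSNumber
NSLog(@"theHikeFlag: %d", theHikeFlag); // %d for a plain int, %@ only for objects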
If I have the following enum type:
typedef enum {Type1=0, Type2, Type3} EnumType;
And the following code (which will work fine if converted to Java):
NSArray *allTypes = [NSArray arrayWithObjects:[NSNumber numberWithInt:Type1], [NSNumber numberWithInt:Type2], [NSNumber numberWithInt:Type3], nil];
EnumType aType = -1;
NSLog(#"aType is %d", aType); // I expect -1
// Trying to assign the highest type in the array to aType
for (NSNumber *typeNum in allTypes) {
EnumType type = [typeNum intValue];
NSLog(#"type is: %d", type);
if (type > aType) {
aType = type;
}
}
NSLog(#"aType is %d", aType); // I expect 2
The resulted logs are:
TestEnums[11461:b303] aType is: -1
TestEnums[11461:b303] type is: 0
TestEnums[11461:b303] type is: 1
TestEnums[11461:b303] type is: 2
TestEnums[11461:b303] aType is: -1
And when I inspect the value of aType using a breakpoint, I see:
aType = (EnumType) 4294967295
Which, according to Wikipedia, is the maximum value of an unsigned 32-bit integer.
Does this mean that I cannot assign a value to enum types that is not
in the valid range of the type's values?
Why does the value in the log (-1) differ from the real value
(4294967295)? Does it have something to do with the specifier (%d)?
How can I achieve what I'm trying to do here without adding a new
type to represent an invalid value? Note that the collection may
sometimes be empty, which is why I'm using -1 at the beginning to indicate that there is no type if the collection is empty.
Note: I'm new to Objective-C/ANSI-C.
Thanks,
Mota
EDIT:
Here is something weird I've found. If I change the condition inside the loop to:
if (type > aType || aType == -1)
I get the following logs:
TestEnums[1980:b303] aType is -1
TestEnums[1980:b303] type is: 0
TestEnums[1980:b303] type is: 1
TestEnums[1980:b303] type is: 2
TestEnums[1980:b303] aType is 2
Which is exactly what I'm looking for! The weird part is: how is (aType == -1) true, while (Type1 > -1), (Type2 > -1) and (Type3 > -1) are not?!
It seems like EnumType is defined as an unsigned type. When you assign -1 to it, the value wraps around to the highest possible value for an unsigned 32-bit integer (as you found). So, by starting at -1, you are ensuring that no other value you compare it to could possibly be higher, because aType already holds the maximum value for the data type (4294967295).
I'd suggest just starting the counter at 0 instead, as it is the lowest possible value for an EnumType.
EnumType aType = 0;
If you want to check whether any value was chosen, you can first check the count of the collection to see if there are any values, as sketched below.
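For example, one way to avoid the -1 sentinel entirely (a sketch based on the names from the question):
if ([allTypes count] > 0) {
    EnumType aType = Type1;                  // lowest valid value instead of -1
    for (NSNumber *typeNum in allTypes) {
        EnumType type = [typeNum intValue];
        if (type > aType) {
            aType = type;
        }
    }
    NSLog(@"highest type is %d", aType);
} else {
    NSLog(@"the collection is empty, no type to report");
}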
I have a method to check whether a number is even or odd:
-(BOOL)numberIsEven:(unsigned int *)x {
if (x & 1)
{
return TRUE;
}
else
{
return FALSE;
}
}
however whenever I compile it I get the error:
Invalid operands to binary %
So it's somehow being compiled as a modulus operation and failing; however, if I use a modulus-based version (arguably slower) I get the same error!
Help me stack overflow
Thanks -
Ollie
x is a pointer. The modulo operator will not work on pointers.
return (*x & 1);
This dereferences the pointer, then returns the result (implicitly converted to a BOOL).
I suspect you're reading the error message wrong and it really says "Invalid operands to binary &".
The reason it says that is "x" is a pointer, so you need to say:
if (*x & 1)
not
if (x & 1)
That's because (aside from the fact that your code contains numerous typos) x is defined as a pointer. You cannot take the modulus of a pointer; the result would be meaningless.
return *x & 1;
Since x is a pointer to an int, you need to dereference it first.
Alternately, you can change the signature to take an unsigned int. I don't see any advantage to passing a pointer in this situation.
bool isOdd(unsigned int x) {
return (bool)(x&1);
}
bool isOdd_by_ptr(unsigned int * p) {
return isOdd( *p );
}
Except that this is actually C, so you don't get anything by casting to bool.
#define IS_ODD( X ) (1 & (X) )
#define IS_ODD_BY_PTR( P ) (1 & *(P) )
These work just fine.
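A quick usage sketch:
unsigned int n = 7;
unsigned int *p = &n;
if (IS_ODD(n))        { /* taken: 7 is odd */ }
if (IS_ODD_BY_PTR(p)) { /* same result through the pointer */ }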
If the variable accessed through object_getIvar is a basic data type (e.g. float, int, bool), how do I get its value, given that the function returns a pointer (id) according to the documentation? I've tried casting to int and int*, but when I pass the result to NSLog I get an error about an incompatible pointer type.
Getting:
myFloat = 2.34f;
float myFloatValue;
object_getInstanceVariable(self, "myFloat", (void*)&myFloatValue);
NSLog(#"%f", myFloatValue);
Outputs:
2.340000
Setting:
float newValue = 2.34f;
unsigned int addr = (unsigned int)&newValue;
object_setInstanceVariable(self, "myFloat", *(float**)addr);
NSLog(#"%f", myFloat);
Outputs:
2.340000
For ARC:
Inspired by this answer: object_getIvar fails to read the value of BOOL iVar.
You have to cast the object_getIvar function to an appropriately typed function pointer to read basic-type ivars.
typedef int (*XYIntGetVariableFunction)(id object, Ivar ivar); // object_getIvar takes an Ivar, not a name
XYIntGetVariableFunction intVariableFunction = (XYIntGetVariableFunction)object_getIvar;
int result = intVariableFunction(object, intIvar); // intIvar obtained e.g. via class_getInstanceVariable
I have made a small, useful macro for quickly defining such function pointers:
#define GET_IVAR_OF_TYPE_DEFINITION(type, capitalized_type) \
typedef type (*XY ## capitalized_type ## GetVariableFunctionType)(id object, Ivar ivar); \
XY ## capitalized_type ## GetVariableFunctionType XY ## capitalized_type ## GetVariableFunction = (XY ## capitalized_type ## GetVariableFunctionType)object_getIvar;
Then, for each basic type you need, invoke the macro (parameters such as (long long, LongLong) also work):
GET_IVAR_OF_TYPE_DEFINITION(int, Int)
After that, a function for reading an int (or whichever type you specified) ivar becomes available:
int result = XYIntGetVariableFunction(object, ivar)
The value that is returned is the value from the right place in the object, just not the right type. For int and BOOL (but not float), you could just cast the pointer to an int or BOOL, since on 32-bit systems pointers and ints are the same size and can be cast to each other:
(int)object_getIvar(obj, myIntVar)
It's probably boxing the value in an NSNumber. You can verify this by NSLogging the returned id's className, like so:
id returnedValue = object_getIvar(self, myIntVar);
NSLog(#"Class: %#", [returnedValue className]);
Edit: I found another question just like this one here: Handling the return value of object_getIvar(id object, Ivar ivar)
From my own experimentation, it would appear that my original assumption was incorrect. int and float and other primitives appear to be returned as the actual value. However, it would be appropriate to use ivar_getTypeEncoding to verify that the returned value is the type that you're expecting it to be.
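A sketch of that check (this assumes non-ARC code, as in the earlier snippets, and an int ivar named "myIntVar"; under ARC, combine it with the function-pointer cast shown above):
#import <objc/runtime.h>
#include <string.h>

Ivar ivar = class_getInstanceVariable([obj class], "myIntVar");
const char *encoding = ivar_getTypeEncoding(ivar);
if (encoding != NULL && strcmp(encoding, @encode(int)) == 0) {
    int value = (int)(intptr_t)object_getIvar(obj, ivar); // primitive ivar comes back as the raw value
    NSLog(@"myIntVar = %d", value);
}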
You can use object_getInstanceVariable directly (I haven't tested it):
void *ptr_to_result;
object_getInstanceVariable(obj, "intvarname", &ptr_to_result);
float result = *(float *)ptr_to_result;