bool versus BOOL [duplicate] - objective-c

Possible Duplicates:
Objective-C : BOOL vs bool
Is there any difference between BOOL and Boolean in Objective-C?
I noticed from the autocomplete in Xcode that there is a bool and a BOOL in Objective-C. Are these different? Why are there two different kinds of bool?
Are they interchangeable?

Yes they are different.
C++ has bool, and it is a true Boolean type. It is guaranteed to be 0 or 1 within integer context.
C99 has _Bool as a true Boolean type, and if <stdbool.h> is included, then bool becomes a preprocessor macro for _Bool (this header also defines true and false as preprocessor macros for 1 and 0 respectively).
Cocoa has BOOL as a type, but it is just a typedef for signed char. It can represent more values than just 0 or 1.
Carbon has Boolean as a type, but it is just a typedef for unsigned char. Like Cocoa's BOOL, it can represent more values than just 0 or 1.
Cocoa's and Carbon's "Boolean" types should be thought of as zero meaning false and any non-zero value meaning true.
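To see the practical difference, here is a minimal sketch (it assumes BOOL is typedef'd to signed char, as described above; on runtimes where BOOL is defined as bool the cast would print 1 instead):
#import <Foundation/Foundation.h>
#include <stdbool.h>

int main(void)
{
    int value = 256;
    bool b = value;        // 1: any non-zero scalar converts to true
    BOOL c = (BOOL)value;  // 0: 256 truncated to a signed char is 0
    NSLog(@"bool: %d, BOOL: %d", b, c);
    return 0;
}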

You should use bool unless you need to interoperate with code that uses BOOL, because bool is a real Boolean type and BOOL isn't. What do I mean by a "real Boolean type"? I mean that code like this does what you expect it to:
#define FLAG_A 0x00000001
#define FLAG_B 0x00000002
...
#define FLAG_F 0x00000020
typedef struct S
{
    // ...
    unsigned int flags;
    struct S *next;   // next node in the list walked below
} S;
void doSomething(S *sList, bool withF)
{
    for (S *s = sList; s; s = s->next)
    {
        if ((bool)(s->flags & FLAG_F) != withF)
            continue;
        // actually do something
    }
}
because (bool)(s->flags & FLAG_F) can be relied upon to evaluate to either 0 or 1. If that were a BOOL instead of a bool in the cast, it wouldn't work, because withF evaluates to 0 or 1, and (BOOL)(s->flags & FLAG_F) evaluates to 0 or the numeric value of FLAG_F, which in this case is not 1.
This example is contrived, yeah, but real bugs of this type can and do happen all too often in old code that doesn't use the C99/C++ genuine boolean types.

BOOL is defined in Objective-C as typedef signed char BOOL, while bool is the datatype defined in C99.

BOOL is actually a signed char (thanks Yuji), while bool is a true boolean from the ISO C99 standard.
See here: http://iosdevelopertips.com/objective-c/of-bool-and-yes.html

Related

Will the last enum value always be higher than the first enum value?

I have a question about enums: when I create an enum, will the last value always be higher than the first value of the enum?
Maybe an example will help to explain what I mean:
Imagine I am developing an RPG game, in which there are weapons. Each weapon has a type:
typedef enum
{
WoodenSword,
IronSword,
SteelSword,
GoldenSword
}WeaponType;
Now I want to check the difference in power between the weapons (supposing the WoodenSword is the weakest weapon and the GoldenSword is the strongest weapon). Is it possible to check the power of a weapon with a simple:
WeaponType type = GoldenSword;
if(type > WoodenSword)
{
//Do something
}
In other words, I don't want this, but is it possible for the enum values to end up like this (if you don't force the values):
typedef enum
{
WoodenSword, //-> equals 40
IronSword, //-> equals 0
SteelSword, //-> equals 42
GoldenSword //-> equals 5
}WeaponType;
Or will it be this way by default:
typedef enum
{
WoodenSword, //-> equals 0
IronSword, //-> equals 1
SteelSword, //-> equals 2
GoldenSword //-> equals 3
}WeaponType;
I hope this is clear enough. Please feel free to tell me if I am not precise enough.
Thanks.
For C:
From the C99 standard section 6.7.2.2 Enumeration specifiers:
The identifiers in an enumerator list are declared as constants that have type int and may appear wherever such are permitted.98) An enumerator with = defines its enumeration constant as the value of the constant expression. If the first enumerator has no =, the value of its enumeration constant is 0. Each subsequent enumerator with no = defines its enumeration constant as the value of the constant expression obtained by adding 1 to the value of the previous enumeration constant. (The use of enumerators with = may produce enumeration constants with values that duplicate other values in the same enumeration.) The enumerators of an enumeration are also known as its members.
So, if the value of an enum enumerator is not explicitly set, it is guaranteed to be one greater than the previous value.
C and C++ guarantee that if you don't force values, each subsequent non-forced value in an enum will be the previous value + 1.
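For instance, mixing explicit and implicit values shows the +1 rule in action (a small illustration, not from the question):
enum Example
{
    A = 5,
    B,      /* 6: previous + 1 */
    C = 5,  /* explicit values may duplicate earlier ones */
    D       /* 6 again: previous + 1 */
};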
Yes, the default behavior is as you describe. To get the other behavior, you need to set values explicitly, like this:
typedef enum
{
WoodenSword = 40, //-> equals 40
IronSword = 0, //-> equals 0
SteelSword = 42, //-> equals 42
GoldenSword = 5 //-> equals 5
} WeaponType;
If you don't force subsequent values you can rely on it, unless ...
root@debian:/home/david# cat demo.c
#include <stdio.h>
#include <limits.h>
enum {a = INT_MAX, b};
int main(void)
{
printf("a=%d b=%d\n", a, b);
return 0;
}
root@debian:/home/david# clang -o demo demo.c
demo.c:4:20: warning: overflow in enumeration value
enum {a = INT_MAX, b};
^
1 warning generated.
root@debian:/home/david# ./demo
a=2147483647 b=-2147483648
Or just use defines instead of an enumeration
#define IronSword 0
#define GoldenSword 5
#define WoodenSword 40
#define SteelSword 42
and
if(type > WoodenSword)
{
//Do something
}

How can casting pointer to BOOL return 112?

Let's say I have a pointer to some object, called myObject, and I need to know whether it is really pointing to something. How can this code:
// assume MyObjectClass *myObject;
return (BOOL)myObject;
return 112? I know that I can always write
return (myObject != nil);
and everything will be fine. But until today I had always assumed that explicitly casting anything to bool would always return true or false (as far as I know, 0 is always considered false and any other value true) and that BOOL, with its YES and NO values, is just a "renamed" bool. So basically, my questions are:
Why is it returning 112? :-)
Are the results of explicit casts defined somewhere in the C/Objective-C standards, or is it compiler-specific?
In Objective-C, BOOL is just a typedef for signed char, with YES and NO defined as macros, but bool is an actual Boolean type, which can only be true or false.
It returns 112 because the cast truncates the pointer's address to a signed char, keeping only its low byte.
Here is some discussion with good answers:
Objective-C : BOOL vs bool
Is there a difference between YES/NO,TRUE/FALSE and true/false in objective-c?
The definition of "true" in C is any non-zero number. Therefore 112 would be considered true as far as C is concerned. From the C99 standard:
6.3.1.2 Boolean type
When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1.
Your value is not converted to 1 because you are converting to BOOL not _Bool. The conversion to 0/1 will be technically handled inside the if (though in reality the implementation is more likely to be (myObject != 0) inside any if/while type statement).
In C, bool is a stdbool.h macro for the Boolean type _Bool.
And a conversion of a non-zero integer value to _Bool is guaranteed to yield 1.
That is, the result of 1 == (bool) 42 is 1.
If you are using a BOOL type as an alias for another integer type (like signed char), you can get a different result:
The result of 1 == (BOOL) 42 is 0.
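To make the pitfall concrete, here is a small sketch (the addresses are made up, and the result again assumes BOOL is a signed char; Clang will warn about the pointer-to-smaller-integer cast, which is itself a hint that the cast is a bad idea):
#import <Foundation/Foundation.h>

int main(void)
{
    // Pretend these are object addresses; only the low byte survives the cast.
    void *object  = (void *)0x100003870;   // low byte 0x70 == 112
    void *unlucky = (void *)0x100003800;   // low byte 0x00

    NSLog(@"%d", (BOOL)object);    // 112: truthy, but not equal to YES (1)
    NSLog(@"%d", (BOOL)unlucky);   // 0: NO, even though the pointer is non-nil!
    NSLog(@"%d", object != NULL);  // 1: the safe way to get a genuine YES/NO
    return 0;
}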

Large number of bitwise enums

I've got a question regarding bitwise enums that I just can't seem to resolve. I've got a number of flags that are represented by a bitwise enum as in the following example:
enum
{
EnumNone=0,
EnumOne = 1<<0,
EnumTwo = 1<<1,
EnumThree = 1<<2,
EnumFour = 1<<3
};
typedef NSUInteger MyEnum;
All is fine with the above example. Based on my research and various helpful posts on Stack Overflow (this one, for example), I've concluded that, using the above example, I'm essentially given 32 options (or shifts, if you will), each option representing 1 bit in a 32-bit series of options, which basically tells me that I can go all the way to EnumThirtyTwo = 1 << 31.
My question is this:
Suppose I have more than 32 flags to represent, say 75 for example, using a bitwise enum. How would that best be represented?
enum
{
EnumNone=0,
EnumOne = 1<<0,
EnumTwo = 1<<1,
EnumThree = 1<<2,
EnumFour = 1<<3,
...
...
EnumSeventyFive = 1<<75
};
typedef NSUInteger MyEnum;
Would it be a simple matter of changing the declaration of my enum type, say, to: typedef long int MyEnum; or typedef long MyEnum?
You can use a few simple macros/functions and a struct containing a char array of sufficient size - this gives you call-by-value semantics, i.e. just like real enums. E.g. something along the lines of (typed directly into the answer):
typedef struct
{
char bits[10]; // enough for 80 bits...
} SeventyFiveFlags;
typedef enum
{
EnumOne = 0,
...
EnumSeventyFive = 74
} SeventyFiveFlagNames;
NS_INLINE BOOL testFlag(SeventyFiveFlags flags, SeventyFiveFlagNames bit)
{
return (flags.bits[bit >> 3] & (1 << (bit & 0x7))) != 0;
}
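For completeness, a matching setter in the same style might look like this (a sketch, assuming the SeventyFiveFlags struct and bit layout above):
NS_INLINE void setFlag(SeventyFiveFlags *flags, SeventyFiveFlagNames bit, BOOL value)
{
    if (value)
        flags->bits[bit >> 3] |= (char)(1 << (bit & 0x7));
    else
        flags->bits[bit >> 3] &= (char)~(1 << (bit & 0x7));
}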
However you can also use the bitstring(3) functions/macros if you are OK with call-by-reference semantics. These create (heap or stack) bit strings of any length. Use your enum to provide symbolic names for the bit numbers rather than masks, e.g.:
#include <bitstring.h>
typedef enum
{
EnumOne = 0,
...
EnumSeventyFive = 74,
SeventyFiveFlagsSize = 75
} SeventyFiveFlagNames;
typedef bitstr_t *SeventyFiveFlags;
// local (stack) declaration & use
bitstr_t bit_decl(seventyFive, SeventyFiveFlagsSize); // declare storage for 75 bits on the stack
bit_nclear(seventyFive, EnumOne, EnumSeventyFive); // set all false
if( bit_test(seventyFive, EnumFortyTwo) ) // test
You can always wrap this up as a class if heap allocation only is OK.
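A heap-allocated variant using the same bitstring(3) macros might look like this (a sketch; bit_alloc returns zeroed storage from calloc, so every flag starts out false):
#include <bitstring.h>
#include <stdlib.h>

SeventyFiveFlags flags = bit_alloc(SeventyFiveFlagsSize); // all 75 bits start at 0
if (flags != NULL)
{
    bit_set(flags, EnumSeventyFive);            // set one flag
    if (bit_test(flags, EnumSeventyFive))
    {
        // ...
    }
    bit_clear(flags, EnumSeventyFive);          // clear it again
    free(flags);
}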
Maybe I am talking about something irrelevant.
I think having too many flags in one enum is not good practice. With that many flags, there should be a way to group them, for example (each group needs its own prefix so the enumerator names don't collide):
enum
{
    WidthEnumNone = 0,
    WidthEnumOne = 1 << 0,
    WidthEnumTwo = 1 << 1,
    WidthEnumThree = 1 << 2,
    WidthEnumFour = 1 << 3
};
typedef NSUInteger widthRelated;
enum
{
    HeightEnumNone = 0,
    HeightEnumOne = 1 << 0,
    HeightEnumTwo = 1 << 1,
    HeightEnumThree = 1 << 2,
    HeightEnumFour = 1 << 3
};
typedef NSUInteger heightRelated;
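If the two groups need to travel together, one option (just a sketch built on the grouped typedefs above) is to wrap them in a struct:
typedef struct
{
    widthRelated widthFlags;
    heightRelated heightFlags;
} LayoutFlags;

// usage
LayoutFlags layout = { WidthEnumOne | WidthEnumThree, HeightEnumTwo };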

enum values: NSInteger or int?

tl;dr Version
How are the data types of an enum's constants guaranteed to be NSUInteger instead of unsigned int when declaring an enum thusly:
enum {
NSNullCellType = 0,
NSTextCellType = 1,
NSImageCellType = 2
};
typedef NSUInteger NSCellType;
The typedef to NSUInteger does not appear to be tied to the enum declaration in any way.
Full Version
I was reading through Apple's 64-Bit Transition Guide for Cocoa for some guidance on enum values and I came away with a question. Here's a (lengthy) quote from the Enumeration Constants section, emphasis mine:
A problem with enumeration (enum) constants is that their data types are frequently indeterminate. In other words, enum constants are not predictably unsigned int. With conventionally constructed enumerations, the compiler actually sets the underlying type based on what it finds. The underlying type can be (signed) int or even long. Take the following example:
typedef enum {
MyFlagError = -1,
MyFlagLow = 0,
MyFlagMiddle = 1,
MyFlagHigh = 2
} MyFlagType;
The compiler looks at this declaration and, finding a negative value assigned to one of the member constants, declares the underlying type of the enumeration int. If the range of values for the members does not fit into an int or unsigned int, then the base type silently becomes 64-bit (long). The base type of quantities defined as enumerations can thus silently change size to accord with the values in the enumeration. This can happen whether you're compiling 32-bit or 64-bit. Needless to say, this situation presents obstacles for binary compatibility.
As a remedy for this problem, Apple has decided to be more explicit about the enumeration type in the Cocoa API. Instead of declaring arguments in terms of the enumeration, the header files now separately declare a type for the enumeration whose size can be specified. The members of the enumeration and their values are declared and assigned as before. For example, instead of this:
typedef enum {
NSNullCellType = 0,
NSTextCellType = 1,
NSImageCellType = 2
} NSCellType;
there is now this:
enum {
NSNullCellType = 0,
NSTextCellType = 1,
NSImageCellType = 2
};
typedef NSUInteger NSCellType;
The enumeration type is defined in terms of NSInteger or NSUInteger to make the base enumeration type 64-bit capable on 64-bit architectures.
My question is this: given that the typedef doesn't appear to be tied explicitly to the enum declaration, how does one know if their data types are unsigned int or NSUInteger?
There is now NS_ENUM, starting with Xcode 4.5:
typedef NS_ENUM(NSUInteger, NSCellType) {
NSNullCellType = 0,
NSTextCellType = 1,
NSImageCellType = 2
};
And you can consider NS_OPTIONS if you work with binary flags:
typedef NS_OPTIONS(NSUInteger, MyCellFlag) {
MyTextCellFlag = 1 << 0,
MyImageCellFlag = 1 << 1,
};
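On compilers that support fixed underlying types for enums (modern Clang does), the NS_ENUM declaration above expands to roughly the following, which is what ties the NSUInteger type to the constants (a sketch of the effect, not the literal macro text):
typedef enum NSCellType : NSUInteger NSCellType;
enum NSCellType : NSUInteger
{
    NSNullCellType = 0,
    NSTextCellType = 1,
    NSImageCellType = 2
};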
I ran a test on the simulator; the intention of the test is to check the size of different integer types. For that, the result of sizeof was printed to the console. So I tested these enum values:
typedef enum {
TLEnumCero = 0,
TLEnumOne = 1,
TLEnumTwo = 2
} TLEnum;
typedef enum {
TLEnumNegativeMinusOne = -1,
TLEnumNegativeCero = 0,
TLEnumNegativeOne = 1,
TLEnumNegativeTwo = 2
} TLEnumNegative;
typedef NS_ENUM(NSUInteger, TLUIntegerEnum) {
TLUIntegerEnumZero = 0,
TLUIntegerEnumOne = 1,
TLUIntegerEnumTwo = 2
};
typedef NS_ENUM(NSInteger, TLIntegerEnum) {
TLIntegerEnumMinusOne = -1,
TLIntegerEnumZero = 0,
TLIntegerEnumOne = 1,
TLIntegerEnumTwo = 2
};
Test Code:
NSLog(#"sizeof enum: %ld", sizeof(TLEnum));
NSLog(#"sizeof enum negative: %ld", sizeof(TLEnumNegative));
NSLog(#"sizeof enum NSUInteger: %ld", sizeof(TLUIntegerEnum));
NSLog(#"sizeof enum NSInteger: %ld", sizeof(TLIntegerEnum));
Result for iPhone Retina (4-inch) Simulator:
sizeof enum: 4
sizeof enum negative: 4
sizeof enum NSUInteger: 4
sizeof enum NSInteger: 4
Result for iPhone Retina (4-inch 64 bit) Simulator:
sizeof enum: 4
sizeof enum negative: 4
sizeof enum NSUInteger: 8
sizeof enum NSInteger: 8
Conclusion
A generic enum can be an int or unsigned int type of 4 bytes, whether compiling for 32-bit or 64-bit.
As we already know, NSUInteger and NSInteger are 4 bytes for a 32-bit compiler and 8 bytes for a 64-bit compiler for iOS.
These are two separate declarations. The typedef guarantees that, when you use that type, you always get an NSUInteger.
The problem with an enum is not that it's not large enough to hold the value. In fact, the only guarantee you get for an enum is that sizeof(enum Foo) is large enough to hold whatever values you've currently defined in that enum. But its size may change if you add another constant. That's why Apple do the separate typedef, to maintain binary stability of the API.
The data types of the enum's constants are not guaranteed to be NSUInteger, but they are guaranteed to be cast to NSUInteger every time you use them through NSCellType.
In other words, the declaration decrees that although the enum's values would currently fit into an unsigned int, the storage reserved for them when accessed through NSCellType should be an NSUInteger.
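The "silently becomes 64-bit" behaviour from Apple's guide can be demonstrated directly (this relies on the GCC/Clang extension that accepts enumerator values wider than int):
#include <stdio.h>

enum Small { SmallValue = 1 };
enum Big   { BigValue = 0x100000000 };  // does not fit in 32 bits

int main(void)
{
    // typically prints "4 8" with Clang
    printf("%zu %zu\n", sizeof(enum Small), sizeof(enum Big));
    return 0;
}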

Is comparing a BOOL against YES dangerous?

I found a comment today in a source file:
// - no longer compare BOOL against YES (dangerous!)
Is comparing BOOL against YES in Objective-C really that dangerous? And why is that?
Can the value of YES change during runtime? Maybe NO is always 0 but YES can be 1, 2 or 3 - depending on runtime, compiler, your linked frameworks?
The problem is that BOOL is not a native type, but a typedef:
typedef signed char BOOL;
#define YES (BOOL)1
#define NO (BOOL)0
As a char, its values aren't constrained to TRUE and FALSE. What happens with another value?
BOOL b = 42;
if (b)
{
// true
}
if (b != YES)
{
// also true
}
You should never compare booleans against anything in any of the C-based languages. The right way to do it is to use either:
if (b)
or:
if (!b)
This makes your code much more readable (especially if you're using intelligently named variables and functions like isPrime(n) or childThreadHasFinished) and safe. The reason something like:
if (b == TRUE)
is not so safe is that there are actually a large number of values of b which will evaluate to true, and TRUE is only one of them.
Consider the following:
#define FALSE 0
#define TRUE 1
int flag = 7;
if (flag) printf ("number 1\n");
if (flag == TRUE) printf ("number 2\n");
You would expect both lines to be printed if it were working as expected, but you only get the first. That's because 7 is actually true if treated correctly (0 is false, everything else is true), but the explicit test for equality evaluates to false.
Update:
In response to your comment that you thought there'd be more to it than coder stupidity: yes, there is (but I still wouldn't discount coder stupidity as a good enough reason - defensive programming is always a good idea).
I also mentioned readability, which is rather high on my list of desirable features in code.
A condition should either be a comparison between objects or a flag (including boolean return values):
if (a == b) ...
if (c > d) ...
if (strcmp (e, "Urk") == 0) ...
if (isFinished) ...
if (userPressedEsc (ch)) ...
If you use (what I consider) an abomination like:
if (isFinished == TRUE) ...
where do you stop:
if (isFinished == TRUE) ...
if ((isFinished == TRUE) == TRUE) ...
if (((isFinished == TRUE) == TRUE) == TRUE) ...
and so on.
The right way to do it for readability is to just use appropriately named flag variables.
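For example, the strcmp comparison from the list above reads just as clearly once it is assigned to a well-named flag (a trivial sketch, reusing the variable e from that list):
#include <stdbool.h>
#include <string.h>

bool namesMatch = (strcmp(e, "Urk") == 0);  // the comparison yields exactly true or false
if (namesMatch) { /* ... */ }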
All this is true, but there are valid counter arguments that might be considered:
— Maybe we want to check a BOOL is actually YES or NO. Really, storing any other value than 0 or 1 in a BOOL is pretty incorrect. If it happens, isn't it more likely because of a bug somewhere else in the codebase, and isn't not explicitly checking against YES just masking this bug? I think this is way more likely than a sloppy programmer using BOOL in a non-standard way. So, I think I'd want my tests to fail if my BOOL isn't YES when I'm looking for truth.
— I don't necessarily agree that "if (isWhatever)" is more readable especially when evaluating long, but otherwise readable, function calls,
e.g. compare
if ([myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]]) {}
with:
if (![myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]]) {}
The first is comparing against true, the second against false and it's hard to tell the difference when quickly reading code, right?
Compare this to:
if ([myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]] == YES) {}
and
if ([myObj doThisBigThingWithName:@"Name" andDate:[NSDate now]] == NO) {}
…and isn't it much more readable?
Again, I'm not saying one way is correct and the other's wrong, but there are some counterpoints.
When code uses a BOOL variable, it is supposed to treat that variable as a boolean. The compiler doesn't check whether a BOOL variable is assigned some other value, in the same way it doesn't check whether a variable passed to a method is initialized with a value from an expected set of constants.
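A related defensive idiom (a standard C technique, not something the answers above spell out) is to normalize a value to 0 or 1 before it is ever stored in a BOOL, so that a later comparison against YES cannot be surprised:
unsigned int flags = 0x40;

BOOL hasFlag = (flags & 0x40) != 0;   // the comparison yields exactly 0 or 1
BOOL hasFlag2 = !!(flags & 0x40);     // double negation also normalizes to 0 or 1
// both are now safe to compare against YES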