I used to think that in the 64-bit Obj-C runtime BOOL is actually _Bool, i.e. a real boolean type, so it's safe to write code like this:
BOOL a = YES;
BOOL b = NO;
if (a != b) {...}
It's been working seemingly fine but today I found a problem when I use bit field structs like this:
typedef struct
{
    BOOL flag1 : 1;
} FlagsType;
FlagsType f;
f.flag1 = YES;
BOOL b = YES;
if (f.flag1 != b)
{
    // DOES GET HERE!!!
}
It seems that BOOL returned from the bit field is equal to -1 while the regular BOOL is 1, and they are not equal!!!
Note that I am aware of the situation when an arbitrary integer number is cast to BOOL and therefore becomes a "strange" BOOL which is not safe to compare.
However in this situation, both flag1 field and b were declared as BOOL and never cast. What is the problem? Is this a compiler bug?
The bigger question is if it's really safe to compare BOOLs at all or should I write a XORing helper function? (It would be such a chore, because boolean comparisons are so ubiquitous...)
I won't repeat the usual advice that using a C boolean type solves the problems one can have with BOOL. That advice is true – particularly here, as you can read below – but most of those problems stem from storing a wrong value into a boolean (C) object. In this case, however, _Bool or unsigned (int) seem to be the only possible solutions (apart from solutions involving extra code). There is a reason for that:
I cannot find precise documentation of the new behavior of BOOL in Objective-C, but the behavior you found is somewhere between bad and buggy. I expected the current behavior to be analogous to _Bool, which is not true in your case. (Thanks for finding that out!) Maybe this is for backwards compatibility. To tell the full story:
In C an object of type int is a signed int. (This differs from char, whose signedness is implementation-defined.)
— int, signed, or signed int
ISO/IEC 9899:TC3, 6.7.2-2
Each of the comma-separated sets designates the same type, […]
ISO/IEC 9899:TC3, 6.7.2-5
But there is a weird exception for historical reasons:
If the int object is a bit-field, it is implementation-defined whether it is a signed int or an unsigned int. (Likely this is because some CPUs in the past could not automatically sign-extend a partial-byte integer, so an unsigned integer was easier: zeroing the top bits is enough.)
On clang the default is signed int. So, as with full-width integers, int denotes a signed integer even when it has only one bit. An int member : 1 can only store 0 and -1! (Therefore using int instead is no solution.)
Each of the comma-separated sets designates the same type, except that for bit-fields, it is implementation-defined whether the specifier int designates the same type as signed int or the same type as unsigned int.
ISO/IEC 9899:TC3, 6.7.2-5
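To make the effect concrete, here is a minimal C sketch (the output assumes clang's choice of a signed bit-field; a compiler that picks unsigned would print 1 instead):
#include <stdio.h>

struct S {
    int flag : 1;   // one-bit bit-field declared as plain int
};

int main(void) {
    struct S s;
    s.flag = 1;                  // only the single bit is stored
    printf("%d\n", s.flag);      // likely -1 on clang: the field is signed
    printf("%d\n", s.flag == 1); // likely 0: the comparison fails
    return 0;
}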
The C standard says that a boolean bit-field is an integer type and therefore takes part in the weird integer signedness rule for bit-fields:
A bit-field is interpreted as a signed or unsigned integer type consisting of the specified number of bits.
ISO/IEC 9899:TC3, 6.7.2.1-9
This is the behavior you found. Because this is meaningless for one-bit boolean types, the C standard explicitly states that storing 1 into a boolean bit-field has to compare equal to 1 in every case:
If the value 0 or 1 is stored into a nonzero-width bit-field of type _Bool, the value of the bit-field shall compare equal to the value stored.
ISO/IEC 9899:TC3, 6.7.2.1-9
This leads to the strange situation that an implementation can represent booleans of width 1 as { 0, -1 }, but then has to fulfill 1 == -1. Great.
So, the short story: BOOL behaves like an integer bit-field (conforming to the standard), but does not take part in the extra guarantee for _Bool.
I think this is because of legacy code. (One could expect -1 in the past.)
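As for the bigger question: you do not need a XORing helper everywhere, but when a value may come from a bit-field (or any other "strange" source) it is safer to compare truth values rather than raw representations. A minimal sketch, reusing the FlagsType from the question:
FlagsType f;
f.flag1 = YES;
BOOL b = YES;

// Normalize both sides with !! so only truthiness is compared.
if (!!f.flag1 != !!b) {
    // not reached: both operands normalize to 1
}
Alternatively, declaring the bit-field as _Bool (or simply BOOL flag1; without a width) avoids the problem entirely, because of the special _Bool guarantee quoted above.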
In this small DEMO in Objective-C:
first enum:
typedef NS_ENUM(NSUInteger, Day) {
    DaySunday,
    DayMonday,
    DayTuesday
};
second enum:
typedef NS_ENUM(NSUInteger, Month) {
    MonthJanuary,
    MonthFebruary,
    MonthMarch,
    MonthApril
};
when comparing:
Day sunday = DaySunday;
Month january = MonthJanuary;

if (sunday == january) {
    NSLog(@"case1 with warning");
}

if (DaySunday == january) {
    NSLog(@"case2 without warning");
}
(Xcode screenshot omitted; it shows the warning only on the first comparison.)
So how could I get a warning in case 2?
Enumeration types in (Objective-)C are very weak types. By the C Standard every enumeration constant (your DaySunday, MonthJanuary, etc.) has an integer type, not the type of the enumeration. Furthermore, a value of enumeration type is implicitly converted to an integer type when needed.
Clang gives you a warning when both operands are of enumeration type, and it is only a warning because by the C Standard a comparison between integer values is correct.
In your DaySunday == january the left operand has integer type and the right operand is implicitly converted to integer type, so again this is perfectly legal and correct Standard C. Clang could choose to issue a warning here too; why it does not is probably down to a design decision, or a consequence of the design, of Clang's internals.
Be thankful Clang often gives warnings where Standard C does not require them; however, you cannot rely on it to show every trap in C.
To address your issue you can cast the constant to the enum type if you wish, (Day)DaySunday == january, but you might reasonably decide this makes C look even worse ;-)
I'm not sure why this behavior is happening, but it's strange and cool. To get the warning, you have to cast DaySunday as type Day explicitly.
if ((Day)DaySunday == january) {
    NSLog(@"case2 without warning");
}
Explicitly casting january as Month won't trigger the warning, so it looks like the static analyzer is correctly treating january as a Month type (because you declared it that way), but is implicitly converting DaySunday to make the comparison work.
To be fair, the warning in the first case is actually not the ideal behavior, because both Day and Month are NSUIntegers and therefore are comparable. As you observe when you run this code, both comparisons are true, meaning the warning isn't actually meaningful.
You can cast the enum values to int to remove the warning:
if ((int)sunday == (int)january) {
    NSLog(@"case1 with warning");
}
I discovered that
BOOL x = (BOOL)0xF00;
is NO... the value 0xF00 is non-zero, yet the result is still NO
but this blows my paradigm, that BOOL is supposed to work as
NO == 0, YES == any other value
Why is it like that? Does it mean that checking
if (object) {}
is not safe?
BOOL is defined as a signed char, which is 8 bits. But 0xF00 requires more than 8 bits. So the compiler is taking the lowest 8 bits, which have a value of 0. When I try it, the compiler specifically warns about this problem:
warning: implicit conversion from 'int' to 'BOOL' (aka 'signed char') changes value from 3840 to 0 [-Wconstant-conversion]
If you're going to assign arbitrary values to BOOL variables, then your paradigm above needs to account for how those values are represented.
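A small sketch of the usual workarounds, assuming BOOL is a signed char as described above (the variable names are just for illustration):
int flags = 0xF00;

BOOL truncated   = (BOOL)flags;   // low byte of 0xF00 is 0x00, so this is NO
BOOL viaCompare  = (flags != 0);  // YES: a comparison always yields 0 or 1
BOOL viaNegation = !!flags;       // YES: double negation also yields 0 or 1

// Testing an object pointer directly is still safe: the if statement
// checks the full pointer against nil, nothing is truncated to 8 bits.
id object = @"something";
if (object) {
    NSLog(@"non-nil");
}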
Casting to a one-byte value truncates all higher bits... you can do this though:
BOOL boolyValue = !!0xffff00;
I want to invert the value of a BOOL every time I detect a tap. The default value of the BOOL is NO and the first time I tap it inverts it to YES. On subsequent taps the value stays as YES.
@property(nonatomic, assign) BOOL isDayOrNight; //property in timeDayChart object.
self.timeDayChart.isDayOrNight = ~self.timeDayChart.isDayOrNight; //This is done in a VC.
I had to change it to this:
self.timeDayChart.isDayOrNight = !self.timeDayChart.isDayOrNight;
to achieve my desired results. I would like to know why ~ did not work as expected.
BOOL is defined as a signed char in objc.h:
typedef signed char BOOL;
and YES and NO are defined like so:
#define YES (BOOL)1
#define NO (BOOL)0
So ~YES is -2, which is not the same as NO.
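A quick sketch of the difference (the printed values assume BOOL is a signed char, as defined above):
BOOL flag = YES;          // 1

BOOL bitwise = ~flag;     // ~1 == -2, which is still "true" inside an if
BOOL logical = !flag;     // !1 == 0, i.e. NO

NSLog(@"%d %d %d", flag, bitwise, logical); // prints: 1 -2 0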
In (Objective-)C(++), contexts that require a Boolean value, such as an if condition or an operand to &&, actually take an integer value and interpret 0 as false and non-zero as true. The logical, relational and equality operators all also return integers, using 0 for false and 1 for true.
Objective-C's BOOL is a synonym for signed char, which is an integer type, while NO and YES are defined as 0 and 1 respectively.
As you correctly state, ~ is the bit-inversion operator. If you invert any integer containing both 0 and 1 bits, the result will also contain both. Any value containing a 1 bit is treated as true, and inverting any such value other than all 1's produces a value with at least one 1 bit, which is also interpreted as true.
If you start with all 0's then repeated inversion goes all 1's, all 0's, all 1's, which is true, false, true, etc. (but not YES, NO, YES, etc.). So if you are starting with 0 then either you are not always using inversion or you are testing explicitly for YES rather than true.
However what you should be using, as you figured out, is ! - logical negation - which maps 0 to 1 and non-0 to 0 and so handles "true" values other than 1 correctly.
HTH
Find a book about the C language and check what it says about the ~ operator and the ! operator. ~ inverts all bits in an integer, and BOOL is defined as an integer type. So NO (all bits zero) is changed to all bits set, which is not the same as YES, and YES (all bits zero except the lowest bit, which is 1) is changed to all bits set except the lowest bit, which is -2, not NO.
You are better off using this idiom to toggle a BOOL value:
self.timeDayChart.isDay = self.timeDayChart.isDay ? NO : YES;
(I deliberately changed the naming of your property)
How can I force Realloc to behave like calloc?
For instance:
I have the following structs:
typedef struct bucket0{
    int hashID;
    Registry registry;
}Bucket;

typedef struct table0{
    int tSize;
    int tElements;
    Bucket** content;
}Table;
and I have the following code in order to grow the table:
int grow(Table* table){
    Bucket** tempPtr;
    //grow will add 1 to the number of available buckets, and double it.
    table->tSize++; //add 1
    table->tSize *= 2; //double it
    if(!table->content){
        //table will be generated for the first time
        table->content = (Bucket**)(calloc(sizeof(Bucket*), table->tSize));
    } else {
        //realloc content
        tempPtr = (Bucket**)realloc(table->content, sizeof(Bucket)*table->tSize);
        if(tempPtr){
            table->content = tempPtr;
            return 0;
        }else{
            return 1000; //table could not grow
        }
    }
}
When I execute it, the table grows properly, and MOST of the "Buckets" in it are initialized as NULL pointers. However, not all of them are.
How can I make realloc behave like calloc, in the sense that the new "buckets" it creates are initialized to NULL?
Strictly speaking, you shouldn't be relying on calloc (or memset, for that matter) to set pointers to null. C doesn't guarantee that null pointers are represented by all-zero bytes in memory.
Quoting from the comp.lang.C FAQ question 7.31:
Don't rely on calloc's zero fill too much (see below); usually, it's best to initialize data structures yourself, on a field-by-field basis, especially if there are pointer fields.
calloc's zero fill is all-bits-zero, and is therefore guaranteed to yield the value 0 for all integral types (including '\0' for character types). But it does not guarantee useful null pointer values (see section 5 of this list) or floating-point zero values.
It's safer to initialize the individual structure fields yourself. You can create a static const one as a template, with its content initialized to NULL, and then memcpy it to each element of your dynamically-allocated array.
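A minimal sketch of that pattern applied to the question's grow function (names follow the question's code; the old size is remembered so only the newly added slots are cleared, and realloc with a NULL pointer behaves like malloc, so the separate calloc branch is not needed if content starts out as NULL):
#include <stdlib.h>

int grow(Table* table) {
    int oldSize = table->tSize;
    int newSize = (oldSize + 1) * 2;

    Bucket** tempPtr = realloc(table->content, sizeof(Bucket*) * newSize);
    if (!tempPtr) {
        return 1000; // table could not grow; the old content is still valid
    }

    // realloc does not clear the newly added memory, so do it by hand.
    for (int i = oldSize; i < newSize; i++) {
        tempPtr[i] = NULL;
    }

    table->content = tempPtr;
    table->tSize = newSize;
    return 0;
}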
What is the difference between objective-c C primitive numbers? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one is. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say for example I wanted to store a latitude (possibly retrieved from a CLLocation object), which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this, however it doesn't fully satisfy my question.
Well, first off types like int, float, double, long, and short are C primitives, not Objective-C. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte, and (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
Signedness means simply whether or not the type can represent negative values. So a signed integer, or int, can represent a certain range of negative and positive numbers (traditionally -2,147,483,648 to 2,147,483,647), and an unsigned integer, or unsigned int, can represent the same count of values, but all non-negative (0 to 4,294,967,295).
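A small sketch that prints the sizes and ranges on whatever machine it runs on (the exact numbers depend on the platform and compiler):
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("short: %zu, int: %zu, long: %zu, long long: %zu bytes\n",
           sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
    printf("int range:          %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned int range: 0 .. %u\n", UINT_MAX);
    return 0;
}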
Floating-Point Types
float
double
long double
These are used to store decimal values (aka fractions) and are also categorized by size. Again the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof (long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into, but there is an excellent guide here.
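To the latitude example in the question: CLLocationDegrees is a typedef for double, and a float visibly loses precision at that scale. A minimal sketch with a made-up coordinate:
#include <stdio.h>

int main(void) {
    double preciseLat = 37.33233141;   // double keeps ~15-16 significant digits
    float  roughLat   = 37.33233141f;  // float keeps only ~7 significant digits

    printf("double: %.8f\n", preciseLat); // prints 37.33233141
    printf("float:  %.8f\n", roughLat);   // the trailing digits will differ
    return 0;
}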
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CS textbooks also have good resources.
First I would like to point out the difference between an unsigned int and an int. Say that you have a very high number, and that you write a loop iterating with an unsigned int:
for (unsigned int i = 0; i < N; i++)
{ ... }
If N is a number defined with #define, it may be larger than the maximum value an int can store while still fitting in an unsigned int. If the counter overflows it wraps around to zero and you end up in an infinite loop, which is why the counter's type should match the range of N.
The same happens if by mistake you iterate with an int while comparing it to a long. If N is a long you should iterate with a long, but if N is an int you can still safely iterate with a long.
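To illustrate the wrap-around mentioned above: unsigned overflow is well defined and wraps to zero, while signed overflow is undefined behavior. A tiny sketch:
#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int u = UINT_MAX;
    u++;                // well defined: wraps around to 0
    printf("%u\n", u);  // prints 0
    return 0;
}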
Another pitfall that may occur is using the shift operator with an integer constant and assigning the result to a long. Maybe you log sizeof(long), notice that it returns 8, don't care about portability, and so think that you won't lose precision here:
long i= 1 << 34;
But 1 isn't a long, so the shift overflows, and by the time the result is converted to long the precision is already lost. Instead you should write:
long i= 1l << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double there is a thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value= 2^exponent * mantissa
So the higher the exponent, the less precisely the floating-point number can be represented. A number may also be so large that its representation is inaccurate enough that, surprisingly, printing it gives a different number:
float f = 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double it prints 9876543219124568, and if I use a long double with the %.0Lf format it prints the correct value. Always be careful when using floating-point numbers; unexpected things may happen.
For example, two floating-point numbers may have almost the same value: you expect them to be equal, but there is a subtle difference, so the equality comparison fails. This has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?
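For completeness, a minimal sketch of the tolerance-based comparison that link discusses (the epsilon here is an arbitrary example; a good tolerance depends on the magnitude of the values being compared):
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

static bool nearlyEqual(double a, double b, double epsilon) {
    // Scale the tolerance by the larger magnitude so it works for big and small values.
    return fabs(a - b) <= epsilon * fmax(fabs(a), fabs(b));
}

int main(void) {
    printf("%d\n", (0.1 + 0.2) == 0.3);                // 0 on most platforms
    printf("%d\n", nearlyEqual(0.1 + 0.2, 0.3, 1e-9)); // 1
    return 0;
}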