How to get a warning when comparing two enum values - Objective-C

In this small DEMO in Objective-C:
first enum:
typedef NS_ENUM(NSUInteger, Day) {
DaySunday,
DayMonday,
DayTuesday
};
second enum:
typedef NS_ENUM(NSUInteger, Month) {
MonthJanuary,
MonthFebruary,
MonthMarch,
MonthApril
};
when comparing:
Day sunday = DaySunday;
Month january = MonthJanuary;
if (sunday == january) {
    NSLog(@"case1 with warning");
}
if (DaySunday == january) {
    NSLog(@"case2 without warning");
}
and Xcode snapshot:
So how could I get a warning in case 2?

Enumeration types in (Objective-)C are very weak types. By the C Standard every enumeration constant (your DaySunday, MonthJanuary, etc.) has an integer type, not the type of the enumeration. Furthermore, a value of enumeration type is implicitly converted to an integer type when needed.
Clang gives you a warning when both operands are of enumeration type, and it is only a warning because, by the C Standard, a comparison between integer values is correct.
In your DaySunday == january the left operand has integer type and the right operand is implicitly converted to an integer type, so again this is perfectly legal and correct Standard C. Clang could choose to issue a warning here; why it does not is probably down to a design decision, or a consequence of the design, of Clang's internals.
Be thankful that Clang often gives warnings where Standard C does not require them; however, you cannot rely on it showing you all the traps in C.
To address your issue you can cast the constant to the enum type if you wish, (Day)DaySunday == january, but you might reasonably decide this makes C look even worse ;-)
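A minimal sketch pulling this together (reusing the Day and Month enums from the question; the warning comment is a paraphrase, not Clang's exact wording): as soon as both operands have enumeration type, whether through an enum-typed variable or through the cast above, Clang emits the same comparison warning as in case 1.
Day sunday = DaySunday;            // the variable gives this operand enum type Day
Month january = MonthJanuary;
if (sunday == january) {           // warning: comparison of different enumeration types
    NSLog(@"case1 with warning");
}
if ((Day)DaySunday == january) {   // casting the constant has the same effect: case 2 now warns too
    NSLog(@"case2 with warning after the cast");
}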

I'm not sure why this behavior is happening, but it's strange and cool. To get the warning, you have to cast DaySunday as type Day explicitly.
if ((Day)DaySunday == january) {
    NSLog(@"case2 without warning");
}
Explicitly casting january to Month won't trigger the warning, so it looks like the compiler is correctly treating january as a Month (because you declared it that way), but is implicitly converting DaySunday to make the comparison work.
To be fair, the warning in the first case is arguably not ideal behavior, because both Day and Month are backed by NSUInteger and are therefore comparable. As you can observe when you run this code, both comparisons are true, meaning the warning isn't actually telling you much.

You have to cast the enum values to int to remove the warning:
if ((int)sunday == (int)january) {
    NSLog(@"case1 with warning");
}

Related

Why is 0xF00 interpreted as NO when the decimal value is not 0

I discovered that
BOOL x = (BOOL)0xF00;
is NO... the value 0xF00 is non-zero, yet the result is still NO
but this blows my paradigm, that BOOL is supposed to work as
NO == 0, YES == any other value
Why is it like that? Does it mean that checking
if (object) {}
is not safe?
BOOL is defined as a signed char, which is 8 bits. But 0xF00 requires more than 8 bits. So the compiler is taking the lowest 8 bits, which have a value of 0. When I try it, the compiler specifically warns about this problem:
warning: implicit conversion from 'int' to 'BOOL' (aka 'signed char') changes value from 3840 to 0 [-Wconstant-conversion]
If you're going to assign arbitrary values to BOOL variables, then your paradigm needs to account for how those values are actually represented.
Casting to a one-byte value truncates all higher bits... you can do this though:
BOOL boolyValue = !!0xffff00;
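A short sketch of both behaviours, assuming the classic definition of BOOL as a signed char (one byte):
int raw = 0xF00;                          // every set bit lies above the low byte
BOOL truncated = (BOOL)raw;               // the low 8 bits are 0, so this is NO
BOOL normalized = !!raw;                  // !! collapses any non-zero value to 1 (YES)
NSLog(@"%d %d", truncated, normalized);   // logs 0 1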

What does 'Implicit conversion loses integer precision: 'time_t'' mean in Objective C and how do I fix it?

I'm doing an exercise from a textbook and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've got the exact text, and it's returning
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999, this is a six-digit counter.
A computer represents numbers in binary, which has two digits 0 and 1. A Binary digIT is called a BIT. So thinking about the counter example above, a 32-bit number has 32 binary digits, a 64-bit one 64 binary digits.
Now if you have a 64-bit number and chop off the top 32-bits you may change its value - if the value was just 1 then it will still be 1, but if it takes more than 32 bits then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
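A small illustration of that truncation in code, assuming a 64-bit long as in the warning above (the specific value is just an example that needs more than 32 bits):
long big = 0x100000001L;                  // 4294967297, needs 33 bits
unsigned int small = (unsigned int)big;   // only the low 32 bits survive
NSLog(@"%ld %u", big, small);             // logs 4294967297 and 1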
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
Is saying you are doing just this, truncating a large number - long is a 64-bit signed integer type on your computer (not on every computer) - to a smaller one - unsigned int is a 32-bit unsigned (no negative values) integer type on your computer.
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a random number used to make sure each run of your program gets different random numbers. It is using the time as the seed, truncating it won't make any difference - it will still be a random value. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember, if the value of an expression is important such casts can produce mathematically incorrect results unless the value is known to be in range of the target type.
Now go read some more!
HTH
It's just a warning: you are assigning a 'long' to an 'unsigned int'.
The solution is simple. Just click the yellow warning icon in the left gutter of the line where you make the assignment; Xcode will show a fix-it. Double-click the fix-it and it will apply the change automatically.
It will insert a typecast so the types match. But next time try to keep in mind that the types you are assigning should be the same. Hope this helps.

Obj-C: Is it really safe to compare BOOL variables?

I used to think that in 64-bit Obj-C runtime BOOL is actually _Bool and it's a real type so it's safe to write like this:
BOOL a = YES;
BOOL b = NO;
if (a != b) {...}
It's been working seemingly fine but today I found a problem when I use bit field structs like this:
typedef struct
{
BOOL flag1 : 1;
} FlagsType;
FlagsType f;
f.flag1 = YES;
BOOL b = YES;
if (f.flag1 != b)
{
// DOES GET HERE!!!
}
It seems that BOOL returned from the bit field is equal to -1 while the regular BOOL is 1, and they are not equal!!!
Note that I am aware of the situation when an arbitrary integer number is cast to BOOL and therefore becomes a "strange" BOOL which is not safe to compare.
However in this situation, both flag1 field and b were declared as BOOL and never cast. What is the problem? Is this a compiler bug?
The bigger question is if it's really safe to compare BOOLs at all or should I write a XORing helper function? (It would be such a chore, because boolean comparisons are so ubiquitous...)
I won't repeat the advice that using a C boolean type solves the problems one can have with BOOL. That is true – in particular here, as you can read below – but most such problems result from storing a wrong value into a (C) boolean object. In this case, however, _Bool or unsigned (int) seems to be the only possible solution (apart from solutions involving extra code). There is a reason for that:
I cannot find precise documentation of the new behavior of BOOL in Objective-C, but the behavior you found is somewhere between bad and buggy. I expected the current behavior to be analogous to _Bool. That is not true in your case. (Thanks for finding that out!) Maybe this is for backwards compatibility. To tell the full story:
In C an object of type int is a signed int. (This differs from char, for which the signedness is implementation-defined.)
— int, signed, or signed int
ISO/IEC 9899:TC3, 6.7.2-2
Each of the comma-separated sets designates the same type, […]
ISO/IEC 9899:TC3, 6.7.2-5
But there is a weird exception for historical reasons:
If the int object is a bit-field, it is implementation-defined whether it is a signed int or an unsigned int. (Likely this is because some CPUs in the past could not automatically extend the sign of a partial-byte integer, so an unsigned integer was easier: zeroing the top bits is enough.)
On Clang the default is signed int. So, just as with full-width integers, int always denotes a signed integer, even if it has only one bit. An int member : 1 can only store 0 and -1! (Therefore using int instead is not a solution.)
Each of the comma-separated sets designates the same type, except that for bit-fields, it is implementation-defined whether the specifier int designates the same type as signed int or the same type as unsigned int.
ISO/IEC 9899:TC3, 6.7.2-5
The C standard says that a boolean bit-field is an integer type and therefore takes part on the weird integer signedness rule for bit-fields:
A bit-field is interpreted as a signed or unsigned integer type consisting of the specified number of bits.
ISO/IEC 9899:TC3, 6.7.2.1-9
This is the behavior you found. Because this is meaningless for 1-bit boolean types, the C standard explicitly requires that storing 1 into a boolean bit-field compares equal to 1 in every case:
If the value 0 or 1 is stored into a nonzero-width bit-field of type _Bool, the value of the bit-field shall compare equal to the value stored.
ISO/IEC 9899:TC3, 6.7.2.1-9
This leads to the strange situation that an implementation can implement booleans of width 1 as { 0, -1 }, but has to fulfill 1 == -1. Great.
So, the short story: BOOL behaves like an integer bit-field (conforming to the standard), but does not take part in the extra requirement for _Bool.
I think this is because of legacy code. (One could have expected -1 in the past.)
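A minimal sketch of the workaround mentioned at the top (assuming <stdbool.h> is available; the struct and variable names are invented for illustration): declaring the bit-field with the C boolean type brings in the 6.7.2.1-9 guarantee, so the stored YES compares equal to 1.
#include <stdbool.h>
typedef struct
{
    bool flag1 : 1;   // C boolean bit-field: a stored 1 must compare equal to 1
} SafeFlagsType;
SafeFlagsType f;
f.flag1 = YES;
BOOL b = YES;
if (f.flag1 != b)
{
    // not reached: the bool bit-field keeps the comparison true
}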

Integer Precision and Conversion Errors

I have been programming Objective-C for only a few weeks. My experience in programming languages such as basic, visual basic, C++ and PHP is much more extensive starting back in 1987 and continuing forward to today. Although, for the last 5 years, I have exclusively coded PHP.
Today, I find myself confused by what I perceive to be bit conversion errors within the Objective-C language. I first noticed this the other day when trying to divide an integer (84) converted to a float by a float (10.0). This produced 8.399999, instead of the 8.400 I was hoping for. I coded a way around the issue and moved on.
Today, I am extracting an (int) 0 from an NSMutableDictionary. I store it first in an NSInteger and second in an int variable. The values should be 0 for both cases, but for both cases, I get the integer value 151229568. (See screenshot)
I remember from my early programming years that we had to worry about the size of the container, because pointing to block of memory with a 32-bit pointer to access a 4-bit value resulted in capturing all the data associated with other values and thus resulted in what appeared to be the wrong number being captured. With implicit memory management and type-conversions becoming the norm, I have not had to worry about this kind of issue for years, and now that I am confronted with it again, I need advice and clarification from programmers who are more familiar with this topic in todays programming environments.
Questions:
Is this a case of incorrect pointer sizing or something else?
What is happening on the back-end to produce this conversion from 0 to another number?
What can I do to get better precision and accuracy from my Objective-C calculations and variable assignments?
Code:
NSInteger hsibs = [keyData objectForKey:@"half_sibs"];
int hsibsi = [keyData objectForKey:@"half_sibs"];
//breakpoint and screen capture of variables in stack
I don't know Objective-C all that well, but it looks like the method you use to obtain your data returns a value of type id (see this reference), not an int.
Looks like you either need to cast it or retrieve the integer value like this:
NSInteger hsibs = [[keyData objectForKey:@"half_sibs"] integerValue];
int hsibsi = [[keyData objectForKey:@"half_sibs"] intValue];
and then see if you get the expected results.
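To see why the broken version printed a large number like 151229568 (a sketch; it assumes the dictionary stores the count boxed in an NSNumber, which is the usual case): the original assignment stored the object's pointer value in the integer variable, whereas -integerValue / -intValue unbox the stored number.
NSNumber *boxed = [keyData objectForKey:@"half_sibs"];   // objectForKey: returns id (here an NSNumber)
NSInteger wrong = (NSInteger)boxed;          // the pointer value, e.g. something like 151229568
NSInteger right = [boxed integerValue];      // the stored value, 0
NSLog(@"%ld %ld", (long)wrong, (long)right);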

Objective-C - Is !!BOOL Beneficial

I'm looking over the diffs submitted to a project by another developer, and they have a lot of code that does !!<some BOOL value>. In fact, this seems to be their standard pattern for implementing boolean getters and setters. They've implemented their code like:
- (BOOL) hasId {
return !!hasId_;
}
- (void) setHasId:(BOOL) value {
hasId_ = !!value;
}
I've never seen this pattern before, and am wondering if there is any benefit in using it. Is the double-negation doing anything useful?
The double boolean operator just makes sure that the value returned is either a 1 or a 0. That's all : )
! is the logical negation operator. So if setHasId: was passed, e.g., 0x2, then the double negation would store 0x1.
It is equivalent to:
hasId_ = value ? 1 : 0;
It is useful in some cases because if you do this:
BOOL x = y & MY_FLAG;
You might get 0 even if MY_FLAG is set in y, because the result is truncated to the size of a BOOL (8 bits). This is unexpected. For the same reason, people sometimes prefer that a BOOL is always either 0 or 1 (so bit operations work as expected). It is usually unnecessary.
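For instance (a hypothetical flag value, chosen so that its set bit falls outside the low byte, and again assuming BOOL is the 8-bit signed char discussed above):
#define MY_FLAG 0x0100           // bit 8, outside the low byte
NSUInteger y = MY_FLAG;          // the flag is set
BOOL x = y & MY_FLAG;            // 0x0100 truncated to 8 bits -> 0 (NO)
BOOL ok = !!(y & MY_FLAG);       // double negation -> 1 (YES)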
In languages with a built-in bool type such as C (as of C99) and C++, converting an integer to bool does this automatically.
It makes more sense in some other cases, for example where you are returning a BOOL but don't want to put an if statement in.
- (BOOL)isMyVarSet
{
return !!myVar;
}
In this case I can't just return myVar because it's not a BOOL (this is a very contrived example - I can't dig out a decent one from my projects).
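A slightly more concrete version of that contrived example (the class, ivar, and getter here are invented for illustration): if the backing variable is an object pointer rather than a BOOL, !! pins the returned value to exactly 1 or 0.
@interface Widget : NSObject
- (BOOL)isMyVarSet;
@end
@implementation Widget
{
    NSString *myVar;   // hypothetical ivar: not a BOOL
}
- (BOOL)isMyVarSet
{
    return !!myVar;    // 1 if the pointer is non-nil, 0 otherwise
}
@end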
I've used this before and I believe:
if (!!myVar)
is equivalent to:
if (myVar != nil)
Basically, I use it to verify the value of SOMETHING.
I will admit... this is probably not the best practice or most-understood way to accomplish this goal.