I discovered that
BOOL x = (BOOL)0xF00;
is NO... the value 0xF00 is non-zero, yet the result is still NO.
This breaks my assumption that BOOL is supposed to work as
NO == 0, YES == any other value
Why is it like that? Does it mean that checking
if (object) {}
is not safe?
BOOL is defined as a signed char, which is 8 bits. But 0xF00 requires more than 8 bits. So the compiler is taking the lowest 8 bits, which have a value of 0. When I try it, the compiler specifically warns about this problem:
warning: implicit conversion from 'int' to 'BOOL' (aka 'signed char') changes value from 3840 to 0 [-Wconstant-conversion]
If you're going to assign arbitrary values to BOOL variables, then your paradigm needs to account for how those values are represented.
Casting to a one-byte value truncates all the higher bits... you can do this instead, though:
BOOL boolyValue = !!0xffff00;
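To see the difference side by side (a small sketch; the variable names here are made up):

BOOL truncated  = (BOOL)0xF00;  // only the low byte survives, and the low byte of 0xF00 is 0x00, so this is NO
BOOL normalized = !!0xF00;      // !! turns any non-zero value into 1 first, so this is YES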
Related
I'm doing an exercise from a textbook and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've got the exact text, and it's returning
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999, this is a six-digit counter.
A computer represents numbers in binary, which has two digits 0 and 1. A Binary digIT is called a BIT. So thinking about the counter example above, a 32-bit number has 32 binary digits, a 64-bit one 64 binary digits.
Now if you have a 64-bit number and chop off the top 32-bits you may change its value - if the value was just 1 then it will still be 1, but if it takes more than 32 bits then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
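A minimal sketch of the same effect in C, assuming long is 64 bits and unsigned int is 32 bits (as on your machine):

long big = 0x100000001L;                  // 4294967297, needs 33 bits
unsigned int chopped = (unsigned int)big; // keeps only the low 32 bits
// chopped is now 1, not 4294967297 - the top bits were thrown away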
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
is saying you are doing just this: truncating a large number - long is a 64-bit signed integer type on your computer (not on every computer) - to a smaller one - unsigned int is a 32-bit unsigned (no negative values) integer type on your computer.
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a starting value used to make sure each run of your program gets a different sequence of random numbers. It is using the current time as the seed, and truncating it makes no difference here - it will still vary from run to run. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember: if the value of an expression actually matters, such casts can produce mathematically incorrect results unless the value is known to be in the range of the target type.
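For example (a hypothetical value, not from your code), if the number itself mattered the truncation would silently corrupt it:

long bytesWritten = 5000000000L;                    // bigger than UINT_MAX
unsigned int reported = (unsigned int)bytesWritten; // 705032704 - mathematically wrong, and no warning once the cast is there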
Now go read some more!
HTH
It's just a warning: you are assigning a 'long' to an 'unsigned int'.
The quick fix is simple. Click the yellow warning icon in the gutter of the line where you assign that value; it will show a suggested fix. Double-click the suggestion and Xcode will apply it automatically.
It will insert a cast so the types match. But next time, try to keep in mind that the types on both sides of an assignment should be the same.. hope this helps..
I used to think that in the 64-bit Obj-C runtime BOOL is actually _Bool, i.e. a real boolean type, so it's safe to write code like this:
BOOL a = YES;
BOOL b = NO;
if (a != b) {...}
It's been working seemingly fine but today I found a problem when I use bit field structs like this:
typedef struct
{
    BOOL flag1 : 1;
} FlagsType;

FlagsType f;
f.flag1 = YES;

BOOL b = YES;

if (f.flag1 != b)
{
    // DOES GET HERE!!!
}
It seems that BOOL returned from the bit field is equal to -1 while the regular BOOL is 1, and they are not equal!!!
Note that I am aware of the situation when an arbitrary integer number is cast to BOOL and therefore becomes a "strange" BOOL which is not safe to compare.
However in this situation, both flag1 field and b were declared as BOOL and never cast. What is the problem? Is this a compiler bug?
The bigger question is whether it's really safe to compare BOOLs at all, or should I write a XOR-ing helper function? (That would be such a chore, because boolean comparisons are so ubiquitous...)
I will not repeat the point that using a C boolean type avoids the problems one can have with BOOL. That is true, in particular here, as you can read below, but most of those problems result from storing a wrong value into a boolean (C) object. In this case, however, _Bool or unsigned int seem to be the only possible solutions. (Apart from solutions with extra code.) There is a reason for that:
I cannot find precise documentation of the new behavior of BOOL in Objective-C, but the behavior you found is somewhere between bad and buggy. I expected the current behavior to be analogous to _Bool. That is not true in your case. (Thanks for finding that out!) Maybe this is for backwards compatibility. To tell the full story:
In C an object of type int is a signed int. (This is different from char, whose signedness is implementation-defined.)
— int, signed, or signed int
ISO/IEC 9899:TC3, 6.7.2-2
Each of the comma-separated sets designates the same type, […]
ISO/IEC 9899:TC3, 6.7.2-5
But there is a weird exception for historical reasons:
If the int object is a bit-field, it is implementation-defined whether it is a signed int or an unsigned int. (Likely this is because some CPUs in the past could not automatically sign-extend a partial-byte integer, so an unsigned integer was easier: zeroing the top bits is enough.)
On clang the default is signed int. So, as with full-width integers, int always denotes a signed integer, even if it has only one bit. An int member : 1 can only store 0 and -1! (Therefore using int instead is no solution.)
Each of the comma-separated sets designates the same type, except that for bit-fields, it is implementation-defined whether the specifier int designates the same type as signed int or the same type as unsigned int.
ISO/IEC 9899:TC3, 6.7.2-5
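A minimal sketch of that consequence, assuming clang's default of treating a plain int bit-field as signed:

#include <stdio.h>

struct S { int b : 1; };   // a 1-bit plain int bit-field

int main(void)
{
    struct S s;
    s.b = 1;               // the single bit is set...
    printf("%d\n", s.b);   // ...but it reads back as -1 with clang: the signed 1-bit field sign-extends
    return 0;
}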
The C standard says that a boolean bit-field is an integer type and is therefore subject to the weird integer signedness rule for bit-fields:
A bit-field is interpreted as a signed or unsigned integer type consisting of the specified number of bits.
ISO/IEC 9899:TC3, 6.7.2.1-9
This is the behavior you found. Because this is meaningless for 1-bit boolean types, the C standard explicitly requires that a 1 stored into a boolean bit-field has to compare equal to 1 in every case:
If the value 0 or 1 is stored into a nonzero-width bit-field of type _Bool, the value of the bit-field shall compare equal to the value stored.
ISO/IEC 9899:TC3, 6.7.2.1-9
This leads to the strange situation that an implementation may represent a boolean of width 1 as { 0, -1 }, yet still has to make that stored 1 compare equal to 1. Great.
So, the short story: BOOL behaves like an integer bit-field (conforming to the standard), but is not subject to the extra requirement for _Bool.
I think this is because of legacy code. (In the past one could expect -1.)
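Until the compiler behavior changes, a defensive way to compare is to normalize both sides first; a sketch of the helper you were dreading (the name is made up), using logical negation rather than XOR:

static inline BOOL BoolsEqual(BOOL a, BOOL b)
{
    return !a == !b;   // !x is always 0 or 1, regardless of how x is stored
}

// With the code from the question: BoolsEqual(f.flag1, b) is YES, even though f.flag1 reads back as -1.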
I have been looking for a few hours for a solution to this problem, but I don't get how it works. I have a hex string of a Delphi double value: 0X3FF0000000000000. That value should be 1.0. It is 8 bytes long; the first bit is the sign, the next 11 are the exponent, and the rest is the mantissa. So to me this hex value works out to 0 x 10^(1023). Maybe I am wrong somewhere, but it doesn't matter. The point is, I need to convert this hex value into an Objective-C double. If I do: (double)strtoll(hexString.UTF8String, NULL, 16); I get: 4.607... x 10^18. What am I doing wrong?
It seems that casting in this way ends up calling an implicit type conversion (it calls _ultod3 or _ltod3) that converts the numeric value instead of reinterpreting the bits. In fact, even trying this seems to do the same thing:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *&temp1;
But if you cast the uint64 pointer to a double* it seems to suppress the compiler's desire to perform a conversion. Something like this should work:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *(double*)&temp1;
At least this works with the MS C++ compiler... I imagine the Objective-C compiler would cooperate as well.
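If you would rather not rely on the pointer cast (which asks the compiler to tolerate the aliasing), a memcpy achieves the same bit reinterpretation; a sketch for the Objective-C side, using the hexString from the question and the standard <stdint.h>, <string.h> and <stdlib.h> functions:

uint64_t bits = strtoull(hexString.UTF8String, NULL, 16); // parse the raw 64-bit pattern
double value;
memcpy(&value, &bits, sizeof value);                      // copy the bits, do not convert the number
// value is 1.0 for the input @"0X3FF0000000000000"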
I want to invert the value of a BOOL every time I detect a tap. The default value of the BOOL is NO and the first time I tap it inverts it to YES. On subsequent taps the value stays as YES.
@property (nonatomic, assign) BOOL isDayOrNight; // property in timeDayChart object.
self.timeDayChart.isDayOrNight = ~self.timeDayChart.isDayOrNight; // This is done in a VC.
I had to change it to this:
self.timeDayChart.isDayOrNight = !self.timeDayChart.isDayOrNight;
to achieve my desired results. I would like to know why ~ did not work as expected.
BOOL is defined as a signed char in objc.h:
typedef signed char BOOL;
and YES and NO are defined like so:
#define YES (BOOL)1
#define NO (BOOL)0
So ~YES is -2, which is not the same as NO.
In (Objective-)C(++), constructs that require a Boolean value, such as an if condition or an operand of &&, actually take an integer value and interpret 0 as false and any non-zero value as true. The logical, relational and equality operators likewise all return integers, using 0 for false and 1 for true.
Objective-C's BOOL is a synonym for signed char, which is an integer type, while NO and YES are defined as 0 and 1 respectively.
As you correctly state, ~ is the bit-inversion operator. If you invert an integer containing both 0s and 1s, the result also contains both. Any value containing a 1 bit is treated as true, and inverting any such value other than all 1s produces a value with at least one 1 bit, which is also interpreted as true.
If you start with all 0s then repeated inversion goes all 1s, all 0s, all 1s - which is true, false, true, etc. (but not YES, NO, YES, etc.). So if your value appears stuck at YES, either you are not always using inversion or you are testing explicitly for YES rather than for truth.
However what you should be using, as you figured out, is ! - logical negation - which maps 0 to 1 and non-0 to 0 and so handles "true" values other than 1 correctly.
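To see the two operators side by side (a small sketch; the bit patterns assume BOOL is a signed char as shown above):

BOOL flag = YES;               // bit pattern 00000001
BOOL bitInverted = ~flag;      // 11111110, i.e. -2: still "true" in an if, but not YES
BOOL logicallyNegated = !flag; // 0, i.e. NO
// Toggling with ! therefore alternates YES, NO, YES, ... while ~ never produces NO from YES.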
HTH
Find a book about the C language and check what it says about the ~ operator and the ! operator. ~ inverts all bits in an integer, and BOOL is defined as an integer type. So NO (all bits zero) becomes all bits set, which is not the same as YES; and YES (all bits zero except the lowest bit, which is 1) becomes all bits set except the lowest bit, which is not NO either.
You are better off using this idiom to toggle a BOOL value:
self.timeDayChart.isDay = self.timeDayChart.isDay ? NO : YES;
(I deliberately changed the naming of your property)
I'm using Objective-C in Xcode. How can I convert a uint8_t piece of data into a decimal two's-complement value? The range is -127 to 127, correct?
If I have:
uint8_t test = 0xF2;
Is there a function or method built in that I can use? Does someone have a simple function?
Thanks!
Does this do what you want?
int8_t twosComplement = (int8_t)test;
The question seems a bit confused. It asks to convert to decimal 2's complement, but 2's complement is meaningful only in binary, not in decimal.
If you want to make a uint8_t value into a signed value, you can:
- cast it to some signed type like so: (int16_t)unsigned8variable
- assign it to a variable that has a signed type
However, beware of overflow. Your uint8_t value can be anything from 0 to 255. If you assign to an 8-bit signed type, there are representations for values from -128 to +127, and any original value greater than 127 will suddenly appear to be negative. Choose a type that's big enough to hold any value you might actually see. int16_t would be safe because it goes up to 32767.
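A small worked example of both options (hypothetical variable names):

uint8_t test = 0xF2;             // 242 when read as unsigned
int8_t narrow = (int8_t)test;    // reinterpreted as two's complement: 242 - 256 = -14 on common platforms
int16_t wide = test;             // plenty of room, so the value stays 242
printf("%d %d\n", narrow, wide); // prints "-14 242" (needs <stdio.h> and <stdint.h>)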