Objective-C DEC to HEX

Hi, I have to convert an int value into a hex string.
From 4 to 0x04
From -4 to 0xFC
I use this code: [NSString stringWithFormat:@"0x%02X", x] where x is an int.
With 4 I obtain 0x04, but with -4 I obtain 0xFFFFFFFC.
Where am I going wrong?

-4 of type int is indeed 0xFFFFFFFC when int is 32 bits wide. %02X will pad single-digit numbers with a zero, but it will not truncate a longer number to two digits.
If you are interested in printing only the last eight bits, you need to mask the number yourself:
[NSString stringWithFormat:@"0x%02X", (x & 0xFF)];
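Put together, a quick self-contained test (the surrounding main() and the sample values are only for illustration) shows what the masked format produces for both of your inputs:
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        int values[] = { 4, -4 };
        for (int i = 0; i < 2; i++) {
            int x = values[i];
            // Mask down to the low byte before formatting, so negative
            // values print as two hex digits instead of eight.
            NSString *hex = [NSString stringWithFormat:@"0x%02X", (x & 0xFF)];
            NSLog(@"%d -> %@", x, hex); // 4 -> 0x04, -4 -> 0xFC
        }
    }
    return 0;
}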

Related

How to set precision in Objective-C

In C++, there is a std::setprecision manipulator that can set float/double precision.
How can I set precision in Objective-C? Also, look at the output below:
(lldb) p 10/200
(int) $0 = 0
(lldb) p (float)10/200
(float) $1 = 0.0500000007
The result of the last command is 0.0500000007; why is there a '7' at the end? How can I get the result to be 0.05?
float and double are binary floating-point numbers. 0.05 cannot be represented exactly by a binary floating-point number, so the result can never be exactly 0.05.
In addition, you are quite pointlessly using float instead of double. float has only six or seven digits of precision. Unless you have a very good reason that you can explain, use double, which gives you about 15 digits of precision. You still won't be able to get 0.05 exactly, but the error will be much smaller.
You may use NSNumberFormatter to format numbers in a wide variety of ways, too numerous to list here; see the documentation available from Xcode. Also see the Data Formatting Guide.
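As a rough sketch (the exact formatter configuration is up to you), limiting the displayed fraction digits with NSNumberFormatter might look like this:
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.minimumFractionDigits = 2;
formatter.maximumFractionDigits = 2; // round the displayed value to two places
double value = 10.0 / 200.0;         // stored as the nearest double to 0.05
NSString *display = [formatter stringFromNumber:@(value)];
NSLog(@"%@", display);               // prints 0.05; the stored value is still inexact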
You need the right format specifier, built from the % character and the conversion character f, to get the desired result.
NSString* formattedNumber = [NSString stringWithFormat:@"%.02f", myFloat];
%.02f tells the formatter that you are formatting a float (%f) that should be rounded to two places and padded with zeroes.
Example:
%f = 25.000000 // results 25.000000
%.f = 25 // results 25
%.02f = 25.00 // results 25.00
Please use:
double A = 0.0500000007;
NSString *b = [NSString stringWithFormat:@"%.02f", A];
double B = [b doubleValue];

Bitwise operators: How do I clear the most significant bit?

I'm working on a problem where I need to convert an integer into a special text encoding. The requirements state that I pack the int into bytes and then clear the most significant bit. I am using bitwise operators, but I am unsure how to clear the most significant bit. Here is the problem and the method I'm working with so far:
PROBLEM:
For this task, you need to write a small program including a pair of functions that can convert an integer into a special text encoding.
The Encoding Function
This function needs to accept a signed integer in the 14-bit range [-8192..+8191] and return a 4-character string.
The encoding process is as follows:
1. Add 8192 to the raw value, so its range is translated to [0..16383]
2. Pack that value into two bytes such that the most significant bit of each is cleared
Unencoded intermediate value (as a 16-bit integer):
00HHHHHH HLLLLLLL
Encoded value:
0HHHHHHH 0LLLLLLL
3. Format the two bytes as a single 4-character hexadecimal string and return it.
Sample values:
Unencoded (decimal) | Intermediate (decimal) | Intermediate (hex) | Encoded (hex)
0 | 8192 | 2000 | 4000
-8192 | 0 | 0000 | 0000
8191 | 16383 | 3fff | 7F7F
2048 | 10240 | 2800 | 5000
-4096 | 4096 | 1000 | 2000
My function
-(NSString *)encodeValue{
    // get the input value
    int decValue = [_inputValue.text intValue];
    char* bytes = (char*)&decValue;
    NSNumber *number = @(decValue+8192); //Add 8192 so that the number can't be negative, because we're about to lose the sign.
    u_int16_t shortNumber = [number unsignedShortValue]; //Convert the integer to an unsigned short (2 bytes) using NSNumber.
    shortNumber = shortNumber << 1; // !!!! This is what I'm doing to clear the MSB !!!!!!!
    NSLog(@"%hu", shortNumber);
    NSString *returnString = [NSString stringWithFormat:@"%x", shortNumber]; //Convert the 2 byte number to a hex string using format specifiers
    return returnString;
}
I'm using the shift bitwise operator to clear the MSB and I get the correct answer for a couple of the values, but not every time.
If I am understanding you correctly then I believe you are after something like this:
u_int16_t number;
number = 0xFFFF;
number &= ~(1 << ((sizeof(number) * 8) - 1));
NSLog(@"%x", number); // Output will be 7fff
How it works:
sizeof(number) * 8 gives you the number of bits in the input number (eg. 16 for a u_int16_t)
1 << (number of bits in number - 1) gives you a mask with only the MSB set (eg. 0x8000)
~(mask) gives you the bitwise NOT of the mask (eg. 0x7fff)
ANDing the mask with your number then clears only the MSB leaving all others as they were
You are misunderstanding your task.
You are not supposed to clear the most significant bit anywhere. You have 14 bits. You are supposed to separate these 14 bits into two groups of seven bits. And since a byte has 8 bits, storing 7 bits into a byte will leave the most significant bit cleared.
PS. Why on earth are you using an NSNumber? If this is homework, I would fail you for the use of NSNumber alone, no matter what the rest of the code does.
PS. What is this char* bytes supposed to be good for?
PS. You are not clearing any most significant bit anywhere. You have an unsigned short containing 14 significant bits, so the two most significant bits are cleared. You shift the number to the left, so the most significant bit, which was always cleared, remains cleared, but the second most significant bit isn't. And all this has nothing to do with your task.
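To make the intended approach concrete, here is a minimal sketch (the function name and the hex case are my own choices, not part of the assignment) of splitting the translated value into two 7-bit groups:
// Sketch: translate the value, split it into two 7-bit groups, format as hex.
NSString *EncodeValue(int decValue) {
    u_int16_t translated = (u_int16_t)(decValue + 8192); // now in [0..16383], 14 significant bits
    u_int8_t high = (translated >> 7) & 0x7F;            // upper 7 bits -> 0HHHHHHH
    u_int8_t low  = translated & 0x7F;                    // lower 7 bits -> 0LLLLLLL
    // Each byte has its most significant bit cleared by construction.
    return [NSString stringWithFormat:@"%02X%02X", high, low];
}
// EncodeValue(0)     -> @"4000"
// EncodeValue(-8192) -> @"0000"
// EncodeValue(8191)  -> @"7F7F"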

Imprecision on floating point decimals?

If the size of a float is 4 bytes, shouldn't it be able to hold values from -8,388,608 to 8,388,607, or somewhere around there (I may have calculated it wrong)?
Why does f display the extra 15 at the end when the value of f (0.1) is still between -8,388,608 and 8,388,607?
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[])
{
    @autoreleasepool {
        float f = .1;
        printf("%lu", sizeof(float));
        printf("%.10f", f);
    }
    return 0;
}
2012-08-28 20:53:38.537 prog[841:403] 4
2012-08-28 20:53:38.539 prog[841:403] 0.1000000015
The values -8,388,608 ... 8,388,607 lead me to believe that you think floats use two's complement, which they don't. In any case, the range you have indicates 24 bits, not the 32 that you'd get from four bytes.
Floats in C use IEEE754 representation, which basically has three parts:
the sign.
the exponent (sort of a scale).
the fraction (actual digits of the number).
You basically get a certain amount of precision (such as 7 decimal digits) and the exponent dictates whether you use those for a number like 0.000000001234567 or 123456700000.
The reason you get those extra digits at the end of your 0.1 is because that number cannot be represented exactly in IEEE754. See this answer for a treatise explaining why that is the case.
Numbers are only representable exactly if they can be built by adding inverse powers of two (like 1/2, 1/16, 1/65536 and so on) within the number of bits of precision (ie, number of bits in the fraction), subject to scaling.
So, for example, a number like 0.5 is okay since it's 1/2. Similarly 0.8125 is okay since that can be built from 1/2, 1/4 and 1/16.
There is no way (at least within 23 bits of precision) that you can build 0.1 from inverse powers of two, so it gives you the nearest match.
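You can see this directly with a small sketch (the literals come from the examples above):
float exact   = 0.5f;    // 1/2: exactly representable
float mixed   = 0.8125f; // 1/2 + 1/4 + 1/16: exactly representable
float inexact = 0.1f;    // not a finite sum of inverse powers of two: stored as the nearest float
printf("%.10f\n", exact);   // 0.5000000000
printf("%.10f\n", mixed);   // 0.8125000000
printf("%.10f\n", inexact); // 0.1000000015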

Keep leading zeros when converting NSString to Integer

Is there a way to keep the leading zeros when converting a string to an integer? For example, say the string was "01"; is there a way I could store it as an integer value of 01?
- (int) getNextHand{
    int temp = [[numbersArray objectAtIndex:cardsDelt] intValue];
    NSLog(@"Card %i: %i", cardsDelt, temp);
    cardsDelt++;
    return temp;
}
My numbersArray contains 4 numbers with leading zeros; they are:
"00"
"01"
"02"
"03"
If you want to log with leading zeroes, use something like %02d.
Nope, integers are numbers, and 01 will automatically be converted to 1.
01 and 1 are two representations of the same integer value. If the leading digits contain information, then you have a string, not an integer.
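If all you need is the two-digit display, a sketch along these lines (the variable names are illustrative) gets the leading zero back at output time:
int card = [@"01" intValue];  // stored simply as 1; the leading zero is gone
NSString *display = [NSString stringWithFormat:@"%02d", card];
NSLog(@"Card: %@", display);  // prints Card: 01 by padding back to two digits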

How to do a string-related problem

I am making a binary-to-decimal number converter on iPhone. I'm having some problems when I try to take each single digit from a number and do calculations with it. I tried char and characterAtIndex, but they all failed to do the calculation, or I got the syntax completely wrong. Can anyone show me how to do such a cast, or is there an easier approach?
Your problem is getting numbers from strings?
The easiest way to get an integer from a character is to use the ASCII table, like this:
NSString *stringOfNums = @"15";
char c;
int num;
for (int i = 0; i < [stringOfNums length]; ++i) {
    c = [stringOfNums characterAtIndex:i];
    num = c - 48; // '0' is 48 in the ASCII table
    printf("\nchar is %c and num is %d", c, num);
}
The advantage of this method is that you can validate, on a char-by-char basis, that each character falls in the range 48 through 57, the ASCII digits.
Or you could do the conversion in one step using NSNumberFormatter, as described here: How to convert an NSString into an NSNumber
As for the binary-decimal conversion, does your formula work on paper? Get that right first.
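Building on the characterAtIndex: approach above, a rough sketch of the binary-to-decimal conversion itself (with per-character validation) could look like this:
NSString *binary = @"1011";
int result = 0;
for (NSUInteger i = 0; i < [binary length]; i++) {
    unichar c = [binary characterAtIndex:i];
    if (c != '0' && c != '1') {
        NSLog(@"Invalid character at index %lu", (unsigned long)i);
        result = 0;
        break;
    }
    result = result * 2 + (c - '0'); // shift the previous digits left one place and add the new bit
}
NSLog(@"%@ in binary is %d in decimal", binary, result); // 1011 -> 11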