uint8_t to two's complement function - objective-c

I'm using Objective-C in Xcode. How can I convert a uint8_t piece of data into a decimal two's complement? The range is -127 to 127, correct?
If I have:
uint8_t test = 0xF2;
Is there a function or method built in that I can use? Does someone have a simple function?
Thanks!

Does this do what you want?
int8_t twosComplement = (int8_t)test;
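For the value from the question, here's a minimal sketch of what that cast does (note that converting an out-of-range value to a signed type is, strictly speaking, implementation-defined in C, though every mainstream platform uses two's complement):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t test = 0xF2;                    // 242 as an unsigned byte
    int8_t twosComplement = (int8_t)test;   // reinterpret as signed: 242 - 256
    printf("%d\n", twosComplement);         // prints -14
    return 0;
}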

The question seems a bit confused. It asks to convert to decimal 2's complement, but 2's complement is meaningful only in binary, not in decimal.
If you want to turn a uint8_t value into a signed value, you can
- cast it to some signed type like so: (int16_t)unsigned8variable
- assign it to a variable that has a signed type
However, beware of overflow. Your uint8_t value can be anything from 0 to 255. If you assign to an 8-bit signed type, there are representations for values from -128 to +127, and any original value greater than 127 will suddenly appear to be negative. Choose a type that's big enough to hold any value you might actually see. int16_t would be safe because it goes up to 32767.
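A minimal sketch contrasting the two options (the variable names are just for illustration):

uint8_t raw = 0xF2;               // 242
int8_t narrow = (int8_t)raw;      // -14: 242 doesn't fit in -128..127, so it wraps
int16_t wide = (int16_t)raw;      // 242: int16_t holds the full 0..255 range safely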

Saving float and double precision in C++ during cin/cout and scanf/printf

I want to read floats and doubles from standard input, preserve their precision (exactly the same digits after the decimal point), and be able to output them (cout/printf) as they are. What is the most convenient (and simplest) way to do this?
Thanks!
float f;
cin >> f;
cout << f;
Use setprecision.
Here is the solution:
cout << setprecision(n) << variablename;
e.g. if you want to set the precision of the output to 5 for the variable var, use it like this:
cout << setprecision(5) << var;
setprecision is a manipulator.
It sets the decimal precision to be used to format floating-point values on output operations. It behaves as if member precision were called with n as argument on the stream on which it is inserted/extracted as a manipulator (it can be inserted/extracted on input streams or output streams). This manipulator is declared in the header <iomanip>.
Since the input has an unknown precision, the simplest method is to read the values as strings, not doubles/floats.
If you need the numeric value, a simple string-to-double conversion will do.
Any other method will probably fail, because the conversion from string to float done by the standard library throws away exactly the information you want to keep:
a float can't distinguish between 0.4 and 0.40.
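A minimal sketch of the read-as-string approach in plain C (the same idea works with std::string and std::stod in C++):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char buf[64];
    if (scanf("%63s", buf) == 1) {        // keep exactly the digits the user typed
        double value = strtod(buf, NULL); // numeric value, if you need to compute
        printf("echoed: %s, parsed: %g\n", buf, value); // "0.40" echoes as 0.40; %g prints 0.4
    }
    return 0;
}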

What does 'Implicit conversion loses integer precision: 'time_t'' mean in Objective C and how do I fix it?

I'm doing an exercise from a textbook and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've got the exact text, and it's returning
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count, you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999; this is a six-digit counter.
A computer represents numbers in binary, which has two digits, 0 and 1. A Binary digIT is called a BIT. So thinking about the counter example above, a 32-bit number has 32 binary digits and a 64-bit one has 64.
Now if you have a 64-bit number and chop off the top 32-bits you may change its value - if the value was just 1 then it will still be 1, but if it takes more than 32 bits then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
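A minimal demonstration of that chopping (the hex constant is an arbitrary example):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint64_t big = 0x1234567890ABCDEF;   // needs more than 32 bits
    uint32_t chopped = (uint32_t)big;    // keeps only the low 32 bits
    printf("%" PRIx64 " -> %" PRIx32 "\n", big, chopped);
    // prints: 1234567890abcdef -> 90abcdef
    return 0;
}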
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
Is saying you are doing just this, truncating a large number - long is a 64-bit signed integer type on your computer (not on every computer) - to a smaller one - unsigned int is a 32-bit unsigned (no negative values) integer type on your computer.
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a random number used to make sure each run of your program gets different random numbers. It is using the time as the seed, truncating it won't make any difference - it will still be a random value. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember, if the value of an expression is important such casts can produce mathematically incorrect results unless the value is known to be in range of the target type.
Now go read some more!
HTH
It's just a warning: you are assigning a 'long' to an 'unsigned int'.
The solution is simple. Click the yellow warning icon on the left ribbon of the particular line where you assign that value; Xcode will show a fix-it. Double-click the suggested fix and it will apply the cast automatically.
The fix-it typecasts the value to match the expression. But next time, try to keep the types you are assigning the same. Hope this helps.

Hex to 2's complement in objective-c

I'm trying to convert a hex value e.g. 0xFAE8 to a 2's complement decimal i.e. -1304.
I've tried looking at this question: Convert binary two's complement data into integer in objective-c,
but I don't really get how the byte-shifting enables the conversion to be done properly. I hope there can be an explanation or a simpler way to do it. Thanks.
If the Most Significant Bit (MSB) is set, then you have to flip or invert all the bits and add 1. Here's the general algorithm:
if (msb_is_set) then
    x = to_integer(...)    // unsigned value of the bits, e.g. 0xFAE8 -> 64232
    x = ~x                 // flip all the bits (within 16 bits): 0x0517
    x = x + 1              // add one: 0x0518 = 1304, the magnitude
    x = -x                 // apply the sign: -1304
else
    x = to_integer(...)    // MSB clear: the value is already non-negative
endif
It omits overflow checks, so be sure to handle those cases in production code.
You can test whether the MSB, or high bit, is set with:
if (byte[0] & 0x80) {
    /* high bit is set */
}
(this assumes byte[0] holds the most significant byte of the value).
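For what it's worth, in C and Objective-C the flip-and-add-one dance can be replaced by a simple cast, because int16_t is a two's complement type (strictly speaking, converting an out-of-range value to a signed type is implementation-defined, but every mainstream platform behaves this way). A minimal sketch:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t raw = 0xFAE8;
    int16_t value = (int16_t)raw;   // reinterpret the 16 bits as signed
    printf("%d\n", value);          // prints -1304
    return 0;
}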

Difference between Objective-C primitive numbers

What is the difference between Objective-C's C primitive numbers? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one are. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say for example I wanted to store a latitude (possibly retrieved from a CLLocation object), which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this, however it doesn't fully satisfy my question.
Well, first off types like int, float, double, long, and short are C primitives, not Objective-C. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte, and (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
Signedness simply means whether or not the type can represent negative values. So a signed integer, or int, can represent a certain range of negative and positive numbers (traditionally -2,147,483,648 to 2,147,483,647), and an unsigned integer, or unsigned int, can represent the same count of numbers, but all non-negative (0 to 4,294,967,295).
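A minimal sketch for checking the sizes and ranges on your own machine (the values in the comments are typical for a 64-bit Mac, not a guarantee):

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("short: %zu, int: %zu, long: %zu, long long: %zu bytes\n",
           sizeof(short), sizeof(int), sizeof(long), sizeof(long long)); // 2, 4, 8, 8
    printf("int range: %d to %d\n", INT_MIN, INT_MAX);  // -2147483648 to 2147483647
    printf("unsigned int max: %u\n", UINT_MAX);         // 4294967295
    return 0;
}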
Floating-Point Types
float
double
long double
These are used to store decimal values (aka fractions) and are also categorized by size. Again, the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof(long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into here.
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CS textbooks also have good resources.
First I would like to point out the difference between an unsigned int and an int. Say you have a very high number, and you write a loop iterating with an unsigned int:
for (unsigned int i = 0; i < N; i++)
{ ... }
If N is a number defined with #define, it may be higher than the maximum value storable in the counter's type. When the counter overflows, an unsigned i wraps around to zero and you end up in an infinite loop; that's why I prefer to use an int for loops.
The same happens if, by mistake, you iterate with an int while comparing it to a long. If N is a long you should iterate with a long; if N is an int you can still safely iterate with a long.
Another pitfall may occur when using the shift operator with an integer constant and then assigning the result to an int or long. Maybe you also log sizeof(long), notice that it returns 8, and since you don't care about portability you think you won't lose precision here:
long i= 1 << 34;
Bit instead 1 isn't a long, so it will overflow and when you cast it to a long you have already lost precision. Instead you should type:
long i= 1l << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double there is a thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value = mantissa * 2^exponent
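If you want to see that decomposition directly, frexp from math.h splits a floating-point number into exactly these two parts; a minimal sketch (6.5 is an arbitrary example value):

#include <math.h>
#include <stdio.h>

int main(void) {
    int exponent;
    double mantissa = frexp(6.5, &exponent);  // 6.5 = 0.8125 * 2^3
    printf("6.5 = %g * 2^%d\n", mantissa, exponent);
    return 0;
}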
So the higher the exponent gets, the less exactly the floating-point number can be represented. A number may also be so large that its representation is so inaccurate that, surprisingly, printing it gives back a different number:
float f = 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double it prints 9876543219124568, and if I use a long double with the %.0Lf format it prints the correct value. Always be careful when using floating-point numbers: unexpected things may happen.
For example, two floating-point numbers may have almost the same value, so close that you expect them to be equal, yet there is a subtle difference and the equality comparison fails. But this has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?
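A minimal illustration of the failing equality and the usual tolerance-based workaround (the 1e-9 threshold is an arbitrary choice for this sketch; picking a good epsilon is exactly what the linked question discusses):

#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 0.1 + 0.2;
    double b = 0.3;
    printf("a == b: %d\n", a == b);                    // 0: exact equality fails
    printf("close enough: %d\n", fabs(a - b) < 1e-9);  // 1: compare within a tolerance
    return 0;
}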

Trouble with floats in Objective-C

I've a small problem and I can't find a solution!
My code is (this is only sample code, but my original code does something like this):
float x = [#"2.45" floatValue];
for(int i=0; i<100; i++)
x += 0.22;
NSLog(#"%f", x);
the output is 52.450001 and not 52.450000!
I don't know why this happens!
Thanks for any help!
~SOLVED~
Thanks to everybody! Yes, I solved it with the double type!
Floats are a number representation with a certain, limited precision. Not every value can be represented in this format.
You can easily see why this must be the case: there are infinitely many numbers just in the interval (0..1), but a float only has a limited number of bits to represent all numbers in (-MAXFLOAT..MAXFLOAT).
More aptly put: a 32-bit integer representation has a countable (finite) number of representable integers, but there are uncountably many real values, which cannot all be captured in a limited representation of 32 or 64 bits. Therefore there is not only a limit to the highest and lowest representable real value, but also to the accuracy.
So why is a number with so few digits after the decimal point affected? Because the representation is based on a binary system rather than a decimal one, so a different set of numbers is exactly representable than the 'round' decimal ones.
See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Floating-point numbers cannot always be represented exactly by computers. This leads to inaccuracy in some digits.
It's like me asking you what 1/3 is in decimal. No matter how hard you try, you're not going to be able to tell me what it is because decimal can't accurately describe that number.
Floats can't accurately describe some decimal numbers.
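To make that concrete, printing 0.1 with more digits than the type can honestly provide shows the nearest representable values (a minimal sketch):

#include <stdio.h>

int main(void) {
    float f = 0.1f;
    double d = 0.1;
    printf("%.20f\n", f);   // 0.10000000149011611938: the nearest float to 0.1
    printf("%.20f\n", d);   // 0.10000000000000000555: closer, but still not exact
    return 0;
}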