What does the f after the numbers indicate? Is this from C or Objective-C? Is there any difference if I don't add it to a constant number?
CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
Can you explain why I wouldn't just write:
CGRect frame = CGRectMake(0, 0, 320, 50);
CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
uses float constants. (The constant 0.0 usually declares a double in Objective-C; putting an f on the end - 0.0f - declares the constant as a (32-bit) float.)
CGRect frame = CGRectMake(0, 0, 320, 50);
uses ints which will be automatically converted to floats.
In this case, there's no (practical) difference between the two.
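As a minimal illustration of the implicit conversion (the variable names here are hypothetical):

float width = 320;      /* the int literal 320 is converted to 320.0f at compile time */
float height = 50.0f;   /* already a float literal, no conversion needed */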
When in doubt, check the assembler output. For instance, write a small, minimal snippet like this:
#import <Cocoa/Cocoa.h>
void test() {
CGRect r = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
NSLog(#"%f", r.size.width);
}
Then compile it to assembler with the -S option.
gcc -S test.m
Save the assembler output to test.s, remove the .0f suffixes from the constants, and repeat the compile command. Then diff the new test.s against the previous one. That should show whether there are any real differences. Too many people have a mental picture of what they think the compiler does; at the end of the day, you should know how to verify any theory.
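A sketch of that workflow (assuming gcc and diff are on your path and test.m is the snippet above; clang -S works the same way):

gcc -S test.m -o test_float.s    # version with the .0f suffixes
# edit test.m to remove the .0f suffixes, then:
gcc -S test.m -o test_int.s
diff test_float.s test_int.s     # no output means the generated code is identical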
Sometimes there is a difference.
float f = 0.3; /* OK, throw away bits to convert 0.3 from double to float */
assert ( f == 0.3 ); /* not OK, f is converted from float to double
and the value of 0.3 depends on how many bits you use to represent it. */
assert ( f == 0.3f ); /* OK, comparing two floats, although == is finicky. */
It tells the computer that this is a floating point number (I assume you are talking about C/C++ here). If there is no f after the number, it is considered a double or an integer (depending on whether or not there is a decimal point).
3.0f -> float
3.0 -> double
3 -> integer
The f that you are talking about is probably meant to tell the compiler that it's working with a float. When you omit the f, it is usually translated to a double.
Both are floating point numbers, but a float uses less bits (thus smaller and less precise) than a double.
A floating point literal in your source code is parsed as a double. Assigning it to a variable of type float loses precision, and a lot of it: a double carries roughly 15 significant decimal digits, a float only about 7. The "f" postfix lets you tell the compiler: "I know what I'm doing, this is intentional. Don't bug me about it."
The odds of producing a bug aren't that small, by the way. Many a program has keeled over on an ill-conceived floating point comparison or on the assumption that 0.1 is exactly representable.
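A small illustration of that last point (a sketch in plain C; the exact digits printed depend on your platform's printf):

#include <stdio.h>

int main(void) {
    float f = 0.1f;
    double d = 0.1;
    /* Neither value is exactly 0.1; printing extra digits makes the error visible. */
    printf("%.17g\n", (double)f);   /* prints something like 0.10000000149011612 */
    printf("%.17g\n", d);           /* prints something like 0.10000000000000001 */
    return 0;
}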
It's a C thing - floating point literals are double precision (double) by default. Adding an f suffix makes them single precision (float).
You can use ints to specify the values here and in this case it will make no difference, but using the correct type is a good habit to get into - consistency is a good thing in general, and if you need to change these values later you'll know at first glance what type they are.
From C. It means a float literal constant. You can omit both the "f" and the ".0" and use ints in your example because of the implicit conversion of ints to floats.
It is almost certainly from C and reflects the desire to use a 'float' rather than a 'double' type. It is similar to suffixes such as L on numbers to indicate they are long integers. You can just use integers and the compiler will auto convert as appropriate (for this specific scenario).
It usually tells the compiler that the value is a float, i.e. a floating point number. This means it can store whole numbers, decimal values, and values written in exponential notation, e.g. 1, 0.4 or 1.2e+22.
Related
Say I have a function:
- (void) doSomethingWithFloat:(float)aFloat;
and I call that function with a double precision floating point value as follows:
[self doSomethingWithFloat:12.0];
Is a conversion done from 12.0 (double) to 12.0f (single) at compile-time or runtime, or neither?
Just for clarity: I'm not asking for the difference between single precision and double
precision floating point numbers.
Objective-C actually follows most of C's conventions, so floats are promoted to double per the C spec when passed to a function. The Objective-C compiler eventually turns all methods into functions, so your double works.
That said, it's best to turn on compiler warnings and pass CGFloats or floats; it just lets you know when you are losing precision.
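As a minimal sketch at the call site (the method and value are the ones from the question):

[self doSomethingWithFloat:12.0];    // double literal; converted to the method's float parameter (for a constant, the compiler can do this at compile time)
[self doSomethingWithFloat:12.0f];   // float literal; already matches the parameter type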
When the following executes:
NSString *scratchString = @".123456789";
CGFloat scratchNum = [scratchString doubleValue];
scratchNum contains 0.123457
How can I get scratchNum to contain all of the digits? No matter what I try it rounds to 6 places.
CGFloat is a floating point number -- a float or a double (depending on which typedef is chosen by the compiler).
There's no concept of floating point numbers not "containing all of the digits". Their internal representation varies, but in this case your C float is represented by the IEEE 754 standard.
In this case, your "rounding" is a consequence of whatever is taking that floating point number and converting it from a binary form to a textual form. This could be the Xcode IDE itself (e.g. visualizing the value in the debugger), or maybe you're using a printf statement and have a specific formatting specified.
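For example, the six decimal places are mostly the default formatting of %f rather than a property of the value (a sketch; note that if CGFloat is a 32-bit float it only carries about 7 significant digits anyway, so a double is used here):

NSString *scratchString = @".123456789";
double scratchNum = [scratchString doubleValue];
NSLog(@"%f", scratchNum);     // prints 0.123457 -- %f defaults to 6 decimal places
NSLog(@"%.9f", scratchNum);   // prints 0.123456789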
I'm using the following code in Xcode to work out how much to initially zoom on a UIImage in a scroll view, but it keeps returning zero.
int heightRatio = self.scrollView.bounds.size.height / self.imageView.image.size.height;
The strange thing is if I do a subtraction it works, just not with division.
Xcode describes both height variables as a "GLuint", which I think is some graphics version of int.
I've NSLogged both numbers on the right, and neither are zero.
I've tried defining heightRatio as a GLuint as well, with no joy, and also converting to NSNumber but it's all just getting a bit messy.
Am I missing something?
Thanks for any feedback.
You should declare heightRatio as a double (and round later if you want to have an int).
Both self.scrollView.bounds.size.height and self.imageView.image.size.height are of type CGFloat. If you declare heightRatio as CGFloat as well, you would correctly capture division results between 0 and 1. If you need to make it an integer later, use ceil(), trunc(), or round() to round up, down, or to the nearest whole number.
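A minimal sketch, using the property names from the question (round() comes from math.h):

// Declaring the ratio as CGFloat keeps the fractional part of the division.
CGFloat heightRatio = self.scrollView.bounds.size.height / self.imageView.image.size.height;
// If an integer is needed later, round explicitly:
NSInteger wholeRatio = (NSInteger)round(heightRatio);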
Sorry, but double may not be enough; the only way to do the division without losing data is to use NSDecimalNumber. Here is some sample code that shows the percentage of work done in a loop:
NSDecimalNumber *Percentuale =
    [[[NSDecimalNumber decimalNumberWithString:[NSString stringWithFormat:@"%i", ClienteNr]]
        decimalNumberByDividingBy:[NSDecimalNumber decimalNumberWithString:
            [NSString stringWithFormat:@"%i", [[Globals ClientiDaMostrareMappa] count]]]]
        decimalNumberByMultiplyingBy:[NSDecimalNumber decimalNumberWithString:@"100"]];
[Globals.WaitAlert setTitle:[NSString stringWithFormat:@"Localizzazione Clienti!\nAttendere... %i%%", [Percentuale intValue]]];
I seem to be encountering a strange issue in Objective-C converting a float to an NSNumber (wrapping it for convenience) and then converting it back to a float.
In a nutshell, a class of mine has a property red, which is a float from 0.0 to 1.0:
@property (nonatomic, assign) float red;
This object is comparing itself to a value that is loaded from disk, for synchronization purposes. (The file can change outside the application, so it checks periodically for file changes, loads the alternate version into memory, and does a comparison, merging differences.)
Here's an interesting snippet where the two values are compared:
if (localObject.red != remoteObject.red) {
NSLog(#"Local red: %f Remote red: %f", localObject.red, remoteObject.red);
}
Here's what I see in the logs:
2011-10-28 21:07:02.356 MyApp[12826:aa63] Local red: 0.205837 Remote red: 0.205837
Weird. Right? How is this piece of code being executed?
The actual value as stored in the file:
...red="0.205837"...
Is converted to a float using:
currentObject.red = [[attributeDict valueForKey:@"red"] floatValue];
At another point in the code I was able to snag a screenshot from GDB. The value was printed by NSLog as follows (this is also the precision with which it appears in the file on disk):
2011-10-28 21:21:19.894 MyApp[13214:1c03] Local red: 0.707199 Remote red: 0.707199
But it appears in the debugger with more digits of precision (the screenshot is omitted here).
How is this level of precision being obtained at the property level, but not stored in the file, or printed properly in NSLog? And why does it seem to be varying?
If you are converting it to/from a string at any point try using %0.16f instead of %f (or whatever precision you want instead of .16).
For more info, see IEEE Std formatting.
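For instance, writing the value out with enough digits for a float to round-trip through the string (9 significant digits is sufficient for a 32-bit float; this sketch uses the property from the question):

NSString *stored = [NSString stringWithFormat:@"%0.9g", localObject.red];
float restored = [stored floatValue];
// restored should now compare equal to localObject.red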
Also, use objectForKey instead of valueForKey (valueForKey is not intended to be used on dictionaries):
currentObject.red = [[attributeDict objectForKey:@"red"] floatValue];
See this SO answer for a better explanation of objectForKey vs valueForKey:
Difference between objectForKey and valueForKey?
The problem you are experiencing is a problem with floating point. A floating point number doesn't exactly represent the number stored (except for some specific cases which don't matter here). The link Craig posted gives an excellent example of this.
In your code, when you write the value out to your file, you write an approximation of what is stored in the floating point number. When you load it back, another approximation of it is stored in the float, and these two numbers are unlikely to be equal.
The best solution is to use a fuzzy comparison of the two floating point numbers. I'm not an Objective-C programmer, so I don't know whether the language includes built-in functions to perform this comparison, but this link provides a good set of examples of various ways to perform it.
You can also try the other posted solution of writing to your file with greater precision, but you will probably end up wasting space on extra precision you don't need. I'd personally recommend the fuzzy comparison, as it is more bulletproof.
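A minimal sketch of such a fuzzy comparison, meant to sit in an Objective-C source file (BOOL comes from Foundation; fabsf/fmaxf are from <math.h>, FLT_EPSILON from <float.h>; the factor of 4 is an arbitrary tolerance choice):

#include <float.h>
#include <math.h>

static BOOL nearlyEqual(float a, float b) {
    float diff = fabsf(a - b);
    float largest = fmaxf(fabsf(a), fabsf(b));
    /* Scale the tolerance to the magnitude of the operands. */
    return diff <= largest * FLT_EPSILON * 4.0f;
}

The comparison in the question then becomes if (!nearlyEqual(localObject.red, remoteObject.red)) { ... }.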
You say that the "remote" value is "loaded from disk". I'm guessing that the representation on disk is not an IEEE float bit value, but rather some sort of character representation. So, given the way IEEE float works, there are inevitable conversion errors going to and from that representation. You will not get an exact result: a float value has only about 6 decimal digits of precision, and it rarely maps to exactly 6 decimal digits anyway; it's rather like trying to represent 1/3 in decimal -- there is no exact mapping.
Read this: http://floating-point-gui.de/
I am doing a simple:
float x = 151.185436;
printf("x=%f",x);
and the result is
x=151.185440
What's wrong here? I want to retain and print my original value of 151.185436.
Thanks
Amarsh
Floats just aren't very accurate. Try double. And read this: http://docs.sun.com/source/806-3568/ncg_goldberg.html
A float can only hold 32 bits (4 bytes) of information about your number -- it can't just store as many decimal places as you need it to. 151.18544 is about as close to your value as the float can represent without running out of bits.
For the precision you want, you need to use a double instead of a float.
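For example (a sketch; the exact digits printed depend on your C library, but a double easily holds this value):

double x = 151.185436;
printf("x=%f\n", x);   /* prints x=151.185436 */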
Floats are inaccurate; use doubles. Also, as you're using Objective-C rather than straight C, it might be better to use the Objective-C way of doing this:
NSNumber *myNumber = [NSNumber numberWithDouble:151.185436];
NSLog(@"myNumber = %@", myNumber);