I seem to be encountering a strange issue in Objective-C converting a float to an NSNumber (wrapping it for convenience) and then converting it back to a float.
In a nutshell, a class of mine has a property red, which is a float from 0.0 to 1.0:
@property (nonatomic, assign) float red;
This object is comparing itself to a value that is loaded from disk, for synchronization purposes. (The file can change outside the application, so it checks periodically for file changes, loads the alternate version into memory, and does a comparison, merging differences.)
Here's an interesting snippet where the two values are compared:
if (localObject.red != remoteObject.red) {
    NSLog(@"Local red: %f Remote red: %f", localObject.red, remoteObject.red);
}
Here's what I see in the logs:
2011-10-28 21:07:02.356 MyApp[12826:aa63] Local red: 0.205837 Remote red: 0.205837
Weird. Right? How is this piece of code being executed?
The actual value as stored in the file:
...red="0.205837"...
Is converted to a float using:
currentObject.red = [[attributeDict valueForKey:@"red"] floatValue];
At another point in the code I was able to snag a screenshot from GDB. The value was printed by NSLog as follows (this is also the precision with which it appears in the file on disk):
2011-10-28 21:21:19.894 MyApp[13214:1c03] Local red: 0.707199 Remote red: 0.707199
But it appears in the debugger with more digits of precision (screenshot omitted).
How is this level of precision being obtained at the property level, but not stored in the file, or printed properly in NSLog? And why does it seem to be varying?
If you are converting it to/from a string at any point, try using %0.16f instead of %f (or whatever precision you want instead of .16).
For more info, see IEEE Std formatting.
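For example (a sketch; 9 significant digits, i.e. %.9g, are enough to round-trip a single-precision float exactly):

// Write with enough significant digits to round-trip a float.
NSString *stored = [NSString stringWithFormat:@"%.9g", localObject.red];
float restored = [stored floatValue];
// restored should now compare equal to localObject.red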
Also, use objectForKey instead of valueForKey (valueForKey is not intended to be used on dictionaries):
currentObject.red = [[attributeDict objectForKey:@"red"] floatValue];
See this SO answer for a better explanation of objectForKey vs valueForKey:
Difference between objectForKey and valueForKey?
The problem you are experiencing is a problem with floating point. A floating-point number doesn't exactly represent the number stored (except in some specific cases which don't matter here). The example in the link craig posted is an excellent example of this.
In your code, when you write the value out to your file, you write an approximation of what is stored in the floating-point number. When you load it back, another approximation of it is stored in the float. However, these two numbers are unlikely to be equal.
The best solution is to use a fuzzy comparison of the two floating-point numbers. I'm not an Objective-C programmer, so I don't know whether the language includes built-in functions to perform this comparison. However, this link provides a good set of examples of various ways to perform it.
You can also try the other posted solution of writing your file with higher precision, but you will probably end up wasting space on extra precision that you don't need. I'd personally recommend the fuzzy comparison, as it is more bulletproof.
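For illustration, a minimal sketch of such a fuzzy comparison (the epsilon multiplier is an assumption; tune it to your data, and consider an extra absolute tolerance for values near zero):

#include <float.h>
#include <math.h>

// Relative comparison: the tolerance scales with the magnitude of the operands.
static BOOL nearlyEqual(float a, float b, float relEpsilon) {
    float diff = fabsf(a - b);
    float largest = fmaxf(fabsf(a), fabsf(b));
    return diff <= largest * relEpsilon;
}

// Usage in the comparison from the question:
if (!nearlyEqual(localObject.red, remoteObject.red, 4 * FLT_EPSILON)) {
    // treat as a genuine difference worth merging
}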
You say that the "remote" value is "loaded from disk". I'm guessing that the representation on disk is not an IEEE float bit value, but rather some sort of character representation. So there are inevitable conversion errors going to and from that representation, given the way IEEE float works. You will not get an exact result, since there are only about 6 digits of decimal precision in a float value, and that rarely maps cleanly onto 6 decimal digits; it's like representing 1/3 in decimal -- there is no exact mapping.
Read this: http://floating-point-gui.de/
Related
Problem:
Yesterday I converted a large project of mine to support arm64, and after that I got 500+ warnings at once. About 70% of them are where NSInteger is being assigned to int or vice versa, and the rest are where an NSInteger or NSUInteger is formatted into an NSString like this:
NSInteger a = 123;
NSString *str = [NSString stringWithFormat:@"Int:%d", a]; // warning: values of type 'NSInteger' should not be used as format arguments; add an explicit cast to 'long' instead
Now I do know how to address them manually, but that's a huge and very laborious task.
I'm also aware that I can silence the type-mismatch warnings altogether, but I don't want to do that. Of course, they're very helpful.
What I've tried:
I've converted [NSNumber numberWithInt:abc]; to [NSNumber numberWithInt:(int)abc]; using find-n-replace. It fixed some.
I've also tried to change all my int properties to NSInteger properties, but it doubled the number of warnings (reaching 900+), so I reverted.
I've also tried to find some regular expression, but couldn't find one suitable for my needs.
Question:
I'm looking for a regular expression or any other workaround somebody has tried which can reduce the amount of work needed to fix them manually.
Thanks in advance.
NSInteger a = 123;
NSString *str = [NSString stringWithFormat:@"Int:%ld", (long)a];
After updating to 64-bit you need to typecast like this: (long)a. %d is only for the 32-bit range; %ld is for long integers. For a better understanding, go through this Apple documentation:
https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Cocoa64BitGuide/ConvertingExistingApp/ConvertingExistingApp.html
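For reference, the same casting pattern covers the NSUInteger warnings as well:

NSInteger a = 123;
NSUInteger b = 456;
NSString *s1 = [NSString stringWithFormat:@"Int:%ld", (long)a];           // cast NSInteger to long for %ld
NSString *s2 = [NSString stringWithFormat:@"UInt:%lu", (unsigned long)b]; // cast NSUInteger to unsigned long for %lu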
In case someone else is facing a similar situation, I want to clarify how to deal with it. Although @Raju's answer suggests doing it manually (which I wanted to avoid), I found exactly what I needed at the link he shared.
Apple has provided a script for 64-bit conversion called ConvertCocoa64, located at /Developer/Extras/64BitConversion/ConvertCocoa64, which not only converts all int to NSInteger but also deals with float to CGFloat conversion, as stated:
It converts most instances of int and unsigned int to NSInteger and NSUInteger, respectively. It doesn't convert ints in bit-field declarations and other inappropriate cases. During processing, the script refers to a hardcoded list of exceptions.
In addition to the above conversions, it also flags the lines in code which need a manual fix, so it might help with the string-format warnings.
Please refer to this link for complete details. It not only explains how to use the script but also suggests some very important post-64-bit-migration checkpoints.
objective c implicit conversion loses integer precision 'NSUInteger' (aka 'unsigned long') to 'int'
Change key in Project > Build Setting "implicit conversion to 32Bits Type > Debug > *64 architecture : No"
For the other warning:
Change key in Project > Build Setting "typecheck calls to printf/scanf : NO"
Explanation (how it works):
Check calls to printf and scanf, etc., to make sure that the arguments supplied have types appropriate to the format string specified, and that the conversions specified in the format string make sense.
Hope it works.
Caution: this may suppress other 64-bit architecture conversion warnings.
I'm trying to do the following, but NSValue's creation method returns nil.
Are C bitfields in structs not supported?
struct MyThingType {
    BOOL isActive:1;
    unsigned int count:7;
} myThing = {
    .isActive = YES,
    .count = 3,
};

NSValue *value = [NSValue valueWithBytes:&myThing objCType:@encode(struct MyThingType)];
// value is nil here
First and foremost, claptrap makes a very good point in his comment: why bother using bit-field specifiers (which are mainly used either for micro-optimization or to manually add padding bits where you need them), only to then wrap it all up in an instance of NSValue?
It's like buying a castle, but then living in the kitchen so as not to wear out the carpets...
I don't think they are; a quick canter through the Apple dev docs came up with this... there are indeed several issues to take into account when it comes to bit fields.
I've also just found this, which explains why bit-fields + NSValue don't really play well together.
Especially in cases where the sizeof a struct can lead to NSValue reading the data in an... shall we say erratic manner:
The struct you've created is padded to 8 bits. Now these bits could be read as 2 int, or 1 long or something... From what I've read on the linked page, it's not unlikely that this is what is happening.
So, basically, NSValue is incapable of determining the actual types, when you're using bit fields. In case of ambiguity, an int (width 4 in most cases) is assumed and under/overflow occurs, and you have a mess on your hands.
Since the compiler still has some liberty as to where each member is actually stored, it doesn't quite suffice to pass the stringified typedef sort of thing (objCType: @encode(struct YourStruct)), because there is a good chance that you won't be able to make sense of the actual struct itself, owing to compiler optimizations and such...
I'd suggest you simply drop the bit field specifiers, because structs should be supported... at least, last time I tried, a struct with simple primitive types worked just fine.
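For example, a sketch of the same data without bit fields (the struct name here is made up), which NSValue handles fine:

struct MyPlainThingType {
    BOOL isActive;       // plain members instead of 1- and 7-bit fields
    unsigned int count;
};

struct MyPlainThingType myThing = { .isActive = YES, .count = 3 };
NSValue *value = [NSValue valueWithBytes:&myThing objCType:@encode(struct MyPlainThingType)];

struct MyPlainThingType decoded;
[value getValue:&decoded];  // decoded.isActive == YES, decoded.count == 3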
You can solve this with a union. Simply put the structure into a union that has another member with a type supported by NSValue and a size larger than your structure. In your case the obvious choice is long.
union _bitfield_word_union
{
    yourstructuretype bitfield;
    long plain;
};
You can make it more robust against resizing of the structure by using an array whose size is calculated at compile time. (Remember that sizeof() is a compile-time operator, too.)
char plain[sizeof(yourstructuretype)/sizeof(char)];
Then you can store the structure with the bitfield into the union and read the plain member out.
union _bitfield_word_union converter = { .bitfield = yourstructuretypevalue };
long plain = converter.plain;
Use this value for NSValue instance creation. Reading out you have to do the inverse way.
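Putting that together, a sketch of the round trip (yourstructuretype remains a placeholder from above):

// Store: pun the bit-field struct through the union's long member.
union _bitfield_word_union converter = { .bitfield = yourstructuretypevalue };
NSValue *value = [NSValue valueWithBytes:&converter.plain objCType:@encode(long)];

// Read out: the inverse way.
union _bitfield_word_union back;
[value getValue:&back.plain];
yourstructuretype restored = back.bitfield;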
I'm pretty sure that through a technical corrigendum of C99 this became standard-conforming (it is called type punning): you can expect that reading a member's value (bitfield) through another member (plain) and storing it back is defined, as long as the member being read is at least as big as the member being written. (There might be undefined bits 9-31/63 in plain, but you do not have to care about them.) In any case, it is real-world conforming.
Dirty hack? Maybe. One might call it C99. However using bitfields in combination with NSValue sounds like using dirty hacks.
Say I have a function:
- (void) doSomethingWithFloat:(float)aFloat;
and I call that function with a double precision floating point value as follows:
[self doSomethingWithFloat:12.0];
Is a conversion done from 12.0 (double) to 12.0f (single) at compile-time or runtime, or neither?
Just for clarity: I'm not asking for the difference between single precision and double
precision floating point numbers.
Objective-C actually follows most of C's conventions - so floats are promoted to double per the C spec when passed to a function. The Objective-C compiler turns all methods into functions eventually, so your double works.
That said, it's best to turn on compiler warnings and pass CGFloats or floats - that way you know when you are losing precision.
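For instance, with implicit-conversion warnings enabled (e.g. Clang's -Wconversion; the exact flag is an assumption about your setup), the lossy call gets flagged:

double d = 0.12345678901234567;
[self doSomethingWithFloat:d];         // warning: implicit conversion loses floating-point precision
[self doSomethingWithFloat:(float)d];  // explicit cast documents (and silences) the narrowing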
What number type should I use? I've tried double, but it converts 1.4 to 1.39999999.
I've also tried NSNumber, but I can't figure out how to use it.
if (MyNum < 1.4) {
Also I need to convert from an NSString
I'm evaluating my app's version number; 1.4 is my new release version. I need to perform an action if (appVer < 1.4).
Use NSString's built-in number conversion methods.
NSString *version = [[[NSBundle mainBundle] infoDictionary] objectForKey:@"CFBundleVersion"];
double versionNumber = [version doubleValue];
Then use:
if (versionNumber < 1.4) {
NSString Documentation
What sort of result do you want? If you want a floating point result then you tend to get 1.39999999, because there's no exact representation of 1.4 in IEEE float.
As far as I know Objective-C does not have a decimal type, so if you wish to have an exact representation you must use integers and keep track of the decimal point yourself. Any arithmetic then becomes fairly complicated.
[I see that there is indeed NSDecimalNumber, which should do most of what is needed. Have no experience with it, however.]
Your best bet is probably to use floating point and rely on rounding during formatting, unless you need financial accuracy.
If you really need such accuracy, use NSDecimalNumber.
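A sketch of that approach; NSDecimalNumber parses the decimal string exactly, so 1.4 stays 1.4:

NSDecimalNumber *appVer = [NSDecimalNumber decimalNumberWithString:@"1.3"];
NSDecimalNumber *threshold = [NSDecimalNumber decimalNumberWithString:@"1.4"];
if ([appVer compare:threshold] == NSOrderedAscending) {
    // appVer < 1.4, with no binary floating-point rounding involved
}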
In order to correctly handle version numbers like 1.10 (which floatValue and friends will interpret as a single number, which would be 1.1), you should borrow Growl's version-comparison code under their BSD license.
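If borrowing Growl's code is more than you need, a lighter-weight sketch (my suggestion, not the answer's) is NSString's compare:options: with NSNumericSearch, which handles multi-digit components like 1.10:

NSString *appVer = @"1.10";
if ([appVer compare:@"1.4" options:NSNumericSearch] == NSOrderedAscending) {
    // not reached: 1.10 is treated as newer than 1.4, unlike with doubleValue
}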
A few things:
(1) NSNumber is an integer type, according to the docs. It cannot represent 1.4.
(2) EDIT: see my comment about Major/Minor/Patch version numbers below. You didn't exactly explain that 1.4 represented a fixed version number that should be broken into sub-parts.
I'm still learning, and I'm just stuck. I want the user to enter any number, and as a result my program will do this equation:
x = 5*y
(y is the number the user enters, x is the outcome)
How would I do this? I'm not sure if I'm supposed to add in an int or NSString. Which should I use, and should I enter anything in the header files?
I'm not sure if I'm supposed to add in an int or NSString.
Well, one of these is a numeric type and the other is a text type. How do you multiply text? (Aside from repeating it.)
You need a numeric type.
I would caution against int, since it can only hold integers. The user wouldn't be able to enter “0.5” and get 2.5; when you converted the “0.5” to an int, the fractional part would get lopped off, leaving only the integral part, which is 0. Then you'd multiply 5 by 0, and the result you return to the user would be 0.
Use double. That's a floating-point type; as such, it can hold fractional values.
… should I enter anything in the header files?
Yes, but what you enter depends on whether you want to use Bindings or not (assuming that you really are talking about Cocoa and not Cocoa Touch).
Without Bindings, declare an outlet to the text field you're going to retrieve the multiplier from, and another to the text field you're going to put the product into. Send the input text field a doubleValue message to get the multiplier, and send the output text field a setDoubleValue: message with the product.
With Bindings, declare two instance variables holding double values—again, one for the multiplier and one for the product—along with properties exposing the instance variables, then synthesize the properties, and, finally, bind the text fields' value bindings to those properties.
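A minimal sketch of the outlet-based (non-Bindings) version; the class, outlet, and action names are invented for illustration:

#import <Cocoa/Cocoa.h>

@interface MultiplierController : NSObject
@property (nonatomic, assign) IBOutlet NSTextField *multiplierField;
@property (nonatomic, assign) IBOutlet NSTextField *productField;
- (IBAction)multiply:(id)sender;
@end

@implementation MultiplierController
- (IBAction)multiply:(id)sender {
    double y = [self.multiplierField doubleValue];  // read y from the UI
    [self.productField setDoubleValue:5.0 * y];     // x = 5 * y
}
@end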
If you're retrieving the NSString from a UI, then it's pretty simple to do:
NSString *answer = [NSString stringWithFormat:@"%ld", (long)([userInputString integerValue] * 5)];
This can be done without any Objective-C. That is, since Objective-C is a superset of C, the problem can be solved in pure C.
#include <stdio.h>

int main(void)
{
    int i;
    fscanf(stdin, "%d", &i);
    printf("%d\n", i * 5);
}
In the above, fscanf takes care of converting the character(s) read on standard input to a number and storing it in i.
However, if you had characters from some other source in a char * and needed to convert them to an int, you could create an NSString * with -initWithCString:encoding: and then use its intValue method, but in this particular problem that simply isn't needed.
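For completeness, a sketch of that route (assuming a UTF-8 encoded C string):

const char *raw = "7";
NSString *text = [[NSString alloc] initWithCString:raw encoding:NSUTF8StringEncoding];
int y = [text intValue];
printf("%d\n", y * 5);  // prints 35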