I have an iOS app that is doing things that don't quite make sense to me. Several float variables defined in my interface are being assigned incorrectly.
kettleVolume = 30;
lbsGrain = 5;
mashIn = 65;
grainTemp = 20;
When I step through this in the debugger, I very clearly see the following values being assigned:
kettleVolume float 1.09038e-33
lbsGrain float 30
mashIn float 5
grainTemp float 65
Somehow, each one is getting the value from the line above it? What am I doing incorrectly?
There are numerous reports that ivars inspected from LLDB seem wrong (I've had the same problem many times); more specifically, they seem to be shifted. That said, it appears to be only a bug in Xcode's variable inspector. If you want to be sure about the values, you can po _yourivar in the debugger console, switch to GDB, or NSLog them. There is also a similar question here: GDB Vs LLDB debuggers
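If you want a quick sanity check here, a minimal sketch (assuming these are plain float ivars in scope) is to log them directly and compare against what the inspector shows:
// Log the raw values; if these match what you assigned, the bug is in
// the inspector's display, not in your code.
NSLog(@"kettleVolume=%f lbsGrain=%f mashIn=%f grainTemp=%f",
      kettleVolume, lbsGrain, mashIn, grainTemp);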
I'm new to Xcode and Objective-C, and I have a task to make a newsletter that is downloadable.
So I got some source code and tweaked it a bit, but I got an error that said:
"Implicit conversion loses integer precision: 'long' to 'int'"
Here is my code:
- (void)downloadIssue:(IssueInfo *)issueInfo {
    NewsstandDownloader *downloader = [[AppDelegate instance] newsstandDownloader];
    downloader.delegate = self;
    long index = [self.publisher indexOfIssue:issueInfo];
    [downloader downloadIssue:issueInfo forIndexTag:index]; // <-- Error
}
Please help me.
Thank you.
That's just a compiler warning, and a mild one at that. Since int is at least 32 bits on iOS, you would only need to be concerned about it if an index could exceed roughly two billion.
The way to solve the problem is to either change the declaration of the method you're calling to take a long:
- (void)downloadIssue:(IssueInfo *)issueInfo forIndexTag:(long)index;
or simply use a cast:
int index = (int)[self.publisher indexOfIssue:issueInfo];
"int" isn't usually a good choice in Objective-C, because its width is platform-dependent in C (32-bit versus 64-bit, iOS versus macOS, etc.). It's better to use something more Objective-C specific, like NSInteger or NSUInteger, which are defined to match the platform's native word size.
I'm doing a tutorial on Code School (http://tryobjectivec.codeschool.com/), and according to this tutorial the following code should work:
void (^sumNumbers)(NSUInteger, NSUInteger) = ^(NSUInteger num1, NSUInteger num2){
    NSLog(@"The sum of the numbers is %lu", num1 + num2);
};
Well, I tried to run this, although Xcode already gave me an orange warning message.
I tried to build the code anyway, just to see whether Xcode or the tutorial was right. Well, something broke.
But even worse, I can't get Xcode working again: it now pretends that all the code I've ever written is causing problems. But I'm pretty sure it used to work.
NSUInteger is, at least on a 32-bit architecture, typedefed as an unsigned int, not a long. The %lu formatter specifies that you want to print the value as an unsigned long, so there's a type conflict. You can print it with %u instead, which is the specifier for unsigned int.
The blue marker in the gutter of the line sumNumbers(10,20); indicates that you have a breakpoint set on that line, which is what interrupts the app. Remove or disable that breakpoint and it will work fine as well.
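In code, the %u route could look like this; the cast is worth keeping so the specifier stays honest on 64-bit, where NSUInteger is wider than unsigned int:
void (^sumNumbers)(NSUInteger, NSUInteger) = ^(NSUInteger num1, NSUInteger num2){
    // %u matches unsigned int; the cast makes that true on every architecture.
    NSLog(@"The sum of the numbers is %u", (unsigned int)(num1 + num2));
};
sumNumbers(10, 20);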
It is generally advised that you do not pass NSInteger, NSUInteger, CGFloat, or CFIndex directly to NSLog, or to any other operation that uses a printf-style format string. The reason is that their sizes vary between 32-bit and 64-bit architectures.
In your case, the correct way to write the code is to cast the NSUInteger to an unsigned long, which is what %lu expects. On Apple's toolchain, long is 32 bits on 32-bit platforms and 64 bits on 64-bit ones, so NSUInteger is never wider than unsigned long and the cast is always safe. Be warned that this may not hold on other systems; the C and C++ standards only require that a long be at least 32 bits.
void (^sumNumbers)(NSUInteger, NSUInteger) = ^(NSUInteger num1, NSUInteger num2){
    NSLog(@"The sum of the numbers is %lu", (unsigned long)(num1 + num2));
};
With regard to the tutorial, the person who wrote it was probably working on a 64-bit system, where NSUInteger maps to the size %lu requires. Everything you read on the Web, from Apple documentation to Stack Overflow answers, will have errors somewhere.
In order to NSLog your NSUInteger, you should change your statement to this:
NSLog(@"%u", (unsigned int)(num1 + num2));
The reason your app is not running properly anymore is probably that you still have your breakpoint set. Go to Debug > Deactivate Breakpoints or use the shortcut ⌘Y, and then try running your application again.
I am writing an iOS application that (among many other features) calculates alcohol content from sugar measurements taken before and after fermentation, for homebrewers. The problem is that every time I run the app in the simulator, it crashes with "Thread 1: signal SIGABRT" as soon as I enter the UIViewController holding the text fields, labels, and buttons used in this function (in the implementation):
- (IBAction)calcAlc:(id)sender {
    double ogVal = [[oGtext text] floatValue];
    double fgVal = [[fGtext text] floatValue];
    double alcoContent = ((1.05 * (ogVal - fgVal)) / fgVal) / 0.79;
    NSString *theResult = [NSString stringWithFormat:@"%1.2f", alcoContent];
    alcContent.text = theResult;
}
I'm really stuck here -- any help is very appreciated. Thanks in advance!
You should check whether fgVal is ever 0, since you are dividing by it.
You will also want to use doubleValue instead of floatValue, since you declared the variables as double.
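A sketch of the handler with both of these suggestions applied (it assumes your oGtext, fGtext, and alcContent outlets are all connected, which is worth verifying separately):
- (IBAction)calcAlc:(id)sender {
    double ogVal = [[oGtext text] doubleValue];
    double fgVal = [[fGtext text] doubleValue];
    // Guard against an empty or zero final-gravity field before dividing.
    if (fgVal == 0) {
        alcContent.text = @"";
        return;
    }
    double alcoContent = ((1.05 * (ogVal - fgVal)) / fgVal) / 0.79;
    alcContent.text = [NSString stringWithFormat:@"%1.2f", alcoContent];
}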
I think you should try float in place of double.
Now put breakpoints on each and every method and debug, to find the exact point where your application crashes; then you will be able to find the solution.
Also be sure to connect each and every IBOutlet and IBAction in your storyboard or nib (whichever you are using); a crash on entering a view controller is often caused by an outlet wired to a property that no longer exists.
Please let me know if it works. :)
This is my first time asking something here, so please don't be too harsh on me :-). I have a strange "bad access" issue. In a class of mine I have an NSInteger along with a few other members. I override the - (NSString *)description method to print the values of everything, like so (the irrelevant parts are omitted):
- (NSString *)description {
    return [NSString stringWithFormat:@"Duration:%d", duration];
}
and then I print that using NSLog(@"%@", myObject), which gives me EXC_BAD_ACCESS without any log messages, regardless of NSZombieEnabled.
- I have double-checked the order of all format specifiers and arguments; it's correct.
- I tried changing the format specifier to %i and %@ and didn't get any result.
- When I init my object I don't initialize duration. Later, I assign it through a property, @property NSInteger duration. So I tried initializing duration to 0 in my init method, but to no avail.
- If I box duration into an NSNumber prior to printing, it works.
- When I remove duration and leave all the other ivars, it works again.
I'm stumped, what am I doing wrong here?
Can you help me?
Thanks a lot!
EDIT: To summarize, it seems this is caused by differences between 32-bit and 64-bit platforms, because everything is fine when run on an iPhone 4 and there are issues only in the simulator. I tried three different approaches: using %d and %i with the NSInteger variable itself, using %qi, and using %ld/%lx; I also tried casting to int and to long with the various format specifiers. Every time, it runs on the device but gets EXC_BAD_ACCESS in the simulator.
The only guess here: NSInteger could be 64-bit in your case, while %i (as well as %d) expects a 32-bit integer. You can try %qi or, what seems better, cast the value explicitly.
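For example, a hedged rewrite of the description method using the explicit cast, which matches Apple's usual pattern of pairing a long cast with %ld:
- (NSString *)description {
    // Cast NSInteger to long and print with %ld so the specifier matches
    // whether NSInteger is 32 or 64 bits wide.
    return [NSString stringWithFormat:@"Duration: %ld", (long)duration];
}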
When you run the app in the debugger, where exactly is the signal raised? Xcode will show you the precise line and stack of the problem.
When you are sure the crash happens in the stringWithFormat: method, it's probably a matter of format specifiers. Apple's String Programming Guide contains information on how to handle NSInteger in a safe and platform-independent way.
Maybe you need to synthesize your property?
I seem to be encountering a strange issue in Objective-C converting a float to an NSNumber (wrapping it for convenience) and then converting it back to a float.
In a nutshell, a class of mine has a property red, which is a float from 0.0 to 1.0:
@property (nonatomic, assign) float red;
This object is comparing itself to a value that is loaded from disk, for synchronization purposes. (The file can change outside the application, so it checks periodically for file changes, loads the alternate version into memory, and does a comparison, merging differences.)
Here's an interesting snippet where the two values are compared:
if (localObject.red != remoteObject.red) {
    NSLog(@"Local red: %f Remote red: %f", localObject.red, remoteObject.red);
}
Here's what I see in the logs:
2011-10-28 21:07:02.356 MyApp[12826:aa63] Local red: 0.205837 Remote red: 0.205837
Weird, right? If the two values print identically, why is the body of this if statement being executed at all?
The actual value as stored in the file:
...red="0.205837"...
Is converted to a float using:
currentObject.red = [[attributeDict valueForKey:@"red"] floatValue];
At another point in the code I snagged a screenshot from GDB. The value was printed by NSLog as follows (this is also the precision with which it appears in the file on disk):
2011-10-28 21:21:19.894 MyApp[13214:1c03] Local red: 0.707199 Remote red: 0.707199
But in the debugger it appears with considerably more digits of precision.
How is this level of precision being obtained at the property level, but not stored in the file, or printed properly in NSLog? And why does it seem to be varying?
If you are converting it to or from a string at any point, try using %0.16f instead of %f (or whatever precision you want instead of .16).
For more info, see the formats defined by the IEEE 754 floating-point standard.
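For instance, a sketch of round-tripping the value with explicit precision; the 16 is just the figure suggested above, not a magic number:
// Write with explicit precision instead of %f's default of six decimal
// places, then read it back the same way the app already does.
NSString *stored = [NSString stringWithFormat:@"%0.16f", localObject.red];
float restored = [stored floatValue];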
Also, use objectForKey: instead of valueForKey: (valueForKey: is not intended to be used on dictionaries):
currentObject.red = [[attributeDict objectForKey:@"red"] floatValue];
See this SO answer for a better explanation of objectForKey vs valueForKey:
Difference between objectForKey and valueForKey?
The problem you are experiencing is a floating-point issue. A floating-point number doesn't exactly represent the number stored (except for some specific cases, which don't matter here). The example in the link Craig posted is an excellent illustration of this.
In your code, when you write the value out to your file you write an approximation of what is stored in the floating-point number. When you load it back, another approximation of it is stored in the float. However, these two numbers are unlikely to be equal.
The best solution is to use a fuzzy comparison of the two floating-point numbers. I'm not an Objective-C programmer, so I don't know whether the language includes built-in functions to perform this comparison, but this link provides a good set of examples of various ways to do it.
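As a concrete illustration, a minimal epsilon-comparison sketch in Objective-C; the 1e-5f tolerance is an assumption and should be tuned to the precision your file format actually preserves:
#include <math.h>

// Treat two floats as equal when they differ by less than a small tolerance.
static BOOL FloatsAreClose(float a, float b) {
    return fabsf(a - b) < 1e-5f;
}

// Then the comparison from the question becomes:
if (!FloatsAreClose(localObject.red, remoteObject.red)) {
    // a genuine difference worth merging
}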
You can also try the other posted solution of writing to your file with greater precision, but you will probably end up wasting space on extra precision you don't need. I'd personally recommend the fuzzy comparison, as it is more bulletproof.
You say that the "remote" value is "loaded from disk". I'm guessing that the representation on disk is not an IEEE float bit pattern, but rather some sort of character representation. Given the way IEEE floats work, conversion errors going to and from that representation are inevitable: a float carries only about 6 decimal digits of precision, yet it rarely maps to exactly 6 decimal digits; it's like representing 1/3 in decimal, where there is no exact mapping.
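You can see this directly by printing more digits than the default (a tiny standalone demo, not tied to the code above):
float r = 0.205837f;
// %f shows six decimal places; asking for ten reveals digits the stored
// float actually carries beyond what the file recorded.
NSLog(@"%.10f", r);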
Read this: http://floating-point-gui.de/