How to print a float in Objective-C as, for example, 3.45 instead of 3.45555555555?
Try formatting the float like this:
NSLog(@"%.2f", myFloat);
The % sign marks a placeholder that will be replaced by the corresponding argument that follows (myFloat). The .2 means two decimal places, and f means a floating-point value.
Take a look here for more detail.
Objective-C's NSLog is very similar to C's printf, with the main exceptions being that you must use an Objective-C string literal (@"…") and you should use %@ for Objective-C strings (NSStrings) rather than %s, which is for plain C strings.
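For example, a minimal sketch contrasting the two specifiers (the variable names here are just placeholders):
NSString *objcString = @"Hello";           // Objective-C string literal
const char *cString = "Hello";             // plain C string
NSLog(@"%@ and %s", objcString, cString);  // %@ for NSString, %s for char *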
Depends on how you're printing it. If you want to show it in a GUI (which is probably the common case for Cocoa and Cocoa Touch apps), use an NSNumberFormatter and set it to have two decimal places. If you're printing it through NSLog() or printf(), you'd use a format specifier along the lines of "%.2f".
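As a minimal sketch of the NSNumberFormatter route (assuming you just want two fraction digits for display; anything beyond that is up to your UI):
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.minimumFractionDigits = 2;
formatter.maximumFractionDigits = 2;
NSString *text = [formatter stringFromNumber:@(3.45555555555)];
// text is "3.46" in most locales (the formatter rounds and is locale-aware)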
Xcode 9.4.1. In the debugger console I see results that seem strange to me:
(lldb) print (double)0.07
(double) $0 = 0.070000000000000007
(lldb) print [(NSDecimalNumber*)[NSDecimalNumber decimalNumberWithString:@"0.07"] doubleValue]
(double) $1 = 0.069999999999999993
I see the same results when executing compiled code. I don't understand why the result is different when converting the literal 0.07 to a double versus converting the decimal 0.07 to a double. Why is precision lost differently?
What am I missing?
The values are calculated differently:
(lldb) p 7.0 / 100.0
(double) $0 = 0.070000000000000007
(lldb) p 7.0 / 10.0 / 10.0
(double) $1 = 0.069999999999999993
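The same difference shows up in compiled code if you print both expressions with enough significant digits (a minimal sketch; the outputs match the lldb values above):
#include <stdio.h>

int main(void) {
    printf("%.17g\n", 7.0 / 100.0);        // 0.070000000000000007
    printf("%.17g\n", 7.0 / 10.0 / 10.0);  // 0.069999999999999993
    return 0;
}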
NSDecimalNumber is designed to behave exactly like what you're seeing. It does "base 10 math" to avoid the very issue you are seeing -- traditional floating point representation of numbers can't accurately represent the base 10 numbers we're used to writing.
The comment instructing you to "go study numerical methods" is a bit brash but it's kind of heading in the right direction. A better use of your time would be to (also) take a look at the documentation for NSDecimalNumber and think about what it does and why it exists.
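For a quick feel of what NSDecimalNumber buys you, here is a minimal sketch (assuming Foundation) that contrasts its base-10 arithmetic with the binary double from the question:
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSDecimalNumber *sevenCents = [NSDecimalNumber decimalNumberWithString:@"0.07"];
        NSDecimalNumber *sum = [sevenCents decimalNumberByAdding:sevenCents];
        NSLog(@"decimal sum: %@", sum);                        // 0.14, exactly
        NSLog(@"as double: %.17g", [sevenCents doubleValue]);  // 0.069999999999999993
    }
    return 0;
}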
I had never heard of NSDecimalNumber before a couple minutes ago, so thanks for pointing me at some new knowledge. :-)
I've been working with format specifiers, but they were generic ones like %d or %@. Today in a tutorial I saw %1$@%2$d and didn't understand what they represent. It was a calculator example, so they are used in this statement: stack = [NSString stringWithFormat:@"%1$@%2$d", stack, number];
The numbers represent positional parameters. The parameters which follow the format string are slotted into the string based on their position in the parameters list. The first parameter goes into the %1 slot, the second into the %2 slot, and so on. The purpose is to deal with languages where the order of terms/words/etc might change from your default. You can't change the parameter order at runtime, but you can make sure the parameters end up in the correct place in the string.
Example
NSLog(@"%1$@, %2$@", @"Red", @"Blue");
NSLog(@"%2$@, %1$@", @"Red", @"Blue");
Output
Red, Blue
Blue, Red
Note that the format string changed, but the parameters are in the same order.
So your format specifier %1$@%2$d means:
%1$@ for %@ (an Objective-C object) with the first parameter, and
%2$d for %d (a signed 32-bit integer, int) with the second parameter.
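Applied to the calculator line from the question (where stack is an NSString; number is assumed to be an int, since %2$d expects one), the call simply appends the digit to the existing stack string:
NSString *stack = @"12";
int number = 3;
stack = [NSString stringWithFormat:@"%1$@%2$d", stack, number];
// stack is now @"123": stack fills the %1$@ slot, number fills the %2$d slot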
These $0, $1, $2 are shorthand parameter names, like in Swift closures:
“Swift automatically provides shorthand argument names to inline closures, which can be used to refer to the values of the closure’s arguments by the names $0, $1, $2, and so on.”
I can't believe I couldn't find a solution to this very simple issue. I have a command-line tool in Objective-C and need to display UTF-8 strings (with non-English characters) in the console. I can't use NSLog as it also displays process information, PID, timestamp, etc. printf doesn't handle non-English characters well.
How can I print non-English characters in the Terminal, without any timestamps? Am I missing something really obvious here, or is such an extremely simple task really non-trivial in OS X?
I've tried:
printf: Doesn't display non-English characters.
NSLog: Displays PID/timestamp, which I don't want.
DLog (from https://stackoverflow.com/a/17311835/811405): Doesn't display non-English characters.
This works just fine:
printf("%s\n", [#"Can Poyrazoğlu" UTF8String]);
The macro you've tried to use depends on CFShow which doesn't print Unicode characters but only their escape codes. More information regarding this behaviour here.
So you could either use something else instead of CFShow in your macro to print to the console without any timestamps, or you could use an NSLog replacement library I wrote, Xcode Logger, and use its XLog_NH logger, which prints only the output without any other information.
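As a minimal sketch of the first option (the UTF8Log name is just illustrative, not a real API), you can format with NSString and write the UTF-8 bytes straight to stdout, so nothing prepends a PID or timestamp:
#import <Foundation/Foundation.h>
#include <stdio.h>

// Hypothetical macro: format via NSString, then print the raw UTF-8 bytes.
#define UTF8Log(fmt, ...) \
    fputs([[NSString stringWithFormat:(fmt), ##__VA_ARGS__] UTF8String], stdout)

int main(void) {
    @autoreleasepool {
        UTF8Log(@"Can Poyrazoğlu\n");
    }
    return 0;
}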
Using stdio:
puts([@"Can Poyrazoğlu" UTF8String]);
Using write:
const char* example = [@"Can Poyrazoğlu" UTF8String];
write(STDOUT_FILENO, example, strlen(example));
I am comparing the output of two programs, one C the other C++, using diff, so the output must be identical.
Is there any way to printf a double so that it is formatted as though it were printed using << mydouble?
I am currently using printf("%g",mydouble)
Here are some examples of the differences:
c: 3.24769e-05 c++: 3.2477e-05
c: 0.0026572 c++: 0.00265721
Interestingly, the scientific notation has more digits in C, and the decimal notation has more in C++.
You can solve this by using the format specifiers in C.
For example, if you would like to print only 3 places after the decimal, you could write your printf like so:
printf("%.3lf", dub);
With a value of double dub = .0137; the output would be 0.014
This would fix the issue with your 2nd case. If you want more precision printed, you could write:
printf("%.8lf", dub);
Your output for double dub = 0.00265721; would then be 0.00265721
The case for %g works the same way, except the precision counts significant digits, so the digits to the left of the decimal point are included as well. If you wanted the C++ version (the lesser precision, I assume), then your code would look like this:
double dub = .0000324769;
printf("%.5g", dub);
Which yields 3.2477e-05
Can this statement ever fail?
if (@"Hello") foo();
In other words, can there be a situation wherein the compiler fails to allocate enough storage space for literals? I know this sounds ridiculous for short literals, but what about really long ones?
No.
NSString literals are "allocated" at compile time and form part of the text segment of your program.
Edit
To answer the other part of the question, if the compiler fails to allocate enough memory for the literal, the if statement won't fail, but the compilation will.
I don't know what the effective upper limit to the length of a string literal is, but it's bound to be less than NSIntegerMax unichars because NSNotFound is defined as NSIntegerMax. According to the clang docs, the length in bytes of a string literal is an unsigned int and NSString string literals are sequences of unichars.
I'm pretty sure if you try to compile a file with the literal
#" ... 1TB worth of characters ... "
the compiler will fail. The C standard (available here) says that any conforming compiler needs to support at least 4095 characters per string literal; see Section 5.2.4.1. I'm sure GCC and Clang allow much bigger literals.