Code
CGFloat a = 3.45378;
I want to change the value to
CGFloat a = 3.45f; // only 2 decimal places
I know how printf formatting works, but I don't know how to keep just 2 decimal places in the stored value itself.
To drop the extra precision and round to the nearest two decimal places, follow these steps:
Multiply your number by 100: 345.378
Round your number to the nearest integer: 345
Divide your number by 100: 3.45
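A minimal sketch of those steps in code, assuming you want the stored value itself rounded (not just its display) and that CGFloat is available as in the question; round comes from math.h, and the result is still only the closest representable value to 3.45:
#include <math.h>
CGFloat a = 3.45378;
CGFloat rounded = round(a * 100.0) / 100.0; // approximately 3.45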
I have a floating point number that has more decimal digits than I want, for example:
float fRes = 10.0 / 3.0;
The actual fRes value is 3.3333333333333...
Is it possible to keep, for example, only 2 decimal digits:
float fRes = 10.0 / 3.0;
// fRes is 3.333333333333333333333333
float fResOk = FuncRound( fRes, 2 );
// fResOk is 3.33
thanks in advance
I don't know where you are using this rounded number, but you should generally only round a value when displaying it to the user. There are C-style format string ways to round floating point numbers for display, for example:
[NSString stringWithFormat:@"%.2f", value];
As you may have already read, floating point numbers are approximations of real numbers, so doing fResOk = roundf(fRes * 100.0) / 100.0; may not give you exactly 3.33 but rather the closest value to 3.33 that a floating point number can represent.
Assuming that you're looking for the correct function to round to a certain number of digits, you'll probably find it easiest to do the following:
fResOk = roundf(fRes * 100.0) / 100.0;
That will multiply the value by 100 (giving you your 2 digits of significance), round the value, and then reduce it back to the magnitude you originally started with.
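Putting both together, a small sketch as a complete program (assuming Foundation is available; roundf comes from math.h):
#import <Foundation/Foundation.h>
#include <math.h>

int main(void)
{
    @autoreleasepool {
        float fRes = 10.0f / 3.0f;                      // 3.3333...
        float fResOk = roundf(fRes * 100.0f) / 100.0f;  // closest float to 3.33
        NSLog(@"rounded value: %f", fResOk);
        NSLog(@"for display only: %@", [NSString stringWithFormat:@"%.2f", fRes]);
    }
    return 0;
}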
I am working on an app that needs to take a ratio of a given number and multiply that ratio by another number. The problem is that I can't get numbers less than 1 to give me the proper decimal ratio; instead it gives me zero (when it should be .5).
Example:
float number = 1/2; // This gives me zero
double number = 1/2; // This also gives me zero
If you don't specify decimal places you're using integer literals, which means the calculation is performed with integer arithmetic before the result is converted to the type on the left-hand side. You want to do the following when using hard-coded numbers in your code:
float number = 1.0f / 2.0f;
double number = 1.0 / 2.0;
If you're aiming to use integer variables for an operation, you'll want to cast them to the type that you want for your result.
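For instance, a minimal sketch with two hypothetical int variables (casting either operand is enough to force floating point division):
int numerator = 1;
int denominator = 2;
float ratio = (float)numerator / denominator; // 0.5f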
Try this
float number = 1.0/2.0;
Remember that 1 is an int, so you are essentially taking
(int)1 / (int)2
which returns
(int)0
To cast variables that are ints, do
float number = (float)numerator / (float)denominator;
I am learning Objective-C and have completed a simple program and got an unexpected result. This program is just a multiplication table test... The user inputs the number of iterations (test questions), then inputs answers. After that the program displays the number of right and wrong answers, the percentage, and an accepted/failed result.
#import <Foundation/Foundation.h>
int main (int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Welcome to multiplication table test");
    int rightAnswers = 0; //the sum of the right answers
    int wrongAnswers = 0; //the sum of wrong answers
    int combinations;     //the number of combinations
    NSLog(@"Please, input the number of test combinations");
    scanf("%d", &combinations);
    for (int i = 0; i < combinations; ++i)
    {
        int firstInt = rand() % 8 + 1;
        int secondInt = rand() % 8 + 1;
        int result = firstInt * secondInt;
        int answer;
        NSLog(@"%d*%d=", firstInt, secondInt);
        scanf("%d", &answer);
        if (answer == result)
        {
            NSLog(@"Ok");
            rightAnswers++;
        }
        else
        {
            NSLog(@"Error");
            wrongAnswers++;
        }
    }
    int percent = (100 / combinations) * rightAnswers;
    NSLog(@"Combinations passed: %d", combinations);
    NSLog(@"Answered right: %d times", rightAnswers);
    NSLog(@"Answered wrong: %d times", wrongAnswers);
    NSLog(@"Completed %d percent", percent);
    if (percent >= 70) NSLog(@"accepted");
    else
        NSLog(@"failed");
    [pool drain];
    return 0;
}
Problem (strange result)
When I input 3 iterations and answer all of them correctly, I am not getting 100%, only 99%. I tried the same calculation on my iPhone calculator:
100 / 3 = 33.3333333... percent for one right answer (the program displays 33%; the digits after the decimal point get cut off)
33.3333333... * 3 = 100%
Can someone explain where I went wrong? Thanks.
This is a result of integer division. When you perform division between two integer types, the result is automatically rounded towards 0 to form an integer. So, integer division of (100 / 3) gives a result of 33, not 33.33.... When you multiply that by 3, you get 99.

To fix this, you can force floating point division by changing 100 to 100.0. The .0 tells the compiler that it should use a floating point type instead of an integer, forcing floating point division. As a result, rounding will not occur after the division. However, 33.33... cannot be represented exactly by binary numbers, so you may still see incorrect results at times. Since you store the result as an integer, rounding down will still occur after the multiplication, which will make it more obvious. If you want to use an integer type, you should use the round function on the result:
int percent = round((100.0 / combinations) * rightAnswers);
This will cause the number to be rounded to the closest integer before converting it to an integer type. Alternately, you could use a floating point storage type and specify a certain number of decimal places to display:
float percent = (100.0 / combinations) * rightAnswers;
NSLog(#"Completed %.1f percent",percent); // Display result with 1 decimal place
Finally, since floating point math will still cause rounding for numbers that can't be represented in binary, I would suggest multiplying by rightAnswers before dividing by combinations. This will increase the chances that the result is representable. For example, 100/3=33.33... is not representable and will be rounded. If you multiply by 3 first, you get 300/3=100, which is representable and will not be rounded.
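A short sketch of that reordering, using the same variable names as the question (round comes from math.h):
#include <math.h>
// Multiply first: for 3 right answers out of 3, 100.0 * 3 / 3 == 100 exactly
int percent = round((100.0 * rightAnswers) / combinations);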
Ints are integers. They can't represent an arbitrary real number like 1/3. Even floating-point numbers, which can represent reals, won't have enough precision to represent an infinitely repeating decimal like 100/3. You'll either need to use an arbitrary-precision library, use a library that includes rationals as a data type, or just store as much precision as you need and round from there (e.g. make your integer unit hundredths-of-a-percent instead of a single percentage point).
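A quick sketch of that last suggestion, storing hundredths of a percent in an int (variable names taken from the question):
// 2 right out of 3 gives 10000 * 2 / 3 == 6666, i.e. 66.66%
int hundredthsOfAPercent = (10000 * rightAnswers) / combinations;
NSLog(@"Completed %d.%02d percent", hundredthsOfAPercent / 100, hundredthsOfAPercent % 100);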
You might want to implement some sort of rounding, because 33.333... * 3 = 99.999...%. 1/3 is an infinitely repeating decimal, so some rounding has to occur (maybe at the 3rd decimal place) for the answer to come out correct. I would say something along the lines of if (num * 1000 % 10 >= 5) num += .01; multiplying by 1000 moves the decimal 3 places, and the mod then returns the 3rd decimal digit (which could be zero). You also might want to round only once at the end, after you sum everything up, to avoid accumulating errors.
EDIT: I didn't realize you were using integers; the numbers at the end threw me off. You might want to use double or float (floats become slightly inaccurate after a few digits, which is OK for what you want).
100/3 is 33. Integer mathematics here.
round(45.923, -1) gives a result of 50. Why is this? How is it calculated?
(Sorry, I was mistaken in an earlier version of this question, which suggested the value was 46.)
The SQL ROUND() function rounds a number to a precision...
For example:
round(45.65, 1) gives result = 45.7
round(45.65, -1) gives result = 50
because the precision in this case is counted from the decimal point. If the precision is positive, the digit just to the right of that position decides the rounding: if it is >= 5 the number is rounded up, and if it is <= 4 it is rounded down. Similarly, if the precision is negative, the rounding happens on the left-hand side of the decimal point, using the same >= 5 rule:
for example round(44.65, -1) gives 40
but round(45.65, -1) gives 50...
ROUND(748.58, -1) gives 750.00
From the SQL Server documentation for ROUND: the second parameter, length, is the precision to which numeric_expression is to be rounded. length must be an expression of type tinyint, smallint, or int. When length is a positive number, numeric_expression is rounded to the number of decimal positions specified by length. When length is a negative number, numeric_expression is rounded on the left side of the decimal point, as specified by length.
It is expected to be 50.
round(45.923, 0) => 46
expl: the last non-decimal digit is rounded (5), the decision is based on the next digit (9)
9 is in the high half, ergo 5 is rounded up to 6
round(45.923, 1) => 45.9
expl: the first decimal digit is rounded (9), the decision is based on the next digit (2)
2 is in the low half, ergo 9 stays 9
your case:
round(45.923, -1) => 50
expl: the second-to-last non-decimal digit is rounded (4), the decision is based on the next digit (5)
5 is in the high half, ergo 4 is rounded up to 5, and the remaining digits are filled with 0s
As for how, start by considering how you'd round a (positive) float to the nearest integer. Casting a float to an int truncates it. Adding 0.5 to a (positive) float will increment the integer portion precisely when we want to round up (when the decimal portion >= 0.5). This gives the following:
double round(double x) {
    return (long long)(x + 0.5);
}
To add support for the precision parameter, note that (for e.g. round(123456.789, -3)) adding 500 and truncating in the thousands place is essentially the same as adding 0.5 and rounding to the nearest integer; it's just that the decimal point is in a different position. To move the radix point around, we need left and right shift operations, which are equivalent to multiplying by the base raised to the shift amount. That is, 0x1234 >> 3 is the same as 0x1234 / 2**3 and 0x1234 * 2**-3 in base 2. In base 10:
123456.789 >> 3 == 123456.789 / 10**3 == 123456.789 * 10**-3 == 123.456789
For round(123456.789, -3), this means we can do the above multiplication to move the decimal point, add 0.5, truncate, then perform the opposite multiplication to move the decimal point back.
double round(double x, double p) {
    // shift the decimal point, add 0.5, truncate, then shift back
    return (long long)(x * pow(10.0, p) + 0.5) * pow(10.0, -p);
}
Rounding by adding 0.5 and truncating works fine for non-negative numbers, but it rounds the wrong way for negative numbers. There are a few solutions. If you have an efficient sign() function (which returns -1, 0 or 1, depending on whether a number is <0, ==0 or >0, respectively), you can:
double round(double x, double p) {
    return (long long)(x * pow(10.0, p) + sign(x) * 0.5) * pow(10.0, -p);
}
If not, there's:
double round(double x, double p) {
    if (x < 0)
        return -round(-x, p);
    return (long long)(x * pow(10.0, p) + 0.5) * pow(10.0, -p);
}
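A self-contained sanity check of that approach (a sketch; the helper is renamed roundTo here only to avoid clashing with the standard round from math.h):
#include <math.h>
#include <stdio.h>

// Round x to p decimal places; negative p rounds to the left of the decimal point
double roundTo(double x, double p) {
    if (x < 0)
        return -roundTo(-x, p);
    return (long long)(x * pow(10.0, p) + 0.5) * pow(10.0, -p);
}

int main(void) {
    printf("%g\n", roundTo(45.923, -1));     // 50
    printf("%g\n", roundTo(123456.789, -3)); // 123000
    printf("%g\n", roundTo(45.923, 1));      // 45.9
    return 0;
}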
It doesn't give 46 for me on MySQL:
mysql> select round(45.923,-1);
+------------------+
| round(45.923,-1) |
+------------------+
|               50 |
+------------------+
1 row in set (0.00 sec)
And on Sql Server 2005:
select round(45.923,-1)
------
50.000
What database are you running this on?
One thing to note: in the ROUND function the first parameter is the number and the second parameter is the precision index, counted from the decimal point.
That means a precision index of 0 rounds to the nearest integer (the decision is based on the first decimal digit), -1 rounds the first digit to the left of the decimal point, and 1 rounds to the first decimal place (the decision is based on the second decimal digit).
For example
round(111.21, 0) returns 111
round(115.21, -1) returns 120
round(111.325, 2) returns 111.33
round(111.634, 1) returns 111.6