Converting float to int without rounding - Objective-C

I want to convert a decimal number to an int without rounding it, for example:
if the number is 3.9 it should be turned into 3 (if it were rounded it would become 4).

You generally don't need to do anything special: a cast from float/double to an integer type truncates toward zero:
float f = 3.9f;
int i = (int)f; // i = 3

It depends on how you want negative values to be treated. Casting to int simply truncates toward zero, so only the part to the left of the decimal point remains: -3.9f would turn into -3. Using floor before casting ensures that it results in -4.
(All within the bounds of the target integer type, of course.)
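For example, here is a minimal sketch of the two behaviours in plain C (the same expressions work unchanged in Objective-C):
#include <math.h>
#include <stdio.h>

int main(void) {
    float f = -3.9f;

    int truncated = (int)f;         /* the cast truncates toward zero:          -3 */
    int floored   = (int)floorf(f); /* floorf rounds toward negative infinity:  -4 */

    printf("%d %d\n", truncated, floored); /* prints: -3 -4 */
    return 0;
}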

You can do it like this:
float myFloat = 3.9f;
int result1 = (int)ceilf(myFloat);   // rounds up
NSLog(@"%d", result1);
int result2 = (int)roundf(myFloat);  // rounds to nearest
NSLog(@"%d", result2);
int result3 = (int)floorf(myFloat);  // rounds down
NSLog(@"%d", result3);
int result4 = (int)myFloat;          // truncates toward zero
NSLog(@"%d", result4);
The output is:
4
4
3
3

Just cast the float to an int and it will truncate your result.

You can simply typecast to int.

Related

How to divide two Int and get a BigDecimal in Kotlin?

I want to divide two Integers and get a BigDecimal back in Kotlin.
E.g. 3/6 = 0.500000.
I've tried some solutions, like:
val num = BigDecimal(3.div(6))
println("%.6f".format(num))
// The result is: 0.000000
but none of them solve my problem.
3 and 6 are both Int, and dividing one Int by another gives an Int: that's why you get back 0. To get a non-integer result, at least one of the operands must be a non-integer type. One way to do this is to convert the Int to something else before dividing, e.g.:
val num = 3.toDouble() / 6
num will now be a Double with a value of 0.5, which you can format as a string as you wish.
You might have better luck with:
val num = 3.toBigDecimal().divide(6.toBigDecimal())
println(num)
// prints 0.5
You have to convert both numbers to BigDecimal for the method to work. This will show the exact quotient, or throw an exception if the exact quotient cannot be represented (ie a non-terminating decimal).
You can set the scale and rounding mode as follows:
val num = 3.toBigDecimal().divide(6.toBigDecimal(), 4, RoundingMode.HALF_UP)
println(num)
// prints 0.5000
Dividing an Int by an Int will give an Int result. To get a floating-point result, you need to convert one of the numbers first, for example with the toFloat() function:
val result = a.toFloat() / b  // a and b are Ints; result is a Float

Short Rounds Up? [duplicate]

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
//== operator is overridden here to use epsilon compare
if (x == 3.0)
print 'Hello world';
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, and the integers are only a small subset of the rationals).
While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you are assuming that people rarely use it, and that every time you do division you'll always need to remember to cast to floating point, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole number result, one would want to use the more efficient algorithm.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than the floating-point division of the number.
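As a small illustration of that point, here is a base-conversion sketch in C (the helper name print_in_base is made up for this example; the same idea carries over directly to C#): each digit is the remainder, and integer division moves on to the next digit.
#include <stdio.h>

/* Print value in the given base (2..16) using only integer division and remainder. */
static void print_in_base(unsigned value, unsigned base) {
    const char digits[] = "0123456789ABCDEF";
    char buf[sizeof(unsigned) * 8 + 1];
    int i = 0;

    do {
        buf[i++] = digits[value % base]; /* remainder = next digit (least significant first) */
        value /= base;                   /* integer division drops to the next digit */
    } while (value != 0);

    while (i--)                          /* print the digits in the usual order */
        putchar(buf[i]);
    putchar('\n');
}

int main(void) {
    print_in_base(13, 2);   /* prints 1101 */
    print_in_base(255, 16); /* prints FF */
    return 0;
}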
Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.
See the C# specification. There are three types of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
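A quick illustration of the quoted rule with mixed signs (a minimal sketch in plain C; C#'s int division and % behave the same way for these values):
#include <stdio.h>

int main(void) {
    printf("%d\n",  13 / 4);  /* 3  : quotient 3.25 rounded towards zero  */
    printf("%d\n", -13 / 4);  /* -3 : opposite signs, result is negative  */
    printf("%d\n", -13 % 4);  /* -1 : remainder keeps the dividend's sign */
    return 0;
}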
I think the reason why C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer overload performs the division and returns an integer. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:
int x = 13;
int y = 4;
float q = (float)x / (float)y; // 3.25
or, if you are using literals:
float x = 13f / 4f;
Keep in mind that floating-point types are not exact. If you care about precision, use something like the decimal type instead.
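A quick way to see that imprecision (a small sketch in C; float and double behave the same way in C#):
#include <stdio.h>

int main(void) {
    /* 0.1 has no exact binary representation, so the stored value is only close to 0.1. */
    float f = 0.1f;
    printf("%.9f\n", f);   /* prints roughly 0.100000001 */

    double d = 0.1;
    printf("%.17f\n", d);  /* prints roughly 0.10000000000000001 */
    return 0;
}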
Since you don't use any suffix, the literals 13 and 4 are interpreted as integer:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since you declare 13 as integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so the result is rounded towards zero:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since a floating-point division (the / operator of 13f) results in a float, which cannot be cast to int implicitly.
If you declare the result as a float:
float x = 13 / 4;
you'll still divide integers; only the result is implicitly converted to float, so x will be 3.0. To make the division itself floating-point, declare the operands as float using the f suffix (13f, 4f).
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just a basic operation.
Remember when you learned to divide: in the beginning we solved 9/6 as 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The / operator in combination with the % operator is used to retrieve those two values.
The result will always be of the type that has the greater range of the two operands. The exceptions are byte and short, which produce an int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between floating-point types and the decimal type, so division between them is not allowed. You have to explicitly cast and decide which one you want (Decimal has more precision and a smaller range compared to floating-point types).
As a little trick to know what type you are obtaining, you can use var, and the compiler will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
Your compiler will tell you that result is of type int here.

Integer multiplication overflow

I have a problem multiplying two integers in Objective-C. When I multiply 500 by 20000000, store the result in a long variable, and print it, the result is 1410065408.
int x = 500;
long myLongValue = x * 20000000;
NSLog(@"%lu", myLongValue);
I think the problem is integer overflow, but I couldn't find the real reason. I want to get the true result, 10000000000, from multiplying these integers. Is that possible?
You have an overflow here: an unsigned int is stored in 32 bits, so its maximum value is 2^32 - 1 = 4294967295. An int has 31 value bits (1 bit is for the sign), so its maximum is 2147483647.
As you can see, your number needs 34 bits (its highest set bit is bit 33, worth 2^33), so you cannot represent it in an int. Guess which number you'll get if you clear that bit? ;)
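Spelling the teaser out: on typical hardware the multiplication wraps, keeping only the low 32 bits of the mathematical product, so what gets stored is
10,000,000,000 mod 2^32 = 10,000,000,000 - 2 × 4,294,967,296 = 1,410,065,408,
which is exactly the value printed in the question.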
SOLUTION EDITED: Even if you assign the result to a long variable, the multiplication is performed as int × int first, so you should cast your operands to a wider type before multiplying in Objective-C. Also, as rmaddy noticed in the comments, using a long variable doesn't work on 32-bit architectures, since long is only 4 bytes there. You should use the long long type instead, or use explicit-width types such as int32_t and int64_t.
int x = 500;
long long myLongValue = (long long)x * 20000000;
NSLog(@"%lld", myLongValue); // logs correctly 10000000000
You can also declare your x directly as a long long variable.
FYI: Swift is not as tolerant as Objective-C, and the equivalent code will crash because of the out-of-range result:
let a:Int32 = 20000000
let b:Int32 = 500
let result = a*b // CRASH
let result2 = Int64(a)*Int64(b) // OK

Objective-C. Math functions

I have a variable which contains the text field's value. I want to multiply that value by a decimal number like 0.013, but after the multiplication the answer I get is 0.
It seems the decimal value is treated as 0. What is the reason?
Once you get the text from the text field, convert the string using floatValue:
CGFloat val = [myValue floatValue];
CGFloat res = val * 0.013;
In a comment, the OP notes
If i use 4/3 then it only takes answer as 1
This suggests that the problem is in how the value is initialized: 4/3 is integer division, returning the int value 1. The solution is to be sure that the calculations are actually dealing with floats, start to finish, by using float literals, e.g., replacing 4/3 with 4.0/3.0
You need to convert the text to the required type:
float floatValue = [yourTextField.text floatValue];
int intValue = [yourTextField.text intValue];

C/Objective-C read and get last digit of integer?

How can I get the last digit of an integer (or NSInteger) as an integer?
example:
int time = (int)CFAbsoluteTimeGetCurrent(); // CFAbsoluteTime is a double; the cast truncates it
int lastDigit;
Use modulo:
int lastDigit = time % 10;
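For completeness, a minimal C sketch of the idea (the sample values are made up):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int value = 12347;               /* any integer, e.g. a time value truncated to int */
    int lastDigit = value % 10;      /* 7 */
    printf("%d\n", lastDigit);

    /* In C99, % keeps the sign of the dividend, so take the absolute value
       first if you want a digit in the range 0-9 for negative inputs. */
    printf("%d\n", abs(-1234) % 10); /* 4 */
    return 0;
}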