I wish to sum up two numbers. They are BigDecimals.
val n1 = 0.0000000040.toBigDecimal()
val n2 = 0.0000000030.toBigDecimal()
println(n1 + n2) // result: 7.0E-9
How can I fix it to get the result 0.0000000070 as BigDecimal?
Try
println((n1 + n2).toPlainString())
You can use String.format to get the desired output.
val n1 = 0.0000000040.toBigDecimal()
val n2 = 0.0000000030.toBigDecimal()
// addition of BigDecimals
val n3 = n1 + n2
val n4 = n1.add(n2)
// "Returns a string representation of this BigDecimal without an exponent field."
println(n4.toPlainString())
// formatted output
val n3output = String.format("%.10f", n3)
println(n3output)
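For context on why the plain result prints as 7.0E-9 in the first place: BigDecimal.toString() switches to scientific notation once the exponent is small enough, while toPlainString() never does. A minimal sketch of the difference:
fun main() {
    val sum = "0.0000000040".toBigDecimal() + "0.0000000030".toBigDecimal()
    println(sum)                 // 7.0E-9 (toString uses scientific notation here)
    println(sum.toPlainString()) // 0.0000000070
}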
thanks for looking and possibly responding
val c1 = 'a' + 1
val c2 = 'a' + 25
val c3 = 'E' - 2
// 'a' + 1
val c1 = 'b'
// 'a' + 25
val c2 = ??
// 'E' - 2
val c3 = 'C'
Complete noob here: why does val c2 equal 'z'? I can't understand how 86 translates to 'z'. The Unicode table does not give a character that represents 86.
A Kotlin Char is, basically, just a regular number that represents a Unicode character (What are Unicode, UTF-8, and UTF-16?). Each number is assigned to a character, which we can look up in a Unicode table. In that table we can see that the letter 'a' has the decimal value 97.
You could also get the decimal value using Char.code:
fun main() {
    println('a'.code)
}
97
Therefore, in decimal, 97 + 25 = 122.
Looking up 122 in the Unicode table reveals that this is the decimal representation of z. You can again use Char.code to get the decimal representation.
fun main() {
    println(('a' + 25).code)
    println('a' + 25)
}
122
z
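The same rules explain the 'E' - 2 line from the exercise. A small sketch (note that subtracting an Int from a Char gives a Char, while subtracting two Chars gives an Int):
fun main() {
    println('E'.code)       // 69
    println('E' - 2)        // C, the Char with code 67
    println(('E' - 2).code) // 67
    println('z' - 'a')      // 25, Char minus Char is an Int distance
}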
I have two functions; one uses BigInteger and BigDecimal. I want to calculate sin(z) using the Taylor series:
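sin(z) = Σ_{n=0}^{∞} (−1)^n · z^(2n+1) / (2n+1)!  (the standard Maclaurin series, which is what the code below computes)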
Here is my code:
fun sinus(z: BigDecimal, upperBound: Int = 100): BigDecimal = calcSin(z, upperBound)
fun cosinus(z: BigDecimal, upperBound: Int = 100): BigDecimal = calcSin(z, upperBound, false)
fun calcSin(z: BigDecimal, upperBound: Int = 100, isSin: Boolean = true): BigDecimal {
    var erg: BigDecimal = BigDecimal.ZERO
    for (n in 0..upperBound) {
        // val zaehler = (-1.0).pow(n).toBigDecimal() * z.pow(2 * n + (if (isSin) 1 else 0))
        // val nenner = fac(2 * n + (if (isSin) 1 else 0)).toBigDecimal()
        val zaehler = (-1.0).pow(n).toBigDecimal() * z.pow(2 * n + 1)
        val nenner = fac(2 * n + 1).toBigDecimal()
        erg += (zaehler / nenner)
    }
    return erg
}
fun calcSin(z: Double, upperBound: Int = 100): Double {
    var res = 0.0
    for (n in 0..upperBound) {
        val zaehler = (-1.0).pow(n) * z.pow(2 * n + 1)
        val nenner = fac(2 * n + 1, true)
        res += (zaehler / nenner)
    }
    return res
}
fun fac(n: Int): BigInteger = if (n == 0 || n == 1) BigInteger.ONE else n.toBigInteger() * fac(n - 1)
fun fac(n: Int, dummy: Boolean): Double = if (n == 0 || n == 1) 1.0 else n.toDouble() * fac(n - 1, dummy)
According to Google, sin(1) is
0.8414709848
However, the output of the following code is:
println("Sinus 1: ${sinus(1.0.toBigDecimal())}")
println("Sinus 1: ${sinus(1.0.toBigDecimal()).toDouble()}")
println("Sinus 1: ${sinus(1.0.toBigDecimal(), 1000)}")
println("Sinus 1: ${calcSin(1.0)}")
Output:
Sinus 1: 0.8414373208078281027995610599000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Sinus 1: 0.8414373208078281
Sinus 1: 0.8414373208078281027995610599000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Sinus 1: 0.8414709848078965
What am I missing? Why does the Double variant give the correct value, while the BigDecimal one doesn't, even with 1000 iterations?
The commented-out code was meant to calculate cos as well, but I wanted to figure out this problem first, so I made both functions look the same.
In the BigDecimal variant, try replacing erg += (zaehler / nenner) with erg += (zaehler.divide(nenner, 20, RoundingMode.HALF_EVEN))
I suspect that the defaults for scaling the division results (as described here https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/math/BigDecimal.html) are not what you want.
BTW - I assume that performance is not part of the exercise; otherwise, your implementation of factorial is low-hanging fruit.
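For illustration, here is a minimal corrected sketch of the BigDecimal variant along those lines. The scale of 50 digits is an arbitrary choice (the suggestion above used 20), and the sign is computed without going through Double, which is a small deviation from the original code:
import java.math.BigDecimal
import java.math.BigInteger
import java.math.RoundingMode

fun fac(n: Int): BigInteger =
    if (n == 0 || n == 1) BigInteger.ONE else n.toBigInteger() * fac(n - 1)

fun calcSin(z: BigDecimal, upperBound: Int = 100): BigDecimal {
    var erg = BigDecimal.ZERO
    for (n in 0..upperBound) {
        // (-1)^n as a BigDecimal, avoiding Double entirely
        val sign = if (n % 2 == 0) BigDecimal.ONE else BigDecimal.ONE.negate()
        val zaehler = sign * z.pow(2 * n + 1)
        val nenner = fac(2 * n + 1).toBigDecimal()
        // divide with an explicit scale and rounding mode instead of the '/' operator,
        // whose default scaling is too coarse here and truncates these tiny terms
        erg += zaehler.divide(nenner, 50, RoundingMode.HALF_EVEN)
    }
    return erg
}

fun main() {
    println(calcSin(BigDecimal.ONE)) // should now agree with sin(1) ≈ 0.8414709848078965...
}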
How to round 47.3476 to 47.35 in Kotlin?
import java.math.RoundingMode
import java.text.DecimalFormat

fun main() {
    val num = 47.3476
    val df = DecimalFormat("#.##") // set the decimal format here: two decimal places
    df.roundingMode = RoundingMode.CEILING
    println(df.format(num)) // 47.35
}
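If you want the rounded value back as a number rather than a formatted String, and plain half-up rounding is what's intended, BigDecimal.setScale is another option; a minimal sketch:
import java.math.BigDecimal
import java.math.RoundingMode

fun main() {
    val num = 47.3476
    // round to 2 decimal places, half-up
    val rounded = BigDecimal(num.toString()).setScale(2, RoundingMode.HALF_UP)
    println(rounded)            // 47.35
    println(rounded.toDouble()) // 47.35 as a Double again
}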
In the following Kotlin code example I expected the value of i to be 0, as is the case for k. The IDE reports i, j and k all as Int. Is it a bug, or do I need to readjust my understanding of Kotlin casting inside expressions? For example, is there a rule to always promote/cast to Double inside expressions involving division, but not multiplication?
fun main() {
    // Kotlin 1.3.61
    val x = 100 * 1.0 / 100 // Double
    val i = 100 * 1 / 100   // Int
    val j = 1 / 100         // Int
    val k = 100 * j         // Int
    println(x) // 1.0
    println(i) // 1
    println(j) // 0
    println(k) // 0
}
I expected the value of i to be 0
The output is arithmetically right: 100 * 1 / 100 = (100 * 1) / 100 = 100 / 100 = 1
, as is the case for k.
The value j is 0 because the integer division 1 / 100 truncates to 0, so anything multiplied by it will be zero, as in the case of k.
is there a rule to always promote/cast to Double inside expressions involving division, but not multiplication?
If you divide integers, you will get an integer back. If one of the numbers is a Double, the result will be a Double:
val x = 100 * 1.0/100 //Double because 1.0 is a Double
There is actually already a discussion of exactly your problem on the Kotlin forum here:
Mathematically speaking the current behaviour is correct. This is called integer division and results in the quotient as an answer.
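A minimal sketch of how to force floating-point division when that is what's actually wanted (writing 1.0 instead of 1 is just one way to promote the operands):
fun main() {
    println(100 * 1 / 100)     // 1    - Int arithmetic throughout
    println(1 / 100)           // 0    - integer division truncates toward zero
    println(1.0 / 100)         // 0.01 - one Double operand makes the division floating-point
    println(100 * (1.0 / 100)) // 1.0
    println(100 * (1 / 100))   // 0    - the truncation already happened inside the parentheses
}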
I've been trying out Swift lately and I've come across a rather simple problem.
In Obj-C, when I want to get the fractional digits of a float, I'd do the following:
float x = 3.141516;
int integer_x = (int)x;
float fractional_x = x - integer_x;
// Result: fractional_x = 0.141516
In Swift:
let x:Float = 3.141516
let integerX:Int = Int(x)
let fractionalX:Float = x - integerX
-> this results in an error because of mismatching types
Any idea how to do it correctly?
Thanks in advance,
Malte
Use the modf function:
let v = 3.141516
var integer = 0.0
let fraction = modf(v, &integer)
println("fraction: \(fraction)");
output:
fraction: 0.141516
For Float instead of Double, just use modff.
Use .truncatingRemainder(dividingBy:), which replaced the modulo operator (x % 1). Like modulo, it gives the fractional part immediately and with few CPU cycles (presumably only one, since modulo is a common CPU instruction).
let x:Float = 3.141516
let fracPart = x.truncatingRemainder(dividingBy: 1) // fracPart is now 0.141516
fracPart will take the value 0.141516. This works for both Double and Float.
The problem is that you cannot subtract an Int from a Float; you have to convert one of the values so both have the same type. Try this:
let fractionalX:Float = x - Float(integerX)
Swift 3 does not like the modulus operator %; it wants me to use truncatingRemainder on the Double type.
let x1:Double = 123.00
let t1 = x1.truncatingRemainder(dividingBy: 1)
print("t1 = \(t1)")
let x2:Double = 123.45
let t2 = x2.truncatingRemainder(dividingBy: 1)
print("t2 = \(t2)")
Produces output:
t1 = 0.0
t2 = 0.450000000000003
To remove the 3-quadrillionths artifact, you should probably round the result.
Why use an Int at all?
What about this instead:
import Darwin
let x = 3.1415926
let xf = x - (x > 0 ? floor(x) : ceil(x))
It will use Double by default here. Feel free to use Float if that's what you need:
let x: Float = 3.1415926
Is that what you are looking for?