Math.NET Complex32 Division giving NaN

Alright, so I have two large complex values, Top and Bot:
Top = 4.0107e+030
Bot = 5.46725e+026 - 2.806428e+026i
When I divide these two numbers using Math.NET's Complex32, it gives me NaN for both the real and imaginary parts. I am assuming that it has something to do with the precision.
When I use Matlab I get the following:
Top/Bot = 5.8060e+003 + 2.9803e+003i
When I use System.Numerics I get something very close to Matlab's, at least of the correct order of magnitude:
Top/Bot = 5575.19343780947 + 2676.09270239214i (System.Numerics.Complex)
I wonder: which one is the right one, and why is Math.NET giving me a wrong answer? I am running simulations, and I very much care about the accuracy of the numerics.
Any way to fix this? I will be dealing with a lot of large complex numbers.
Plus, if anyone knows of a good complex library for .NET with support for special functions, such as the error function and the complementary error function of complex arguments, that would be great.
As I found out, Math.NET doesn't support cerf of a Complex32.

If you care about accuracy you should obviously use the double-precision/64-bit type, not the single-precision/32-bit one. Note that we only provide a Complex32 but no Complex (64) type in the normal package, because we want you to use the Complex type provided in System.Numerics for compatibility; we only provide an equivalent Complex (64) type in the portable build, as System.Numerics is not available there.
But in this specific case the problem is not precision (or accuracy) but range. Remember that 32-bit floating-point numbers cannot be larger than ~3.4e+38. Computing a complex division in the normal direct form requires computing the squares of both the real and imaginary components of the denominator, which in your case go out of range, become infinity, and thus produce NaN in the final result.
Now, it might be possible to implement the division in a form that avoids computing the squares when the denominator is larger than about 1e+19, but we have not done that yet in Math.NET Numerics (as there was no demand for it up to now). This would also not be a problem if the complex type were implemented in polar form, but that is quite uncommon.
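To illustrate the range issue, here is a minimal sketch in Kotlin, using Float to mimic Complex32's single precision. The second form is Smith's algorithm, a well-known scaling technique; this is only an illustration of the idea, not Math.NET's actual code:

import kotlin.math.abs

data class ComplexF(val re: Float, val im: Float)

// Direct form: squares the denominator components, which overflow to
// Infinity for magnitudes above ~1.8e19, so the result degrades to NaN.
fun divideNaive(a: ComplexF, b: ComplexF): ComplexF {
    val denom = b.re * b.re + b.im * b.im
    return ComplexF((a.re * b.re + a.im * b.im) / denom,
                    (a.im * b.re - a.re * b.im) / denom)
}

// Smith's algorithm: divide through by the larger denominator component
// first, so no intermediate value is squared at full magnitude.
fun divideSmith(a: ComplexF, b: ComplexF): ComplexF =
    if (abs(b.re) >= abs(b.im)) {
        val r = b.im / b.re
        val d = b.re + b.im * r
        ComplexF((a.re + a.im * r) / d, (a.im - a.re * r) / d)
    } else {
        val r = b.re / b.im
        val d = b.re * r + b.im
        ComplexF((a.re * r + a.im) / d, (a.im * r - a.re) / d)
    }

fun main() {
    val top = ComplexF(4.0107e30f, 0f)
    val bot = ComplexF(5.46725e26f, -2.806428e26f)
    println(divideNaive(top, bot)) // ComplexF(re=NaN, im=NaN)
    println(divideSmith(top, bot)) // roughly 5806 + 2980i, matching the Matlab result above
}

The direct form squares components of magnitude ~1e+26, far beyond Float's ~3.4e+38 limit, while Smith's form never squares anything at full magnitude.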

Related

I can't figure out how to round up decimals in a Kotlin calculator

I just started coding in Android Studio and was creating a calculator, but now I'm stuck on one problem.
After struggling a lot I figured out how to make it so you can only use one dot, but now I came across another problem: after addition I can't seem to round the decimals. When I do additions with decimals it sometimes gives me something like 1.9999999998 and I can't seem to round it up. For reference, I used TableRow in XML. If necessary I can show you what I have written so far. Thanks in advance.
You need String.format("%.2f", value): 1.9999999998 -> "2.00". If you need to round up to the next higher value instead, use ceil: https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.math/ceil.html
For formatting numbers, you should always be using NumberFormat or similar.
NumberFormat lets you set a RoundingMode which will do what you want.
Or you could be like me and write your own formatter for numbers because the built-in one didn't do what I wanted.
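For example, a minimal sketch of the NumberFormat approach (the two-digit limit is just an assumption for a calculator display):

import java.math.RoundingMode
import java.text.NumberFormat

fun main() {
    val nf = NumberFormat.getNumberInstance().apply {
        maximumFractionDigits = 2              // show at most 2 decimals
        roundingMode = RoundingMode.HALF_UP    // 1.999... rounds to 2
    }
    println(nf.format(1.9999999998))  // prints "2"
}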
If you care about exact decimal values, then don't use floating-point. Instead, use a type that's intended for storing exact decimal values, such as BigDecimal.
(Floating-point types such as Kotlin's Float and Double can hold numbers across a huge range of magnitude, and store and calculate with them efficiently. But they use binary floating-point, not decimal. So they're great when you care about doing calculations efficiently and flexibly — but not when you need to store exact decimal values. Most of the questions about floating-point on this site seem to be for the latter cases, unfortunately…)
Kotlin has lots of extensions making it almost as easy to handle BigDecimals as the native types. They're a little less efficient, but not by anywhere near enough to be significant in a calculator project. And they do exactly what you want here: storing and manipulating decimal numbers exactly.
And because they're exact, you shouldn't need to do any rounding — and probably won't need to do any formatting either.
(Just make sure you create them directly from strings, not from floats/doubles — which will already have been rounded to the nearest binary floating-point number.)
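A quick sketch of the difference (the values are arbitrary examples):

import java.math.BigDecimal

fun main() {
    // Created from strings, so no binary rounding sneaks in:
    val a = BigDecimal("0.7")
    val b = BigDecimal("1.3")
    println(a + b)            // 2.0, exact

    // Created from a double, which was already rounded in binary:
    println(BigDecimal(0.7))  // 0.6999999999999999555910790... (the exact double value)
}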

Why do kotlin.math functions not have implementations for Long?

I have been working with Kotlin for a little over two years now.
Looking over what I learned in these two years, I noticed that I have been using (num.toDouble()).toLong() with kotlin.math functions a bit too much, for example Math.sqrt(num.toDouble()).toLong(). Two of my projects have an extension function sumByLong() inside a util file created by the team, because the Kotlin libs only have sumBy: Int and sumByDouble: Double, and a lot of the work in the project uses Long.
In short, mathematical operations using Long are more common than ones using Double or Float, yet Long has a very small footprint in the Kotlin standard library. And since kotlin.math is different from java.lang.Math, mixed usage is not a recommended practice.
Going over the docs of kotlin.math, all functions except abs, min, and max have implementations for Float and Double only.
Can someone explain, like I am five, the possible reasoning behind this? Something real, not silly stuff like "the devs were lazy" or "more code means more work", which is all I could find in search-engine results.
--
Update: Some Clarification
1. I can understand that in most cases the return type will contain floating-point numbers. I am also talking about parameters lacking a Long counterpart. Maybe Math.sqrt wasn't the best example; something like math.log, math.cos, etc. would be a better example, where a floating-point return type is expected but the parameters don't even support Int.
2. When I said "Long is more common than Double", I was not talking about the public at large, but looking over my own past two years working with Kotlin. I am sorry if my phrasing wasn't clear.
Disclaimer: this answer may be a little opinionated, but I believe it is according to general consensus and best practices of using maths in computer science.
Mathematics for integers and for real numbers (floats) are really two very different math "sub-worlds". They're pretty separate, they have different uses, and we usually don't mix them.
If we work on some physics or do real-world simulations, operating on units like temperature or speed, we use doubles. If we have identifiers (a bank account number), count something (the number of bank accounts), or operate on discrete values with 100% precision (a bank account balance), we always use integers and never doubles.
Operations like sine, square root, or logarithm make perfect sense for physics, but not really for bank account values. They very often produce either very small or very large numbers that can't be safely represented as integers. They operate on approximations and don't provide 100%-precise results. They are continuous by nature, while integers are discrete.
What is the point of using integers with sqrt() or log() if they almost always return a floating-point result? What is the point of passing an integer to sin() if there are only two distinct angles smaller than a right angle (π/2 radians) that can be represented as an integer: 0 and 1? Using integers with these functions is unnatural and impractical.
I can't think of a case where we have to frequently convert between longs and doubles. Usually we operate either on longs or on doubles top to bottom, and we don't convert between them too often. By converting we lose the advantages of each of these specific "math sub-worlds" and sum their disadvantages. Maybe you should just keep using doubles in your application and not convert to/from longs at all? Why do you use longs?
BTW, you mentioned that you can't/shouldn't use java.lang.Math in a Kotlin application. Well, if you look into java.lang.Math you will notice that... it supports only doubles :-)
In the case of ceil, it returns a Double because a Double has a bigger range of values than Long. Consider, for example:
ceil(Long.MAX_VALUE.toDouble() * 1000)
What would you expect it to return if it returned a Long? For further discussion, see Why does Math.ceil return a double?
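A quick Kotlin illustration of the point (Double.toLong() saturating at Long.MAX_VALUE is documented Kotlin behavior):

import kotlin.math.ceil

fun main() {
    val big = ceil(Long.MAX_VALUE.toDouble() * 1000)
    println(big)          // 9.223372036854776E21, representable as a Double
    println(big.toLong()) // 9223372036854775807, saturated at Long.MAX_VALUE
}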
In the case of log and trigonometric functions, the use cases requiring Long parameters are rare and the requirements varied. For example, should it round up, down, or to the nearest integral value? These are decisions that should be made for your particular project, and therefore can't be made in the stdlib.
In your project, you can simply define your required functions in a single, small source file, making your project's choice of rounding method, and then use it everywhere instead of converting at each call site, e.g.:
import kotlin.math.cos
import kotlin.math.roundToLong

fun cos(n: Long): Long = cos(n.toDouble()).roundToLong()
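The sumByLong() gap mentioned in the question can be filled the same way; a sketch of such an extension (the name comes from the question, the body is my own):

fun <T> Iterable<T>.sumByLong(selector: (T) -> Long): Long {
    var sum = 0L
    for (element in this) sum += selector(element)  // accumulate in a Long to avoid Int overflow
    return sum
}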

Excel 2007 VBA Calculations Wrong

When you run a VBA macro that performs numeric calculations which result in a decimal value, the result that is returned may be incorrect.
Here are a couple of examples:
Dim me_wrong As Double
me_wrong = 1000 - 999.59
' RESULT: 0.409999999999968

Dim me_wrong_too As Double
me_wrong_too = 301.84 - 301
' RESULT: 0.839999999999975
I have never ever noticed this before. What on earth is going on???
I have seen the following article about Office 97 but can't find anything about a bug in Excel 2007:
http://support.microsoft.com/default.aspx?scid=kb;en-us;165373
Plus it doesn't explain why I have never seen it before.
Please help!
The explanation for the problem from Office 97 and VBA applies equally to Excel 2007. The core VBA system is largely unchanged despite the migration into later versions, so the same kinds of accuracy gremlins that plague older VBA macros will persist.
The fundamental problem lies in the inherent inaccuracy of representing fractional numbers in binary, and in the fact that IEEE floating-point representations can only mitigate, not eliminate, that inaccuracy. There is a very decent treatment of the subject of IEEE representation at this location.
Edit: Just a minor bit of extra info for detail.
For a very simple example that illustrates this issue in a trivial case, consider a situation in which decimals are represented as sums of inverse powers of two, e.g. 2^-1, 2^-2, 2^-3, and so on. That ends up looking like .5, .25, .125, and so on. If you're representing exactly those numbers, all is good. However, consider a number like .761: 2^-1 + 2^-2 gets you to .750, but now you need .011. 2^-3 (.125) is too big, but 2^-4 (.0625) is too small... so you keep going to smaller powers of two, realizing you'll never quite represent the number precisely.
The choice becomes where you stop resolving and accept the inherent inaccuracy as being "good enough" for the problem you're solving/modeling.
It is, unfortunately, not a bug.
Double representation follows the IEEE floating-point format, where the mantissa is a number "1.x" with the leading "1" implicit. There is also an exponent and a sign, which completes the full representation in base 2.
The pertinent issue is that the base is 2, which makes the "x" in "1.x" a finite-precision fractional binary (52 explicit bits of it). Think x = a51*(1/2) + a50*(1/4) + a49*(1/8) + ... + a1*(1/2^51) + a0*(1/2^52), where the a_i are the bits of the mantissa.
Try attaining 1.4 with this representation and you hit the precision wall: there is no finite decomposition of 0.4 into binary weights. So the standard specifies rounding to the nearest representable number, which leaves you with something like 0.39999...97346 (or whatever the tail is).
The "good" news is (and I've just burned four days of C coding last week on that subject) that you can do without Doubles if you represent your numbers at a very small scale (say 10^-9), store them in very large variables (64-bit longs), and do your display functions using nothing but integers (mathematically slicing away the integral and fractional parts through integer division and remainders). A treat, I tell you... not.
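A minimal sketch of that scaled-integer idea in Kotlin rather than C (the 10^-4 scale and all names are illustrative, and sign handling is omitted for brevity):

// Represent values as Long counts of 10^-4 units, so decimal
// arithmetic stays exact as long as it fits in a Long.
const val SCALE = 10_000L

fun toFixed(whole: Long, tenThousandths: Long): Long = whole * SCALE + tenThousandths

fun format(fixed: Long): String {
    val intPart = fixed / SCALE   // integral part via integer division
    val fracPart = fixed % SCALE  // fractional part via the remainder
    return "$intPart.${fracPart.toString().padStart(4, '0')}"
}

fun main() {
    val a = toFixed(1000, 0)    // 1000.0000
    val b = toFixed(999, 5900)  // 999.5900
    println(format(a - b))      // prints 0.4100, exact, unlike the Double version
}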

Exponents in Genetic Programming

I want to have real-valued exponents (not just integers) for the terminal variables.
For example, let's say I want to evolve a function y = x^3.5 + x^2.2 + 6. How should I proceed? I haven't seen any GP implementations that can do this.
I tried using the power function, but sometimes the initial solutions have so many exponents that the evaluated value exceeds double bounds!
Any suggestion would be appreciated. Thanks in advance.
DEAP (in Python) implements it. In fact, there is an example for that. By adding math.pow from Python to the primitive set you can achieve what you want.
pset.addPrimitive(math.pow, 2)
But using the pow operator you risk getting something like x^(x^(x^(x))), which is probably not desired. You will need to add a restriction (by some means I am not sure of) on where in your tree pow is allowed (just before a leaf, or something like that).
OpenBeagle (in C++) also allows it, but you will need to develop your own primitive using pow from <math.h>; you can use the Sin or Cos primitive as an example.
If only some of the initial population are suffering from the overflow problem then just penalise them with a poor fitness score and they will probably be removed from the population within a few generations.
But, if the problem is that virtually all individuals suffer from it, then you will have to add some constraints. The simplest thing to do would be to constrain the exponent child of the power function to be a real literal, which would mean powers could not be nested. Whether that is sufficient depends on your needs. There are a few ways to add constraints like these (or more complex ones); try looking into Constrained Syntactic Structures and grammar-guided GP.
A few other simple thoughts: can you use a data-type with a larger range? Also, you could reduce the maximum depth parameter, so that there will be less room for nested exponents. Of course that's only possible to an extent, and it depends on the complexity of the function.
Integers have a different binary representation than reals, so you have to use a slightly different bitstring representation and recombination/mutation operators.
For an excellent demonstration, see slide 24 of www.cs.vu.nl/~gusz/ecbook/slides/Genetic_Algorithms.ppt or check out the Eiben/Smith book "Introduction to Evolutionary Computing". This describes how to map a bit string to a real number. You can then create a representation where x only lies within an interval [y, z]. In this case, choose y and z to be of smaller magnitude than the capacity of the data type you are using (e.g. 10^308 for a double) so you don't run into the overflow issue you describe.
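A sketch of that bitstring-to-interval decoding in Kotlin (a standard GA mapping; the 10-bit strings and the [0, 4] interval are arbitrary choices of mine):

fun decode(bits: String, y: Double, z: Double): Double {
    val intValue = bits.toLong(radix = 2)      // read the bitstring as an integer
    val maxValue = (1L shl bits.length) - 1    // largest integer for this bit length
    return y + (z - y) * intValue.toDouble() / maxValue  // scale into [y, z]
}

fun main() {
    println(decode("0000000000", 0.0, 4.0))  // 0.0, the smallest exponent allowed
    println(decode("1111111111", 0.0, 4.0))  // 4.0, the largest exponent allowed
    println(decode("1110000000", 0.0, 4.0))  // ~3.5, e.g. the x^3.5 term's exponent
}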
You have to consider that with real-valued exponents and a negative base you will not obtain a real number but a complex one. For example, the Math.Pow implementation in .NET returns NaN if you attempt to raise a negative base to a non-integer exponent. You have to make sure all your x values are positive. I think that's the problem you're seeing when you "exceed double bounds".
Btw, you can try the HeuristicLab GP implementation. It is very flexible with a configurable grammar.
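One common way to handle both the NaN case and the overflow case from the earlier answers is a "protected" power primitive, analogous to the protected division often used in GP. A Kotlin sketch (the specific guards are my own choice, purely illustrative):

import kotlin.math.abs
import kotlin.math.pow

// Protected power: forces a real-valued result for negative bases and
// substitutes a neutral fallback when the result is not finite.
fun protectedPow(base: Double, exponent: Double): Double {
    val result = abs(base).pow(exponent)
    return if (result.isFinite()) result else 1.0
}

fun main() {
    println(protectedPow(-2.0, 0.5))    // 1.4142... instead of NaN
    println(protectedPow(10.0, 400.0))  // 1.0 instead of Infinity
}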

1.2 in SQLite3 Database Is Actually 1.199999998

I am attempting to store a float in my SQLite3 database using Java. When I go to store the number 1.2 in the database, it is actually stored as 1.199999998, and the same occurs for other values (1.4, 1.6, etc.).
This makes it really difficult to delete rows, because I delete a row according to its version column (whose type is float). So this line won't work:
"DELETE FROM tbl WHERE version=1.2"
That's because there is no 1.2, only 1.19999998. How can I make sure that when I store a float in my SQLite3 DB, it is the exact number I input?
Don't use a float if you need precise accuracy. Try a decimal instead.
Remember that the 1.2 you put in your source code, or that the user entered into a textbox and that ultimately ended up in the database, is actually stored as a binary value (usually in a format known as IEEE 754). To understand why this is a problem, try converting 1.2 (1 1/5) to binary by hand (binary .1 is 1/2, .01 is 1/4) and see what you end up with:
1.001100110011001100110011001100110011
You can save time by using this converter (ignore the last "1" that breaks the cycle at the site; it's there because the converter had to round the last digit).
As you can see, it's a repeating pattern. This goes on pretty much forever. It would be like trying to represent 1/3 as a decimal. To get around this problem, most programming languages have a decimal type (as opposed to float or double) that keeps a base 10 representation. However, calculations done using this type are orders of magnitude slower, and so it's typically reserved for financial transactions and the like.
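On the JVM you can print the exact stored values directly; BigDecimal's double constructor preserves the exact binary value (documented behavior):

import java.math.BigDecimal

fun main() {
    // The exact value of the double closest to 1.2:
    println(BigDecimal(1.2))
    // prints 1.1999999999999999555910790149937383830547332763671875

    // The exact value of the float closest to 1.2:
    println(BigDecimal(1.2f.toDouble()))
    // prints 1.2000000476837158203125
}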
This is the very nature of floating point numbers. They are not exact.
I'd suggest you use either an integer or a text field to store the version.
You should never rely on the accuracy of a float or a double. A float should never be used for keys in a database or to represent money.
You should probably use decimal in this case.
Floats are not an exact data type. They are designed to be fast, to have a large range of values, and to have a small memory footprint.
They are usually implemented using the IEEE standard:
http://en.wikipedia.org/wiki/IEEE_754-2008
As Joel Coehoorn has pointed out, 1.2 is the recurring fraction 1.0011 0011 0011... in binary and can't be exactly represented in a finite number of bits.
The closest you can get with an IEEE 754 float is 1.2000000476837158203125. The closest you can get with a double is 1.1999999999999999555910790149937383830547332763671875. I don't know where you're getting 1.199999998 from.
Floating-point was designed for representing approximate quantities: Physical measurements (a swimming pool is never exactly 1.2 meters deep), or irrational-valued functions like sqrt, log, or sin. If you need a value accurate to 15 significant digits, it works fine. If you truly need an exact value, not so much.
For a version number, a more appropriate representation would be a pair of integers: One for the major version and one for the minor version. This would also correctly handle the sequence 1.0, 1.1, ..., 1.9, 1.10, 1.11, which would sort incorrectly in a REAL column.
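A sketch of that two-integer approach in Kotlin with JDBC (this assumes the sqlite-jdbc driver; the table and column names are made up for illustration):

import java.sql.DriverManager

fun main() {
    DriverManager.getConnection("jdbc:sqlite:app.db").use { conn ->
        conn.createStatement().use { st ->
            // Major/minor as INTEGERs: exact matches and correct sorting.
            st.executeUpdate("CREATE TABLE IF NOT EXISTS tbl (major INTEGER, minor INTEGER)")
        }
        conn.prepareStatement("DELETE FROM tbl WHERE major = ? AND minor = ?").use { ps ->
            ps.setInt(1, 1)  // version 1.2: major = 1 ...
            ps.setInt(2, 2)  // ... and minor = 2; integer equality is exact
            ps.executeUpdate()
        }
    }
}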