Why can't the system find the method BigInteger.ToDouble? - .net-4.0

I am using F# Interactive, and I have added a reference to FSharp.PowerPack.dll.
When I try to convert a BigNum to double with the following code,
let n = 2N
let d = double n
I get the following error:
"System.MissingMethodException: Method not found: 'Double System.Numerics.BigInteger.ToDouble(System.Numerics.BigInteger)'. at Microsoft.FSharp.Math.BigNum.ToDouble(BigNum n)"
What can I do if I want to make conversions such as "BigNum to int" and "BigNum to double"? Thanks very much.

What you've written will work in the standalone F# CTP.
This error occurs in VS2010 because the BigInteger type has been moved from the F# library to the .NET 4.0 core library. I'm not sure whether this issue has something to do with having both the F# CTP and the VS2010 beta installed.
Until a better solution comes along, you could roll your own conversion like this:
let numToDouble (n:bignum) = double n.Numerator / double n.Denominator
To convert a bignum to an integer you could then think of something like this:
let numToInt (n:bignum) = int n.Numerator / int n.Denominator
But this is rather dangerous: it'll overflow quite easily. A better version of numToInt would be to convert to a double first and then convert to int:
let numToInt = int << numToDouble
Still, neither conversion is ideal when the numerator or denominator has more than 308 digits, which will overflow a double even though the fraction itself could be small.
ex: 11^300 / 13^280 ~= 3.26337, but
> numToDouble (pown 11N 300 / pown 13N 280);;
val it : float = nan

Approximation using gmp mpf_class

I am writing a UnitTest using Catch2.
I want to check if two vectors are equal. They look like the following using gmplib:
std::vector<mpf_class> result
Because I am 'faking' the expected_result vector, I get the following message after a failed test:
unittests/test.cpp:01: FAILED:
REQUIRE( actual_result == expected_result )
with expansion:
{ 0.5, 0.166667, 0.166667, 0.166667 }
==
{ 0.5, 0.166667, 0.166667, 0.166667 }
So I was looking for a function that could do an approximation for me.
I just wasn't successful in finding a solution that worked out for me.
I found some Comparison Functions but they do not work on my project.
EDIT:
The "minimal, reproducible example would simply be:
TEST_CASE("DemoTest") {
// simplified:
mpf_class a = 1;
mpf_class b = 6;
mpf_class actual_result = a / b;
mpf_class expected_result= 0.16666666667;
REQUIRE(actual_result == expected_result);
}
The "only" difference to my real application is that the results are stored in vectors. But because I am only "faking" the result by saying it is "0.1666666667" it probably doesn't fit the == anymore. So I need a function that takes an approximation and compares the range like epsilon = +-0.001.
Edit:
After implementing the solution @Arc suggested, it worked well until I had some values that were not completely "even".
So I have a failure with the following values:
actual 0.16666666666666666666700000000000000000000000000000
expected 0.16666666666666665741500000000000000000000000000000
Even though my "expected" value looks like this:
mpf_class expected = 0.16666666666666666666700000000000000000000000000000
Getting back to my original question: is there a way I can compare an approximation of the number with an epsilon of something like +-0.0001, or what would be the best way to fix this issue?
First, we need to see a Minimal, Reproducible Example to be sure of what is happening. You can, for example, cut down code from your test.cpp until you are left with just a few lines in which the issue still happens. Also, please provide compilation and running instructions. A little bit of explanation of what your goals are may also help. As Catch2 is available on GitHub, you don't need to provide it.
Without seeing the code, the best I can guess is that your code is trying to compare the mpf_t types inside mpf_class using the == operator, which I'm afraid has not been overloaded (see here). You should compare mpf_ts with the cmp function, since the C type mpf_t is actually a struct containing the pointer to the actual significand limbs. Check some usage examples in the tests/cxx/ directory of GMP (like here).
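For reference, here is a minimal sketch (not taken from the question's code) of comparing two raw mpf_t values with the C interface's cmp function; compile it with -lgmp:
#include <gmp.h>
#include <stdio.h>
int main(void)
{
    mpf_t a, b;
    mpf_init_set_d(a, 0.5);   /* initialize and set from a double */
    mpf_init_set_d(b, 0.5);
    /* mpf_cmp returns a negative, zero, or positive value, like strcmp */
    if (mpf_cmp(a, b) == 0)
        printf("equal\n");
    else
        printf("not equal\n");
    mpf_clear(a);
    mpf_clear(b);
    return 0;
}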
I note you are using GNU MP version 4.1, which is very old; you probably want to move to the latest version, 6.2.1, if possible. Also, for floats it's recommended that you use the GNU MPFR library instead of GMP floats.
EDIT: I did not yet manage to run Catch2, but the issue with your code is that expected_result is actually not equal to actual_result. In GMP, mpf_t variables are created with 64-bit significand precision (on 64-bit machines), so the division a / b actually results in a binary value that prints as 0.166666666666666666667 (that's 19 sixes after the digit 1). Try printing the result with gmp_printf("%.50Ff\n", actual_result.get_mpf_t());, because the standard cout output will only give you the value rounded to 6 digits: 0.166667.
But the problem is that you can't just assign this like expected_result = 0.166666666666666666667, because in C/C++ numeric constants are parsed as double; you have to use the string assignment overload to get more precision.
Nor can you easily (or, in general, justifiably) coin a decimal string that will convert to exactly the same binary value given by a / b, because decimal-to-float conversion has its own subtleties; see for example here and here.
So, it all depends on your application and the kind of numerical validation you aim to do. If you know that your decimal validation values are correct to some known precision, and if you set the mpf_t variables to sufficient precision (using, for example, mpf_set_prec), then you can use a tolerance comparison, like so.
In C++ (without Catch2), it works like this:
#include <iostream>
#include <gmpxx.h>
using namespace std;
int main (void)
{
    mpf_class a = 1;
    mpf_class b = 6;
    mpf_class actual = a / b;
    mpf_class expected;
    mpf_class tol;
    expected = "0.166666666666666666666666666666667";
    tol = "1e-30";
    cout << "actual " << actual << "\n";
    cout << "expected " << expected << "\n";
    gmp_printf("actual %.50Ff\n", actual.get_mpf_t());
    gmp_printf("expected %.50Ff\n", expected.get_mpf_t());
    gmp_printf("tol %.50Ff\n", tol.get_mpf_t());
    mpf_class diff = expected - actual;
    gmp_printf("diff %.50Ff\n", diff.get_mpf_t());
    if (abs(actual - expected) < tol)
        cout << "ok\n";
    else
        cout << "nop\n";
    return 0;
}
And compile with -lgmpxx -lgmp options.
It produces the output:
actual 0.166667
expected 0.166667
actual 0.16666666666666666666700000000000000000000000000000
expected 0.16666666666666666666700000000000000000000000000000
tol 0.00000000000000000000000000000100000000000000000000
diff 0.00000000000000000000000000000000033333529249058470
ok
If I understand Catch2 correctly, it should be OK if you assign expected_result from a string and then compare with REQUIRE(abs(actual - expected) < tol).
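A rough sketch of how that could look for the vectors in the question, assuming the Catch2 v2 single header at catch2/catch.hpp and gmpxx (untested against any particular Catch2 version):
#define CATCH_CONFIG_MAIN
#include <catch2/catch.hpp>
#include <gmpxx.h>
#include <vector>
TEST_CASE("DemoTest with tolerance") {
    mpf_class a = 1;
    mpf_class b = 6;
    std::vector<mpf_class> actual_result = { a / b };
    std::vector<mpf_class> expected_result;
    expected_result.push_back(mpf_class("0.166666666666666666666666666666667"));
    mpf_class tol("1e-30");
    REQUIRE(actual_result.size() == expected_result.size());
    for (std::size_t i = 0; i < actual_result.size(); ++i) {
        mpf_class diff = abs(actual_result[i] - expected_result[i]);
        REQUIRE(diff < tol); // element-wise tolerance comparison
    }
}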

Is this syntax good form: "PI / (double) (i - j)" in C?

EDIT — Actually the syntax was not good form, because there is a superfluous statement, which is a fair reason for me being confused about whether it is good form and, if so, why. It's my first C code ever, grafting 9 research-journal algorithms into 1000 lines of code from 1989.
What is the double type in parentheses doing here:
PI / (double) (i - j);
Is it to ensure that the result is a float?
The bigger expression statement is:
xi[i] = xi[i] + 2.0 * xr[j] / PI / (double) (i - j);
There's nothing "antiquated" about it, it's a normal C type cast.
Assuming PI is of a floating-point type, which seems safe, the division will be performed using the type of PI thanks to promotion.
So, the cast might (depending on the context) have value if PI is of type float, but you really want the division to happen at double precision. Of course, it would make more sense to actually cast PI in that case ...
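As a small illustration of that last point, here is a sketch with a hypothetical float PI (not the questioner's actual code):
#include <stdio.h>
#define PI 3.14159265358979f /* hypothetical: here PI is only a float */
int main(void)
{
    int i = 10, j = 3;
    float without_cast = PI / (i - j);        /* (i - j) is int, so the division is done in float */
    double with_cast = PI / (double) (i - j); /* (double)(i - j) promotes PI, so the division is done in double */
    printf("%.17g\n%.17g\n", (double) without_cast, with_cast);
    return 0;
}
If PI is already a double, the cast changes nothing about the result, which is the sense in which it is redundant.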

VBA Fix Function Discrepancy? [duplicate]

Disclaimer: I know that 0.025 cannot be represented exactly in IEEE floating-point variables and, thus, rounding might not return what one would expect. That is not my question!
Is it possible to simulate the behavior of the VBA arithmetic operators in .NET?
For example, in VBA, the following expression yields 3:
Dim myInt32 As Long
myInt32 = CLng(0.025 * 100) ' yields 3
However, in VB.NET, the following expression yields 2:
Dim myInt32 As Integer
myInt32 = CInt(0.025 * 100) ' yields 2
According to the specification, both should return the same value:
Long (VBA) and Integer (VB.NET) are 32-bit integer types.
According to the VBA specification, CLng performs Let-coercion to Long, and Let-coercion between numeric types uses Banker's rounding. The same is true for VB.NET's CInt.
0.025 is a double precision IEEE floating-point constant in both cases.
Thus, some implementation detail of the floating-point multiplication operator or the integer-conversion operator changed. However, for reasons of compatibility with a legacy VBA system, I'd need to replicate the mathematical behavior of VBA (however wrong it might be) in a .NET application.
Is there some way to do that? Did someone write a Microsoft.VBA.Math library? Or is the precise VBA algorithm documented somewhere so I can do that myself?
VBA and VB.NET behave differently because VBA uses 80-bit "extended" precision for intermediate floating-point calculations (even though Double is a 64-bit type), whereas VB.NET always uses 64-bit precision. When using 80-bit precision, the value of 0.025 * 100 is slightly greater than 2.5, so CLng(0.025 * 100) rounds up to 3.
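To see the same effect outside VBA, here is a small sketch; it assumes an x86/x64 compiler such as GCC or Clang where long double is the 80-bit extended type (MSVC maps long double to double and will not show the difference):
#include <stdio.h>
int main(void)
{
    double d = 0.025;                               /* the nearest double is slightly above 0.025 */
    double prod64 = d * 100.0;                      /* rounded back to 64-bit double: exactly 2.5 */
    long double prod80 = (long double) d * 100.0L;  /* kept at 80-bit precision: slightly above 2.5, about 2.5000000000000001388 */
    printf("64-bit product: %.20f\n", prod64);
    printf("80-bit product: %.20Lf\n", prod80);
    printf("80-bit product > 2.5: %d\n", prod80 > 2.5L);
    return 0;
}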
Unfortunately, VB.NET doesn't seem to offer 80-bit precision arithmetic. As a workaround, you can create a native Win32 DLL using Visual C++ and call it via P/Invoke. For example:
#include <cmath>
#include <float.h>
#pragma comment(linker, "/EXPORT:MultiplyAndRound=_MultiplyAndRound@16")
extern "C" __int64 __stdcall MultiplyAndRound(double x, double y)
{
    unsigned int cw = _controlfp(0, 0);
    _controlfp(_PC_64, _MCW_PC); // use 80-bit precision (64-bit significand)
    double result = floor(x * y + 0.5);
    if (result - (x * y + 0.5) == 0 && fmod(result, 2))
        result -= 1.0; // round down to even if halfway between even and odd
    _controlfp(cw, _MCW_PC); // restore original precision
    return (__int64)result;
}
And in VB.NET:
Declare Function MultiplyAndRound Lib "FPLib.dll" (ByVal x As Double, ByVal y As Double) As Long
Console.WriteLine(MultiplyAndRound(2.5, 1)) ' 2
Console.WriteLine(MultiplyAndRound(0.25, 10)) ' 2
Console.WriteLine(MultiplyAndRound(0.025, 100)) ' 3
Console.WriteLine(MultiplyAndRound(0.0025, 1000)) ' 3
Given that VBA is supposed to use Banker's rounding, it seems clear to me at first glance that the bug is actually on the VBA side of things. Banker's rounding rounds a midpoint (.5) so that the result digit is even. Thus, with correct Banker's rounding, 2.5 should round to 2, not to 3. This matches the .NET result rather than the VBA result.
However, based on information pulled from a currently deleted answer, we can also see this result in VBA:
Dim myInt32 As Integer
myInt32 = CInt(2.5) ' 2
myInt32 = CInt(0.025 * 100) ' 3
This makes it seem like the rounding in VBA is correct, but the multiplication operation produces a result that is somehow greater than 2.5. Since we're no longer at a mid-point, the Banker's rule does not apply, and we round up to 3.
Therefore, to fix this issue, you'll need to figure out what that VBA code is really doing with that multiplication instruction. Regardless of what is documented, the observations prove that VBA is handling this part differently than .Net. Once you figure out exactly what's going on, with luck you'll be able to simulate that behavior.
One possible option is to go back to the old standby for floating point numbers: check whether you're within some small delta of a mid-point and, if so, just use the mid-point. Here's some (untested) naive code to do it:
Dim result As Double = 0.025 * 100
Dim delta As Double = Double.Epsilon
Dim floor As Integer = CInt(Math.Floor(result))
If Math.Abs(result - (CDbl(floor) + 0.5)) <= delta Then
    result = floor + 0.5
End If
I emphasize the untested, because at this point we're already dealing with strange results from small computer rounding errors. The naive implementation in this situation is unlikely to be good enough. At the very least, you may want to use a factor of 3 or 4 epsilons for your delta. Also, the best you could hope for from this code is that it could force the VBA to match the .NET, when what you're really after is the reverse.

Delphi Double to Objective C double

I have been looking for a solution to this problem for a few hours, but I don't understand how it works. I have a hex string from a Delphi double value: 0X3FF0000000000000. That value should be 1.0. It is 8 bytes long; the first bit is the sign, the next 11 are the exponent, and the rest is the mantissa. So to me this hex value equals 0 x 10^(1023). Maybe I am wrong somewhere, but it doesn't matter. The point is, I need to convert this hex value into an Objective-C double value. If I do (double)strtoll(hexString.UTF8String, NULL, 16); I get 4.607... x 10^18. What am I doing wrong?
It seems that casting in this way ends up calling an implicit type conversion (_ultod3 or _ltod3) that alters the underlying data. In fact, even trying to do this seems to do the same thing:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *&temp1;
But if you cast the uint pointer to a double* it seems to suppress the compiler's desire to perform a conversion. Something like this should work:
UINT64 temp1 = strtoull(hexString, NULL, 16);
double val = *(double*)&temp1;
At least this works with the MS C++ compiler... I imagine the Objective-C compiler would cooperate as well.
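As a variation on the same idea (a sketch, not Objective-C-specific), copying the bytes with memcpy also avoids the pointer cast and stays within the aliasing rules, assuming a 64-bit double:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
    const char *hexString = "0X3FF0000000000000";  /* example value from the question */
    uint64_t bits = strtoull(hexString, NULL, 16); /* parse the hex digits into a 64-bit integer */
    double val;
    memcpy(&val, &bits, sizeof val);               /* reinterpret the bit pattern, no value conversion */
    printf("%f\n", val);                           /* prints 1.000000 */
    return 0;
}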

Using data type like uint64_t in cgal's exact kernel

I am beginning with CGAL. What I would like to do is create a point whose coordinates are numbers around 2^51.
typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef K::Point_2 P;
uint_64 x,y;
//init them somehow
P sp0(x,y);
Then I get a long template error. Could someone help?
I guess you realize that changing the kernel may have other effects on your program.
Concerning your original question, if your integer values are smaller than 2^51, then they fit exactly in doubles (with their 53-bit mantissa), so one simple option is to cast them to double, as in:
P sp0((double)x,(double)y);
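A minimal sketch of that double-cast approach, assuming the coordinates really stay below 2^53 so the casts are exact:
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <cstdint>
typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef K::Point_2 P;
int main()
{
    std::uint64_t x = (std::uint64_t(1) << 51) + 3;   // example coordinates around 2^51
    std::uint64_t y = (std::uint64_t(1) << 51) + 7;
    // Below 2^53 the values fit exactly in a double's 53-bit mantissa, so the casts are lossless.
    P sp0(static_cast<double>(x), static_cast<double>(y));
    return 0;
}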
Otherwise, the Exact_predicates_exact_constructions_kernel should have a main number type that is able to read your uint64 values (maybe cast them to unsigned long long if that's OK on your platform):
typedef K::FT FT;
P sp0((FT)x,(FT)y);
CGAL Number types are only documented to interoperate with int and double. I recently added some code so we can construct more number types from long (required for Eigen), and your code will work in the next version of CGAL (except that you typo-ed uint64_t) on platforms where uint64_t is unsigned int or unsigned long (not Windows). For long long support, since many of our number types are based on other libraries (GMP) that do not support long long themselves yet, it may have to wait a bit.
OK, I think I found a solution. The problem was that I used an exact kernel that supports only double; switching to an inexact kernel solved the problem. It was also possible to use just double. (One of the requirements was to use a data type that supports integers up to 2^48.)