How do I process enormous numbers? [duplicate] - bignum

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Most efficient implementation of a large number class
Suppose I needed to calculate 2^150000. Obviously that number is going to exceed the size of an int, float, or double. How can I make a data type that allows normal math functions but exceeds the basic number types?
If this is a "depends which language you use" kind of deal, I will say C#.

See
Most efficient implementation of a large number class
for some leads.

If C# is not cast in stone, and you want something that just works out of the box, then there are several options. The one I know best is Python, but I think that languages like Scheme and Ruby support large numbers, too.
Python: 2**150000. Prints the result after about 1 second.
If you want free mathematics software, look at Maxima or Sage.

You might also consider using Frink, which is a language with the native capability of dealing with measurement units.
It computes 2^150000 without difficulty, deals with fractions (e.g. 1/3+2/5 --> 11/15), computes 3 meters + 2 inch --> 3.0508 m and is a full programming language.
Frink - Copyright 2000-2008 Alan Eliasen, eliasen#mindspring.com
http://futureboy.us/frinkdocs/

Several languages have built-in support for arbitrarily large numbers. You could use Mathematica, for example. I tried your example in Mathematica, and the result has 45,155 digits. I tried the same example with bc on a Unix machine. bc supports extended precision, but not that extended; it bombed on the example.
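If you end up closer to the metal, C has no built-in bignum type, but a library such as GMP does the heavy lifting. A minimal sketch, assuming GMP is installed (compile with -lgmp):
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t result;                        /* arbitrary-precision integer */
    mpz_init(result);
    mpz_ui_pow_ui(result, 2, 150000);    /* result = 2^150000 */
    gmp_printf("%Zd\n", result);         /* all 45,155 digits */
    mpz_clear(result);
    return 0;
}
Languages with built-in bignums (like the ones mentioned in the other answers) wrap exactly this kind of machinery for you.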

Lisp is your friend. Bignum integers by default.

I find it very frustrating to use a language without arbitrarily large numbers: it seems nonsensical to be able to use ordinary operators like addition on most numbers, but to have to switch to method calls on a BigInt instance simply because of its size.
A whole bunch of languages have more complete numeric towers, and seamlessly coerce when needed; e.g., Allegro Common Lisp evaluates and prints all 45,155 digits of (expt 2 150000) in 1ms.
cl-user(2): (time (expt 2 150000))
; cpu time (non-gc) 0 msec user, 0 msec system
; cpu time (gc) 0 msec user, 0 msec system
; cpu time (total) 0 msec user, 0 msec system
; real time 1 msec
; space allocation:
; 2 cons cells, 18,784 other bytes, 0 static bytes

There is a program written in C called calc, which is an arbitrary-precision calculator. I used it once when working as a researcher and found it fairly straightforward to use...
http://sourceforge.net/projects/calc/
It can be programmed for difficult or long calculations and can accept arguments from the command line. In interactive mode, it accepts one command at a time, and displays the answer.
Ordinarily the commands are simply expressions such as:
3 * (4 + 1)
and calc will print:
15
Calc supports the arithmetic operators +, -, /, * as well as ^ (exponentiation), % (modulus) and // (integer divide).
For example:
3 * 19 ^ 43 - 1
will produce:
29075426613099201338473141505176993450849249622191102976
Calc values can be VERY large. For example:
2 ^ 23209 - 1
will print:
402874115778988778181873329071 ... loads of digits ... 3779264511
Hope this helps...

I don't know C#, but I do know the Ruby programming language has the BigDecimal class, which seems to allow numbers of unlimited size.

Python has a bignum library. If you need to implement a bignum library in another language, you can at least use the Python one as a reference for validating your work. Note that bignums have a few implementation gotchas that aren't immediately obvious if you don't know what you're looking for.
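To give a flavour of one such gotcha, here is a toy sketch in C - my own illustration, not Python's implementation - of carry propagation when adding numbers stored as little-endian arrays of base-10^9 limbs:
#include <stdio.h>
#include <stdint.h>

#define BASE 1000000000u   /* each 32-bit limb holds a value in [0, 10^9) */

/* a += b; both are 'n' limbs, least significant limb first.
 * Returns the carry out of the top limb (0 or 1). */
static uint32_t add_limbs(uint32_t *a, const uint32_t *b, int n)
{
    uint64_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t sum = (uint64_t)a[i] + b[i] + carry;   /* never overflows 64 bits */
        a[i]  = (uint32_t)(sum % BASE);
        carry = sum / BASE;
    }
    return (uint32_t)carry;
}

int main(void)
{
    /* 999999999999999999 + 1: the carry must ripple through every limb */
    uint32_t a[3] = { 999999999u, 999999999u, 0 };
    uint32_t b[3] = { 1u, 0, 0 };
    add_limbs(a, b, 3);
    printf("%u %u %u\n", a[2], a[1], a[0]);   /* prints: 1 0 0 */
    return 0;
}
Multiplication carries, trimming leading zero limbs and handling signs are where the less obvious gotchas tend to hide.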

Related

Squeak Smalltalk: why does the reduced method sometimes not work?

(2332 / 2332) reduced
(2332 / 2) reduced
(2332 / 322) reduced (1166/161)
(2332 / 3) reduced (2332/3)
(2332 / 2432423) reduced (2332/2432423)
Look at the code above. The first and second, when printed, do not work: the MessageNotUnderstood window pops up. The 3rd, 4th and 5th are okay, and the results come out right.
Why does the reduced method not work?
Is it because the reduced method fails to handle final results that are integers, as Uko guesses?
Fractions are reduced automatically in the / method. There is no need to send the reduced message.
E.g. if you print the result of
2 / 4
you get the reduced (1/2) automatically.
If you print the result of
2332 / 2332
it is reduced to 1 which is not a Fraction, but an Integer, and Integers do not understand the reduced message. That's why you get an error.
The only case when a Fraction is not automatically reduced is when you create it manually, as in
Fraction numerator: 2 denominator: 4
which will answer the non-reduced (2/4). But in normal arithmetic expressions you never need to send reduced.
The error occurs because, by default, the Integer class does not understand the message reduced in Squeak. This is despite members of Squeak's Integer class being fractions:
5 isFraction "returns True"
The wonderful thing about Smalltalk is that if something does not work the way you want, you can change it. So if an Integer does not respond to the message reduced and you want it to, then you can add a reduced method to Integer with the expected behavior:
reduced
"treat an integer like a fraction"
^ self
Adding methods to Classes is the way Smalltalk makes it easy to write expressive programs. For example, Fractions in GNU Smalltalk understand the message reduce but not the message reduced available in Squeak. Rather than trying to remember a meaningless difference, the programmer can simply make reduced available to fractions in GNU Smalltalk:
Fraction extend [
"I am a synonym for reduce"
reduced [
^ self reduce
]
]
Likewise one can extend Fraction in Squeak to have a reduce method:
reduce
"I am a synonym for reduced"
^ self reduced
The designers of Smalltalk made a language that lets programmers express themselves in the way that they think about the problem.

Why write 1,000,000,000 as 1000*1000*1000 in C?

In code created by Apple, there is this line:
CMTimeMakeWithSeconds( newDurationSeconds, 1000*1000*1000 )
Is there any reason to express 1,000,000,000 as 1000*1000*1000?
Why not 1000^3 for that matter?
One reason to declare constants in a multiplicative way is to improve readability, while run-time performance is not affected.
It also indicates that the writer was thinking in a multiplicative manner about the number.
Consider this:
double memoryBytes = 1024 * 1024 * 1024;
It's clearly better than:
double memoryBytes = 1073741824;
as the latter doesn't look, at first glance, like the third power of 1024.
As Amin Negm-Awad mentioned, the ^ operator is binary XOR. Many languages lack a built-in, compile-time exponentiation operator, hence the multiplication.
There are reasons not to use 1000 * 1000 * 1000.
With 16-bit int, 1000 * 1000 overflows. So using 1000 * 1000 * 1000 reduces portability.
With 32-bit int, the following first line of code overflows.
long long Duration = 1000 * 1000 * 1000 * 1000; // overflow
long long Duration = 1000000000000; // no overflow, hard to read
Suggest that the leading value match the type of the destination, for readability, portability and correctness.
double Duration = 1000.0 * 1000 * 1000;
long long Duration = 1000LL * 1000 * 1000 * 1000;
Also, code could simply use e notation for values that are exactly representable as a double. Of course this requires knowing whether double can exactly represent the whole-number value - something of concern with values greater than 1e9. (See DBL_EPSILON and DBL_DIG.)
long Duration = 1000000000;
// vs.
long Duration = 1e9;
Why not 1000^3?
The result of 1000^3 is 1003. ^ is the bit-XOR operator.
Even though it does not deal with the question itself, I'll add a clarification: x^y does not always evaluate to x+y as it happens to in the questioner's example. You have to XOR every bit. In the case of the example:
  1111101000₂ (1000₁₀)
^ 0000000011₂ (   3₁₀)
= 1111101011₂ (1003₁₀)
But:
  1111101001₂ (1001₁₀)
^ 0000000011₂ (   3₁₀)
= 1111101010₂ (1002₁₀)
For readability.
Placing spaces between the zeros (1 000 000 000) would produce a syntax error, commas (1,000,000,000) would compile but quietly mean something else (see the test program further down), and having 1000000000 in the code makes it hard to see exactly how many zeros there are.
1000*1000*1000 makes it apparent that it's 10^9, because our eyes can process the chunks more easily. Also, there's no runtime cost, because the compiler will replace it with the constant 1000000000.
For readability. For comparison, Java supports _ in numbers to improve readability (first proposed by Stephen Colebourne as a reply to Derek Foster's PROPOSAL: Binary Literals for Project Coin/JSR 334). One would write 1_000_000_000 here.
In roughly chronological order, from oldest support to newest:
XPL: "(1)1111 1111" (apparently not for decimal values, only for bitstrings representing binary, quartal, octal or hexadecimal values)
PL/M: 1$000$000
Ada: 1_000_000_000
Perl: likewise
Ruby: likewise
Fantom (previously Fan): likewise
Java 7: likewise
Swift: (same?)
Python 3.6: likewise
C++14: 1'000'000'000
It's a feature that languages have only relatively recently realized they ought to support (and then there's Perl). As in chux's excellent answer, 1000*1000... is a partial solution but opens the programmer up to bugs from overflowing the multiplication even if the final result is a large type.
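For what it's worth, C itself eventually adopted the same single-quote digit separator as C++14 (in C23). A minimal sketch, assuming a compiler with C23 support (e.g. a recent gcc with -std=c23, or -std=c2x on slightly older ones):
#include <stdio.h>

int main(void)
{
    long long billion = 1'000'000'000;   /* digit separators: C23 / C++14 syntax */
    printf("%lld\n", billion);
    return 0;
}
Unlike the multiplication, a single literal such as 1'000'000'000'000 also gets a wide enough type automatically, so the overflow trap described above doesn't arise.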
It might be simpler to read and evokes the familiar 1,000,000,000 form.
From a technical standpoint I guess there is no difference between the direct number and the multiplication; the compiler will generate the constant billion either way.
If you're speaking about Objective-C, then 1000^3 won't work because there is no such syntax for pow (^ is XOR). Instead, the pow() function can be used, but in that case it will not be optimal: it will be a runtime function call, not a compiler-generated constant.
To illustrate the reasons consider the following test program:
$ cat comma-expr.c && gcc -o comma-expr comma-expr.c && ./comma-expr
#include <stdio.h>
#define BILLION1 (1,000,000,000)
#define BILLION2 (1000^3)
int main()
{
printf("%d, %d\n", BILLION1, BILLION2);
}
0, 1003
$
Another way to achieve a similar effect in C for decimal numbers is to use literal floating point notation -- so long as a double can represent the number you want without any loss of precision.
IEEE 754 64-bit double can represent any non-negative integer <= 2^53 without problem. Typically, long double (80 or 128 bits) can go even further than that. The conversions will be done at compile time, so there is no runtime overhead and you will likely get warnings if there is an unexpected loss of precision and you have a good compiler.
long lots_of_secs = 1e9;
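A quick way to see that 2^53 boundary in action (my own sketch):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 2^53 is the last point up to which double represents every integer */
    int64_t n = (int64_t)1 << 53;
    double d1 = (double)n;
    double d2 = (double)(n + 1);        /* rounds back down to 2^53 */
    printf("%.1f\n%.1f\n", d1, d2);
    printf("equal: %d\n", d1 == d2);    /* prints 1 */
    return 0;
}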

Advice for bit level manipulation

I'm currently working on a project that involves a lot of bit-level manipulation of data, such as comparison, masking and shifting. Essentially I need to search through chunks of bitstreams between 8 kbytes and 32 kbytes long for bit patterns between 20 and 40 bytes long.
Does anyone know of general resources for optimizing for such operations in CUDA?
There have been at least a couple of questions on SO about how to do text searches with CUDA, that is, finding instances of short byte-strings in long byte-strings. That is similar to what you want to do: a byte-string search is much like a bit-string search where the number of bits in the pattern can only be a multiple of 8 and the algorithm only checks for matches every 8 bits. Search SO for CUDA string searching or matching, and see if you can find them.
I don't know of any general resources for this, but I would try something like this:
Start by preparing 8 versions of each search bit-string, each shifted by a different number of bits. Also prepare start and end masks:
start
01111111
00111111
...
00000001
end
10000000
11000000
...
11111110
Then, essentially, perform byte-string searches with the different bit-strings and masks.
If you're using a device with compute capability >= 2.0, store the shifted bit-strings in global memory. The start and end masks can probably just be constants in your program.
Then, for each byte position, launch 8 threads that each checks a different version of the 8 shifted bit-strings against the long bit-string (which you now treat like a byte-string). In each block, launch enough threads to check, for instance, 32 bytes, so that the total number of threads per block becomes 32 * 8 = 256. The L1 cache should be able to hold the shifted bit-strings for each block, so that you get good performance.
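To make the masking idea concrete, here is a plain-C, host-side sketch of checking one pre-shifted pattern at one byte position. match_at and the toy data are my own illustration, not code from any CUDA kernel:
#include <stdint.h>
#include <stdio.h>

/* Check whether 'pattern' (pat_len bytes, pre-shifted so its first
 * meaningful bit lines up with the data at byte 'pos') matches 'data'.
 * start_mask/end_mask blank out the bits outside the pattern in the
 * first and last bytes. Assumes pat_len >= 2 (the question's patterns
 * are 20-40 bytes). */
static int match_at(const uint8_t *data, size_t pos,
                    const uint8_t *pattern, size_t pat_len,
                    uint8_t start_mask, uint8_t end_mask)
{
    if ((data[pos] & start_mask) != (pattern[0] & start_mask))
        return 0;                                   /* first (partial) byte */
    for (size_t i = 1; i + 1 < pat_len; i++)
        if (data[pos + i] != pattern[i])
            return 0;                               /* full middle bytes */
    if ((data[pos + pat_len - 1] & end_mask) !=
        (pattern[pat_len - 1] & end_mask))
        return 0;                                   /* last (partial) byte */
    return 1;
}

int main(void)
{
    /* toy example: the 12-bit pattern 0xABC, starting at bit offset 4 */
    const uint8_t data[]    = { 0x3A, 0xBC, 0x55 };
    const uint8_t pattern[] = { 0x0A, 0xBC };       /* pre-shifted by 4 bits */
    printf("%d\n", match_at(data, 0, pattern, 2, 0x0F, 0xFF));  /* 1: match */
    printf("%d\n", match_at(data, 1, pattern, 2, 0x0F, 0xFF));  /* 0: no match */
    return 0;
}
In the CUDA version, each of the 8 threads per byte position would run a check like this for its own pre-shifted pattern, with the shifted patterns kept in global memory and the masks as constants, as described above.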

Where does the limitation of 10^15 in D.J. Bernstein's 'primegen' program come from?

At http://cr.yp.to/primegen.html you can find the sources of a program that uses Atkin's sieve to generate primes. As the author says it may take a few months for him to answer an e-mail (I understand that, he surely is a busy man!), I'm posting this question here.
The page states that 'primegen can generate primes up to 1000000000000000'. I am trying to understand why that is so. There is of course a limitation at 2^64 ~ 2 * 10^19 (the size of long unsigned int), because this is how the numbers are represented. I know for sure that if there were a huge prime gap (> 2^31) then printing of the numbers would fail, but in this range I think there is no such gap.
Either the author understated the bound (and really it is around 10^19), or there is a place in the source code where an arithmetic operation can overflow, or something like that.
The funny thing is that you actually MAY run it for numbers > 10^15:
./primes 10000000000000000 10000000000000100
10000000000000061
10000000000000069
10000000000000079
10000000000000099
and if you believe Wolfram Alpha, it is correct.
Some facts I have "reverse-engineered":
numbers are sifted in batches of 1,920 * PRIMEGEN_WORDS = 3,932,160 numbers (see primegen_fill function in primegen_next.c)
PRIMEGEN_WORDS controls how big a single sifting is - you can adjust it in primegen_impl.h to fit your CPU cache,
the implementation of the sieve itself is in primegen.c file - I assume it is correct; what you get is a bitmask of primes in pg->buf (see primegen_fill function)
The bitmask is analyzed and primes are stored in pg->p array.
I see no point where an overflow could happen.
I wish I were at my computer to look, but I suspect you would have different success if you started at 1 as your lower bound.
Just from the algorithm, I would conclude that the upper bound comes from 32-bit numbers.
The page mentions a Pentium-III as the CPU, so my guess is that the code is quite old and does not use 64-bit arithmetic.
2^32 is approximately 10^9. The Sieve of Atkin (which the algorithm uses) requires about N^(1/2) bits (it uses a big bitfield), which means that with memory on the order of 2^32 you can (conservatively) reach N of approximately 10^15. As this is a rough, conservative upper bound (the system and other programs occupy memory, address ranges are reserved for I/O, ...), the real upper bound is/might be higher.
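A quick back-of-the-envelope check of that memory argument - my own sketch in C (link with -lm), not part of primegen:
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* a sieve needing roughly N^(1/2) bits, evaluated at a few bounds */
    const double bounds[] = { 1e9, 1e15, 1e19 };
    for (int i = 0; i < 3; i++) {
        double bits  = sqrt(bounds[i]);
        double bytes = bits / 8.0;
        printf("N = %.0e  ->  %.2e bits  (~%.1f MB)\n",
               bounds[i], bits, bytes / (1024.0 * 1024.0));
    }
    return 0;
}
At 10^15 the bitfield is only a few megabytes, comfortable on a Pentium-III-class machine, while anything near 10^19 would already need hundreds of megabytes for the bitfield alone.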

Random Numbers in Objective-C Using Mod Range?

I have read here -- without understanding much -- that it's bad to use mod to map a random number into a range. So this typical recommendation for Objective-C
int r = arc4random() % 45;
might be a bad idea to get a number from 0 to 45 (something about the distribution and this formula having a preference for low bits). What should one use in Objective-C?
<sarcasm>
I am so glad to be able to finally learn this stuff after using only high-level languages (Java et al.) all this time. Tomorrow I will try to make fire with two twigs. </sarcasm>
Java is just as high-level as Objective-C here - in this case Java's Random.nextInt() plays the same role as arc4random, in that both return a 32-bit pseudo-random number.
The issue raised in the URL (and one I have seen elsewhere) is that rand() could be repeating itself every 32768 values, whilst OS X's arc4random can have (2**1700) states.
But as with all uses of pseudo-random generators, you need to be aware of their weaknesses before using them, e.g. a preference for low bits in some generators, and also the comment in the OpenBSD arc4random man page, where it says:
arc4random_uniform() is recommended over constructions like ``arc4random() % upper_bound'' as it avoids "modulo bias" when the upper bound is not a power of two.
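On OS X and the BSDs the practical fix is therefore arc4random_uniform(45) instead of arc4random() % 45 (either way, the result is 0 to 44 inclusive). For illustration, here is a plain-C sketch of the rejection-sampling idea such a function uses to avoid modulo bias; uniform_below is my own name, not the libc implementation:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>   /* arc4random / arc4random_uniform on OS X and the BSDs */

/* Sketch of rejection sampling: throw away the low end of the 32-bit
 * range so that the remaining range is an exact multiple of upper_bound,
 * making every residue equally likely. Assumes upper_bound > 0. */
static uint32_t uniform_below(uint32_t upper_bound)
{
    uint32_t min = (uint32_t)(-upper_bound) % upper_bound;  /* 2^32 mod upper_bound */
    uint32_t r;
    do {
        r = arc4random();
    } while (r < min);
    return r % upper_bound;
}

int main(void)
{
    printf("%u\n", arc4random_uniform(45));   /* the library call */
    printf("%u\n", uniform_below(45));        /* the same idea, spelled out */
    return 0;
}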