How to implement long division for enormous numbers (bignums) - division

I'm trying to implement long division for bignums. I can't use a library like GMP, unfortunately, due to the limitations of embedded programming. Besides, I want the intellectual exercise of learning how to implement it. So far I've got addition and multiplication done using arbitrary-length arrays of bytes (so each byte is like a base-256 digit).
I'm just trying to get started on implementing division/modulus and I want to know where to start. I've found lots of highly optimised (i.e. unreadable) code on the net, which doesn't help me, and I've found lots of highly technical mathematical whitepapers from which I can't bridge the gap between theory and implementation.
If someone could recommend a popular algorithm, and point me to a simple-to-understand explanation of it that leans towards implementation, that'd be fantastic.
-edit: I need algorithms which work when the dividend is ~4000 bits and the divisor is ~2000 bits
-edit: Will this algorithm work with base-256? http://courses.cs.vt.edu/~cs1104/BuildingBlocks/divide.030.html
-edit: Is this the algorithm (Newton division) I should really be using? http://en.wikipedia.org/wiki/Division_(digital)#Newton.E2.80.93Raphson_division

If you want to learn, then start with the pencil and paper method you used in elementary school. Believe it or not, that is essentially the same O(n^2) algorithm that is used in most bignum libraries for numbers that are in the range you are looking for. The tricky first step is called "quotient estimation", and that will probably be the hardest to understand. Once you understand that, the rest should come easy.
A good reference is Knuth's "Seminumerical Algorithms". He has many discussions about different ways to do quotient estimation both in the text and in the exercises. That book has chapters devoted to bignum implementations.
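If it helps to see the shape of it, here is a minimal sketch (my own, not from any library) of the simplest variant: restoring long division done one bit at a time, on big-endian arrays of base-256 digits. Because each quotient "digit" is just 0 or 1, the quotient-estimation step disappears entirely; Knuth's Algorithm D is the refinement that produces a whole base-256 quotient digit per step, which is where estimation comes in. The helper names and byte order here are assumptions, so adapt them to your representation.

    #include <cstdint>
    #include <vector>

    using Bignum = std::vector<uint8_t>;  // big-endian base-256 digits

    // Compare magnitudes, ignoring leading zeros: <0, 0, or >0.
    static int cmp(const Bignum& a, const Bignum& b) {
        size_t ai = 0, bi = 0;
        while (ai < a.size() && a[ai] == 0) ++ai;
        while (bi < b.size() && b[bi] == 0) ++bi;
        size_t an = a.size() - ai, bn = b.size() - bi;
        if (an != bn) return an < bn ? -1 : 1;
        for (; ai < a.size(); ++ai, ++bi)
            if (a[ai] != b[bi]) return a[ai] < b[bi] ? -1 : 1;
        return 0;
    }

    // a -= b, assuming a >= b in magnitude.
    static void subInPlace(Bignum& a, const Bignum& b) {
        int borrow = 0;
        size_t ai = a.size(), bi = b.size();
        while (ai > 0 && (bi > 0 || borrow)) {
            --ai;
            int v = int(a[ai]) - borrow;
            if (bi > 0) v -= int(b[--bi]);
            borrow = v < 0;
            a[ai] = uint8_t(v + (borrow ? 256 : 0));
        }
    }

    // Returns the quotient; the remainder is left in rem. Precondition: d != 0.
    static Bignum divmod(const Bignum& n, const Bignum& d, Bignum& rem) {
        Bignum q(n.size(), 0);
        rem.assign(1, 0);
        for (size_t i = 0; i < n.size(); ++i) {
            for (int bit = 7; bit >= 0; --bit) {
                // rem = rem * 2 + (next bit of n), propagating carries upward.
                int carry = (n[i] >> bit) & 1;
                for (size_t j = rem.size(); j-- > 0; ) {
                    int v = (rem[j] << 1) | carry;
                    rem[j] = uint8_t(v & 0xFF);
                    carry = v >> 8;
                }
                if (carry) rem.insert(rem.begin(), uint8_t(carry));
                // The quotient bit is 1 exactly when rem >= d.
                if (cmp(rem, d) >= 0) {
                    subInPlace(rem, d);
                    q[i] |= uint8_t(1 << bit);
                }
            }
        }
        return q;  // may carry leading zero bytes
    }

For 4000-bit dividends this is perfectly serviceable; Algorithm D does roughly 8x fewer iterations once you get quotient estimation right.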

Are you using the void Four1(long double[], int, int) routine in your code, convolving, and then doing an inverse transform? I got multiplication to work that way, but when I tried to do division the same way it spat out one result and then quit, so I can't help you there. But if you have the tome called "Numerical Recipes in C++", look near the end: what you're looking for starts on pages 916 to 926.

This question is over 2 years old, but for numbers of this size you can look at the OpenSSL source code. It does RSA with numbers of this size, so it has lots of math routines optimized for 1000- to 4000-bit numbers.
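Even if OpenSSL itself won't fit on your target, its BIGNUM API is handy as a reference oracle to check your own results against. A minimal example (link with -lcrypto; the decimal literals are placeholders):

    #include <openssl/bn.h>
    #include <cstdio>

    int main() {
        BN_CTX* ctx = BN_CTX_new();
        BIGNUM *a = BN_new(), *d = BN_new(), *q = BN_new(), *r = BN_new();
        BN_dec2bn(&a, "123456789012345678901234567890");  // placeholder values
        BN_dec2bn(&d, "9876543210987654321");
        BN_div(q, r, a, d, ctx);  // q = a / d, r = a % d
        char *qs = BN_bn2dec(q), *rs = BN_bn2dec(r);
        std::printf("q = %s\nr = %s\n", qs, rs);
        OPENSSL_free(qs); OPENSSL_free(rs);
        BN_free(a); BN_free(d); BN_free(q); BN_free(r);
        BN_CTX_free(ctx);
        return 0;
    }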

Information about CGAL and alternatives

I'm working on a problem that will eventually run in an embedded microcontroller (ESP8266). I need to perform some fairly simple operations on linear equations. I don't need much, but I do need to be able to work with points and linear equations to:
Define equations for lines, either from two known points or from one point and a gradient
Calculate a new x,y point on an equation line that is a specific distance from another point on that equation line
Drop a perpendicular onto an equation line from a point
Perform variations of cosine-rule calculations on points and triangle sides defined as equations
I roughed up some code for this a while ago based on high-school "y = mx + c" concepts, but it's flawed (it fails with infinities when lines are vertical), and it's currently in Scala. Since I suspect I'm reinventing a wheel, and that's not my primary goal, I'd like to use someone else's work for this!
I've come across CGAL, and it seems very likely it's capable of all this and more, but I have two questions about it (given that it seems to take ages to get enough understanding of this kind of huge library to actually be able to answer simple questions!):
It seems to assert some kind of mathematical perfection in its calculations, but that's not important to me, and my system will be severely memory constrained. Does it use/offer memory-efficient approximations?
Is it possible (and hopefully easy) to separate out just a limited subset of features, or am I going to find the entire library (or even a very large subset) heading into my memory limited machine?
And, I suppose the inevitable follow up: are there more suitable libraries I'm unaware of?
TIA!
The problems that you are mentioning sound fairly simple indeed, so I'm wondering if you really need any library at all. Maybe if you post your original code we could help you fix it--your problem sounds like you need to redo a calculation avoiding a division by zero.
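One concrete way out of the vertical-line problem, if you do fix your own code: store lines in the implicit form a*x + b*y + c = 0 rather than y = mx + c, so a vertical line is just b = 0 and no division ever happens. A hypothetical sketch (the names are mine, not from any library):

    #include <cmath>

    struct Point { double x, y; };

    struct Line {
        double a, b, c;  // a*x + b*y + c = 0, with (a, b) a unit normal

        static Line through(Point p, Point q) {  // caller ensures p != q
            Line l{q.y - p.y, p.x - q.x, 0.0};
            double n = std::hypot(l.a, l.b);
            l.a /= n; l.b /= n;
            l.c = -(l.a * p.x + l.b * p.y);
            return l;
        }

        // Signed distance from p, and the foot of the perpendicular from p.
        double dist(Point p) const { return a * p.x + b * p.y + c; }
        Point foot(Point p) const {
            double d = dist(p);
            return {p.x - d * a, p.y - d * b};
        }
    };

A point at a given distance t along the line from a point p on it is p + t*(-b, a), since (-b, a) is a unit direction vector, so all the operations you list fall out of this form without special-casing vertical lines.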
As for your point (2) about separating out a limited number of features from CGAL, given the size and the coding style of that project, in my experience that will be significantly more complicated (if possible at all) than fixing your own code.
In case you want to try a simpler library than CGAL, maybe you could try Boost.Geometry.
Regards,

Complexity of Integer vs. Binary Constraints in CPLEX

Recently, I have been trying to learn a bit about CPLEX and was hoping someone could help me understand the complexity when solving for integer vs. binary constraints.
For example, say we are trying to allocate a pie among 10 people for maximum utility, where each person has a utility that is linear in the amount of pie they receive. However, we want to introduce the constraint that at least 3 people have to get a bit of pie.
What's the difference between thinking of this as a single integer constraint (number_of_people_with_pie >= 3) vs. 10 binary variables (person_1_has_pie + person_2_has_pie + ... + person_10_has_pie >= 3)? I would imagine the former is simpler, but I wonder if there are any benefits to formulating the problem in terms of binary variables.
In addition to this, any recommended reading for better understanding MIP and CPLEX would be greatly appreciated, especially for understanding where the problem becomes NP-hard, or in what situations the simplex method struggles to find the global optimum.
Thanks!
I agree with Alex and Erwin's comment that this really depends on what you want to model. For this particular model I disagree with Alex: to me it makes more sense to use one decision variable per person, otherwise it may become hard to figure out which person gets how much of the pie.
A problem becomes NP-hard as soon as you add integrality or SOS constraints. A good reading for MIP in general is Alexander Schrijver's "Theory of Linear and Integer Programming". That should cover all the topics you need for an in-depth understanding of things.
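To make the per-person formulation concrete, here is a hypothetical sketch using CPLEX's Concert C++ API, with made-up utilities. Note the modeling subtlety: a binary gets[i] only means anything if you link it to the continuous slice[i] from both sides, otherwise the solver can mark someone as "having pie" while giving them none.

    #include <ilcplex/ilocplex.h>
    #include <cstdio>

    int main() {
        IloEnv env;
        IloModel model(env);
        const int n = 10;
        IloNumVarArray slice(env, n, 0.0, 1.0);  // fraction of pie per person
        IloBoolVarArray gets(env, n);            // 1 if person i gets any pie
        IloNumArray utility(env, n);
        for (int i = 0; i < n; ++i) utility[i] = 1.0 + i;  // placeholder utilities

        IloExpr total(env), count(env);
        for (int i = 0; i < n; ++i) {
            model.add(slice[i] <= gets[i]);         // no pie unless gets[i] = 1
            model.add(slice[i] >= 0.01 * gets[i]);  // gets[i] = 1 means a real bite
            total += slice[i];
            count += gets[i];
        }
        model.add(total == 1.0);  // the whole pie is allocated
        model.add(count >= 3);    // at least 3 people get pie
        model.add(IloMaximize(env, IloScalProd(utility, slice)));

        IloCplex cplex(model);
        cplex.solve();
        std::printf("objective = %f\n", cplex.getObjValue());
        env.end();
        return 0;
    }

Without the epsilon lower bound, the count constraint is trivially satisfiable with zero-size slices, which is the same trap an aggregate "number_of_people_with_pie >= 3" formulation has to avoid.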
It really depends on the case but in yours I would use 1 decision variable rather than 10.
Sometimes that's not obvious, and trying and measuring can prove you right or wrong. That's one of the reasons why using high-level modeling languages (abstract modeling languages such as OPL) can help.
I recommend a MOOC on Cognitive Class: https://cognitiveclass.ai/courses/mathematical-optimization-for-business-problems/
and the OPL language manual: https://www.ibm.com/support/knowledgecenter/SSSA5P_12.7.0/ilog.odms.studio.help/pdf/opl_languser.pdf

Arbitrary precision integer programming solvers?

Are there any free solvers capable of solving integer programming problems with large numbers (at least 336 bits)? All the solvers I've looked at appear to assume only double precision, and I haven't been able to find any that claim arbitrary precision.
The only one that I know of is SCIP - it's a really good solver, and they have a beta of a version that does exact arithmetic, although I haven't tried that side of SCIP yet:
http://scip.zib.de/exactmip.shtml
Also, the people working on SCIP all seem to be really helpful, and there is an active support mailing list too, so you are likely to get good support.
Tim

SMT solvers for bit vector arithmetic

I'm planning some experiments in symbolic execution of C code, using an off-the-shelf SMT solver, and wondering which solver to use; looking at e.g. the SMT contest entrants, and taking only the open-source systems, narrows it down to Beaver, Boolector, CVC3, OpenSMT, Sateen, Sonolar, STP, Verit; which is still a long list.
Trying to narrow it down a little further, I notice that some of the systems advertise the ability to handle bit vector arithmetic, whereas others only advertise the ability to handle general integer arithmetic. In principle, the former is correct for C, where variables are machine words, not unbounded integers. How much difference does it make in practice? What happens if you try to use a general integer system for this kind of job? Does one of the following scenarios apply?
A bit vector system is slightly more efficient, but you can use either, no problem.
You can use a general integer system with a bit of tweaking.
A general integer system is fine for signed int (because the result of overflow is undefined) but will give the wrong answer for unsigned.
A general integer system just isn't correct for machine word arithmetic, and I can reduce my short list to only those systems that provide bit vector arithmetic.
Something else...?
I've tried to ask as specific a question as possible, but if anyone can suggest any other criteria for narrowing down the list, that would be great!
I've had good experience using STP for symbolic execution. STP was designed precisely for this task. Also, there have been a number of symbolic execution tools that have successfully used STP for this purpose, so there is reason to believe that STP doesn't suck. I would definitely recommend STP to others as a default choice for this sort of experimentation.
However, I haven't tried the other systems, so I don't know how STP compares to them.
Personally, I see STP as the baseline and the default choice for this kind of application. So, if you only have time to try one solver, trying STP seems like a pretty reasonable choice.
If I had to guess, my guess would be that bit-vector arithmetic is important to support, because any large systems code is going to have a non-trivial amount of code that performs bitwise operations. Also, I'd suspect/worry some systems code may rely upon the behavior of unsigned arithmetic to wrap modulo 2^n, and if you try to model it with integers, you will not get the semantics of C right (because, as you say, integers just aren't correct for machine-word arithmetic) and consequently, if you try to use an integer-only solver, you may experience some difficulties. However, I have no hard evidence for either of these suspicions.
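A tiny concrete case of that worry, in plain code rather than solver input: over mathematical integers, x + 1 > x always holds, so an integer-theory solver would report the branch below unreachable, while over 32-bit bit-vectors it is taken.

    #include <cstdint>

    int main() {
        uint32_t x = 0xFFFFFFFFu;  // UINT32_MAX
        if (x + 1 < x) {
            // Reached: unsigned arithmetic wraps modulo 2^32, so x + 1 == 0.
            return 1;
        }
        return 0;
    }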
P.S. Z3 might also be a contender to add to your list to consider. (Do you really need your solver to be open-source, as long as it is free? I'd expect that a symbolic execution tool would use it only as a blackbox, without modification.)
According to SMT-Wikipedia as of 2011-08, we have:
Based on these measures, it appears that the most vibrant, well-organized projects are OpenSMT, STP and CVC4.
I'm just checking this stuff - so far, all three seem reasonable, plus the older CVC -> CVC3.

How to test numerical analysis routines?

Are there any good online resources for how to create, maintain and think about writing test routines for numerical analysis code?
One of the limitations I can see for something like testing matrix multiplication is that the obvious tests (like multiplying by the identity matrix) may not fully exercise the functionality of the code.
Also, there is the fact that you are usually dealing with large data structures as well. Does anyone have some good ideas about ways to approach this, or have pointers to good places to look?
It sounds as if you need to think about testing in at least two different ways:
Some numerical methods allow for some meta-thinking. For example, invertible operations allow you to set up test cases to see if the result is within acceptable error bounds of the original: matrix M-inverse times (matrix M times a random vector V) should result in V again, to within some acceptable measure of error.
Obviously, this example exercises matrix inverse, matrix multiplication and matrix-vector multiplication. I like chains like these because you can generate quite a lot of random test cases and get statistical coverage that would be a slog to have to write by hand. They don't exercise single operations in isolation, though.
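A tiny self-contained version of that chain for 2x2 matrices (a real test would call your own routines instead of the inline arithmetic):

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);  // fixed seed so failures are reproducible
        std::uniform_real_distribution<double> dist(-1.0, 1.0);
        int failures = 0;
        for (int t = 0; t < 1000; ++t) {
            double m[2][2] = {{dist(rng), dist(rng)}, {dist(rng), dist(rng)}};
            double det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
            if (std::fabs(det) < 1e-3) continue;  // skip near-singular matrices
            double inv[2][2] = {{ m[1][1] / det, -m[0][1] / det},
                                {-m[1][0] / det,  m[0][0] / det}};
            double v[2] = {dist(rng), dist(rng)};
            double mv[2]   = {m[0][0] * v[0] + m[0][1] * v[1],
                              m[1][0] * v[0] + m[1][1] * v[1]};
            double back[2] = {inv[0][0] * mv[0] + inv[0][1] * mv[1],
                              inv[1][0] * mv[0] + inv[1][1] * mv[1]};
            if (std::fabs(back[0] - v[0]) > 1e-9 || std::fabs(back[1] - v[1]) > 1e-9)
                ++failures;
        }
        std::printf("%d round-trip failures\n", failures);
        return failures != 0;
    }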
Some numerical methods have a closed-form expression of their error. If you can set up a situation with a known solution, you can then compare the difference between the solution and the calculated result, looking for a difference that exceeds these known bounds.
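As a hypothetical instance of that: the composite trapezoid rule has the closed-form bound |error| <= (b-a)*h^2*max|f''|/12, so integrating sin over [0, pi] (exact answer 2, max|f''| = 1) gives a pass/fail test with no fudge factor.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = std::acos(-1.0);
        const int n = 1000;
        const double h = pi / n;
        double sum = 0.5 * (std::sin(0.0) + std::sin(pi));
        for (int i = 1; i < n; ++i) sum += std::sin(i * h);
        const double approx = h * sum;
        const double error = std::fabs(approx - 2.0);  // exact integral is 2
        const double bound = pi * h * h / 12.0;        // (b-a)*h^2*max|f''|/12
        std::printf("error = %.3e, bound = %.3e\n", error, bound);
        return error > bound;  // nonzero exit means the bound was violated
    }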
Fundamentally, this question illustrates the problem that testing complex methods well requires quite a lot of domain knowledge. Specific references would require a little more specific information about what you're testing. I'd definitely recommend that you at least have Steve Yegge's recommended book list on hand.
If you're going to be doing matrix calculations, use LAPACK. This is very well-tested code. Very smart people have been working on it for decades. They've thought deeply about issues that the uninitiated would never think about.
In general, I'd recommend two kinds of testing: systematic and random. By systematic I mean exploring edge cases etc. It helps if you can read the source code. Often algorithms have branch points: calculate this way for numbers in this range, this other way for numbers in another range, etc. Test values close to the branch points on either side because that's where approximation error is often greatest.
Random input values are important too. If you rationally pick all the test cases, you may systematically avoid something that you don't realize is a problem. Sometimes you can make good use of random input values even if you don't have the exact values to test against. For example, if you have code to calculate a function and its inverse, you can generate 1000 random values and see whether applying the function and its inverse put you back close to where you started.
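For instance, a round-trip test of that shape might look like this (using exp/log as stand-ins for the function pair under test; substitute your own routines and tolerance):

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(12345);  // fixed seed: reproducible failures
        std::uniform_real_distribution<double> dist(-10.0, 10.0);
        int failures = 0;
        for (int i = 0; i < 1000; ++i) {
            double x = dist(rng);
            double roundTrip = std::log(std::exp(x));  // should recover x
            if (std::fabs(roundTrip - x) > 1e-12 * std::fmax(1.0, std::fabs(x)))
                ++failures;
        }
        std::printf("%d of 1000 round-trips exceeded tolerance\n", failures);
        return failures != 0;
    }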
Check out a book by David Gries called The Science of Programming. It's about proving the correctness of programs. If you want to be sure that your programs are correct (to the point of proving their correctness), this book is a good place to start.
Probably not exactly what you're looking for, but it's the computer science answer to a software engineering question.