How to implement iterative Karatsuba efficiently and nontrivially?

Is there any efficient C/C++ implementation or algorithm description for iterative Karatsuba, without using a stack/queue to simulate the function stack of recursive Karatsuba?
Suppose F x G = (F_1 + F_2 x^n) x (G_1 + G_2 x^n), where F and G are polynomials of degree (2n-2).
My hunch is to keep track of the size and index of the current two operands F_i and G_i. The iterative Karatsuba starts with the multiplication of the smallest operands. Once the index is even, calculate ((F_1 + F_2) x (G_1 + G_2)), merge it with (F_1 x G_1) and (F_2 x G_2), double the "size", and halve the "index".
The problem is that the calculation of ((F_1 + F_2) x (G_1 + G_2)) needs another (size, index). I'd like to directly append (F_1 + F_2) and (G_1 + G_2) to the end of the F- and G-arrays. In that case, how do I deduce the new (size, index) from the old one once the calculation completes?
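For reference, the merge step being described looks like this at a single level (a sketch of my own in C++, assuming F and G each hold the same even number of coefficients, with a schoolbook multiply standing in for the recursive sub-products):

#include <vector>

// Schoolbook multiply, standing in for the recursive sub-multiplications.
std::vector<long long> school_mul(const std::vector<long long>& a,
                                  const std::vector<long long>& b) {
    std::vector<long long> c(a.size() + b.size() - 1, 0);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            c[i + j] += a[i] * b[j];
    return c;
}

// One level of Karatsuba: F = F_1 + F_2 x^n, G = G_1 + G_2 x^n, each half holding n coefficients.
std::vector<long long> karatsuba_level(const std::vector<long long>& F,
                                       const std::vector<long long>& G) {
    const std::size_t n = F.size() / 2;
    std::vector<long long> F1(F.begin(), F.begin() + n), F2(F.begin() + n, F.end());
    std::vector<long long> G1(G.begin(), G.begin() + n), G2(G.begin() + n, G.end());
    std::vector<long long> Fs(n), Gs(n);
    for (std::size_t i = 0; i < n; ++i) { Fs[i] = F1[i] + F2[i]; Gs[i] = G1[i] + G2[i]; }
    auto P1 = school_mul(F1, G1);   // F_1 x G_1
    auto P2 = school_mul(F2, G2);   // F_2 x G_2
    auto Pm = school_mul(Fs, Gs);   // (F_1 + F_2) x (G_1 + G_2)
    std::vector<long long> C(2 * F.size() - 1, 0);
    for (std::size_t i = 0; i < P1.size(); ++i) C[i] += P1[i];                     // low part
    for (std::size_t i = 0; i < Pm.size(); ++i) C[i + n] += Pm[i] - P1[i] - P2[i]; // middle part
    for (std::size_t i = 0; i < P2.size(); ++i) C[i + 2 * n] += P2[i];             // high part
    return C;
}

This does not answer the stack-free bottom-up question itself; it only pins down what each (size, index) merge has to produce.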

Related

Is it possible to explicitly and minimally index all integer solutions to x + y + z = 2n?

For example, the analogous two-dimensional question, x + y = 2n, is easy to solve: one can simply consider pairs (i, 2n-i) for i = 1, 2, ..., n and thus index every solution exactly once. We note that there are n such pairs solving x + y = 2n for every fixed positive integer n, so the cardinality of such a set is equal to n, as expected.
However, trying to repeat the same problem for x + y + z = 2n, it is not clear to me how (or if it is possible) to write down a minimal set {(2n-i-j,i,j)} such that varying i and j over particular intervals precisely produces every such triplet, exactly once. It can be shown that the number of elements in such a minimal set would be equal to the nearest integer to n^2/3.
It is not hard to see how one can obtain such an indexing with repetitions, or how one can algorithmically remove repetitions, but what I would like to know is whether there is a clean, general construction, as for the x + y = 2n case. Is this possible, or will one always have to artificially restrict certain values of the parameters on the intervals for which they are defined?
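For illustration, the standard way to avoid repetitions is to enumerate the triples in sorted order x >= y >= z >= 1, restricting the ranges of y and z accordingly (a small C++ sketch of my own, not from the question):

#include <cstdio>

int main() {
    const int n = 5, N = 2 * n;
    int count = 0;
    for (int z = 1; 3 * z <= N; ++z)            // smallest part: z <= N/3
        for (int y = z; 2 * y <= N - z; ++y) {  // middle part: y <= x is forced
            int x = N - y - z;                  // largest part is determined
            ++count;
            std::printf("(%d, %d, %d)\n", x, y, z);
        }
    std::printf("count = %d, nearest integer to n^2/3 = %d\n", count, (n * n + 1) / 3);
}

For n = 5 this prints 8 triples, matching the nearest integer to 25/3; the two loop bounds (3z <= 2n and 2y <= 2n - z) are precisely the kind of interval restrictions the question asks about.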

Big O notation and measuring time according to it

Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2, so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it became 2^(2n) = (2^n)^2, so the new time is the square of the previous time?
Big O is not for telling you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
1                         2
2                         4
3                         8
4                         16
5                         32
...                       ...
10                        1024
20                        1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be less than M * 2^n for some constant M and starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (and not even necessarily a tight one).
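To make that concrete, here is a quick C++ sketch (my own illustration) printing how much each of the four formulas actually grows when n is doubled:

#include <cmath>
#include <cstdio>

double f1(double n) { return std::pow(2.0, n) + 5000 * n * n + 12300; }
double f2(double n) { return 500 * std::pow(2.0, n) + 6; }
double f3(double n) { return 500 * n * n + 25000 * n + 456000; }
double f4(double)   { return 400000000.0; }

int main() {
    for (double n : {5.0, 10.0}) {
        std::printf("doubling n = %g to %g multiplies the running time by:\n", n, 2 * n);
        std::printf("  f1: %.1fx  f2: %.1fx  f3: %.1fx  f4: %.1fx\n",
                    f1(2 * n) / f1(n), f2(2 * n) / f2(n), f3(2 * n) / f3(n), f4(2 * n) / f4(n));
    }
}

All four are O(2^n), yet the printed factors differ wildly, which is the point.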
Some related theory below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) if and only if it is also Ω(g(n)) and O(g(n)) (that is, both upper- and lower-bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some constant M and starting value n0? Of course! With M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some constant M and starting value n0? This is where you run into problems... since 4^n / 2^n = 2^n grows without bound, no constant M can keep M * 2^n above 4^n as n gets arbitrarily large. Thus, 4^n is not upper-bounded by O(2^n).
See comments for further explanation, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the precise algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through your whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
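Rendered in C++ for comparison (a sketch I am adding, not from the original answer):

#include <cstdio>
#include <vector>

int main() {
    std::vector<int> arr = {1, 2, 3, 4, 5};
    int first = arr[0];                   // O(1): direct index access, independent of size
    int sum = 0;
    for (int elem : arr) sum += elem;     // O(n): touches every element once
    std::printf("first = %d, sum = %d\n", first, sum);
}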
See the Big-O Cheat Sheet for complementary information.

Time complexity of Simpson's rule for simple integral calculus

I am looking for a reference and a proof for the time complexity of Simpson's rule for integral calculus.
I am not sure whether the complexity of that rule is O(N).
Could you point me in the right direction?
Thanks
First of all, Simpson's Rule requires three inputs:
The function f(x), assume it takes O(1) time.
The bounds of integration (a, b)
The number of subdivisions, n; the width of each "bar" is then d = (b - a) / n. Note that n must be an even positive integer.
Simpson's Rule states that
∫_a^b f(x) dx ≈ (d/3)[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ... + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)]
or, grouping the interior points by coefficient,
∫_a^b f(x) dx ≈ (d/3)([f(x_0) + f(x_n)] + 4 ∑_{k=1}^{n/2} f(x_{2k-1}) + 2 ∑_{k=1}^{n/2-1} f(x_{2k}))
where x_k = a + kd. Note x_0 = a, x_n = a + nd = b.
Counting the terms above: the two sums contain n/2 and n/2 - 1 terms respectively, plus the two endpoint terms f(x_0) and f(x_n), giving n + 1 evaluations of f in total. The number of terms used by Simpson's rule for a given n is therefore linear in n.
Assuming each multiplication and each evaluation of f takes constant time, the count above shows that the time complexity of Simpson's rule is O(n): it runs in linear time.
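To make the linear-time claim concrete, here is a small C++ sketch of composite Simpson's rule (my own illustration; it performs exactly n + 1 evaluations of f):

#include <cmath>
#include <cstdio>
#include <functional>

// Composite Simpson's rule; n must be an even positive integer.
double simpson(const std::function<double(double)>& f, double a, double b, int n) {
    double d = (b - a) / n;
    double sum = f(a) + f(b);                              // endpoint terms, coefficient 1
    for (int k = 1; k < n; ++k)
        sum += (k % 2 == 1 ? 4.0 : 2.0) * f(a + k * d);    // interior terms: 4, 2, 4, 2, ...
    return sum * d / 3.0;
}

int main() {
    const double pi = std::acos(-1.0);
    // Integrate sin over [0, pi]; the exact value is 2.
    std::printf("%f\n", simpson([](double x) { return std::sin(x); }, 0.0, pi, 100));
}

The single loop over the n - 1 interior points is the entire cost, hence O(n).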

What is the definition of a truncated polynomial?

In NTRUEncrypt, I have seen truncated polynomials, but I cannot understand how the truncated polynomial calculation works.
So, could anyone tell me how to calculate with truncated polynomials?
The polynomials are truncated in the sense that they only have coefficients up to a certain degree.
Here is how you truncate the product of two truncated polynomials (the sum is trivial):
Assume you have two truncated polynomials, i.e. two polynomials of degree no greater than n-1
a = a[0] + a[1]X + ... + a[n-1]X^(n-1)
b = b[0] + b[1]X + ... + b[n-1]X^(n-1)
Then their "truncated" product is defined as the polynomial
a * b = c[0] + c[1]X + ... +c[n-1]X^(n-1)
where the c[k] coefficients are computed as follows:
Reverse b[0]..b[n-1] to get b[n-1]..b[0].
Rotate the result of step 1 above k+1 times to the right and get b[k]..b[0]b[n-1]..b[k+1]
Denote with b_k[0]..b_k[n-1] the array calculated in 2.
Now define
c[k] = a[0]b_k[0] + a[1]b_k[1] + ... + a[n-1]b_k[n-1].
This operation can also be carried out by multiplying the polynomials a and b in the usual way and then reducing the result modulo X^n - 1, i.e. folding the coefficient of each X^(n+j) term back onto X^j. The reason for the algorithm above is to avoid materializing the full degree-(2n-2) product before reducing it.
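In code, the computation above is just a cyclic convolution; a small C++ sketch (my own illustration, not NTRU reference code):

#include <vector>

// "Star" product in Z[X]/(X^n - 1): c[k] = sum over i of a[i] * b[(k - i) mod n].
std::vector<int> star_multiply(const std::vector<int>& a, const std::vector<int>& b) {
    const int n = static_cast<int>(a.size());   // both polynomials have n coefficients
    std::vector<int> c(n, 0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            c[(i + j) % n] += a[i] * b[j];      // X^(i+j) wraps around because X^n = 1
    return c;
}

(In NTRU the coefficients are additionally reduced modulo a small integer q, which this sketch omits.)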

Is there an iterative way to calculate radii along a scanline?

I am processing a series of points which all have the same Y value, but different X values. I go through the points by incrementing X by one. For example, I might have Y = 50 and X is the integers from -30 to 30. Part of my algorithm involves finding the distance to the origin from each point and then doing further processing.
After profiling, I've found that the sqrt call in the distance calculation is taking a significant amount of my time. Is there an iterative way to calculate the distance?
In other words:
I want to efficiently calculate r[n] = sqrt(x[n]*x[n] + y*y). I can save information from the previous iteration. Each iteration changes by incrementing x, so x[n] = x[n-1] + 1. I cannot use sqrt or trig functions because they are too slow, except at the beginning of each scanline.
I can use approximations as long as they are good enough (less than 0.1% error) and the errors introduced are smooth (I can't bin to a pre-calculated table of approximations).
Additional information:
x and y are always integers between -150 and 150
I'm going to try a couple ideas out tomorrow and mark the best answer based on which is fastest.
Results
I did some timings
Distance formula: 16 ms / iteration
Pete's interpolating solution: 8 ms / iteration
wrang-wrang's pre-calculation solution: 8 ms / iteration
I was hoping the test would decide between the two, because I like both answers. I'm going to go with Pete's because it uses less memory.
Just to get a feel for it: for your range, y = 50, x = 0 gives r = 50 and y = 50, x = +/- 30 gives r ~= 58.3. You want an approximation good to +/- 0.1%, or +/- 0.05 absolute. That's much lower accuracy than most library sqrt implementations provide.
Two approximate approaches: calculate r by interpolating from the previous value, or use a few terms of a suitable series.
Interpolating from previous r
r = (x^2 + y^2)^(1/2)
dr/dx = (1/2) · 2x · (x^2 + y^2)^(-1/2) = x/r
double r = 50;
for ( int x = 0; x <= 30; ++x ) {
    double r_true = Math.sqrt ( 50*50 + x*x );
    System.out.printf ( "x: %d r_true: %f r_approx: %f error: %f%%\n", x, r_true, r, 100 * Math.abs ( r_true - r ) / r );
    r = r + ( x + 0.5 ) / r;   // advance r by the midpoint estimate of dr/dx = x/r
}
Gives:
x: 0 r_true: 50.000000 r_approx: 50.000000 error: 0.000000%
x: 1 r_true: 50.009999 r_approx: 50.010000 error: 0.000002%
....
x: 29 r_true: 57.801384 r_approx: 57.825065 error: 0.040953%
x: 30 r_true: 58.309519 r_approx: 58.335225 error: 0.044065%
which seems to meet the requirement of 0.1% error, so I didn't bother coding the next one, as it would require quite a bit more calculation steps.
Truncated Series
The Taylor series for sqrt(1 + x) for x near zero is
sqrt(1 + x) = 1 + (1/2)x - (1/8)x^2 + ... + (-1/2)^(n+1) x^n + ...
Using r = y sqrt(1 + (x/y)^2), you're looking for the first neglected term t = (-1/2)^(n+1) (0.36)^n to have magnitude less than 0.001: log(0.002) > n log(0.18), i.e. n > 3.6, so taking terms up to x^4 should be OK.
#include <cmath>
#include <vector>

const int Y = 10000;                         // precision of the precomputed line; increase for more accuracy
std::vector<double> D(Y + 1);                // D[x] = distance from the origin to (x, Y)

void precompute() {
    for (int x = 0; x <= Y; ++x)
        D[x] = std::sqrt((double)Y * Y + (double)x * x);
}

double norm(double x, double y) {
    if (y == 0) return x;
    if (x > y) return norm(y, x);            // ensure x <= y so x*s stays within the table
    double s = Y / y;                        // s*(x, y) lands on the precomputed line y = Y
    return D[std::lround(x * s)] / s;        // look up the scaled distance, then undo the scaling
}
If your coordinates are smooth, then the idea can be extended with linear interpolation. For more precision, increase Y.
The idea is that s*(x,y) is on the line y=Y, which you've precomputed distances for. Get the distance, then divide it by s.
I assume you really do need the distance and not its square.
You may also be able to find a general sqrt implementation that sacrifices some accuracy for speed, but I have a hard time imagining that beating what the FPU can do.
By linear interpolation, I mean changing the D[std::lround(x * s)] lookup to something like:
double lookup(double x) {                    // x here is the scaled coordinate x*s, in [0, Y]
    long f = (long)std::floor(x);
    if (f >= Y) return D[Y];                 // guard the upper edge of the table
    double a = x - f;
    return D[f] * (1 - a) + D[f + 1] * a;    // blend the two neighboring table entries
}
This doesn't really answer your question, but may help...
The first questions I would ask would be:
"do I need the sqrt at all?".
"If not, how can I reduce the number of sqrts?"
then yours: "Can I replace the remaining sqrts with a clever calculation?"
So I'd start with:
Do you need the exact radius, or would radius-squared be acceptable? There are fast approximations to sqrt, but they are probably not accurate enough for your spec.
Can you process the image using mirrored quadrants or eighths? By processing all pixels at the same radius value in a batch, you can reduce the number of calculations by 8x.
Can you precalculate the radius values? You only need a table that is a quarter (or possibly an eighth) of the size of the image you are processing, and the table would only need to be precalculated once and then re-used for many runs of the algorithm.
So clever maths may not be the fastest solution.
Well, there's always trying to optimize your sqrt; the fastest one I've seen is the old Carmack Quake 3 sqrt:
http://betterexplained.com/articles/understanding-quakes-fast-inverse-square-root/
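From memory, the trick at that link looks roughly like this (a C++ sketch using memcpy for well-defined type punning; note it approximates the reciprocal square root, with a worst-case error around 0.2% after one Newton step, so check it against the 0.1% spec):

#include <cstdint>
#include <cstring>

float fast_rsqrt(float number) {
    float x2 = number * 0.5f, y = number;
    std::uint32_t i;
    std::memcpy(&i, &y, sizeof i);     // reinterpret the float's bit pattern as an integer
    i = 0x5f3759df - (i >> 1);         // the famous "magic constant" initial guess
    std::memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y);       // one Newton-Raphson refinement step
    return y;                          // approximately 1/sqrt(number)
}

Since it returns the reciprocal root, the distance would be recovered as r = (x*x + y*y) * fast_rsqrt(x*x + y*y).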
That said, since sqrt is non-linear, you're not going to be able to do simple linear interpolation along your line to get your result. The best idea is to use a table lookup since that will give you blazing fast access to the data. And, since you appear to be iterating by whole integers, a table lookup should be exceedingly accurate.
Well, you can mirror around x=0 to start with (you need only compute n>=0, and then dupe those results to the corresponding n<0). After that, I'd take a look at using the derivative of sqrt(a^2+b^2) (or the corresponding sin) to take advantage of the constant dx.
If that's not accurate enough, may I point out that this is a pretty good job for SIMD, which will provide you with a reciprocal square root op on both SSE and VMX (and shader model 2).
This is sort of related to a HAKMEM item:
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:
NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X
This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.
The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
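To see the item in action, a tiny C++ demo of its update rule (my own sketch; epsilon is chosen as a power of 2, as the item suggests):

#include <cstdio>

int main() {
    double x = 100.0, y = 0.0;             // the initial point sets the size of the "circle"
    const double epsilon = 1.0 / 16.0;     // power of 2: the multiplies become shifts in integer math
    for (int i = 0; i < 8; ++i) {
        std::printf("x = %8.3f  y = %8.3f\n", x, y);
        x = x - epsilon * y;
        y = y + epsilon * x;               // deliberately uses the NEW x, as in the item
    }
}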