How can we find the XOR of the elements of an array over the range [l, r] with a given number x, provided there are multiple queries of this type? - optimization

For the above question I went through some of the query optimization techniques like square root decomposition and binary indexed trees, but they don't help in solving my problem optimally. If anybody can suggest a query optimization technique through which I can solve this problem, please do.

You can do that in constant time per query, using O(n) space, where n is the size of your array. The initial construction takes O(n) time.
Given an array A, you first need to build the array XOR-A, in this way:
XOR-A[0] = A[0]
for i from 1 to n-1:
    XOR-A[i] = XOR-A[i-1] xor A[i]
Then you can answer a query on the range (l, r) as follows:
get_xor_range(l, r, XOR-A):
    if l == 0: return XOR-A[r]
    return XOR-A[l-1] xor XOR-A[r]
We use the fact that for any x, x xor x = 0. That's what does the trick here.
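A minimal Python sketch of this prefix-XOR idea (function names are mine, not from the question):

def build_prefix_xor(a):
    # prefix[i] = a[0] xor a[1] xor ... xor a[i]
    prefix = []
    running = 0
    for value in a:
        running ^= value
        prefix.append(running)
    return prefix

def xor_range(prefix, l, r):
    # XOR of a[l..r], inclusive, answered in O(1) per query
    if l == 0:
        return prefix[r]
    return prefix[l - 1] ^ prefix[r]

a = [3, 8, 2, 6, 4]
prefix = build_prefix_xor(a)
print(xor_range(prefix, 1, 3))   # 8 ^ 2 ^ 6 = 12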
Edit: Sorry I did not understand the problem well at first.
Here is a method to apply all the updates in O(n + m) time once the queries are sorted (sorting them costs O(m log m)), using O(n) extra space, where n is the size of the array and m is the number of queries.
Notation: the array is A, of size n; the queries are (l_i, r_i, x_i) with 0 <= i < m, each meaning "XOR x_i into every A[j] with l_i <= j <= r_i".
L <- array of tuples (l_i, x_i), sorted in ascending order of l_i
R <- array of tuples (r_i, x_i), sorted in ascending order of r_i
X <- array of size n, initialised to 0 everywhere
cur <- 0                      // XOR of all updates "active" at the current position
il <- 0
ir <- 0
for j from 0 to n-1 do:
    while il < m and L[il].l == j do:     // updates starting at j become active
        cur <- cur xor L[il].x
        il <- il + 1
    X[j] <- cur
    while ir < m and R[ir].r == j do:     // updates ending at j stop applying after j
        cur <- cur xor R[ir].x
        ir <- ir + 1
for j from 0 to n-1 do:
    A[j] <- A[j] xor X[j]
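For reference, the same effect can be achieved without sorting at all, by toggling an XOR "difference" array at l and r + 1 and then prefix-XORing it. A minimal Python sketch (identifiers are mine):

def apply_xor_updates(a, queries):
    # queries are (l, r, x) triples meaning a[l..r] ^= x; runs in O(n + m)
    n = len(a)
    diff = [0] * (n + 1)
    for l, r, x in queries:
        diff[l] ^= x          # x becomes active at position l
        diff[r + 1] ^= x      # ...and inactive just past position r
    running = 0
    for j in range(n):
        running ^= diff[j]    # running = XOR of all x active at position j
        a[j] ^= running
    return a

print(apply_xor_updates([1, 2, 3, 4, 5], [(0, 2, 7), (1, 4, 3)]))
# [6, 6, 7, 7, 6]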


What is the time complexity of a nested for loop that iterates n - 1 - i times?

So if I have a loop like this:
int x, y, z;
for (int i = 0; i < n - 1; i++) {
    for (int j = 0; j < n - 1 - i; j++) {
        x = 1;
        y = 2;
        z = 3;
    }
}
so we start with the x, y, z definition so we have 4 operations there,
int i = 0 occurs once, i < n - 1 and i++ run n - 1 times, int j = 0 runs n - 1 times, j < n - 1 - i and j++ run (n - 1) * (n - 1 - i) times, and the x, y, z assignments run (n - 1) * (n - 1 - i) times as well. So if I were to simplify this, would the above code run at O(n^2)?
so we start with the x, y, z definition so we have 4 operations there
This is not necessary; we only need to count the critical operations (in this case, how often the inner loop body executes).
So if I were to simplify this, would the above code run at O(n²)?
A function T(n) is in O(g(n)) if T(n) <= c*g(n) for all n >= n0, for some constants c > 0, n0 > 0.
So for your code, the inner loop body is executed n - 1 - i times for each i from 0 to n - 2. So we have:
T(n) = (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 <= (1/2) * n^2
which is indeed true for c = 1/2, n0 = 1. Therefore T(n) ∈ O(n²).
You are correct that the complexity is O(n^2). There is more than one way to approach the question of why.
The formal way is to count the number of iterations of the inner loop, which will be n-1 the first time, then n-2, then n-3, ... all the way down to 1, giving a total of n*(n-1)/2 iterations, which is O(n^2).
An informal way is to say the outer loop runs O(n) times, and "on average", i is roughly n/2, so the inner loop runs on average about (n - n/2) = n/2 times, which is also O(n). So the total number of iterations is O(n) * O(n) = O(n^2).
With both of these techniques, it's not enough to just say that the loop body iterates O(n^2) times - we also need to check the complexity of the inner loop body. In this code, the body of the inner loop just does a few assignments, so it has a complexity of O(1). This means the overall complexity of the code is O(n^2) * O(1) = O(n^2). If instead the inner loop body did e.g. a binary search over an array of length n, then that would be O(log n) and the overall complexity of the code would be O(n^2 log n), for example.
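For what it's worth, a quick empirical check of that n*(n-1)/2 count (a throwaway Python sketch, not from the original answers):

def count_inner_iterations(n):
    # count how many times the innermost loop body runs for a given n
    count = 0
    for i in range(n - 1):
        for j in range(n - 1 - i):
            count += 1
    return count

for n in (5, 10, 100):
    print(n, count_inner_iterations(n), n * (n - 1) // 2)
    # both counts agree: 10, 45, 4950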
Yes, you are right. The time complexity of this program will be O(n^2) in the worst case.

How are they calculating the Time Complexity for this Problem

Problem 6: Find the complexity of the below program: 
void function(int n)
{
    int i = 1, s =1;
    while (s <= n)
    {
        i++;
        s += i;
        printf("*");
    }
}
Solution: We can define the terms of 's' according to the relation s_i = s_{i-1} + i. The value of 'i' increases by one in each iteration. The value contained in 's' at the i-th iteration is the sum of the first 'i' positive integers. If k is the total number of iterations taken by the program, then the while loop terminates once 1 + 2 + 3 + ... + k = k(k+1)/2 > n, so k = O(√n).
The time complexity of the above function is O(√n).
FROM: https://www.geeksforgeeks.org/analysis-algorithms-set-5-practice-problems/
I've been looking this over and over. Apparently they are saying the time complexity is O(√n). I don't understand how they arrive at this result. Can anyone break it down in detail?
At the start of the while-loop, we have s = 1; i = 1, and n is some (big) number. In each step of the loop, the following is done,
Take the current i, and increment it by one;
Add this new value for i to the sum s.
It is not difficult to see that successive updates of i form the sequence 1, 2, 3, ..., and s the sequence 1, 1 + 2, 1 + 2 + 3, .... By a result attributed to the young Gauss, the sum of the first k natural numbers 1 + 2 + 3 + ... + k is k(k + 1) / 2. You should recognise that the sequence s fits this description, where k indicates the number of iterations!
The while-loop terminates when s > n, which is now equivalent to finding the lowest iteration number k such that (k(k + 1) / 2) > n. Simplifying for the asymptotic case, this gives a result such that k^2 > n, which we can simplify for k as k > sqrt(n). It follows that this algorithm runs in a time proportional to sqrt(n).
It is clear that k is the first integer such that k(k+1)/2 > n (otherwise the loop would have stopped earlier).
Then k-1 cannot have this same property, which means that (k-1)((k-1)+1)/2 <= n or (k-1)k/2 <= n. And we have the following sequence of implications:
(k-1)k/2 <= n  →  (k-1)k <= 2n
               →  (k-1)^2 < 2n                ; k-1 < k
               →  k <= sqrt(2n) + 1           ; solve for k
                     <= sqrt(2n) + sqrt(2n)   ; 1 < sqrt(2n)
                      = 2*sqrt(2)*sqrt(n)
                      = O(sqrt(n))
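As a sanity check, here is a small Python sketch (mine, not from the answers) that counts the loop's iterations and compares them with sqrt(n):

import math

def iterations(n):
    # replicate the while loop and return how many times its body runs
    i, s, count = 1, 1, 0
    while s <= n:
        i += 1
        s += i
        count += 1
    return count

for n in (100, 10000, 1000000):
    k = iterations(n)
    print(n, k, round(k / math.sqrt(n), 3))
    # the ratio k / sqrt(n) settles near sqrt(2) ~ 1.414, i.e. k = O(sqrt(n))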

Beginner Finite Element code does not solve equation properly

I am trying to write the code for solving the extremely difficult differential equation:
x' = 1
with the finite element method.
As far as I understood, I can obtain the solution u as
u(x) = sum_i u_i phi_i(x)
with the basis functions phi_i(x), while I can obtain the u_i as the solution of the system of linear equations
sum_i u_i ∫ phi_j(x) (D phi_i)(x) dx = ∫ phi_j(x) f(x) dx    (for every j)
with the differential operator D (here only the first derivative). As a basis I am using the tent function:
def tent(l, r, x):
    m = (l + r) / 2
    if x >= l and x <= m:
        return (x - l) / (m - l)
    elif x < r and x > m:
        return (r - x) / (r - m)
    else:
        return 0

def tent_half_down(l, r, x):
    if x >= l and x <= r:
        return (r - x) / (r - l)
    else:
        return 0

def tent_half_up(l, r, x):
    if x >= l and x <= r:
        return (x - l) / (r - l)
    else:
        return 0

def tent_prime(l, r, x):
    m = (l + r) / 2
    if x >= l and x <= m:
        return 1 / (m - l)
    elif x < r and x > m:
        return 1 / (m - r)
    else:
        return 0

def tent_half_prime_down(l, r, x):
    if x >= l and x <= r:
        return -1 / (r - l)
    else:
        return 0

def tent_half_prime_up(l, r, x):
    if x >= l and x <= r:
        return 1 / (r - l)
    else:
        return 0

def sources(x):
    return 1
Discretizing my space:
import numpy as np

n_vertex = 30
n_points = (n_vertex - 1) * 40
space = (0, 5)
x_space = np.linspace(space[0], space[1], n_points)
vertx_list = np.linspace(space[0], space[1], n_vertex)

tent_list = np.zeros((n_vertex, n_points))
tent_prime_list = np.zeros((n_vertex, n_points))

tent_list[0, :] = [tent_half_down(vertx_list[0], vertx_list[1], x) for x in x_space]
tent_list[-1, :] = [tent_half_up(vertx_list[-2], vertx_list[-1], x) for x in x_space]
tent_prime_list[0, :] = [tent_half_prime_down(vertx_list[0], vertx_list[1], x) for x in x_space]
tent_prime_list[-1, :] = [tent_half_prime_up(vertx_list[-2], vertx_list[-1], x) for x in x_space]

for i in range(1, n_vertex - 1):
    tent_list[i, :] = [tent(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
    tent_prime_list[i, :] = [tent_prime(vertx_list[i-1], vertx_list[i+1], x) for x in x_space]
Calculating the system of linear equations:
b = np.zeros((n_vertex))
A = np.zeros((n_vertex, n_vertex))
for i in range(n_vertex):
    b[i] = np.trapz(tent_list[i, :] * sources(x_space))
    for j in range(n_vertex):
        A[j, i] = np.trapz(tent_prime_list[j] * tent_list[i])
And then solving and reconstructing it
u = np.linalg.solve(A,b)
sol = tent_list.T.dot(u)
But it does not work; I am only getting some up-and-down pattern. What am I doing wrong?
First, a couple of comments on terminology and notation:
1) You are using the weak formulation, though you've done this implicitly. A formulation being "weak" has nothing to do with the order of derivatives involved. It is weak because you are not satisfying the differential equation exactly at every location. FE minimizes the weighted residual of the solution, integrated over the domain. The functions phi_j actually discretize the weighting function. The difference when you only have first-order derivatives is that you don't have to apply the Gauss divergence theorem (which simplifies to integration by parts for one dimension) to eliminate second-order derivatives. You can tell this wasn't done because phi_j is not differentiated in the LHS.
2) I would suggest not using "A" as the differential operator. You also use this symbol for the global system matrix, so your notation is inconsistent. People often use "D", since this fits better to the idea that it is used for differentiation.
Secondly, about your implementation:
3) You are using way more integration points than necessary. Your elements use linear interpolation functions, which means you only need one integration point located at the center of the element to evaluate the integral exactly. Look into the details of Gauss quadrature to see why. Also, you've specified the number of integration points as a multiple of the number of nodes. This should be done as a multiple of the number of elements instead (in your case, n_vertex-1), because the elements are the domains on which you're integrating.
4) You have built your system by simply removing the two end nodes from the formulation. This isn't the correct way to specify boundary conditions. I would suggest building the full system first and using one of the typical methods for applying Dirichlet boundary conditions. Also, think about what constraining two nodes would imply for the differential equation you're trying to solve. What function exists that satisfies x' = 1, x(0) = 0, x(5) = 0? You have overconstrained the system by trying to apply 2 boundary conditions to a first-order differential equation.
Unfortunately, there isn't a small tweak that can be made to get the code to work, but I hope the comments above help you rethink your approach.
EDIT to address your changes:
1) Assuming the matrix A is addressed with A[row,col], then your indices are backwards. You should be integrating with A[i,j] = ...
2) A simple way to apply a constraint is to replace one row with the constraint desired. If you want x(0) = 0, for example, set A[0,j] = 0 for all j, then set A[0,0] = 1 and set b[0] = 0. This substitutes one of the equations with u_0 = 0. Do this after integrating.
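For illustration, here is a minimal numpy sketch of that row-replacement step. The helper apply_dirichlet and the tiny 3x3 system are mine, not from the answer; in the question's code the real A and b come from the assembly loop above:

import numpy as np

def apply_dirichlet(A, b, node, value):
    # replace the equation for `node` with the constraint u_node = value
    A[node, :] = 0.0
    A[node, node] = 1.0
    b[node] = value

# tiny demonstration on a 3x3 system:
A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([1.0, 1.0, 1.0])
apply_dirichlet(A, b, node=0, value=0.0)   # enforce u_0 = 0
u = np.linalg.solve(A, b)
print(u)                                   # [0. 1. 1.]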

Haskell Heap Issues with Parameter Passing Style

Here's a simple program that blows my heap to Kingdom Come:
intersect n k z s rs c
  | c == 23   = rs
  | x == y    = intersect (n+1) (k+1) (z+1) (z+s) (f : rs) (c+1)
  | x < y     = intersect (n+1) k (z+1) s rs c
  | otherwise = intersect n (k+1) z s rs c
  where x = (2*n*n) + 4 * n
        y = (k * k + k)
        f = (z, (x `div` 2), (z+s))

p = intersect 1 1 1 0 [] 0

main = do
  putStr (show p)
What the program does is calculate the intersection of two infinite series, stopping when it reaches 23 elements. But that's not important to me.
What's interesting is that as far as I can tell, there shouldn't be much here that is sitting on the heap. The function intersect is recursive, with all recursions written as tail calls. State is accumulated in the arguments, and there is not much of it: 5 integers and a small list of tuples.
If I were a betting person, I would bet that somehow thunks are being built up in the arguments as I do the recursion, particularly on arguments that aren't evaluated on a given recursion. But that's just a wild hunch.
What's the true problem here? And how does one fix it?
If you have a problem with the heap, run the heap profiler, like so:
$ ghc -O2 --make A.hs -prof -auto-all -rtsopts -fforce-recomp
[1 of 1] Compiling Main ( A.hs, A.o )
Linking A.exe ...
Which when run:
$ ./A.exe +RTS -M1G -hy
Produces an A.hp output file:
$ hp2ps -c A.hp
So your heap is full of Integer, which indicates some problem in the accumulating parameters of your functions -- where all the Integers are.
Modifying the function so that it is strict in the lazy Integer arguments (based on the fact you never inspect their value), like so:
{-# LANGUAGE BangPatterns #-}

intersect n k !z !s rs c
  | c == 23   = rs
  | x == y    = intersect (n+1) (k+1) (z+1) (z+s) (f : rs) (c+1)
  | x < y     = intersect (n+1) k (z+1) s rs c
  | otherwise = intersect n (k+1) z s rs c
  where x = (2*n*n) + 4 * n
        y = (k * k + k)
        f = (z, (x `div` 2), (z+s))

p = intersect 1 1 1 0 [] 0

main = do
  putStr (show p)
And your program now runs in constant space, apart from the list of results you're accumulating (though it doesn't terminate for c == 23 in any reasonable time).
If it is OK to get the resulting list reversed, you can take advantage of Haskell's laziness and return the list as it is computed, instead of passing it recursively as an accumulating argument. Not only does this let you consume and print the list as it is being computed (thereby eliminating one space leak right there), you can also factor out the decision about how many elements you want from intersect:
{-# LANGUAGE BangPatterns #-}

intersect n k !z s
  | x == y    = f : intersect (n+1) (k+1) (z+1) (z+s)
  | x < y     = intersect (n+1) k (z+1) s
  | otherwise = intersect n (k+1) z s
  where x = (2*n*n) + 4 * n
        y = (k * k + k)
        f = (z, (x `div` 2), (z+s))

p = intersect 1 1 1 0

main = do
  putStrLn (unlines (map show (take 23 p)))
As Don noted, we need to be careful that accumulating arguments are evaluated in a timely manner instead of building up big thunks. By making the argument z strict we ensure that all arguments will be demanded.
By outputting one element per line, we can watch the result being produced:
$ ghc -O2 intersect.hs && ./intersect
[1 of 1] Compiling Main ( intersect.hs, intersect.o )
Linking intersect ...
(1,3,1)
(3,15,4)
(10,120,14)
(22,528,36)
(63,4095,99)
(133,17955,232)
(372,139128,604)
(780,609960,1384)
...

Order of growth

For
f = n(log(n))^5
g = n^1.01
is
f = O(g),
f = Theta(g), or
f = Omega(g)?
I tried dividing both by n and I got
f = log(n)^5
g = n^0.01
But I am still clueless as to which one grows faster. Can someone help me with this and explain the reasoning behind the answer? I really want to know how one can determine (without a calculator) which one grows faster.
Probably easiest to compare their logarithmic profiles:
If (for some C1, C2, a, k > 0)
f < C1 n log(n)^a
g < C2 n^(1+k)
Then (for large enough n)
log(f) < log(n) + a log(log(n)) + log(C1)
log(g) < log(n) + k log(n) + log(C2)
Both are dominated by log(n) growth, so the question is which residual is bigger. The log(n) residual grows faster than log(log(n)), regardless of how small k or how large a is, so g would grow faster than f.
So in terms of big-O notation: g grows faster than f, so f can (asymptotically) be bounded from above by a function like g:
f(n) < C3 g(n)
So f = O(g). Similarly, g can be bounded from below by f, so g = Omega(f). But f cannot be bounded from below by a function like g, since g will eventually outgrow it. So f != Omega(g) and f != Theta(g).
But aaa makes a very good point: g does not begin to dominate over f until n becomes obscenely large.
I don't have a lot of experience with algorithm scaling, so corrections are welcome.
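To make that last point concrete, here is a rough numerical comparison of the reduced forms log(n)^5 and n^0.01 (my own Python sketch, not from the answer):

import math

def f(n):
    return math.log(n) ** 5    # reduced form of n * log(n)^5 after dividing by n

def g(n):
    return n ** 0.01           # reduced form of n^1.01 after dividing by n

for n in (10.0**3, 10.0**6, 10.0**12, 10.0**100):
    print("%.0e  f=%.3g  g=%.3g" % (n, f(n), g(n)))
# f stays far ahead of g even at n = 1e100; g only overtakes f at
# astronomically large n (the crossover is computed in an answer below).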
I would break this up into several easy, reusable lemmas:
Lemma 1: For a positive constant k, f = O(g) if and only if f = O(k g).
Proof: Suppose f = O(g). Then there exist constants c and N such that |f(n)| < c |g(n)| for n > N.
Thus |f(n)| < (c/k) (k |g(n)| ) for n > N and constant (c/k), so f = O (k g). The converse is trivially similar.
Lemma 2: If h is a positive monotonically increasing function and f and g are positive for sufficiently large n, then f = O(g) if and only if h(f) = O( h(g) ).
Proof: Suppose f = O(g). Then there exist constants c and N such that |f(n)| < c |g(n)| for n > N. Since f and g are positive for n > M, f(n) < c g(n) for n > max(N, M). Since h is monotonically increasing, h(f(n)) < c h(g(n)) for n > max(N, M), and lastly |h(f(n))| < c |h(g(n))| for n > max(N, M) since h is positive. Thus h(f) = O(h(g)).
The converse follows similarly; the key fact being that if h is monotonically increasing, then h(a) < h(b) => a < b.
Lemma 3: If h is an invertible monotonically increasing function, then f = O(g) if and only if f(h) = O(g(h)).
Proof: Suppose f = O(g). Then there exist constants c, N such that |f(n)| < c |g(n)| for n > N. Thus |f(h(n))| < c |g(h(n))| for h(n) > N. Since h(n) is invertible and monotonically increasing, h(n) > N whenever n > h^-1(N). Thus h^-1(N) is the new constant we need, and f(h(n)) = O(g(h(n))).
The converse follows similarly, using g's inverse.
Lemma 4: If h(n) is nonzero for n > M, f = O(g) if and only if f(n)h(n) = O(g(n)h(n)).
Proof: If f = O(g), then for constants c, N, |f(n)| < c |g(n)| for n > N. Since |h(n)| is positive for n > M, |f(n)h(n)| < c |g(n)h(n)| for n > max(N, M) and so f(n)h(n) = O(g(n)h(n)).
The converse follows similarly by using 1/h(n).
Lemma 5a: log n = O(n).
Proof: Let f = log n, g = n. Then f' = 1/n and g' = 1, so for n > 1, g increases more quickly than f. Moreover g(1) = 1 > 0 = f(1), so |f(n)| < |g(n)| for n > 1 and f = O(g).
Lemma 5b: n != O(log n).
Proof: Suppose otherwise for contradiction, and let f = n and g = log n. Then for some constants c, N, |n| < c |log n| for n > N.
Let d = max(2, 2c, sqrt(N+1) ). By the calculation in lemma 5a, since d > 2 > 1, log d < d. Thus
|f(2d^2)| = 2d^2 > 2d(log d) >= d log d + d log 2 = d (log 2d) > 2c log 2d > c log (2d^2) = c g(2d^2) = c |g(2d^2)| for 2d^2 > N, a contradiction. Thus f != O(g).
So now we can assemble the answer to the question you originally asked.
Step 1:
log n = O(n^a)
n^a != O(log n)
For any positive constant a.
Proof: log n = O(n) by Lemma 5a. Thus log n = 1/a log n^a = O(1/a n^a) = O(n^a) by Lemmas 3 (for h(n) = n^a), 4, and 1. The second fact follows similarly by using Lemma 5b.
Step 2:
log^5 n = O(n^0.01)
n^0.01 != O(log^5 n)
Proof: log n = O(n^0.002) by step 1. Then by Lemma 2 (with h(n) = n^5), log^5 n = O( (n^0.002)^5 ) = O(n^0.01). The second fact follows similarly.
Final answer:
n log^5 n = O(n^1.01)
n^1.01 != O(n log^5 n)
In other words,
f = O(g)
f != Theta(g)
f != Omega(g)
Proof: Apply Lemma 4 (using h(n) = n) to step 2.
With practice these rules become "obvious" and second nature, and unless your test requires that you prove your answer, you'll find yourself whipping through these kinds of big-O problems.
how about checking their intersections?
Solve[Log[n] == n^(0.01/5), n]
{{n -> 2.72374}, {n -> 8.70811861815*10^1809}}
I cheated with Mathematica
You can also reason with derivatives:
In[71]:= D[Log[n], n]
Out[71]= 1/n

In[72]:= D[n^(0.01/5), n]
Out[72]= 0.002/n^0.998
Consider what happens as n gets really large: the rate of change of the first tends to zero, while the latter function doesn't lose its derivative (the exponent is greater than 0).
This tells you which is more complex theoretically.
However, in the practical region, the first function is going to grow faster.
This is not 100% mathematically kosher without proving something about logs, but here it goes:
f = log(n)^5
g = n^0.01
We take logs of both:
log(f) = log(log(n)^5)) = 5*log(log(n)) = O(log(log(n)))
log(g) = log(n^0.01) = 0.01*log(n) = O(log(n))
From this we see that the first one grows asymptotically slower, because it has a double log in it and logs grow slowly. A non-formal argument for why this reasoning by taking logs is valid is this: log(n) tells you roughly how many digits there are in the number n. So if the number of digits of g is growing asymptotically faster than the number of digits of f, then surely the actual number g is growing faster than the number f!
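A quick numerical check of this take-the-logs argument (again my own Python sketch, not part of the answer):

import math

for power in (6, 100, 1000, 2000):
    log_n = power * math.log(10)       # natural log of n = 10^power
    log_f = 5 * math.log(log_n)        # log of f = log(n)^5
    log_g = 0.01 * log_n               # log of g = n^0.01
    print("n = 10^%d: log f = %.2f, log g = %.2f" % (power, log_f, log_g))
# log g overtakes log f only once 0.01*log(n) exceeds 5*log(log(n)),
# which happens around n ~ 10^1810, consistent with the Mathematica root above.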