Given two rank-2 complex arrays, I want to compute their element-wise (Hadamard) product:
complex(8) :: A(N,N), B(N,N), C(N,N)
...
do j = 1, N
   do i = 1, N
      C(i,j) = A(i,j)*B(i,j)
   enddo
enddo
Is there any BLAS routine to optimize this, or is the loop above already the most efficient way to write the Hadamard product? Or does the compiler do the job for me in such a simple case?
I code in Fortran so the first index is the fast index.
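For comparison only, and not as the Fortran answer: in NumPy the same element-wise product is a single vectorized expression, with the traversal order handled by the library (the sizes below are made up):

import numpy as np

N = 512
A = np.random.rand(N, N) + 1j * np.random.rand(N, N)
B = np.random.rand(N, N) + 1j * np.random.rand(N, N)

C = A * B  # element-wise (Hadamard) product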
for(i=1;i<=n;i=i*2)
{
    for(j=1;j<=i;j++)
    {
    }
}
How is the complexity of the code above O(n log n)?
Time complexity in terms of what? If you want to know how many inner-loop iterations the algorithm performs, it is not O(n log n). If you want to also count the arithmetic operations of the loops themselves, see further below. And if you were literally to feed that code to a compiler, chances are it would notice that the code does nothing and optimize the loops away, resulting in constant O(1) time complexity. Based only on what you've given us, though, I would interpret the question as being about time complexity in terms of whatever might be inside the inner loop, not counting the arithmetic operations of the loops themselves. If so:
Consider an iteration of your inner loop a constant-time operation, then we just need to count how many iterations the inner loop will make.
You will find that it will make
1 + 2 + 4 + 8 + ... + n
iterations, if n is a power of two. If it is not, the loop stops a bit sooner, but this gives our upper limit.
We can write this more generally as the sum of 2^i where i ranges from 0 to log2(n).
Now, applying the formula for geometric sums, you will find that
sum of 2^i for i = 0 to log2(n) = 2^(log2(n) + 1) - 1 = 2n - 1.
So we have a time complexity of O(2n - 1) = O(n), if we don't take the arithmetic operations of the loops into account.
If you wish to verify this experimentally, the best way is to write code that counts how many times the inner loop runs. In JavaScript, you could write it like this:
function f(n) {
    let c = 0;
    for (let i = 1; i <= n; i = i * 2) {
        for (let j = 1; j <= i; j++) {
            ++c;
        }
    }
    console.log(c);
}
f(2);
f(4);
f(32);
f(1024);
f(1 << 20);
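If the 2n - 1 count is right, these calls should print 3, 7, 63, 2047 and 2097151 respectively, since every argument is a power of two.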
If you do want to take the arithmetic operations into account, then it depends a bit on your assumptions, but you can indeed pick up some logarithmic factors. It depends on how you formulate the question and how you define an operation.
First, we need to estimate the number of high-level operations executed for different n. In this case the inner loop body is the operation you want to count, if I understood the question right.
If that is hard to do by hand, you can automate it. I used MATLAB for the example code since the question has no language tag. The testing code looks like this:
% Reasonable number of input sizes placed in an array; change it to fit your needs
x = 1:1:100;
% Plot linear reference function
plot(x, x, 'DisplayName', 'O(n)', 'LineWidth', 2);
hold on;
% Plot n*log(n) reference function
plot(x, x.*log(x), 'DisplayName', 'O(nln(n))', 'LineWidth', 2);
hold on;
% Apply our function to each element of x
measured = arrayfun(@(v) test(v), x);
% Plot the number of high-level operations performed by our function for each element of x
plot(x, measured, 'DisplayName', 'Measured', 'LineWidth', 2);
legend

% Our function
function k = test(n)
    % Counter for operations
    k = 0;
    % Outer loop, same as for(i=1;i<=n;i=i*2)
    i = 1;
    while i <= n
        % Inner loop
        for j = 1:1:i
            % Count operations
            k = k + 1;
        end
        i = i*2;
    end
end
The resulting plot shows the measured operation count lying above the linear curve but below the n*log(n) curve.
Our complexity is worse than linear but not worse than O(n log n), so we choose O(n log n) as an upper bound.
Furthermore, the upper bound should be:
O(n * log2(n))
The worst case is n being a power of two, i.e. n = 2^x for some natural number x.
The inner loop is evaluated at most n times, and the outer loop log2(n) times (logarithm base 2).
I am looking to write the following sum of products in the most efficient way in Python:
sum for i = a..n of ( product for j = a..n, j != i, of f(j,i) ) * ( product for k = b..n of g(k,i) )
Note that f(j,i) and g(k,i) are just some functions of the indices, usually fractions of the form j^c_1 / i^c_2. Furthermore, when j == i, the first product has to evaluate to 1. I have thought of a Kronecker delta as a workaround for this, but every suggestion on how to impose such conditions is welcome.
The issue is that I need the "limit" behaviour of this expression, so I would like to iterate up to a large n. I have not written products before, mostly sums, and I was wondering how to go about this. I am more or less familiar with loops, but I have read that there are other, more efficient ways of accomplishing this, e.g. iterators.
What is the best way to do this and could you please provide a working example?
My attempt so far is very basic, as I am aware of the structure of the sum/product but not how to evaluate it.
Here is what I have so far:
product_1 = 1
product_2 = 1
for i in range(a, n + 1, 1):
    for j in range(a, n + 1, 1):
        if i == j:
            pass  # skip this factor, which is the same as multiplying by 1
        else:
            product_1 *= f(i,j)
    for k in range(b, n + 1, 1):
        product_2 *= g(i,k)
Then somehow multiply the two products together and accumulate the result in a summation variable?
The simple way to code this would be:
result = 0
for i in range(a, n + 1, 1):
    product_1 = 1
    product_2 = 1
    for j in range(a, n + 1, 1):
        if i != j:
            product_1 *= f(i,j)
    for k in range(b, n + 1, 1):
        product_2 *= g(i,k)
    result += product_1 * product_2
To time your code, I suggest you check this page, which is a great introduction to timing in Python.
To evaluate this faster, check the built-in map function or, if you want to work with NumPy, the numpy.vectorize function, which you could use to vectorize both f and g in this context. You could also vectorize the outer loop.
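As a minimal sketch of the NumPy route, assuming f and g are simple element-wise functions of their integer arguments (the f and g below are made-up stand-ins, not your actual functions):

import numpy as np

def f(j, i):  # hypothetical stand-in, e.g. j^c1 / i^c2 with c1 = 1, c2 = 2
    return j / i**2

def g(k, i):  # hypothetical stand-in
    return k / i

def sum_of_products(a, b, n):
    i = np.arange(a, n + 1)  # summation index (columns)
    j = np.arange(a, n + 1)  # first product index (rows)
    k = np.arange(b, n + 1)  # second product index (rows)
    # Evaluate f on the whole (j, i) grid, then force the j == i entries
    # to 1 so they drop out of the product (the Kronecker-delta trick).
    fji = np.where(j[:, None] == i[None, :], 1.0, f(j[:, None], i[None, :]))
    gki = g(k[:, None], i[None, :])
    product_1 = fji.prod(axis=0)  # product over j, one value per i
    product_2 = gki.prod(axis=0)  # product over k, one value per i
    return (product_1 * product_2).sum()

print(sum_of_products(a=1, b=1, n=20))

Note that for very large n the raw products can overflow or underflow floating point, in which case summing logarithms is the usual workaround.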
I'm new to working out time complexities and I can't seem to understand the logic behind arriving at this result:
100 (n(n+1) / 2)
For this function:
function a() {
    int i, j, k, n;
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= i; j++) {
            for (k = 1; k <= 100; k++) {
                print("hello");
            }
        }
    }
}
Here's how I understand its algorithm:
i = 1, 2, 3, 4...n
j = 1, 2, 3, 4...(dependent to i, which can be 'n')
k = 1(100), 2(100), 3(100), 4(100)...n(100)
= 100 [1, 2, 3, 4.....]
If I use the algorithm above to simulate the end equation, I get this result:
End Equation:
100 (n(n+1) / 2)
Simulation
i = 1, 2, 3, 4... n
j = 1, 2, 3, 4... n
k = 100, 300, 600, 10000
I usually study these on YouTube and get the idea of Big O, Omega and Theta, but when it comes to this one, I can't figure out how they end up with the equation given above. Please help, and if you have some best practices, please share.
EDIT:
As for my own guess at the answer, I think it should be this:
100 ((n+n)/2) or 100 (2n / 2)
Source:
https://www.youtube.com/watch?v=FEnwM-iDb2g
At around: 15:21
I think you've got i and j correct, except that it's not clear why you say k = 100, 300, 600, ...; in every iteration of the middle loop, k simply runs from 1 to 100.
So let's think through the inner loop first:
k from 1 to 100:
    // Do something
The inner loop is O(100) = O(1) because its runtime does not depend on n. Now we analyze the outer loops:
i from 1 to n:
    j from 1 to i:
        // Do inner stuff
Now let's count how many times Do inner stuff executes:
i = 1 1 time
i = 2 2 times
i = 3 3 times
... ...
i = n n times
This is the classic triangular sum 1 + 2 + 3 + ... + n = n(n+1) / 2. Therefore, the time complexity of the outer two loops is O(n(n+1)/2), which reduces to O(n^2).
The time complexity of the entire thing is O(1 * n^2) = O(n^2) because nesting loops multiplies the complexities (assuming the runtime of the inner loop is independent of the variables in the outer loops). Note here that if we had not reduced at various phases, we would be left with O(100(n)(n+1)/2), which is equivalent to O(n^2) because of the properties of big-O notation.
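As a quick sanity check, here is a hypothetical Python translation of the loops that counts the print calls and compares the count against the closed form:

def count_prints(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(1, 101):
                count += 1  # stands in for print("hello")
    return count

for n in (1, 5, 10):
    assert count_prints(n) == 100 * n * (n + 1) // 2
    print(n, count_prints(n))  # 1 100, 5 1500, 10 5500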
SOME TIPS:
You asked for some best practices. Here are some "rules" that I made use of in analyzing the example you posted.
In time complexity analysis, we can ignore multiplication by a constant. This is why the inner loop is still O(1) even though it executes 100 times. Understanding this is the basis of time complexity. We are analyzing runtime on a large scale, not counting the number of clock cycles.
With nested loops whose runtimes are independent of each other, just multiply the complexities. Nesting the O(1) loop inside the outer O(n^2) loops resulted in O(n^2) code.
Some more reduction rules: http://courses.washington.edu/css162/rnash/quarters/current/labs/bigOLab/lab9.htm
If you can break code up into smaller pieces (in the same way we analyzed the k loop separately from the outer loops) then you can take advantage of the nesting rule to find the combined complexity.
Note on Omega/Theta:
Theta is the "exact bound" for time complexity whereas Big-O and Omega are upper and lower bounds respectively. Because there is no random data (like there is in a sorting algorithm), we can get an exact bound on the time complexity and the upper bound is equal to the lower bound. Therefore, it does not make any difference if we use O, Omega or Theta in this case.
Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2, so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and it now became 2^(2n), so the answer would be that the new time is the square of the previous time?
Big O does not tell you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
1                         2
2                         4
3                         8
4                         16
5                         32
...                       ...
10                        1024
20                        1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be at most M * 2^n for some constant M and all n past some starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (and that bound is not even guaranteed to be tight).
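A quick way to see this is to evaluate each formula at n and 2n and compare. The plain-Python sketch below (illustrative only) prints the growth factor for each; the factors come out wildly different (roughly 3.6e5, 1.0e6, 2 and 1 for n = 20) even though all four formulas are O(2^n):

fs = {
    "f1": lambda n: 2**n + 5000 * n**2 + 12300,
    "f2": lambda n: 500 * 2**n + 6,
    "f3": lambda n: 500 * n**2 + 25000 * n + 456000,
    "f4": lambda n: 400000000,
}
n = 20
for name, f in fs.items():
    print(name, f(2 * n) / f(n))  # growth factor when n doubles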
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f(n) = 2^n. If we were to compare that to g(n) = 2^(2n) = 4^n, how would f(n) and g(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some choice of M and n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some choice of M and n0? This is where you run into problems: no constant M works, because the ratio 4^n / 2^n is itself 2^n, which outgrows any fixed M as n gets arbitrarily large. Thus, 4^n is not upper-bounded by O(2^n).
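A small numeric illustration of the last point, in plain Python: the ratio 4^n / 2^n eventually exceeds any fixed M.

for n in (1, 5, 10, 20):
    print(n, 4**n / 2**n)  # 2.0, 32.0, 1024.0, 1048576.0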
See the comments for further explanation, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of a O(n) operation would be a loop that could iterate through all your array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
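Put together as a runnable snippet (illustrative only):

arr = [1, 2, 3, 4, 5]

first = arr[0]  # O(1): one index access, independent of len(arr)

for elem in arr:  # O(n): visits every element, so work scales with len(arr)
    print(elem)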
See the Big-O Cheat Sheet for complementary information.
I would like to optimize the following Fortran code for speed:
DO ii = 1, N
   A(:,:) = A(:,:) + C(ii) * B(:,:,ii)
ENDDO
where A has shape (M,M), B has shape (M,M,N) and C has length N.
I was thinking of using BLAS:
DO jj = 1, M
   CALL zgemm('n', 'n', 1, M, N, cone, C(:), cone, B(jj,:,:), &
              N, czero, A(:,:), cone)
ENDDO
but this does not look very efficient, as I still have a loop. Is it possible to use the increment arguments instead, and how?
In my case N is always > M.
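One standard reformulation, offered here as a sketch rather than a definitive answer: since A(i,j) = sum over ii of C(ii) * B(i,j,ii), viewing B as an (M*M) x N matrix turns the whole loop into a single matrix-vector product, which in Fortran would be one zgemv call on B addressed with leading dimension M*M. Here is the idea in NumPy with made-up sizes:

import numpy as np

M, N = 4, 16  # example sizes; the question states N > M
rng = np.random.default_rng(0)
B = rng.standard_normal((M, M, N)) + 1j * rng.standard_normal((M, M, N))
C = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# One matrix-vector product over B viewed as an (M*M, N) matrix
A = (B.reshape(M * M, N) @ C).reshape(M, M)

# Reference loop for comparison
A_ref = np.zeros((M, M), dtype=complex)
for ii in range(N):
    A_ref += C[ii] * B[:, :, ii]
assert np.allclose(A, A_ref)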