Is the time complexity of the following cases correct?

I am a bit confused about the (average case) time complexity of the following cases:
I have N=3 arrays, each with a different number of elements:
Array1 has n1 elements
Array2 has n2 elements
Array3 has n3 elements
Case A: I perform quicksort on each array sequentially, from the first array to the last.
In this case, will the time complexity be N*O(nlogn) (where n stands generically for the number of elements of an array), or O(n1logn1 + n2logn2 + n3logn3), which is asymptotically equal to O(max(n1logn1, n2logn2, n3logn3))?
Case B: I perform quicksort on each array in parallel.
In this case, will the time complexity be O(max(n1logn1, n2logn2, n3logn3))?
Case C: There is a 50% chance of performing quicksort (on all arrays, in parallel) and 50% chance of not sorting any array.
Isn't this case essentially the same as case B? I.e. 0.5 * O(max(n1logn1, n2logn2, n3logn3)), which is asymptotically equal to O(max(n1logn1, n2logn2, n3logn3))?
Therefore, do all cases have the same time complexity, O(max(n1logn1, n2logn2, n3logn3))?
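For concreteness, here is a minimal Java sketch of Cases A and B (the array sizes, the randomArray helper, and the use of Arrays.sort as a stand-in for quicksort are my own placeholder assumptions):
import java.util.Arrays;
import java.util.Random;

public class SortCases {
    public static void main(String[] args) throws InterruptedException {
        int[] array1 = randomArray(1_000);   // n1 elements
        int[] array2 = randomArray(10_000);  // n2 elements
        int[] array3 = randomArray(100_000); // n3 elements

        // Case A: sort sequentially; the total work is
        // n1*log(n1) + n2*log(n2) + n3*log(n3).
        Arrays.sort(array1);
        Arrays.sort(array2);
        Arrays.sort(array3);

        // Case B: sort in parallel, one thread per array; the elapsed time is
        // dominated by the slowest sort, i.e. max(n1*log(n1), n2*log(n2), n3*log(n3)).
        // (In a real measurement you would sort fresh copies, not the already-sorted arrays.)
        Thread t1 = new Thread(() -> Arrays.sort(array1));
        Thread t2 = new Thread(() -> Arrays.sort(array2));
        Thread t3 = new Thread(() -> Arrays.sort(array3));
        t1.start(); t2.start(); t3.start();
        t1.join(); t2.join(); t3.join();
    }

    private static int[] randomArray(int n) {
        return new Random(42).ints(n).toArray();
    }
}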

Related

Median of medians algorithm: why divide the array into blocks of size 5? What if we divide into groups of 4, and does this affect the time complexity?

I was a little confused about setting up the recurrence (T). What if we divide the array into groups of 4 (n/4)? And why is it always recommended to use an odd group size, and specifically 5? When we divide into groups of 5 the recurrence is
T(n) <= Theta(n) + T(n/5) + T(7n/10)
What would the recurrence (T) be in the n/4 case, and does it affect the time complexity as well? Because when we divide the array into groups of 5 the time complexity is Theta(n).
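For comparison, here is my own sketch of the standard textbook argument (not something stated in the question), setting the two recurrences side by side:
Groups of 5: at least 3n/10 elements are guaranteed to fall on one side of the pivot, so the recursive call is on at most 7n/10 elements:
T(n) <= Theta(n) + T(n/5) + T(7n/10), and since n/5 + 7n/10 = 9n/10 < n, this solves to T(n) = Theta(n).
Groups of 4: only about n/4 elements are guaranteed to be discarded, so the recursive call can be on up to 3n/4 elements:
T(n) <= Theta(n) + T(n/4) + T(3n/4), and since n/4 + 3n/4 = n, this solves to T(n) = Theta(n log n), not Theta(n).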

What is the computational cost of finding an element in a sorted array?

Say that I have an array of size n that has been sorted using quicksort, e.g. X = [1, 2, 3, 6, 7]. I want to match all the values in this array with n values in another array that has a random order, e.g. Y = [3, 7, 6, 2, 1].
I can iterate through each element of Y and compare it to the middle value of X, i.e. 3, so I would only need to complete at most n/2 checks. What would be the total computational complexity of doing this for all values of Y? I am looking for a tight bound.
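For what it's worth, here is a sketch of the straightforward binary-search approach (my own illustration; the question's idea of always comparing against the middle value is a different strategy):
import java.util.Arrays;

public class MatchSorted {
    public static void main(String[] args) {
        int[] x = {1, 2, 3, 6, 7}; // already sorted
        int[] y = {3, 7, 6, 2, 1}; // random order

        // Each of the n values in y is looked up in x with a binary search,
        // which costs O(log n), so matching all of them is Theta(n log n) in total.
        for (int value : y) {
            int idx = Arrays.binarySearch(x, value);
            System.out.println(value + (idx >= 0 ? " found at index " + idx : " not found"));
        }
    }
}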

Why Are Time Complexities Like O(N + N) Equal To O(N)? [duplicate]

This question already has answers here:
Why is the constant always dropped from big O analysis?
I commonly use a site called LeetCode to practice problems. In a lot of answers in the discuss section of a problem, I noticed that run times like O(N + N) or O(2N) get changed to O(N). For example:
int[] nums = {1, 2, 3, 4, 5};
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
This becomes O(N), even though it iterates through nums twice. Why is it not O(2N) or O(N + N)?
In time complexity, constant coefficients do not play a role. This is because the actual time it takes an algorithm to run depends also on the physical constraints of the machine. This means that if you run your code on a machine which is twice as fast as another, all other conditions being equal, it would run in about half the time with the same input.
But that's not the same thing as comparing two algorithms with different time complexities. For example, when you compare an O(N^2) algorithm to an O(N) algorithm, the running time of the O(N^2) one grows so fast with the input size that the O(N) one can never catch up with it, no matter how big you choose its constant coefficient.
Let's say the constant coefficient of the O(N) algorithm is 1000 instead of just 2. Then for input sizes N > 1000, the running time of the O(N^2) algorithm, which grows in proportion to N * N, overtakes that of the O(N) algorithm, which remains proportional to 1000 * N.
The time complexity O(n + n) reduces to O(2n), and 2 is a constant, so the running time essentially depends only on n.
Hence the time complexity O(2n) equates to O(n).
Likewise, something like O(2n + 3) is still O(n), as the time essentially depends on the size of n.
Now suppose some code is O(n^2 + n); it will be O(n^2), because as the value of n increases the effect of the n term becomes insignificant compared to the effect of n^2.
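To make the constant-factor point concrete, here is a small sketch (my own illustration) that simply counts the loop steps:
public class ConstantFactors {
    public static void main(String[] args) {
        for (int n : new int[]{1_000, 2_000, 4_000}) {
            long steps = 0;
            for (int i = 0; i < n; i++) steps++; // first pass: n steps
            for (int i = 0; i < n; i++) steps++; // second pass: n more steps
            // steps is always exactly 2n: doubling n doubles the work, which is
            // the linear growth O(n) describes; the factor 2 is absorbed into the
            // constant M in the definition of Big-O.
            System.out.println("n = " + n + ", steps = " + steps);
        }
    }
}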

time complexity for loop justification

Hi, could anyone explain why the first one is True and the second one is False?
For the first loop, the number of times the loop gets executed is k,
where, for a given n, i takes the values 1, 2, 4, ..., staying less than n.
2^k <= n
or, k <= log(n).
This implies that k, the number of times the first loop gets executed, is log(n), i.e. the time complexity here is O(log(n)).
The second loop's execution count is not based on p, as p is not used in the decision statement of the for loop. p does take different values inside the loop, but it doesn't influence the decision statement or the number of times p*p gets executed, so its time complexity is O(n).
O(logn):
for (i = 1; i < n; i = i * c) { /* any O(1) expression */ }
Here, the time complexity is O(logn) because the index i is multiplied/divided by a constant value on every iteration. (Note that i must start at a non-zero value; starting at i = 0 would keep i at 0 forever.)
In the second case,
for (p = 2, i = 1; i < n; i++) { p = p * p; }
The increment is constant, i.e. i = i + 1, so the loop runs n times irrespective of the value of p. Hence the loop alone has a complexity of O(n). Considering naive multiplication, p = p*p is an O(n) operation where n is the size of p, hence the complexity should be O(n^2).
Let me summarize with an example: suppose the value of n is 8. Then the possible values of i are 1, 2, 4, 8, and as soon as i reaches 8 the loop will break. You can see the loop runs 3 times, i.e. log(n) times, as the value of i keeps increasing by 2x. Hence, True.
For the second part, it is a normal loop which runs for all values of i from 1 to n, and the value of p increases by the factor p^2 each time, so it should be O(p^2 n). That's why it is wrong.
In order to understand why some algorithm is O(log n) it is enough to check what happens when n = 2^k (i.e., we can restrict ourselves to the case where log n happens to be an integer k).
If we inject this into the expression
for(i=1; i<2^k; i=i*2) s+=i;
we see that the body will be evaluated for i = 1, 2, 4, 8, ..., i.e., 2^0, 2^1, 2^2, 2^3, ..., up to the last one 2^(k-1) (after which i becomes 2^k and the loop stops). In other words, the body of the loop will be evaluated k times. Therefore, if we assume that the body is O(1), we see that the complexity is k*O(1) = O(k) = O(log n).
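A quick way to check this empirically (a small sketch I added, assuming the loop body is O(1)):
public class DoublingLoop {
    public static void main(String[] args) {
        for (int k = 1; k <= 20; k++) {
            int n = 1 << k; // n = 2^k
            int iterations = 0;
            for (int i = 1; i < n; i *= 2) {
                iterations++; // O(1) body
            }
            // iterations equals k = log2(n) for every n = 2^k
            System.out.println("n = " + n + ", iterations = " + iterations + ", log2(n) = " + k);
        }
    }
}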

Big O notation and measuring time according to it

Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2 so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it became 2^(2n) so the answer would be that the new time is the power of 2 of the previous time?
Big O is not for telling you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
 1                        2
 2                        4
 3                        8
 4                        16
 5                        32
 ...                      ...
10                        1024
20                        1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be at most M * 2^n for some constant M and starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (and that bound is not even guaranteed to be tight).
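To see how differently these four functions respond to a doubling of the input even though all of them are O(2^n), here is a quick sketch using the same formulas (with a deliberately small n so the numbers stay finite):
public class DoublingEffect {
    // The four example running-time formulas from above, all of which are O(2^n).
    static double f1(int n) { return Math.pow(2, n) + 5000.0 * n * n + 12300; }
    static double f2(int n) { return 500 * Math.pow(2, n) + 6; }
    static double f3(int n) { return 500.0 * n * n + 25000.0 * n + 456000; }
    static double f4(int n) { return 400_000_000.0; }

    public static void main(String[] args) {
        int n = 10;
        // The ratio f(2n)/f(n) is wildly different for each function:
        System.out.println("f1: " + f1(2 * n) / f1(n)); // ~6x (the 2^n term is not dominant yet)
        System.out.println("f2: " + f2(2 * n) / f2(n)); // ~1024x (roughly 2^n)
        System.out.println("f3: " + f3(2 * n) / f3(n)); // ~1.5x
        System.out.println("f4: " + f4(2 * n) / f4(n)); // exactly 1 (no change at all)
    }
}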
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some constant M and starting value n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some constant M and starting value n0? This is where you run into problems: for no constant value of M can M * 2^n keep up with 4^n as n gets arbitrarily large. Thus, 4^n is not upper-bounded by O(2^n).
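Spelling out why no constant works: if 4^n <= M * 2^n held for all sufficiently large n, then M >= 4^n / 2^n = 2^n; but 2^n is unbounded as n grows, so no constant M can satisfy the inequality.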
To be clear, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through your whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
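In code, the two operations just described look like this (a trivial sketch):
public class ArrayOps {
    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5};

        // O(1): direct index access takes one step no matter how big the array is.
        System.out.println(arr[0] + ", " + arr[2]);

        // O(n): a loop that visits every element; doubling the array's size
        // doubles the number of iterations.
        for (int elem : arr) {
            System.out.println(elem);
        }
    }
}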
See the Big-O Cheat Sheet for complementary information.