Best case and worst case time complexity

Given the following pseudo code for an array A
x = 0
for i = 0 to n - 2
    for j = i to n - 1
        if A[i] > A[j]:
            x = x + 1
return x
Is the worst case complexity O(n^2) or Theta(n^2) and why? I don't seem to understand the difference between the two.
As for the best case complexity, is it not the same as the worst case complexity because the algorithm still has to run through the same lines?

The dominating operation in this algorithm is the comparison A[i] > A[j]. This comparison is performed exactly n*(n+1)/2 - 1 times, regardless of the contents of A, which is Theta(n^2).
O(n^2) is only an upper bound: saying the algorithm is O(n^2) leaves open the possibility that it runs faster in the best case.
Theta(n^2) is a tight bound: it says the running time grows as n^2 in every case, best and worst.
So the answer is: the complexity is Theta(n^2) because in both best- and worst-case it's n^2.
See: Big-Theta notation and Big-O notation
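A quick way to convince yourself that the best case equals the worst case is to count the comparisons directly. Below is a minimal Java sketch of the pseudocode above (the class and method names are mine); it reports the same count for a sorted array and a reverse-sorted one, because the count depends only on n:

public class ComparisonCount {
    // Mirrors the pseudocode: counts how often A[i] > A[j] is evaluated.
    static long countComparisons(int[] a) {
        int n = a.length;
        long comparisons = 0;
        int x = 0;
        for (int i = 0; i <= n - 2; i++) {
            for (int j = i; j <= n - 1; j++) {
                comparisons++;          // the dominating operation
                if (a[i] > a[j]) {
                    x = x + 1;          // the value of x does not affect the count
                }
            }
        }
        return comparisons;             // always n*(n+1)/2 - 1, whatever the input
    }

    public static void main(String[] args) {
        System.out.println(countComparisons(new int[]{1, 2, 3, 4, 5})); // 14
        System.out.println(countComparisons(new int[]{5, 4, 3, 2, 1})); // 14
    }
}

Both calls print 14 = 5*6/2 - 1; only n matters, which is exactly why Theta applies.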

Why Are Time Complexities Like O(N + N) Equal To O(N)? [duplicate]

I commonly use a site called LeetCode to practice problems. On a lot of answers in the discuss section of a problem, I noticed that run times like O(N + N) or O(2N) get changed to O(N). For example:
int[] nums = {1, 2, 3, 4, 5};
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
This becomes O(N), even though it iterates through nums twice. Why is it not O(2N) or O(N + N)?
In time complexity, constant coefficients do not play a role. This is because the actual time it takes an algorithm to run depends also on the physical constraints of the machine. This means that if you run your code on a machine which is twice as fast as another, all other conditions being equal, it would run in about half the time with the same input.
But that’s not the same thing as comparing two algorithms with different time complexities. For example, when you compare the running time of an O(N^2) algorithm to that of an O(N) algorithm, the O(N^2) running time grows so fast with the input size that the O(N) one can never catch up with it, no matter how big you choose its constant coefficient.
Let’s say the constant coefficient is 1000 instead of just 2: for input sizes N > 1000, the running time of the O(N^2) algorithm grows in proportion to N * N, while the running time of the O(N) algorithm remains proportional to 1000 * N, so the quadratic algorithm is the slower one from that point on.
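To make the crossover concrete, here is a quick worked comparison using the coefficient 1000 from the paragraph above (the specific input sizes are my own):
at N = 1000:      1000 * N = 10^6   and   N * N = 10^6    (they are equal)
at N = 10000:     1000 * N = 10^7   but   N * N = 10^8    (the quadratic one is 10x slower)
at N = 1000000:   1000 * N = 10^9   but   N * N = 10^12   (the quadratic one is 1000x slower)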
The time complexity O(n + n) reduces to O(2n). Now 2 is a constant, so the time essentially depends only on n.
Hence O(2n) equates to O(n).
Likewise, something like O(2n + 3) is still O(n), because the time still depends only on the size of n.
Now suppose a piece of code costs n^2 + n; it is O(n^2), because as n grows the effect of the n term becomes insignificant compared to the n^2 term.
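For completeness, here is a worked instance of the formal definition for one of the expressions above (the constants c and n0 are my own choice): 2n + 3 = O(n) because, taking c = 5 and n0 = 1,
2n + 3 <= 2n + 3n = 5n = c * n   for all n >= n0.
The same kind of bound can absorb any constant coefficient or additive constant, which is why they are always dropped.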

Big O time complexity of n^1.001

Why is the growth of n^1.001 greater than n log n in Big O notation?
The n^0.001 doesn't seem significant...
For any exponent (x) greater than 1, n^x is eventually greater than n * log(n). In the case of x = 1.001, the n in question is unbelievably large. Even if you lower x to 1.01, n^x doesn't get bigger than n * log(n) until beyond n = 1E+128 (but before you reach 1E+256).
So, for problems where n is less than astronomical, n^1.001 will be less than n * log(n), but you will eventually reach a point where it will be greater.
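Out of curiosity, you can locate the crossover numerically by comparing logarithms instead of the astronomically large values themselves. A minimal Java sketch, assuming x = 1.01 and base-10 logarithms: writing n = 10^k, the inequality n^1.01 > n * log10(n) is equivalent to 0.01 * k > log10(k):

public class Crossover {
    public static void main(String[] args) {
        // n = 10^k: compare log10(n^1.01) = 1.01 * k with log10(n * log10(n)) = k + log10(k)
        for (int k = 1; k <= 400; k++) {
            if (0.01 * k > Math.log10(k)) {
                System.out.println("n^1.01 first exceeds n * log10(n) near n = 1E+" + k);
                break;
            }
        }
    }
}

With these assumptions it reports a crossover near n = 1E+238; the exact point shifts with the base of the logarithm, but it always lies far beyond any practical input size.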
In case someone is interested, here is a formal proof:
For the sake of simplicity, let's assume we are using logarithms in base e.
Let a > 1 be any exponent (e.g., a = 1.001). Then a-1 > 0. Now consider the function
f(x) = x^(a-1)/log(x)
Using L'Hôpital's rule it is not hard to see that this function is unbounded. Moreover, computing the derivative of f(x), one can also see that the function is increasing for x > exp(1/(a-1)).
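Spelling out the L'Hôpital step (natural logarithms, as assumed above): both the numerator and the denominator tend to infinity, so
lim x^(a-1)/log(x) = lim ((a-1) * x^(a-2)) / (1/x) = lim (a-1) * x^(a-1) = infinity   (as x -> infinity),
because a - 1 > 0.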
Therefore, there must exist an integer N such that, for all n > N, f(n) > 1. In other words,
n^(a-1)/log(n) > 1
or
n^(a-1) > log(n)
so
n^a > n log(n).
This shows that O(n^a) >= O(n log(n)).
But wait a minute. We wanted >, not >=, right? Fortunately this is easy to see. For instance, in the case a = 1.001, we have
O(n^1.001) > O(n^1.0001) >= O(n log(n))
and we are done.

Time Complexity Analysis in the following relation

Can the recurrence
T(N) = SUM T(N-i)   // i = 1 to N
be solved as
T(N) <= N*T(N-1),
which finally gives O(N^(N-1))?
By solving it iteratively one gets:
T(N) = N*(N-1)*T(N-2) = ... = N*...*(N-k+1)*T(1), with k = N-1,
so finally O(N!).
Note that O gives you an upper bound on the execution time, which means that if a certain algorithm, for example, is linear, then it is O(n), but it is also O(n^2) and O(n!) and it is also O of any superlinear function.
Your inference is correct; however, on both steps you overestimated your function's complexity. The recurrence T(N) = sum(T(N-i)) is O(2^N) (and in fact it is Theta(2^N)). It is easy to show, since 2^n = sum(2^i) + 1 for 0 <= i <= n - 1.
On your first step you used a higher bound, which is perfectly fine for O. However, even with your bound of T(N) <= N*T(N-1), the complexity you ended up with is too high: O(N!), which is less than what you first estimated, also satisfies T(N) <= N*T(N-1).
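As a small numeric check (my own sketch, assuming the base case T(1) = 1 and summing down to T(1)), computing the recurrence directly shows that T(N) simply doubles at each step, i.e. T(N) = 2^(N-2) for N >= 2, which is Theta(2^N) and far below N!:

public class RecurrenceCheck {
    public static void main(String[] args) {
        int max = 20;
        long[] t = new long[max + 1];
        t[1] = 1;                              // assumed base case
        for (int n = 2; n <= max; n++) {
            for (int i = 1; i < n; i++) {      // T(n) = T(n-1) + T(n-2) + ... + T(1)
                t[n] += t[n - i];
            }
            System.out.println("T(" + n + ") = " + t[n]);
        }
        // prints 1, 2, 4, 8, 16, ...: each value is exactly twice the previous one
    }
}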

Asymptotic complexity for typical expressions

The increasing order of the following functions, shown in a picture in the original question (not reproduced here; from the answers below, f1(n) = (n^0.9999)(logn), f2(n) is linear in n, f4(n) = n^2 and f3(n) = 1.00001^n), in terms of asymptotic complexity is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The time complexity order for this question was given as (n^0.99)*(logn) < n. How? log may be a slowly growing function, but it still grows faster than a constant.
b) Consider f1: suppose it were f1(n) = (n^1.0001)(logn); then what would the answer be?
Whenever an expression involves multiplication between a logarithmic and a polynomial expression, does the logarithmic factor outweigh the polynomial expression?
c) How do you check in such cases? For example:
1) (n^2)logn vs (n^1.5): which has the higher time complexity?
2) (n^1.5)logn vs (n^2): which has the higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty:
(n^C_2)*log(n) grows faster than (n^C_1).
This is because:
(n^C_1) grows slower than (n^C_2) (obviously);
also, for values of n larger than 2 (for log in base 2), log(n) grows faster than 1; in fact, log(n) is asymptotically greater than any constant C, because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)log(n) has greater complexity than (n^1.5).
We think of log(n) as a "slowly growing" function, but it still grows faster than 1, which is the key here.
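The same reasoning answers part c) directly; as a worked comparison via ratios (my own addition):
(n^2 * log(n)) / n^1.5 = n^0.5 * log(n) -> infinity, so (n^2)logn has higher complexity than (n^1.5);
(n^1.5 * log(n)) / n^2 = log(n) / n^0.5 -> 0, so (n^2) has higher complexity than (n^1.5)logn.
In general, a single logarithmic factor is weaker than any positive power of n, so it can never make up a gap between polynomial exponents.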
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
In order to do this, let us find the value of ϵ that satisfies n^ϵ > log_d(n).
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that, for any constant c > 0,
c * ln(n) > ln(ln(n))   (1)
for all sufficiently large n, we can say that, for an arbitrarily small ϵ, there exists an n large enough to satisfy ϵ > ln(log_d(n)) / ln(n),
because, by (1), ln(log_d(n)) / ln(n) ---> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ,
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
in layperson's terms
n^1.1 > n * ln(n)
for some n
also
n ^ 1.001 > n * ln(n)
for some much, much bigger n
and even
n ^ 1.0000000000000001 > n * ln(n)
for some very very big n.
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(logn) has higher complexity than n: obvious.
- n^2 is higher than (n^1.0001)(logn), because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2, because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
BTW, 1.00001^n looks a little like the family of functions usually written (1+Ɛ)^n, sometimes loosely called "sub-exponential" growth. Still, however small Ɛ is, this kind of growth dominates any polynomial growth.
The crux of this problem lies in comparing f1(n) and f2(n).
For f(n) = n^c, where 0 < c < 1, the curve's growth eventually becomes so slow that it is trivial compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the curve's growth eventually becomes so slow that it is trivial compared with a linear growth curve.
The product of two such functions also eventually becomes trivial compared with a linear growth curve.
Hence, Theta(n^c * log_c(n)) is asymptotically less complex than Theta(n).
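As a worked check of that last claim for the concrete functions in this question (my own addition): (n^0.9999 * log(n)) / n = log(n) / n^0.0001, and since any positive power of n eventually outgrows log(n) (see the previous answer), this ratio tends to 0. Hence (n^0.9999)(logn) is asymptotically below n, which is why f1(n) comes first in the ordering.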

How is this algorithm O(n)?

Working through the recurrence, you can derive that the time complexity of this function satisfies: T(n) = 2T(n/2) + O(1)
And the height of the recurrence tree would be log2(n), where n is the total number of calls (i.e. nodes in the tree).
It was said by the instructor that this function has a time complexity of O(n), but I simply cannot see why.
Further, when you substitute O(n) into the time complexity equation there are strange results. For example,
T(n) <= cn
T(n/2) <= (cn)/2
Back into the original equation:
T(n) <= cn + 1
Where this is obviously not true because cn + 1 !< cn
Your instructor is correct. This is an application of the Master theorem.
You can't substitute O(n) like you did into the time complexity equation; a correct substitution would be a polynomial form like an + b, since O(n) only shows the highest significant degree (there can be constants of lower degree).
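Here is a worked version of that substitution (the constants are my own): guess T(n) <= c*n - d for constants c > 0 and d >= 1. Then
T(n) <= 2*T(n/2) + 1 <= 2*(c*(n/2) - d) + 1 = c*n - 2d + 1 <= c*n - d,
so the induction goes through and T(n) = O(n). The earlier attempt failed only because the guess c*n had no lower-order term to absorb the extra +1.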
To expand on the answer: you correctly recognized a recurrence of the form
T(n) = a*T(n/b) + f(n), with a = 2, b = 2 and f(n) = O(1).
With this type of recurrence, there are three cases, depending on how n^(log_b(a)) (the cost of the recursion) compares with f(n) (the cost of the non-recursive work on a problem of size n):
1° f(n) dominates the recursion (f(n) grows polynomially faster than n^(log_b(a))), for instance a = 2, b = 2 and f(n) = O(n^16). Then the recursion is of negligible complexity and the total time complexity is that of f(n):
T(n) = Θ(f(n))
2° The recursion dominates f(n) (n^(log_b(a)) grows polynomially faster than f(n)), which is the case here. Then the complexity is Θ(n^(log_b(a))), in your example Θ(n^(log_2(2))), i.e. Θ(n).
3° The critical case, where f(n) and n^(log_b(a)) are comparable up to logarithmic factors, i.e. there exists k >= 0 such that f(n) = Θ(n^(log_b(a)) * log^k(n)); then the complexity is:
T(n) = Θ(n^(log_b(a)) * log^(k+1)(n))
This is the ugly case in my opinion.
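The function from the question isn't reproduced here, but any function obeying T(n) = 2T(n/2) + O(1) can be modelled by the hypothetical sketch below, which does constant work per call, recurses on two halves, and counts its calls; for n a power of two it makes 2n - 1 calls, i.e. a linear number of calls, matching case 2° above:

public class CallCount {
    // Hypothetical stand-in for a function with recurrence T(n) = 2T(n/2) + O(1).
    static long calls(int n) {
        if (n <= 1) {
            return 1;                           // base case: one call, constant work
        }
        return 1 + calls(n / 2) + calls(n / 2); // constant work plus two half-size calls
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 1024; n *= 2) {
            System.out.println("n = " + n + " -> " + calls(n) + " calls");
        }
        // prints 1, 3, 7, 15, ...: about 2n - 1 calls in total, hence O(n) work
    }
}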