I'm having trouble understanding time complexity beyond just Big O. In this example:
f(n) = n^10
g(n) = (2n)^10
Is f θ(g)? I'm guessing it's θ(g) because you can find constants c1 and c2 such that c1*g(n) is a lower bound of f(n) and c2*g(n) is an upper bound.
See, f(n) = n^10 and g(n) = (2n)^10 = (2^10)*(n^10), so f(n) = ((1/2)^10)*g(n) exactly.
For the lower bound: f(n) = ((1/2)^10)*g(n) >= ((1/4)^10)*g(n), so f(n) >= c1*g(n) with c1 = (1/4)^10.
For the upper bound: f(n) = ((1/2)^10)*g(n) <= c2*g(n) for any c2 >= (1/2)^10, so f(n) <= c2*g(n) with, say, c2 = (1/2)^10.
Hence c1*g(n) <= f(n) <= c2*g(n) for all n >= 1, where c1 = (1/4)^10 and c2 = (1/2)^10.
Hence, f(n) = Theta(g(n)), i.e. f(n) = θ(g(n)).
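As a quick numeric sanity check (an illustration, not a proof), here is a small Python sketch using the constants above, c1 = (1/4)^10 and c2 = (1/2)^10:

# Sanity check (not a proof): f(n) = n**10 and g(n) = (2n)**10.
# Since f(n) = (1/2)**10 * g(n) exactly, the constants c1 = (1/4)**10 and
# c2 = (1/2)**10 should give c1*g(n) <= f(n) <= c2*g(n) for every n >= 1.
from fractions import Fraction

def f(n):
    return n ** 10

def g(n):
    return (2 * n) ** 10

c1 = Fraction(1, 4) ** 10
c2 = Fraction(1, 2) ** 10

for n in [1, 10, 100, 1000]:
    assert c1 * g(n) <= f(n) <= c2 * g(n)
    print(n, float(Fraction(f(n), g(n))))   # always exactly (1/2)**10 ~= 0.000977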
Related
Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2, so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it became 2^(2n), so the answer would be that the new time is the square of the previous time?
Big O does not tell you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
 1                                   2
 2                                   4
 3                                   8
 4                                  16
 5                                  32
...                                ...
10                                1024
20                             1048576
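A small Python sketch of the same idea: the work grows like 2^n, so doubling n squares the number of work units (2^(2n) = (2^n)^2), even though the complexity class is still O(2^n).

# Work grows like 2**n, so doubling the input size squares the work units.
for n in [1, 2, 3, 4, 5, 10, 20]:
    work = 2 ** n
    work_after_doubling = 2 ** (2 * n)
    print(n, work, work_after_doubling, work_after_doubling == work ** 2)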
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be less than M * 2^n for some constant M and starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (and that bound is not even guaranteed to be tight).
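A short sketch that makes this concrete: all four formulas above are O(2^n), yet doubling the input changes their values very differently (picking n = 20 arbitrarily):

# All four functions are O(2**n), but the effect of doubling n differs wildly.
def f1(n): return 2 ** n + 5000 * n ** 2 + 12300
def f2(n): return 500 * 2 ** n + 6
def f3(n): return 500 * n ** 2 + 25000 * n + 456000
def f4(n): return 400000000

n = 20
for f in (f1, f2, f3, f4):
    print(f.__name__, f(2 * n) / f(n))   # f4's ratio is exactly 1.0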
Some related theory below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some constant M and starting value n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some constant M and starting value n0? This is where you run into problems: no constant value of M can keep M * 2^n above 4^n as n gets arbitrarily large. Thus, 4^n is not upper-bounded by O(2^n).
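A quick numeric illustration (not a proof): the ratio 4^n / 2^n is itself 2^n, so it eventually exceeds any constant M you pick, which is exactly why no valid M exists.

# The ratio 4**n / 2**n equals 2**n, which outgrows any fixed constant M.
M = 10 ** 9   # pick any constant you like
for n in [1, 10, 20, 30, 40]:
    ratio = 4 ** n // 2 ** n   # == 2**n
    print(n, ratio, ratio > M)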
See the comments for further explanation, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through the whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
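To make that concrete, a minimal sketch that counts "units of work" by hand (the counting is only for illustration):

arr = [1, 2, 3, 4, 5]

# O(1): one unit of work, no matter how long arr is.
first = arr[0]

# O(n): one unit of work per element, so the work grows with len(arr).
work = 0
for elem in arr:
    work += 1

print(first, work)   # work == len(arr) == 5; double the array, double the work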
See the Big-O Cheat Sheet for complementary information.
I know that a sublinear-time algorithm is expressed by o(n).
Is T(n) = n/x sublinear in n for a positive constant x?
In other words, is n/x = o(n)?
No.
T(n) = n/x is linear, in the same way as T(n) = xn is linear. If your function is just n multiplied by some constant c, then it's linear. In this particular case, c=1/x.
You can also check this using the formal definition of small o.
Formally, f(n) = o(g(n)) as n → ∞ means that for every positive constant ε there exists a constant N such that |f(n)| <= ε|g(n)| for all n >= N.
In this case, pick ε = 1/(2x) and you won't be able to find an N that satisfies the condition, so n/x is not o(n).
Intuitively, one says f(n) = o(g(n)) if and only if f(n) is eventually dominated by g(n), even if you "slow g(n) down" by multiplying it by a very small constant.
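A small sketch of that check with a concrete (hypothetical) x = 4: the ratio f(n)/g(n) stays at the constant 1/x, so it never drops below ε = 1/(2x), no matter how large n gets.

x = 4                      # any positive constant
eps = 1 / (2 * x)
for n in [10, 1000, 10 ** 6]:
    f, g = n / x, n
    print(n, f / g, f <= eps * g)   # ratio stays 0.25; the o(n) condition never holds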
How would you possibly show that 2 is O(1)?
Moreover, how would you show that a constant is Theta(1), and hence Omega(1) and O(1)?
For O, I am under the impression that you are able to do a simplification of f(n), whereby it can be reduced down to 1, but then how can this prove that 2 is O(1) for some n0? What would be the n0 value in this case?
By definition, a function f is in O(1) if there exist constants n0 and M such that f(n) ≤ M · 1 = M for all n ≥ n0.
If f(n) is defined as 2, then just set M = 2 (or any greater value; it doesn't matter) and n0 = 1 (or any greater value; it doesn't matter), and the condition is met.
[…] that 2 is O(1) for some n0? What would be the n0 value in this case?
n0 is not a parameter here; it's not meaningful to say "O(1) for some n0". You can arbitrarily choose any value of n0 that makes f satisfy the condition; if one exists, then f is O(1), period.
Big Oh and Theta do not indicate the time taken by an algorithm. They indicate the rate at which the time increases as the input increases for the algorithm. When you understand this, things become very easy and much less mathematical. f(x) = 2 (for each and every x) is always O(1), since the output value (2) does not depend on the input value (x) at all! O(1) represents this independence. So do Theta(1) and Omega(1).
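A trivial sketch of the witness constants for f(n) = 2: with M = 2 and n0 = 1, the defining inequality holds everywhere we test.

def f(n):
    return 2          # constant output, independent of n

M, n0 = 2, 1
print(all(f(n) <= M * 1 for n in range(n0, 10 ** 5)))   # True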
The increasing order of the following functions in terms of asymptotic complexity (the picture gives f1(n) = (n^0.9999)(logn), f2(n) = n, f3(n) = 1.00001^n, f4(n) = n^2) is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The time complexity order for this easy question was given as (n^0.99)*(logn) < n. How? log might be a slow-growing function, but it still grows faster than a constant.
b) Consider function f1: suppose it is f1(n) = (n^1.0001)(logn). Then what would the answer be?
Whenever an expression involves a multiplication between a logarithmic and a polynomial expression, does the logarithmic function outweigh the polynomial expression?
c) How to check in such cases? Suppose
1) (n^2)logn vs (n^1.5): which has the higher time complexity?
2) (n^1.5)logn vs (n^2): which has the higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty:
(n^C_2)*log(n) grows faster than (n^C_1).
This is because
(n^C_1) grows slower than (n^C_2) (obviously);
also, for values of n larger than 2 (for log in base 2), log(n) is greater than 1;
in fact, log(n) is asymptotically greater than any constant C,
because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)*log(n) has greater complexity than (n^1.5).
We think of log(n) as a "slowly growing" function, but it still eventually exceeds any constant, which is the key here.
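If it helps, a numeric illustration of that comparison (base-2 logs, a handful of sample n): the ratio keeps growing, so (n^2)*log(n) dominates (n^1.5).

import math

for n in [10, 100, 10 ** 4, 10 ** 6]:
    ratio = (n ** 2 * math.log2(n)) / n ** 1.5
    print(n, ratio)   # grows without bound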
coder101 asked an interesting question in the comments; essentially,
is n^e = Ω((n^c)*log_d(n)),
where e = c + ϵ for arbitrarily small ϵ?
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n)),
or is it the other way around, namely
is log_d(n) = Ω(n^ϵ)?
To decide, let us see for which values of n the inequality n^ϵ > log_d(n) holds.
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that
c * ln(n) > ln(ln(n))    (1)
for any constant c > 0, once n is large enough,
we can say that, for an arbitrarily small ϵ, there exists an n large enough to
satisfy ϵ > ln(log_d(n)) / ln(n),
because, by (1), ln(log_d(n)) / ln(n) -> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ,
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
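A numeric sketch of step (1) with d = 2: ln(log_d(n)) / ln(n) shrinks toward 0 as n grows, so it eventually drops below any fixed ϵ (0.05 is used here just as an example threshold).

import math

eps = 0.05
for k in [1, 3, 6, 12, 100]:          # n = 10**k
    n = 10 ** k
    val = math.log(math.log2(n)) / math.log(n)
    print(f"n = 10**{k}: {val:.4f}  below eps? {val < eps}")   # decreasing; True for the last value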
in layperson's terms
n^1.1 > n * ln(n)
for some n
also
n ^ 1.001 > n * ln(n)
for some much, much bigger n
and even
n ^ 1.0000000000000001 > n * ln(n)
for some very very big n.
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(logn) has higher complexity than n: obvious.
- n^2 is higher than (n^1.0001)(logn) because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
BTW, 1.00001^n belongs to a family of slowly growing exponentials of the form (1+Ɛ)^n. However small Ɛ is, such growth still dominates any polynomial growth.
The crux of this problem lies in comparing f1(n) and f2(n).
For f(n) = n^c, where 0 < c < 1, the growth eventually becomes so slow that it is trivial compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the growth likewise eventually becomes trivial compared with a linear growth curve.
The product of these two factors, (n^c)*log_c(n), also eventually becomes trivial compared with a linear growth curve, because log_c(n) grows more slowly than the leftover polynomial factor n^(1-c).
Hence, Theta((n^c)*log_c(n)) is asymptotically less complex than Theta(n).
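A numeric sketch of that claim for (n^0.99)*log2(n) versus n, computed in log space so the huge values of n do not overflow: the ratio first rises, then falls toward 0, so the product is indeed asymptotically below linear (but only for very large n).

import math

# ratio = (n**0.99 * log2(n)) / n = log2(n) / n**0.01, evaluated at n = 10**k
for k in [6, 50, 300, 600, 1000]:
    ratio = (k * math.log2(10)) / 10 ** (0.01 * k)
    print(f"n = 10**{k}: ratio ~= {ratio:.4g}")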
I was studying Big O notation. I know that Big O is denoted by:
f(n) E O(g(n)) or f(n) = O(g(n))
It means the function f(n) has a growth rate no greater than that of g(n).
Now let's say I have an equation:
5n + 2 E O(n)
By the above, shouldn't 'n' be equal to g(n) and '5n + 2' equal to f(n)?
Now, for any value of n, f(n) is always greater than g(n). So how is Big O true in this case?
You should read up on the concept of Big Oh in more detail.
The relation
f(n) E O(g(n))
says
for some constant C and all sufficiently large n,
f(n) <= C * g(n)
In this case, C is some value for which 5n + 2 is at most C*n from some point onward.
If you solve it:
5n + 2 <= Cn
2 <= (C - 5)*n
From this you can easily find out that if C = 6,
then the inequality holds for every n >= 2 (and Big O only requires it to hold from some point n0 onward)!
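A quick check of that claim, treating C = 6 and n0 = 2 as the witness constants:

C, n0 = 6, 2
print(all(5 * n + 2 <= C * n for n in range(n0, 10 ** 5)))   # True for every tested n >= 2
print(5 * 1 + 2 <= C * 1)                                    # False: n = 1 lies below n0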
Hope this helps!
That's not a correct definition of big O notation. If f(x) is O(g(x)), then there must exist some constants C and N such that |f(x)| <= C|g(x)| for all x > N. So, if f(x) is always less than or equal to some constant times g(x) after some x value N, then f(x) is O(g(x)). Effectively, this means that constant factors are irrelevant, because you can choose C to be any value. So, for your example, f(n) = 5n + 2 <= C*g(n) = 10000n for all n >= 1, so f(n) is O(g(n)).
Considering what the Big-O notation stands for, you have the statement
5n +2 E O(n)
or as well
5n +2 = O(n)
Given that Big-O notation states an upper bound for our function, that is, it establishes an upper limit on the function's growth, the problem can be reconsidered in the following way:
5n + 2 <= c*n, for some constant c and all sufficiently large n
We can see that the statement holds true because it is possible to find some constant c (making it as big as we need) such that c*n is greater than or equal to our function from some point onward.
In a more general way, we can say that any polynomial f(n) will belong to O(g(n)) if the degree of g(n) is greater than or equal to the degree of f(n), that is, to the highest degree among f's terms.
Formally:
Let f(n) = n^x and g(n) = n^y, so that x <= y.
Then f(n) = O(g(n)), since n^x <= 1 * n^y for all n >= 1 (take M = 1 and n0 = 1).
The same applies to Big-Omega the other way around.
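A tiny sketch of that formal statement with hypothetical exponents x = 1.5 and y = 2: M = 1 and n0 = 1 witness n^1.5 = O(n^2).

x, y = 1.5, 2
M, n0 = 1, 1
print(all(n ** x <= M * n ** y for n in range(n0, 10 ** 4)))   # True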
Hope it works for you