Big O time complexity of n^1.001 - time-complexity

Why is the growth of n^1.001 greater than n log n in Big O notation?
The n^0.001 doesn't seem significant...

For any exponent x greater than 1, n^x is eventually greater than n * log(n). In the case of x = 1.001, the n in question is unbelievably large. Even for the larger exponent x = 1.01, n^x doesn't get bigger than n * log(n) until beyond n = 1E+128 (but before you reach 1E+256).
So, for problems where n is less than astronomical, n^1.001 will be less than n * log(n), but you will eventually reach a point where it is greater.
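For a rough sanity check, here is a small Python sketch (assuming base-10 logarithms, which is what the 1E+128 / 1E+256 figures above correspond to); it scans the exponent k of n = 10^k instead of handling the huge numbers directly:

# For n = 10**k, n**1.01 > n * log10(n) exactly when 10**(0.01 * k) > k,
# so scanning k is enough.
for k in range(1, 400):
    if 10 ** (0.01 * k) > k:
        print(f"n**1.01 first exceeds n * log10(n) around n = 1E+{k}")
        break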

In case someone is interested, here is a formal proof:
For the sake of simplicity, let's assume we are using logarithms in base e.
Let a > 1 be any exponent (e.g., a = 1.001). Then a-1 > 0. Now consider the function
f(x) = x^(a-1)/log(x)
Using L'Hôpital's rule it is not hard to see that this function is unbounded (the quotient of the derivatives is (a-1)·x^(a-1), which tends to infinity). Moreover, computing the derivative of f(x), one can also see that the function is increasing for x > exp(1/(a-1)).
Therefore, there must exist an integer N such that f(n) > 1 for all n > N. In other words
n^(a-1)/log(n) > 1
or
n^(a-1) > log(n)
so
n^a > n log(n).
This shows that O(n^a) >= O(n log(n)).
But wait a minute. We wanted >, not >=, right? Fortunately this is easy to see. For instance, in the case a = 1.001, we have
O(n^1.001) > O(n^1.0001) >= O(n log(n))
and we are done.
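To see the quantities in the proof concretely, here is a small numeric sketch using a = 1.5, chosen only so that the threshold exp(1/(a-1)) = e^2 ≈ 7.39 is small enough to evaluate directly (with a = 1.001 the same behaviour only shows up at astronomically large x):

import math

# Numeric sketch of the proof's quantities, with a = 1.5.
a = 1.5
f = lambda x: x ** (a - 1) / math.log(x)   # the function from the proof
print("increasing for x >", math.exp(1 / (a - 1)))
for x in (10, 100, 1_000, 1_000_000):
    print(x, f(x))                         # strictly increasing and unbounded
# once f(x) > 1, x**(a-1) > log(x), i.e. x**a > x * log(x)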

Related

Proving or Refuting Time Complexity

I have an exam soon and I haven't been at university for a long time because I was in the hospital.
Prove or refute the following statements:
1. log(n) = O(√n)
2. 3^(n-1) = O(2^n)
3. f(n) + g(n) = O(f(g(n)))
4. 2^(n+1) = O(2^n)
Could someone please help me and explain these?
(1) is true because log(n) grows asymptotically slower than any polynomial, including sqrt(n) = n^(1/2). To prove this we can observe that both log(n) and sqrt(n) are strictly increasing functions for n > 0 and then focus on a sequence where both evaluate easily, e.g., 2^(2k). Now we see log(2^(2k)) = 2k, but sqrt(2^(2k)) = 2^k. For k = 2, 2k = 2^k, and for k > 2, 2k < 2^k. This glosses over some details but the idea is sound. You can finish this by arguing that between 2^(2k) and 2^(2(k+1)) both functions have values greater than one for k >= 2 and thus any crossings can be eliminated by multiplying sqrt(n) by some constant.
(2) it is not true that 3^(n-1) is O(2^n). Suppose this were true. Then there exists an n0 and c such that for n > n0, 3^(n-1) <= c*2^n. First, eliminate the -1 by writing 3^(n-1) as (1/3)*3^n; so (1/3)*3^n <= c*2^n. Next, divide through by 2^n: (1/3)*(3/2)^n <= c. Multiply by 3: (3/2)^n <= 3c. Finally, take the log of both sides with base 3/2: n <= log_{3/2}(3c). The RHS is a constant expression and n is a variable, so this cannot hold for arbitrarily large n as required. This is a contradiction, so our supposition was wrong; that is, 3^(n-1) is not O(2^n).
(3) this is not true. f(n) = 1 and g(n) = n is an easy counterexample. In this case, f(n) + g(n) = 1 + n, which is O(n), but O(f(g(n))) = O(f(n)) = O(1).
(4) this is true. Rewrite 2^(n+1) as 2*2^n and it becomes obvious that this is true for n >= 1 by choosing c > 2.
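As a quick numeric illustration of (2) and (4) (an illustration, not a proof): the ratio 3^(n-1) / 2^n grows without bound, so no constant c can cap it, while 2^(n+1) / 2^n is always exactly 2.

# ratio for (2) blows up; ratio for (4) stays at 2.0
for n in (10, 50, 100):
    print(n, 3 ** (n - 1) / 2 ** n, 2 ** (n + 1) / 2 ** n)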

BIG(O) time complexity

What is the time complexity of the code below?
1)
function(values, xlist, ylist):
    sum = 0
    n = 0
    for r from 0 to xlist:
        for c from 0 to ylist:
            sum += values[r][c]
            n = n + 1
    return sum / n
2)
function PrintCharacters():
    characters = {"a","b","c","d"}
    foreach character in characters:
        print(character)
According to me the 1st code has O(xlist*ylist) complexity and 2nd code has O(n).
Is this right?
Big O notation is used to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4n^2 - 2n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows at the order of n^2" and write T(n) = O(n^2).
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
So for your case it would be O(n), since you can find constants C and N with |f(x)| <= C|g(x)| for all x > N.
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
for r from 0 to xlist:               // outer loop --> runs xlist (say n) times
    for c from 0 to ylist:           // inner loop --> runs ylist (say n) times
        sum += values[r][c]
        n = n + 1

function PrintCharacters():
    characters = {"a","b","c","d"}
    foreach character in characters:  // runs once per character; for n characters that is O(n)
        print(character)
Big O notation describes what happens when the value is very big: the outer loop runs n times and the inner loop runs n times for each outer iteration.
Assume n = 100; then the total is n^2 = 10,000 runs.
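For what it's worth, here is a runnable Python rendering of the two snippets with the counting spelled out (a sketch; it assumes values is an xlist-by-ylist grid and renames sum to total to avoid shadowing the builtin):

def average(values, xlist, ylist):
    total, n = 0, 0
    for r in range(xlist):              # outer loop: xlist iterations
        for c in range(ylist):          # inner loop: ylist iterations per outer iteration
            total += values[r][c]       # runs xlist * ylist times -> O(xlist * ylist)
            n = n + 1
    return total / n

def print_characters():
    characters = {"a", "b", "c", "d"}
    for character in characters:        # one iteration per character -> O(n) for n characters
        print(character)

print(average([[1, 2], [3, 4]], 2, 2))  # 2.5
print_characters()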

Time Complexity of nested loops including if statement

I'm unsure of the general time complexity of the following code.
Sum = 0
for i = 1 to N
    if i > 10
        for j = 1 to i do
            Sum = Sum + 1
Assuming i and j are incremented by 1.
I know that the first loop is O(n) but the second loop is only going to run when N > 10. Would the general time complexity then be O(n^2)? Any help is greatly appreciated.
Consider the definition of Big O Notation.
________________________________________________________________
Let f: ℝ → ℝ and g: ℝ → ℝ.
Then, f(x) = O(g(x))
⟺
∃ k ∈ ℝ, ∃ M > 0 ∈ ℝ, such that ∀ x ≥ k, |f(x)| ≤ M ⋅ |g(x)|
________________________________________________________________
Which can be read less formally as:
________________________________________________________________
Let f and g be functions defined on a subset of the real numbers.
Then, f is O of g if, for big enough x's (this is what the k is for in the formal definition), there is a constant M (from the real numbers, of course) such that M times g(x) will always be greater than or equal to f(x) (really, you can just increase M and it will always be strictly greater, but I digress).
________________________________________________________________
(You may note that if a function is O(n), then it is also O(n²) and O(e^n), but of course we are usually interested in the "smallest" function g such that it is O(g). In fact, when someone says f is O of g then they almost always mean that g is the smallest such function.)
Let's translate this to your problem. Let f(N) be the amount of time your process takes to complete as a function of N. Now, pretend that addition takes one unit of time to complete (and checking the if statement and incrementing the for-loop take no time), then
f(1) = 0
f(2) = 0
...
f(10) = 0
f(11) = 11
f(12) = 23
f(13) = 36
f(14) = 50
We want to find a function g(N) such that for big enough values of N, f(N) ≤ M ⋅g(N). We can satisfy this by g(N) = N² and M can just be 1 (maybe it could be smaller, but we don't really care). In this case, big enough means greater than 10 (of course, f is still less than M⋅g for N <11).
tl;dr: Yes, the general time complexity is O(n²) because Big O assumes that your N is going to infinity.
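A small Python sketch that simply counts the additions bears this out; it reproduces the f(N) table above and shows the count staying below N*N:

def count_ops(N):
    ops = 0
    for i in range(1, N + 1):
        if i > 10:
            ops += i            # the inner loop executes Sum = Sum + 1 exactly i times
    return ops

for N in (10, 11, 12, 13, 14, 100):
    print(N, count_ops(N), N * N)   # 0, 11, 23, 36, 50, 4995 vs the corresponding N*N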
Let's assume your code is
Sum = 0
for i = 1 to N
for j = 1 to i do
Sum = Sum + 1
There are N(N+1)/2 sum operations in total, which is roughly N^2/2. Your code with if i > 10 does a constant number fewer, namely the 1 + 2 + ... + 10 = 55 additions that the guard skips. As a result, for big enough N we have about
N^2/2 - 55
operations. That is
O(N^2) - O(1) = O(N^2)
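And a quick check that the if i > 10 guard only removes a constant amount of work, whatever N is:

def additions_without_guard(N):    # 1 + 2 + ... + N = N*(N+1)/2
    return sum(i for i in range(1, N + 1))

def additions_with_guard(N):       # 11 + 12 + ... + N
    return sum(i for i in range(1, N + 1) if i > 10)

for N in (20, 100, 1000):
    print(N, additions_without_guard(N) - additions_with_guard(N))   # always 55 = 1 + 2 + ... + 10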

How to calculate the time complexity?

Assume that function f is in the complexity class O(N (log N)^2), and that for N = 1,000 the program runs in 8 seconds.
How do I write a formula T(N) that computes the approximate time it takes to run f for any input of size N?
Here is the answer:
8 = c × (1000 × 10)
c = 8 × 10^-4
T(N) = 8 × 10^-4 × (N log2 N)
I don't understand the first line where does the 10 come from?
Can anybody explain the answer to me please? Thanks!
I don't understand the first line where does the 10 come from? Can
anybody explain the answer to me please? Thanks!
T(N) is the approximate maximum running time. c is the constant factor (the O(1) part), the portion of the algorithm's speed that is not affected by the size of the input. The 10 comes from rounding to simplify the math; it's actually 9.965784, which is log2 of 1000. That is, N × log2 N is
1000 × 10, or more precisely,
1000 × 9.965784
O(N (log N)^2) describes how the runtime scales with N, but it's not a formula for calculating runtime in seconds. In fact, Big-O notation doesn't generally give the exact scaling function itself, but an upper bound on it as N becomes large. See here (there's a nice picture showing this last point).
If you're interested in a function's runtime in practice (particularly in the non-asymptotic regime, i.e. small N), one option is to actually run the function and measure it. Do this for multiple values of N, chosen on some grid (possibly with nonlinear spacing). Then, you can interpolate between these points.
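A minimal Python sketch of that measure-and-fit approach (the workload f below is only a hypothetical placeholder; substitute the function you actually care about):

import math
import time

def f(N):                          # placeholder workload, for illustration only
    return sorted(range(N, 0, -1))

for N in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    f(N)
    elapsed = time.perf_counter() - start
    c = elapsed / (N * math.log2(N) ** 2)
    print(N, elapsed, c)           # if f really were Θ(N (log N)^2), c would level off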
Define S(N) = N(log N)^2.
If you can assume that S(N) bounds your program for all N >= 1000, then you can bound your execution time by the good ol' rule of three:
S(1000) → T(1000)
S(N)    → T(N)
T(N) <= S(N) * T(1000) / S(1000) for all N >= 1000
With S(1000) = 1000 × (log2 1000)^2 ≈ 10^5 and T(1000) = 8, this gives
T(N) <= N(log N)^2 * 8 / 10^5
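In code, the rule-of-three estimate looks like this (a sketch, taking log base 2 as above):

import math

def S(N):
    return N * math.log2(N) ** 2       # the assumed growth function

def estimate_seconds(N, t_ref=8.0, n_ref=1000):
    return t_ref * S(N) / S(n_ref)     # scale the measured time by S(N) / S(1000)

print(estimate_seconds(1000))          # 8.0 by construction
print(estimate_seconds(1_000_000))     # rough extrapolation to a much larger input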

Asymptotic complexity for typical expressions

The increasing order of the following functions (from the picture: f1(n) = (n^0.9999)·log n, f2(n) = n, f3(n) = (1.00001)^n, f4(n) = n^2) in terms of asymptotic complexity is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The complexity order for this easy question was given as (n^0.99)*(log n) < n. How? log might be a slowly growing function, but it still grows faster than a constant.
b) Consider function f1: suppose it were f1(n) = (n^1.0001)(log n); then what would the answer be?
Whenever there is an expression that involves multiplication between a logarithmic and a polynomial expression, does the logarithmic function outweigh the polynomial expression?
c) How to check in such cases? Suppose:
1) (n^2)·log n vs (n^1.5): which has higher time complexity?
2) (n^1.5)·log n vs (n^2): which has higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty:
(n^C_2)*log(n) grows faster than (n^C_1)
This is because
(n^C_1) grows slower than (n^C_2) (obviously),
and also, for values of n larger than 2 (for log in base 2), log(n) grows faster than 1. In fact, log(n) is asymptotically greater than any constant C, because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)*log(n) has greater complexity than (n^1.5)
We think of log(n) as a "slowly growing" function, but it still grows faster than 1, which is the key here.
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
In order to do this, let us find when n^ϵ > log_d(n) holds.
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that
c * ln(n) > ln(ln(n))    (1)
for any constant c > 0 and sufficiently large n,
we can say that, for an arbitrarily small ϵ, there exists an n large enough to
satisfy ϵ > ln(log_d(n)) / ln(n)
because, by (1), ln(log_d(n)) / ln(n) -> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ,
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
in layperson's terms
n^1.1 > n * ln(n)
for some n
also
n ^ 1.001 > n * ln(n)
for some much, much bigger n
and even
n ^ 1.0000000000000001 > n * ln(n)
for some very very big n.
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(logn) has higher complexity than n, obviously.
- n^2 is higher than (n^1.0001)(logn) because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth while n^2 has polynomial growth, and exponential growth asymptotically wins.
BTW, 1.00001^n belongs to a family of slowly growing exponentials of the form (1+Ɛ)^n (sometimes loosely called "sub-exponential"). Still, however small Ɛ is, such growth still dominates any polynomial growth.
The crux of this problem lies in comparing f1(n) and f2(n).
For f(n) = n^c where 0 < c < 1, the growth eventually becomes so slow that it is negligible compared with a linear growth curve.
For f(n) = log_c(n) where c > 1, the growth likewise eventually becomes negligible compared with a linear growth curve.
The product of these two functions is also eventually negligible compared with a linear growth curve, because log_c(n) itself grows more slowly than n^(1-c), so n^c * log_c(n) grows more slowly than n^c * n^(1-c) = n.
Hence, Theta(n^c * log_c(n)) is asymptotically less complex than Theta(n).
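A numeric sketch of that last point, using the exponent 0.99 from the question: the ratio (n^0.99 * log2(n)) / n equals log2(n) / n^0.01, which keeps growing for ordinary input sizes and only starts shrinking for astronomically large n, but it does go to 0 eventually.

import math

for k in (3, 50, 100, 200, 300, 400):   # n = 10**k
    log2_n = k * math.log2(10)          # log2(n)
    ratio = log2_n / 10 ** (0.01 * k)   # = (n**0.99 * log2(n)) / n
    print(f"n = 1e{k}: ratio is about {ratio:.3g}")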