Big-O notation proof - time complexity

I'm trying to prove that the formula (n^2+1)/(n+1) is O(n).
As you know, we need to come up with n0 and C.
I'm a little confused about how to choose an appropriate C, since the expression here is a fraction.
So with C = 1, (n^2+1) / (n+1) <= n
(n^2+n) / (n+n) >= (n^2+1) / (n+1)
but I'm stuck here on how to simplify the division.

As n tends to infinity, the dominant terms of your original expression give n^2/n = n, which is O(n).

Choosing c = 1:
(n^2 + 1)/(n + 1) <= 1*n   (definition of Big-O with c = 1)
n^2 + 1 <= n^2 + n         (multiplying both sides by n + 1, which is positive, so the inequality is preserved)
1 <= n                     (subtracting n^2 from both sides)
n >= 1                     (rearranging)
Therefore, the choice n0 = 1 works for c = 1.
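Not part of the proof, but if you want a quick sanity check of the algebra, here is a small C sketch (my own addition) that tests the cross-multiplied inequality n^2 + 1 <= n*(n + 1), which is equivalent to (n^2 + 1)/(n + 1) <= 1*n because n + 1 > 0; the loop starts at n = 1 to match n0 = 1.
#include <stdio.h>

int main(void) {
    /* Check (n^2 + 1)/(n + 1) <= 1*n for n >= 1 by cross-multiplying with
       n + 1 > 0, so no floating-point division is needed. */
    for (long long n = 1; n <= 1000000; n++) {
        if (n * n + 1 > n * (n + 1)) {
            printf("inequality fails at n = %lld\n", n);
            return 1;
        }
    }
    printf("(n^2 + 1)/(n + 1) <= n held for every n in [1, 1000000]\n");
    return 0;
}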


assess the time complexity of the following code in terms of theta

I know the time complexity is n*log(n); however, I could only bound the inner loop with an integral to get an upper bound. How do I get a lower bound, to make it Theta and not just O?
S = 0;
for (i = 1; i < n; i++)
    for (j = 0; j < n; j += i)
        S++;
So line 1 is executed once, line 2 is executed n-1 times plus one final check without entering the loop, and for each of these n-1 iterations line 3 is executed about n/i times, so we get:
T = 1 + n + (n/1 + n/2 + ... + n/(n-1)) <= 1 + n + n*(integral of 1/x from 1 to n) = 1 + n + n*log(n). And that's big O; how about Omega?
Let's decompose the function in the following way: T(n) = 1 + n + n + n*S(n), where S(n) = sum(1/x from x=2 to n-1). Note that it is identical to what you wrote.
The function f(x) = 1/x is monotonically decreasing, therefore you can bound the sum S(n) from above by int(1/x from x=1 to n-1) and from below by int(1/x from x=2 to n). In both cases you get log(n) up to constant terms: for the upper bound log(n-1) - log(1) = log(n-1), and for the lower bound log(n) - log(2).
If these bounds are not obvious, picture the left and right Riemann sums of the integrals for a decreasing function.
You did use the lower bound, not the upper one, in your question, by the way. (Because 1/x is decreasing, not increasing.)
Then adding that back into the expression for T(n) we have T(n) >= 1 + 2n + n log(n) - n log(2) and T(n) <= 1 + 2n + n log(n-1). Both are asymptotically proportional to n log(n), giving you the Theta class.
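If it helps to see the bounds numerically (this is just an illustration, not part of the argument), the following C sketch counts the S++ operations exactly, which are the dominant term of T(n), and compares the count to n*ln(n); the ratio stays within a small constant factor, which is what Theta(n log n) predicts.
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Count the exact number of times S++ runs and compare to n*ln(n). */
    for (long long n = 1000; n <= 1000000; n *= 10) {
        long long S = 0;
        for (long long i = 1; i < n; i++)
            for (long long j = 0; j < n; j += i)
                S++;
        double nlogn = (double)n * log((double)n);
        printf("n = %7lld  S = %12lld  S / (n ln n) = %.3f\n", n, S, (double)S / nlogn);
    }
    return 0;
}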

Proving or Refuting Time Complexity

I have an exam soon and I haven't been at university for a long time because I was in the hospital.
Prove or refute the following statements:
1. log(n) = O(√n)
2. 3^(n-1) = O(2^n)
3. f(n) + g(n) = O(f(g(n)))
4. 2^(n+1) = O(2^n)
Could someone please help me and explain these?
(1) is true because log(n) grows asymptotically slower than any positive power of n, including sqrt(n) = n^(1/2). To prove this we can observe that both log(n) and sqrt(n) are strictly increasing functions for n > 0 and then focus on a sequence where both evaluate easily, e.g., 2^(2k). Now we see log(2^(2k)) = 2k, but sqrt(2^(2k)) = 2^k. For k = 2, 2k = 2^k, and for k > 2, 2k < 2^k. This glosses over some details but the idea is sound. You can finish this by arguing that between 2^(2k) and 2^(2(k+1)) both functions have values greater than one for k >= 2 and thus any crossings can be eliminated by multiplying sqrt(n) by some constant.
(2) It is not true that 3^(n-1) is O(2^n). Suppose it were. Then there exist n0 and c such that for n > n0, 3^(n-1) <= c*2^n. First, eliminate the -1 by writing 3^(n-1) = (1/3)*3^n, so (1/3)*3^n <= c*2^n. Next, divide through by 2^n: (1/3)*(3/2)^n <= c. Multiply by 3: (3/2)^n <= 3c. Finally, take the log of both sides with base 3/2: n <= log_{3/2}(3c). The RHS is a constant and n is a variable, so this cannot hold for arbitrarily large n as required. This is a contradiction, so our supposition was wrong; that is, 3^(n-1) is not O(2^n).
(3) This is not true. f(n) = 1 and g(n) = n is an easy counterexample: f(n) + g(n) = 1 + n, but O(f(g(n))) = O(f(n)) = O(1), and 1 + n is not O(1).
(4) This is true. Rewrite 2^(n+1) as 2*2^n and it becomes obvious that the bound holds for n >= 1 with any c >= 2.
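Not a proof, but if you want numerical intuition for (1), (2) and (4), you can tabulate the relevant ratios with a small C sketch (my own addition): a ratio that stays bounded is consistent with the O(...) claim, a ratio that blows up refutes it, and 3^(n-1)/2^n is evaluated through exp/log to avoid integer overflow.
#include <stdio.h>
#include <math.h>

int main(void) {
    /* f(n) = O(g(n)) requires f(n)/g(n) to stay bounded for large n. */
    for (double n = 4; n <= 64; n *= 2) {
        double r1 = log(n) / sqrt(n);                        /* statement (1): shrinks   */
        double r2 = exp((n - 1) * log(3.0) - n * log(2.0));  /* statement (2): blows up  */
        double r4 = pow(2.0, n + 1) / pow(2.0, n);           /* statement (4): always 2  */
        printf("n = %4.0f  log(n)/sqrt(n) = %.4f  3^(n-1)/2^n = %.3e  2^(n+1)/2^n = %.1f\n",
               n, r1, r2, r4);
    }
    return 0;
}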

Simplification of complexity of a function with two arguments in terms of Big O

Let's say we have the following complexity:
T(n, k) = n^2 + n + k^2 + 15*k + 123
Where we do not know anything about relations between n and k.
I could say that in terms of Big O the complexity will be the following:
T(n, k) = O(n^2 + n + k^2 + 15*k)
Can I simplify it further and drop only the constant 15, or can I drop n and 15*k as well?
UPDATE: according to this link Big O is not valid notation for two or more variables
Yes, you can.
O(n^2 + n + k^2 + 15*k) = O(n^2 + n) + O(k^2 + 15*k) = O(n^2) + O(k^2) = O(n^2 + k^2)
This does assume that k and n are both positive, looking at the behavior as they get large.
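If you prefer explicit constants over the O(...) algebra (my own addition, assuming n >= 1 and k >= 1): since n <= n^2, 15*k <= 15*k^2 and 123 <= 123*n^2, we get
n^2 + n + k^2 + 15*k + 123 <= 125*n^2 + 16*k^2 <= 125*(n^2 + k^2),
which exhibits c = 125 as a concrete witness for T(n, k) = O(n^2 + k^2).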

Time Complexity of nested loops including if statement

I'm unsure of the general time complexity of the following code.
Sum = 0
for i = 1 to N
    if i > 10
        for j = 1 to i do
            Sum = Sum + 1
Assuming i and j are incremented by 1.
I know that the first loop is O(n) but the second loop is only going to run when N > 10. Would the general time complexity then be O(n^2)? Any help is greatly appreciated.
Consider the definition of Big O Notation.
________________________________________________________________
Let f: ℝ → ℝ and g: ℝ → ℝ.
Then f(x) = O(g(x))
if and only if
∃ k ∈ ℝ and ∃ M > 0 in ℝ such that ∀ x ≥ k, |f(x)| ≤ M ⋅ |g(x)|
________________________________________________________________
Which can be read less formally as:
________________________________________________________________
Let f and g be functions defined on a subset of the real numbers.
Then, f is O of g if, for big enough x's (this is what the k is for in the formal definition), there is a constant M (from the real numbers, of course) such that M times g(x) will always be greater than or equal to f(x) (really, you can just increase M and it will always be strictly greater, but I digress).
________________________________________________________________
(You may note that if a function is O(n), then it is also O(n²) and O(e^n), but of course we are usually interested in the "smallest" function g such that it is O(g). In fact, when someone says f is O of g then they almost always mean that g is the smallest such function.)
Let's translate this to your problem. Let f(N) be the amount of time your process takes to complete as a function of N. Now, pretend that addition takes one unit of time to complete (and checking the if statement and incrementing the for-loop take no time), then
f(1) = 0
f(2) = 0
...
f(10) = 0
f(11) = 11
f(12) = 23
f(13) = 36
f(14) = 50
We want to find a function g(N) such that for big enough values of N, f(N) ≤ M ⋅ g(N). We can satisfy this with g(N) = N² and M = 1 (maybe M could be smaller, but we don't really care). In this case, big enough means greater than 10 (of course, f is still less than M ⋅ g(N) for N < 11).
tl;dr: Yes, the general time complexity is O(n²) because Big O assumes that your N is going to infinity.
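If you want to double-check the table and the witness above (purely illustrative, not part of the answer), here is a C sketch that simulates the pseudocode once and verifies both the values f(11)..f(14) and the bound f(N) <= 1 * N^2.
#include <stdio.h>

int main(void) {
    /* Simulate the pseudocode once up to N_MAX; after finishing iteration i
       of the outer loop, Sum equals f(i), the cost for input size i. */
    const long long N_MAX = 20000;
    long long Sum = 0;
    for (long long i = 1; i <= N_MAX; i++) {
        if (i > 10)
            for (long long j = 1; j <= i; j++)
                Sum = Sum + 1;
        if (i >= 11 && i <= 14)
            printf("f(%lld) = %lld\n", i, Sum);   /* should print 11, 23, 36, 50 */
        if (Sum > i * i) {                        /* the witness M = 1, g(N) = N^2 */
            printf("f(N) <= N^2 fails at N = %lld\n", i);
            return 1;
        }
    }
    printf("f(N) <= 1 * N^2 held for every N up to %lld\n", N_MAX);
    return 0;
}
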
Let's assume your code is
Sum = 0
for i = 1 to N
    for j = 1 to i do
        Sum = Sum + 1
There are 1 + 2 + ... + N = N*(N+1)/2 sum operations in total. Your code with if i > 10 does 1 + 2 + ... + 10 = 55 sum operations fewer, because the inner loop is skipped for i = 1..10. As a result, for big enough N we have
N*(N+1)/2 - 55
operations. That is
O(N^2) - O(1) = O(N^2)
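As a quick check of that count (again just an illustration, my own addition), this C sketch runs the original code, with the if, and confirms the closed form N*(N+1)/2 - 55 for N >= 10.
#include <stdio.h>

int main(void) {
    /* Verify the closed form: with the `if i > 10`, the code performs
       exactly N*(N+1)/2 - 55 additions for N >= 10. */
    for (long long N = 10; N <= 1000; N++) {
        long long Sum = 0;
        for (long long i = 1; i <= N; i++)
            if (i > 10)
                for (long long j = 1; j <= i; j++)
                    Sum = Sum + 1;
        if (Sum != N * (N + 1) / 2 - 55) {
            printf("closed form wrong at N = %lld (Sum = %lld)\n", N, Sum);
            return 1;
        }
    }
    printf("Sum == N*(N+1)/2 - 55 for every N in [10, 1000]\n");
    return 0;
}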

How is the time complexity of gcd Θ(log n)?

I was solving a time-complexity question on Interview Bit, as given in the image below.
The answer given is Θ(log n) and I am not able to grasp how the log n term arrives in the time complexity of this program.
Can someone please explain how the answer is Θ(log n)?
Theorem: for any x, gcd(n, m) with n <= fib(x) (and m <= n) makes at most x recursive calls.
Note: fib(x) is fibonacci(x), where fib(x) = fib(x-1) + fib(x-2) and fib(1) = fib(2) = 1.
Proof
Basis
For every n <= fib(1) = 1, gcd(n, m) is gcd(1, m), which recurses only once.
Inductive step
Assume the theorem holds for x and for all smaller values, which means:
calls(gcd(n, m)) <= x for every n <= fib(x)
Now consider n with n <= fib(x+1).
If m > fib(x):
calls(gcd(n, m))
= calls(gcd(m, n % m)) + 1
= calls(gcd(m, n - m)) + 1             (n % m = n - m, since fib(x) < m <= n <= fib(x+1) < 2m)
= calls(gcd(n - m, m % (n - m))) + 2
<= (x - 1) + 2                         (n - m <= fib(x+1) - fib(x) = fib(x-1), so the hypothesis gives at most x - 1 calls)
= x + 1
If m <= fib(x):
calls(gcd(n, m))
= calls(gcd(m, n % m)) + 1
<= x + 1                               (because m <= fib(x), the hypothesis gives at most x calls for gcd(m, n % m))
So the theorem also holds for x + 1, and by mathematical induction it holds for every x.
Conclusion
So the number of recursive calls of gcd(n, m) grows like the inverse of fib, and since fib grows exponentially, that inverse is Θ(log n).
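To see the theorem in action (my own sketch, assuming the standard recursive Euclidean algorithm gcd(n, m) = gcd(m, n % m), which is presumably what the program in the image implements), you can count the recursive calls on consecutive Fibonacci pairs, the classic worst case: the call count goes up by one each time x goes up by one, while fib(x) grows exponentially, so the count is logarithmic in the inputs.
#include <stdio.h>

/* Recursive Euclid with a call counter, to see how the number of calls
   relates to the Fibonacci numbers. */
static long long calls;

long long gcd(long long n, long long m) {
    calls++;
    if (m == 0) return n;
    return gcd(m, n % m);
}

int main(void) {
    long long a = 1, b = 1;                 /* fib(1), fib(2) */
    for (int x = 2; x <= 60; x++) {
        long long next = a + b;             /* fib(x+1) */
        calls = 0;
        gcd(next, b);                       /* n = fib(x+1), m = fib(x) */
        printf("x = %2d  fib(x) = %15lld  calls = %2lld\n", x, b, calls);
        a = b;
        b = next;
    }
    return 0;
}
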
This algorithm generates a decreasing sequence of integer pairs (m, n). We can show that such a sequence decays fast enough.
Let's say we start with m_1 and n_1, with m_1 < n_1.
At each step we take n_1 % m_1, which is < m_1, and repeat recursively on the pair m_2 = n_1 % m_1 and n_2 = m_1.
Now, let's say n_1 % m_1 = m_1 - p for some p where 0 < p < m_1, so m_2 = m_1 - p.
Take another step, (m_2, n_2) -> (m_3, n_3). Then m_3 = n_2 % m_2 = m_1 % (m_1 - p), which is strictly less than m_1 - p (it is a remainder modulo m_1 - p) and at most p (writing m_1 = (m_1 - p) + p, removing at least one multiple of m_1 - p leaves at most p).
So we can write m_3 <= min(m_1 - p, p) <= m_1 / 2. This result expresses the fact that the smaller element of the pair at least halves every two steps, i.e., the sequence decreases geometrically; therefore the algorithm has to terminate in at most about 2*log_2(m_1) steps.
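And here is a small C check of the halving claim itself (an illustration, my own addition, assuming the standard Euclid step (n, m) -> (m, n % m)): it runs the algorithm on all pairs up to a bound and verifies the strict form of the claim, 2*m_{k+2} < m_k, i.e., the smaller element more than halves every two steps.
#include <stdio.h>

/* Track the smaller element m_k of each pair during Euclid's algorithm and
   check that it more than halves every two steps: 2*m_{k+2} < m_k. */
int check(long long n, long long m) {          /* assumes 0 < m < n */
    long long prev2 = -1, prev1 = -1;          /* m_{k-2}, m_{k-1} */
    while (m != 0) {
        if (prev2 != -1 && 2 * m >= prev2) return 0;   /* halving violated */
        prev2 = prev1;
        prev1 = m;
        long long r = n % m;                   /* one Euclid step */
        n = m;
        m = r;
    }
    return 1;
}

int main(void) {
    for (long long n = 2; n <= 2000; n++)
        for (long long m = 1; m < n; m++)
            if (!check(n, m)) {
                printf("halving claim failed for (n, m) = (%lld, %lld)\n", n, m);
                return 1;
            }
    printf("smaller element more than halved every two steps for all pairs up to 2000\n");
    return 0;
}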