What is the time complexity of a log n while loop nested inside a for loop? - time-complexity

I'm having trouble figuring out the time complexity of the following code.
for i in range(0, n):
    x = 1
    while x < n:
        x = x * 2
I understand that the outer for loop runs n times, and that the while loop runs log n times (I think). So does that mean that, because the outer for loop has more of an impact, the running time is O(n)?

It's O(n * log(n)).
The outer loop runs n times, and each of those iterations executes the inner loop. The inner loop is indeed O(log n). To see this, imagine n being a power of 2. If n is 4,
x = 1
while x < n:
    x = x * 2
will iterate twice, once when x is 1 and once when x is 2. If n is 8, this loop will iterate 3 times (on x = 1, 2, and 4). Note that 2^2 is four and 2^3 is eight. The number of iterations is the exponent; you could prove this by mathematical induction if you want to be rigorous. We can get that exponent by taking the base-2 log of n, but by the definition of asymptotic notation we can ignore the base and just say that the loop above runs in O(log n) time.
If we have a loop that runs in O(log n) that we are running O(n) times we can say that the whole thing runs in O(n * log(n)) time.
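As a quick sanity check, here is a minimal sketch (my own, not from the original answer) that counts the inner-loop iterations empirically and compares them to n * log2(n):

import math

def count_ops(n):
    # Total number of times the inner loop body runs across all n outer iterations.
    ops = 0
    for i in range(0, n):
        x = 1
        while x < n:
            x = x * 2
            ops += 1
    return ops

for n in (4, 8, 1024):
    print(n, count_ops(n), int(n * math.log2(n)))
# Prints matching pairs (8 vs 8, 24 vs 24, 10240 vs 10240) when n is a power of 2.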

Related

How to prove a lower bound of n * log(n) for this simple algorithm

The question is how many times does this algorithm produce a meow:
KITTYCAT(n):
    for i from 0 to n − 1:
        for j from 2^i to n − 1:
            meow
So the inner loop has a worst case of n, but it effectively only runs log(n) times: even though the outer loop runs n times, whenever i > log(n) the inner loop never runs, so you have O(n * log(n)).
However, since I can't assume a best case of n for the inner loop, how do I prove that the algorithm still has a best case of n * log(n)?
When i > log2(n), the start value of the inner loop is higher than its end value. Depending on how you interpret it, this either means that the inner loop counts down, or that it does not run at all. If you interpret it as counting down, then it gets very big indeed and ends up dominating, and you have Ω(2^n), which is not what you seem to be looking for.
If instead you assume the inner loop goes away, then this code is really
for i from 0 to log2(n):
    for j from 2^i to n - 1:
        meow
giving you Ω(n log n).
If you're asking how to prove that last step, you can calculate the exact number of iterations -- the inner loop iterates n times, then n-1 times, then n-2, then n-4, and so on, all the way down to 0. So the exact count (at least when n is a power of 2) is
    n + (n-1) + (n-2) + (n-4) + ... + (n - n/4) + (n - n/2) + (n - n)
or
    n log2(n) - 1 - 2 - 4 - ... - n/4 - n/2
which sums to approximately
    n log2(n) - n
which is asymptotically equivalent to n log n as n -> ∞.
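Here is a quick empirical check (my own sketch, not part of the original answer) that counts the meows and compares them to n log2(n) - n:

import math

def kittycat_meows(n):
    # Count meows; range(2 ** i, n) is simply empty once 2^i >= n.
    meows = 0
    for i in range(0, n):
        for j in range(2 ** i, n):
            meows += 1
    return meows

for n in (16, 256, 4096):
    print(n, kittycat_meows(n), round(n * math.log2(n) - n))
# The two columns agree up to the small additive constant noted above.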

Time complexity of these two while loops

x = 1;
while (x < n)
{
    x = x + n / 10;
    m = n * n;
    y = n;
    while (m > y)
    {
        m = m - 100;
        y = y + 20;
    }
}
The way I solved it: we add n/10 to x every time, so no matter how big n is, the number of repetitions of the outer loop is always 10.
The inner loop goes from n to n^2, and each variable in it is increasing linearly, so the inner loop should be O(n),
and because the outer loop is O(1), we get O(n) for the whole function.
But the answer options for the question are: O(n^2), O(n^3), O(n^4), O(nlogn).
What am I missing? Thanks.
You are correct that the outer loop runs a constant number of times (10), but your reasoning about the inner loop isn't all the way there. You say that there is a "linear increase" with each iteration of the inner loop, and if that were the case you would be correct that the whole function runs in O(n), but there isn't. Try to figure it out from here, but if you're still stuck:
The inner loop has to close the gap between n^2 and n, in terms of the difference between m and y. That difference is reduced by a constant amount with each iteration, 120, not by a linear amount. Therefore the inner loop runs (n^2 - n)/120 times. Multiplying the number of times the outer loop runs by the number of times the inner loop runs, we get:
O(10) * O((n^2 - n)/120)
= O(1) * O(n^2)
= O(n^2)
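A small empirical check (my own sketch) that counts the inner-loop iterations and shows the quadratic growth:

def count_inner(n):
    # Total executions of the inner loop body, following the original code.
    count = 0
    x = 1
    while x < n:
        x = x + n / 10
        m = n * n
        y = n
        while m > y:
            m = m - 100
            y = y + 20
            count += 1
    return count

for n in (100, 200, 400):
    print(n, count_inner(n))
# Doubling n roughly quadruples the count, as expected for O(n^2).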

BIG(O) time complexity

What is the time complexity of the code below?
1)
function(values, xlist, ylist):
    sum = 0
    n = 0
    for r from 0 to xlist:
        for c from 0 to ylist:
            sum += values[r][c]
            n = n + 1
    return sum / n
2)
function PrintCharacters():
    characters = {"a", "b", "c", "d"}
    foreach character in characters:
        print(character)
As I see it, the 1st code has O(xlist * ylist) complexity and the 2nd code has O(n).
Is this right?
Big O notation describes the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4 n^2 - 2 n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows at the order of n^2" and write: T(n) = O(n^2).
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
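To make this concrete with the T(n) from earlier (a worked example of my own, not from the linked notes): take C = 5 and N = 1. Then 4n^2 - 2n + 2 <= 5n^2 for all n >= 1, because the inequality rearranges to 0 <= n^2 + 2n - 2, which holds from n = 1 onward. So T(n) = O(n^2) by the definition.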
So for your case it would be O(n) for the second snippet, since |f(x)| <= C|g(x)| holds with g(n) = n (and O(xlist * ylist) for the first, by the same reasoning).
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
for r from 0 to xlist:        # runs xlist times
    for c from 0 to ylist:    # runs ylist times for each r
        sum += values[r][c]
        n = n + 1
function PrintCharacters():
    characters = {"a", "b", "c", "d"}
    foreach character in characters:   # runs once per character; with n characters this is O(n)
        print(character)
Big O notation describes behavior when the input gets very large: here the outer loop runs n times and, for each of those, the inner loop runs n times.
Assume n = 100; then the body runs n^2 = 10,000 times in total.
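If it helps to see both snippets run, here is a runnable Python version (my own sketch; the names mirror the pseudocode):

def average(values, xlist, ylist):
    # O(xlist * ylist): the loop body runs once per (r, c) pair.
    total = 0
    n = 0
    for r in range(xlist):
        for c in range(ylist):
            total += values[r][c]
            n += 1
    return total / n

def print_characters():
    # O(n) in the number of characters.
    characters = {"a", "b", "c", "d"}
    for character in characters:
        print(character)

print(average([[1, 2], [3, 4], [5, 6]], 3, 2))  # 3.5
print_characters()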

simple time complexity O(nlogn)

I am reviewing some Big O notation for an interview and I come across this problem.
for i = 1 to n do:
    j = i
    while j < n do:
        j = 2 * j
Simple, right? The outer loop provides n steps, and on each of those steps we do a single O(1) assignment j = i, then log(n - j), or log(n - i) since j = i, for the while loop. I thought the time complexity would be O(n log n), but the answer is O(n).
here is the answer:
The running time is approximately the following sum: Σ (1 + log(n/i)) for i from 1 to n, which is Θ(n).
Now, it has been a while, so I am a bit rusty. Where does log(n/i) come from? I know log(n) - log(i) = log(n/i); however, I thought we'd get log(n - i), not log(n) - log(i). And how is the time complexity not O(n log n)? I am sure I am missing something simple, but I have been staring at this for hours now and I am starting to lose my mind.
source: here is the source to this problem Berkeley CS 170, Fall 2009, HW 1
edit: after thinking about it a little more, it makes sense that the time complexity of the inner loop is log(n/i): the inner loop covers the range from i up to n, but j doubles on each pass. If the inner loop always started at 1, we would have log(n) doublings, but starting at i skips the log(i) doublings needed to reach i in the first place, leaving log(n) - log(i), which is log(n/i).
I think the log(n/i) comes from the inner loop.
Notice that j = i,
which means when i = 2 (let's say n = 10)
the inner loop
while j < n do:
    j = 2 * j
will run only from j = 2 to 10, where j multiplies itself by 2 (hence the log) and quickly overruns the value of n = 10,
so the inner loop runs log2(n/i) times.
I ran the code with n = 10 and it looks like linear time, because most of the time the inner loop runs only once.
Example: when the value of i becomes such that multiplying it by 2 gives you something greater than or equal to n, the inner loop runs no more than once.
So if n = 10, you get one execution of the inner loop starting from i = n/2 (i = 10/2 = 5): j starts at j = 5, enters the loop once, multiplies itself by 2, and the inner loop condition while j < n do: fails.
EDIT: it would be O(n log(n)) if the value of j started at j = 1 every time and the inner loop always ran from 1 to n.
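To see the Θ(n) bound numerically (a sketch of my own), count the total inner-loop iterations; the idealized sum Σ log2(n/i) equals log2(n^n / n!), which is about n · log2(e) by Stirling's approximation, i.e. linear:

def count_inner(n):
    # Total number of times j = 2 * j executes across all values of i.
    total = 0
    for i in range(1, n + 1):
        j = i
        while j < n:
            j = 2 * j
            total += 1
    return total

for n in (1000, 2000, 4000):
    print(n, count_inner(n))
# The count roughly doubles when n doubles: linear growth, not n log n.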

Time Complexity of nested loops including if statement

I'm unsure of the general time complexity of the following code.
Sum = 0
for i = 1 to N
    if i > 10
        for j = 1 to i do
            Sum = Sum + 1
Assuming i and j are incremented by 1.
I know that the first loop is O(n), but the second loop only runs when i > 10. Would the general time complexity then be O(n^2)? Any help is greatly appreciated.
Consider the definition of Big O Notation.
________________________________________________________________
Let f: ℝ → ℝ and g: ℝ → ℝ.
Then, f(x) = O(g(x))
if and only if
∃ k ∈ ℝ, ∃ M ∈ ℝ with M > 0, such that ∀ x ≥ k, |f(x)| ≤ M ⋅ |g(x)|
________________________________________________________________
Which can be read less formally as:
________________________________________________________________
Let f and g be functions defined on a subset of the real numbers.
Then, f is O of g if, for big enough x's (this is what the k is for in the formal definition), there is a constant M (from the real numbers, of course) such that M times g(x) will always be greater than or equal to f(x) (really, you can just increase M and it will always be strictly greater, but I digress).
________________________________________________________________
(You may note that if a function is O(n), then it is also O(n²) and O(e^n), but of course we are usually interested in the "smallest" function g such that it is O(g). In fact, when someone says f is O of g then they almost always mean that g is the smallest such function.)
Let's translate this to your problem. Let f(N) be the amount of time your process takes to complete as a function of N. Now, pretend that addition takes one unit of time to complete (and checking the if statement and incrementing the for-loop take no time), then
f(1) = 0
f(2) = 0
...
f(10) = 0
f(11) = 11
f(12) = 23
f(13) = 36
f(14) = 50
We want to find a function g(N) such that, for big enough values of N, f(N) ≤ M ⋅ g(N). We can satisfy this with g(N) = N², and M can just be 1 (maybe it could be smaller, but we don't really care). In this case, big enough means greater than 10 (of course, f is still less than M ⋅ g for N < 11).
tl;dr: Yes, the general time complexity is O(n²) because Big O assumes that your N is going to infinity.
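A short script (my own sketch) reproduces the f(N) table above and confirms the bound f(N) ≤ 1 ⋅ N² once N > 10:

def f(N):
    # Unit-cost model from the answer: each "Sum = Sum + 1" costs 1, everything else is free.
    cost = 0
    for i in range(1, N + 1):
        if i > 10:
            for j in range(1, i + 1):
                cost += 1
    return cost

for N in (10, 11, 12, 13, 14, 100):
    print(N, f(N), f(N) <= N ** 2)  # 0, 11, 23, 36, 50, ... and the bound always holds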
Let's assume your code is
Sum = 0
for i = 1 to N
for j = 1 to i do
Sum = Sum + 1
There are N(N+1)/2 sum operations in total, which is Θ(N^2). Your code with if i > 10 skips the iterations for i = 1 to 10, that is, 1 + 2 + ... + 10 = 55 operations fewer. As a result, for big enough N we have
    N(N+1)/2 - 55
operations. That is
    O(N^2) - O(1) = O(N^2)
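As a last sanity check (my own sketch), the closed form above matches a brute-force count:

def brute_force(N):
    Sum = 0
    for i in range(1, N + 1):
        if i > 10:
            for j in range(1, i + 1):
                Sum = Sum + 1
    return Sum

for N in (20, 50, 100):
    assert brute_force(N) == N * (N + 1) // 2 - 55  # exact count, including the skipped i = 1..10
print("closed form verified")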