How to prove a lower bound of n * log(n) for this simple algorithm - time-complexity

The question is how many times does this algorithm produce a meow:
KITTYCAT(n):
    for i from 0 to n − 1:
        for j from 2^i to n − 1:
            meow
So the inner loop has a worst case of n iterations, but it only does real work for about log(n) of the outer iterations: even though the outer loop runs n times, whenever i > log(n) the inner loop never runs, so you have O(n * log(n)).
However, since I can't assume a best case of n for the inner loop, how do I prove that the algorithm still has a best case of n * log(n)?
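For concreteness, here is a direct Python sketch of that pseudocode (my reading: the loop bounds include the start value and run up to n - 1, and an empty range simply means zero iterations):

    def kittycat(n):
        # Count how many times "meow" would happen instead of printing it.
        meows = 0
        for i in range(n):              # i from 0 to n - 1
            for j in range(2 ** i, n):  # j from 2^i to n - 1; empty once 2^i >= n
                meows += 1              # meow
        return meows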

When i > log2(n), the start value of the inner loop is higher than its end value. Depending on how you interpret it, this either means that the inner loop counts down, or that it does not run at all. If you interpret it as counting down, then it gets very big indeed and ends up dominating, and you have Ω(2^n), which is not what you seem to be looking for.
If instead you assume the inner loop goes away, then this code is really
for i from 0 to log2(n):
    for j from 2^i to n - 1:
        meow
giving you Ω(n log n).
If you're asking how to prove that last step, you can calculate the exact number of iterations -- the inner loop iterates n-1 times, then n-2 times, then n-4, and so on all the way down to 0. So the exact count (at least when n is a power of 2) is
    (n-1) + (n-2) + (n-4) + ... + (n - n/4) + (n - n/2) + (n - n)
or
    n*log2(n) - 1 - 2 - 4 - ... - n/4 - n/2
which, since 1 + 2 + 4 + ... + n/2 = n - 1, works out to exactly
    n*log2(n) - n + 1
which is asymptotically equivalent to n*log(n) as n -> ∞, so the count is Θ(n log n) and in particular Ω(n log n).
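A quick numeric check of that closed form, reusing the kittycat counter sketched in the question above (for powers of 2 the exact count is n*log2(n) - n + 1):

    import math

    for n in (4, 16, 1024):
        print(n, kittycat(n), n * math.log2(n) - n + 1)  # the two counts agree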

Related

What is the time complexity of a log n while loop nested inside a for loop?

I'm having trouble figuring out the time complexity of the following code.
for i in range(0, n):
    x = 1
    while x < n:
        x = x * 2
I understand that the outer for loop runs n times, and that the while loop runs log n times (I think). So does that mean that, because the outer for loop has more of an impact, the run time is O(n)?
It's O(n * log(n)).
The outer loop is running n times and each time it runs it runs the inner loop. The inner loop is indeed O(log n). To see this imagine n being a power of 2. If n is 4,
x = 1
while x < n:
    x = x * 2
will iterate twice, once when x is 1 and once when x is 2. If n is 8, this loop will iterate 3 times (on x = 1, 2, and 4). Note that 2^2 is four and 2^3 is eight. The number of iterations is the exponent; you could prove this by mathematical induction if you want to be rigorous. We can get that exponent by taking the base-2 log of n, but by the definition of asymptotic notation we can ignore the base and just say that the loop above runs in O(log n) time.
If we have a loop that runs in O(log n) that we are running O(n) times we can say that the whole thing runs in O(n * log(n)) time.
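If you want to convince yourself numerically, here is a small counting sketch (a rough check, not the original code verbatim; the inner loop performs ceil(log2(n)) doublings per outer pass):

    import math

    def count_steps(n):
        steps = 0
        for i in range(n):
            x = 1
            while x < n:
                x = x * 2
                steps += 1
        return steps

    for n in (8, 100, 1024):
        print(n, count_steps(n), n * math.ceil(math.log2(n)))  # the two columns match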

Time Complexity of Selection sort

start = 0
while start != len(array) - 1:
    for i in range(start + 1, len(array)):
        if array[i] < array[start]:
            array[i], array[start] = array[start], array[i]
    print(array)
    start += 1
in this case shouldn't the complexity be like
O(n) = n * [(n-1) + (n-2) + ... + (n-(n-1))]
since for each of the n passes of the outer loop the inner loop runs a number of steps that gradually reduces by one? That way the complexity comes out to (n^3 - n^2)/2. What is wrong with my approach?
Look at it this way. The first time (start=0) the inner loop performs n-1 steps,
the second time (start=1) the inner loop performs n-2 steps, and so on. Thus you have:
(n-1) + (n-2) + ... + 1 steps, which equals (n^2 - n)/2 steps. You should not multiply by n again: that sum already accounts for every pass of the outer loop.
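You can confirm that count empirically with a small sketch built around the code from the question (count_comparisons is a hypothetical helper added here, not part of the original post):

    def count_comparisons(array):
        comparisons = 0
        start = 0
        while start != len(array) - 1:
            for i in range(start + 1, len(array)):
                comparisons += 1                      # one comparison per inner-loop pass
                if array[i] < array[start]:
                    array[i], array[start] = array[start], array[i]
            start += 1
        return comparisons

    for n in (2, 5, 10):
        print(n, count_comparisons(list(range(n, 0, -1))), (n * n - n) // 2)  # columns match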

BIG(O) time complexity

What is the time complexity of the code below?
1)
function(values, xlist, ylist):
    sum = 0
    n = 0
    for r from 0 to xlist:
        for c from 0 to ylist:
            sum += values[r][c]
            n = n + 1
    return sum / n
2)
function PrintCharacters():
    characters = {"a", "b", "c", "d"}
    foreach character in characters:
        print(character)
I think the 1st code has O(xlist*ylist) complexity and the 2nd code has O(n).
Is this right?
Big O notation is used to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4 n^2 - 2 n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows at the order of n^2" and write: T(n) = O(n^2).
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
So for your case it would be O(n), since you can find constants C and N with |f(x)| <= C|g(x)|.
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
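As a concrete check of that definition against the T(n) example above, one possible choice of constants is C = 5 and N = 1 (a sketch; many other pairs work just as well):

    def T(n):
        return 4 * n ** 2 - 2 * n + 2

    def g(n):
        return n ** 2

    C, N = 5, 1
    # The definition asks for |T(n)| <= C * |g(n)| whenever n > N.
    print(all(abs(T(n)) <= C * abs(g(n)) for n in range(N + 1, 10000)))  # True

The exact constants do not matter; what matters is that some pair exists, which is what lets us drop constants and lower-order terms.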
for r from 0 to xlist:          // outer loop --> runs n (= xlist) times
    for c from 0 to ylist:      // inner loop --> runs n (= ylist) times
        sum += values[r][c]
        n = n + 1
function PrintCharacters():
    characters = {"a", "b", "c", "d"}
    foreach character in characters:   // runs once per character; with n characters that is n times, so O(n)
        print(character)
Big O notation describes what happens when the value is very big: the outer loop runs n times and, for each of those, the inner loop runs n times.
Assume n = 100; then the total is n^2 = 10,000 runs.
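Here is a runnable Python version of the first snippet with a counter, just to confirm that n ends up equal to xlist * ylist (a sketch; it assumes xlist and ylist are integer bounds and values is a matrix of at least that size):

    def average(values, xlist, ylist):
        total = 0
        n = 0
        for r in range(xlist):          # outer loop: xlist passes
            for c in range(ylist):      # inner loop: ylist passes each time
                total += values[r][c]
                n += 1
        return total / n                # n == xlist * ylist here

    values = [[1, 2, 3], [4, 5, 6]]
    print(average(values, 2, 3))        # 3.5, after 2 * 3 = 6 additions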

simple time complexity O(nlogn)

I am reviewing some Big O notation for an interview and I came across this problem.
for i = 1 to n do:
    j = i
    while j < n do:
        j = 2 * j
Simple, right? The outer loop provides n steps, and in each of those steps we do a single O(1) assignment j = i, then the while loop takes log(n - j), or log(n - i) since j = i, steps. I thought the time complexity would be O(n log n), but the answer is O(n).
here is the answer:
The running time is approximately the following sum: Σ (1 + log(n/i)) for i from 1 to n, which is Θ(n).
Now, it has been a while, so I am a bit rusty. Where does log(n/i) come from? I know log(n) - log(i) = log(n/i); however, I thought the inner loop would take log(n - i) steps, not log(n) - log(i). And how is the time complexity not O(n log n)? I am sure I am missing something simple, but I have been staring at this for hours now and I am starting to lose my mind.
source: this problem is from Berkeley CS 170, Fall 2009, HW 1
edit: after thinking about it a little more, it makes sense that the time complexity of the inner loop is log(n/i). Each inner loop covers the range from i up to n, but j doubles on every iteration. If the inner loop always started at 1 we would have log(n), but we can skip the doublings we don't have to do, which is log(i); log(n) - log(i) is log(n/i).
I think the log(n/i) comes from the inner loop
notice how j = i
which means when i=2 (let's say n=10)
the inner loop
while j < n do:
    j = 2 * j
will run only from j=2 up to 10, where j multiplies itself by 2 each time (hence the log) and quickly overruns the value of n=10,
so the inner loop runs about log2(n/i) times.
I ran a simple n=10 through the code and it looks like linear time, because most of the time the inner loop runs only once.
For example: once the value of i is such that multiplying it by 2 gives something greater than or equal to n, the inner loop does not run more than once.
So if n=10, you get one execution of the inner loop starting from i=n/2 (i=10/2=5): j starts at 5, enters the loop once, multiplies itself by 2, and the condition while j < n fails.
EDIT: it would be O(n*log(n)) if the value of j started at 1 every time instead of at i, so that the inner loop always had to double all the way up to n.
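To see numerically why the sum Σ (1 + log(n/i)) is Θ(n) rather than Θ(n log n), here is a small counting sketch (the ratio of total inner-loop steps to n settles near a small constant, around 2, instead of growing like log n):

    def count_doublings(n):
        steps = 0
        for i in range(1, n + 1):   # i = 1 to n
            j = i
            while j < n:
                j = 2 * j
                steps += 1
        return steps

    for n in (10, 1000, 100000):
        print(n, count_doublings(n), count_doublings(n) / n)  # the ratio stays around 2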

How do I find the number of times x=x+1 is executed in terms of N?

I'm also having trouble finding Ω() and Θ() as appropriate.
x = 0;
for k = 1 to n
    for j = 1 to n-k
        x = x + 1;
The inner loop contributes (n-1) + (n-2) + (n-3) + ... + 1 + 0 executions. Use this tutorial on calculating the sum of an arithmetic series to find the solution. The outer loop is obviously just "n."
This will be the big-theta. The big-oh will be the same as big-theta when you drop everything but the fastest-growing term and remove its multiplier, e.g. Theta(2*log(n) + 5) becomes O(log(n)). Omega is the same as big-oh in this case, because the best case and worst case are identical; or you can cheat and say that big-Omega is constant, because the running time of EVERY algorithm is trivially Ω(1).
First, look at your boundaries. k=1 and k=n.
For k=1, the inside loop is executed (n-1) times.
For k=n the inside loop is executed (0) times.
So, 0 + 1 + ... + (n-1) is an arithmetic sum => (n-1)(n)/2 times.
Now, test it on a few small values :)
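Testing on small values is easy to do in code (a sketch that counts the x = x + 1 executions and compares them with n(n-1)/2):

    def count_increments(n):
        x = 0
        for k in range(1, n + 1):          # k = 1 to n
            for j in range(1, n - k + 1):  # j = 1 to n - k
                x = x + 1
        return x

    for n in (1, 2, 5, 10):
        print(n, count_increments(n), n * (n - 1) // 2)  # the two counts match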
the answer is like this:
(n-1) + (n-2) + (n-3) + ... + 0 = n*n - (1 + 2 + 3 + ... + n) = n^2 - n(n+1)/2 = n(n-1)/2