Time complexity of the given code and the reason

[image: the code in question]
Can anyone tell me why the time complexity is O(n)? The outer loop runs log(n) times and the inner loop runs n times, so the time complexity should be O(n*log(n)), but it's O(n)?

The inner loop doesn't run n times, it runs i times, and i varies in the outer loop. So to find the time complexity, you set up an expression for how many times the inner loop body runs in total. In your image, this is already done in the explanation.
From there, after some mathematics, you will find that the time complexity is O(n). Hint: geometric series.
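The image isn't reproduced here, but the usual pattern behind this question is an outer loop that doubles i and an inner loop that runs i times. A minimal Python sketch under that assumption, counting the inner iterations:

def count_inner_iterations(n):
    total = 0
    i = 1
    while i <= n:            # outer loop: about log2(n) passes
        for _ in range(i):   # inner loop: i iterations this pass
            total += 1
        i *= 2               # i doubles each pass
    return total             # 1 + 2 + 4 + ... + n <= 2n, i.e. O(n)

for n in (8, 64, 1024):
    print(n, count_inner_iterations(n))   # always less than 2*n

The geometric series 1 + 2 + 4 + ... + n sums to 2n - 1, which is why the log(n) passes of the outer loop still add up to only O(n) total work.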

I'm in the middle of a computer science class, learning time complexity, and stuck on a problem (Java)

So, first, here's the code:
[image: code from our presentation]
We are learning time complexity, and according to our teacher, the run time for this one is n^2 + 4n.
But to me and my friends, this seems wrong. It's a for loop, inside a for loop, inside a for loop, so doesn't this need to be n^3? What I need is some explanation of the runtime of this code. We literally sat with the teacher on this for an hour (the teacher is not that smart, because she never really learned this subject, she just had a couple of classes about it), and all she could say was that you don't count the first for loop. Please, any explanation?
There are several points to make here:
Having three nested loops doesn't imply O(n^3) complexity. It depends on the loops and what happens within the loops. It could be anything between O(1) and never halting.
Your teacher should specify what should be measured more precisely: the number of steps for the entire piece of code? Or just the two inner loops, ignoring the outer loop?
You need to specify what the variable parameters are. In this case it's pretty clear what it is (probably n, and not i or j), but it helps to be precise.
The code doesn't run because there is a syntax error: b[i] ) reader.nextInt();. So strictly speaking, there is no time complexity to reason about here.
OK, now assuming that the line should actually read b[i] = reader.nextInt();, the variable parameter is n, and the whole code should be measured: your teacher is right that the entire code runs in O(n^2), because the outer and the second loops share the same index variable i. So the outer loop iterates exactly once.
It doesn't really make sense to be much more precise than O(n^2) unless you define precisely what counts as 1 step. So stating something that looks precise, like n^2 + 4n, is pretty much meaningless without much more context.
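The code itself isn't reproduced here, so the following is a hypothetical Python sketch of the index-sharing pattern described above: because the outer and middle loops share the variable i, the outer loop's condition is already false after one pass.

def shared_index(n):
    steps = 0
    i = 0
    while i < n:                 # "outer" loop
        i = 0
        while i < n:             # "middle" loop reuses i, leaves i == n
            for j in range(n):   # inner loop: n steps
                steps += 1
            i += 1
        # i == n here, so the outer condition fails after one pass
    return steps

print(shared_index(10))   # 100, not 1000: O(n^2), not O(n^3)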

An interesting question about time complexity

During class I asked my teacher this question and he couldn't answer it, which is why I am asking here.
I asked: if we have a loop that runs from 1 to 10, would the complexity be O(1) (big O of 1)? He answered yes. So here's the question: what if I had written a loop that runs from 1 to 1 million? Is it still O(1)? Or is it O(n), or something else?
Pseudo code:
for i in range(1, 1000000):   # 1 million
    print("hey")
What is the time complexity of that loop?
Now, if you think the answer is O(n), how can you say it is O(n)? Because O(n) is when the complexity is linear.
And where is the dividing line between O(1) and O(n)?
Say I had written the loop for 10 or 100 or 1000 or 10000 or 100000 iterations: at which point did it transform from O(1) to O(n)?
By definition, O(10000000) and O(1) are equal. Let me quickly explain what complexity means.
What we try to represent with the abstraction of time (and space) complexity isn't how fast a program will run, but how the runtime (or space) grows as the input length grows.
For instance, given a loop with a fixed number of iterations (let's say 10), it doesn't matter whether your input is 1 element long or 10000000000000, because your loop will ALWAYS run the same number of iterations; therefore there is no growth in runtime (even if those 10 iterations take 1 week to run, they will always take 1 week).
But if your algorithm's steps depend on your input length, then the longer your input, the more steps your algorithm takes. The question is: how many more steps?
In summary, time (and space) complexity is an abstraction. It's not there to tell us how long things will take; it's simply there to tell us how the runtime will grow as the input grows. O(1) == O(10000000), because it's not about how long it takes, it's about the change in the runtime. An O(1) algorithm can take 10 years, but it will always take 10 years, even for very large inputs.
I think you are confusing the terms. Time complexity for a given algorithm is given by the relationship between the change in execution time and the change in input size.
If you are running a fixed loop from 1 to 10, doing something in each iteration, then that counts as O(10), which is the same as O(1), meaning that it will take the same time on each run.
But as soon as the number of iterations starts depending on the number of elements or tasks, the loop becomes O(n), meaning that the complexity becomes linear: proportionally more tasks, proportionally more time.
I hope that clears some things up. :-)
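To make the point of both answers concrete, here is a minimal Python sketch (the function names are just for illustration): the first loop's iteration count is fixed, so it is O(1); the second's grows with the input, so it is O(n).

def fixed_loop(items):
    # Always exactly 10 iterations, no matter how long items is: O(1).
    count = 0
    for _ in range(10):
        count += 1
    return count

def linear_loop(items):
    # One iteration per element: doubling the input doubles the work: O(n).
    count = 0
    for _ in items:
        count += 1
    return count

print(fixed_loop(range(5)), fixed_loop(range(1000000)))    # 10 10
print(linear_loop(range(5)), linear_loop(range(1000000)))  # 5 1000000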

Time Complexity Of The Below Program

algorithm what(n)
begin
    if n = 1 then call A
    else
    begin
        what(n-1);
        call B(n)
    end
end.
In the above program, I was asked to find the time complexity where procedure A takes O(1) time and procedure B takes O(1/n).
I formed the recurrence relation T(n) = T(n-1) + O(1/n).
Solving it, I got T(n) = O(log n), since back substitution yields the harmonic series, and the sum of the harmonic series is O(log n). But the answer is given as O(n). I am not able to figure out how they got that answer. In the explanation they have added a constant times n to the recurrence relation. I didn't get why we should add that constant times n. Please help me understand this.
This is likely a trick question set by the author / examiner to catch you out. You must note that the O(1) operations involved in each call to what (pushing arguments to stack etc.) overshadow the O(1/n) complexity of B – at least asymptotically speaking. So the actual time complexity is T(n) = T(n - 1) + O(1), which gives the correct answer.
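A small numeric check of this in Python (the per-call constant c is an assumed illustration, not part of the problem statement): the O(1/n) terms form a harmonic series contributing only O(log n) in total, so the constant per-call overhead dominates.

def T(n, c=1.0):
    # T(n) = T(n-1) + c + 1/n, where c models the constant
    # per-call overhead (pushing arguments to the stack etc.)
    total = c                  # base case n = 1: call A, O(1)
    for k in range(2, n + 1):
        total += c + 1.0 / k   # overhead c dominates B's O(1/k) cost
    return total

for n in (10, 100, 1000):
    print(n, round(T(n), 2))   # grows roughly like c*n, i.e. O(n)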

What is the Difference in the Big-Oh of these Loops?

[image: the three for loops]
I'm fairly new to this Big-Oh stuff and I'm having difficulty seeing the difference in complexity between these three loops.
They all seem to run less than O(n^2) but more than O(n).
Could someone explain to me how to evaluate the complexity of these loops?
Thanks!
Could someone explain to me how to evaluate the complexity of these loops?
Start by clearly defining the problem. The linked image has little to go on, so let's start making up stuff:
The parameter being varied is integer n.
C is a constant positive integer value greater than one.
The loop variables are integers.
Integers do not overflow.
The costs of addition, comparison, assignment, multiplication and indexing are all constant.
The cost whose complexity we are looking for is the total cost of the constant operations of the innermost loop; we ignore all the additions and whatnot in the computations of the loop variables themselves.
In each case the innermost statement is the same, and is of constant cost, so let's just call that cost "one unit" of cost.
Great.
What is the cost of the first loop?
The cost of the inner statement is one unit.
The cost of the "j" loop containing it is ten units every time.
How many times does the "i" loop run? Roughly n divided by C times.
So the total cost of the "i" loop is 10 * n / C, which is O(n).
Now can you do the second and third loops? Say more clearly where you are running into trouble. Start with:
The cost of the first run of the "j" loop is 1 unit.
The cost of the second run of the "j" loop is C units.
The cost of the third run of the "j" loop is C * C units.
...
and go from there.
Remember that you don't need to work out the exact cost function. You just need to figure out the dominating cost. Hint: what do we know about C * C * C ... at the last run of the outer loop?
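The linked image isn't available here, so the following Python sketch is a hypothetical reconstruction of the three loops, chosen to be consistent with the costs worked out above and with the Sigma-notation answer below (C is a constant integer greater than one):

C = 2        # some constant > 1
n = 1000

# (a): i steps by C, inner loop is a fixed 10 iterations:
#      about (n / C) * 10 units in total, i.e. O(n)
i = 0
while i < n:
    for j in range(10):
        pass
    i += C

# (b): i is multiplied by C, inner loop runs i times:
#      1 + C + C*C + ... units, a geometric series, i.e. O(n)
i = 1
while i < n:
    for j in range(i):
        pass
    i *= C

# (c): i is multiplied by C, inner loop runs n times:
#      n units per pass, about log_C(n) passes, i.e. O(n*log(n))
i = 1
while i < n:
    for j in range(n):
        pass
    i *= C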
You can analyse the loops using Sigma notation. Note that for the purpose of studying the asymptotic behaviour of loop (a), the constant C only describes the linear increment of the loop variable, and since the inner loop performs a fixed number of iterations, we can freely choose any integer value C > 0 in our analysis; hence, for loop (a), choose C = 1. For loop (b) we'll keep C, but assume C > 1 (integer): if C = 1 in loop (b), it never terminates, as i is never increased. Finally, define the innermost operations in all loops as our basic operations, with cost O(1).
The Sigma notation analysis follows:
[image: Sigma notation analysis]
Hence:
(a) is O(n)
(b) is O(n)
(c) is O(n*log(n))

Amortized complexity of a balanced binary search tree

I'm not entirely sure what amortized complexity means. Take a balanced binary search tree data structure (e.g. a red-black tree). The cost of a normal search is naturally log(N), where N is the number of nodes. But what is the amortized complexity of a sequence of m searches in, let's say, ascending order? Is it just log(N)/m?
Well, you can consider asymptotic analysis as a strict method to set an upper bound on the running time of an algorithm, whereas amortized analysis is a somewhat more liberal method.
For example, consider an algorithm A with two statements S1 and S2. The cost of executing S1 is 10 and S2 is 100. Both statements are placed inside a loop, as follows.
n = 0;
while (n < 100)
{
    if (n % 10 != 0)
    {
        S1;
    }
    else
    {
        S2;
    }
    n++;
}
Here S1 is executed 9 times as often as S2 (90 iterations versus 10). But asymptotic analysis only considers the facts that the costlier statement S2 takes 100 time units and sits inside a loop executing 100 times, so the upper limit for the execution time is of the order of 100 * 100 = 10000. Amortized analysis, by contrast, averages out how often S1 and S2 are actually executed: 90 * 10 + 10 * 100 = 1900 units. Thus amortized analysis gives a better estimate of the upper limit for executing the algorithm.
I think it is m*log(N), because you have to do m search operations (each from the root node down to the target node), while the complexity of one single operation is log(N).
EDIT: @user1377000, you are right, I had confused amortized complexity with asymptotic complexity. But I don't think it is log(N)/m... because that would require all m search operations together to finish in O(log N) time, which is not guaranteed.
What is amortized analysis of algorithms?
I think this might help.
In the case of a balanced search tree, the amortized complexity is equal to the asymptotic one. Each search operation takes O(log n) time, both asymptotically and on average. Therefore for m searches the overall complexity will be O(m log n).
Pass in the items to be found all at once.
You can think of it in terms of divide-and-conquer.
Take the item x in the root node.
Binary-search for x into your array of m items.
Partition the array into things less than x and greater than x. (Ignore things equal to x, since you already found it.)
Recursively search for the former partition in your left child, and for the latter in your right child.
One worst case: your array of items is just the list of things in the leaf nodes. (n is roughly 2m.) You'd have to visit every node. Your search would cost lg(n) + 2*lg(n/2) + 4*lg(n/4) + .... That's linear. Think of it as doing smaller and smaller binary searches until you hit every element in the array once or twice.
I think there's also a way to do it by keeping track of where you are in the tree after a search. C++'s std::map and std::set return iterators which can move left and right within the tree, and they might have methods which can take advantage of an existing iterator into the tree.
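A minimal Python sketch of the divide-and-conquer batch search described above (the Node class and all names are hypothetical):

import bisect

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def batch_search(node, queries, found):
    # queries is a sorted list of keys to look up in node's subtree
    if node is None or not queries:
        return
    # binary-search for node.key within queries: the partition step
    lo = bisect.bisect_left(queries, node.key)
    hi = bisect.bisect_right(queries, node.key)
    if lo != hi:                                    # node.key was queried
        found.add(node.key)
    batch_search(node.left, queries[:lo], found)    # keys < node.key
    batch_search(node.right, queries[hi:], found)   # keys > node.key

# usage: look up several keys at once in a small hand-built tree
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
found = set()
batch_search(root, [1, 3, 4, 8], found)
print(found)   # {1, 3, 4}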