What's the time complexity of the following?

I've seen many questions of this type, but I still can't work out what the while loop contributes.
for i = 1...n
    for j = 1..i
        k = n
        while (k > 2)
            k = k^(1/3)

The two for loops are O(n^2) combined, and the inner while loop is O(log2(log2(n))) [*]. Thus the overall complexity is O(n^2 * log2(log2(n))).
To find the number of iterations m of the inner loop, note that after m iterations k has been reduced to n^(1/3^m), so the loop stops when this reaches 2. That means solving the following for m:
n = 2^(3^m)
This gives m = log3(log2(n)), which is the same as O(log2(log2(n))), since logarithms to different bases differ only by a constant factor.
[*] Assuming that, in your notation, k^(1/3) is the cube root of k.
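
If you want to check this empirically, here is a small Python sketch (my own, not from the question) that counts the inner loop's iterations and compares them with the predicted log3(log2(n)); the counts track the prediction to within a constant:

import math

def inner_iterations(n):
    # Run the inner loop as written: repeatedly take the cube root of k.
    k = n
    count = 0
    while k > 2:
        k = k ** (1 / 3)
        count += 1
    return count

for n in [10, 10**2, 10**4, 10**8, 10**16]:
    predicted = math.log(math.log2(n), 3)  # log3(log2(n))
    print(n, inner_iterations(n), round(predicted, 2))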

Calculate time complexity for this solution

I have the following code. I think the solution is O(n^2) because it is a nested loop. Can anyone confirm?
function sortSmallestToLargest(data):
    sorted_data = {}
    while data is not empty:
        smallest_data = data[0]
        foreach i in data:
            if (i < smallest_data):
                smallest_data = i
        sorted_data.add(smallest_data)
        data.remove(smallest_data)
    return sorted_data
Ok, now I see what you are doing! Yes, you are right, it's O(n²), because on every pass of the while loop you scan through every remaining element of data. The list shrinks by one each pass, but since Big-O ignores constant factors we can still call each scan O(n). Multiplying (because one loop is inside the other) gives O(n²). More precisely, the scans take n + (n-1) + ... + 1 = n(n+1)/2 steps, which is O(n²).
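
For reference, here is a direct Python translation of the pseudocode (a sketch; the function name is mine); it is essentially selection sort built on a list:

def sort_smallest_to_largest(data):
    sorted_data = []
    data = list(data)              # work on a copy; the loop empties it
    while data:                    # runs n times
        smallest = data[0]
        for x in data:             # scans n, n-1, ..., 1 elements
            if x < smallest:
                smallest = x
        sorted_data.append(smallest)
        data.remove(smallest)      # itself an O(n) scan
    return sorted_data

print(sort_smallest_to_largest([3, 1, 2]))  # [1, 2, 3]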

Randomly increasing sequence - Wolfram Mathematica

Good afternoon, I have a problem making a recurrence table with a randomly increasing sequence. I want it to return an increasing sequence with a random difference between consecutive elements. Right now I've got:
RecurrenceTable[{a[k+1]==a[k] + RandomInteger[{0,4}], a[1]==-12},a,{k,1,5}]
But it returns an arithmetic progression with one d chosen for all k (e.g. {-12,-8,-4,0,4,8,12,16,20,24}).
Also, I would be really grateful if you could explain why, when I replace every k in my code with n, I get:
RecurrenceTable[{4+a[n] == a[n],a[1] == -12},a,{n,1,10}]
Thank you very much for your time!
I don't believe that RecurrenceTable is what you are looking for: RandomInteger[{0,4}] is evaluated only once, before RecurrenceTable ever sees the equation, so the same difference d is reused for every step, which is why you get an arithmetic progression.
Try this instead
FoldList[Plus,-12,RandomInteger[{0,4},5]]
which on one run returns
{-12,-8,-7,-3,1,2}
and on another run returns
{-12,-9,-5,-3,0,1}
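
The same idea, a running sum over a list of random steps, looks like this in Python for comparison (my sketch, using itertools.accumulate; requires Python 3.8+ for the initial keyword):

import random
from itertools import accumulate

# Start at -12 and keep a running sum of 5 random steps drawn from [0, 4].
steps = [random.randint(0, 4) for _ in range(5)]
print(list(accumulate(steps, initial=-12)))
# e.g. [-12, -8, -7, -3, 1, 2]; differs on every run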

What is a*<a[j] in pseudocode?

I was trying the time-complexity MCQ questions given on CodeChef under practice for Data Structures and Algorithms. One of the questions had the line a* < a[j]. What does that line mean?
I know that if the and condition weren't there, the complexity would have been O(n^2). But a* is completely alien to me. I searched for it on the internet, but all I got was the A* algorithm and asterisks! I tried running the program in Python with a print statement, but it says that * is invalid there. Does it mean something like a pointer to the first element of the array?
Find the time complexity of the following function
n = len(a)
j = 0
for i = 0 to n-1:
    while (j < n and a* < a[j]):
        j += 1
The answer is given as O(n). But there are nested loops, so it is supposed to be O(n^2). Help required! Thanks
It doesn't actually matter what a* means. The question is to determine the time complexity of the algorithm. Notice that although there are two nested loops, the inner while loop isn't a full independent loop. Its index is j, which starts at 0 and is only ever incremented, with an upper bound of n. So the inner loop can only run a maximum of n times in total. This means that the overall complexity is only O(n).
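
To see the amortized argument concretely, here is a Python sketch. Since the meaning of a* is unknown, I've substituted the placeholder condition a[i] < a[j] (my assumption; any condition gives the same bound) and counted the inner-loop steps:

def total_inner_steps(a):
    n = len(a)
    j = 0
    steps = 0  # counts inner-loop iterations summed over ALL i
    for i in range(n):
        while j < n and a[i] < a[j]:  # placeholder for the mysterious `a* < a[j]`
            j += 1
            steps += 1
    return steps

# j never resets, so the inner loop runs at most n times in total:
print(total_inner_steps([3, 1, 4, 1, 5]), "<=", 5)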

BFS bad complexity

I am using adjacency lists to represent a graph in OCaml. I then wrote the following implementation of a BFS in OCaml, starting at the node s.
let bfs graph s =
  let size = Array.length graph in
  let seen = Array.make size false and next = [s] in
  let rec aux = function
    | [] -> ()
    | t :: q ->
        if not seen.(t) then begin
          seen.(t) <- true;
          aux (q @ graph.(t))
        end
        else aux q
  in
  aux next
size represents the number of nodes of the graph, seen is an array where seen.(t) = true if we've already seen node t, and next is the list of nodes we still need to visit.
The thing is that normally the time complexity of BFS is linear, O(V + E), yet I feel like my implementation doesn't have this complexity. If I am not mistaken, the complexity of q @ graph.(t) is quite big since it's O(|q|). So my complexity is quite bad, since at each step I am concatenating two lists and this is heavy in time.
So I am wondering: how can I adapt this code to get an efficient BFS? The problem, I think, comes from implementing the queue with lists. Does the Queue module in OCaml take O(1) to add an element? In that case, how can I use it for my BFS, given that I can't pattern-match on a Queue as easily as on a list?
the complexity of q @ graph.(t) is quite big since it's O(|q|). So my complexity is quite bad, since at each step I am concatenating two lists and this is heavy in time.
You are absolutely right – this is the bottleneck of your BFS. You should be able to use the Queue module: according to https://ocaml.org/learn/tutorials/comparison_of_standard_containers.html, inserting and taking elements are both O(1).
One of the differences between queues and lists in OCaml is that queues are mutable structures, so you will need to use impure functions like add, take and top, which respectively insert an element in place, pop the element at the front, and return the front element without removing it.
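
To illustrate the pattern (in Python rather than OCaml, as a sketch; collections.deque plays the role of OCaml's Queue here, with O(1) append and popleft):

from collections import deque

def bfs(graph, s):
    # graph is an adjacency list: graph[v] is the list of neighbours of v
    seen = [False] * len(graph)
    queue = deque([s])
    seen[s] = True                 # mark on enqueue so a node enters at most once
    while queue:
        t = queue.popleft()        # O(1), unlike rebuilding a list with @
        for u in graph[t]:         # every edge is examined once -> O(V + E) overall
            if not seen[u]:
                seen[u] = True
                queue.append(u)    # O(1)

bfs([[1, 2], [0, 2], [0, 1]], 0)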
If I am not mistaken, the complexity of q @ graph.(t) is quite big since it's O(|q|).
That is indeed the problem. What you should be using is graph.(t) @ q. The complexity of that is O(|graph.(t)|).
You might ask: What difference does that make?
The difference is that |q| can grow as large as O(E), whereas |graph.(t)| is something you can account for: you visit every vertex in the graph at most once, so overall the concatenations cost
O(Σ_{v ∈ V} |graph.(v)|)
The sum of the adjacency-list lengths of all vertices. Or in other words: E.
That brings you to the overall complexity of O(V + E). (Note that prepending the neighbours changes the visiting order, so the traversal is no longer breadth-first; to keep BFS order at the same cost, use a real queue as in the other answer.)

Analyzing time complexity (polylog vs polynomial)

Say an algorithm runs in
5n^3 + 8n^2 (lg n)^4
Which is the first-order (dominant) term? Would it be the one with the polylog or the polynomial?
For every two constants a > 0, b > 0, log(n)^a is in o(n^b) (note the little-o notation here).
One way to prove this claim is to examine what happens when we apply a monotonically increasing function to both sides: the log function.
log(log(n)^a) = a * log(log(n))
log(n^b) = b * log(n)
Since we can ignore constant factors in asymptotic comparisons, asking "which is bigger, log(n)^a or n^b?" is the same as asking "which is bigger, log(log(n)) or log(n)?", and the latter is much easier to answer: log(n) grows strictly faster. Applied to your example with a = 4 and b = 1: (lg n)^4 = o(n), so 8n^2 (lg n)^4 = o(n^3), and the first-order term is 5n^3.
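
A quick numeric sanity check in Python (my sketch) shows the polynomial term taking over as n grows:

import math

# Ratio of the two terms: below 1 the polylog term still dominates,
# above 1 the pure polynomial term 5n^3 has taken over.
for n in [2**10, 2**20, 2**40, 2**80]:
    poly = 5 * n**3
    polylog = 8 * n**2 * math.log2(n) ** 4
    print(f"n = 2^{int(math.log2(n))}: poly/polylog = {poly / polylog:.3g}")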