Which one is better between O(log n) and O(log n^2)? - time-complexity

This is a question that my Data Structures course teacher asked in a class test. What would be the proper answer here? Since log n^2 = 2 log n, as far as I know it can be written as O(log n) in a time complexity, since constant multipliers cancel out. So is one better than the other in any specific way?

Asymptotically they are the same.
Your reasoning is right: O(log n^2) can be simplified to O(log n), so the two are equal.
It's like having two algorithms that work on an array, where the first is O(n) and the second is O(2n).
If you look at the number of operations performed, the second performs double the operations of the first, but this does not matter for asymptotic notation.
They are of the same order, namely O(n).
In your specific example the order is O(log n), and the two can be considered the same.
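As a quick numeric illustration (my own sketch, not part of the original answer), the ratio log(n^2) / log(n) is exactly 2 for every n > 1, which is precisely the kind of constant factor that big-O absorbs:

    import math

    # log(n^2) = 2 * log(n), so the ratio is the constant 2 for every n > 1.
    for n in (10, 1_000, 1_000_000, 10**12):
        print(n, math.log(n**2) / math.log(n))   # prints 2.0 each time

That constant 2 gets folded into the multiplier c in the definition of big-O, so O(log n^2) and O(log n) are the same class.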

I would agree with you that O(log(x^k)) is O(log(x)) for any constant k; the computational complexity scales the same way.

Related

An algorithm that is Theta(n) is also O(n^2), is this correct?

Since Theta(n) gives both an upper and a lower bound, this question confused me.
I am sure that a function that is O(n) can also be O(n^2), but is a function that is Omega(n) also necessarily O(n^2)?
Bear in mind that O(f), Theta(f), Omega(f) are sets of functions.
O(n) is the set of functions that asymptotically grow at most as fast as n (modulo a constant factor), so O(n) is a proper subset of O(n^2).
Omega(n) is the set of functions that asymptotically grow at least as fast as n, so it is definitely not a subset of O(n^2). But it has a non-empty intersection with it, for example 0.5n and 7n^2 are in both sets.
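For reference, here are the standard textbook definitions written out (added for clarity; they are what the set language above is based on):

    O(g)      = \{\, f \mid \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 \,\}
    \Omega(g) = \{\, f \mid \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0 \,\}
    \Theta(g) = O(g) \cap \Omega(g)

From these, Theta(n) ⊆ O(n) ⊆ O(n^2), so yes: every Theta(n) algorithm is also O(n^2). But Omega(n) is not contained in O(n^2); for example, n^3 is in Omega(n) but not in O(n^2).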

Time Complexity Question about Square Roots

I am trying to answer the following question:
Given n = 2k, find the complexity of:
func(n)
    if (n == 2) return 1;
    else return 1 + func(sqrt(n));
end
I think that because of the if-else statement it will loop n times for sure, but I'm confused by the recursive call func(sqrt(n)). Since the argument is square-rooted, I think the time complexity would be
O(sqrt(n) * n) = O(n^(1/2) * n) = O(n^(3/2)) = O((2k)^(3/2)). However, the possible answer choices are only
O(k)
O(2^n)
O(n*k)
Can O((2k)^(3/2)) be considered as O(k)? I'm confused because, although time complexity is often simplified, O(n) and O(n^2) are different, so I thought O((2k)^(3/2)) could only be simplified to O(k^(3/2)).
I'd argue that none of the given answer choices is the best answer to this question.
If you have a recursive function that does O(1) work per call and then reduces the size of its argument from n to √n, as is the case here, the runtime works out to O(log log n). The intuition behind this is that taking the square root of a number roughly halves the number of its digits, so the runtime will be O(log d), where d is the number of digits of the input number. The number of digits of n is O(log n), hence the overall runtime of O(log log n).
In your case, you have n = 2k, so O(log log n) = O(log log k). (Unless you meant n = 2^k, in which case O(log log n) = O(log log 2^k) = O(log k).)
Notice that we don't get O(n × √n) as our result. That would be what happens if we did O(√n) work O(n) times, but that's not what's happening here. The size of the input shrinks by a square root on each call, which is not the same thing as doing a square root's worth of work, and the number of times this happens isn't O(n), because the value of n shrinks far too rapidly for that.
Reasoning by analogy, the runtime of this code would be O(log n), not O(n × n / 2):
func(n):
    if n <= 2 return 1
    return func(n / 2)
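To make the O(log log n) claim concrete, here is a small counting sketch of my own (not from the original post): it counts how many recursive steps the square-root recursion and the halving recursion actually take, and compares them with log2(log2 n) and log2(n).

    import math

    def sqrt_steps(n):
        # Steps of: if n <= 2 stop, else recurse on sqrt(n).
        steps = 0
        while n > 2:
            n = math.sqrt(n)
            steps += 1
        return steps

    def halving_steps(n):
        # Steps of: if n <= 2 stop, else recurse on n / 2.
        steps = 0
        while n > 2:
            n = n / 2
            steps += 1
        return steps

    for n in (2**4, 2**16, 2**64, 2**256):
        print(f"sqrt steps: {sqrt_steps(n):3d}  ~ log2(log2 n) = {math.log2(math.log2(n)):5.1f}   "
              f"halving steps: {halving_steps(n):3d}  ~ log2 n = {math.log2(n):5.1f}")

The square-root recursion's step count tracks log2(log2 n) (off by at most one because of the base case), while the halving recursion tracks log2 n, matching the O(log log n) and O(log n) analyses above.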

How to calculate time complexity O(n) of the algorithm?

What I have done:
I measured the time spent processing 100, 1000, 10000, 100000, 1000000 items.
Measurements here: https://github.com/DimaBond174/cache_single_thread
Then I assumed that O(n) increases in proportion to n, and calculated the remaining algorithms with respect to O(n).
Having time measurements for processing 100, 1000, 10000, 100000, and 1000000 items, how can we now attribute the algorithm to O(1), O(log n), O(n), O(n log n), or O(n^2)?
Let's define N as the size of one of the inputs. An algorithm can have different big-O values depending on which input you're referring to, but generally there is only one big input that you care about. Without the algorithm in question, you can only guess; however, there are some guidelines that will help you determine which class it falls into.
General Rule:
O(1) - the program's running time barely changes regardless of the size of the data. To get this, a program must have no loops operating on the data in question at all.
O(log N) - the program slows down only slightly even when N increases dramatically, following a logarithmic curve. To get this, loops must touch only a small part of the data by repeatedly discarding a fraction of what remains (for example, binary search).
O(N) - the program's running time is directly proportional to the size of the input. You get this if you perform an operation on each unit of the data, with no nested loops acting on the data.
O(N log N) - the program slows down noticeably on larger input. This occurs when you have an O(log N) operation NESTED in a loop that would otherwise be O(N); for example, a loop that performs a binary search for each unit of data.
O(N^2) - the program slows to a crawl with larger input and eventually stalls with large enough data. This happens when you have NESTED loops: the same as above, but this time the inner loop is O(N) instead of O(log N).
So, try to think of each looping operation as O(N) or O(log N). Then, whenever loops are nested, multiply them together. If the loops are NOT nested, they are not multiplied like this: two loops that run one after the other are simply O(2N) = O(N), not O(N^2).
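Here is a tiny sketch (my own illustration; the list and helper names are arbitrary) of how those rules map onto loop structure:

    from bisect import bisect_left

    data = list(range(1_000))          # N = 1000, already sorted

    # Two loops one after the other: O(N) + O(N) = O(2N) = O(N).
    total = sum(data) + sum(x * x for x in data)

    # An O(log N) binary search nested inside an O(N) loop: O(N log N).
    found = sum(1 for x in data if bisect_left(data, x) == x)

    # An O(N) loop nested inside an O(N) loop: O(N^2).
    pairs = sum(1 for x in data for y in data if x + y == 999)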
Also remember that there may be loops under the hood, so you should think about them too. For example, Arrays.sort(X) in Java is an O(N log N) operation, so if you have that inside a loop for some reason, your program is going to be a lot slower than you think.
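Coming back to the measurements in the question, a rough empirical check is to divide each measured time by each candidate growth function and see which ratio stays closest to constant as n grows. A minimal sketch of that idea follows; the millisecond figures are made-up placeholders, so substitute the real numbers from the linked repository:

    import math

    # Placeholder timings (milliseconds) for n = 100 ... 1_000_000 items.
    measurements = {100: 0.2, 1_000: 2.1, 10_000: 19.8, 100_000: 205.0, 1_000_000: 1_990.0}

    candidates = {
        "O(1)":       lambda n: 1.0,
        "O(log n)":   lambda n: math.log(n),
        "O(n)":       lambda n: float(n),
        "O(n log n)": lambda n: n * math.log(n),
        "O(n^2)":     lambda n: float(n) ** 2,
    }

    for name, f in candidates.items():
        ratios = [t / f(n) for n, t in sorted(measurements.items())]
        spread = max(ratios) / min(ratios)   # closer to 1.0 means a better fit
        print(f"{name:11s} spread = {spread:10.2f}")

The candidate whose spread is closest to 1 is the best guess; with the placeholder numbers above, it is O(n).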
Hope that answers your question.

Does O(N/2) simplify to O(log n)?

I have an algorithm that only needs to run in O(N/2) time (basically it is a for loop that runs over only half of the elements).
How does this simplify in big-O notation? To O(log n)?
Big O notation drops constant factors. O(N/2) = O(1/2 * N), which simplifies to O(N).
If you want to know why the factor is dropped, I would refer you to this other SO question.
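A minimal sketch of the half-length loop from the question (my own illustration): the iteration count is about N/2, which is still a linear function of N, so the loop is O(N) rather than O(log N).

    def process_first_half(items):
        # Roughly len(items) / 2 iterations: linear in N, hence O(N).
        total = 0
        for x in items[: len(items) // 2]:
            total += x
        return total

An O(log N) loop would have to shrink the remaining work by a constant fraction on every iteration (as binary search does), not just skip a fixed half of the input once.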

Practical difference between O(n) and O(1 + n)?

Isn't O(n) an improvement over O(1 + n)?
This is my conception of the difference:
O(n):
    for i = 0 to n do print i
O(1 + n):
    a = 1
    for i = 0 to n do print i + a
... which would just reduce to O(n), right?
If the target time complexity is O(1 + n) but I have a solution in O(n),
does this mean I'm doing something wrong?
Thanks.
O(1+n) and O(n) are mathematically identical, as you can straightforwardly prove from the formal definition or by using the standard rule that O(a(n) + b(n)) equals the bigger of O(a(n)) and O(b(n)).
In practice, of course, if you do n+1 things it will usually (depending on compiler optimizations and so on) take longer than if you only do n things. But big-O notation is the wrong tool to talk about those differences, because it explicitly throws them away.
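For completeness, the formal argument sketched above can be written out in two lines (standard definition-chasing, added here for reference):

    f \in O(1 + n) \;\Rightarrow\; f(n) \le c\,(1 + n) \le 2c\,n \ \text{for } n \ge 1 \;\Rightarrow\; f \in O(n)
    f \in O(n)     \;\Rightarrow\; f(n) \le c\,n \le c\,(1 + n)  \;\Rightarrow\; f \in O(1 + n)

Since each class contains the other, O(1 + n) = O(n).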
It's not an improvement, because big-O doesn't describe the exact running time of your algorithm but rather its growth rate. Big-O therefore describes a class of functions, not a single function. O(n^2) doesn't mean that your algorithm will run in 4 operations on an input of size 2; it means that if you were to plot the running time of your program as a function of n, it would be asymptotically bounded above by c*n^2 starting at some n0. This is useful because we know how much slower the algorithm gets as the input grows, even though we don't know exactly how fast it is. Why the constant c? Because, as I said, we don't care about exact numbers but about the shape of the function, and multiplying by a constant factor does not change that shape.
Isn't O(n) an improvement over O(1 + n)?
No, it is not. Asymptotically these two are identical. In fact, O(n) is identical to O(n+k) where k is any constant value.