This is a question that my Data Structures course teacher asked in a class test. What would be the proper answer here? Since log n^2 = 2 log n, as far as I know it could be written as O(log n) in a time complexity, since constant multipliers cancel out. Then is one better than the other in any specific way?
Asymptotically they are the same.
Your reasoning is right: O(log n^2) can be simplified to O(log n), and obviously they are equal.
It's like having two algorithms that work on an array, where the first is O(n) and the second is O(2n).
If you look at the number of operations performed, the second performs double the operations of the first, but this is not important for asymptotic notation.
They are of the same order, that is, O(n).
In your specific example the order is O(log n), and the two can be considered the same.
I would agree with you that any O(log(x^k)) is O(log(x)) for constant k. The computational complexity scales the same.
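As a quick numeric illustration (my own minimal sketch, not part of the original question), you can check that log(n^2) is always exactly twice log(n), and that constant factor of 2 is absorbed by big-O:

import math

# log(n^2) / log(n) is 2 for any base and any n > 1;
# big-O drops that constant factor, so O(log n^2) = O(log n).
for n in (10, 1_000, 1_000_000):
    print(math.log2(n * n) / math.log2(n))   # prints 2.0 (up to rounding) each time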
This earlier question addresses some of the factors that might cause an algorithm to have O(log n) complexity.
What would cause an algorithm to have time complexity O(log log n)?
O(log log n) terms can show up in a variety of different places, but there are typically two main routes that will arrive at this runtime.
Shrinking by a Square Root
As mentioned in the answer to the linked question, a common way for an algorithm to have time complexity O(log n) is for that algorithm to work by repeatedly cutting the size of the input down by some constant factor on each iteration. If this is the case, the algorithm must terminate after O(log n) iterations, because after doing O(log n) divisions by a constant, the algorithm must shrink the problem size down to 0 or 1. This is why, for example, binary search has complexity O(log n).
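For concreteness, here is a standard iterative binary search (a routine sketch, not taken from the linked question); each loop iteration halves the remaining range, so it runs for O(log n) iterations:

def binary_search(a, target):
    # a is sorted; the search range [lo, hi) halves on every iteration.
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 9))  # 4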
Interestingly, there is a similar way of shrinking down the size of a problem that yields runtimes of the form O(log log n). Instead of dividing the input in half at each layer, what happens if we take the square root of the size at each layer?
For example, let's take the number 65,536. How many times do we have to divide this by 2 until we get down to 1? If we do this, we get
65,536 / 2 = 32,768
32,768 / 2 = 16,384
16,384 / 2 = 8,192
8,192 / 2 = 4,096
4,096 / 2 = 2,048
2,048 / 2 = 1,024
1,024 / 2 = 512
512 / 2 = 256
256 / 2 = 128
128 / 2 = 64
64 / 2 = 32
32 / 2 = 16
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
This process takes 16 steps, and it's also the case that 65,536 = 2^16.
But, if we take the square root at each level, we get
√65,536 = 256
√256 = 16
√16 = 4
√4 = 2
Notice that it only takes four steps to get all the way down to 2. Why is this?
First, an intuitive explanation. How many digits are there in the numbers n and √n? There are approximately log n digits in the number n, and approximately log(√n) = log(n^(1/2)) = (1/2) log n digits in √n. This means that, each time you take a square root, you're roughly halving the number of digits in the number. Because you can only halve a quantity k O(log k) times before it drops down to a constant (say, 2), this means you can only take square roots O(log log n) times before you've reduced the number down to some constant (say, 2).
Now, let's do some math to make this rigorous. Let's rewrite the above sequence in terms of powers of two:
√65,536 = √(2^16) = (2^16)^(1/2) = 2^8 = 256
√256 = √(2^8) = (2^8)^(1/2) = 2^4 = 16
√16 = √(2^4) = (2^4)^(1/2) = 2^2 = 4
√4 = √(2^2) = (2^2)^(1/2) = 2^1 = 2
Notice that we followed the sequence 2^16 → 2^8 → 2^4 → 2^2 → 2^1. On each iteration, we cut the exponent of the power of two in half. That's interesting, because this connects back to what we already know - you can only divide the number k in half O(log k) times before it drops down to 1.
So take any number n and write it as n = 2^k. Each time you take the square root of n, you halve the exponent in this equation. Therefore, there can be only O(log k) square roots applied before k drops to 1 or lower (in which case n drops to 2 or lower). Since n = 2^k, this means that k = log2 n, and therefore the number of square roots taken is O(log k) = O(log log n). Therefore, if there is an algorithm that works by repeatedly reducing the problem to a subproblem whose size is the square root of the original problem size, that algorithm will terminate after O(log log n) steps.
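Here is a small sketch (mine, for illustration) that reproduces the worked example above: halving 65,536 down to 1 takes 16 steps, while repeatedly taking square roots takes only 4.

import math

def halvings_to_one(n):
    # Divide by 2 until we reach 1: roughly log2(n) steps.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def square_roots_to_two(n):
    # Take integer square roots until we reach 2: roughly log2(log2(n)) steps.
    steps = 0
    while n > 2:
        n = math.isqrt(n)
        steps += 1
    return steps

print(halvings_to_one(65_536))      # 16
print(square_roots_to_two(65_536))  # 4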
One real-world example of this is the van Emde Boas tree (vEB-tree) data structure. A vEB-tree is a specialized data structure for storing integers in the range 0 ... N - 1. It works as follows: the root node of the tree has √N pointers in it, splitting the range 0 ... N - 1 into √N buckets each holding a range of roughly √N integers. These buckets are then each internally subdivided into √(√ N) buckets, each of which holds roughly √(√ N) elements. To traverse the tree, you start at the root, determine which bucket you belong to, then recursively continue in the appropriate subtree. Due to the way the vEB-tree is structured, you can determine in O(1) time which subtree to descend into, and so after O(log log N) steps you will reach the bottom of the tree. Accordingly, lookups in a vEB-tree take time only O(log log N).
Another example is the Hopcroft-Fortune closest pair of points algorithm. This algorithm attempts to find the two closest points in a collection of 2D points. It works by creating a grid of buckets and distributing the points into those buckets. If at any point in the algorithm a bucket is found that has more than √N points in it, the algorithm recursively processes that bucket. The maximum depth of the recursion is therefore O(log log n), and using an analysis of the recursion tree it can be shown that each layer in the tree does O(n) work. Therefore, the total runtime of the algorithm is O(n log log n).
O(log n) Algorithms on Small Inputs
There are some other algorithms that achieve O(log log n) runtimes by using algorithms like binary search on objects of size O(log n). For example, the x-fast trie data structure performs a binary search over the layers of a tree of height O(log U), so the runtime for some of its operations is O(log log U). The related y-fast trie gets some of its O(log log U) runtimes by maintaining balanced BSTs of O(log U) nodes each, allowing searches in those trees to run in time O(log log U). The tango tree and related multisplay tree data structures end up with an O(log log n) term in their analyses because they maintain trees that contain O(log n) items each.
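To make the "binary search over O(log U) layers" idea concrete, here is a hedged sketch; level_has_prefix is a hypothetical O(1) check (in a real x-fast trie it would be a hash-table lookup of a prefix of x), so the binary search over the levels makes only O(log log U) such probes:

def deepest_matching_level(x, num_levels, level_has_prefix):
    # Binary search over the levels 0..num_levels of a trie of height O(log U).
    # level_has_prefix(x, level) is assumed to answer in O(1) whether some
    # stored key shares its first `level` bits with x; level 0 always matches.
    lo, hi = 0, num_levels
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if level_has_prefix(x, mid):
            lo = mid          # a matching prefix exists this deep; go deeper
        else:
            hi = mid - 1      # no match this deep; stay shallower
    return lo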
Other Examples
Other algorithms achieve runtime O(log log n) in other ways. Interpolation search has expected runtime O(log log n) to find a number in a sorted array, but the analysis is fairly involved. Ultimately, the analysis works by showing that the number of iterations is equal to the number k such that n^(2^-k) ≤ 2, for which log log n is the correct solution. Some algorithms, like the Cheriton-Tarjan MST algorithm, arrive at a runtime involving O(log log n) by solving a complex constrained optimization problem.
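For reference, here is a compact sketch of interpolation search itself (my own illustration; the expected O(log log n) bound assumes the sorted keys are roughly uniformly distributed, and the worst case degrades to O(n)):

def interpolation_search(a, target):
    # a is sorted; probe where the target "should" sit if keys were evenly spaced.
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= target <= a[hi]:
        if a[hi] == a[lo]:
            return lo if a[lo] == target else -1
        pos = lo + (target - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[pos] == target:
            return pos
        if a[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

print(interpolation_search(list(range(0, 1000, 3)), 300))  # 100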
One way a factor of O(log log n) shows up in a time complexity is through repeated division, as explained in the other answer, but there is another way to see this factor: when we want to make a trade-off between time and space, time and approximation quality, time and hardness, ... of an algorithm, and we introduce some artificial iteration into the algorithm.
For example, SSSP (single-source shortest paths) has an O(n) algorithm on planar graphs, but before that complicated algorithm there was a much easier (though still rather hard) algorithm with running time O(n log log n). The basis of the algorithm is as follows (just a very rough description; feel free to skip this part and read the rest of the answer):
Divide the graph into parts of size O(log n / log log n), with some restrictions.
Treat each part as a node in a new graph G', then compute SSSP for G' in time O(|G'| log |G'|). Since |G'| = O(|G| · log log n / log n), this is where the log log n factor appears.
Compute SSSP within each part: since there are O(|G'|) parts and each part has size O(log n / log log n), SSSP for all parts can be computed in total time O(|G'| · (log n / log log n) · log(log n / log log n)), which is O(n log log n).
Update the weights; this part can be done in O(n).
For more details, these lecture notes are good.
But my point is that here we chose the parts to have size O(log n / log log n). If we choose other divisions, like O(log n / (log log n)^2), it may run faster and give a different result. I mean, in many cases (as in approximation algorithms, randomized algorithms, or algorithms like the SSSP above), when we iterate over something (subproblems, possible solutions, ...), we choose the number of iterations according to the trade-off we have (time vs. space, complexity of the algorithm, constant factors, ...). So we may see more complicated terms than log log n in real working algorithms.
In big O notation of time complexity in algorithmic analysis, is O(n + k log n) the same as O(n log n) if k is larger than n? I am not entirely sure about this.
I am not 100% sure what you mean by N + K log N. I'm used to seeing K as a subset of N, for example "the top K items out of N". For large N it is common to simply return the top K items, because then the big-O time is N log K, which is much faster than N log N (because K is a smaller number).
If you literally mean N + K log N, then that would be more complex than simply N log N, as K adds to the total. For example, as K goes to zero you simply end up with N log N; otherwise you get something greater than N log N, which I hope is obviously more complex.
I hope that does something to answer the question. I confess I feel like I might be missing the point here, and if so I apologize.
No, in the specific case you’re mentioning these are not the same. For example, consider this algorithm: given an array of length N and a number K ≥ N, do a linear scan over the array, then do K binary searches on the array. How much work is done here? Well, the linear search takes time O(N), and the K binary searches collectively take time O(K log N), so the total work done is O(N + K log N).
However, the work here is not O(N log N). Since K can be arbitrarily large, the value of K log N can exceed the value of N log N by an arbitrary amount. A different way of seeing this: a bound of O(N log N) means that the runtime depends purely on N and not on K. But that can’t be the case here, since cranking K way, way up definitely increases the runtime, independently of what N is.
Hope this helps!
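As a toy sketch of the algorithm described above (my own illustration, assuming a sorted input array), the linear scan contributes O(N) and the K binary searches contribute O(K log N):

from bisect import bisect_left

def scan_then_search(sorted_arr, queries):
    total = sum(sorted_arr)                    # linear scan: O(N)
    hits = 0
    for q in queries:                          # K binary searches: O(K log N)
        i = bisect_left(sorted_arr, q)
        if i < len(sorted_arr) and sorted_arr[i] == q:
            hits += 1
    return total, hits

# With K much larger than N, the K log N term dominates, so the cost is not O(N log N).
print(scan_then_search(list(range(100)), list(range(1_000_000))))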
I read it as N + (K log N), where N is the total count and K is the subset count. Now, assuming K is very small compared to N (possibly a constant, e.g. getting the top K numbers from a varying N), it reduces to linear time.
For example, to get the top 100 items from an array of 10,000 elements:
10000 + (100 * log2(10000)) ≈ 10000 + 1300
Now, when N is 20,000, K log N only grows to about 1,400.
So as N increases linearly, K log N increases only logarithmically, reducing the overall complexity to linear.
O(n + (k log n)) is approximately O(n).
Could somebody explain to me why, when you have an algorithm A with time complexity O(n log n) and you give it an input of size n^2, it gives the following: O(n^2 log n)?
I understand that it becomes O(n^2 log n^2) and then O(n^2 * 2 * log n), but why does the 2 disappear?
It disappears because time complexity does not care about things that have no effect when n increases (such as a constant multiplier). In fact, it often doesn't even care about things that have less effect.
That's why, if your program's runtime can be calculated as n^3 - n + 7, the complexity is the much simpler O(n^3). You can think of what happens as n approaches infinity. In that case, all the other terms become totally irrelevant compared to the first. That's when you're adding terms.
It's slightly different when multiplying since even lesser terms will still have a substantial effect (because they're multiplied by the thing having the most effect, rather than being added to).
For your specific case, O(n^2 log n^2) becomes O(n^2 * 2 * log n). Then you can remove everything that has no effect on the outcome as n increases. That's the 2.
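A quick numeric check of the substitution (my own sketch): plugging an input of size m = n^2 into an n log n cost gives exactly twice n^2 log n, and the 2 is dropped by big-O:

import math

n = 1000
m = n * n                       # the input handed to the O(n log n) algorithm
print(m * math.log2(m))         # m log m = n^2 * log(n^2)
print(2 * n**2 * math.log2(n))  # = 2 * n^2 * log n  -- the same number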
for (i = 0; i < n; i++)
{
    enumerate all subsets of size i = 2^n
    each subset of size i takes o(nlogn) to search for a solution
    from all these solutions I want to find the minimum subset of size S
}
I want to know the complexity of this algorithm. Is it 2^n O(nlogn*n) = o(2^n n²)?
If I understand you right:
You iterate over all subsets of a sorted set of n numbers.
For each subset you test in O(n log n) whether it is a solution (however you do this).
After you have all these solutions, you look for the one with exactly S elements and the smallest sum.
The way you write it, the complexity would be O(2^n * n log n) * O(log(2^n)) = O(2^n * n^2 log n). The O(log(2^n)) = O(n) is for searching for the minimum solution, and you do this every round of the for loop, with worst case i = n/2 and every subset being a solution.
Now I'm not sure if you're mixing O() and o() up.
2^n O(nlogn*n) = o(2^n n²) is only right if you mean 2^n O(nlog(n*n)).
f=O(g) means, the complexity of f is not bigger than the complexity of g.
f=o(g) means the complexity of f is smaller than the complexity of g.
So 2^n O(nlog(n*n)) = O(2^n n log n^2) = O(2^n n * 2 log n) = O(2^n n log n) < O(2^n n^2)
Notice: O(g) = o(h) is never good notation. You will (most likely every time) find a function f with f = o(h) but f != O(g), if g = o(h).
Improvements:
If I understand your algorithm right, you can speed it up a little. You know the size of the subset you are looking for, so only look at the subsets that have size S. The worst case is S = n/2, and C(n, n/2) is still on the order of 2^n (up to a polynomial factor), so this will not reduce the exponential complexity, but it does save you at least a factor of 2.
You can also just save the best solution so far and check whether the next solution is smaller. This way you get the smallest solution without searching for it again. So the complexity would be O(2^n * n log n).
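A hedged sketch of that improvement (my own illustration; is_solution and score stand in for whatever O(n log n) check and objective the question has in mind):

from itertools import combinations

def best_subset_of_size_s(items, S, is_solution, score):
    # Enumerate only the C(n, S) subsets of size S instead of all 2^n subsets,
    # and keep a running minimum instead of collecting every solution first.
    best = None
    for subset in combinations(items, S):
        if is_solution(subset):                        # assumed O(n log n) test
            if best is None or score(subset) < score(best):
                best = subset
    return best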