Big O notation for these algorithm complexities

I have a few algorithm complexities that I'm not entirely sure of what the Big O notations are for them.
i) (n(n-1) * ... * 2 * 1)/2
ii) 56 + 2n + n^2 + 3n^3
iii) 2n(lg n) + 1001
iv) n^2 * n^3 + 2^n
I believe ii) and iii) are pretty straightforward: the Big O of ii) is O(n^3) and the Big O of iii) is O(n log n), but let me know if these are wrong.
It's mostly i) and iv) I'm a bit confused about. For i) I assumed it followed the same idea as 1+2+3+4+...+n, which has a Big O of O(n^2), so that's what I put. For iv) I put O(n^5), but I'm not sure whether the 2^n affects the Big O notation in this case. Which term gets priority, or do I just include them both?
Any help would be much appreciated, I'm not that experienced in Big O notation so any advice would be really helpful as well.
Thanks in advance

Since problem i) is multiplying (not adding) the terms from 1 to n, the product is n!, and the division by 2 is only a constant factor, so that should be O(n!).
You're right on ii): n^3 is the dominant term, so it's O(n^3). And on iii), both constants 2 and 1001 can be ignored, leaving you with O(n log n).
On iv) you were right to combine the first two terms to get n^5, but even that will eventually be surpassed by the 2^n term, so the answer is O(2^n).
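If you want to see that last point for yourself, here is a throwaway Python check (nothing more than the arithmetic above): n^5 wins for small n, but 2^n overtakes it around n = 23 and never looks back.

for n in [5, 10, 20, 22, 23, 30]:
    print(n, n**5, 2**n)   # 2^n passes n^5 between n = 22 and n = 23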


Determining time complexity of solution by reductions

Suppose that you have found a solution to problem A and are trying to get some idea of its complexity. You solve A by calling your B sub-routine a total of n^2 times, plus a constant amount of additional work.
If B is selection sort, what is the time complexity of this solution?
If B is merge sort, what is the time complexity of this solution?
My answer to the 1st question is n^2 and to the 2nd one is n log n. Any feedback on my answers would be appreciated.
I assume that by "solution" you mean "algorithm", and by "this solution", you mean the algorithm that solves problem A by calling B n^2 times. Furthermore, I assume that by n you mean the size of the input.
Then if B is selection sort, which is an O(n^2) algorithm, the algorithm for solving A would be O(n^2 * n^2) = O(n^4).
If B is merge sort, which is O(n log n), the algorithm for solving A would be O(n^2 * n log n) = O(n^3 log n).
Yeah, you are right:
O(B) = n^2 -> Selection Sort
O(B) = n * log(n) -> Merge Sort
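To make the O(n^2 * cost of B) composition from the first answer concrete, here is a rough Python sketch; solve_A and the use of sorted() as a stand-in for merge sort are my own illustration, not anything from the question.

def solve_A(data):
    n = len(data)
    result = None
    for _ in range(n * n):       # the B sub-routine is called n^2 times
        result = sorted(data)    # stand-in for merge sort: O(n log n) per call
    # ... plus a constant amount of additional work, which doesn't change the bound
    return result

# n^2 calls * O(n log n) per call = O(n^3 log n) overall;
# with selection sort as B, each call would cost O(n^2) instead, giving O(n^4).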

Why do we prefer not to specify the constant factor in Big-O notation?

Let's consider the classic big O notation definition:
O(f(n)) is the set of all functions g(n) such that there exist positive constants C and n_0 with |g(n)| ≤ C * f(n) for all n ≥ n_0.
According to this definition it is legal to do the following (g1 and g2 are the functions that describe two algorithms complexity):
g1(n) = 9999 * n^2 + n ∈ O(9999 * n^2)
g2(n) = 5 * n^2 + n ∈ O(5 * n^2)
And it is also legal to note functions as:
g1(n) = 9999 * n^2 + n ∈ O(n^2)
g2(n) = 5 * n^2 + n ∈ O(n^2)
As you can see, the first variant, O(9999 * n^2) vs O(5 * n^2), is much more precise and makes it clear which algorithm is faster. The second one does not show us anything.
The question is: why does nobody use the first variant?
The use of the O() notation is, from the get-go, the opposite of stating something "precisely". The very idea is to mask "precise" differences between algorithms, as well as to be able to ignore the effects of specific hardware and the choice of compiler or programming language. Indeed, g1(n) and g2(n) are both in the same class (or set) of functions of n - the class O(n^2). They differ in specifics, but they are similar enough.
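To spell out why those two classes are in fact exactly equal (a quick check against the definition above, nothing deeper): if g ∈ O(9999 * n^2), then |g(n)| ≤ C * 9999 * n^2 = (9999 * C) * n^2 for all n ≥ n_0, so g ∈ O(n^2) with the constant 9999 * C. Conversely, |g(n)| ≤ C * n^2 ≤ C * 9999 * n^2, so g ∈ O(9999 * n^2). The constant factor is simply absorbed into C, which is exactly why writing it out adds no information.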
The fact that it's a class is why I edited your question and corrected the notation from = O(9999 * n^2) to ∈ O(9999 * n^2).
By the way - I believe your question would have been a better fit on cs.stackexchange.com.

Does O(N/2) simplify to O(log n)?

I have one algorithm that only needs to run in O(N/2) time (basically, it is a for loop that runs over only half of the elements).
How does this simplify in Big O notation? O(log n)?
Big O notation drops constant factors: O(N/2) = O(1/2 * N), which simplifies to O(N). It does not become O(log n); a logarithmic loop would have to halve the remaining work on every iteration (as binary search does), not just once up front.
If you want to know why the factor is dropped I would refer you to this other SO question.
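If it helps to see the difference, here is a minimal Python sketch (my own illustration, not from the question) contrasting the two loop shapes:

n = 1000

steps_half = 0
for i in range(n // 2):       # visits a constant fraction of the elements: O(N)
    steps_half += 1

steps_log = 0
i = n
while i > 1:                  # halves the remaining work each step: O(log N)
    i //= 2
    steps_log += 1

print(steps_half, steps_log)  # 500 vs 9: very different growth rates

Doubling n doubles steps_half but only adds one to steps_log, which is the behavior the two notations are distinguishing.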

Practical difference between O(n) and O(1 + n)?

Isn't O(n) an improvement over O(1 + n)?
This is my conception of the difference:
O(n):
for i = 0 to n do print i
O(1 + n):
a = 1
for i = 0 to n do print i + a
... which would just reduce to O(n) right?
If the target time complexity is O(1 + n), but I have a solution in O(n),
does this mean I'm doing something wrong?
Thanks.
O(1+n) and O(n) are mathematically identical, as you can straightforwardly prove from the formal definition or using the standard rule that O( a(n) + b(n) ) is equal to the bigger of O(a(n)) and O(b(n)).
In practice, of course, if you do n+1 things it will usually (depending on compiler optimizations, etc.) take longer than if you only do n things. But big-O notation is the wrong tool to talk about those differences, because it explicitly throws away differences like that.
It's not an improvement because Big O doesn't describe the exact running time of your algorithm but rather its growth rate. Big O therefore describes a class of functions, not a single function. O(n^2) doesn't mean that your algorithm will run in 4 operations on an input of size 2; it means that if you were to plot the running time of your application as a function of n, it would be asymptotically bounded above by c*n^2 from some n_0 onward. This is nice because we know how much slower our algorithm gets as the input grows, but we don't know exactly how fast it is. Why the c? Because, as I said, we don't care about exact numbers but about the shape of the function, and multiplying by a constant factor leaves the shape unchanged.
Isn't O(n) an improvement over O(1 + n)?
No, it is not. Asymptotically these two are identical. In fact, O(n) is identical to O(n+k) where k is any constant value.
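Spelling that out against the formal definition (just the standard rule made explicit): for n ≥ 1 we have n ≤ 1 + n ≤ 2n, so if g ∈ O(1 + n) with constant C, then |g(n)| ≤ C * (1 + n) ≤ 2C * n, hence g ∈ O(n); and any g ∈ O(n) is trivially in O(1 + n) because n ≤ 1 + n. The two sets contain exactly the same functions.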

time complexity of enumerating all the subsets

for (i = 0; i < n; i++)
{
    enumerate all subsets of size i (2^n subsets over the whole loop)
    each subset of size i takes O(n log n) to search for a solution
    from all these solutions I want to find the minimum subset of size S
}
I want to know the complexity of this algorithm. Is it 2^n O(n log n * n) = o(2^n n^2)?
If I understand you right:
You iterate over all subsets of a sorted set of n numbers.
For each subset you test in O(n log n) whether it is a solution (however you do that).
After you have all these solutions, you look for the one with exactly S elements and the smallest sum.
The way you write it, the complexity would be O(2^n * n log n) * O(log(2^n)) = O(2^n * n^2 log n). The O(log(2^n)) = O(n) factor is for searching the minimum solution, and you do this in every round of the for loop; the worst case is i = n/2 with every subset being a solution.
Now I'm not sure whether you are mixing up O() and o().
2^n O(n log n * n) = o(2^n n^2) is only right if you mean 2^n O(n log(n*n)).
f = O(g) means the complexity of f is not bigger than the complexity of g.
f = o(g) means the complexity of f is strictly smaller than the complexity of g.
So 2^n O(n log(n*n)) = O(2^n * n * log(n^2)) = O(2^n * n * 2 log n) = O(2^n * n log n) < O(2^n * n^2)
Notice: O(g) = o(h) is never a good notation. If g = o(h), you will (most likely every time) find a function f with f = o(h) but f != O(g).
Improvements:
If I understand your algorithm right, you can speed it up a little. You know the size of the subset you are looking for, so only look at the subsets of size S. The worst case is S = n/2, and C(n, n/2) is still on the order of 2^n / sqrt(n), so this will not reduce the exponential complexity, but it does save you a real factor.
You can also just keep the best solution found so far and check whether each new solution is smaller. That way you get the smallest solution without searching for it again, so the complexity would be O(2^n * n log n).
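For what it's worth, here is a minimal Python sketch of those two improvements combined; is_solution is a hypothetical placeholder for the O(n log n) test from the question, not anything the asker specified:

from itertools import combinations

def is_solution(subset):
    # hypothetical stand-in for the O(n log n) check described in the question
    return sum(subset) >= 0

def smallest_solution(nums, S):
    best = None
    for subset in combinations(nums, S):    # only the C(n, S) subsets of size S
        if is_solution(subset):
            if best is None or sum(subset) < sum(best):
                best = subset               # keep the running minimum; no second search
    return best

print(smallest_solution([3, 1, 4, 1, 5], 2))   # -> (1, 1)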