Proving f(n) is O(n^4) - time-complexity

I'm trying to figure out how I could prove that f(n) = c_0 + n * c_1 * n * c_2 * (n(n+1))/2 * c_3 is of a certain complexity. I've simply expanded all the brackets and figured it must be O(n^4), although I'm not sure how I could prove this.
I've tried to simplify it a bit and got the following:
f(n) = c_0 + n * c_1 * n * c_2 * (n(n+1))/2 * c_3 <= c * n^4
= c_0 + (c_1 * c_2 * c_3 / 2) * (n^4 + n^3) <= 2c * n^4
But I'm not really sure what I could do from here. All the proofs I've done so far were simpler, with only one group of constants, and as such it was easy to choose some c and some n that would satisfy the inequality.
Any help is appreciated, thank you.

f(n) = c_0 + n * c_1 * n * c_2 * (n(n+1))/2 * c_3 is O(1), not O(n^4).
O(x) notation is about algorithmic complexity. Thus, either your question doesn't make sense (it's akin to: "What is the colour of happiness?" - two concepts that just do not apply to each other), or, it's asking about the algorithmic complexity in calculating that formula on a computer.
O(n) means something along the lines of: If you chart the 'size of the input' vs the 'time taken to finish the calculation for that input' / 'RAM required to perform the calculation for that input', you get a line that looks roughly like y = C*x - a line that runs at some angle (and not vertical or horizontal), in other words.
O(n^2) would mean that the chart starts to settle into the shape of y = C*x^2 as you increase n, and so on.
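As a concrete illustration (my own sketch, not code from the question): doubling n roughly doubles the work in the first method below and roughly quadruples it in the second, which is exactly the difference between those two charts.

    // My own sketch: two loops whose running time grows like the charts described above.
    static long linearWork(int n) {          // chart of n vs time looks like y = C*x
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    static long quadraticWork(int n) {       // chart of n vs time looks like y = C*x^2
        long sum = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) sum += j;
        return sum;
    }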
So, what's n? That depends on how the question is asked. For example, for an algorithm that "sorts a list", without further context, n is clearly 'the size of the list'.
The first problem here is that what you describe is a single calculation; there is no variable-sized input in the first place. Thus the only workable definition of what n might be is the actual value of n itself.
Thus, the question becomes: As you increment the value of n whilst running the method:
int calc(int n) {
    // c_0 .. c_3 are fixed constants; the whole expression is a handful of arithmetic operations
    return c_0 + n * c_1 * n * c_2 * (n * (n + 1)) / 2 * c_3;
}
what happens to how long it takes to run the calc method, and what happens to how much memory it takes?
The answer is: Nothing happens; it's constant. calc takes about as long regardless of what n value you pass in. Therefore, the graph of 'value of n' vs 'time taken' is a horizontal line: y = C (where C is some constant). That's O(1), not O(n^4).
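To make that concrete, here is a rough sketch (the constants c_0..c_3 are placeholders I made up; the question does not give their values): however large n gets, calc performs the same handful of arithmetic operations, so the measured time stays flat.

    // Rough sketch with made-up constants: the time for calc does not grow with n.
    class CalcTiming {
        static final int c_0 = 1, c_1 = 2, c_2 = 3, c_3 = 4;

        static int calc(int n) {
            return c_0 + n * c_1 * n * c_2 * (n * (n + 1)) / 2 * c_3;
        }

        public static void main(String[] args) {
            for (int n : new int[]{10, 10_000, 10_000_000}) {
                long t0 = System.nanoTime();
                int ignored = calc(n);   // the value may overflow int for big n; only the time matters here
                System.out.println("n=" + n + "  ns=" + (System.nanoTime() - t0));
            }
        }
    }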
Possibly this has nothing whatsoever to do with big-O notation and you've confused a few things, in which case this is a question for a math-inclined Stack Overflow-esque site, not SO itself, which is for programmers.

Related

What is the time complexity of the given code

I want to know the time complexity of the code attached.
I get O(n^2 log n), while my friends get O(n log n) and O(n^2).
someMethod() takes log n time.
Here is the code:
j = i**2;
for (k = 0; k < j; k++) {
    for (p = 0; p < j; p++) {
        x += p;
    }
    someMethod();
}
The question is not very clear about the variable N and the statement i**2.
i**2 gives a compilation error in Java.
Assuming someMethod() takes log N time (as mentioned in the question), and completely ignoring the value of N:
Let's call i**2 Z.
someMethod() runs Z times, and the time complexity of that method is log N, so that becomes:
Z * log N
Let's call this expression A.
Now, x += p runs Z^2 times (the outer k loop times the inner p loop) and takes constant time to run. That makes the following expression:
(Z^2) * 1 = Z^2
Let's call this expression B.
The total run time is the sum of expression A and expression B, which brings us to:
O((Z * log N) + (Z^2))
where Z = i**2
so final expression will be O(((i**2) * log N) + ((i**2)^2))
If we can assume i**2 means i^2, the expression becomes
O(((i^2) * log N) + (i^4))
Considering only the highest-order term, just as we keep n^2 in n^2 + 2n + 5, the complexity can be expressed as follows:
i^4
Based on the picture, the complexity is O(log N * I^2 + I^4).
We cannot give a complexity class with one variable because the picture does not explain the relationship between N and I. They must be treated as separate variables.
And likewise, we cannot eliminate the log N * I^2 term, because the N variable will dominate the I variable in some regions of the N x I space.
If we treat N as a constant, then the complexity class reduces to O(I^4).
We get the same if we treat N as being the same thing as I; i.e. there is a typo in the question.
(I think there is a mistake in the way the question was set / phrased. If not, this is a trick question designed to see if you really understand the mathematical principles behind complexity involving multiple independent variables.)
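If it helps to sanity-check the two answers above, here is a small sketch of my own (it assumes i**2 means i squared and charges log2(N) "units" for each someMethod() call, as the question states); it counts the work instead of timing it:

    // Sketch: count the work done by the loop nest instead of measuring it.
    static long countWork(long i, long N) {
        long z = i * i;                      // Z = i^2, the loop bound j
        long work = 0;
        for (long k = 0; k < z; k++) {
            for (long p = 0; p < z; p++) {
                work += 1;                   // x += p: constant-time body, runs Z^2 times
            }
            work += Math.round(Math.log(N) / Math.log(2));  // someMethod(): log N units, runs Z times
        }
        return work;                         // total is roughly Z^2 + Z * log N
    }

For example, countWork(10, 1000) gives about 10_000 + 100 * 10 = 11_000 units, matching Z^2 + Z * log N with Z = 100.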

How do you calculate combined orders of growth?

Suppose I have a recursive procedure with a formal parameter p. This procedure
wraps the recursive call in a Θ(1) (deferred) operation
and executes a Θ(g(k)) operation before that call.
k is dependent upon the value of p. [1]
The procedure calls itself with the argument p/b, where b is a constant (assume the recursion terminates once the argument falls somewhere between 1 and 0).
Question 1.
If n is the value of the argument to p in the initial call to the procedure, what are the orders of growth of the space and the number of steps executed, in terms of n, for the process this procedure generates
if k = p? [2]
if k = f(p)? [3]
Footnotes
[1] i.e., upon the value of the argument passed into p.
[2] i.e., the size of the input to the nested operation is same as that for our procedure.
[3] i.e., the size of the input to the nested operation is some function of the input size of our procedure.
Sample procedure
(define (* a b)
  (cond ((= b 0) 0)
        ((even? b) (double (* a (halve b))))
        (else (+ a (* a (- b 1))))))
This procedure performs integer multiplication as repeated additions based on the rules
a * b = double (a * (b / 2)) if b is even
a * b = a + (a * (b - 1)) if b is odd
a * b = 0 if b is zero
Pseudo-code:
define *(a, b) as
{
if (b is 0) return 0
if (b is even) return double of *(a, halve (b))
else return a + *(a, b - 1)
}
Here
the formal parameter is b.
argument to the recursive call is b/2.
double x is a Θ(1) operation like return x + x.
halve k is Θ(g(k)) with k = b i.e., it is Θ(g(b)).
Question 2.
What will be the orders of growth, in terms of n, when *(a, n) is evaluated?
Before You Answer
Please note that the primary questions are the two parts of question 1.
Question 2 can be answered in the same way as the first part of question 1. For the second part, you can assume f(p) to be any function you like: log p, p/2, p^2, etc.
I saw someone has already answered question 2, so I'll answer question 1 only.
The first thing to notice is that the two parts of the question are equivalent. In the first part, k = p, so we execute a Θ(g(p)) operation for some function g. In the second, k = f(p), and we execute a Θ(g(f(p))) = Θ((g∘f)(p)) operation. Replace g from the first part with g∘f and the second part is solved.
Thus, let's consider the first case only, i.e. k=p. Denote the time complexity of the recursive procedure by T(n) and we have that:
T(n) = T(n/b) + g(n) [The free term should be multiplied by a constant c, but we can measure the time in units of c and the theta bound obviously remains the same]
The solution of the recursive formula is T(n) = g(n) + g(n/b) + ... + g(n/b^i) + ... + g(1)
We cannot simplify it further unless we are given additional information about g. For example, if g is a polynomial, g(n) = n^k, we get that
T(n) = n^k * (1 + b^-k + b^-2k + b^-3k + ... + b^(-k*log(n))) <= n^k * (1 + b^-1 + b^-2 + ...) <= n^k * c for a constant c, thus T(n) = Θ(n^k).
But if g(n) = log_b(n) [from now on I omit the base of the log], we get that T(n) = log(n) + log(n/b) + ... + log(n/b^log(n)) = (log(n) + 1) * log(n) - (1 + 2 + ... + log(n)) = log(n)^2 / 2 + log(n) / 2 = Θ(log(n)^2) = Θ(g(n)^2).
You can easily prove, using an argument similar to the polynomial case, that when g = Ω(n), i.e. at least linear, the complexity is Θ(g(n)). But when g is sublinear the complexity may well be bigger than g(n), since g(n/b) may be much bigger than g(n)/b.
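To see those two regimes numerically, here is a rough sketch of my own (it assumes b = 2 and T(1) = 1, neither of which comes from the question): it unrolls the recurrence into the sum above and compares it to g(n) for a polynomial g and a logarithmic g.

    import java.util.function.DoubleUnaryOperator;

    // Sketch: unroll T(n) = T(n/b) + g(n) into g(n) + g(n/b) + g(n/b^2) + ... + g(1).
    class UnrollRecurrence {
        static double unrolledT(double n, double b, DoubleUnaryOperator g) {
            double total = 1.0;                  // T(1), taken as 1
            for (double m = n; m >= 1; m /= b) {
                total += g.applyAsDouble(m);     // one g(n/b^i) term per level of recursion
            }
            return total;
        }

        public static void main(String[] args) {
            for (int n : new int[]{1 << 10, 1 << 16, 1 << 20}) {
                double poly = unrolledT(n, 2, m -> m * m);                       // g(n) = n^2
                double logs = unrolledT(n, 2, m -> Math.log(m) / Math.log(2));   // g(n) = log2(n)
                System.out.printf("n=%d  poly/g=%.2f  log/g=%.2f%n",
                        n, poly / ((double) n * n), logs / (Math.log(n) / Math.log(2)));
            }
        }
    }

For the polynomial case the ratio settles near a constant (about 4/3 here), while for the logarithmic case it keeps growing roughly like log(n)/2, which matches Θ(g(n)) versus Θ(g(n)^2).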
You need to apply worst-case analysis.
First,
you can approximate the solution by using powers of two:
If b = 2^k then clearly the algorithm takes k steps (where k = log_2(b)).
If b is an odd number, then after applying -1 you get an even number and you divide by 2; you can repeat this at most log_2(b) times, so the number of steps is also O(log b). The case of b being an odd number is clearly the worst case, and this gives you the answer.
(I think you need an additional base case for b = 1.)
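For question 2 specifically, here is a rough Java translation of the pseudo-code with an added step counter (the counter and the concrete numbers are mine, not part of the question); it shows the number of recursive calls growing like log b:

    // Sketch: the pseudo-code above, with a counter for the number of recursive calls.
    static long steps = 0;

    static long times(long a, long b) {
        steps++;
        if (b == 0) return 0;
        if (b % 2 == 0) return 2 * times(a, b / 2);   // double of *(a, halve(b))
        return a + times(a, b - 1);                   // a + *(a, b - 1); b = 1 falls into this branch
    }

For example, times(7, 1_000_000) returns 7_000_000 in a few dozen recursive calls, bounded by roughly 2 * log2(b) since every odd step is immediately followed by a halving step.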

Complexity class of a function with known dependency between its execution time and its input size

Suppose I am trying to find the complexity class of a function. My data set doubles every time I evaluate the function, and each time this happens, the time it takes to execute the function increases by a factor of (X).
If we know (X), how do we find the complexity class/ O notation of the function? For example, if X is slightly over 2, then the Big-O notation is O(N log N).
Let T(n) be the time complexity of the function you are talking about, where n is the size of the input data. We can write a recursive equation for T(n):
T(n) = X * T(n/2)
where X is your constant. Let's "unroll" this recursion:
T(n) = X * T(n/2) = X^2 * T(n/4) = X^3 * T(n/8) = ... = X^k * T(n/2^k)
This unrolling process should end when the parameter k becomes large enough to satisfy:
n/2^k = 1
which means that n = 2^k, or k = log(n) (logarithm in base 2). Also, we can assume that:
T(1) = C
where C is some other constant. Now we look at the unrolled equation and substitute k by log(n) and T(1) by C:
T(n) = X^log(n) * C
We can simplify this formula using logarithm properties:
T(n) = C * n^log(X)
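As a small illustrative sketch of my own (not part of the answer above): once you have measured X, the exponent in that polynomial bound follows directly from log2(X).

    // Sketch: given the factor X by which the time grows when the input size doubles,
    // estimate the exponent k in T(n) ≈ C * n^k, since k = log2(X).
    static double estimatedExponent(double x) {
        return Math.log(x) / Math.log(2);     // log base 2 of X
    }

    // Examples: X = 2 -> exponent 1.0 (linear; slightly above 2, as in the question, is consistent with N log N)
    //           X = 4 -> exponent 2.0 (quadratic)
    //           X = 8 -> exponent 3.0 (cubic)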

Comparison of functions asymptotically

I have 2 functions:
f(n) = n*log(n)
g(n) = n^(1.1) * log(log(log(n)))
I want to know how these functions compare to each other. From what I understand, f(n) will always grow faster than g(n). In other words: f(n) in ω(g(n))
I am assuming log base 10, but it really does not matter as any base could be used. I tried a number of combinations of n and c, as the following relation seems to hold:
f(n) ≥ c g(n) ≥ 0
The one combination that seemed to stick out to me was the following:
c = 0
n = 10^10
In this instance:
f(10^10) = (10^10) log(10^10) = (10^10)*(10) = 10^11
c*g(n) = 0 * (10^10)^(1.1) * log(log(log(10^10)))
= 0 * (10^11) * log(log(10))
= 0 * (10^11) * log(1)
= 0 * (10^11) * 0 = 0
Hence f(n) will always be greater than g(n) and the relationship will be f(n) is ω(g(n)).
Would my understanding be correct here?
First of all, the combination sticking out to you doesn't work because it's invalid. A function f(x) is said to be O(g(x)) if and only if there exists a real number x' and positive real number c such that f(x)≤cg(x) for all x≥x'. You use c=0, which is not positive, and so using it to understand asymptotic complexity isn't going to be helpful.
But more importantly, in your example, it's not the case that f(x)=Ω(g(x)). In fact, it's actually f(x)=O(g(x)). You can see this because log(n)=O(n^0.1) (proof here), so nlog(n)=O(n^1.1), so nlog(n)=O(n^1.1 log(log(log(n)))), and thus f(x)=O(g(x)).
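If it helps to see this numerically, here is a small sketch of my own (base-10 logs, as in the question); the ratio f(n)/g(n) keeps shrinking, so g eventually dominates:

    // Sketch: compare f(n) = n * log(n) and g(n) = n^1.1 * log(log(log(n))) for large n.
    static double f(double n) { return n * Math.log10(n); }
    static double g(double n) { return Math.pow(n, 1.1) * Math.log10(Math.log10(Math.log10(n))); }

    public static void main(String[] args) {
        for (double n : new double[]{1e20, 1e40, 1e80, 1e160}) {
            System.out.printf("n=%.0e  f/g=%.3e%n", n, f(n) / g(n));   // the ratio tends to 0
        }
    }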

Asymptotic analysis question: sum[log(i)*i^3, {i, n}] is big-theta (log(n)*n^4)

I've got a homework question that's been puzzling me. It asks that you prove that the function Sum[log(i)*i^3, {i, n}] (i.e. the sum of log(i)*i^3 from i=1 to n) is big-theta (log(n)*n^4).
I know that Sum[i^3, {i, n}] is ((n(n+1))/2)^2 and that Sum[log(i), {i, n}] is log(n!), but I'm not sure if 1) I can treat these two separately since they're part of the same product inside the sum, and 2) how to start getting this into a form that will help me with the proof.
Any help would be really appreciated. Thanks!
The series looks like this: log 1 + log 2 * 2^3 + log 3 * 3^3 + ... (up to n terms),
the sum of which does not converge, so approximate it with an integral:
Integral from 1 to n of log x * x^3 dx (integration by parts)
which gives (1/4) * log n * n^4 - (1/16) * n^4 (plus a constant).
The dominating term there is log n * n^4, therefore the sum belongs to Big Theta(log n * n^4).
The other way you could look at it is -
The series looks like log 1 + log 2 * 8 + log 3 * 27 + ... + log n * n^3.
Since log i <= log n for every i <= n, log n is the largest of the log factors,
so you can bound the series by log n * (1 + 2^3 + 3^3 + ... + n^3), which is
log n * [n^2 * (n + 1)^2] / 4
Assuming f(n) = log n * n^4
g(n) = log n * [n^2 * (n + 1)^2] / 4
You could show that the limit of f(n)/g(n) as n tends to infinity is a constant [applying L'Hopital's rule].
That's another way to prove that the function g(n) belongs to Big Theta (f(n)).
Hope that helps.
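If you want to sanity-check the bound numerically before writing the proof, here is a small sketch of my own (natural logs; the base only shifts constants): the ratio of the partial sum to log(n) * n^4 settles near 1/4, as the Big Theta(log n * n^4) bound predicts.

    // Sketch: partial sums of log(i) * i^3 divided by log(n) * n^4.
    static void ratios() {
        double sum = 0;
        long[] checkpoints = {1_000, 100_000, 10_000_000};
        int next = 0;
        for (long i = 1; i <= checkpoints[checkpoints.length - 1]; i++) {
            sum += Math.log(i) * (double) i * i * i;
            if (i == checkpoints[next]) {
                System.out.printf("n=%d  sum / (log n * n^4) = %.4f%n",
                        i, sum / (Math.log(i) * Math.pow(i, 4)));
                if (++next == checkpoints.length) break;
            }
        }
    }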
Hint for one part of your solution: how large is the sum of the last two summands of your left sum?
Hint for the second part: If you divide your left side (the sum) by the right side, how many summands do you get? How large is the largest one?
Hint for the first part again: Find a simple lower estimate for the sum from n/2 to n in your first expression.
Try the Big-O limit definition and use calculus.
For calculus you might like to use some Computer Algebra System.
In the following answer, I've shown how to do this with the Maxima open-source CAS:
Asymptotic Complexity of Logarithms and Powers