Given that fib(n)=fib(n-1)+fib(n-2) for n>1 and given that fib(0)=a, fib(1)=b (some a, b >0), which of the following is true?
fib(n) is
Select one or more:
a. O(n^2)
b. O(2^n)
c. O(((1 - sqrt 5)/2)^n)
d. O(n)
e. Answer depends on a and b.
f. O(((1 + sqrt 5)/2)^n)
Solving the Fibonacci sequence I got that:
fib(n)= 1/(sqrt 5) ((1+sqrt 5)/2)^n - 1/(sqrt 5) ((1-sqrt 5)/2)^n
But what would be the time complexity in this case? Would that mean the answers are c and f?
From the closed form of your formula, the term 1/(sqrt 5) ((1 - sqrt 5)/2)^n has limit 0 as n grows to infinity (since |(1 - sqrt 5)/2| < 1), so we can ignore that term. Also, since in complexity theory we don't care about multiplicative constants, the following is true:
fib(n) = Θ(φ^n)
where φ = (1 + sqrt 5) / 2 a.k.a. the golden ratio constant.
So it's an exponential function and we can exclude a, d, e. We can exclude c since, as was said, that term has limit 0. But answer b is also correct, because φ < 2 and O expresses an upper bound.
Finally, the correct answers are:
b, f
The particular closed form with the 1/(sqrt 5) coefficients is only exact for the standard initial values; for general a and b the coefficients change, but the base φ does not, so fib(n) is still Θ(φ^n).
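To see why the base is the same for every a, b > 0, write the general solution of the recurrence (a short check, with ψ = (1 - sqrt 5)/2):

\[
\mathrm{fib}(n) = A\varphi^{n} + B\psi^{n},
\qquad A + B = a,\quad A\varphi + B\psi = b
\;\Longrightarrow\;
A = \frac{b - a\psi}{\sqrt{5}},\quad B = \frac{a\varphi - b}{\sqrt{5}}.
\]

Since ψ < 0 and a, b > 0, the coefficient A is strictly positive, so fib(n) = Θ(φ^n) for every choice of positive start values.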
If we compute fib(n-1) and fib(n-2) recursively, the complexity is exponential, but if we save the last two values and reuse them, the complexity is O(n) and does not depend on a and b.
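A minimal sketch of that iterative O(n) version (function and variable names are illustrative; u128 is used only to delay overflow):

    // Iterative computation of fib(n) with arbitrary start values a = fib(0), b = fib(1).
    // Only the last two values are kept, so it runs in O(n) time and O(1) extra space.
    fn fib_iter(n: u32, a: u128, b: u128) -> u128 {
        if n == 0 {
            return a;
        }
        let (mut prev, mut curr) = (a, b);
        for _ in 1..n {
            let next = prev + curr; // fib(i) = fib(i-1) + fib(i-2)
            prev = curr;
            curr = next;
        }
        curr
    }

    fn main() {
        println!("{}", fib_iter(10, 1, 1)); // prints 89 for fib(0) = 1, fib(1) = 1
    }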
Related
I'm trying to implement Algorithm D from Knuth's "The Art of Computer Programming, Vol 2" in Rust, although I'm having trouble understanding how to implement the very last step of unnormalizing. My natural numbers are a class where each number is a vector of u64, in base u64::MAX. Addition, subtraction, and multiplication have been implemented.
Knuth's Algorithm D is a Euclidean division algorithm which takes two natural numbers x and y and returns (q, r), where q = x / y (integer division) and r = x % y, the remainder. The algorithm depends on an approximation method which only works if the first digit of y is greater than b/2, where b is the base you're representing the numbers in. Since not all numbers are of this form, it uses a "normalizing trick": for example (if we were in base 10), instead of doing 200 / 23, we calculate a normalizer d and do (200 * d) / (23 * d) so that 23 * d has a first digit greater than b/2.
So when we use the approximation method, we end up with the desired q, but the remainder is multiplied by a factor of d. So the last step is to divide r by d so that we can get the q and r we want. My problem is, I'm a bit confused about how we're supposed to do this last step, as it requires division, and the method it's part of is the one trying to implement division.
(Maybe helpful?):
The way d is calculated is just by taking the integer floor of (b - 1) divided by the first digit of y. However, Knuth suggests that it's possible to make d a power of 2, as long as d times the first digit of y is greater than b/2. I think he makes this suggestion so that instead of dividing, we can just do a binary shift for this last step, although I don't think I can do that, given that my numbers are represented as vectors of u64 values instead of binary.
Any suggestions?
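One way the shift could look, assuming the limbs are stored little-endian (least significant u64 first) and d = 2^s with 0 < s < 64; since each u64 limb is just 64 bits, the shift carries bits between adjacent limbs (the function name is illustrative, not part of the question's code):

    // Divide a multi-limb natural number by d = 2^s (0 < s < 64) with a right shift.
    // Assumes little-endian limb order: limbs[0] is the least significant u64.
    fn shr_small(limbs: &[u64], s: u32) -> Vec<u64> {
        assert!(s > 0 && s < 64);
        let mut out = vec![0u64; limbs.len()];
        let mut carry = 0u64; // bits shifted down from the more significant limb
        for i in (0..limbs.len()).rev() {
            out[i] = (limbs[i] >> s) | carry;
            carry = limbs[i] << (64 - s); // low s bits move into the limb below
        }
        // Trim a leading zero limb so the representation stays normalized.
        while out.len() > 1 && *out.last().unwrap() == 0 {
            out.pop();
        }
        out
    }

    fn main() {
        // (2^64 + 6) / 2 = 2^63 + 3, which fits in a single limb.
        let x = vec![6u64, 1u64];
        assert_eq!(shr_small(&x, 1), vec![3 | (1u64 << 63)]);
        println!("ok");
    }

If s can be 0 (d = 1), that case is just a copy of the limbs.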
Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiply the input size n by 2, so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and it has now become 2^(2n), so the new time would be the square of the previous time?
Big O is not for telling you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
1                         2
2                         4
3                         8
4                         16
5                         32
...                       ...
10                        1024
20                        1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be at most M * 2^n for some constant M and all n beyond some starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (and not even necessarily a tight one).
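To make that concrete, here is a small sketch that evaluates those four formulas at n and 2n and prints the growth factor (plain floating-point evaluation at n = 20, purely for illustration):

    // All four formulas are O(2^n), yet doubling n affects them very differently.
    fn f1(n: f64) -> f64 { 2f64.powf(n) + 5000.0 * n * n + 12300.0 }
    fn f2(n: f64) -> f64 { 500.0 * 2f64.powf(n) + 6.0 }
    fn f3(n: f64) -> f64 { 500.0 * n * n + 25000.0 * n + 456000.0 }
    fn f4(_n: f64) -> f64 { 400000000.0 }

    fn main() {
        let n = 20.0;
        let fns: [(&str, fn(f64) -> f64); 4] = [("f1", f1), ("f2", f2), ("f3", f3), ("f4", f4)];
        for &(name, f) in fns.iter() {
            // Ratio of "execution time" after doubling the input size.
            println!("{}: f(2n) / f(n) = {:.2}", name, f(2.0 * n) / f(n));
        }
    }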
Some related theory below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have g1(n) = 2^n. If we were to compare that to g2(n) = 2^(2n) = 4^n, how would g1(n) and g2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some constant M and starting value n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some constant M and starting value n0? This is where you run into problems... for no constant value of M can M * 2^n keep up with 4^n as n gets arbitrarily large. Thus, 4^n is not upper-bounded by O(2^n).
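Spelled out, the required inequality would force the constant itself to grow:

\[
4^{n} \le M \cdot 2^{n}
\iff \frac{4^{n}}{2^{n}} \le M
\iff 2^{n} \le M,
\]

which fails for every fixed M as soon as n > log2(M), so no pair (M, n0) satisfies the definition.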
See the comments for further explanation, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through the whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
See the Big-O Cheat Sheet for complementary information.
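The same two operations, sketched in Rust (the array contents are just the example above):

    fn main() {
        let arr = [1, 2, 3, 4, 5];

        // O(1): direct index access takes one step no matter how long arr is.
        println!("{} {}", arr[0], arr[2]);

        // O(n): the loop touches every element, so the work grows with arr's length.
        for elem in arr.iter() {
            println!("{}", elem);
        }
    }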
The general formula for time complexity is T(n) = aT(n/c) + bn^k
If a > c^k, the complexity is O(n^(log_c a))
If a = c^k, O(n^k log n)
If a < c^k, O(n^k)
a is the number of times the recursive function is called, but what do b, c, and k represent?
c is a constant factor by which the problem size is reduced in every recursive call; e.g., for merge-sort, we usually have c = 2.
The last part of your equation is usually represented by the more general f(n); in your case, f(n) is a polynomial function with exponent k and some factor b.
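For example, merge sort instantiates this recurrence with two recursive calls on halves plus a linear merge:

\[
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + b\,n
\;\Longrightarrow\;
a = 2,\; c = 2,\; k = 1,\; a = c^{k}
\;\Longrightarrow\;
T(n) = O(n^{k}\log n) = O(n \log n).
\]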
How would you possibly show that 2 is O(1)?
Moreover, how would you show that a constant is Θ(1), hence Ω(1) and O(1)?
For O, I am under the impression that you are able to do a simplification of f(n), whereby it can be reduced down to 1, but then how can this prove that 2 is O(1) for some n0? What would the n0 value be in this case?
By definition, a function f is in O(1) if there exist constants n0 and M such that f(n) ≤ M · 1 = M for all n ≥ n0.
If f(n) is defined as 2, then just set M = 2 (or any greater value; it doesn't matter) and n0 = 1 (or any greater value; it doesn't matter), and the condition is met.
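Written out against the definition above:

\[
f(n) = 2 \le M \cdot 1 \quad \text{for all } n \ge n_{0},
\qquad \text{e.g. } M = 2,\ n_{0} = 1.
\]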
[…] that 2 is O(1) for some n0? What would be the n0 value in this case?
n0 is not a parameter here; it's not meaningful to say "O(1) for some n0". You can arbitrarily choose any value of n0 that makes f satisfy the condition; if one exists, then f is O(1), period.
Big O and Theta do not indicate the time taken by an algorithm; they indicate the rate at which the time increases as the input increases. When you understand this, things become much easier and less mathematical. f(x) = 2 (for all and any x) is always O(1), since the output value (2) does not depend on the input value (x) at all! O(1) represents this independence. So do Θ(1) and Ω(1).
I've got a homework question that's been puzzling me. It asks that you prove that the function Sum[log(i)*i^3, {i, n}] (i.e. the sum of log(i)*i^3 from i=1 to n) is Big-Theta(log(n)*n^4).
I know that Sum[i^3, {i, n}] is ((n(n+1))/2)^2 and that Sum[log(i), {i, n}] is log(n!), but I'm not sure 1) whether I can treat these two separately, since they're part of the same product inside the sum, and 2) how to start getting this into a form that will help me with the proof.
Any help would be really appreciated. Thanks!
The series looks like this: log 1 * 1^3 + log 2 * 2^3 + log 3 * 3^3 + ... (up to n terms),
the sum of which does not converge. So approximate its growth by integrating:
Integral from 1 to n of [ log x * x^3 ] dx (integration by parts)
gives 1/4 * log n * n^4 - 1/16 * n^4 (plus a constant),
and it is clear that the dominating term is log n * n^4, therefore the sum belongs to Big Theta(log n * n^4).
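For reference, the integration by parts behind that result:

\[
\int_{1}^{n} x^{3}\ln x\,dx
= \left[\frac{x^{4}}{4}\ln x\right]_{1}^{n} - \int_{1}^{n}\frac{x^{3}}{4}\,dx
= \frac{n^{4}}{4}\ln n - \frac{n^{4}}{16} + \frac{1}{16}
= \Theta(n^{4}\log n).
\]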
The other way you could look at it is -
The series looks like log 1 + log 2 * 8 + log 3 * 27 + ... + log n * n^3.
You could bound each log i term from above by log n, the largest of them.
So you could treat the above series as at most log n * (1 + 2^3 + 3^3 + ... + n^3), which is
log n * [n^2 (n + 1)^2] / 4.
Assuming f(n) = log n * n^4 and
g(n) = log n * [n^2 (n + 1)^2] / 4,
you could show that the limit of f(n)/g(n) as n tends to infinity is a constant [applying L'Hopital's rule, or just expanding; the limit is 4].
That's another way to prove that g(n) belongs to Big Theta(f(n)). Note that this only bounds the sum from above (the sum is at most g(n)), so it shows O(log n * n^4); a matching lower bound, like the one in the hints below, is still needed for Big Theta.
Hope that helps.
Hint for one part of your solution: how large is the sum of the last two summands of your left sum?
Hint for the second part: If you divide your left side (the sum) by the right side, how many summands do you get? How large is the largest one?
Hint for the first part again: Find a simple lower estimate for the sum from n/2 to n in your first expression.
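A sketch of the estimate those hints point at (keep only the top half of the sum for the lower bound, and bound every term by the largest one for the upper bound):

\[
\frac{n}{2}\left(\frac{n}{2}\right)^{3}\log\frac{n}{2}
\;\le\; \sum_{i=\lceil n/2\rceil}^{n} i^{3}\log i
\;\le\; \sum_{i=1}^{n} i^{3}\log i
\;\le\; n \cdot n^{3}\log n,
\]

so the sum is both Ω(n^4 log n) and O(n^4 log n), i.e. Θ(n^4 log n).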
Try the Big-O limit definition and use calculus.
For the calculus you might like to use a computer algebra system.
In the following answer I've shown how to do this with the Maxima open-source CAS:
Asymptotic Complexity of Logarithms and Powers