I'm trying to figure out the right names (definitions) for the following items.
Let's say algorithm 1 has a time complexity like this:
T1(n) = 5 * n^2 + n + 123 = O(n^2)
How should I name an algorithm that has such a complexity? Is it correct to say that the algorithm has a quadratic complexity type, or that the algorithm's type is quadratic, or that the algorithm is in the quadratic complexity class?
I use the word type because, according to this article, if we have a function like:
T(n) = n^2
we say that the function has quadratic type.
I think the word class is plainly incorrect here, because complexity classes are about problems such as NP, NL, etc.
Now let's say we have algorithm 2 with complexity like this:
T2(n) = 2 * log n + 15 = O(log n)
UPDATE:
So the question is: is it correct to say that algorithms 1 and 2 have different types of complexity, different classes of complexity, or something else? What is the right word?
UPDATE 2:
Let's imagine the following. You are talking with your friend Bob and say: "Bob, the first algorithm has quadratic complexity and the second one has logarithmic complexity. So, Bob, as you can see, these algorithms have different complexity ...?" What word should you use instead of "..."? Types, classes, or maybe something else?
The problem is that English is not my native language, and it is almost impossible for me to find the right terms when we talk about complexity. All I can do is try to translate from my language, but my language does not even have these terms.
O(n^2) is quadratic time complexity. You can refer to this Wiki page for a more detailed explanation of the various time complexities.
In simple terms, you can describe an algorithm that has quadratic time complexity by saying: the algorithm runs in quadratic time.
Some more info:
An algorithm will have a space complexity and a time complexity.
The time complexity gives information about how long the algorithm will take to run as a function of its input.
An algorithm that has a quadratic time complexity, O(n^2), will have its run time proportional to the square of n (e.g. bubble sort).
An algorithm that has logarithmic time complexity, O(log n), will have its run time proportional to the log of n (e.g. binary search).
Both of these run deterministically in polynomial time, so they would sit within complexity class P.
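To make those two growth rates concrete, here is a minimal Java sketch. The helper names bubbleSort and binarySearch are mine, not from the answer above: the nested loops perform on the order of n^2 comparisons, while the binary search halves its range each step and so performs on the order of log n comparisons.

import java.util.Arrays;

public class ComplexityExamples {
    // Quadratic time: nested loops over n elements, roughly n^2/2 comparisons (bubble sort).
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    // Logarithmic time: the search range is halved on every iteration (binary search).
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 1, 4, 2};
        bubbleSort(a);                            // O(n^2) time
        System.out.println(Arrays.toString(a));
        System.out.println(binarySearch(a, 4));   // O(log n) time
    }
}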
Illustration of running time:
O(n) is a better complexity to have than O(n^2). It is possible for an O(n^2) algorithm to perform better than an O(n) algorithm up to a certain n, but as n grows larger, the O(n^2) algorithm will become slower than the O(n) one.
Example:
T1(n) = 5 * n = O(n)
T2(n) = 9999*n = O(n)
T3(n) = n^2 = O(n^2)
Case 1: n=10
T1 takes 5*10 = 50 sec
T2 takes 9999*10 = 99990 sec
T3 takes 10 * 10 = 100 sec
T3 performs better than T2, even though it is O(n^2).
Case 2: n=100
T1 takes 5*100 = 500 sec
T2 takes 9999*100 = 999900 sec
T3 takes 100 * 100 = 10000 sec
T3 performs better than T2, even though it is O(n^2).
Case 3: n=10000
T1 takes 5*10000 = 50000 sec
T2 takes 9999*10000 = 99990000 sec
T3 takes 10000 * 10000 = 100000000 sec
Now, T2 performs better than T3. For all n > 9999 (the point where 9999*n = n^2), T2 will perform better than T3.
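If you want to see where that crossover happens, here is a small Java sketch that just evaluates the hypothetical cost formulas T2(n) = 9999*n and T3(n) = n^2 from the example above (these are made-up "seconds", not real measurements):

public class Crossover {
    public static void main(String[] args) {
        // Cost models taken from the example above (hypothetical seconds per input size):
        // T2(n) = 9999 * n  -> O(n)
        // T3(n) = n * n     -> O(n^2)
        for (long n = 1; n <= 20000; n *= 2) {
            long t2 = 9999L * n;
            long t3 = n * n;
            System.out.printf("n=%d  T2=%d  T3=%d  %s%n",
                    n, t2, t3, (t3 < t2 ? "T3 faster" : "T2 faster"));
        }
        // T3 is ahead for every n below 9999, the two formulas tie at n = 9999,
        // and T2 is ahead from n = 10000 onwards.
    }
}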
I commonly use a site called LeetCode to practice problems. In a lot of answers in the discussion section of a problem, I noticed that run times like O(N + N) or O(2N) get simplified to O(N). For example:
int[] nums = {1, 2, 3, 4, 5};
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
This becomes O(N), even though it iterates through nums twice. Why is it not O(2N) or O(N + N)?
In time complexity, constant coefficients do not play a role. This is because the actual time an algorithm takes to run also depends on the physical constraints of the machine. If you run your code on a machine that is twice as fast as another, all other conditions being equal, it will run in about half the time with the same input.
But that constant-factor effect is not what matters when you compare two algorithms with different time complexities. For example, when you compare the running time of an O(N^2) algorithm to that of an O(N) algorithm, the O(N^2) running time grows so fast with the input size that the O(N) one cannot catch up with it, no matter how big you choose its constant coefficient.
Say the constant coefficient is 1000 instead of just 2. Then for input sizes N > 1000, the running time of the O(N^2) algorithm grows in proportion to N * N, while the running time of the O(N) algorithm remains proportional to 1000 * N; since N > 1000, N * N is the larger of the two from that point on.
The time complexity O(n + n) reduces to O(2n). Now 2 is a constant, so the time complexity essentially depends on n.
Hence the time complexity O(2n) equates to O(n).
Also, if there is something like O(2n + 3), it will still be O(n), as the time still essentially depends on the size of n.
Now suppose there is code which is O(n^2 + n); it will be O(n^2), because as the value of n increases, the effect of n becomes less significant compared to the effect of n^2.
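A quick Java sketch of the same idea, using made-up operation counts for 2n + 3 and n^2 + n to show how the lower-order terms fade as n grows:

public class DroppingTerms {
    public static void main(String[] args) {
        // Hypothetical operation counts, matching the expressions above.
        for (long n : new long[]{10, 100, 1000, 100000}) {
            long linear = 2 * n + 3;       // O(2n + 3) -> O(n)
            long quadratic = n * n + n;    // O(n^2 + n) -> O(n^2)
            System.out.printf("n=%-7d  2n+3=%-12d  n^2+n=%-12d  (the lone n is %.4f%% of n^2+n)%n",
                    n, linear, quadratic, 100.0 * n / quadratic);
        }
        // As n grows, the +3 and the lone n contribute a vanishing share of the total work,
        // which is why only the dominant term, without its constant, is kept.
    }
}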
Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2, so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it became 2^(2n), so the answer would be that the new time is the square of the previous time?
Big-O is not for telling you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
 1                              2
 2                              4
 3                              8
 4                             16
 5                             32
...                           ...
10                           1024
20                        1048576
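To see why doubling n squares the work under this assumed cost model of exactly 2^n units, here is a tiny Java sketch; note that the next answer explains why Big-O alone does not justify treating 2^n as the actual running time.

public class ExponentialDoubling {
    public static void main(String[] args) {
        // Assumed work model for an O(2^n) algorithm: exactly 2^n units of work.
        for (int n = 1; n <= 10; n++) {
            long work = 1L << n;               // 2^n
            long workDoubled = 1L << (2 * n);  // 2^(2n)
            System.out.printf("n=%-3d work=%-8d work for 2n=%-12d (equals work^2: %b)%n",
                    n, work, workDoubled, workDoubled == work * work);
        }
        // 2^(2n) = (2^n)^2, so under this model doubling n squares the amount of work.
    }
}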
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be less than M * 2^n for some constant M and starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (no change at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (and not even necessarily the tightest one).
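As a rough illustration, this Java sketch plugs n and 2n into the four example formulas above and prints how much each one grows; the specific value n = 20 is just an arbitrary choice for the demonstration.

public class SameBigO {
    // The four example running-time formulas from above.
    static double f1(double n) { return Math.pow(2, n) + 5000 * n * n + 12300; }
    static double f2(double n) { return 500 * Math.pow(2, n) + 6; }
    static double f3(double n) { return 500 * n * n + 25000 * n + 456000; }
    static double f4(double n) { return 400000000.0; }

    public static void main(String[] args) {
        double n = 20;
        System.out.printf("f1 grows x%.1f  f2 grows x%.1f  f3 grows x%.1f  f4 grows x%.1f%n",
                f1(2 * n) / f1(n), f2(2 * n) / f2(n), f3(2 * n) / f3(n), f4(2 * n) / f4(n));
        // All four are O(2^n), yet doubling n multiplies them by wildly different factors.
    }
}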
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some choice of M and n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some choice of M and n0? This is where you run into problems... no constant value of M can keep M * 2^n above 4^n as n gets arbitrarily large, because the ratio 4^n / 2^n = 2^n itself grows without bound. Thus, 4^n is not upper-bounded by O(2^n).
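A small Java sketch of that last point, using an arbitrary constant M to show the ratio 4^n / (M * 2^n) growing without bound:

public class NotUpperBounded {
    public static void main(String[] args) {
        double M = 1000; // pick any constant you like; the outcome is the same
        for (int n = 1; n <= 40; n += 5) {
            double ratio = Math.pow(4, n) / (M * Math.pow(2, n)); // equals 2^n / M
            System.out.printf("n=%-3d 4^n / (M*2^n) = %.3e%n", n, ratio);
        }
        // The ratio grows without bound, so no constant M satisfies 4^n <= M * 2^n
        // for all large n: 4^n is not O(2^n).
    }
}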
See the comments for further explanation, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through your whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
See the Big-O Cheat Sheet for complementary information.
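Here is the same example written as Java (the snippet above uses Python-style pseudocode); the array name arr is kept from the answer:

public class ConstantVsLinear {
    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5};

        // O(1): a direct index access touches exactly one element,
        // no matter how large the array is.
        int first = arr[0];
        int third = arr[2];

        // O(n): the loop touches every element, so the work grows with the array size.
        for (int elem : arr) {
            System.out.println(elem);
        }

        System.out.println(first + ", " + third);
    }
}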
Assume that a function f is in the complexity class O(N (log N)^2), and that for N = 1,000 the program runs in 8 seconds.
How do I write a formula T(N) that computes the approximate time it takes to run f for any input of size N?
Here is the answer:
8 = c * (1000 x 10)
c = 8 x 10^-4
T(N) = 8 x 10^-4 * (N log2 N)
I don't understand the first line: where does the 10 come from?
Can anybody explain the answer to me, please? Thanks!
T(N) is the maximum time complexity. c is the constant, or O(1), time: the portion of the algorithm's running time that is not affected by the size of the input. The 10 comes from rounding to simplify the math; it is actually log2 of 1000, which is 9.965784. So N x log2 N is 1000 x 10, or more precisely 1000 x 9.965784.
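For what it's worth, here is a small Java sketch of that answer's model, T(N) = c * N * log2(N), calibrated from the single data point T(1000) = 8 seconds; as the next answer notes, Big-O alone does not guarantee such a formula, so treat the outputs as rough estimates only.

public class RuntimeEstimate {
    // Follows the quoted answer's model T(N) = c * N * log2(N),
    // calibrated from the single measurement T(1000) = 8 seconds.
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    public static void main(String[] args) {
        double measuredN = 1000, measuredSeconds = 8;
        double c = measuredSeconds / (measuredN * log2(measuredN)); // roughly 8 x 10^-4
        for (double n : new double[]{1000, 10000, 100000}) {
            System.out.printf("N=%-7.0f estimated T(N) = %.2f seconds%n", n, c * n * log2(n));
        }
    }
}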
O(N (log N)^2) describes how the runtime scales with N, but it's not a formula for calculating runtime in seconds. In fact, Big-O notation doesn't generally give the exact scaling function itself, but an upper bound on it as N becomes large. See here (there's a nice picture showing this last point).
If you're interested in a function's runtime in practice (particularly in the non-asymptotic regime, i.e. small N), one option is to actually run the function and measure it. Do this for multiple values of N, chosen on some grid (possibly with nonlinear spacing). Then, you can interpolate between these points.
Define S(N) = N (log N)^2.
If you can assume that S(N) bounds your program's running time for all N >= 1000,
then you can bound the execution time with the good old rule of three:
S(1000) corresponds to T(1000)
S(N) corresponds to T(N)
T(N) <= S(N) * T(1000) / S(1000), for all N >= 1000
S(1000) = 1000 * (log2 1000)^2, which is approximately 10^5
T(1000) = 8
T(N) <= N (log N)^2 * 8 / 10^5
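And a corresponding Java sketch of this rule-of-three bound, assuming log base 2 in S(N) = N * (log N)^2 (the base is not stated in the answer):

public class RuleOfThreeBound {
    // Bound from the answer above: T(N) <= S(N) * T(1000) / S(1000), with S(N) = N * (log2 N)^2.
    static double log2(double x) { return Math.log(x) / Math.log(2); }
    static double S(double n) { return n * log2(n) * log2(n); }

    public static void main(String[] args) {
        double baseN = 1000, baseSeconds = 8; // the given measurement
        for (double n : new double[]{1000, 10000, 100000}) {
            double bound = S(n) * baseSeconds / S(baseN);
            System.out.printf("N=%-7.0f bound on T(N) = %.2f seconds%n", n, bound);
        }
    }
}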
Can the recurrence
T(N) = SUM T(N-i), for i = 1 to N
be solved as
T(N) <= N*T(N-1),
which finally comes out to O(N^(N-1))?
By solving it iteratively, it comes to:
T(N) <= N*(N-1)*T(N-2) <= ... <= N*(N-1)*...*(N-k+1)*T(1), with k = N-1,
so finally O(N!).
Note that O gives you an upper bound on the execution time, which means that if a certain algorithm, for example, is linear, then it is O(n), but it is also O(n^2) and O(n!) and it is also O of any superlinear function.
Your inference is correct; however, on both steps you overestimated the function's complexity. The recurrence relation T(N) = sum(T(N-i)) is O(2^N) (in fact it is Θ(2^N)). This is easy to show, since 2^n = sum(2^i) + 1 for 0 <= i <= n - 1.
On your first step you used a higher bound, which is perfectly fine for O. However, even with your bound of T(N) <= N*T(N-1), the complexity you ended up with is too high: O(N!), which is less than what you estimated, also satisfies T(N) <= N*T(N-1).
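A quick Java sketch that expands the recurrence numerically (assuming a base case of T(0) = 1, which the question does not state) and compares it against 2^(N-1):

public class RecurrenceGrowth {
    public static void main(String[] args) {
        int max = 20;
        long[] T = new long[max + 1];
        T[0] = 1; // assumed base case; the question does not give one
        for (int n = 1; n <= max; n++) {
            long sum = 0;
            for (int i = 1; i <= n; i++) {
                sum += T[n - i]; // T(N) = sum of T(N-i) for i = 1..N
            }
            T[n] = sum;
        }
        for (int n = 1; n <= max; n++) {
            System.out.printf("T(%d)=%d   2^(n-1)=%d%n", n, T[n], 1L << (n - 1));
        }
        // With T(0) = 1, the sum telescopes to T(N) = 2^(N-1): Theta(2^N), far below N!.
    }
}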
The increasing order of the following functions, shown in the picture below, in terms of asymptotic complexity is:
(A) f1(n); f4(n); f2(n); f3(n)
(B) f1(n); f2(n); f3(n); f4(n);
(C) f2(n); f1(n); f4(n); f3(n)
(D) f1(n); f2(n); f4(n); f3(n)
a) The time complexity ordering for this question was given as (n^0.99)*(log n) < n. How? log may be a slowly growing function, but it still grows faster than a constant.
b) Consider function f1; suppose it is f1(n) = (n^1.0001)(log n). Then what would be the answer?
Whenever an expression involves a product of a logarithmic and a polynomial expression, does the logarithmic factor outweigh the polynomial expression?
c) How do you check such cases? Suppose:
1) (n^2) log n vs (n^1.5): which has the higher time complexity?
2) (n^1.5) log n vs (n^2): which has the higher time complexity?
If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty
(n^C_2)*log(n) grows faster than (n^C_1)
This is because
(n^C_1) grows slower than (n^C_2) (obviously);
also, for values of n larger than 2 (for log in base 2), log(n) is greater than 1,
and in fact log(n) is asymptotically greater than any constant C,
because log(n) -> inf as n -> inf.
If (n^C_2) is asymptotically greater than (n^C_1) AND log(n) is asymptotically greater than 1, then we can certainly say that
(n^2)*log(n) has greater complexity than (n^1.5).
We think of log(n) as a "slowly growing" function, but it still grows faster than 1, which is the key here.
coder101 asked an interesting question in the comments, essentially,
is n^e = Ω((n^c)*log_d(n))?
where e = c + ϵ for arbitrarily small ϵ
Let's do some algebra.
n^e = (n^c)*(n^ϵ)
so the question boils down to
is n^ϵ = Ω(log_d(n))
or is it the other way around, namely:
is log_d(n) = Ω(n^ϵ)
To settle this, let us find out when n^ϵ > log_d(n) holds.
n^ϵ > log_d(n)
ϵ*ln(n) > ln(log_d(n))
ϵ > ln(log_d(n)) / ln(n)
Because we know for a fact that
c * ln(n) > ln(ln(n)) (1)
for any constant c > 0 once n is large enough,
we can say that, for an arbitrarily small ϵ, there exists an n large enough to
satisfy ϵ > ln(log_d(n)) / ln(n),
because, by (1), ln(log_d(n)) / ln(n) -> 0 as n -> infinity.
With this knowledge, we can say that
n^ϵ = Ω(log_d(n))
for arbitrarily small ϵ
which means that
n^(c + ϵ) = Ω((n^c)*log_d(n))
for arbitrarily small ϵ.
in layperson's terms
n^1.1 > n * ln(n)
for some n
also
n ^ 1.001 > n * ln(n)
for some much, much bigger n
and even
n ^ 1.0000000000000001 > n * ln(n)
for some very very big n.
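Here is a small Java sketch of the n^1.1 case (the 1.001 and 1.0000000000000001 cases cross over at far larger n, beyond what doubles can show comfortably); dividing both sides by n, the comparison reduces to n^0.1 versus ln(n):

public class PolynomialBeatsLog {
    public static void main(String[] args) {
        // Demonstrates "n^1.1 > n * ln(n) for some n" from above.
        // Dividing both sides by n, the question is when n^0.1 overtakes ln(n).
        for (double n = 10; n <= 1e18; n *= 10) {
            double poly = Math.pow(n, 0.1);
            double log = Math.log(n);
            System.out.printf("n=%.0e  n^0.1=%6.2f  ln(n)=%6.2f  %s%n",
                    n, poly, log, poly > log ? "n^1.1 ahead" : "n*ln(n) ahead");
        }
        // The crossover shows up between n = 1e15 and n = 1e16; for smaller epsilon
        // the crossover n is astronomically larger, but it always exists.
    }
}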
Replacing f1 = (n^0.9999)(logn) by f1 = (n^1.0001)(logn) will yield answer (C): n, (n^1.0001)(logn), n^2, 1.00001^n
The reasoning is as follows:
- (n^1.0001)(logn) has higher complexity than n, obviously.
- n^2 is higher than (n^1.0001)(logn) because the polynomial part asymptotically dominates the logarithmic part, so the higher-degree polynomial n^2 wins.
- 1.00001^n dominates n^2 because 1.00001^n has exponential growth, while n^2 has polynomial growth. Exponential growth asymptotically wins.
BTW, 1.00001^n looks a little similar to a family called "sub-exponential" growth, usually denoted (1+Ɛ)^n. Still, however small Ɛ is, this growth still dominates any polynomial growth.
The complexity of this problem lies between f1(n) and f2(n).
For f(n) = n^c, where 0 < c < 1, the curve eventually grows so slowly that it becomes trivial compared with a linear growth curve.
For f(n) = log_c(n), where c > 1, the curve eventually grows so slowly that it becomes trivial compared with a linear growth curve.
The product of two such functions will also eventually become trivial compared with a linear growth curve.
Hence, Theta(n^c * log_c(n)) is asymptotically less complex than Theta(n).
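To put some illustrative numbers on this, here is a Java sketch using c = 0.99 and log base 2, matching the (n^0.99)*(log n) < n comparison from the question; the ratio to n does eventually shrink, although the crossover sits astronomically far out.

public class SublinearProduct {
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    public static void main(String[] args) {
        // Compares f(n) = n^0.99 * log2(n) with g(n) = n, as in the question.
        // Their ratio f(n)/g(n) simplifies to log2(n) / n^0.01.
        for (int exp = 50; exp <= 300; exp += 50) {
            double n = Math.pow(10, exp);
            double ratio = log2(n) / Math.pow(n, 0.01);
            System.out.printf("n=1e%-3d  (n^0.99 * log2 n) / n = %.2f%n", exp, ratio);
        }
        // Over this range the ratio keeps falling and drops below 1 near n ~ 1e300,
        // so n^0.99 * log n is asymptotically below n, even though for all smaller n
        // the product is actually the larger of the two.
    }
}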