Simplifying time complexity of a function - time-complexity

Say I have a time complexity O(f(m) * n) where f(m) is not a randomized function but it will always produce a value between 0 and 1 (exclusive). Should I drop the f(m) term and conclude that my time complexity is O(n)? Thanks so much.

You are using big-O notation, which gives an upper bound on the running time (the worst-case behaviour). Since f(m) always produces a value strictly less than 1, f(m) * n is at most n, so O(f(m) * n) can indeed be written as O(n).
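As a quick sanity check, here is a minimal sketch in C (with a made-up f(m), purely for illustration) showing that a loop driven by f(m) * n can never iterate more than n times:

#include <math.h>
#include <stdio.h>

/* Hypothetical f(m): always returns a value in (0, 1). */
double f(int m) {
    return 1.0 / (m + 2.0);
}

int main(void) {
    int n = 1000, m = 7;
    int iterations = 0;
    int limit = (int)ceil(f(m) * n);   /* at most n, since f(m) < 1 */
    for (int i = 0; i < limit; i++) {
        iterations++;                  /* O(1) work per iteration */
    }
    printf("%d iterations, bound n = %d\n", iterations, n);
    return 0;
}

Whatever f(m) returns, limit <= n, so the loop is bounded by n iterations and the whole thing is O(n).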

Related

Big O vs function type definition

I'm trying to figure out the right names (definitions) for the following items.
Let's say algorithm 1 has a time complexity like this:
T1(n) = 5 * n^2 + n + 123 = O(n^2)
How should I name an algorithm that has such complexity? Is it correct to say that the algorithm has a quadratic complexity type, or that the algorithm type is quadratic, or that the algorithm belongs to a quadratic complexity class?
I use the word type because, according to this article, if we have a function like:
T(n) = n^2
we say that the function has quadratic type.
I think the word class is incorrect, because complexity classes are about problems such as NP, NL, etc.
Now let's say we have algorithm 2 with complexity like this:
T2(n) = 2 * log n + 15 = O(log n)
UPDATE:
So the question is: is it correct to say that algorithms 1 and 2 have different types of complexity, different classes of complexity, or something else? What is the right word?
UPDATE 2:
Let's imagine the following. You are talking with your friend Bob and say: "Bob, the first algorithm has quadratic complexity and the second one has logarithmic complexity. So Bob, as you can see, these algorithms have different complexity ...". What word should you use instead of "..."? Types, classes, or maybe something else?
The problem is that English is not my native language, and it is almost impossible for me to find the right terms when talking about complexities. All I can do is try to translate from my language, but my language does not even have these terms.
O(n^2) is quadratic time complexity. You can refer to this Wiki page for a more detailed explanation of the various time complexities.
In simple terms, you can describe an algorithm that has quadratic time complexity by saying: the algorithm runs in quadratic time.
Some more info:
An algorithm will have a space complexity and a time complexity.
The time complexity gives information about how long the algorithm will take to run as a function of its input size.
An algorithm that has quadratic time complexity O(n^2) will have its run time proportional to the square of n (e.g. bubble sort).
An algorithm that has logarithmic time complexity O(log n) will have its run time proportional to the log of n (e.g. binary search). Both are sketched below.
Both of these algorithms run in deterministic polynomial time, so the problems they solve are in the complexity class P.
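To make the two examples concrete, here are minimal C sketches of both (simplified versions, not tuned implementations): bubble sort does roughly n^2 comparisons, while binary search halves the search range on every step, so it does about log2(n) comparisons.

/* Bubble sort: two nested loops over the array -> O(n^2) comparisons. */
void bubble_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {
            if (a[j] > a[j + 1]) {
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}

/* Binary search on a sorted array: the range is halved each iteration -> O(log n). */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;  /* not found */
}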
Illustration of running time:
An O(n) algorithm is better to have than an O(n^2) one. It is possible for an O(n^2) algorithm to perform better than an O(n) algorithm up to a certain n, but as n grows larger, the O(n^2) algorithm will eventually be slower.
Example:
T1(n) = 5 * n = O(n)
T2(n) = 9999*n = O(n)
T3(n) = n^2 = O(n^2)
Case 1: n=10
T1 takes 5*10 = 50 sec
T2 takes 9999*10 = 99990 sec
T3 takes 10 * 10 = 100 sec
T3 performs better than T2, even though it is O(n^2).
Case 2: n=100
T1 takes 5*100 = 500 sec
T2 takes 9999*100 = 999900 sec
T3 takes 100 * 100 = 10000 sec
T3 performs better than T2, even though it is O(n^2).
Case 3: n=10000
T1 takes 5*10000 = 50000 sec
T2 takes 9999*10000 = 99990000 sec
T3 takes 10000 * 10000 = 100000000 sec
Now, T2 performs better than T3, and it will for every n > 9999.
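A tiny sketch (using the same hypothetical cost formulas T2 and T3 as above, not real measurements) that finds the crossover point by brute force:

#include <stdio.h>

int main(void) {
    for (long n = 1; n <= 20000; n++) {
        long t2 = 9999 * n;   /* T2(n) = 9999 * n   -> O(n)   */
        long t3 = n * n;      /* T3(n) = n^2        -> O(n^2) */
        if (t2 < t3) {
            printf("T2 first beats T3 at n = %ld\n", n);  /* prints n = 10000 */
            break;
        }
    }
    return 0;
}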

time complexity for loop justification

Hi, could anyone explain why the first one is true and the second one is false?
First loop: the number of times the loop body executes is k,
where, for a given n, i takes the values 1, 2, 4, ... while less than n, so
2 ^ k <= n
or, k <= log(n).
This implies that k, the number of times the first loop executes, is log(n); that is, the time complexity here is O(log(n)).
The second loop's iteration count does not depend on p, because p is not used in the loop's condition. p does take different values inside the loop, but it does not influence the condition or the number of times p*p is executed, so the time complexity is O(n).
O(log n):
for(i=1; i<n; i=i*c) { /* any O(1) expression */ }
Here the time complexity is O(log n), because the index i is multiplied (or, in the decreasing variant, divided) by a constant c > 1 on every step. (Note that i must start at a non-zero value; starting at 0 would loop forever, since 0*c stays 0.)
In the second case,
for(p=2, i=1; i<n; i++) { p = p*p; }
the increment is constant (i = i+1), so the loop runs n times irrespective of the value of p; the loop alone therefore has a complexity of O(n). If you also charge for the arithmetic, p = p*p with naive multiplication is not an O(1) operation, because the size of p doubles on every iteration, so the total cost is higher than O(n); counting only iterations, though, the answer is O(n).
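A quick sketch that just counts iterations of the two loops (using a hypothetical n), to confirm the log n vs n behaviour:

#include <stdio.h>

int main(void) {
    int n = 1024;

    /* First loop: i is doubled each time -> about log2(n) iterations. */
    int count1 = 0;
    for (int i = 1; i < n; i = i * 2) count1++;

    /* Second loop: i is incremented each time -> n - 1 iterations,
       no matter what happens to p inside the body. */
    int count2 = 0;
    double p = 2.0;               /* double used just to avoid integer overflow */
    for (int i = 1; i < n; i++) { p = p * p; count2++; }

    printf("doubling loop: %d iterations (log2(%d) = 10)\n", count1, n);
    printf("incrementing loop: %d iterations\n", count2);
    return 0;
}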
Let me summarize with an example. Suppose the value of n is 8; then the possible values of i are 1, 2, 4, 8, and as soon as i reaches 8 the loop breaks. You can see the loop runs 3 times, i.e. log(n) times, because the value of i keeps doubling. Hence, true.
For the second part, it is a normal loop which runs for all values of i from 1 to n, so it executes n times no matter how fast p grows (p is squared on every iteration, so it gets huge very quickly). The loop is therefore O(n), not O(log n); that is why the second statement is false.
In order to understand why some algorithm is O(log n) it is enough to check what happens when n = 2^k (i.e., we can restrict ourselves to the case where log n happens to be an integer k).
If we inject this into the expression
for(i=1; i<2^k; i=i*2) s+=i;
we see that i will take the values 1, 2, 4, 8, ..., i.e., 2^0, 2^1, 2^2, 2^3, ..., up to the last one 2^(k-1) (once i reaches 2^k the condition fails). In other words, the body of the loop will be evaluated k times. Therefore, if we assume that the body is O(1), we see that the complexity is k*O(1) = O(k) = O(log n).

Big O notation and measuring time according to it

Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiply the input size n by 2, so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it becomes 2^(2n), so the answer would be that the new time is the square of the previous time?
Big O does not tell you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
1                         2
2                         4
3                         8
4                         16
5                         32
...                       ...
10                        1024
20                        1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be at most M * 2^n for some constant M and all n beyond some starting value n0. But obviously, the change in execution time you observe when doubling the size from n1 to 2 * n1 varies wildly between them (it does not change at all in the case of f4(n)). You cannot use Big-O analysis alone to determine the effect on execution time. It only defines an upper bound on the execution time (and that bound is not even guaranteed to be tight).
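As a small illustration (just plugging numbers into the example formulas above), here is what doubling n from 10 to 20 does to each of the four cost functions:

#include <math.h>
#include <stdio.h>

/* The four example cost formulas from above. */
double f1(double n) { return pow(2, n) + 5000 * n * n + 12300; }
double f2(double n) { return 500 * pow(2, n) + 6; }
double f3(double n) { return 500 * n * n + 25000 * n + 456000; }
double f4(double n) { (void)n; return 400000000.0; }

int main(void) {
    double n = 10;
    printf("f1: x%.2f\n", f1(2 * n) / f1(n));  /* ~x6 here; the 2^n term takes over for larger n */
    printf("f2: x%.2f\n", f2(2 * n) / f2(n));  /* ~x1024 */
    printf("f3: x%.2f\n", f3(2 * n) / f3(n));  /* ~x1.5 */
    printf("f4: x%.2f\n", f4(2 * n) / f4(n));  /* exactly x1 */
    return 0;
}

All four are O(2^n), yet the observed slowdown factors are completely different, which is the point being made above.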
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some constant M and starting value n0? Of course: with M = 1 and n0 = 1 it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some constant M and starting value n0? This is where you run into problems: since 4^n = 2^n * 2^n, the inequality would require 2^n <= M for all sufficiently large n, and no constant M can satisfy that, because 2^n grows without bound. Thus, 4^n is not upper-bounded by O(2^n).
See the comments for further explanation, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through the whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
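The same two operations written out in C (a tiny sketch, mirroring the arr example above):

#include <stdio.h>

int main(void) {
    int arr[] = {1, 2, 3, 4, 5};
    int n = 5;

    /* O(1): a direct index access costs the same no matter how big the array is. */
    int x = arr[2];

    /* O(n): visiting every element costs time proportional to the array's size. */
    long sum = 0;
    for (int i = 0; i < n; i++) sum += arr[i];

    printf("arr[2] = %d, sum = %ld\n", x, sum);
    return 0;
}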
See the Big-O Cheat Sheet for complementary information.

How to calculate the time complexity?

Assume that function f is in the complexity class O(N (log N)^2), and that for N = 1,000 the program runs in 8 seconds.
How do I write a formula T(N) that computes the approximate time it takes to run f for any input of size N?
Here is the answer:
8 = c * (1000 x 10)
c = 8 x 10^-4
T(N) = 8 x 10^-4 * (N log2 N)
I don't understand the first line: where does the 10 come from? Can anybody explain the answer to me please? Thanks!
T(N) here is the estimated running time, and c is the constant of proportionality, i.e. the per-unit cost that does not depend on the size of the input. The 10 comes from rounding to simplify the math; it is actually 9.965784, which is log2 of 1000. So N x log2 N is 1000 x 10, or more precisely 1000 x 9.965784.
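Following the same reasoning (and keeping the N log2 N reading used in the quoted answer), the formula can be applied to any other input size; for a hypothetical N = 1,000,000:

T(N)    = 8 x 10^-4 * N * log2(N)
T(10^6) ≈ 8 x 10^-4 * 10^6 * 19.93
        ≈ 15945 seconds (about 4.4 hours)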
O(N (log N)^2) describes how the runtime scales with N, but it's not a formula for calculating runtime in seconds. In fact, Big-O notation doesn't generally give the exact scaling function itself, but an upper bound on it as N becomes large. See here (there's a nice picture showing this last point).
If you're interested in a function's runtime in practice (particularly in the non-asymptotic regime, i.e. small N), one option is to actually run the function and measure it. Do this for multiple values of N, chosen on some grid (possibly with nonlinear spacing). Then, you can interpolate between these points.
Define S(N) = N (log N)^2.
If you can assume that S(N) bounds your program's running time for all N >= 1000,
then you can bound your execution time with the good old rule of three:
S(1000) corresponds to T(1000)
S(N) corresponds to T(N)
T(N) <= S(N) * T(1000) / S(1000) for all N >= 1000
S(1000) = 1000 * (log2 1000)^2 ≈ 10^5
T(1000) = 8
T(N) <= N (log N)^2 * 8 / 10^5
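A small sketch of that rule-of-three estimate in C (assuming, as above, that log means log2 and that the 8-second measurement at N = 1000 is reliable; the target N is hypothetical):

#include <math.h>
#include <stdio.h>

/* S(N) = N * (log2 N)^2, the assumed growth function. */
static double S(double n) {
    double l = log2(n);
    return n * l * l;
}

int main(void) {
    double t1000 = 8.0;            /* measured: 8 seconds at N = 1000 */
    double n = 1000000.0;          /* hypothetical target input size  */
    double estimate = S(n) * t1000 / S(1000.0);
    printf("estimated T(%.0f) = %.0f seconds\n", n, estimate);
    return 0;
}

For N = 10^6 this prints roughly 32000 seconds, i.e. the same computation as 8 * S(10^6) / S(1000) done by hand.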

Time Complexity Analysis in the following relation

Can the recurrence
T(N) = SUM T(N-i), for i = 1 to N
be solved as
T(N) <= N*T(N-1),
which finally gives O(N^(N-1))?
Solving iteratively, it becomes:
T(N) <= N*(N-1)*T(N-2) <= ... <= N*(N-1)*...*(N-k+1)*T(N-k), which with k = N-1 is N*(N-1)*...*2*T(1),
so finally O(N!).
Note that O gives you an upper bound on the execution time, which means that if a certain algorithm, for example, is linear, then it is O(n), but it is also O(n^2) and O(n!) and it is also O of any superlinear function.
Your inference is correct; however, on both steps you overestimated your function's complexity. The recurrence relation T(N) = sum(T(N-i)) is O(2^N) (in fact it is Θ(2^N)). This is easy to show: T(N) = T(N-1) + (T(N-2) + ... + T(0)) = T(N-1) + T(N-1) = 2*T(N-1); equivalently, 2^n = sum(2^i) + 1 for 0 <= i <= n-1.
On your first step you used a looser bound, which is perfectly fine for big O. However, even with your bound of T(N) <= N*T(N-1), the O(N^(N-1)) you ended up with is too high: O(N!), which is smaller than what you estimated, also satisfies T(N) <= N*T(N-1).
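A quick numeric check of that claim (just evaluating the recurrence directly for small N, with T(0) = 1 assumed as the base case, since the question does not state one):

#include <stdio.h>

int main(void) {
    long long T[20];
    T[0] = 1;                          /* assumed base case */
    for (int n = 1; n < 20; n++) {
        T[n] = 0;
        for (int i = 1; i <= n; i++)   /* T(n) = sum of T(n-i) for i = 1..n */
            T[n] += T[n - i];
    }
    for (int n = 1; n < 20; n++)
        printf("T(%2d) = %8lld   ratio T(n)/T(n-1) = %.2f\n",
               n, T[n], (double)T[n] / T[n - 1]);
    return 0;
}

From n = 2 onwards the ratio settles at exactly 2, i.e. T(n) = 2*T(n-1), which is why the closed form is Θ(2^N) rather than O(N!).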