How to prove a constant is O(1) - time-complexity

How would you possibly show that 2 is O(1)?
Moreover, how would you show that a constant is Θ(1), and hence both Ω(1) and O(1)?
For O, I am under the impression that you can simplify f(n) down to 1, but then how does that prove that 2 is O(1) for some n0? What would the n0 value be in this case?

By definition, a function f is in O(1) if there exist constants n0 and M such that f(n) ≤ M · 1 = M for all n ≥ n0.
If f(n) is defined as 2, then just set M = 2 (or any greater value; it doesn't matter) and n0 = 1 (or any greater value; it doesn't matter), and the condition is met.
[…] that 2 is O(1) for some n0? What would be the n0 value in this case?
n0 is not a parameter here; it's not meaningful to say "O(1) for some n0". You can arbitrarily choose any value of n0 that makes f satisfy the condition; if one exists, then f is O(1), period.
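The witness constants can be spot-checked numerically; the sketch below (Python, with `satisfies_big_o` as a hypothetical helper name) verifies the condition over a finite range:

```python
# Numeric spot check of the Big-O condition f(n) <= M * g(n)
# for all n >= n0, over a finite range (a sanity check, not a proof).
def satisfies_big_o(f, g, M, n0, n_max=1000):
    return all(f(n) <= M * g(n) for n in range(n0, n_max + 1))

f = lambda n: 2   # the constant function f(n) = 2
g = lambda n: 1   # g(n) = 1

# M = 2 and n0 = 1 witness that f is O(1) ...
assert satisfies_big_o(f, g, M=2, n0=1)
# ... and any larger M or n0 works just as well.
assert satisfies_big_o(f, g, M=100, n0=50)
```

Any choice of M and n0 that makes the condition hold is enough; no particular value is "the" answer.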

Big-O and Theta do not indicate the time taken by an algorithm. They indicate the rate at which that time increases as the input increases. Once you understand this, things become much easier and less mathematical. f(x) = 2 (for each and every x) is always O(1), since the output value (2) does not depend on the input value (x) at all! O(1) represents exactly this independence, and so do Θ(1) and Ω(1).

Related

Is O(n*d) similar to O(n) where d is the constant

I'm new to time complexity and I'm trying to figure out which algorithm is better. Might not be the best question of all time, but yeah :/
If d is a constant then O(d*n) and O(n) are the same thing. This is what Big-O is all about: the fact that these two are considered the same is part of the definition of Big-O.
The definition of Big-O is basically that, for large n, a function f(n) is O(g(n)) if there exists a constant k such that f(n) ≤ k * g(n).
In your case, d is just absorbed by the constant k in that definition. A suitable constant k clearly exists: d*n ≤ k*n as long as k is at least d.
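As a quick numeric illustration of the absorption (Python; d = 7 is just an arbitrary choice):

```python
d = 7        # some fixed constant factor (arbitrary choice)
k = d + 1    # any k >= d works as the Big-O constant

# d*n <= k*n holds for every n >= 1, so O(d*n) collapses to O(n).
assert all(d * n <= k * n for n in range(1, 10_000))
```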

Big O notation and measuring time according to it

Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2 so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it became 2^(2n) so the answer would be that the new time is the power of 2 of the previous time?
Big O does not tell you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
         1                          2
         2                          4
         3                          8
         4                         16
         5                         32
       ...                        ...
        10                       1024
        20                    1048576
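The table above can be reproduced with a short Python snippet; note that going from n = 10 to n = 20 squares the work, since 2^(2n) = (2^n)^2:

```python
def work(n):
    # Units of work for an O(2^n)-time algorithm (constant factor 1).
    return 2 ** n

for n in [1, 2, 3, 4, 5, 10, 20]:
    print(n, work(n))

# Doubling the input size squares the work ...
assert work(20) == work(10) ** 2
# ... but the complexity class is still O(2^n); only n changed.
```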
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be at most M * 2^n for some constant M and starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 varies wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (which is not even guaranteed to be the tightest possible upper bound).
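To make that concrete, this sketch evaluates how each of the four formulas responds to doubling n; the growth factor for f1 and f2 is enormous, while f4 does not change at all:

```python
# All four formulas are O(2^n), yet doubling the input size
# changes their values very differently.
funcs = {
    "f1": lambda n: 2**n + 5000 * n**2 + 12300,
    "f2": lambda n: 500 * 2**n + 6,
    "f3": lambda n: 500 * n**2 + 25000 * n + 456000,
    "f4": lambda n: 400000000,   # constant, independent of n
}

n = 20
for name, f in funcs.items():
    print(f"{name}: f(2n)/f(n) = {f(2 * n) / f(n):.2f}")

# The constant function is completely unaffected by doubling n.
assert funcs["f4"](2 * n) == funcs["f4"](n)
```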
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some constants M and n0? Of course! With M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some constants M and n0? This is where you run into problems: for no constant value of M can you make M * 2^n keep up with 4^n as n gets arbitrarily large, because the ratio 4^n / 2^n = 2^n is unbounded. Thus, 4^n is not upper-bounded by O(2^n).
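Both directions can be checked numerically (Python):

```python
# 2^n <= 1 * 4^n everywhere, so 2^n is O(4^n) with M = 1, n0 = 1.
assert all(2**n <= 4**n for n in range(1, 100))

# But 4^n / 2^n == 2^n grows without bound, so no constant M
# can ever satisfy 4^n <= M * 2^n for all large n.
for n in [1, 5, 10, 20]:
    assert 4**n // 2**n == 2**n
```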
See the comments for further explanation, but indeed, this was just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through your whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
See the Big-O Cheat Sheet for complementary information.
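The two operations above can be sketched in Python:

```python
arr = [1, 2, 3, 4, 5]

# O(1): direct index access; its cost does not depend on len(arr).
first = arr[0]
third = arr[2]

# O(n): a loop touching every element; its cost grows with len(arr).
total = 0
for elem in arr:
    total += elem

assert (first, third, total) == (1, 3, 15)
```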

Is T(n)=n/x sublinear in n?

I know that a sublinear-time algorithm is expressed by o(n).
Is T(n) = n/x sublinear in n for a positive number x?
In other words, is n/x = o(n)?
No.
T(n) = n/x is linear, in the same way as T(n) = xn is linear. If your function is just n multiplied by some constant c, then it's linear. In this particular case, c=1/x.
You can also check this using the formal definition of small o.
Formally, f(n) = o(g(n)) as n → ∞ means that for
every positive constant ε there exists a constant N such that |f(n)| <= ε|g(n)| for all n>= N.
In this case, pick ε = 1/(2x) and you won't be able to find an N that satisfies the condition, so n/x is not o(n).
Intuitively, one says f(n) = o(g(n)) if and only if f(n) is dominated by g(n) eventually even if you "slow g(n) down" multiplying it by a very small constant.
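Both the failing case and a genuinely sublinear one can be spot-checked numerically (Python; x = 4 and the ε values are illustrative):

```python
import math

x = 4                 # any positive constant
eps = 1 / (2 * x)     # the adversarial epsilon from the answer

# n/x <= eps*n would require n/4 <= n/8, which fails for every
# n >= 1, so no N exists: n/x is NOT o(n).
assert all(n / x > eps * n for n in range(1, 10_000))

# By contrast, sqrt(n) IS o(n): for any eps, sqrt(n) <= eps*n
# holds once n >= 1/eps**2.
eps = 0.01
N = int(1 / eps**2) + 1
assert all(math.sqrt(n) <= eps * n for n in range(N, N + 1000))
```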

Big O calculation

I was studying Big O notation. I know that Big O is denoted by:
f(n) ∈ O(g(n)) or f(n) = O(g(n))
It means the function f(n) has a growth rate no greater than that of g(n).
Now let's say I have an equation:
5n + 2 ∈ O(n)
By the above equation, shouldn't n be equal to g(n) and 5n + 2 equal to f(n)?
Now for any value of n, f(n) is always greater than g(n). So how is Big O true in this case?
You should read up on the concept of Big-O in more detail.
The relation
f(n) ∈ O(g(n))
says
for some Constant C
f(n) <= C * g(n)
In this case, C is some value for which 5n + 2 is always at most C * n.
If you solve it:
5n + 2 <= Cn
2 <= (C - 5)*n
From this you can easily find out that if C = 6,
then the inequality holds for every n ≥ 2 (it fails at n = 1, so take n0 = 2)!
Hope this helps!
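A quick check of that constant (Python):

```python
C = 6

# 5n + 2 <= 6n holds for every n >= 2 ...
assert all(5 * n + 2 <= C * n for n in range(2, 100_000))

# ... but fails at n = 1, so the starting point n0 must be at least 2.
assert not (5 * 1 + 2 <= C * 1)
```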
That's not a correct definition of big O notation. If f(x) is O(g(x)), then there must exist constants C and N such that |f(x)| <= C |g(x)| for all x > N. So, if f(x) is always less than or equal to some constant times g(x) after some x value N, then f(x) is O(g(x)). Effectively, this means that constant factors are irrelevant, because you can choose C to be any value. So, for your example, f(n) = 5n + 2 <= C * g(n) = 10000n, so f(n) is O(g(n)).
Considering what the Big-O notation stands for you have the statement
5n +2 E O(n)
or as well
5n +2 = O(n)
Given that Big-O notation states an upper bound for our function, which is to say it establishes an upper limit on the function's growth, the problem can be reconsidered in the following way:
5n + 2 <= c*n , for some constant c
We can see that the statement holds true given that it is possible to find some constant c such that c*n is greater than or equal to our function for all sufficiently large n (making that constant as big as we need).
In a more general way, we can say that any given polynomial function f(n) will belong to O(g(n)) if the degree of g(n) is greater than or equal to the degree of f(n), that is, the highest degree among its terms.
Formally:
Let f(n) = n^x;
Let g(n) = n^y; so that x <= y
Then f(n) = O(g(n)).
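That degree rule is easy to spot-check (Python; x = 3 and y = 5 are arbitrary exponents with x <= y):

```python
x, y = 3, 5   # any exponents with x <= y

# n^x <= 1 * n^y for all n >= 1, so M = 1 and n0 = 1 witness
# that n^x is O(n^y).
assert all(n**x <= n**y for n in range(1, 1000))
```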
The same applies to Big-Omega, the other way around.
Hope it works for you

Theoretical time complexity

I'm having trouble understanding time complexity beyond just Big O. In this example:
f(n) = n^10
g(n) = (2n)^10
Is f θ(g)? I'm guessing it is θ(g), because you can find constants c1 and c2 that make c1*g(n) a lower bound of f(n) and c2*g(n) an upper bound.
See: f(n) = n^10 and g(n) = (2n)^10 = 2^10 * n^10 = 1024 * n^10.
For the lower bound: f(n) = n^10 = (1/2)^10 * (2n)^10, so f(n) >= c1*g(n) for c1 = (1/2)^10 = 1/1024.
For the upper bound: f(n) = (1/2)^10 * (2n)^10 <= c2*(2n)^10 for any c2 >= (1/2)^10, so f(n) <= c2*g(n).
Hence c1*g(n) <= f(n) <= c2*g(n), where c1 <= (1/2)^10 and c2 >= (1/2)^10.
Hence f(n) = Theta(g(n)), i.e. f(n) = θ(g(n)).
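Since g(n) = (2n)^10 = 1024 * n^10, the ratio f(n)/g(n) is the constant 1/1024, which a short check confirms (Python):

```python
f = lambda n: n**10
g = lambda n: (2 * n) ** 10

# g(n) = 2^10 * n^10 = 1024 * f(n) exactly, for every n, so
# c1 = c2 = 1/1024 witness f(n) = Theta(g(n)).
for n in [1, 3, 10, 50]:
    assert g(n) == 1024 * f(n)
```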