Not sure whether it's smaller or larger - Big O notation - time-complexity

Could one of you kindly tell me whether it's smaller or bigger?
Is O(N * log K) bigger than O(N)? I think it is, because O(N log N) is bigger than the linear O(N).

Yes, it is bigger, unless for some reason K is always one, in which case you wouldn't put the log K in O(N * log K) at all and it would just be O(N).
Think of it this way: what are O(N) and O(N * log K) saying?
Well, O(N) is saying, for example, that you have something like an array with N elements in it, and for each element you do an operation that takes constant time, e.g. adding a number to that element.
O(N * log K) is saying that not only do you need to do an operation for each element, but each of those operations takes log K time. It's important to note that K denotes something different from N here; for example, you could have the array from the O(N) example plus another array with K elements. Here's a code example:
public void SomeNLogKOperation(int[] nElements, int[] kElements) {
    // for each element in nElements, i.e. O(N)
    for (int i = 0; i < nElements.length; i++) {
        // do an operation that takes O(log K) time, so we have O(N * log K)
        int val = operationThatTakesLogKTime(nElements[i], kElements);
    }
}
public void SomeNOperation(int[] nElements) {
    // for each element in nElements, i.e. O(N)
    for (int i = 0; i < nElements.length; i++) {
        // simple operation that takes O(1) time, so we have O(N * 1) = O(N)
        int val = nElements[i] + 1;
    }
}
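To make the O(log K) step concrete, here is one way the helper used above could be implemented: a binary search over kElements, a classic O(log K) operation. This is just an illustrative sketch; it assumes kElements is sorted, which the original snippet doesn't state.

// Hypothetical body for operationThatTakesLogKTime: binary search for
// `target` in a sorted K-element array -- O(log K) time.
private int operationThatTakesLogKTime(int target, int[] kElements) {
    int lo = 0, hi = kElements.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // midpoint without int overflow
        if (kElements[mid] == target) return mid;
        if (kElements[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; // target not found
}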

I absolutely missed that you used log(K) in the expression; this answer is invalid if K is not dependent on N and, even more so, if K is less than 1. But then you use O(N log N) in the next sentence, so let's go with N log N.
So for N = 1000, O(N) is exactly that.
O(N log N) is a factor of log N more. Usually we are looking at a base-2 log, so O(N log N) is about 10,000.
The difference is not large, but it is very measurable.
For N = 1,000,000:
You have O(N) at 1 million.
O(N log N) would sit comfortably at 20 million.
It is helpful to know the logs of common values:
8 bits: max 255 => log 256 = 8
10 bits: max 1023 => log 1024 = 10; conclude log 1000 is very close to 10.
16 bits: max 65535 => log 65536 = 16
20 bits: max 1,048,575 => log 1,048,576 = 20, very close to 1 million.
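A quick sketch to check these values yourself (Java has no built-in log2, so this uses the change-of-base identity):

public class LogTable {
    public static void main(String[] args) {
        int[] values = {255, 1024, 65536, 1_048_576};
        for (int v : values) {
            // log2(v) = ln(v) / ln(2)
            System.out.printf("log2(%,d) = %.2f%n", v, Math.log(v) / Math.log(2));
        }
    }
}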

This question is not really about algorithmic time complexity; only math is required here.
So we are comparing two functions, and it all depends on context. What do we know of N and K? If K and N are both free variables that tend to infinity, then yes, O(N * log K) is "bigger" than O(N), in the sense that
N = O(N * log K) but
N * log K ≠ O(N).
However, if K is some constant parameter > 1, then they are the same complexity class (for instance, if K = 8 is fixed, then N * log 8 = 3N = O(N)).
On the other hand, K could be 1, or less than 1, or even 0 or negative, in which case log K is zero, negative, or undefined, and we obtain different relationships. So you need to define/provide more context to be able to make this comparison.


What is the time complexity (Big-O) of this while loop (Pseudocode)?

This is written in pseudocode.
We have an array A of length n (n >= 2):
int i = 1;
while (i < n) {
    if (A[i] == 0) {
        break;      // terminates the while-loop
    }
    i = i * 2;      // doubles i
}
I am new to this whole subject and to coding, so I am having a hard time grasping it and need an "explain like I'm 5".
I know the code doesn't make a lot of sense, but it is just an exercise; I have to determine the best case and worst case.
So in the best case, Big O would be O(1), if the value at A[1] is 0.
For the worst-case scenario I thought the time complexity of this loop would be O(log(n)), as i doubles.
Is that correct?
Thanks in advance!
For Big O notation you take the worst-case scenario. In the case where A[i] never evaluates to zero, your loop is like this:
int i = 1;
while (i < n) {
    i *= 2;
}
i is doubled on each iteration, i.e. exponential growth.
Given an example of n = 16, the values of i would be:
1
2
4
8
(it wouldn't get to 16)
That is 4 iterations, and 2^4 = 16.
To work out the power, you take the log to base 2 of n, i.e. log2(16) = 4.
So the worst case would be log(n) iterations,
and the complexity would be stated as O(log(n)).
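As a quick numerical check (an illustrative sketch, not part of the original answer), you can count the loop's iterations and compare them with log2(n):

public class DoublingLoop {
    public static void main(String[] args) {
        for (int n = 2; n <= 1 << 20; n *= 4) {
            int iterations = 0;
            for (int i = 1; i < n; i *= 2) {
                iterations++; // the worst-case loop body
            }
            System.out.printf("n=%8d  iterations=%2d  log2(n)=%4.1f%n",
                    n, iterations, Math.log(n) / Math.log(2));
        }
    }
}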

How to calculate the time complexity of this code?

char[] chars = String.valueOf(num).toCharArray();
int n = chars.length;
for (int i = n - 1; i > 0; i--) {
    if (chars[i - 1] > chars[i]) {
        chars[i - 1]--;
        Arrays.fill(chars, i, n, '9');
    }
}
return Integer.parseInt(new String(chars));
What is the time complexity of this code? Could you teach me how to calculate it? Thank you!
Time complexity is a measure of how a program's run-time changes as input size grows. The first thing you have to determine is what (if any) aspect of your input can cause run-time to vary, and to represent that aspect as a variable. Framing run-time as a function of that variable (or those variables) is how you determine the program's time complexity. Conventionally, when only a single aspect of the input causes run-time to vary, we represent that aspect by the variable n.
Your program primarily varies in run-time based on how many times the for-loop runs. We can see that the for-loop runs the length of chars times (note that the length of chars is the number of digits of num). Conveniently, that length is already denoted as n, which we will use as the variable representing input size. That is, taking n to be the number of digits in num, we want to determine exactly how run-time varies with n by expressing run-time as a function of n.
Also note that when doing complexity analysis, you are primarily concerned with the growth in run-time as n gets arbitrarily large (how run-time scales with n as n goes to infinity), so you typically ignore constant factors and focus only on the highest-order terms, writing run-time in "Big O" notation. That is, because 3n, n, and n/2 all grow in about the same way as n goes to infinity, we represent them all as O(n). Our primary goal is to distinguish this kind of linear growth from the quadratic O(n^2) growth of n^2, 5n^2 + 10, or n^2 + n; from the logarithmic O(log n) growth of log(n), log(2n), or log(n) + 1; and from the constant O(1) time (time that doesn't scale with n) represented by 1, 5, 100000, etc.
So, let's try to express the total number of operations the program does in terms of n. It can be helpful at first to just go line-by-line for this and to add everything up in the end.
The first line, turning num into an n-character string and then turning that string into a length-n array of chars, does O(n) work. (n work to turn each of n digits into a character, then another n work to put each of those characters into an array; n + n = 2n = O(n) total work.)
char[] chars = String.valueOf(num).toCharArray(); // O(n)
The next line just reads the length value from the array and saves it as n. This operation takes the same amount of time no matter how long the array is, so it is O(1).
int n = chars.length; // O(1)
For every digit of num, our program runs 1 for loop, so O(n) total loops:
for (int i = n - 1; i > 0; i--) { // O(n)
Inside the for loop, a conditional check is performed, and then, if it returns true, a value may be decremented and the array from i to n filled.
if (chars[i - 1] > chars[i]) { // O(1)
chars[i - 1]--; // O(1)
Arrays.fill(chars, i, n, '9'); // O(n-i) = O(n)
}
The fill operation is O(n-i) because that is how many characters may be changed to '9'. O(n-i) is at most O(n): in the worst case (i = 1) nearly the entire array is filled, so we bound the fill by O(n).
Finally, you parse the n characters of chars as an int, and return it. Altogether:
static Integer foo(int num) {
    char[] chars = String.valueOf(num).toCharArray(); // O(n)
    int n = chars.length;                             // O(1)
    for (int i = n - 1; i > 0; i--) {                 // O(n)
        if (chars[i - 1] > chars[i]) {                // O(1)
            chars[i - 1]--;                           // O(1)
            Arrays.fill(chars, i, n, '9');            // O(n)
        }
    }
    return Integer.parseInt(new String(chars));       // O(n)
}
When we add everything up, we get the total time complexity as a function of n, T(n):
T(n) = O(n) + O(1) + O(n)*(O(1) + O(1) + O(n)) + O(n)
There is a product in the expression to represent the total work done across all iterations of the for-loop: O(n) iterations times O(1) + O(1) + O(n) work in each iteration. In reality, on some iterations the for-loop might do only O(1) work (when the condition is false), but in the worst case the whole body is executed every iteration, and complexity analysis is typically done for the worst case unless otherwise specified.
You can simplify this function for run-time by using the fact that big O strips constants and lower-order terms, along with the facts that O(a) + O(b) = O(a + b) and a*O(b) = O(a*b).
T(n) = O(n+1+n) + O(n)*O(1 + 1 + n)
= O(2n+1) + O(n)*O(n+2)
= O(n) + O(n)*O(n)
= O(n) + O(n^2)
= O(n^2 + n)
= O(n^2)
So you would say that the overall time complexity of the program is O(n^2), meaning that run-time scales quadratically with input size in the worst case.
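To see the quadratic worst case concretely: with strictly decreasing digits (e.g. 987654321) the condition fires on every iteration, and the fill writes n-i characters each time, for a total of 1 + 2 + ... + (n-1) = n(n-1)/2 writes. A small counting sketch (my own instrumentation of the same loop, not part of the original code):

public class FillCount {
    public static void main(String[] args) {
        char[] chars = "987654321".toCharArray(); // worst case: strictly decreasing
        int n = chars.length;
        long writes = 0;
        for (int i = n - 1; i > 0; i--) {
            if (chars[i - 1] > chars[i]) {
                chars[i - 1]--;
                for (int j = i; j < n; j++) { // same effect as Arrays.fill(chars, i, n, '9')
                    chars[j] = '9';
                    writes++;
                }
            }
        }
        // prints "36 writes, n(n-1)/2 = 36" for n = 9
        System.out.println(writes + " writes, n(n-1)/2 = " + (long) n * (n - 1) / 2);
    }
}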

Why Are Time Complexities Like O(N + N) Equal To O(N)? [duplicate]

This question already has answers at "Why is the constant always dropped from big O analysis?" (closed as a duplicate).
I commonly use a site called LeetCode to practice problems. In a lot of answers in the discussion section of a problem, I noticed that run times like O(N + N) or O(2N) get changed to O(N). For example:
int[] nums = {1, 2, 3, 4, 5};
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
This becomes O(N), even though it iterates through nums twice. Why is it not O(2N) or O(N + N)?
In time complexity, constant coefficients do not play a role. This is because the actual time it takes an algorithm to run depends also on the physical constraints of the machine. This means that if you run your code on a machine which is twice as fast as another, all other conditions being equal, it would run in about half the time with the same input.
But that's not the same when you compare two algorithms with different time complexities. For example, when you compare the running time of an O(N^2) algorithm to that of an O(N) algorithm, the O(N^2) running time grows so fast with input size that the O(N) one can never catch up with it, no matter how big a constant coefficient you give the O(N) algorithm.
Let's say the constant coefficient is 1000 instead of just 2. Then for input sizes N > 1000, the running time of the O(N^2) algorithm grows in proportion to N * N, while the running time of the O(N) algorithm remains proportional to 1000 * N, which the quadratic term eventually dwarfs.
The time complexity O(n + n) reduces to O(2n). Now 2 is a constant, so the time essentially depends on n.
Hence the time complexity O(2n) equates to O(n).
Also, if there is something like O(2n + 3), it is still O(n), as the time essentially depends on the size of n.
Now suppose there is code which is O(n^2 + n); it will be O(n^2), because as n increases, the effect of the n term becomes insignificant compared to the effect of the n^2 term.
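A small sketch that makes this visible numerically: the ratio (2n + 3)/n settles at the constant 2, while n^2/n is just n and keeps growing without bound.

public class GrowthDemo {
    public static void main(String[] args) {
        for (long n = 10; n <= 1_000_000; n *= 10) {
            double linearRatio = (2.0 * n + 3) / n; // tends to the constant 2
            long quadRatio = n;                     // n^2 / n = n, unbounded
            System.out.printf("n=%9d  (2n+3)/n=%.4f  n^2/n=%d%n",
                    n, linearRatio, quadRatio);
        }
    }
}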

Calculate function time complexity

I am trying to calculate the time complexity of this function
Code
int Almacen::poner_items(id_sala s, id_producto p, int cantidad) {
    it_prod r = productos.find(p);
    if (r != productos.end()) {
        int n = salas[s - 1].size();
        int m = salas[s - 1][0].size();
        for (int i = n - 1; i >= 0 && cantidad > 0; --i) {
            for (int j = 0; j < m && cantidad > 0; ++j) {
                if (salas[s - 1][i][j] == "NULL") {
                    salas[s - 1][i][j] = p;
                    r->second += 1;
                    --cantidad;
                }
            }
        }
    } else {
        displayError();
        return -1;
    }
    return cantidad;
}
The variable productos is a std::map, whose find method has a time complexity of O(log n), and the other variable salas is a std::vector.
I calculated the time and found that it was log(n) + nm, but I am not sure whether that is the correct expression, or whether I should leave it as nm because that is the worst part, or whether I should use n² only.
Thanks
The overall function is O(nm). Big-O notation is all about "in the limit of large values" (and ignores constant factors). "Small" overheads (like an O(log n) lookup, or even an O(n log n) sort) are ignored.
Actually, the O(n log n) sort case is a bit more complex. If you expect m to be typically the same sort of size as n, then O(nm + n log n) == O(nm); if you expect n ≫ m (say, m effectively constant), then O(nm + n log n) == O(n log n).
Incidentally, this is not a question about C++.
In general, when using big O notation, you keep only the most dominant term when taking all variables to infinity.
n by itself grows much faster than log n at infinity, so even without m you can (and generally should) drop the log n term; O(nm) looks fine to me.
In non-theoretical use cases it is sometimes important to understand the actual complexity (for non-infinite inputs), since algorithms that are slow at infinity can sometimes do better for shorter inputs. There are cases where an O(1) algorithm has such a terrible constant that an exponential algorithm does better in real life. Quicksort is a practical example of an O(n^2) (worst-case) algorithm that often does better than its O(n log n) counterparts.
Read about "Big O Notation" for more info.
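As a rough numeric illustration of why the log term is dropped (a sketch assuming m grows like n):

public class DominantTerm {
    public static void main(String[] args) {
        for (long n = 10; n <= 100_000; n *= 10) {
            double logn = Math.log(n) / Math.log(2); // the O(log n) map lookup
            long nm = n * n;                         // the O(nm) grid scan, with m ~ n
            System.out.printf("n=%7d  log2(n)=%5.1f  n*m=%12d%n", n, logn, nm);
        }
    }
}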
Let
k = productos.size()
n = salas[s - 1].size()
m = salas[s - 1][0].size()
Your algorithm is O(log(k) + nm). You need to use a distinct name for each independent variable.
Now it might be the case that there is a relation between k, n, and m, and you can re-label with a reduced set of variables, but that is not discernible from your code; you need to know about the data.
It may also be the case that some of these terms won't grow large, in which case they are effectively constants, i.e. O(1).
E.g. you may know k ≪ n, k ≪ m, and n ≈ m, which allows you to describe it as O(n^2).

Time complexity of for loops, I cannot really understand a thing

So these are the for loops whose time complexity I have to find, but I don't clearly understand how to calculate it.
for (int i = n; i > 1; i /= 3) {
    for (int j = 0; j < n; j += 2) {
        ... ...
    }
    for (int k = 2; k < n; k = k * k) {
        ...
    }
}
For the first line, (int i = n; i > 1; i /= 3): it keeps dividing i by 3, and when i is no longer greater than 1 the loop stops, right?
But what is the time complexity of that? I think it is n, but I am not really sure.
The reason I think it is n is: if I assume n is 30, then i will go 30, 10, 3, 1 and then the loop stops. It runs n times, doesn't it?
And for the last for loop, I think its time complexity is also n, because what it does is:
k starts at 2 and keeps multiplying itself by itself until k is greater than n.
So if n is 20, k will go 2, 4, 16 and then stop. It runs n times too.
I don't really think I understand these kinds of questions, because time complexity can be log(n) or n^2 etc., but all I see is n.
I don't really know when it comes out to log or square, or anything else.
Every for loop runs n times, I think. How can log or square be involved?
Can anyone help me understand this? Please.
Since the two inner loops are nested directly (and independently of each other) in the outer loop, we can analyse each loop separately and combine the results at the end: the two inner counts add, and the outer count multiplies their sum.
1. i loop
A classic logarithmic loop. There are countless examples on SO, this being a similar one. Using the result given on that page and replacing the division constant:
The exact number of times that this loop will execute is ceil(log3(n)).
2. j loop
As you correctly figured, this runs O(n / 2) times.
The exact number is ceil(n / 2) (equal to n / 2 for even n).
3. k loop
Another classic known result - the log-log loop. The code just happens to be an exact replicate of this SO post;
The exact number is ceil(log2(log2(n)))
Combining the above steps, the total number of inner-loop executions is
ceil(log3(n)) * (ceil(n / 2) + ceil(log2(log2(n)))) = O(n log n)
Note that the j-loop overshadows the k-loop.
Numerical tests for confirmation
JavaScript code:
T = function(n) {
    var m = 0;
    for (var i = n; i > 1; i /= 3) {
        for (var j = 0; j < n; j += 2)
            m++;
        for (var k = 2; k < n; k = k * k)
            m++;
    }
    return m;
}
M = function(n) {
    return Math.ceil(Math.log(n) / Math.log(3)) *
           (Math.ceil(n / 2) + Math.ceil(Math.log2(Math.log2(n))));
}
M(n) is what the math predicts that T(n) will exactly be (the number of inner loop executions):
n T(n) M(n)
-----------------------
100000 550055 550055
105000 577555 577555
110000 605055 605055
115000 632555 632555
120000 660055 660055
125000 687555 687555
130000 715055 715055
135000 742555 742555
140000 770055 770055
145000 797555 797555
150000 825055 825055
M(n) matches T(n) perfectly, as expected. Plotting T(n) against n log n (the predicted time complexity) gives what I'd call a convincing straight line.
tl;dr: I describe a couple of examples first, then analyze the complexity of OP's stated problem at the bottom of this post.
In short, the big O notation tells you something about how a program is going to perform if you scale the input.
Imagine a program (P0) that counts to 100. No matter how often you run the program, it's going to count to 100 equally fast each time (give or take). Obvious, right?
Now imagine a program (P1) that counts to a number that is variable, i.e. it takes a number as an input to which it counts. We call this variable n. Now each time P1 runs, its performance depends on the size of n. If we make n 100, P1 will run very quickly. If we make n equal to a googolplex, it's going to take a little longer.
Basically, the performance of P1 is dependent on how big n is, and this is what we mean when we say that P1 has time-complexity O(n).
Now imagine a program (P2) where we count to the square of n (that is, to n * n), rather than to n itself. Clearly the performance of P2 is going to be worse than that of P1, because the number to which it counts differs immensely (especially for larger n's (= scaling)). You'll know by intuition that P2's time-complexity is equal to O(n^2) if P1's complexity is equal to O(n).
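A minimal sketch of P1 and P2 (hypothetical programs, just to make the gap concrete): the loop bodies are identical, only the bound changes, and that bound is the whole complexity difference.

public class Counting {
    static long p1(long n) {
        long c = 0;
        for (long i = 0; i < n; i++) c++;     // n steps: O(n)
        return c;
    }
    static long p2(long n) {
        long c = 0;
        for (long i = 0; i < n * n; i++) c++; // n*n steps: O(n^2)
        return c;
    }
    public static void main(String[] args) {
        System.out.println(p1(1000) + " vs " + p2(1000)); // 1000 vs 1000000
    }
}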
Now consider a program (P3) that looks like this:
var length = input.length;
for (var i = 0; i < length; i++) {
    for (var j = 0; j < length; j++) {
        Console.WriteLine($"Product is {input[i] * input[j]}");
    }
}
There's no n to be found here, but as you might realise, this program still depends on an input, called input here. Simply because the program depends on some kind of input, we refer to this input as n when we talk about time-complexity. If a program takes multiple inputs, we simply give them different names, so that a time-complexity could be expressed as O(n * n2 + m * n3), where this hypothetical program takes 4 inputs.
For P3, we can discover its time-complexity by first analyzing the number of different inputs, and then by analyzing how its performance depends on them.
P3 uses 3 variables, called length, i and j. The first line of code does a simple assignment, whose performance does not depend on any input, meaning the time-complexity of that line is O(1), i.e. constant time.
The second line of code is a for loop, implying we're going to do something that might depend on the length of something. And indeed we can tell that this first for loop (and everything in it) will be executed length times. If we increase the size of length, this line of code will do linearly more, so this line's time complexity is O(length) (called linear time).
The next line of code will take O(length) time again, following the same logic as before; however, since we execute it every time the outer for loop runs, the time complexities multiply: O(length) * O(length) = O(length^2).
The insides of the second for loop do not depend on the size of the input (even though the input is necessary), because indexing into the input (for arrays!) does not become slower if we increase the size of the input. This means the insides are constant time = O(1). Since this runs inside the other for loop, we again multiply to obtain the total time complexity of the nested lines of code: outer for-loops * current block of code = O(length^2) * O(1) = O(length^2).
The total time-complexity of the program is just the sum of everything we've calculated: O(1) + O(length^2) = O(length^2) = O(n^2). The first line of code was O(1), and the for loops were analyzed to be O(length^2). You will notice 2 things:
We renamed length to n: we express time-complexity in terms of generic parameters, not the names that happen to live within the program.
We removed O(1) from the equation: we're only interested in the biggest (= fastest-growing) terms. Since O(n^2) is way 'bigger' than O(1), the time-complexity is defined to be equal to it (this only works like that for terms (e.g. split by +), not for factors (e.g. split by *)).
OP's problem
Now we can consider your program (P4), which is a little trickier because its variables are defined a little more loosely than the ones in my examples.
for (int i = n; i > 1; i /= 3) {
    for (int j = 0; j < n; j += 2) {
        ... ...
    }
    for (int k = 2; k < n; k = k * k) {
        ...
    }
}
If we analyze it we can say the following:
The first for loop is executed O(log3(n)) times, where log3 is the base-3 logarithm. Since i is divided by 3 on every iteration, it takes log3(n) iterations before i becomes smaller than or equal to 1.
The second for loop is linear in time, because j is incremented by 2 rather than
1, which would be 'normal', so it is executed O(n / 2) times. Since we know that O(n / 2) = O(n), we can say
that this for loop contributes O(log3(n)) * O(n) = O(n * log(n)) executions (first for * the nested for).
The third for is also nested in the first for, but since it is not nested in the second for, we're not going to multiply it by the second one (obviously, because it is only executed each time the first for is executed). Here, k is bounded by n, but since it is increased by a factor of itself each time, we cannot say it is linear, i.e. its increase is defined by a variable rather than by a constant. Since we square k each time, it reaches n in about log2(log2(n)) steps. Deducing this is easy if you understand how logs work; if you don't get this, you need to understand that first. In any case, since this loop runs O(log2(log2(n))) times per outer iteration, the total complexity of the third for is O(log3(n)) * O(log2(log2(n))) = O(log(n) * log(log(n)))
The total time-complexity of the program is now calculated by the sum of the different sub-complexities: O(n * log(n)) + O(log(n) * log(log(n)))
As we saw before, we only care about the fastest-growing term in big O notation, so we say that the time-complexity of your program is equal to O(n * log(n)).
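To make the log2(log2(n)) behaviour of the squaring loop tangible, here is a tiny sketch (my own): k takes the values 2, 4, 16, 256, 65536, ..., i.e. 2^(2^t).

public class SquaringLoop {
    public static void main(String[] args) {
        long n = 1L << 32; // n = 2^32
        int steps = 0;
        for (long k = 2; k < n; k = k * k) {
            steps++; // k: 2, 4, 16, 256, 65536, then 2^32 ends the loop
        }
        double loglog = Math.log(Math.log(n) / Math.log(2)) / Math.log(2);
        System.out.println(steps + " steps, log2(log2(n)) = " + loglog); // 5 steps, 5.0
    }
}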