How do you assess the time complexity when the number of values keeps decreasing? In this example, assume the input array can be of any length n.
The for loop runs O(n) times.
Math.max() in JavaScript is O(m), where m is the number of elements it has to scan (?)
So is the total time complexity O(n*m)?
Or do you assume only the worst case, where m === n, making the time complexity O(n^2)?
Contrived example below:
const arr = [1, 2, 3, 4, 5];
const n = arr.length;
for (let i = 0; i < n; i++) {
  const max = Math.max(...arr); // scans all remaining elements: O(m)
  arr.pop();                    // the array shrinks by one each iteration
  console.log({ arr, max });
}
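To sanity-check the count, here is a minimal sketch (it assumes Math.max(...arr) inspects each of its m arguments exactly once):
// Count how many elements Math.max would inspect across all iterations.
const data = [1, 2, 3, 4, 5];
let inspected = 0;
while (data.length > 0) {
  inspected += data.length; // Math.max(...data) looks at every remaining element
  data.pop();               // the array shrinks by one, as in the loop above
}
console.log(inspected); // n + (n-1) + ... + 1 = n(n+1)/2
If I'm counting right, that sum suggests the loop as written is O(n^2) when m starts out equal to n.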
I am just confused about finding the time complexity of this nested loop:
for (int i = 0; i < n; i++) {
    i *= 2;
    for (int j = 0; j < i; j++) {
    }
}
And also, what is the time complexity of the outer loop alone (just ignore the inner loop)?
for (int i = 0; i < n; i++) {
    i *= 2;
}
I ran the following code, which is a modified version of the second snippet from the question.
let it = 0;
let pow = 1000;
let n = Math.pow(2, pow);
for (let i = 0; i < n; i++) {
  it++;
  i *= 2;
}
console.log(it);
It prints either pow+1 or pow, which implies that the loop runs about log2(n) times.
Although it contains a single for loop, it modifies the iteration variable inside the loop body. In every iteration the value of i is doubled, which halves the remaining number of iterations. So instead of O(n), it is O(log n). It is similar to binary search, if that helps you understand it better: every iteration halves the remaining search space.
However, the first code is less obvious. The outer loop runs about log2(n) times, and the inner loop runs i times, where i roughly doubles on every pass. The inner-loop counts therefore form a geometric series (roughly 2 + 6 + 14 + ... up to about 2n), and a geometric series sums to a constant multiple of its largest term. So the first code is O(n) overall, not the O(n log n) one might first guess.
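As a quick empirical check, here is a minimal sketch in JavaScript that mirrors the first snippet and counts its inner-loop iterations:
// Count the inner-loop iterations of the first snippet for growing n.
function innerIterations(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    i *= 2;
    for (let j = 0; j < i; j++) {
      count++;
    }
  }
  return count;
}
for (const n of [1000, 10000, 100000, 1000000]) {
  console.log(n, innerIterations(n), (innerIterations(n) / n).toFixed(2));
}
The count/n ratio stays bounded (it oscillates between roughly 2 and 4 depending on where n falls between powers of 2), which is consistent with O(n) rather than O(n log n).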
I have a really simple question. I have this loop:
for (int i = 0; i < n; i++) {
    // some O(n) stuff here
}
what will be the BEST time complexity of this algorithm?
O(n)? (for loop O(1) * O(n) stuff)
or
O(n^2)? (for loop O(n) * O(n) stuff inside the loop)
Will the for loop itself be considered O(n) as usual, or will it be considered O(1), since it would make only one iteration in the BEST case scenario?
You are right, the best time complexity is O(N) (and even Θ(N)), if the best running time of "stuff" is constant (even zero).
Anyway, if "stuff" is known to be best case Ω(f(N)), then the best total time is Ω(N f(N)).
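For instance, here is a minimal sketch (the early-exit stuff function is hypothetical, purely to give "stuff" a constant best case):
// Hypothetical "stuff": one step in the best case, O(n) otherwise.
function stuff(arr) {
  if (arr[0] === 0) return 0;   // best case: constant time
  let s = 0;
  for (const x of arr) s += x;  // otherwise: linear time
  return s;
}
// The loop always runs n times, so the best total is n * O(1) = Θ(n);
// if stuff were always Ω(n), the total would be Ω(n^2).
function algorithm(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += stuff(arr);
  }
  return total;
}
console.log(algorithm([0, 1, 2, 3])); // every call exits early: Θ(n) total
console.log(algorithm([4, 1, 2, 3])); // every call scans the array: Θ(n^2) total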
If your loop does O(n) stuff n times, then the time complexity will be O(n^2); you may call that the worst case. The best and average cases will depend on the "some O(n) stuff" that is executed in every iteration of your loop.
Let's take a simple example, the bubble sort algorithm:
for (int i = 0; i < n - 1; ++i) {
    for (int j = 0; j < n - i - 1; ++j) {
        if (a[j] > a[j + 1]) {
            swap(&a[j], &a[j + 1]);
        }
    }
}
The time complexity of this will always be O(n^2), whether the array is already sorted or not (irrespective of order, ascending or descending): it always performs n(n-1)/2 comparisons.
The inner loop already avoids the last i items, because the i-th pass puts the i-th largest element into its final place. But this can be optimised further by observing that if a whole pass completes without a single swap, the array is already sorted, so the remaining passes can be skipped:
for (int i = 0; i < n - 1; ++i) {
    bool swapped = false; // requires <stdbool.h> in C
    for (int j = 0; j < n - i - 1; ++j) {
        if (a[j] > a[j + 1]) {
            swap(&a[j], &a[j + 1]);
            swapped = true;
        }
    }
    if (!swapped) {
        break; // no swaps in this pass: the array is already sorted
    }
}
Now the best-case time complexity is O(n), i.e. when the array is already sorted in ascending order (in the context of the above implementation). The average and worst cases are still O(n^2).
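Here is a minimal sketch of the same early-exit idea in JavaScript, counting comparisons so the best and worst cases are visible:
function bubbleSortComparisons(a) {
  let comparisons = 0;
  for (let i = 0; i < a.length - 1; i++) {
    let swapped = false;
    for (let j = 0; j < a.length - i - 1; j++) {
      comparisons++;
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap neighbours
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps: already sorted, stop early
  }
  return comparisons;
}
console.log(bubbleSortComparisons([1, 2, 3, 4, 5])); // 4 comparisons: best case, O(n)
console.log(bubbleSortComparisons([5, 4, 3, 2, 1])); // 10 comparisons: worst case, O(n^2)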
So, to identify the best-case time complexity of your algorithm, you have to show us the implementation of your "some O(n) stuff", or at least the algorithm you are trying to implement.
As you stated it, it's O(n^2), because you are doing an O(n) operation n times.
Would the following code be considered O(1) or O(n)? The for loop has constant complexity, since it runs exactly 10 times, but I'm not sure whether the if condition should be considered O(n) or O(1).
for (i = 0; i < 10; i++)
{
    if (arr[i] == 1001)
    {
        return i;
    }
}
The complexity would be O(1), because regardless of how much you increase the input, the running time of the algorithm remains constant: the operations performed inside that loop take constant time, hence O(1).
for (i = 0; i < 10; i++)
{
    if (arr[i] == 1001)
    {
        return i;
    }
}
On the other hand if your loop was:
for (i = 0; i < 10; i++)
{
    f(n);
}
and the function f(n) had a complexity of O(n), then the complexity of the entire code snippet would be O(n), since 10 * O(n) is still O(n).
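As a concrete sketch (the helper f below is hypothetical, a stand-in for any linear-time function):
// Hypothetical f: touches every element once, so it is O(n).
function f(arr) {
  let sum = 0;
  for (const x of arr) sum += x;
  return sum;
}
// The loop body runs a fixed 10 times, so the total work is 10 * O(n) = O(n).
function tenScans(arr) {
  let total = 0;
  for (let i = 0; i < 10; i++) {
    total += f(arr);
  }
  return total;
}
console.log(tenScans([1, 2, 3])); // 60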
For a more in-depth explanation, have a look at What is a plain English explanation of “Big O” notation?
I have this function and I am trying to calculate its time and space complexity. I got an answer, but I am not sure it's correct; I'd like to know whether it is.
void f1(int n)
{
    int k = 0; // i and j are declared in the loops below
    for (int i = n / 2; i <= n; i++)
    {
        for (int j = 2; j <= i; j *= 2)
        {
            k += n;
        }
    }
    free(malloc(k)); // allocate (and immediately free) k bytes
}
My results:
For the outer for loop it's pretty straightforward: it runs from n/2 to n, so O(n) iterations.
For the inner loop, it runs log2(i) times, and since i is between n/2 and n, that's roughly log(n) per outer iteration.
And so the total time complexity is O(n*log(n)).
For space complexity, it's O(k) for k's final value, since malloc(k) allocates k bytes. k is incremented by n once per inner iteration, and there are about n*log(n) inner iterations in total, so after all iterations k = n^2*log(n) (up to a constant factor), and the space complexity is O((n^2)*log(n)).
Is this correct?
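A minimal sketch (in JavaScript) that runs the same loops and counts both quantities, for anyone who wants to check empirically:
// Count f1's inner iterations and k's final value for growing n.
function f1Counts(n) {
  let iterations = 0;
  let k = 0;
  for (let i = Math.floor(n / 2); i <= n; i++) {
    for (let j = 2; j <= i; j *= 2) {
      iterations++;
      k += n;
    }
  }
  return { iterations, k };
}
for (const n of [1000, 10000, 100000]) {
  const { iterations, k } = f1Counts(n);
  console.log(
    n,
    (iterations / (n * Math.log2(n))).toFixed(2), // ~0.5: time is Θ(n log n)
    (k / (n * n * Math.log2(n))).toFixed(2)       // ~0.5: k, and the allocation, is Θ(n^2 log n)
  );
}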
"...It is very possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase."
According to my understanding:
O(N) - The time taken for the algorithm to run grows in proportion to the size of the input N.
O(1) - Constant time taken for the algorithm to execute irrespective of the size of the input e.g. int val = arr[10000];
Can someone help me understand based on the author's statement?
How can O(N) code run faster than O(1) code?
What are the specific inputs the author is alluding to?
Rate of increase of what?
O(n) linear-time code can absolutely be faster than O(1) constant-time code. The reason is that constant factors are ignored entirely in Big O, which is a measure of how fast an algorithm's cost increases as the input size n increases, and nothing else. It's a measure of growth rate, not running time.
Here's an example:
int constant(int[] arr) {
    int total = 0;
    for (int i = 0; i < 10000; i++) { // a fixed 10,000 iterations, whatever arr's length
        total += arr[0];
    }
    return total;
}

int linear(int[] arr) {
    int total = 0;
    for (int i = 0; i < arr.length; i++) { // one iteration per element
        total += arr[i];
    }
    return total;
}
In this case, constant does a lot of work, but it's fixed work that will always be the same regardless of how large arr is. linear, on the other hand, appears to have few operations, but those operations are dependent on the size of arr.
In other words, as arr increases in length, constant's performance stays the same, but linear's running time increases linearly in proportion to its argument array's size.
Call the two functions with a single-item array like
constant(new int[] {1});
linear(new int[] {1});
and it's clear that constant runs slower than linear.
But call them like:
int[] arr = new int[10000000];
constant(arr);
linear(arr);
Which runs slower?
After you've thought about it, run the code given various inputs of n and compare the results.
Just to show that this phenomenon of run time != Big O isn't just for constant-time functions, consider:
void exponential(int n) throws InterruptedException {
    for (int i = 0; i < Math.pow(2, n); i++) {
        Thread.sleep(1); // 2^n sleeps of 1 ms each
    }
}

void linear(int n) throws InterruptedException {
    for (int i = 0; i < n; i++) {
        Thread.sleep(10); // n sleeps of 10 ms each
    }
}
Exercise (using pen and paper): up to which n does exponential run faster than linear?
Consider the following scenario:
Op1) Given an array of length n where n>=10, print the first ten elements twice on the console. --> This is a constant time (O(1)) operation, because for any array of size>=10, it will execute 20 steps.
Op2) Given an array of length n where n>=10, find the largest element in the array. --> This is a linear time (O(N)) operation, because for any array, it will execute N steps.
Now if the array size is between 10 and 20 (exclusive), Op1 will be slower than Op2. But let's say, we take an array of size>20 (for eg, size =1000), Op1 will still take 20 steps to complete, but Op2 will take 1000 steps to complete.
That's why big-O notation is about the growth (rate of increase) of an algorithm's cost, not its absolute running time.
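To make the scenario concrete, here is a minimal sketch (the names op1 and op2 are mine) that counts the steps each operation performs:
// Op1: print the first ten elements twice: always 20 steps, O(1).
function op1(arr) {
  let steps = 0;
  for (let r = 0; r < 2; r++) {
    for (let i = 0; i < 10; i++) {
      steps++; // console.log(arr[i]) would go here
    }
  }
  return steps;
}
// Op2: find the largest element: one step per element, O(N).
function op2(arr) {
  let steps = 0;
  let max = -Infinity;
  for (const x of arr) {
    steps++;
    if (x > max) max = x;
  }
  return steps;
}
console.log(op1(new Array(15).fill(0)), op2(new Array(15).fill(0)));     // 20 vs 15: the O(1) op is slower
console.log(op1(new Array(1000).fill(0)), op2(new Array(1000).fill(0))); // 20 vs 1000: the O(1) op wins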