How is it possible that O(1) constant time code is slower than O(n) linear time code? - time-complexity

"...It is very possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase."
According to my understanding:
O(N) - Time taken for an algorithm to run based on the varying values of input N.
O(1) - Constant time taken for the algorithm to execute irrespective of the size of the input e.g. int val = arr[10000];
Can someone help me understand based on the author's statement?
How can O(N) code run faster than O(1) code?
What are the specific inputs the author is alluding to?
Rate of increase of what?

O(n) linear-time code can absolutely be faster than O(1) constant-time code. The reason is that constant factors are ignored entirely in Big O, which measures how fast an algorithm's cost grows as the input size n increases, and nothing else. It's a measure of growth rate, not running time.
Here's an example:
int constant(int[] arr) {
    int total = 0;
    for (int i = 0; i < 10000; i++) {
        total += arr[0];
    }
    return total;
}

int linear(int[] arr) {
    int total = 0;
    for (int i = 0; i < arr.length; i++) {
        total += arr[i];
    }
    return total;
}
In this case, constant does a lot of work, but it's fixed work that will always be the same regardless of how large arr is. linear, on the other hand, appears to have few operations, but those operations are dependent on the size of arr.
In other words, as arr increases in length, constant's performance stays the same, but linear's running time increases linearly in proportion to its argument array's size.
Call the two functions with a single-item array like
constant(new int[] {1});
linear(new int[] {1});
and it's clear that constant runs slower than linear.
But call them like:
int[] arr = new int[10000000];
constant(arr);
linear(arr);
Which runs slower?
After you've thought about it, run the code given various inputs of n and compare the results.
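A minimal harness for that experiment might look like the sketch below (the class name and the choice of array sizes are mine; absolute timings will vary by machine and JIT warm-up, but the trend should be visible):

```java
public class CompareGrowth {
    // Fixed work: 10000 additions regardless of arr's length.
    static int constant(int[] arr) {
        int total = 0;
        for (int i = 0; i < 10000; i++) {
            total += arr[0];
        }
        return total;
    }

    // Work proportional to arr's length.
    static int linear(int[] arr) {
        int total = 0;
        for (int i = 0; i < arr.length; i++) {
            total += arr[i];
        }
        return total;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 10_000, 10_000_000}) {
            int[] arr = new int[n];
            long t0 = System.nanoTime();
            constant(arr);
            long t1 = System.nanoTime();
            linear(arr);
            long t2 = System.nanoTime();
            System.out.printf("n=%d constant=%dns linear=%dns%n", n, t1 - t0, t2 - t1);
        }
    }
}
```

For small n, constant does far more work; somewhere past n = 10000 the two cross over and linear loses.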
Just to show that this phenomenon of run time != Big O isn't just for constant-time functions, consider:
void exponential(int n) throws InterruptedException {
    for (int i = 0; i < Math.pow(2, n); i++) {
        Thread.sleep(1);
    }
}

void linear(int n) throws InterruptedException {
    for (int i = 0; i < n; i++) {
        Thread.sleep(10);
    }
}
Exercise (using pen and paper): up to which n does exponential run faster than linear?
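If you want to check your pen-and-paper answer afterwards, you can compare the total sleep times directly, assuming the Thread.sleep calls dominate the loop bodies (the class and method names here are mine, for illustration):

```java
public class Crossover {
    // exponential sleeps 2^n times for 1 ms each; linear sleeps n times for 10 ms each.
    static long exponentialMs(int n) { return (long) Math.pow(2, n); }
    static long linearMs(int n)      { return 10L * n; }

    public static void main(String[] args) {
        // Print both totals so you can see where they cross.
        for (int n = 0; n <= 8; n++) {
            System.out.printf("n=%d exponential=%dms linear=%dms%n",
                    n, exponentialMs(n), linearMs(n));
        }
    }
}
```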

Consider the following scenario:
Op1) Given an array of length n where n>=10, print the first ten elements twice on the console. --> This is a constant time (O(1)) operation, because for any array of size>=10, it will execute 20 steps.
Op2) Given an array of length n where n>=10, find the largest element in the array. This is a linear time (O(N)) operation, because for any array, it will execute N steps.
Now if the array size is between 10 and 20 (exclusive), Op1 will be slower than Op2. But if we take an array of size > 20 (e.g., size = 1000), Op1 will still take 20 steps to complete, while Op2 will take 1000 steps.
That's why big-O notation is about the growth (rate of increase) of an algorithm's cost.
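The two operations above can be sketched as code that counts its own steps (the step counters and class name are mine, added for illustration):

```java
public class StepCount {
    // Op1: "print" the first ten elements twice; returns the step count (always 20).
    static int op1(int[] arr) {
        int steps = 0;
        for (int pass = 0; pass < 2; pass++) {
            for (int i = 0; i < 10; i++) {
                steps++; // stand-in for System.out.println(arr[i]);
            }
        }
        return steps;
    }

    // Op2: find the largest element; returns the step count (always arr.length).
    static int op2(int[] arr) {
        int steps = 0;
        int max = Integer.MIN_VALUE;
        for (int v : arr) {
            max = Math.max(max, v);
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        int[] small = new int[15], large = new int[1000];
        System.out.println("n=15:   Op1=" + op1(small) + " steps, Op2=" + op2(small) + " steps");
        System.out.println("n=1000: Op1=" + op1(large) + " steps, Op2=" + op2(large) + " steps");
    }
}
```

At n = 15, Op1's 20 steps lose to Op2's 15; at n = 1000, Op1 still takes 20 steps while Op2 takes 1000.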

Related

Time complexity of decreasing elements in Math.max() in for-loop?

How do you assess the time complexity when the values continue to decrease? In this example, let's assume the input array can be any length, so it has length n.
The for-loop is O(n) complexity
Math.max() in javascript is O(m) for the number of elements to assess (?)
So the total time complexity is O(n*m)?
Or do you assume only the worst case when m===n, so the time complexity is O(n^2)?
Contrived example below:
const arr = [1,2,3,4,5];
const n = arr.length;
for (let i = 0; i < n; i++) {
  const max = Math.max(...arr);
  arr.pop();
  console.log({arr, max});
}
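Whichever convention you pick, you can count the work directly: each pass hands Math.max one fewer element, so the total examined is n + (n-1) + ... + 1 = n(n+1)/2, which is O(n^2) in the worst case where the loop starts with m = n. A quick count of that sum (written in Java for consistency with the rest of this page; the class name is mine):

```java
public class MaxWork {
    // Mirrors the loop above: on each iteration the array has `remaining` elements
    // left, the max-scan must examine all of them, and then one element is popped.
    static long elementsExamined(int n) {
        long total = 0;
        for (int remaining = n; remaining > 0; remaining--) {
            total += remaining;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(elementsExamined(5));   // 5 + 4 + 3 + 2 + 1
        System.out.println(elementsExamined(100)); // 100 * 101 / 2
    }
}
```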

Big-O of fixed loops

I was discussing some code during an interview and don't think I did a good job articulating one of my blocks of code.
I know (at a high level) we are taught that two nested for loops == O(n^2), but what happens when you make some assertions as part of the work that limit the work done to a constant amount?
The code I came up with was something like
String[] someVal = new String[]{"a", "b", "c", "d"}; // this was really some other computation
if (someVal.length != 4) {
    return false;
}
for (int i = 0; i < someVal.length; i++) {
    String subString = someVal[i];
    if (subString.length() != 8) {
        return false;
    }
    for (int j = 0; j < subString.length(); j++) {
        // do some other stuff
    }
}
So there are two for loops, but the number of iterations becomes fixed because of the length checks before proceeding.
for (int i = 0; i < **4**; i++) {
    String subString = someVal[i];
    if (subString.length() != 8) { return false; }
    for (int j = 0; j < **8**; j++) {
        // do some other stuff
    }
}
I tried to argue this made it constant, but didn't do a great job.
Was I completely wrong or off-base?
Your early-exit condition inside the for loop is if (subString.length() != 8), so the second for loop executes only when the length is exactly 8. That in fact makes the cost of the second for loop constant, as it does not depend on the input size. And before your first for loop you have another early-exit condition, if (someVal.length != 4), which makes the first for loop constant as well.
So yes, I would follow your argument that the complete function has constant big-O time complexity. It may help to add in your explanation that big-O describes an upper bound on growth, which is never exceeded, and that constant factors can be reduced to 1.
But keep in mind that a constant-complexity function could still take longer to execute on real-world input than an O(n) one, depending on the size of n. If it were a known precondition that n never grows beyond a (low) bound, I would argue not about Big-O complexity but about overall expected runtime, where the constant inner loop could have a larger impact than a Big-O analysis would suggest.

Calculating the time and space complexity of function f2

I have this function and I am trying to calculate its time and space complexity. I arrived at an answer, but I am not sure whether it is correct.
#include <stdlib.h>

void f1(int n)
{
    int i, j, k = 0;
    for (int i = n / 2; i <= n; i++)
    {
        for (int j = 2; j <= i; j *= 2)
        {
            k += n;
        }
    }
    free(malloc(k));
}
My results:
For the outer for loop it's pretty straightforward O(n).
For the inner loop: for a given i it runs about log2(i) times, and since i ranges from n/2 to n, that is roughly log(n) per outer iteration.
And so the total time complexity is O(n*log(n))
For space complexity, it's determined by k's final value, since malloc(k) allocates k bytes: k is incremented by n once per inner-loop step, and there are roughly (n/2)*log(n) such steps, so k grows as (n^2)*log(n) up to a constant factor, giving space complexity O((n^2)*log(n)).
Is this correct?
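One way to sanity-check the claimed growth of k is a port of f1 that returns k instead of allocating it (written in Java here; the class name is mine, and the n^2*log2(n) column is only a reference curve, so expect it to match up to a constant factor):

```java
public class KGrowth {
    // Port of f1 that returns k (the number of bytes f1 would malloc) instead of allocating.
    static long k(int n) {
        long k = 0;
        for (int i = n / 2; i <= n; i++) {
            for (int j = 2; j <= i; j *= 2) {
                k += n;
            }
        }
        return k;
    }

    public static void main(String[] args) {
        for (int n : new int[]{8, 64, 512, 4096}) {
            System.out.printf("n=%d k=%d n^2*log2(n)=%.0f%n",
                    n, k(n), n * (double) n * (Math.log(n) / Math.log(2)));
        }
    }
}
```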

Time complexity of 2 nested loops

I want to determine the time complexity of this block of code, which consists of 2 nested loops:
function(x) {
  for (var i = 4; i <= x; i++) {
    for (var k = 0; k <= ((x * x) / 2); k++) {
      // data processing
    }
  }
}
I guess the time complexity here is O(n^3), could you please correct me if I am wrong with a brief explanation?
The time complexity here is O(x^3), so your guess of O(n^3) (taking n = x) is correct. The outer loop runs about x times, and on every one of those iterations the inner loop runs about (x^2)/2 times, because its bound is (x * x) / 2. Multiplying the two gives roughly (x^3)/2 executions of the inner body, and dropping the constant factor leaves O(x^3). Two nested loops are only O(n^2) when each loop's bound is proportional to n; here the inner bound is proportional to n^2.
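An iteration count makes the growth rate concrete (a Java sketch of the same loop bounds; the class name and the x^3/2 reference column are mine):

```java
public class NestedCount {
    // Counts how many times the inner body runs for a given x,
    // using the same loop bounds as the JavaScript function.
    static long iterations(int x) {
        long count = 0;
        for (int i = 4; i <= x; i++) {
            for (int k = 0; k <= (x * x) / 2; k++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Doubling x should multiply the count by roughly 8, as expected for cubic growth.
        for (int x : new int[]{10, 20, 40, 80}) {
            System.out.printf("x=%d iterations=%d x^3/2=%d%n",
                    x, iterations(x), (long) x * x * x / 2);
        }
    }
}
```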

What would be complexity of this recursive algorithm

How do you calculate the complexity of a somewhat complicated recursive algorithm like this one?
In this case, what would be the complexity of something(0, n)?
void something(int b, int e) {
    if (b >= e) return;
    int a = 0;
    for (int i = b; i < e; i++)
        a++;
    int k = (b + e) / 2;
    something(b, k);
    something(k + 1, e);
}
I tried to analyze this algorithm and think its complexity is n*ln(n), but I still can't produce a formal proof.
Initially, the loop runs (e-b) times, and the function calls itself twice, each time halving the range.
So a loop of length (e-b)/2 runs in 2 calls: once for (b, (b+e)/2) and again for ((b+e)/2+1, e).
Likewise, both of those calls again call themselves twice, halving the range each time, so loops of length (e-b)/4 run in 4 calls, and so on.
The height of the recursion tree is log(e-b) (each node has 2 children).
So, time complexity = (e-b) + 2*((e-b)/2) + 4*((e-b)/4) + ..... (log(e-b) terms)
This expression evaluates to (e-b)*log(e-b).
Therefore, time complexity = O(nlogn).
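You can also check the recurrence empirically by counting the loop iterations that something(b, e) performs (a sketch; the class name and the n*log2(n) reference column are my additions, so expect agreement up to a constant factor):

```java
public class RecCount {
    // Returns the total number of for-loop iterations performed by something(b, e):
    // the current call contributes e - b, then recurses on the two halves.
    static long ops(int b, int e) {
        if (b >= e) return 0;
        long count = e - b;
        int k = (b + e) / 2;
        return count + ops(b, k) + ops(k + 1, e);
    }

    public static void main(String[] args) {
        for (int n : new int[]{16, 256, 4096, 65536}) {
            System.out.printf("n=%d ops=%d n*log2(n)=%.0f%n",
                    n, ops(0, n), n * (Math.log(n) / Math.log(2)));
        }
    }
}
```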