Calculating the time and space complexity of function f1 - time-complexity

I have this function and I am trying to calculate its time and space complexity. I got an answer, but I am not sure whether it is correct. I'd like to know if my reasoning holds up.
#include <stdlib.h>

void f1(int n)
{
    int k = 0;
    for (int i = n / 2; i <= n; i++)      /* about n/2 iterations */
    {
        for (int j = 2; j <= i; j *= 2)   /* about log2(i) iterations */
        {
            k += n;
        }
    }
    free(malloc(k));                      /* allocate k bytes, then release them */
}
My results:
For the outer for loop it's pretty straightforward: O(n).
For the inner loop, we have about log2(i) iterations for each outer value of i, and since n/2 <= i <= n, that is Θ(log n) per outer iteration.
And so the total time complexity is O(n*log(n)).
For space complexity, malloc(k) allocates O(k) bytes once k has received its final value. Since k += n executes n*log(n) times in total, after all iterations k = n^2*log(n), and so the space complexity is O((n^2)*log(n)).
Is this correct?
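If you want to sanity-check those counts empirically, here is a small instrumented C copy of the function (a sketch of mine; the counters and the reference estimates are my additions, and the allocation is skipped):

#include <math.h>
#include <stdio.h>

/* Instrumented copy of f1: counts how often k += n runs
   and reports the final k, without actually allocating. */
void f1_counts(int n) {
    long long k = 0, inner = 0;
    for (int i = n / 2; i <= n; i++) {
        for (int j = 2; j <= i; j *= 2) {
            k += n;
            inner++;
        }
    }
    printf("n=%d  inner=%lld (~n/2*log2 n = %.0f)  k=%lld (~n^2/2*log2 n = %.0f)\n",
           n, inner, n / 2.0 * log2(n), k, n / 2.0 * n * log2(n));
}

int main(void) {
    for (int n = 1 << 10; n <= 1 << 16; n <<= 2)
        f1_counts(n);
    return 0;
}

The inner count tracks n*log(n) and k tracks n^2*log(n), matching the analysis above.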

Related

Time complexity of nested loop with multiplication?

I am confused about how to find the time complexity of this nested loop:
for (int i = 0; i < n; i++) {
    i *= 2;
    for (int j = 0; j < i; j++) {
    }
}
And what is the time complexity of the outer loop alone (ignoring the inner loop)?
for (int i = 0; i < n; i++) {
    i *= 2;
}
I ran the following code, which is a modified version of the second snippet from the question.
let it = 0
let pow = 1000
let n = Math.pow(2, pow)
for (let i = 0; i < n; i++) {
    it++
    i *= 2
}
console.log(it)
It prints either pow or pow+1, which implies that the loop runs about log2(n) times.
Although it contains a single for loop, it modifies the iteration variable inside the loop body. In every iteration, the value of i is doubled, which halves the remaining number of iterations. So instead of O(n), it is O(log n). It is similar to binary search, if that helps you understand better: in every iteration, it halves the remaining search space.
For the first snippet, the outer loop still runs only about log2(n) times, but the inner loop runs i times, where i roughly doubles on every pass (0, 2, 6, 14, ...). Those inner counts form a geometric series dominated by its last term, which is less than 2n, so their sum is O(n). The first snippet is therefore O(n) overall, not O(n log n); the counter below confirms it.
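Here is a minimal C counter (my own sketch, not from the question) that tallies the total inner-loop iterations; the ratio total/n settles near a constant, which is the signature of O(n):

#include <stdio.h>

int main(void) {
    /* Count the total number of inner-loop iterations of the
       first snippet for growing n; total/n approaches a constant. */
    for (long n = 1 << 10; n <= 1 << 20; n <<= 5) {
        long total = 0;
        for (long i = 0; i < n; i++) {
            i *= 2;
            for (long j = 0; j < i; j++)
                total++;
        }
        printf("n=%ld  total=%ld  total/n=%.2f\n", n, total, (double)total / n);
    }
    return 0;
}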

Time complexity of 2 nested loops

I want to determine the time complexity of this block of code, which consists of 2 nested loops:
function f(x) {
    for (var i = 4; i <= x; i++) {
        for (var k = 0; k <= ((x * x) / 2); k++) {
            // data processing
        }
    }
}
I guess the time complexity here is O(n^3). Could you please correct me, with a brief explanation, if I am wrong?
The time complexity here is O(x^3), so your guess is right (taking n = x). The outer loop increments i by one from 4 to x, so it runs O(x) times. The inner loop's bound is (x * x) / 2, which very much depends on x: it runs O(x^2) times for every single outer iteration. Multiplying the two gives O(x) * O(x^2) = O(x^3). A quick counter (sketched below) shows that doubling x multiplies the iteration count by roughly 8, as expected for cubic growth.
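To see the cubic growth concretely, here is a quick C counter (my sketch; the loop bodies just count):

#include <stdio.h>

int main(void) {
    /* Count the iterations of the nested loops for growing x;
       doubling x multiplies the count by roughly 8, i.e. O(x^3). */
    for (long x = 50; x <= 400; x *= 2) {
        long count = 0;
        for (long i = 4; i <= x; i++)
            for (long k = 0; k <= (x * x) / 2; k++)
                count++;
        printf("x=%ld  count=%ld\n", x, count);
    }
    return 0;
}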

Time complexity of dependent nested logarithmic loop

What is the Big-O complexity of the following loop?
for (int i = 2; i <= n; i = i * 2) {
    for (int k = i; k <= n; k = k * 2) {
        times++;
    }
}
The outer loop is of course logarithmic, but I am not sure how to deal with the increasing start value of the inner loop. It looks similar to logarithmic complexity, but how do I account for the k = i?
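One way to build intuition about the k = i start value is to count the inner statement empirically. Here is a small C sketch (harness and reference formula are my own); the count tracks (log2 n)^2 / 2, hinting at O(log^2 n):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Count how often the innermost statement runs; the count
       tracks (log2 n)^2 / 2, suggesting O((log n)^2). */
    for (long n = 1 << 8; n <= 1 << 24; n <<= 4) {
        long times = 0;
        for (long i = 2; i <= n; i *= 2)
            for (long k = i; k <= n; k *= 2)
                times++;
        printf("n=%ld  times=%ld  (log2 n)^2/2 = %.1f\n",
               n, times, pow(log2((double)n), 2) / 2);
    }
    return 0;
}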

How is it possible that O(1) constant time code is slower than O(n) linear time code?

"...It is very possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase."
According to my understanding:
O(N) - The time taken for an algorithm to run grows with the size of the input N.
O(1) - Constant time taken for the algorithm to execute irrespective of the size of the input e.g. int val = arr[10000];
Can someone help me understand based on the author's statement?
How can O(N) code run faster than O(1) code?
What are the specific inputs the author is alluding to?
Rate of increase of what?
O(n) linear-time code can absolutely be faster than O(1) constant-time code. The reason is that constant factors are ignored entirely in Big O, which is a measure of how fast an algorithm's cost increases as input size n increases, and nothing else. It's a measure of growth rate, not running time.
Here's an example:
int constant(int[] arr) {
    int total = 0;
    for (int i = 0; i < 10000; i++) {
        total += arr[0];
    }
    return total;
}

int linear(int[] arr) {
    int total = 0;
    for (int i = 0; i < arr.length; i++) {
        total += arr[i];
    }
    return total;
}
In this case, constant does a lot of work, but it's fixed work that will always be the same regardless of how large arr is. linear, on the other hand, appears to have few operations, but those operations are dependent on the size of arr.
In other words, as arr increases in length, constant's performance stays the same, but linear's running time increases linearly in proportion to its argument array's size.
Call the two functions with a single-item array like
constant(new int[] {1});
linear(new int[] {1});
and it's clear that constant runs slower than linear.
But call them like:
int[] arr = new int[10000000];
constant(arr);
linear(arr);
Which runs slower?
After you've thought about it, run the code given various inputs of n and compare the results.
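If you'd rather count steps than measure wall-clock time, here is a small C sketch (my own re-creation of the comparison, not from the original answer) that prints how many loop steps each strategy performs for growing n; the crossover sits at n = 10000:

#include <stdio.h>

int main(void) {
    /* constant: always 10000 steps; linear: n steps. */
    for (long n = 10; n <= 10000000L; n *= 100) {
        long constant_steps = 10000;
        long linear_steps = n;
        printf("n=%-9ld constant=%ld linear=%ld -> %s is slower\n",
               n, constant_steps, linear_steps,
               linear_steps > constant_steps ? "linear" : "constant");
    }
    return 0;
}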
Just to show that this phenomenon of run time != Big O isn't just for constant-time functions, consider:
void exponential(int n) throws InterruptedException {
    for (int i = 0; i < Math.pow(2, n); i++) {
        Thread.sleep(1);
    }
}

void linear(int n) throws InterruptedException {
    for (int i = 0; i < n; i++) {
        Thread.sleep(10);
    }
}
Exercise (using pen and paper): up to which n does exponential run faster than linear?
Consider the following scenario:
Op1) Given an array of length n where n>=10, print the first ten elements twice on the console. --> This is a constant time (O(1)) operation, because for any array of size>=10, it will execute 20 steps.
Op2) Given an array of length n where n>=10, find the largest element in the array. --> This is a linear time (O(N)) operation, because for any array, it will execute N steps.
Now if the array size is between 10 and 20 (exclusive), Op1 will be slower than Op2. But let's say we take an array of size > 20 (e.g., size = 1000): Op1 will still take 20 steps to complete, but Op2 will take 1000 steps to complete.
That's why Big-O notation is about the growth (rate of increase) of an algorithm's cost, not its absolute running time.
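Rendered as code, the two operations might look like this (a C sketch of the scenario, with function names of my choosing):

#include <stdio.h>

/* Op1: print the first ten elements twice -- always 20 steps, O(1). */
void op1(const int *arr) {
    for (int pass = 0; pass < 2; pass++)
        for (int i = 0; i < 10; i++)
            printf("%d ", arr[i]);
    printf("\n");
}

/* Op2: find the largest element -- n steps, O(n). */
int op2(const int *arr, int n) {
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    return max;
}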

What would be complexity of this recursive algorithm

How do I calculate the complexity of a somewhat complicated recursive algorithm like this one?
In this case, what would be the complexity of something(0, n)?
void something(int b, int e) {
    if (b >= e) return;
    int a = 0;
    for (int i = b; i < e; i++)
        a++;
    int k = (b + e) / 2;
    something(b, k);
    something(k + 1, e);
}
I tried to analyze this algorithm and think its complexity is n*ln(n), but I still can't produce a formal proof.
Initially, the loop runs (e-b) times, and the function then calls itself twice, each call halving the size of the range.
So a loop of length (e-b)/2 runs for 2 calls: once for (b, (b+e)/2) and again for ((b+e)/2+1, e).
Likewise, both of those calls again call themselves twice, halving the range once more, so loops of length (e-b)/4 run 4 times, and so on.
The height of the recursion tree is log(e-b), since each node has 2 children.
So, time complexity = (e-b) + 2*((e-b)/2) + 4*((e-b)/4) + ... (log(e-b) levels)
Each level contributes about (e-b) work, so this expression evaluates to (e-b)*log(e-b).
Therefore, the time complexity is O(n log n).
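To back the derivation up empirically, here is an instrumented C version (the global counter and the harness are my additions) that tallies the total loop iterations of something(0, n) and prints n*log2(n) as a reference:

#include <math.h>
#include <stdio.h>

static long total;  /* total loop iterations across all recursive calls */

void something(int b, int e) {
    if (b >= e) return;
    for (int i = b; i < e; i++)
        total++;
    int k = (b + e) / 2;
    something(b, k);
    something(k + 1, e);
}

int main(void) {
    for (int n = 1 << 10; n <= 1 << 20; n <<= 5) {
        total = 0;
        something(0, n);
        printf("n=%d  total=%ld  n*log2(n)=%.0f\n",
               n, total, n * log2((double)n));
    }
    return 0;
}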