Hi, I read in a book that calling a subroutine is considered a constant-time operation, even if the subroutine itself does not execute in constant time but depends on the input size.
So if I have the following piece of code:
void func(int m) {
    int n = 10;
    subrout(m);   // function whose complexity depends on m
    subrout2(n);  // function whose complexity depends on n
}
can I then consider func() to be a constant-time function, i.e. O(1)?
And what if I have this:
void func() {
    int n = 10;
    Type object;
    object.member_method(n); // member function whose time complexity depends on n
}
Can I still consider func() a constant-time function?
Is there some case in which this rule fails?
Thanks!
No, you cannot consider func(int m) to have constant time complexity. Its time complexity is O(T1(m) + T2(10)), where T1 and T2 are functions describing the time complexity of subrout and subrout2, respectively; since T2(10) is a constant, this simplifies to O(T1(m)).
In the second case the time complexity is, technically, constant, because n is fixed at 10.
As a general comment, the point of specifying time complexity with asymptotic notation is to describe how the number of operations increases as a function of input size.
What the book probably meant is that the time complexity of the calling function is T_func = T_call + T_callee. Here T_call is the cost of passing parameters and setting up the environment for the callee, and T_callee is the time spent inside the subroutine. What the book says is that it is safe to assume T_call is constant, while no such assumption is made about T_callee.
To clarify assume we have a function func that calls one subroutine callee.
func(s) {
    callee(s);
}
then T_func(s) = T_call + T_callee(s). If size(s) = n and T_callee = O(f(n)), then it is safe to say that T_func = O(f(n)).
Code:
int main()
{
    for (long long i = 0; i < 10000000; i++)
    {
    }
    return 0;
}
I asked this because I wanted to know whether an empty loop adds to the running time of a program. For example, say we have a function within the loop that does not run on every iteration because of some condition:
Code:
int main()
{
    for (long long i = 0; i < 10000; i++)
    {
        for (long long j = 1; j < 10000; j++)
        {
            if (/* some condition */)
            {
                func(); // a function we know runs only about one-hundredth of the time due to the condition; func() is O(1)
            }
        }
    }
    return 0;
}
Will the time complexity be O(N*N)?
Time-complexity is only meaningful in the context of variable-sized data-set; it describes how quickly the program's total execution time will increase as the size of the data-set increases. For example, if you have N items to process, and your algorithm needs to read each of those items a fixed number of times, then your algorithm is considered to be O(N).
In your first case, if we assume you have a "data set" whose current size is 10000000, then your single for-loop would be O(N) -- but note that since your for-loop doesn't have any observable effects, an optimizing compiler would probably just omit the loop entirely, reducing it to effectively O(1).
In your second (nested-loop) example (assuming the variable set size is 10000), the algorithm is O(N^2), because the number of steps the program has to run increases with the square of the set size. That is true regardless of how often the internal if test evaluates to true, because the program has to do some steps (such as evaluating the if condition) N*N times no matter how often (or rarely) the if-test evaluates to true. (Again, the exception would be if the compiler could somehow prove that the if statement never evaluates to true, or that func() has no observable side effects, in which case it could legally omit the whole thing and just return 0 immediately.)
Your first code has a worst-case complexity of O(n), because it iterates n times. Regardless of whether it does nothing or a million things in each iteration, it is always of O(n) complexity. It may not be optimized away, and the optimizer may not skip the empty loop.
Similarly, your second program has a complexity of O(n^2) because it iterates n^2 times. The if condition inside may or may not be satisfied in some cases, and func() will not run where it is not satisfied, but the program still visits n^2 cases, which is enough to establish an O(n^2) complexity.
I am learning about time complexity and am noticing that tutorials I have seen do not take into account the time complexity of native functions (Javascript in this example)
The below function, which removes duplicate values from an array and returns the sorted array, would be evaluated with a time complexity of O(n) instead of O(n + nlogn). Is O(n) correct? Should we take into account the time complexities of native functions when calculating time complexities?
function uniqueSort(arr) {
    const store = {};
    const result = [];
    for (let i = 0; i < arr.length; i++) {
        if (!store[arr[i]]) {
            result.push(arr[i]);
            store[arr[i]] = true;
        }
    }
    return result.sort((a, b) => a - b);
}
Is O(n) correct?
O(n) is not correct. When you evaluate the time complexity of a function, you have to account for the time complexity of every operation in the function, line by line, including that of native functions. It wouldn't make much sense to call a function O(1) if all you're doing is calling various native functions (which can be quite expensive!) inside your own function.
The function you provided a snippet of would be O(n + nlog(n)), because there is the operation of looping through the array (O(n)) plus the operation of sorting using JavaScript's native sort, which is O(nlog(n)).
Oftentimes in Big-O notation we classify a function simply by its most expensive operation, so you can also describe the function as being O(nlogn).
#include <stdio.h>

int main()
{
    int T, i, sum, n; // T is the number of test cases
    scanf("%d", &T);
    while (T--)
    {
        scanf("%d", &n);
        sum = 0;
        for (i = 1; i <= n; i++)
            sum = sum + i;
        printf("%d\n", sum);
    }
    return 0;
}
If I give input T=50 and n=100, which is correct: time complexity O(n)=100, or time complexity O(n)=100*50?
Big-O analysis is not about specific values. Time complexity, which is commonly expressed in Big-O, excludes coefficients and lower-order terms. In your code, the time complexity would be O(T*N). It will never be O(50*100) or O(100); there is no such notation. Any algorithm that runs in constant time (like the fixed 50*100 steps in your example) is expressed as O(1).
In one line: time complexity is never a value; it is expressed as a function of the input size.
Also, to have a clear understanding, I'd suggest you to go through this tutorial: Time Complexity Analysis by MyCodeSchool
I have a question about calculating the expected running time of a given function. I understand just fine how to calculate code fragments with loops and branches in them (for / while / if, etc.), but functions without them seem a bit odd to me. For example, let's say that we have the following code fragment:
public void Add(T item)
{
    var newArr = new T[this.arr.Length + 1];
    Array.Copy(this.arr, newArr, this.arr.Length);
    newArr[newArr.Length - 1] = item;
    this.arr = newArr;
}
If my logic works correctly, the function Add has a complexity of O(1), because in the best/worst/average case it will just execute every line of code once, right?
You always have to consider the time complexity of the function calls, too. I don't know how Array.Copy is implemented, but I'm going to guess it's O(N), making the whole Add function O(N) as well. Your intuition is right, though - the rest of it is in fact O(1).
If you have multiple sub-operations, e.g. O(n) + O(log(n)), the costliest step is the cost of the whole operation (and by default, Big-O refers to the worst case). Here, since you copy the array, Add is an O(n) operation.
Complexity is often estimated by following these two rules:
- Calling a method: complexity + 1
- Encountering one of the following keywords: if, while, repeat, for, &&, ||, catch, case, etc.: complexity + 1
In your case, given that you are copying an array and not a single value, the algorithm performs N copy operations, giving you an O(N) operation.
I have a question. Suppose you have to call a function twice in a block of code, and are guaranteed for it to return the same value both times. Should you optimize your code by creating an extra variable?
Example:
Should
foo1(v.length()); // foo1 doesn't modify v.length()
foo2(v.length());
be changed to
int vlen = v.length();
foo1(vlen);
foo2(vlen);
for better performance?
In short, yes.
The increase in performance for a single execution of the hypothetical block where the above code appears might be negligible, but consider that if the block appears inside a repetitive loop, you might see a small performance gain, because there is less call/return overhead in the code. With today's fast processors, let me emphasize, the performance increase is small. Also, the compiled code is probably smaller by a byte or two in the version where you call v.length() only once.
So again, a small and usually negligible increase in efficiency. Yet it's still best practice to optimize like this, especially with something like a for loop, where the gain is roughly multiplied by the number of iterations that reuse the unchanged return value; there, the gain can prove non-negligible:
unsigned int something = function();
for (unsigned int i = 0; i < something; i++)
{
    ...
}

rather than

for (unsigned int i = 0; i < function(); i++)
{
    ...
}