#include <stdio.h>

int main()
{
    int T, i, sum, n; // T is the number of test cases
    scanf("%d", &T);
    while (T--)
    {
        scanf("%d", &n);
        sum = 0;
        for (i = 1; i <= n; i++)
            sum = sum + i;
        printf("%d\n", sum);
    }
    return 0;
}
If I give an input of T = 50 test cases with n = 100 for each:
Which is correct: time complexity O(n) = 100, or time complexity O(n) = 100*50?
The concept of Big-O analysis is not tied to specific values. Time complexity, which is commonly expressed in Big-O notation, excludes coefficients and lower-order terms. In your code, the time complexity is O(T*N). It will never be O(50*100) or O(100); there is no such notation. Any algorithm that runs in constant time (the fixed 50*100 iterations in your example) is expressed as O(1).
In one line: time complexity is never a single value; it is always expressed as a function of the input size.
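For contrast, here is a minimal sketch (assuming the same input format) that replaces the inner loop with the closed-form sum n*(n+1)/2, so each test case costs O(1) and the whole program is O(T) rather than O(T*N):

#include <stdio.h>

int main(void)
{
    int T, n;
    scanf("%d", &T);
    while (T--)
    {
        scanf("%d", &n);
        /* closed-form sum of 1..n: constant work per test case */
        long long sum = (long long)n * (n + 1) / 2;
        printf("%lld\n", sum);
    }
    return 0;
}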
Also, to get a clearer understanding, I'd suggest you go through this tutorial: Time Complexity Analysis by MyCodeSchool
Consider the following code:
void counterMethod(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
    {
        count++;
    }
}
The time complexity of this function would be O(n), but here 'n' refers to the 'value of the input', not the 'size of the input'. Please clarify why it is 'input size' according to the formal definition:
"Time complexity is the amount of time taken by an algorithm to run, as a function of the length of the input"
In edge-cases like this, the formal definition of "input size" which theorists use does not agree with the practical definition which most programmers use when thinking about actual code.
The formal definition of "input size" is the number of bits (or sometimes machine words) required to represent the input. In most cases this is proportional to, e.g., the length of an array, the size of a dictionary, and so on, so the two definitions are equivalent. Formally, your counting method's time complexity is O(2^n), where n is the number of bits required to represent the input number. (Usually you would write lowercase n for the number of bits and uppercase N for the actual numerical value, for readability; uppercase N is "bigger".) The formal definition is like this so that terms like "polynomial time" and "NP" have exact meanings which make sense for a variety of different algorithm inputs.
The intuitive, practical definition is that you can measure "input size" by any variable or quantity that matters for your application. Often, the numerical value of an integer is more important to your application than the number of bits required to represent it; typically you don't care about the number of bits. In that case your counting method takes O(N) time, where N is the value of the integer.
Most programmers (e.g. on Stack Overflow) will talk about time complexity using this practical definition, simply because it's easier and more useful for real programming. So in your case, O(n) isn't a time complexity according to the formal definition, but if the reason you want to know the time complexity is because you want to estimate how long the code will take to run, or compare it with another algorithm to see which should be faster, then you won't care about the formal definition.
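As a rough illustration in C (the bit_length helper is just for this sketch), the loop count grows with the numeric value of n, while the number of bits needed to store n grows only logarithmically:

#include <stdio.h>

/* Runs n iterations: O(N) in the numeric value N, but O(2^n) in n, the bit count. */
static long long counter_method(long long n)
{
    long long count = 0;
    for (long long i = 0; i < n; i++)
        count++;
    return count;
}

/* Number of bits required to represent n (illustrative helper). */
static int bit_length(long long n)
{
    int bits = 0;
    while (n > 0)
    {
        bits++;
        n >>= 1;
    }
    return bits;
}

int main(void)
{
    for (long long n = 1; n <= 1000000; n *= 10)
        printf("value = %7lld  bits = %2d  iterations = %7lld\n",
               n, bit_length(n), counter_method(n));
    return 0;
}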
// Checks if the list contains a specific element
public boolean contains(String it) {
    int index = front;
    while (index != -1) {
        if (dataList[index].equals(it)) {
            return true;
        }
        index = nextList[index];
    }
    return false;
}
How does the .equals() comparison method affect the algorithmic complexity? Does it turn the method from linear to quadratic?
You should treat .equals() as a basic step when analysing the runtime of this algorithm. The important thing for computing the runtime is the size of dataList, and in this sense the size of "it" is constant.
The best-case complexity is given by finding the element in the first position, which is O(1); the worst case is finding the element in the last position, which is O(n). The average case is given by summing over all the possibilities (that the element is in position 1, or in position 2, and so on) and dividing by n, formally:
(1 + 2 + 3 + ... + n) / n = (n + 1) / 2
and this is O(n) too.
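As a sketch of the same counting argument in C (strcmp stands in for .equals(), and each call is treated as one basic step, as above):

#include <string.h>

/* Linear search over n strings.
   Best case:    1 comparison   -> O(1)
   Worst case:   n comparisons  -> O(n)
   Average case: (n + 1) / 2    -> O(n)
   Each strcmp call counts as one basic step here. */
static int contains(char *data[], int n, const char *it)
{
    for (int i = 0; i < n; i++)
    {
        if (strcmp(data[i], it) == 0)
            return 1;
    }
    return 0;
}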
Code:
int main()
{
    for (long long i = 0; i < 10000000; i++)
    {
    }
    return 0;
}
I asked this because I wanted to know whether an empty loop adds to the running time of a program. Likewise, say we do have a function within the loop, but it does not run on every iteration due to some condition:
Code:
int main()
{
    for (long long i = 0; i < 10000; i++)
    {
        for (long long j = 0; j < 10000; j++)
        {
            if (some_condition) // "some condition"
            {
                func(); // some function which we know runs only one-hundredth of the time due to the condition; the time complexity of func() is O(1)
            }
        }
    }
    return 0;
}
Will the time complexity be O(N*N)?
Time complexity is only meaningful in the context of a variable-sized data set; it describes how quickly the program's total execution time increases as the size of the data set increases. For example, if you have N items to process, and your algorithm needs to read each of those items a fixed number of times, then your algorithm is considered to be O(N).
In your first case, if we assume you have a "data set" whose current size is 10000000, then your single for-loop would be O(N) -- but note that since your for-loop doesn't have any observable effects, an optimizing compiler would probably just omit the loop entirely, reducing it to effectively O(1).
In your second (nested-loop) example (assuming the variable set size is 10000), the algorithm is O(N^2), because the number of steps the program has to run increases with the square of the set size. That is true regardless of how often the internal if test evaluates to true, because the program has to do some steps (such as evaluating the if condition) N*N times no matter how often (or rarely) the if test evaluates to true. (Again, the exception would be if the compiler could somehow prove that the if statement never evaluates to true, or that func() has no observable side effects, in which case it could legally omit the whole thing and just return 0 immediately.)
Your first code has a worst-case complexity of O(n), because it iterates n times. Regardless of whether it does nothing or a million things in each iteration, it is of O(n) complexity. It may or may not be optimized away; the optimizer is not guaranteed to skip the empty loop.
Similarly, your second program has a complexity of O(n^2), because it iterates n^2 times. The if condition inside may or may not be satisfied in a given iteration, and func() does not execute in the iterations where the if is not satisfied, but the program still visits n^2 cases, which is enough to establish an O(n^2) complexity.
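A small counting sketch in C (the value of N, the counters, and the one-in-a-hundred condition are made up for illustration) shows why: the if condition is still evaluated N*N times no matter how rarely func() runs:

#include <stdio.h>

#define N 10000

static long long calls = 0;

static void func(void) { calls++; } /* O(1) body */

int main(void)
{
    long long checks = 0;
    for (long long i = 0; i < N; i++)
    {
        for (long long j = 0; j < N; j++)
        {
            checks++;         /* the condition is evaluated every iteration */
            if (j % 100 == 0) /* true roughly one time in a hundred */
                func();
        }
    }
    /* checks == N*N regardless of how rarely func() is called */
    printf("checks = %lld, calls = %lld\n", checks, calls);
    return 0;
}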
Hi, I read in a book that calling a subroutine is considered a constant-time operation, even if the subroutine itself does not execute in constant time but depends on the input size.
Then if I have the following piece of code:
void func(int m){
    int n = 10;
    subrout(m);  // function whose complexity depends on m
    subrout2(n); // function whose complexity depends on n
}
I suppose I can consider func() to be a constant-time function, e.g. O(1)?
And what if I have this:
void func(){
    int n = 10;
    Type object;
    object.member_method(n); /* member function whose time complexity depends upon n */
}
Can I still consider func() a constant-time function?
Is there some case in which this rule fails?
Thanks!
No, you cannot consider func(int m) to have a constant time complexity. Its time complexity is O(T1(m) + T2(10)), where T1 and T2 are functions describing the time complexity of subrout and subrout2, respectively.
In the second case, the time complexity, technically, is constant.
As a general comment, the point of specifying time complexity with asymptotic notation is to describe how the number of operations increases as a function of input size.
What the book probably means is that the time complexity of the calling function, T_func, is T_call + T_callee. Here T_call is the cost of passing parameters and setting up the environment for the callee, and T_callee is the time spent inside the subroutine. The book says it is safe to assume T_call is constant, while no such assumption is made about T_callee.
To clarify, assume we have a function func that calls one subroutine callee.
func(s){
    callee(s);
}
Then T_func(s) = T_call + T_callee(s). If size(s) = n and T_callee = O(f(n)), then it is safe to say that T_func = O(f(n)).
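A concrete sketch of the same point in C (the subroutine bodies are made up for illustration): func's cost is dominated by the callee whose work depends on m, so func is O(m), not O(1):

#include <stdio.h>

/* Work depends on m: O(m). */
static long long subrout(int m)
{
    long long s = 0;
    for (int i = 0; i < m; i++)
        s += i;
    return s;
}

/* Work depends on a fixed n = 10: O(1) from the caller's point of view. */
static long long subrout2(int n)
{
    long long s = 0;
    for (int i = 0; i < n; i++)
        s += i;
    return s;
}

/* T_func(m) = T_call + T_subrout(m) + T_subrout2(10) = O(m). */
static long long func(int m)
{
    int n = 10;
    return subrout(m) + subrout2(n);
}

int main(void)
{
    printf("%lld\n", func(1000));
    return 0;
}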
I have a question about calculating the expected running time of a given function. I understand just fine how to calculate code fragments with loops in them (for / while / if, etc.), but functions without them seem a bit odd to me. For example, let's say that we have the following code fragment:
public void Add(T item)
{
    var newArr = new T[this.arr.Length + 1];
    Array.Copy(this.arr, newArr, this.arr.Length);
    newArr[newArr.Length - 1] = item;
    this.arr = newArr;
}
If my logic is correct, the Add function has a complexity of O(1), because in the best/worst/average case it just runs each line of code once, right?
You always have to consider the time complexity of the function calls, too. I don't know how Array.Copy is implemented, but I'm going to guess it's O(N), making the whole Add function O(N) as well. Your intuition is right, though - the rest of it is in fact O(1).
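A rough C sketch of the same pattern (memcpy stands in for Array.Copy, and the append name is made up): each call allocates a new array one element larger and copies the n existing elements, so a single append is O(n):

#include <stdlib.h>
#include <string.h>

/* Returns a newly allocated array of length n + 1 holding arr's elements
   followed by item; the caller owns (and eventually frees) the result.
   The copy of n elements makes each call O(n). */
static int *append(const int *arr, size_t n, int item)
{
    int *new_arr = malloc((n + 1) * sizeof *new_arr);
    if (new_arr == NULL)
        return NULL;
    memcpy(new_arr, arr, n * sizeof *arr); /* O(n), like Array.Copy */
    new_arr[n] = item;                     /* O(1) */
    return new_arr;
}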
If you have multiple sub-operations, e.g. O(n) + O(log n), the costliest step dominates the cost of the whole operation; by default, big O refers to the worst case. Here, since you copy the array, it is an O(n) operation.
Complexity is calculated following these 2 rules:
- Calling a method (complexity + 1)
- Encountering one of the following keywords: if, while, repeat, for, &&, ||, catch, case, etc. (complexity + 1)
In your case, given that you are copying an array and not a single value, the algorithm will complete N copy operations, giving you an O(N) operation.