What is the difference between if/else and if/else-if/else (deeper conditions) for code complexity? - conditional-statements

I get confused after reading many articles about complexity, especially the complexity of conditionals, on many websites. I've been searching for an answer for an hour, but no website covers it, so I just want to know what is best when I use conditions like:
int x = 10;
if (x < 1)
    cout << "done";
else
    for (int i = 0; i < n; i++) {
        cout << "done";
    }
I know that in this case we take the higher complexity of the two branches, giving a Big O of O(n).
But what about when I use else if? Is it the same as the code above?
int x = 10;
if (x < 1)
    cout << "done";
else if (x > 20)
    for (int i = 0; i < n; i++) {
        cout << "done";
    }
else
    for (int i = 0; i < n; i++) {
        cout << "done";
    }
Does this example also come out to O(n) like the one above?
If you have any resources or links with good content on algorithm complexity, please share them after your answer. Thank you.

It remains O(n). Think of big O as asking, "as n changes, how does the runtime of the algorithm change?" Since n controls the number of iterations of a regular loop, each increment of n adds one loop iteration, so it's linear (O(n)). Only one branch executes per run, so you take the complexity of the most expensive branch, no matter how many else-if arms there are.
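One way to convince yourself is to count the work each branch actually performs. This sketch (the counter is mine, not from the question) tallies loop iterations for the else-if version and shows that the worst case is n no matter which branch runs:

```python
def branch_work(x, n):
    """Count loop iterations for the if / else-if / else shape."""
    iterations = 0
    if x < 1:
        pass  # constant-time branch: no loop runs
    elif x > 20:
        for _ in range(n):  # an O(n) branch
            iterations += 1
    else:
        for _ in range(n):  # also an O(n) branch
            iterations += 1
    return iterations

# Whichever branch is taken, at most n iterations happen,
# so the whole statement is bounded by O(n).
print(branch_work(0, 1000))   # constant branch: 0 iterations
print(branch_work(25, 1000))  # loop branch: 1000 iterations
print(branch_work(10, 1000))  # other loop branch: 1000 iterations
```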

Is a nested for loop automatically O(n^2)?

I was recently asked an interview question about testing the validity of a Sudoku board. A basic answer involves for loops. Essentially:
for (int x = 0; x != 9; ++x)
    for (int y = 0; y != 9; ++y)
        // ...
Run this nested for loop to check the rows. Do it again to check the columns. Do one more pass for the sub-squares, but that one is funkier because we're dividing the sudoku board into sub-boards, so we end up with more than two nested loops, maybe three or four.
I was later asked the complexity of this code. Frankly, as far as I'm concerned, all the cells of the board are visited exactly three times, so O(3n). To me, the fact that we have nested loops doesn't mean this code is automatically O(n^2), or even O(n^highest-nesting-level-of-loops). But I suspect that's the answer the interviewer expected...
Posed another way, what is the complexity of these two pieces of code:
for (int i = 0; i != n; ++i)
    // ...
and:
for (int i = 0; i != sqrt(n); ++i)
    for (int j = 0; j != sqrt(n); ++j)
        // ...
Your general intuition is correct. Let's clarify a bit about Big-O notation:
Big-O gives you an upper bound for the worst-case (time) complexity for your algorithm, in relation to n - the size of your input. In essence, it is a measurement of how the amount of work changes in relation to the size of the input.
When you say something like
all the cells of the board are visited exactly three times so O(3n).
you are implying that n (the size of your input) is the number of cells in the board, and therefore visiting all cells three times would indeed be an O(3n) (which is O(n)) operation. If this is the case, you would be correct.
However usually when referring to Sudoku problems (or problems involving a grid in general), n is taken to be the number of cells in each row/column (an n x n board). In this case, the runtime complexity would be O(3n²) (which is indeed equal to O(n²)).
In the future, it is perfectly valid to ask your interviewer what n is.
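To make the two conventions concrete, here is a small counting sketch (the function and names are mine) for a 9×9 board: three full passes over n² cells gives 3n² inspections, which reads as O(n²) when n is the side length but as O(n) when n is the total cell count.

```python
def count_checks(side):
    """Count cell inspections for three full passes (rows, columns, boxes)."""
    checks = 0
    for _ in range(3):            # rows pass, columns pass, boxes pass
        for x in range(side):
            for y in range(side):
                checks += 1       # one constant-time cell inspection
    return checks

side = 9
cells = side * side
print(count_checks(side))         # 3 * 9 * 9 = 243
# 3 * side**2 == 3 * cells: O(n^2) if n = side, O(n) if n = number of cells
```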
As for the question in the title (Is a nested for loop automatically O(n^2)?) the short answer is no.
Consider this example:
for (int i = 0; i < n; i++) {
    for (int j = 1; j < n; j *= 2) {
        ... // some constant-time operation
    }
}
The outer loop makes n iterations while the inner loop makes log2(n) iterations - therefore the time complexity is O(n log n).
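You can verify the log2(n) iteration count of the doubling inner loop empirically; this sketch (helper name is mine) counts its iterations for a few values of n:

```python
import math

def doubling_iterations(n):
    """Count iterations of: for (j = 1; j < n; j *= 2)."""
    count = 0
    j = 1
    while j < n:
        count += 1
        j *= 2
    return count

for n in (8, 1024, 1_000_000):
    # the iteration count matches ceil(log2(n)) for n > 1
    print(n, doubling_iterations(n), math.ceil(math.log2(n)))
```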
In your examples, the first has a single for loop making n iterations, therefore a complexity of (at least) O(n) (the operation is performed on the order of n times).
In the second you have two nested for loops, each making sqrt(n) iterations, therefore a total runtime complexity of (at least) O(n) as well, since sqrt(n) * sqrt(n) = n. The second snippet isn't automatically O(n^2) simply because it contains a nested loop. The number of operations being made is still of the same order (n), so these two examples have the same complexity - since we assume n means the same thing in both.
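Counting iterations also confirms the sqrt(n)-by-sqrt(n) case: for a perfect-square n, the nested pair performs exactly n units of work, the same as the single loop. A quick sketch (function names are mine):

```python
import math

def single_loop_work(n):
    """One loop of n iterations."""
    return sum(1 for _ in range(n))

def nested_sqrt_work(n):
    """Two nested loops of sqrt(n) iterations each (n a perfect square)."""
    root = math.isqrt(n)
    work = 0
    for _ in range(root):
        for _ in range(root):
            work += 1
    return work

n = 10_000  # perfect square, so sqrt(n) is exact
print(single_loop_work(n), nested_sqrt_work(n))  # both are 10000
```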
This is the most crucial point to drive home. To compare the performance of two algorithms, you must measure them against the same definition of input size. In your sudoku problem you could have defined n in a few different ways, and your choice directly affects the complexity calculation - even though the amount of work is the same.
*NOTE - this is unrelated to your question, but in the future avoid using != in loop conditions. In your second example, if sqrt(n) is not a whole number, the loop could run forever, depending on the language and how it is defined. It is therefore recommended to use < instead.
It depends on how you define the so-called N.
If the size of the board is N-by-N, then yes, the complexity is O(N^2).
But if you say the total number of cells is N (i.e., the board is sqrt(N)-by-sqrt(N)), then the complexity is O(N), or O(3N) if you mind the constant.

Using Google OR tools for crew scheduling

I'm searching for a proper way to use Google OR-tools to solve an annual ship crew scheduling problem. I tried to follow the scheduling examples provided, but I couldn't find a way to set up the 4-dimensional decision variable needed (D[i,j,k,t]: i for captains, j for engineers, k for ships, and t for the time period (days or weeks)).
Although many examples are given (for C#), the major problem I faced was how to set up and utilize this main decision variable, and how to use the DecisionBuilder, since in all the examples the variables had two dimensions and were flattened in order to make the comparisons. Unfortunately, I haven't found a way to use smaller D-variables, since the penalty score (this is a penalty-minimization problem) is estimated from possible sets of captains-engineers, captains-ships, and engineers-ships.
Why don't you create your 4D array and then fill it with variables one by one?
Here is the code for matrices:
public IntVar[,] MakeIntVarMatrix(int rows, int cols, long min, long max) {
    IntVar[,] array = new IntVar[rows, cols];
    for (int i = 0; i < rows; ++i) {
        for (int j = 0; j < cols; ++j) {
            array[i, j] = MakeIntVar(min, max);
        }
    }
    return array;
}
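The same fill-one-by-one pattern extends to four dimensions. Since the asker's sticking point was the flattening, here is a language-neutral Python sketch (plain integers stand in for solver variables; `make_4d` and `index` are hypothetical names I chose). The row-major index formula is the part that carries over to any OR-tools binding when handing a flat variable list to a DecisionBuilder or a CP-SAT model:

```python
def make_4d(captains, engineers, ships, periods, make_var):
    """Build a flat list of variables plus an index helper for D[i,j,k,t]."""
    flat = [make_var() for _ in range(captains * engineers * ships * periods)]

    def index(i, j, k, t):
        # Row-major flattening: maps (i, j, k, t) to a unique flat slot.
        return ((i * engineers + j) * ships + k) * periods + t

    return flat, index

# Demo with sequential integers standing in for IntVars:
counter = iter(range(10**6))
flat, idx = make_4d(2, 3, 4, 5, lambda: next(counter))
print(len(flat))                 # 2 * 3 * 4 * 5 = 120 variables
print(flat[idx(1, 2, 3, 4)])     # the last slot in the flat list
```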
This being said, please use the CP-SAT solver, as the original CP solver is deprecated.
For an introduction, see:
https://developers.google.com/optimization/cp/cp_solver#cp-solver_example/
https://github.com/google/or-tools/tree/master/ortools/sat/doc

Determining growth function and Big O

Before anyone asks: yes, this was a previous test question that I got wrong, and I knew I got it wrong because I honestly just don't understand growth functions and Big O. I've read the technical definition; I know what they are, but not how to calculate them. My textbook gives examples from real-life situations, but I still find it hard to interpret code. If someone could tell me their thought process for determining these, that would seriously help (e.g., "this section of code tells me to multiply n by x", etc.).
public static int sort(int lowI, int highI, int nums[]) {
    int i = lowI;
    int j = highI;
    int pivot = nums[lowI + (highI - lowI) / 2];
    int counter = 0;
    while (i <= j) {
        while (nums[i] < pivot) {
            i++;
            counter++;
        }
        while (nums[j] > pivot) {
            j--;
            counter++;
        }
        counter++;
        if (i <= j) {
            NumSwap(i, j, nums); // saves i to temp and makes i = j, j = temp
            i++;
            j--;
        }
    }
    if (lowI < j) {
        return counter + sort(lowI, j, nums);
    }
    if (i < highI) {
        return counter + sort(i, highI, nums);
    }
    return counter;
}
It might help for you to read some explanations of Big-O. I think of Big-O as the number of "basic operations" computed as the "input size" increases. For sorting algorithms, "basic operations" usually means comparisons (or counter increments, in your case), and the "input size" is the size of the list to sort.
When I analyze for runtime, I'll start by mentally dividing the code into sections. I ignore one-off lines (like int i = lowI;) because they're only run once, and Big-O doesn't care about constants (though, note in your case that int i = lowI; runs once with each recursion, so it's not only run once overall).
For example, I'd mentally divide your code into three overall parts to analyze: there's the main while loop while (i <= j), the two while loops inside of it, and the two recursive calls at the end. How many iterations will those loops run for, depending on the values of i and j? How many times will the function recurse, depending on the size of the list?
If I'm having trouble thinking about all these different parts at once, I'll isolate them. For example, how long will one of the inner while loops run for, depending on the values of i and j? Then, how long does the outer while loop run for?
Once I've thought about the runtime of each part, I'll bring them back together. At this stage, it's important to think about the relationships between the different parts. "Nested" relationships (i.e. the nested block loops a bunch of times each time the outer thing loops once) usually mean the run times are multiplied. For example, since the inner while loops are nested within the outer while loop, the total number of iterations is (inner + other inner) * outer. The total run time would then look something like ((inner + other inner) * outer) * recursions.
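The "nested relationships multiply" rule from the last paragraph can be checked directly. This toy sketch (not the quicksort above; names are mine) counts basic operations for an outer loop of m iterations wrapping an inner loop of k iterations:

```python
def nested_work(m, k):
    """Outer loop runs m times; inner loop runs k times per outer pass."""
    ops = 0
    for _ in range(m):
        for _ in range(k):
            ops += 1  # one "basic operation"
    return ops

# The counts multiply, exactly as the analysis predicts:
print(nested_work(10, 7))  # 70 == 10 * 7
```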

Complexity of subset sum solver algorithm

I wrote a subset sum solver, but I am still confused about its complexity.
Please find the algorithm here:
http://www.vinaysofs.com/P=NP%20proved%20with%20Subset%20Sum%20Problem%20solution%20manuscript.pdf
Basically the essence is:
i = 0;
func1(i)
{
    if (func1(i + 1))
        return true;
    else
        func1(i + 2, with a few modifications to the other arguments);
}
Here, i is the index of an element in an integer set like {1, 1, 1, 1, 1, 1}.
The worst case for the above algorithm comes when all elements are 1 and we need a sum that is 1 more than the total sum of all the elements.
I want to know the complexity of this algorithm, as someone told me it is non-polynomial (exponential time, 2^n). If it were polynomial, it would be a big achievement.
In my view it is also not polynomial, as T(n) = 2T(n-1) + 6, but that is the worst case, i.e., when the first recursive call fails every time.
Please help.
Thanks
Vinay
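The branching in the sketch above can be measured by counting calls. A recursion that tries f(i+1) and then f(i+2) makes C(k) = 1 + C(k-1) + C(k-2) calls for k remaining elements (with C(k) = 1 for k <= 0), which grows exponentially, supporting the non-polynomial suspicion. A small counting sketch (the counter function is mine, not part of the original algorithm):

```python
def count_calls(k):
    """Calls made by f(i) -> f(i+1), f(i+2) with k elements remaining."""
    if k <= 0:
        return 1  # base case: the call returns immediately
    return 1 + count_calls(k - 1) + count_calls(k - 2)

for k in (5, 10, 20):
    print(k, count_calls(k))
# The ratio between successive counts approaches the golden ratio (~1.618),
# so the call count grows like phi^k: exponential, not polynomial.
```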

What's the limit of compiler optimization? How smart is it?

People keep telling me that instead of writing "shift 1 bit to the left", I should just write "multiply by 2", because it's a lot more readable and the compiler will be smart enough to do the optimization.
What else do compilers generally do that developers therefore shouldn't do by hand (for code readability)? I always write string.length == 0 instead of string == "" because I read somewhere 5-6 years ago that numeric operations are much faster. Is this still true?
Or, would most compilers be smart enough to convert the following:
int result = 0;
for (int i = 0; i <= 100; i++)
{
    result += i;
}
into: int result = 5050;?
What is your favourite "optimization" that you do yourself, because most compilers won't do it?
Algorithms: no compiler on the planet can yet choose a better algorithm for you. Too many people hastily jump to the rewrite-it-in-C step after they benchmark, when they should really have considered replacing the algorithm they're using in the first place.
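A concrete illustration of the algorithm point, using the sum loop from the question: no amount of micro-optimization of the loop body competes with swapping in Gauss's closed form, which does the whole job in constant time (function names are mine):

```python
def sum_loop(n):
    """The questioner's loop: O(n) additions."""
    result = 0
    for i in range(n + 1):
        result += i
    return result

def sum_closed_form(n):
    """Gauss's formula n(n+1)/2: O(1) - a better algorithm, not a micro-optimization."""
    return n * (n + 1) // 2

print(sum_loop(100), sum_closed_form(100))  # both 5050
```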