I was trying the time complexity MCQ questions given on CodeChef under practice for Data Structures and Algorithms. One of the questions had a line a* < a[i]. What does that line mean?
I know that if there wasn't an and condition the complexity would have been O(n^2). But the a* < is completely alien to me. I searched for it on the internet, but all I got was about the A* algorithm and asterisks! I tried running the program in Python with a print statement, but it says that * is invalid there. Does it mean something like a pointer to the first element of the array?
Find the time complexity of the following function
n = len(a)
j = 0
for i = 0 to n-1:
    while (j < n and a* < a[j]):
        j += 1
The answer is given as O(n). But there are nested loops, so it is supposed to be O(n^2). Help required! Thanks
It doesn't actually matter what a* means. The question is to determine the time complexity of the algorithm. Notice that although there are two nested loops, the inner while loop isn't a full independent loop: its index is j, which starts at 0, is never reset between iterations of the outer loop, and is only ever incremented, with an upper bound of n. So the inner loop body can run at most n times in total across the entire run. This means that the overall complexity is only O(n).
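To see this concretely, here is a small Python sketch (my own instrumentation, not part of the original question) that counts how often each loop body runs; it reads the mysterious a* as a[i], whose brackets were most likely eaten by the site's formatting:

def count_iterations(a):
    n = len(a)
    j = 0
    outer = inner = 0
    for i in range(n):
        outer += 1
        # assumption: "a*" is read here as "a[i]", mangled by formatting
        while j < n and a[i] < a[j]:
            j += 1
            inner += 1
    return outer, inner

print(count_iterations([5, 4, 3, 2, 1]))  # -> (5, 4): the inner body never runs more than n times in total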
Good afternoon, I have a problem making a recurrence table with a randomly increasing sequence. I want it to return an increasing sequence with a random difference between consecutive elements. Right now I've got:
RecurrenceTable[{a[k+1]==a[k] + RandomInteger[{0,4}], a[1]==-12},a,{k,1,5}]
But it returns an arithmetic progression, with one chosen d used for every k (e.g. {-12,-8,-4,0,4,8,12,16,20,24}).
Also, I would be really grateful for an explanation of why, if I replace every k in my code with n, I get:
RecurrenceTable[{4+a[n] == a[n],a[1] == -12},a,{n,1,10}]
Thank you very much for your time!
I don't believe that RecurrenceTable is what you are looking for.
Try this instead
FoldList[Plus,-12,RandomInteger[{0,4},5]]
which, on one run, returns
{-12,-8,-7,-3,1,2}
and, on another run,
{-12,-9,-5,-3,0,1}
FoldList applies Plus cumulatively, so each element is the previous one plus a fresh RandomInteger[{0,4}] draw, which is why you get a nondecreasing sequence with a random step. (In your original call, RandomInteger[{0,4}] was evaluated once, to a single number, before RecurrenceTable ever saw the equation - that is why one fixed d was used for all k.)
I was recently asked an interview question about testing the validity of a Sudoku board. A basic answer involves for loops. Essentially:
for(int x = 0; x != 9; ++x)
for(int y = 0; y != 9; ++y)
// ...
Use these nested for loops to check the rows. Do it again to check the columns. Do one more pass for the sub-squares, but that one is more involved, because dividing the sudoku board into sub-boards means we end up with more than two nested loops, maybe three or four.
I was later asked the complexity of this code. Frankly, as far as I'm concerned, all the cells of the board are visited exactly three times, so O(3n). To me, the fact that we have nested loops doesn't mean this code is automatically O(n^2), or even O(n^highest-nesting-level-of-loops). But I suspect that's the answer the interviewer expected...
Posed another way, what is the complexity of these two pieces of code:
for(int i = 0; i != n; ++i)
// ...
and:
for(int i = 0; i != sqrt(n); ++i)
for(int j = 0; j != sqrt(n); ++j)
// ...
Your general intuition is correct. Let's clarify a bit about Big-O notation:
Big-O gives you an upper bound for the worst-case (time) complexity for your algorithm, in relation to n - the size of your input. In essence, it is a measurement of how the amount of work changes in relation to the size of the input.
When you say something like
all the cells of the board are visited exactly three times so O(3n).
you are implying that n (the size of your input) is the number of cells in the board, and therefore visiting all cells three times would indeed be an O(3n) (which is O(n)) operation. If this is the case, you would be correct.
However usually when referring to Sudoku problems (or problems involving a grid in general), n is taken to be the number of cells in each row/column (an n x n board). In this case, the runtime complexity would be O(3n²) (which is indeed equal to O(n²)).
In the future, it is perfectly valid to ask your interviewer what n is.
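For concreteness, here is a minimal sketch of the row check (mine, not the interviewer's; the board encoding is an assumption). Columns and sub-squares are analogous, so every cell ends up visited three times in total:

def rows_valid(board):
    # board: a 9x9 list of lists; each cell holds 1-9, or 0 if empty
    for row in board:               # 9 rows
        seen = set()
        for cell in row:            # 9 cells per row -> every cell visited once
            if cell != 0:
                if cell in seen:    # duplicate digit in this row
                    return False
                seen.add(cell)
    return True

print(rows_valid([[0] * 9 for _ in range(9)]))  # -> True for an empty board

Whether you then call the full check O(n) or O(n^2) depends only on whether n counts cells or rows.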
As for the question in the title (Is a nested for loop automatically O(n^2)?) the short answer is no.
Consider this example:
for(int i = 0 ; i < n ; i++) {
    for(int j = 1 ; j < n ; j *= 2) {
        ... // some constant time operation
    }
}
The outer loop makes n iterations while the inner loop makes log2(n) iterations - therefore the time complexity will be O(n log n).
In your examples, in the first one you have a single for-loop making n iterations, therefore a complexity of (at least) O(n) (the operation is performed on the order of n times).
In the second one you have two nested for-loops, each making sqrt(n) iterations, therefore a total runtime complexity of (at least) O(n) as well, since sqrt(n) * sqrt(n) = n. The second function isn't automatically O(n^2) simply because it contains a nested loop. The number of operations being made is still of the same order, n, therefore these two examples have the same complexity - since we assume n is the same for both.
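If it helps, here is a quick throwaway check (my addition, not from the question) that counts the operations of the second example directly:

import math

def count_ops(n):
    ops = 0
    r = math.isqrt(n)        # assumes n is a perfect square, as the example implies
    for i in range(r):
        for j in range(r):
            ops += 1         # the constant-time body
    return ops

print(count_ops(81))  # -> 81: sqrt(n) * sqrt(n) = n operations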
This is the most crucial point to drive home: to compare the performance of two algorithms, you must use the same definition of the input size n. In your sudoku problem you could have defined n in a few different ways, and that choice directly affects the complexity you calculate - even though the amount of work done is exactly the same.
*NOTE - this is unrelated to your question, but in the future avoid using != in loop conditions. In your second example, if sqrt(n) is not a whole number, the loop could run forever, depending on the language and how it is defined. It is therefore recommended to use < instead.
It depends on how you define the so-called N.
If the size of the board is N-by-N, then yes, the complexity is O(N^2).
But if you say the total number of cells is N (i.e., the board is sqrt(N)-by-sqrt(N)), then the complexity is O(N), or O(3N) if you mind the constant.
I'm very new to big-O complexity, and I was wondering: in an algorithm where you have a given array and you initialise an auxiliary array with the same number of indices, does that initialisation already count as O(n) time, or do you just assume it is O(1), or nothing at all?
TL;DR: Ignore it
Long answer: This will depend on the rest of your algorithm, as well as on what you want to achieve. Typically you will do something useful with the array afterwards, and that will have at least the same time complexity as filling the array, so the array-filling does not contribute to the overall time complexity. Furthermore, filling an array with 0 is usually something you do to initialize the array so that your "real" algorithm can work properly. Nevertheless, there are some cases worth considering.
Please note that I use pseudocode in the following examples; I hope it's clear what each algorithm should do. Also note that none of the examples does anything useful with the array - they are just to show my point.
Let's say you have the following code:
A = Array[n]
for(i=0, i<n, i++)
    A[i] = 0
print "Hello World"
Then obviously the runtime of your algorithm is highly dependent on the value of n, and thus it should be counted as linear complexity, O(n).
On the other hand, if you have a much more complicated function, say this one:
A = Array[n]
for(i=0, i<n, i++)
    A[i] = 0
for(i=0, i<n, i++)
    for(j=n-1, j>=0, j--)
        print "Hello World"
Then even if you take the complexity of filling the array into account, you end up with a complexity of O(n^2 + 2n), which is equal to the class O(n^2), so it does not matter in this case.
The most interesting case is surely when you have different options to use as basic operation. Say we have the following code (someFunction being an arbitrary function):
A = Array[n*n]
for(i=0, i<n*n, i++)
    A[i] = 0
for(i=0, i*i<n, i++)
    someFunction(i)
Now it depends on what you choose as the basic operation, which in turn is highly dependent on what you want to achieve. Let's say someFunction is a very cheap function (regarding time complexity) and accessing the array A is more expensive. Then you would probably go with O(n^2), since accessing the array is done n^2 times. If on the other hand someFunction is expensive compared to filling the array, you would probably choose it as the basic operation and go with O(sqrt(n)).
Please be aware that one could also come to the conclusion that, since the first part (array-filling) is executed more often than the second (someFunction), it does not matter which operation takes longer to finish, since at some point the array-filling will need more time. Thus you could argue that the complexity has to be quadratic, O(n^2). This may be right from a theoretical point of view. But in real life you will usually have one operation you want to count and not care about the others.
Actually, you could consider either ignoring the array-filling or taking it into account in all of the examples above, depending on whether print or accessing the array is more expensive. But I hope that in the first two examples it is obvious which part adds more runtime and thus should be considered the basic operation.
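To illustrate the third example, here is a small sketch with separate counters for each candidate basic operation (the counters and the lambda are my additions; some_function stands in for the arbitrary function above):

def run(n, some_function):
    array_writes = 0
    calls = 0
    A = [0] * (n * n)            # allocate the n*n auxiliary array
    for i in range(n * n):       # filling it costs n^2 array accesses
        A[i] = 0
        array_writes += 1
    i = 0
    while i * i < n:             # roughly sqrt(n) iterations
        some_function(i)
        calls += 1
        i += 1
    return array_writes, calls

print(run(100, lambda i: None))  # -> (10000, 10): n^2 writes vs ~sqrt(n) calls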
I made a solution to the subset sum problem, but I am still confused about its complexity.
Please find the algorithm here:
http://www.vinaysofs.com/P=NP%20proved%20with%20Subset%20Sum%20Problem%20solution%20manuscript.pdf
Basically the essence is:
i = 0;
func1(i)
{
    if (func1(i + 1))
        return true;
    else
        return func1(i + 2); // with a few modifications to the other arguments
}
Here, i is the index of an element in an integer set like {1,1,1,1,1,1}.
The worst case for the above algorithm comes when all elements are 1 and we need a sum that is 1 more than the total sum of all the elements.
I want to know the complexity of this algorithm, as someone told me it is non-polynomial (exponential time, 2^n). If it were polynomial, it would be a big achievement.
In my view also it is not polynomial, since T(n) = 2T(n-1) + 6, which unrolls to T(n) = 2^n * T(0) + 6(2^n - 1), on the order of 2^n - but that is the worst case, namely when the first recursive call fails every time.
Please help.
Thanks,
Vinay
I've seen many questions of this type, but I still can't get the while loop's iteration count clear.
for i=1..n
    for j=1..i
        k=n
        while (k>2)
            k=k^(1/3)
The two for loops are O(n^2) combined, and the inner while loop is O(log2(log2(n))) [*]. Thus the overall complexity is O(n^2 * log2(log2(n))).
To find the number of iterations m of the inner while loop, note that after m iterations we have k = n^(1/3^m), and the loop stops once k drops to 2. So we need to solve the following for m:
n = 2^(3^m)
This gives m = log3(log2(n)), which is the same as O(log2(log2(n))) because changing the base of a logarithm only changes a constant factor.
[*] Assuming that, in your notation, k^(1/3) is the cube root of k.
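As a quick empirical check (my own sketch, under that same cube-root assumption), you can count the inner iterations and compare them against log3(log2(n)):

import math

def inner_iterations(n):
    # count how many times the while body runs for a given n
    k = n
    count = 0
    while k > 2:
        k = k ** (1.0 / 3.0)   # cube root of k
        count += 1
    return count

for n in [10, 100, 10**6, 10**12]:
    print(n, inner_iterations(n), math.log(math.log2(n), 3))

The printed count matches ceil(log3(log2(n))) for each n, confirming the doubly-logarithmic bound.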