Is this for loop O(1) or O(n)?

We are given an array[] = {0, 2, 6, ..., 1000000} of arbitrary size and the following piece of code:
for i = 0 to size
    print array[0]
Is this O(1) because it only prints the first item, or O(n) because it prints it n (size) times?

This is O(n), where n is the size of the array given.
Let's just say printing an element of the array takes 1 second.
If the array has 1 element, the program will print the first element 1 time, so it takes 1 second to run the program.
If the array has 10 elements, the program will print the first element 10 times, so it takes 10 seconds to run the program.
If the array has 100 elements, the program will print the first element 100 times, so it takes 100 seconds to run the program.
The time it takes to run the program increases linearly with the size of the array. Hence the algorithm is O(n).

It is O(n).
Since the for loop runs from 0 to size, the loop itself is O(n). Inside the loop, it prints array[0], which is O(1).
So the whole segment becomes O(n) x O(1), which is also O(n).
It prints the first item n times, which results in O(n).
For example, think of this simple assignment code below:
for i = 0 to size
    a = array[0]
It also reads the first item n times, and it is obviously O(n) (ignoring any compiler optimizations).
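A minimal Python sketch makes the linear growth concrete (the counter stands in for the print statement; the specific sizes are arbitrary):

def count_prints(size):
    # Count how many times the loop body runs for a given array size.
    ops = 0
    for _ in range(size):
        ops += 1  # stands in for print(array[0])
    return ops

for size in (1, 10, 100):
    print(size, count_prints(size))  # 1, 10, 100: the count is linear in size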

Related

Cost function of exponent loop?

How do you calculate the cost function for this loop:
for (int i = 3; i * i < N; i = i + 2) {
    // One operation here
}
Along with the cost function, what is the Big-O notation?
The number of iterations is determined by the loop condition i * i < N. The loop starts with i = 3 and increases i by 2 on each iteration; the condition ensures the loop runs until i² becomes equal to or greater than N, i.e. until i reaches √N. Since i² grows quadratically while i grows only linearly, i² reaches N long before i would. The loop therefore runs about (√N − 3)/2 iterations, so the cost function is roughly √N/2 and the Big-O notation is O(√N).
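A quick empirical check (a minimal Python sketch; math.isqrt is used only for comparison):

import math

def iterations(N):
    # Count the iterations of: for (i = 3; i * i < N; i += 2)
    count, i = 0, 3
    while i * i < N:
        count += 1
        i += 2
    return count

for N in (100, 10000, 1000000):
    print(N, iterations(N), math.isqrt(N) // 2)  # the count tracks sqrt(N)/2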

What is the complexity of this program? Is it O(n)?

This is a simple program, and I want to know its complexity. I assume it is O(n), as it has only a single operation in one for loop.
a = int(input("Enter a:"))
b = int(input("Enter b:"))
sol = a
for i in range(a, b):
    sol = sol & (i + 1)
print("\nSol", sol)
Yes, it is O(n), sort of. You have to remember that O(n) means the number of operations grows with the size of the input. Perhaps you're worried about the & and (i+1) operations in the for loop. What you need to keep in mind here is that these operations take constant time, since they all operate on a 32-bit integer. Therefore, the only parameter that changes how long the program runs is the actual number of iterations of the for loop.
If you're assuming n = b - a, then this program is O(n). In fact, if you break down the actual runtime:
per loop: 1 AND operation, 1 addition operation
now do (b-a) iterations, so 2 operations per loop, (b-a) times = 2*(b-a)
If we assume n = b-a, then this runtime becomes 2*n, which is O(n).
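This operation count is easy to verify with a small sketch (the function name count_ops is mine):

def count_ops(a, b):
    # Count the constant-time operations the loop performs.
    ops = 0
    for _ in range(a, b):
        ops += 2  # one AND and one addition per iteration
    return ops

print(count_ops(0, 10), count_ops(0, 100))  # 20, 200: linear in b - a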
I assume you define n := b - a. The complexity is actually O(n log(n)). There is only one operation in the loop, so the complexity is n * Time(operation in loop); but since i consists of log(n) bits, each operation takes O(log(n)) time, giving O(n log(n)) overall.
EDIT:
I now regard n := b. This does not affect my original answer, and it makes more sense since n is then the size of the input. (It doesn't make sense to say that n = 1 for some big pair a, a+1.)
To make it more efficient, notice that you are calculating (a) & (a+1) & (a+2) & ... & (b).
So we just need to set 0s instead of 1s in the binary representation of b, in every position where some a <= k < b has a 0 in that position. How can we know whether to set a digit to 0 or not, then? I'll leave that to you :)
This is possible to do in O(log(n)) time, where log(n) is the size of the binary representation of b.
So in this case we get that the time is O(log(n)^2) = o(n).
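For reference, here is one standard way to realize this idea (a sketch; the name range_and and the brute-force check are mine): the AND of a contiguous range keeps exactly the common binary prefix of its endpoints.

from functools import reduce
import operator

def range_and(a, b):
    # AND of all integers a & (a+1) & ... & b:
    # strip low bits until the endpoints agree, then shift back.
    shift = 0
    while a != b:
        a >>= 1
        b >>= 1
        shift += 1
    return a << shift

# Brute-force check against the original loop:
assert range_and(12, 19) == reduce(operator.and_, range(12, 20))

The while loop runs once per bit of b, so this takes O(log b) time.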

Why time and space complexity of counting sort is O(n + k) and not O(max(n, k))?

Here, 'n' and 'k' are the size of the input array and the maximum element of the array respectively.
There is one run over the array of size 'n' to count the frequency of the elements, and a separate run over the array of size 'k'; for each pass (or iteration) i over that array, there are count[i] inner iterations, where 'count' is the array of size 'k'.
Same with space complexity.
I am looking for a good explanation explaining every bit of the concept, as you can guess I am horribly confused.
Please note that O(n+k) = O(max(n, k)) because
max(n, k) <= n + k <= 2 * max(n, k)
and the big-O doesn't see the constant 2.
Thanks to everyone who has responded. But, I think I got it.
Assumptions:
Actual array with size N is A[]
Maximum element in array A[] is K
Array for counting frequency of elements with size K is count[]
Auxiliary array for storing sorted elements with size N is sorted[]
I looked at it in this way: there is one run over A[] to get the maximum element, and one more run to store the frequency of each element.
This takes O(N).
Now, there is one run over count[], and for each iteration i there is a loop running count[i] times to insert the array elements in sorted order into sorted[].
The sum of all the elements in count[] cannot be greater than N. So the total time for these operations is O(N + K).
Therefore, the worst-case time complexity is O(N + K). Correct me if I'm wrong somewhere.
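To make the two kinds of passes concrete, here is a minimal counting sort sketch in Python (assuming non-negative integer keys; names follow the assumptions above):

def counting_sort(A):
    N = len(A)
    K = max(A)                    # one O(N) run to find the maximum
    count = [0] * (K + 1)         # O(K) space
    for x in A:                   # one O(N) run to count frequencies
        count[x] += 1
    sorted_arr = []
    for value in range(K + 1):    # one O(K) run over count[]
        sorted_arr.extend([value] * count[value])  # count[i] insertions, O(N) in total
    return sorted_arr

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]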
Actually, there are two runs over the array of size k.
Here k represents the size of the count array; the 'k' in the O notation actually represents the maximum element.
If we wrote O(max(n, k)), it would hide the details of the algorithm, whose running time is highly dependent on the maximum element.

Time complexity of probabilistic algorithm

The game is simple: a program has to 'guess' some given n such that 1 < n < N, where n and N are positive integers.
Assuming I'm using a true random number generator function ( rng() ) to generate the computer's 'guess', the probability that it will 'guess' the given n at any try is 1/N.
Example pseudo-code:
n = a // we assign some positive integer value to n
N = b // we assign some positive integer value to N
check loop
{
    if rng(N) = n
        print some success message
        exit program
    else
        go back to the beginning of the check loop
}
OK, I'm interested in how to determine the time complexity (especially the upper bound) of this algorithm. I'm kind of confused: how does the probability aspect affect this? If I understand correctly, the worst-case scenario (in theory) is that the program runs forever?
Even if theoretically your program can run forever, the expected complexity here is O(N): doubling N halves the probability of guessing the given value on any step, and thus doubles the expected number of steps. Even if the program could run forever for a given N, on average it would run twice as long if N were 2 times bigger.
Complexity in O notation doesn't tell you how many operations will be performed. It tells you how the number of operations depends on the input size.
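A quick simulation sketch supports this (the target n = 2, the trial count, and the seed are arbitrary choices):

import random

rng = random.Random(0)

def tries_until_hit(n, N):
    # Simulate the guessing loop; return the number of tries it took.
    tries = 1
    while rng.randint(1, N) != n:
        tries += 1
    return tries

for N in (10, 100, 1000):
    avg = sum(tries_until_hit(2, N) for _ in range(10000)) / 10000
    print(N, round(avg, 1))  # the average number of tries is roughly N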

T(n) of the nested loop: I get the answer as (log n + 1)(log n + 2), am I right?

i = n;
while (i >= 1) {
    j = i;
    while (j <= n) {
        theta(1)
        j = j * 2;
    }
    i = i / 2;
}
Edit: changed the code because of the OP's comment below.
Yes, you are correct in that the outer loop is log(n) and the inner loop is log(n), which yields (log n)(log n).
The reason for the log(n) complexity is that the number of remaining iterations in the loop is halved at each iteration. Whether this is achieved by dividing the iterating variable i by 2 or by multiplying the variable j by 2 is irrelevant. The time taken to complete each loop grows as log(n).
The multiplication (log n)(log n) is due to the fact that each iteration of the outer loop executes log(n) iterations of the inner loop.
The additions are unnecessary because in big-O notation we are only concerned with the rate at which a function grows relative to another function. Offsetting by a constant (or multiplying by a constant) does not change the relative complexity of the functions, so the end result is (log n)(log n).
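An empirical check (a minimal Python sketch; the counter stands in for theta(1)):

import math

def count_inner(n):
    # Count how many times the inner body runs in the nested loop.
    count = 0
    i = n
    while i >= 1:
        j = i
        while j <= n:
            count += 1
            j *= 2
        i //= 2
    return count

for n in (2**5, 2**10, 2**20):
    print(n, count_inner(n), round(math.log2(n) ** 2))  # grows like (log n)^2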
(This answer analyzes the pre-edit version of the code, in which the inner condition was j <= 1.) In the while(i>=1){ (...) } loop, i is bigger than 1 (strictly bigger except for the last iteration). Thus, after j=i, j is bigger than 1 (strictly bigger except for the last iteration).
For that reason, your code is more or less equivalent to:
i = n;
while (i > 1) {
    i = i / 2;
}
if (i == 1) {
    j = i;
    while (j <= 1) {
        theta(1)
        j = j * 2;
    }
}
Which can be rewritten:
i = n;
while (i > 1) {
    i = i / 2;
}
if (i == 1) {
    theta(1)
}
And the overall complexity is the complexity of the while loop, which is log(n).