Why is the time complexity of the modulus operation constant?

I am working through Cracking the Coding Interview, and I am unsure of an example on time-complexity. They provide this code to determine if a number is prime:
boolean isPrime(int n) {
    for (int x = 2; x * x <= n; x++) {
        if (n % x == 0) {
            return false;
        }
    }
    return true;
}
Later they say "The work inside the for loop is constant". Why is the run-time of the modulus operator constant? Why does it not depend on n?

The key part of the statement is "inside the for loop". All that happens there is a single modulo operation, and on fixed-width machine integers a modulo is a single hardware instruction, so it takes constant time no matter how large n is. The complexity of the function as a whole does depend on n: the loop runs until x * x > n, i.e. roughly sqrt(n) times.
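One way to see both facts at once is to count loop iterations. A minimal sketch (the counter and the method name are only for illustration, not part of the book's code):

static int isPrimeIterations(int n) {
    int count = 0;
    for (int x = 2; x * x <= n; x++) {
        count++;               // each iteration does exactly one O(1) modulo check
        if (n % x == 0) {
            break;             // composite: stop early
        }
    }
    return count;              // roughly sqrt(n) when n is prime
}

For a prime n the loop never breaks early, so the count is about sqrt(n); each iteration is still O(1), which is exactly what the book's sentence is claiming.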

Related

Time complexity of variable assignment

I know that if I have a function like:
public int addOne(int a) {
    return a + 1;
}
The time complexity will be O(1), since we only do one operation (the addition).
But what if I have a function that doesn't do any operations, just assigns some values to some global variables. Like this:
public void assignValues() {
    a = 2;
    b = 3;
    c = 4;
    // maybe more
}
What would the time complexity be for this function? My guess is that it would still be O(1). Is that correct?
When you discuss the time complexity of an algorithm, you first have to define the variable parameter(s). For example, it doesn't make any sense to say that something is O(n) without defining what you measure by n (e.g. the length of an array? The size of the contents of an array? The bit-length of a number? The absolute value of an integer?).
In your particular case, you have a function that takes no parameters. Assuming that the operations within the function don't depend on any other external parameters, the complexity of such a function is always trivially O(1), no matter what operations you perform inside. For example, the following function is also O(1):
public static int DoSth() {
    int x = 0;
    const int n = 1000000;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            x++;
    return x;
}
As already mentioned, this assumes that the parameter-less function has no external dependencies. Consider e.g. the following function:
public static void DoSth() {
    long n = DateTime.Now.Ticks;
    for (long i = 0; i < n; i++)
        Console.WriteLine(i);
}
This function is O(n log n) if n is the number of ticks that have passed since 1.1.0001.
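The n log n (rather than plain n) comes from the printing itself: Console.WriteLine(i) has to format i, which takes time proportional to its number of digits, i.e. O(log i). Summed over the loop, that is
log 1 + log 2 + ... + log n = log(n!) = O(n log n).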

Recurrence function time complexity with if-else statement

I am having some trouble working out the time complexity of the function below.
public static long powerN(long x, int n) {
    if (n == 0) return 1;
    if (n % 2 == 0) {
        long a = powerN(x, n / 2);
        return a * a;
    } else {
        long a = powerN(x, (n - 1) / 2);
        return x * a * a;
    }
}
I've learned that when there is an if-else statement, we take the greater complexity of the two branches. In the example above, both branches have the same time complexity, so I reasoned as follows.
Let T(n) be the running time of powerN. Then
T(n) = T(n/2) + k (where k is the constant overhead per call)
so I concluded that T(n) is O(log n).
But I am still wondering whether my reasoning is right. Thank you.
Your assumption is basically correct, at least in this case.
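As a quick sanity check, unrolling the recurrence makes the bound explicit:
T(n) = T(n/2) + k = T(n/4) + 2k = ... = T(1) + k * log2(n) = O(log n).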
In general you need to look at what the if-else condition means and how often each branch is actually taken. If one branch is taken about as often as the other, taking the greater branch complexity works. More precisely, the ratio has to be bounded by a constant: calling one branch 10 times (or k times, for any constant k) more often than the other does not change the complexity, but calling it, say, log n times more often does.
Just imagine the if statement were like this:
public static long powerN(long x, int n) {
    if (n == 0) return 1;
    if (n < 100) {
        long a = powerN(x, n - 1);
        return a * a;
    } else {
        long a = powerN(x, (n - 1) / 2);
        return a * a;
    }
}
Here the if-branch only runs once n has dropped below 100, so it contributes at most a constant amount of work (about 100 calls), and you can ignore it when determining the complexity.
Or imagine this one, which recurses with T(n/2) until n drops to log(N), and then with T(n-1) for the remaining log(N) calls. Here you simply cannot take "what is inside" each if-else branch and choose the bigger one:
int initialN = 10000;
long initialX = 500;
powerN(initialX, initialN);

public static long powerN(long x, int n) {
    if (n == 0) return 1;
    if (n > Math.log(initialN)) {
        long a = powerN(x, n / 2);
        return a * a;
    } else {
        long a = powerN(x, n - 1);
        return a * a;
    }
}
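Counting the branches in this version: the first branch fires O(log n) times (halving n down to log(N)), and then the second branch fires about log(N) more times (decrementing n down to 0). The total here happens to still be O(log n), but you only know that by counting how often each branch actually runs, not by picking the more expensive branch.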

Time complexity of Minimum number of squares whose sum equals to given number n

What is the exact time complexity of this code?
I know it's exponential, but what kind of exponential: 2^n, sqrt(n)^sqrt(n), etc.?
If you can attach some proof, that would be great.
https://www.geeksforgeeks.org/minimum-number-of-squares-whose-sum-equals-to-given-number-n/
class squares {
    // Returns count of minimum squares that sum to n
    static int getMinSquares(int n)
    {
        // base cases
        if (n <= 3)
            return n;

        // Maximum squares required is n (1*1 + 1*1 + ...)
        int res = n;

        // Try every square x*x <= n and recurse on the remainder
        for (int x = 1; x <= n; x++) {
            int temp = x * x;
            if (temp > n)
                break;
            else
                res = Math.min(res, 1 + getMinSquares(n - temp));
        }
        return res;
    }

    public static void main(String args[])
    {
        System.out.println(getMinSquares(6));
    }
}
In my opinion, since the for loop makes the same recursive call about sqrt(n) times, and each of those calls is on (n - x*x) ≈ n, it should be n^sqrt(n).
Is this answer correct?
The recurrence relation for that function is
T(n) = sum from i=1 to i=sqrt(n) of T(n - i*i)
which is bounded above by
T(n) <= sqrt(n) * T(n-1)
since each term in the sum is at most T(n-1) and there are sqrt(n) of them. The recurrence T(n) = sqrt(n) * T(n-1) solves to O( sqrt(n)^n ). I'm sure there is some clever way to get a better bound, but this function certainly looks exponential.
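To see where O( sqrt(n)^n ) comes from, unroll the bound:
T(n) <= sqrt(n) * T(n-1) <= sqrt(n) * sqrt(n-1) * T(n-2) <= ... <= sqrt(n!)
and sqrt(n!) <= sqrt(n^n) = sqrt(n)^n. By Stirling's approximation, sqrt(n!) is about (n/e)^(n/2), so the upper bound really is of the exponential form n^Θ(n).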

How to prove that the time complexity of this Fibonacci sequence is O(n)

For the following code, I know that the time complexity is O(n), but how do I prove it properly?
Is saying that searching the array is O(n) enough?
int f[N];

F(n)
{
    if (f[n] >= 0) return f[n];
    f[n] = F(n-1) + F(n-2);
    return f[n];
}

int main()
{
    read n;
    f[0] = 0; f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = -1;
    print F(n);
}
It may look like the bad exponential recursion, but each array element is computed at most once: after f[k] has been filled in, any call F(k) just returns the stored value. Every index that does get computed makes exactly two recursive calls, so there are at most about 2n calls to F in total, each doing O(1) work besides the recursion. So the whole thing is still O(n).
If you are not obligated to use recursion, you can program it with a single loop, as in the sketch below.
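A minimal iterative version in Java, for concreteness (the name fib and the long return type are just illustrative; note that long overflows past f(92)):

static long fib(int n) {
    if (n <= 1) return n;          // f(0) = 0, f(1) = 1
    long prev = 0, curr = 1;
    for (int i = 2; i <= n; i++) {
        long next = prev + curr;   // f(i) = f(i-1) + f(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}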

How much time will the Fibonacci series take to compute?

I have created the recursive call tree by applying the brute-force technique, but when I give this algorithm the value 100, it would take trillions of years to compute.
What do you suggest I do so that it runs fast for n = 100?
Here is what I have done so far:
function fib(n) {
    if (n <= 1) {
        return n;
    } else {
        return fib(n - 1) + fib(n - 2);
    }
}
You can also do it with a loop:
int a = 1;
int b = 1;
for (int i = 2; i < 100; i++) {
    int temp = a + b;
    a = b;
    b = temp;
}
System.out.println("Fib 100 is: " + b);
The runtime is linear and avoids the overhead caused by the recursive calls.
EDIT: Please note that the printed result will be wrong: since Fib(100) is bigger than Integer.MAX_VALUE (and even Long.MAX_VALUE), you have to use BigInteger or similar to get the correct output, but the "logic" stays the same.
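For example, a minimal sketch of the same loop with java.math.BigInteger, keeping the snippet's indexing (a and b start as the first two Fibonacci numbers):

import java.math.BigInteger;

class Fib100 {
    public static void main(String[] args) {
        BigInteger a = BigInteger.ONE;
        BigInteger b = BigInteger.ONE;
        for (int i = 2; i < 100; i++) {
            BigInteger temp = a.add(b);   // BigInteger is immutable; add returns a new value
            a = b;
            b = temp;
        }
        System.out.println("Fib 100 is: " + b);
    }
}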
You could have a "cache" where you save already computed Fibonacci numbers. Every time you are about to compute fib(n-1) or fib(n-2), you first look into your array of already computed numbers; if the value is there, you save a whole lot of time. So every time you do compute a Fibonacci number, save it into your array or list at the corresponding index.
function fib(n)
{
    if (n <= 1)
    {
        return n;
    }
    if (fiboList[n] != defaultValue)
    {
        return fiboList[n];        // already computed: cache hit
    }
    else
    {
        int fibo = fib(n-1) + fib(n-2);
        fiboList[n] = fibo;        // store for later calls
        return fibo;
    }
}
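With the cache, each index is computed only once and every later call returns immediately, so fib(n) makes O(n) calls in total instead of the exponentially many calls of the naive version.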
You can also do it by dynamic programming:
def fibo(n):
    dp = [0, 1] + ([0] * n)
    def dpfib(n):
        return dp[n-1] + dp[n-2]
    for i in range(2, n + 2):
        dp[i] = dpfib(i)
    return dp[n]
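Calling fibo(100) returns 354224848179261915075 exactly; Python integers are arbitrary-precision, so the overflow issue mentioned above does not arise here.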