How do you calculate time complexity in Big-O notation? Can anybody explain that for the code below?

void KeyExpansion(unsigned char key[N_KEYS], unsigned int* w)
{
    unsigned int temp;
    // First loop: pack four key bytes into each of the first words
    for(int i = 0; i < N_KEYS; i++)
    {
        w[i] = (key[N_KEYS*i]<<24) + (key[N_KEYS*i+1]<<16) + (key[N_KEYS*i+2]<<8) + key[N_KEYS*i+3];
    }
    // Second loop: derive the remaining expanded-key words
    for(int i = 4; i < EXPANDED_KEY_COUNT; i++)
    {
        temp = w[i-1];
        if(i % 4 == 0)
            temp = SubWord(RotWord(temp)) ^ Rcon[i/4];
        w[i] = temp ^ w[i-4];
    }
}

Big-O helps us do analysis based on the input. The issue with your question is that there seem to be several inputs, which may or may not be related to each other.
The input variables look like N_KEYS and EXPANDED_KEY_COUNT. We also don't know what SubWord() or RotWord() do based on what is provided.
Since SubWord() and RotWord() aren't provided, let's assume they run in constant time to keep the calculation simple.
You have basic loops that each iterate over their values once, so it's pretty straightforward: the first loop contributes O(N_KEYS) and the second O(EXPANDED_KEY_COUNT). The overall time complexity therefore depends on two inputs and is bounded by the larger of the two.
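To make that concrete, here is the rough operation count under the constant-time assumption (c1 and c2 are stand-in constants for the per-iteration work, not values from the question):

total ≈ c1 * N_KEYS + c2 * (EXPANDED_KEY_COUNT - 4)
      = O(N_KEYS + EXPANDED_KEY_COUNT)
      = O(max(N_KEYS, EXPANDED_KEY_COUNT))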
If SubWord() or RotWord() do anything special that isn't constant time, that would affect the O(EXPANDED_KEY_COUNT) portion of the code: you would multiply their cost into that term. Judging by the method names, their running time might depend on the length of the word they operate on, which would be yet another input variable.
So this isn't a clear-cut answer, because the question isn't fully specified, but I tried to break things down for you as best I could.

Related

Time complexity of nested loop with multiplication?

I'm confused about finding the time complexity of this nested loop:
for (int i = 0; i < n; i++) {
    i *= 2;
    for (int j = 0; j < i; j++) {
    }
}
And what is the time complexity of the outer loop alone (just ignoring the inner loop)?
for (int i = 0; i < n; i++) {
    i *= 2;
}
I ran the following code which is a modified version of the second code from the question.
let it = 0
let pow = 1000
let n = Math.pow(2, pow)
for (let i = 0; i < n; i++) {
    it++
    i *= 2
}
console.log(it)
It prints either pow+1 or pow, which implies that the loop runs about log2(n) times.
Although it contains a single for loop, it modifies the iteration variable inside the loop body. In every iteration the value of i is doubled, which halves the number of iterations remaining. So instead of O(n), it is O(log n). It is similar to binary search, if that helps you understand it better: every iteration halves the remaining search space.
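To see where the log2(n) comes from, track i across iterations (a quick derivation, not part of the original experiment): at the top of the loop, i takes the values 0, 1, 3, 7, 15, ..., i.e. 2^k - 1 after k iterations, since each pass doubles i and the i++ adds one. The loop exits when 2^k - 1 >= n, i.e. after k ≈ log2(n) iterations, which matches the printed count of pow or pow+1.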
The first code needs a little more care. The outer loop still runs about log2(n) times, and the inner loop runs i times, where i roughly doubles on each pass (after i *= 2 it takes the values 0, 2, 6, 14, 30, ...). The total inner work is therefore a geometric series, and a geometric series is proportional to its largest term, which here is less than 2n. So the first code is O(n), not O(n log n): because of the doubling, the last pass dominates all the earlier ones combined.
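A quick empirical check (a sketch in Java, mirroring the JavaScript experiment above; the sizes are arbitrary): count the total number of inner-loop iterations and compare it against n.

public class InnerLoopCount {
    public static void main(String[] args) {
        for (int pow = 10; pow <= 24; pow += 2) {
            long n = 1L << pow;
            long inner = 0;
            for (long i = 0; i < n; i++) {
                i *= 2;
                for (long j = 0; j < i; j++) {
                    inner++;            // total work done by the inner loop
                }
            }
            // the ratio stays below a small constant (~4), consistent with O(n)
            System.out.println("n=" + n + " inner=" + inner + " ratio=" + (double) inner / n);
        }
    }
}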

Big-O of fixed loops

I was discussing some code during an interview and don't think I did a good job articulating one of my blocks of code.
I know (at a high level) we are taught that two nested for loops == O(n^2), but what happens when you make some assertions as part of the work that limit the work done to a constant amount?
The code I came up with was something like
String[] someVal = new String[]{"a", "b", "c", "d"}; // this was really some other computation
if (someVal.length != 4) {
    return false;
}
for (int i = 0; i < someVal.length; i++) {
    String subString = someVal[i];
    if (subString.length() != 8) {
        return false;
    }
    for (int j = 0; j < subString.length(); j++) {
        // do some other stuff
    }
}
So there are two for loops, but the number of iterations becomes fixed because of the length checks before proceeding.
for (int i = 0; i < 4; i++) {           // bound fixed by the length check
    String subString = someVal[i];
    if (subString.length() != 8) { return false; }
    for (int j = 0; j < 8; j++) {       // bound fixed by the substring check
        // do some other stuff
    }
}
I tried to argue this made it constant, but didn't do a great job.
Was I completely wrong or off-base?
Your early-exit condition inside the for loop is if(subString.length() != 8), so your second for loop only executes when the length is exactly 8. That in fact makes the complexity of the second for loop constant, as it does not depend on the input size. And before your first for loop you have another early-exit condition, if(someVal.length != 4), making the first for loop constant as well.
So yes, I would follow your argument that the complete function has constant big-O time complexity. It may be worth repeating in the explanation that big-O always describes an upper bound on growth, and that constant factors can be reduced to 1.
But keep in mind that on real-world input, a constant-complexity function can still take longer to execute than an O(n) one, depending on the size of n. If it were a known precondition that n never grows beyond a (low) bound, I would argue not about the Big-O complexity but about the overall expected runtime, where the constant inner loop could have a larger impact than the Big-O analysis suggests.
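For a concrete sense of scale (the numbers are for illustration only, not from the original answer): the fixed version above always performs 4 * 8 = 32 inner iterations, so a hypothetical O(n) alternative that makes a single pass over the input would actually be faster whenever n < 32, despite being in the "worse" Big-O class.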

How is it possible that O(1) constant time code is slower than O(n) linear time code?

"...It is very possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase."
According to my understanding:
O(N) - Time taken for an algorithm to run based on the varying values of input N.
O(1) - Constant time taken for the algorithm to execute irrespective of the size of the input e.g. int val = arr[10000];
Can someone help me understand based on the author's statement?
How can O(N) code run faster than O(1) code?
What are the specific inputs the author is alluding to?
Rate of increase of what?
O(n) linear-time code can absolutely be faster than O(1) constant-time code. The reason is that constant factors are ignored entirely in Big O, which measures how fast an algorithm's cost grows as the input size n grows, and nothing else. It's a measure of growth rate, not of running time.
Here's an example:
int constant(int[] arr) {
    int total = 0;
    for (int i = 0; i < 10000; i++) {
        total += arr[0];
    }
    return total;
}

int linear(int[] arr) {
    int total = 0;
    for (int i = 0; i < arr.length; i++) {
        total += arr[i];
    }
    return total;
}
In this case, constant does a lot of work, but it's fixed work that will always be the same regardless of how large arr is. linear, on the other hand, appears to have few operations, but those operations are dependent on the size of arr.
In other words, as arr increases in length, constant's performance stays the same, but linear's running time increases linearly in proportion to its argument array's size.
Call the two functions with a single-item array like
constant(new int[] {1});
linear(new int[] {1});
and it's clear that constant runs slower than linear.
But call them like:
int[] arr = new int[10000000];
constant(arr);
linear(arr);
Which runs slower?
After you've thought about it, run the code given various inputs of n and compare the results.
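One way to run that comparison (a rough sketch; System.nanoTime timing like this is naive and ignores JIT warm-up, so treat the output as indicative only):

public class GrowthDemo {
    static int constant(int[] arr) {
        int total = 0;
        for (int i = 0; i < 10000; i++) total += arr[0];       // fixed amount of work
        return total;
    }

    static int linear(int[] arr) {
        int total = 0;
        for (int i = 0; i < arr.length; i++) total += arr[i];  // work grows with n
        return total;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 10_000, 10_000_000}) {
            int[] arr = new int[n];
            long t0 = System.nanoTime();
            constant(arr);
            long t1 = System.nanoTime();
            linear(arr);
            long t2 = System.nanoTime();
            System.out.println("n=" + n + ": constant=" + (t1 - t0) + "ns, linear=" + (t2 - t1) + "ns");
        }
    }
}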
Just to show that this phenomenon of run time != Big O isn't just for constant-time functions, consider:
void exponential(int n) throws InterruptedException {
    for (int i = 0; i < Math.pow(2, n); i++) {
        Thread.sleep(1);
    }
}

void linear(int n) throws InterruptedException {
    for (int i = 0; i < n; i++) {
        Thread.sleep(10);
    }
}
Exercise (using pen and paper): up to which n does exponential run faster than linear?
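(If you want to check your answer afterwards: exponential sleeps roughly 2^n ms in total and linear roughly 10n ms, so exponential is faster while 2^n < 10n. That holds for n from 1 to 5, since 2^5 = 32 < 50 but 2^6 = 64 > 60.)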
Consider the following scenario:
Op1) Given an array of length n where n >= 10, print the first ten elements twice on the console. This is a constant-time (O(1)) operation, because for any array of size >= 10 it executes 20 steps.
Op2) Given an array of length n where n >= 10, find the largest element in the array. This is a linear-time (O(N)) operation, because for any array it executes N steps.
Now if the array size is between 10 and 20 (exclusive), Op1 will be slower than Op2. But if we take an array of size > 20 (e.g. size = 1000), Op1 will still take 20 steps to complete, while Op2 will take 1000 steps.
That's why Big-O notation is about the growth (rate of increase) of an algorithm's cost.
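A sketch of the two operations in Java (the method names are mine, not from the answer):

static void printFirstTenTwice(int[] arr) {   // Op1: always 20 steps -> O(1)
    for (int pass = 0; pass < 2; pass++) {
        for (int i = 0; i < 10; i++) {
            System.out.println(arr[i]);
        }
    }
}

static int largest(int[] arr) {               // Op2: one step per element -> O(N)
    int max = arr[0];
    for (int v : arr) {
        if (v > max) max = v;
    }
    return max;
}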

Something weird in for loop speed

here is a part of my program code:
int test;
for (uint i = 0; i < 1700; i++) {
    test++;
}
the whole program takes 0.5 seconds to finish, but when I change it to:
int test[1];
for (uint i = 0; i < 1700; i++) {
    test[0]++;
}
it takes 3.5 seconds! And when I change the int to a double, it gets much worse:
double test;
for (uint i = 0; i < 1700; i++) {
    test++;
}
it takes about 18 seconds to finish!
I have to increment an int array element and a double variable in my real for loop, and that takes about 30 seconds!
What's happening here? Why does it take that much time for just an increment?
I know a floating-point data type like double has a different structure from a fixed-point data type like int, but is that the only cause of such a big time difference? And what about the second example, which is just an int array element?
Thanks
You have answered your question yourself.
Float (double) operations are different from integer ones, even if you just add 1.0f.
Your second example takes longer than the first one just because you added a pointer reference. An array in C is, at bottom, not much different from a pointer to its first element. Accessing any element, even the first one, causes the machine code to load the starting address of the array, multiply the index (0 in this case) by the size of each member (4 bytes, or whatever an int is), and add that (0) to the pointer. Then it has to dereference the pointer, meaning actually load the value at that very address, add one, and write back the result.
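Spelled out (an illustration of those steps, not actual compiler output):

address of test[0] = base address of test + 0 * sizeof(int)
load the value at that address, add 1, store it back

A plain int test, by contrast, can be kept in a register for the entire loop, with no loads or stores at all.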
A smart modern compiler should optimize this a bit. If you want to prevent that optimization, modify the code a bit and don't use a constant for the index.
I never tried that with a modern Objective-C compiler, but I guess this code would take much longer than 3.5 s to run:
int test[2];
int index = 0;
for (uint i = 0; i < 1700; i++) {
    test[index]++;
}
If that does not make much of a change then try this:
- (void)foo:(int)index {
    int test[2];
    for (uint i = 0; i < 1700; i++) {
        test[index]++;
    }
}
and then call it as [self foo:0];
Give it a try and let us know :)

Which is faster, squares or roots?

for (int i = 2; i * i <= n; i++)
for (int i = 2; i <= SQRT(n); i++)
Just wondering which is faster. I looked at some primitive algorithms for computing roots, and it seemed to me that squaring the number would be faster, but I don't know for sure. These loops are for determining a number's primality.
Shouldn't the comparison be between
int sqrt = SQRT(n);
for (int i = 2; i <= sqrt; i++)
and
for (int i = 2; i * i <= n; i++)
The answer will depend on how many loop iterations you do. The sqrt method does less work per iteration, but it has a higher start-up cost. Mind you, this reeks of premature optimisation.
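As a back-of-the-envelope model (the cost names are stand-ins, not measurements): the hoisted version costs roughly cost(sqrt) + sqrt(n) comparisons, while the squaring version costs sqrt(n) multiplications on top of sqrt(n) comparisons. Hoisting wins as soon as cost(sqrt) < sqrt(n) * cost(multiply), which is the case for all but quite small n.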
The compiler may 'cache' the result of SQRT(n), but it has to compute i * i on every step.
Square root will take longer, unless it's implemented in hardware, as a lookup table, or as a special machine-code version. Newton iteration is the algorithm of choice; it converges quadratically.
Best to benchmark for yourself. I'd recommend moving the call to square root outside the loop so you only do it once rather than every time you check the exit condition.
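A minimal benchmark sketch along those lines (naive nanoTime timing again, and the loop bodies only count iterations, so an aggressive compiler may optimize them away; treat it as a starting point):

public class SqrtVsSquare {
    static long squareLoop(int n) {
        long count = 0;
        for (int i = 2; (long) i * i <= n; i++) count++;  // multiply on every test
        return count;
    }

    static long sqrtLoop(int n) {
        long count = 0;
        int limit = (int) Math.sqrt(n);                   // hoisted out of the loop
        for (int i = 2; i <= limit; i++) count++;
        return count;
    }

    public static void main(String[] args) {
        int n = 1_000_000_000;
        long t0 = System.nanoTime();
        long a = squareLoop(n);
        long t1 = System.nanoTime();
        long b = sqrtLoop(n);
        long t2 = System.nanoTime();
        System.out.println("square: " + (t1 - t0) + "ns (" + a + " iterations), "
                + "sqrt: " + (t2 - t1) + "ns (" + b + " iterations)");
    }
}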
Why not skip both of them and use some clever maths? The following code avoids both, using the property that the sum of the first n odd numbers is always a perfect square.
A shameless plug for my old blogpost (from my dead blog)
int isPrime(int n)
{
    int squares = 1;
    int odd = 3;
    if (((n & 1) == 0) || (n < 9)) return (n == 2) || ((n > 1) && (n & 1));
    else
    {
        for (; squares <= n; odd += 2)
        {
            if (n % odd == 0)
                return 0;
            squares += odd;  // squares stays at 1+3+5+..., always a perfect square
        }
        return 1;
    }
}
The square will be faster.
But i * i will overflow once i exceeds the square root of the largest int, and then the comparison will go wrong. A square root function can be (and you would expect it to be) implemented so that it works correctly on arguments all the way up to the largest representable int, which means it won't fail in that way.
In Java, the largest int is 2^31 - 1, whose square root is just under 46341. So once n exceeds 46340^2 = 2147395600, the loop can reach i = 46341 (when n has no smaller factor), i * i overflows, and the squaring approach breaks while the sqrt approach still works.
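A common way to dodge the overflow without calling sqrt at all is to divide instead of multiply (a sketch; for positive ints, i <= n / i is equivalent to i * i <= n but cannot overflow):

static boolean isPrimeTrialDivision(int n) {
    if (n < 2) return false;
    for (int i = 2; i <= n / i; i++) {   // same bound as i * i <= n, overflow-free
        if (n % i == 0) return false;
    }
    return true;
}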