for (int i = 2; i * i <= n; i++)
for (int i = 2; i <= SQRT(n); i++)
I'm just wondering which is faster. I looked at some primitive algorithms for computing roots, and it would seem to me that squaring the number would be faster, but I don't know for sure. These loops are for determining a number's primality.
Shouldn't the comparison be between
int sqrt = SQRT(n);
for (int i = 2; i <= sqrt; i++)
and
for (int i = 2; i * i <= n; i++)
The answer will depend on how many loop iterations you do. The sqrt method does less work per iteration, but it has a higher start-up cost. Mind you, this reeks of premature optimisation.
The compiler may 'cache' the result of SQRT(n), but it has to compute i * i on every iteration.
Square root will take longer, unless it's implemented as a hardware instruction, a lookup table, or a special machine-code version. Newton iteration is the algorithm of choice; it converges quadratically.
Best to benchmark for yourself. I'd recommend moving the call to square root outside the loop so you only do it once rather than every time you check the exit condition.
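For reference, here is a minimal sketch of the two variants under discussion, with the square root hoisted out of the loop as suggested (C++; the function names are mine, and SQRT is assumed to be std::sqrt):

#include <cmath>

// Variant 1: no start-up cost, but one multiplication per exit-condition check.
bool isPrimeSquare(int n) {
    if (n < 2) return false;
    for (int i = 2; i * i <= n; i++)
        if (n % i == 0) return false;
    return true;
}

// Variant 2: pay for one sqrt call up front, then a plain comparison per iteration.
bool isPrimeSqrt(int n) {
    if (n < 2) return false;
    // Floating-point sqrt; any rounding up here only costs one harmless extra iteration.
    const int limit = (int)std::sqrt((double)n);
    for (int i = 2; i <= limit; i++)
        if (n % i == 0) return false;
    return true;
}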
Why not skip both of them and use some clever maths? The following code avoids both, using the property that the sum of the first k odd numbers is always a perfect square (1 + 3 + 5 + … + (2k − 1) = k²).
A shameless plug for my old blogpost (from my dead blog)
int isPrime(int n)
{
    int squares = 1; // running sum 1 + 3 + 5 + ...; when testing odd == 2k+1, squares == k*k
    int odd = 3;     // next odd candidate divisor

    // Evens and small n up front: 2 is prime; other evens and n < 2 are not.
    if (((n & 1) == 0) || (n < 9))
        return (n == 2) || ((n > 1) && (n & 1));

    // Trial-divide by successive odd numbers, using the running sum of odds
    // (always a perfect square) as the loop bound instead of sqrt or i * i.
    for (; squares <= n; odd += 2)
    {
        if (n % odd == 0)
            return 0;
        squares += odd;
    }
    return 1;
}
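A quick sanity check of the snippet above (a hypothetical driver of my own, not from the original post):

#include <stdio.h>

int main(void) {
    // Expected output: 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47
    for (int n = 2; n < 50; n++)
        if (isPrime(n))
            printf("%d ", n);
    printf("\n");
    return 0;
}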
The square will be faster.
But the square i * i will overflow once i exceeds the square root of the largest int, which happens when n gets close to the largest representable int, and then the comparison will go wrong. The square root function can be (and you would expect it to be) implemented so that it works correctly on arguments all the way up to the largest representable int, so it won't fail in that way.
In Java, the largest int is 2^31 − 1, whose square root is just under 46341. So once your candidate n exceeds 46340^2 = 2,147,395,600, the i * i comparison can overflow before the loop finishes, and the squaring approach would stop you.
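If overflow is the concern, one common middle ground is to divide instead of multiply: for positive ints, i <= n / i is equivalent to i * i <= n but can never overflow, at the cost of one integer division per iteration. A minimal sketch (the function name is mine):

// i <= n / i  <=>  i * i <= n for positive ints, with no overflow risk.
bool isPrimeNoOverflow(int n) {
    if (n < 2) return false;
    for (int i = 2; i <= n / i; i++)
        if (n % i == 0) return false;
    return true;
}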
I was discussing some code during an interview and don't think I did a good job articulating one of my blocks of code.
I know (at a high level) we are taught that two nested for loops == O(n^2), but what happens when you make some assertions as part of the work that limit the work done to a constant amount?
The code I came up with was something like
String[] someVal = new String[]{"a", "b", "c", "d"}; // this was really - some other computation
if (someVal.length != 4) {
    return false;
}
for (int i = 0; i < someVal.length; i++) {
    String subString = someVal[i];
    if (subString.length() != 8) {
        return false;
    }
    for (int j = 0; j < subString.length(); j++) {
        // do some other stuff
    }
}
So there are two for loops, but the number of iterations becomes fixed because of the length checks before proceeding:
for (int i = 0; i < 4; i++) {
    String subString = someVal[i];
    if (subString.length() != 8) { return false; }
    for (int j = 0; j < 8; j++) {
        // do some other stuff
    }
}
I tried to argue this made it constant, but didn't do a great job.
Was I completely wrong or off-base?
Your early exit condition inside the first for loop is if (subString.length() != 8), so the second for loop only executes when the length is exactly 8. That makes the complexity of the second for loop constant, since it does not depend on the input size. And before the first for loop you have another early exit, if (someVal.length != 4), which makes the first for loop constant as well.
So yes, I would follow your argument that the complete function has constant big-O time complexity. It may be worth repeating in the explanation that big-O describes an upper bound that will never be exceeded, and that constant factors can be reduced to 1.
But keep in mind that a constant-complexity function could still take longer on real-world input than an O(n) one, depending on the size of n. If it were a known precondition that n never grows beyond some (low) bound, I would argue not about the big-O complexity but about the overall expected runtime, where the second, constant for loop could have a larger impact than a big-O analysis suggests.
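To put a number on it: assuming the early exits pass, the total work here is bounded by a fixed expression with no n in it, e.g.

T ≤ c0 + 4 · (c1 + 8 · c2) = O(1)

where c0 is the array-length check, c1 the per-string length check, and c2 the per-character work in the inner loop.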
void KeyExpansion(unsigned char key[N_KEYS], unsigned int* w)
{
    unsigned int temp;

    // Pack the raw key bytes into 32-bit words, 4 bytes per word.
    for (int i = 0; i < N_KEYS; i++)
    {
        w[i] = (key[N_KEYS*i] << 24) + (key[N_KEYS*i+1] << 16) + (key[N_KEYS*i+2] << 8) + key[N_KEYS*i+3];
    }

    // Derive each remaining word from the previous word and the word 4 positions back.
    for (int i = 4; i < EXPANDED_KEY_COUNT; i++)
    {
        temp = w[i-1];
        if (i % 4 == 0)
            temp = SubWord(RotWord(temp)) ^ Rcon[i/4];
        w[i] = temp ^ w[i-4];
    }
}
Big-O helps us do analysis based on the input. The issue with your question is that there seem to be several inputs, which may or may not relate to each other.
The input variables look like N_KEYS and EXPANDED_KEY_COUNT. We also don't know what SubWord() or RotWord() do based on what is provided.
Since SubWord() and RotWord() aren't provided, let's assume they are constant time for easy calculation.
You have basic loops that iterate over each value, so it's pretty straightforward: O(N_KEYS) + O(EXPANDED_KEY_COUNT). The overall time complexity depends on two inputs and is bounded by the larger of them.
If SubWord() or RotWord() do anything special that isn't constant time, that would affect the O(EXPANDED_KEY_COUNT) portion of the code; you would multiply their cost into that term. By the names of the methods, though, it sounds like their complexity would depend on the length of the word, which would be yet another input variable.
So this isn't a clear answer, because the question isn't fully clear, but I've tried to break things down for you as best I could.
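One concrete instantiation (an assumption on my part, since the constants aren't shown): if this is standard AES-128 key expansion, then N_KEYS = 4 and EXPANDED_KEY_COUNT = 44, so the function does Θ(4) + Θ(44) word operations. With the key size fixed like that, the whole thing is O(1); the complexity question only becomes meaningful if you treat the key size as the variable.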
I started preparing for an interview and came across this problem:
An array of integers is given
Now calculate the sum of Hamming distances of all pairs of integers in the array in their binary representation.
Example:
given {1,2,3} or {001,010,011} (used 3 bits just to simplify)
result= HD(001,010)+HD(001,011)+HD(010,011)= 2+1+1=4;
The only optimization over a purely brute-force solution that I know I can use here is in the individual Hamming distance calculation, as seen here:
int hamming_distance(unsigned x, unsigned y)
{
    int dist = 0;
    unsigned val = x ^ y; // XOR: differing bits are set

    // Count the number of bits set (Brian Kernighan's method)
    while (val != 0)
    {
        // A bit is set, so increment the count and clear the lowest set bit
        dist++;
        val &= val - 1;
    }

    // Return the number of differing bits
    return dist;
}
What's the best way to go about solving this problem?
Here is my C++ implementation, with O(n) complexity and O(1) space.
int sumOfHammingDistance(vector<unsigned>& nums) {
    int n = sizeof(unsigned) * 8; // bits per word
    int len = nums.size();
    vector<int> countOfOnes(n, 0);

    // Count how many numbers have a 1 at each bit position.
    for (int i = 0; i < len; i++) {
        for (int j = 0; j < n; j++) {
            countOfOnes[j] += (nums[i] >> j) & 1;
        }
    }

    // Each bit position contributes ones * zeros differing pairs.
    int sum = 0;
    for (int count : countOfOnes) {
        sum += count * (len - count);
    }
    return sum;
}
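A quick check of the function above against the example from the question (a hypothetical driver, not part of the original answer):

#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<unsigned> nums = {1, 2, 3};
    // HD(001,010) + HD(001,011) + HD(010,011) = 2 + 1 + 1 = 4
    cout << sumOfHammingDistance(nums) << endl; // prints 4
    return 0;
}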
You can consider the bit positions separately. That gives you 32 (or however many bits you have) easier problems, where you still have to calculate the sum of all pairwise Hamming distances, except now it's over 1-bit numbers.
The hamming distance between two 1-bit numbers is their XOR.
And now it has become the easiest case of this problem - it's already split per bit.
So to reiterate the answer to that question: take a bit position, count the number of 0s and the number of 1s, and multiply those to get that position's contribution. Sum those contributions over all bit positions. It's even simpler than the linked problem, because in this problem the weight of every bit's contribution is 1.
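Worked on the example from the question, {001, 010, 011}: bit 0 has two 1s and one 0, contributing 2 · 1 = 2 differing pairs; bit 1 also has two 1s and one 0, contributing 2; bit 2 is 0 in all three numbers and contributes 0. Total: 2 + 2 + 0 = 4, matching the brute-force sum.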
I have a method that finds 3 numbers in an array that add up to a desired number.
code:
public static void threeSum(int[] arr, int sum) {
    quicksort(arr, 0, arr.length - 1); // O(n log n); not actually needed for this brute force
    for (int i = 0; i < arr.length - 2; i++) {
        for (int j = i + 1; j < arr.length - 1; j++) { // start at i + 1 so the three indices stay distinct
            for (int k = arr.length - 1; k > j; k--) {
                if (arr[i] + arr[j] + arr[k] == sum) {
                    System.out.println(arr[i] + "+" + arr[j] + "+" + arr[k] + "=" + sum);
                }
            }
        }
    }
}
I'm not sure about the big O of this method. I have a hard time wrapping my head around this right now. My guess is O(n^2) or O(n^2 log n). But these are complete guesses. I can't prove this. Could someone help me wrap my head around this?
You have three runs over the array (the i, j and k loops), each of whose sizes depends primarily on n, the size of the array. Hence, this is an O(n^3) operation.
Even though your quicksort is O(n log n), it is overshadowed by the three nested for loops. So the time complexity with respect to the number of elements n is O(n^3).
It's O(n^3) because there are three nested for loops. The inner for loop only runs while k > j, so you could count it as n^2 * (n/2), but in big-O notation the constant factor drops out and it's still O(n^3).
Methodically speaking, the order of growth can be inferred accurately by counting the loop iterations (the original post's derivation didn't survive here; the reconstruction below assumes the corrected code where the j loop starts at i + 1):
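\sum_{i=0}^{n-3} \sum_{j=i+1}^{n-2} \sum_{k=j+1}^{n-1} 1 = \binom{n}{3} = \frac{n(n-1)(n-2)}{6} = \Theta(n^3)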
In Objective-C, is there any difference between n++ and ++n (eg. used in a for loop)?
++n; increments the value of n before the expression is evaluated.
n++; increments the value of n after the expression is evaluated.
So compare the results of this
int n = 41;
int o = ++n; //n = 42, o = 42
with the results of this:
int n = 41;
int o = n++; //n = 42, o = 41
In the case of loops:
for (int i = 0; i < j; i++) {/*...*/}
however it doesn't make any difference, unless you had something like this:
for (int i = 0; i < j; x = i++) {/*...*/}
or this:
for (int i = 0; i < j; x = ++i) {/*...*/}
One could say:
It doesn't matter whether to use n++ or ++n as long as no second (related) variable is modified (based on n) within the same expression.
The same rules apply to --n; and n--;, obviously.
++n increments the value before it's used (pre-increment) and n++ increments after (post-increment).
In the context of a for loop, there is no observable difference, as the increment is applied after the code in the loop has been executed.
++n and n++ differ in what the expression evaluates to. An example:
int n = 0;
NSLog(@"%d", n); // 0
NSLog(@"%d", n++); // still 0, increments afterwards
NSLog(@"%d", n); // 1
NSLog(@"%d", ++n); // 2, because it increments first
NSLog(@"%d", n); // 2
In a loop it won't make a difference. Some people say ++n is faster, though.
In Scott Meyers' "More Effective C++" book he makes a very rational case for preferring prefix increment to postfix increment. In a nutshell, in that language, due to operator-overloading facilities, prefix increment is almost always faster. Objective-C doesn't support overloaded operators, but if you have done or ever will do any C++ or Objective-C++ programming, then preferring prefix increment is a good habit to get into.
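A minimal C++ sketch of the point Meyers makes (an illustrative class of my own, not from the book): the canonical postfix operator has to construct and return a temporary copy of the old value, while the prefix operator just mutates and returns a reference.

class Counter {
    int value = 0;
public:
    // Prefix ++c: increment, then return *this by reference. No copy.
    Counter& operator++() {
        ++value;
        return *this;
    }
    // Postfix c++: save a copy of the old state, increment, return the copy.
    // That extra temporary is why postfix can be slower for class types.
    Counter operator++(int) {
        Counter old = *this;
        ++value;
        return old;
    }
};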
Remember that most of the time ++n looks like:
n = n + 1
[do something with n]
Whereas n++ looks like (if used as intended):
register A = n; // copy n
[do something with n]
n = A + 1;
As you can see, the postfix case takes more instructions. In simple for loops, most compilers are smart enough to avoid the copy when it's obvious the un-incremented value isn't going to be used; in that case postfix devolves to the prefix case.
I hope this makes sense. In summary: use prefix unless you really want the "evaluate, then increment" side-effect behavior that you get from the postfix version.
As stated above,
--n decrements the value of n before the expression is evaluated.
n-- decrements the value of n after the expression is evaluated.
The thing to note here is the behavior in while loops.
For example:
n = 5;
while (n--) // runs the loop 5 times
while (--n) // runs the loop 4 times
With n--, the loop runs one extra time: when n == 1, the condition still evaluates to 1 (true) before the decrement.
But with --n, 1 is first decremented to 0 and then evaluated, which ends the while loop.
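A small snippet that makes the difference observable (plain C-family code; the counts assume n starts at 5, as above):

#include <stdio.h>

int main(void) {
    int n = 5, runs = 0;
    while (n--) runs++;         // condition tests 5,4,3,2,1 (true), then 0 (false)
    printf("n--: %d\n", runs);  // prints 5

    n = 5; runs = 0;
    while (--n) runs++;         // condition tests 4,3,2,1 (true), then 0 (false)
    printf("--n: %d\n", runs);  // prints 4
    return 0;
}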