Division using right shift for a divisor that is not a power of 2

I would like to divide num by 60, which is not a power of two, using a right shift operation. How do I do this?
If I want num/64, I can do num >> 6 since 64 = 2^6.
How do I do it for 60?

This should work:
public static final long divisionUsingShift(int x, int y) {
    int a, b, q, counter;
    q = 0;
    if (y != 0) {
        while (x >= y) {
            // find a power-of-two multiple of y that still fits into x
            a = x >> 1;
            b = y;
            counter = 1;
            while (a >= b) {
                b <<= 1;
                counter <<= 1;
            }
            // subtract it from x and accumulate 2^k into the quotient
            x -= b;
            q += counter;
        }
    }
    return q;
}
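If the divisor is a fixed constant like 60, there is also the multiply-by-reciprocal trick that optimizing compilers use: precompute a fixed-point reciprocal and replace the division with one multiply and one shift. Below is a hedged sketch (mine, not part of the answer above) in C++ for non-negative 32-bit values; the constant 2290649225 is ceil(2^37 / 60), and the same idea ports directly to Java using long arithmetic.
#include <cstdint>
#include <cassert>

// Divide a non-negative value by the fixed constant 60 with one multiply and one shift.
// 2290649225 == ceil(2^37 / 60); the unsigned 64-bit product cannot overflow for 32-bit inputs.
static inline uint32_t div60(uint32_t num) {
    return (uint32_t)(((uint64_t)num * 2290649225ULL) >> 37);
}

int main() {
    // quick sanity check against ordinary division
    for (uint32_t n = 0; n < 10000000; ++n)
        assert(div60(n) == n / 60);
    return 0;
}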


Given an array A[], determine the minimum value of the expression: min(abs(a[i] - x), abs(a[i] - y))

Given an array A of N integers, find two integers x and y such that the sum of the absolute differences between each array element and the nearer of the two chosen integers is minimal.
That is, determine the minimum value of the expression
Σ(i=0 to N-1) min(abs(a[i] - x), abs(a[i] - y))
Example 1:
N = 4
A = [2,3,6,7]
Approach
You can choose the two integers, 3 and 7
The required sum = |2-3| + |3-3| + |6-7| + |7-7| = 1+0+1+0 = 2
Example 2:
N = 8
A = [ 2, 3, 5, 8, 11, 14, 17, 996 ]
Approach
You can choose the two integers, 8 and 996
The required sum = |2-8| + |3-8| + |5-8| + |8-8| + |11-8| + |14-8| + |17-8| + |996-996| = 6+5+3+0+3+6+9+0 = 32
Constraints
1<=T<=100
2<=N<=5*10^3
1<=A[i]<=10^5
The sum of N over all test cases does not exceed 5*10^3.
Please help me with an optimal solution, O(N) or O(N log N).
My code (O(N^3) brute force):
int minAbsDiff(vector<int> Arr, int N)
{
    // Approach: O(N^3)
    sort(Arr.begin(), Arr.end());
    int sum = INT_MAX;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
        {
            int tmp_sum = 0;
            for (int k = 0; k < N; k++)
            {
                tmp_sum += min(abs(Arr[k] - Arr[i]), abs(Arr[k] - Arr[j]));
            }
            sum = min(sum, tmp_sum);
        }
    std::cout << "Sum is: " << sum << std::endl;
    return sum;
}
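For what it's worth, here is a hedged sketch of one O(N log N) approach (my own reconstruction, an idea rather than a verified reference solution): after sorting, the elements assigned to the smaller chosen value x form a prefix and the rest go to the larger value y, and for a fixed split the best x and y are the medians of the two parts. With prefix sums the deviation cost of any sorted segment around its median is O(1), so trying every split point is O(N) after the sort.
#include <algorithm>
#include <climits>
#include <vector>
using namespace std;

// Cost of serving the sorted segment Arr[l..r] with its median, using prefix sums
// pre, where pre[i] = Arr[0] + ... + Arr[i-1].
long long segCost(const vector<long long>& pre, const vector<int>& Arr, int l, int r)
{
    int m = (l + r) / 2;                                           // index of a median of the segment
    long long med = Arr[m];
    long long left = med * (m - l + 1) - (pre[m + 1] - pre[l]);    // sum of (med - a) for a <= med
    long long right = (pre[r + 1] - pre[m + 1]) - med * (r - m);   // sum of (a - med) for a > med
    return left + right;
}

long long minAbsDiffFast(vector<int> Arr, int N)
{
    sort(Arr.begin(), Arr.end());
    vector<long long> pre(N + 1, 0);
    for (int i = 0; i < N; i++)
        pre[i + 1] = pre[i] + Arr[i];
    long long best = LLONG_MAX;
    // prefix Arr[0..i] is served by x, suffix Arr[i+1..N-1] by y
    for (int i = 0; i + 1 < N; i++)
        best = min(best, segCost(pre, Arr, 0, i) + segCost(pre, Arr, i + 1, N - 1));
    return best;
}
On the two examples above this gives 2 and 32, matching the brute force.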

How to not parallelize inner loops in OpenACC

I am a beginner at GPU programming with OpenACC. I was trying to do a direct convolution. The convolution consists of 6 nested loops, and I only want the first loop to be parallelized, so I gave the pragma #pragma acc loop for the first loop and #pragma acc loop seq for the rest. But the output that I am getting is not correct. Is my approach to parallelizing the loop correct? Specifications for the convolution: input channels: 3, input size: 224x224x3, output channels: 64, output size: 111x111x64, filter size: 3x3x3x64. Following is the link to the header files dog.h and squeezenet_params.h: https://drive.google.com/drive/folders/1a9XRjBTrEFIorrLTPFHS4atBOPrG886i
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "squeezenet_params.h"
#include "dog.h"

void conv3x3(
    const int input_channels, const int input_size,
    const int pad, const int stride, const int start_channel,
    const int output_size, const float* restrict input_im, const float* restrict filter_weight,
    const float* restrict filter_bias, float* restrict output_im){
  #pragma acc data copyin (input_im[0:150527],filter_weight[0:1727],filter_bias[0:63]) copyout(output_im[0:788543])
  {
    #pragma acc parallel
    {
      #pragma acc loop
      for(int p = 0; p < 64; ++p){
        filter_weight += p * input_channels * 9;
        float bias = filter_bias[p];
        output_im += (start_channel + p) * output_size * output_size;
        //loop over output feature map
        #pragma acc loop seq
        for(int i = 0; i < output_size; i++)
        {
          #pragma acc loop seq
          for(int j = 0; j < output_size; j++)
          {
            //compute one element in the output feature map
            float tmp = bias;
            //compute dot product of 2 input_channels x 3 x 3 matrix
            #pragma acc loop seq
            for(int k = 0; k < input_channels; k++)
            {
              #pragma acc loop seq
              for(int l = 0; l < 3; l++)
              {
                int h = i * stride + l - pad;
                #pragma acc loop seq
                for(int m = 0; m < 3; m++)
                {
                  int w = j * stride + m - pad;
                  if((h >= 0) && (h < input_size) && (w >= 0) && (w < input_size))
                  {
                    tmp += input_im[k * input_size * input_size + (i * stride + l - pad) * input_size + j * stride + m - pad]
                           * filter_weight[9 * k + 3 * l + m];
                  }
                }
              }
            }
            //add relu activation after conv
            output_im[i * output_size + j] = (tmp > 0.0) ? tmp : 0.0;
          }
        }
      }
    }
  }
}

void main(){
  float * result = (float*)malloc(sizeof(float) * (1 * 64 * 111 * 111));
  conv3x3(3,224,0,2,0,111,sample,conv1_weight,conv1_bias,result);
  for(int i = 0; i < 64 * 111 * 111; ++i){
    //if(result[i]>0)
    printf("%f:%d\n",result[i],i);
  }
}
The poster asked the same question on the PGI User Forums, where I've answered it (see: https://www.pgroup.com/userforum/viewtopic.php?f=4&t=7614). The premise of the title is incorrect: the inner loops are not being parallelized, nor are they the cause of the issue.
The problem here is that the code has a race condition on the shared "output_im" pointer. My suggested solution is to compute a per-thread offset into the array rather than trying to manipulate the pointer itself.
for(int p = 0; p < 64; ++p){
    filter_weight += p * input_channels * 9;
    float bias = filter_bias[p];
    int offset;
    offset = (start_channel + p) * output_size * output_size;
    //loop over output feature map
    #pragma acc loop vector collapse(2)
    for(int i = 0; i < output_size; i++)
    {
        for(int j = 0; j < output_size; j++)
        {
            ... cut ...

            //add relu activation after conv
            int idx = offset + (i * output_size + j);
            output_im[idx] = (tmp > 0.0) ? tmp : 0.0;
        }
    }
}

Is the Time Complexity of this function O(n * (n * log n² ))

What is the Time Complexity of the function below? n > 0
Function fun(n){
    Let count = 0;
    For( I = 0; I < n; I++){
        For(j = 0; j < n; j /= 2) {
            For(h = 0; h < n; h /= 2) {
                Count = count + 1;
            }
        }
    }
    Return count;
}
I have O(n * (n * log n²)), but something tells me I might be wrong.

The loop above is an infinite loop, so its time complexity cannot be determined unless the problem statement is corrected.
Function fun(n){
    Let count = 0;
    For( I = 0; I < n; I++){
        // will run infinitely even if you change j /= 2 to j *= 2, because the initial value is 0
        For(j = 0; j < n; j /= 2) {
            // will run infinitely even if you change h /= 2 to h *= 2, because the initial value is 0
            For(h = 0; h < n; h /= 2) {
                Count = count + 1;
            }
        }
    }
    Return count;
}
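For reference, if the middle loops were meant to grow j and h (a common intent for this style of exercise), a hypothetical corrected version, starting j and h at 1 and doubling them, would look like the sketch below. Each inner loop then runs about log2(n) times, so count grows as n * log2(n) * log2(n) and the complexity is O(n log² n), not O(n * (n * log n²)).
long long fun(long long n)
{
    long long count = 0;
    for (long long i = 0; i < n; i++)              // n iterations
        for (long long j = 1; j < n; j *= 2)       // about log2(n) iterations
            for (long long h = 1; h < n; h *= 2)   // about log2(n) iterations
                count = count + 1;
    return count;
}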

Divide integer by 16 without using division or cast

OKAY... let me rephrase this question...
How can I obtain x 16ths of an integer without using division or casting to double....
int res = (ref * frac) >> 4;
(but worry a bit about overflow: how big can ref and frac get? If the product could overflow, cast to a longer integer type first)
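To make the "cast to a longer integer type first" advice concrete, a minimal sketch (my own, assuming 32-bit int and 64-bit long long):
// widen before multiplying so ref * frac cannot overflow 32 bits, then shift right by 4
// note: for negative products, >> rounds toward negative infinity, unlike /16
int res = (int)(((long long)ref * frac) >> 4);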
In any operation of this kind it makes sense to multiply first, then divide. Now, if your operands are integers and you are using a compiled language (e.g. C), use shr 4 instead of /16; this will save some processor cycles.
Assuming everything here is an int, any optimizing compiler worth its salt will notice 16 is a power of two and shift accordingly, so long as optimizations are turned on. Worry more about major optimizations the compiler can't do for you.
If anything, you should bracket ref * frac and then do the divide, since any value of frac less than 16 would give 0 if divided (or shifted) first.
You can use left shift or right shift:
public static final long divisionUsingMultiplication(int a, int b) {
    int temp = b;
    int counter = 0;
    while (temp <= a) {
        temp = temp << 1;
        counter++;
    }
    a -= b << (counter - 1);
    long result = (long)Math.pow(2, counter - 1);
    if (b <= a) result += divisionUsingMultiplication(a, b);
    return result;
}

public static final long divisionUsingShift(int a, int b) {
    int absA = Math.abs(a);
    int absB = Math.abs(b);
    int x, y, counter;
    long result = 0L;
    while (absA >= absB) {
        x = absA >> 1;
        y = absB;
        counter = 1;
        while (x >= y) {
            y <<= 1;
            counter <<= 1;
        }
        absA -= y;
        result += counter;
    }
    return (a > 0 && b > 0 || a < 0 && b < 0) ? result : -result;
}
I don't understand the constraint, but this pseudocode rounds up (e.g. ref = 10, frac = 2 gives temp = 20 and res = 2, i.e. ceil(20/16)):
res = 0
ref = 10
frac = 2
denominator = 16
temp = frac * ref
while temp > 0
    temp -= denominator
    res += 1
repeat
echo res

What is the best way to add two numbers without using the + operator?

A friend and I are going back and forth with brain-teasers, and I have no idea how to solve this one. My assumption is that it's possible with some bitwise operators, but I'm not sure.
In C, with bitwise operators:
#include <stdio.h>

int add(int x, int y) {
    int a, b;
    do {
        a = x & y;
        b = x ^ y;
        x = a << 1;
        y = b;
    } while (a);
    return b;
}

int main( void ){
    printf( "2 + 3 = %d", add(2,3));
    return 0;
}
XOR (x ^ y) is addition without carry. (x & y) is the carry-out from each bit. (x & y) << 1 is the carry-in to each bit.
The loop keeps adding the carries until the carry is zero for all bits.
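For example, tracing add(2, 3): x = 010, y = 011. First pass: a = x & y = 010, b = x ^ y = 001, so x becomes a << 1 = 100 and y becomes 001. Second pass: a = 100 & 001 = 000 and b = 100 ^ 001 = 101; the carry a is now zero, so the loop ends and b = 101 = 5 is returned.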
int add(int a, int b) {
    const char *c = 0;
    return &(&c[a])[b];
}
No + right?
int add(int a, int b)
{
    return -(-a) - (-b);
}
CMS's add() function is beautiful. It should not be sullied by unary negation (a non-bitwise operation, tantamount to using addition: -y==(~y)+1). So here's a subtraction function using the same bitwise-only design:
int sub(int x, int y) {
    unsigned a, b;
    do {
        a = ~x & y;
        b = x ^ y;
        x = b;
        y = a << 1;
    } while (a);
    return b;
}
Define "best". Here's a python version:
len(range(x)+range(y))
The + performs list concatenation, not addition. (Note this relies on Python 2, where range() returns a list; in Python 3 you would need list(range(x)) + list(range(y)).)
Java solution with bitwise operators:
// Recursive solution
public static int addR(int x, int y) {
    if (y == 0) return x;
    int sum = x ^ y;          // SUM of two integers is X XOR Y
    int carry = (x & y) << 1; // CARRY of two integers is X AND Y, shifted left by one
    return addR(sum, carry);
}

// Iterative solution
public static int addI(int x, int y) {
    while (y != 0) {
        int carry = (x & y);  // CARRY is AND of the two values
        x = x ^ y;            // SUM of two bits is X XOR Y
        y = carry << 1;       // shift the carry left by 1 bit before adding it in
    }
    return x;
}
Cheat. You could negate the number and subtract it from the first :)
Failing that, look up how a binary adder works. :)
EDIT: Ah, saw your comment after I posted.
Details of binary addition are here.
Note, this would be for an adder known as a ripple-carry adder, which works, but does not perform optimally. Most binary adders built into hardware are a form of fast adder such as a carry-look-ahead adder.
My ripple-carry adder works for both unsigned and 2's complement integers if you set carry_in to 0, and 1's complement integers if carry_in is set to 1. I also added flags to show underflow or overflow on the addition.
#define BIT_LEN 32
#define ADD_OK 0
#define ADD_UNDERFLOW 1
#define ADD_OVERFLOW 2

int ripple_add(int a, int b, char carry_in, char* flags) {
    int result = 0;
    int current_bit_position = 0;
    char a_bit = 0, b_bit = 0, result_bit = 0;
    /* keep going while either operand still has bits or a carry is pending */
    while ((a || b || carry_in) && current_bit_position < BIT_LEN) {
        a_bit = a & 1;
        b_bit = b & 1;
        result_bit = (a_bit ^ b_bit ^ carry_in);
        result |= result_bit << current_bit_position++;
        carry_in = (a_bit & b_bit) | (a_bit & carry_in) | (b_bit & carry_in);
        a >>= 1;
        b >>= 1;
    }
    if (current_bit_position < BIT_LEN) {
        *flags = ADD_OK;
    }
    else if (a_bit & b_bit & ~result_bit) {
        *flags = ADD_UNDERFLOW;
    }
    else if (~a_bit & ~b_bit & result_bit) {
        *flags = ADD_OVERFLOW;
    }
    else {
        *flags = ADD_OK;
    }
    return result;
}
Go based solution
func add(a int, b int) int {
    for {
        carry := (a & b) << 1
        a = a ^ b
        b = carry
        if b == 0 {
            break
        }
    }
    return a
}
The same solution can be implemented in Python as follows, but there is a complication with how numbers are represented: Python integers are not limited to 32 bits, so we use a mask to keep only the last 32 bits.
E.g., without the mask we would never get a result for the inputs (-1, 1).
def add(a, b):
    mask = 0xffffffff
    while b & mask:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return (a & mask)
Why not just increment the first number as many times as the second number?
The reason ADD is implemented in assembler as a single instruction, rather than as some combination of bitwise operations, is that it is hard to do. You have to worry about the carries from a given low-order bit to the next higher-order bit. This is stuff that machines do fast in hardware, but that even in C you can't do fast in software.
Here's a portable one-line ternary and recursive solution.
int add(int x, int y) {
    return y == 0 ? x : add(x ^ y, (x & y) << 1);
}
I saw this as problem 18.1 in the coding interview.
My python solution:
def foo(a, b):
    """iterate through a and b, count iterations via a list, check len"""
    x = []
    for i in range(a):
        x.append(a)
    for i in range(b):
        x.append(b)
    print len(x)
This method uses iteration, so the time complexity isn't optimal.
I believe the best way is to work at a lower level with bitwise operations.
In python using bitwise operators:
def sum_no_arithmetic_operators(x, y):
    while True:
        carry = x & y
        x = x ^ y
        y = carry << 1
        if y == 0:
            break
    return x
Adding two integers is not that difficult; there are many examples of binary addition online.
A more challenging problem is floating point numbers! There's an example at http://pages.cs.wisc.edu/~smoler/x86text/lect.notes/arith.flpt.html
Was working on this problem myself in C# and couldn't get all test cases to pass. I then ran across this.
Here is an implementation in C# 6:
public int Sum(int a, int b) => b != 0 ? Sum(a ^ b, (a & b) << 1) : a;
Implemented the same way we might do binary addition on paper.
int add(int x, int y)
{
    int t1_set, t2_set;
    int carry = 0;
    int result = 0;
    int mask = 0x1;
    while (mask != 0) {
        t1_set = x & mask;
        t2_set = y & mask;
        if (carry) {
            if (!t1_set && !t2_set) {
                carry = 0;
                result |= mask;
            } else if (t1_set && t2_set) {
                result |= mask;
            }
        } else {
            if ((t1_set && !t2_set) || (!t1_set && t2_set)) {
                result |= mask;
            } else if (t1_set && t2_set) {
                carry = 1;
            }
        }
        mask <<= 1;
    }
    return (result);
}
An improved version for speed would be:
int add_better (int x, int y)
{
    int b1_set, b2_set;
    int mask = 0x1;
    int result = 0;
    int carry = 0;
    while (mask != 0) {
        b1_set = x & mask ? 1 : 0;
        b2_set = y & mask ? 1 : 0;
        if ((b1_set ^ b2_set) ^ carry)
            result |= mask;
        carry = (b1_set & b2_set) | (b1_set & carry) | (b2_set & carry);
        mask <<= 1;
    }
    return (result);
}
This is my implementation in Python. It works well when we know the number of bytes (or bits).
def summ(a, b):
    # for 4 bytes (or 4*8 bits)
    max_num = 0xFFFFFFFF
    while a != 0:
        a, b = ((a & b) << 1), (a ^ b)
        if a > max_num:
            b = (b & max_num)
            break
    return b
You can do it using bit-shifting and the AND operation.
#include <stdio.h>

int main()
{
    unsigned int x = 3, y = 1, sum, carry;
    sum = x ^ y;    // Ex-OR x and y
    carry = x & y;  // AND x and y
    while (carry != 0) {
        carry = carry << 1;  // left shift the carry
        x = sum;             // initialize x as sum
        y = carry;           // initialize y as carry
        sum = x ^ y;         // sum is calculated
        carry = x & y;       /* carry is calculated, the loop condition is
                                evaluated and the process is repeated until
                                carry is equal to 0. */
    }
    printf("%d\n", sum);     // the program will print 4
    return 0;
}
The most upvoted answer will not work if the inputs are of opposite sign. The following, however, will. I have cheated in one place, but only to keep the code a bit clean. Any suggestions for improvement are welcome.
def add(x, y):
    if (x >= 0 and y >= 0) or (x < 0 and y < 0):
        return _add(x, y)
    else:
        return __add(x, y)

def _add(x, y):
    if y == 0:
        return x
    else:
        return _add((x ^ y), ((x & y) << 1))

def __add(x, y):
    if x < 0 < y:
        x = _add(~x, 1)
        if x > y:
            diff = -sub(x, y)
        else:
            diff = sub(y, x)
        return diff
    elif y < 0 < x:
        y = _add(~y, 1)
        if y > x:
            diff = -sub(y, x)
        else:
            diff = sub(x, y)
        return diff
    else:
        raise ValueError("Invalid Input")

def sub(x, y):
    if y > x:
        raise ValueError('y must be less than x')
    while y > 0:
        b = ~x & y
        x ^= y
        y = b << 1
    return x
Here is the solution in C++; you can find it on my GitHub here: https://github.com/CrispenGari/Add-Without-Integers-without-operators/blob/master/main.cpp
int add(int a, int b){
    while(b != 0){
        int sum = a ^ b;          // add without carrying
        int carry = (a & b) << 1; // carrying without adding
        a = sum;
        b = carry;
    }
    return a;
}

// the function can also be written recursively as follows:
int add(int a, int b){
    if(b == 0){
        return a; // any number plus 0 = that number, simple!
    }
    int sum = a ^ b;          // adding without carrying
    int carry = (a & b) << 1; // carry, without adding
    return add(sum, carry);
}
This can be done using a half adder.
A half adder is a method for finding the sum of two single-bit numbers.

A  B  SUM  CARRY  A & B  A ^ B
0  0   0     0      0      0
0  1   1     0      0      1
1  0   1     0      0      1
1  1   0     1      1      0

We can observe here that SUM = A ^ B and CARRY = A & B.
We know the CARRY is always added one position to the left of where it was generated,
so now add (CARRY << 1) to SUM, and repeat this process until the carry becomes 0.
int Addition(int a, int b)
{
    if (b == 0)
        return a;
    return Addition(a ^ b, (a & b) << 1);
}
Let's add 7 (0111) and 3 (0011); the answer will be 10 (1010).
A = 0100 and B = 0110
A = 0010 and B = 1000
A = 1010 and B = 0000
The final answer is A.
I implemented this in Swift; I am sure someone will benefit from it.
var a = 3
var b = 5
var sum = 0
var carry = 0

while (b != 0) {
    sum = a ^ b
    carry = a & b
    a = sum
    b = carry << 1
}

print(sum)
You can do it iteratively or recursively.
Recursive:
public int getSum(int a, int b) {
    return (b == 0) ? a : getSum(a ^ b, (a & b) << 1);
}
Iterative:
public int getSum(int a, int b) {
    int c = 0;
    while (b != 0) {
        c = a & b;
        a = a ^ b;
        b = c << 1;
    }
    return a;
}
Time complexity: O(w), where w is the number of bits in an int (the carry moves left by at least one bit each iteration).
Space complexity: O(1).
For further clarification, refer to the LeetCode or GeeksforGeeks explanations.
I'll interpret this question as forbidding the +,-,* operators but not ++ or -- since the question specified operator and not character (and also because that's more interesting).
A reasonable solution using the increment operator is as follows:
int add(int a, int b) {
    if (b == 0)
        return a;
    if (b > 0)
        return add(++a, --b);
    else
        return add(--a, ++b);
}
This function recursively nudges b towards 0, while giving a the same amount to keep the sum the same.
As an additional challenge, let's get rid of the second if block to avoid a conditional jump. This time we'll need to use some bitwise operators:
int add(int a, int b) {
    if(!b)
        return a;
    int gt = (b > 0);
    int m = -1 << (gt << 4) << (gt << 4);
    return (++a & --b & 0)
         | add( (~m & a--) | (m & --a),
                (~m & b++) | (m & ++b) );
}
The function trace is identical; a and b are nudged between each add call just like before.
However, some bitwise magic is employed to drop the if statement while continuing to not use +,-,*:
Given the shift trick, the mask m ends up as 0x00000000 when b is positive (gt is 1, so -1 is shifted left by 32 bits in total) and as 0xFFFFFFFF (-1 in signed decimal) when b is negative (gt is 0, so no shifting happens).
The reason for shifting the mask left by 16 twice instead of a single shift left by 32 is that shifting by an amount greater than or equal to the size of the value is undefined behavior.
The final return takes a bit of thought to fully appreciate:
Consider this technique to avoid a branch when deciding between two values. Of the values, one is multiplied by the boolean while the other is multiplied by the inverse, and the results are summed like so:
double naiveFoodPrice(int ownPetBool) {
    if(ownPetBool)
        return 23.75;
    else
        return 10.50;
}

double conditionlessFoodPrice(int ownPetBool) {
    double result = ownPetBool*23.75 + (!ownPetBool)*10.50;
    return result;
}
This technique works great in most cases. For us, the addition operator can easily be replaced by the bitwise or (|) operator without changing the behavior.
The multiplication operator is also not allowed for this problem. This is the reason for our earlier mask value - a bitwise and & with the mask will achieve the same effect as multiplying by the original boolean.
The nature of the unary increment and decrement operators halts our progress.
Normally, we would easily be able to choose between an a which was incremented by 1 and an a which was decremented by 1.
However, because the increment and decrement operators modify their operand, our conditionless code will end up always performing both operations - meaning that the values of a and b will be tainted before we finish using them.
One way around this is to simply create new variables which each contain the original values of a and b, allowing a clean slate for each operation. I consider this boring, so instead we will adjust a and b in a way that does not affect the rest of the code (++a & --b & 0) in order to make full use of the differences between x++ and ++x.
We can now get both possible values for a and b, as the unary operators modifying the operands' values now works in our favor. Our techniques from earlier help us choose the correct versions of each, and we now have a working add function. :)
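For completeness, here is a sketch of the "boring" variant mentioned above (my own illustration, not part of the original answer): copy a and b into separate incremented/decremented variables so the side effects cannot interfere, and keep the same mask-based selection. It is written with an unsigned shift so that shifting -1 is avoided, and it still assumes a 32-bit int like the original.
int add_copies(int a, int b) {
    if (!b)
        return a;
    int gt = (b > 0);
    /* mask is 0 when b is positive, all ones when b is negative */
    int m = (int)(~0u << (gt << 4) << (gt << 4));
    int a_up = a, a_down = a, b_up = b, b_down = b;
    ++a_up; --a_down; ++b_up; --b_down;
    /* b > 0: recurse with (a + 1, b - 1); b < 0: recurse with (a - 1, b + 1) */
    return add_copies((~m & a_up) | (m & a_down),
                      (~m & b_down) | (m & b_up));
}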
Python codes:
(1)
add = lambda a, b: -(-a) - (-b)
uses a lambda with the unary '-' operator
(2)
add = lambda a, b: len(list(map(lambda x: x, (i for i in range(-a, b)))))
counts the elements of range(-a, b), which has a + b elements (so this only works when a + b >= 0)