One position right barrel shift using ALU Operators?

I was wondering if there is an efficient way to perform a shift right on an 8-bit binary value using only ALU operators (NOT, OR, AND, XOR, ADD, SUB).
Example:
input: 00110101
output: 10011010
I have been able to implement a shift left just by adding the 8-bit binary value to itself, since a shift left is equivalent to multiplying by 2. However, I can't think of a way to do this for a shift right.
The only method I have come up with so far is to just perform 7 left barrel shifts. Is this the only way?
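(For reference, the shift-left-by-add trick described above, as a minimal C sketch; uint8_t wraparound arithmetic is assumed and the function name is mine:)

#include <stdint.h>

/* Shifting left by one is the same as adding the value to itself:
   the old MSB falls off the top and a 0 comes in at the bottom. */
uint8_t shift_left_1(uint8_t x) {
    return (uint8_t)(x + x);
}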

It's trivial to see that this cannot be done with {AND, OR, XOR, NOT}: for all of these operators, outbit[N] depends only on inbit1[N] and inbit2[N]. ADD extends that with a dependency on inbit1[N-1]..inbit1[0] and inbit2[N-1]..inbit2[0] through the carry chain, i.e. only on lower bits, never on higher ones. In your case, however, you require outbit[N] to depend on inbit[N+1]. Therefore, if there is any solution, it must include SUB.
However, A - B is just A + (-B), which is A + ((B XOR 11111111) + 1). Hence, if there were a solution using SUB, it could be rewritten as a solution using ADD and XOR instead. As we've shown, those operators are insufficient, so the full set {AND, OR, XOR, NOT, ADD, SUB} is insufficient too.
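That rewrite step (SUB expressed as ADD plus XOR) can be verified by brute force over all 8-bit values; a minimal C sketch, with variable names of my choosing:

#include <assert.h>
#include <stdint.h>

int main(void) {
    /* Check A - B == A + ((B XOR 0xFF) + 1) for every 8-bit A and B. */
    for (int a = 0; a < 256; a++) {
        for (int b = 0; b < 256; b++) {
            uint8_t sub_form = (uint8_t)(a - b);
            uint8_t add_form = (uint8_t)(a + (uint8_t)((b ^ 0xFF) + 1));
            assert(sub_form == add_form);
        }
    }
    return 0;
}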

Related

X and Y inputs in LabVIEW

I am new to LabVIEW and I am trying to read code written in LabVIEW. The block diagram is this:
This is the program that feeds x and y functions into the voltage input. It is meant to give an input voltage in different forms (sine, heart shape, etc.) to the x and y axes of a fast-steering mirror or galvano mirror.
The x and y function controls are for inputting a formula for a function; we then use the "evaluation single value" function to feed the result into a DAQ Assistant.
I understand that { 2*(|-Mpi|)/N }*i + -Mpi*pi goes into the x value. However, I don't understand why we use this kind of formula. Why do we need to assign a negative value and then take the absolute value of -M*pi? Also, I don't understand why we need to divide by N and then multiply by i. And finally, why do we need to add -Mpi again? If you can provide any hints, I would really appreciate it.
This is just a complicated way to write the code/formula. Given what the code looks like (unnecessary wire bends, duplicate loop-input-tunnels, hidden wires, unnecessary coercion dots, failure to use appropriate built-in 'negate' function) not much care has been given in writing it. So while it probably yields the correct results you should not expect it to do so in the most readable way.
To answer your specific questions:
Why we need to assign a negative value and then do the absolute value
We don't. We can just move the negation to immediately before the last addition, or change that addition to a subtraction:
{ 2*(|Mpi|)/N }*i - Mpi*pi
And as @yair pointed out: we are not assigning a value here, we are basically flipping the sign of whatever value the user entered.
Why we need to divide by N and then multiply by i
This gives you a fraction between 0 and 1, no matter how many steps you do in your for-loop. Think of N as a sampling rate. I.e. your mirrors will always do the same movement, but a larger N just produces more steps in between.
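A minimal C sketch of the same idea (my reconstruction, not the actual LabVIEW code; I drop the extra *pi factor that is questioned below):

#include <math.h>
#include <stdio.h>

int main(void) {
    double Mpi = 2.0;  /* user-entered magnitude (example value) */
    int N = 8;         /* sampling rate: a larger N adds steps, same sweep */
    for (int i = 0; i <= N; i++) {
        double frac = (double)i / N;              /* always runs 0.0 .. 1.0 */
        double x = 2.0 * fabs(Mpi) * frac - Mpi;  /* sweeps -Mpi .. +Mpi */
        printf("%f\n", x);
    }
    return 0;
}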
Why need to add -Mpi again
I would strongly assume this is some kind of quick-and-dirty workaround for a bug that was never fixed properly. Looking at the code, it seems this -Mpi*pi term was added later in the development process. And while I don't know what the expected values are, I would believe that multiplying only one of the summands by pi is probably wrong.

Is there any "modulo" equivalent representation for XNOR?

I don't know if this is the right home for this question, but since this kind of explanation is used in programming I am posting it here. If I am wrong, please say so in this post and I will move the question to another site.
So after studying digital logic I came to know that each of the logic gates has an equivalent in programming: AND, OR and NOT have their separate operators. And for XOR I have been told that it is equivalent to modulo 2 of the number of inputs in an actual logic gate. But what about XNOR? Is there any representation of this type for XNOR? And is that explanation generalized? 'Modulo 2' generalizes to 3, 4, or any n inputs. Does this apply to XNOR as well?
Just to be clear here, xor is actually equivalent to modulo 2 of the number of true inputs. Your original statement (without the word "true") could be misread as including both true and false inputs.
In any case, since xnor is simply the complement of xor, it's a relatively simple matter to invert the outgoing value by injecting one extra truth value. In pseudo-code (written here as runnable Python):

def xor(inputs):
    return sum(inputs) % 2          # parity of the number of true inputs

def xnor(inputs):
    return (sum(inputs) + 1) % 2    # one extra truth value flips the parity
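For example, xor([1, 1, 0]) yields 0 (an even number of true inputs), while xnor([1, 1, 0]) yields 1; the same holds for any number of inputs.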

Conditional instructions in AVX2

Can you give the list of conditional instructions available in AVX2?
So far I've found the following:
_mm256_blendv_* for selection from a and b based on mask c
Is there something like conditional multiply and conditional add, etc.?
Also, for instructions taking an imm8 count (like _mm256_blend_*), could you explain how to get that imm8 after a vector comparison?
The Intel Intrinsics Guide suggests gather, load and store operations with a mask. The immediate imm8 in blend_epi16 is not programmable at run time, unless self-modifying code or a jump table is considered an option. It is still possible to derive a control mask using pext from BMI2 to compact half of the odd-positioned bits from the result of movemask: movemask yields 32 independent mask bits in AVX2, but blend_epi16 uses each bit to control four bytes, i.e. one 16-bit element in each 128-bit lane.
AVX512 introduces optional zero-masking and merge-masking for almost all instructions.
Before that, to do a conditional add, mask one operand (with vandps or vandnps for the inverse) before the add (instead of vblendvps on the result). This is why packed-compare instructions/intrinsics produce all-zero or all-one elements.
0.0 is the additive identity element, so adding it is a no-op. (Almost: under IEEE semantics, -0.0 + 0.0 produces +0.0 in the default rounding mode, so a -0.0 input comes out as +0.0.)
Masking a constant input instead of blending the result avoids making the critical path longer, for something like conditionally adding 1.0.
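A minimal intrinsics sketch of that mask-the-constant pattern, assuming single-precision floats and a greater-than condition (the function and parameter names are mine):

#include <immintrin.h>

/* v[i] += 1.0f wherever v[i] > threshold[i]; other elements are unchanged
   (except that a -0.0 element becomes +0.0, as noted above). */
__m256 conditional_add_1(__m256 v, __m256 threshold) {
    __m256 mask   = _mm256_cmp_ps(v, threshold, _CMP_GT_OQ);   /* all-ones or all-zeros */
    __m256 addend = _mm256_and_ps(mask, _mm256_set1_ps(1.0f)); /* 1.0f or 0.0f */
    return _mm256_add_ps(v, addend);                           /* adding 0.0f is a no-op */
}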
Conditional multiply is more cumbersome because 0.0 is not the multiplicative identity. You need to multiply by 1.0 to keep a value unchanged, and you can't easily produce that with an AND or ANDN with a compare result. You can blendv an input, or you can do the multiply and blendv the output.
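For comparison, a sketch of the blend-an-input approach for a conditional multiply (names are mine; mask is assumed to come from a packed compare):

#include <immintrin.h>

/* v[i] *= factor[i] where the mask element is set; multiply by 1.0f elsewhere. */
__m256 conditional_mul(__m256 v, __m256 factor, __m256 mask) {
    __m256 f = _mm256_blendv_ps(_mm256_set1_ps(1.0f), factor, mask);
    return _mm256_mul_ps(v, f);
}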
The alternative to blendv is at least 3 booleans, like AND/ANDN/OR, but that's usually not worth it. Note that Haswell runs vblendvps and vpblendvb as 2 uops for port 5, so it's a potential bottleneck compared to using integer booleans that can run on any port. Skylake runs them as 2 uops that can go to any port. It can still make sense to do something to keep a blendv off the critical path, though.
Masking an input operand or blending the result is generally how you do branchless SIMD conditionals.
BLENDV is usually at least 2 uops, so it's slower than an AND.
Immediate blends are much more efficient, but you can't use them, because the imm8 blend control has to be a compile-time constant embedded into the instruction's machine code. That's what immediate means in an assembly-language context.

Is it possible to separate the concepts of precedence and associativity in yacc?

I would like to have a clear example of precedence and one of associativity in yacc, but I still find myself having trouble separating these two concepts.
Perhaps this is because I associate both concepts with math and mathematical operations. These are two old examples I built:
Associativity (*) is used to specify the kind of association to be applied (left, right, non-assoc...).
In fact
%left '+' '*'
instructs that plus and multiplication are left-associative. So far, so good. (Not exactly, but it serves the purpose of the example.)
Precedence (**) is used to give precedence to one operator over another.
%left '+'
%left '*'
multiplication has higher precedence than addition (in yacc, precedence declarations on later lines bind tighter).
So we get the wanted parse for E+E*E:
E+(E*E) in case of (**)
(E+E)*E in case of only (*) --> this is clearly wrong, but it's fine for the example
So the question is: can I clearly separate associativity from precedence without using the concept of associativity?
Even non-associativity implies knowledge of associativity… so how, if possible, can I talk about them separately?
No. In a parser definition, associativity is just a small detail within the precedence algorithm.
To understand that, it's important to understand what precedence actually means, in parsing terms.
A left-to-right shift-reduce parser has a stack and an input stream. Initially, the stack is empty, and the input stream contains the input to be parsed. The SR parser repeatedly does one of the following two actions until the stack consists only of the start symbol and the input stream is empty (in which case the parse has succeeded), or neither action is possible (in which case the parse has failed):
reduce the production whose right-hand side is on the top of the stack by popping the right-hand side off of the stack and pushing the left-hand side non-terminal;
shift one input symbol from the input onto the stack.
It's an important feature of this framework that reductions can only occur when the production's right-hand side is on the top of the stack.
The shift action is always possible unless the input stream is exhausted, but a reduce action can only be taken if the top of the stack precisely matches the right-hand side of some production.
Different ways of building SR parsers involve different mechanisms for deciding which action to take in any given stack configuration. One such mechanism is the precedence algorithm. Some very simple languages can be SR parsed using nothing more than the precedence algorithm. In other cases, it can be used as an auxiliary decision algorithm to resolve ambiguous grammar specifications; this is the use case for precedence in yacc-derived parser generators.
For precedence to work, it is necessary that at most one reduction action be possible in any stack configuration, which means that there cannot be two productions with the same right-hand side. [Note 1]
Given that there is at most one possible reduction action and at most one possible shift action (since the next input symbol, if any, is given), the only issue is deciding whether to shift or reduce. The precedence algorithm involves a precedence function PREC(A→α, a) ⇒ { SHIFT, REDUCE }, whose arguments are a production A→α and a terminal symbol a, which are mapped onto either SHIFT or REDUCE.
Although the precedence relationship is usually written as though it were a comparison, it is not a normal comparison operator because the two arguments are from different domains. It always involves a production and a terminal.
In simple cases, however, it is possible to implement PREC using numeric comparisons. To do that, we define two functions which map productions and terminals, respectively, onto integers: f(A→α) and g(a). We use those to compute PREC:
PREC(A→α, a) ≡
    REDUCE if f(A→α) > g(a)
    SHIFT  if f(A→α) < g(a)
[Note 2]
In any event, the precedence algorithm for a given stack configuration is:
Identify the production P (=A→α) of the possible reduce action, if any.
If only a shift or only a reduce is possible, do that. Otherwise, if both a reduce and a shift are possible, compute PREC(P, input) and reduce using P if the result is REDUCE; otherwise, shift input.
Now that might seem confusing, since most descriptions of precedence relations describe them as though they compared terminals, rather than a production with a terminal. That's because it is normal to "name" each production using the last terminal in the production. Usually that is unambiguous, because of the restriction on production right-hand sides: since any two right-hand sides must differ, it is likely that they all end with different terminal symbols. [Note 3]
Although that short-hand allows us to say, for example, that "* has higher precedence than +" instead of the somewhat more cumbersome "the production E→E*E has precedence over the terminal +", it is important to remember that the latter statement is what we really mean.
Precedence also applies between an operator and itself. With most operators, we prefer to group from left to right, so that E-E-E should be parsed as though it had been written (E-E)-E. However, some operators, like exponentiation, group to the right, meaning that E**E**E should be parsed as E**(E**E). This is simple to define using the PREC function; for a left-grouping operator ⊕, we'll have:
PREC(E→E⊕E, ⊕) ≡ REDUCE
while a right-grouping operator ⊗ would have
PREC(E→E⊗E, ⊗) ≡ SHIFT
That's clear when we use the actual arguments to PREC, but it becomes confusing when we use the shorthand notation, which leaves us trying to say that ⊕ has higher precedence than ⊕ while ⊗ has lower precedence than ⊗. To avoid the ambiguity and still let us get away with the shorthand, we describe ⊕ as "left-associative" (%left) and ⊗ as "right-associative" (%right). But the implementation is simply an application of the normal precedence algorithm.
As an example, consider the simple expression language:
E → E + E
E → E * E
E → E ** E
E → id
Here we expect * to bind more tightly than + with ** binding tightest; the first two group to the left while exponentiation groups to the right. To achieve that, we can assign f and g functions as follows:
Production     f(Production)   Terminal   g(Terminal)
E → E + E      2               +          1
E → E * E      4               *          3
E → E ** E     5               **         6
E → id         8               id         7
Yacc-generated parsers don't use precedence to decide when to reduce the E→id production, but the above will work since this grammar can be parsed completely using only the precedence algorithm.
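To make the table concrete, here is a minimal C sketch of the PREC decision using those f/g values (the enum and function names are mine):

#include <stdio.h>

enum Prod { P_ADD, P_MUL, P_POW, P_ID };   /* E → E + E, E → E * E, E → E ** E, E → id */
enum Term { T_PLUS, T_STAR, T_POW, T_ID }; /* +, *, **, id */

static const int f[] = { 2, 4, 5, 8 };     /* f(production), from the table above */
static const int g[] = { 1, 3, 6, 7 };     /* g(terminal), from the table above */

/* Returns 1 for REDUCE, 0 for SHIFT. */
static int prec_reduce(enum Prod p, enum Term a) {
    return f[p] > g[a];
}

int main(void) {
    printf("%d\n", prec_reduce(P_MUL, T_PLUS)); /* 1: reduce E*E rather than shift + */
    printf("%d\n", prec_reduce(P_ADD, T_STAR)); /* 0: shift * over a pending E+E */
    printf("%d\n", prec_reduce(P_POW, T_POW));  /* 0: shift **, so ** groups to the right */
    return 0;
}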
Parentheses can easily be added; I'll leave that as an exercise.
Notes
There might be some other mechanism to decide between reduction actions, so the restriction is only absolute for a parser which only uses precedence. There might also be some other mechanism to restrict possible shift actions. For example, for a shift to be feasible, the tokens on the top of the stack need to eventually be reduced, which means that some suffix of the stack must be a prefix of the right-hand side of some production. Similarly, a reduction is only feasible if, post-reduce, some suffix of the stack is the prefix of the right-hand side of some production.
You'll see formulations using < and ≥ (or ≤ and >), but to avoid confusion, I'm assuming that the ranges of f and g are different sets of integers. Since the functions are arbitrary, this does not restrict generality.
That's not always the case. For example, languages which allow - to be either a unary or a binary operator will have productions with right-hand sides - E and E - E. Yacc-derived parser generators use the %prec TERMINAL declaration to associate a production with a terminal other than the default.
This is all very confused.
Associativity ... Is used to give precedence to one operator over another
No. Absolutely not. Associativity is used to determine the order in which two adjacent instances of the same operator are evaluated: (E+E)+E or E+(E+E). All arithmetic operators except exponentiation are left-associative in mathematics.
%left '+' '*'
This says that + and * are both left-associative and have the same precedence, because they are both on the same line. And it is therefore wrong.
can I clearly separate associativity from precedence without using the concept of associativity
I'm sorry but this is just meaningless.

Vertical bar usage in SQL

I keep running across (someone else's) code like this:
Case
    When (variable & (2|4|8|16)>0) Then ...
    When (variable & (1|32)>0) Then ...
    ...
End
I figure what's happening here is that it's testing whether there is a 1 in the 2^1, 2^2, 2^3, or 2^4 places of variable. Is this right? Either way, I'm still unclear on why this expression is written the way it is. I'm unable to find any documentation on this logic, mostly because I don't know what to call it.
You're right, the "pipe" symbol | corresponds to the bitwise OR operator, whereas the ampersand & corresponds to the bitwise AND operator (at least in some databases).
They're checking bits using those bitwise operators. The most likely reason they wrote it this way is improved readability: it is easier to see which bits are being checked when writing
2|4|8|16 -- rather than 30
1|32 -- rather than 33
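The same idiom in C, where OR-ing named flags makes the intent even more explicit (the flag names here are hypothetical):

#include <stdio.h>

enum { FLAG_A = 2, FLAG_B = 4, FLAG_C = 8, FLAG_D = 16 };

int main(void) {
    int variable = 4 | 8;                      /* example: two flags set */
    if ((variable & (2|4|8|16)) > 0)           /* same test as variable & 30 */
        printf("at least one of the four flags is set\n");
    if ((variable & (FLAG_A|FLAG_B|FLAG_C|FLAG_D)) > 0)
        printf("identical test, with named bits\n");
    return 0;
}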