I keep running across (someone else's) code like this
CASE
  WHEN (variable & (2|4|8|16) > 0) THEN ...
  WHEN (variable & (1|32) > 0) THEN ...
  ...
END
I figure what's happening here is that it's testing whether any of the 2^1, 2^2, 2^3, or 2^4 places of variable hold a 1. Is this right? Either way, I'm still unclear on why the expression is written this way. I haven't been able to find any documentation on this logic, mostly because I don't know what to call it.
You're right, the "pipe" symbol | corresponds to the bitwise OR operator, whereas the ampersand & corresponds to the bitwise AND operator (at least in some databases).
They're checking bits using those bitwise operators. The most likely reason they wrote it this way is improved readability: it is easier to see which bits are being checked when writing
2|4|8|16 -- rather than 30
1|32 -- rather than 33
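Translated into a C sketch (the variable name and the value 12 are just assumptions for illustration), each CASE branch reduces to a plain mask test:

#include <stdio.h>

int main(void) {
    int variable = 12;                    /* example: bits 2 and 3 set, 00001100 */

    /* 2|4|8|16 evaluates to 30; true when any of bits 1..4 is set */
    if ((variable & (2 | 4 | 8 | 16)) > 0)
        printf("one of bits 1..4 is set\n");

    /* 1|32 evaluates to 33; true when bit 0 or bit 5 is set */
    if ((variable & (1 | 32)) > 0)
        printf("bit 0 or bit 5 is set\n");

    return 0;
}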
Related
I was reviewing some code from a library for Arduino and saw the following if statement in the main loop:
draw_state++;
if ( draw_state >= 14*8 )
    draw_state = 0;
draw_state is a uint8_t.
Why is 14*8 written here instead of 112? I initially thought this was done to save space, as 14 and 8 can both be represented by a single byte, but then so can 112.
I can't see why a compiler wouldn't optimize this to 112, since otherwise it would mean a multiplication has to be done every iteration instead of the lookup of a value. This looks to me like there is some form of memory and processing tradeoff.
Does anyone have a suggestion as to why this was done?
Note: I had a hard time coming up with a clear title, so suggestions are welcome.
Probably to show explicitly where the number 112 comes from. For example, it could be the number of bits in 14 bytes (but of course I don't know the context of the code, so I could be wrong). It is then more obvious to a human where the value came from than writing just 112.
And as you pointed out, the compiler will probably optimize it, so there will be no multiplication in the machine code.
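For instance, with purely hypothetical names (the real context of the code is unknown), the same intent can be made self-documenting:

#include <stdint.h>

/* Hypothetical reading of 14*8: 14 frames of 8 animation steps each */
#define NUM_FRAMES      14
#define STEPS_PER_FRAME  8

static uint8_t draw_state;

void step(void) {
    draw_state++;
    /* 14*8 is folded to the constant 112 at compile time,
       so no multiplication happens at run time */
    if (draw_state >= NUM_FRAMES * STEPS_PER_FRAME)
        draw_state = 0;
}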
In the Objective-C variant of C, NS_OPTIONS exists to help validate bit masks. But it seems to have an inherent flaw. If I need to define a value representing a bitwise OR of all of the bits, e.g. FubarAllOptions, some would say that the convention is to simply use INT_MAX. However, this has a problem.
Imagine that I use NS_OPTIONS for the lower five bits of a uint8_t. e.g.
typedef NS_OPTIONS(uint8_t, FubarOptions) {
    FubarA = 1,
    FubarB = 1 << 1,
    FubarC = 1 << 2,
    FubarD = 1 << 3,
    FubarE = 1 << 4,
    FubarAllOptions = 0xff // MAX
};
If I bitwise-clear each of the assigned bits of a FubarOptions variable, the three unassigned upper bits remain set. So if I test whether the value is nonzero to check that all the bits have been cleared, it will appear that some bits are still set. Hence a bug: FubarAllOptions includes bits that are never assigned.
Q: How do I define FubarAllOptions so that it only includes assigned bits, without laboriously typing out all of the potential options and OR'ing them, i.e. FubarA|FubarB|...? That would be vulnerable to typos.
Sure, I could take the largest option, shift it left by one, and subtract 1, but that too would be vulnerable to typos.
You will have to set all options manually:
FubarAllOptions = (FubarA | FubarB | FubarC | FubarD | FubarE)
Of course, you can also fix the problem by always checking every option manually instead of masking them all and then comparing with zero.
You are worrying too much about typing mistakes; worry instead about what will happen when you start using another bit.
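Here is a minimal sketch of that convention in plain C (an ordinary enum standing in for NS_OPTIONS), showing that clearing every assigned bit of the all-options mask really does leave zero:

#include <stdint.h>
#include <assert.h>

enum {
    FubarA = 1 << 0,
    FubarB = 1 << 1,
    FubarC = 1 << 2,
    FubarD = 1 << 3,
    FubarE = 1 << 4,
    /* built from the named members, so it can never include unassigned bits */
    FubarAllOptions = FubarA | FubarB | FubarC | FubarD | FubarE
};

int main(void) {
    uint8_t opts = FubarAllOptions;
    opts &= (uint8_t)~FubarA;   /* clear each assigned bit in turn */
    opts &= (uint8_t)~FubarB;
    opts &= (uint8_t)~FubarC;
    opts &= (uint8_t)~FubarD;
    opts &= (uint8_t)~FubarE;
    assert(opts == 0);          /* no stray upper bits remain set */
    return 0;
}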
I'm relatively new to Verilog and I've been working on a project in which I would, in an ideal world, like to have an assignment statement like:
assign isinbufferzone = a > (packetlength-16384) ? 1:0;
The file with this kind of line in it compiles, but isinbufferzone doesn't go high when it should. I'm assuming it's not happy with the subtraction in the conditional. I can make the module work by moving things around, but the result is more complicated than it should need to be, and the latency really starts to add up. Does anyone have thoughts on the most concise way to do this? Thank you in advance for your help.
You probably expect isinbufferzone to go high if packetlength is 16384 or less regardless of a; however, this is not what happens.
If packetlength is less than 16384, the value packetlength - 16384 is not a negative number −X, but some very large positive number (maybe 2^32 − X, or 2^17 − X, I'm not quite sure which, but it doesn't matter), because Verilog does unsigned arithmetic by default. This is called integer overflow.
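The same wraparound is easy to reproduce in C, which also uses modular unsigned arithmetic (the values here are just for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t packetlength = 100;            /* less than 16384 */
    uint32_t diff = packetlength - 16384;   /* wraps around to 2^32 - 16284 */
    printf("%u\n", (unsigned)diff);         /* prints 4294951012, not -16284 */
    return 0;
}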
You could maybe try to solve this by declaring some signals as signed, but in my opinion the safest way is to handle the overflow case explicitly, making sure the subtraction result is only evaluated for packetlength values of 16384 or greater:
assign isinbufferzone = (packetlength < 16384) ? 1 : (a > packetlength - 16384);
Why does a compiler not type promote all evaluations of expressions in the right hand side of an assignment expression to at least the left hand sides type level?
e.g.
"double x = (88.0 - 32) * 5 / 9" converts to Celsius from Fahrenheit correctly but...
"double x = (88.0 - 32) * (5 / 9)" will not.
My question is not why the second example does not return the desired result. My question is why does the compiler not type promote the evaluation of (5/9) to that of a double.
Why does a compiler not type promote all evaluations of expressions in the right hand side of an assignment expression to at least the left hand side's type level?
Very good question. Let's suppose for a moment that the compiler did this automatically. Taking an expression like yours:
double x = 88.0 - 32 * 5 / 9
The RHS of this assignment could be converted entirely to double, token by token (lexeme by lexeme), in several different ways. Here are some of them:
88.0 - 32 * (double)(5 / 9)
88.0 - 32 * 5 / 9 // default rule
88.0 - (double)(32 * 5) / 9
Individually type-casting to double every token that isn't already a double.
Several other ways.
This turns into a combinatorial problem: in how many ways can a given expression be reduced to double (or whatever the target type is)?
Compiler designers don't take on the pain of promoting every token to the desired highest type (double here), because there is no single obvious choice among all these interpretations. It would also be an unnatural rule for no real gain, since users can give the compiler explicit hints, in the form of casts, that say exactly where a conversion should happen.
Making every conversion automatic would not always yield the result the user wants. The opposite approach, promoting operands only where an operator actually mixes types, serves much better and is what compilers do today. The current rules serve every purpose correctly, even if they demand a little extra effort from the programmer.
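A short C illustration of the existing rule, and of how the user places the hint with a cast:

#include <stdio.h>

int main(void) {
    double wrong = (88.0 - 32) * (5 / 9);          /* 5/9 is integer division: 0 */
    double right = (88.0 - 32) * (5.0 / 9);        /* one double operand promotes the division */
    double cast  = (88.0 - 32) * ((double)5 / 9);  /* explicit cast, same effect */
    printf("%f %f %f\n", wrong, right, cast);      /* 0.000000 31.111111 31.111111 */
    return 0;
}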
I was wondering if there is an efficient way to perform a shift right on an 8-bit binary value using only ALU operators (NOT, OR, AND, XOR, ADD, SUB).
Example:
input: 00110101
output: 10011010
I have been able to implement a shift left by just adding the 8-bit binary value to itself, since a shift left is equivalent to multiplying by 2. However, I can't think of a way to do this for a shift right.
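In C terms, a small sketch of that left-shift-by-addition trick looks like this:

#include <stdint.h>
#include <stdio.h>

/* shift an 8-bit value left by one by adding it to itself (x + x == 2*x) */
static uint8_t shl1(uint8_t x) {
    return (uint8_t)(x + x);
}

int main(void) {
    uint8_t v = 0x35;             /* 00110101 */
    printf("%02x\n", shl1(v));    /* prints 6a, i.e. 01101010 */
    return 0;
}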
The only method I have come up with so far is to just perform 7 left barrel shifts. Is this the only way?
It's trivial to see that this cannot be done with {AND, OR, XOR, NOT}: for all of these operators, outbit[N] depends on inbit1[N] and inbit2[N] only. ADD adds a dependency on inbit1[N]..inbit1[0] and inbit2[N]..inbit2[0], through the carry chain, but never on any higher bit. In your case, however, you require a dependency on inbit[N+1]. Therefore, it follows that if there is any solution, it must include SUB.
However, A - B is just A + (-B), which is A + ((B XOR 11111111) + 1). Hence, if there were a solution using SUB, it could be rewritten as a solution using ADD and XOR instead. As we've shown, those operators are insufficient, so the set {AND, OR, XOR, NOT, ADD, SUB} is insufficient too.
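That rewriting step can be sanity-checked in C with arbitrary example values:

#include <stdint.h>
#include <assert.h>

int main(void) {
    uint8_t a = 0x35, b = 0x12;
    uint8_t sub       = (uint8_t)(a - b);
    /* A - B expressed as A + ((B XOR 11111111) + 1), the two's complement of B */
    uint8_t rewritten = (uint8_t)(a + (uint8_t)((b ^ 0xFFu) + 1));
    assert(sub == rewritten);     /* both yield 0x23 */
    return 0;
}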