Bitwise operator precedence - language-design

A gotcha I've run into a few times in C-like languages is this:
original | included & ~excluded // BAD
Due to precedence, this parses as:
original | (included & ~excluded) // '~excluded' only masks 'included', not 'original'
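A minimal C sketch of my own (the flag values are made up) shows the accidental grouping next to the intended one:

/* Minimal illustration of the gotcha; the flag values are invented for the demo. */
#include <assert.h>

int main(void) {
    unsigned original = 0x5;   /* bits 0 and 2 */
    unsigned included = 0x2;   /* bit 1        */
    unsigned excluded = 0x4;   /* bit 2: meant to be removed from the result */

    /* What the precedence actually gives: ~excluded only masks 'included',
       so bit 2 coming from 'original' survives. */
    assert((original | included & ~excluded) == 0x7);

    /* What was intended: clear the excluded bits from the combined value. */
    assert(((original | included) & ~excluded) == 0x3);
    return 0;
}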
Does anyone know what was behind the original design decision of three separate precedence levels for bitwise operators? More importantly, do you agree with the decision, and why?

The operators have had this precedence since at least C.
I agree with the order because it matches the relative order of the arithmetic operators that they are most similar to (+, * and negation).
You can see the similarity of & vs *, and | vs + here:
A B | A&B A*B | A|B A+B
0 0 |  0   0  |  0   0
0 1 |  0   0  |  1   1
1 0 |  0   0  |  1   1
1 1 |  1   1  |  1   2
The similarity of bitwise not and negation can be seen by this formula:
~A = -A - 1
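A quick C sketch of my own, just to make the analogy concrete:

#include <assert.h>

int main(void) {
    /* On two's-complement machines (i.e. essentially everywhere), bitwise not
       and arithmetic negation are related by ~A == -A - 1. */
    for (int a = -8; a <= 8; a++)
        assert(~a == -a - 1);

    /* For single-bit values, & behaves exactly like *, and | behaves like +
       except that 1 | 1 stays 1 while 1 + 1 is 2 (see the table above). */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            assert((a & b) == a * b);
            assert((a | b) == (a + b != 0));
        }
    return 0;
}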

To extend Mark Byers' answer, in boolean algebra (used extensively by electrical engineers to simplify logic circuits to the minimum number of gates and to avoid race conditions), the tradition is that AND takes precedence over OR. C was just following this established tradition. See http://en.wikiversity.org/wiki/Boolean_algebra#Combining_Operations:
Just as in ordinary algebra, where multiplication takes priority over addition, AND takes priority (or precedence) over OR.

COALESCE in postgresql conditional displaying seemingly undocumented behavior?

I have looked at the COALESCE documentation and it mentions the typical case of using COALESCE to make default/situational parameters, e.g.
COALESCE(discount, 5)
which evaluates to 5 if discount is null (i.e. not set to something else).
However, I have seen it used where COALESCE actually evaluated all the arguments, despite the documentation explicitly saying it stops evaluating arguments after the first non-null argument.
Here is an example similar to what I encountered, say you have a table like this:
id | wind | rain | snow
 1 | null |    2 |    3
 2 |    5 | null |    6
 3 | null |    7 |    2
Then you run
SELECT *
FROM weather_table
WHERE
COALESCE(wind, rain, snow) >= 5
You would expect this to only select rows with wind >= 5, right? NO! It selects all rows with either wind, rain or snow more than 5. Which in this case is 2 rows, specifically these two:
 2 |    5 | null |    6
 3 | null |    7 |    2
Honestly, pretty cool functionality, but it really irks me that I couldn't find any example of this online or in the documentation.
Can anyone tell me what's going on? Am I missing something?
You would expect this to only select rows with wind >= 5, right?
No, I expect it to select rows based on what the Coalesce function returns.
The Coalesce function delivers the value of the first non-null parameter. You had Coalesce(wind,rain,snow). The first row had (null,2,3), so coalesce returned 2. The second row had (5,null,6) so returned 5. The third row had (null,7,2) so returned 7.
The last two rows meet the condition >=5, so 2 rows are retrieved.
Notice that the value for snow was never returned in your example, because either wind or rain always had a value.
After writing out the question so clearly, I realized what was going on myself. But I want to answer it here in case anyone else is confused.
Turns out the reason is that COALESCE is evaluated once for each row, which I suppose I could have known. Then it all makes sense.
For each row it takes the first non-null column: if wind is non-null, that value is compared against >= 5; only if wind is null does it move on to rain, and so on.
Notably though, if my table had been like this:
id | wind | rain | snow
 1 |    0 |    2 |    3
 2 |    5 |    0 |    6
 3 |    0 |    7 |    2
The command would have worked the way I originally thought (making COALESCE effectively useless here), and would have picked only this one row
 2 |    5 |    0 |    6
equivalent to SELECT * FROM weather_table WHERE wind >= 5.
It only works if there are columns which are null (0 <> null).

Implementation of an AND chip in HDL

I'm working through this book http://nand2tetris.org/book.php that teaches fundamental concepts of CS, and I got stuck where I'm asked to code an AND chip and test it in the provided testing software.
This is what I've got so far:
/**
 * And gate:
 * out = 1 if (a == 1 and b == 1)
 *       0 otherwise
 */
CHIP And {
    IN a, b;
    OUT out;
    PARTS:
    // Put your code here:
    Not(in=a, out=nota);
    Not(in=b, out=notb);
    And(a=a, b=b, out=out);
    Or(a=nota, b=b, out=nota);
    Or(a=a, b=notb, out=notb);
}
Problem is I'm getting this error:
...
at Hack.Gates.CompositeGateClass.readParts(Unknown Source)
at Hack.Gates.CompositeGateClass.<init>(Unknown Source)
at Hack.Gates.GateClass.readHDL(Unknown Source)
at Hack.Gates.GateClass.getGateClass(Unknown Source)
at Hack.Gates.CompositeGateClass.readParts(Unknown Source)
at Hack.Gates.CompositeGateClass.<init>(Unknown Source)
at Hack.Gates.GateClass.readHDL(Unknown Source)
...
And I don't know if I'm getting this error because the testing program is malfunctioning or because my code is wrong and the software can't load it up.
It may be helpful to examine the truth tables for Nand and And:
Nand
a | b | out
0 | 0 | 1
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
And
a | b | out
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
And is the inverse of Nand, meaning that for every combination of inputs, And will give the opposite output of Nand. Another way to think of the "opposite" of a binary value is "not" that value.
If you send 2 inputs through a Nand gate and then send its output through a Not gate, you will have Not(Nand(a, b)), which is equivalent to And(a, b).
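If it helps to see that equivalence outside the HDL tools, here is a tiny C check of my own (not part of the course materials):

#include <assert.h>

/* NAND on single-bit values. */
static int nand(int a, int b) { return !(a && b); }

int main(void) {
    /* NOT(NAND(a, b)) matches AND(a, b) for every input combination. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            assert(!nand(a, b) == (a && b));
    return 0;
}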
Your problem is that you are trying to use parts (Not, And and Or) that haven't been defined yet (and you are trying to use an And gate in your definition of an And gate).
At each point in the course, you can only use parts that you have previously built. If memory serves, at this point the only part you have available is a Nand gate.
You should be able to construct an And gate using only Nand gates.
You're overthinking the problem significantly.
If one is given NAND, i.e. NOT(AND), then AND can be constructed as NOT(NAND), since NOT(NOT(AND)) = AND.

What is the expected behaviour for multiple set-returning functions in SELECT clause?

I'm trying to get a "cross join" with the result of two set-returning functions, but in some cases I don't get the "cross join". See the examples below.
Behaviour 1: When set lengths are the same, it matches item by item from each set
postgres=# SELECT generate_series(1,3), generate_series(5,7) order by 1,2;
generate_series | generate_series
-----------------+-----------------
1 | 5
2 | 6
3 | 7
(3 rows)
Behaviour 2: When set lengths are different, it "cross join"s the sets
postgres=# SELECT generate_series(1,2), generate_series(5,7) order by 1,2;
generate_series | generate_series
-----------------+-----------------
1 | 5
1 | 6
1 | 7
2 | 5
2 | 6
2 | 7
(6 rows)
I think I'm not understanding something here. Can someone explain the expected behaviour?
Another example, even weirder:
postgres=# SELECT generate_series(1,2) x, generate_series(1,4) y order by x,y;
x | y
---+---
1 | 1
1 | 3
2 | 2
2 | 4
(4 rows)
I am looking for an answer to the question in the title, ideally with link(s) to documentation.
Postgres 10 or newer
pads with null values for smaller set(s). Demo with generate_series():
SELECT generate_series( 1, 2) AS row2
, generate_series(11, 13) AS row3
, generate_series(21, 24) AS row4;
 row2 | row3 | row4
------+------+------
    1 |   11 |   21
    2 |   12 |   22
 null |   13 |   23
 null | null |   24
dbfiddle here
The manual for Postgres 10:
If there is more than one set-returning function in the query's select
list, the behavior is similar to what you get from putting the
functions into a single LATERAL ROWS FROM( ... ) FROM-clause item. For
each row from the underlying query, there is an output row using the
first result from each function, then an output row using the second
result, and so on. If some of the set-returning functions produce
fewer outputs than others, null values are substituted for the missing
data, so that the total number of rows emitted for one underlying row
is the same as for the set-returning function that produced the most
outputs. Thus the set-returning functions run “in lockstep” until they
are all exhausted, and then execution continues with the next
underlying row.
This ends the traditionally odd behavior.
Some other details changed with this rewrite. The release notes:
Change the implementation of set-returning functions appearing in a query's SELECT list (Andres Freund)
Set-returning functions are now evaluated before evaluation of scalar
expressions in the SELECT list, much as though they had been placed
in a LATERAL FROM-clause item. This allows saner semantics for cases
where multiple set-returning functions are present. If they return
different numbers of rows, the shorter results are extended to match
the longest result by adding nulls. Previously the results were cycled
until they all terminated at the same time, producing a number of rows
equal to the least common multiple of the functions' periods. In
addition, set-returning functions are now disallowed within CASE and
COALESCE constructs. For more information see Section 37.4.8.
Bold emphasis mine.
Postgres 9.6 or older
The number of result rows (somewhat surprisingly!) is the lowest common multiple of all sets in the same SELECT list. (Only acts like a CROSS JOIN if there is no common divisor to all set-sizes!) Demo:
SELECT generate_series( 1, 2) AS row2
, generate_series(11, 13) AS row3
, generate_series(21, 24) AS row4;
 row2 | row3 | row4
------+------+------
    1 |   11 |   21
    2 |   12 |   22
    1 |   13 |   23
    2 |   11 |   24
    1 |   12 |   21
    2 |   13 |   22
    1 |   11 |   23
    2 |   12 |   24
    1 |   13 |   21
    2 |   11 |   22
    1 |   12 |   23
    2 |   13 |   24
dbfiddle here
Documented in the manual for Postgres 9.6, chapter SQL Functions Returning Sets, along with the recommendation to avoid it:
Note: The key problem with using set-returning functions in the select
list, rather than the FROM clause, is that putting more than one
set-returning function in the same select list does not behave very
sensibly. (What you actually get if you do so is a number of output
rows equal to the least common multiple of the numbers of rows
produced by each set-returning function.) The LATERAL syntax produces
less surprising results when calling multiple set-returning functions,
and should usually be used instead.
Bold emphasis mine.
A single set-returning function is OK (but still cleaner in the FROM list), but multiple in the same SELECT list is discouraged now. This was a useful feature before we had LATERAL joins. Now it's merely historical ballast.
Related:
Parallel unnest() and sort order in PostgreSQL
Unnest multiple arrays in parallel
What is the difference between a LATERAL JOIN and a subquery in PostgreSQL?
I cannot find any documentation for this. However, I can describe the behavior that I observe.
The set generating functions each return a finite number of rows. Postgres seems to run the set generating functions until all of them are on their last row -- or, more likely, stops when all are back on their first rows. Technically, this would be the least common multiple (LCM) of the series lengths.
I'm not sure why this is the case. And, as I say in a comment, I think it is better to generally put the functions in the from clause.
The only note about the issue in the documentation is the following. I'm not sure whether it explains the described behavior or not. Perhaps more important is that such function usage is deprecated:
Currently, functions returning sets can also be called in the select list of a query. For each row that the query generates by itself, the function returning set is invoked, and an output row is generated for each element of the function's result set. Note, however, that this capability is deprecated and might be removed in future releases.

Basic vs. compound condition coverage

I'm trying to get my head around the differences between these 2 coverage criteria and I can't work out how they differ. I think I'm failing to understand exactly what decision coverage is. My software testing textbook states that compound decision coverage can be costly (2^n combinations for n basic conditions).
I would have thought basic condition coverage would be costlier.
Consider a && b && c && d && e. My understanding is that in basic condition coverage, each of these atomic variables has to take the value TRUE and the value FALSE in some test case for the test suite to have basic condition adequacy - that's 32 different test cases.
So what is the actual difference, and what is referred to as a "basic condition"? In the example above, is a a basic condition?
Thanks.
Regarding terminology, I don't have a single source handy that uses the exact terms "basic condition coverage" and "multiple condition coverage". Binder's "Testing Object-Oriented Systems" says "condition coverage" and "multiple-condition coverage". Everett & McLeod's "Software Testing" says "simple condition coverage" and "compound condition coverage". But I'm certain that the first term in each case is your "basic condition coverage" and the second is your "compound condition coverage". I'll use those terms below.
Basic condition coverage means that every basic condition in the program is true in some test and false in some test, regardless of other conditions. In the following
if a && b && c
  # do stuff
else
  # do other stuff
end
there is a compound condition, a && b && c, with three basic conditions, a, b and c. It takes only two test cases, one where all basic conditions are true and one where all are false, to get full basic condition coverage. It doesn't matter that the basic conditions happen to be part of a compound condition.
Note that basic condition coverage is not branch coverage. If the compound condition were a && b && !c, those two test cases above would still achieve basic condition coverage but would not achieve branch coverage.
A less aggressively optimized set of test cases for basic condition coverage would have one test case where all three basic conditions are false and three test cases with a different basic condition true in each. That would still only be four of the eight possible combinations of basic conditions in the compound condition. The uncomfortable feeling that we're ignoring the other four is why there's compound condition coverage. That requires a test for each possible combination of basic conditions in a compound condition. In the example above, you'd need eight tests, one for each possible combination of possible values of a, b and c, to get full compound condition coverage.
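To make the difference concrete, here is a small C sketch of my own (not from any textbook) that drives a && b && c under both criteria:

/* Basic vs. compound condition coverage for the predicate a && b && c. */
#include <stdio.h>

static int predicate(int a, int b, int c) { return a && b && c; }

int main(void) {
    /* Basic condition coverage: two tests suffice, because every basic
       condition is true in the first test and false in the second. */
    printf("all true:  %d\n", predicate(1, 1, 1));
    printf("all false: %d\n", predicate(0, 0, 0));

    /* Compound condition coverage: all 2^3 = 8 combinations of a, b, c. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++)
                printf("a=%d b=%d c=%d -> %d\n", a, b, c, predicate(a, b, c));
    return 0;
}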
First, the difference between Decision and Condition.
A Condition is an atomic boolean expression that cannot be broken down into simpler boolean expressions. For example: a (if a is boolean).
A Decision is a compound of Conditions joined by zero or more Boolean operators. A Decision without an operator is also a Condition. For example: (a or b) and c, but also a and b, or just a.
Let's take a simple example:
if (decision) {
    // branch 1
} else {
    // branch 2
}
You need two tests to cover both branches. That's decision coverage or branch coverage. In case the decision is a single condition (i.e. just a), that is also called basic condition coverage, which is the coverage of the two branches of a single condition.
The decision can be broken down into conditions.
Let's take, for example:
decision = (a or b) and c
The decision coverage would be achieved with
a,b,c = 0
a,b,c = 1
But the permutation of all the combinations of its boolean sub-expressions is the full condition coverage (or multiple condition coverage), which is the compound of the basic condition coverage:
| a | b | c |
| 0 | 0 | 1 |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
| 1 | 1 | 0 |
That would be quite a lot of tests, but some of those are redundant as some conditions are covered by others. This is reflected in Modified Condition/Decision Coverage (MC/DC), which is a combination of condition coverage and decision coverage.
For MC/DC it is required that each condition affects the outcome independently. With the above tests (all 0 or all 1), we ignore the fact that the value of c doesn't matter if a and b are 0, or that the value of b doesn't matter if a and c are 1.
So you should sit down, use your brain, and think about for which combinations the overall result R is 1 or 0.
  | a | b | c | a or b | c | R | eq
1 | 0 | 0 | 0 |   0    | 0 | 0 | A
2 | 0 | 0 | 1 |   0    | 1 | 0 | B
3 | 0 | 1 | 0 |   1    | 0 | 0 | A
4 | 0 | 1 | 1 |   1    | 1 | 1 | C
5 | 1 | 0 | 0 |   1    | 0 | 0 | A
6 | 1 | 0 | 1 |   1    | 1 | 1 | D
7 | 1 | 1 | 0 |   1    | 0 | 0 | A
8 | 1 | 1 | 1 |   1    | 1 | 1 | D
The last column shows the equivalence class:
A: c = 0, result is 0, neither a nor b have an influence
B: a,b = 0, result is 0, c has no influence
C: b,c = 1, result is 1, a has no influence
D: a,c = 1, result is 1, b has no influence
For B and C it's quite obvious which to pick, not so for A and D. For each you have to check what would happen if you replace the operators, i.e. or -> and, and -> or, and how this affects the result of the (sub)decision. If the result is affected, you've got a candidate; if not, you don't.
A : (0 and/or 0) and/or 0 -> doesn't matter
A : (0 and 1) vs (0 or 1) -> does matter! -> Candidate
A : (1 and 0) vs (1 or 0) -> does matter! -> Candidate
A : (1 and/or 1) -> doesn't matter
D : (1 and 0) vs (1 or 0) -> does matter -> Candidate
D : (1 and 1) -> doesn't matter
So you get the final test set as mentioned above:
a = 0, b = 1, c = 0 -> false branch (A) OR a = 1, b = 0, c = 0
a = 0, b = 0, c = 1 -> false branch (B)
a = 0, b = 1, c = 1 -> true branch (C)
a = 1, b = 0, c = 1 -> true branch (D)
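As a sanity check of my own (not part of the original answer), here is a small C program showing that in this final set each condition flips the outcome of (a or b) and c on its own:

#include <stdio.h>

/* The decision from the example: (a or b) and c. */
static int decision(int a, int b, int c) { return (a || b) && c; }

int main(void) {
    /* Each pair differs in exactly one condition and flips the outcome. */
    printf("a: %d vs %d\n", decision(0, 0, 1), decision(1, 0, 1)); /* B vs D: 0 vs 1 */
    printf("b: %d vs %d\n", decision(0, 0, 1), decision(0, 1, 1)); /* B vs C: 0 vs 1 */
    printf("c: %d vs %d\n", decision(0, 1, 0), decision(0, 1, 1)); /* A vs C: 0 vs 1 */
    return 0;
}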
Especially the latter test - changing the operators - can be done with tools like mutation testing, which do not just replace operators but can do quite a bit more, e.g. flipping operands, removing statements, changing the order of execution, replacing return values etc. And for each alteration of your code, it verifies whether the test actually fails. This is a good indicator of the quality of your test suite and ensures that code is not just covered but that your tests for the code are actually valid.
Regarding the terminology, I couldn't find the term "Compound Decision Coverage" anywhere. In my view a "compound decision" would be a compound of compounds of conditions, in other words: a compound of conditions.

Multiple Font Style Combinations in vb.net

If I want to create a font with multiple style combinations, like bold AND underline, I have to place the 'Or' operator between them, like in the example below:
lblArt.Font = New Font("Tahoma", 18, FontStyle.Bold Or FontStyle.Underline)
If you use bold 'And' underline instead, it won't work and you only get one of the two (which is how you would expect 'or' to behave), even though that would seem to be the logical way to do it. What is the reason behind this?
Boolean logic works a bit differently than the way we use the terms in English. What's happening here is that the enumerated FontStyle values are actually bit flags, and in order to manipulate bit flags, you use bitwise operations.
To combine two bit flags, you OR them together. An OR operation combines the two values. So imagine that FontStyle.Bold was 2 and FontStyle.Underline was 4. When you OR them together, you get 6—you've combined them together. In Boolean logic, you can think of an OR operation as returning "true" (i.e., setting that bit in the result) if either of the bits in the two operands are set, and "false" if neither of the bits in the two operands are set.
You can write a truth table for such an operation as follows:
| A | B | A OR B |
|---|---|--------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |
Notice that the results more closely mirror what we, in informal English, would call "and". If either one has it set, then the result has it set, too.
In contrast to OR, a bitwise AND operation only returns "true" (i.e., sets that bit in the result) if both of the bits in the two operands are set. Otherwise, the result is "false". Again, a truth table can be written:
| A | B | A AND B |
|---|---|---------|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
Assuming again that FontStyle.Bold has the value 2 and FontStyle.Underline has the value 4, if you AND them together, you get 0. This is because the values effectively cancel each other out. The net result is that you don't get any font styles—precisely why it doesn't work when you write FontStyle.Bold And FontStyle.Underline.
In VB, a bitwise OR operation is performed using the Or operator. The And operator performs a bitwise AND operation. So in order to do a bitwise inclusion of values, which is how you combine bit flags, you use the Or operator.
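The same bit arithmetic can be sketched in C; this is my own illustration and uses the hypothetical flag values 2 and 4 assumed above, not the real FontStyle values:

#include <assert.h>

enum { STYLE_BOLD = 2, STYLE_UNDERLINE = 4 };  /* hypothetical flag values */

int main(void) {
    int combined = STYLE_BOLD | STYLE_UNDERLINE;   /* OR merges the flags: 2 | 4 == 6 */
    assert(combined == 6);
    assert((combined & STYLE_BOLD) != 0);          /* AND tests whether a flag is set */
    assert((STYLE_BOLD & STYLE_UNDERLINE) == 0);   /* AND of two distinct flags is 0  */
    return 0;
}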
try this:
lblArt.Font = New Drawing.Font("Tahoma", _
                               18, _
                               FontStyle.Bold Or FontStyle.Italic)
use "New Drawing.Font" instead of Font alone