What are XAND and XOR? Also is there an XNot
XOR is short for exclusive or. It is a binary logical operator that requires exactly one of the two operands to be true, but not both.
So these statements are true:
TRUE XOR FALSE
FALSE XOR TRUE
And these statements are false:
FALSE XOR FALSE
TRUE XOR TRUE
There really isn't such a thing as an "exclusive and" (or XAND) since in theory it would have the same exact requirements as XOR. There also isn't an XNOT since NOT is a unary operator that negates its single operand (basically it just flips a boolean value to its opposite) and as such it cannot support any notion of exclusivity.
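For illustration, here is how exclusive or is typically written in a C-like language; a minimal sketch (the variable names are made up), and note there is no dedicated XAND or XNOT operator to show:

#include <stdio.h>
#include <stdbool.h>

int main(void) {
    bool a = true, b = false;

    // Logical XOR: true when exactly one operand is true.
    bool x = (a != b);      // for bool operands, != has exactly the XOR truth table

    // For plain ints that may hold any nonzero "true" value, normalize first:
    int p = 5, q = 0;
    bool y = (!!p != !!q);  // !! collapses any nonzero value to 1

    printf("%d %d\n", x, y);   // prints: 1 1
    return 0;
}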
Guys, don't scare the crap out of others (hey! just kidding), but it's really all a question of equivalences and synonyms:
firstly:
"XAND" doesn't exist logically, and neither does "XNAND"; however, "XAND" is often thought up by a studious but confused beginning logic student (wow!). It comes from the thought that, since there is a XOR (exclusive OR), it's logical for an "XAND" ("exclusive" AND) to exist. The rational suggestion would be an "IAND" ("inclusive" AND), which isn't used or recognised either. So:
XNOR <=> !XOR <=> EQV
And all of this just describes a single operator, called the equivalence operator (<=>, EQV), so:
A | B | A <=> B | A XAND B | A XNOR B | A !XOR B | NOT((NOT(A) AND B) OR (A AND NOT(B)))
-----------------------------------------------------------------------------------------
T | T |    T    |    T     |    T     |    T     |                   T
T | F |    F    |    F     |    F     |    F     |                   F
F | T |    F    |    F     |    F     |    F     |                   F
F | F |    T    |    T     |    T     |    T     |                   T
And just a closing comment: the 'X' prefix is possible if and only if the base operator isn't unary. So, XNOR <=> NOT XOR, but it is not the same thing as X(NOR).
Peace.
XOR is Exclusive Or. It means "One of the two items being XOR'd is true, but not both of them."
TRUE XOR TRUE : FALSE
TRUE XOR FALSE : TRUE
FALSE XOR TRUE : TRUE
FALSE XOR FALSE: FALSE
Wikipedia's XOR Article
XAND I have not heard of.
In the book "Code" by Charles Petzold, he says there are six gates: the AND gate, the OR gate, the NOR gate, the NAND gate, and the XOR gate, plus a sixth that he mentions only briefly, calling it the "coincidence gate" and implying it isn't used very often. He says it has the opposite output of an XOR gate: an XOR gate outputs "false" when both sides of the equation are true or both are false, and the only way for an XOR gate to output "true" is for one side to be true and the other false (it doesn't matter which). The coincidence gate is the exact opposite: if one side is true and the other false (again, it doesn't matter which), its output is "false", and the only way for it to output "true" is for both sides to be false or both to be true.
So in the cases where the XOR gate outputs "false", the coincidence gate will output "true". And in the cases where the XOR gate will output "true", the coincidence gate will output "false".
Hmm.. well I know about XOR (exclusive or) and NAND and NOR. These are logic gates and have their software analogs.
Essentially they behave like so:
XOR is true only when one of the two arguments is true, but not both.
F XOR F = F
F XOR T = T
T XOR F = T
T XOR T = F
NAND is true as long as both arguments are not true.
F NAND F = T
F NAND T = T
T NAND F = T
T NAND T = F
NOR is true only when neither argument is true.
F NOR F = T
F NOR T = F
T NOR F = F
T NOR T = F
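As a software analog of those tables, the gates can be written as plain C expressions; a minimal sketch (the function names are mine):

#include <stdio.h>
#include <stdbool.h>

static bool xor_gate(bool a, bool b)  { return a != b;    } // true iff exactly one input is true
static bool nand_gate(bool a, bool b) { return !(a && b); } // false only when both inputs are true
static bool nor_gate(bool a, bool b)  { return !(a || b); } // true only when both inputs are false

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d | XOR=%d NAND=%d NOR=%d\n",
                   a, b, xor_gate(a, b), nand_gate(a, b), nor_gate(a, b));
    return 0;
}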
There is no such thing as XAND or XNOT. There is NAND, which is the opposite of AND:
TRUE and TRUE : TRUE
TRUE and FALSE : FALSE
FALSE and TRUE : FALSE
FALSE and FALSE : FALSE
TRUE nand TRUE : FALSE
TRUE nand FALSE : TRUE
FALSE nand TRUE : TRUE
FALSE nand FALSE : TRUE
To add to this, since I was just dealing with it: if you are looking for an "equivalence gate" or a "coincidence gate" as your XAND, what you really have is just "equals".
If you think about it, given XOR from above:
F XOR F = F
F XOR T = T
T XOR F = T
T XOR T = F
And we expect XAND should be:
F XAND F = T
F XAND T = F
T XAND F = F
T XAND T = T
And isn't this exactly the same?
F == F = T
F == T = F
T == F = F
T == T = T
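A quick check in C confirms this; a small sketch (for 0/1 values, == and !(a ^ b) both produce the XNOR column):

#include <stdio.h>

int main(void) {
    // For 0/1 values, "XAND" (XNOR, the coincidence gate) is just equality.
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d XAND %d : (a == b)=%d  !(a ^ b)=%d\n",
                   a, b, a == b, !(a ^ b));
    return 0;
}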
There's a simple argument, using the truth tables that have come up already, to see where the binary logic gates come from.
Each binary operator is defined by a three-column truth table. The first two columns (the inputs) are the same for every operator, so an operator is determined entirely by its third column, a sequence of four binary digits: sixteen possibilities in all.
Six of those represent commutative operations, in which a op b == b op a. The constraint of commutativity forces the two mixed-input rows to agree, which effectively removes one row from the truth table, leaving only eight possibilities. Two more get knocked off because all-true and all-false aren't useful gates. The remaining six are the familiar OR, AND, and XOR, plus their negations (NOR, NAND, and XNOR).
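That counting argument can be checked by brute force. Here is a sketch in C, where each operator's truth-table column is encoded as a 4-bit integer (the encoding and counts are mine):

#include <stdio.h>

// Each two-input boolean function is a 4-bit truth-table column:
// bit i holds f(a,b) where i = 2*a + b.  There are 2^4 = 16 of them.
int main(void) {
    int commutative = 0, useful = 0;
    for (int f = 0; f < 16; f++) {
        int f01 = (f >> 1) & 1;             // output for (a=0, b=1)
        int f10 = (f >> 2) & 1;             // output for (a=1, b=0)
        if (f01 != f10) continue;           // not commutative: f(0,1) != f(1,0)
        commutative++;
        if (f == 0x0 || f == 0xF) continue; // constant false / constant true
        useful++;
    }
    printf("commutative: %d, non-constant commutative: %d\n",
           commutative, useful);            // prints 8 and 6
    return 0;
}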
This is what you are looking for:
https://en.wikipedia.org/wiki/XNOR_gate
Here is the logic table:
A B XOR XNOR
0 0 0 1
0 1 1 0
1 0 1 0
1 1 0 1
XNOR is sometimes called XAND.
In most cases you won't find an XAND, XOR, NOR, or NAND logical operator in a programming language, but fear not: in most cases you can simulate them with the other operators.
Since you didn't state any particular language, I won't use any specific language either. For my examples we'll use the following variables.
A = 3
B = 5
C = 7
For the code I'll put it in a code tag to make it easier to see what I did, and I'll also follow the logic through each step to show what the end result will be.
NAND
Also known as Not And, it can easily be simulated by using a Not operator (normally indicated as !).
You can do the following
if(!((A>B) && (B<C)))
if (!(F&&T))
if(!(F))
If(T)
In our example above it evaluates to true, since the two sides were not both true, thus giving us the desired result.
NOR
Also known as Not OR, just like NAND we can simulate it with the not operator.
if(!((A>B) || (B<C)))
if (!(F||T))
if(!(T))
if(F)
Again this will give us the desired outcomes
XOR
XOR, or Exclusive OR, will only be true when one side is TRUE and the other is FALSE.
If (!(A > C && B > A) && (A > C || B > A) )
If (!(F && T) && (F || T) )
If (!(F) && (T) )
If (T && T )
If (T)
So that is an example of it working when just one or the other is true. Next I'll show that if both are true, the result will be false.
If ( !(A < C && B > A) && (A < C || B > A) )
If ( !(T && T) && (T ||T) )
If ( !(T) && (T) )
If ( F && T )
If (F)
And both false
If (!(A > C && B < A) && (A > C || B < A) )
If (!(F && F) && (F || F) )
If (!(F) && (F) )
If (T && F )
If (F)
XAND
And finally our Exclusive And: this will only return true if both sides are false, or if both are true. Of course you could just call this a Not XOR (NXOR, more commonly written XNOR).
Both True
If ( (A < C && B > A) || !(A < C || B > A) )
If ((T&&T) || !(T||T))
IF (T || !T)
If (T || F)
IF (T)
Both False
If ( (A > C && B < A) || !(A > C || B < A) )
If ( (F && F) || !(F ||F))
If ( F || !F)
If ( F || T)
If (T)
And lastly 1 true and the other one false.
If ((A > C && B > A) || !(A > C || B > A) )
If ((F && T) || ! (F || T) )
If (F||!(T))
If (F||F)
If (F)
Or if you want to go the NXOR route...
If (!(!(A > C && B > A) && (A > C || B > A)))
If (!(!(F && T) && (F || T)) )
If (!(!(F) && (T)) )
If (!(T && T) )
If (!(T))
If (F)
Of course everyone else's solutions probably state this as well; I am putting my own answer in here because the top answer didn't seem to acknowledge that not all languages support XOR or XAND as operators. For example, C uses ^ for bitwise XOR, and XAND isn't supported at all.
So I provided some examples of how to simulate them with the basic operators in the event your language doesn't support XOR or XAND as their own operators the way PHP does with if ($a XOR $B).
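If your language is C or C-like, there is also a shorter way to simulate logical XOR and XNOR than the compound expressions above: normalize each side to 0/1 and compare. A small sketch (reusing the sample values from above):

#include <stdio.h>

int main(void) {
    int A = 3, B = 5, C = 7;             // same sample values as above

    int p = (A > C);                     // 0 (false)
    int q = (B > A);                     // 1 (true)

    // Logical XOR: true when exactly one of p, q is true.
    int xor_result  = (!!p != !!q);      // !! normalizes any nonzero value to 1
    // Logical XNOR ("XAND" in the sense used above): true when p and q agree.
    int xnor_result = (!!p == !!q);

    printf("XOR  = %d\n", xor_result);   // 1
    printf("XNOR = %d\n", xnor_result);  // 0
    return 0;
}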
As for XNOT, what is that? Exclusive not? So not not? I don't know how that would look as a logic gate; I think it doesn't exist, since NOT just inverts its input, turning a 1 into a 0 and a 0 into a 1.
Anyway hope that helps.
The truth tables on Wikipedia clarify this: http://en.wikipedia.org/wiki/Logic_gate
There is no XAND, and that settles the first part of the question.
[The point is you can always make do without it.]
I personally have mistaken XNOT (which also doesn't exist) for NAND and NOR, which are in theory the only gates you need to build all the other gates.
I believe the confusion stems from the fact that you can use either NAND or NOR to create everything else (but you don't need both together), so they get thought of as one thing that is both NAND and NOR at once, which leaves the mind to reach for the unused name XNOT; that's what I wrongly call XNOT, meaning "either NAND or NOR".
I suppose one could also, in quick discussion, wrongly use XAND the way I use XNOT: to refer to the fact that a single gate (copied in various arrangements) can make all the other gates.
XOR (not neither and not both), B'0110', is the inverse (dual) of IFF (if and only if), B'1001'.
XOR behaves as Austin explained, as an exclusive OR: either A or B, but both and neither yield false.
There are 16 possible logical operators for two inputs: since the truth table has 4 input combinations, there are 2^4 = 16 possible ways to assign outputs to two boolean parameters.
They all have names, according to this Wikipedia article.
The XOR definition is well known to be the odd-parity function.
For two inputs:
A XOR B = (A AND NOT B) OR (B AND NOT A)
The complement of XOR is XNOR
A XNOR B = (A AND B) OR (NOT A AND NOT B)
Henceforth, the normal two-input XAND is defined as
A XAND B = A AND NOT B
The complement is XNAND:
A XNAND B = B OR NOT A
A nice result from this XAND definition is that any dual-input binary function can be expressed concisely using no more than one logical function or gate.
+---+---+---+---+
If A is: | 1 | 0 | 1 | 0 |
and B is: | 1 | 1 | 0 | 0 |
+---+---+---+---+
Then: yields:
+-----------+---+---+---+---+
| FALSE | 0 | 0 | 0 | 0 |
| A NOR B | 0 | 0 | 0 | 1 |
| A XAND B | 0 | 0 | 1 | 0 |
| NOT B | 0 | 0 | 1 | 1 |
| B XAND A | 0 | 1 | 0 | 0 |
| NOT A | 0 | 1 | 0 | 1 |
| A XOR B | 0 | 1 | 1 | 0 |
| A NAND B | 0 | 1 | 1 | 1 |
| A AND B | 1 | 0 | 0 | 0 |
| A XNOR B | 1 | 0 | 0 | 1 |
| A | 1 | 0 | 1 | 0 |
| B XNAND A | 1 | 0 | 1 | 1 |
| B | 1 | 1 | 0 | 0 |
| A XNAND B | 1 | 1 | 0 | 1 |
| A OR B | 1 | 1 | 1 | 0 |
| TRUE | 1 | 1 | 1 | 1 |
+-----------+---+---+---+---+
Note that XAND and XNAND are not commutative: A XAND B differs from B XAND A, as the table shows.
This XNAND definition is extensible if we add numbered kinds of exclusive-ANDs corresponding to their minterms. Then XAND must have ceil(lg(n)) or more inputs, with the unused most-significant bits all zeroes. The normal kind of XAND is written without a number unless it is used in the context of the other kinds.
The various kinds of XAND or XNAND gates are useful for decoding.
XOR is also extensible to any number of bits. The result is one if the number of ones is odd, and zero if even. If you complement any input or output bit of an XOR, the function becomes XNOR, and vice versa.
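As a sketch of that multi-input behaviour, here is one way to compute the parity (multi-input XOR) of a word in C (the function name is mine):

#include <stdio.h>
#include <stdint.h>

// XOR over all bits of a word: result is 1 if the number of set bits is odd.
static unsigned parity(uint32_t x) {
    unsigned p = 0;
    while (x) {
        p ^= x & 1u;   // fold each bit into the running XOR
        x >>= 1;
    }
    return p;
}

int main(void) {
    printf("%u\n", parity(0x6));  // 0110 has two ones  -> 0 (even)
    printf("%u\n", parity(0x7));  // 0111 has three ones -> 1 (odd)
    return 0;
}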
I have seen no definition for XNOT, so I will propose one:
let it relate to high impedance (Z, no signal, or perhaps a null-valued Boolean type Object).
0 xnot 0 = Z
0 xnot 1 = Z
1 xnot 0 = 1
1 xnot 1 = 0
Have a look
x y A B C D E F G H I J K L M N
· · T · T · T · T · T · T · T ·
· T · T T · · T T · · T T · · T
T · · · · T T T T · · · · T T T
T T · · · · · · · T T T T T T T
A) !(x OR y)
B) !(x) AND y
C) !(x)
D) x AND !(y)
E) !(y)
F) x XOR y
G) !(x AND y)
H) x AND y
I) !(x XOR y)
J) y
K) !(x) OR y
L) x
M) x OR !(y)
N) x OR y
OMG, an XAND gate does exist. My dad is taking a technology class for a job and there IS an XAND gate. People are saying that OR and AND are complete opposites, so they expand that to the exclusive-gate logic:
XOR: One or another, but not both.
Xand: One and another, but not both.
This is incorrect. If you're going to change from XOR to XAND, you have to flip every instance of 'AND' and 'OR':
XOR: One or another, but not both.
XAND: One and another, but not one.
So XAND is true when, and only when, both inputs are equal: either 0/0 or 1/1.
First comes the logic, then the name, possibly patterned on previous naming.
Thus 0+0=0; 0+1=1; 1+0=1; 1+1=1 - for some reason this is called OR.
Then 0-0=0; 0-1=1; 1-0=1; 1-1=0 - it looks like OR except ... let's call it XOR.
Also 0*0=0; 0*1=0; 1*0=0; 1*1=1 - for some reason this is called AND.
Then 0~0=0; 0~1=0; 1~0=0; 1~1=0 - it looks like AND except ... let's call it XAND.
Related
I have a dataset on various households. Each household has various individuals. I want to take one entry (individual) for a specific variable and apply it to the entire household. Finally, I want to perform a regression on the household data and ignore the individuals. E.g.:
Household id | Individual id | Variable 1
H1           | A1            | a
H1           | A2            | .
H1           | A3            | a
H1           | A4            | a
H2           | A1            | B
H2           | A2            | .
H3           | A1            | B
H3           | A2            | a
H3           | A3            | .
H3           | A4            | a
H3           | A5            | .
H3           | A6            | .
I want to create a dataset where:
Household id | Variable 1
H1           | a
H2           | B
H3           | B
[EDIT: it is possible to do it through egen. I was trying to approach it through nested loops.]
foreach h in household_id{
    local d = 0
    local e = 0
    foreach i in individual_id{
        replace `d' = 1 if var1 == 1
        replace `e' = 2 if var1 == 2
    }
    foreach i in individual_id{
        replace a = 1 if `d' == 1
        replace a = 2 if `e' == 2
    }
}
Here a is another variable which I will use directly to replace var1. The code gives an error:
0 invalid name
What are the corrections required in my code? Is there a fundamental flaw in my usage of local macros? Using tempvar didn't help either.
The code shown is very confused.
A loop over a single entity that is never referred to inside the loop is legal, but it is just equivalent to the statements inside, executed once.
Notice that you never refer to the local macros h or i within your loops.
Thus your code is equivalent to
local d = 0
local e = 0
replace `d' = 1 if var1 == 1
replace `e' = 2 if var1 == 2
replace a = 1 if `d' == 1
replace a = 2 if `e' == 2
That fails when the local macro d is first replaced by its contents, which results in
replace 0 = 1 if var1 == 1
and the error message is correct because 0 is not a valid variable name. The next statement would raise the same problem.
Stata never gets as far as testing
... if 0 == 1
... if 0 == 2
and those statements, although legal, would do nothing because the conditions will never be satisfied.
So far, so negative. I can't suggest code positively because I don't understand what you want to do. What you have posted doesn't look like real or realistic data. I suggest that you look at http://www.statalist.org/forums/help#stata which suggests how to post Stata data examples (and so is relevant here) and also at https://stackoverflow.com/help/mcve (which does lay down the standard here).
I'm trying to get my head around the differences between these 2 coverage criteria and I can't work out how they differ. I think I'm failing to understand exactly what decision coverage is. My software testing textbook states that compound decision coverage can be costly (2^n combinations for n basic conditions).
I would have thought basic condition coverage would be costlier.
Consider a && b && c && d && e. My understanding is that in basic condition coverage, each of these atomic variables has to take the value TRUE and the value FALSE in some test case for the test suite to have basic condition adequacy - that's 32 different test cases.
So what is the actual difference, and what is referred to as a "basic condition"? In the example above, is a a basic condition?
Thanks.
Regarding terminology, I don't have a single source handy that uses the exact terms "basic condition coverage" and "multiple condition coverage". Binder's "Testing Object-Oriented Systems" says "condition coverage" and "multiple-condition coverage". Everett & McLeod's "Software Testing" says "simple condition coverage" and "compound condition coverage". But I'm certain that the first term in each case is your "basic condition coverage" and the second is your "compound condition coverage". I'll use those terms below.
Basic condition coverage means that every basic condition in the program is true in some test and false in some test, regardless of other conditions. In the following
if a && b && c
# do stuff
else
# do other stuff
end
there is a compound condition, a && b && c, with three basic conditions, a, b and c. It takes only two test cases, one where all basic conditions are true and one where all are false, to get full basic condition coverage. It doesn't matter that the basic conditions happen to be part of a compound condition.
Note that basic condition coverage is not branch coverage. If the compound condition were a && b && !c, those two test cases above would still achieve basic condition coverage but would not achieve branch coverage.
A less aggressively optimized set of test cases for basic condition coverage would have one test case where all three basic conditions are false and three test cases with a different basic condition true in each. That would still only be four of the eight possible combinations of basic conditions in the compound condition. The uncomfortable feeling that we're ignoring the other four is why there's compound condition coverage. That requires a test for each possible combination of basic conditions in a compound condition. In the example above, you'd need eight tests, one for each possible combination of possible values of a, b and c, to get full compound condition coverage.
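To make the counting concrete, here is a small sketch in C (the function f and the test harness are mine, not from any of the books cited): two tests give full basic condition coverage of a && b && c, while compound condition coverage needs all eight combinations.

#include <stdio.h>
#include <stdbool.h>

// Hypothetical code under test, matching the example above.
static bool f(bool a, bool b, bool c) { return a && b && c; }

int main(void) {
    // Basic condition coverage: each of a, b, c must be true in some test
    // and false in some test.  Two tests suffice:
    printf("basic #1: %d\n", f(true,  true,  true));    // all conditions true
    printf("basic #2: %d\n", f(false, false, false));   // all conditions false

    // Compound condition coverage: every combination of the basic
    // conditions, i.e. 2^3 = 8 tests for three conditions.
    for (int i = 0; i < 8; i++) {
        bool a = (i & 4) != 0, b = (i & 2) != 0, c = (i & 1) != 0;
        printf("compound: a=%d b=%d c=%d -> %d\n", a, b, c, f(a, b, c));
    }
    return 0;
}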
First, the difference between Decision and Condition.
A Condition is an atomic boolean expression that cannot be broken down into simpler boolean expressions. For example: a (if a is boolean).
A Decision is a compound of Conditions joined by zero or more Boolean operators. A Decision without an operator is also a Condition. For example: (a or b) and c, but also a and b, or just a.
Let's take a simple example:
if(decision) {
//branch 1
} else {
//branch 2
}
You need two tests to cover both branches. That's decision coverage, or branch coverage. In case the decision is a single condition (i.e. just a), that is also called basic condition coverage, which is the coverage of the two branches of a single condition.
The decision can be broken down into conditions.
Let's take, for example,
decision = (a or b) and c
The decision coverage would be achieved with
a,b,c = 0
a,b,c = 1
But the permutation of all the combinations of its boolean sub-expressions is the full condition coverage (or multiple condition coverage), which is the compound of the basic condition coverage:
| a | b | c |
| 0 | 0 | 1 |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
| 1 | 1 | 0 |
That would be quite a lot of tests, but some of those are redundant, as some conditions are covered by others. This is reflected in Modified Condition/Decision Coverage (MC/DC), which is a combination of condition coverage and decision coverage.
For MC/DC it is required that each condition affects the outcome independently. With the above tests (all 0 or all 1), we ignore the fact that the value of c doesn't matter if a and b are 0, or that the value of b doesn't matter if a and c are 1.
So you should sit down, use your brain, and think about for which combinations the overall result R is 1 or 0.
| a | b | c | a or b | c | R | eq
1 | 0 | 0 | 0 | 0 | 0 | 0 | A
2 | 0 | 0 | 1 | 0 | 1 | 0 | B
3 | 0 | 1 | 0 | 1 | 0 | 0 | A
4 | 0 | 1 | 1 | 1 | 1 | 1 | C
5 | 1 | 0 | 0 | 1 | 0 | 0 | A
6 | 1 | 0 | 1 | 1 | 1 | 1 | D
7 | 1 | 1 | 0 | 1 | 0 | 0 | A
8 | 1 | 1 | 1 | 1 | 1 | 1 | D
The last column shows the equivalence class:
A: c = 0, result is 0, neither a nor b have an influence
B: a,b = 0, result is 0, c has no influence
C: b,c = 1, result is 1, a has no influence
D: a,c = 1, result is 1, b has no influence
For B and C it's quite obvious which to pick; not so for A and D. For each you have to check what would happen if the operators were replaced (or -> and, and -> or): how would this affect the result of the (sub)decision? If the result would be affected, you have a candidate; if not, you don't.
A : (0 and/or 0) and/or 0 -> doesn't matter
A : (0 and 1) vs (0 or 1) -> does matter! -> Candidate
A : (1 and 0) vs (1 or 0) -> does matter! -> Candidate
A : (1 and/or 1) -> doesn't matter
D : (1 and 0) vs (1 or 0) -> does matter -> Candidate
D : (1 and 1) -> doesn't matter
So you get the final test set as mentioned above:
a = 0, b = 1, c = 0 -> false branch (A) OR a = 1, b = 0, c = 0
a = 0, b = 0, c = 1 -> false branch (B)
a = 0, b = 1, c = 1 -> true branch (C)
a = 1, b = 0, c = 1 -> true branch (D)
Especially the latter test - changing the operators - can be done with mutation-testing tools, which do not just replace operators but can do quite a bit more, e.g. flipping operands, removing statements, changing the order of execution, replacing return values, etc. For each alteration of your code, the tool verifies whether a test actually fails. This is a good indicator of the quality of your test suite and ensures that the code is not just covered, but that your tests for the code are actually valid.
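As a sanity check, the chosen test set A-D can also be verified mechanically: for each condition there must be a pair of tests that differ only in that condition and produce different outcomes. A sketch in C (the search loop is mine):

#include <stdio.h>
#include <stdbool.h>

// Decision from the example: (a or b) and c
static bool decision(bool a, bool b, bool c) { return (a || b) && c; }

int main(void) {
    // MC/DC test set derived above: A, B, C, D  (columns are a, b, c)
    bool t[4][3] = {
        {false, true,  false},  // A -> false
        {false, false, true },  // B -> false
        {false, true,  true },  // C -> true
        {true,  false, true }   // D -> true
    };

    // For each condition, look for a pair of tests that differ only in that
    // condition and produce different outcomes (independent effect).
    for (int cond = 0; cond < 3; cond++) {
        bool shown = false;
        for (int i = 0; i < 4 && !shown; i++)
            for (int j = 0; j < 4 && !shown; j++) {
                int diff = 0;
                for (int k = 0; k < 3; k++)
                    if (t[i][k] != t[j][k]) diff++;
                if (diff == 1 && t[i][cond] != t[j][cond] &&
                    decision(t[i][0], t[i][1], t[i][2]) !=
                    decision(t[j][0], t[j][1], t[j][2]))
                    shown = true;
            }
        printf("condition %c independently affects the outcome: %s\n",
               'a' + cond, shown ? "yes" : "no");
    }
    return 0;
}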
Regarding the terminology, I couldn't find the term "Compound Decision Coverage" anywhere. In my view a "compound decision" would be a compound of compounds of conditions - in other words, still just a compound of conditions.
I am trying to create a CFG for:
L = {a z^n | a ∈ {x, y}* and n = number of x's in a or number of y's in a}
I am not sure how or where to begin.
I understand the language description to be a string of x's and y's followed by a string of z's, where the number of z's must equal the number of either the x's or the y's.
Examples: {xxyxyyxxyzzzzz, yxyxyxyyyzzzzzz, etc...}
This is my "best" solution:
S => xSz | ySz | ϵ
I know this is wrong because the number of z's ends up equal to the number of x's and y's combined, rather than to the x's or the y's individually.
EDIT:
I think this is the answer, but I am not sure. It seems to work.
S => xSz | ySz | xS | yS | ϵ
Edit:
Well, that doesn't work either, as it accepts invalid strings too...
I assume that your question can be decomposed into the union of two languages:
L = L1 U L2
where L1 is the set of strings with i x's and j y's (in any order) followed by z^k with i == k, and L2 is the same but with j == k.
In other words, L1 has equally many x's and z's, while L2 has equally many y's and z's.
Hence, for L1, we can compose the CFG as
S1 => A1S1z | B1
A1 => A1B1 | B1A1 | x
B1 => yB1 | e
The first two productions ensure that the grammar produces the same number of x's as z's;
Similarly, we can construct the CFG for L2:
S2 => A2S2z | B2
A2 => A2B2 | B2A2 | y
B2 => xB2 | e
Finally, we can union those productions to get the answer:
S => S1 | S2
I think this is the answer:
S = A | B
A = C xAz | C
B = D yBz | D
C = yC | e
D = xD | e
Not sure if it can be reduced though
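To sanity-check strings derived from either grammar, a small membership test for L helps; a sketch in C (the function name is mine):

#include <stdio.h>

// Checks membership in L = { a z^n | a in {x,y}*, n = #x(a) or n = #y(a) }.
static int in_language(const char *s) {
    int xs = 0, ys = 0, zs = 0;
    size_t i = 0;
    while (s[i] == 'x' || s[i] == 'y') {   // the {x,y}* prefix
        if (s[i] == 'x') xs++; else ys++;
        i++;
    }
    while (s[i] == 'z') { zs++; i++; }     // the z^n suffix
    if (s[i] != '\0') return 0;            // anything else is not allowed
    return zs == xs || zs == ys;
}

int main(void) {
    printf("%d\n", in_language("xxyxyyxxyzzzzz"));   // 1: five x's, five z's
    printf("%d\n", in_language("xyzz"));             // 0: one x, one y, two z's
    return 0;
}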
I am new to CFGs. Can someone give me tips on creating a CFG that generates some language?
For example
L = {a^m b^n | m >= n}
What I got is:
S0 -> a | aS0 | aS1 | e
S1 -> b | bS1 | e
but I think this is wrong, because there is a chance that the number of b's can be greater than the number of a's.
How to write a CFG for the example a^m b^n:
L = {a^m b^n | m >= n}.
Language description: a^m b^n consists of a's followed by b's, where the number of a's is equal to or greater than the number of b's.
Some example strings: {^, a, aa, aab, aabb, aaaab, ab, ...}
So there is always one a for each b, but extra a's are possible; in fact the string can consist of a's only. Also notice that ^ (the null string) is a member of the language, because in ^ NumberOf(a) = NumberOf(b) = 0.
How do we write a grammar that accepts the language formed by strings a^m b^n?
In the grammar, there should be rules such that if you add a b symbol you also add an a symbol.
and this can be done with something like:
S --> aSb
But this is incomplete, because we need a rule to generate the extra a's:
A --> aA | a
Combine the two production rules into a single CFG:
S --> aSb | A
A --> aA | a
So you can generate any string that consists only of a's, or of a's and b's in the a^m b^n pattern.
But in the above grammar there is no way to generate the ^ (null) string.
So, change this grammar like this:
S --> B | ^
B --> aBb | A
A --> aA | a
This grammar can generate the language {a^m b^n | m >= n}.
Note: to generate the ^ (null) string, I added an extra first step to the grammar, S --> B | ^, so you can produce either ^ or your string of a's and b's. (B now plays the role of S from the previous grammar.)
Edit: Thanks to #Andy Hayden
You can also write an equivalent grammar for the same language {a^m b^n | m >= n}:
S --> aSb | A
A --> aA | ^
Notice: here A --> aA | ^ can generate zero or more a's, and this version is preferable to my grammar because it generates a smaller parse tree for the same string (smaller height is preferable for efficient parsing).
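Strings produced by either grammar can be sanity-checked against the language definition with a small membership test; a sketch in C (the function name is mine):

#include <stdio.h>

// Checks membership in L = { a^m b^n | m >= n }: a block of a's followed by
// a block of b's, with at least as many a's as b's.
static int in_language(const char *s) {
    int as = 0, bs = 0, i = 0;
    while (s[i] == 'a') { as++; i++; }
    while (s[i] == 'b') { bs++; i++; }
    if (s[i] != '\0') return 0;      // only a's followed by b's are allowed
    return as >= bs;
}

int main(void) {
    printf("%d\n", in_language(""));      // 1: the empty string is in L
    printf("%d\n", in_language("aabb"));  // 1: m = n = 2
    printf("%d\n", in_language("abb"));   // 0: more b's than a's
    return 0;
}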
The following tips may be helpful when writing a grammar for a formal language:
Be clear about what the language describes (its meaning/pattern).
You can remember solutions for some basic problems (the idea being that you can build on them to write new grammars).
You can write rules for fundamental languages, like I have written for RE in this example to write a Right-Linear Grammar. Those rules will help you to write grammars for new languages.
A different approach is to first draw the automaton, then convert the automaton to a grammar. There are standard techniques for writing a grammar from an automaton for any class of formal language.
Just as a good programmer learns by reading other people's code, one can learn to write grammars by reading grammars for formal languages.
Also, the grammar you have written is wrong.
You want to create a grammar for the following language:
L = {a^n b^m | m >= n}
That means the number of b's should be greater than or equal to the number of a's;
or you can say that for each 'b' there can be at most one 'a', not the other way around.
Here is a grammar for this language:
S -> aSb | Sb | b | ab
In this grammar there is one 'b' for each 'a', but a 'b' can be generated without generating any 'a'.
you can also try these languages:
L1 = {a^n b^m | m > n}
L2 = {a^n b^m | m >= 2n}
L3 = {a^n b^m | 2m >= n}
L4 = {a^n b^m | m != n}
I am giving a grammar for each language.
for L1
S-> aSb | Sb | b
for L2
S-> aSbb | Sb | abb
for L3
S-> aaSb | Sb | aab | ab | b
for L4
S-> S1 | S2
S1-> aS1b | S1b | b
S2-> aS2b | aS2 | a
Least variables: S -> a S b | a S | e
With fewer variables:
S -> a S b | a S | a b | e
A gotcha I've run into a few times in C-like languages is this:
original | included & ~excluded // BAD
Due to precedence, this parses as:
original | (included & ~excluded) // not what was intended: bits already set in 'original' are never excluded
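For illustration, the gotcha is easy to reproduce (the mask values below are made up):

#include <stdio.h>

int main(void) {
    unsigned original = 0x0F, included = 0xF0, excluded = 0x0C;

    // & binds tighter than |, so this parses as original | (included & ~excluded):
    // the excluded bits that are already set in 'original' survive.
    unsigned bad  = original | included & ~excluded;     // 0xFF
    // With explicit parentheses the exclusion applies to the whole union:
    unsigned good = (original | included) & ~excluded;   // 0xF3

    printf("bad  = 0x%02X\n", bad);
    printf("good = 0x%02X\n", good);
    return 0;
}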
Does anyone know what was behind the original design decision of three separate precedence levels for bitwise operators? More importantly, do you agree with the decision, and why?
The operators have had this precedence since at least C.
I agree with the order because it matches the relative order of the arithmetic operators they are most similar to (+, * and negation).
You can see the similarity of & vs *, and | vs + here:
A B | A&B A*B | A|B A+B
0 0 |  0   0  |  0   0
0 1 |  0   0  |  1   1
1 0 |  0   0  |  1   1
1 1 |  1   1  |  1   2
The similarity of bitwise not and negation can be seen by this formula:
~A = -A - 1
To extend Mark Byers' answer: in boolean algebra (used extensively by electrical engineers to simplify logic circuits to the minimum number of gates and to avoid race conditions), the tradition is that AND takes precedence over OR. C was just following this established tradition. See http://en.wikiversity.org/wiki/Boolean_algebra#Combining_Operations :
Just as in ordinary algebra, where multiplication takes priority over addition, AND takes priority (or precedence) over OR.