Implementation of an AND chip in HDL

I'm working through this book (http://nand2tetris.org/book.php), which teaches fundamental concepts of CS, and I got stuck where I'm asked to code an AND chip and test it in the provided testing software.
This is what I've got so far:
/**
 * And gate:
 * out = 1 if (a == 1 and b == 1)
 *       0 otherwise
 */
CHIP And {
    IN a, b;
    OUT out;

    PARTS:
    // Put your code here:
    Not(in=a, out=nota);
    Not(in=b, out=notb);
    And(a=a, b=b, out=out);
    Or(a=nota, b=b, out=nota);
    Or(a=a, b=notb, out=notb);
}
Problem is I'm getting this error:
...
at Hack.Gates.CompositeGateClass.readParts(Unknown Source)
at Hack.Gates.CompositeGateClass.<init>(Unknown Source)
at Hack.Gates.GateClass.readHDL(Unknown Source)
at Hack.Gates.GateClass.getGateClass(Unknown Source)
at Hack.Gates.CompositeGateClass.readParts(Unknown Source)
at Hack.Gates.CompositeGateClass.<init>(Unknown Source)
at Hack.Gates.GateClass.readHDL(Unknown Source)
...
And I don't know if I'm getting this error because the testing program is malfunctioning or because my code is wrong and the software can't load it up.

It may be helpful to examine the truth tables for Nand and And:
Nand
a | b | out
0 | 0 | 1
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
And
a | b | out
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
And is the inverse of Nand, meaning that for every combination of inputs, And will give the opposite output of Nand. Another way to think of the "opposite" of a binary value is "not" that value.
If you send 2 inputs through a Nand gate and then send its output through a Not gate, you will have Not(Nand(a, b)), which is equivalent to And(a, b).
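If you want to convince yourself of the equivalence, here is a quick truth-table check in plain Python (not HDL; the helper names are just for illustration):

# Quick check: Not(Nand(a, b)) gives the same output as And(a, b) for all inputs.
def nand(a, b):
    return 0 if (a == 1 and b == 1) else 1

def not_gate(x):
    return 1 - x

for a in (0, 1):
    for b in (0, 1):
        assert not_gate(nand(a, b)) == (1 if (a == 1 and b == 1) else 0)
        print(a, b, not_gate(nand(a, b)))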

Your problem is that you are trying to use parts (Not, And and Or) that haven't been defined yet (and you are trying to use an And gate in your definition of an And gate).
At each point in the course, you can only use parts that you have previously built. If memory serves, at this point the only part you have available is a Nand gate.
You should be able to construct an And gate using only Nand gates.

You're overthinking the problem significantly.
If you are given NAND, i.e. (not)AND, then AND can be constructed as (not)NAND, since (not)(not)AND = AND.


What is the meaning of the "Load" column in Apache balancer-manager?

I've set up the Apache (2.4) load-balancer which is working okay. To monitor its performance, I enabled the balancer-manager handler, which shows the status of the balancers.
I noticed a "Load" column, which was not present in version 2.2, with a value that may be negative, but I don't understand its meaning, nor was I able to find any documentation about it.
Can anyone explain the meaning of that value or point me to the right documentation?
I now understand how the calculation of "Load" works. Here is what I think is a simpler example than the one on the Apache documentation page.
Let's say we have 3 workers and a configured load factor of 1.
1) Start
a | b | c
--+---+---
0 | 0 | 0
add the load factor of 1 to all workers
a | b | c
--+---+---
1 | 1 | 1
now select the one with the highest value --> a, and decrease it by the sum of all the factors (=3) - this is the selected worker
a | b | c
---+---+---
-2 | 1 | 1
2) next round, add again 1 to all
a | b | c
---+---+---
-1 | 2 | 2
now select the one with the highest value --> b, and decrease it by the sum of all the factors (=3) - this is the selected worker
a | b | c
---+----+----
-1 | -1 | 2
3) next round, add again 1
a | b | c
---+----+----
0 | 0 | 3
now select the one with the highest value --> c, and decrease it by the sum of all the factors (=3) - this is the selected worker
a | b | c
---+----+----
0 | 0 | 0
start over again :)
I hope this helps others.
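Here is a tiny Python sketch of the same bookkeeping (my own illustration of the steps above, not Apache's code), which reproduces the three rounds:

# Toy simulation of the byrequests election shown above: each round every
# worker gets its load factor added to its lbstatus, the worker with the
# highest lbstatus is elected, and the sum of all factors is subtracted
# from the elected worker's lbstatus.
workers = {"a": 0, "b": 0, "c": 0}    # lbstatus per worker
lbfactor = {"a": 1, "b": 1, "c": 1}   # configured load factor per worker
total = sum(lbfactor.values())        # = 3 in this example

for round_no in (1, 2, 3):
    for name in workers:
        workers[name] += lbfactor[name]
    elected = max(workers, key=workers.get)
    workers[elected] -= total
    print("round", round_no, "elected", elected, workers)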
The Load value is populated by lbstatus based on this line of code:
ap_rprintf(r, "<td>%d</td><td>", worker->s->lbstatus);
in https://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/mod_proxy_balancer.c?view=markup#l1767 (the line number may change as the code is modified)
Since your method is byrequests, lbstatus is specified by mod_lbmethod_byrequests, which defines:
lbstatus is how urgent this worker has to work to fulfill its quota of
work.
Details on the algorithm can be found here: https://httpd.apache.org/docs/2.4/mod/mod_lbmethod_byrequests.html
I too want to know the description of the other columns like BUSY, ELECTED, etc. My LB has BUSY over 100 already; I thought BUSY should not exceed 100 (as in 100% server busyness or something).

Efficient bit field extraction from bytes array being interpreted as a bitstream, possibly using Intel's BMI Set 1

In order to extract bit fields from a bytes array being interpreted as a bitstream, I am aiming to devise an efficient bit field extraction function, optimized for Intel's modern CPUs (ideally making use of the new BEXTR instruction) and MS Visual Studio 2017 (I'm developing in VB.NET).
Inputs: Bitstream left to right (MSB first)
Pos=0...7, Len=1...8
Output: bitfield at Pos in length Len (LSB right-aligned)
As an example for Pos=0...7 and Len=3 (masking omitted):
Byte0    Byte1    Shifts
01234567 01234567
xxx.....          >> 5
.xxx....          >> 4
..xxx...          >> 3
...xxx..          >> 2
....xxx.          >> 1
.....xxx          n/a
......xx x....... << 1 | >> 7
.......x xx...... << 2 | >> 6
From the example, a possibly naive implementation in pseudo-code would be:
Extract(Pos, Len, ByteAddr) :=
    if 8-Pos-Len > 0
        Res := [ByteAddr+0] >> (8-Pos-Len)
        Res := Res & (2^Len - 1)
    elseif 8-Pos-Len < 0
        Res := [ByteAddr+0] << (Pos+Len-8)
        Res := Res & (2^Len - 1)
        Res := Res | ([ByteAddr+1] >> (16-Pos-Len))
    else
        Res := [ByteAddr+0] & (2^Len - 1)
    fi
Testing this on "paper" (well, notepad.exe these days) for Len=4 and Pos=0...7 shows that the algorithm is likely to work:
Byte0    Byte1    B0>>       B0<<       &(2^Len-1) B1>>       B0|B1
                  8-Pos-Len  Pos+Len-8             16-Pos-Len
01234567 01234567 01234567   01234567   01234567   01234567   01234567
xxxx....          0000xxxx              0000xxxx
.xxxx...          000.xxxx              0000xxxx
..xxxx..          00..xxxx              0000xxxx
...xxxx.          0...xxxx              0000xxxx
....xxxx                                0000xxxx
.....xxx y.......            ....xxx0   0000xxx0   0000000y   0000xxxy
......xx yy......            ....xx00   0000xx00   000000yy   0000xxyy
.......x yyy.....            ....x000   0000x000   00000yyy   0000xyyy
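For reference, the pseudo-code above translates directly into the following Python sketch (my own transcription, just to show the logic; it ignores the VB.NET/BEXTR aspects and only handles a field within the first two bytes):

def extract(data, pos, length):
    # Extract 'length' bits starting at bit position 'pos' (MSB-first)
    # from data[0]/data[1], returned right-aligned.
    mask = (1 << length) - 1          # same as 2^Len - 1
    shift = 8 - pos - length
    if shift > 0:                     # field lies entirely in byte 0
        return (data[0] >> shift) & mask
    elif shift < 0:                   # field straddles byte 0 and byte 1
        res = (data[0] << (pos + length - 8)) & mask
        return res | (data[1] >> (16 - pos - length))
    else:                             # field ends exactly at the end of byte 0
        return data[0] & mask

For example, extract(bytes([0b00010110, 0b10000000]), 5, 4) returns 0b1101, matching the Pos=5 row of the table above.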
Questions:
(1) For efficiency reasons, should I use a lookup table instead of 2^Len-1, or can I rely on the compiler to reliably optimize powers of 2? (Of course, I could also use (1<<Len)-1. Does the compiler do this?)
(2) In VB.NET, how would I proceed to instruct the compiler to pretty please make use of the new BEXTR instruction?
(3) Should I do this completely differently, i.e. package everything in lookup tables? (After all, it's just 8 x 8 possibilities. However, it would not truly be expandable.)

Basic vs. compound condition coverage

I'm trying to get my head around the differences between these 2 coverage criteria and I can't work out how they differ. I think I'm failing to understand exactly what decision coverage is. My software testing textbook states that compound decision coverage can be costly (2^n combinations for n basic conditions).
I would have thought basic condition coverage would be costlier.
Consider a && b && c && d && e. My understanding is that in basic condition coverage, each of these atomic variables has to have the value TRUE and FALSE in a test case for the test suite to have basic condition adequacy - that's 32 different test cases.
So what is the actual difference, and what is referred to as a "basic condition"? In the example above, is a a basic condition?
Thanks.
Regarding terminology, I don't have a single source handy that uses the exact terms "basic condition coverage" and "multiple condition coverage". Binder's "Testing Object-Oriented Systems" says "condition coverage" and "multiple-condition coverage". Everett & McLeod's "Software Testing" says "simple condition coverage" and "compound condition coverage". But I'm certain that the first term in each case is your "basic condition coverage" and the second is your "compound condition coverage". I'll use those terms below.
Basic condition coverage means that every basic condition in the program is true in some test and false in some test, regardless of other conditions. In the following
if a && b && c
# do stuff
else
# do other stuff
end
there is a compound condition, a && b && c, with three basic conditions, a, b and c. It takes only two test cases, one where all basic conditions are true and one where all are false, to get full basic condition coverage. It doesn't matter that the basic conditions happen to be part of a compound condition.
Note that basic condition coverage is not branch coverage. If the compound condition were a && b && !c, those two test cases above would still achieve basic condition coverage but would not achieve branch coverage.
A less aggressively optimized set of test cases for basic condition coverage would have one test case where all three basic conditions are false and three test cases with a different basic condition true in each. That would still only be four of the eight possible combinations of basic conditions in the compound condition. The uncomfortable feeling that we're ignoring the other four is why there's compound condition coverage. That requires a test for each possible combination of basic conditions in a compound condition. In the example above, you'd need eight tests, one for each possible combination of possible values of a, b and c, to get full compound condition coverage.
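To make the counting concrete, here is a small Python sketch (my own illustration, not taken from any of the sources above) contrasting the two criteria for a && b && c:

from itertools import product

# Two test cases already give full basic condition coverage for "a && b && c":
# every basic condition is true in one test and false in the other.
basic_condition_tests = [(True, True, True), (False, False, False)]

# Compound (multiple) condition coverage needs every combination: 2^3 = 8 tests.
compound_condition_tests = list(product([False, True], repeat=3))

print(len(basic_condition_tests))     # 2
print(len(compound_condition_tests))  # 8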
First, the difference between Decision and Condition.
A Condition is an atomic boolean expression that cannot be broken down into simpler boolean expressions. For example: a (if a is boolean).
A Decision is a compound of Conditions with zero or more Boolean operators. A Decision without an operator is also a condition. For example: (a or b) and c but also a and b or just a.
Let's take a simple example:
if(decision) {
    //branch 1
} else {
    //branch 2
}
You need two tests to cover both branches. That's the decision coverage or branch coverage. In case the decision is a condition (i.e. just a), this is also called basic condition coverage, which is the coverage of the two branches of a single condition.
The decision can be broken down into conditions.
Let's take, for example:
decision = (a or b) and c
The decision coverage would be achieved with
a,b,c = 0
a,b,c = 1
But the permutation of all combinations of its boolean sub-expressions is the full condition coverage (or multiple condition coverage), which is the compound of the basic condition coverage:
| a | b | c |
| 0 | 0 | 1 |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
| 1 | 1 | 0 |
That would be quite a lot of tests, but some of them are redundant, as some conditions are covered by others. This is reflected in Modified Condition/Decision Coverage (MC/DC), which is a combination of condition coverage and decision coverage.
For MC/DC it is required that each condition affects the outcome independently. With the above tests (all 0 or all 1), we ignore the fact that the value of c doesn't matter if a and b are 0, or that the value of b doesn't matter if a and c are 1.
So you should sit down, use your brain, and think about which combinations make the overall result R either 1 or 0.
  | a | b | c | a or b | c | R | eq
1 | 0 | 0 | 0 |   0    | 0 | 0 | A
2 | 0 | 0 | 1 |   0    | 1 | 0 | B
3 | 0 | 1 | 0 |   1    | 0 | 0 | A
4 | 0 | 1 | 1 |   1    | 1 | 1 | C
5 | 1 | 0 | 0 |   1    | 0 | 0 | A
6 | 1 | 0 | 1 |   1    | 1 | 1 | D
7 | 1 | 1 | 0 |   1    | 0 | 0 | A
8 | 1 | 1 | 1 |   1    | 1 | 1 | D
The last column shows the equivalence class:
A: c = 0, result is 0, neither a nor b have an influence
B: a,b = 0, result is 0, c has no influence
C: b,c = 1, result is 1, a has no influence
D: a,c = 1, result is 1, b has no influence
For B and C it's quite obvious which to pick, not so for A and D. For each, you have to check what would happen if you replaced the operators (i.e. or -> and, and -> or) and whether this affects the result of the (sub)decision. If the result is affected, you've got a candidate - if not, you don't.
A : (0 and/or 0) and/or 0 -> doesn't matter
A : (0 and 1) vs (0 or 1) -> does matter! -> Candidate
A : (1 and 0) vs (1 or 0) -> does matter! -> Candidate
A : (1 and/or 1) -> doesn't matter
D : (1 and 0) vs (1 or 0) -> does matter -> Candidate
D : (1 and 1) -> doesn't matter
So you get the final test set as mentioned above:
a = 0, b = 1, c = 0 -> false branch (A) OR a = 1, b = 0, c = 0
a = 0, b = 0, c = 1 -> false branch (B)
a = 0, b = 1, c = 1 -> true branch (C)
a = 1, b = 0, c = 1 -> true branch (D)
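If you want to verify the table and the equivalence classes mechanically, here is a small Python sketch (my own illustration) that recomputes R = (a or b) and c for all eight combinations:

from itertools import product

def decision(a, b, c):
    return (a or b) and c   # the decision R from the example

for a, b, c in product([0, 1], repeat=3):
    r = int(decision(a, b, c))
    # Reproduce the equivalence classes from the table above.
    if c == 0:
        cls = "A"   # c = 0 forces R = 0; a and b have no influence
    elif a == 0 and b == 0:
        cls = "B"   # a = b = 0 forces R = 0; c has no influence
    elif a == 0:
        cls = "C"   # b = c = 1 with a = 0 gives R = 1; a has no influence
    else:
        cls = "D"   # a = c = 1 gives R = 1; b has no influence
    print(a, b, c, r, cls)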
Especially the latter test - changing the operators - can be done with tools like mutation testing, which do not just replace operators but can do quite a bit more, e.g. flipping operands, removing statements, changing the order of execution, replacing return values, etc. For each alteration of your code, such a tool verifies whether a test actually fails. This is a good indicator of the quality of your test suite and ensures that the code is not just covered but that your tests for the code are actually valid.
Regarding the terminology, I couldn't find the term "Compound Decision Coverage" anywhere. In my view, a "compound decision" would be a compound of compounds of conditions - in other words, a compound of conditions.

Robot Framework - Case sensitive

I need to know if there is a workaround for this error:
When I use Keywords like:
Location Should Be
Location Should Contain
(Which are both part of the Selenium2Library.)
I get this error:
"Location should have been 'http://www.google.pt' but was 'http://WwW.GooGLe.Pt'
I think that is because robot framework is natively case sensitive when comparing strings.
Any help?
Ty
EDIT
Question edited to clarify some subjects.
Luckily Robot Framework allows for keywords to be written in python.
MyLibrary.py
def Compare_Ignore_Case(s1, s2):
    # Returns True if the two strings are equal ignoring case, False otherwise.
    if s1.lower() != s2.lower():
        return False
    else:
        return True

def Convert_to_Lowercase(s1):
    # Returns a lower-cased copy of the given string.
    return s1.lower()
MySuite.txt
| *Setting* | *Value* |
| Library | ./MyLibrary.py |
| *Test Case* | *Action* | *Argument*
#
| T100 | [Documentation] | Compare two strings ignoring case.
| | Compare Ignore Case | foo | FOO
#
| T101 | [Documentation] | Compare two strings where one is a variable.
# Should be Get Location in your case.
| | ${temp}= | MyKeyword that Returns a String
| | Compare Ignore Case | foo | ${temp}
I have not used the Selenium library, but the example in T101 should work for you.
Just in case someone else wants this:
Location should be '${expectedUrl}' disregard case
    ${currUrl}         get location
    ${currUrl}         evaluate    "${currUrl}".lower()
    ${expectedUrl}     evaluate    "${expectedUrl}".lower()
    should be equal    ${currUrl}    ${expectedUrl}

Location should contain '${expected}' disregard case
    ${currUrl}        get location
    ${currUrl}        evaluate    "${currUrl}".lower()
    ${expected}       evaluate    "${expected}".lower()
    should contain    ${currUrl}    ${expected}

Bitwise operator precedence

A gotcha I've run into a few times in C-like languages is this:
original | included & ~excluded // BAD
Due to precedence, this parses as:
original | (included & ~excluded) // '~excluded' has no effect
Does anyone know what was behind the original design decision of three separate precedence levels for bitwise operators? More importantly, do you agree with the decision, and why?
The operators have had this precedence since at least C.
I agree with the order because it is the same as the relative order of the arithmetic operators they are most similar to (+, * and negation).
You can see the similarity of & vs *, and | vs + here:
A B | A&B A*B | A|B A+B
0 0 | 0 0 | 0 0
0 1 | 0 0 | 1 1
1 0 | 0 0 | 1 1
1 1 | 1 1 | 1 2
The similarity of bitwise not and negation can be seen by this formula:
~A = -A - 1
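Both the identity and the precedence gotcha from the question can be checked quickly in Python, where these operators have the same relative precedence as in C (this snippet is only an illustration; the values are arbitrary):

# ~A == -A - 1 holds for any integer A.
for a in (0, 1, 7, 255, -3):
    assert ~a == -a - 1

# The gotcha from the question: & binds tighter than |, so the
# intended grouping needs explicit parentheses.
original, included, excluded = 0b1010, 0b0110, 0b0010
assert original | included & ~excluded == original | (included & ~excluded)
assert (original | included) & ~excluded != original | (included & ~excluded)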
To extend Mark Byers' answer, in boolean algebra (used extensively by electrical engineers to simplify logic circuits to the minimum number of gates and to avoid race conditions), the tradition is that bitwise AND takes precedence over bitwise OR. C was just following this established tradition. See http://en.wikiversity.org/wiki/Boolean_algebra#Combining_Operations :
Just as in ordinary algebra, where
multiplication takes priority over
addition, AND takes priority (or
precedence) over OR.