Semantics of Verilog >> operator applied to nets

I'm trying to figure out a subtle detail of the semantics of the >> operator in Verilog. I haven't found anything relevant in the LRM or any other online reference/tutorial material.
As an example, suppose I want to convert binary to Gray code and back again. For binary-to-Gray, it's pretty straightforward:
wire [3:0] bin;
wire [3:0] gray;
assign gray[3] = bin[3];
assign gray[2] = bin[3] ^ bin[2];
assign gray[1] = bin[2] ^ bin[1];
assign gray[0] = bin[1] ^ bin[0];
Of course, the assignments can be reduced to
assign gray = bin ^ {1'b0, bin[3:1]};
Or even just
assign gray = bin ^ (bin >> 1);
However, going in the opposite direction is a little different. This involves a recurrence relationship, in which the computation of each bit relies on the previous result:
assign bin[3] = gray[3];
assign bin[2] = bin[3] ^ gray[2];
assign bin[1] = bin[2] ^ gray[1];
assign bin[0] = bin[1] ^ gray[0];
I can write this as
assign bin = gray ^ {1'b0, bin[3:1]};
But can I take the final step and write
assign bin = gray ^ (bin >> 1);
Obviously, that would never work in any software programming language. But to me as a hardware designer, all of these forms describe exactly the same hardware connections. Which way will simulation and synthesis tools interpret it?
I'm about to go off and try a few simulators to see what happens, but I would appreciate a pointer to a definitive reference — even if it ends up being "unspecified" or "implementation-dependent".

This should work in both simulation and synthesis. Although the assignment refers to bin on both sides, each bit of bin depends only on the next higher bit (bin[i] = gray[i] ^ bin[i+1]), so there is no true combinational loop; the "recursion" is fine as long as it unrolls to a statically deterministic depth.

Does Perl 6 have an infinite Int?

I had a task where I wanted to find the closest string to a target (so, edit distance) without generating them all at the same time. I figured I'd use the high water mark technique (low, I guess) while initializing the closest edit distance to Inf so that any edit distance is closer:
use Text::Levenshtein;
my @strings = < Amelia Fred Barney Gilligan >;
for @strings {
    put "$_ is closest so far: { longest( 'Camelia', $_ ) }";
}
sub longest ( Str:D $target, Str:D $string ) {
    state Int $closest-so-far = Inf;
    state Str:D $closest-string = '';
    if distance( $target, $string ) < $closest-so-far {
        $closest-so-far = $string.chars;
        $closest-string = $string;
        return True;
    }
    return False;
}
However, Inf is a Num so I can't do that:
Type check failed in assignment to $closest-so-far; expected Int but got Num (Inf)
I could make the constraint a Num and coerce to that:
state Num $closest-so-far = Inf;
...
$closest-so-far = $string.chars.Num;
However, this seems quite unnatural. And, since Num and Int aren't related, I can't have a constraint like Int(Num). I only really care about this for the first value. It's easy to set that to something sufficiently high (such as the length of the longest string), but I wanted something more pure.
Is there something I'm missing? I would have thought that any numbery thing could have a special value that was greater (or less than) all the other values. Polymorphism and all that.
{new intro that's hopefully better than the unhelpful/misleading original one}
@CarlMäsak, in a comment he wrote below this answer after my first version of it:
Last time I talked to Larry about this {in 2014}, his rationale seemed to be that ... Inf should work for all of Int, Num and Str
(The first version of my answer began with a "recollection" that I've concluded was at least unhelpful and plausibly an entirely false memory.)
In my research in response to Carl's comment, I did find one related gem in #perl6-dev in 2016 when Larry wrote:
then our policy could be, if you want an Int that supports ±Inf and NaN, use Rat instead
in other words, don't make Rat consistent with Int, make it consistent with Num
Larry wrote this post-6.c. I don't recall seeing anything like it discussed for 6.d.
{and now back to the rest of my first answer}
Num in P6 implements the IEEE 754 floating point number type. Per the IEEE spec this type must support several concrete values that are reserved to stand in for abstract concepts, including the concept of positive infinity. P6 binds the corresponding concrete value to the term Inf.
Given that this concrete value denoting infinity already existed, it became a language wide general purpose concrete value denoting infinity for cases that don't involve floating point numbers such as conveying infinity in string and list functions.
The solution to your problem that I propose below is to use a where clause via a subset.
A where clause allows one to specify run-time assignment/binding "typechecks". I quote "typecheck" because it's the most powerful form of check possible -- it's computationally universal and literally checks the actual run-time value (rather than a statically typed view of what that value can be). This means such checks are slower and happen at run time, not compile time, but it also makes them far more powerful (not to mention far easier to express) than even dependent types, a relatively cutting-edge feature that those who are into advanced statically type-checked languages tend to claim as only available in their own world[1], and which are intended to "prevent bugs by allowing extremely expressive types" (but good luck with figuring out how to express them... ;)).
A subset declaration can include a where clause. This allows you to name the check and use it as a named type constraint.
So, you can use these two features to get what you want:
subset Int-or-Inf where Int:D | Inf;
Now just use that subset as a type:
my Int-or-Inf $foo; # ($foo contains `Int-or-Inf` type object)
$foo = 99999999999; # works
$foo = Inf; # works
$foo = Int-or-Inf; # works
$foo = Int; # typecheck failure
$foo = 'a'; # typecheck failure
[1] See Does Perl 6 support dependent types? It seems the rough consensus is no.

verilog assigning to same variable not working

I am having a strange problem with Verilog HDL.
I found in my code that if I multiply a variable by two and then assign that value back to the same variable, it gets all messed up. Sometimes the simv program even crashes. I originally needed to do this because I had a for loop for shifting or rotating by a certain amount, but then I found that it is not only shifting the same variable that fails; addition, subtraction, multiplication, and division do not work either.
So in my code example, if you set a to 16'b0001_0000_1010_0101 and b to 3, then you get an output of 16'b0000_0000_0000_0000. Just note that I am ignoring b for now... I should get 16'b0010_0001_0100_1010, but something is going wrong.
So, this is my code file test.v:
// ALU module.
module test(in1, in2, out1);
  input [15:0] in1;
  input [15:0] in2;
  output reg [15:0] out1;
  // Variables for shifting right and for rotating left or right.
  reg [15:0] shiftedValue;

  always @(in1, in2)
  begin
    assign shiftedValue = in1;
    assign shiftedValue = shiftedValue * 2;
    assign out1 = shiftedValue;
    // This display value is correct!
    // but it's still wrong in the test bench.
    $display("out1 == %b", out1);
  end
endmodule

module testStim;
  reg [15:0] a;
  reg [15:0] b;
  wire [15:0] c;
  // create ALU instance.
  test myTest(a, b, c);

  initial
  begin
    a = 16'b0001_0000_1010_0101;
    b = 3;
    #10
    $display("op1In == %b, op1Out == %b", a, c);
    $finish;
  end
endmodule
This is the output after running simv (I stripped out the erroneous garbage...):
out1 == 0010000101001010
op1In == 0001000010100101, op1Out == 0000000000000000
Thanks,
Erik W.
You have done what is called a procedural continuous assignment.
The differences between regular continuous assignments and procedural continuous assignments are these:
A continuous assignment can only drive wire/net data types. A procedural continuous assignment can only drive reg data types, not nets.
A continuous assignment must appear outside procedural blocks (always, initial, etc.), while the latter must appear inside procedural blocks.
A continuous assignment executes each time its right-hand-side expression changes. A procedural continuous assignment takes effect when the assign statement is executed, which depends on the sensitivity list of the always block.
Here the procedural continuous assignment keeps driving the reg until a matching deassign statement, which I think is not your real intent, so just remove the assign keyword and use ordinary blocking assignments, as shown below:
shiftedValue = in1;
shiftedValue = shiftedValue * 2;
out1 = shiftedValue;
More information about assign and deassign is available at this, this, and this link.

Resizing matrix using pointer attribute

I have a Fortran program which uses a routine in a module to resize a matrix like:
module resizemod
contains
  subroutine ResizeMatrix(A,newSize,lindx)
    integer,dimension(:,:),intent(inout),pointer :: A
    integer,intent(in) :: newSize(2)
    integer,dimension(:,:),allocatable :: B
    integer,optional,intent(in) :: lindx(2)
    integer :: i,j
    allocate(B(lbound(A,1):ubound(A,1),lbound(A,2):ubound(A,2)))
    forall (i=lbound(A,1):ubound(A,1),j=lbound(A,2):ubound(A,2))
      B(i,j)=A(i,j)
    end forall
    if (associated(A)) deallocate(A)
    if (present(lindx)) then
      allocate(A(lindx(1):lindx(1)+newSize(1)-1,lindx(2):lindx(2)+newSize(2)-1))
    else
      allocate(A(newSize(1),newSize(2)))
    end if
    do i=lbound(B,1),ubound(B,1)
      do j=lbound(B,2), ubound(B,2)
        A(i,j)=B(i,j)
      end do
    end do
    deallocate(B)
  end subroutine ResizeMatrix
end module resizemod
The main program looks like:
program resize
  use :: resizemod
  implicit none
  integer,pointer :: mtest(:,:)
  allocate(mtest(0:1,3))
  mtest(0,:)=[1,2,3]
  mtest(1,:)=[1,4,5]
  call ResizeMatrix(mtest,[3,3],lindx=[0,1])
  mtest(2,:)=0
  print *,mtest(0,:)
  print *,mtest(1,:)
  print *,mtest(2,:)
end program resize
I use ifort 14.0 to compile the code. The issue I am facing is that sometimes I don't get the desired result:
1 0 0
1 0 5
0 0 -677609912
Actually I couldn't reproduce the issue (which is present in my original program) using this minimal test code. But I did notice that when I remove the compiler option -fast, the problem disappears.
Then my questions would be:
1. Is the code that I use completely legal?
2. Is there a better method for resizing matrices than the one presented here?
3. What is the relevance of the described issue to the compiler option "-fast"?
If I've read the code right, it's legal but incorrect. In your example you've resized a 2x3 array into a 3x3, but the routine ResizeMatrix doesn't do anything to set the values of the extra elements. The strange values you see, such as -677609912, are the interpretation, as integers, of whatever bits were lying around in memory when the memory location corresponding to the unset array element was read (so that its value could be written out).
The relevance of -fast is that it is common for compilers, in debug or low-optimisation modes, to zero out memory locations, but not to bother when higher optimisation is switched on. The program is legal in the sense that it contains no compiler-determinable syntax errors, but it is incorrect in the sense that reading a variable whose value has not been initialised or assigned is not something you ought to do; doing so makes your program essentially non-deterministic.
As for your question 2, it raises the possibility that you are not familiar with the intrinsic function reshape or (from Fortran 2003) move_alloc. The latter is almost certainly what you want; the former may help too.
As an aside: these days I rarely use pointer on arrays, allocatable is much more useful and generally easier and safer too. But you may have requirements of which I wot not.

How to 'checksum' an array of noisy floating point numbers?

What is a quick and easy way to 'checksum' an array of floating point numbers, while allowing for a specified small amount of inaccuracy?
e.g. I have two algorithms which should (in theory, with infinite precision) output the same array. But they work differently, and so floating point errors will accumulate differently, though the array lengths should be exactly the same. I'd like a quick and easy way to test if the arrays seem to be the same. I could of course compare the numbers pairwise, and report the maximum error; but one algorithm is in C++ and the other is in Mathematica and I don't want the bother of writing out the numbers to a file or pasting them from one system to another. That's why I want a simple checksum.
I could simply add up all the numbers in the array. If the array length is N, and I can tolerate an error of 0.0001 in each number, then I would check if abs(sum1-sum2)<0.0001*N. But this simplistic 'checksum' is not robust, e.g. to an error of +10 in one entry and -10 in another. (And anyway, probability theory says that the error probably grows like sqrt(N), not like N.) Of course, any checksum is a low-dimensional summary of a chunk of data so it will miss some errors, if not most... but simple checksums are nonetheless useful for finding non-malicious bug-type errors.
Or I could create a two-dimensional checksum, [sum(x[n]), sum(abs(x[n]))]. But is that the best I can do, i.e. is there a different function I might use that would be "more orthogonal" to sum(x[n])? And if I used some arbitrary functions, e.g. [sum(f1(x[n])), sum(f2(x[n]))], then how should my 'raw error tolerance' translate into 'checksum error tolerance'?
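To make that concrete, here is roughly the kind of thing I have in mind in C++ (just a sketch; the struct and function names and the hard-coded tolerance are only for illustration):

#include <cmath>
#include <cstddef>
#include <vector>

// A two-component fuzzy checksum: [sum(x[n]), sum(|x[n]|)].
struct FuzzyChecksum {
    double sum = 0.0;
    double sum_abs = 0.0;
};

FuzzyChecksum checksum(const std::vector<double>& x) {
    FuzzyChecksum c;
    for (double v : x) {
        c.sum += v;
        c.sum_abs += std::fabs(v);
    }
    return c;
}

// Accept the two arrays as "equal" if both components agree to within
// tol per element (the simplistic criterion described above).
bool roughly_equal(const FuzzyChecksum& a, const FuzzyChecksum& b,
                   std::size_t n, double tol = 0.0001) {
    return std::fabs(a.sum - b.sum) < tol * n &&
           std::fabs(a.sum_abs - b.sum_abs) < tol * n;
}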
I'm programming in C++, but I'm happy to see answers in any language.
i have a feeling that what you want may be possible via something like gray codes. if you could translate your values into gray codes and use some kind of checksum that was able to correct n bits you could detect whether or not the two arrays were the same except for n-1 bits of error, right? (each bit of error means a number is "off by one", where the mapping would be such that this was a variation in the least significant digit).
but the exact details are beyond me - particularly for floating point values.
i don't know if it helps, but what gray codes solve is the problem of pathological rounding. rounding sounds like it will solve the problem - a naive solution might round and then checksum. but simple rounding always has pathological cases - for example, if we use floor, then 0.9999999 and 1 are distinct. a gray code approach seems to address that, since neighbouring values are always a single bit away, so a bit-based checksum will accurately reflect "distance".
[update:] more exactly, what you want is a checksum that gives an estimate of the hamming distance between your gray-encoded sequences (and the gray-encoded part is easy if you just care about 0.0001, since you can multiply everything by 10000 and use integers).
and it seems like such checksums do exist: Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired.
so, just in case it's not clear:
multiply by 1/(minimum error) to get integers (e.g. by 10000 for a tolerance of 0.0001)
convert to gray code equivalent
use an error detecting code with a minimum hamming distance larger than the error you can tolerate.
but i am still not sure that's right. you still get the pathological rounding in the conversion from float to integer. so it seems like you need a minimum hamming distance that is 1 + len(data) (worst case, with a rounding error on each value). is that feasible? probably not for large arrays.
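something like this for the first two steps, just as a sketch in c++ (untested; the function name and the 0.0001 tolerance are my own placeholders, and it assumes the values are non-negative):

#include <cmath>
#include <cstdint>
#include <vector>

// steps 1 and 2: scale by 1/tolerance and round to an integer, then convert
// each integer to its gray-code equivalent (q ^ (q >> 1)).
// the error-detecting code of step 3 is not shown.
std::vector<std::uint64_t> gray_encode(const std::vector<double>& x,
                                       double tolerance = 0.0001) {
    std::vector<std::uint64_t> out;
    out.reserve(x.size());
    for (double v : x) {
        std::uint64_t q = static_cast<std::uint64_t>(std::llround(v / tolerance));
        out.push_back(q ^ (q >> 1));  // binary -> gray code
    }
    return out;
}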
maybe ask again with better tags/description now that a general direction is possible? or just add tags now? we need someone who does this for a living. [i added a couple of tags]
I've spent a while looking for a deterministic answer, and been unable to find one. If there is a good answer, it's likely to require heavy-duty mathematical skills (functional analysis).
I'm pretty sure there is no solution based on "discretize in some cunning way, then apply a discrete checksum", e.g. "discretize into strings of 0/1/?, where ? means wildcard". Any discretization will have the property that two floating-point numbers very close to each other can end up with different discrete codes, and then the discrete checksum won't tell us what we want to know.
However, a very simple randomized scheme should work fine. Generate a pseudorandom string S from the alphabet {+1,-1}, and compute csx=sum(X_i*S_i) and csy=sum(Y_i*S_i), where X and Y are my original arrays of floating point numbers. If we model the errors as independent Normal random variables with mean 0, then it's easy to compute the distribution of csx-csy. We could do this for several strings S, and then do a hypothesis test that the mean error is 0. The number of strings S needed for the test is fixed, it doesn't grow linearly in the size of the arrays, so it satisfies my need for a "low-dimensional summary". This method also gives an estimate of the standard deviation of the error, which may be handy.
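A sketch in C++ of what I mean (the names are placeholders, both arrays live in one program only to keep the sketch short -- in practice each side would compute its own sum from the same fixed seed -- and the hypothesis test is reduced to a crude threshold):

#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Random +/-1 projection checksum. 'sigma' is the assumed standard deviation
// of the per-element floating point error.
bool fuzzy_equal(const std::vector<double>& x, const std::vector<double>& y,
                 double sigma, int trials = 8) {
    if (x.size() != y.size()) return false;
    std::mt19937 rng(12345);                // fixed seed: both sides must generate the same signs
    std::bernoulli_distribution coin(0.5);
    const double n = static_cast<double>(x.size());
    for (int t = 0; t < trials; ++t) {
        double csx = 0.0, csy = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            const double s = coin(rng) ? 1.0 : -1.0;
            csx += s * x[i];
            csy += s * y[i];
        }
        // Under the error model, csx - csy is roughly Normal with standard
        // deviation on the order of sigma * sqrt(n); flag anything much larger.
        if (std::fabs(csx - csy) > 4.0 * sigma * std::sqrt(n)) return false;
    }
    return true;
}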
Try this:
#include <complex>
#include <cmath>
#include <iostream>
// PARAMETERS
const size_t no_freqs = 3;
const double freqs[no_freqs] = {0.05, 0.16, 0.39}; // (for example)
int main() {
    std::complex<double> spectral_amplitude[no_freqs];
    for (size_t i = 0; i < no_freqs; ++i) spectral_amplitude[i] = 0.0;
    size_t n_data = 0;
    {
        std::complex<double> datum;
        while (std::cin >> datum) {
            for (size_t i = 0; i < no_freqs; ++i) {
                spectral_amplitude[i] += datum * std::exp(
                    std::complex<double>(0.0, 1.0) * freqs[i] * double(n_data)
                );
            }
            ++n_data;
        }
    }
    std::cout << "Fuzzy checksum:\n";
    for (size_t i = 0; i < no_freqs; ++i) {
        std::cout << real(spectral_amplitude[i]) << "\n";
        std::cout << imag(spectral_amplitude[i]) << "\n";
    }
    std::cout << "\n";
    return 0;
}
It returns just a few, arbitrary points of a Fourier transform of the entire data set. These make a fuzzy checksum, so to speak.
How about computing a standard integer checksum on the data obtained by zeroing the least significant digits of the data, the ones that you don't care about?
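For example, something along these lines (a C++ sketch with placeholder names; note that, as discussed above, two values that happen to straddle a rounding boundary will still produce different checksums):

#include <cmath>
#include <cstdint>
#include <vector>

// Round each value to the precision you care about, then feed the quantized
// integers into an ordinary checksum (a simple FNV-style hash here; any
// standard checksum would do).
std::uint64_t quantized_checksum(const std::vector<double>& x,
                                 double precision = 0.0001) {
    std::uint64_t h = 14695981039346656037ull;               // FNV offset basis
    for (double v : x) {
        const std::int64_t q = std::llround(v / precision);  // drop digits below 'precision'
        h = (h ^ static_cast<std::uint64_t>(q)) * 1099511628211ull;  // FNV prime
    }
    return h;
}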

Clearing numerical values in Mathematica

I am working on fairly large Mathematica projects, and the problem arises that I have to intermittently check numerical results but want to easily revert to having all my constructs in analytical form.
The code is fairly fluid, and I don't want to use scoping constructs everywhere as they add work overhead. Is there an easy way to identify and clear all assignments that are numerical?
EDIT: I really do know that scoping is the way to do this correctly ;-). However, for my workflow I am really just looking for a dirty trick to nix all numerical assignments after the fact instead of having the foresight to put down a Block.
If your assignments are on the top level, you can use something like this:
a = 1;
b = c;
d = 3;
e = d + b;
Cases[DownValues[In],
  HoldPattern[lhs_ = rhs_?NumericQ] |
    HoldPattern[(lhs_ = rhs_?NumericQ;)] :> Unset[lhs],
  3]
This will work if your history length ($HistoryLength, which defaults to infinity) is sufficient. Note however that, in the above example, e was assigned 3 + c, and the 3 here was not undone. So the problem is really ambiguous in formulation, because some numbers could make it into definitions. One way to avoid this is to use SetDelayed for assignments, rather than Set.
Another alternative would be to analyze the names in, say, the Global` context (if that is the context where your symbols live), and then inspect the OwnValues and DownValues of those symbols in a fashion similar to the above, removing definitions with purely numerical r.h.s.
But IMO neither of these approaches is robust. I'd still use scoping constructs and try to isolate numerics. One possibility is to wrap your final code in Block, and assign numerical values inside this Block. This seems a much cleaner approach. The work overhead is minimal - you just have to remember which symbols you want to assign the values to. Block will automatically ensure that outside it, the symbols have no definitions.
EDIT
Yet another possibility is to use local rules. For example, one could define rule[a] = a -> 1; rule[d] = d -> 3 instead of the assignments above. You could then apply these rules, extracting them as, say, DownValues[rule][[All, 2]], whenever you want to test with some numerical arguments.
Building on Andrew Moylan's solution, one can construct a Block-like function that takes rules:
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold@rules, {2}], Unevaluated[expr]]
You can then save your numeric rules in a variable, and use BlockRules[ savedrules, code ], or even define a function that would apply a fixed set of rules, kind of like so:
In[76]:= NumericCheck =
Function[body, BlockRules[{a -> 3, b -> 2`}, body], HoldAll];
In[78]:= a + b // NumericCheck
Out[78]= 5.
EDIT In response to Timo's comment, it might be possible to use NotebookEvaluate (new in 8) to achieve the requested effect.
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold@rules, {2}], Unevaluated[expr]]
nb = CreateDocument[{ExpressionCell[
     Defer[Plot[Sin[a x], {x, 0, 2 Pi}]], "Input"],
    ExpressionCell[Defer[Integrate[Sin[a x^2], {x, 0, 2 Pi}]],
     "Input"]}];
BlockRules[{a -> 4}, NotebookEvaluate[nb, InsertResults -> "True"];]
As the result of this evaluation you get a notebook with your commands evaluated while a was locally set to 4. To take it further, you would have to take the notebook with your code, open a new notebook, evaluate Notebooks[] to identify the notebook of interest, and then do:
BlockRules[variablerules,
  NotebookEvaluate[NotebookPut[NotebookGet[nbobj]],
    InsertResults -> "True"]]
I hope you can make this idea work.