How does the Yosys ConstEval API work?

I'm trying to write a plugin that requires evaluating combinatorial circuits. From what I can gather ConstEval is the tool which does this. However, the API is not so clear to me. Is there somewhere a rundown of the members of ConstEval and what they do?
(Asked by jeremysalwen on github).

Using the ConstEval class is actually quite easy. You create a ConstEval object for a given module and set the known values using the void ConstEval::set(SigSpec, Const) method. After all the known signals have been set, the bool ConstEval::eval(SigSpec&, SigSpec&) method can be used to evaluate nets. The eval() method returns true when the evaluation was successful and replaces the net(s) in the first argument with the constant values they evaluate to. Otherwise it returns false and sets the second argument to the list of nets that must be set in order to continue the evaluation.
The methods push() and pop() can be used to create local contexts for set(). The method stop() can be used to declare signals at which the evaluation should stop, even when there are combinational cells driving the net.
The following simple Yosys plugin demonstrates how to use the ConstEval API (evaldemo.cc):
#include "kernel/yosys.h"
#include "kernel/consteval.h"
USING_YOSYS_NAMESPACE
PRIVATE_NAMESPACE_BEGIN
struct EvalDemoPass : public Pass
{
    EvalDemoPass() : Pass("evaldemo") { }

    virtual void execute(vector<string>, Design *design)
    {
        Module *module = design->top_module();

        if (module == nullptr)
            log_error("No top module found!\n");

        Wire *wire_a = module->wire("\\A");
        Wire *wire_y = module->wire("\\Y");

        if (wire_a == nullptr)
            log_error("No wire A found!\n");

        if (wire_y == nullptr)
            log_error("No wire Y found!\n");

        ConstEval ce(module);

        for (int v = 0; v < 4; v++) {
            ce.push();
            ce.set(wire_a, Const(v, GetSize(wire_a)));

            SigSpec sig_y = wire_y, sig_undef;

            if (ce.eval(sig_y, sig_undef))
                log("Eval results for A=%d: Y=%s\n", v, log_signal(sig_y));
            else
                log("Eval failed for A=%d: Missing value for %s\n", v, log_signal(sig_undef));

            ce.pop();
        }
    }
} EvalDemoPass;
PRIVATE_NAMESPACE_END
Example usage:
$ cat > evaldemo.v <<EOT
module main(input [1:0] A, input [7:0] B, C, D, output [7:0] Y);
assign Y = A == 0 ? B : A == 1 ? C : A == 2 ? D : 42;
endmodule
EOT
$ yosys-config --build evaldemo.so evaldemo.cc
$ yosys -m evaldemo.so -p evaldemo evaldemo.v
...
-- Running command `evaldemo' --
Eval failed for A=0: Missing value for \B
Eval failed for A=1: Missing value for \C
Eval failed for A=2: Missing value for \D
Eval results for A=3: Y=8'00101010
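The demo exercises set(), eval(), push(), and pop(), but not stop(). The following standalone snippet is a minimal sketch of how stop() could be used with the same kind of module handle; the wire name \X is hypothetical (it stands for some internal, logic-driven net in your own design) and does not exist in the demo design.

// Sketch only: \X is a hypothetical internal net driven by combinational
// logic. After ce.stop(), evaluation treats \X as an independent input
// and will not evaluate through the cells driving it.
ConstEval ce(module);
ce.stop(module->wire("\\X"));
ce.set(module->wire("\\A"), Const(0, GetSize(module->wire("\\A"))));

SigSpec sig_y = module->wire("\\Y"), sig_undef;
if (!ce.eval(sig_y, sig_undef))
    log("Evaluation stopped, still missing: %s\n", log_signal(sig_undef));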

What's the protocol for calling Raku code from C code?

2023 update: The last person to edit this question deleted the critically important "LATEST LATEST UPDATE" note that @zentrunix had added near the top. I'm reinstating it.
LATEST LATEST UPDATE
Please see my answer below.
Thanks to everyone who took the time to answer and understand this question.
Original question
Say I have my event-driven TCP communications library in C.
From my Raku application, I can call a function in the C library using NativeCall.
my $server = create-server("127.0.0.1", 4000);
Now, from my callback in C (say onAccept) I want to call out to a Raku function in my application (say on-accept(connection) where connection will be a pointer to a C struct).
So, how can I do that: call my Raku function on-accept from my C function onAccept?
P.S. I tried posting with the simpler title "How to call Raku code from C code", but for whatever reason stackoverflow.com wouldn't let me, so I concocted this fancy title.
Later update: the problem was that I was creating a 32-bit DLL. We have to explicitly tell CMake to configure a 64-bit build:
cmake -G "Visual Studio 14 2015 Win64" ..
Anyway, now that the code runs, it's not really what I asked for, because the callback is still in C. It seems that what I asked for isn't really possible.
Earlier update: I tried to use the approach suggested by Håkon, though I'm afraid I don't understand how it would work. I'm on Windows, and unfortunately Raku can't find my DLLs, even if I put them in C:\Windows\System32. It finds "msvcrt" (the C runtime), but not my DLLs.
The DLL code (Visual Studio 2015):
#include <stdio.h>
#define EXPORTED __declspec(dllexport)
typedef int (*proto)(const char*);
proto raku_callback;
extern EXPORTED void set_callback(proto);
extern EXPORTED void foo(void);
void set_callback(proto arg)
{
    printf("In set_callback()..\n");
    raku_callback = arg;
}

void foo(void)
{
    printf("In foo()..\n");
    int res = raku_callback("hello");
    printf("Raku return value: %d\n", res);
}
The CMake code for the DLL:
CMAKE_MINIMUM_REQUIRED (VERSION 3.1)
add_library (my_c_dll SHARED my_c_dll.c)
The Raku code (test-dll.raku):
use v6.d;
use NativeCall;
sub set_callback(&callback (Str --> int32))
    is native("./my_c_dll"){ * }

sub foo()
    is native("./my_c_dll"){ * }

sub callback(Str $str --> Int) {
    say "Raku callback.. got string: {$str} from C";
    return 32;
}

## sub _getch() returns int32 is native("msvcrt") {*};
## print "-> ";
## say "got ", _getch();

set_callback(&callback);
# foo();
When I run it:
$ raku test-dll.raku
Cannot locate native library '(null)': error 0xc1
in method setup at D:\tools\raku\share\perl6\core\sources
\947BDAB9F96E0E5FCCB383124F923A6BF6F8D76B (NativeCall) line 298
in block set_callback at D:\tools\raku\share\perl6\core\sources
\947BDAB9F96E0E5FCCB383124F923A6BF6F8D76B (NativeCall) line 594
in block <unit> at test-dll.raku line 21
Raku version.
$ raku -v
This is Rakudo version 2020.05.1 built on MoarVM version 2020.05
implementing Raku 6.d.
Another approach could be to save a callback statically in the C library, for example (libmylib.c):
#include <stdio.h>
static int (*raku_callback)(char *arg);
void set_callback(int (*callback)(char * arg)) {
    printf("In set_callback()..\n");
    raku_callback = callback;
}

void foo() {
    printf("In foo()..\n");
    int res = raku_callback("hello");
    printf("Raku return value: %d\n", res);
}
Then from Raku:
use v6;
use NativeCall;
sub set_callback(&callback (Str --> int32)) is native('./libmylib.so') { * }
sub foo() is native('./libmylib.so') { * }
sub callback(Str $str --> Int) {
    say "Raku callback.. got string: {$str} from C";
    return 32;
}
set_callback(&callback);
foo();
Output:
In set_callback()..
In foo()..
Raku callback.. got string: hello from C
Raku return value: 32
Raku is a compiled language; depending on the implementation you've got, it will be compiled to bytecode for MoarVM or the JVM, or to JavaScript. Through compilation, Raku code becomes bytecode for the corresponding virtual machine, so it is never, strictly speaking, native binary code.
However, Raku code seems to be cleverly organized in such a way that an object is actually a pointer to a C endpoint, as Haakon Hagland's answer demonstrates.
Regarding your latest problem, please bear in mind that what you pass to is native is not a path but a name: it is converted to a native shared-library name, and the local library path conventions (PATH on Windows) are used to look it up. So if the library is not being found, add its directory to the search path or simply copy the DLL to one of the searched directories.
First of all, my apologies to @Håkon and @raiph.
Sorry for being so obtuse. :)
Håkon's answer does indeed answer my question, although for whatever reason I failed to see that until now.
Here is the code I played with in order to understand Håkon's solution.
// my_c_dll.c
// be sure to create a 64-bit dll
#include <stdio.h>
#include <windows.h>   // needed for Sleep()

#define EXPORTED __declspec(dllexport)

typedef int (*proto)(const char*);
proto raku_function;

extern EXPORTED void install_raku_function(proto);
extern EXPORTED void start_c_processing(void);

void install_raku_function(proto arg)
{
    printf("installing raku function\n");
    raku_function = arg;
}

void start_c_processing(void)
{
    printf("* ----> starting C processing..\n");
    for (int i = 0; i < 100; i++)
    {
        printf("* %d calling raku function\n", i);
        int res = raku_function("hello");
        printf("* %d raku function returned: %d\n", i, res);
        Sleep(1000);
    }
}
# test-dll.raku
use v6.d;
use NativeCall;

sub install_raku_function(&raku_function (Str --> int32))
    is native("./my_c_dll.dll") { * }

sub start_c_processing()
    is native("./my_c_dll.dll") { * }

sub my_raku_function(Str $str --> Int)
{
    say "# raku function called from C with parameter [{$str}]";
    return 32;
}

install_raku_function &my_raku_function;

start { start_c_processing; }

for ^1000 -> $i
{
    say "# $i idling in raku";
    sleep 1;
}
$ raku test-dll.raku
installing raku function
# 0 idling in raku
* ----> starting C processing..
* 0 calling raku function
# 0 raku function called from C with parameter [hello]
* 0 raku function returned: 32
# 1 idling in raku
* 1 calling raku function
# 1 raku function called from C with parameter [hello]
* 1 raku function returned: 32
# 2 idling in raku
* 2 calling raku function
# 2 raku function called from C with parameter [hello]
* 2 raku function returned: 32
# 3 idling in raku
* 3 calling raku function
# 3 raku function called from C with parameter [hello]
* 3 raku function returned: 32
# 4 idling in raku
* 4 calling raku function
# 4 raku function called from C with parameter [hello]
* 4 raku function returned: 32
# 5 idling in raku
* 5 calling raku function
# 5 raku function called from C with parameter [hello]
* 5 raku function returned: 32
^CTerminate batch job (Y/N)?
^C
What amazes me is that the Raku signature for my_raku_function maps cleanly to the C signature... isn't Raku wonderful? :)

How do I perform brute force function verification inside a function when running Rust tests?

When I'm testing functions that have an obvious, slower, brute-force alternative, I've often found it helpful to write both functions, and verify that the outputs match when debugging flags are on. In C, it might look something like this:
#include <inttypes.h>
#include <stdio.h>
#ifdef NDEBUG
#define _rsqrt rsqrt
#else
#include <assert.h>
#include <math.h>
#endif
// https://en.wikipedia.org/wiki/Fast_inverse_square_root
float _rsqrt(float number) {
    const float x2 = number * 0.5F;
    const float threehalfs = 1.5F;

    union {
        float f;
        uint32_t i;
    } conv = {number}; // member 'f' set to value of 'number'.

    // approximation via Newton's method
    conv.i = 0x5f3759df - (conv.i >> 1);
    conv.f *= (threehalfs - (x2 * conv.f * conv.f));
    return conv.f;
}

#ifndef NDEBUG
float rsqrt(float number) {
    float res = _rsqrt(number);
    // brute force solution to verify
    float correct = 1 / sqrt(number);
    // make sure the approximation is within 1% of correct
    float err = fabs(res - correct) / correct;
    assert(err < 0.01);
    // for exposition sake: large scale systems would verify quietly
    printf("DEBUG: rsqrt(%f) -> %f error\n", number, err);
    return res;
}
#endif

float graphics_code() {
    // graphics code that invokes rsqrt a bunch of different ways
    float total = 0;
    for (float i = 1; i < 10; i++)
        total += rsqrt(i);
    return total;
}

int main(int argc, char *argv[]) {
    printf("%f\n", graphics_code());
    return 0;
}
and execution might look like this (if the above code is in tmp.c):
$ clang tmp.c -o tmp -lm && ./tmp # debug mode
DEBUG: rsqrt(1.000000) -> 0.001693 error
DEBUG: rsqrt(2.000000) -> 0.000250 error
DEBUG: rsqrt(3.000000) -> 0.000872 error
DEBUG: rsqrt(4.000000) -> 0.001693 error
DEBUG: rsqrt(5.000000) -> 0.000162 error
DEBUG: rsqrt(6.000000) -> 0.001389 error
DEBUG: rsqrt(7.000000) -> 0.001377 error
DEBUG: rsqrt(8.000000) -> 0.000250 error
DEBUG: rsqrt(9.000000) -> 0.001140 error
4.699923
$ clang tmp.c -o tmp -lm -O3 -DNDEBUG && ./tmp # production mode
4.699923
I like to do this in addition to unit and integration tests because it makes the source of a lot of errors more obvious. It will catch boundary cases that I may have forgotten to unit test, and it will naturally expand in scope to whatever more complex cases I may need in the future (e.g. if the light settings change and I need accuracy for much higher values).
I'm learning Rust, and I really like the natively established separation of concerns between testing and production code. I'm trying to do something similar to the above, but can't figure out the best way to do it. From what I gather in this thread, I could probably do it with some combination of macro_rules! and #[cfg(...)] in the source code, but it feels like I would be breaking the test/production barrier. Ideally I would like to be able to just drop a verification wrapper around the already defined function, but only for testing. Are macros and cfg my best option here? Can I redefine the default namespace for the imported package just when testing, or do something more clever with macros? I understand that normally files shouldn't be able to modify how imports are linked, but is there an exception for testing? What if I also want it to be wrapped if the module importing it is being tested?
I'm also open to the response that this is a bad way to do testing/verification, but please address the advantages I mentioned above. (Or as a bonus, is there a way the C code can be improved?)
If this isn't currently possible, is it a reasonable thing to go into a feature request?
it feels like I would be breaking the test/production barrier.
Yes, but I don't get why you are concerned about this; your existing code already breaks that boundary. You can use debug_assert and friends to ensure that the function is only called and verified when debug assertions are enabled. If you want to be doubly-sure, you can use cfg(debug_assertions) to only define your slow function then as well:
pub fn add(a: i32, b: i32) -> i32 {
    let fast = fast_but_tricky(a, b);
    debug_assert_eq!(fast, slow_but_right(a, b));
    fast
}

fn fast_but_tricky(a: i32, b: i32) -> i32 {
    a + a + b - a
}

#[cfg(debug_assertions)]
fn slow_but_right(a: i32, b: i32) -> i32 {
    a + b
}
I don't like this solution. I prefer to keep the testing code more distinct from the production code. What I do instead is use property-based testing to help ensure that my tests cover what is important. I've used proptest to...
Compare a Rust implementation against C
Compare a SIMD implementation against standard
I usually take any cases that are found and create dedicated unit tests for them.
In this case, the proptest might look like:
pub fn add(a: i32, b: i32) -> i32 {
    // fast but tricky
    a + a + b - a
}

#[cfg(test)]
mod test {
    use super::*;
    use proptest::{proptest, prop_assert_eq};

    fn slow_but_right(a: i32, b: i32) -> i32 {
        a + b
    }

    proptest! {
        #[test]
        fn same_as_slow_version(a: i32, b: i32) {
            prop_assert_eq!(add(a, b), slow_but_right(a, b));
        }
    }
}
Which finds an error with my "clever" implementation in less than a tenth of a second:
thread 'test::same_as_slow_version' panicked at 'Test failed: attempt to add with overflow; minimal failing
input: a = 375403587, b = 1396676474

How to put a list of cells into a submodule in yosys

I am trying to write a procedure to put each Strongly Connected Component of the given circuit into a distinct sub-module.
So I tried to add a function to the scc pass in Yosys that puts each SCC into a submod. The function is:
void putSelectionIntoParition (RTLIL::Design *design,
        std::vector<pair<std::string,RTLIL::Selection>>& SelectionVector)
{
    int p_count = 0;
    for (std::vector<pair<std::string,RTLIL::Selection>>::iterator it = SelectionVector.begin();
            it != SelectionVector.end(); ++it)
    {
        design->selection_stack[0] = it->second;
        design->selection_stack[0].optimize(design);
        std::string command = "submod -name ";
        command.append(it->first);
        Pass::call_on_selection(design, it->second, command);
        ++p_count;
    }
}
However, my code does not work properly.
I guess the problem is with the "selection" handling that I use. I was wondering if there is any utility/API inside the yosys source that accepts a vector of cells (as well as a sub-module name) and puts them into a sub-module.
The following should work just fine:
void putSelectionIntoParition(RTLIL::Design *design,
        std::vector<pair<std::string, RTLIL::Selection>> &SelectionVector)
{
    for (auto it : SelectionVector) {
        std::string command = "submod -name " + it.first;
        Pass::call_on_selection(design, it.second, command);
    }
}
You definitely don't need to (nor should you) modify selection_stack.
I was wondering if there is any utility/API inside the yosys source that accepts a vector of cells (as well as a sub-module name) and puts them into a sub-module.
You would do this by setting the submod="<name>" attribute on the cells. Then simply run the submod command.
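For example, a rough sketch of that approach might look like the following (the function name and the grouping data structure are made up for illustration; only the "\submod" cell attribute and the submod pass themselves are actual Yosys features):

// Sketch only: tag each cell with the name of the sub-module it should
// end up in, then run the submod pass once on the module.
void putCellsIntoSubmodules(RTLIL::Design *design, RTLIL::Module *module,
        const std::vector<std::pair<std::string, std::vector<RTLIL::Cell*>>> &groups)
{
    for (auto &group : groups)
        for (auto cell : group.second)
            cell->attributes["\\submod"] = RTLIL::Const(group.first);

    Pass::call_on_module(design, module, "submod");
}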
You might have seen that the scc documentation mentions a -set_attr option that was until now unimplemented. I have now implemented this option in commit ef603c6 (commit 914aa8a contains a bugfix for scc).
With this feature you can now accomplish what you have described using something like the following yosys script:
read_verilog test.v
prep
scc -set_attr submod scc{}
submod
show test
I have tested this with the following test.v file:
module test(input A, B, output X, Y);
assign X = (A & B) ^ X, Y = A | (B ^ Y);
endmodule
Running this script prints, among other output:
3. Executing SCC pass (detecting logic loops).
Found an SCC: $xor$test.v:2$2
Found an SCC: $or$test.v:2$4 $xor$test.v:2$3
Found 2 SCCs in module test.
Found 2 SCCs.

Is there a language with higher order conditionals?

Sometimes, I have a control structure (if, for, ...), and depending on a condition I either want to use the control structure, or only execute the body. As a simple example, I can do the following in C, but it's pretty ugly:
#ifdef APPLY_FILTER
if (filter()) {
#endif
// do something
#ifdef APPLY_FILTER
}
#endif
Also it doesn't work if I only know apply_filter at runtime. Of course, in this case I can just change the code to:
if (apply_filter && filter())
but that doesn't work in the general case of arbitrary control structures. (I don't have a nice example at hand, but recently I had some code that would have benefited a lot from a feature like this.)
Is there any language where I can apply conditions to control structures, i.e. have higher-order conditionals? In pseudocode, the above example would be:
<if apply_filter>
if (filter()) {
// ...
}
Or, as a more complicated example: if a variable is set, wrap the code in a function and start it as a thread:
<if (run_on_thread)>
void thread() {
<endif>
    for (int i = 0; i < 10; i++) {
        printf("%d\n", i);
        sleep(1);
    }
<if (run_on_thread)>
}
start_thread(&thread);
<endif>
(Actually, in this example I could imagine it would even be useful to give the meta-condition a name, to ensure that the top and bottom <if> markers stay in sync.)
I could imagine something like this is a feature in LISP, right?
Any language with first-class functions can pull this off. In fact, your use of "higher-order" is telling; the necessary abstraction will indeed be a higher-order function. The idea is to write a function applyIf which takes a boolean (enabled/disabled), a control-flow operator (really, just a function), and a block of code (any value in the domain of the function); then, if the boolean is true, the operator/function is applied to the block/value, and otherwise the block/value is just run/returned. This will be a lot clearer in code.
In Haskell, for instance, this pattern would be, without an explicit applyIf, written as:
example1 = (if applyFilter then when someFilter else id) body

example2 = (if runOnThread then (void . forkIO) else id) . forM_ [1..10] $ \i ->
             print i >> threadDelay 1000000 -- threadDelay takes microseconds
Here, id is just the identity function \x -> x; it always returns its argument. Thus, (if cond then f else id) x is the same as f x if cond == True, and is the same as id x otherwise; and of course, id x is the same as x.
Then you could factor this pattern out into our applyIf combinator:
applyIf :: Bool -> (a -> a) -> a -> a
applyIf True f x = f x
applyIf False _ x = x
-- Or, how I'd probably actually write it:
-- applyIf True = id
-- applyIf False = flip const
-- Note that `flip f a b = f b a` and `const a _ = a`, so
-- `flip const = \_ a -> a` returns its second argument.

example1' = applyIf applyFilter (when someFilter) body

example2' = applyIf runOnThread (void . forkIO) . forM_ [1..10] $ \i ->
              print i >> threadDelay 1000000
And then, of course, if some particular use of applyIf was a common pattern in your application, you could abstract over it:
-- Runs its argument on a separate thread if the application is configured to
-- run on more than one thread.
possiblyThreaded action = do
  multithreaded <- (> 1) . numberOfThreads <$> getConfig
  applyIf multithreaded (void . forkIO) action

example2'' = possiblyThreaded . forM_ [1..10] $ \i ->
               print i >> threadDelay 1000000
As mentioned above, Haskell is certainly not alone in being able to express this idea. For instance, here's a translation into Ruby, with the caveat that my Ruby is very rusty, so this is likely to be unidiomatic. (I welcome suggestions on how to improve it.)
def apply_if(use_function, f, &block)
  use_function ? f.call(&block) : yield
end

def example1a
  do_when = lambda { |&block| if some_filter then block.call() end }
  apply_if(apply_filter, do_when) { puts "Hello, world!" }
end

def example2a
  apply_if(run_on_thread, Thread.method(:new)) do
    (1..10).each { |i| puts i; sleep 1 }
  end
end

def possibly_threaded(&block)
  apply_if(app_config.number_of_threads > 1, Thread.method(:new), &block)
end

def example2b
  possibly_threaded do
    (1..10).each { |i| puts i; sleep 1 }
  end
end
The point is the same—we wrap up the maybe-do-this-thing logic in its own function, and then apply that to the relevant block of code.
Note that this function is actually more general than just working on code blocks (as the Haskell type signature expresses); you can also, for instance, write abs n = applyIf (n < 0) negate n to implement the absolute value function. The key is to realize that code blocks themselves can be abstracted over, so things like if statements and for loops can just be functions. And we already know how to compose functions!
Also, all of the code above compiles and/or runs, but you'll need some imports and definitions. For the Haskell examples, you'll need the imports
import Control.Applicative -- for (<$>)
import Control.Monad -- for when, void, and forM_
import Control.Concurrent -- for forkIO and threadDelay
along with some bogus definitions of applyFilter, someFilter, body, runOnThread, numberOfThreads, and getConfig:
applyFilter = False
someFilter = False
body = putStrLn "Hello, world!"
runOnThread = True
getConfig = return 4 :: IO Int
numberOfThreads = id
For the Ruby examples, you'll need no imports and the following analogous bogus definitions:
def apply_filter; false; end
def some_filter; false; end
def run_on_thread; true; end

class AppConfig
  attr_accessor :number_of_threads
  def initialize(n)
    @number_of_threads = n
  end
end

def app_config; AppConfig.new(4); end
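The same apply_if idea also carries over to C++, which is closer to the question's pseudocode. The following is a rough, self-contained sketch (not part of the original answers; all names are illustrative) using std::function and std::thread:

#include <cstdio>
#include <functional>
#include <thread>

// apply_if: if `enabled`, hand `body` to `wrap` (the "control structure");
// otherwise just run `body` directly.
void apply_if(bool enabled,
              const std::function<void(std::function<void()>)> &wrap,
              const std::function<void()> &body)
{
    if (enabled)
        wrap(body);
    else
        body();
}

int main()
{
    bool apply_filter = true;
    bool run_on_thread = true;
    auto filter = [] { return true; };

    // Run the body behind filter() only when apply_filter is set.
    apply_if(apply_filter,
             [&](std::function<void()> f) { if (filter()) f(); },
             [] { std::puts("doing something"); });

    // Wrap the body in a thread only when run_on_thread is set.
    apply_if(run_on_thread,
             [](std::function<void()> f) { std::thread(f).join(); },
             [] {
                 for (int i = 0; i < 10; i++)
                     std::printf("%d\n", i);
             });

    return 0;
}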
Common Lisp does not let you redefine if. You can, however, invent your own control structure as a macro in Lisp and use that instead.

How to receive message from 'any' channel in PROMELA/SPIN

I'm modeling an algorithm in Spin.
I have a process that has several channels, and at some point I know a message is going to come but I don't know from which channel. So I want to block the process until a message arrives on any of the channels. How can I do that?
I think you need Promela's if construct (see http://spinroot.com/spin/Man/if.html).
In the process you're referring to, you probably need the following:
byte var;
if
:: ch1?var -> skip
:: ch2?var -> skip
:: ch3?var -> skip
fi
If none of the channels have anything on them, then "the selection construct as a whole blocks" (quoting the manual), which is exactly the behaviour you want.
To quote the relevant part of the manual more fully:
"An option [each of the :: lines] can be selected for execution only when its guard statement is executable [the guard statement is the part before the ->]. If more than one guard statement is executable, one of them will be selected non-deterministically. If none of the guards are executable, the selection construct as a whole blocks."
By the way, I haven't syntax checked or simulated the above in Spin. Hopefully it's right. I'm quite new to Promela and Spin myself.
If you want to have your number of channels variable without having to change the implementation of the send and receive parts, you might use the approach of the following producer-consumer example:
#define NUMCHAN 4
chan channels[NUMCHAN];
init {
    chan ch1 = [1] of { byte };
    chan ch2 = [1] of { byte };
    chan ch3 = [1] of { byte };
    chan ch4 = [1] of { byte };
    channels[0] = ch1;
    channels[1] = ch2;
    channels[2] = ch3;
    channels[3] = ch4;
    // Add further channels above, in
    // accordance with NUMCHAN

    // First let the producer write
    // something, then start the consumer
    run producer();
    atomic { _nr_pr == 1 ->
        run consumer();
    }
}

proctype consumer() {
    byte var, i;
    chan theChan;
    i = 0;
    do
    :: i == NUMCHAN -> break
    :: else ->
        theChan = channels[i];
        if
        :: skip // non-deterministic skip
        :: nempty(theChan) ->
            theChan ? var;
            printf("Read value %d from channel %d\n", var, i+1)
        fi;
        i++
    od
}

proctype producer() {
    byte var, i;
    chan theChan;
    i = 0;
    do
    :: i == NUMCHAN -> break
    :: else ->
        theChan = channels[i];
        if
        :: skip;
        :: theChan ! 1;
           printf("Write value 1 to channel %d\n", i+1)
        fi;
        i++
    od
}
The do loop in the consumer process non-deterministically chooses an index between 0 and NUMCHAN-1 and reads from the respective channel, if there is something to read, else this channel is always skipped. Naturally, during a simulation with Spin the probability to read from channel NUMCHAN is much smaller than that of channel 0, but this does not make any difference in model checking, where any possible path is explored.