How to read a char without echo - Idris

In (*1) there is an example of how to turn off echo when using getChar in Haskell: the idea is to disable echo with hSetEcho stdin False.
I would like to do the same thing in Idris. Is there a way to do it?
(*1) Haskell read raw keyboard input

You might want to use the curses bindings.
If this is too heavy for your usage, you can write C code that handles this for your terminal, and use that code via the C FFI. For example, you can use termios.h for Linux terminals like this:
termops.c:
#include "termops.h"
#include <termios.h>
void set_echo(int fd) {
struct termios term;
tcgetattr(0, &term);
term.c_lflag |= ECHO;
tcsetattr(fd, TCSAFLUSH, &term);
}
void unset_echo(int fd) {
struct termios term;
tcgetattr(0, &term);
term.c_lflag &= ~ECHO;
tcsetattr(fd, TCSAFLUSH, &term);
}
termops.h:
void set_echo(int fd);
void unset_echo(int fd);
termops.idr:
%include C "termops.h"
setEcho : IO ()
setEcho = foreign FFI_C "set_echo" (Int -> IO ()) 0
unsetEcho : IO ()
unsetEcho = foreign FFI_C "unset_echo" (Int -> IO ()) 0
getPasswd : IO String
getPasswd = do
  c <- getChar
  if c == '\n'
     then pure ""
     else do rek <- getPasswd
             pure $ strCons c rek

main : IO ()
main = do
  unsetEcho
  passwd <- getPasswd
  setEcho
  printLn passwd
To compile and link the C code as well, use idris termops.idr --cg-opt "termops.c".

How to write a Raku declaration for a C function returning a whole struct?

I have this C code:
typedef struct {
    double dat[2];
} gsl_complex;
gsl_complex gsl_poly_complex_eval(const double c[], const int len, const gsl_complex z);
The C function returns a whole struct, not just a pointer, so I cannot write the Raku declaration as:
sub gsl_poly_complex_eval(CArray[num64] $c, int32 $len, gsl_complex $z --> gsl_complex)
is native(LIB) is export { * }
Any suggestion?
For that you need a CStruct. The P5localtime module contains a more elaborate example.
The problem
Some C APIs work with structs using a three-phase approach, passing around structs by reference, like this:
struct mystruct *init_mystruct(arguments, ...);
double compute(struct mystruct *);
void clean_mystruct(struct mystruct *);
This way the implementation hides the data structure, but it comes at a price: end users have to keep track of their pointers and remember to clean up after themselves, or the program will leak memory.
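To make that burden concrete, a minimal caller for this kind of API could look like the sketch below; the prototypes just restate the ones above, and the argument passed to init_mystruct is made up, since the original only writes "arguments, ...":

struct mystruct;                             /* opaque: the layout stays hidden      */
struct mystruct *init_mystruct(int arg);     /* assumed concrete prototype           */
double compute(struct mystruct *);
void clean_mystruct(struct mystruct *);

double use_api(void)
{
    struct mystruct *s = init_mystruct(42);  /* 1. allocate and initialise           */
    double result = compute(s);              /* 2. work through the opaque pointer   */
    clean_mystruct(s);                       /* 3. the caller must clean up, or the  */
                                             /*    program leaks memory              */
    return result;
}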
Another approach is the one used by the library I was interfacing with: return the data on the stack, so it can be assigned to an auto variable and automatically discarded when it goes out of scope.
In this case the API is modeled as a two-phase operation:
struct mystruct init_mystruct(arguments, ...);
double compute(struct mystruct);
The structure is passed on the stack, by value, and there's no need to clean up afterwards.
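A hypothetical caller of this second style would instead look like this; the argument is again made up, and the stand-in struct definition is only there to make the sketch self-contained (a by-value API has to expose the layout in its header anyway):

struct mystruct { double value; };           /* stand-in layout exposed by the header  */
struct mystruct init_mystruct(int arg);      /* assumed concrete prototype             */
double compute(struct mystruct);

double use_api(void)
{
    struct mystruct s = init_mystruct(42);   /* returned on the stack, by value        */
    return compute(s);                       /* passed by value; nothing to free, s is */
                                             /* discarded when it goes out of scope    */
}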
But Raku's NativeCall interface can only pass C structs by reference, hence the problem.
The solution
This is not a clean solution, because it steps back into the first approach described, the three-phase one, but it's the only one I have been able to devise so far.
Here I consider two C functions from the library's API: the first creates a complex number as a struct, the second adds up two numbers.
First I wrote a tiny C code interface, the file complex.c:
#include <gsl/gsl_complex.h>
#include <gsl/gsl_complex_math.h>
#include <stdlib.h>

gsl_complex *alloc_gsl_complex(void)
{
    gsl_complex *c = malloc(sizeof(gsl_complex));
    return c;
}

void free_gsl_complex(gsl_complex *c)
{
    free(c);
}

void mgsl_complex_rect(double x, double y, gsl_complex *res)
{
    gsl_complex ret = gsl_complex_rect(x, y);
    *res = ret;
}

void mgsl_complex_add(gsl_complex *a, gsl_complex *b, gsl_complex *res)
{
    *res = gsl_complex_add(*a, *b);
}
I compiled it this way:
gcc -c -fPIC complex.c
gcc -shared -o libcomplex.so complex.o -lgsl
Note the final -lgsl used to link the libgsl C library I am interfacing to.
Then I wrote the Raku low-level interface:
#!/usr/bin/env raku
use NativeCall;
constant LIB = ('/mydir/libcomplex.so');
class gsl_complex is repr('CStruct') {
    HAS num64 @.dat[2] is CArray;
}
sub mgsl_complex_rect(num64 $x, num64 $y, gsl_complex $c) is native(LIB) { * }
sub mgsl_complex_add(gsl_complex $a, gsl_complex $b, gsl_complex $res) is native(LIB) { * }
sub alloc_gsl_complex(--> gsl_complex) is native(LIB) { * }
sub free_gsl_complex(gsl_complex $c) is native(LIB) { * }
my gsl_complex $c1 = alloc_gsl_complex;
mgsl_complex_rect(1e0, 2e0, $c1);
say "{$c1.dat[0], $c1.dat[1]}"; # output: 1 2
my gsl_complex $c2 = alloc_gsl_complex;
mgsl_complex_rect(1e0, 2e0, $c2);
say "{$c2.dat[0], $c2.dat[1]}"; # output: 1 2
my gsl_complex $res = alloc_gsl_complex;
mgsl_complex_add($c1, $c2, $res);
say "{$res.dat[0], $res.dat[1]}"; # output: 2 4
free_gsl_complex($c1);
free_gsl_complex($c2);
free_gsl_complex($res);
Note that I had to explicitly free the three data structures I created, spoiling the careful design of the original C API.

How do I perform brute force function verification inside a function when running Rust tests?

When I'm testing functions that have an obvious, slower, brute-force alternative, I've often found it helpful to write both functions, and verify that the outputs match when debugging flags are on. In C, it might look something like this:
#include <inttypes.h>
#include <stdio.h>

#ifdef NDEBUG
#define _rsqrt rsqrt
#else
#include <assert.h>
#include <math.h>
#endif

// https://en.wikipedia.org/wiki/Fast_inverse_square_root
float _rsqrt(float number) {
    const float x2 = number * 0.5F;
    const float threehalfs = 1.5F;

    union {
        float f;
        uint32_t i;
    } conv = {number}; // member 'f' set to value of 'number'.

    // approximation via Newton's method
    conv.i = 0x5f3759df - (conv.i >> 1);
    conv.f *= (threehalfs - (x2 * conv.f * conv.f));
    return conv.f;
}

#ifndef NDEBUG
float rsqrt(float number) {
    float res = _rsqrt(number);
    // brute force solution to verify
    float correct = 1 / sqrt(number);
    // make sure the approximation is within 1% of correct
    float err = fabs(res - correct) / correct;
    assert(err < 0.01);
    // for exposition sake: large scale systems would verify quietly
    printf("DEBUG: rsqrt(%f) -> %f error\n", number, err);
    return res;
}
#endif

float graphics_code() {
    // graphics code that invokes rsqrt a bunch of different ways
    float total = 0;
    for (float i = 1; i < 10; i++)
        total += rsqrt(i);
    return total;
}

int main(int argc, char *argv[]) {
    printf("%f\n", graphics_code());
    return 0;
}
and execution might look like this (if the above code is in tmp.c):
$ clang tmp.c -o tmp -lm && ./tmp # debug mode
DEBUG: rsqrt(1.000000) -> 0.001693 error
DEBUG: rsqrt(2.000000) -> 0.000250 error
DEBUG: rsqrt(3.000000) -> 0.000872 error
DEBUG: rsqrt(4.000000) -> 0.001693 error
DEBUG: rsqrt(5.000000) -> 0.000162 error
DEBUG: rsqrt(6.000000) -> 0.001389 error
DEBUG: rsqrt(7.000000) -> 0.001377 error
DEBUG: rsqrt(8.000000) -> 0.000250 error
DEBUG: rsqrt(9.000000) -> 0.001140 error
4.699923
$ clang tmp.c -o tmp -lm -O3 -DNDEBUG && ./tmp # production mode
4.699923
I like to do this in addition to unit and integration tests because it makes the source of a lot of errors more obvious. It will catch boundary cases that I may have forgotten to unit test, and will naturally expand in scope to whatever more complex cases I may need in the future (e.g. if the light settings change and I need accuracy for much higher values).
I'm learning Rust, and I really like the natively established separation of interests between testing and production code. I'm trying to do something similar to the above, but can't figure out what the best way to do it is. From what I gather in this thread, I could probably do it with some combination of macro_rules! and #[cfg!( ... )] in the source code, but it feels like I would be breaking the test/production barrier. Ideally I would like to be able to just drop a verification wrapper in around the already defined function, but only for testing. Are macros and cfg my best option here? Can I redefine the default namespace for the imported package just when testing, or do something more clever with macros? I understand that normally files shouldn't be able to modify how imports are linked, but is there an exception for testing? What if I also want it to be wrapped if the module importing it is being tested?
I'm also open to the response that this is a bad way to do testing/verification, but please address the advantages I mentioned above. (Or as a bonus, is there a way the C code can be improved?)
If this isn't currently possible, is it a reasonable thing to go into a feature request?
"it feels like I would be breaking the test/production barrier."
Yes, but I don't get why you are concerned about this; your existing code already breaks that boundary. You can use debug_assert and friends to ensure that the function is only called and verified when debug assertions are enabled. If you want to be doubly-sure, you can use cfg(debug_assertions) to only define your slow function then as well:
pub fn add(a: i32, b: i32) -> i32 {
    let fast = fast_but_tricky(a, b);
    debug_assert_eq!(fast, slow_but_right(a, b));
    fast
}

fn fast_but_tricky(a: i32, b: i32) -> i32 {
    a + a + b - a
}

#[cfg(debug_assertions)]
fn slow_but_right(a: i32, b: i32) -> i32 {
    a + b
}
I don't like this solution. I prefer to keep the testing code more distinct from the production code. What I do instead is use property-based testing to help ensure that my tests cover what is important. I've used proptest to...
Compare a Rust implementation against C
Compare a SIMD implementation against standard
I usually take any cases that are found and create dedicated unit tests for them.
In this case, the proptest might look like:
pub fn add(a: i32, b: i32) -> i32 {
    // fast but tricky
    a + a + b - a
}

#[cfg(test)]
mod test {
    use super::*;
    use proptest::{proptest, prop_assert_eq};

    fn slow_but_right(a: i32, b: i32) -> i32 {
        a + b
    }

    proptest! {
        #[test]
        fn same_as_slow_version(a: i32, b: i32) {
            prop_assert_eq!(add(a, b), slow_but_right(a, b));
        }
    }
}
Which finds an error with my "clever" implementation in less than a tenth of a second:
thread 'test::same_as_slow_version' panicked at 'Test failed: attempt to add with overflow; minimal failing
input: a = 375403587, b = 1396676474

Passing an inlined CArray in a CStruct to a shared library using NativeCall

This is a follow-up question to "How to declare native array of fixed size in Perl 6?".
In that question it was discussed how to incorporate an array of a fixed size into a CStruct. In this answer it was suggested to use HAS to inline a CArray in the CStruct. When I tested this idea, I ran into some strange behavior that could not be resolved in the comments section below the question, so I decided to write it up as a new question. Here is my C test library code:
slib.c:
#include <stdio.h>

struct myStruct
{
    int A;
    int B[3];
    int C;
};

void use_struct (struct myStruct *s) {
    printf("sizeof(struct myStruct): %ld\n", sizeof( struct myStruct ));
    printf("sizeof(struct myStruct *): %ld\n", sizeof( struct myStruct *));
    printf("A = %d\n", s->A);
    printf("B[0] = %d\n", s->B[0]);
    printf("B[1] = %d\n", s->B[1]);
    printf("B[2] = %d\n", s->B[2]);
    printf("C = %d\n", s->C);
}
To generate a shared library from this I used:
gcc -c -fpic slib.c
gcc -shared -o libslib.so slib.o
Then, the Perl 6 code:
p.p6:
use v6;
use NativeCall;
class myStruct is repr('CStruct') {
    has int32 $.A is rw;
    HAS int32 @.B[3] is CArray is rw;
    has int32 $.C is rw;
}
sub use_struct(myStruct $s) is native("./libslib.so") { * };
my $s = myStruct.new();
$s.A = 1;
$s.B[0] = 2;
$s.B[1] = 3;
$s.B[2] = 4;
$s.C = 5;
say "Expected size of Perl 6 struct: ", (nativesizeof(int32) * 5);
say "Actual size of Perl 6 struct: ", nativesizeof( $s );
say 'Number of elements of $s.B: ', $s.B.elems;
say "B[0] = ", $s.B[0];
say "B[1] = ", $s.B[1];
say "B[2] = ", $s.B[2];
say "Calling library function..";
say "--------------------------";
use_struct( $s );
The output from the script is:
Expected size of Perl 6 struct: 20
Actual size of Perl 6 struct: 24
Number of elements of $s.B: 3
B[0] = 2
B[1] = 3
B[2] = 4
Calling library function..
--------------------------
sizeof(struct myStruct): 20
sizeof(struct myStruct *): 8
A = 1
B[0] = 0 # <-- Expected 2
B[1] = 653252032 # <-- Expected 3
B[2] = 22030 # <-- Expected 4
C = 5
Questions:
Why does nativesizeof( $s ) give 24 (and not the expected value of 20)?
Why is the content of the array B in the structure not as expected when printed from the C function?
Note:
I am using Ubuntu 18.04 and Perl 6 Rakudo version 2018.04.01, but have also tested with version 2018.05.
Your code is correct. I just fixed that bug in MoarVM, and added tests to rakudo, similar to your code:
In C:
typedef struct {
    int a;
    int b[3];
    int c;
} InlinedArrayInStruct;
In Perl 6:
class InlinedArrayInStruct is repr('CStruct') {
    has int32 $.a is rw;
    HAS int32 @.b[3] is CArray;
    has int32 $.c is rw;
}
See these patches:
https://github.com/MoarVM/MoarVM/commit/ac3d3c76954fa3c1b1db14ea999bf3248c2eda1c
https://github.com/rakudo/rakudo/commit/f8b79306cc1900b7991490eef822480f304a56d9
If you are not building rakudo (and also NQP and MoarVM) directly from the latest sources on GitHub, you probably have to wait for the 2018.08 release that will appear here: https://rakudo.org/files

How does the Yosys ConstEval API work?

I'm trying to write a plugin that requires evaluating combinatorial circuits. From what I can gather, ConstEval is the tool that does this. However, the API is not so clear to me. Is there a rundown somewhere of the members of ConstEval and what they do?
(Asked by jeremysalwen on github).
Using the ConstEval class is actually quite easy. You create a ConstEval object for a given module and set the known values using the void ConstEval::set(SigSpec, Const) method. After all the known signals have been set, the bool ConstEval::eval(SigSpec&, SigSpec&) method can be used to evaluate nets. The eval() method returns true when the evaluation was successful and replaces the net(s) in the first argument with the constant values the net evaluates to. Otherwise it returns false and sets the 2nd argument to the list of nets that need to be set in order to continue evaluation.
The methods push() and pop() can be used for creating local contexts for set(). The method stop() can be used to declare signals at which the evaluation should stop, even when there are combinatoric cells driving the net.
The following simple Yosys plugin demonstrates how to use the ConstEval API (evaldemo.cc):
#include "kernel/yosys.h"
#include "kernel/consteval.h"
USING_YOSYS_NAMESPACE
PRIVATE_NAMESPACE_BEGIN
struct EvalDemoPass : public Pass
{
EvalDemoPass() : Pass("evaldemo") { }
virtual void execute(vector<string>, Design *design)
{
Module *module = design->top_module();
if (module == nullptr)
log_error("No top module found!\n");
Wire *wire_a = module->wire("\\A");
Wire *wire_y = module->wire("\\Y");
if (wire_a == nullptr)
log_error("No wire A found!\n");
if (wire_y == nullptr)
log_error("No wire Y found!\n");
ConstEval ce(module);
for (int v = 0; v < 4; v++) {
ce.push();
ce.set(wire_a, Const(v, GetSize(wire_a)));
SigSpec sig_y = wire_y, sig_undef;
if (ce.eval(sig_y, sig_undef))
log("Eval results for A=%d: Y=%s\n", v, log_signal(sig_y));
else
log("Eval failed for A=%d: Missing value for %s\n", v, log_signal(sig_undef));
ce.pop();
}
}
} EvalDemoPass;
PRIVATE_NAMESPACE_END
Example usage:
$ cat > evaldemo.v <<EOT
module main(input [1:0] A, input [7:0] B, C, D, output [7:0] Y);
assign Y = A == 0 ? B : A == 1 ? C : A == 2 ? D : 42;
endmodule
EOT
$ yosys-config --build evaldemo.so evaldemo.cc
$ yosys -m evaldemo.so -p evaldemo evaldemo.v
...
-- Running command `evaldemo' --
Eval failed for A=0: Missing value for \B
Eval failed for A=1: Missing value for \C
Eval failed for A=2: Missing value for \D
Eval results for A=3: Y=8'00101010

Trying to parse OpenCV YAML output with yaml-cpp

I've got a series of OpenCV-generated YAML files and would like to parse them with yaml-cpp.
I'm doing okay on simple stuff, but the matrix representation is proving difficult.
# Center of table
tableCenter: !!opencv-matrix
   rows: 1
   cols: 2
   dt: f
   data: [ 240, 240]
This should map into the vector
240
240
with type float. My code looks like:
#include "yaml.h"
#include <fstream>
#include <string>
struct Matrix {
int x;
};
void operator >> (const YAML::Node& node, Matrix& matrix) {
unsigned rows;
node["rows"] >> rows;
}
int main()
{
std::ifstream fin("monsters.yaml");
YAML::Parser parser(fin);
YAML::Node doc;
Matrix m;
doc["tableCenter"] >> m;
return 0;
}
But I get
terminate called after throwing an instance of 'YAML::BadDereference'
what(): yaml-cpp: error at line 0, column 0: bad dereference
Abort trap
I searched around for some documentation for yaml-cpp, but there doesn't seem to be any, aside from a short introductory example on parsing and emitting. Unfortunately, neither of those helps in this particular circumstance.
As I understand it, the !! indicates that this is a user-defined type, but I don't see how to parse that with yaml-cpp.
You have to tell yaml-cpp how to parse this type. Since C++ isn't dynamically typed, it can't detect what data type you want and create it from scratch - you have to tell it directly. Tagging a node is really only for yourself, not for the parser (it'll just faithfully store it for you).
I'm not really sure how an OpenCV matrix is stored, but if it's something like this:
class Matrix {
public:
    Matrix(unsigned r, unsigned c, const std::vector<float>& d): rows(r), cols(c), data(d) { /* init */ }
    Matrix(const Matrix&) { /* copy */ }
    ~Matrix() { /* delete */ }
    Matrix& operator = (const Matrix&) { /* assign */ }

private:
    unsigned rows, cols;
    std::vector<float> data;
};
then you can write something like
void operator >> (const YAML::Node& node, Matrix& matrix) {
    unsigned rows, cols;
    std::vector<float> data;

    node["rows"] >> rows;
    node["cols"] >> cols;
    node["data"] >> data;

    matrix = Matrix(rows, cols, data);
}
Edit: It appears that you're OK up until here, but you're missing the step where the parser loads the information into the YAML::Node. Instead, use it like:
std::ifstream fin("monsters.yaml");
YAML::Parser parser(fin);
YAML::Node doc;
parser.GetNextDocument(doc); // <-- this line was missing!

Matrix m;
doc["tableCenter"] >> m;
Note: I'm guessing dt: f means "data type is float". If that's the case, it'll really depend on how the Matrix class handles this. If you have a different class for each data type (or a templated class), you'll have to read that field first, and then choose which type to instantiate. (If you know it'll always be float, that'll make your life easier, of course.)
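For example, a rough sketch of that "read dt first, then instantiate" idea could look like the following. The Matrix<T> template and the load_matrix name are made up for illustration, and the node["..."] >> ... calls just reuse the same old-style yaml-cpp API used above (OpenCV writes "f" for float and "d" for double):

#include <string>
#include <vector>
#include "yaml.h"

template <typename T>
struct Matrix {
    unsigned rows, cols;
    std::vector<T> data;
};

template <typename T>
void operator >> (const YAML::Node& node, Matrix<T>& m) {
    node["rows"] >> m.rows;
    node["cols"] >> m.cols;

    // read the flat data sequence element by element
    const YAML::Node& d = node["data"];
    for (unsigned i = 0; i < d.size(); i++) {
        T value;
        d[i] >> value;
        m.data.push_back(value);
    }
}

void load_matrix(const YAML::Node& node) {
    std::string dt;
    node["dt"] >> dt;        // read the element type first...

    if (dt == "f") {         // ...then pick the matching instantiation
        Matrix<float> m;
        node >> m;
        // use the float matrix
    } else if (dt == "d") {
        Matrix<double> m;
        node >> m;
        // use the double matrix
    }
    // other OpenCV type codes ("u", "i", ...) could be handled the same way
}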