Path options for techmap calls in a pass - yosys

I'm extending Yosys with a pass I made myself. In this pass, I call the techmap pass multiple times, with a path to the map to apply.
However, depending on the directory I'm in while calling my pass, the correct paths to the maps differ. Is there a variable that points to Yosys's main directory I can use?
The following code line shows how I currently call techmap.
Pass::call(design, "techmap -map passes/decompose/" + decomposeFile + " -autoproc" + tag);

The + prefix refers to the Yosys share directory. If you need an example of this, have a look at the various FPGA synthesis passes.

intercept field access to log with bytebuddy

I am trying to log field writes with bytebuddy. After reading some earlier posts, I started using MemberSubstitution and got something going using the following code:
private static Method FIELD_INTERCEPTOR = // reflective reference to interceptFieldWrite
AsmVisitorWrapper VISITOR = MemberSubstitution.relaxed()
    .field(ElementMatchers.any())
    .onWrite()
    .replaceWith(FIELD_INTERCEPTOR)
    .on(ElementMatchers.isMethod());
..
public static void interceptFieldWrite(Object object, Object value) {
    System.out.println("intercepted field write in object " + object + " , attempt to set value to " + value);
}
..
The part I am struggling with is how to pass a reference to the field for which the access is intercepted (as a string or an instance of Field) to interceptFieldWrite. If possible, I would of course like to avoid reflection completely. I don't actually want to completely substitute the field access, but just want to add a method with some checks just before it takes place. Is there a feature in Byte Buddy to do this, or do I have to use something more low-level than ASM to achieve this?
Byte Buddy offers this but you will have to compose your own StackManipulations to do so. The mechanism in MemberSubstitution is called replaceWithChain. Here you specify Steps where each step can do what you intend:
- invoke a method via MethodInvocation.
- write a field via FieldAccessor.

You will have to load the arguments for the method call or the field access prior to using the above stack manipulations, via MethodVariableAccess, where the targeted element's variable offsets are represented by offsets.
In your case, this would require reading the target instance and the written value via
MethodVariableAccess.of(parameters.get(0)).loadFrom(offsets.get(0));
MethodVariableAccess.of(parameters.get(1)).loadFrom(offsets.get(1));
and then executing the method or field write in question. The targeted field will be passed as target; you can cast it to FieldDescription if you only ever intercept fields.
Make sure you only intercept non-static fields; for static field writes, no this instance is passed.

Define method type for meta dynamically created method

I'm using graphql-ruby, and I'd really like to be able to type the dynamic methods that are created for things like arguments.
Small example:
class Test
  argument :first_argument, String
  argument :secondArgument, String, as: :second_argument, required: false

  def method
    puts first_argument.length # this is okay
    puts second_argument.length # this is a problem, because it can be nil
  end
end
I've tried to define these by doing:
# ...
first_argument = T.let(nil, String)
second_argument = T.let(nil, T.nilable(String))
which doesn't seem to work. I also did
#...
sig { returns(String) }
def first_argument; ""; end
sig { returns(T.nilable(String)) }
def second_argument; end
which works, but is not overly pretty. Is there a nicer way to go about this?
There is some nascent, experimental support for typing methods declared by meta programming like this: https://sorbet.org/docs/metaprogramming-plugins
In this case, you might define a plugin file like:
# argument_plugin.rb
# Sorbet calls this plugin with command line arguments similar to the following:
#   ruby --class Test --method argument --source "argument :first_argument, String"
# we only care about the source here, so we use ARGV[5]
source = ARGV[5]
/argument[( ]:([^,]*?), ([^,)]*)/.match(source) do |match_data|
  puts "sig {returns(#{match_data[2]})}" # writes a sig that returns the type
  puts "def #{match_data[1]}; end"       # writes an empty method with the right name
end
I've only included the "getter" for the argument here, but it should be simple to go ahead and write out the sig for the setter method as well. You'd also want to handle all variants of the argument method as I've only handled the one with Symbol, Type arguments. For what it's worth, I'm not sure if the "source" passed in to your plugin would be normalized with parens or not, so I've made the regex match either. I also suspect that this will not work if you pass in the symbol names as variables instead of literals.
We then use a YAML file to tell Sorbet about this plugin.
# triggers.yaml
ruby_extra_args:
  # These options are forwarded to Ruby
  - '--disable-gems' # This option speeds up Ruby boot time. Use it if you don't need gems
triggers:
  argument: argument_plugin.rb # This tells Sorbet to run argument_plugin.rb when it sees a call to `argument`
Run Sorbet and pass in the yaml config file as the argument for --dsl-plugins:
❯ srb tc --dsl-plugins triggers.yaml ... files to type check ...
I'd really like to be able to type the dynamic methods that are created for things like arguments
Sorbet doesn't support typing dynamic methods like that. But they do provide a T::Struct class that has similar functionality. I did something similar last week for my project and I'll describe what I did below. If T::Struct doesn't work for you, an alternative is writing some code to generate the Sigs that you'd write manually.
My approach is using T::Struct as the wrapper for the “arguments” class (see the sketch after this list). You can define args as props in a T::Struct like the following:
- const for arguments that don’t change
- prop for arguments that may change
- Use default to provide a default value when no value is given
- Use a T.nilable type for arguments that can be nil
Building on top of the vanilla T::Struct, I also added support for “maybe”, which is for args that are truly optional and can be nil, i.e. when a value is not passed in, it should not be used at all. It is different from using nil as the default value, because when a value is passed in, it can be nil. If you’re interested in this “maybe” component, feel free to DM me.

How do I make Meson object constant?

As explained here, I like to create file objects in subdirs, and library / executables in the top-level file. However, since all the variables end up in global scope, two subdir files could accidentally use the same variable names. For example:
# Top-level meson.build
subdir('src/abc')
subdir('src/def')

# src/abc/meson.build
my_files = files('1.c', '2.c')

# src/def/meson.build
my_files = files('3.c', '4.c')
I want meson to throw an error when src/def/meson.build tries to assign a value to my_files. Is this possible in Meson 0.50?
Reassigning variables is a rather legitimate operation in Meson, so it looks like it is not possible to generate an error in a standard way. One way of avoiding this problem is to follow some naming rules, e.g. according to the folders'/sub-folders' names (abc_files, def_files in your case).
But if you really need to have variables with the same name and to make sure they are not reassigned, you can use the is_variable() function, which returns true if a variable with the given name has been assigned. So, place the following assert before each assignment:
assert(not is_variable('my_files'), 'my_files already assigned!!!')
my_files = files('3.c', '4.c')

invoking TCL C API's inside tcl package procedures

I am using the Tcl C API for my program, and I read about it and created a test program similar to this C++ example.
But I have a problem with this example: when I use it in the shell (by loading it with load example.o), every input automatically invokes the interpreter of the API and runs the command that is related to the input string.
But suppose that I want the input to invoke a Tcl procedure that is inside a package required by me. This procedure will check the parameters and print another message, and only after this will it invoke the related Tcl C API function (a kind of wrapper). In this case, how can I do it?
I read somewhere that the # symbol should be used for invoking an external program, but I just can't find where it was.
I will give a small example to make things more clear.
somepackage.tcl
proc dosomething { arg1 arg2 arg3 } {
    # check args here
    set temp [...] ;# invoke the Tcl C API function here and set its result in temp
    return $temp
}
package provide ::somepackage 1.0
test.tcl
package require ::somepackage 1.0
load somefile.o ;# this is the object file which implements the Tcl C API command [doSomething 1 2 3]
...
But I have a problem with this example: when I use it in the shell (by loading it with load example.o), every input automatically invokes the interpreter of the API and runs the command that is related to the input string.
Provided that your script snippets represent your actual implementation in an accurate manner, the problem is that your Tcl proc named doSomething is replaced by the C-implemented Tcl command once your extension is loaded. Procedures and commands live in the same namespace(s). If the loading order were reversed, the problem would remain the same.
I read that everything is being evaluated by the Tcl interpreter, so in this case I should name the Tcl name of the C wrap functions in a special way, for example cFunc. But I am not sure about this.
This is correct. You have to organise the C-implemented commands and their scripted wrappers in a way that their names do not conflict with one another. Some (basic) options:
- Use two different Tcl namespaces, with same-named procedures (see the sketch after this list).
- Apply some naming conventions to wrapper procs and commands (your cFunc hint).
- If your API were provided as actual Itcl or TclOO objects, and the individual commands were the methods, you could use a subclass or a mixin to host refinements (using the super-reference, such as next in TclOO, to forward from the scripted refinement to the C implementations).
A hot-fix solution in your current setup, which is better replaced by some actual design, would be to rename or interp hide the conflicting commands:
- load somefile.o
- Hide the now-available command: interp hide {} doSomething
- Define a scripted wrapper, calling the hidden original at some point:
For example:
proc doSomething {args} {
    # argument checking
    set temp [interp invokehidden {} doSomething {*}$args]
    # result checking
    return $temp
}

How to require multiple modules in a single statement?

I want to require several Lua modules at once, similar to the asterisk wildcard in Java (import java.awt.*). This is the structure I organized my modules in, using subdirectories:
<myapp>
-- calculations
   -- calc1
   -- calc2
   -- calc3
-- helper
   -- help1
   -- help2
   -- print
      -- graphprinter
      -- matrixprinter
My client code requires each module via its full subpath:
local graphprinter = require("myapp.helper.print.graphprinter")
local matrixprinter = require("myapp.helper.print.matrixprinter")
I would prefer an automatic multi-require that derives the local table names from the module path and requires a whole subpath at once; the format could be require("myapp.helper.print.*"). The local table names should be created automatically for each module of the subdirectory, so that there isn't any difference from requiring them module by module.
Why don't you just write an init.lua file for each folder that requires all the other libraries?
For example, in calculations you write a file that contains
return {
    calc1 = require "calc1";
    calc2 = require "calc2";
    calc3 = require "calc3";
}
Then you can just write calculations = require "calculations" and it will automatically load calculations.calc<1-3>
This can be done for the entire directory structure, and require "helper" can call require "help1" which in turn calls require "print" and in the end you can find your functions in helper.help1.print.<function>
Short explanation of how this works: When you run require "library" lua will either try to include a file called library.lua or the file init.lua located in a library directory. This is also the reason why you do require "dir.lib" instead of require "dir/lib"; because, if done right, when you just require "dir" it will return a table that contains the field lib, so you would access it as dir.lib.<function>.
The module env partially achieves what you are looking for, though it is far from perfect.
It allows for grouped / named imports, with some caveats - the main one being you must manually manage your environments. In addition, you'll need to write index files (default init.lua, unless you write a custom path set), since it is intended for use with modules that export tables.
Here's a bit of an example. First we need to properly set up our file structure.
-- main.lua
-- calculations/
   -- calc1.lua
   -- calc2.lua
   -- calc3.lua
   -- init.lua
-- helper/
   -- print/
      -- init.lua
      -- graphprinter.lua
      -- matrixprinter.lua
The index files, which are slightly tedious:
-- calculations/init
return {
    calc1 = require 'calculations.calc1',
    calc2 = require 'calculations.calc2',
    calc3 = require 'calculations.calc3'
}
and
-- helper/print/init
return {
    graphprinter = require 'helper.print.graphprinter',
    matrixprinter = require 'helper.print.matrixprinter'
}
Inside your main file, the major caveat shows itself quickly: you must use the function returned by requiring 'env' to override your local environment. Passing no arguments will create a clone of your current environment (preserving require, etc.).
-- main.lua
local _ENV = require 'env' () -- (see notes below)
The new environment will be given an import function, which takes a single argument: a string representing the path or module name to import into the current environment. The return value is a transient table that can be used to further alter the environment state.
import 'helper/print' :use '*'
import 'calculations' :use '*'
One of the functions on the transient table is :use, which takes either a table indicating which values to pull from the required table, or the string '*', which indicates that you want all values from the required table placed in your current environment.
print(matrixprinter, graphprinter) --> function: 0x{...} function: 0x{...} (or so)
The final caveat is that all the paths you've seen are reliant on the cwd being the same as the one that holds main.lua. lua myapp/main.lua will fail loudly, unless you place your sub modules in a static location, and adjust package.path / import.paths correctly.
Seems like a lot of work to avoid a couple of lines of require statements.
Disclaimer: I wrote env as a bit of an experiment.
Note that import does not currently support the . syntax (you need to use your OS path delimiter), or proper exploding of tables into table chains. I have a bit of a patch in the works that addresses this.
Lua 5.2+ uses _ENV to override local environments. For Lua 5.1, you'll need to use setfenv.
As mentioned above, Lua has no real notion of directories. To really do what you want (with less overhead), you'll need to write your own custom module loader, environment handler, and likely make use of a module like LuaFileSystem to reliably 'automatically' load all files in a directory.
TL;DR:
- This is a tricky topic.
- There's nothing built into the language to make this easy.
- You'll need to write something custom.
- There will always be drawbacks.