Itcl: What is the read property?

I want to control read access to an Itcl public variable. I can do this for write access using something such as:
package require Itcl
itcl::class base_model_lib {
    public variable filename ""
}
itcl::configbody base_model_lib::filename {
    puts "in filename write"
    dict set d_model filename $filename
}
The configbody defines what happens when configure is called: $obj configure -filename foo.txt. But how do I control what happens during the read? Imagine that I want to do more than just look up a value during the read.
I would like to stay using the standard Itcl pattern of using cget/configure to expose these to the user.
So that is my question. However, let me describe what I really want to do and you tell me if I should do something completely different :)
I like Python classes. I like that I can create a variable and read/write to it from outside the instance. Later, when I want to get fancy, I'll create methods (using @property and @property.setter) to customize the read/write without the user seeing an API change. I'm trying to do the same thing here.
My sample code also suggests something else I want to do. Actually, the filename is stored internally in a dictionary. I don't want to expose that entire dictionary to the user, but I do want them to be able to change values inside that dict. So, really, 'filename' is just a stub. I don't want a public variable with that name. I instead want to use cget and configure to read and write a "thing", which I may choose to make a simple public variable or may wish to define a procedure for looking up.
PS: I'm sure I could create a method which took either one or two arguments: if one, it's a read; if two, it's a write. I assumed that wasn't the way to go, as I don't think it could be used with the cget/configure interface.

All Itcl variables are mapped to Tcl variables in a namespace whose name is difficult to guess. This means that you can get a callback whenever you read a variable (it happens immediately before the variable is actually read) via Tcl's standard tracing mechanism; all you need to do is to create the trace in the constructor. This requires the use of itcl::scope and is best done with itcl::code $this so that we can make the callback be a private method:
package require Itcl
itcl::class base_model_lib {
    public variable filename ""
    constructor {} {
        trace add variable [itcl::scope filename] read [itcl::code $this readcallback]
    }
    private method readcallback {args} {
        # You can ignore the arguments here
        puts "about to read the -filename"
        set filename "abc.[expr rand()]"
    }
}
All itcl::configbody does is, effectively, the equivalent of this with a variable write trace; write traces are a bit more common, though we'd usually prefer you to set the trace directly these days, as that's the more general mechanism. Demonstrating after running the above script:
% base_model_lib foo
foo
% foo configure
about to read the -filename
{-filename {} abc.0.8870089169996832}
% foo configure -filename
about to read the -filename
-filename {} abc.0.9588680136757288
% foo cget -filename
about to read the -filename
abc.0.694705847974264
As you can see, we're controlling exactly what is read via the standard mechanism (in this case, some randomly varying gibberish, but you can do better than that).
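If you want the write side without configbody, the same trace mechanism works there too; a minimal sketch, mirroring the configbody from the question (the d_model variable is declared here so the snippet is self-contained):
package require Itcl
itcl::class base_model_lib {
    public variable filename ""
    private variable d_model {}
    constructor {} {
        # write traces fire just after the variable has been assigned
        trace add variable [itcl::scope filename] write [itcl::code $this writecallback]
    }
    private method writecallback {args} {
        puts "in filename write"
        dict set d_model filename $filename
    }
}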


Intercept field access to log with Byte Buddy

I am trying to log field writes with bytebuddy. After reading some earlier posts, I started using MemberSubstitution and got something going using the following code:
private static Method FIELD_INTERCEPTOR = ...; // reflective reference to interceptFieldWrite
AsmVisitorWrapper VISITOR = MemberSubstitution.relaxed()
        .field(ElementMatchers.any())
        .onWrite()
        .replaceWith(FIELD_INTERCEPTOR)
        .on(ElementMatchers.isMethod());
..
public static void interceptFieldWrite(Object object, Object value) {
    System.out.println("intercepted field write in object " + object + ", attempt to set value to " + value);
}
..
The part I am struggling with is how to pass a reference to the field whose access is intercepted to interceptFieldWrite (as a string or an instance of Field). If possible I would of course like to avoid reflection completely. I don't actually want to completely substitute the field access; I just want to add a method with some checks just before it takes place. Is there a feature in Byte Buddy to do this, or do I have to use something more low-level, like ASM, to achieve this?
Byte Buddy offers this, but you will have to compose your own StackManipulations to do so. The mechanism in MemberSubstitution is called replaceWithChain. Here you specify Steps, where each step can do what you intend:
invoke a method via MethodInvocation.
write a field via FieldAccessor.
You will have to load the arguments for the method call and the field access prior to using the above stack manipulations, via MethodVariableAccess, where the targeted element's offsets are represented by offsets.
In your case, this would require reading the target instance and the value via
MethodVariableAccess.of(parameters.get(0)).loadFrom(offsets.get(0));
MethodVariableAccess.of(parameters.get(1)).loadFrom(offsets.get(1));
and then executing the method or field write in question. The targeted field will be passed as target; you can cast it to FieldDescription if you only ever intercept fields.
Make sure you only intercept non-static fields where the this instance will not be passed.
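To make the shape of this concrete, here is a rough sketch of such a composed stack manipulation; parameters, offsets, and the interceptor description are hypothetical stand-ins for what the chain's step hands you, so check the exact hook signature against your Byte Buddy version:
import java.util.List;

import net.bytebuddy.description.method.MethodDescription;
import net.bytebuddy.description.type.TypeDescription;
import net.bytebuddy.implementation.bytecode.StackManipulation;
import net.bytebuddy.implementation.bytecode.member.MethodInvocation;
import net.bytebuddy.implementation.bytecode.member.MethodVariableAccess;

// Sketch only: a helper composing the loads described above with the call
// to a static check method such as interceptFieldWrite(Object, Object).
class FieldWriteCheck {
    static StackManipulation loadAndInvoke(List<TypeDescription.Generic> parameters,
                                           List<Integer> offsets,
                                           MethodDescription interceptor) {
        return new StackManipulation.Compound(
                // push the target instance onto the operand stack
                MethodVariableAccess.of(parameters.get(0)).loadFrom(offsets.get(0)),
                // push the value that was about to be written
                MethodVariableAccess.of(parameters.get(1)).loadFrom(offsets.get(1)),
                // invoke the check method on those two operands
                MethodInvocation.invoke(interceptor));
    }
}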

Separating operator definitions for a class to other files and using them

I have 4 files all in the same directory: main.rakumod, infix_ops.rakumod, prefix_ops.rakumod and script.raku:
main module has a class definition (class A)
*_ops modules have some operator routine definitions so that one can write, e.g., $a1 + $a2 in an overloaded way.
script.raku tries to instantiate A object(s) and use those user-defined operators.
Why 3 files and not 1? The class definition might be long, and separating the overloaded operator definitions into their own files seemed like a good idea for writing tidier code (easier to manage).
e.g.,
# main.rakumod
class A {
    has $.x is rw;
}
# prefix_ops.rakumod
use lib ".";
use main;
multi prefix:<++>(A:D $obj) {
    ++$obj.x;
    $obj;
}
and similar routines in infix_ops.rakumod. Now, in script.raku, my aim is to import the main module only and have the overloaded operators also available:
# script.raku
use lib ".";
use main;
my $a = A.new(x => -1);
++$a;
but it naturally doesn't see the ++ multi for A objects, because main.rakumod doesn't know about the *_ops.rakumod files as it stands. Is there a way I can achieve this? If I use prefix_ops in main.rakumod, it says 'use lib' may not be pre-compiled, perhaps because of circular dependence.
it says 'use lib' may not be pre-compiled
The word "may" is ambiguous. Actually it cannot be precompiled.
The message would be better if it said something to the effect of "Don't put use lib in a module."
This has now been fixed per @codesections++'s comment below.
perhaps because of circular dependence
No. use lib can only be used by the main program file, the one directly run by Rakudo.
Is there a way I can achieve this?
Here's one way.
We introduce a new file that's used by the other packages to eliminate the circularity. So now we have four files (I've rationalized the naming to stick to A or variants of it for the packages that contribute to the type A):
A-sawn.rakumod that's a role or class or similar:
unit role A-sawn;
Other packages that are to be separated out into their own files use the new "sawn" package and does or is it as appropriate:
use A-sawn;
unit class A-Ops does A-sawn;
multi prefix:<++>(A-sawn:D $obj) is export { ++($obj.x) }
multi postfix:<++>(A-sawn:D $obj) is export { ($obj.x)++ }
The A.rakumod file for the A type does the same thing. It also uses whatever other packages are to be pulled into the same A namespace; this will import symbols from it according to Raku's standard importing rules. And then relevant symbols are explicitly exported:
use A-sawn;
use A-Ops;
sub EXPORT { Map.new: OUTER::.grep: /'fix:<'/ }
unit class A does A-sawn;
has $.x is rw;
Finally, with this setup in place, the main program can just use A;:
use lib '.';
use A;
my $a = A.new(x => -1);
say $a++; # A.new(x => -1)
say ++$a; # A.new(x => 1)
say ++$a; # A.new(x => 2)
The two main things here are:
Introducing an (empty) A-sawn package
This type eliminates circularity using the technique shown in @codesections' answer to Best Way to Resolve Circular Module Loading.
Raku culture has a fun generic term/meme for techniques that cut through circular problems: "circular saws". So I've used a -sawn suffix for the "sawn" typename as a convention when using this technique.[1]
Importing symbols into a package and then re-exporting them
This is done via sub EXPORT { Map.new: ... }.[2] See the doc for sub EXPORT.
The Map must contain a list of symbols (Pairs). For this case I've grepped through keys from the OUTER:: pseudopackage, which refers to the symbol table of the lexical scope immediately outside the sub EXPORT the OUTER:: appears in. This is of course the lexical scope into which some symbols (for operators) have just been imported by the use A-Ops; statement. I then grep that symbol table for keys containing fix:<; this will catch all symbol keys with that string in their name (so infix:<..., prefix:<..., etc.). Alter this code as needed to suit your needs.[3]
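If the grep feels too indirect, the same re-export can be spelled out by hand; a sketch, with keys written explicitly for the two operators defined above (the keys must match the operator symbols actually imported into the outer scope):
sub EXPORT {
    Map.new:
        '&prefix:<++>'  => OUTER::{'&prefix:<++>'},
        '&postfix:<++>' => OUTER::{'&postfix:<++>'},
}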
Footnotes
[1] As things stand, this technique means coming up with a new name that's different from the one used by the consumer of the new type, one that won't conflict with any other packages. This suggests a suffix. I think -sawn is a reasonable choice for an unusual, distinctive and mnemonic suffix. That said, I imagine someone will eventually package this process up into a new language construct that does the work behind the scenes, generating the name and automating away the manual changes one has to make to packages with the shown technique.
[2] A critically important point is that, if a sub EXPORT is to do what you want, it must be placed outside the package definition to which it applies. And that in turn means it must come before a unit package declaration. And that in turn means any use statement relied on by that sub EXPORT must appear within the same or an outer lexical scope. (This is explained in the doc, but I think it bears summarizing here to try to head off much head scratching, because there's no error message if it's in the wrong place.)
[3] As with the circularity saw aspect discussed in footnote 1, I imagine someone will also eventually package up this import-and-export mechanism into a new construct, or, perhaps even better, an enhancement of Raku's built in use statement.
Hi @hanselmann, here is how I would write this (in 3 files / same dir):
Define my class(es):
# MyClass.rakumod
unit module MyClass;
class A is export {
    has $.x is rw;
}
Define my operators:
# Prefix_Ops.rakumod
unit module Prefix_Ops;
use MyClass;
multi prefix:<++>(A:D $obj) is export {
    ++$obj.x;
    $obj;
}
Run my code:
# script.raku
use lib ".";
use MyClass;
use Prefix_Ops;
my $a = A.new(x => -1);
++$a;
say $a.x; #0
Taking my cue from the Module docs, there are a couple of things I am doing differently:
Avoiding the use of main (or Main, or MAIN): I am wary that MAIN is a reserved name and just want to keep clear of engaging any of that (cool) machinery
Bringing in the unit module declaration at the top of each 'rakumod' file ... it may be possible to use bare files in Raku ... but I have never tried this and would say that it is not obvious from the docs that it is even possible, or supported
Now, since I wanted this to work the first time, you will note that I use the same file name and module name ... again, it may be possible to do that differently (multiple modules in one file and so on) ... but I have not tried that either
Using the 'is export' trait where I want my script to be able to use these definitions ... as you will know from close study of the docs ;-), each module has its own namespace (the "stash") and we need export to shove the exported definitions into the namespace of the script
As @raiph mentions, you only need the script to define the module library location
Since you want your prefix multi to "know" about class A, you also need to use MyClass in the Prefix_Ops module
Anyway, all-in-all, I think that the raku module system exemplifies the unique combination of "easy things easy and hard things doable" ... all I had to do with your code (which was very close) was tweak a few filenames and sprinkle in some concise concepts like 'unit module' and 'is export', and it really does not look much different, since raku keeps all the import/export machinery under the surface like the swan gliding over the river...

Define method type for a dynamically created method

I'm using graphql-ruby, and I'd really like to be able to type the dynamic methods that are created for things like arguments.
Small example:
class Test
  argument :first_argument, String
  argument :secondArgument, String, as: :second_argument, required: false

  def method
    puts first_argument.length  # this is okay
    puts second_argument.length # this is a problem, because it can be nil
  end
end
I've tried to define these by doing:
# ...
first_argument = T.let(nil, String)
second_argument = T.let(nil, T.nilable(String))
which doesn't seem to work. I also did
#...
sig { returns(String) }
def first_argument; ""; end
sig { returns(T.nilable(String)) }
def second_argument; end
which works, but is not overly pretty. Is there a nicer way to go about this?
There is some nascent, experimental support for typing methods declared by meta programming like this: https://sorbet.org/docs/metaprogramming-plugins
In this case, you might define a plugin file like:
# argument_plugin.rb
# Sorbet calls this plugin with command line arguments similar to the following:
# ruby --class Test --method argument --source "argument :first_argument, String"
# we only care about the source here, so we use ARGV[5]
source = ARGV[5]
/argument[( ]:([^,]*?), ([^,]*?)[) ]/.match(source) do |match_data|
puts "sig {return(#{match_data[2]})}" # writes a sig that returns the type
puts "def #{match_data[1]}; end" # writes an empty method with the right name
end
I've only included the "getter" for the argument here, but it should be simple to write out the sig for the setter method as well (see the sketch below). You'd also want to handle all variants of the argument method, as I've only handled the one with Symbol, Type arguments. For what it's worth, I'm not sure whether the "source" passed in to your plugin would be normalized with parens or not, so I've made the regex match either. I also suspect that this will not work if you pass in the symbol names as variables instead of literals.
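For instance, a hypothetical extension of the plugin above that also emits a setter sig (the same caveats about argument variants apply):
# Sketch: emit both the getter and a corresponding setter sig.
source = ARGV[5]
/argument[( ]:([^,]*?), ([^,]*?)[) ]/.match(source) do |match_data|
  name, type = match_data[1], match_data[2]
  puts "sig {returns(#{type})}"
  puts "def #{name}; end"
  puts "sig {params(value: #{type}).returns(#{type})}"
  puts "def #{name}=(value); end"
end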
We then use a YAML file to tell Sorbet about this plugin.
# triggers.yaml
ruby_extra_args:
  # These options are forwarded to Ruby
  - '--disable-gems' # This option speeds up Ruby boot time. Use it if you don't need gems
triggers:
  argument: argument_plugin.rb # This tells Sorbet to run argument_plugin.rb when it sees a call to `argument`
Run Sorbet and pass in the yaml config file as the argument for --dsl-plugins:
❯ srb tc --dsl-plugins triggers.yaml ... files to type check ...
I'd really like to be able to type the dynamic methods that are created for things like arguments
Sorbet doesn't support typing dynamic methods like that. But they do provide a T::Struct class that has similar functionality. I did something similar last week for my project, and I'll describe what I did below. If T::Struct doesn't work for you, an alternative is writing some code to generate the sigs that you'd otherwise write manually.
My approach is to use T::Struct as the wrapper for the “arguments” class. You can define args as props in a T::Struct like the following (a sketch follows the list):
Use const for arguments that don’t change
Use prop for arguments that may change
Use default to provide a default value when no value is given
Use a T.nilable type for arguments that can be nil
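A minimal sketch of that approach, assuming sorbet-runtime is available; the class and prop names are illustrative, not anything graphql-ruby generates:
# typed: true
require 'sorbet-runtime'

# Illustrative only: models the two arguments from the question as props.
class TestArguments < T::Struct
  const :first_argument, String                            # required, never changes
  prop  :second_argument, T.nilable(String), default: nil  # optional, may be nil
end

args = TestArguments.new(first_argument: "hello")
args.first_argument.length   # fine: always a String
args.second_argument&.length # nilable, so safe-navigate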
Building on top of the vanilla T::Struct, I also added support for “maybe”, which is for args that are truly optional and can be nil. That is, when a value is not passed in, it should not be used at all. This is different from using nil as the default value, because when a value is passed in, it can itself be nil. If you’re interested in this “maybe” component, feel free to DM me.

Invoking TCL C APIs inside Tcl package procedures

I am using the TCL-C API for my program, and I read about it and created a test program that is similar to this C++ example.
But I have a problem with this example: when I use it in the shell (by loading it with load example.o), every input automatically invokes the interpreter of the API and runs the command that is related to the input string.
But suppose that I want the input to invoke a Tcl procedure that is inside a package I require; this procedure will check the parameters, print another message, and only after this invoke the related TCL-C API function (a kind of wrapper). In this case, how can I do it?
I read somewhere that the symbol # is the one that should be used for invoking an external program, but I just can't find where it was.
I will give a small example to make things clearer.
somepackage.tcl
proc dosomething {arg1 arg2 arg3} {
    # check args here
    set temp [...] ;# invoke the TCL-C API command here and capture its result
    return $temp
}
package provide ::somepackage 1.0
test.tcl
package require ::somepackage 1.0
load somefile.o ;# this is the object file which implements the TCL-C API command [doSomething 1 2 3]
...
But I have a problem with this example: when I use it in the shell (by loading it with load example.o), every input automatically invokes the interpreter of the API and runs the command that is related to the input string.
Provided that your script snippets represent your actual implementation in an accurate manner, the problem is that your Tcl proc named doSomething is replaced by the C-implemented Tcl command once your extension is loaded. Procedures and commands live in the same namespace(s). If the loading order were reversed, the problem would remain the same.
I read that everything is being evaluated by the Tcl interpreter, so in this case I should name the Tcl names of the C wrapper functions in a special way, for example cFunc. But I am not sure about this.
This is correct. You have to organise the C-implemented commands and their scripted wrappers in a way that their names do not conflict with one another. Some (basic) options:
Use two different Tcl namespaces, with same-named procedures (see the sketch after this list)
Apply some naming convention to wrapper procs and commands (your cFunc hint)
If your API were provided as actual Itcl or TclOO objects, and the individual commands were the methods, you could use a subclass or a mixin to host refinements (using the super-reference, such as next in TclOO, to forward from the scripted refinement to the C implementations).
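For illustration, a minimal sketch of the first option; ::impl::doSomething is a hypothetical name under which the C extension registers its command:
load somefile.o ;# assume this registers the C command as ::impl::doSomething
namespace eval ::somepackage {
    proc doSomething {arg1 arg2 arg3} {
        # check args here
        set temp [::impl::doSomething $arg1 $arg2 $arg3]
        # check/massage the result here
        return $temp
    }
}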
A hot-fix solution in your current setup, which is better replaced by some actual design, would be to rename or interp hide the conflicting commands:
load somefile.o
Hide the now-available command: interp hide {} doSomething
Define a scripted wrapper, calling the hidden original at some point. For example:
proc doSomething {args} {
    # argument checking
    set temp [interp invokehidden {} doSomething {*}$args]
    # result checking
    return $temp
}

Execute command block in primitive in NetLogo extension

I'm writing a primitive that takes in two agentsets and a command block. It needs to call a few functions, execute the command block in the current context, and then call another function. Here's what I have so far:
class WithContext(pushGraphContext: GraphContext => Unit, popGraphContext: api.World => GraphContext)
    extends api.DefaultCommand {

  override def getSyntax = commandSyntax(
    Array(AgentsetType, AgentsetType, CommandBlockType))

  def perform(args: Array[Argument], context: Context) {
    val turtleSet = args(0).getAgentSet.requireTurtleSet
    val linkSet = args(1).getAgentSet.requireLinkSet
    val world = linkSet.world
    val gc = new GraphContext(world, turtleSet, linkSet)
    val extContext = context.asInstanceOf[ExtensionContext]
    val nvmContext = extContext.nvmContext
    pushGraphContext(gc)
    // execute command block here
    popGraphContext(world)
  }
}
I looked at some examples that used nvmContext.runExclusively, but that looked like it's specifically for having a given agentset run the command block. I want the current agent (possibly the observer) to run it. Should I wrap nvm.agent in an agentset and pass that to nvmContext.runExclusively? If so, what's the easiest way to wrap an agent in an agentset? If not, what should I do?
Method #1
The quicker-but-arguably-dirtier method is to use runExclusiveJob, as demonstrated in (e.g.) the create-red-turtles command in https://github.com/NetLogo/Sample-Scala-Extension/blob/master/src/SampleScalaExtension.scala .
To wrap the current agent in an agentset, you can use agent.AgentSetBuilder. (You could also pass an Array[Agent] of length 1 to one of the ArrayAgentSet constructors, but I'd recommend AgentSetBuilder since it's less reliant on internal implementation details which are likely to change.)
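Filling in the "execute command block here" line from the question, Method #1 might look roughly like this (an untested sketch; AgentSetBuilder's exact constructor and the ip + 1 convention should be checked against your NetLogo version and the sample extension):
import org.nlogo.agent.AgentSetBuilder

// wrap the current agent (possibly the observer) in a one-element agentset
val builder = new AgentSetBuilder(nvmContext.agent.kind, 1)
builder.add(nvmContext.agent)
// run the command block as a job for that agentset, starting just past this command
nvmContext.runExclusiveJob(builder.build(), nvmContext.ip + 1)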
Method #2
The disadvantage of method #1 is the slight constant overhead associated with creating and setting up the extra AgentSet, Job, and Context objects and directing execution through them.
Creating and running a separate job isn't actually how built-in commands like if and while work. Instead of making a new job, they remain in the current job and cause commands in a command block to run (or not run) by manipulating the instruction pointer (nvm.Context.ip) to jump into them or skip over them.
I believe an extension command could do the same. I'm not certain if it has been tried before, but I can't see any reason it wouldn't work.
Doing it this way would involve understanding more about NetLogo engine internals, as documented at https://github.com/NetLogo/NetLogo/wiki/Engine-architecture . You'd model your primitive after e.g. https://github.com/NetLogo/NetLogo/blob/5.0.x/src/main/org/nlogo/prim/etc/_if.java , including altering your implementation of nvm.CustomAssembled. (Note that prim._extern, which runs extension commands, delegates its assemble method to the wrapped command's own assemble method, so this should work.) In your assemble method, instead of calling done() at the end to terminate the job, you'd just allow execution to fall through.
I could try to construct an example that works this way, but it'd take me a couple hours; it's probably not worth me doing unless there's a real need.