How does R3 use the Needs field of a script header? Is there any impact on namespaces?

I'd like to know the behaviour of R3 when processing the Needs field of a script header and what implications for word binding it has.
Background. I'm currently trying to port some R2 scripts to R3 in order to learn R3. In R2 the Needs field of a script header was essentially just documentation, though I made use of it with a custom function to reference scripts that are required to make my script run.
R3 appears to run the Needs-referenced scripts itself, but the binding seems different from DOing the other scripts.
For example when %test-parent.r is:
REBOL [
title: {test parent}
needs: [%test-child.r]
]
parent: now
?? parent
?? child
and %test-child.r is:
REBOL [
title: {test child}
]
child: now
?? child
R3 Alpha (Saphiron build 22-Feb-2013/11:09:25) returns:
>> do %test-parent.r
Script: "test parent" Version: none Date: none
child: 9-May-2013/22:51:52+10:00
parent: 9-May-2013/22:51:52+10:00
** Script error: child has no value
** Where: get ajoin case ?? catch either either -apply- do
** Near: get :name
I don't understand why test-parent cannot access child as set by %test-child.r.
If I remove the Needs field from the test-parent.r header and instead insert a line to just DO %test-child.r, then there is no error and the script performs as expected.

Ah, you've run into Rebol 3's policy of "do what you say, it can't read your mind". R3's Needs header is part of its module system, so anything you load with Needs is actually imported as a module, even if it isn't declared as such.
Loading a script with Needs is a quick way to get it treated as a module even if the original author didn't declare it as one. Modules get their own contexts where their words are defined. Loading a script as a module is a great way to use a script that isn't tidy, one that leaks words into the shared script context. Your %test-child.r script leaks the word child into the script context; what if you didn't want that to happen? Load it with Needs or import and that cleans it right up.
If you want a script treated as a script, use do to run it. Regular scripts use a (mostly) shared context, so when you do a script it takes effect in the same context as the script you called it from. That is why the child: now statement affected child in the parent script. Sometimes that's exactly what you want, which is why we worked so hard to make scripts work that way in R3.
If you are going to use Needs or import to load your own scripts, you might as well make them modules and export what you want, like this:
REBOL [
type: module
title: {test child}
exports: [child]
]
child: now
?? child
As before, you don't even have to include the type: module if you are going to load it with Needs or import anyway, but it helps in case someone runs your module with do. R3 assumes that if you declare your module to be a module, you wrote it to be a module and depend on it working that way even when it's called with do. At the least, declaring a type header is a stronger statement than not declaring one at all, so it takes precedence in the conflicting "do what you say" situation.
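With the child declared as a module that exports child, the original %test-parent.r from the question works unchanged, whether the child is pulled in via Needs or import (a sketch, reusing the question's files):
REBOL [
title: {test parent}
needs: [%test-child.r]
]
parent: now
?? parent
?? child  ; resolves now, because the module exports child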
Look here for more details about how the module system works: How are words bound within a Rebol module?

Related

User-defined verbosity level (alias) for UVM reporting (using uvm_info)

In UVM, there are pre-defined verbosity levels:
UVM_DEBUG
UVM_FULL
UVM_HIGH
UVM_MEDIUM
UVM_LOW
UVM_NONE
Actual reporting can be controlled using a command-line argument, e.g. +UVM_VERBOSITY=UVM_LOW
(1) Is there a way to have user-defined verbosity levels (or at least aliases)? For example, a "UVM_INFO" level with the same priority as UVM_NONE.
(2) Is it possible to create a completely user-defined verbosity level with a different priority, say something between UVM_NONE and UVM_LOW? And how would such a thing be controlled from the CLI?
The reason for this is that even with UVM_LOW, some commercial VIP is still rather verbose. If possible, I would like to have a "level of my own" for some testbench elements.
I can speak in the context of UVM 1.2. Yes, you can set a custom verbosity: create an object of uvm_report_object and call its set_report_verbosity_level to set the verbosity of your choice.
For example, a small testcase would look like this:
module top;
  import uvm_pkg::*;
  uvm_report_object rep_obj;
  initial begin
    rep_obj = new("rep_obj");
    // Raise this object's verbosity threshold so messages up to 350 pass
    rep_obj.set_report_verbosity_level(350);
    // Report at verbosity 350 (the default would be UVM_MEDIUM)
    rep_obj.uvm_report_info("CUSTOM", "custom-verbosity message", 350);
  end
endmodule
"set_report_verbosity_level" will set the verbosity_level of uvm_report_handler class to 350 for that particular uvm_report_object. "set_report_info" is now being called with verbosity of 350 which otherwise takes UVM_MEDIUM as the default verbosity. In this example you don't have to add any additional command with the run commandline.
If you want to use the custom verbosity to use to prinout out new error messages, in that case you will have to create a class derived from uvm_report_object and create your own function definition for that custom message (For ex:- uvm_custom_report_info).
If you see the definitions of uvm_report_info/uvm_report_warning in your uvm_report_object class, it will give you the idea.
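A minimal sketch of that idea, assuming a hypothetical custom level of 50 (between UVM_NONE = 0 and UVM_LOW = 100); the class and method names here are illustrative, not part of UVM:
class my_report_object extends uvm_report_object;
  // Hypothetical level between UVM_NONE (0) and UVM_LOW (100)
  localparam int MY_QUIET = 50;
  function new(string name = "my_report_object");
    super.new(name);
  endfunction
  // Convenience wrapper that always reports at the custom level
  function void quiet_info(string id, string message);
    uvm_report_info(id, message, MY_QUIET);
  endfunction
endclass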
I hope this answers your question to some extent.

Set variables in Javascript job entry at root level

I need to set variables in root scope in one job to be used in a different job. The first job has a Javascript job entry, with the statements:
parent_job.setVariable("customers_full_path", "C:\\customers22.csv", "r");
true;
But the compilation fails with:
Couldn't compile javascript:
org.mozilla.javascript.EvaluatorException: Can't find method
org.pentaho.di.job.Job.setVariable(string,string,string). (#2)
How to set a variable at root level in a Javascript job entry?
Sorry for the passive-aggressive tone, but:
I don't know if you are new to Pentaho, but the most common mistake for new users with prior programming experience is being sort of 'addicted' to code, reaching for JavaScript for functionality that is built into the tool. Both transformations (KTR) and jobs (KJB) have a step for this; you can manipulate variables more easily in a KTR.
JavaScript steps slow down the flow considerably, so try to stay away from them as much as possible.
EDIT:
Reading this article, it seems the only thing you're doing wrong is the syntax of the call.
Correct usage:
parent_job.setVariable("variable_name", "value");
The call you described has 3 parameters when it should have 2. If you have more than one variable to set, call it once per variable. Try it out and see if it works.
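The corrected job entry script from the question would then look like this (a sketch; the two-argument form sets the variable on the parent job, and there is no scope parameter here, so whether the other job sees it depends on how the jobs are chained):
// Two-argument form: variable name first, then its value
parent_job.setVariable("customers_full_path", "C:\\customers22.csv");
true;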

How to share a variable between two router modules in Cro?

I'm trying to use Cro to create a REST API that will publish messages to RabbitMQ. I would like to split my routes into different modules and compose them with an "include". But I would also like to share the same connection to RabbitMQ in each of those modules. I tried with "our" but it does not work:
File 1:
unit module XXX::YYY;
use Cro::HTTP::Router;
use Cro::HTTP::Server;
use Cro::HTTP::Log::File;
use XXX::YYY::Route1;
use Net::AMQP;
our $rabbitConnection is export = Net::AMQP.new;
await $rabbitConnection.connect;
my $application = route {
include <api v1 run> => run-routes;
}
...
File 2:
unit module XXX::YYY::Route1;
use UUID;
use Cro::HTTP::Router;
use JSON::Fast;
use Net::AMQP;
my $channel = $XXX::YYY::rabbitConnection.open-channel().result;
$channel.declare-queue("test_task", durable=> True );
sub run-routes() is export { ... }
Error message:
===SORRY!===
No such method 'open-channel' for invocant of type 'Any'
Thanks!
When you define your exportable route function you can give it parameters; then, in your composing module, you can create the shared objects and pass them to the routes. For example, in your router module:
sub run-routes ($rmq) is export{
route {
... $rmq is available in here
}
}
Then in your main router you can create your queue and pass it in when including:
my $rmq = # Insert queue creation code here
include product => run-routes( $rmq );
I've not tried this but I can't see any reason why it shouldn't work.
The answer by @Scimon is certainly correct, but it does not address the OP's question. On the other hand, the two comments by @ugexe and @raiph are spot-on, so I'll try to summarize them here and explain what's going on.
The error itself
This is the error:
Error message:
===SORRY!=== No such method 'open-channel' for invocant of type 'Any'
It indicates the invocant ($XXX::YYY::rabbitConnection) is of type Any, the type usually carried by variables that don't have a defined value; that is, $XXX::YYY::rabbitConnection is not defined. It certainly is not, since XXX::YYY is not among the imported modules, as indicated by @ugexe.
The additional problem indicated by the OP
That module was eliminated from the import list because, as the OP says:
I certainly code it the wrong way because if I try to add use XXX::YYY;, I get a Circular module loading detected error
But of course: use XXX::YYY::Route1;, which is File 2, is itself included in File 1.
The final solution is to reorganize the files
That circular dependence probably points to the fact that the two files should be merged into one, or else the common code should be factored out into a third file that both eventually use. So you should have something like:
unit module XXX::YYY::Common;
use Net::AMQP;
our $rabbitConnection is export = Net::AMQP.new;
await $rabbitConnection.connect;
And then
use XXX::YYY::Common;
in both modules.
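File 2 would then look something like this (a sketch under that reorganization; since $rabbitConnection is exported, it can be used directly once XXX::YYY::Common is used):
unit module XXX::YYY::Route1;
use Cro::HTTP::Router;
use Net::AMQP;
use XXX::YYY::Common;  # exports $rabbitConnection
my $channel = $rabbitConnection.open-channel().result;
$channel.declare-queue("test_task", durable => True);
sub run-routes() is export { ... }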

Salt: Pass parameters to custom module executed inside a pillar

I am coding a custom module that is executed inside a pillar (to set a pillar variable) but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
Using environment variables: It doesn't work, as execution modules do not have access to the shell environment of the salt command.
Using command-line parameters: I don't know if it is even possible, as I couldn't find any documentation.
Using an additional pillar on the command line: It doesn't work, as the module is executed during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both empty).
Reading from stdin: Does not work from a custom module.
Using a file to read the info: I didn't even try this, because it is not an option for me for security reasons. I don't want the information stored.
Any ideas whether or how this is possible?
Thanks a lot!
By:
a custom module that is executed inside a pillar (to set a pillar variable)
do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
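For reference, an external pillar is just a Python module on the master that defines an ext_pillar function; here is a minimal sketch (the module name example_a and the returned data are illustrative, not from the question):
# example_a.py - a hypothetical external pillar module
def ext_pillar(minion_id, pillar, *args, **kwargs):
    # args/kwargs receive whatever is configured under ext_pillar
    # in the master config; minion_id is always passed in.
    return {'example_a': {'minion': minion_id,
                          'args': list(args),
                          'kwargs': kwargs}}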
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox

Asterisk with new functions

I created a func_odbc write function to log recorded files in an SQL table:
[R]
dsn=connector
write=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES
('${ARG1}','${ARG2}','${ARG3}','${ARG4}')
prefix=M
and call it in the dialplan:
exten => _0X.,n,Set(M_R(${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})= )
When I execute it I get an error: ast_func_write: M_R Function not registered
Note: this is Asterisk on Windows.
The first thing I saw was that you were calling the function incorrectly: you need to be assigning values, not arguments. Try this:
func_odbc.conf:
[R]
dsn=connector
prefix=M
writesql=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES('${VAL1}','${VAL2}','${VAL3}','${VAL4}');
dialplan:
exten => _0X.,1,Set(M_R()=${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})
If that doesn't help you, continue on in my list :)
Make sure func_odbc.so is being loaded by Asterisk. (from the asterisk CLI: module show like func_odbc)... If it's not loaded, it can't "build" your custom odbc query function.
Make sure your DSN is configured in /etc/odbc.ini
Make sure that /etc/asterisk/res_odbc.conf is properly configured
Make sure you're calling the DSN by the right name (I see it happen all the time)
Enable verbose and debug in your Asterisk logging, do a logger reload, core set verbose 5, core set debug 5, and then try the call again. When the call finishes, review the log; you'll see much more output regarding what happened.
Regarding the answer from recluze: not to call you out here, but using a PHP AGI is serious overkill. The func_odbc function works just fine, so why create more overhead and potential security issues by calling an external script (which itself has to run on top of an interpreter)?
You should call the func_odbc function as ODBC_connector, where connector is the section name used in func_odbc.conf as [connector]. In the dialplan it would be called like this:
exten => _0x.,n,ODBC_connector(${arg1},${arg2})
I don't really understand the syntax you're trying to use, but how about using AGI (with PHP) for this? Just define your logic in a PHP script and call it from your dialplan as:
exten => _0X.,n,AGI(script-filename.php,${CUSER},${EXTEN},${DTIME})
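Such a script might look like this (a sketch that assumes the PHPAGI library is installed; the database insert itself is left out, and all names are illustrative):
#!/usr/bin/php -q
<?php
// script-filename.php - hedged sketch using the PHPAGI library
require_once 'phpagi.php';
$agi = new AGI();
// Dialplan arguments arrive in $argv in the order they were passed
$caller = $argv[1]; // ${CUSER}
$callee = $argv[2]; // ${EXTEN}
$dtime  = $argv[3]; // ${DTIME}
// ... insert the record into your SQL table here (e.g. via PDO) ...
$agi->verbose("Logged call from $caller to $callee at $dtime");
?>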