Cro Template as Object - raku

The Scenario
I've been using templates in Cro (documented at https://cro.services/docs/reference/cro-webapp-template), and have enjoyed the fact that they can contain subs.
I currently have a 'main' template, and some reports, let's say report1, report2, and report3.
Let's say that, from report3, I want to include an array of report1 instances.
Now, let's say that the reports each have the following subs:
init: Some Javascript initialisation code (that should only be included once, no matter how many instances of the report are used)
HTML: Some HTML code that should be included for each instance of the report (with a few parameters to differentiate it), but that, due to restrictions of the Javascript framework, may not contain any <script> or <style> tags
data: A snippet of Javascript that again has to be repeated for each time the report is included
Currently I have each of the above in a separate sub in the template.
The Problem
Redeclaration of symbol '&__TEMPLATE_SUB__report-initial'.
The Question
While I can pass a report name (e.g. "report1") to the main template, what I'm lacking is a way for the main template to call the subs of whichever report was passed in, since multiple reports may be involved.
Ideas I've tried
What would be ideal is if I could somehow create a "report" class that inherits from the template, pass instances of that class into the main template, and then call the subs as methods on the report object. However, I've been unable to figure out a way to do this.
I can see three likely options here:
My difficulty may be that I'm not thinking "The Cro Way". If there's a better way to achieve what I'm trying to do, please let me know
There may be a way to achieve what I want, and I've just been unable to understand the documentation (or it may be missing)
While unlikely, it's possible that Cro hasn't been designed with this kind of possibility in mind.
Any help anyone can provide would be appreciated.
Thanks!
Edit: I think a macro that can have multiple (named) "bodies" would solve the problem.

It looks like &__TEMPLATE_SUB__report1-initial is a global that is redeclared when you import report1 into report3. May I suggest trying template fragments instead of the whole template?

My initial response to your question is: please can you provide a minimal reproducible example of your code, so that we can get a deeper view of the context and have something to experiment with?
My current understanding of what you need is "to use Raku-style classes & objects (with callbacks) in a Cro template setting", and that the standard ways of doing this, such as associative access to a nested topic variable, are too limited.
In itself, this is not necessarily a weakness of Raku / Cro, in that the power of a template slang needs to be limited to avoid potential security issues, and, as with most template systems, it is a bit more prosaic than a full-blown coding language.
My guess is that Cro template parts, which can chunk up web parts and step in and out of the (real Raku) root block, can handle the report data structure you describe, depending on how you chunk things up - have you tried this?
If this is still not tenable, there are a couple of ways to expand the options, such as dependency injection and route handlers.
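To make the fragment idea concrete, here is a minimal sketch of a template sub declared once and then called once per report instance (the sub name and markup are invented for the example; the <:sub ...> and <&...> syntax is from the Cro template reference):

```
<:sub report-html($title)>
  <div class="report"><$title></div>
</:>

<&report-html('Report 1')>
<&report-html('Report 1 again')>
```

Since a sub is declared a single time in the template that defines it, calling it repeatedly with different arguments avoids the redeclaration that happens when a whole template is imported more than once.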

Related

How to build a call graph for a function module?

A while ago, while documenting legacy code, I found out there is a tool for displaying the call graph (call stack) of any standard program. Absurdly, I wasn't aware of this tool for years :D
It gives a fancy list/hierarchy of the program's calls, and though it is not a call graph in the full sense, it is very helpful in some cases.
The problem is this tool is linked only to SE93 so it can be used only for transactions.
I tried to search but didn't find any similar tool for reports or function modules. Yes, I can create a tcode for a report, but for a function module this approach doesn't work.
If I put the FM call inside a report and build a graph using this tool, it wraps the call as a single unit and does not analyze deeper. And that's it.
Does anybody know a workaround to build a graph for something besides a transaction?
The cynic in me thinks RS_CALL_HIERARCHY was left to rot. Sandra is right, it definitely used to work. Once OO came to ABAP, interfaces and dynamic/generic code became possible, so a call hierarchy based on static code analysis was pushing the proverbial uphill.
IMO the best way to solve this is a FULL trace, and then to extract the data from the trace.
There are even external tools that do that.
This is of course still limited, as running a trace on every execution path can be very time consuming. Did I hear someone say small classes, please?
Transaction SAT.
Make sure the profile you use isn't aggregating, and measure the blocks you are interested in.
Now wade your way through the trace.
https://help.sap.com/doc/saphelp_ewm93/9.3/en-US/4e/c3e66b6e391014adc9fffe4e204223/content.htm?no_cache=true
Have fun :)
The call hierarchy display also works for programs and function modules.
In my S/4HANA system, for VA01, it displays the hierarchy of calls; clicking the hierarchy entry of function module CJWI_INIT then displays that module's own hierarchy.
I get exactly the same result by calling the function module RS_CALL_HIERARCHY this way:
The parameter OBJECT_TYPE may have these values:
P : program
FF : function module
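For illustration, a call along these lines reproduces what SE93 shows (the parameter name OBJECT_NAME is an assumption here; check the function module's actual interface in SE37 before relying on it):

```abap
CALL FUNCTION 'RS_CALL_HIERARCHY'
  EXPORTING
    object_name = 'CJWI_INIT'  " program or function module name
    object_type = 'FF'.        " P = program, FF = function module
```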
The "call graph" is not maintained anymore since at least Basis 4.6, and it doesn't work for classes and methods.
But the tool is buggy: in some cases, when a function module contains a PERFORM at its first line, it may not be displayed, whether the call graph is launched from SE93 or directly from RS_CALL_HIERARCHY.

Get the results of an (existing) code inspection

I am new to writing intellij plugins, so I apologize in advance if my question might be a bit unclear.
I know that (live) code inspections are achieved via Annotators or LocalInspectionTools. I also know there is an API to write a custom Annotator or Inspection tool and I have seen several examples.
What I do not know (my question): is there a manager/helper/"global inspector" that can provide me with the results of an existing code annotator/inspection process (done by the IDE's plugins or by some 3rd party plugin)?
For instance: I do not want to write a custom Lint annotator/inspection plugin for WebStorm. One can configure JSLint/JSHint inside WebStorm settings. The results of the live inspection can be seen over the current file/current open editor.
I would like to get the results of this live inspection, that occurs in the current open editor (inside my own custom code). For this I am interested in the API to get this annotator/inspector and/or the results it provides.
(I apologize for maybe using annotator and inspection terms in a confusing manner)
If there is another question (which I could not find) that duplicates what I have asked above, please re-direct me.
Thank you in advance!
Andrei.
Unfortunately the regular annotating process for the linters is asynchronous, so you cannot get the annotation results directly (by calling a 'Manager' method).
You can create instances of JSLintInspection, JSHintInspection, etc. and call the #createVisitor().visit(File) method, but the operation is very slow and you must call it outside of the AWT thread.
You can also try to run the method com.intellij.codeInsight.daemon.impl.DaemonCodeAnalyzerEx#processHighlights, but as I mentioned above, the annotation results for linters may be unavailable (or outdated).

Organising data and code across modules in Prolog

I'm developing a simple web service which is going to add user provided facts to my Prolog database (using assert). I thought it's better to keep these dynamic facts ("data") separate from my service rules that operate on these facts ("code"), hence split them into two different modules. Main reason was that I wanted to persist the dynamic facts to disk periodically, while being able to develop the code with no issues and independently of user data.
I've been using assert(my_store:fact(...)) to add user data to the my_store module and then in the code module I started coding rules like
:- module(my_code, [a_rule/1, ...]).

a_rule(Term) :-
    my_store:fact(...), ...
All seems ok but with this approach my_store is hard-coded in the code module, which is a little worrying. For example, what if after some time I decide to change data module name or, perhaps, I'll need two separate data modules one with frequent persistence, another with persistence done only sporadically?
Can anyone advise what are the best practices in code and data organisation? Perhaps separation of code and data is against "the Prolog way"? Are there any good books that cover these issues in depth?
Cheers,
Jacek
That's a good question, touching on several very important topics.
I hope that the following comments help you to sort out most of your questions, possibly more so if you follow up on the points that interest you most with new questions that address the specific question in isolation.
First, when accepting user code as input, make sure you only allow safe code to be added to your programs. In SWI-Prolog, there is safe_goal/1, which helps you to ensure some safety properties. It's not perfect, but better than nothing.
Again in SWI-Prolog, there is library(persistency). Please carefully study the documentation to see how it works, and how you can use it to store dynamic data on disk.
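As a concrete illustration of the library(persistency) approach, here is a minimal sketch of a data-store module (the predicate and file names are made up for the example; see the library documentation for the full option list):

```prolog
:- module(my_store, [attach_store/1, add_fact/1]).
:- use_module(library(persistency)).

% Declare fact/1 as persistent; the library generates
% assert_fact/1 and retract_fact/1 helpers automatically.
:- persistent
    fact(term:any).

% Attach the on-disk journal file; previously asserted
% facts are replayed from it at startup.
attach_store(File) :-
    db_attach(File, []).

% Asserting through the generated helper both updates the
% in-memory database and appends to the journal.
add_fact(X) :-
    assert_fact(X).
```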
Regarding the module names, I have two comments:
Explicit module qualifications of goals are very rare. Simply load the module and use its clauses.
Note that you can load modules dynamically. That is, nothing prevents you from using use_module/1 with a parameter that was obtained from somewhere else. This allows you more flexibility to specify the module from which you want to fetch your definitions, without having to change and recompile your code.
Regarding separation of code and data: In Prolog, there is no such separation.
All Prolog clauses are Prolog terms.
Terms! TERMS ALL THE WAY DOWN!
Thanks to #mat for his suggestions, which made me read and think a little more. Now I can post a potential solution to my issue; not ideal, not using the persistency library, but a simple first attempt.
As mentioned, user data are stored with assert(my_store:fact(...)). That means module my_store is created dynamically and there's no file which would allow use_module to be used. There's, however, the import/1 predicate which I can use to import dynamically asserted facts, and so my solution looks like this:
:- module(my_code, [a_rule/1, ...]).
:- initialization import_my_store.

import_my_store :-
    import(my_store:fact/1),
    import(my_store:another_fact/1),
    ...

a_rule(Term) :-
    fact(...), ...
Note that I can use fact/1 without explicit specification of the my_store module. And I can also easily dump the user data to a file.
save_db(File) :-
    tell(File),
    my_store:listing,
    told.
The downside is that on initialization the import/1 calls generate warnings such as: import/1: my_store:fact/1 is not exported (still imported into my_code). But that's not a big issue because they are still imported into my_code and I can use the user facts without explicit module specification.
Looking forward to hearing any comments. Cheers,
A solution using Logtalk, which provides an alternative to modules. First define an object with your code:
:- object(my_code).

    :- public([a_rule/1, ...]).
    :- private([fact/1, another_fact/1, ...]).
    :- dynamic([fact/1, another_fact/1, ...]).

    a_rule(Term) :-
        ::fact(...), ...

    ...

:- end_object.
Then, dynamically create any number of data stores as necessary as extensions (derived prototypes) of the my_code object:
?- create_object(my_store, [extends(my_code)], [], []).
To query a data store, simply send it a message:
?- my_store::a_rule(Term).
The create_object/4 built-in predicate can load the persistency file for the store if necessary (so that you resume where you left off):
?- create_object(my_store, [extends(my_code)], [include('my_store.pl')], []).
User data can be saved in a data store by asserting it as expected:
?- my_store::assertz(fact(...)).
You will need a public predicate in the my_code object to dump a data store to a file. For example:
:- public(dump/0).

dump :-
    self(Self),
    atom_concat(Self, '.pl', File),
    tell(File),
    dump_dynamic_predicates,
    told.

dump_dynamic_predicates :-
    current_predicate(Functor/Arity),
    functor(Template, Functor, Arity),
    predicate_property(Template, (dynamic)),
    ::Template,
    write_canonical(Template), write('.\n'),
    fail.
dump_dynamic_predicates.
Now you can dump a data store by typing:
?- my_store::dump.
Note that with this solution it is trivial to have any number of data stores concurrently. If a data store requires a specialized version of the code, then you can simply extend the code object and then create the specialized data store as an extension of that specialized object.

VB.net Share module code between solutions

I have certain Modules that I would like to set up to be referenceable by multiple solutions, as the code always behaves in basically the same manner (e.g. code for logging errors). They make no sense as classes, so it seems like a class library is out; and I haven't seen any other suggestions for sharing code between solutions.
So I'm left wondering what would be the best way to create one thing that I can just use across multiple solutions to avoid having to rewrite the same code?
It sounds like a class library is exactly what you want. Build it, reference it within each solution, and code against it. One source, multiple solutions running off that code.
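Note that a VB.NET Module compiles into a class library just fine, since Module members are effectively Shared; "class library" refers to the project type, not to needing classes. A minimal sketch (the project name, file path, and log format here are illustrative):

```vb
' In a Class Library project, e.g. "SharedUtilities"
Public Module ErrorLogger
    ' Appends a timestamped message to a log file next to the executable.
    Public Sub LogError(message As String)
        System.IO.File.AppendAllText("errors.log",
            DateTime.Now.ToString("s") & " " & message & Environment.NewLine)
    End Sub
End Module
```

Any solution that references the compiled DLL can then call ErrorLogger.LogError(...) directly, with no class instantiation required.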
You could also implement the functionality into a separate piece such as an API. This is dependent on the function of the code obviously, but logging errors is a good use case.

Access closure property names in the content block at runtime

I want to evaluate my content blocks before running my test suite, but the closures' property names are in bytecode already. I'm looking for the cleanest solution (compared with parsing the source manually).
I already tried the solution outlined in this post (and I'd still wind up doing some RegEx/parsing), but could only get it to work via the script execution engine. It failed in the IDE and GroovyConsole. Rather than embedding a Groovy script in the project's code, I thought I'd try using Geb's native classes.
Is building on the suggestion about extending Geb Navigators here viable for Geb's PageContentSupport class whose contentTemplates contain a LinkedHashMap of exactly what I need? If yes, would someone provide guidance? If no, any suggestions?
It is currently not possible to get hold of all content elements for a given page/module. Feel free to create an issue for this in Geb's bug tracker, but remember that all that Geb can provide is either a list of content element names or a map from these names to closures that create these elements.
Having that information isn't a generic solution to your problem, because it's possible for content elements to take parameters, and there are situations where your content elements will be available on the page only after some other actions are performed (for example, you have to click on a button to reveal a section of a page that uses ajax to retrieve its content). So I'm afraid that simply going over all elements and checking that they don't throw any errors will not cut it.
I'm still struggling to see what would "evaluating" all content elements prior to running the suite buy you. Are you after verifying that your content elements still work to get a faster feedback than running the whole suite? I'm pretty sure that you won't be able to fully automate detection of content definitions that don't work anymore. In my view it will be more effort than it's worth.