I want to run my scenario against all the images in my resources folder, i.e. hit the same API once per image (converting each one to Base64 encoding first). But since these images are already in the resources folder, it doesn't make sense to duplicate their names as records in a CSV file just for the Scenario Outline. Can I call my own function (with code that takes an image from the resources folder and converts it to Base64) in the Examples section, so that the same API is hit again for every image?
Yes. First write some Java code to get a list of the image files. You can refer to this code for ideas: https://stackoverflow.com/a/65035825/143475
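For example, a minimal sketch (the class name and folder path here are just placeholders):

import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ImageLister {
    // Returns the paths of all regular files in the given folder,
    // so they can be handed to Karate via Java interop.
    public static List<String> listImages(String folder) throws IOException {
        try (Stream<Path> paths = Files.list(Paths.get(folder))) {
            return paths.filter(Files::isRegularFile)
                        .map(Path::toString)
                        .collect(Collectors.toList());
        }
    }
}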
If that is too hard, then just create a CSV file with a list of paths. Let me say that Karate is designed for testing, but you seem to be expecting something else. Karate is not a "general purpose" programming language, but it can be made to do extreme things via Java interop.
Once you have a JSON array, it can be used as the data source for the Examples: section: https://github.com/intuit/karate#json-array-data-source
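A rough sketch of how that could look in a feature file (ImageLister and the folder path are the hypothetical helpers from the snippet above):

Feature: hit the same API once for each image

Background:
  * def paths = Java.type('ImageLister').listImages('src/test/resources/images')
  # wrap each path in a JSON object so it becomes a named column
  * def data = karate.map(paths, function(p){ return { path: p } })

Scenario Outline: send image <path>
  # convert <path> to Base64 and call your API here
  ...

Examples:
  | data |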
You can refer to other answers for the Base64 conversion: https://stackoverflow.com/a/46452864/143475
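For instance, using the JDK's built-in encoder (the class name is again just a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class ImageEncoder {
    // Reads a file and returns its contents Base64-encoded.
    public static String encode(String path) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(path));
        return Base64.getEncoder().encodeToString(bytes);
    }
}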
Everything you want to do is possible, but please do some research and try. And the next time you ask a question, please show what you tried and give examples instead of just asking a broad question, thanks.
I have searched through the whole internet and didn't find anything useful. Could anyone please suggest how to make a variable explorer in Sublime Text 3 like the one in Spyder?
(Spyder maintainer here) If you want to create a representation of your current namespace (what is shown in the Variable Explorer), you can take a look at how we build it here, especially the value_to_display function, which is the one really responsible for that.
The viewers (for lists, dicts, NumPy arrays and DataFrames) are implemented in PyQt, and you can find them here.
To bring the value of a variable from a running IPython kernel and pass it to the viewers, we created our own kernel that serializes a value and sends it to Spyder. In Spyder we deserialize it and pass it to the viewers here (look for the CreateEditor method).
The process is really more complex than this little explanation, but I hope you can get an idea of how it works.
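As a very rough illustration of the first step (a sketch of the idea only, not Spyder's actual code):

# A minimal sketch of the idea behind value_to_display: build a short,
# human-readable summary of each variable in a namespace.
import types

def value_to_display(value, max_len=70):
    # Produce a truncated repr suitable for a variable-explorer column.
    try:
        text = repr(value)
    except Exception:
        text = '<unrepresentable>'
    return text if len(text) <= max_len else text[:max_len] + '...'

def namespace_view(namespace):
    # Skip private names and modules, as a variable explorer would.
    return {
        name: {'type': type(value).__name__, 'value': value_to_display(value)}
        for name, value in namespace.items()
        if not name.startswith('_') and not isinstance(value, types.ModuleType)
    }

# Example: namespace_view(globals())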
I'm using AutoHotkey to drive SeleniumBasic v2.0.9.0.
I'm new to Selenium and have been looking at a lot of different pages discussing how to get/set elements on a webpage. I've noticed there seem to be (at least) two different formats for the syntax.
Here are two examples:
1. driver.findElementByID("search_form_input_homepage").SendKeys("hello")
2. driver.findElement(By.id("search_form_input_homepage")).SendKeys("hello")
In my case the first one works but the second throws an error saying No such interface supported. I'm just curious about the origin of the second structure. Is it from Selenium 3?
Here is the answer to your question:
driver.findElementByID("search_form_input_homepage").SendKeys("hello") is the form used by the VBA module maintained by @FlorentB.
driver.findElement(By.id("search_form_input_homepage")).SendKeys("hello") is the form used by the Java bindings of Selenium.
Let me know if this answers your question.
I'm developing a simple web service which is going to add user-provided facts to my Prolog database (using assert). I thought it's better to keep these dynamic facts ("data") separate from my service rules that operate on these facts ("code"), hence split them into two different modules. The main reason was that I wanted to persist the dynamic facts to disk periodically, while being able to develop the code with no issues and independently of user data.
I've been using assert(my_store:fact(...)) to add user data to the my_store module and then in the code module I started coding rules like
:- module(my_code, [a_rule/1, ...]).

a_rule(Term) :-
    my_store:fact(...), ...
All seems OK, but with this approach my_store is hard-coded in the code module, which is a little worrying. For example, what if after some time I decide to change the data module name, or perhaps I'll need two separate data modules, one persisted frequently and another persisted only sporadically?
Can anyone advise what are the best practices in code and data organisation? Perhaps separation of code and data is against "the Prolog way"? Are there any good books that cover these issues in depth?
Cheers,
Jacek
That's a good question, touching on several very important topics.
I hope that the following comments help you to sort out most of your questions; for the points that interest you most, consider following up with new questions that address each specific issue in isolation.
First, when accepting user code as input, make sure you only allow safe code to be added to your programs. In SWI-Prolog, there is safe_goal/1, which helps you to ensure some safety properties. It's not perfect, but better than nothing.
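For example, a small sketch using SWI-Prolog's library(sandbox), where safe_goal/1 throws an exception for goals it cannot prove safe:

:- use_module(library(sandbox)).

% safe_goal/1 succeeds for goals it can prove safe and throws
% otherwise, so wrap it when validating user-supplied terms:
goal_is_safe(Goal) :-
    catch(safe_goal(Goal), _, fail).

% ?- goal_is_safe(member(X, [a,b])).   % succeeds
% ?- goal_is_safe(shell('rm -rf /')).  % fails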
Again in SWI-Prolog, there is library(persistency). Please carefully study the documentation to see how it works, and how you can use it to store dynamic data on disk.
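A minimal sketch of how it is used (the file name and fact shape are just examples):

:- use_module(library(persistency)).

:- persistent fact(term:any).

% Attach (and create, if missing) the database file; the library
% then generates assert_fact/1 and retract_fact/1 for you:
%
%   ?- db_attach('my_store.db', []).
%   ?- assert_fact(f(42)).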
Regarding the module names, I have two comments:
Explicit module qualifications of goals are very rare. Simply load the module and use its clauses.
Note that you can load modules dynamically. That is, nothing prevents you from using use_module/1 with a parameter that was obtained from somewhere else. This allows you more flexibility to specify the module from which you want to fetch your definitions, without having to change and recompile your code (see the sketch below).
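For instance (a hypothetical sketch; the file names are placeholders):

% The module file to load can be computed at run time:
attach_store(StoreFile) :-
    use_module(StoreFile).

% ?- attach_store('frequent_store.pl').
% ?- attach_store('sporadic_store.pl').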
Regarding separation of code and data: In Prolog, there is no such separation.
All Prolog clauses are Prolog terms.
Terms! TERMS ALL THE WAY DOWN!
Thanks to @mat for his suggestions, which made me read and think a little more. Now I can post a potential solution to my issue; not ideal, not using the persistency library, but a simple first attempt.
As mentioned, user data are stored with assert(my_store:fact(...)). That means the module my_store is created dynamically and there's no file that would allow use_module to be used. There is, however, the import/1 predicate, which I can use to import dynamically asserted facts, and so my solution looks like this:
:- module(my_code, [a_rule/1, ...]).

:- initialization(import_my_store).

import_my_store :-
    import(my_store:fact/1),
    import(my_store:another_fact/1),
    ...

a_rule(Term) :-
    fact(...), ...
Note that I can use fact/1 without explicit specification of the my_store module. And I can also easily dump the user data to a file.
save_db(File) :-
    tell(File),
    my_store:listing,
    told.
The downside is that on initialization the import/1 calls generate warnings such as "import/1: my_store:fact/1 is not exported (still imported into my_code)". But that's not a big issue, because the facts are still imported into my_code and I can use them without explicit module qualification.
Looking forward to hearing any comments. Cheers,
A solution using Logtalk, which provides an alternative to modules. First define an object with your code:
:- object(my_code).

    :- public([a_rule/1, ...]).
    :- private([fact/1, another_fact/1, ...]).
    :- dynamic([fact/1, another_fact/1, ...]).

    a_rule(Term) :-
        ::fact(...), ...

    ...

:- end_object.
Then, dynamically create any number of data stores as necessary as extensions (derived prototypes) of the my_code object:
?- create_object(my_store, [extends(my_code)], [], []).
To query a data store, simply send it a message:
?- my_store::a_rule(Term).
The create_object/4 built-in predicate can load the persistency file for the store if necessary (so that you resume where you left off):
?- create_object(my_store, [extends(my_code)], [include('my_store.pl')], []).
User data can be saved in a data store by asserting facts as expected:
?- my_store::assertz(fact(...)).
You will need a predicate to dump a data store to a file as a public predicate in the my_code object. For example:
:- public(dump/0).
dump :-
    self(Self),
    atom_concat(Self, '.pl', File),
    tell(File),
    dump_dynamic_predicates,
    told.

dump_dynamic_predicates :-
    current_predicate(Functor/Arity),
    functor(Template, Functor, Arity),
    predicate_property(Template, (dynamic)),
    ::Template,
    write_canonical(Template), write('.\n'),
    fail.
dump_dynamic_predicates.
Now you can dump a data store by typing:
?- my_store::dump.
Note that with this solution it is trivial to have any number of data stores concurrently. If a data store requires a specialized version of the code, you can simply extend the code object and then create the specialized data store as an extension of that specialized object.
I want to evaluate my content blocks before running my test suite, but the closures' property names are in bytecode already. I'm looking for the cleanest solution (compared with parsing the source manually).
I already tried the solution outlined in this post (and I'd still wind up doing some regex/parsing), but could only get it to work via the script execution engine; it failed in the IDE and GroovyConsole. Rather than embedding a Groovy script in the project's code, I thought I'd try using Geb's native classes.
Is building on the suggestion about extending Geb Navigators here viable for Geb's PageContentSupport class, whose contentTemplates contain a LinkedHashMap of exactly what I need? If yes, would someone provide guidance? If no, any suggestions?
It is currently not possible to get hold of all content elements for a given page/module. Feel free to create an issue for this in Geb's bug tracker, but remember that all that Geb can provide is either a list of content element names or a map from these names to closures that create these elements.
Having that information isn't a generic solution to your problem, because it's possible for content elements to take parameters, and there are situations where your content elements will be available on the page only after some other actions are performed (for example, you have to click on a button to reveal a section of a page that uses AJAX to retrieve its content). So I'm afraid that simply going over all elements and checking that they don't throw any errors will not cut it.
I'm still struggling to see what "evaluating" all content elements prior to running the suite would buy you. Are you after verifying that your content elements still work, to get faster feedback than from running the whole suite? I'm pretty sure you won't be able to fully automate detection of content definitions that no longer work. In my view it will be more effort than it's worth.