Update - TL;DR:
When it comes to compilable and cacheable JSR223 elements, I've seen people use all sorts of tactics to dance around the issue. I had my doubts, got my answers here, and found that most of the tactics I saw are done wrong:
If your JSR223 scripts are full of args[0], args[1], args[2] everywhere, that's the wrong tactic. Even if it is JMeter's best practice, it is not best practice from a software-engineering and maintainability point of view.
Even if you assign args[n] to meaningfully named variables, it is still not the best practice in JMeter, as there are much simpler and more straightforward ways.
Similarly, if you are following the advice of "use vars.get("") to get variables" (and then assign them to meaningfully named variables), it is not the best practice in JMeter either, as there are much simpler and more straightforward ways.
The advice "Don't use ${} in JSR223 scripts" is more myth than truth, as all the examples in this question that use ${} are just fine.
Also, the advice to break up expressions like "ValidAssetIds_${i+1}_g" with "+" into "ValidAssetIds_" + (i+1) + "_g" is just another myth, and in most cases untrue, as illustrated in this question.
Now, as per JMeter's best practices for JSR223:
The reason JSR223 elements are recommended over Beanshell or JavaScript for intensive load testing is compilation: the Groovy scripting engine implements the Compilable interface.
And it tells people to ensure:
that the Cache compiled script if available property is checked (enabled), so that the script compilation is cached;
that the script does not use any variable via ${varName}, because caching would take only the first value of ${varName}. Instead, use vars.get("varName").
The other option is to pass the variables as Parameters to the script.
Now, my questions are:
What would happen if I use
def my_var = vars.get("MY_VARIABLE")
log.info("The value of my_var is ${my_var}")
in the above example? Would the log change in each iteration when MY_VARIABLE changes?
Instead of the above, I also tried to use
def my_var2 = __V(MY_VARIABLE)
def my_var3 = ${__V(MY_VARIABLE)}
but somehow I wasn't able to get the value of MY_VARIABLE. What am I missing?
What if my ${varName} is dynamically defined? What would happen if I use ${varName} in such a form? For example,
case 1:
for (def i = 0; i < validAssets.size(); i++) {
    vars.put("ValidAssetIds_${i+1}_v", "${i+1}")
}
case 2:
def varName = ${__time(/1000,)}
vars.put("MY_Log","abc${varName}")
Would each iteration have its own MY_Log value, or will they all be the same? I know I can guess the conclusion from observations, but the purpose of this question is to let me (and other people) know the precautions to take when using JSR223 that we might not have been aware of before. Thanks.
All the "precautions" are described in the documentation
When using this feature, ensure your script code does not use JMeter variables or JMeter function calls directly in script code as caching would only cache first replacement. Instead use script parameters.
For example if you define a random string via User Parameters:
and try to refer to it as ${randomString} in Groovy, it will be "random" only during the first iteration; on subsequent iterations the "first" value will be used.
Questions 1 and 3 use Groovy's string interpolation feature; it's safe to use unless there is a clash with other JMeter variable names.
Question 2: you need to surround the __V() function with quotation marks, otherwise the variable value is resolved but it's not defined in Groovy, causing a compilation error; you should see a message about it in the jmeter.log file:
def my_var2 = "${__V(MY_VARIABLE,)}"
Check out Apache Groovy: What Is Groovy Used For? article for more information on Groovy scripting in JMeter context.
Find the example here.
def a = condition ? " karate match statement " : "karate match statement"
Is it possible to do something like this?
This is not recommended practice for tests because tests should be deterministic.
The right thing to do is:
craft your request so that the response is 100% predictable. do not worry about code-duplication, this is sometimes necessary for tests
ignore the dynamic data if it is not relevant to the Scenario
use conditional logic to set "expected value" variables instead of complicating your match logic
use self-validation expressions or schema-validation expressions for specific parts of the JSON
use the if keyword and call a second feature file - or you can even set the name of the file to call dynamically via a variable
in some cases karate.abort() can be used to conditionally skip / exit early
That said, if you really insist on doing this in the same flow, Karate allows you to do a match via JS from 0.9.6.RC4 onwards.
See this thread for details: https://github.com/intuit/karate/issues/1202#issuecomment-653632397
karate.match() will return a JSON in the form { pass: '#boolean', message: '#string' }
If none of the above options work, that means you are doing something really complicated, so write Java interop / code to handle it.
I would like to write an IntelliJ plugin that can display the values returned by class def()s in Python. I would like those values to be evaluated as much as possible, by static analysis. I need this to work only for very simple expressions in one particular use case.
We have class definitions in our Python code base that consist of a lot of very simple def()s.
All the defs are just one return statement returning a very simple expression.
All of the code follows the same pattern and uses very few Python operators.
The code is long and really hard to follow.
After a few "go to definition" jumps within such a class I can't remember where I am anymore.
I am hoping that some IntelliJ plugin can lessen the pain.
So, for example, this is a short and very simplified code fragment; hopefully it will be enough to demonstrate the problem.
class SomeClass(object):
    def __init__(self, param):
        self.param = param

    def a(self):
        return self.param + 1

    def b(self):
        return self.a() + otherfunc()

    def c(self):
        return self.b() + 3
I would like the plugin to display the following:
class SomeClass(object):
    def __init__(self, param):
        self.param = param

    def a(self):  # = param + 1
        return self.param + 1

    def b(self):  # = param + 1 + otherfunc()
        return self.a() + otherfunc()

    def c(self):  # = param + 1 + otherfunc() + 3
        return self.b() + 3
This is just an illustration; the real code makes more sense, but the expressions themselves are that simple.
The comments represent the plugin output. I would like those values to be always visible as code hints, tooltips or something similar, and to be updated as I type.
I don't want to evaluate the defs, because some of the values are not available before runtime. I want to get the expression itself from the AST.
Obviously this is impossible to do in the general case, but I have a very specific use case in our code base where a very small Python subset is used, and all the code follows the same pattern.
I already have a script that does this in Python with the ast module. I wonder if there is a way to do the same on the fly in IntelliJ.
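For reference, a minimal sketch of what such an offline ast-based pass could look like (this is only an illustration of the approach, not the actual script; inlining self.<method>() calls by plain string replacement is a simplifying assumption, and ast.unparse needs Python 3.9+):

import ast

SOURCE = '''
class SomeClass(object):
    def __init__(self, param):
        self.param = param

    def a(self):
        return self.param + 1

    def b(self):
        return self.a() + otherfunc()

    def c(self):
        return self.b() + 3
'''

def method_returns(class_node):
    """Map each method name to the source text of its single return expression."""
    returns = {}
    for item in class_node.body:
        if isinstance(item, ast.FunctionDef) and item.name != "__init__":
            last = item.body[-1]
            if isinstance(last, ast.Return) and last.value is not None:
                returns[item.name] = ast.unparse(last.value)   # Python 3.9+
    return returns

def expand(expr, returns):
    """Naively inline self.<method>() calls, then drop the 'self.' prefix."""
    changed = True
    while changed:
        changed = False
        for name, body in returns.items():
            call = f"self.{name}()"
            if call in expr:
                expr = expr.replace(call, body)
                changed = True
    return expr.replace("self.", "")

class_node = ast.parse(SOURCE).body[0]
returns = method_returns(class_node)
for name, expr in returns.items():
    print(f"def {name}(self):  # = {expand(expr, returns)}")
# def a(self):  # = param + 1
# def b(self):  # = param + 1 + otherfunc()
# def c(self):  # = param + 1 + otherfunc() + 3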
Is there some way to achieve this, or something similar?
Is there a plugin that does something like that?
I doubt that there is, at least not exactly, so I want to try to implement it myself (the use case is common and very annoying).
I skimmed through some of the IntelliJ Platform Plugin SDK documentation; it's still not clear to me where to begin.
So what would be the easiest way to implement it from scratch or using another plugin as an example?
Is there an opensource plugin that does something similar that I can look at to figure out how to implement this myself?
My best guess is that I would need to:
create a callback that will be called every time a def is changed (by implementing various extensions, no? which one?)
find this def in the file
walk the def with PSI to extract the expression
create some GUI element to represent the def's expression (what are my options? are there predefined elements I can use? ideally I would assign the value to some existing GUI element)
assign the value to the GUI element
But I don't know how to begin implementing any of the above (I can probably figure out the PSI part).
I searched for existing plugins, but couldn't find anything even remotely close. I skimmed the documentation, I did the tutorial, but I couldn't figure out which of the many extensions I need to use.
I considered using the debugger for that, but I don't see how debugger can help me here.
Any help (plugins, tutorials, extensions, plugins to use as an example, or implementation details) would be greatly appreciated. Thanks.
What you want to find is an extension point that changes the text the user sees. I suggest you look at the Annotator class, but maybe this is not the best extension point for you and you will need to find a more suitable one (this is the most difficult part of creating plugins for JetBrains IDEs). You can find the full list of available extension points here.
After you find the right extension point, you need to implement it and register it in plugin.xml so the IDE knows about your changes.
Some useful links:
Example plugins from developers
Official documentation
Quick course from a JetBrains developer (in Russian)
The following function iterates through the names of directories in the file system and, if they are not there already, adds these names as records to a database table. (Please note this question applies to most languages.)
def find_new_dirs():
    dirs_listed_in_db = get_dirs_in_db()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
I want to write a unit test for this function. However, the function has a dependency on an external component - a database. So how should I write this test?
I assume I should 'mock out' the database. Does this mean I should take the function get_dirs_in_db as a parameter, like so?
def find_new_dirs(get_dirs_in_db):
    dirs_listed_in_db = get_dirs_in_db()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
Or possibly like so?
def find_new_dirs(db):
    dirs_listed_in_db = db.get_dirs()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
Or should I take a different approach?
Also, should I design my whole project this way from the start? Or should I refactor them to this design when the need arises when writing tests?
What you're describing is called dependency injection and yes, it is a common way of writing testable code. The second method you outlined (where you pass in the db) is probably more common. Also, you can give the db parameter of your function a default value so that you only need to specify the mock db in testing.
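A minimal sketch of that idea, assuming a db object that exposes a get_dirs() method; RealDb and FakeDb are illustrative names, not part of your code:

def get_directories_in_our_path():
    return ["a", "b", "c"]          # stand-in for the real filesystem walk

class RealDb:
    def get_dirs(self):
        raise NotImplementedError("talks to the real database in production")

class FakeDb:
    """Test double returning a canned list of directories already in the db."""
    def get_dirs(self):
        return ["a"]

def find_new_dirs(db=None):
    db = db if db is not None else RealDb()   # default: the real dependency
    dirs_listed_in_db = db.get_dirs()
    return [d for d in get_directories_in_our_path()
            if d not in dirs_listed_in_db]

def test_find_new_dirs():
    # Production code calls find_new_dirs(); the test injects the fake.
    assert find_new_dirs(FakeDb()) == ["b", "c"]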
Whether to write your code that way at the outset or modify it later would be a matter of opinion, but if you adhere to the Test-driven development (TDD) methodology then you would write your tests before your code-under-test anyway.
There are other ways to deal with this problem, but you're asking a broad question at that point.
I take it these code fragments are Python, which I'm not familiar with, but in any case it looks like the methods are detached from any stateful object, and I'm not sure if that's idiomatic Python or simply your design.
In an OO design you'd want an object that holds a data access object in its state (similar to your 2nd version) and mock that object for tests. You'd also want to mock the get_directories_in_our_path part.
As for when this design should be done: as the first step, before creating the first code file. You should use dependency injection throughout your code. This will aid testing as well as decoupling, and increase the reusability of your classes.
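A rough sketch of that shape, with both collaborators injected and replaced by test doubles; DirScanner and dir_lister are hypothetical names, and unittest.mock is just one way to build the doubles:

from unittest.mock import Mock

class DirScanner:
    """Holds its collaborators as state so both can be swapped out in tests."""

    def __init__(self, db, dir_lister):
        self.db = db                  # data access object exposing get_dirs()
        self.dir_lister = dir_lister  # callable returning directory names

    def find_new_dirs(self):
        known = self.db.get_dirs()
        return [d for d in self.dir_lister() if d not in known]

def test_find_new_dirs():
    db = Mock()
    db.get_dirs.return_value = ["a"]
    scanner = DirScanner(db, dir_lister=lambda: ["a", "b", "c"])
    assert scanner.find_new_dirs() == ["b", "c"]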
Aside from getting any real work done, I have an itch. My itch is to write a view engine that closely mimics a template system from another language (Template Toolkit/Perl). This is one of those "if I had time" / "do it to learn something new" kinds of projects.
I've spent time looking at CoCo/R and ANTLR, and honestly, it makes my brain hurt, but some of CoCo/R is sinking in. Unfortunately, most of the examples are about creating a compiler that reads source code, but none seem to cover how to create a processor for templates.
Yes, those are the same thing, but I can't wrap my head around how to define the language for templates where most of the source is the HTML, rather than actual code being parsed and run.
Are there any good beginner resources out there for this kind of thing? I've taken a gander at Spark, which didn't appear to have the grammar in the repo.
Maybe that is overkill, and one could just text-replace the template syntax with C# in the file and compile it. http://msdn.microsoft.com/en-us/magazine/cc136756.aspx#S2
If you were in my shoes and weren't a language creating expert, where would you start?
The Spark grammar is implemented with a kind-of-fluent domain specific language.
It's declared in a few layers. The rules which recognize the HTML syntax are declared in MarkupGrammar.cs; those are based on grammar rules copied directly from the XML spec.
The markup rules refer to a limited subset of C# syntax rules declared in CodeGrammar.cs; those are a subset because Spark only needs to recognize enough C# to adjust single quotes around strings to double quotes, match curly braces, etc.
The individual rules themselves are of type ParseAction<TValue> delegate which accept a Position and return a ParseResult. The ParseResult is a simple class which contains the TValue data item parsed by the action and a new Position instance which has been advanced past the content which produced the TValue.
That isn't very useful on its own until you introduce a small number of operators, as described in Parsing expression grammar, which can combine single parse actions to build very detailed and robust expressions about the shape of different syntax constructs.
The technique of using a delegate as a parse action came from Luke H's blog post Monadic Parser Combinators using C# 3.0. I also wrote a post about Creating a Domain Specific Language for Parsing.
It's also entirely possible, if you like, to reference the Spark.dll assembly and inherit a class from the base CharGrammar to create an entirely new grammar for a particular syntax. It's probably the quickest way to start experimenting with this technique, and an example of that can be found in CharGrammarTester.cs.
Step 1. Use regular expressions (regexp substitution) to split your input template string into a token list; for example, split
hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!
to
write('hel<b>lo')
if('foo')
write('bar is')
substitute('bar')
write('.')
else()
write('baz')
end()
write('world</b>!')
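As a rough illustration of this step in Python (the (kind, value) token pairs and the simplified regular expression are assumptions made for this sketch, matching the [if]/[else]/[end]/[bar] syntax above):

import re

TEMPLATE = "hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!"

# Split on [...] directives, keeping the literal text around them.
TOKEN_RE = re.compile(r"\[(if [^\]]+|else|end|[^\]]+)\]")

def tokenize(template):
    tokens = []
    pos = 0
    for m in TOKEN_RE.finditer(template):
        if m.start() > pos:                          # literal text before the tag
            tokens.append(("write", template[pos:m.start()]))
        tag = m.group(1)
        if tag.startswith("if "):
            tokens.append(("if", tag[3:]))
        elif tag in ("else", "end"):
            tokens.append((tag, None))
        else:                                        # [bar] -> variable substitution
            tokens.append(("substitute", tag))
        pos = m.end()
    if pos < len(template):
        tokens.append(("write", template[pos:]))
    return tokens

print(tokenize(TEMPLATE))
# [('write', 'hel<b>lo'), ('if', 'foo'), ('write', 'bar is '), ('substitute', 'bar'),
#  ('write', '.'), ('else', None), ('write', 'baz'), ('end', None), ('write', 'world</b>!')]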
Step 2. Convert your token list to a syntax tree:
Sequence
    Write ('hel<b>lo')
    If ('foo')
        Sequence
            Write ('bar is')
            Substitute ('bar')
            Write ('.')
        Write ('baz')
    Write ('world</b>!')
class Instruction {
}

class Write : Instruction {
    string text;
}

class Substitute : Instruction {
    string varname;
}

class Sequence : Instruction {
    Instruction[] items;
}

class If : Instruction {
    string condition;
    Instruction then;
    Instruction otherwise;  // "else" is a reserved word
}
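A Python counterpart of these node classes, plus a small recursive-descent parser that turns the token list from Step 1 into the tree, might look like this (a sketch assuming the (kind, value) tokens from the earlier snippet):

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Write:
    text: str

@dataclass
class Substitute:
    varname: str

@dataclass
class Sequence:
    items: List

@dataclass
class If:
    condition: str
    then: Sequence
    otherwise: Optional[object]

def parse(tokens):
    """Recursive-descent parse of the (kind, value) token list into a tree."""
    pos = 0

    def parse_sequence(stop_at):
        nonlocal pos
        items = []
        while pos < len(tokens) and tokens[pos][0] not in stop_at:
            kind, value = tokens[pos]
            pos += 1
            if kind == "write":
                items.append(Write(value))
            elif kind == "substitute":
                items.append(Substitute(value))
            elif kind == "if":
                then = parse_sequence(("else", "end"))
                otherwise = None
                if pos < len(tokens) and tokens[pos][0] == "else":
                    pos += 1
                    otherwise = parse_sequence(("end",))
                pos += 1                             # consume the "end" token
                items.append(If(value, then, otherwise))
        return Sequence(items)

    return parse_sequence(stop_at=())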
Step 3. Write a recursive function (called the interpreter), which can walk your tree and execute the instructions there.
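Continuing the sketch, a compact recursive interpreter over those nodes (looking conditions and variables up in a plain dict is an assumption made for illustration):

def interpret(node, context, out):
    """Walk the tree recursively, appending rendered text to out."""
    if isinstance(node, Write):
        out.append(node.text)
    elif isinstance(node, Substitute):
        out.append(str(context[node.varname]))
    elif isinstance(node, Sequence):
        for item in node.items:
            interpret(item, context, out)
    elif isinstance(node, If):
        branch = node.then if context.get(node.condition) else node.otherwise
        if branch is not None:
            interpret(branch, context, out)

out = []
tree = parse(tokenize("hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!"))
interpret(tree, {"foo": True, "bar": 42}, out)
print("".join(out))   # hel<b>lobar is 42.world</b>!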
Another, alternative approach (instead of steps 1-3), if your language supports eval() (such as Perl, Python or Ruby): use a regexp substitution to convert the template to an eval()-able string in the host language, and run eval() to instantiate the template.
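For instance, in Python the substitution-only part of a template could be rewritten into an f-string and handed to eval(); this sketch deliberately ignores the [if]/[else]/[end] directives and is only meant to show the eval() idea:

import re

def render(template, context):
    # Turn "[var]" into "{var}" and let the host language (Python) do the
    # substitution work by evaluating the result as an f-string.
    fstring_source = re.sub(r"\[([A-Za-z_]\w*)\]", r"{\1}", template)
    return eval("f" + repr(fstring_source), {}, dict(context))

print(render("Hello [name], you have [count] new messages.",
             {"name": "world", "count": 3}))
# Hello world, you have 3 new messages.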
There are sooo many things to do, but it does work for one simple GET statement plus a test. That's a start.
http://github.com/claco/tt.net/
In the end, I had already put too much time into ANTLR to give loudejs' method a go. I wanted to spend a little more time on the whole process rather than on the parser/lexer. Maybe in version 2 I can have a go at the Spark way, when my brain understands things a little more.
Vici Parser (formerly known as LazyParser.NET) is an open-source tokenizer/template parser/expression parser which can help you get started.
If it's not what you're looking for, then you may get some ideas by looking at the source code.