Continuous integration: Is it possible to specify the tests in advance? - testing

I am used to "old fashioned" waterfall development cycles.
For a new project, continuous integration seems to fit our needs better.
In waterfall, you have to specify the tests you will implement in advance.
My questions:
What is the usual way with continuous integration development cycles regarding test specification?
If you don't specify the tests, can you imagine a way to specify them in advance?
Many thanks for your help.

At university we were taught that "test driven development" makes sense, especially if there is a proper coding specification.
If you're not able to write tests before coding, the coding spec should be more specific or it has issues.
I usually write unit tests based on the coding spec for my Java classes, which are afterwards integrated and executed on our Jenkins continuous integration server.
Forgive me if I am wrong, but that's what I learned...
It always depends on the complexity of the required Java classes; trivial "domain" classes do not need much specification info.
In most cases we try to specify how the classes or methods should work (in words) and also write down some example values.
Let's say you should write a method that checks whether a value is in a specific range:
// Example Specification:
// the method 'checkIfItsInRange' should return true when the input lies within the range
// and is divisible by the distance value
// Let's say the range goes from -30.00 to +30.00 with a distance of 0.25
// valid values: 30, -30, 15.25, 15.50, 17.75 etc. -> return true
// invalid: -31, -30.01, +30.08, 0.4 etc. -> return false
// MissingParameterException when one of the parameters is null
public boolean checkIfItsInRange(BigDecimal from, BigDecimal to, BigDecimal distance, BigDecimal input) throws MissingParameterException {
    // TODO implement depending on spec.
}
In this case you can already write some unit tests before you start to implement the method itself.
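For illustration, here is a minimal sketch of such spec-driven tests, assuming JUnit 4 and a hypothetical RangeChecker class that will hold the method (the class name and the chosen test values are illustrative, taken only from the example spec above):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.math.BigDecimal;
import org.junit.Test;

public class RangeCheckerTest {

    // Hypothetical class under test; the spec above only defines the method.
    private final RangeChecker checker = new RangeChecker();

    @Test
    public void valueInsideRangeAndDivisibleByDistanceReturnsTrue() throws MissingParameterException {
        // 15.25 lies within [-30.00, +30.00] and is divisible by the 0.25 distance
        assertTrue(checker.checkIfItsInRange(new BigDecimal("-30.00"), new BigDecimal("30.00"),
                new BigDecimal("0.25"), new BigDecimal("15.25")));
    }

    @Test
    public void valueOutsideRangeReturnsFalse() throws MissingParameterException {
        // -31 lies outside the range, so the method should return false
        assertFalse(checker.checkIfItsInRange(new BigDecimal("-30.00"), new BigDecimal("30.00"),
                new BigDecimal("0.25"), new BigDecimal("-31")));
    }

    @Test(expected = MissingParameterException.class)
    public void nullParameterThrowsMissingParameterException() throws MissingParameterException {
        checker.checkIfItsInRange(null, new BigDecimal("30.00"),
                new BigDecimal("0.25"), new BigDecimal("1.00"));
    }
}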
I hope that makes things a bit clearer.

JMeter Compilable JSR223 Usage Precautions

Update - TL;DR:
When it comes to the compilable and cacheable JSR223 elements, I've seen people using all sorts of tactics dancing around them. I had my doubts, I got my answers here, and found that most of the tactics I saw are done wrong:
If your JSR223 scripts are full of args[0], args[1], args[2] everywhere, then that's the wrong choice of tactic; even if it is JMeter best practice, it is not best practice from a software-engineering and maintainability point of view.
Even if you assign args[n] to some meaningfully-named variables, it is not best practice in JMeter either, as there are much simpler and more straightforward ways.
Similarly, if you are following the advice of "using vars.get("") to get variables" (and then assigning them to meaningfully-named variables), it is not best practice in JMeter either, as there are much simpler and more straightforward ways.
The advice "Don't use ${} in JSR223 scripts" is more myth than truth, as all the examples using ${} in this question are just fine.
Also, the advice of breaking up expressions like "ValidAssetIds_${i+1}_g" with "+" into "ValidCatalogAssetIds_" + (i+1) + "_g" is just another myth, and in most cases untrue, as illustrated in this question.
Now, as per JMeter's best practices for JSR223:
The reason JSR223 elements are recommended over Beanshell or JavaScript for intensive load testing is that their scripting engine can implement the Compilable interface, which the Groovy scripting engine does.
And it tells people to
check (enable) the Cache compiled script if available property to ensure the script compilation is cached, and to
ensure the script does not use any variable via ${varName}, as caching would take only the first value of ${varName}. Instead use vars.get("varName").
Otherwise, the other option is to pass the variables as Parameters to the script.
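For example, a minimal Groovy sketch of the two documented options (the variable name MY_VARIABLE is a placeholder, not taken from the JMeter manual):

// Option 1: read the JMeter variable through the vars API;
// the lookup happens at run time, so caching the compiled script stays safe
def myVar = vars.get("MY_VARIABLE")
log.info("Current value: " + myVar)

// Option 2: put ${MY_VARIABLE} into the element's "Parameters" field
// and read it from the script through the args binding
def myParam = args[0]
log.info("Parameter value: " + myParam)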
Now, my questions are:
What would happen if I use
def my_var = vars.get("MY_VARIABLE")
log.info("The value of my_var is ${my_var}")
in the above example? Would the log change on each iteration when MY_VARIABLE changes?
Instead of the above, I also tried to use
def my_var2 = __V(MY_VARIABLE)
def my_var3 = ${__V(MY_VARIABLE)}
but somehow I wasn't able to get the value of MY_VARIABLE. What am I missing?
What if my ${varName} is dynamically defined? What would happen if I use ${varName} in such a form? Like,
case 1:
for (def i = 0; i < validAssets.size(); i++) {
    vars.put("ValidAssetIds_${i+1}_v", "${i+1}")
}
case 2:
def varName = ${__time(/1000,)}
vars.put("MY_Log","abc${varName}")
Would each iteration have its own MY_Log value, or will they all be the same? I know I could guess the conclusion from observations, but the purpose of this question is to let me (and other people) know the precautions, when it comes to using JSR223, that we might not be aware of. Thanks.
All the "precautions" are described in the documentation
When using this feature, ensure your script code does not use JMeter variables or JMeter function calls directly in script code as caching would only cache first replacement. Instead use script parameters.
For example, if you define a random string via User Parameters and try to refer to it as ${randomString} in Groovy, it will be "random" only during the first iteration; on subsequent iterations the "first" value will be used:
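A minimal sketch of the difference, assuming a JMeter variable named randomString as in that example:

// Problematic: JMeter substitutes ${randomString} into the script text itself,
// so with "Cache compiled script if available" enabled only the first
// substituted value ever gets compiled and is then reused on every iteration
log.info("Value: ${randomString}")

// Safe: the variable is looked up through the vars API on every iteration
log.info("Value: " + vars.get("randomString"))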
Questions 1 and 3 use Groovy's string interpolation feature; it's safe to use unless there is a clash with the names of other JMeter variables.
Question 2: you need to surround the __V() function with quotation marks, otherwise the variable value is resolved but not defined in Groovy, causing a compilation error; you should see a message about it in the jmeter.log file:
def my_var2 = "${__V(MY_VARIABLE,)}"
Check out the Apache Groovy: What Is Groovy Used For? article for more information on Groovy scripting in the JMeter context.

How to measure static test coverage?

So, DLang (effectively) comes with code coverage built in. That's cool.
My issue is, I do a lot of metaprogramming. I tend to test my templates with static asserts:
template CompileTimeFoo(size_t i)
{
    enum CompileTimeFoo = i + 3;
}
unittest
{
    static assert(CompileTimeFoo!5 == 8);
}
Unfortunately (and obviously), when running tests with coverage, the body of CompileTimeFoo will not be counted as "hit" nor as "hittable" lines.
Now, let's consider a slightly more complicated template:
enum IdentifierLength(alias X) = __traits(identifier, X).length;
void foo(){}
static assert(IdentifierLength!foo == 3);
In this case there still are no "hits", but there is one hittable line (where foo is defined). Because of that, my coverage falls to 0% (in this example). If you look at this submodule of my pet project and at its coverage on Codecov, you'll see this exact case - I've prepared a not-that-bad test suite, yet it looks like the whole module is a wildland, because coverage is 0% (or close to it).
Disclaimer: I want to keep my tests in a different source set. This is a matter of taste, but I dislike mixing tests with production code. I don't know exactly what would happen if I wrapped foo in version(unittest){...}, but I expect that (since this code will still be pushed to the compiler) it wouldn't change much.
Again: I DO understand why that happens. I was wondering if there is some trick to work around that? Specifically: is there a way to calculate coverage for things that get called ONLY during compile time?
I can hack testing for coverage's sake when I do mixins and just test code-generating things by comparing strings at runtime, but: 1. this would be ugly, and 2. it doesn't cover the case above.

Mono.defer() vs Mono.create() vs Mono.just()?

Could someone help me to understand the difference between:
Mono.defer()
Mono.create()
Mono.just()
How to use them properly?
Mono.just(value) is the most primitive - once you have a value you can wrap it into a Mono and subscribers down the line will get it.
Mono.defer(monoSupplier) lets you provide the whole expression that supplies the resulting Mono instance. The evaluation of this expression is deferred until somebody subscribes. Inside of this expression you can additionally use control structures like Mono.error(throwable) to signal an error condition (you cannot do this with Mono.just).
Mono.create(monoSinkConsumer) is the most advanced method, giving you full control over the emitted values. Instead of needing to return a Mono instance from the callback (as in Mono.defer), you get control over the MonoSink<T>, which lets you emit values through the MonoSink.success(), MonoSink.success(value), and MonoSink.error(throwable) methods.
Reactor documentation contains a few good examples of possible Mono.create use cases: link to doc.
The general advice is to use the least powerful abstraction to do the job: Mono.just -> Mono.defer -> Mono.create.
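As an illustration, here is a small sketch of the difference in evaluation time between Mono.just and Mono.defer (the expensiveCall() helper is hypothetical):

import reactor.core.publisher.Mono;

public class MonoCreationDemo {

    static String expensiveCall() {
        System.out.println("expensiveCall() executed");
        return "value";
    }

    public static void main(String[] args) {
        // Eager: expensiveCall() runs right here, at assembly time,
        // even if nobody ever subscribes to the resulting Mono
        Mono<String> eager = Mono.just(expensiveCall());

        // Lazy: the supplier runs only when somebody subscribes,
        // once per subscriber
        Mono<String> lazy = Mono.defer(() -> Mono.just(expensiveCall()));

        lazy.subscribe(System.out::println); // triggers expensiveCall() now
    }
}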
Although in general I agree with (and praise) @IlyaZinkovich's answer, I would be careful with the advice
The general advice is to use the least powerful abstraction to do the job: Mono.just -> Mono.defer -> Mono.create.
In the reactive approach, especially if we are beginners, it's very easy to misjudge what the "least powerful abstraction" actually is. I am not saying anything different from @IlyaZinkovich, just depicting one detailed aspect.
Here is one specific use case where the more powerful abstraction Mono.defer() is preferable over Mono.just(), but which might not be visible at first glance.
See also:
https://stackoverflow.com/a/54412779/2886891
https://stackoverflow.com/a/57877616/2886891
We use switchIfEmpty() as subscription-time branching:
// First ask provider1
provider1.provide1(someData)
    // If provider1 did not provide the result, ask the fallback provider provider2
    .switchIfEmpty(provider2.provide2(someData))

public Mono<MyResponse> provide2(MyRequest someData) {
    // The Mono assembly is needed only in some corner cases
    // but in fact it is always happening
    return Mono.just(someData)
        // expensive data processing which might even fail at assembly time
        .map(...)
        .map(...)
        ...
}
provider2.provide2() should receive someData only when provider1.provide1() does not return any result, and/or the assembly of the Mono returned by provider2.provide2() is expensive and may even fail when called on wrong data.
In this case defer() is preferable, even if it might not be obvious at first glance:
provider1.provide1(someData)
    // ONLY IF provider1 did not provide the result, assemble another Mono with provider2.provide2()
    .switchIfEmpty(Mono.defer(() -> provider2.provide2(someData)))

Source of randomness in kotlin-stdlib-common

In kotlin-stdlib-common, is there any source of randomness available out of the box, whether it's some implementation of the standard java.util.Random, kotlin.math.random*, or basic current-time millis that I could use to create my own random number generator? I can't find any.
If it's not there, how would you get the source of randomness without setting your own platform dependent implementations? This is the only method I need:
expect class Rng {
    fun nextInt(): Int
}
I'm trying to make it platform agnostic.
The answer would be: wait for Kotlin 1.3 to be released, when the common library will be enriched with classes and methods that can provide the source for random values.
https://kotlinlang.org/docs/reference/whatsnew13.html#multiplatform-random
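With Kotlin 1.3 this works directly in common code; a minimal sketch (the rollDice name is just an example):

import kotlin.random.Random

fun rollDice(): Int {
    // kotlin.random.Random lives in kotlin-stdlib-common since 1.3,
    // so this compiles for all targets without expect/actual declarations
    return Random.nextInt(1, 7) // uniform value in 1..6
}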
This may be a post with many links, which may cause the problem of Your answer is in another castle: when is an answer not an answer?, so I'll try my best to describe the links. My understanding of Kotlin Multiplatform is Kotlin-Multiplatform = Kotlin-JVM + Kotlin-JS.
I believe that random numbers in Kotlin-JVM are provided by java.util.Random, and by Math.random() in Kotlin-JS, for the following reasons:
How can I get a random number in Kotlin?, where one answer says that Kotlin-JS can use Math.random() to get a random number.
I can't find any random-number-related method in Kotlin-JVM, but there is a random() in Kotlin-JS.
In the source code of the Kotlin-JVM-related files, wherever Random() is used there is an import java.util.*, or the file directly uses java.util.Random, for example kotlin/libraries/stdlib/jvm/src/kotlin/collections/MutableCollectionsJVM.kt#L78.
And java.util.Random is designed to be platform-independent both in its results and in its implementation, for these reasons:
Is Java's RNG (using seeds) platform-independent?, though this question may be outdated.
We can't find the keyword "native" in the source of JDK8's java.util.Random or JDK10's java.util.Random, and the RNG logic is clear in that source code: the seed is derived from nanoTime() if not provided, and the RNG is an implementation of the algorithm from Volume 2 of TAOCP.
So, I think, to the question
How would you get the source of randomness without setting your own platform-dependent implementations?
the answer is: maybe a random-enough seed and a random-enough (P)RNG.
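For pre-1.3 common code, that suggestion can be sketched by hand, for example with the same 48-bit linear congruential generator that java.util.Random implements, seeded by whatever platform-specific value the caller injects (illustrative only, not production-grade randomness):

// Pure common-code PRNG: the classic 48-bit LCG used by java.util.Random.
// The caller supplies the seed (e.g. current time millis obtained through an
// expect/actual declaration), so nothing here is platform dependent.
class Rng(seed: Long) {
    private var state: Long = (seed xor 0x5DEECE66DL) and ((1L shl 48) - 1)

    fun nextInt(): Int {
        state = (state * 0x5DEECE66DL + 0xBL) and ((1L shl 48) - 1)
        return (state ushr 16).toInt()
    }
}

// usage: val rng = Rng(seedFromThePlatform); val x = rng.nextInt()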

How should I deal with external dependencies in my functions when writing unit tests?

The following function iterates through the names of directories in the file system, and if they are not in there already, adds these names as records to a database table. (Please note this question applies to most languages).
def find_new_dirs():
    dirs_listed_in_db = get_dirs_in_db()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
I want to write a unit test for this function. However, the function has a dependency on an external component - a database. So how should I write this test?
I assume I should 'mock out' the database. Does this mean I should take the function get_dirs_in_db as a parameter, like so?
def find_new_dirs(get_dirs_in_db):
    dirs_listed_in_db = get_dirs_in_db()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
Or possibly like so?
def find_new_dirs(db):
    dirs_listed_in_db = db.get_dirs()
    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)
    return new_dirs
Or should I take a different approach?
Also, should I design my whole project this way from the start? Or should I refactor them to this design when the need arises when writing tests?
What you're describing is called dependency injection, and yes, it is a common way of writing testable code. The second method you outlined (where you pass in the db) is probably more common. Also, you can give the db parameter of your function a default value, so that you only need to specify the mock db in testing cases.
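A minimal sketch of that idea using Python's built-in unittest.mock (the fake_db object and the test values are illustrative; a default argument such as db=ProductionDb() could be added for production use, as suggested above):

import unittest
from unittest import mock


def get_directories_in_our_path():
    """Placeholder for the real filesystem lookup."""
    raise NotImplementedError


def find_new_dirs(db):
    # The db collaborator is injected, so tests can pass in a fake.
    dirs_listed_in_db = db.get_dirs()
    return [d for d in get_directories_in_our_path() if d not in dirs_listed_in_db]


class FindNewDirsTest(unittest.TestCase):
    def test_reports_only_directories_missing_from_db(self):
        fake_db = mock.Mock()
        fake_db.get_dirs.return_value = ["a", "b"]
        # Patch the filesystem lookup too, so the test never touches a real disk.
        with mock.patch(__name__ + ".get_directories_in_our_path", return_value=["a", "c"]):
            self.assertEqual(find_new_dirs(fake_db), ["c"])


if __name__ == "__main__":
    unittest.main()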
Whether to write your code that way at the outset or modify it later would be a matter of opinion, but if you adhere to the Test-driven development (TDD) methodology then you would write your tests before your code-under-test anyway.
There are other ways to deal with this problem, but you're asking a broad question at that point.
I take it these code fragments are Python, which I'm not familiar with, but in any case it looks like the methods are detached from any stateful object, and I'm not sure if that's idiomatic Python or simply your design.
In an OO design you'd want an object that holds a data access object in its state (similar to your 2nd version) and mock that object for tests. You'd also want to mock the get_directories_in_our_path part.
As for when this design should be done - as the first step before creating the first code file. You should use dependency injection throughout your code. This will aid in testing as well as decoupling and increased reusability of your classes.