Test suite in Kotest (Kotlin)

I don't know if I'm missing something, but I couldn't find anything that explains how to build a test suite like in JUnit. Can someone help me? I saw that the documentation offers grouping tests, but when I run from Gradle the logs are really large and not very useful.

You can group your tests using tags; see https://kotest.io/docs/framework/tags.html.
For example, to group tests by operating system you could define the following tags:
import io.kotest.core.Tag

object Linux : Tag()
object Windows : Tag()
Test cases can then be marked with tags using the config function:
import io.kotest.core.spec.style.StringSpec

class MyTest : StringSpec() {
    init {
        "should run on Windows".config(tags = setOf(Windows)) {
            // ...
        }
        "should run on Linux".config(tags = setOf(Linux)) {
            // ...
        }
        "should run on Windows and Linux".config(tags = setOf(Windows, Linux)) {
            // ...
        }
    }
}
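The same docs page also covers tagging a whole spec by overriding tags(), so every test case in the class is tagged at once. A minimal sketch, reusing the Linux tag defined above:

import io.kotest.core.spec.style.StringSpec

class LinuxSuite : StringSpec({
    "runs only when the Linux tag is included" {
        // ...
    }
}) {
    // applies the tag to every test case in this spec
    override fun tags() = setOf(Linux)
}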
Then you can tell Gradle to run only tests with specific tags; see https://kotest.io/docs/framework/tags.html#running-with-tags.
Example: to run only tests tagged with Linux, but not tagged with Database, you would invoke Gradle like this:
gradle test -Dkotest.tags="Linux & !Database"
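Note that -D sets the property on the Gradle JVM, not on the forked test JVM, so the test task has to forward it. Per the Kotest docs, that looks roughly like this in build.gradle.kts (assuming the JUnit Platform runner):

tasks.test {
    useJUnitPlatform()
    // forward the tag expression from the Gradle command line to the test JVM
    systemProperty("kotest.tags", System.getProperty("kotest.tags"))
}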
Tags can also be included/excluded at runtime (for example, if you're running a particular project configuration instead of passing properties) through the RuntimeTagExpressionExtension:
RuntimeTagExpressionExtension.expression = "Linux & !Database"

Related

Karate tests run successfully but code coverage shows zero [duplicate]

How do I get JaCoCo reports for Karate test feature files using Gradle?
My project is a Gradle project and I am trying to integrate JaCoCo reporting for the Karate tests. The server is running locally on port 8080.
I am generating the JaCoCo report the following way; please let me know whether my approach is correct, and give me a solution for getting the JaCoCo report for the Gradle project.
1) First I generate JaCoCo execution data with the help of jacocoagent.jar, as follows, via a Gradle task:
java -javaagent:/pathtojacocojar/jacocoagent.jar=destfile=/pathtojocofile/jacoco.exec -jar my-app.jar
2) Next, I run a Gradle task to generate the report:
project.task('jacocoAPIReport', type: org.gradle.testing.jacoco.tasks.JacocoReport) {
    additionalSourceDirs = files(project.sourceSets.main.allSource.srcDirs)
    sourceDirectories = files(project.sourceSets.main.allSource.srcDirs)
    classDirectories = files(project.sourceSets.main.output)
    executionData = fileTree(dir: project.projectDir, includes: ["**/*.exec", "**/*.ec"])
    reports {
        html.enabled = true
        xml.enabled = true
        csv.enabled = false
    }
    onlyIf = {
        true
    }
    doFirst {
        // keep only execution data files that actually exist
        executionData = files(executionData.findAll {
            it.exists()
        })
    }
}
project.task('apiTest', type: Test) {
    description = 'Runs the api tests'
    group = 'verification'
    testClassesDirs = project.sourceSets.apiTest.output.classesDirs
    classpath = project.sourceSets.apiTest.runtimeClasspath
    useJUnitPlatform()
    outputs.upToDateWhen { false }
    finalizedBy jacocoAPIReport
}
I don't see any of my application's classes in the jacoco.exec file. I think that is why I always get a coverage report of 0%.
"The server is running locally on port 8080."
I don't think that is going to work. Depending on how your code is structured, you need to instrument the code of the server.
I suggest first getting a simple unit test of a Java method to work with coverage under Gradle. Once that works, apply the same approach to the server-side code.
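In other words, if the Karate tests only call a server running in a separate JVM, that server JVM is the one that needs the agent attached, and its classes must appear in the report's classDirectories. A build.gradle.kts sketch of starting the server under the agent (main class and paths are placeholders, not from the thread):

tasks.register<JavaExec>("runServerWithCoverage") {
    group = "verification"
    description = "Starts the server JVM with the JaCoCo agent attached"
    classpath = sourceSets["main"].runtimeClasspath
    mainClass.set("com.example.ServerMain")  // hypothetical main class
    // the agent instruments classes as the server loads them; the .exec
    // file is flushed when this JVM exits
    jvmArgs("-javaagent:/path/to/jacocoagent.jar=destfile=build/jacoco/server.exec")
}

The report task can then point its executionData at that file.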

Gradle: How to write tasks in Kotlin which run another task?

I want to have a number of test tasks, all of which are based on a "parent task". The parent task should start the client application to be tested and then call one specific test class. The background is that currently I have to start my client manually with "./gradlew client:run --args='profile=default_client'" and then start all test classes at once with "./gradlew test". (By the way: these tests connect to the running client via an RMI connection.)
My approach so far looks like this:
open class Testing : DefaultTask() {
    @get:Input
    var profileName = "client"

    @TaskAction
    fun testIt() {
        // 1. run/start the client
        dependsOn(":client:run --args='profile=" + profileName + "'")
        // this won't work: Cannot call Task.dependsOn(Object...) on task ':testing01' after task has started execution.

        // 2. run a SPECIFIC test suite via Gradle
        // ???
    }
}
tasks.register<Testing>("testing01") {
    profileName = "client-tests"
}
Unfortunately I don't know what to do next, or how to fix the running of the different tasks.
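The error message already points at the underlying rule: task relationships must be declared at configuration time, not inside a @TaskAction. A build.gradle.kts sketch of that shape (the task names, test filter, and profile-passing mechanism are illustrative assumptions, not from the thread):

val startClient = tasks.register<JavaExec>("startTestClient") {
    classpath = sourceSets["main"].runtimeClasspath
    mainClass.set("com.example.ClientMain")  // hypothetical client entry point
    args("profile=client-tests")
}

tasks.register<Test>("testing01") {
    group = "verification"
    useJUnitPlatform()
    // run only one suite instead of everything under ./gradlew test
    filter { includeTestsMatching("com.example.suite01.*") }
    // declared at configuration time, so the dependsOn error goes away;
    // a client that blocks would need to be started in the background instead
    dependsOn(startClient)
}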

Can gradle tasks be created that subset the tests in a project?

I am using the Gradle Tooling API to kick off tests based on receiving a webhook.
I don't see a way to pass parameters to the Tooling API. I can run tests with something like:
String workingDir = System.getProperty("user.dir");
ProjectConnection connection = GradleConnector.newConnector()
        .forProjectDirectory(new File(workingDir))
        .connect();
try {
    connection.newBuild().forTasks("test").run();
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    connection.close();
}
But I don't see a way to run something like "gradle test --tests=xxx", so I was hoping I could make Gradle tasks that are subsets of the tests, like "gradle dev_tests" and "gradle int_tests".
Does anyone know if this is possible and if so, how to do it?
Per the Gradle docs, newBuild() returns a BuildLauncher, which conveniently works as a builder: you can set several parameters before calling run() on it.

BuildLauncher build = connection.newBuild();
// select tasks to run:
build.forTasks("test");
// include some build arguments, e.g. a test filter:
build.withArguments("--tests=xxx");
// ...
build.run();
Source:
https://docs.gradle.org/current/javadoc/org/gradle/tooling/BuildLauncher.html
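The subset tasks the question asks about are also straightforward to declare; a build.gradle.kts sketch (task name and filter pattern are illustrative):

tasks.register<Test>("dev_tests") {
    group = "verification"
    useJUnitPlatform()
    testClassesDirs = sourceSets["test"].output.classesDirs
    classpath = sourceSets["test"].runtimeClasspath
    // only run the dev subset of the tests
    filter { includeTestsMatching("com.example.dev.*") }
}

The Tooling API call then becomes build.forTasks("dev_tests").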

Automatically re-run only failed scenarios in Cucumber Java + TestNG

How can I make only the failed scenarios run again automatically on failure?
Here is some context on what I am doing:
I pass the TestRunner class from the command line through the cucumber-testng.xml file at run time.
I am able to see the rerun.txt file after a scenario fails, containing feature/GM/TK/payment.feature:71 (pointing to the failed scenario), but the failed scenario is not automatically re-run.
The "TestRunner" java file
@RunWith(Cucumber.class)
@CucumberOptions(strict = true,
        features = { "src/test/resources/" }, // feature file location
        glue = { "com/test/stepdefs", "com.test.cucumber.hooks" }, // hooks and stepdef location
        plugin = { "json:target/cucumber-report-composite.json", "pretty", "rerun:target/rerun.txt" }
)
public class CucumberTestRunner extends AbstractTestNGCucumberTests
{
}
The "RunFailedTest" Class to re-run from rerun.txt file
@RunWith(Cucumber.class)
@CucumberOptions(
        strict = false,
        features = { "@target/rerun.txt" }, // rerun location
        glue = { "com/test/stepdefs", "com.test.cucumber.hooks" }, // hooks and stepdef location
        plugin = { "pretty", "html:target/site/cucumber-pretty", "json:target/cucumber.json" }
)
class RunFailedTest
{
}
You can achieve it by using Gherkin with QAF, which generates a TestNG XML configuration for the failed scenarios that you can use for the rerun. It also supports scenario rerun on failure by setting the retry.count property.
Using Cucumber + Maven + TestNG:
First, you don't need @RunWith(Cucumber.class) as mentioned in your question; if you are using TestNG, only @CucumberOptions is required.
When you start your test execution, all scenario failures will be written to the file target/rerun.txt, as per the configuration in your runner file.
Now you need to create one more runner file (for example, "FailureRunner") and in this file provide the path "@target/rerun.txt" (which already has the details of the failed scenarios): features = { "@target/rerun.txt" }
Now you need to update your testng.xml file and include the "FailureRunner" as below:

<classes>
    <class name="Class path of your first Runner class" />
    <class name="Class path of FailureRunner class" />
</classes>
Once you do all the above steps and run your execution, the first run will write the failed scenarios to target/rerun.txt; after that, the "FailureRunner" class will be executed, which picks up the "@target/rerun.txt" file, and hence the failed scenarios will be re-executed.
I have executed it the same way and it works fine; let me know if it helps!

Can you test SetUp success/failure in Google Test?

Is there a way to check that SetUp code has actually worked properly in GTest fixtures, so that the whole fixture or test application can be marked as failed, rather than getting weird test results and/or having to check this explicitly in each test?
If you put your fixture setup code into a SetUp method and it fails and issues a fatal failure (the ASSERT_XXX or FAIL macros), Google Test will not run your test body. So all you have to write is:
class MyTestCase : public testing::Test {
protected:
    bool InitMyTestData() { ... }

    virtual void SetUp() {
        ASSERT_TRUE(InitMyTestData());
    }
};

TEST_F(MyTestCase, Foo) { ... }
Then MyTestCase.Foo will not execute if InitMyTestData() returns false. If you already have nonfatal assertions in your setup code (i.e., EXPECT_XXX or ADD_FAILURE), you can generate a fatal assertion from them by writing ASSERT_FALSE(HasFailure());. You can find more info on failure detection in the Google Test Advanced Guide wiki page.