How can I create a shortcut task for a single test in Gradle?

In Gradle, I can run a single test from the command line as follows:
gradle -Dtest.single=VeryCriticalTestX test
VeryCriticalTestX is frequently executed alone, and I'd like to provide a more readable and flexible API to my fellow developers. Ideally, they would only need to run
gradle testCritical
without worrying about the test's name. This would also allow me to change the name over time without breaking Jenkins builds.
How do I achieve this?

Gradle's Test tasks can be configured to include only tests matching a given name pattern. You can create a new task testCritical as follows:
task testCritical(type: Test) {
    group = 'verification'
    description = 'Runs a very critical test'
    outputs.upToDateWhen { false }
    include('**/VeryCriticalTestX.class')
}
With this, renaming VeryCriticalTestX to something else doesn't break other people's commands or Jenkins jobs. However, there is the risk that someone accidentally disables this task by renaming VeryCriticalTestX without adapting the task configuration. This can be prevented with the following TaskExecutionListener:
// verify that testCritical is not skipped unexpectedly due to a renamed class file
// we detect this using Gradle's NO-SOURCE TaskState
gradle.addListener(new TaskExecutionListener() {
    void beforeExecute(Task task) {}

    void afterExecute(Task task, TaskState state) {
        if ('testCritical' == task.name && state.getNoSource()) {
            throw new GradleException("testCritical did not run because it couldn't find VeryCriticalTestX")
        }
    }
})
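As a side note: on Gradle versions that support test filtering (1.10+, if I remember correctly), the class-file include pattern can be swapped for a filter on the test class name. A minimal sketch, assuming the same task name:
task testCritical(type: Test) {
    group = 'verification'
    description = 'Runs a very critical test'
    outputs.upToDateWhen { false }
    filter {
        // matches the test class by name instead of by class-file path
        includeTestsMatching 'VeryCriticalTestX'
    }
}
As far as I know, the filter fails the build when no matching tests are found (failOnNoMatchingTests defaults to true), which would also catch an unnoticed rename.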

Related

Retrieve output log when test fails during setup

I'm running automated unit tests with SpecFlow and Selenium. SpecFlow offers BeforeTestRun, BeforeFeature, and BeforeScenario attributes to execute code between tests at the appropriate time.
I'm also using log4net to log test output.
When a test fails during the test or during the BeforeScenario phase, I can see the output logged.
But when a test fails during BeforeTestRun or BeforeFeature, there is no output available.
This makes it difficult to diagnose failures that happen during the early stages on the remote testing server, where all I have are the output logs.
Is there any way to use log4net to get output logs when the test fails before the individual test has begun?
You can implement a custom method that uses the TestResult and TestContext objects, and call it in [TearDown] or somewhere at the end of the test:
if (TestContext.CurrentContext.Result.Outcome.Status.ToString() == "Failed")
{
    string message = TestContext.CurrentContext.Result.Message;
    string logs = TestContext.CurrentContext.Result.StackTrace;
}
else if (TestContext.CurrentContext.Result.Outcome.Status.ToString() == "Passed")
{
    // user-defined action
}
else
{
    string otherlog = TestContext.CurrentContext.Result.Outcome.Status.ToString();
}
Actually, it's weird that it doesn't show you the log when a test fails. It shows up fine for me.
What I suggest is to check that log4net pushes all the logs to the console. If you haven't done any special configuration, your logger should have a console appender by default.
I initialize my logger like this:
private static readonly ILog log =
    LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
Another guess is that when your test fails in the OneTimeSetUp fixture, it may simply not have produced any log output yet.
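If the console appender does turn out to be missing, one option is to configure log4net explicitly in a SpecFlow BeforeTestRun hook, so even failures in BeforeTestRun/BeforeFeature have an appender to write to. A minimal sketch, with class and method names of my own choosing (on the netstandard builds of log4net, BasicConfigurator.Configure may need a repository argument):
using log4net;
using log4net.Config;
using TechTalk.SpecFlow;

[Binding]
public class LoggingHooks
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingHooks));

    [BeforeTestRun]
    public static void ConfigureLog4Net()
    {
        // wires up a simple console appender so output from early hook failures still shows up
        BasicConfigurator.Configure();
        Log.Info("log4net configured for this test run");
    }
}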

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line; I don't care whether that's with go run or go test or whatever.
I've also thought of ExampleTestFunc() but there's so much output produced by the test it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
1. Export all these initialization variables and this code (upper case), so that I could use it from a sub-package main.
2. Duplicate the whole code.
3. Move the test into a sub-package main and then have a func main() for the test with the frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd really rather avoid the second option... And the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
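A minimal sketch of the environment-variable variant (the variable name DEBUG_TEST is just an assumption for illustration):
package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorld(t *testing.T) {
    // DEBUG_TEST is a hypothetical name; any non-empty value enables the debug frontend
    if os.Getenv("DEBUG_TEST") != "" {
        log.Println("Starting debug frontend for test...")
        // start the HTTP frontend and block until the signal arrives
    }
    log.Println("Done")
}
Then run something like DEBUG_TEST=1 go test -run '^TestHelloWorld$' . to enable it.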
I finally found an acceptable option. This answer
Skip some tests with go test
put me on the right track.
Essentially, it uses build tags that are not present in normal builds but that I can supply when running the test manually.
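For reference, a minimal sketch of what that can look like; the tag name debugfrontend and the package name are my own choice (on Go versions before 1.17, use the older // +build debugfrontend comment instead of //go:build):
//go:build debugfrontend

package mypackage

import "testing"

// Only compiled when -tags debugfrontend is supplied, so the
// automated suite never sees this test.
func TestFuncWithFrontend(t *testing.T) {
    // same setup as TestFunc, plus the HTTP frontend that blocks
    // until the signal from the browser arrives
}
Run it manually with go test -tags debugfrontend -run '^TestFuncWithFrontend$' .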

How to skip teststep in QAF using TestStepListener?

I am using QAF as my Test Automation Framework.
I want to skip a specific test step in the production environment. How can I skip execution of a BDD test step using TestStepListener?
Here is an example use case:
For a shopping-cart application, I have developed 200+ scenarios, which I have been executing on the test environment. Now I want to execute all scenarios on the production environment, but skip the final payment and order-review steps there. How can I do that?
Will you please provide details of the use case? If my understanding is correct, you don't want to execute a specific step in the production environment. You can use a step listener to jump to a specific step index, but not to skip the current step. One way is to group steps into a high-level step. For example, instead of writing detailed steps in BDD:
Given some situation
When performing some action
Then step-1
And step-2 not for production
and step-3
you can have a high-level step:
Given some situation
When performing some action
Then generic step for all environments
Here your "generic step for all environments" step can have implementations for different environments in different packages; configure the step provider package at runtime, as sketched below.
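A minimal sketch of that idea; the package and class names are mine, and the QAFTestStep import path should be checked against your QAF version:
// step implementation used on the test environment
package com.example.steps.testenv;

import com.qmetry.qaf.automation.step.QAFTestStep;

public class CheckoutSteps {

    @QAFTestStep(description = "generic step for all environments")
    public static void genericCheckoutStep() {
        // full flow: payment and order review
    }
}

// A parallel class in, say, com.example.steps.prod would implement the same
// step description without the payment/order-review part; whichever package
// is configured as the step provider at runtime wins.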
Another trick is to set and reset dry-run mode in a step listener. For example, you can provide additional meta-data in your step definition; in the step listener, depending on that meta-data, set dry-run mode in the before method and reset it in the after method.
Step definition:
@MetaData("{'skip_prod':true}")
@QAFTestStep(description = "do payment")
public static void doPayment() {
    //TODO: write your code here
}
Step listener code may look like:
public void beforExecute(StepExecutionTracker stepExecutionTracker) {
    Map<String, Object> metadata = stepExecutionTracker.getStep().getMetaData();
    if (null != metadata && metadata.containsKey("skip_prod")
            && "prod".equalsIgnoreCase(getBundle().getString("env"))) {
        // do not run this step
        getBundle().setProperty(ApplicationProperties.DRY_RUN_MODE.key, true);
    }
}

public void afterExecute(StepExecutionTracker stepExecutionTracker) {
    Map<String, Object> metadata = stepExecutionTracker.getStep().getMetaData();
    if (null != metadata && metadata.containsKey("skip_prod")
            && "prod".equalsIgnoreCase(getBundle().getString("env"))) {
        // this is not a dry run overall, so reset the flag
        getBundle().setProperty(ApplicationProperties.DRY_RUN_MODE.key, false);
    }
}

zip dependencies to a file does not work any more

Based on Gradle : Copy all test dependencies to a zip file, I created:
task zipDeps(type: Zip) {
    from configurations.testCompile.allArtifacts.files
    from configurations.testCompile
    exclude { details -> details.file.name.contains('servlet-api') }
    exclude { details -> details.file.name.contains('el-api') }
    exclude { details -> details.file.name.contains('jsp-api') }
    exclude { it.file in configurations.providedCompile.files }
    archiveName "${rootProjectName}-runtime-dependencies_full.zip"
    doLast {
        ant.copy(toDir: "$buildDir/libs/") {
            fileset(file: "$buildDir/distributions/${rootProjectName}-runtime-dependencies_full.zip")
        }
    }
}
This worked fine until I migrated to Gradle 2.0. If I leave the code as it was, the task is executed at the beginning and nothing happens at all. If I add << to the task and make it depend on my war build task, then at the end of the war build it claims to be up to date, but nothing has happened.
One of my problems seems to be that the fileset to be copied is not created at all.
What can I do to get that stuff working again?
The task won't be executed in the beginning, but calling .files resolves the configurations too early. The first from line needs to go (it's redundant and also calls .files when it shouldn't). The doLast block is suspicious and should probably be turned into a separate Copy task. Instead of the second from and last exclude, try from (configurations.compile - configurations.providedCompile).
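Putting those suggestions together, the reworked build could look roughly like this; a sketch only, assuming the configuration names used in the question and keeping the original archive name:
task zipDeps(type: Zip) {
    from(configurations.compile - configurations.providedCompile)
    exclude { details -> details.file.name.contains('servlet-api') }
    exclude { details -> details.file.name.contains('el-api') }
    exclude { details -> details.file.name.contains('jsp-api') }
    archiveName "${rootProjectName}-runtime-dependencies_full.zip"
}

task copyDepsZip(type: Copy, dependsOn: zipDeps) {
    // replaces the doLast/ant.copy block with a proper Copy task
    from zipDeps.archivePath
    into "$buildDir/libs/"
}
The separate Copy task has its own inputs and outputs, so its up-to-date checks no longer interfere with the Zip task.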

Teamcity rerun specific failed unstable tests

I have TeamCity 7.1 and around 1000 tests. Many tests are unstable and fail randomly. Even if a single test fails, the whole build fails, and running a new build takes an hour.
So I would like to be able to configure TeamCity to rerun failed tests within the same build a specific number of times. Any success for a test should be counted as a success, not a failure. Is that possible?
Also, right now, if tests in some module fail, TeamCity does not proceed to the next module. How can I fix that?
With respect, I think you might be coming at this problem from the wrong end. A test that randomly fails is not providing you any value as a metric of deterministic behavior. Either fix the randomness (through use of mocks, etc.) or ignore the tests.
If you absolutely have to, I'd put a loop around some of your test code and catch, say, 5 failures before throwing the exception as a 'genuine' failure. Something like this C# example would do...
public static void TestSomething()
{
    var counter = 0;
    while (true)
    {
        try
        {
            // add test code here...
            return;
        }
        catch (Exception) // catch more specific exception(s)...
        {
            if (counter == 4)
            {
                throw;
            }
            counter++;
        }
    }
}
While I appreciate the problems that can arise with testing async code, I'm with @JohnHoerr on this one: you really need to fix the tests.
The rerun-failed-tests feature is part of the Maven Surefire Plugin. If you execute mvn -Dsurefire.rerunFailingTestsCount=2 test, then failing tests will be rerun until they pass or the number of reruns has been exhausted.
Of course, -Dsurefire.rerunFailingTestsCount can be used in TeamCity or any other CI server.
See:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html
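If you'd rather pin this in the build than pass it on the command line, the equivalent plugin configuration should look roughly like this; note that the rerun feature needs Surefire 2.18 or later (and, originally, JUnit 4.x):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <configuration>
    <!-- rerun each failing test up to 2 times before reporting it as failed -->
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>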