TeamCity: rerun specific failed unstable tests

I have TeamCity 7.1 and around 1000 tests. Many tests are unstable and fail randomly; even a single failing test fails the whole build, and running a new build takes an hour.
I would therefore like to configure TeamCity to rerun failed tests within the same build a specific number of times. Any success among those reruns should be counted as a pass, not a failure. Is this possible?
Also, at the moment, if tests in some module fail, TeamCity does not proceed to the next module. How can I fix that?

With respect, I think you may be holding this problem by the wrong end. A test that fails randomly is not providing you any value as a metric of deterministic behavior. Either fix the randomness (through the use of mocks, etc.) or ignore the tests.

If you absolutely have to, I'd put a loop around the test code and retry, rethrowing the exception as a 'genuine' failure only after, say, 5 attempts. Something like this C# example would do:
public static void TestSomething()
{
    var counter = 0;
    while (true)
    {
        try
        {
            // add test code here...
            return;
        }
        catch (Exception) // catch more specific exception(s)...
        {
            if (counter == 4)
            {
                throw;
            }
            counter++;
        }
    }
}
While I appreciate the problems that can arise with testing async code, I'm with @JohnHoerr on this one: you really need to fix the tests.

The rerun-failed-tests feature is part of the Maven Surefire Plugin: if you execute mvn -Dsurefire.rerunFailingTestsCount=2 test,
failing tests will be rerun until they pass or the number of reruns has been exhausted.
Of course, -Dsurefire.rerunFailingTestsCount can be used in TeamCity or any other CI server.
See:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html
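If you would rather pin this down in the build itself than on the command line, the same parameter can be set in the plugin configuration. A minimal sketch (the version shown is only an example; the parameter requires Surefire 2.18 or later):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.2</version>
  <configuration>
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>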


Retrieve output log when test fails during setup

I'm running automated unit tests with SpecFlow and Selenium. SpecFlow offers BeforeTestRun, BeforeFeature, and BeforeScenario attributes to execute code between tests at the appropriate time.
I'm also using log4net to log test output.
When a test fails during the test or during the BeforeScenario phase, I can see the output logged.
But when a test fails during BeforeTestRun or BeforeFeature, there is no output available.
This makes it difficult to diagnose failures that happen during the early stages on the remote testing server, where all I have are the output logs.
Is there any way to use log4net to get output logs when the test fails before the individual test has begun?
You can implement a custom method that inspects the TestContext and its Result, and call it in [TearDown] or somewhere at the end of the test:
if (TestContext.CurrentContext.Result.Outcome.Status.ToString() == "Failed")
{
    string message = TestContext.CurrentContext.Result.Message;
    string logs = TestContext.CurrentContext.Result.StackTrace;
}
else if (TestContext.CurrentContext.Result.Outcome.Status.ToString() == "Passed")
{
    // User defined action
}
else
{
    string otherlog = TestContext.CurrentContext.Result.Outcome.Status.ToString();
}
Actually it's strange that it doesn't show you the log when a test fails; it shows up fine for me.
What I suggest is to check that your log4net configuration pushes all the logs to the console. If you haven't done any special manipulation, your logger should have a console appender by default.
I initiate my logger like this:
private static readonly ILog log =
    LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
Another guess is that maybe when your test fails in a [OneTimeSetUp] fixture method, there is simply no log output yet.
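If you are unsure whether a console appender is configured at all, one way to rule that out is to configure log4net programmatically before anything else runs. A minimal sketch using SpecFlow's [BeforeTestRun] hook (the class and method names here are made up):

using TechTalk.SpecFlow;

[Binding]
public class LoggingHooks
{
    [BeforeTestRun]
    public static void ConfigureLogging()
    {
        // Attaches a simple console appender to the root logger, so output
        // from failures in BeforeTestRun/BeforeFeature still reaches the console.
        log4net.Config.BasicConfigurator.Configure();
    }
}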

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start once I send a signal via HTTP from the frontend; basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
This of course would make the automated tests fail, because no such signal would be sent and execution would hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand, i.e. to exclude it from the test suite so that TestFuncWithFrontend could only be run from the command line - I don't care whether via go run or go test or whatever.
I've also thought of ExampleTestFunc(), but the test produces so much output that this is useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
1. Export all the initialization variables and code (upper case), so that I could use them from a sub-package main.
2. Duplicate the whole code.
3. Move the test into a sub-package main and then have a func main() for the test with the frontend, and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
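The environment variable variant is just as small; a sketch replacing the flag check inside TestHelloWorld above (DEBUG_TEST is a made-up name, and "os" must be imported):

if os.Getenv("DEBUG_TEST") != "" {
    log.Println("Starting debug port for test...")
    // Start the server here
}

which you would then invoke with DEBUG_TEST=1 go test -run '^TestHelloWorld$' ./.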
I finally found an acceptable option. This answer:
Skip some tests with go test
put me on the right track.
Essentially: use build tags that are not present in normal builds, but that I can provide when executing manually.
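For the record, a sketch of how that looks (the tag name debugfrontend and the package name are made up). The frontend variant lives in a file that is only compiled when the tag is supplied:

// frontend_test.go
//go:build debugfrontend
// +build debugfrontend

package systemmodel

import "testing"

func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
    _ = result
}

A plain go test ./... never sees this file, while go test -tags debugfrontend -run '^TestFuncWithFrontend$' compiles and runs it on demand. (Toolchains older than Go 1.17 only understand the // +build line.)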

@Test(enabled=false) is not shown as skipped test case

We have a number of TestNG tests that are disabled due to functionality not yet being present (enabled = false), but when the test classes are executed the disabled tests do not show up as skipped in the TestNG report. We'd like to know at the time of execution how many tests are disabled (i.e. skipped). At the moment we have to count the occurrences of enabled = false in the test classes which is an overhead.
Is there a different annotation to use or something else we are missing so that our test reports can display the number of disabled tests?
For example, the method below still gets executed:
@Test(enabled=false)
public void ignoretestcase()
{
    System.out.println("Ignore Test case");
}
You can use SkipException:
throw new SkipException("Skip the test");
Check this post: Skip TestNG test at runtime
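In other words, leave the test enabled and let it skip itself at runtime; TestNG then counts it as skipped in the report. A minimal sketch (the class name is made up):

import org.testng.SkipException;
import org.testng.annotations.Test;

public class IgnoreTestCase {

    @Test
    public void ignoretestcase() {
        // Shows up as "skipped" in the TestNG report, unlike enabled=false,
        // which removes the test from the report entirely.
        throw new SkipException("Functionality not yet implemented");
    }
}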

How can I create a shortcut task for a single test in Gradle?

In Gradle, I can run a single test from the command line as follows:
gradle -Dtest.single=VeryCriticalTestX test
VeryCriticalTestX is frequently executed alone, and I'd like to provide a more readable and flexible API to my fellow developers. Ideally, they would only need to run
gradle testCritical
without worrying about the test's name. This would also allow me to change the name over time without breaking Jenkins builds.
How do I achieve this?
Gradle's Test tasks can be configured to only include tests matching a given name pattern. You can create a new testCritical task as follows:
task testCritical(type: Test) {
    group = 'verification'
    description = 'Runs a very critical test'
    outputs.upToDateWhen { false }
    include('**/VeryCriticalTestX.class')
}
With this, renaming VeryCriticalTestX to something else doesn't break other people's commands or Jenkins jobs. However, there is a risk that someone accidentally disables the task by renaming VeryCriticalTestX without adapting the task configuration. This can be prevented with the following TaskExecutionListener:
// verify that testCritical is not skipped unexpectedly due to a renamed class file
// we detect this using Gradle's NO-SOURCE TaskState
gradle.addListener(new TaskExecutionListener() {
    void beforeExecute(Task task) {}
    void afterExecute(Task task, TaskState state) {
        if (task.name == 'testCritical' && state.getNoSource()) {
            throw new GradleException("testCritical did not run because it couldn't find VeryCriticalTestX")
        }
    }
})

Getting failure detail on failed scala/maven/specs tests

I am playing a bit with Scala, using Maven and the Scala plugin.
I can't find a way to have
mvn test
report failure details - in particular, whenever a function returns a wrong reply, I get information about the failure, but I have no way to see WHICH wrong reply was reported.
For example, with a test like:
object MyTestSpec extends Specification {
  "my_function" should {
    "return proper value for 3" in {
      val cnt = MyCode.my_function(3)
      cnt must be_==(3)
    }
  }
}
if my function returns something other than 3, I get only:
Failed tests:
my_function should return proper value for 3
but there is no information about what value was actually returned.
Is it possible to get this info somehow (apart from injecting manual println's)?
The Maven Surefire plugin only shows the names of the failed tests in the console, not the failure messages. If you want to read the failure message, you need to open the corresponding target/surefire-reports/MyTestSpec.txt file.
You can also swap Maven for sbt 0.6.9 (the Simple Build Tool), which can run specs specifications.
Eric.
You can also give the value under test a label with specs' aka; the failure message then includes the label and the actual value:
cnt aka "my_function return value" must be_==(3)
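In the context of the spec above, that looks like this (a sketch; the quoted failure text is approximate):

import org.specs._

object MyTestSpec extends Specification {
  "my_function" should {
    "return proper value for 3" in {
      val cnt = MyCode.my_function(3)
      // On failure this reports something like:
      // my_function return value '2' is not equal to '3'
      cnt aka "my_function return value" must be_==(3)
    }
  }
}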