Getting failure detail on failed scala/maven/specs tests - maven-2

I am playing a bit with Scala, using Maven and the Scala plugin.
I can't find a way to have
mvn test
report failure details. In particular, whenever some function returns a wrong value, I get a notification that the test failed, but I have no way to see WHICH wrong value was returned.
For example, with a test like:
object MyTestSpec extends Specification {
  "my_function" should {
    "return proper value for 3" in {
      val cnt = MyCode.my_function(3)
      cnt must be_==(3)
    }
  }
}
if my function returns something other than 3, I get only
Failed tests:
my_function should return proper value for 3
but there is no information what value was actually returned.
Is it possible to get this info somehow (apart from injecting manual println's)?

The Maven JUnit plugin only shows the failed tests in the console, not the error messages. If you want to read the failure message, you need to open the corresponding target/site/surefire-reports/MyTestSpecs.txt file.
You can also swap Maven for sbt-0.6.9 (Simple Build Tool) which can run specs specifications.
Eric.

You can also label the value with specs' aka so that the failure message names what was being checked:
cnt aka "my_function return value" must be_==(3)

Related

Is there a Karate class/variable available to get the details of failed scenarios (failed scenario name, count, etc.) when run using TestRunner

I want to get the details of a failed test case after running it using the TestRunner class. I was able to get a few details, such as error messages and the failed count of scenarios and feature files, using Karate's Results class.
Results results = Runner.path("classpath:errorData/pass.feature").parallel(5);
String errmsgs = results.getErrorMessages();
List<String> err = results.getErrors();
int failcount = results.getFailCount();
Now I want to know if there is a way to get details like test case status, failed scenario, and failed step. Any input will be appreciated. Thanks in advance.
If you can't find what you need in the Result class, please assume the answer is no. Since Karate is open source, you are welcome to figure this out yourself or contribute code for anything missing.

Kotlin scripting support fails with "wrong number of arguments" whenever I try to run any script

I'm trying to run a very basic script with org.jetbrains.kotlin:kotlin-scripting-jvm, but I get two errors when I should get none. This is my script:
1
I expect to get back a ResultWithDiagnostics.Success with a resultValue of 1, but instead I get a Failure with these reports:
The expression is unused
wrong number of arguments
Even if I fix the warning by modifying my script to
class Foo(val foo: String = "foo")
Foo()
I still get the wrong number of arguments error. I checked the source and it seems that in
BasicJvmScriptEvaluator:95
return try {
    ctor.newInstance(*args.toArray()) // <-- here
} finally {
    Thread.currentThread().contextClassLoader = saveClassLoader
}
args is empty. What am I doing wrong? This is how I try to run the script:
private fun evalFile(scriptFile: File): ResultWithDiagnostics<EvaluationResult> {
    val compilationConfiguration = createJvmCompilationConfigurationFromTemplate<TestScript> {
        jvm {
            dependenciesFromCurrentContext(wholeClasspath = true)
        }
    }
    return BasicJvmScriptingHost().eval(scriptFile.toScriptSource(), compilationConfiguration, null)
}
and this is the stack trace for this wrong number of arguments error I get:
java.lang.IllegalArgumentException: wrong number of arguments
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at kotlin.script.experimental.jvm.BasicJvmScriptEvaluator.evalWithConfigAndOtherScriptsResults(BasicJvmScriptEvaluator.kt:95)
at kotlin.script.experimental.jvm.BasicJvmScriptEvaluator.invoke$suspendImpl(BasicJvmScriptEvaluator.kt:40)
at kotlin.script.experimental.jvm.BasicJvmScriptEvaluator.invoke(BasicJvmScriptEvaluator.kt)
at kotlin.script.experimental.host.BasicScriptingHost$eval$1.invokeSuspend(BasicScriptingHost.kt:47)
at kotlin.script.experimental.host.BasicScriptingHost$eval$1.invoke(BasicScriptingHost.kt)
at kotlin.script.experimental.host.BasicScriptingHost$runInCoroutineContext$1.invokeSuspend(BasicScriptingHost.kt:35)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:238)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.kt:116)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:80)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:54)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:36)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
at kotlin.script.experimental.host.BasicScriptingHost.runInCoroutineContext(BasicScriptingHost.kt:35)
at kotlin.script.experimental.host.BasicScriptingHost.eval(BasicScriptingHost.kt:45)
This isn't a fix, just a workaround.
You can pass the source code to the Kotlin compiler in different ways:
1. From a FileScriptSource, when you pass a list of files in the configuration.
2. From a list of source code contents in memory, e.g. each file is read and its content placed inside a StringScriptSource.
3. From a single in-memory script, created by concatenating all the input source files.
As I found in my experiments:
If you have mockk+kotest jars on the same classpath, option 1 doesn't work. In that case I'd suggest you make one change:
// this doesn't work - scriptFile.toScriptSource()
scriptFile.readText().toScriptSource() // ok - we read the source from memory, not from a file
If you have a huge service with a lot of Spring jars, all of the options above work. It means you can't test your compilation in unit tests; however, your service will work!
If you want to compile from a Gradle plugin, you can hit another kind of issue, a class conflict with the coroutines library, so none of the options above work.
Finally, I changed the following in my code:
On input I always have a lot of kt/kts files.
I have three compilation options (described above), so my code executes createJvmCompilationConfigurationFromTemplate with different logic according to my compilation mode (it is just an enum).
For unit tests I have to use option 3 only.
For the service I use option 1, as it is the fastest one.
For the Gradle plugin classpath I start a separate instance of Java (with a fresh classpath) which executes the input kts files.

E2E testing multiple possibilities with exceptions

I have a problem that I don't exactly know how to solve. I'm implementing an E2E test in which, using Selenium, I need to click a link and check that it sends me to the right URL.
Here starts the problem...
There are 3 possibilities: a mix of the 2 types of links, just one type of link, or just the other type. There is no problem when both types of links are present, but when there is just one type, searching for the identifier of the links that are not on the page gives me a TimeoutException. This is not a failure, because it's a possible situation, but I would like to know if there is a way to check that, when no links are found, the test asserts that the exception is thrown.
I thought of using runCatching (or try/catch): wait for the link to appear, and if it doesn't appear, assert that looking for the element throws the timeout exception again.
This way of doing it smells a bit to me, and I don't know if it's done correctly.
EDIT: I'm using AssertK and JUnit 5 for testing.
EDIT 2: I've done this; I don't know if it is a correct way of doing it:
runCatching {
    driver.waitFor(numberOfWidgetsToBeMoreThan(BrowserSelector.cssSelector(OFFERS_WITH_PRICE_AND_DATE), 0), ofMillis(2000))
}.onFailure {
    assertThrows<WaitTimeoutException> {
        findLink(OFFERS_WITH_PRICE_AND_DATE)
    }
}.onSuccess {
    val widget = findLink(OFFERS_WITH_PRICE_AND_DATE)
    widget.click()
    assertThat(driver.url).contains(NO_DATE_TEXT)
}
I'm not sure I understood your problem correctly, but you can use assertFails to assert that a piece of code throws an exception:
@Test
fun test() {
    val exception = assertFails {
        // some code that should throw
    }
    // some more assertions on the type of exception etc. may go here
}
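If the exception-as-control-flow pattern is what feels smelly, another option is to wrap the timed-out lookup so that an absent link becomes a plain value you can assert on. Here is a minimal, framework-free sketch in Java; `tryFind` and the `Supplier`-based lookup are illustrative stand-ins for the Selenium wait, not part of any library:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Wraps a lookup that throws when nothing is found, turning "absent"
// into a value (an empty Optional) instead of an exception used for
// control flow.
class Lookup {
    static <T> Optional<T> tryFind(Supplier<T> lookup) {
        try {
            return Optional.ofNullable(lookup.get());
        } catch (RuntimeException timeout) { // e.g. a wait-timeout exception
            return Optional.empty();
        }
    }
}
```

With this shape, the present-link branch and the absent-link branch are both ordinary assertions on an `Optional`, and no `assertThrows` is needed for the expected-absence case.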

Spock & Spock Reports: how to "catch" and customize the error message for AssertionError?

I am working with:
Spock Core
Spock Reports
Spock Spring
Spring MVC Testing
and I have the following code:
@FailsWith(java.lang.AssertionError.class)
def "findAll() Not Expected"() {
    given:
    url = PersonaUrlHelper.FINDALL;
    when:
    resultActions = mockMvc.perform(get(url)).andDo(print())
    then:
    resultActions.andExpect(status().isOk())
                 .andExpect(content().contentType(MediaType.APPLICATION_XML))
}
Here the code fails (as expected) because the method being tested really returns MediaType.APPLICATION_JSON instead of MediaType.APPLICATION_XML.
That is the reason for @FailsWith(java.lang.AssertionError.class).
Even if I use @FailsWith(value=java.lang.AssertionError.class, reason="JSON returned ..."), I am not able to see the reason through Spock Reports.
Question one: how can I see the reason in Spock Reports?
I know Spock offers the thrown() method, so I am able to do:
then:
def e = thrown(IllegalArgumentException)
e.message == "Some expected error message"
println e.message
Sadly, thrown() does not work for AssertionError.
If I use thrown(AssertionError), the test method does not pass; the only way is @FailsWith, but then I am not able to get the error message from the AssertionError.
Question two: how is it possible to get the error message from the AssertionError?
I know I am able to do something like
then: "Something to show on Spock Reports"
but I am just curious whether question two can be resolved.
Regarding question one:
If you look at FailsWithExtension#visitFeatureAnnotation, you can see that only value from @FailsWith is evaluated; reason is not touched at all. What you could do is introduce your own annotation type (a custom one, e.g. the same as @FailsWith) and override AbstractAnnotationDrivenExtension#visitFeatureAnnotation. There you have access to the reason parameter.
Regarding question two:
Please look at this link: http://spock-framework.3207229.n2.nabble.com/Validate-exception-message-with-FailsWith-td7573288.html
Additionally, maybe you could override AbstractAnnotationDrivenExtension#visitSpec and add a custom listener (overriding AbstractRunListener). Then you have access to the AbstractRunListener#error method, whose documentation says:
Called for every error that occurs during a spec run. May be called multiple times for the same method, for example if both the expect-block and the cleanup-block of a feature method fail.
I didn't test this for question two, but it may work. I've used something similar.
Enjoy,
Tommy

Teamcity rerun specific failed unstable tests

I have TeamCity 7.1 and around 1000 tests. Many tests are unstable and fail randomly. Even if a single test fails, the whole build fails, and running a new build takes 1 hour.
So I would like to be able to configure TeamCity to rerun failed tests within the same build a specific number of times. Any success for a test should be considered a success, not a failure. Is it possible?
Also, right now if tests in some module fail, TeamCity does not proceed to the next module. How do I fix that?
With respect, I think you might be holding this problem by the wrong end. A test that fails randomly is not providing you any value as a metric of deterministic behavior. Either fix the randomness (through use of mocks, etc.) or ignore the tests.
If you absolutely have to, I'd put loops around some of your test code and catch, say, 5 failures before throwing the exception as a 'genuine' failure. Something like this C# example would do:
public static void TestSomething()
{
    var counter = 0;
    while (true)
    {
        try
        {
            // add test code here...
            return;
        }
        catch (Exception) // catch more specific exception(s)...
        {
            if (counter == 4)
            {
                throw;
            }
            counter++;
        }
    }
}
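The same loop-and-retry idea can be packaged as a small reusable helper. Here is a minimal sketch in Java; the helper name and signature are illustrative and not from any test framework, and it assumes maxAttempts is at least 1:

```java
// Runs the given action up to maxAttempts times; a single success ends
// the loop, and the last failure is rethrown only when every attempt
// has failed.
class Retry {
    static void retry(int maxAttempts, Runnable action) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                action.run();
                return; // success: stop retrying
            } catch (RuntimeException e) {
                last = e; // remember the most recent failure
            }
        }
        throw last; // every attempt failed: surface the last error
    }
}
```

A flaky test body would then be wrapped as `Retry.retry(5, () -> runTheAssertion())`, keeping the retry policy in one place instead of duplicating the loop in every test.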
While I appreciate the problems that can arise with testing async code, I'm with @JohnHoerr on this one: you really need to fix the tests.
The rerun-failed-tests feature is part of the Maven Surefire Plugin; if you execute
mvn -Dsurefire.rerunFailingTestsCount=2 test
then tests will be rerun until they pass or the number of reruns has been exhausted.
Of course, -Dsurefire.rerunFailingTestsCount can be used from TeamCity or any other CI server.
See:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html
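If you prefer not to pass the flag on every invocation, the same setting can be fixed in the POM's Surefire configuration instead (the plugin version shown is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.2</version>
  <configuration>
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>
```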