Jenkins - How to send boolean parameter using curl, http? - api

I am trying to send a boolean parameter to Jenkins using curl / HTTP GET, without success. Can someone tell me what is wrong here?
I tried sending it as true, True and 1, but none of these work.
The job in Jenkins is a parameterized pipeline. There are 3 parameters added from the GUI:
bp1 boolean 1
bp2 boolean 2
sp string (this one is to check whether parameters are sent at all)
Default values are as below:
bp1 - false
bp2 - false
sp - null
Pipeline code:
stage ('bools') {
echo 'bool 1 is:' + params.bp1
echo 'bool 2 is:' + params.bp2
echo 'string is:' + params.sp
}
Command used to invoke the build (in a browser or Postman):
http://X.X.X.X/jenkins/job/bool_debug/buildWithParameters?token=booltest&bp1=true&bp2=false&sp='this is text from param'
The expected result is bool 1 is:true, but I got bool 1 is:false. Jenkins did not change (tick the checkbox for) the boolean parameter when invoked from the API. In other words:
What I get:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] stage
[Pipeline] { (bools)
[Pipeline] echo
bool 1 is:false
[Pipeline] echo
bool 2 is:false
[Pipeline] echo
string is:'this is text from param'
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
Finished: SUCCESS
What it should be:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] stage
[Pipeline] { (bools)
[Pipeline] echo
bool 1 is:true <--------------------------
[Pipeline] echo
bool 2 is:false
[Pipeline] echo
string is:'this is text from param'
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
Finished: SUCCESS

I faced the same issue and think it's a Jenkins bug. As a workaround, I ended up creating another pipeline which triggers the desired pipeline with the parameters.
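A minimal sketch of such a wrapper, using the job and parameter names from the question (bool_debug, bp1, bp2, sp); the stage name is just illustrative:
// Wrapper pipeline: trigger the parameterized job through the 'build' step,
// so the boolean values arrive as real booleans in the downstream job.
stage('trigger bool_debug') {
    build job: 'bool_debug', parameters: [
        booleanParam(name: 'bp1', value: true),
        booleanParam(name: 'bp2', value: false),
        string(name: 'sp', value: 'this is text from param')
    ]
}
You can then expose this wrapper for remote triggering instead of the original job.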

Related

Overriding Nextflow Parameters with Commandline Arguments

Given the following nextflow.config:
google {
project = "cool-project"
region = "europe-west4"
lifeSciences {
bootDiskSize = "200 GB"
debug = true
preemptible = true
}
}
Is it possible to override one or more of those settings using command-line arguments? For example, if I wanted to specify that no preemptible machines should be used, can I do the following:
nextflow run main.nf -c nextflow.config --google.lifeSciences.preemptible false
?
Overriding pipeline parameters can be done using Nextflow's command line interface by prefixing the parameter name with a double dash. For example, put the following in a file called 'test.nf':
#!/usr/bin/env nextflow
params.greeting = 'Hello'
names = Channel.of( "foo", "bar", "baz" )
process greet {
input:
val name from names
output:
stdout result
"""
echo "${params.greeting} ${name}"
"""
}
result.view { it.trim() }
And run it using:
nextflow run -ansi-log false test.nf --greeting 'Bonjour'
Results:
N E X T F L O W ~ version 20.10.0
Launching `test.nf` [backstabbing_cajal] - revision: 431ef92cef
[46/22b4f0] Submitted process > greet (1)
[ca/32992c] Submitted process > greet (3)
[6e/5880b0] Submitted process > greet (2)
Bonjour bar
Bonjour foo
Bonjour baz
This works fine for pipeline params, but AFAIK there's no way to directly override executor config like you describe on the command line. You can, however, just parameterize these values and set them on the command line as described above. For example, in your nextflow.config:
params {
gc_region = false
gc_preemptible = true
...
}
profiles {
'test' {
includeConfig 'conf/test.config'
}
'google' {
includeConfig 'conf/google.config'
}
...
}
And in a file called 'conf/google.config':
google {
project = "cool-project"
region = params.gc_region
lifeSciences {
bootDiskSize = "200 GB"
debug = true
preemptible = params.gc_preemptible
}
}
Then you should be able to override these in the usual way:
nextflow run main.nf -profile google --gc_region "europe-west4" --gc_preemptible false
Note that you can also specify multiple configuration profiles by separating the profile names with a comma:
nextflow run main.nf -profile google,test ...

channel checks as empty even if it has content

I am trying to have a process that is launched only if a combination of conditions is met, but when checking whether a channel has a path to a file, it is always reported as empty. I am probably doing something wrong; in that case, please correct my code. I tried to follow some of the suggestions in this issue but had no success.
Consider the following minimal example:
process one {
output:
file("test.txt") into _chProcessTwo
script:
"""
echo "Hello world" > "test.txt"
"""
}
// making a copy so I check first if something in the channel or not
// avoids raising exception of MultipleInputChannel
_chProcessTwo.into{
_chProcessTwoView;
_chProcessTwoCheck;
_chProcessTwoUse
}
//print contents of channel
println "Channel contents: " + _chProcessTwoView.toList().view()
process two {
input:
file(myInput) from _chProcessTwoUse
when:
(!_chProcessTwoCheck.toList().isEmpty())
script:
def test = _chProcessTwoUse.toList().isEmpty() ? "I'm empty" : "I'm NOT empty"
println "The outcome is: " + test
}
I want to have process two run if and only if there is a file in the _chProcessTwo channel.
If I run the above code I obtain:
marius#dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [infallible_gutenberg] - revision: 9f57464dc1
[c8/bf38f5] process > one [100%] 1 of 1 ✔
[- ] process > two -
[/home/marius/pipeline/work/c8/bf38f595d759686a497bb4a49e9778/test.txt]
where the last line is actually the contents of _chProcessTwoView
If I remove the when directive from the second process I get:
marius#mg-dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [modest_descartes] - revision: 5b2bbfea6a
[57/1b7b97] process > one [100%] 1 of 1 ✔
[a9/e4b82d] process > two [100%] 1 of 1 ✔
[/home/marius/pipeline/work/57/1b7b979933ca9e936a3c0bb640c37e/test.txt]
with the contents of the second worker .command.log file being: The outcome is: I'm empty
I also tried without toList().
What am I doing wrong? Thank you in advance.
Update: a workaround would be to check _chProcessTwoUse.view() != "" but that is pretty dirty.
Update 2: as requested by @Steve, I've updated the code to reflect a bit more of the actual conditions I have in my own pipeline:
def runProcessOne = true
process one {
when:
runProcessOne
output:
file("inputProcessTwo.txt") into _chProcessTwo optional true
file("inputProcessThree.txt") into _chProcessThree optional true
script:
// this would replace the probability that output is not created
def outputSomething = false
"""
if ${outputSomething}; then
echo "Hello world" > "inputProcessTwo.txt"
echo "Goodbye world" > "inputProcessThree.txt"
else
echo "Sorry. Process one did not write to file."
fi
"""
}
// making a copy so I check first if something in the channel or not
// avoids raising exception of MultipleInputChannel
_chProcessTwo.into{
_chProcessTwoView;
_chProcessTwoCheck;
_chProcessTwoUse
}
//print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"
process two {
input:
file(myInput) from _chProcessTwoUse
when:
(runProcessOne)
script:
"""
echo "The outcome is: ${myInput}"
"""
}
process three {
input:
file(defaultInput) from _chUpstreamProcesses
file(inputFromProcessTwo) from _chProcessThree
script:
def extra_parameters = _chProcessThree.isEmpty() ? "" : "--extra-input " + inputFromProcessTwo
"""
echo "Hooray! We got: ${extra_parameters}"
"""
}
As @Steve mentioned, I should not even have to check whether a channel is empty; Nextflow should know better than to initiate the process. But I think in this construct I will have to.
Marius
I think part of the problem here is that process 'one' creates only optional outputs. This makes dealing with the optional inputs in process 'three' a bit tricky. I would try to reconcile this if possible. If this can't be reconciled, then you'll need to deal with the optional inputs in process 'three'. To do this, you'll basically need to create a dummy file, pass it into the channel using the ifEmpty operator, then use the name of the dummy file to check whether or not to prepend the argument's prefix. It's a bit of a hack, but it works pretty well.
The first step is to actually create the dummy file. I like shareable pipelines, so I would just create this in your baseDir, perhaps under a folder called 'assets':
mkdir assets
touch assets/NO_FILE
Then pass in your dummy file if your '_chProcessThree' channel is empty:
params.dummy_file = "${baseDir}/assets/NO_FILE"
dummy_file = file(params.dummy_file)
process three {
input:
file(defaultInput) from _chUpstreamProcesses
file(optfile) from _chProcessThree.ifEmpty(dummy_file)
script:
def extra_parameters = optfile.name != 'NO_FILE' ? "--extra-input ${optfile}" : ''
"""
echo "Hooray! We got: ${extra_parameters}"
"""
}
Also, these lines are problematic:
//print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"
Calling view() will emit all values from the channel to stdout. You can ignore whatever value it returns. Unless you enable DSL2, the channel will then be empty. I think what you're looking for here is a closure:
_chProcessTwoView.view { "Found: $it" }
Be sure to append -ansi-log false to your nextflow run command so the output doesn't get clobbered. HTH.

Behaviour of feed operator pipeline

I'm experimenting with the feed operator using the following code:
my $limit=10000000;
my $list=(0,1...4);
sub task($_,$name) {
say "Start $name on $*THREAD";
loop ( my $i=0; $i < $limit; $i++ ){};
say "End $name";
Nil;
}
sub stage( &code, *@elements --> Seq) {
map(&code, @elements);
}
sub feeds() {
my @results;
$list
==> map ({ task($_, "stage 1"); say( now - ENTER now );$_+1})
==> map ({ task($_, "stage 2"); say( now - ENTER now );$_+1})
==> stage ({ task($_, "stage 3"); say( now - ENTER now );$_+1})
==> @results;
say @results;
}
feeds();
The task sub is just a loop to burn CPU cycles; it displays the thread used.
The stage sub is a wrapper around the map sub.
Each step of the feed pipeline maps to calling the CPU-intensive task and timing it. The result of the map is the input +1.
The output when run is:
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.7286811
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.59053989
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.5955893
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.59050998
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.59472201
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.5968531
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.5917188
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.587358
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.58689858
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.59177099
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.8549498
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.8560015
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.77634317
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.6754558
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.672909
[3 4 5 6 7]
The result in @results is correct (the input from $list is incremented three times).
The output of the first two stages pulls/alternates, but the third stage doesn't execute until all of the input into stage 2 has been processed.
Is there an issue with my wrapper around the map sub causing this behaviour?
Also, the time it takes to evaluate in the wrapper is significantly longer than a direct call to map.
Any help is appreciated.
The slurpy parameter is waiting to consume the entire sequence being passed to it. Consider the difference between:
perl6 -e 'sub foo($) { }; my $items := gather for 1..Inf { say $_ }; foo($items);'
perl6 -e 'sub foo(*@) { }; my $items := gather for 1..Inf { say $_ }; foo($items);'
The first example will finish, the second one will never finish. Thus you would want to change your stage function to:
sub stage( &code, $elements --> Seq) {
map(&code, $elements);
}

Running multiple exec commands and waiting to finish before continuing

I know I ask a lot of questions, and I know there is a lot on here about exactly what I am trying to do, but I have not been able to get it to work in my script nor figure out why it does not work. I am trying to run several commands in the background using exec, and the tests can take anywhere between 5 and 45 minutes (longer if they have to queue for a license). It takes forever to run them back to back, so I was wondering what I need to do to make my script wait for them to finish before moving on to the next section of the script.
while {$cnt <= $len} {
# Begin count for running tests
set testvar [lindex $f $cnt]
if {[file exists $path0/$testvar] == 1} {
cd $testvar
} else {
exec mkdir $testvar
cd $testvar
exec create_symbolic_link_here
}
# Set up test environment
exec -ignorestderr make clean
exec -ignorestderr make depends
puts "Running $testvar"
set runtest [eval exec -ignorestderr bsub -I -q lin_i make $testvar SEED=1 VPDDUMP=on |tail -n 1 >> $path0/runtestfile &]
cd ../
incr cnt
}
I know there is nothing here to make the script wait for the processes to finish, but I have tried many different things and this is the only way I can get it to run everything. It just doesn't wait.
One way is to modify your tests to create a "finished" file. This file should be created whether the test completes correctly or fails.
Modify the startup loop to remove this file before starting the test:
catch { file delete $path0/$testvar/finished }
Then create a second loop:
while { true } {
after 60000
set cnt 1 ; # ?
set result 0
while { $cnt <= $len } {
set testvar [lindex $f $cnt]
if { [file exists $path0/$testvar/finished] } {
incr result
}
incr cnt
}
if { $result == $len } {
break
}
}
This loop as written will never exit if any one test doesn't create the 'finished' file. So I would add in an additional stop condition (no more than one hour) to exit the loop.
Another way would be to save the process ids of the background processes in a list and then the second loop would check each process id to see if it is still running. This method would not require any modifications to the test, but is a little harder to implement (not too hard on unix/mac, harder on windows).
Edit: loop using process id check:
To use process ids, the main loop needs to be modified to save the process ids of the background jobs:
Before the main loop starts, clear the process id list:
set pidlist {}
In the main loop, save the process ids from the exec command (In tcl, [exec ... &] returns the background process id):
lappend pidlist $runtest ; # goes after the exec bsub...
A procedure to check for the existence of a process (for unix/mac). Tcl/Tk does not have any process control commands, so the unix 'kill' command is used. 'kill -0' on unix only checks for process existence, and does not affect the execution of the process.
# return 0 if the process does not exist, 1 if it does
proc checkpid { ppid } {
set pexists [catch {exec kill -0 $ppid}]
return [expr {1-$pexists}]
}
And the second loop to check to see if the tests are done becomes:
set tottime 0
while { true } {
after 60000
incr tottime 1 ; # in minutes
set result 0
foreach {pid} $pidlist {
if { ! [checkpid $pid] } {
incr result
}
}
if { $result == $len } {
break
}
if { $tottime > 120 } {
puts "Total test time exceeded."
break ; # or exit
}
}
If a test process gets hung and never exits, this loop will never exit, so a second stop condition on total time is used.

Gradle: How to Display Test Results in the Console in Real Time?

I would like to see test results (System.out/err, log messages from components being tested) as they run, in the same console in which I run:
gradle test
and not wait until the tests are done to look at the test reports (which are only generated after the tests are completed, so I can't "tail -f" anything while the tests are running).
Here is my fancy version:
import org.gradle.api.tasks.testing.logging.TestExceptionFormat
import org.gradle.api.tasks.testing.logging.TestLogEvent
tasks.withType(Test) {
testLogging {
// set options for log level LIFECYCLE
events TestLogEvent.FAILED,
TestLogEvent.PASSED,
TestLogEvent.SKIPPED,
TestLogEvent.STANDARD_OUT
exceptionFormat TestExceptionFormat.FULL
showExceptions true
showCauses true
showStackTraces true
// set options for log level DEBUG and INFO
debug {
events TestLogEvent.STARTED,
TestLogEvent.FAILED,
TestLogEvent.PASSED,
TestLogEvent.SKIPPED,
TestLogEvent.STANDARD_ERROR,
TestLogEvent.STANDARD_OUT
exceptionFormat TestExceptionFormat.FULL
}
info.events = debug.events
info.exceptionFormat = debug.exceptionFormat
afterSuite { desc, result ->
if (!desc.parent) { // will match the outermost suite
def output = "Results: ${result.resultType} (${result.testCount} tests, ${result.successfulTestCount} passed, ${result.failedTestCount} failed, ${result.skippedTestCount} skipped)"
def startItem = '| ', endItem = ' |'
def repeatLength = startItem.length() + output.length() + endItem.length()
println('\n' + ('-' * repeatLength) + '\n' + startItem + output + endItem + '\n' + ('-' * repeatLength))
}
}
}
}
You could run Gradle with the INFO logging level on the command line. It will show you the result of each test while they are running. The downside is that you will also get far more output for the other tasks.
gradle test -i
Disclaimer: I am the developer of the Gradle Test Logger Plugin.
You can simply use the Gradle Test Logger Plugin to print beautiful logs on the console. The plugin uses sensible defaults to satisfy most users with little or no configuration but also offers a number of themes and configuration options to suit everyone.
Example screenshots of the standard and mocha themes are available on the plugin page.
Usage
plugins {
id 'com.adarshr.test-logger' version '<version>'
}
Make sure you always get the latest version from the Gradle Plugin Portal.
Configuration
You don't need any configuration at all. However, the plugin offers a few options. These can be set as follows (default values shown):
testlogger {
// pick a theme - mocha, standard, plain, mocha-parallel, standard-parallel or plain-parallel
theme 'standard'
// set to false to disable detailed failure logs
showExceptions true
// set to false to hide stack traces
showStackTraces true
// set to true to remove any filtering applied to stack traces
showFullStackTraces false
// set to false to hide exception causes
showCauses true
// set threshold in milliseconds to highlight slow tests
slowThreshold 2000
// displays a breakdown of passes, failures and skips along with total duration
showSummary true
// set to true to see simple class names
showSimpleNames false
// set to false to hide passed tests
showPassed true
// set to false to hide skipped tests
showSkipped true
// set to false to hide failed tests
showFailed true
// enable to see standard out and error streams inline with the test results
showStandardStreams false
// set to false to hide passed standard out and error streams
showPassedStandardStreams true
// set to false to hide skipped standard out and error streams
showSkippedStandardStreams true
// set to false to hide failed standard out and error streams
showFailedStandardStreams true
}
I hope you will enjoy using it.
You can add a Groovy closure inside your build.gradle file that does the logging for you:
test {
afterTest { desc, result ->
logger.quiet "Executing test ${desc.name} [${desc.className}] with result: ${result.resultType}"
}
}
On your console it then reads like this:
:compileJava UP-TO-DATE
:compileGroovy
:processResources
:classes
:jar
:assemble
:compileTestJava
:compileTestGroovy
:processTestResources
:testClasses
:test
Executing test maturesShouldBeCharged11DollarsForDefaultMovie [movietickets.MovieTicketsTests] with result: SUCCESS
Executing test studentsShouldBeCharged8DollarsForDefaultMovie [movietickets.MovieTicketsTests] with result: SUCCESS
Executing test seniorsShouldBeCharged6DollarsForDefaultMovie [movietickets.MovieTicketsTests] with result: SUCCESS
Executing test childrenShouldBeCharged5DollarsAnd50CentForDefaultMovie [movietickets.MovieTicketsTests] with result: SUCCESS
:check
:build
Since version 1.1, Gradle supports many more options for logging test output. With those options at hand, you can achieve a similar output with the following configuration:
test {
testLogging {
events "passed", "skipped", "failed"
}
}
As stefanglase answered:
Adding the following code to your build.gradle (since version 1.1) works fine for output on passed, skipped and failed tests.
test {
testLogging {
events "passed", "skipped", "failed", "standardOut", "standardError"
}
}
What I want to add (I found out this is a problem for beginners) is that the gradle test command executes the tests only once per change.
So if you run it a second time, there will be no output for the test results. You can also see this in the build output: Gradle then says UP-TO-DATE for the tests, so they are not executed an n-th time.
Smart Gradle!
If you want to force the test cases to run, use gradle cleanTest test.
This is slightly off topic but I hope it will help some newbies.
edit
As sparc_spread stated in the comments:
If you want to force gradle to always run fresh tests (which might not always be a good idea) you can add outputs.upToDateWhen {false} to testLogging { [...] }. Continue reading here.
Peace.
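For illustration, a minimal sketch of a test block that forces the tests to re-run while keeping the event logging from above (the flag sits on the task's outputs; adapt to your build):
test {
    // never consider the test task up to date, so the tests always re-run
    outputs.upToDateWhen { false }
    testLogging {
        events "passed", "skipped", "failed"
    }
}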
Add this to build.gradle to stop gradle from swallowing stdout and stderr.
test {
testLogging.showStandardStreams = true
}
It's documented here.
The 'test' task does not work for the Android plugin; for the Android plugin, use the following:
// Test Logging
tasks.withType(Test) {
testLogging {
events "started", "passed", "skipped", "failed"
}
}
See the following: https://stackoverflow.com/a/31665341/3521637
As a follow-up to Shubham's great answer, I would like to suggest using enum values instead of strings. Please take a look at the documentation of the TestLogging class.
import org.gradle.api.tasks.testing.logging.TestExceptionFormat
import org.gradle.api.tasks.testing.logging.TestLogEvent
tasks.withType(Test) {
testLogging {
events TestLogEvent.FAILED,
TestLogEvent.PASSED,
TestLogEvent.SKIPPED,
TestLogEvent.STANDARD_ERROR,
TestLogEvent.STANDARD_OUT
exceptionFormat TestExceptionFormat.FULL
showCauses true
showExceptions true
showStackTraces true
}
}
My favourite minimalistic version, based on Shubham Chaudhary's answer.
Put this in build.gradle file:
test {
afterSuite { desc, result ->
if (!desc.parent)
println("${result.resultType} " +
"(${result.testCount} tests, " +
"${result.successfulTestCount} successes, " +
"${result.failedTestCount} failures, " +
"${result.skippedTestCount} skipped)")
}
}
In Gradle, using the Android plugin:
gradle.projectsEvaluated {
tasks.withType(Test) { task ->
task.afterTest { desc, result ->
println "Executing test ${desc.name} [${desc.className}] with result: ${result.resultType}"
}
}
}
Then the output will be:
Executing test testConversionMinutes [org.example.app.test.DurationTest] with result: SUCCESS
If you have a build.gradle.kts written in the Kotlin DSL, you can print test results with the following (I was developing a Kotlin multiplatform project, with no "java" plugin applied):
tasks.withType<AbstractTestTask> {
afterSuite(KotlinClosure2({ desc: TestDescriptor, result: TestResult ->
if (desc.parent == null) { // will match the outermost suite
println("Results: ${result.resultType} (${result.testCount} tests, ${result.successfulTestCount} successes, ${result.failedTestCount} failures, ${result.skippedTestCount} skipped)")
}
}))
}
Just add the following closure to your build.gradle. The output will be printed after the execution of every test.
test{
useJUnitPlatform()
afterTest { desc, result ->
def output = "Class name: ${desc.className}, Test name: ${desc.name}, (Test status: ${result.resultType})"
println( '\n' + output)
}
}
A merge of Shubham's great answer and JJD's suggestion to use enums instead of strings:
tasks.withType(Test) {
testLogging {
// set options for log level LIFECYCLE
events TestLogEvent.PASSED,
TestLogEvent.SKIPPED, TestLogEvent.FAILED, TestLogEvent.STANDARD_OUT
showExceptions true
exceptionFormat TestExceptionFormat.FULL
showCauses true
showStackTraces true
// set options for log level DEBUG and INFO
debug {
events TestLogEvent.STARTED, TestLogEvent.PASSED, TestLogEvent.SKIPPED, TestLogEvent.FAILED, TestLogEvent.STANDARD_OUT, TestLogEvent.STANDARD_ERROR
exceptionFormat TestExceptionFormat.FULL
}
info.events = debug.events
info.exceptionFormat = debug.exceptionFormat
afterSuite { desc, result ->
if (!desc.parent) { // will match the outermost suite
def output = "Results: ${result.resultType} (${result.testCount} tests, ${result.successfulTestCount} successes, ${result.failedTestCount} failures, ${result.skippedTestCount} skipped)"
def startItem = '| ', endItem = ' |'
def repeatLength = startItem.length() + output.length() + endItem.length()
println('\n' + ('-' * repeatLength) + '\n' + startItem + output + endItem + '\n' + ('-' * repeatLength))
}
}
}
}
Following on from Benjamin Muschko's answer (19 March 2011), you can use the -i flag along with grep to filter out thousands of unwanted lines. Examples:
Strong filter - displays only each unit test name and test result, plus the overall build status. Setup errors or exceptions are not displayed.
./gradlew test -i | grep -E " > |BUILD"
Soft filter - displays each unit test name and test result, as well as setup errors/exceptions, but it will also include some irrelevant info:
./gradlew test -i | grep -E -v "^Executing |^Creating |^Parsing |^Using |^Merging |^Download |^title=Compiling|^AAPT|^future=|^task=|:app:|V/InstrumentationResultParser:"
Soft filter, Alternative syntax: (search tokens are split into individual strings)
./gradlew test -i | grep -v -e "^Executing " -e "^Creating " -e "^Parsing " -e "^Using " -e "^Merging " -e "^Download " -e "^title=Compiling" -e "^AAPT" -e "^future=" -e "^task=" -e ":app:" -e "V/InstrumentationResultParser:"
Explanation of how it works:
The first command is ./gradlew test -i and "-i" means "Info/Verbose" mode, which prints the result of each test in real-time, but also displays large amounts of unwanted debug lines.
So the output of the first command, ./gradlew test -i, is piped to a second command, grep, which filters out many unwanted lines based on a regular expression. "-E" enables extended regular expressions for a single pattern string; "-e" lets you pass multiple pattern strings; and "|" in the regex means "or".
In the strong filter, a unit test name and test result is allowed to display using " > ", and the overall status is allowed with "BUILD".
In the soft filter, the "-v" flag means "not containing" and "^" means "start of line". So it strips out all lines that start with "Executing " or start with "Creating ", etc.
Example for Android instrumentation unit tests, with gradle 5.1:
./gradlew connectedDebugAndroidTest --continue -i | grep -v -e \
"^Transforming " -e "^Skipping " -e "^Cache " -e "^Performance " -e "^Creating " -e \
"^Parsing " -e "^file " -e "ddms: " -e ":app:" -e "V/InstrumentationResultParser:"
Example for Jacoco unit test coverage, with gradle 4.10:
./gradlew createDebugCoverageReport --continue -i | grep -E -v "^Executing |^Creating |^Parsing |^Using |^Merging |^Download |^title=Compiling|^AAPT|^future=|^task=|:app:|V/InstrumentationResultParser:"
For Android, this works nicely:
android {
...
testOptions {
unitTests.all {
testLogging {
outputs.upToDateWhen { false }
events "passed", "failed", "skipped", "standardError"
showCauses true
showExceptions true
}
}
}
}
See Running Android unit / instrumentation tests from the console
For those using Kotlin DSL, you can do:
tasks {
named<Test>("test") {
testLogging.showStandardStreams = true
}
}
A more comprehensive response for those using the Kotlin DSL:
subprojects {
// all the other stuff
// ...
tasks.named<Test>("test") {
useJUnitPlatform()
setupTestLogging()
}
}
fun Test.setupTestLogging() {
testLogging {
events(
org.gradle.api.tasks.testing.logging.TestLogEvent.FAILED,
org.gradle.api.tasks.testing.logging.TestLogEvent.PASSED,
org.gradle.api.tasks.testing.logging.TestLogEvent.SKIPPED,
org.gradle.api.tasks.testing.logging.TestLogEvent.STANDARD_OUT,
)
exceptionFormat = org.gradle.api.tasks.testing.logging.TestExceptionFormat.FULL
showExceptions = true
showCauses = true
showStackTraces = true
addTestListener(object : TestListener {
override fun beforeSuite(suite: TestDescriptor) {}
override fun beforeTest(testDescriptor: TestDescriptor) {}
override fun afterTest(testDescriptor: TestDescriptor, result: TestResult) {}
override fun afterSuite(suite: TestDescriptor, result: TestResult) {
if (suite.parent == null) { // will match the outermost suite
val output = "Results: ${result.resultType} (${result.testCount} tests, ${result.successfulTestCount} passed, ${result.failedTestCount} failed, ${result.skippedTestCount} skipped)"
val startItem = "| "
val endItem = " |"
val repeatLength = startItem.length + output.length + endItem.length
val messages = """
${(1..repeatLength).joinToString("") { "-" }}
$startItem$output$endItem
${(1..repeatLength).joinToString("") { "-" }}
""".trimIndent()
println(messages)
}
}
})
}
}
This should produce output close to @odemolliens' answer.
If you are using JUnit Jupiter and none of the answers work, verify that it is set up correctly:
test {
useJUnitPlatform()
outputs.upToDateWhen { false }
}
dependencies {
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.7.0'
testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.7.0'
}
And then try the accepted answers.
I've written a test logger for the Kotlin DSL. You can put this block in your project-scope build.gradle.kts file.
subprojects {
tasks.withType(Test::class.java) {
testLogging {
showCauses = false
showExceptions = false
showStackTraces = false
showStandardStreams = false
val ansiReset = "\u001B[0m"
val ansiGreen = "\u001B[32m"
val ansiRed = "\u001B[31m"
val ansiYellow = "\u001B[33m"
fun getColoredResultType(resultType: ResultType): String {
return when (resultType) {
ResultType.SUCCESS -> "$ansiGreen $resultType $ansiReset"
ResultType.FAILURE -> "$ansiRed $resultType $ansiReset"
ResultType.SKIPPED -> "$ansiYellow $resultType $ansiReset"
}
}
afterTest(
KotlinClosure2({ desc: TestDescriptor, result: TestResult ->
println("${desc.className} | ${desc.displayName} = ${getColoredResultType(result.resultType)}")
})
)
afterSuite(
KotlinClosure2({ desc: TestDescriptor, result: TestResult ->
if (desc.parent == null) {
println("Result: ${result.resultType} (${result.testCount} tests, ${result.successfulTestCount} passed, ${result.failedTestCount} failed, ${result.skippedTestCount} skipped)")
}
})
)
}
}
}