Bamboo vs CxxTest

I created a plan in Bamboo and added a task that runs CxxTest test code (a function running TS_ASSERT(1==1) or similar). When I run it to check failure handling (TS_ASSERT(1==2)), the test case fails and Bamboo outputs a log like:
12-Mar-2014 15:12:07 Failed 1 and Skipped 0 of 2 tests
12-Mar-2014 15:12:07 Success rate: 50%
12-Mar-2014 15:12:07 Failing task since return code was 1 while expected 0
So, does anyone here know how Bamboo understands the test result, and what the return code is here ("return code was 1 while expected 0")?

From my observation, on Windows, Bamboo considers the value of %ERRORLEVEL% as the result of the task. So "return code was 1 while expected 0" means your task is returning 1 and Bamboo is expecting 0. Yes, it's expecting 0, since it treats any value other than 0 as a failure.
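You can see the same convention from a plain command prompt. A minimal batch sketch, assuming the compiled CxxTest runner is called run_tests.exe (a hypothetical name):

rem Run the test runner, then inspect the exit code Bamboo would check.
run_tests.exe
echo Runner exited with %ERRORLEVEL%
rem Bamboo fails the task for any non-zero value:
if %ERRORLEVEL% NEQ 0 echo At least one test failed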

Related

Why is ctest passing this "fail" test even when FAIL_REGULAR_EXPRESSION doesn't match?

I added a simple test to ctest with the following lines in a .cmake file:
add_test( NAME ktxsc-test-many-in-one-out
    COMMAND ktxsc -o foo a.ktx2 b.ktx2 c.ktx2
)
set_tests_properties( ktxsc-test-many-in-one-out
    PROPERTIES
    WILL_FAIL TRUE
    FAIL_REGULAR_EXPRESSION "^Can't use -o when there are multiple infiles."
)
The test passes and the TestLog shows
----------------------------------------------------------
Test Pass Reason:
Error regular expression found in output. Regex=[^Can't use -o when there are multiple infiles.]
"ktxsc-test-many-in-one-out" end time: Jun 30 16:34 JST
"ktxsc-test-many-in-one-out" time elapsed: 00:00:00
----------------------------------------------------------
If I change FAIL_REGULAR_EXPRESSION to
FAIL_REGULAR_EXPRESSION "some rubbish"
the test still passes even though the app is printing the same message as before. This time the test log shows
----------------------------------------------------------
Test Passed.
"ktxsc-test-many-in-one-out" end time: Jun 30 16:53 JST
"ktxsc-test-many-in-one-out" time elapsed: 00:00:00
----------------------------------------------------------
which is what I normally see when no *_REGULAR_EXPRESSION is set.
Why is this happening? How can I get ctest to fail the test when the FAIL_REGULAR_EXPRESSION doesn't match?
Thanks to @Tsyvarev for this answer.
ctest is effectively doing
failed = (output matches FAIL_REGULAR_EXPRESSION) || (exit code != 0)
to determine whether a test has failed, and WILL_FAIL then inverts that result. If the match succeeds, the exit code is not checked. If the match fails, the exit code alone is checked and the failed match is ignored.
The documentation is unclear. I've read it many times and still didn't figure out this behavior until steered in the right direction by @Tsyvarev. I also find this a poor behavioral choice. All my tools emit both a non-zero exit code and a message when there is an error condition, and I need to test both. I previously asked this question about that. It requires duplicating tests.
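For reference, the duplication looks something like the following sketch, splitting the message check and the exit-code check into two tests (the -message and -exitcode suffixes are only illustrative):

# Message check only: with PASS_REGULAR_EXPRESSION set, the test
# passes when the output matches, and the exit code is ignored.
add_test( NAME ktxsc-test-many-in-one-out-message
    COMMAND ktxsc -o foo a.ktx2 b.ktx2 c.ktx2
)
set_tests_properties( ktxsc-test-many-in-one-out-message
    PROPERTIES
    PASS_REGULAR_EXPRESSION "^Can't use -o when there are multiple infiles."
)
# Exit-code check only: WILL_FAIL inverts the exit-code test, so
# this passes only when ktxsc exits non-zero.
add_test( NAME ktxsc-test-many-in-one-out-exitcode
    COMMAND ktxsc -o foo a.ktx2 b.ktx2 c.ktx2
)
set_tests_properties( ktxsc-test-many-in-one-out-exitcode
    PROPERTIES
    WILL_FAIL TRUE
)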

Groovy assertion script is executed twice within SoapUI

Hello, I'm trying to write a simple Groovy script in SoapUI.
I try to get a test case property, increment it, then save it.
When I run the script it increments it twice, and I don't know why. I have tried different syntaxes but nothing seems to work so far.
Here is a screenshot that shows my problem:
Here I run the test twice. The first time the variable was 3, so when I run the test the second time, the "before" value should be 4 and the "after" value 5, not 5 and 6.
I believe that you do not want to have the increment logic in a Script Assertion.
Instead, increment the counter in the Setup Script of the test case; a sketch of that follows the snippet below.
If you need the counter value in the script assertion, just read it there.
Hope this helps.
By the way, I do not see any issue with the script you have shown.
Check whether this variable is being manipulated anywhere else.
def cnt = context.testCase.getPropertyValue('COUNT') as Integer
if (cnt < 10) {
    log.info "before : $cnt"
    cnt += 1
    log.info "after : $cnt"
    context.testCase.setPropertyValue('COUNT', cnt.toString())
}
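As for moving the increment into the test case's Setup Script, a minimal Groovy sketch, assuming the Setup Script's testRunner binding and an existing COUNT property:

// Test case Setup Script: runs once per execution of the test case.
def tc = testRunner.testCase
def cnt = (tc.getPropertyValue('COUNT') ?: '0') as Integer
tc.setPropertyValue('COUNT', (cnt + 1).toString())
log.info "COUNT is now ${cnt + 1}"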
Can't comment yet. I'm seeing the same issue in 5.3.0. Here's my script, which grabs a string value from the properties, converts it to an Integer, increments it, and writes it back as a string.
loopsInt = messageExchange.modelItem.testStep.testCase.getPropertyValue("loops").toInteger();
log.info loopsInt;
loopsInt++;
log.info loopsInt;
messageExchange.modelItem.testStep.testCase.setPropertyValue("loops", String.valueOf(loopsInt));
I log the value before I increment, and immediately after, and as you can see, the value is being incremented twice. Here I run the script 3 times:
Thu Mar 16 12:04:54 NZDT 2017:INFO:52
Thu Mar 16 12:04:54 NZDT 2017:INFO:53
Thu Mar 16 12:04:56 NZDT 2017:INFO:54
Thu Mar 16 12:04:56 NZDT 2017:INFO:55
Thu Mar 16 12:04:59 NZDT 2017:INFO:56
Thu Mar 16 12:04:59 NZDT 2017:INFO:57
I get the same result whether I use loopsInt++ or loopsInt = loopsInt + 1. The "loops" property is not being used anywhere else. Weird.
When you execute the assertion script with the green arrow in the Script Assertion window, it is executed twice.
I have used the following script:
def loopsInt = messageExchange.modelItem.testStep.testCase.getPropertyValue("myNum").toInteger();
log.info loopsInt
loopsInt++
messageExchange.modelItem.testStep.testCase.setPropertyValue("myNum", String.valueOf(loopsInt))
See the following picture: one window logs the even numbers and the second the odd numbers.
Please note that execution in the Script Assertion window should be used only for debugging the script. When you execute the test case (test step), the script is executed only once, as expected.
Anyway, I think there are better places to set test case properties (the Setup Script, a Groovy Script test step, and others). I recommend using assertion scripts for checking the message exchange.
Karel
Found a very strange reason for this.
If the test step containing this assertion has been run successfully [i.e., for a SOAP test step, it turns green], and after this you open the assertion and run it separately, then it is incremented twice: once in your editor, once within the test step itself.
If the test step had failed [is red in color] and you then try running the assertion separately, it works absolutely fine.

Can the ctest output be colored?

Is there a way to color the ctest output?
For example, is it possible to have the last line,
100% tests passed, 0 tests failed out of x
shown in green in the case of 100% success, or red otherwise?
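One approach that does not depend on ctest itself is to colorize the summary after the fact. A minimal sketch, assuming a Bash-like shell, GNU sed, and an ANSI-capable terminal (the patterns only cover the summary line shown above):

# Wrap the summary line in green (all passed) or red (any failed).
ctest 2>&1 | sed \
    -e $'s/^100% tests passed.*/\e[32m&\e[0m/' \
    -e $'s/^[0-9]*% tests passed.*failed.*/\e[31m&\e[0m/'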

Gitlab-CI runner hangs after makefile test fails

I am using Gitlab-CI for my build tests. I have a very simple test which compares the output of the test install/build with the known output. I put the test in a makefile.
The Makefile entry looks like this:
test: clean
	make install DESTDIR=$(TEST_DIR)
	$(TEST_DIR)/path/to/executable > $(TEST_DIR)/tmp.out
	diff test/test.result $(TEST_DIR)/tmp.out
When the diff passes, an exit code of 0 is returned; an exit code of 1 is returned if the diff shows a difference between the files.
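For what it's worth, here is a sketch of the same recipe that logs diff's exit status explicitly before propagating it, as a diagnostic aid rather than a confirmed fix:

test: clean
	make install DESTDIR=$(TEST_DIR)
	$(TEST_DIR)/path/to/executable > $(TEST_DIR)/tmp.out
	# Run diff, record its status, then exit with that same status.
	diff test/test.result $(TEST_DIR)/tmp.out; \
	status=$$?; echo "diff exited with $$status"; exit $$status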
What I've tried:
Running make test from any shell runs the tests and exits, regardless of the diff result.
Running make test from the shell as gitlab_ci_runner runs the tests and exits, regardless of the diff result.
When run from Gitlab-CI and the diff exit status is 0, the build returns success.
The problem:
When run in Gitlab-CI and the diff exit status is non-0, the build hangs.
The output on the build screen is the output of the diff, and the last line is the expected error: make: *** [test] Error 1
After that, the cycle symbol keeps spinning and the runner does not exit with a build failure.
Any ideas? I thought it might be something with Makefiles, but Gitlab-CI will exit with a fail status if make exits with Error 1 for any other test. I can only see it happening on the output of the diff.
Thanks!
Also posted this to the GitLab mailing list: https://groups.google.com/d/msgid/gitlabhq/77e82813-b98e-4abe-9755-f39e07043384%40googlegroups.com?utm_medium=email&utm_source=footer

MSBuild script fails but produces no errors

I have an MSBuild script that I am executing through TeamCity.
One of the tasks it runs is from Xheo DeployLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is OK.
This task used to take around 40 minutes, and the rest of the MSBuild file executed perfectly and finished without errors.
For some reason this task is now taking 1 hour 20 minutes or so to execute. Once the VeilProject task is finished, the output from the task says it completed successfully; however, the MSBuild script fails at this point. I have a task directly after the VeilProject task and its output does not appear. Using diagnostic output from MSBuild I can see the following:
My questions are:
Would it be possible that the MSBuild script has timed out? Once the task has completed, it is after a certain timeout period, so it just fails?
Why would the build fail with no errors and no warnings?
[05:39:06]: [Target "Obfuscate"] Finished.
[05:39:06]: [Target "Obfuscate"] Saving exception map
[05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
[05:49:22]: [Target "Obfuscate"] Done.
[05:49:51]: MSBuild output:
Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
Done. (TaskId:8)
Done executing task "VeilProject" -- FAILED. (TaskId:8)
Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.
Project Performance Summary:
6535484 ms C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx 1 calls
6535484 ms All 1 calls
Target Performance Summary:
156 ms PreClean 1 calls
266 ms SetBuildVersionNumber 1 calls
2406 ms CopyFiles 1 calls
6532391 ms Obfuscate 1 calls
Task Performance Summary:
16 ms MakeDir 2 calls
31 ms TeamCitySetBuildNumber 1 calls
31 ms Message 1 calls
62 ms RemoveDir 2 calls
234 ms GetAssemblyIdentity 1 calls
2406 ms Copy 1 calls
6528047 ms VeilProject 1 calls
Build FAILED.
0 Warning(s)
0 Error(s)
Time Elapsed 01:48:57.46
[05:49:52]: Process exit code: 1
[05:49:55]: Build finished
If the .exe is not returning standard exit codes, then you may want to ignore the exit code by using the Exec task with IgnoreExitCode="true". If that doesn't work, then try the additional parameter IgnoreStandardErrorWarningFormat="true".
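In MSBuild terms, that suggestion looks something like this sketch, assuming the obfuscator can be driven from the command line (codeveil.exe and the project file name are illustrative, not the real VeilProject task):

<Target Name="Obfuscate">
  <!-- IgnoreExitCode keeps a non-standard exit code from failing the build;
       IgnoreStandardErrorWarningFormat keeps output that merely looks like
       an error or warning from being treated as one. -->
  <Exec Command="codeveil.exe MyProject.veilproj"
        IgnoreExitCode="true"
        IgnoreStandardErrorWarningFormat="true" />
</Target>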