Serenity BDD Report showing example in place of steps

<serenity.version>2.0.54</serenity.version>
<serenity.maven.version>2.0.16</serenity.maven.version>
<serenity.cucumber.version>1.9.20</serenity.cucumber.version>
In the report I can see all the tasks that ran, but they are not grouped under Cucumber steps that can be expanded to show the tasks.
Update 1:
I updated the dependencies, but still no luck.
<serenity.version>2.0.54</serenity.version>
<serenity.maven.version>2.0.54</serenity.maven.version>
<serenity.cucumber.version>1.0.14</serenity.cucumber.version>
<cucumber.version>4.2.0</cucumber.version>

Make sure your versions are aligned: see https://github.com/serenity-bdd/serenity-core#what-is-the-latest-stable-version-i-should-use
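For example, on the Cucumber 4.x line the glue typically comes from the serenity-cucumber4 adapter rather than serenity-cucumber. A minimal sketch of aligned Maven properties and dependencies, reusing the versions already listed above (treat the exact numbers as placeholders and take the real combination from the compatibility table linked above):

<properties>
    <serenity.version>2.0.54</serenity.version>
    <serenity.maven.version>2.0.54</serenity.maven.version>
    <serenity.cucumber.version>1.0.14</serenity.cucumber.version>
    <cucumber.version>4.2.0</cucumber.version>
</properties>

<dependencies>
    <dependency>
        <groupId>net.serenity-bdd</groupId>
        <artifactId>serenity-core</artifactId>
        <version>${serenity.version}</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <!-- Cucumber 4.x pairs with the serenity-cucumber4 adapter -->
        <groupId>net.serenity-bdd</groupId>
        <artifactId>serenity-cucumber4</artifactId>
        <version>${serenity.cucumber.version}</version>
        <scope>test</scope>
    </dependency>
</dependencies>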

I am not able to generate a cucumber report on Jenkins

I have added the Cucumber report in a post-build action, but I keep getting the same error.
Here is the error that I am getting:
net.masterthought.cucumber.ValidationException: File '/var/lib/jenkins/jobs/HE_HQ_Automaton/builds/111/cucumber-html-reports/.cache/target/cucumber-reports/CucumberTestReport.json' is not a valid Cucumber report!
at net.masterthought.cucumber.ReportParser.parseForFeature(ReportParser.java:104)
at net.masterthought.cucumber.ReportParser.parseJsonFiles(ReportParser.java:72)
at net.masterthought.cucumber.ReportBuilder.generateReports(ReportBuilder.java:97)
at net.masterthought.jenkins.CucumberReportPublisher.generateReport(CucumberReportPublisher.java:563)
at net.masterthought.jenkins.CucumberReportPublisher.perform(CucumberReportPublisher.java:434)
at jenkins.tasks.SimpleBuildStep.perform(SimpleBuildStep.java:123)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:79)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:814)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:763)
at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:1072)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:707)
at hudson.model.Run.execute(Run.java:1921)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:543)
at hudson.model.ResourceController.execute(ResourceController.java:101)
at hudson.model.Executor.run(Executor.java:442)
I did check and make sure that I have the correct plugin, and I was expecting to get a report, but no success so far. I was able to generate a report last week, but not anymore.

Karate - Results from two series of tests aren't merged anymore after upgrading version from 0.9.3 to 1.2.0

We are facing an issue with the test results after upgrading our Karate project from v0.9.3 to v1.2.0. We are testing an API after the execution of two batches. Therefore we have a first series of tests (Runner 1) executed against our API after our first batch run, then a second series of tests (Runner 2, on new feature files) executed after our second batch.
On the version we were using, the test results were merged, but on the updated version we cannot get all the results into the same report: the results of the first run are deleted, so we are left with only the results of the second run.
Previously working code:
Results results1 = Runner.parallel(
        Arrays.asList("#tag1,#tag2", "#ignore"),
        Collections.singletonList("classpath:features"),
        5,
        "target/sources-rapports");
int totalFailCount = results1.getFailCount();

Results results2 = Runner.parallel(
        Arrays.asList("#tag3,#tag4", "#ignore"),
        Collections.singletonList("classpath:features"),
        5,
        "target/sources-rapports");
totalFailCount += results2.getFailCount();
generateReport(results2.getReportDir());
The report would contain all test features of results1 and results2, whereas now each execution seems to remove the previous Karate JSON files before generating the new ones.
New, non-working code, with the following syntax:
Runner.path("classpath:features")
.tags(Arrays.asList("#tag1,#tag2", "#ignore"))
.outputCucumberJson(true)
.parallel(5);
I'm looking for help to solve this problem. Do not hesitate to ask for more information if you need it.
Try this change:
Runner.path("classpath:features")
.tags(Arrays.asList("#tag1,#tag2", "#ignore"))
.outputCucumberJson(true)
.backupReportDir(false)
.parallel(5);
For further info: https://stackoverflow.com/a/66685944/143475
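Applied to the two-runner flow from the question, a sketch might look like the following (tag, path and report-directory values are copied from the question; double-check the Runner.Builder options against the Karate 1.2.0 API):

// needs: com.intuit.karate.Runner, com.intuit.karate.Results, java.util.Arrays
Results results1 = Runner.path("classpath:features")
        .tags(Arrays.asList("#tag1,#tag2", "#ignore"))
        .outputCucumberJson(true)
        .backupReportDir(false)               // keep the JSON from the previous run
        .reportDir("target/sources-rapports")
        .parallel(5);
int totalFailCount = results1.getFailCount();

Results results2 = Runner.path("classpath:features")
        .tags(Arrays.asList("#tag3,#tag4", "#ignore"))
        .outputCucumberJson(true)
        .backupReportDir(false)               // second run adds to the dir instead of wiping it
        .reportDir("target/sources-rapports")
        .parallel(5);
totalFailCount += results2.getFailCount();
generateReport(results2.getReportDir());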

How to skip a test case if a link is not present and go to the next link in Robot Framework

Scenario:
There are 5 Links in the Home page:
Link 1
Link 2
Link 3
Link 4
Link 5
Each of the above links is a separate test case, so there are a total of 5 test cases.
Not all of the links are present on all of the sites, depending on the requirements.
So I need to write Robot Framework test cases that work dynamically for all the sites: one site may have only 3 links, while others have all 5. So it is like SKIPPING a particular test case if that link is not present.
*** Keywords ***
Go to Manage Client Reports
    Click Link    link:Manage Client Reports
Can anyone help?
In the upcoming Robot Framework 4.0 release a new test status, skipped, will be introduced. Here is a brief status of the release:
Past due by 27 days, 87% complete
Major release concentrating on adding the skip status (#3622), IF/ELSE
(#3074) and enhancing the listener API (#3296 and #3538). Last major
release to support Python 2.
So it could be ready any time now.
This is what you will get from New SKIP status #3622: there will be Skip If and Skip keywords, and more.
How to skip tests
There are going to be multiple ways:
A special exception that library keywords can use to mark a single test to be skipped. See also #3685.
BuiltIn keyword Skip (or Skip Test and Skip Task) that utilizes the aforementioned exception.
BuiltIn keyword Skip If to skip based on a condition.
When the skipping exception is used in a suite setup, all tests in the suite are skipped.
Command line option --skip to unconditionally skip tests based on tags. Similar to --exclude but skipped tests are shown in logs/reports
with a skip status and not dropped from execution altogether.
Command line option --skiponfailure to skip tests if they fail. Similar effect to the current --noncritical.
What about criticality
As already discussed in #2087, the skip status is a very similar feature to Robot's current criticality concept. There are many people who would like to have both, but I don't think that's a good idea and believe it's better to remove criticality when skipping is added. Separate issue #3624 covers removing criticality and explains this in more detail.
Colors
Skip status needs a specific color to match the current pass (green) and fail (red). Yellow feels like a good candidate with a traffic light metaphor, but I'm open to other ideas and we could possibly change other colors as well. We probably should make colors configurable too -- currently only report background colors support it.
The report background color mentioned above needs some thinking as well. Currently it's either green or red, but with the added skip status we could also use yellow or whatever skip color we decide on. Different scenarios where different colors could be used are listed below (assuming a green/yellow/red scheme):
All tests pass. This is naturally green.
Any test fails. This is naturally red.
Any test is skipped (no failures). This probably should be green but could also be yellow.
All tests skipped. This could be yellow. Could also be green but that's a bit odd if all tests are yellow.
Depending on your deadlines you might not be able to wait for this release; nevertheless, it is a good thing to know about.
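Once 4.0 is available, a minimal sketch of that simpler approach could look like the one below; it assumes SeleniumLibrary is in use (Run Keyword And Return Status and Page Should Contain Link detect the link, and the locator is taken from the question):

*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Go to Manage Client Reports
    ${present}=    Run Keyword And Return Status    Page Should Contain Link    link:Manage Client Reports
    Skip If    not ${present}    The Manage Client Reports link is not available on this site
    Click Link    link:Manage Client Reports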
There is an advanced solution where you can generate your test cases at run time. To do so, you have to implement a small library that also acts as a listener. This way it can have a start_suite method that will be invoked with the suite(s) as Python object(s), robot.running.model.TestSuite. Then you can use this object along with Robot Framework's API to create new test cases. The idea below was inspired by and is based on this blog post: Dynamically create test cases with Robot Framework.
DynamicTestLibrary.py:
from robot.running.model import TestSuite


class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        # listener hook: remember the running suite so keywords can add tests to it
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, keyword, *args):
        # create a test named after the keyword and add that keyword as its only step
        tc = self.top_suite.tests.create(name=keyword)
        tc.keywords.create(name=keyword, args=args)


# expose the class under the module name so Robot Framework can import it as a library
globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward-incompatible changes made in the 4.0 release (the running and result models have changed), the add_test_case function should be changed as below if you are using version 4.0 or above.
    def add_test_case(self, name, keyword, *args):
        tc = self.top_suite.tests.create(name=name)
        tc.body.create_keyword(name=keyword, args=args)
You can utilize this library in a suite setup, in which you check which links are present and add test cases for the ones that are available.
test.robot
*** Settings ***
Library           DynamicTestLibrary
Suite Setup       Check Links And Generate Test Cases

*** Variables ***
#@{LINKS}    Manage Clients                                                       # test input 1
@{LINKS}     Manage Clients    Manage Client Hardware                             # test input 2
#@{LINKS}    Manage Clients    Manage Client Hardware    Manage Client Reports    # test input 3

*** Test Cases ***
Placeholder
    [Documentation]    Placeholder test that will be removed during execution.
    No Operation

*** Keywords ***
Check Links And Generate Test Cases
    FOR    ${link}    IN    @{LINKS}
        DynamicTestLibrary.Add Test Case    Go to ${link}
    END

Go to Manage Client Reports
    Log Many    Click Link    link:Manage Client Reports

Go to Manage Client Hardware
    Log Many    Click Link    link:Manage Client Hardware

Go to Manage Clients
    Log Many    Click Link    link:Manage Clients
Go to ${link} will give the appropriate keyword name that will be called in a test case with the same name. You can check with each example input list that the number of executed tests will be equal to the length of the list.
Here is the output:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Go to Manage Clients | PASS |
------------------------------------------------------------------------------
Go to Manage Client Hardware | PASS |
------------------------------------------------------------------------------
Test | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================

karate.abort() in v0.9.4 results in Failed scenario in cucumber html reports

karate.abort() results in skipped steps. There was a previous fix for this. However, Cucumber reporting treats skipped steps as failed.
Is there any workaround where I can use karate.abort() and not have a failed scenario, as I am using it deliberately to skip some DB checks?
Or is there any alternative to karate.abort()?
Yes, we need some community help to resolve how third-party reports treat skipped steps. Please read this, and maybe you can be the one to find a solution: https://github.com/intuit/karate/issues/755#issuecomment-488710450
A workaround is to split into a second feature and then:
* if (condition) karate.call('second.feature')
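For example (the feature names and the condition below are made up for illustration), the conditional call replaces the abort and the optional DB checks live in their own feature:

# first.feature
Feature: main API checks

Scenario: verify the response and optionally run the DB checks
  * def runDbChecks = true
  # derive runDbChecks from your own logic; the DB checks only run when it is true
  * if (runDbChecks) karate.call('second.feature')

# second.feature
Feature: DB checks, executed only when called from first.feature

Scenario: db checks
  * print 'DB checks run here'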

CTest build ID not set

I have a CDash server configured to accept submissions for automatic builds and tests. However, when any system attempts to post results to CDash, the following error is produced. The result is that each submission gets posted four times (presumably the original attempt plus the three retries).
Can anyone give me a hint as to what sets this mysterious build ID? I found some code that seems to produce a similar error, but still no lead on what might be happening.
Build::GetNumberOfErrors(): BuildId not set
Build::GetNumberOfWarnings(): BuildId not set
Submit failed, waiting 5 seconds...
Retry submission: Attempt 1 of 3
Server Response:
The buildid for CDash is computed based on the site name, the build name and the build stamp of the submission. You should have a Build.xml file in a Testing/20110311-* directory in your build tree. Open that up and see if any of those fields (near the top) is empty. If so, you need to set BUILDNAME and SITE with -D args when configuring with CMake. Or, set CTEST_BUILD_NAME and CTEST_SITE in your ctest -S script.
If that's not it, then this is a mystery. I've not seen this error occur before...
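For reference, a minimal ctest -S script sketch that sets both values before submitting (the site name, build name and paths below are placeholders):

# dashboard.cmake -- run with: ctest -S dashboard.cmake
set(CTEST_SITE              "my-build-host")       # placeholder site name
set(CTEST_BUILD_NAME        "Linux-gcc-Release")   # placeholder build name
set(CTEST_SOURCE_DIRECTORY  "/path/to/source")
set(CTEST_BINARY_DIRECTORY  "/path/to/build")
set(CTEST_CMAKE_GENERATOR   "Unix Makefiles")

ctest_start(Experimental)    # writes the build stamp
ctest_configure()
ctest_build()
ctest_test()
ctest_submit()               # site + build name + stamp identify the buildid on CDash

The equivalent for an existing build tree is to configure with cmake -DSITE=... -DBUILDNAME=... and then run ctest -D Experimental.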
I'm having the same issue, though Site and BuildName are available in Test.xml and are visible on CDash (four times). I can see the jobs increment by refreshing between retries, so it seems that the submission succeeds yet reports a timeout.
Update: this seems to have started when I added the -j(nprocs) switch to the ctest command. Changing CtestSubmitRetryDelay to 20 (was 5) allowed a server response through that indicates the CDash version may not be able to handle the multi-processor option; I'll have to look into that for my issue. Perhaps setting CtestSubmitRetryDelay to a larger number will get you a server response back, as it did for me. Good luck!
Out of range value for column 'processorclockfrequency'