Jenkins Allure report is not showing all the results when we have multiple scenarios - selenium

I have the following scenario outline and I am generating an Allure report, but the report does not contain the data for all the scenarios; it only shows the data from the last run.
It shows only the result for the | uat1_nam | Password01 | test data.
The Jenkins Allure plugin version I am using is 2.13.6.
Scenario Outline: Find a transaction based on different criteria and view the details
Given I am login to application with user "<user id>" and password "<password>"
When I navigate to Balances & Statements -> Find a transaction
Then I assert I am on screen Balances & Statements -> Find a transaction -> Find a transaction
#UAT1
Examples:
| user id | password |
| uat1_moz | Password01 |
| uat1_nam | Password01 |

I have a similar issue.
We are running tests with the same software on Linux and Windows, and generating the results into two separate folders.
Then we have:
allure-reports
|_ linux-report
|_ windows-report
Then we are using the following command in the Jenkinsfile:
allure([
    includeProperties: false,
    jdk: '',
    properties: [],
    reportBuildPolicy: 'ALWAYS',
    results: [[path: 'allure-reports/linux-report'], [path: 'allure-reports/windows-report']]
])
Similar to Sarath, only the results from the last run are available...
I also tried to run the CLI directly on my machine, with the same results.
allure serve allure-reports/linux-report allure-reports/windows-report
I have already found many approaches; this one is very similar to my use case, but I do not understand why it works there and not for me...
https://github.com/allure-framework/allure2/issues/1051
I also tried the following method, but the Docker container does not run properly on Linux due to permission issues, even though I run the container from a folder where I have full permissions. The result is the same if I pass my user ID as a parameter:
https://github.com/fescobar/allure-docker-service#MULTIPLE-PROJECTS---REMOTE-REPORTS

I was able to dig deeper into the topic, and I can finally show why the data are overwritten.
I used a very simple example to generate two different reports, where only the allure.epic value was different.
As I suspected, if the two sets of results are generated from the same source folder (with the allure.epic name updated in between), then only the latest report is considered.
If I have two different folders containing the same code (with only the allure.epic different), then all the data are available, stored in different suites!
So, to make Allure treat the reports as different and produce a separate classification for each OS, the tests have to run from code stored in different locations. That does not fit my use case, as the same code is tested on both Linux and Windows.
Or is there perhaps an option in allure-pytest to specify the root classification?
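One untested idea, assuming your allure-pytest version exposes allure.dynamic.label() and that Allure's Suites view groups by the "parentSuite" label, would be to tag every test with the OS it ran on from an autouse fixture in conftest.py, so the two runs end up under separate suites even though the code lives in one location:

# conftest.py -- a minimal, untested sketch.
# Assumes allure-pytest is installed and that allure.dynamic.label() exists in
# your version; "parentSuite" is assumed to be the label the Suites view uses.
import platform

import allure
import pytest


@pytest.fixture(autouse=True)
def os_parent_suite():
    # Tag every test with the OS it ran on (e.g. "Linux" or "Windows") so the
    # two result sets are classified under different parent suites.
    allure.dynamic.label("parentSuite", platform.system())
    yield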

Related

Perforce: Integrate all revisions at once, but with the same result as integrating the revisions one by one?

Situation:
test/... containing a.txt was branched into test2/...
test/a.txt was deleted, and then re-added and edited
Now I'd like to integrate all revisions from test/... to test2/....
In my actual use case, there can be many files like a.txt
What I've tried:
I've tried three methods:
Method 1 - Integrating all revisions results in a conflict as it thinks that test/a.txt#3 is a new, unrelated file:
>p4 integrate //depot/test/... //depot/test2/...
//depot/test2/a.txt#1 - integrate from //depot/test/a.txt#3,#4
>p4 resolve -as
c:\depot\test2\a.txt - merging //depot/test/a.txt#3,#4
Diff chunks: 0 yours + 0 theirs + 0 both + 1 conflicting
Method 2 - Integrating with p4 integrate -Di works in isolation, but I'm trying to make this part of an automated process that will integrate thousands of files at once. Since the -Di flag can't be used in all cases, the process would at minimum need to check the file history to see whether each file was moved/renamed, and it starts to get very messy.
Method 3 - Integrating one revision at a time works, but only if I submit each revision separately. Otherwise multiple integrations on the same file can't be opened simultaneously. This is slow if I have hundreds of changelists to integrate, and results in unnecessary integrate changes in the file history.
>p4 integrate //depot/test/a.txt#2 //depot/test2/a.txt
//depot/test2/a.txt#1 - delete from //depot/test/a.txt#2
>p4 integrate //depot/test/a.txt#3 //depot/test2/a.txt
//depot/test2/a.txt - can't integrate (already opened for delete)
Question:
Method 3 has the result I want (no conflicts), but can I achieve that while integrating all revisions in one go as in Method 1?
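For reference, a rough sketch of what automating Method 3 looks like (illustrative only: the file paths and revision range are made up, and it assumes each integrate is resolved and submitted before the next one is opened):

# Rough sketch of Method 3: integrate and submit one revision at a time.
# Assumes the p4 CLI is on PATH and the client workspace is already set up;
# the file paths and the revision range are illustrative.
import subprocess

SRC = "//depot/test/a.txt"
DST = "//depot/test2/a.txt"

for rev in range(2, 5):  # revisions #2 through #4
    subprocess.run(["p4", "integrate", f"{SRC}#{rev}", DST], check=True)
    subprocess.run(["p4", "resolve", "-as"])  # no-op when nothing needs resolving
    # Each revision has to be submitted before the next integrate can be opened.
    subprocess.run(["p4", "submit", "-d", f"Integrate {SRC}#{rev}"], check=True)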

Running manifests (classes) from a task or plan in Puppet Enterprise

TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  $apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each | $result | {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
  "msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples on how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully ran a plan using it, therefore, I conclude my system is set up correctly, it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the package{} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split out the package{} code from the profile manifest and place them in tasks / plans, allowing packages to be updated out-of-hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions. I guess it would be fairly simple to interrogate the Chocolatey repository with code to pull the latest version number (see the sketch after this list), but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions to those defined in Hiera, meaning Hiera still needs to be kept up to date.
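As an aside, a rough sketch of pulling the latest version number from the Chocolatey feed (assuming the choco CLI is on PATH and that its pipe-delimited --limit-output format is stable; the package name is illustrative):

# Rough sketch: query the Chocolatey feed for the latest published version of a package.
# Assumes the choco CLI is installed and on PATH; the package name is illustrative.
import subprocess

def latest_choco_version(package: str) -> str:
    # --limit-output prints machine-readable "name|version" lines.
    output = subprocess.run(
        ["choco", "search", package, "--exact", "--limit-output"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in output.splitlines():
        name, _, version = line.partition("|")
        if name.lower() == package.lower():
            return version
    raise LookupError(f"{package} not found in the Chocolatey feed")

print(latest_choco_version("git"))  # prints the latest published version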
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite all my manifests as scripts.
I am confused by the Puppet documentation, as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet says they don't recommend using Bolt, yet their examples seem to cite Bolt commands.
There are no errors in the node's report. apply_prep() reports that it executed successfully, albeit taking far longer to execute than the puppetlabs_reboot module does, but apply() results in a failure, and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, their plan appears to use a set of tasks rather than apply() to run their reboot{} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
I found this out using the logs.
Turn on debug level logging in /etc/puppetlabs/puppetserver/logback.xml (root level="debug")
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log

pytest, any way to include a test file or list of test files?

I am looking for best-practice advice regarding the following context:
I am using pytest to run integration tests on my IAC deployment
My IAC code base is structured as:
myapp
|
|_roles
| |_role1
| |_role2
|_resources
|_tomcat
|_java
I'd like to use the same kind of structure for my test files.
Tests are currently divided into files matching the roles (role1, role2):
tests
|
|_roles
|_test_role1.py
|_test_role2.py
which leads to duplicated code, e.g.:
role1 is a Tomcat-based app,
role2 holds pure Java code,
so both test files (test_role1.py and test_role2.py) will contain a Java test function.
If I could add a directory structure like:
tests
|
|_roles
| |_test_role1.py
| |_test_role2.py
|
|_resources
|_test_tomcat.py
|_test_java.py
Then I could just "include / import" the test_java.py functions to use them in test_role1.py and test_role2.py without duplicating code...
What's the best way to achieve this?
I am already using fixtures (defined in conftest.py), and I feel that the solution to my duplicated code lies somewhere around fixtures or shared test modules, but my limited Python/pytest knowledge is keeping me away from the actual solution.
Thanks
If you don't mind running your tests as a module, you could turn your test directories into packages by placing a file called '__init__.py' in the root of the project, in the directory with the code to be tested, and in the directory with the testing code.
You can then perform relative imports to access the functions you need,
e.g. to access test_java.py from test_role2.py:
from ..resources import test_java
A single dot represents the current package; two dots represent the parent package.
You will need to use the -m flag when calling your code (e.g. python -m pytest) so Python understands you are running a module with relative imports.
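For instance, a minimal sketch of what tests/roles/test_role2.py could look like under that layout (the helper name check_java_version is purely illustrative, and it assumes tests/, tests/roles/ and tests/resources/ each contain an __init__.py):

# tests/roles/test_role2.py -- illustrative sketch only.
from ..resources.test_java import check_java_version  # hypothetical shared helper

def test_role2_java():
    # Reuse the shared Java check instead of duplicating it in every role test file.
    check_java_version()

Running python -m pytest from the project root should then resolve the relative import.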
In your case you might consider performing the messy relative imports in conftest.py
This post explains the above in more detail:
http://blog.habnab.it/blog/2013/07/21/python-packages-and-you/

How to define run once context steps with Gauge?

Using Gauge we can repeat a set of steps before each scenario using Context Steps right after a test specification heading. For example:
Delete project
==============
* User log in as "mike"
Delete single project
---------------------
* Delete the "example" project
* Ensure "example" project has been deleted
Delete multiple projects
------------------------
* Delete all the projects in the list
* Ensure project list is empty
In the above Delete project test specification, the context step User log in as "mike" is going to be executed twice, once for each of the two delete scenarios.
How do I define steps that run only once, before all scenarios of a test specification?
Since you cannot have a context step run only once per spec file, a workaround could be to use the suite data store.
public void loginAsMike() {
    // Run the login steps only if they have not been executed yet in this suite.
    Object loggedIn = DataStoreFactory.getSuiteDataStore().get("loggedIn");
    if (loggedIn == null) {
        // execute the login steps here
        DataStoreFactory.getSuiteDataStore().put("loggedIn", true);
    }
}
This way the login will only run once. The only issue would be if you were to run multiple specs in parallel. However, if you only log in as "mike" in one spec file, this is a good solution.

How to get discarded builds and the date of build execution from Jenkins API?

I'd like to retrieve information about all builds of a specified job from the Jenkins API.
In the job configuration I've set Discard old builds:
Days to keep builds - 20
Max # of builds to keep - 10
1) Is it possible to get information about builds that are discarded?
I can also get the following information about each build that has not been discarded:
"builds" : [
{
"_class" : "hudson.model.FreeStyleBuild",
"number" : 34,
"url" : "http://myUrl/view/Myjob/34/"
}
]
I use the following URL for this: http://MyUrl/view/MyJob/api/json?tree=builds[url,number]&pretty=true
2) Is it possible to get the date of a build's execution?
To discard a build means, well, to discard it, i.e. its information is gone.
I can imagine a solution where you:
collect, store and update (= extend) information about all existing builds on a regular basis;
whenever you'd like to know which builds have been discarded:
collect the information about all existing builds at that very moment,
don't update your store, but ...
create a diff between the current information (C) and the information previously stored in your store (S). The builds that are in S but not in C are the discarded builds.
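A minimal sketch of that store-and-diff idea (the Jenkins URL and the store file are placeholders, and authentication is omitted; it also requests the timestamp field, i.e. the build start time in milliseconds since the epoch, which answers question 2 for the builds that still exist):

# Sketch of the store-and-diff approach described above.
# The Jenkins URL and the store file path are placeholders; authentication is omitted.
import json
import urllib.request
from pathlib import Path

JOB_API = "http://MyUrl/view/MyJob/api/json?tree=builds[number,url,timestamp]"
STORE = Path("known_builds.json")

def fetch_current_builds():
    # Return {build number: build info} for the builds Jenkins still knows about.
    with urllib.request.urlopen(JOB_API) as response:
        data = json.load(response)
    # "timestamp" is the build start time in milliseconds since the epoch.
    return {str(b["number"]): b for b in data["builds"]}

def main():
    current = fetch_current_builds()                                   # C
    stored = json.loads(STORE.read_text()) if STORE.exists() else {}   # S
    discarded = sorted(set(stored) - set(current), key=int)
    print("Discarded builds:", discarded)
    # Extend the store with newly seen builds; keep the entries for discarded ones.
    stored.update(current)
    STORE.write_text(json.dumps(stored, indent=2))

if __name__ == "__main__":
    main()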