How to append Behat results to one HTML file

Scenario:
1) I run php bin/behat features/1.feature > results.html
2) I then run php bin/behat features/2.feature > results.html
3) I should see the results of both 1.feature and 2.feature in results.html
How can I get this done? Does Behat have an option for appending?
Maybe php bin/behat features/1.feature --append > results.html?

You can use profiles in behat.yml and indicate which features to run per profile.
So instead of running Behat several times, once for each .feature file, you run it once and obtain a single HTML file.
Example: the following config has two profiles, one for login scenarios and one for booking scenarios.
It also filters out scenarios tagged @wip.
It outputs to the command line ('pretty') and to an HTML file ('html'):
# behat.yml
default:
    filters:
        tags: "~@wip"
    formatter:
        name: pretty,html
        parameters:
            output_path: null,behat_report.html
login:
    paths:
        features: features/login
        bootstrap: %behat.paths.features%/bootstrap
booking:
    paths:
        features: features/booking
        bootstrap: %behat.paths.features%/bootstrap
You can then run the profile you want:
behat --profile login
which will run all the .feature files inside the features/login/ directory.
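As for appending at the shell level: > truncates the target file on every run, while >> appends to it. A minimal sketch (note that the result is two complete HTML documents concatenated into one file, which a browser may not render as a single report):
php bin/behat features/1.feature > results.html
php bin/behat features/2.feature >> results.html
This is why grouping features under profiles, as above, is the cleaner route to a single report.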

Azure CI Pipeline YAML for SpecFlow Integration

I am trying to configure my SpecFlow project with the Azure CI pipeline. When I try to create SpecFlow+LivingDoc with TestExecution.json, the pipeline is unable to find the path. I am attaching my YAML and specflow.json. Can anybody help me with this?
YAML
- task: SpecFlowPlus@0
  displayName: 'Upload SpecFlow Living Docs'
  inputs:
    projectFilePath: 'MyProjecct'
    projectName: 'MyProjecct'
    testExecutionJson: '**\TestExecution.json'
    projectLanguage: 'en'
specflow.json
{
    "livingDocGenerator": {
        "enabled": true,
        "filePath": "{CurrentDirectory}\\TestResults\\TestExecution.json"
    }
}
Error
##[error]Error: Command failed: dotnet D:\a_tasks\SpecFlowPlus_32f3fe66-8bfc-476e-8e2c-9b4b59432ffa\0.6.859\CLI\LivingDoc.CLI.dll feature-folder "D:\a\1\s\MyProjecct" --output-type JSON --test-execution-json "**/TestExecution.json" --output "D:\a\1\s\16707\FeatureData.json" --project-name "MyProjecct" --project-language "en"
Before the SpecFlowPlus task you can add a shell task (such as Bash) that runs ls, to check whether the TestExecution.json file has actually been generated in the expected feature folder under the current directory:
ls -R
If the TestExecution.json file does not exist, check the step that is supposed to generate it.
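For example, a minimal sketch of such a step (Bash@3 is the stock Azure DevOps Bash task; the display name is arbitrary):
- task: Bash@3
  displayName: 'List working directory contents'
  inputs:
    targetType: 'inline'
    script: 'ls -R'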
I got the same error when I tried to configure this a few days ago.
Try giving the full path to your TestExecution.json; that should work. Pattern matching does not work in these file paths, so provide full paths to your JSON files as well as to the project/test assembly, etc.
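For example, the task input might become something like the following (a sketch only; the directory layout is an assumption, so adjust the path to wherever your test run actually writes TestExecution.json):
testExecutionJson: '$(System.DefaultWorkingDirectory)\MyProjecct\TestResults\TestExecution.json'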

Define multiple tags and remove suites in behat.yml

I want to define multiple tags in behat.yml that correspond to suite names.
There are four suites inside the features folder, and each of them has multiple .feature files.
Ex: admin, themes
Tags to be defined: @admin, @themes.
Previously the behat.yml file contained suites for admin and themes. Now I want to define three different profiles (server1, server2, and server3) corresponding to different testing environments, and use tags instead of suites to run the feature files that carry those tags. I have added the @admin and @themes tags to every feature file in the themes and admin folders.
How should I implement this specific case in my config file?
Any assistance is greatly appreciated.
You can use different profiles, as per the docs.
So one possible way to do it would be to group the suites for each profile:
server1:
    suites:
        default:
            contexts:
                - FeatureContext
            filters:
                tags: "@admin"
server2:
    suites:
        default:
            contexts:
                - FeatureContext
            filters:
                tags: "@admin"
server3:
    suites:
        default:
            contexts:
                - FeatureContext
            filters:
                tags: "@admin,@themes"

Generate HTML report for WebdriverIO/Cucumber framework

I am using WebdriverIO/Cucumber (wdio-cucumber-framework) for my test automation and I want the test execution results in an HTML file. As of now I am using the spec reporter (wdio-spec-reporter), which prints the results to the console window, but I want all of the execution reports in an HTML file.
How can I get the WebdriverIO test execution results into an HTML file?
Thanks.
OK, finally got some spare time to tackle your question, @Thangakumar D. WebdriverIO reporting is a vast subject (there are multiple ways to generate such a report), so I'll go ahead and start with my favorite reporter: Allure!
Allure Reporter:
[Preface: make sure you're in your project root]
Install your package (if you haven't already): npm install wdio-allure-reporter --save-dev
Install Allure CommandLine (you'll see why later): npm install -g allure-commandline --save-dev
Setup your wdio.config.js file to support Allure as a reporter
wdio.config.js:
reporters: ['allure', 'dot', 'spec', 'json'],
reporterOptions: {
    outputDir: './wdio-logs/',
    allure: {
        outputDir: './allure-reports/allure/'
    }
}
Run your tests! Notice that, once your regression ends, your /allure-results/ folder has been populated with multiple .json, .txt, .png (if you have screenshot errors), and .xml files. The content of this folder is what Allure CommandLine uses to render your HTML report.
Go to your /allure-results/ folder and generate the report via allure generate <reportsFolderPath> (run allure generate . if you want your /allure-reports/ folder inside /allure-results/).
Now go into your /allure-reports folder and open index.html in your browser of choice (use Firefox for starters).
Note: The generated index.html won't load all of its content on Chrome unless you do some tweaks, because the default WebKit security settings block the AJAX calls the report makes when opened from the local filesystem.
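One way around this (a sketch, assuming the global allure-commandline install from the earlier step) is to let Allure serve the report over a local HTTP server instead of opening the file directly:
allure open allure-reports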
If you've successfully completed all the previous steps, the rendered Allure dashboard should open with your run's results.
Hope this helped. Cheers!
Note: I'll try to UPDATE this post when I get some more time with other awesome ways to generate reports from your WebdriverIO reporter logs, especially if this post gets some love/upvotes along the way.
e.g. another combo that I enjoy using: wdio-json-reporter/wdio-junit-reporter coupled with an easy-to-use templating language, Jinja2.
I have been using the Mochawesome reporter and it looks beautiful; check it out.
The Mochawesome reporter generates mochawesome.json, which can then be used to create a beautiful report using the Mochawesome report generator.
Installation:
> npm install --save wdio-mochawesome-reporter
> npm install --save mochawesome-report-generator@2.3.2
It is easy to integrate: add these lines to wdio.conf.js:
// sample wdio.conf.js
module.exports = {
    // ...
    reporters: ['dot', 'mochawesome'],
    reporterOptions: {
        outputDir: './', // the mochawesome.json file will be written to this directory
    },
    // ...
};
Add the script to package.json:
"scripts": {
"generateMochawesome": "marge path/to/mochawesome.json --reportTitle 'My project results'"
},
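With that script in place, generating the HTML report after a test run is just:
npm run generateMochawesome
(marge is the CLI that ships with mochawesome-report-generator; point the path at wherever your mochawesome.json was written.)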

Change env when acceptance testing a Laravel app with Codeception and Selenium

I'm trying to write some acceptance tests for Laravel 4 with Codeception and the Selenium module, and I have two problems.
The first is that my app runs in the Homestead Vagrant VM while the Selenium server runs on the host machine. Is there an easy way to run the Selenium server in the VM and have it drive the browser on the host machine?
The second is that the actual live database is used during testing, because the Laravel app's environment is not set to testing. Obviously I would like it to use the test database and reset it after each test.
codeception.yaml
actor: Tester
paths:
    tests: app/tests
    log: app/tests/_output
    data: app/tests/_data
    helpers: app/tests/_support
settings:
    bootstrap: _bootstrap.php
    colors: true
    memory_limit: 1024M
    suite_class: \PHPUnit_Framework_TestSuite
modules:
    config:
        Db:
            dsn: 'sqlite:app/tests/_data/testdb.sqlite'
            user: ''
            password: ''
            dump: app/tests/_data/dump.sql
acceptance.yaml
class_name: AcceptanceTester
modules:
    enabled: [WebDriver, AcceptanceHelper]
    config:
        WebDriver:
            url: 'http://app.dev'
            browser: firefox
            window_size: 1920x1024
            wait: 10
The easy way to run acceptance tests in the VM is to use PhantomJS in GhostDriver mode. There is a tutorial here:
https://gist.github.com/antonioribeiro/96ce9675e5660c317bcc
You can still see screenshots and the rendered HTML when tests fail, so not being able to watch the browser isn't a big deal.
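A minimal sketch of what the acceptance suite config might look like with that setup (assuming PhantomJS is running inside the VM in WebDriver mode via phantomjs --webdriver=4444; the host and port here are assumptions to adjust):
class_name: AcceptanceTester
modules:
    enabled: [WebDriver, AcceptanceHelper]
    config:
        WebDriver:
            url: 'http://app.dev'
            browser: phantomjs
            host: 127.0.0.1
            port: 4444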
For your second question, I prefer to keep a separate test installation with its own database. That way, changes you make in dev don't alter your test results, and it's a better approximation of production.
If you want to use the same installation, you can automate switching out your settings with .env files:
http://laravel.com/docs/4.2/configuration#protecting-sensitive-configuration
your config would look like this:
'host' => $_ENV['DB_HOST'],
'database' => $_ENV['DB_NAME'],
'username' => $_ENV['DB_USERNAME'],
'password' => $_ENV['DB_PASSWORD'],
and your .env.php would look like:
return array(
    'DB_HOST' => 'hostname',
    'DB_NAME' => '',
    // ...etc.
);
Then you can use a task runner like Robo to automatically update your .env file and run your Codeception tests.
http://robo.li/
$this->taskReplaceInFile('.env.php')
    ->from('production_db_name')
    ->to('test_db_name')
    ->run();
$this->taskCodecept()->suite('acceptance')->run();
.env files are changing in Laravel 5, but this workflow still works with minimal modification.
http://mattstauffer.co/blog/laravel-5.0-environment-detection-and-environment-variables
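Putting it together, a minimal RoboFile sketch (the class and method layout follow Robo's conventions; the database names are the placeholders from above):
<?php
// RoboFile.php
class RoboFile extends \Robo\Tasks
{
    public function testAcceptance()
    {
        // point the app at the test database before the run
        $this->taskReplaceInFile('.env.php')
            ->from('production_db_name')
            ->to('test_db_name')
            ->run();

        // run the Codeception acceptance suite
        $this->taskCodecept()->suite('acceptance')->run();
    }
}
You would then invoke it as robo test:acceptance.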

Passing CLI parameters to CasperJS through a Grunt task and npm test

I'm running tests with npm test, which actually runs a Grunt task, grunt casperjs:
casperjs: {
    options: {},
    files: [
        './test_index.js',
        './test_map_regression.js',
        './test_index_get_gush.js'
    ]
},
using the grunt-casperjs plugin in order to automate testing with SlimerJS alongside PhantomJS, both running under CasperJS on Travis CI.
In order to do that, I need to pass the engine as a variable from the command line, something like:
casperjs --engine=slimerjs test_suite.js
Question: I can't find a way to pass the options from the Grunt CLI (and I assume npm command-line options would delegate to Grunt, correct?) to the files array.
I tried to add:
var engine = grunt.option('engine') || 'phantomjs';
engine = '--engine=' + engine;
and then in the files array do:
files: ['./test_index.js ' + engine,
        './test_map_regression.js ' + engine,
        './test_index_get_gush.js ' + engine]
but it seems that the files array has to contain real file names without appended arguments.
I'd be glad for any ideas on how to solve this.
I haven't tested this, but looking at the grunt-casperjs source, it looks as though you would want to pass the engine as an option.
So, something like this should work:
casperjs: {
    options: {
        'engine': 'slimerjs'
    },
    files: [
        './test_index.js',
        './test_map_regression.js',
        './test_index_get_gush.js'
    ]
}
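To keep the engine switchable from the command line, you can combine this with grunt.option (a sketch; grunt.option is standard Grunt, while whether grunt-casperjs honors the engine option depends on the plugin version, as noted above):
// Gruntfile.js (excerpt)
casperjs: {
    options: {
        // read --engine from the grunt CLI, defaulting to phantomjs
        engine: grunt.option('engine') || 'phantomjs'
    },
    files: [
        './test_index.js',
        './test_map_regression.js',
        './test_index_get_gush.js'
    ]
}
You would then run, for example: grunt casperjs --engine=slimerjs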