How to send a Protractor report generated using protractor-html-screenshot-reporter to different emails

I am using protractor-html-screenshot-reporter for report generation through Protractor, and it provides me with an HTML report.
But how can I send the report as an attachment to the different email IDs of my other team members?
Or is there any other Protractor plugin for email report generation?

This is not something I would solve on the Protractor side. Delegate the task to your Continuous Integration server, e.g. Jenkins, Bamboo, or something else.
Generate something your CI can understand - a test report in JUnit XML format, with the help of the JUnitXmlReporter from jasmine-reporters - and configure your CI to send emails when your tests fail.
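As an illustration, here is a minimal sketch of that setup in a Protractor config; the spec and output paths are assumptions, and the reporter options follow the jasmine-reporters 2.x API:

    // conf.ts - a minimal sketch; paths and option values are assumptions
    import { Config } from 'protractor';
    // jasmine-reporters ships without bundled typings, hence require()
    const jasmineReporters = require('jasmine-reporters');

    export const config: Config = {
      framework: 'jasmine',
      specs: ['specs/*.spec.js'],
      onPrepare: () => {
        // Emit a JUnit XML report that the CI server can parse
        jasmine.getEnv().addReporter(new jasmineReporters.JUnitXmlReporter({
          savePath: 'target/junit', // hypothetical output directory
          consolidateAll: true      // one XML file for the whole run
        }));
      }
    };

Jenkins, for instance, can then publish target/junit/*.xml via its JUnit plugin and notify the team through the Email Extension plugin when the build fails or turns unstable.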

Related

Cypress mochawesome reporter - how to send the report

We are currently running the Cypress tests on production, and reports are being generated on the Cypress dashboard, but my team wants me to use mochawesome reports. I have mochawesome implemented without any problem.
But I want to know how to share these HTML reports with my team:
Can I upload the HTML reports to a Slack channel?
Can I automate it so that at the end of a test run, the HTML reports are sent as an email?
Can I share these HTML reports on Notion?
Please suggest the easiest solution for this.

How to create an automation test flow with JMeter?

Is it possible to create an automation test flow with JMeter, so that all the JMeter tests run by themselves and I just get the generated results?
Also, can we add different scenarios for an application in that automation?
It would be great if someone could share how to start from scratch with JMeter automation.
Thanks
I don't think it is possible to have JMeter tests "run by themselves" - something has to invoke them.
JMeter tests can be invoked in multiple ways:
Command-line
Ant task
Maven plugin
All options provide results either in .jtl file format or in HTML
If you want to invoke the tests in an unattended manner, you can either rely on your operating system's task scheduling mechanisms, like the Windows Task Scheduler or cron, or even use a continuous integration tool like Jenkins; all of them are capable of kicking off arbitrary tasks depending on various criteria, producing reports, displaying trends, etc.
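To illustrate the command-line option, a typical unattended run looks like this (file and folder names are hypothetical):

    jmeter -n -t plan.jmx -l results.jtl -e -o ./report

Here -n enables non-GUI mode, -t points to the test plan, -l writes the raw .jtl results, and -e/-o generate the HTML dashboard into the given folder; the same line can be placed in a crontab entry or a Jenkins job for scheduled execution.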

Codeception-style reporting for Behat

Does anybody know if there's a way to get test reports for Behat similar to what we can get from Codeception?
I mean, Behat with the pretty format output just gives us the steps from the feature files, whereas Codeception reports the exact request body with headers and also reports the responses.
Is there any way to get similar reports for Behat? Maybe some extension or plugin? I did a search but didn't find anything similar.
Behat is a generic TDD framework and does not know anything about web requests by default. You could easily use it for unit testing in your application or for testing some CLI application.
So, if you need extended information on requests during the step failures, you have to:
Provide Behat with those Request/Response objects
Customize Behat behavior in AfterStep, AfterScenario and/or AfterFeature hooks to make use of those Request/Response objects
Find or write custom output formatter to output that info
You can see examples of custom formatters (initially written by yours truly) in the Allure reporting tool - it is worth giving it a try for a rich reporting experience.

How To Automate Online Workflow?

I've tried to research this topic and found resources like Selenium, but I'm not entirely sure how to do what I need.
Basically here is the workflow:
A user completes a form on our website
The form inputs get emailed to me
I log in to the related online database system (it's always the same) to produce the necessary report based on their request.
I then print a PDF version of the report and email it back to them with our email template (customized based on some of their inputs on the website).
Is there a way to automate this? Maybe even run it on a server so users can get the reports even when my computer is off?
Any help would be great!
Thanks.
If you are not able to use the API of the resource which gives you the PDF file...
I'd go like this:
Configure Jenkins CI on a server.
When a user completes a form, send an HTTP POST request to Jenkins to build a parametrized job using data from the user (a sketch follows this list).
The Jenkins job runs the Selenium tests to get the desired PDF file.
Using the Jenkins email notification plugin, send a customized email with the PDF file from the previous step.
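As a sketch of step 2, the website's form handler could call Jenkins' buildWithParameters endpoint. The Jenkins URL, job name, parameter names, and credentials below are all hypothetical (TypeScript, Node 18+ for the global fetch):

    // Trigger a parameterized Jenkins job from the form handler.
    const jenkinsUrl = 'https://jenkins.example.com'; // hypothetical
    const auth = 'Basic ' + Buffer.from('ci-user:api-token').toString('base64');

    async function triggerReportJob(userEmail: string, accountId: string): Promise<void> {
      const params = new URLSearchParams({ USER_EMAIL: userEmail, ACCOUNT_ID: accountId });
      const res = await fetch(`${jenkinsUrl}/job/generate-report/buildWithParameters`, {
        method: 'POST',
        headers: { Authorization: auth },
        body: params, // sent as application/x-www-form-urlencoded
      });
      if (!res.ok) {
        throw new Error(`Jenkins responded with ${res.status}`);
      }
    }

Authenticating with a Jenkins API token rather than a password also avoids the CSRF crumb requirement on this endpoint.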

JMeter recording in non-GUI mode for Selenium testing

I would like to record the network requests of a Selenium test. Rather than using the JMeter GUI, I would like to automate this process in a script. The idea would be to run a Selenium test and record it simultaneously. Is there a built-in way to do this?
The JMeter GUI is ideal for developing scripts, which are then run headless to get true performance figures.
You can develop Selenium WebDriver tests as JUnit tests, which integrate with JMeter very easily.
Running a recorded Selenium IDE test would not be so straightforward; it is better to export it as Java WebDriver code.
Make sure you follow the JUnit naming conventions and annotations (depending on which version you use). Write the test class, including JUnit 'test' methods, then create a JMeter test plan with a JUnit sampler, and configure the JUnit sampler to run your test method[s].
Maven and plugins can be used to make it all run seamlessly in headless mode on any host.
Once you have developed one of these, it will be easy to review the .jmx test script and automate the process of creating more JMeter tests, if that is what you meant by automating the process.
Start JMeter's Proxy Server
Configure your Selenium script to use JMeter as a proxy - see the Using a Proxy guide for configuration details for each driver; a sketch follows at the end of these steps.
Run the Selenium test - all requests should be recorded by JMeter.
Add the following test elements to the JMeter test plan:
HTTP Cookie Manager - to represent browser cookies
HTTP Cache Manager - to represent browser cache
Follow the recommendations from How to make JMeter behave more like a real browser to properly configure embedded resources retrieval, the user agent, request defaults, etc.
Configure Thread Group parameters according to your load scenario.
Replay the test.
I expect that you will need to apply some correlation, but it may not be required.
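For the proxy configuration step above, here is a minimal sketch using the selenium-webdriver Node bindings (TypeScript). It assumes JMeter's HTTP(S) Test Script Recorder is listening on localhost:8888, its usual default port:

    // Route all browser traffic through JMeter so it gets recorded.
    import { Builder } from 'selenium-webdriver';
    import * as proxy from 'selenium-webdriver/proxy';

    async function runRecordedScenario(): Promise<void> {
      const driver = await new Builder()
        .forBrowser('chrome')
        .setProxy(proxy.manual({ http: 'localhost:8888', https: 'localhost:8888' }))
        .build();
      try {
        await driver.get('https://example.com/'); // hypothetical application under test
        // ... the rest of the Selenium scenario; every request shows up in JMeter
      } finally {
        await driver.quit();
      }
    }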