I know there is an existing post asking if it's possible to generate an HTML report of code coverage analysis for tests written and run with Intern, and it's been answered:
Generate HTML code coverage reports with intern
However, the post doesn't mention what type of environment the OP runs in; i.e., are the tests running in a Node.js client? I ask because I am running my unit tests with the Intern framework in a browser [edit: invoking the tests via a URL like http://path/to/intern-tutorial/node_modules/intern/client.html?config=tests/intern]. The article here:
https://github.com/theintern/intern/wiki/Using-and-Writing-Reporters#custom-reporters
states that HTML is the only reporter available for the browser platform; LCOV and LCOVHTML are not. Has that changed at all? This limited selection of reporters for browsers isn't very convenient, and I was hoping to take advantage of the Istanbul support built into Intern rather than try to plug in another code coverage analysis tool (or hack together my own thing :( ).
Code coverage information will be correctly retrieved from code running in browsers if you run your tests with intern-runner. The actual collation and output of the coverage results occurs on the server (Node.js) side.
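For reference, here is a minimal sketch of the kind of config that enables this. It assumes an Intern 2/3-style AMD config file; the reporter names, paths, and suites are placeholders, so check them against the version of Intern you are running (older versions use lowercase names like 'lcovhtml').

// tests/intern.js -- hypothetical intern-runner config (adjust suites, paths,
// and reporter names to your Intern version)
define({
  // Browsers that intern-runner will drive through WebDriver
  environments: [ { browserName: 'chrome' } ],

  // Unit test suites to load in each browser
  suites: [ 'tests/unit/all' ],

  // Coverage data is gathered in the browser but collated in Node.js,
  // so LCOV-based reporters work here even though client.html can't use them
  reporters: [ 'Runner', 'LcovHtml' ],

  // Keep Intern itself and third-party code out of the coverage numbers
  excludeInstrumentation: /^(?:tests|node_modules)\//
});

You would then run something like ./node_modules/.bin/intern-runner config=tests/intern instead of loading client.html, and the HTML coverage report is written on the Node.js side where the runner executes.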
I am using the Karate framework for API testing. As part of our CI efforts, we send an email at the end of test execution with a summary of the test results. There is a need to include a screenshot of the test execution counts from the 'overview-feature.html' file.
I did this through the TestRunner.java file: I launch Chrome using Chrome.start() and then use it to take the screenshot. It all works well locally on Windows.
However, when executing on the CI server, which is a Unix box, the Chrome executable is not present in the default location (/usr/bin/google-chrome), and hence the connection to localhost fails.
Is there a way we can change the default location of the chrome executable?
PS: Apologies if this was too trivial to be asked.
Yes, Chrome on CI is hard to get right; refer to https://stackoverflow.com/a/62325328/143475. Note that CI boxes are typically "headless", so a browser may not even be installed.
I think the best thing for you is to ZIP the HTML and send it. But I really think you need to work with some CI experts, because the report generation and e-mailing business is normally done by things like Jenkins. What you are doing is certainly not normal or best-practice.
If you really want, there is a Karate Docker container that can give you a proper Chrome instance (see docs) but that is overkill for what you need.
EDIT: The Chrome Java API allows you to customize the executable path, and this is covered in the docs: https://github.com/intuit/karate/tree/master/karate-core#chrome-java-api
It should be something like this:
Chrome.start("/opt/blah/chrome");
I really don't know how to phrase this question for Google, so please excuse me if it is naive.
Our team is developing an SPA application in ReactJS. We also do back-end programming in Node.js. Our project recently gained more e2e tests, written using webdriver.io packages. Everything works as expected, but roughly 30 tests take about 50 minutes to run. That is too long to pause a developer's work and force them to run the tests.
We came up with the idea that, now that we have so many tests, we need to run them on a separate computer (other than a developer's laptop; I'll call it the e2e-laptop from here on).
So I wrote a bash script and installed Ubuntu on the e2e-laptop. My idea is that a developer who wants to run the e2e tests logs in to the e2e-laptop with ssh, runs the script with arguments (e.g. --rev= the specific git revision the tests should run against, --email= where to send the Allure report), and logs out. After the tests are done, they get the Allure report in their mailbox.
This all sounds OK to me, but not great. It works; it is like a dirty MVP. What I would really like to give my team is a browser-based UI that provides the features my script has. I imagine this software hosted on the e2e-laptop, so every developer can open its web page in a local browser. Then, after authorization, there are options: run all specs, run selected specs, send the report, and more. It would be best if the software also allowed tests commissioned by multiple developers to run simultaneously.
What software do I need?
You need a continuous integration tool. https://stackify.com/top-continuous-integration-tools/
I recommend Jenkins.
I would first try to run your Selenium tests headless in a Docker container on your laptop. Once you are able to do that, use that same configuration in your Docker container running in Bitbucket Pipelines. It could actually be the same container and the same scripts. Then, developers can just make a branch and work with the tests on that branch. If only a certain subset of tests needs to run, the developer can make the necessary changes on his or her local branch to run those tests and push it up to Bitbucket. This should help with the configuration: https://github.com/SeleniumHQ/docker-selenium.
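As a rough sketch of what that can look like (the image name, port, and option names are assumptions; adjust them to your webdriver.io version and setup), you could start a standalone Chrome container from docker-selenium and point the wdio config at it:

// wdio.conf.js -- hypothetical fragment; assumes a container started with:
//   docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome
exports.config = {
  // Point webdriver.io at the Selenium server inside the container
  // (older wdio versions use 'host' instead of 'hostname')
  hostname: 'localhost',
  port: 4444,
  path: '/wd/hub',

  // Run Chrome headless so the same config works on a laptop and in Bitbucket Pipelines
  capabilities: [{
    browserName: 'chrome',
    'goog:chromeOptions': { args: ['--headless', '--window-size=1280,800'] }
  }],

  // Edit this glob (or override it from the CLI) to run only a subset of specs on a branch
  specs: ['./test/specs/**/*.js']
};

A developer could then run only the specs they care about with something like npx wdio wdio.conf.js --spec ./test/specs/login.spec.js (the spec path here is just an example).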
I'm running NUnit tests externally via nunit3-console.
I'm not able to see any console logs / Console.WriteLine output.
I find it essential to be able to track every step of the test in real time.
I've read that the NUnit 3 framework introduced a parallel test run feature, which is why real-time test output logging was removed.
But what if I want the best of both worlds?
How can I trigger console logs during a test run?
Thanks in advance.
The NUnit 3 framework doesn't support live console output when running tests from nunit3-console; because NUnit 3 was built around the parallel testing methodology, it was considered pointless to allow such output when more than one test is running in parallel.
I solved this by downgrading to NUnit 2.6.4, which doesn't support parallel testing and allows me to write console output from my tests.
I'm running tests using Siesta from command line.
Is it possible to generate screenshots on each test failure while running the Siesta test runner?
Edit:
We need to wait for this feature; see the changelog:
http://www.bryntum.com/products/siesta/changelog/
It's not possible in the current version (2.0.8), but they are planning to add this in the future.
More info can be found on their forum: http://www.bryntum.com/forum/
Siesta can also be run in the cloud on http://www.browserstack.com/ or https://saucelabs.com/. These services even offer full screencasts of the test runs.
I have a Grails application that, when run on my local Windows machine, passes all tests in my integration test suite. When I deploy my app to my Test environment in Jenkins, and run the same suite of tests, a few of them are failing for inexplicable reasons.
I think the Test box is Linux but I am not sure. I am using mocks in my Grails app and am wondering if that may be causing confusion in values returned.
Has anyone any ideas?
EDIT:
My app translates an XML document into a new XML document. One of the elements in the returned XML document is supposed to be PRODUCT but comes back as product.
The place where this element is set is from an in-memory database that is populated from a DB script. It is the same DB script that is used locally and on my Test environment.
The app does not read any config files that would be different in different environments.
As the others have stated, there really isn't enough information here to give a solid answer. A couple of things I would look at are:
If it's integration tests that are failing, maybe you've got some "bad tests" that depend on data that does not exist in the test environment Jenkins is running against.
There is no guaranteed consistency of test execution order across machines/platforms. So it's entirely possible that the tests pass for you locally just because they run in a certain order and leave things mocked out, or leave data set up by one test that is needed in another. I wrote a plugin a while ago (http://grails.org/plugin/random-test-order) to help identify these problems. I haven't updated the plugin since Grails 1.3.7, so it may not work with 2.0+ Grails apps.
If the steps above don't identify the problem, it would help to know any differences in how you are invoking the tests on Jenkins vs. locally. For example, whether you specify a particular Grails environment (http://grails.org/doc/latest/guide/conf.html#environments) when running on Jenkins, and what the differences are between that and the Grails environment used on your local machine.