Cypress tagged hooks with mocha - automation

I have been building ui automation frameworks with Cypress for some time, but always using the Cypress-Cucumber-Preprocessor.
Now I need to build one without Cucumber, just plain ol' Mocha, but I've hit a problem: it seems I can't use tagged hooks to execute code for specific tests (scenarios in Cucumber).
The scenario is basically this. I have a spec file with several tests. I have a "before" hook that seeds test data to a Mongo db, and eventually I might need to add a hook or hooks to execute something (whatever) before a specific test.
With Cucumber you have a way to tag a given scenario (@tag), and then you can create a hook that will be executed ONLY before or after that specific scenario:
@tag
Scenario: Tagged scenario
  Given condition
  When I do this
  Then I should see that

Before({ tags: '@tag' }, () => {
  // code that runs only before scenarios tagged @tag
})
I haven't found a way to do this with Mocha in Cypress... Has anyone found a way?
thx

You can use beforeEach or before, which do essentially the same thing in Mocha.
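One possible workaround, a minimal sketch rather than a built-in Cypress/Mocha feature: embed a "tag" in the test title and branch on it inside a regular beforeEach via Mocha's this.currentTest. The seedDb task name and the @special tag are assumptions for illustration only.

describe('my spec', () => {
  before(() => {
    // seed the Mongo test data once for the whole spec file
    cy.task('seedDb'); // assumes a "seedDb" task is registered in the plugins file
  });

  beforeEach(function () {
    // Mocha exposes the upcoming test in hooks via this.currentTest
    // (use a regular function here, not an arrow function)
    if (this.currentTest.title.includes('@special')) {
      // code that should run only before tests carrying this "tag"
    }
  });

  it('plain test', () => {
    // ...
  });

  it('does something extra @special', () => {
    // ...
  });
});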

Related

Testcafe: Ability to tell value of disableMultipleWindows inside of test?

I have a test that sometimes gets run with the command line flag --disable-multiple-windows and sometimes it doesn't. I would like the test to skip over a section if multiple windows are disabled. Is there an easy way to tell that within a test?
My best solution so far is something like the following
let disableMultipleWindows = true;
try {
    await t.openWindow('www.google.com');
    disableMultipleWindows = false;
    await t.closeWindow();
} catch (e) {
    // openWindow throws when multiple windows are disabled
}
But I'm wondering if there's a better way.
At present, disabling support for multiple browser windows is possible only for an entire test run. There is no access to the --disable-multiple-windows option within a test body. You can learn more about this option at Disable Support for Multiple Windows.
Could you please clarify your scenario for using the --disable-multiple-windows option? I think you can split your tests into different test suites: with and without multiple browser windows.
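A minimal sketch of the suite-splitting idea, assuming you control the command line: drive both the --disable-multiple-windows flag and the test code from the same environment variable (DISABLE_MULTIPLE_WINDOWS is an invented name here, not a TestCafe API), so multi-window tests are simply skipped when the feature is off.

// run with e.g.: DISABLE_MULTIPLE_WINDOWS=1 testcafe chrome tests/ --disable-multiple-windows
const multipleWindowsDisabled = process.env.DISABLE_MULTIPLE_WINDOWS === '1';

fixture `Multi-window behaviour`
    .page `https://example.com`;

// register the multi-window test only when the feature is available
(multipleWindowsDisabled ? test.skip : test)('works across two windows', async t => {
    await t.openWindow('https://example.com/child');
    await t.closeWindow();
});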

Intern: Execute a helper function after every single functional test

I'm currently trying to execute a specific helper function after every test case.
The problem with the beforeEach function is that by then the test is already flagged as successful/passed (its test lifecycle has already finished).
Is there any configuration option to execute a helper function after every test case, without pasting it into every single test case?
I'm using the Intern Testframework with the BDD Testinterface.
The docs for the BDD interface used for Intern tests are here:
https://theintern.io/docs.html#Intern/4/api/lib%2Finterfaces%2Fbdd
You can use the afterEach() method to execute whatever you like once each test in your suite has finished.
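A minimal sketch, assuming Intern 4's BDD interface; cleanUpAfterTest() is a hypothetical helper standing in for whatever you need to run:

const { describe, it, afterEach } = intern.getPlugin('interface.bdd');
const { assert } = intern.getPlugin('chai');

describe('my functional suite', () => {
    // runs after every single test in this suite, whether it passed or failed
    afterEach(async () => {
        await cleanUpAfterTest(); // hypothetical helper
    });

    it('first test', () => {
        assert.isTrue(true);
    });

    it('second test', () => {
        assert.isTrue(true);
    });
});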

Ember -- component integration async tests aren't waiting until async calls are returned

I'm having a hard time testing async functionality in component integration tests. Input kicks off an async call to an endpoint, and when it returns, I send an action. I'm trying to test that the action sends the correct data.
I've tried putting my assertion in the wait() helper, but the assertion gets hit before the (dependent upon async) action is called.
Here's a twiddle showing that problem: https://ember-twiddle.com/79f9a80c639b642e538803ac64a1cf9d?openFiles=tests.integration.components.test-comp-test.js%2Ctemplates.components.test-comp.hbs
How can I correctly code my async component integration tests?
There are two things that fail your test:
Firstly, never use setTimeout (window.setTimeout) to schedule future work with Ember; use the Ember way of doing it, i.e. Ember.run.later. The same thing happened to me with acceptance tests; please see the following question and look through the comments on the answer. The reason is that Ember's test helpers simply cannot track setTimeout the way we expect.
Secondly, you have a problem within the test itself: in the action handler you have written in the test, you need to change the name attribute instead of returning a promise.
Anyway, please see the following twiddle I have updated. Testing in general with Ember is kind of a pain, since I believe there is no proper, comprehensive documentation. Good luck!
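For illustration, a minimal sketch of the first point (component and action names are made up): inside the component, schedule the delayed action with Ember.run.later instead of window.setTimeout, so the test's wait() helper knows there is pending work.

import Ember from 'ember';

export default Ember.Component.extend({
  actions: {
    startAsyncWork() {
      // Ember.run.later is tracked by Ember's run loop, unlike window.setTimeout,
      // so wait() in the integration test resolves only after this timer fires
      Ember.run.later(this, function () {
        this.sendAction('onDone', 'result');
      }, 500);
    }
  }
});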

Protractor flakiness

I maintain a complex Angular (1.5.x) application that is being E2E tested with Protractor (2.5.x). I am experiencing a problem with this approach, which manifests primarily as flaky tests. Tests that worked perfectly well in one pull request fail in another. This concerns simple locators, such as by.linkText(...). I debugged the failing tests and the app is on the correct page, and the links are present and accessible.
Has anyone else experienced these consistency problems? Knows of a cause or workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual, merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0) which also brings latest selenium and chromedriver with it
turn off Angular animations (a config sketch after this list shows one way to do this)
use browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific; you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this in onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps but I've seen reports where it definitely helped to fix a flaky test.
use the jasmine done() callback in your specs. This may help, for example, to ensure an it() block does not start until done is called in beforeEach()
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run
use protractor-flake package that would automatically re-run failed tests. More like a quick workaround to the problem
There are also other problem-specific "tricks" like slow typing into the text box, clicking via JavaScript etc.
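As an illustration of a couple of the points above, here is a minimal Protractor config sketch (values and file paths are placeholders) that disables AngularJS animations from onPrepare() and raises the Protractor and Jasmine timeouts:

exports.config = {
    framework: 'jasmine',
    specs: ['specs/**/*.spec.js'],

    // Protractor's own timeout for synchronizing with Angular
    allScriptsTimeout: 30000,

    jasmineNodeOpts: {
        // per-spec Jasmine timeout
        defaultTimeoutInterval: 60000
    },

    onPrepare: function () {
        // disable $animate-driven animations so elements settle faster
        var disableNgAnimate = function () {
            angular.module('disableNgAnimate', []).run(['$animate', function ($animate) {
                $animate.enabled(false);
            }]);
        };
        browser.addMockModule('disableNgAnimate', disableNgAnimate);

        // maximize the window, as suggested above
        browser.driver.manage().window().maximize();
    }
};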
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is quite a common issue with any browser automation tool. However, it is supposed to be less of a problem with Protractor, as Protractor has built-in wait handling that performs actions only after the DOM has loaded properly. Still, in a few cases you might have to use some explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods like:
function waitForElementToClickable(locator) {
    var domElement = element(by.css(locator)),
        isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);
    return browser.wait(isClickable, 2000)
        .then(function () {
            return domElement;
        });
}
Here 2000 ms is used as the timeout; you can make it configurable using a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits works.
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue, which causes flaky tests. I work around this by explicitly adding them to the controlFlow(), e.g.:
this.enterText = function(input, text) {
    return browser.controlFlow().execute(function() {
        input.sendKeys(text);
    });
};
A workaround that my team has been using is to re-run only failed tests using the plugin protractor-errors. Using this tool, we can identify real failures versus flakey tests within 2-3 runs. To add the plugin, just add a require statement to the bottom of the Protractor config's onPrepare function:
exports.config = {
    ...
    onPrepare: function() {
        require('protractor-errors');
    }
}
You will need to pass these additional parameters when you run your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a cli tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.

cucumber + capybara step definition - sending POST requests

I have the following step definition:
When /^I upload it$/ do
end
which relates to a file upload. The visit method in Capybara, from what I can tell, is a GET-only method, and the only way to do a POST request is with an implementation like:
visit "/files/new"
within('#upload-form') do
attach_file('File', #files_path+'/file.txt')
click_button('Upload')
end
This doesn't seem like a very strong test, as it's dependent on the HTML and form tags within the files/new template.
Is there a better way to handle this, or is this OK to do? I had in mind something like this:
post files_new_path, file: 'a_file_on_the_system.txt'
But then again, Cucumber tests are integration tests... so which is the "official" or best way to write tests at this level?
Capybara code mimics human actions. You can't expect a human to "POST", only to "visit", "click_button", etc.
The syntax you mentioned would fit a controller test better, not an integration test with Capybara.
The best style in integration tests, in my opinion, is to think and act like a human being, not a machine.