Is there a way to bypass the UI for those actions that need to be performed before and/or after the test? Is it possible to send simple GET or POST requests in the same test session instead of scripting those steps in the test?
For example, I want to write a test which checks that a record can be deleted. To do that, first of all I need to create the record. Doing that through the UI doesn't seem like a good choice, since it is not part of the test itself.
It really depends on the application under test. You probably don't want to go making SQL calls to your database to create these records unless you really know what you're doing; even then, it will make your test automation break when that record changes.
Perhaps your application under test provides an API which will allow you to create the target record. That would be ideal: make an API request to create the record, and then all you have to do in the UI is navigate to where the "user" would delete it.
You can do pretty much everything by executing JavaScript in the page.
Here is an example that sends an HTTP request with a JavaScript call:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.google.com")

# Run a synchronous XHR inside the page and hand the response body back to Python
response_text = driver.execute_script("""
    var r = new XMLHttpRequest();
    r.open('POST', '/search', false);  // false = synchronous, so responseText is available on return
    r.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    r.send('q=bill+material&output=xml&client=test&site=operations&access=p');
    return r.responseText;
""")
While it may be tempting to set up a test this way, I wouldn't recommend it: it creates new dependencies on the UI, increases the complexity, and therefore increases the cost of maintaining the tests.
I have a test that sometimes gets run with the command line flag --disable-multiple-windows and sometimes it doesn't. I would like the test to skip over a section if multiple windows are disabled. Is there an easy way to tell that within a test?
My best solution so far is something like the following
let disableMultipleWindows = true;
try {
    // If a second window can be opened, multiple windows are enabled
    await t.openWindow('https://www.google.com');
    disableMultipleWindows = false;
    await t.closeWindow();
}
catch (e) {
    // openWindow fails when multiple windows are disabled
}
But I'm wondering if there's a better way.
At present, disabling support for multiple browser windows is possible only at the test run level. There is no access to the --disable-multiple-windows option within a test body. You can learn more about this option in the Disable Support for Multiple Windows topic.
Could you please clarify your scenario for using the --disable-multiple-windows option? I think you can split your tests into different test suites: with and without multiple browser windows.
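For instance, a minimal sketch of that split, assuming the tests are grouped into two directories and run via npm scripts (the directory and script names are placeholders, not anything TestCafe requires):

"scripts": {
    "test:multi-window": "testcafe chrome tests/multi-window",
    "test:single-window": "testcafe chrome tests/single-window --disable-multiple-windows"
}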
I'm using TestCafe for my functional tests.
My project uses a lot of XHR requests and I don't want to spend my time generating every single mock by hand.
Is there an automocker like this one for TestCafe: https://github.com/scottschafer/cypressautomocker?
TestCafe does not provide the described functionality out of the box. However, you can use a combination of RequestLogger and RequestMock.
The idea is that you can create a JSON file with the request results on the first run using the RequestLogger.
Then, based on the results of that first run, you can configure your RequestMock object to respond with the results from the file for all subsequent requests.
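A rough sketch of that idea (the URL filter, page, and file name are assumptions, not part of the TestCafe API):

import fs from 'fs';
import { RequestLogger, RequestMock } from 'testcafe';

// First run: log every API response so it can be replayed later
const logger = RequestLogger(/\/api\//, {
    logResponseBody: true,
    stringifyResponseBody: true
});

fixture('Record API responses')
    .page('https://example.com')
    .requestHooks(logger);

test('record', async t => {
    // ...drive the application through the scenario...
    fs.writeFileSync('recorded-requests.json', JSON.stringify(
        logger.requests.map(r => ({
            url: r.request.url,
            status: r.response.statusCode,
            body: r.response.body
        }))
    ));
});

// Subsequent runs: build one RequestMock per recorded request and attach
// the mocks with .requestHooks(...mocks) instead of the logger
const recorded = JSON.parse(fs.readFileSync('recorded-requests.json', 'utf8'));
const mocks = recorded.map(r => RequestMock().onRequestTo(r.url).respond(r.body, r.status));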
I maintain a complex Angular (1.5.x) application that is being E2E tested with Protractor (2.5.x). I am experiencing a problem with this approach, which manifests primarily as flaky tests. Tests that worked perfectly well in one pull request fail in another. This affects even simple locators, such as by.linkText(...). I debugged the failing tests: the app is on the correct page, and the links are present and accessible.
Has anyone else experienced these consistency problems? Does anyone know of a cause or a workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0) which also brings latest selenium and chromedriver with it
turn off Angular animations
use browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific: you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this in onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps but I've seen reports where it definitely helped to fix a flaky test.
use the Jasmine done() callback in your specs. This can help ensure, for example, that an it() block does not start until done() is called in beforeEach()
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run
use protractor-flake package that would automatically re-run failed tests. More like a quick workaround to the problem
There are also other problem-specific "tricks" like slow typing into the text box, clicking via JavaScript etc.
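For instance, a hedged sketch of the "click via JavaScript" trick (the locator is just a placeholder):

var elm = $("#save-button");  // placeholder locator
// Click through the DOM instead of the WebDriver native click,
// which sometimes helps when something intermittently intercepts the click
browser.executeScript("arguments[0].click();", elm.getWebElement());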
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is quite a common issue with any browser automation tool. However, it is supposed to be less of a problem with Protractor, since Protractor has built-in waits and performs actions only after the DOM has loaded properly. Still, in a few cases you might have to use some explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods like:
function waitForElementToClickable(locator) {
    var domElement = element(by.css(locator)),
        isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);
    // Resolve with the element once it is clickable, or fail after the timeout
    return browser.wait(isClickable, 2000)
        .then(function () {
            return domElement;
        });
}
Here 2000 ms is used as the timeout; you can make it configurable with a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits work.
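A hedged usage example (the selector is a placeholder):

waitForElementToClickable('#save-button').then(function (saveButton) {
    saveButton.click();
});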
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue, which causes tests to be flaky. I work around this by adding them to the controlFlow() explicitly, e.g.:
this.enterText = function(input, text) {
    // Queue the sendKeys call explicitly on the WebDriver control flow
    return browser.controlFlow().execute(function() {
        input.sendKeys(text);
    });
};
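A hedged usage example from a page object that defines enterText (the locator and text are placeholders):

var searchBox = element(by.css('#search'));
page.enterText(searchBox, 'some text');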
A workaround that my team has been using is to re-run only the failed tests using the plugin protractor-errors. With this tool, we can distinguish real failures from flaky tests within 2-3 runs. To add the plugin, just add a require statement at the bottom of the Protractor config's onPrepare function:
exports.config = {
    ...
    onPrepare: function() {
        require('protractor-errors');
    }
};
You will need to pass these additional parameters when you run your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a CLI tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.
I'm trying to write some end to end tests for our application's login process, but am having trouble getting my head around the best way to set up the scenario where the user needs to change his password.
When our server responds to a successful login, a user object is returned with a changePassword field. The client then inspects the response and redirects accordingly.
My problem is getting the test set up so that the changePassword field is set - what is the best approach to use?
I see my options as:
Have a test setup and tear-down script for the server that creates a brand new user specifically for the test run, with the changePassword flag set in the database.
This seems like the most end to end approach, but is probably also the most effort & code.
Somehow intercept the http response in the test and modify the changePassword flag to be set for this test only.
Mock the http response completely. Using this approach is the most removed from an end to end test, but is perhaps the simplest?
Which is the best or most common approach? Also, any general pointers on how to actually implement the above (particularly 1 and 2) with Protractor would be great - I'm finding it hard to get this conceptually straight in my head, and hence hard to know what to search for.
I'm using Protractor as the test framework, with Angular.js powering the client side, and a Node server utilising (among other things) Express.js and MongoDB.
Having thought about this further, option 1 is the best solution, but is not always possible.
Option 2 is also possible, and option 3 should be avoided.
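For option one, a very rough sketch of what the setup script could look like with the Node MongoDB driver (the connection string, database, collection, and field names are all assumptions about your schema); the tear-down script would delete the same user:

const { MongoClient } = require('mongodb');

async function createChangePasswordUser() {
    const client = await MongoClient.connect('mongodb://localhost:27017');
    try {
        // Insert a throwaway user whose next login must trigger the password change flow
        await client.db('myapp').collection('users').insertOne({
            username: 'e2e-change-password-user',
            password: 'hashed-password-here', // hash it however your app expects
            changePassword: true
        });
    } finally {
        await client.close();
    }
}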
For option two, a mock module can be created like so: (coffeescript)
e2eInterceptors = ->
  angular.module('e2eInterceptors', [])
    .factory('loginInterceptor', ->
      response: (response) ->
        # Only edit responses we are interested in
        return response unless response.config.url.match(/login/)
        # do the modifications
        response.data.changePassword = true
        # return the modified response
        return response
    )
    .config(($httpProvider) ->
      $httpProvider.interceptors.push('loginInterceptor')
    )
You can then inject this module into your tests using
browser.addMockModule('e2eInterceptors', e2eInterceptors)
If you want to do this globally, you can put this in the onPrepare function in your protractor file, otherwise just call it when needed in tests.
I think your first approach is the most appropriate.
It would be useful anyway to test the new user creation, so it is not a waste.
This example seems to cover something similar: http://product.moveline.com/testing-angular-apps-end-to-end-with-protractor.html
What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables are going to change for each run?
Quick, common example:
You go to a Web Page
Enter some information into a form
Click Save
Behind the scenes, a new record is entered in the database
You want to edit the record you just entered, so you go to another web page. Behind the scenes it's passing the page a parameter with the Database ID of the row you just created
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is
Record a test using the above actions
Make a note of each place where a query string variable may change from run to run
Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable
Replace all appropriate instances of the hard-coded parameter with the above variable.
This works and can be automated to an extent. However, it can get tedious, is error prone, and fragile. Is there a better/commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
Sounds to me like you're on the right track. The best that can be achieved by JMeter is to extract page variables with a Regular Expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution, and it becomes increasingly tricky to maintain or grow.
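For example, a hedged sketch of such an extractor configuration (the reference name and pattern are illustrative assumptions, not taken from the question):

Regular Expression Extractor
    Reference Name:     recordId
    Regular Expression: name="recordId" value="(\d+)"
    Template:           $1$
    Match No.:          1

Subsequent requests can then reference the extracted value as ${recordId} instead of the hard-coded ID.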
If you've reached this point, then you may want to consider a tool which is more specialised for this sort of problem. Have a look at a web testing tool such as Watir: it will automatically handle changing POST parameters. You would still need to extract parameters if you need to do a database update, but using Watir allows for better code reuse, making the problem less painful.
We have had great success testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion with JavaScript to do complex validation of the response. Hope it helps.
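A rough sketch of the kind of JavaScript such an assertion could contain (the variable name recordId is an assumption carried over from the extractor idea above, not from this answer):

// Fail the assertion if the response does not mention the previously stored record id
var expectedId = vars.get("recordId");
var body = SampleResult.getResponseDataAsString();
if (body.indexOf(expectedId) === -1) {
    AssertionResult.setFailure(true);
    AssertionResult.setFailureMessage("Response does not contain record id " + expectedId);
}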