Reusing authentication state between Jasmine blocks - selenium

My app requires users to sign in by submitting a form, and I'm wondering where the best place to do this is in my tests. I came up with some options:
1. Sign in in a beforeEach block (and sign out in an afterEach block).
2. Sign in in a beforeAll block of every describe (and sign out in the last afterAll of that describe):
describe('APP', function () {
  describe('FEATURE 1', function () {
    beforeAll(function () {
      // sign in
    });
    afterAll(function () {
      // sign out
    });
    // ...
  });
});
3. Sign in once for the whole test run, in the beforeAll of the main describe:
describe('MY APP', function () {
  beforeAll(function () {
    // sign in
  });
  describe('my feature 1', function () {
    // ...
  });
});
Number 1 is the slowest, Number 2 is faster and Number 3 is the fastest, but Number 3 requires a single entry point for your test runner - not ideal. So which do you think is better, and why?

It depends on a few things.
The best practice is to close your browser after EACH test case. It is definitely the slowest way to run your tests, but it gives you the cleanest and fairest results. And you can always use Selenium Grid to run tests in parallel, which makes the run far shorter than you might expect.
Sometimes you have an application whose features don't depend much on each other, and you want to run the UI tests in a pre-commit hook so that nothing broken lands in the branch. Then approach number 3 is not so bad (the only thing you need to know is exactly which cookies, session variables and other artifacts to clear in the browser after each test).
Anyway, that's up to you, but the common and really BEST practice is to open a clean new browser for every test.
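For example, the per-test variant could look roughly like this in Jasmine with selenium-webdriver (a sketch only; the URL, selectors and credentials are placeholders, not from the question):
const { Builder, By, until } = require('selenium-webdriver');

describe('FEATURE 1', function () {
  let driver;

  beforeEach(async function () {
    // A fresh browser per test: no cookies or storage leak between specs.
    driver = await new Builder().forBrowser('chrome').build();
    await driver.get('https://example.com/login');                    // placeholder URL
    await driver.findElement(By.name('username')).sendKeys('user');   // placeholder credentials
    await driver.findElement(By.name('password')).sendKeys('secret');
    await driver.findElement(By.css('button[type="submit"]')).click();
    await driver.wait(until.urlContains('/dashboard'), 5000);         // placeholder landing page
  });

  afterEach(async function () {
    // Quitting the browser discards the whole session.
    await driver.quit();
  });

  // it(...) blocks go here
});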

Related

Jest + puppeteer best architecture practices

I just entered the world of testing with puppeteer and jest, and I was wondering what the best practice was in terms of folder architecture and logic.
I've never done testing before and I think I'm getting a little lost in the different principles and concepts and how it all fits together.
I learned to do my tests based on the page-object model, so I have classes for each of my pages, but also for each of my modules (or components). For example, in my application, the header or the login modal are components.
Then I have a test file per page or per component.
(for example the landingPage.tests.js file, which uses the model of the LandingPage class in the LandingPage.js file)
Here is a concrete example:
I have different login cases and I'd like to test them all. For example I want to test logging in with a "normal" user, for whom the process is simply username then password. Then I need to test with a user who has activated 2FA, or with a user from a company that uses SSO.
I first thought about putting my different tests in authentication.tests.js, in different describe blocks, thinking it would open a new tab each time, but it doesn't... I use puppeteer in incognito mode to make sure each tab is an isolated session.
So my questions are:
Where is the best place to do these test suites?
Am I supposed to have test files that "describe" the pages (for example, this button must be present, this text must be here, etc.) and also have "scenario type" test files (a set of actions in a user's context, like my different login cases)?
Here is authentication.tests.js, in which I would like to test all my different ways of logging in:
import HeaderComponent from "../../../pages/components/HeaderComponent";
import AuthenticationComponent from "../../../pages/components/AuthenticationComponent";
import LandingPage from "../../../pages/landing/LandingPage";
import {
  JEST_TIMEOUT,
  CREDENTIALS
} from "../../../config";

describe('Component:Authentication', () => {
  let headerComponent;
  let authenticationComponent;
  let landingPage;

  beforeAll(async () => {
    jest.setTimeout(JEST_TIMEOUT);
    headerComponent = new HeaderComponent();
    authenticationComponent = new AuthenticationComponent();
    landingPage = new LandingPage();
  });

  describe('Normal login', () => {
    it('should click on login and open modal', async () => {
      await landingPage.visit();
      await headerComponent.isVisible();
      await headerComponent.clickOnLogin();
      await authenticationComponent.isVisible();
    });
    it('should type a normal user email address and validate', async () => {
      await authenticationComponent.typeUsername(CREDENTIALS.normal.username);
      await authenticationComponent.clickNext();
    });
    it('should type the correct password and validate', async () => {
      await authenticationComponent.typePassword(CREDENTIALS.normal.password);
      await authenticationComponent.clickNext();
    });
    it('should be logged in', async () => {
      // waitForText is a project helper defined elsewhere; `page` comes from the jest-puppeteer global setup
      await waitForText(page, 'body', 'Success !');
    });
  });

  describe('SSO login', () => {
    // todo ...
  });
});
Thank you and sorry if it sounds confusing, like I said I'm trying to figure out how it all fits together.
Regarding the folder structure, Jest will find any files matching its testMatch config, basically anything called *.spec.js or *.test.js. Looks like you know that already.
What that means is the folder structure is completely up to you. Some people like to have the tests for components in the same folders as the components themselves. Personally I prefer to have all the tests in one folder as it makes the project look cleaner.
The other benefit of having all the tests in one folder is that you can then start to distinguish between the types of tests. Component tests check that pure components render and operate as expected; you don't need Puppeteer for this - use snapshots if you're in a React app. Puppeteer is good for integration tests that navigate through so-called 'happy paths' (login, signup, add to cart, etc.) using a headless Chromium browser.
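For instance, a pure component test with a snapshot could look roughly like this (a sketch assuming a React app and react-test-renderer; the Header component and its path are hypothetical):
import React from 'react';
import renderer from 'react-test-renderer';
import Header from '../components/Header'; // hypothetical component path

test('Header renders consistently', () => {
  // Render to a JSON tree and compare against the stored snapshot.
  const tree = renderer.create(<Header />).toJSON();
  expect(tree).toMatchSnapshot();
});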
To answer the specific problem you've been having with Jest / Puppeteer and getting a fresh page for each test:
const puppeteer = require('puppeteer')

// keep a reference to the browser
let browser
// keep a reference to the page
let page

// open puppeteer before all tests
beforeAll(async () => {
  browser = await puppeteer.launch()
})

// close puppeteer after all tests
afterAll(async () => {
  await browser.close()
})

// Get a new page for each test so that we start fresh.
beforeEach(async () => {
  page = await browser.newPage()
})

// Remember to close pages after each test.
afterEach(async () => {
  await page.close()
})

describe('Counter', () => {
  // "it" blocks go here.
})
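Since you mentioned incognito mode: if each test needs a fully isolated session (its own cookies and storage), you can open an incognito browser context per test instead of a plain new page. A sketch, assuming the long-standing createIncognitoBrowserContext API (newer Puppeteer versions have renamed it):
let context

beforeEach(async () => {
  // Each incognito context has its own cookies, cache and storage.
  context = await browser.createIncognitoBrowserContext()
  page = await context.newPage()
})

afterEach(async () => {
  // Closing the context also closes its pages and wipes its session data.
  await context.close()
})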
Hope that helps a bit.

Cypress - log response data from an request after a click()

Although I know this may not be considered a best practice, what I want to achieve is to silently delete a record from the database after it was created through the UI. That way I want to keep our test environment as clean as possible and reduce the noise of test data.
After my test creates a new record by clicking through the UI, I wait for the POST request to finish and then I would like to extract the id from the response (so I can reuse it to silently delete that record by calling cy.request('DELETE', '/id')).
Here's a sample test I have put together as a showcase. I'm wondering why nothing is logged in this example.
it('GET cypress and log', () => {
  cy.server()
    .route('**/repos/cypress-io/cypress*')
    .as('getSiteInfo');
  cy.visit('https://www.cypress.io/dashboard');
  cy.get('img[alt="Cypress.io"]')
    .click()
    .wait('@getSiteInfo')
    .then((response) => {
      cy.log(response.body)
    })
})
As far as I can see from here https://docs.cypress.io/api/commands/wait.html#Alias this should be fine.
Your code contains two problems.
First:
The click triggers a new page load, but Cypress does not wait for the page load event to fire (because you are not using visit). On my machine the request is only triggered about 5 seconds after the click, so you should use wait(..., { timeout: 10000 }).
Second:
wait() yields the XHR object, not the response, so the code inside your then is not correct. Also, the body is passed as an object, so you should use JSON.stringify() to see the result in the command log.
This code works:
describe("asda", () => {
  it('GET cypress and log', () => {
    cy.server()
      .route('**/repos/cypress-io/cypress*')
      .as('getSiteInfo');
    cy.visit('https://www.cypress.io/dashboard');
    cy
      .get('img[alt="Cypress.io"]')
      .click()
      .wait('@getSiteInfo', { timeout: 20000 })
      .then((xhr) => {
        cy.log(JSON.stringify(xhr.response.body))
      })
  })
})
Instead of the route and server methods, try cy.intercept() directly.
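A rough sketch of the same test with cy.intercept(), including the follow-up DELETE from the question (the id field and the DELETE endpoint are assumptions about your API, not something the showcase URL actually returns):
it('GET cypress and log', () => {
  cy.intercept('**/repos/cypress-io/cypress*').as('getSiteInfo');
  cy.visit('https://www.cypress.io/dashboard');
  cy.get('img[alt="Cypress.io"]')
    .click()
    .wait('@getSiteInfo', { timeout: 20000 })
    .then((interception) => {
      cy.log(JSON.stringify(interception.response.body));
      // Clean up the record the UI just created (assumed response shape and endpoint).
      const id = interception.response.body.id;
      cy.request('DELETE', `/records/${id}`);
    });
});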

Ember acceptance test - timeouts

I have a reasonably special use-case:
I have an input field which issues a search when the user has stopped typing for 500ms. This is developed as a reusable add-on.
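(For context, such a debounced search is typically wired up roughly as follows - a sketch using Ember.run.debounce; the component and action names are illustrative, not the add-on's actual code:)
import Ember from 'ember';

export default Ember.TextField.extend({
  keyUp() {
    // Only fire the search once the user has been idle for 500ms.
    Ember.run.debounce(this, this._search, 500);
  },

  _search() {
    this.sendAction('search', this.get('value')); // illustrative action name
  }
});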
I would like to write an acceptance test for this but I cannot make the tests pass properly. The first passes, the second doesn't.
Now, the Ember runloop has a nice description, but its behaviour is really "something else".
This is my helper to timeout the runloop:
import Ember from 'ember';
export default Ember.Test.registerAsyncHelper('pauseFor', function (time) {
return Ember.Test.promise(function (resolve) {
Ember.run.later(resolve, time);
});
});
And this is how I use it:
it('should do something after 500ms', function () {
  visit('/');
  fillIn('.search-input', 'a');
  pauseFor(500);
  andThen(function () {
    // do my assertions/expectations here...
  });
});
This is the error I get:
The weird thing is that I have 2 test cases and the first passes happily.
I guess my question is:
How to do this properly? What am I missing here or what am I doing wrong? How can I just simply timeout the test case?
Thanks for the help!

Qunit beforeEach, afterEach - async

Since start() and stop() will be removed in QUnit 2.0, what is the alternative for async setups and teardowns via the beforeEach and afterEach methods? For instance, what if I want beforeEach to wait for a promise to resolve?
QUnit basically wants people to stop using the global methods (not just start() and stop(), but also test(), expect(), etc). So, as of version 1.16.0, you should always use either the global namespace (QUnit) or the assert API argument passed into the test() functions. This includes the new async control:
QUnit.test( "testing async action", function( assert ) { // <-- note the `assert` argument here
  var done = assert.async(); // tell QUnit we're doing async actions and
                             // hold onto the function it returns for later
  setTimeout(function() {    // do some async stuff
    assert.ok( true, "This happened 100 ms later!" );
    done(); // using the function returned from `assert.async()` we
            // tell QUnit we're done with async actions
  }, 100);
});
If you are familiar with the old start() and stop() way of doing things, you should see that this is extremely similar, but more compartmentalized and extensible.
Because the async() method call is on the assert argument passed into the test, it cannot be used in the beforeEach() function. If you have an example of how you were doing that before, please post it and we can try to figure out how to fit it into the new way.
UPDATE
My mistake previously, the assert object is being passed into the beforeEach and afterEach callbacks on modules, so you should be able to do the same logic that you would do for a test:
QUnit.module('set of tests', {
  beforeEach: function(assert) {
    var done = assert.async();
    doSomethingAsync(function() {
      done(); // tell QUnit you're good to go.
    });
  }
});
(tested in QUnit 1.17.1)
Seeing that nobody has answered the beforeEach/afterEach part: a test suite is supposed to run as soon as the page loads. When that is not immediately possible, then resort to configuring QUnit:
QUnit.config.autostart = false;
and continue setting up your test suite (initializing tests, feeding them to QUnit, asynchronously waiting for some components of your site to load, be it via AJAX or anything else), and finally, when everything is ready, just run:
QUnit.start();
QUnit's docsite has it covered.
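Put together, that pattern looks roughly like this (a sketch; the fixture URL and setup step are made up for illustration):
QUnit.config.autostart = false;

// Do the asynchronous setup first (a hypothetical fixture load),
// then start the suite once it has finished.
fetch('/fixtures/users.json')
  .then(function (response) { return response.json(); })
  .then(function (users) {
    window.testUsers = users;
    QUnit.start();
  });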
In Ember QUnit, beforeEach/setup and afterEach/teardown will co-exist for a little while.
See PR: https://github.com/emberjs/ember-qunit/pull/125

How to perfectly isolate and clear environments between each test?

I'm trying to connect to SoundCloud using CasperJS. What is interesting is that once you have signed in and you rerun the login feature later, the previous login is still active. Before going any further, here is the code:
casper.thenOpen('https://soundcloud.com/', function() {
  casper.click('.header__login');
  popup = /soundcloud\.com\/connect/;
  casper.waitForPopup(popup, function() {
    casper.withPopup(popup, function() {
      selectors = {
        '#username': username,
        '#password': password
      };
      casper.fillSelectors('form.log-in', selectors, false);
      casper.click('#authorize');
    });
  });
});
If you run this code at least twice, you should see the following error appears:
CasperError: Cannot dispatch mousedown event on nonexistent selector: .header__login
If you analyse the logs you will see that the second time you are redirected to https://soundcloud.com/stream, meaning that you are already logged in.
I did some research on clearing the environment between tests, but it seems that the following calls don't solve the problem:
phantom.clearCookies()
casper.clear()
localStorage.clear()
sessionStorage.clear()
Technically, I'm really interested in understanding what is happening here. Maybe SoundCloud also stores some state server-side; in that case I would have to log out before logging in. But my question is: how can I perfectly isolate and clear everything between tests? Does someone know how to make the environment signed out between each test?
To clear the server-side session, calling phantom.clearCookies() did the trick for me. This cleared my session between test files.
Example here:
casper.test.begin("Test", {
  test: function(test) {
    casper.start(
      "http://example.com",
      function() {
        // ... some testing here
      }
    );
    casper.run(function() {
      test.done();
    });
  },
  tearDown: function(test) {
    phantom.clearCookies();
  }
});
If you're still having issues, check the way you are executing your tests.
Where did you call casper.clear()?
I think you have to call it immediately after you have opened a page, like this:
casper.start('http://www.google.fr/', function() {
  this.clear(); // javascript execution in this page has been stopped
  // rest of code
});
From the doc: Clears the current page execution environment context. Useful to avoid having previously loaded DOM contents being still active.
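If cookies alone are not enough, note that localStorage.clear() and sessionStorage.clear() only take effect when run inside the page context, e.g. via casper.evaluate(). A sketch of a tearDown that combines both (not verified against SoundCloud specifically):
casper.test.begin("Isolated test", {
  test: function(test) {
    casper.start("https://soundcloud.com/", function() {
      // ... assertions here
    });
    casper.run(function() {
      test.done();
    });
  },
  tearDown: function(test) {
    // Wipe browser-side storage inside the page context, then drop cookies.
    casper.evaluate(function() {
      localStorage.clear();
      sessionStorage.clear();
    });
    phantom.clearCookies();
  }
});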