Unable to call a function from Fixture.before Method - testing

I am trying to implement a fixture with multiple tests that all depend on each other.
Therefore, I want to clean the database and perform the login only once, from the Fixture.before Method.
It would look like this:
fixture `testProject`.page(baseUrl)
    .before(async t => {
        await loginPM.login()
        await base.clearDB()
    })
    .beforeEach(async t => {
        // some steps before each test
    })
test 1
test 2
test 3
This scenario throws the following exception:
Error in fixture.before hook - Cannot implicitly resolve the test run in the context of which the test controller action should be executed. Use test function's 't' argument instead
Any ideas why TestCafe does not support calling such functions from a Fixture.before Method?

The fixture.before hook runs between tests and doesn't have access to the tested page. Please refer to the following help topic for details on its use: Fixture.before Method. If you need to execute test actions (click, typeText, etc.) once per fixture before all tests start, see the testcafe-once-hook module. Here is an example of how to use it: https://github.com/AlexKamaev/testcafe-once-hook-example.
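If you don't want to pull in an extra module, another option is to keep only the non-UI setup in fixture.before (which has no test controller) and perform the UI login through a Role; the Role's login actions run only the first time useRole is called, and TestCafe restores the saved authentication state afterwards. A minimal sketch, assuming loginPM/base wrap your own helpers and that the login URL and selectors below are placeholders:
import { Role } from 'testcafe';

// Hypothetical login role; the URL and the steps inside the init function
// are placeholders for your own login flow.
const pmRole = Role(`${baseUrl}/login`, async t => {
    await t
        .typeText('#user', 'pm-user')
        .typeText('#password', 'secret')
        .click('#sign-in');
});

fixture `testProject`
    .page(baseUrl)
    .before(async () => {
        await base.clearDB();       // no page actions here, so no 't' is needed
    })
    .beforeEach(async t => {
        await t.useRole(pmRole);    // login actions run only the first time
    });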

TestCafe unable to use testController (t) outside of test run (e.g. as a conditional to skip a test)

I'm trying to check which browser we're running tests on and then skip a test/fixture based on the result (as mentioned in this TestCafe issue).
import { t } from 'testcafe';

fixture `test`
    .page('https://testcafe.devexpress.com');

if (t.browser.name.includes('Chrome')) {
    test('is Chrome?', async () => {
        console.log(t.browser.name);
        await t.expect(t.browser.name.includes('Chrome')).ok();
    });
} else {
    test.skip('is Chrome?');
}
Results in...
ERROR Cannot prepare tests due to an error.
Cannot implicitly resolve the test run in the context of which the test controller action should be executed. Use test function's 't' argument instead.
Is there any way I can call the testObject (t) outside of the test?
I don't have a solution to your exact question, but I think it's better to approach it slightly differently: the outcome will be the same, but the means of achieving it will differ a bit. Let me explain.
Wrapping test cases in if statements is, in my opinion, not a good idea. It clutters test files: instead of seeing only test or fixture on the left side, you also see if statements that make you pause while reading. That adds complexity when you just want to scan a test file quickly from top to bottom.
One solution is to introduce metadata on your test cases (it works with fixtures as well):
test
    .meta({
        author: 'pavelsaman',
        creationDate: '16/12/2020',
        browser: 'chrome'
    })
    ('Test for Chrome', async t => {
        // test steps
    });
Then you can execute only tests for Chrome like so:
$ testcafe --test-meta browser=chrome chrome
That's very much the same as what you wanted to achieve with the condition, but the code is a bit more readable.
In case you want to execute tests for both Chrome and Firefox, you can run more than one command:
$ testcafe --test-meta browser=chrome chrome
$ testcafe --test-meta browser=firefox firefox
or:
$ testcafe --test-meta browser=chrome chrome && testcafe --test-meta browser=firefox firefox
If your tests are in a pipeline, it would probably be done in two steps.
The better solution, as mentioned in one of the comments on this question, is to use the runner object to run your tests instead of the command line. Instead of passing the browser(s) as a CLI argument, you pass it as an optional argument to a top-level script.
You would then read the browser variable from either the script parameter or the .testcaferc.json file.
You would need to tag all tests/fixtures with the browser(s) they apply to using metadata.
You then use the Runner.filter method to add a delegate that returns true if the browser in the metadata equals the browser variable in the top-level script:
var runner = testcafe.createRunner();
var browser = process.env.npm_package_config_browser || require("./.testcaferc.json").browser;

runner.filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) => {
    return fixtureMeta.browser === browser || testMeta.browser === browser;
});
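For context, a complete top-level runner script could look roughly like the sketch below; the 'localhost' hostname, the ./tests source directory, and the 'chrome' fallback are assumptions for the sketch, not part of the original answer:
const createTestCafe = require('testcafe');

(async () => {
    const browser = process.env.npm_package_config_browser
        || require('./.testcaferc.json').browser
        || 'chrome';                                   // assumed default

    const testcafe = await createTestCafe('localhost');

    try {
        const failedCount = await testcafe
            .createRunner()
            .src('./tests')                            // assumed test directory
            .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
                fixtureMeta.browser === browser || testMeta.browser === browser)
            .browsers(browser)
            .run();

        console.log('Failed tests:', failedCount);
    } finally {
        await testcafe.close();
    }
})();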

How to call another test during my test in Cypress

I have a problem when creating tests with Cypress.
I am creating a test that requires the user to be logged in to their account before it can run.
I have another test that tests the login function.
How can I run the test (Create account) so that I don't need to repeat the login code at the beginning of every test, as in the example below?
I always start with the login script before I run the test. As you can see, I have the username and the password in each test.
I want to be able to run the login first and then the rest.
cy.visit('https://devcloudarena.devtest.fastighet.vitec.se/test/mina-sidor/logga-in')
cy.contains('Mina sidor').click()
cy.contains('Logga in').click()
cy.get('#UserId').click();
cy.get('#UserId').type('19380412-6526');
cy.get('#Password').click();
cy.get('#Password').type('Vitec.Test20');
cy.get('.ml-auto > .button').click();
cy.wait('@getActivities').then((xhr) => { }); // waits for an aliased request (alias assumed to be defined elsewhere)
cy.url().should('contains', 'https://devcloudarena.devtest.fastighet.vitec.se/test/mina-sidor');
cy.get('.d-block .col-9').click();
cy.url().should('contains', 'https://devcloudarena.devtest.fastighet.vitec.se/test/mina-sidor/min-profil');
cy.get('.object-description-cc > a').click();
cy.url().should('contains', 'https://devcloudarena.devtest.fastighet.vitec.se/test/mina-sidor/uppdatera-kontaktuppgifter');
cy.get('.form-group > .button').click();
cy.get('#RegisterForm').submit();
cy.url().should('contains', 'https://devcloudarena.devtest.fastighet.vitec.se/test/mina-sidor');
You can turn it into a custom command, and it will be reusable in every test with a simple command call.
To add to Rosen Mihaylov's answer, you could convert the login script into a custom command and also use hooks to implement the test's pre-condition by calling that custom command.
The custom command goes in support/commands.js:
Cypress.Commands.add('login', (email, pw) => {})
Using the custom command in the spec file:
beforeEach(() => {
cy.login()
})
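As a rough sketch, the command body can be filled in with the login steps from the question; the URL and selectors below are copied from the original test, and passing the credentials as arguments keeps them out of the individual specs:
// cypress/support/commands.js
Cypress.Commands.add('login', (userId, password) => {
    cy.visit('https://devcloudarena.devtest.fastighet.vitec.se/test/mina-sidor/logga-in')
    cy.contains('Mina sidor').click()
    cy.contains('Logga in').click()
    cy.get('#UserId').type(userId)
    cy.get('#Password').type(password)
    cy.get('.ml-auto > .button').click()
})

// in the spec file
beforeEach(() => {
    cy.login('19380412-6526', 'Vitec.Test20')
})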

unexpected GET request on karma unit testing

I am using Karma to test a controller in my Angular app. The app itself works as expected, but the unit test throws the error: unexpected request: GET views/home.html
As far as I can tell, the problem is caused by ui-router and $httpBackend. One approach would be to cache the templates with a Karma preprocessor, but what I have done is:
var MenuController, scope, $httpBackend;

// Initialize the controller and a mock scope
beforeEach(inject(function ($controller, _$httpBackend_, $rootScope, menuService) {
    // place mocked dependencies here
    $httpBackend = _$httpBackend_;
    $httpBackend.whenGET("views/header.html").respond(200, '');
    $httpBackend.expectGET("http://localhost:3000/dishes").respond([{ ....
But the GET error shows up for every template in my views folder.
Now, is there any way to ignore all templates and any other unexpected GET requests?
You can create a regular expression that matches all the template requests:
$httpBackend.whenGET(/views\/.*\.html/).respond(200, {});
or simply:
$httpBackend.whenGET(/views\//).respond(200, {});
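Put together with the setup from the question, the beforeEach could look roughly like this; the controller name 'MenuController' and the empty response for /dishes are assumptions made for the sketch:
var MenuController, scope, $httpBackend;

beforeEach(inject(function ($controller, _$httpBackend_, $rootScope) {
    $httpBackend = _$httpBackend_;

    // Catch-all stub so ui-router template requests never count as "unexpected".
    $httpBackend.whenGET(/views\/.*\.html/).respond(200, '');

    // Keep strict expectations only for the API calls under test.
    $httpBackend.expectGET("http://localhost:3000/dishes").respond([]);

    scope = $rootScope.$new();
    MenuController = $controller('MenuController', { $scope: scope });
}));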

How can I target a test in the _before() hook

I have a Codeception cest file which has a number of tests in it.
In some of the tests, there are initializations which I would like to do in the _before() hook. These initializations are specific to those tests only and to no other test in the cest file.
How can I go about this?
The pseudocode would be something like:
public function _before($event)
{
    if ($event->test_being_run == 'testThatFeature') {
        $init = something(here);
    }
}
Through investigation, I have realized that the $event variable passed into the _before() hook is an instance of the generated AcceptanceTester, as opposed to \Codeception\Event\TestCase. So I cannot use the hoped-for $event->getTest()->getTestFullName().
Codeception injects parameters based on type hinting.
If you want to get \Codeception\Event\TestCase, your code must look like this:
public function _before(\Codeception\Event\TestCase $event)
All the information I found about receiving the test case in a _before method was about the _before method of modules and extensions; it does not apply to tests.
If you want to run specific code for one test, just run it in the test code.

Protractor - How to separate each test into one file and separate variables

I have a complex Protractor test written, but everything is in one file.
At the top of the file I load all the variables, like:
var userLogin = "John";
and then I use them somewhere later in the code.
What I need to do is:
1. Separate all variables into an additional file (some config file)
2. Put each test into its own file
1 - I tried to make a config.js where I added all the variables and required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it doesn't work (the test fails with "userName is not defined").
I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
2 - How can I know what I did in the last script if it's separate? For example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what I did in the last script if it's separate? For example, how do I know I am logged in?
This is where beforeEach() and afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code, Jasmine provides the global beforeEach and afterEach functions. As the name implies, the beforeEach function is called once before each spec in the describe is run, and the afterEach function is called once after each spec.
There are also beforeAll() and afterAll() available in Jasmine 2, or via the jasmine-beforeAll third-party package for Jasmine 1:
The beforeAll function is called only once before all the specs in describe are run, and the afterAll function is called after all specs finish. These functions can be used to speed up test suites with expensive setup and teardown.
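As a rough sketch of how that answers question 2, the login can be done once per spec file in beforeAll; the login URL and selectors below are placeholders, not taken from the original question, while the credentials come from the params config described further down:
describe('profile page', function () {

    beforeAll(function () {
        // Log in once for all specs in this file (placeholder URL and selectors).
        browser.get('https://example.com/login');
        element(by.id('username')).sendKeys(browser.params.user.login);
        element(by.id('password')).sendKeys(browser.params.user.password);
        element(by.css('button[type="submit"]')).click();
    });

    it('shows the logged-in user name', function () {
        expect(element(by.css('.user-name')).getText()).toEqual(browser.params.user.login);
    });
});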
1) I tried to make a config.js where I added all the variables and required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it doesn't work (the test fails with "userName is not defined"). I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    // ...
    params: config,
    // ...
};
where config.js is, for example:
var config;

config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of global test preparation step, you can do it in the onPrepare() function; see Setting Up the System Under Test. An example configuration that performs a "global" login step is available here.
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor) by setting them via global in onPrepare. For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the EC variable will be available in the scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Also, organizing your tests using the "Page Object Pattern" would help solve the reusability and modularity problem.
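For illustration, a minimal page-object sketch; the file name login.page.js, the URL, and the selectors are hypothetical, and each spec then requires the page object instead of repeating selectors and login steps:
// login.page.js (hypothetical)
var LoginPage = function () {
    this.userInput = element(by.id('username'));
    this.passwordInput = element(by.id('password'));
    this.submitButton = element(by.css('button[type="submit"]'));

    this.login = function (user, password) {
        browser.get('https://example.com/login');   // placeholder URL
        this.userInput.sendKeys(user);
        this.passwordInput.sendKeys(password);
        this.submitButton.click();
    };
};

module.exports = new LoginPage();

// in a spec file
var loginPage = require('./login.page.js');
loginPage.login(browser.params.user.login, browser.params.user.password);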