I am writing an e2e test for a React Native app (v0.63.4) using Detox (v18.2.2).
The app contains several screens, and while writing the test I want it to start at a certain point rather than from the beginning every time, just to save time: I run the test repeatedly while writing it to check that everything works the way I want.
I know it is called an e2e test, but is there a way to do that?
Edit: I should mention I'm using Jest as a test-runner. I don't know if you're using Jest, but I'll leave this here for now anyway.
The way I'm doing it in my project now is to use the .only function on my tests.
https://jestjs.io/docs/en/api#testonlyname-fn-timeout
test.only('the user can continue to account creation', async () => {
  await element(by.id('continueButton')).tap()
  await expect(element(by.id('accountCreationScreen'))).toBeVisible()
})
Notice, though, that in my case I have to keep the test that logs a user in to the app. But that's fine for me; I still skip all the other tests.
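For example, Jest runs every test marked with .only, so the login test can stay active alongside the one you're currently writing (the test names and testIDs below are made up):
test.only('the user can log in', async () => {
  await element(by.id('emailInput')).typeText('user@example.com')
  await element(by.id('loginButton')).tap()
})

test.only('the user can continue to account creation', async () => {
  await element(by.id('continueButton')).tap()
  await expect(element(by.id('accountCreationScreen'))).toBeVisible()
})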
Hope that works out for you, or helps in some way.
I have been building UI automation frameworks with Cypress for some time, but always using the Cypress-Cucumber-Preprocessor.
Now I need to build one without Cucumber, just plain ol' Mocha, and I ran into a problem: it seems I can't use tagged hooks to execute code for specific tests (scenarios in Cucumber).
The scenario is basically this: I have a spec file with several tests and a before hook that seeds test data to a Mongo DB, and eventually I might need to add a hook or hooks to execute something (whatever) before one specific test.
With Cucumber you have a way to tag a given scenario (@tag), and then you can create a hook that will be executed ONLY before or after that specific scenario:
@tag
Scenario: Tagged scenario
  Given condition
  When I do this
  Then I should see that

Before({ tags: '@tag' }, () => {
  code
})
I haven't found a way to do this with Mocha in Cypress... Has anyone found a way?
thx
You can use beforeEach or before, which do essentially the same thing in Mocha. To scope setup to one specific test, you can check the title of the test that is about to run inside a shared beforeEach, or nest that test in its own describe block with its own before hook.
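For example, a shared beforeEach can emulate a tagged hook by checking this.currentTest (use a regular function, not an arrow function, so this is Mocha's context; the seedDb task name is just an assumption standing in for your Mongo seeding code):
describe('users', () => {
  beforeEach(function () {
    // this.currentTest is the test Mocha is about to run
    if (this.currentTest.title === 'creates a user') {
      cy.task('seedDb') // hypothetical task that seeds the Mongo DB
    }
  })

  it('creates a user', () => {
    // ...
  })

  it('lists users', () => {
    // ...
  })
})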
I am using `TestCafe` to test our Electron app and need a way to know when the last test in a fixture has been executed, BUT before `TestCafe` shuts our app down.
The standard hooks (fixture.after, fixture.afterEach) won't work. In particular, fixture.after won't work because it is called BETWEEN test runs (the test app will already have been shut down), and I need my app to still be around.
If I can get the number of tests active for this test run in the fixture, I can count the runs myself and then call my custom code in the last test. If there is another way to do this, that would be appreciated as well.
Any insights appreciated,
m
You can create a special 'teardown' fixture, place all necessary code into it, and pass it at the end of the test file list:
testcafe chrome tests/* teardown.js
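For illustration, teardown.js could be as small as this (myCustomTeardown is a placeholder for whatever must run while the app is still up):
// teardown.js - runs last because it is passed last on the command line
fixture('Teardown')

test('run custom teardown while the app is still up', async t => {
  await myCustomTeardown(t) // placeholder for your custom last-test code
})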
Take a look at the testcafe-once-hook module, which allows you to execute test actions once per fixture. Here is an example of how to use it: https://github.com/AlexKamaev/testcafe-once-hook-example.
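If you prefer the manual-counting idea from the question, a rough sketch looks like this (TOTAL_TESTS must be kept in sync with the fixture by hand, and myCustomTeardown is again a placeholder):
const TOTAL_TESTS = 3 // assumption: the number of tests in this fixture
let finished = 0

fixture('My fixture')
  .afterEach(async t => {
    finished += 1
    // unlike fixture.after, the app is still running at this point
    if (finished === TOTAL_TESTS) {
      await myCustomTeardown(t)
    }
  })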
I am trying to figure out which parts of my React Native app are causing Detox to wait unnecessarily long, as instructed in the documentation. However, when I run:
detox test --debug-synchronization 20
I get no additional output, only the regular Jest output. I know for a fact that there are network requests slower than that, setTimeouts of 400 ms, and animations that slow Detox down, but it doesn't report them.
What could be causing the output not to work?
This feature had a bug, and it was just fixed in release 18.18.0.
You may consider this approach as well: put this line before interacting with the animated element,
await device.disableSynchronization();
and then enable synchronization again afterwards:
await device.enableSynchronization();
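Put together, the pattern looks like this (the testID is made up):
it('taps a button that triggers a long animation', async () => {
  await device.disableSynchronization()
  await element(by.id('animatedButton')).tap()
  await device.enableSynchronization()
})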
The recommended testing framework for Meteor 1.7 seems to be meteortesting:mocha.
With Meteor 1.7.0.3 I created a default app (meteor create my-app), which has the following tests (in test/main.js)
import assert from "assert";

describe("my-app", function () {
  it("package.json has correct name", async function () {
    const { name } = await import("../package.json");
    assert.strictEqual(name, "noteit");
  });

  if (Meteor.isClient) {
    it("client is not server", function () {
      assert.strictEqual(Meteor.isServer, false);
    });
  }

  if (Meteor.isServer) {
    it("server is not client", function () {
      assert.strictEqual(Meteor.isClient, false);
    });
  }
});
I ran
meteor add meteortesting:mocha
meteor test --driver-package meteortesting:mocha
and with meteortesting:mocha#2.4.5_6 I got this in the console:
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)? ----- RUNNING SERVER TESTS -----
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)?
I20180728-12:06:37.730(2)?
I20180728-12:06:37.731(2)?
I20180728-12:06:37.737(2)? the server
✓ fails a test.753(2)?
I20180728-12:06:37.755(2)?
I20180728-12:06:37.756(2)?
I20180728-12:06:37.756(2)? 1 passing (26ms)
I20180728-12:06:37.756(2)?
I20180728-12:06:37.757(2)? Load the app in a browser to run client tests, or set the TEST_BROWSER_DRIVER environment variable. See https://github.com/meteortesting/meteor-mocha/blob/master/README.md#run-app-tests
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
Actually, it was repeated three times. Not pretty. And I wasn't expecting a passing test to crash my app.
Also, in the browser I got this. I was expecting something more like the nice output shown in the Meteor testing guide.
As with most things Node.js, there is a multitude of forks of almost anything, and so it is with meteortesting:mocha.
cultofcoders:mocha seems to be a few commits ahead of practicalmeteor:mocha, which was at one point the recommended testing framework for Meteor.
If you run
meteor add cultofcoders:mocha
meteor test --driver-package cultofcoders:mocha
you'll get the nice output.
As a curiosity, I found that the version of cultofcoders:mocha I got (meteor list | grep mocha) was 2.4.6, a version that the GitHub repo does not have...
The screenshot you reference was made using practicalmeteor:mocha; meteortesting:mocha is not (as the other answer claims) a fork of it, but a separately developed package aiming at the same goal, namely running tests in Meteor.
The two packages are used very differently, and practicalmeteor:mocha might look a bit trickier to set up; this only applies to its version 1.0.1 and might change later.
But I have to admit that the documentation needs a refresh... Anyway, here are some helpful tips, which I'll include in the documentation soon.
If you just want to get started, run this:
meteor add meteortesting:mocha
npm i --save-dev puppeteer@^1.5.0
TEST_BROWSER_DRIVER=puppeteer meteor test --driver-package meteortesting:mocha --raw-logs --once
Do you want to exit after the tests are completed, or re-run them after a file change?
Usually, Meteor will restart your application when it exits (after a normal exit or a crash), and that includes the test runner.
If you want to run the tests in CI, or you just want to run them once, add --once to the meteor command; otherwise set TEST_WATCH=1 before running the script. If you set neither the env variable nor --once, Meteor will print these lines and restart the tests once they're finished:
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
As of now I haven't found a way to check whether the --once flag is set, which would make the env variable unnecessary. Still, the flexibility to choose between CI and continuous testing here is very useful.
Maybe you're currently working on a feature and want to run the tests as you work. If you have set TEST_WATCH=1 and are not using --once, Meteor will restart the tests whenever it detects that a file has changed. You can even limit which tests run using MOCHA_GREP.
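For example, to re-run only the tests whose names match a pattern on every file change (the pattern here is just an illustration):
TEST_WATCH=1 MOCHA_GREP="package.json" meteor test --driver-package meteortesting:mocha --raw-logs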
Where and how do you want to see the results?
You currently have to choose between seeing all the test results on the command line, or showing the server tests on the command line and the client tests in the browser. Currently, practicalmeteor:mocha does not support showing the results of both the server and client tests in the browser, as your screenshot shows.
Please take a look at the package documentation for further details.
You should disable the Meteor timestamp to make the output more readable.
Tests can look quite garbled because of the timestamp added to every line; to avoid this, add --raw-logs to your command.
I hope this answers most of your question. I know that the documentation needs some improvements, and I would welcome it if someone took the time to put it into a more logical order for people who "just want to get started".
I am currently researching ways to integrate a test suite for an application based on ember.js into travis-ci. First off, we're not on the open-source service; we use it for private repositories, etc.
I looked at how several open-source projects run their ember.js test suites, and it looks like they set up a server with their project, which presumably gets updated whenever someone pushes to the repository. PhantomJS is then used to run the tests against that server (and not on travis-ci itself).
The problem I have with this approach is that it adds another step (and ultimately complexity): I have to update and maintain a server with the latest code so that PhantomJS can run the test suite against it.
Another drawback is that I don't see how it would let us test PRs (pull requests) either; the server would have to be updated with the code from the PR. Testing PRs before they are merged is one of the great things about travis-ci.
I couldn't find much of anything about running ember.js tests purely through the CLI, and I am hoping someone has tackled this issue before me.
I can't speak to your questions about travis-ci ... but I can offer some thoughts about unit testing ember.js code with jasmine.
Before I started using ember.js, I was unit testing with jasmine and a simple node.js module called jasmine-node. This allowed me to quickly run a suite of jasmine unit tests from the command line, without having to open a browser or hack around with js-test-runner and the like.
That worked great while I had jasmine, jquery, and the simple JavaScript modules I used to keep my JavaScript code human-readable. But the moment I needed to use ember/handlebars/etc., the jasmine-node module fell down, because it expects everything to be available on both global and window. Since ember is strictly a browser library, not everything was on global.
I started looking at PhantomJS and, like yourself, couldn't see myself adding the complexity. So instead of hacking around this, I decided to take a weekend and write what was missing from the jasmine test-runner space. I wanted the same power as jasmine-node (meaning all I would need on my CI box was a recent version of node.js and a simple npm module to run the tests).
I wrote an npm module called jasmine-phantom-node. At its core it uses node.js to run PhantomJS, which in turn fires up a regular jasmine HTML runner and scrapes the page for test results using a very basic express web app.
I spent the time to put two different examples in the github project so others can see how it works quickly. It's opinionated, so you will need an HTML file in your project root that the plugin uses to execute your tests. It also requires jasmine and jasmine-html, along with a recent jQuery.
It solved this issue for me personally, and now I can write tests against ember using plain jasmine and run them from the command line without a browser.
Here is a sample jasmine unit test that I wrote against an ember view recently while spiking around with this test runner. The full ember/django project shows how the view under test is used in the app.
require('static/script/vendor/filtersortpage.js');
require('static/script/app/person.js');

describe("PersonApp.PersonView Tests", function () {
  var sut, router, controller;

  beforeEach(function () {
    sut = PersonApp.PersonView.create();
    router = { send: function () {} };
    controller = PersonApp.PersonController.create({});
    controller.set("target", router);
    sut.set("controller", controller);
  });

  it("does not invoke send on router when username does not exist", function () {
    var event = { context: { username: '', set: function () {} } };
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).not.toHaveBeenCalledWith('addPerson', jasmine.any(String));
  });

  it("invokes send on router with username when exists", function () {
    var event = { context: { username: 'foo', set: function () {} } };
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).toHaveBeenCalledWith('addPerson', 'foo');
  });

  it("does not invoke set context when username does not exist", function () {
    var event = { context: { username: '', set: function () {} } };
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).not.toHaveBeenCalledWith('username', jasmine.any(String));
  });

  it("invokes set context to empty string when username exists", function () {
    var event = { context: { username: 'foo', set: function () {} } };
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).toHaveBeenCalledWith('username', '');
  });
});
Here is the production ember view that I'm unit testing above
PersonApp.PersonView = Ember.View.extend({
  templateName: 'person',

  addPerson: function (event) {
    var username = event.context.username;
    if (username) {
      this.get('controller.target').send('addPerson', username);
      event.context.set('username', '');
    }
  }
});