Ember -- component integration async tests aren't waiting until async calls are returned - testing

I'm having a hard time testing async functionality in component integration tests. Input kicks off an async call to an endpoint, and when it returns, I send an action. I'm trying to test that the action sends the correct data.
I've tried putting my assertion in the wait() helper, but the assertion gets hit before the (dependent upon async) action is called.
Here's a twiddle showing that problem: https://ember-twiddle.com/79f9a80c639b642e538803ac64a1cf9d?openFiles=tests.integration.components.test-comp-test.js%2Ctemplates.components.test-comp.hbs
How can I correctly code my async component integration tests?

There are two things that fail your test:
First, never use setTimeout (window.setTimeout) to schedule future work in Ember code; use the Ember way of doing it, namely Ember.run.later. The same thing happened to me with acceptance tests; please see the following question and look through the comments on the answer. The reason is that Ember's test helpers cannot track work scheduled with a plain setTimeout, so wait() resolves before that work runs.
Second, there is a problem within the test itself: in the action handler you have written in the test, you need to change the name attribute instead of returning a promise.
Anyway, please see the following twiddle I have updated. Testing in general with Ember is kind of a pain, since I believe there is no proper comprehensive documentation. Good luck!
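To illustrate the pattern, here is a minimal sketch (not the twiddle's exact code; the component, action, and payload names are made up): schedule the deferred work with Ember.run.later so that wait() in the integration test resolves only after the action has fired.

// components/test-comp.js -- hypothetical component; assumes its template renders an <input>
import Ember from 'ember';

export default Ember.Component.extend({
  keyUp() {
    // Ember.run.later is tracked by the test helpers, unlike window.setTimeout
    Ember.run.later(this, () => {
      this.sendAction('onResult', { name: 'from endpoint' });
    }, 100);
  }
});

// integration test (inside a moduleForComponent(..., { integration: true }) module)
import wait from 'ember-test-helpers/wait';
import hbs from 'htmlbars-inline-precompile';

test('it sends the action with the right data', function(assert) {
  let actual;
  this.set('onResult', (data) => { actual = data; });
  this.render(hbs`{{test-comp onResult=(action onResult)}}`);

  this.$('input').trigger('keyup');

  // wait() resolves after the scheduled run-loop work completes,
  // so the assertion runs only once the action has been sent
  return wait().then(() => {
    assert.deepEqual(actual, { name: 'from endpoint' });
  });
});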

How can I organize TestCafé tests into multiple steps?

I am testing a large project with long scenarios (some with more than 100 interactions with the webpage). I would like to break them down into shorter steps that run in sequence (like in Mocha), but I don't know how to do that.
Example: In a single test, I would like to run
fixture('test1')
test('test1', async (t) => {
...login
...createSubAccount
...modifySubAccount
...activateSubAccount
})
where each of the steps would show in the console and in the report. Right now, the only thing I know how to do is to put each step into its own test() context, but that means that if e.g. createSubAccount fails, modifySubAccount and activateSubAccount will still run (even though the workflow has already failed). Also, there is the unhappy part that each test() clears the browser (but I can deal with that).
In short: How can I split the tests in a way that if a single substep of fixture fails, the whole fixture fails immediately? Or similar thing, but for test()?
Also, I don't want the whole pipeline to end on the first test failure, as would happen with --stopOnFirstFail flag - I want to run all the tests, to find which are failing.
test() is the smallest unit. The idea is that it's an independent piece of testing code, i.e. a bunch of test steps. This doesn't change no matter what tool you use (TestCafe, Playwright, Puppeteer, Cypress, Mocha, Jest, ...).
And so:
Right now, the only thing I know how to do is to put each step into its own test() context, but that means that if e.g. createSubAccount fails, modifySubAccount and activateSubAccount will still run (even though the workflow already failed).
seems to break one of the main principles of tests, namely that they are independent. Don't split test steps that belong together across different tests.
If the only drawback now is the length of your test, why don't you do it like you hinted at in the example:
test('test1', async (t) => {
    await login();
    await createSubAccount();
    await modifySubAccount();
    await activateSubAccount();
});
You can create functions for login, createSubAccount etc. and then use only such functions in your tests, which makes them as short as shown here (a sketch of what such a helper might look like follows the examples below). You can also easily create various scenarios:
test('activate account without modification', async (t) => {
    await login();
    await createSubAccount();
    await activateSubAccount();
});
test('create account', async (t) => {
    await login();
    await createSubAccount();
});
test('create account without login', async (t) => {
    await createSubAccount();
});
// and so on
It doesn't even look that long.
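As an illustration, such a helper could look roughly like this (a minimal sketch; the selectors and credentials are made up):

// helpers.js -- hypothetical step helpers shared by the tests above
import { Selector, t } from 'testcafe';

export async function login() {
    await t
        .typeText(Selector('#user'), 'test-user')
        .typeText(Selector('#password'), 'secret')
        .click(Selector('#login-button'));
}

export async function createSubAccount() {
    await t.click(Selector('#create-sub-account'));
}

// modifySubAccount, activateSubAccount, ... follow the same pattern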
TestCafe does not support the functionality you require at the moment. The only solution I can think of is, as you proposed, to implement your test as a fixture with steps as tests, use the disablePageReloads feature (note: it is experimental), track the number of passed tests manually, and check it at the beginning of each test. It is a bit tedious, but it should work as you need.
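A rough sketch of that workaround (the fixture name, URL, and helpers are illustrative, and disablePageReloads is experimental):

import { login, createSubAccount } from './helpers'; // hypothetical helpers from the sketch above

let passedSteps = 0;

fixture('sub-account workflow')
    .disablePageReloads
    .page('https://example.com'); // placeholder URL

test('login', async (t) => {
    await login();
    passedSteps++;
});

test('create sub-account', async (t) => {
    // bail out if the previous step did not pass
    await t.expect(passedSteps).eql(1, 'login step failed, skipping');
    await createSubAccount();
    passedSteps++;
});

// modifySubAccount and activateSubAccount tests follow the same pattern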
Another, simpler way to split the long test into steps is to divide it into functions, but built-in support for steps has not been implemented yet. The only issue that may arise is with reporting: even if you implement a custom reporter, there is currently no way to pass information about the steps into it (you can vote for the corresponding feature request).
Also, I would like to draw your attention to the Page Model pattern. It can shrink your tests and make them more readable.
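A minimal page-model sketch (the class and selectors below are illustrative, not taken from the question):

// account-page.js -- hypothetical page model
import { Selector, t } from 'testcafe';

class AccountPage {
    constructor() {
        this.nameInput    = Selector('#sub-account-name');
        this.createButton = Selector('#create-sub-account');
    }

    async createSubAccount(name) {
        await t
            .typeText(this.nameInput, name)
            .click(this.createButton);
    }
}

export default new AccountPage();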
Please open a new feature request with a comprehensive description if you have a better idea of how this should be done.

Cypress tagged hooks with mocha

I have been building ui automation frameworks with Cypress for some time, but always using the Cypress-Cucumber-Preprocessor.
Now I need to build one without Cucumber, just plain ol' Mocha, but I found a problem: it seems I can't use tagged hooks to execute code for specific tests (scenarios in Cucumber).
The scenario is basically this. I have a spec file with several tests. I have a "before" hook that seeds test data to a Mongo db, and eventually I might need to add a hook or hooks to execute something (whatever) before a specific test.
With Cucumber you have a way to tag a given scenario (#tag), and then you can create a hook that will be executed ONLY before or after that specific scenario:
#tag
Scenario: Tagged scenario
Given condition
When I do this
Then I should see that
before({tag : '#tag'}, () => {
code
})
I haven't found a way to do this with mocha in Cypress... Anyone has found a way?
thx
You can use beforeEach or before, which do much the same thing in Mocha.
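Plain Mocha has no tag-based hooks, so the usual workaround is to scope a hook by block rather than by tag: put the test that needs special setup into its own describe/context block with its own before. A sketch (the cy.task names are made up and assume matching tasks registered in the plugins file):

describe('my spec', () => {
  before(() => {
    // runs once before all tests in this file, e.g. seed the Mongo db
    cy.task('seedDb'); // hypothetical task
  });

  context('the "tagged" scenario', () => {
    before(() => {
      // runs only before the tests in this inner block
      cy.task('prepareSpecialData'); // hypothetical task
    });

    it('does the specific thing', () => {
      // ...
    });
  });

  it('another test, unaffected by the inner hook', () => {
    // ...
  });
});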

How to tell if middleware contains a Run()?

Is there any way to tell in ASP.NET Core whether any given middleware contains a Run() call that will stop the pipeline? It seems that UseMvc() is one big one, but I am not even certain about that; I just keep reading that it needs to go at the end, and I assume that is because it contains a call to Run().
Perhaps there is a way to generate a visualisation of the pipeline for all middleware currently in use, showing which one contains the Run() call?
There is no sure way to tell, beyond reading documentation on each specific piece of middleware.
Quoting itminus from the comments on my question:
Not only Run(), but also MapWhen() will terminate the pipeline. Also, anyone could create a custom middleware that doesn't invoke the next delegate and thereby terminates the pipeline.
It's the duty of each middleware to determine whether there's a need to call next. There's no built-in way to visualize the pipeline except by reading the documentation/source code. That's because all the middlewares are built into a single final delegate at startup time. When a request comes in, that final delegate is used to process it. As programmers, we know what the middlewares will do, when the pipeline branches, and when it terminates, because we write the code. But the program won't know until it actually runs, precisely because the final delegate is built at startup time.

How to test handle_cast in a GenServer properly?

I have a GenServer which is responsible for contacting an external resource. The result of calling the external resource is not important, and even occasional failures are acceptable, so exposing it to other parts of the code via handle_cast seems appropriate. I do have an interface-like module for that external resource, and I'm using one GenServer to access the resource. So far so good.
But when I tried to write tests for this GenServer, I couldn't figure out how to test the handle_cast. I have interface functions for the GenServer, and I tried to test those, but they always return :ok, except when the GenServer is not running, so there was nothing meaningful to assert on.
I changed the code a bit: I extracted the code in handle_cast into another function and created a similar handle_call callback. Then I could test handle_call easily, but that felt like a hack.
I would like to know how people generally test async code like this. Was my approach correct, or at least acceptable? If not, what should I do instead?
The trick is to remember that a GenServer process handles messages one by one sequentially. This means we can make sure the process received and handled a message, by making sure it handled a message we sent later. This in turn means we can change any async operation into a synchronous one, by following it with a synchronisation message, for example some call.
The test scenario would look like this:
Issue asynchronous request.
Issue synchronous request and wait for the result.
Assert on the effects of the asynchronous request.
If the server doesn't have any suitable function for synchronisation, you can consider using :sys.get_state/2 - a call meant for debugging purposes, that is properly handled by all special processes (including GenServer) and is, what's probably the most important thing, synchronous. I'd consider it perfectly valid to use it for testing.
You can read more about other useful functions from the :sys module in GenServer documentation on debugging.
A cast request is of the form:
Module:handle_cast(Request, State) -> Result

Types:
    Request = term()
    State = term()
    Result = {noreply,NewState} |
             {noreply,NewState,Timeout} |
             {noreply,NewState,hibernate} |
             {stop,Reason,NewState}
    NewState = term()
    Timeout = int()>=0 | infinity
    Reason = term()
so it is quite easy to unit test it by calling it directly (no need to even start a server), providing a Request and a State, and asserting on the returned Result. Of course it may also have side effects (like writing to an ets table, modifying the process dictionary, ...), so you will need to initialize those resources before the call and check the effects after the assertion.
For example:
test_add() ->
    {noreply,15} = my_counter:handle_cast({add,5},10).

Intern:Leadfoot - testing drag-n-drop

I have a webapp that uses Dojo widgets and drag-and-drop functionality, and I'm using Intern to test it. Now I want to test the drag-and-drop mechanism, and for this I hoped to use Leadfoot's helper, DragAndDrop.js.
As seen in the script's example, here is my code:
return new DragAndDrop(remote)
    .findByXpath(source)
    .dragFrom()
    .end()
    .findByXpath(target)
    .dragTo();
I have the return statement because this code is part of a promise chain.
However, it does not seem to be working, and I do not get any kind of errors or exceptions, neither in the browser, nor in Selenium, nor on the Intern side. Honestly, I have no idea where to start.
Any suggestion? May I provide further information?
Have you tried:
return remote.findByXpath(target)
    .then(function (targetNode) {
        return remote.findByXpath(source)
            .moveMouseTo(1, 1)
            .pressMouseButton()
            .sleep(500)
            .moveMouseTo(targetNode)
            .sleep(500)
            .releaseMouseButton();
    });
Note: the sleep calls aren't necessary; I put them in so that you can see the actions more clearly.