How to run separate tests in parallel with NightwatchJS

The end goal is to run two different tests in parallel, for example to simulate a video conference between a teacher and a student.
I have tried searching for a solution; however, all the results I find are for running the same tests in parallel.
Is this possible with NightwatchJS? If so, how?

Yes, it's possible. Write the two tests as async functions and run them with Promise.all.
Something like this:
const [first, second] = await Promise.all([func1(), func2()]);
Here first is the result of func1 and second is the result of func2. The two functions run in parallel, and Promise.all waits for both of them to finish.
Check the exact syntax in the documentation, but I'm sure you get the idea.
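A minimal sketch of that idea (the test bodies are placeholders, not real NightwatchJS API calls):

// Two independent async test flows started together; Promise.all waits
// for both and rejects as soon as either flow fails.
async function teacherTest() {
  // ...open one browser session and join the conference as the teacher...
  return 'teacher ok';
}

async function studentTest() {
  // ...open a second browser session and join as the student...
  return 'student ok';
}

(async () => {
  const [first, second] = await Promise.all([teacherTest(), studentTest()]);
  console.log(first, second); // 'teacher ok' 'student ok'
})();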

Related

Can a test case find more than one bug?

I'm studying how to measure the quality of test cases in terms of "effective" and "efficient".
Effective: it finds a high percentage of existing bugs.
60 test cases -> 60 bugs is better than 60 test cases -> 30 bugs.
Efficient: it has a high rate of success (bugs found/test cases).
20 test cases -> 8 bugs is better than 40 test cases -> 8 bugs.
Then it got me thinking: is it possible for a single test case to find multiple bugs? If so, can you give an example? Maybe for a program that sums two integer values.
I think it's impossible, because each test case has only one expected value and thus aims to uncover a single bug.
Yes, it's possible: you can have multiple asserts on different things. But is it desirable? That's a different question. A good test case tests one thing and only one thing. And don't forget that a test does not test for bugs; it tests that functionality works as expected. A given piece of functionality may fail for multiple reasons: a loop might fail because of a counter that is not incremented, an incorrect exit condition, or something else entirely.
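As a hedged illustration (the sum function and its bug are hypothetical, not from the question), one test case with several asserts can probe several potential defects, though in most frameworks the first failing assert aborts the run, so each execution reports at most one failure:

// Hypothetical example: one test case, several asserts, each probing a
// different way sum(a, b) could be broken.
const assert = require('assert');

function sum(a, b) {
  return a - b; // deliberately buggy implementation, for illustration only
}

function testSum() {
  assert.strictEqual(sum(2, 3), 5);  // would catch a wrong-operator bug
  assert.strictEqual(sum(0, 5), 5);  // would catch ignoring the first operand
  assert.strictEqual(sum(-1, 1), 0); // would catch sign-handling bugs
}

try {
  testSum();
  console.log('all asserts passed');
} catch (err) {
  // Execution stops at the first failing assert, so a single run reports
  // one failure even if several asserts would have failed.
  console.error('sum test failed:', err.message);
}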
Here are two more measures for you:
Does the test enable rapid identification of the problem? Don't forget that tests are not just run on new code; they are also run to check that a modification has not broken existing code. You could put all your tests into a single mega-test, but then if the test failed you would not know what was broken.
Is the test tolerant of code modification? Will the test need to be rewritten when I modify the code being tested? If I make minor changes to my object under test, I don't want to rewrite all my tests.

Can I save or fork the current state of Selenium browser?

I have several tests to run and all of them share a certain number x of initial actions (say login, fill form fields, click buttons, etc.), then they diverge.
Is it possible to let the browser execute the first x actions just once, save the current state, and then execute all the tests separately (in parallel if possible), each one with a separate browser instance?
Thanks
You should try to avoid duplicating effort in your tests. However, you must aim for consistency above all, and maintainability is probably just as important.
What that means is that using the browser in a way a real user wouldn't (I think your state-saving idea counts) is very risky for consistency, and may fail to give you the meaningful results you need.
Another alternative, a 'monolithic' test that attempts to cover multiple scenarios within one user session, is also problematic: it's slower to run and slower to write and debug.
To be honest I think the idea of "browser state" is one that isn't a good fit for the real web.
My suggestion is to run dedicated, self-contained, clean tests - even if they do duplicate things like login/registration forms. However, if it is important to minimise the length of your test runs, try running them in parallel: ideally on multiple VMs, or via Selenium Grid.
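As a rough sketch of that parallel approach (not part of the original answer), here is how two self-contained tests might each get their own browser session against a Selenium Grid using the selenium-webdriver Node package; the hub URL and scenario names are hypothetical:

const { Builder } = require('selenium-webdriver');

async function runScenario(name, url) {
  // Each test is self-contained: its own session, its own login steps.
  const driver = await new Builder()
    .usingServer('http://my-grid:4444/wd/hub') // hypothetical Grid hub URL
    .forBrowser('chrome')
    .build();
  try {
    await driver.get(url);
    // ...repeat the shared setup (login, form filling) here, then run the
    // scenario-specific steps for this test...
  } finally {
    await driver.quit();
  }
}

// Run both tests concurrently, each in a separate browser instance.
Promise.all([
  runScenario('scenario A', 'https://example.com/a'),
  runScenario('scenario B', 'https://example.com/b'),
]).catch(console.error);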

In “Given-When-Then” style BDD tests, is it OK to have multiple “When”s conjoined with an “And”?

I read Bob Martin's brilliant article on how "Given-When-Then" can actually be compared to a finite state machine (FSM). It got me thinking: is it OK for a BDD test to have multiple "When"s?
For example:
GIVEN my system is in a defined state
WHEN an event A occurs
AND an event B occurs
AND an event C occurs
THEN my system should behave in this manner
I personally think these should be 3 different tests for good separation of intent. But other than that, are there any compelling reasons for or against this approach?
When multiple steps (WHEN) are needed before the actual assertion (THEN), I prefer to group them in the initial-condition part (GIVEN) and keep only one event in the WHEN section. This shows that the event that really triggers the "action" of my SUT is that one, and that the previous ones are just steps to get there.
Your test would become:
GIVEN my system is in a defined state
AND an event A occurs
AND an event B occurs
WHEN an event C occurs
THEN my system should behave in this manner
but this is more of a personal preference I guess.
If you truly need to test that a system behaves in a particular manner under those specific conditions, it's a perfectly acceptable way to write a test.
I found that another limiting factor can appear in an end-to-end (E2E) testing scenario where you want to reuse a statement multiple times. The BDD framework of my choice (pytest_bdd) is implemented so that a given statement has a single return value, which is mapped to the step function's input parameters automagically by the name of the function bound to the given step. This design prevents the reusability I wanted: I needed to create objects and add them to a sequence object provided by another given statement.
I worked around the limitation with a test fixture (which I named test_context): a Python dictionary (a hash map) shared between steps that don't have that single-value requirement. The '(when) add object to sequence' step looked up the sequence in the context and appended the object in question to it, so I could reuse that action as many times as I needed.
This requirement was tricky because BDD aims to be descriptive. I could have used a single given statement with a pickled memory map of the sequence object I wanted to run the test action on, but would that have been useful? I think not. I needed to construct the sequence first, and that required reusable statements. Although this is not in the BDD bible, I think it is a practical and pragmatic solution to a very real E2E descriptive-testing problem.
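To make the shared-context idea concrete, here is a rough sketch of the same pattern translated to JavaScript with @cucumber/cucumber step definitions (the answer above used pytest_bdd; the step names here are hypothetical):

const assert = require('assert');
const { Given, When, Then } = require('@cucumber/cucumber');

Given('an empty sequence', function () {
  // The World object (`this`) plays the role of the test_context fixture:
  // a plain map that any step can read from or write to.
  this.context = { sequence: [] };
});

// Reusable step: it looks up the sequence in the shared context and
// appends, so it can appear many times within one scenario.
When('I add object {string} to the sequence', function (name) {
  this.context.sequence.push({ name });
});

Then('the sequence contains {int} objects', function (count) {
  assert.strictEqual(this.context.sequence.length, count);
});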

Unit tests for Stored Procedures in SQL Server

I want to implement Test First Development in a project that will be implemented using only stored procedures and functions in SQL Server.
Is there a way to simplify the implementation of unit tests for the stored procedures and functions? If not, what is the best strategy for creating those unit tests?
It's certainly possible to do xUnit style SQL unit testing and TDD for database development - I've been doing it that way for the last 4 years. There are a number of popular T-SQL based test frameworks, such as tsqlunit. Red Gate also have a product in this area that I've briefly looked at.
Then of course you have the option to write your tests in another language, such as C#, and use NUnit to invoke them, but that's entering the realm of integration rather than unit tests; those are better suited to validating the interaction between your back end and your SQL public interface.
http://sourceforge.net/apps/trac/tsqlunit/
http://tsqlt.org/
Perhaps I can be so bold as to point you towards the manual for my own free (100% T-SQL) SQL Server unit testing framework, SS-Unit, as that provides some idea of how you can write unit tests, even if you don't intend to use it:
http://www.chrisoldwood.com/sql.htm
http://www.chrisoldwood.com/sql/ss-unit/manual/SS-Unit.html
I also gave a presentation to the ACCU a few years ago on how to unit test T-SQL code, and the slides for that are also available with some examples of how you can write unit tests either before or after.
http://www.chrisoldwood.com/articles.htm
Here is a blog post based around my database TDD talk at the ACCU conference a couple of years ago that collates a few relevant posts (all mine, sadly) around this way of developing a database API.
http://chrisoldwood.blogspot.co.uk/2012/05/my-accu-conference-session-database.html
(That seems like a fairly gratuitous amount of navel gazing. It's not meant to be, it's just that I have a number of links to bits and pieces that I think are relevant. I'll happily delete the answer if it violates the SO rules)
It is doable. In each test's setup, create a new instance of the database and seed it with some data, then execute the procs. Validate your assumptions (e.g. that you got the correct data back), drop the test database, and do it all again in the next test.
Unit testing in the database is actually a big topic, and there are a lot of different ways to do it. The simplest is to write your own tests like this:
BEGIN TRY
    <statement to test>;
    -- If we get here, the statement did not raise an error, so the test fails.
    THROW 50000, 'No error raised', 16;
END TRY
BEGIN CATCH
    -- The test passes only if the expected constraint violation occurred.
    IF ERROR_MESSAGE() NOT LIKE '%<constraint being violated>%'
        THROW 50000, '<description of operation> failed', 16;
END CATCH
In this way you can implement different kinds of data tests:
- CHECK constraint tests, foreign key constraint tests, uniqueness tests, and so on...

DRY for JMeter tests

Is there a way to modularize JMeter tests?
I have recorded several use cases for our application. Each of them is in a separate thread group in the same test plan. To control the workflow I wrote some primitives (e.g. postprocessor elements) that are used in many of these thread groups.
Is there a way not to copy these elements into each thread group but to use some kind of referencing within the same test plan? What would also be helpful is a way to reference elements from a different file.
Does anybody have any solutions or workarounds? I guess I am not the only one trying to follow the DRY principle...
I think this post from Atlassian describes what you're after, using Module Controllers. I've not tried it myself yet, but I have it on my list of things to do :)
http://blogs.atlassian.com/developer/2008/10/performance_testing_with_jmete.html
Jared
You can't do this with JMeter. The UI doesn't support it. The Workbench would be a perfect place to store those common elements but it's not saved in JMX.
However, you can parameterize just about anything so you can achieve similar effects. For example, we use the same regex post processor in several thread groups. Even though we can't share the processor, the whole expression is a parameter defined in the test plan, which is shared. We only need to change one place when the regex changes.
They are talking about saving the Workbench in a future version of JMeter. Once that's done, it would be trivial to add some UI to refer to elements in the Workbench.
Module controllers are useful for executing the same samples in different thread groups.
It is possible to use the same assertions in multiple thread groups very easily.
At your Test Plan level, create a set of User Defined Variables with names like "Expected_Result_x". Then, in your Response Assertion, simply reference the variable name ${Expected_Result_x}. You would still need to add the assertion manually to every page you want a particular assertion on, but now you only have to change it in one place if the assertion changes.