Only having one assertion statement in a functional test? - selenium

This question is similar to this one but regarding functional tests rather than unit tests.
I'm currently testing a UI using Selenium and I was wondering whether only one assertion statement is needed, or whether it depends on the test.
For example, if I want to test a basic Facebook login, would it suffice to use just one assertion for the end state (e.g. finding an element that only exists when logged in), or should the test be more detailed and include more than one assertion (check that you're on the correct site, check the inputs, check for an element that only exists when logged in, etc.)?

Let me try to answer your questions one by one:
if only one assertion statement is needed, or if it depends on the test - Think of a manual test case: it consists of several steps, but at the end we cross-check the actual result against the expected result. The same idea is implemented in automation through assertions. So, as a best practice, every test should end with an assertion, even though nothing technically forces you to include one.
should the test be more detailed and include more than one assertion statement - You can always have multiple assertions in a test case; there is no issue with that. But remember that if one assertion fails, the rest of the assertions won't be executed, which gives you a single result: either pass or fail. If you want multiple validation points that all run regardless of whether each one passes or fails, you have to collect the results yourself, for example with if/else blocks, as sketched below.
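Here is a rough Python/pytest + Selenium sketch of that difference; the fixture, URL, and element locators are only placeholders, not taken from your application:

import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    # Hypothetical setup; adjust to your own browser and start page.
    d = webdriver.Chrome()
    d.get("https://www.facebook.com/")
    yield d
    d.quit()


def test_login_page_hard_asserts(driver):
    # If the first assert fails, the remaining checks never run.
    assert "Log in" in driver.title
    assert driver.find_element(By.ID, "email").is_displayed()
    assert driver.find_element(By.ID, "pass").is_displayed()


def test_login_page_collected_checks(driver):
    # Collect every failed check, then fail once at the end with all of them.
    failures = []
    if "Log in" not in driver.title:
        failures.append("title does not mention 'Log in'")
    if not driver.find_elements(By.ID, "email"):
        failures.append("email input missing")
    if not driver.find_elements(By.ID, "pass"):
        failures.append("password input missing")
    assert not failures, "Validation failures: " + "; ".join(failures)

The second test reports every failed validation at once instead of stopping at the first one.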
Let me know if this answers your query.

The one-assertion-per-testcase rule is a bit over the top for functional tests. For functional end-to-end tests, I think the general guide should be to test only ONE behavior per test. Use as many asserts as you need to verify that ONE behavior.
If a test is failing, you want to understand what is not working without reading the actual test code. Having multiple assertions in a single test can lead to testing multiple behaviors, and therefore multiple reasons to fail, which is sub-optimal.
Do be practical, as functional end-to-end tests tend to be slow. Repeating the same steps just to check a slightly different assert is a waste of run time. Your test suites should also be fast if you want to run them on each check-in, so don't write too many tests at this level. Keep a good balance, as the test pyramid suggests.
For your example, I think one assert would be enough. The behavior to check is whether the user is now logged in, not whether all the elements are on the page. Also keep in mind that if the implementation of the site changes, you need to update all your tests; the less you assert, the more maintainable your tests will be.
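As a sketch of "one behavior, one assert" in Python with Selenium (URL, credentials, and locators are placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By


def test_user_can_log_in():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "email").send_keys("user@example.com")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()

        # Single assertion on the end state: an element that only exists
        # when the user is logged in.
        assert driver.find_elements(By.ID, "logout-link"), \
            "Expected to be logged in, but the logout link was not found"
    finally:
        driver.quit()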

Related

robot framework - BDD style - No Given present

When writing a BDD-style test case in Robot Framework, if there is no step or keyword to run, what should be written in the Given statement?
I am writing an API test using the BDD style in Robot Framework, but for the very first test to be executed there is no Given step that needs to run. Is there any placeholder we can use?
Please suggest.
If there is no Given statement, then you should skip it. IMHO, you shouldn't write one just for styling either. BDD scenarios are meant to express the business requirement, and a dummy Given/When/Then statement that doesn't explain a requirement doesn't make sense.
For example, your scenario should look like:
When the user registers only with First Name and Last Name
And doesn't enter Age
Then the error message "Please enter Age" should be displayed
You just skip it and don't write it; or, if you do want to have something there - for styling, for example - write Given No Operation.
There's always some context in which the scenario is taking place. Maybe the application is already running, or already installed, or you're already on the home page. In the case of the API, whatever service you're providing is available.
That context is your Given.
The trigger for behaviour - the thing which causes a change to happen - is the event, or the When.
It's often the case that when you start up an application or service, a default state is created. There's no trigger for it; it's just how things start. So for instance, you might see something like:
Given the Tetris game is running
Then the grid should be empty
If your scenario is concerned with whether the game starts up correctly, you can phrase it as a When:
When I start the game
Then the grid should be empty
Even then, there's probably a:
Given the game is installed
If working with an API, where the assumption is that the API is available, I might put a check here to find out whether it really is (yes, I really do mean putting an assertion in the Given). If that check fails, it's usually because the service didn't start, which is usually because you've got environment problems. It's a great way to flag that up.
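To make that concrete, here is a minimal Python sketch of such a check; it isn't Robot Framework-specific, and the health-check URL is an assumption. A function like this could sit behind the Given step as a keyword.

import requests


def the_service_is_available(base_url="https://api.example.com/health"):
    """Fail fast with a clear message when the environment is broken."""
    try:
        response = requests.get(base_url, timeout=5)
    except requests.RequestException as exc:
        raise AssertionError(f"Service not reachable at {base_url}: {exc}")
    assert response.status_code == 200, (
        f"Expected HTTP 200 from {base_url}, got {response.status_code}"
    )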
It's also OK to put the step in as English, but leave it empty in code, if you're really confident about the starting state being in good shape.
Given I am on Google.com
The purpose of scenarios in BDD isn't just to automate tests; it's to illustrate the intent and value of the behaviour using concrete examples. So assume you start with nothing. No internet. No application. No API. What needs to change for your scenario to run? That's your context. It's more common, I've found, to have a missing When than a missing Given, for those starting scenarios, since no user has triggered anything.
Thinking of your scenarios as living documentation with examples, rather than as tests, might help to clarify what you need to include. BDD scenarios become tests as a nice by-product of exploring the behaviour through conversation and automating the result.
You might also like this blog post on 4 different ways of handling Givens.

Is there a way we can chain the scenarios in karate like java method chaining

I have been using Karate for the past 6 months, and I am really impressed with the features it offers.
I know Karate is meant to test APIs individually, but we are also trying to use it for E2E tests that involve calling multiple scenarios step by step.
Our feature file looks like the below:
1.Call Feature1:Scenario1
2.Call Feature2:Scenario2
.....
Note: We are re-using scenarios for both API testing and E2E testing. Sometimes I find it difficult to remember all the feature files.
Can we chain the scenario calls like in Java? I doubt the feature file will let us do that. We need your valuable suggestions; please let us know if you feel our approach is not correct.
First, I'd like to quote the documentation: https://github.com/intuit/karate#script-structure
Variables set using def in the Background will be re-set before every Scenario. If you are looking for a way to do something only once per Feature, take a look at callonce. On the other hand, if you are expecting a variable in the Background to be modified by one Scenario so that later ones can see the updated value - that is not how you should think of them, and you should combine your 'flow' into one Scenario. Keep in mind that you should be able to comment-out a Scenario or skip some via tags without impacting any others. Note that the parallel runner will run Scenario-s in parallel, which means they can run in any order.
So by default, I actually recommend teams to have Scenario-s with multiple API calls within them. There is nothing wrong with that, and I really don't understand why some folks assume that you should have a Scenario for every GET or POST etc. I thought the "hello world" example would have made that clear, but evidently not.
If you have multiple Scenario-s in a Feature, just run the feature, and all Scenario-s will be executed or "chained". So what's the problem?
I think you need to change some of your assumptions. Karate is designed for integration testing. If you really need to have a separate set of tests that test one API at a time, please create separate feature files. The whole point of Karate is that there is so little code needed - that code-duplication is perfectly ok.
Let me point you to this article by Google. For test-automation, you should NOT be trying to re-use things all over the place. It does more harm than good.
For a great example of what happens when you try to apply "too much re-use" in Karate, see this: https://stackoverflow.com/a/54126724/143475

E2E tests for random scenarios

I'm interested in approaches to testing random scenarios in E2E testing.
Q1: We need to check that all the system parts are connected correctly, but what does that mean when the answer from the server is random, etc. (multiple happy paths instead of a single happy path)?
Q2: How do you test errors in E2E scenarios, for example different server errors, etc.? Do they have to be tested at all?
My experience with an approach to testing random scenarios in E2E testing comes from gamification logic, which is hard to test, especially if you have to bend automation to do it. Randomness at every step is not the best scenario for checking. I had similar issues while automating a questionnaire feature on one web platform and betting games on another.
Just to give you context: every challenge was loaded based on the user's level, and most questions were influenced by, or did influence, others.
After a lot of discussions and tryouts, it was clear that we should cover the main business (money) paths and leave the interesting part for exploratory testing. So, extract the most stable/predictable journeys and cover those with automation that is (reasonably) insensitive to expected events and can record and retry steps in order to complete the scenario. My takeaway was to find the right balance of implementation cost.

What tools exist for managing a large suite of test programs?

I apologize if this has been answered before, but I'm having trouble finding a tool that fits my needs.
I have a few dozen test programs, but each one can be run with a large number of parameters. I need to be able to automatically run sweeps of many of the parameters across all or some of the test programs. I have my own set of tools for running an individual test, which I can't really change, but I'm looking for a tool that would manage the entire suite.
Thus far, I've used a home-grown script for this. The main problem I run across is that an individual test program might take 5-10 parameters, each with several values. Although it would be easy to write something that would just do a nested for loop and sweep over every parameter combination, the difficulty is that not every combination of parameters makes sense, and not every parameter makes sense for every test program. There is no general way (i.e., that works for all parameters) to codify what makes sense and what doesn't, so the solutions I've tried before involve enumerating each sensible case. Although the enumeration is done with a script, it still leads to a huge cross-product of test cases which is cumbersome to maintain. We also don't want to run the giant cross-product of cases every time, so I have other mechanisms to select subsets of it, which gets even more cumbersome to deal with.
I'm sure I'm not the first person to run into a problem like this. Are there any tools out there that could help with this kind of thing? Or even ideas for writing one?
Thanks.
Adding a clarification:
For instance, if I have parameters A, B, and C that each represent a range of values from 1 to 10, I might have a restriction like: if A=3, then only odd values of B are relevant and C must be 7. The restrictions can generally be codified, but I haven't found a tool where I could specify something like that. As for a home-grown tool, I'd either have to enumerate the tuples of parameters (which is what I'm doing) or implement something quite sophisticated to be able to specify and understand constraints like that.
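To illustrate, a rough Python sketch of expressing such a restriction as a predicate over the full cross-product (the parameter names and the rule mirror the example above):

from itertools import product

PARAMS = {"A": range(1, 11), "B": range(1, 11), "C": range(1, 11)}

CONSTRAINTS = [
    # A combination is kept only if every rule returns True.
    lambda p: p["A"] != 3 or (p["B"] % 2 == 1 and p["C"] == 7),
]


def valid_combinations():
    names = list(PARAMS)
    for values in product(*(PARAMS[n] for n in names)):
        combo = dict(zip(names, values))
        if all(rule(combo) for rule in CONSTRAINTS):
            yield combo


for combo in valid_combinations():
    print(combo)  # or hand each combo to the existing per-test runner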
We rolled our own; we have a whole test infrastructure. It manages the tests and has a number of built-in features that allow the tests to log results, and the logs are fed by the infrastructure into a searchable database for all kinds of report generation.
Each test has a class/structure with information about the test: the name of the test, the author, and a variety of other tags. When running a test suite you can run everything, or run everything with a certain tag. So if you want to test only SRAM, you can easily run only the tests tagged sram.
Our tests are all considered either pass or fail; the pass/fail criteria are determined by the author of the individual test, but the infrastructure wants to see either pass or fail. You need to define what your possible results are: as simple as pass/fail, or you might want to add pass-and-keep-going, pass-but-stop-testing, fail-but-keep-going, and fail-and-stop-testing. "Stop testing" means that if there are 20 tests scheduled and test 5 fails, then you stop; you don't go on to test 6.
You need a mechanism to order the tests. It could be alphabetical, but it might benefit from a priority scheme (you must perform the power-on test before performing a test that requires the power to be on). It may also benefit from random ordering: some tests may be passing due to dumb luck because a test before them made something work; remove that prior test and this test fails. Or vice versa: this test passes until it is preceded by a specific test, and those two don't get along in that order.
To shorten my answer: I don't know of an existing infrastructure, but I have built my own and worked with home-built ones that were tailored to our business/lab/process. You won't hit a home run the first time, so don't expect to, but try to predict a manageable set of rules for individual tests: how many types of pass/fail return values a test can return, the types of filters you want to put in place, and the kind of logging you may wish to do and where you want to store that data. Then create the infrastructure and the mandatory shell/frame for each test, and individual testers have to work within that shell.
Our current infrastructure is in Python, which lent itself to this nicely, and we are not restricted to Python-based tests only; we can use C or Python, and the target can run whatever languages/programs it can run. Abstraction layers are good: we use a simple read/write of an address to access the unit under test, and with that we can test against a simulation of the target or against real hardware when the hardware arrives. We can access the hardware through a serial debugger, JTAG, or PCIe, and the majority of the tests don't know or care, because they are on the other side of the abstraction.
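To make the shape of such a shell concrete, here is a rough Python sketch of the metadata, result types, and tag filtering described above; all of the names are illustrative, not from an existing tool:

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, List, Optional


class Result(Enum):
    PASS = auto()
    FAIL = auto()
    FAIL_STOP = auto()  # fail and stop the rest of the suite


@dataclass
class TestCase:
    name: str
    author: str
    run: Callable[[], Result]
    tags: List[str] = field(default_factory=list)


def run_suite(tests: List[TestCase], tag: Optional[str] = None) -> None:
    for test in tests:
        if tag and tag not in test.tags:
            continue
        result = test.run()
        print(f"{test.name}: {result.name}")  # or log to the results database
        if result is Result.FAIL_STOP:
            break  # e.g. the power-on test failed, so stop the suite


# Example: run only the tests tagged "sram".
suite = [
    TestCase("power_on", "alice", lambda: Result.PASS, tags=["power"]),
    TestCase("sram_march", "bob", lambda: Result.PASS, tags=["sram"]),
]
run_suite(suite, tag="sram")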

functional integration testing

Is software testing done in the following order?
Unit testing
Integration testing
Functional testing
I want to confirm if Functional testing is done after Integration testing or not.
Thanks.
That is a logical ordering, yes. Often followed by User Acceptance Testing and then any form of public alpha/beta testing before release if appropriate.
In a TDD coding environment, the order in which these tests are made to pass generally follows your order; however, they are often WRITTEN in the reverse order.
When a team gets a set of requirements, one of the first things they should do is turn those requirements into one or more automated acceptance tests, which prove that the system meets all functional requirements defined. When this test passes, you're done (IF you wrote it properly). The test, when first written, obviously shouldn't pass. It may not even compile, if it references new object types that you haven't defined yet.
Once the test is written, the team can usually see what is required to make it pass at a high level, and break up development along these lines. Along the way, integration tests (which test the interaction of objects with each other) and unit tests (which test small, atomic pieces of functionality in near-complete isolation from other code) are written. Using refactoring tools like ReSharper, the code of these tests can be used to create the objects and even the logic of the functionality being tested. If you're testing that the output of A+B is C, then assert that A+B == C, then extract a method from that logic in the test fixture, then extract a class containing that method. You now have an object with a method you can call that produces the right answer.
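To make the A+B == C extraction concrete, here is a tiny Python sketch of that flow (the answer's tooling is ReSharper/.NET, but the idea is language-agnostic, and the names are illustrative): start with the failing assertion in the test, then extract the logic into a function so the production code grows out of the test.

def add(a, b):
    # Extracted from the test once the expected behaviour was pinned down.
    return a + b


def test_sum_of_a_and_b_is_c():
    a, b, expected_c = 1, 2, 3
    assert add(a, b) == expected_c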
Along the way, you have also tested the requirements: if the requirements assert that an answer, given A and B, must be the logical equivalent of 1+2==5, then the requirements have an inconsistency indicating a need for further clarification (i.e. somebody forgot to mention that D=2 should be added to B before A+B == C) or a technical impossibility (i.e. the calculation requires there to be 25 hours in a day or 9 bits in a byte). It may be impossible (and is generally considered infeasible by Agile methodologies) to guarantee without a doubt that you have removed all of these inconsistencies from the requirements before any development begins.