E2E tests for random scenarios

I'm interested in approaches to testing random scenarios in E2E testing.
Q1: We need to check that all the system parts are connected correctly, but what does that mean when the server returns random answers, i.e. when there are multiple happy paths instead of a single happy path?
Q2: How do you test errors in E2E scenarios, for example different server errors? Do they have to be tested at all?

My experience with testing random scenarios in E2E testing comes from gamification logic, which is hard to test, especially if you have to bend automation to do it. Randomness at every step is not the best-case situation for checking. I had similar issues while automating a questionnaire feature on one web platform and betting games on another.
Just to give you context: every challenge was loaded based on user level, and most questions influenced or were influenced by others.
After a lot of discussion and trying things out, it was clear that we should cover the main business (money) paths and leave the interesting parts for exploratory testing. So extract the most stable/predictable journeys and cover those with automation that is (reasonably) insensitive to expected random events and can retry recorded steps in order to complete the scenario. My takeaway was to find the right balance between implementation cost and value.
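One way to make such a journey predictable is sketched below. Everything here is an assumption, not your stack: the `myproject.challenge_service` module, its `fetch_next_challenge` function, and the `AppDriver` helper are hypothetical stand-ins. The idea is to stub the randomized server response so the E2E flow always sees the same data, and to wrap flaky steps in a small retry helper.

```python
import time
from unittest import mock

import pytest

FIXED_CHALLENGE = {"id": 42, "level": 3, "question": "2 + 2 = ?", "answer": "4"}


def retry(step, attempts=3, delay=1.0):
    """Re-run a flaky step a few times before giving up."""
    last_error = None
    for _ in range(attempts):
        try:
            return step()
        except AssertionError as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error


@pytest.fixture
def app():
    # Assumption: your own E2E driver / page-object helper.
    from myproject.e2e import AppDriver
    return AppDriver()


def test_user_completes_challenge_happy_path(app):
    # Pin the "random" challenge so the journey is stable and repeatable.
    with mock.patch("myproject.challenge_service.fetch_next_challenge",
                    return_value=FIXED_CHALLENGE):
        app.login("test-user")
        app.open_challenge()
        app.answer(FIXED_CHALLENGE["answer"])
        # Retry the final check: score updates may be eventually consistent.
        retry(lambda: app.assert_score_increased())
```

The point is not the specific helpers but the split: the business path is deterministic and automated, while the genuinely random behaviour stays in exploratory testing.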

Related

Differences between User Acceptance Test and Test Case Scenario and Functional Test [closed]

In the context of Agile software development, what's the difference between User Acceptance Test (UAT), Test Case Scenario and Functional Test?
The members of the team I am part of consider the three to be different things, but I see them as exactly the same.
In fact, all of them are designed with the end user in mind.
There are a lot of different sorts of testing. Many of them overlap. Many use the same tools. Many are specializations of other, more general terms. Often they blur together. People argue about the terminology all the time.
You're correct that they all have the end user in mind, but they are different.
User Acceptance Test
This is a specific form of an acceptance test where a subject-matter expert, ideally the client or their representative, tests the software. This is in addition to functional and acceptance testing done by QA. It's designed to simulate, as closely as possible, an actual end-user using the software; the tester is asked to perform a bunch of common tasks with the new system, but not given specific instructions nor coaching on how to do it.
For example, if you were creating a site for an airline, they might be asked to register, login, book a flight, make a payment, check in, check their flight status, and so on.
Functional Test
This is blackbox testing done by the QA role. It verifies the thing does what it's supposed to do; you give it inputs, you check the outputs. Typically this is testing against the specification and/or requirements document.
"Functional" here doesn't refer to code functions, but that the system functions as expected. Testing specific functions is unit testing.
They can be purely functional, "when I do X I get Y". They can be about resource use, "when I do X it uses no more than Y memory/time". Or about error checking, "when I give it garbage I get a well formed error". Anything that validates it meets the requirements.
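As an illustration only (the `shipping_cost` function and `myproject.pricing` module below are hypothetical, not anything from the question): a functional test treats the system as a black box and checks outputs against requirements, including a well-formed error for garbage input.

```python
import pytest

# Assumption: a hypothetical requirements-driven pricing function.
from myproject.pricing import shipping_cost


def test_standard_order_matches_requirement():
    # Requirement: orders under 1 kg ship for a flat $5.00.
    assert shipping_cost(weight_kg=0.5, express=False) == 5.00


def test_garbage_input_gives_well_formed_error():
    # Requirement: invalid input is rejected with a clear error, not a crash.
    with pytest.raises(ValueError, match="weight must be positive"):
        shipping_cost(weight_kg=-1, express=False)
```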
Test Case Scenario
Sounds like Scenario Testing: this uses stories, similar to user stories, that help a tester work through a complicated testing scenario. Scenario testing tests complicated combinations of things which might arise during actual use and often cut across multiple systems.
An example of a test scenario might be: "in the middle of processing the system runs out of disk space; verify an admin is notified, that processing resumes once space is cleared, and that no data is lost".
A User Acceptance Test might use Scenario Testing.
These are my rules of thumb:
Unit testing: does this one function work?
Integration testing: do the functions work together?
Functional testing: does it function as required?
Acceptance testing: is it acceptable to the client?
Regression testing: does it still work like it used to?
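A rough illustration of the first two levels above, with hypothetical `parse_price` and `Cart` names: a unit test exercises one function in isolation, an integration test checks that the pieces work together.

```python
# Assumption: hypothetical modules, named only for illustration.
from myproject.parsing import parse_price
from myproject.cart import Cart


def test_parse_price_unit():
    # Unit testing: does this one function work?
    assert parse_price("$19.50") == 19.50


def test_cart_total_integration():
    # Integration testing: do parsing and cart totalling work together?
    cart = Cart()
    cart.add_item("book", parse_price("$19.50"))
    cart.add_item("pen", parse_price("$2.50"))
    assert cart.total() == 22.00
```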
User Acceptance Testing is having business users trying out your app.
There is also acceptance testing done by QA when they check new functionality - you can call it Story Acceptance Testing to distinguish between the two. These are not necessarily functional tests (they could be security tests, performance tests, etc.).
A Test Case is a number of steps that check a small piece of functionality. It has Prerequisites, Steps, Expected Result, Actual Result. This is one way of carrying out functional testing; others include exploratory testing and checklists.
A Test Scenario is a set of steps that cover a bigger picture. Often they cover how real users would use the app, but they are carried out by the QA team.
A Functional Test is a test that checks functionality as opposed to, e.g., performance. This can be a unit test as well, but since this terminology is mostly used by QA, when people talk about them they usually mean a functional system test.
Note that different authorities may use different definitions of the same terms. Check out Holes in testing terminology: Test Types and Test Levels. Since it's impossible to find the one true terminology, it's more important that you use terms consistently within your team, even if they are used differently in other companies and teams.
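Just to make the Test Case structure described above concrete, a minimal sketch (the field names follow the answer; nothing here is a specific tool's API):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ManualTestCase:
    """A test case as described above: prerequisites, steps, expected/actual result."""
    title: str
    prerequisites: List[str]
    steps: List[str]
    expected_result: str
    actual_result: Optional[str] = None  # filled in during execution

    def record(self, actual: str) -> bool:
        self.actual_result = actual
        return self.actual_result == self.expected_result


login_case = ManualTestCase(
    title="Registered user can log in",
    prerequisites=["User account exists"],
    steps=["Open the login page", "Enter valid credentials", "Submit"],
    expected_result="Dashboard is shown",
)
```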
User acceptance testing is a process that obtains confirmation that a system meets the agreed customer/product manager requirements.
Functional testing is the actual test of the software's functionality. There can be many different types of testing, but in simple words it is testing that the functionality behaves as expected.
A test scenario is the high-level classification of test cases: a module is first divided into scenarios, and each scenario is then broken down into small, specific test steps with expected results, which are the test cases. So a test scenario is a group of test cases limited to a specific piece of functionality or module.

Is there a difference between Scenario Testing and Functional Testing or are they synonymous?

So I'm looking into introducing a more extensive testing process and I was reading about Functional Testing. I've worked with Scenario Testing before, but I'm fairly new to the term Functional Testing.
It seems that the two are synonymous, however I've not been able to find any information on whether they are synonymous or whether one is a subcategory of the other or whether they are two separate things. I've searched on stackoverflow, I've read the wiki pages for both, I've read some blogs on both and some university pages on both and can't seem to find the answer.
So as the title says; is there a difference between Scenario Testing and Functional Testing or are they synonymous?
Scenario testing got introduced with BDD (behaviour-driven development). So, basically, while testing a single scenario you will most of the time have multiple functional tests. The aim of scenario testing is to test the scenario as a whole, and BDD promotes this heavily.
Let me try to elaborate with an example. Say that a project is following BDD and the scenario to be tested is specified in the form of acceptance criteria, as below:
As an authenticated user, in order to withdraw cash, I should be able to use the ATM.
So, in this case, to test this scenario you would need multiple functional tests. Listing a few of them below:
Authenticate user
Enter valid cash to withdraw
Cash withdrawn
This is just one flow, and there will be multiple positive and negative functional tests involved in this scenario which need to be designed.
So, we can say that a scenario consists of multiple functional tests. However, they are not synonyms since Scenario testing can also include scenarios related to performance, usability testing etc. and is not just restricted to functional testing.
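To make that concrete, here is a minimal sketch under assumptions (the `Atm` class and its methods are hypothetical; a team doing full BDD might express this in Gherkin instead): the scenario test strings the functional steps together, while each step can also be tested on its own.

```python
import pytest

# Assumption: hypothetical system under test.
from myproject.atm import Atm


@pytest.fixture
def atm():
    machine = Atm(available_cash=1000)
    machine.register_card("1234-5678", pin="0000", balance=500)
    return machine


def test_authenticate_user(atm):
    # Functional test 1: authentication on its own.
    assert atm.authenticate("1234-5678", pin="0000") is True


def test_withdraw_valid_amount(atm):
    # Functional test 2: a valid withdrawal on its own.
    atm.authenticate("1234-5678", pin="0000")
    assert atm.withdraw(100) == 100


def test_scenario_authenticated_user_withdraws_cash(atm):
    # Scenario test: the whole acceptance criterion, end to end.
    assert atm.authenticate("1234-5678", pin="0000")
    dispensed = atm.withdraw(100)
    assert dispensed == 100
    assert atm.balance_of("1234-5678") == 400
```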
Hope I could resolve your query to some extent.

Are scenario tests groups of sequential unit tests?

I read the Wikipedia article on scenario testing, but I am sad to say it is very short. I am left wondering: are scenario tests a collection of sequential unit tests? Or, perhaps, like a single multi-step unit test? Do many frameworks support scenario tests, or are they covered by unit testing?
If they have nothing to do with automation, what are they?
I don't think there's any fixed relationship between the number and distribution of tests and scenario tests.
I think the most common code-representation of a scenario is a specific set of business data required to support a specific story (scenario). This is often provided in the form of database data, fake stub data or a combination of both.
The idea is that this dataset has known and well-defined characteristics that will provide well defined results all across a given business process.
For a web application I could have a single web test (or several, for variations) that clicks through the full scenario. In other cases the scenario is used at a lower level, possibly testing part of the scenario in a functional test or a unit test. In that case I normally never group the tests by scenario, but choose the functional grouping I normally use for unit/functional tests. Quite often there's a method within "Subsystem1Test" called "testScenario1" or maybe "testScenarioInsufficientCredit". I prefer to give my scenarios names.
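A rough sketch of that idea, with all names hypothetical (the `insufficient_credit_customer` fixture and the `myproject.billing` module are made up for illustration): the scenario is a well-defined dataset, and tests at different levels reuse it.

```python
import pytest

# Assumption: hypothetical subsystem under test.
from myproject import billing


@pytest.fixture
def insufficient_credit_customer():
    # The scenario: a fixed, well-defined dataset with known characteristics.
    return {
        "customer_id": "C-1001",
        "credit_limit": 50.00,
        "open_orders": [{"order_id": "O-1", "amount": 80.00}],
    }


def test_scenario_insufficient_credit_blocks_order(insufficient_credit_customer):
    # Part of the scenario exercised at the functional/unit level.
    decision = billing.authorize_order(
        customer=insufficient_credit_customer,
        order=insufficient_credit_customer["open_orders"][0],
    )
    assert decision.approved is False
    assert decision.reason == "insufficient credit"
```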
In addition to korsenvoid's response, in my experience scenario based testing will often be automated as it will be included in regression testing. Regression testing is regularly automated as to do it manually does not scale well with regular releases.
In commercial software, good examples of scenario tests are the tutorials included with the user documentation. These obviously must work in each release or be removed from the docs, and hence must be tested.
While you can carry out scenario testing using sequenced unit tests, my guess is that it is more common to use GUI based automation tools. For example, I use TestComplete in this role with a scripting framework to good effect. Scenario tests are typically carried out from a user/client perspective which can be difficult to accurately replicate at a unit level.
IMHO, scenario testing is a testing activity, as opposed to a development activity; hence it's about testing a product, not unit(s) of that product. The test scenarios are end-to-end scenarios, using the natural interfaces of the product. If the product has programmatic interfaces, then you could use a unit test framework, or FitNesse.

In agile like development, who should write test cases? [closed]

Our team has a task system where we post small incremental tasks assigned to each developer.
Each task is developed in its own branch, and then each branch is tested before being merged to the trunk.
My question is: Once the task is done, who should define the test cases that should be done on this task?
Ideally I think the developer of the task himself is best suited for the job, but I have had a lot of resistance from developers who think it's a waste of their time, or that they simply don't like doing it.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
But likewise, the down part of developers doing the test cases, is that they may leave out things that they think will break. (even subconsciously maybe)
As the project manager, I ended up writing the test cases for each task myself, but my time is taxed and I want to change this.
Suggestions?
EDIT: By test cases I mean the description of the individual QA tasks that should be done to the branch before it should be merged to the trunk. (Black Box)
The Team.
If a defect gets to a customer, it is the team's fault, therefore the team should be writing test cases to assure that defects don't reach the customer.
The Project Manager (PM) should understand the domain better than anyone on the team. Their domain knowledge is vital to having test cases that make sense with regard to the domain. They will need to provide example inputs and answer questions about expectations on invalid inputs. They need to provide at least the 'happy path' test case.
The Developer(s) will know the code. You suggest the developer may be best for the task, but that you are looking for black box test cases. Any tests that a developer comes up with are white box tests. That is the advantage of having developers create test cases – they know where the seams in the code are.
Good developers will also be coming to the PM with questions "What should happen when...?" – each of these is a test case. If the answer is complex "If a then x, but if b then y, except on Thursdays" – there are multiple test cases.
The Testers (QA) know how to test software. Testers are likely to come up with test cases that the PM and the developers would not think of – that is why you have testers.
I think the Project Manager, or Business Analyst should write those test cases.
They should then hand them over to the QA person to flesh out and test.
That way you ensure no missing gaps between the spec, and what's actually tested and delivered.
The developers should definitely not do it, as they'll just be repeating what their unit tests already check.
So it's a waste of time.
In addition these tests will find errors which the developer will never find as they are probably due to a misunderstanding in the spec, or a feature or route through the code not having been thought through and implemented correctly.
If you find you don't have enough time for this, hire someone else, or promote someone to this role, as it's key to delivering an excellent product.
From past experience, we had pretty good luck defining tests at different levels to test slightly different things:
1st tier: At the code/class level, developers should be writing atomic unit tests. The purpose is to test individual classes and methods as much as possible. These tests should be run by developers as they code, presumably before archiving code into source control, and by a continuous-integration server (automated) if one is being used.
2nd tier: At the component integration level, again have developers creating unit tests, but ones that test the integration between components. The purpose is not to test individual classes and components, but to test how they interact with each other. These tests should be run manually by an integration engineer, or automated by a continuous-integration server, if one is in use.
3rd tier: At the application level, have the QA team running their system tests. These test cases should be based on the business assumptions or requirements documents provided by a product manager. Basically, test as if you were an end user, doing the things end users should be able to do, as documented in the requirements. These test cases should be written by the QA team and the product managers who (presumably) know what the customer wants and how they are expected to use the application.
I feel this provides a pretty good level of coverage. Of course, tiers 1 and 2 above should ideally be run before sending a built application to the QA team.
Of course, you can adapt this to whatever fits your business model, but this worked pretty well at my last job. Our continuous-integration server would also kick out an email to the development team if one of the unit tests failed during the build/integration process, in case someone forgot to run their tests and committed broken code into the source archive.
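A tiny illustration of the first two tiers, with hypothetical names (`TaxCalculator` and `Checkout` are assumptions, not real code from the answer): tier 1 isolates one class, tier 2 checks that two components cooperate.

```python
# Assumption: hypothetical components, named only for illustration.
from myproject.tax import TaxCalculator
from myproject.checkout import Checkout


def test_tax_calculator_unit():
    # Tier 1: one class, one method, no collaborators.
    assert TaxCalculator(rate=0.25).tax_on(100.0) == 25.0


def test_checkout_uses_tax_calculator_integration():
    # Tier 2: how the components interact, not each one alone.
    checkout = Checkout(tax_calculator=TaxCalculator(rate=0.25))
    checkout.add_item("book", price=100.0)
    assert checkout.total_with_tax() == 125.0
```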
We experimented with a pairing of the developer with a QA person with pretty good results. They generally 'kept each other honest' and since the developer had unit tests to handle the code, s/he was quite intimate with the changes already. The QA person wasn't but came at it from the black box side. Both were held accountable for completeness. Part of the ongoing review process helped to catch unit test shortcomings and so there weren't too many incidents that I was aware of where anyone was purposely avoiding writing X test because it would likely prove there was a problem.
I like the pairing idea in some instances and think it worked pretty well. Might not always work, but having those players from different areas interact helped to avoid the 'throw it over the wall' mentality that often happens.
Anyhow, hope that is somehow helpful to you.
The reason I don't like having my QA people do it, is because I don't like the idea of them creating their own work. For example they might leave out things that are simply too much work to test, and they may not know the technical detail that is needed.
Yikes, you need to have more trust in your QA department, or a better one. I mean, imagine if you had said "I don't like having my developers develop software. I don't like the idea of them creating their own work."
As a developer, I know that there are risks involved in writing my own tests. That's not to say I don't do it (I do, especially if I am doing TDD), but I have no illusions about test coverage. Developers are going to write tests that show that their code does what they think it does. Not too many are going to write tests that apply to the actual business case at hand.
Testing is a skill, and hopefully your QA department, or at least, the leaders in that department, are well versed in that skill.
"developers who think it's a waste of their time, or that they simply don't like doing it" Then reward them for it. What social engineering is necessary to get them to create test cases?
Can QA look over the code and test cases and pronounce "Not Enough Coverage -- Need More Cases". If so, then the programmer that has "enough" coverage right away will be the Big Kahuna.
So, my question is: Once the task is done, who should define the goal of "enough" test cases for this task? Once you know "enough", you can make the programmers responsible for filling in "enough" and QA responsible for assuring that "enough" testing is done.
Too hard to define "enough"? Interesting. Probably this is the root cause of the conflict with the programmers in the first place. They might feel it's a waste of their time because they already did "enough" and now someone is saying it isn't "enough".
The QA people, in conjunction with the "customer", should define the test cases for each task [we're really mixing terminology here], and the developer should write them - first!
Select (not just pick randomly) one or two testers, and let them write the test cases. Review. It could also be useful if a developer working with a task looks at the test cases for the task. Encourage testers to suggest improvements and additions to test sets - sometimes people are afraid to fix what the boss did. This way you might find someone who is good at test design.
Let the testers know about the technical details - I think everyone in an agile team should have read access to code, and whatever documentation is available. Most testers I know can read (and write) code, so they might find unit tests useful, possibly even extend them. Make sure the test designers get useful answers from the developers, if they need to know something.
My suggestion would be to have someone else look over the test cases before the code is merged, to ensure quality. Granted, this may mean that a developer is reviewing another developer's work, but that second set of eyes may catch something that wasn't initially caught. The initial test cases can be written by any developer, analyst or manager, not a tester.
QA shouldn't write the test cases, as there may be situations where the expected result hasn't been defined, and by that point it may be hard to have someone referee between QA and development if each side thinks their interpretation is the right one. It is something I have seen many, many times and wish didn't happen as often as it does.
I loosely break my tests down into "developer" tests and "customer" tests, the latter of which would be "acceptance tests". The former are the tests that developers write to verify that their code is performing correctly. The latter are tests that someone other than the developers writes to ensure that behavior matches the spec. The developers must never write the acceptance tests, because their creation of the software they're testing assumes that they did the right thing. Thus, their acceptance tests are probably going to assert what the developer already knew to be true.
The acceptance tests should be driven by the spec and if they're written by the developer, they'll get driven by the code and thus by the current behavior, not the desired behavior.
The Agile canon is that you should have (at least) two layers of tests: developer tests and customer tests.
Developer tests are written by the same people who write the production code, preferably using test-driven development. They help in coming up with a well-decoupled design, and ensure that the code is doing what the developers think it is doing - even after a refactoring.
Customer tests are specified by the customer or customer proxy. They are, in fact, the specification of the system, and should be written in a way that they are both executable (fully automated) and understandable by the business people. Often enough, teams find ways for the customer to even write them, with the help of QA people. This should happen while - or even before - the functionality gets developed.
Ideally, the only tasks for QA to do just before the merge, is pressing a button to run all automated tests, and do some additional exploratory (=unscripted) testing. You'll want to run those tests again after the merge, too, to make sure that integrating the changes didn't break something.
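As a rough sketch of the two layers (everything below is hypothetical; a team using FitNesse or Gherkin would express the customer test in those tools instead): the developer test pins implementation behavior, while the customer test reads like the specification.

```python
import pytest

# Assumption: hypothetical code under test.
from myproject.discounts import DiscountEngine


def test_discount_halves_the_price():
    # Developer test: checks the code does what the developers think it does.
    engine = DiscountEngine()
    assert engine.apply(price=10.0, percent=50) == pytest.approx(5.0)


def test_loyal_customers_get_ten_percent_off():
    """Customer test, written to mirror the specification:

    Given a customer with more than 10 previous orders,
    when they place a new order of $100,
    then the order total is $90.
    """
    engine = DiscountEngine()
    total = engine.price_order(order_total=100.0, previous_orders=11)
    assert total == pytest.approx(90.0)
```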
A test case begins first in the story card.
The purpose of testing is to drive defects to the left (earlier in the software development process when they are cheaper and faster to fix).
Each story card should include acceptance criteria. The Product Owner pairs with the Solution Analyst to define the acceptance criteria for each story. These criteria are used to determine whether a story card's purpose has been met.
The story card acceptance criteria will determine what automated unit tests need to be coded by the developers as they do Test Driven Development. They will also drive the automated functional tests implemented by the automation testers (perhaps with developer support if using tools like FIT).
Just as importantly, the acceptance criteria will drive the automated performance tests and can be used when analyzing the profiling of the application by the developers.
Finally, the user acceptance test will be determined by the acceptance criteria in the story cards and should be designed by the business partner and/or users. Follow this process and you will likely release with zero defects.
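One hedged sketch of acceptance criteria driving automated tests, in the spirit of a FIT-style table expressed as a pytest parametrization (the `myproject.loan` module and `monthly_payment` function are hypothetical): each row would come straight from the story card's acceptance criteria.

```python
import pytest

# Assumption: hypothetical function under test.
from myproject.loan import monthly_payment


# Acceptance criteria from the story card, written as a table of examples.
ACCEPTANCE_EXAMPLES = [
    # principal, annual_rate, years, expected_payment
    (100_000, 0.05, 30, 536.82),
    (100_000, 0.05, 15, 790.79),
    (250_000, 0.04, 30, 1193.54),
]


@pytest.mark.parametrize("principal,rate,years,expected", ACCEPTANCE_EXAMPLES)
def test_monthly_payment_matches_acceptance_criteria(principal, rate, years, expected):
    assert monthly_payment(principal, rate, years) == pytest.approx(expected, abs=0.01)
```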
I've rarely heard of or seen Project Managers write test cases, except in smaller teams. Any large, complex software application has to have an analyst who really knows the application. I worked at a mortgage company as a PM - was I supposed to understand sub-prime lending, interest rates, and such? Maybe at a superficial level, but real experts were needed to make sure those things worked. My job was to keep the team healthy, protect the agile principles, and look for new opportunities for work for my team.
The system analyst should review all test cases and their relation to the use cases.
Plus, the analyst should perform the final UAT, which could also be based on the test cases.
So the analyst and the QA person are doing a sort of peer review:
the QA person reviews the use cases while building test cases, and the analyst reviews the test cases after they are written and while performing UAT.
Of course the BA is the domain expert, though not from a technical point of view. The BA understands the requirements, and the test cases should be mapped to the requirements. Developers should not be the ones writing the test cases to test against their own code. QA can write detailed test steps per requirement, but the person who writes the requirement should dictate what needs to be tested. Who actually writes the test cases, I don't care too much, as long as the test cases can be traced back to requirements. I would think it makes sense for the BA to guide the testing direction or scope, and for QA to write the granular testing plans.
We need to evolve from the "this is how it has been done or should be done" mentality; it is failing, and failing continuously. The best way to resolve the test plan/case writing issue is for test cases to be written against the requirements doc in waterfall, or the user story in agile, as those reqs/user stories are being written. That way there is no question about what needs to be tested, and the QA and UAT teams can execute the test cases and focus their time on actual testing and defect resolution.

How do you organise/layout your test scripts

I'm interested in how others organise their test scripts or have seen good test scripts organised anywhere they've worked. Also, what level of detail is in those test scripts. This specifically relates to test scripts created for manual testing as opposed to those created for any automated test purposes.
The problem as I see it is this: there is a lot of complexity in test scripts, but without the benefit of the principles used in organising a complex or large code base. You need to be able to specify what a piece of code should do, but without boring someone to death as they read it.
Also, how do you lay out test scripts? I'm not keen to create fully specified scripts suitable to be run by data-entry types, as that isn't the team we have and the overhead of maintaining them seems too high. Also, it feels to me that specifying the process in such detail removes responsibility for the quality of the product from the person actually doing the testing. Do people specify every button click and value to be entered? If not, what level of detail is specified?
Tests executed by humans should be at a very high level of abstraction.
E.g. a test case for stackoverflow registration:
Good:
A site visitor with an existing OpenID account registers as a stackoverflow user and posts an answer.
Bad:
1) Navigate to http://stackoverflow.com
2) Click on the login link
3) Etc...
This is important for several reasons:
a) it keeps the tests maintainable. So you don't have to update your test script every time navigation elements are relabeled (e.g. 'login' changes to 'sign in').
b) it saves your testers from going insane from the tedium of minute details.
c) writing detailed manual test scripts is a poor use of your finite test resources.
Detailed manual test scripts will divert your testers into filing bugs for minor documentation issues. You want to use your time finding the real bugs that will impact customers.
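If the same high-level test is later automated, the abstraction can be preserved with a page-object layer. A sketch under assumptions (the `RegistrationFlow` helper is hypothetical): the test states the scenario, and the helper owns the clicks and labels that change.

```python
# Assumption: a hypothetical page-object layer.
from myproject.e2e.flows import RegistrationFlow


def test_openid_visitor_registers_and_posts_answer():
    # Reads like the high-level manual script; no selectors or button labels here.
    flow = RegistrationFlow()
    flow.register_with_openid("visitor@example.com")
    flow.post_answer(question_id=123, text="Use a page object.")
    assert flow.answer_is_visible(question_id=123)
```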
Tests can be grouped by priority. The BVT/smoke tests could have the highest priority with functional, integration, regression, localization, stress, and performance having lower priorities. Depending on your test pass you would select a priority and run all tests with that or higher priorities. All you need to do is determine which priority a particular test is.
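For example, with pytest this kind of priority grouping is often done with markers (the priority names here are just placeholders, and custom markers should be registered in pytest.ini to avoid warnings):

```python
import pytest


@pytest.mark.smoke
def test_user_can_log_in():
    ...


@pytest.mark.regression
def test_discount_applied_to_renewals():
    ...

# Then select by priority on the command line, e.g.:
#   pytest -m smoke                   # highest-priority pass only
#   pytest -m "smoke or regression"   # broader pass
```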
I try to make manual tests fit into an automated structure - you can have both.
The organization schemes used by automated tests (e.g., the xUnit frameworks) work for me. In fact, they can be used to semi-automate the tests, by stopping and calling for a manual test to be run, or input to be entered, or a GUI to be inspected. The scheme is usually to mirror the directory structure of the production code, or to include the tests inside the production code, sometimes as inner classes. Tests above the unit level can often fit into the higher-level directories (assuming you have a deep enough directory tree). These higher-level tests can go in (mirrored) directories that have no production code, but are there for organizational purposes.
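A hedged sketch of that semi-automation idea (the `launch_app` helper is hypothetical): the automated part sets the stage, then the test pauses and asks the human tester to confirm what only a human can judge.

```python
# Assumption: a hypothetical application launcher.
from myproject.e2e import launch_app


def ask_tester(prompt: str) -> bool:
    """Pause the run and let the manual tester confirm a visual check."""
    # Run with `pytest -s` so input() is not swallowed by output capturing.
    answer = input(f"{prompt} [y/n]: ")
    return answer.strip().lower().startswith("y")


def test_report_page_renders_correctly():
    app = launch_app()
    app.open_report("monthly-summary")  # automated setup
    assert ask_tester("Does the chart render without clipped labels?")
```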
The level of detail - well, that depends, right?
Matt Andresen has provided a good answer for the general case, but there are situations when you can't do it that way. For example, when you are working on validated applications that must comply with regulations from other parties like the FDA, and everything goes through very intensive audit, review and sign-off, then the second, fully detailed style from his example is required. Although in that case I would opt for moving to automation with HP QuickTest Pro or IBM Rational Robot.
Maybe you should try some test repository? There are again tools such as HP Quality Center and IBM products, but these can be expensive. You could find cheaper ones that let you organize tests into tree structures by requirement/feature, assign them priorities, group them into test suites for releases, group them into regression testing suites, etc.