Functional testing of UI

I am supposed to write functional test cases for a UI; performance testing is out of scope. Should I write separate test cases for each button? Say there is a Review button that takes you to another page: do we write these as separate test cases or not?

Make sure test cases are modular and test case steps are as granular as possible.
You can refer to the LINK.

Related

How to reduce UI tests in Test pyramid without affecting the quality of UI delivery

We have a big java based desktop application in our company that we are building test cases for.
We want to follow the test pyramid approach as follows:
1) We ask devs to write a lot of unit tests (but we don't verify whether they have written good-quality unit tests or not).
2) We write service tests, where we go through each and every line of the code and write JUnit tests to cover every possible method and condition.
3) We are planning to create UI tests to ensure the UI works correctly.
I read a lot of blogs about the test pyramid approach and understood that we should invest much less time in writing UI tests: they offer poor testing ROI because they generally take a long time to execute and are brittle due to their dependency on UI elements. I absolutely agree on these points.
But the question is, when we say we need a much lower number of UI tests, do we mean we just need UI tests for priority-1 cases (or smoke tests)? On the other hand, the UI is the element the user interacts with, so don't we need to make sure it is not broken in the first place? I mean, when we say we need to reduce the number of UI tests, won't it affect the quality of the UI delivery?
For example, I have written a lot of service tests and made sure the backend business logic is perfect, but what if the UI is messed up? Is it not equally important?
I don't think the number of UI tests is that important.
What I think the Test Automation Pyramid means is that a single UI test case covers many of the lower-level tests. For example, a single UI test case might make 5 API calls and invoke 10 methods. That makes UI tests more brittle and complex, so it is better to write them after the API and unit layers have been sufficiently tested.

How to include *_test.go files in HTML coverage reports

I would like to know if there is a way that I can generate an HTML coverage report that also includes statements covered on the tests themselves.
Regarding the merits of doing such a thing, I would like to see that my tests are as useful as the rest of my code. I've become accustomed to including my test code coverage in Python, and this is something I find helpful.
Update for clarification:
People seem to think I'm talking about testing my tests. I'm not. I just want to see that the statements in my tests are definitely being hit in the HTML coverage report. For example, code coverage on a function in my application might show me that everything has been hit, but it won't necessarily show me that every boundary has been tested. Seeing statements lit up in my test sources shows me that I wrote my tests well enough. Yes, better-factored code shouldn't be so complex as to need that assurance, but sometimes things just aren't better.
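For context, the standard Go toolchain workflow under discussion looks like this; by default it instruments only the non-test files of each package, which is exactly the limitation the question runs into:

    go test -coverprofile=coverage.out ./...
    go tool cover -html=coverage.out -o coverage.html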
I'm not sure I understand the reasoning behind this.
Unit tests, especially in Go, should be simple and straightforward enough that just by reading them you should be able to spot whether a statement is useless.
If that is not the case, maybe you are implementing your unit tests in a way that is too complicated?
If that is the case, I can recommend checking out table-driven tests for most cases (though they are not suited to most concurrency-heavy code, or to methods that depend heavily on manipulating state), as well as trying out TDD (test-driven development).
By using TDD, instead of building your tests in order to try to cover all of your code, you would be writing simple tests that simply validate the specs of your code.
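As an aside, a minimal table-driven test might look like this sketch (the abs function is invented purely to keep it self-contained):

    package mathutil

    import "testing"

    // abs is a trivial function under test, included only to make the
    // sketch self-contained.
    func abs(n int) int {
        if n < 0 {
            return -n
        }
        return n
    }

    // TestAbs is table-driven: each case is one row in the table, and
    // the loop body is the only test logic to read.
    func TestAbs(t *testing.T) {
        cases := []struct {
            name string
            in   int
            want int
        }{
            {"positive", 3, 3},
            {"negative", -3, 3},
            {"zero", 0, 0},
        }
        for _, c := range cases {
            if got := abs(c.in); got != c.want {
                t.Errorf("%s: abs(%d) = %d, want %d", c.name, c.in, got, c.want)
            }
        }
    }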
You don't write tests for your tests. Where does it end if you do? Those tests for tests aren't covered, so you'll need to write tests for your tests for your tests. But wait! Those tests for your tests for your tests don't have coverage either, so you had better write tests for your tests for your tests for your tests.

What should a developer take into consideration when writing code in order to make it easier for a tester to automate tests with selenium webdriver?

All I can think of is writing the code from a tester's perspective: having an id or name for buttons and fields. Does anything else matter for making automation easier?
I am at the beginning of my career as a tester and I need to know what to request from the developers.
A few tips:
Put a unique ID on elements wherever you can.
Make use of data-* attributes. This allows QA to find elements using relevant data (see the sketch after this list).
Avoid "divitis" (deeply nested, anonymous markup). It makes for brittle selectors.
Make event triggers known. I once filled out a form whose submit button wouldn't enable, until I asked my dev and he explained that I needed to trigger the blur event.
Use the automation. You don't have to fix the issues, but you can alert the QA person that your feature introduces false positives until the tests are updated.
Write Unit Tests (not UI Tests) for your project.
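To make the data-* tip concrete, here is a rough sketch using the third-party Go bindings (github.com/tebeka/selenium); the URLs, the data-testid value, and the form behaviour are all invented for illustration:

    package uitest

    import (
        "testing"

        "github.com/tebeka/selenium"
    )

    func TestSubmitEnablesAfterBlur(t *testing.T) {
        // Assumes a Selenium server is already listening on port 4444.
        caps := selenium.Capabilities{"browserName": "chrome"}
        wd, err := selenium.NewRemote(caps, "http://localhost:4444/wd/hub")
        if err != nil {
            t.Fatal(err)
        }
        defer wd.Quit()

        if err := wd.Get("http://localhost:8080/signup"); err != nil {
            t.Fatal(err)
        }

        // A data-* hook survives markup refactors far better than a
        // positional XPath over anonymous <div>s.
        email, err := wd.FindElement(selenium.ByCSSSelector, `[data-testid="email"]`)
        if err != nil {
            t.Fatal(err)
        }
        if err := email.SendKeys("user@example.com"); err != nil {
            t.Fatal(err)
        }
        // Tab away to fire the blur event the submit button waits on,
        // per the "make event triggers known" tip above.
        if err := email.SendKeys(selenium.TabKey); err != nil {
            t.Fatal(err)
        }
    }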

What is the difference between integration testing and functional testing? [closed]

Are functional testing and integration testing the same?
You begin your testing with unit testing; then, after completing unit testing, you go for integration testing, where you test the system as a whole. Is functional testing the same as integration testing? You are still taking the system as a whole and testing it for functionality conformance.
Integration testing is when you test more than one component and how they function together. For instance, how another system interacts with your system, or the database interacts with your data abstraction layer. Usually, this requires a fully installed system, although in its purest forms it does not.
Functional testing is when you test the system against the functional requirements of the product. Product/Project management usually writes these up and QA formalizes the process of what a user should see and experience, and what the end result of those processes should be. Depending on the product, this can be automated or not.
Functional Testing:
Yes, we are testing the product or software as a whole to check whether it works properly functionally (testing buttons, links, etc.).
For example: a login page.
You provide the username and password and test whether it takes you to the home page or not.
Integration Testing:
Yes, you test the integrated software, but here you test where the data flow happens and whether any changes happen in the database.
For example: sending an e-mail.
You send a mail to someone; there is a data flow and also a change in the database (the count in the Sent table increases by 1).
Remember: clicking links and images is not integration testing. Hopefully you can see why: there is no change in the database from just clicking a link.
Hope this helped you.
Functional Testing: a process of testing in which each and every component of the module is tested. E.g., if a web page contains a text field, radio buttons, buttons, and a drop-down, all of these components need to be checked.
Integration Testing: a process in which the data flow between two modules is checked.
This is an important distinction, but unfortunately you will never find agreement. The problem is that most developers define these from their own point of view. It's very similar to the debate over Pluto. (If it were closer to the Sun, would it be a planet?)
Unit testing is easy to define. It tests the CUT (Code Under Test) and nothing else. (Well, as little else as possible.) That means mocks, fakes, and fixtures.
At the other end of the spectrum there is what many people call system integration testing. That's testing as much as possible, but still looking for bugs in your own CUT.
But what about the vast expanse between?
For example, what if you test just a little bit more than the CUT? What if you include a real Fibonacci function instead of a fixture you had injected? I would call that functional testing, but the world disagrees with me.
What if you include time() or rand()? Or what if you call http://google.com? I would call that system testing, but again, I am alone.
Why does this matter? Because system-tests are unreliable. They are necessary, but they will sometimes fail for reasons beyond your control. On the other hand, functional tests should always pass, not fail randomly; if they are fast, they might as well be used from the start in order to use Test-Driven Development without writing too many tests for your internal implementation. In other words, I think that unit-tests can be more trouble than they are worth, and I have good company.
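A tiny sketch of that distinction, with invented names: the unit test injects a canned collaborator, while the "functional" test (in this answer's sense) lets the real but still deterministic Fibonacci run deeper down the call stack:

    package pricing

    import "testing"

    // discount derives a price reduction from a growth function; the
    // function is injected so that a unit test can swap in a stub.
    func discount(fib func(int) int, n int) int {
        return fib(n) * 10
    }

    // realFib is the real, deterministic collaborator.
    func realFib(n int) int {
        if n < 2 {
            return n
        }
        return realFib(n-1) + realFib(n-2)
    }

    // Unit test: everything beyond the CUT is a stub.
    func TestDiscountUnit(t *testing.T) {
        stub := func(int) int { return 8 }
        if got := discount(stub, 6); got != 80 {
            t.Fatalf("got %d, want 80", got)
        }
    }

    // Functional test: real code runs deeper down the call stack, but
    // nothing unrepeatable (clock, network) is involved, so it should
    // always pass.
    func TestDiscountFunctional(t *testing.T) {
        if got := discount(realFib, 6); got != 80 {
            t.Fatalf("got %d, want 80", got)
        }
    }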
I put tests on 3 axes, with all their zeroes at unit-testing:
Functional-testing: using real code deeper and deeper down your call-stack.
Integration-testing: higher and higher up your call-stack; in other words, testing your CUT by running the code which would use it.
System-testing: more and more unrepeatable operations (O/S scheduler, clock, network, etc.)
A test can easily be all 3, to varying degrees.
I would say that the two are tightly linked to each other, and it is very tough to distinguish between them.
In my view, Integration testing is a subset of functional testing.
Functional testing is based on the initial requirements you receive; you test that the application behaves as expected given the requirements.
When it comes to integration testing, it is about the interaction between modules: if module A sends an input, is module B able to process it or not?
Integration testing - Integration testing is nothing but the testing of different modules: you have to test the relationships between modules. For example, you open Facebook and see the login page; after entering your login ID and password, you see the Facebook home page. The login page is one module and the home page is another. You have to check only the relationship between them, meaning that when you log in, only the home page must open, not a message box or anything else. There are two main approaches to integration testing: top-down and bottom-up.
Functional Testing - In functional testing you only have to think about input and output. You have to think like an actual user: testing what input you gave and what output you got is functional testing. You only have to observe the output; you don't need to test the code of the application or software.
In functional testing, the tester focuses only on the functionality and sub-functionality of the application: the functionality of the app should work properly.
In integration testing, the tester has to check the dependencies between modules or sub-modules; for example, records from one module should be fetched and displayed correctly in another module.
Integration Testing:-
Once unit testing is done and the issues in the related components are resolved, all the required components are integrated into one system so that it can perform an operation.
After combining the components of the system, testing whether the whole system works properly is called integration testing.
Functional Testing:-
Testing is mainly divided into two categories:
1. Functional testing:- testing whether the software works according to the requirements of the user.
2. Non-functional testing:- testing whether the software satisfies quality criteria such as stress and security tests.
Usually, the customer will provide requirements only for the functional tests; requirements for non-functional tests are often not stated, but the application must necessarily satisfy them.
Integration Testing
It can be seen as testing how the different modules of the system work together.
It mostly refers to the integrated functionality of the different modules, rather than the individual components of the system.
For any system or software product to work efficiently, every component has to be in sync with the others.
Most of the time, the tool used for integration testing is chosen to be the same one used for unit testing.
It is used in complex situations, when unit testing proves insufficient to test the system.
Functional Testing
It can be defined as testing the individual functionality of modules.
It refers to testing the software product at an individual level, to check its functionality.
Test cases are developed to check the software for expected and unexpected results.
This type of testing is carried out from a user's perspective; that is, it considers the user's expectation for a given type of input.
It is also referred to as black-box or closed-box testing.
Checking the functionality of the application is generally known as functional testing, whereas integration testing checks the flow of data from one module to another.
Let's take the example of a money-transfer app. Suppose we have a page in which we enter all the credentials, press the Transfer button, and check that we get a success message: that is functional testing. But if, in the same example, we verify that the amount was actually transferred, it is integration testing.
Authors diverge a lot on this. I don't believe there is "the" correct interpretation for this. It really depends.
For example: most Rails developers consider unit tests as model tests, functional tests as controller tests and integration tests as those using something like Capybara to explore the application from a final user's perspective - that is, navigating through the page's generated HTML, using the DOM to check for expectations.
There are also acceptance tests, which in turn are a "live" documentation of the system (usually written in Gherkin so they can be expressed in natural language), describing all of the application's features through multiple scenarios, which are in turn automated by a developer. Those, IMHO, could be considered both functional tests and integration tests.
Once you understand the key concept behind each of these, you get to be more flexible regarding right or wrong. So, again IMHO, a functional test could also be considered an integration test. An integration test, depending on the kind of integration it exercises, may not be considered a functional test; but you generally have some requirements in mind when you write an integration test, so most of the time it can also be considered a functional test.

User Stories To Code [closed]

Suppose I have a bunch of user stories (as a result of the planning session I went through with my team). I don't have any code in the application yet and am going to start with my 'A' (highest-priority) stories/epics.
Say, for example:
"As A User I should be able to Search for more users so that I can add more friends on the website"
So how should the team go about coding the application while doing TDD?
The team starts by creating unit tests, i.e., those that take care of creating the models.
Then everybody takes a story and starts writing functional tests to create the controllers/views (so should they be doing integration testing while writing the functional tests?).
Then they do the integration tests.
I am actually confused about how the integration tests fit in: if all the integration tests work, then all the functional and unit tests should pass anyway.
So, if the application is just starting (i.e., no code has been written yet), what process do people usually follow with TDD/BDD when they pick up a story and start implementing an application from scratch?
Very good question! The TDD/BDD way would suggest you take the user stories and write validation points (read: high-level tests). These use GWT (Given/When/Then) language, as follows.
"As A User I should be able to Search for more users so that I can add more friends on the website"
given the website URL
when the site loads
then a search field should be visible/accessible.
This is your first piece of feedback and your first opportunity to iterate with the product owner. Ask questions like: where should the search bar go? Should it auto-complete? Etc. Next you assign behavior to the UI objects. These also have validation points.
This would define the behavior of the search button:
given a go button next to the search field
when the button is clicked
then a search should be performed
This would describe the logic of your search:
given a search term "John" and a user set including "John, Joan, Jim, Steve"
when a search is performed
then the results should contain "John" and "Joan"
The first validation point would describe the behavior of linking the controller's search button to an arbitrary model implementing the search algorithm. The second validation point describes the search algorithm itself. The advantage is that these pieces are defined independently and can be designed in parallel. It also gives you a nice API and small, easy-to-plan features to iterate on, as well as the ability to iterate on or refine any piece of the puzzle without affecting the rest of the pie.
Update: I also want to mention that what I refer to as validation points can loosely be associated with UATs, or User Acceptance Tests. Don't get hung up on the terms, because they're irrelevant; focus on the idea behind them. You need to take the user story and break it down into specs. (This can be done in one or many passes, using UATs, validation points, both, or magic beans; just make sure you break them down.) If what you have broken your user stories into can be written in a tool like FitNesse, JUnit, or RSpec, then use one of those tools; otherwise you need either further conversation (are your user stories too vague?) or another pass over what you have, to break it down further (UATs into validation points). Don't obsess over the tools or feel like you need to automate everything from the beginning. Leave Selenium alone until you get the manual process down. Eventually you want specs that can be written in programmatic, test-like form; at that point you should be able to use something as simple as JUnit to start coding. When you get better/fancier you can pick up EasyB or RSpec story runner and other things.
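As a sketch, the search validation point above translates into a plain test almost verbatim (Go here; a JUnit version would look the same in spirit, and the two-character prefix-matching rule is invented just to make it runnable):

    package search

    import "testing"

    // matches reports whether name is a fuzzy hit for term; the rule
    // here (a shared two-character prefix) is purely illustrative.
    func matches(term, name string) bool {
        return len(term) >= 2 && len(name) >= 2 && term[:2] == name[:2]
    }

    // searchUsers is a stand-in for the model behind the search button.
    func searchUsers(term string, users []string) []string {
        var hits []string
        for _, u := range users {
            if matches(term, u) {
                hits = append(hits, u)
            }
        }
        return hits
    }

    // Given a search term "John" and a user set including John, Joan,
    // Jim, and Steve, the results should contain John and Joan.
    func TestSearchValidationPoint(t *testing.T) {
        got := searchUsers("John", []string{"John", "Joan", "Jim", "Steve"})
        want := []string{"John", "Joan"}
        if len(got) != len(want) {
            t.Fatalf("got %v, want %v", got, want)
        }
        for i := range want {
            if got[i] != want[i] {
                t.Fatalf("got %v, want %v", got, want)
            }
        }
    }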
This is where we usually start off with a Sprint 0, and in this sprint we'll have what XP calls a spike session (or throwaway-code session). In this session you can begin prototyping.
In your session, write a few user acceptance tests (preferably in the BDD format) and then start writing a test first to match one of your UATs.
For example:
Given a search is requested
where the user's name is "testUser"
then 1 result should be returned.
With this you now have a goal for your first test, which you write, then begin writing code to make that test pass. As you go forward you should begin to see how the app should be put together to complete the story.
Then, in the next sprint, I would begin building the stories/tasks needed to complete the feature, based upon what you discovered in Sprint 0.
"I am actually confused how the integration tests fit in.if all the
integration tests work ( ie all the functional, units tests should anyway pass )"
It depends. Sure, it's possible to write integration tests in such a way that all unit and functional tests pass. It's just much more difficult.
Imagine that you have 3 models, 3 controllers and 3 views. Imagine that all are super simple with no conditions or loops and have one method each.
You can now (unit) test each one of these for a total of 9 assertions and have full coverage. You can throw in an integration test to make sure all these things work well together.
If instead you skip the unit/functional tests and still need full coverage, you're going to need 27 assertions (3 x 3 x 3) to exercise every combination.
In practice things are more complicated, of course: you'll need a much larger number of integration tests to reach the same level of coverage.
Also, if you practice TDD/BDD, more often than not you will wind up with lots of unit tests anyway. The integration test is there to make sure all these pieces fit well together and do what the customer wants; the pieces themselves have already been tested individually by the unit tests.
First, break the story apart. You'll need:
A User object: What does it do? Create some tests to figure this out and write the code
A way to search users; maybe a SearchUserService? Again, create tests and write the code (see the sketch below).
A way to connect users ...
Now, you have the model. Next, you do the same for the controllers. When they work, you start with the views.
Or, when you're a pro and have done this a thousand times already, you might be able to roll several steps at once.
But you must first chop the problem into digestible pieces.
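For instance, a first test for the hypothetical SearchUserService might look like the sketch below. In real TDD you would write the test first, watch it fail to compile, and only then add the minimal definitions shown above it (all names here are invented):

    package users

    import "testing"

    // In TDD these definitions come after the test below has failed;
    // they are included here only so the sketch compiles.
    type User struct{ Name string }

    type SearchUserService struct{ users []User }

    func NewSearchUserService(users []User) *SearchUserService {
        return &SearchUserService{users: users}
    }

    func (s *SearchUserService) Search(name string) []User {
        var hits []User
        for _, u := range s.users {
            if u.Name == name {
                hits = append(hits, u)
            }
        }
        return hits
    }

    func TestSearchUserServiceFindsByName(t *testing.T) {
        svc := NewSearchUserService([]User{{Name: "Ada"}, {Name: "Alan"}})
        got := svc.Search("Ada")
        if len(got) != 1 || got[0].Name != "Ada" {
            t.Fatalf("got %v, want exactly one user named Ada", got)
        }
    }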
Next come the integration tests. They will simulate what a user does. In the general case, you write a set of tests that work like unit tests (they are just called integration tests, but you should still be able to run them automatically). These tests need to talk to the web app just like the user does, so you need a simulated web browser, etc.
You can try httpunit or env.js for this.
If you're doing TDD, you start with a test that shows that the system does not perform the required behaviour described by the user story. When that is failing in the way you expect, with useful diagnostics, you then start implementing the behaviour by adding or modifying classes, working unit-test first.
So, in TDD you write integration tests before you write unit tests.
To bootstrap the whole process, one usually writes a "walking skeleton": a system that performs the thinnest slice of realistic functionality possible. The walking skeleton lets one build up the integration-test infrastructure against simple functionality.
The rest of the project then fleshes out that skeleton.