Given the importance of automated tests, I am wondering whether it is possible to measure test coverage for Pact tests.
Remember that for unit tests, most frameworks/IDEs provide a way to check which parts of the code were executed, and thus whether every part of the code (say, in a class) was executed by some test.
Correspondingly, a Pact test (which intends to check the conformance of a provider or consumer to an API specification) gains credibility by testing most of the possible interactions. If it does, it can reduce the number of e2e tests by splitting their logic into a functional part, covered by unit and integration tests within one component, and Pact tests that assert the conformity of both components to the agreed-upon API. (See https://docs.pact.io/getting_started/what_is_pact_good_for/ for more details.)
Given that it is possible to automatically discover the REST API of some web components via libraries like Enunciate or Swagger, I am wondering whether it is possible to compare their findings with the Pact test descriptions to see whether any important parameters or endpoints are missing in the latter. (Also, I am not sure whether this means duplicating effort by providing both Enunciate annotations and a Pact contract, but it seems to me that the Enunciate annotations in Java are not that heavy, i.e. they mostly identify the REST endpoints via the REST annotations that are necessary for functionality anyway.)
Please let me know if my question is too general for Stack Overflow and whether I should rather ask it on a more academic Stack Exchange site.
From the docs.pact.io question archive:
How can I tell if I have good contract test coverage of my API?
This is actually the wrong question to be asking. Contract tests aren't intended to provide any particular percentage coverage of the provider (that's what the provider's own functional tests are for). Contract tests are meant to provide (as close to) 100% coverage of the consumer code that makes the calls to the provider (you can think of this as the "provider client code"). If you execute your consumer Pact tests in a separate step in your test suite, you can use standard code coverage tools to determine whether or not your Pact tests have covered a sufficient percentage of your provider client code.
https://docs.pact.io/faq/question_archive#how-can-i-tell-if-i-have-good-contract-test-coverage-of-my-api
This doesn't tell you whether you've covered all the different variations of the parameters, unfortunately. It's often not completely feasible to ensure you have a contract test for every single parameter variation (imagine having to cover 20 different combinations of a particular enum). At that stage, it can be more efficient to put in a number of well-chosen test cases, and then make sure your code can gracefully handle any exceptions (e.g. have good monitoring that reports unexpected values back to the dev team).
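For illustration, here is a minimal sketch of what that separate consumer-test step can look like, assuming pact-jvm's JUnit 5 consumer support; the provider/consumer names, the endpoint, and the response shape are hypothetical. The test drives real HTTP-client code against Pact's mock server, so running this step under a standard coverage tool (JaCoCo, for example) reports how much of your provider-client code the contract tests exercise.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "user-service") // hypothetical provider name
class UserClientPactTest {

    @Pact(consumer = "account-ui") // hypothetical consumer name
    public RequestResponsePact userExists(PactDslWithProvider builder) {
        // One interaction of the contract: GET /users/42 returns a user with a name.
        return builder
            .given("user 42 exists")
            .uponReceiving("a request for user 42")
                .path("/users/42")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody().stringType("name", "Alice"))
            .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "userExists")
    void fetchesUser(MockServer mockServer) throws Exception {
        // In a real project this block would call your own provider-client class;
        // measuring coverage of that class during the Pact test step is the point.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(mockServer.getUrl() + "/users/42"))
            .GET()
            .build();
        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("Alice"));
    }
}
```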
Related
I have an integration test that should test the creation of a new account in a CRM software.
The account creation triggers several things:
Creates the basic profile of the company
Creates every user (you can define the number of users on the registration)
Initialize the basic configuration of the account
Sends a welcome email with the starting information
etc
The test checks every aspect with several asserts, but I don't know if this is correct or if I should do a separate test for every one.
If I go for separate tests, the setup would be the same for all, so I feel like it would be a waste of time.
What you describe sounds more like an end-to-end test. It's OK to have some end-to-end tests, but they are usually very expensive to write and maintain, and they tend to be brittle.
For me, the tests in a service should give you confidence that the software you are delivering will work in production. So maybe it's OK to have a very small number of end-to-end tests that check that everything is glued together properly, but most of the actual functionality should be covered by normal tests. An example of what I would try to avoid is an end-to-end test that checks what happens when a downstream service is down.
Another very important aspect is that tests are written for other developers, not for the compiler, so keeping tests simple is important for maintainability. I want to stress this because a test with 10 lines of assertions will be unreadable for most developers; even a test of 10 lines of code is difficult to grok.
Here's how I try to build services:
If you are familiar with ATDD and hexagonal architecture, most of the features should be tested by stubbing the adapters, which allows the tests to run super fast and to fiddle with the adapters using test doubles. These tests shouldn't interact with anything outside the JVM, and they give a good level of confidence that the features will work. If the feature has too many side effects, I try to pick the assertions carefully. For example, if a feature is to create an account, I won't check that the account is actually in the DB (because the chances of that breaking are minuscule), but I will check that all messages that need to be triggered are sent (see the sketch after this list). Sometimes I do create multiple tests if a single test starts to become unclear, for example one test that checks the returned value and another test that verifies the side effects (e.g. messages being produced).
Having, at a minimum, good coverage of the critical code with unit tests and integration tests (here I mean test classes that interact with external services) builds up the confidence that the classes work as expected. So end-to-end tests don't need to cover the myriad of combinations.
And last, a very small number of end-to-end tests to ensure everything is glued together nicely.
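To illustrate the point above about asserting on the triggered messages rather than on the database, here is a hedged sketch; the class names (MessagePublisher, AccountService) are invented for the example, and the real domain code would of course live in production sources, with the fake standing in for the messaging adapter.

```java
import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CreateAccountFeatureTest {

    // Port (interface) the domain code depends on; hypothetical name.
    interface MessagePublisher {
        void publish(String topic, String payload);
    }

    // In-memory test double standing in for the real messaging adapter.
    static class FakeMessagePublisher implements MessagePublisher {
        final List<String> published = new ArrayList<>();
        @Override
        public void publish(String topic, String payload) {
            published.add(topic + ":" + payload);
        }
    }

    // Hypothetical domain service under test.
    static class AccountService {
        private final MessagePublisher publisher;
        AccountService(MessagePublisher publisher) { this.publisher = publisher; }
        void createAccount(String companyName, int users) {
            // ... persist profile, create users, initialise configuration ...
            publisher.publish("account-created", companyName + "/" + users + " users");
            publisher.publish("welcome-email-requested", companyName);
        }
    }

    @Test
    void creatingAnAccountTriggersTheExpectedMessages() {
        FakeMessagePublisher publisher = new FakeMessagePublisher();
        new AccountService(publisher).createAccount("Acme", 3);

        // Assert on the side effects that are likely to break, not on the DB write itself.
        assertEquals(
            List.of("account-created:Acme/3 users", "welcome-email-requested:Acme"),
            publisher.published);
    }
}
```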
Bottom line: create multiple tests with the same setup if it helps with understanding the code.
edit
About integration tests: it's just terminology. I call a test that exercises a class or a small group of classes interacting with an external service (database, queue, files, etc.) an integration test; a component test is something that verifies a single service or module; and an end-to-end test is something that tests all the services or modules working together.
What you mentioned about stored procs changes the approach. Do you have unit tests for them? Otherwise you could write some sort of integration tests that verify the stored procs work as expected.
About readability of the test: for me, the real test is to show it to someone from another team, or to a product owner, and ask them whether the test name, the setup, what is asserted, and the relationship between those things are clear. If they struggle, the test should be simplified.
What is the real difference between acceptance tests and functional tests?
What are the highlights or aims of each? Everywhere I read they are ambiguously similar.
In my world, we use the terms as follows:
functional testing: This is a verification activity; did we build a correctly working product? Does the software meet the business requirements?
For this type of testing we have test cases that cover all the possible scenarios we can think of, even if that scenario is unlikely to exist "in the real world". When doing this type of testing, we aim for maximum code coverage. We use any test environment we can grab at the time, it doesn't have to be "production" caliber, so long as it's usable.
acceptance testing: This is a validation activity; did we build the right thing? Is this what the customer really needs?
This is usually done in cooperation with the customer, or by an internal customer proxy (product owner). For this type of testing we use test cases that cover the typical scenarios under which we expect the software to be used. This test must be conducted in a "production-like" environment, on hardware that is the same as, or close to, what a customer will use. This is when we test our "ilities":
Reliability, Availability: Validated via a stress test.
Scalability: Validated via a load test.
Usability: Validated via an inspection and demonstration to the customer. Is the UI configured to their liking? Did we put the customer branding in all the right places? Do we have all the fields/screens they asked for?
Security (aka, Securability, just to fit in): Validated via demonstration. Sometimes a customer will hire an outside firm to do a security audit and/or intrusion testing.
Maintainability: Validated via demonstration of how we will deliver software updates/patches.
Configurability: Validated via demonstration of how the customer can modify the system to suit their needs.
This is by no means standard, and I don't think there is a "standard" definition, as the conflicting answers here demonstrate. The most important thing for your organization is that you define these terms precisely, and stick to them.
I like Patrick Cuff's answer. What I'd like to add is the distinction between a test level and a test type, which was an eye opener for me.
test levels
Test levels are easy to explain using the V-model; for example:
Each test level has its corresponding development level, and a typical place in time: it is executed at a certain phase of the development life cycle.
component/unit testing => verifying detailed design
component/unit integration testing => verifying global design
system testing => verifying system requirements
system integration testing => verifying system requirements
acceptance testing => validating user requirements
test types
A test type is a characteristic: it focuses on a specific test objective. Test types emphasize your quality aspects, also known as technical or non-functional aspects. Test types can be executed at any test level. As test types I like to use the quality characteristics mentioned in ISO/IEC 25010:2011.
functional testing
reliability testing
performance testing
operability testing
security testing
compatibility testing
maintainability testing
transferability testing
To make it complete: there's also something called regression testing. This is an extra classification next to test level and test type. A regression test is a test you want to repeat because it touches something critical in your product. It is, in fact, a subset of the tests you defined for each test level. If there's a small bug fix in your product, you don't always have the time to repeat all the tests. Regression testing is an answer to that.
The difference is between testing the problem and the solution. Software is a solution to a problem, both can be tested.
The functional test confirms that the software performs a function within the boundaries of how you've solved the problem. This is an integral part of developing software, comparable to the testing done on a mass-produced product before it leaves the factory. A functional test verifies that the product actually works as you (the developer) think it does.
Acceptance tests verify the product actually solves the problem it was made to solve. This can best be done by the user (customer), for instance performing his/her tasks that the software assists with. If the software passes this real world test, it's accepted to replace the previous solution. This acceptance test can sometimes only be done properly in production, especially if you have anonymous customers (e.g. a website). Thus a new feature will only be accepted after days or weeks of use.
Functional testing - test the product, verifying that it has the qualities you've designed or built (functions, speed, errors, consistency, etc.).
Acceptance testing - test the product in its context; this requires (simulation of) human interaction, and tests that it has the desired effect on the original problem(s).
The answer is a matter of opinion. I have worked on a lot of projects, as test manager, issue manager, and in various other roles, and the descriptions in the various books differ, so here is my variation:
functional-testing: take the business requirements and test all of them well and thoroughly from a functional viewpoint.
acceptance-testing: the "paying" customer does whatever testing they like so that they can accept the delivered product. How far this goes depends on the customer, but usually the tests are not as thorough as the functional testing, especially for an in-house project, because the stakeholders review and trust the test results from the earlier test phases.
As I said, this is my viewpoint and experience: functional testing is systematic, while acceptance testing is rather the business department trying the thing out.
Audience. Functional testing is to assure members of the team producing the software that it does what they expect. Acceptance testing is to assure the consumer that it meets their needs.
Scope. Functional testing only tests the functionality of one component at a time. Acceptance testing covers any aspect of the product that matters to the consumer enough to test before accepting the software (i.e., anything worth the time or money it will take to test it to determine its acceptability).
Software can pass functional testing, integration testing, and system testing; only to fail acceptance tests when the customer discovers that the features just don't meet their needs. This would usually imply that someone screwed up on the spec. Software could also fail some functional tests, but pass acceptance testing because the customer is willing to deal with some functional bugs as long as the software does the core things they need acceptably well (beta software will often be accepted by a subset of users before it is completely functional).
Functional Testing: application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.
Acceptance Testing: formal testing conducted to determine whether or not a system satisfies its acceptance criteria; it enables an end user to determine whether or not to accept the system.
In my view the main difference is who says if the tests succeed or fail.
A functional test tests that the system meets predefined requirements. It is carried out and checked by the people responsible for developing the system.
An acceptance test is signed off by the users. Ideally the users will say what they want to test, but in practice it is likely to be a subset of a functional test, as users don't invest enough time. Note that this view comes from the business users I deal with; other sets of users, e.g. in aviation and other safety-critical domains, might well not make this distinction.
Acceptance testing:
... is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.
Though this goes on to say:
It is also known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, or factory acceptance testing
with a "citation needed" mark.
Functional testing (which actually redirects to System Testing):
conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
So from this definition they are pretty much the same thing.
In my experience acceptance test are usually a subset of the functional tests and are used in the formal sign off process by the customer while functional/system tests will be those run by the developer/QA department.
Acceptance testing is just testing carried out by the client, and includes other kinds of testing:
Functional testing: "this button doesn't work"
Non-functional testing: "this page works but is too slow"
For functional testing vs non-functional testing (their subtypes) - see my answer to this SO question.
The relationship between the two:
An acceptance test usually includes functional testing, but it may include additional tests; for example, checking labeling/documentation requirements.
Functional testing places the product under test into a test environment that can produce a variety of stimuli (within the scope of the test), matching or even going beyond what the target environment typically produces, while the response of the device under test is examined.
For a physical product (not software) there are two major kinds of acceptance tests: design tests and manufacturing tests. Design tests typically use a large number of product samples that have passed the manufacturing test. Different consumers may test the design in different ways.
Acceptance tests are referred to as verification when the design is tested against the product specification, and as validation when the product is placed in the consumer's real environment.
They are the same thing.
Acceptance testing is performed on the completed system, in an environment as close as possible to the real production/deployment environment, before the system is deployed or delivered.
You can do acceptance testing in an automated manner, or manually.
There are a lot of testing methods out there, e.g. black-box, gray-box, unit, functional, regression, etc.
Obviously, a project cannot take on all testing methods, so I asked this question to get an idea of which test methods to use and why I should use them. You can answer in the following format:
Test Method - what you use it on
e.g.
Unit Testing - I use it for ...(blah, blah)
Regression Testing - I use it for ...(blah, blah)
I was asked to adopt TDD, and of course I had to research testing methods. But there is a whole plethora of them, and I don't know which to use (because they all sound useful).
1. Unit Testing is used by developers to ensure that the unit of code they wrote is correct. This is usually white-box testing, as well as some level of black-box testing.
2. Regression Testing is functional testing used by testers to ensure that new changes in the system have not broken any existing functionality.
3. Functional Testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Functional testing falls within the scope of black-box testing, and as such should require no knowledge of the inner design of the code or logic.
These Test-driven development and Feature-driven development wiki articles will be of great help to you.
For TDD you need to follow this process:
1. Document a feature (or use case) that you need to implement or enhance in your application and that does not currently exist.
2. Write a set of functional test cases that ensure the feature from step 1 works. You may need to write multiple test cases for the feature to cover all the different possible workflows.
3. Write code to implement the feature from step 1.
4. Test the code using the test cases you wrote earlier (in step 2). The actual testing can be manual, but I would recommend creating automated tests if possible.
5. If all test cases pass, you are good to go. If not, update the code (go back to step 3) to make the test cases pass.
TDD ensures that the functional test cases written before you coded actually pass, and it doesn't matter how the code is implemented.
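As a minimal, made-up illustration of that cycle in Java with JUnit (the PriceCalculator class and its discount rule are invented for the example): the tests are written first and fail, then the simplest implementation is added to make them pass.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    // Step 2: these tests exist before the feature and fail until step 3 is done.
    @Test
    void largeOrdersGetTenPercentDiscount() {
        assertEquals(180.0, new PriceCalculator().finalPrice(200.0), 0.001);
    }

    @Test
    void smallOrdersAreNotDiscounted() {
        assertEquals(50.0, new PriceCalculator().finalPrice(50.0), 0.001);
    }
}

// Step 3: the simplest implementation that makes both tests pass.
class PriceCalculator {
    double finalPrice(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.9 : orderTotal;
    }
}
```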
There is no "right" or "wrong" in testing. Testing is an art and what you should choose and how well it works out for you depends a lot from project to project and your experience.
But as a professional Test Expert my suggestion is that you have a healthy mix of automated and manual testing.
(Examples below are in PHP, but you can easily find the corresponding examples for whatever language/framework you are using.)
AUTOMATED TESTING
Unit Testing
Use PHPUnit to test your classes, functions and interaction between them.
http://phpunit.sourceforge.net/
Automated Functional Testing
If possible, you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications you might want to look at Selenium.
http://www.webinade.com/web-development/functional-testing-in-php-using-selenium-ide
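The link above shows a PHP/Selenium IDE setup; purely as an illustration, here is a minimal sketch of the same idea written against Selenium WebDriver's Java API (the URL and element ids are hypothetical, and a local chromedriver is assumed).

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginFunctionalTest {

    @Test
    void userCanLogIn() {
        WebDriver driver = new ChromeDriver(); // requires a local chromedriver
        try {
            // Hypothetical application under test and element ids.
            driver.get("http://localhost:8080/login");
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            // Verify the observable outcome of the workflow, not implementation details.
            assertTrue(driver.getPageSource().contains("Welcome, demo"));
        } finally {
            driver.quit();
        }
    }
}
```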
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to view as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and it's fun!
http://www.satisfice.com/articles/et-article.pdf
This answer is (almost) identical to one that I gave to another question. Check out that question since it had some other good answers that might help you.
How can we decide which testing method can be used?
I usually do the following things:
Page consistency, in the case of multi-page web sites.
Testing the database connections.
Testing the functionalities that can be affected by the change I just made.
I test functions with sample input to make sure they work fine (especially those that are algorithm-like).
In some cases I implement features very simply, hard-coding most of the settings, then implement the settings later, testing after implementing each setting.
Most of these apply to applications, too.
Well, before getting to the answer, I would like to clarify the testing concepts behind the various methods.
There are six main testing types which cover almost all testing methods.
Black Box Testing
White Box Testing
Grey Box Testing
Functional Testing
Integration Testing
Usability Testing
Almost all testing methods fall under these types. You can also use some testing methods across multiple types; for example, you can use smoke testing with a black-box or white-box approach, depending on the resources available for testing.
So, to test a web site thoroughly, you need to use at least the following testing methods, depending on the resources available. These are the minimum methods that should be used to test a web site, but there may be other important methods depending on the nature of the site.
Requirement Testing
Smoke Testing
System Testing
Integration Testing
Regression Testing
Security Testing
Performance & Load Testing
Deployment Testing
You should use at least all of the above (8) testing methods to test a web site, no matter which testing type you are focusing on. You can automate your tests in some areas and do others manually; it all depends on resource availability.
There is no hard and fast rule for following any particular testing type or method. As you know, "testing is an art", and art doesn't have rules or boundaries. It's totally up to you what you use to test, and how.
Hope this answers the question.
Selenium is very good for testing websites.
The answer depends on the Web framework used (if any). Django for example has built-in testing functions.
For PHP (or functional web testing), SimpleTest is pretty good and, well... simple. It supports unit testing (PHP only) and web testing. Tests can run in the IDE (Eclipse), or in the browser (meaning on your server).
The other answers posted so far focus on unit/functional/performance/etc. testing, and they are all reasonable.
However, one of the key questions you should ask is, "how effective is my testing?".
This is often answered with test coverage tools, which determine which parts of your application actually get exercised by some set of tests. The ideal test coverage tool lets you test your application by any method you can imagine (including all the standard answers above) and will then report what part and what percentage of your code was exercised. Most importantly, it will tell you what code you did not exercise. You can then inspect that code and decide whether more testing is warranted, or whether you don't care. If the untested code has to do with "disk full" error handling and you believe that 1TB disks are common, you might decide to ignore that. If the untested code is the input validation logic leading to SQL queries, you might decide that you must test that logic to ensure that no SQL injection attacks can occur.
What test coverage tools let you do is make a rational decision that you have tested adequately, using data about which parts of your code have been exercised. So regardless of how you test, best practice indicates you should also do test coverage analysis.
Test coverage tools can be obtained from a variety of sources. SD provides a family of test coverage tools that handle C, C++, Java, C#, PHP and COBOL, all of which are used to support web site testing in various ways.
I am writing an application that uses 3rd-party libraries to instantiate and perform operations on virtual machines.
At first I was writing integration tests for every piece of functionality in the application. But then I found that these tests were not really helping, since my environment had to be in a particular state, which made the tests more and more difficult to write. So I decided to write only unit and acceptance tests.
So, my question: is there a method, or a clue, for noticing when integration tests should not be used? (Or am I wrong, and they should be written in all cases?)
When you don't plan on actually hooking your application up to anything "real": no real containers, databases, resources, or actual services. That's what an integration test is supposed to verify: that everything works properly together.
Integration tests are good to test a full system that has well-defined inputs and outputs that are unlikely to change. If your expected input/outputs change often then maintaining the test may become a maintenance challenge, or, worse, you may choose against improving an interface because of the amount of work that may be required to upgrade the integration tests.
The easy and short rule is: Test in integration test what breaks due to integration and test the rest in unit tests in isolation.
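A hedged sketch of that split (the table, the JDBC URL, and the use of an in-memory H2 database are assumptions for the example): the integration test only covers what can break due to integration, i.e. that the SQL really round-trips through a database, while everything else stays in isolated unit tests.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class VmRepositoryIntegrationTest {

    // Assumed: a disposable test database reachable at this JDBC URL
    // (here an in-memory H2 instance started by the test itself).
    private static final String TEST_DB_URL = "jdbc:h2:mem:vmtest;DB_CLOSE_DELAY=-1";

    @Test
    void storesAndReadsBackAVirtualMachineRecord() throws Exception {
        try (Connection connection = DriverManager.getConnection(TEST_DB_URL)) {
            connection.createStatement()
                .execute("CREATE TABLE vms (id INT PRIMARY KEY, name VARCHAR(64))");

            try (PreparedStatement insert =
                     connection.prepareStatement("INSERT INTO vms (id, name) VALUES (?, ?)")) {
                insert.setInt(1, 1);
                insert.setString(2, "build-agent");
                insert.executeUpdate();
            }

            // Only the integration concern is verified here: the SQL really round-trips.
            try (PreparedStatement query =
                     connection.prepareStatement("SELECT name FROM vms WHERE id = ?")) {
                query.setInt(1, 1);
                try (ResultSet result = query.executeQuery()) {
                    result.next();
                    assertEquals("build-agent", result.getString("name"));
                }
            }
        }
    }
}
```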
You can even hate integration tests. Writing a unit test for a function that takes only one integer parameter is hard enough. All the possible combinations of state (internal and external: time, external systems) and input can make integration testing practically impossible (for a decent application).
I read the Wikipedia article on scenario testing, but I am sad to say it is very short. I am left wondering: are scenario tests a collection of sequential unit tests? Or, perhaps, like a single multi-step unit test? Do many frameworks support scenario tests, or are they covered by unit testing?
If they have nothing to do with automation, what are they?
I don't think there's any fixed relationship between scenarios and the number or distribution of tests.
I think the most common code-representation of a scenario is a specific set of business data required to support a specific story (scenario). This is often provided in the form of database data, fake stub data or a combination of both.
The idea is that this dataset has known and well-defined characteristics that will provide well defined results all across a given business process.
For a web application, I could have a single web test (or several, for variations) that clicks through the full scenario. In other cases the scenario is used at a lower level, possibly testing a part of the scenario in a functional test or a unit test. In that case I never group the tests by scenario, but instead choose the functional grouping I normally use for unit/functional tests. Quite often there's a method within "Subsystem1Test" called "testScenario1" or maybe "testScenarioInsufficientCredit"; I prefer to give my scenarios names.
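As a hedged, self-contained sketch of such a scenario-named test (Account, BillingService, and the outcome strings are invented for the example), a small, well-defined dataset drives one well-defined business outcome:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class BillingSubsystemTest {

    // Hypothetical domain code; in a real project this lives in production sources.
    record Account(String id, double credit) {}

    static class BillingService {
        /** Charges the account, or returns a rejection when credit is insufficient. */
        String charge(Account account, double amount) {
            return account.credit() >= amount ? "CHARGED" : "REJECTED_INSUFFICIENT_CREDIT";
        }
    }

    // Scenario dataset: an account deliberately set up with too little credit.
    private final Account lowCreditAccount = new Account("acc-001", 5.00);

    @Test
    void testScenarioInsufficientCredit() {
        // One well-defined dataset, one well-defined business outcome.
        String outcome = new BillingService().charge(lowCreditAccount, 20.00);
        assertEquals("REJECTED_INSUFFICIENT_CREDIT", outcome);
    }
}
```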
In addition to korsenvoid's response, in my experience scenario-based testing will often be automated, as it will be included in regression testing. Regression testing is regularly automated because doing it manually does not scale well with regular releases.
In commercial software, good examples of scenario tests are the tutorials included with the user documentation. These obviously must work in each release or be removed from the docs, and hence must be tested.
While you can carry out scenario testing using sequenced unit tests, my guess is that it is more common to use GUI based automation tools. For example, I use TestComplete in this role with a scripting framework to good effect. Scenario tests are typically carried out from a user/client perspective which can be difficult to accurately replicate at a unit level.
IMHO, scenario testing is a testing activity, as opposed to a development activity; hence it's about testing a product, not unit(s) of that product. The test scenarios are end-to-end scenarios, using the natural interfaces of the product. If the product has programmatic interfaces, then you could use a unit test framework, or FitNesse.