I'm part of a group of roughly 20 developers who maintain around 7 components (websites and microservices) for a single domain in our company (e.g. a delivery-tracking domain).
In order to ensure quality, we have domain-scoped end-to-end tests. Our E2E testing, however, has one problem: our E2E environment can have a component on a version that differs from production. The reason for this disparity is that the components (e.g. microservices) have their own release schedules.
How do you handle this problem? My question is very similar to this:
Do you enforce that E2E must be run on the correct microservice versions? If so, how?
Or do you accept the fact that CI and production will never have identical microservice versions and just rely on monitoring production?
Lastly, how important is E2E in your CI/CD pipeline?
Note that I'm only interested in E2E testing of our own microservices, since it's impossible for me to control when a third-party service gets released.
Thanks
Consider testing in general as a net that catches bugs, and e2e as another part of that net. One cannot expect to build an infinitely dense net that will catch all the bugs, and the marginal value decreases as we invest more effort in improving that net.
In those terms, what bugs are we trying to catch with the E2E part of the net, specifically by testing multiple microservices on different versions? We are probably focused on a class of bugs that emerges when two services fail to integrate properly after changes, or worse, some special edge case that was hidden until now but is revealed by new behavior.
In any case, since the effort involved in E2E tests is large, specifically around bringing up the environment properly, it's preferable to cover these bugs in other ways: API tests and integration tests between pairs of services.
So where are E2E tests actually useful? In my opinion, their strength is in finding not subtle but major failures that can affect the quality of the entire system. I tend to focus my E2E efforts on covering as much of the architecture as possible, such that each microservice, database, communication path, etc. is exercised. Usually this correlates well with feature groups, so there is also good coverage as far as the product is concerned.
To summarize: should I care that I merged code that wasn't tested exactly as it will be deployed to production? I don't think so. If there were a major failure, it would have been noticed with a different set of versions. If there were a subtle bug, it would have been caught by an integration or API test. If there's a really hard-to-catch bug, then I'll pay the price of seeing it in production, which is sometimes the case.
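To make the point concrete, here is a minimal sketch of the kind of API/integration test I mean, assuming JUnit 4 and using hypothetical TrackingService/DeliveryService names; the downstream service is replaced by a simple fake, so the integration point between two services is exercised without bringing up a full E2E environment.

```java
// A minimal API-level test sketch (hypothetical names): the downstream
// DeliveryService is faked so TrackingService's integration logic is tested cheaply.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

interface DeliveryService {                        // contract the downstream microservice exposes
    String statusFor(String parcelId);
}

class TrackingService {                            // service under test, depends on DeliveryService
    private final DeliveryService deliveries;
    TrackingService(DeliveryService deliveries) { this.deliveries = deliveries; }

    String trackingMessage(String parcelId) {
        return "Parcel " + parcelId + " is " + deliveries.statusFor(parcelId);
    }
}

public class TrackingApiTest {
    @Test
    public void reportsStatusFromDeliveryService() {
        DeliveryService fake = parcelId -> "OUT_FOR_DELIVERY";   // fake downstream service
        TrackingService tracking = new TrackingService(fake);

        assertEquals("Parcel 42 is OUT_FOR_DELIVERY", tracking.trackingMessage("42"));
    }
}
```

A handful of tests like this per pair of services tends to catch the subtle integration bugs far more cheaply than the E2E part of the net can.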
We have an Eclipse UI on the frontend and a non-Java backend.
We generally write unit tests separately for both the frontend and the backend.
We also write PDE tests, which run the Eclipse UI against a dummy backend.
My question is: do we need integration tests which test end to end?
One reason I can see these integration tests being useful is that when I upgrade my frontend or backend, I can run the end-to-end tests and find defects.
I know these kinds of questions depend on the particular scenario,
but I would like to know what the general best practice followed here is.
cheers,
Saurav
As you say, the best approach is dependent on the application. However, in general it is a good idea to have a suite of integration tests that can test your application end-to-end, to pick up any issues that may occur when you upgrade only one layer of the application without taking those changes into account in another layer. This sounds like it would definitely be worthwhile in your case, given that you have system components written in different languages, which naturally creates more chance of issues arising due to the added complexity around the component interfaces.
One thing to be aware of when writing end-to-end integration tests (which some would call system tests) is that they tend to be quite fragile compared to unit tests, which comes down to a combination of factors, including:
They require multiple components to be available for the tests, and for the communication between these components to be configured correctly.
They exercise more code than a unit test, and therefore there are more things that can go wrong that can cause them to fail.
They often involve asynchronous communication, which is more difficult to write tests for than synchronous communication.
They often require complex backend data setup before you can drive tests through the entire application.
Because of this fragility, I would advise writing as few tests as possible that go through the whole stack - the focus should be on covering as much functionality as possible in the fewest tests, with a bias towards your most important functional use-cases. A good strategy to get started would be:
Pick one key use-case (ideally one which touches as many components in the application as possible), and work on getting an end-to-end test for it; even just having this single test will bring a lot of value. Focus on making this test as realistic as possible (i.e. use a production-like deployment), as reliable as possible, and as automated as possible (ideally it should run as part of continuous integration). See the sketch after this list for the shape such a test might take.
Build out tests for other use-cases one test at a time, again focusing on your most important use-cases at first.
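As a rough illustration of that first single test, here is a sketch assuming JUnit 4 and Java's built-in HTTP client (Java 11+); the environment variable, base URL, and endpoint path are hypothetical placeholders for whatever your CI environment provides.

```java
// A minimal end-to-end smoke test sketch: hit one key endpoint of a deployed,
// production-like environment and check that the whole stack responds.
import static org.junit.Assert.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.Test;

public class EndToEndSmokeTest {
    // Base URL of the production-like deployment, e.g. injected by CI (hypothetical variable).
    private static final String BASE_URL =
            System.getenv().getOrDefault("E2E_BASE_URL", "http://localhost:8080");

    @Test
    public void keyUseCaseRespondsSuccessfully() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/orders/health"))   // hypothetical endpoint
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The UI/API layer, backend services and database all have to be up for this to pass.
        assertEquals(200, response.statusCode());
    }
}
```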
This approach will help to ensure that your end-to-end tests are of high quality, which is vital for their long-term health and usefulness. Too many times I have seen people try to introduce a comprehensive suite of such tests to an application, but ultimately fail because the tests are fragile & unreliable, people lose faith in them, don't run or maintain them, and eventually they forget they even had the tests in the first place.
Good luck and have fun!
I'm working on an app that integrates with a 3rd party web service. I currently have separate integration / regression tests that call the web service to do the following:
Modify Policy - Add Vehicle
Modify Policy - Remove Vehicle
Modify Policy - Add Multiple Vehicles
Modify Policy - Add Insured
...
Most of these tests were created as bugs were found & fixed. The 3rd party web service is slooow and I'm trying to speed the testing process up. Because each test calls the web service, combining them into one test that only calls the web service once would make things much faster.
Would combining these tests be bad practice because each test was written for a specific bug? My concern is that a mistake in refactoring could potentially allow a bug to be re-introduced later on.
Yes, combining them would be a bad practice. Think instead about how to mitigate the risk without combining the tests. One approach - probably your best bet - would be to mock out the web service, so that the tests are much faster without jeopardizing their ability to detect a regression. Another would be to split your slow regression tests into their own suite that is run less frequently (but still frequently enough!) than your usual set of tests. Finally, you could combine them - but I would recommend explicitly reintroducing all the original bugs into your code to verify that the combined test still detects them.
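As a sketch of the mocking option, assuming JUnit 4 and hypothetical PolicyService/PolicyUpdater names: put the slow third-party web service behind an interface and let each regression test run against a fast in-memory fake, so the tests stay separate and still run quickly.

```java
// Minimal sketch: the slow third-party service sits behind PolicyService, and
// the regression test exercises the application code against an in-memory fake.
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

interface PolicyService {                           // wraps the slow third-party web service
    void addVehicle(String policyId, String vin);
}

class RecordingPolicyService implements PolicyService {   // fast in-memory fake for tests
    final List<String> added = new ArrayList<>();
    public void addVehicle(String policyId, String vin) { added.add(policyId + ":" + vin); }
}

class PolicyUpdater {                               // the application code under test
    private final PolicyService service;
    PolicyUpdater(PolicyService service) { this.service = service; }

    void addVehicles(String policyId, List<String> vins) {
        for (String vin : vins) {
            service.addVehicle(policyId, vin);      // one call per vehicle
        }
    }
}

public class ModifyPolicyAddVehicleTest {
    @Test
    public void addsEveryVehicleToThePolicy() {
        RecordingPolicyService fake = new RecordingPolicyService();
        new PolicyUpdater(fake).addVehicles("P-100", List.of("VIN1", "VIN2"));

        assertEquals(List.of("P-100:VIN1", "P-100:VIN2"), fake.added);
    }
}
```

The real HTTP-backed implementation of the interface can then be covered by a much smaller, less frequently run suite, as suggested above.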
Specific, pointed, direct, unit tests are very valuable; it's nice to know exactly what has broken. Combining tests compromises that value.
I wouldn't recommend combining them, unless you keep the ability to run them separately (maybe keep them separate in your overnight build, and combined in your continuous build).
Try parallelizing them (on separate 'policies'), if your test framework supports it.
I would suggest including them in your nightly build, so that they run once a day while you are asleep and not watching the clock, and only removing them from the tests you run during development.
Of course that assumes they are not soooo sloooow that one night is not enough.
Just combining your tests into one big test is likely to make them useless or worse. That's not much better than just deleting them.
I've always worked alone, and my method of testing is usually to compile very often, check that the changes I made work as intended, and fix them if they don't. However, I'm starting to feel that this is not enough, and I'm curious about the standard kinds of tests there are.
Can someone please tell me about the basic tests, a simple example of each, and why it is used/what it tests?
Thanks.
Different people have slightly different ideas about what constitutes what kind of test, but here are a few ideas of what I happen to think each term means. Note that this is heavily biased towards server-side coding, as that's what I tend to do :)
Unit test
A unit test should only test one logical unit of code - typically one class for the whole test case, and a small number of methods within each test. Unit tests are (ideally) small and cheap to run. Interactions with dependencies are usually isolated with a test double such as a mock, fake or stub.
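For example, a minimal unit test might look like the following sketch, assuming JUnit 4 and Mockito and using hypothetical PriceCalculator/TaxRateRepository names; one logical unit is exercised and its dependency is replaced by a mock.

```java
// Minimal unit test sketch: one class under test, its collaborator mocked out.
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PriceCalculatorTest {

    interface TaxRateRepository {                   // dependency to be isolated
        double rateFor(String countryCode);
    }

    static class PriceCalculator {                  // the one logical unit under test
        private final TaxRateRepository taxRates;
        PriceCalculator(TaxRateRepository taxRates) { this.taxRates = taxRates; }

        double grossPrice(double netPrice, String countryCode) {
            return netPrice * (1 + taxRates.rateFor(countryCode));
        }
    }

    @Test
    public void addsTaxForTheGivenCountry() {
        TaxRateRepository taxRates = mock(TaxRateRepository.class);
        when(taxRates.rateFor("SE")).thenReturn(0.25);      // stub the collaborator

        PriceCalculator calculator = new PriceCalculator(taxRates);

        assertEquals(125.0, calculator.grossPrice(100.0, "SE"), 0.0001);
    }
}
```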
Integration test
An integration test will test how different components work together. External services (ones not part of the project scope) may still be faked out to give more control, but all the components within the project itself should be the real thing. An integration test may test the whole system or some subset.
System test
A system test is like an integration test but with real external services as well. If this is automated, typically the system is set up into a known state, and then the test client runs independently, making requests (or whatever) like a real client would, and observing the effects. The external services may be production ones, or ones set up in just a test environment.
Probing test
This is like a system test, but using the production services for everything. These run periodically to keep track of the health of your system.
Acceptance test
This is probably the least well-defined term - at least in my mind; it can vary significantly. It will typically be fairly high level, like a system test or an integration test. Acceptance tests may be specified by an external entity (a standard specification or a customer).
Black box or white box?
Tests can also be "black box" tests, which only ever touch the public API, or "white box" tests which take advantage of some extra knowledge to make testing easier. For example, in a white box test you may know that a particular internal method is used by all the public API methods, but is easier to test. You can test lots of corner cases by calling that method directly, and then do fewer tests with the public API. Of course, if you're designing the public API you should probably design it to be easily testable to start with - but it doesn't always work out that way. Often it's nice to be able to test one small aspect in isolation of the rest of the class.
On the other hand, black box testing is generally less brittle than white box testing: by definition, if you're only testing what the API guarantees in its contracts, then the implementation can change as much as it wants without the tests changing. White box tests, on the other hand, are sensitive to implementation changes: if the internal method changes subtly - or gains an extra parameter, for example - then you'll need to change the tests to reflect that.
It all boils down to balance, in the end - the higher the level of the test, the more likely it is to be black box. Unit tests, on the other hand, may well include an element of white box testing... at least in my experience. There are plenty of people who would refuse to use white box testing at all, only ever testing the public API. That feels more dogmatic than pragmatic to me, but I can see the benefits too.
Starting out
Now, as for where you should go next - unit testing is probably the best thing to start with. You may choose to write the tests before you've designed your class (test-driven development) or at roughly the same time, or even months afterwards (not ideal, but there's a lot of code which doesn't have tests but should). You'll find that some of your code is more amenable to testing than others... the two crucial concepts which make testing feasible (IMO) are dependency injection (coding to interfaces and providing dependencies to your class rather than letting them instantiate those dependencies themselves) and test doubles (e.g. mocking frameworks which let you test interaction, or fake implementations which do everything a simple way in memory).
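A minimal sketch of the dependency injection idea, with hypothetical names: the first class creates its own dependency and is awkward to test, while the second has the dependency injected, so a test can hand it a simple in-memory fake.

```java
// Sketch contrasting a hard-to-test class with a dependency-injected one.
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class GreeterTest {

    interface MessageGateway {
        void send(String recipient, String text);
    }

    // Production implementation; would do real I/O (body omitted here).
    static class SmtpGateway implements MessageGateway {
        public void send(String recipient, String text) { /* talks to a real mail server */ }
    }

    // Hard to test: the dependency is instantiated internally, so every test hits SMTP.
    static class GreeterHardToTest {
        private final MessageGateway gateway = new SmtpGateway();
        void greet(String user) { gateway.send(user, "Welcome, " + user + "!"); }
    }

    // Testable: the dependency is injected, so tests can supply a test double.
    static class Greeter {
        private final MessageGateway gateway;
        Greeter(MessageGateway gateway) { this.gateway = gateway; }
        void greet(String user) { gateway.send(user, "Welcome, " + user + "!"); }
    }

    // Simple fake that records messages in memory instead of sending them.
    static class FakeGateway implements MessageGateway {
        final Map<String, String> sent = new HashMap<>();
        public void send(String recipient, String text) { sent.put(recipient, text); }
    }

    @Test
    public void greetsTheUserByName() {
        FakeGateway gateway = new FakeGateway();
        new Greeter(gateway).greet("saurav");

        assertEquals("Welcome, saurav!", gateway.sent.get("saurav"));
    }
}
```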
I would suggest reading at least one book about this, since the domain is quite large and books tend to synthesize such concepts better.
E.g. a very good basis might be: Software Testing: Testing Across the Entire Software Development Life Cycle (2007).
I think such a book will explain everything better than a few out-of-context examples we could post here.
Hi, I would like to add on to Jon Skeet's answer.
Building on white-box testing (or structural testing) and black-box testing (or functional testing), the following are further testing techniques under each respective category:
STRUCTURAL TESTING Techniques
Stress Testing
This is used to test the system with bulk volumes of data, more than it normally takes. If a system can stand these volumes, it can surely handle normal values well.
E.g.
For example, you could test overflow conditions: trying to withdraw more than your available bank balance shouldn't work, while withdrawing up to the maximum threshold should.
Used When
This is mainly useful when you're unsure about the volumes your system can handle.
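As a sketch of the bank-balance example above (hypothetical Account class, JUnit 4.13+ assumed): the overflow condition is rejected and withdrawing exactly the maximum threshold still works.

```java
// Boundary-condition sketch for the withdrawal example (hypothetical Account class).
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThrows;

import org.junit.Test;

public class WithdrawalLimitTest {

    static class Account {
        private double balance;
        Account(double openingBalance) { this.balance = openingBalance; }

        void withdraw(double amount) {
            if (amount > balance) {
                throw new IllegalArgumentException("Insufficient funds");
            }
            balance -= amount;
        }

        double balance() { return balance; }
    }

    @Test
    public void withdrawingMoreThanBalanceIsRejected() {
        Account account = new Account(100.0);
        assertThrows(IllegalArgumentException.class, () -> account.withdraw(100.01));
    }

    @Test
    public void withdrawingUpToTheBalanceSucceeds() {
        Account account = new Account(100.0);
        account.withdraw(100.0);                    // exactly the maximum threshold
        assertEquals(0.0, account.balance(), 0.0001);
    }
}
```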
Execution Testing
Done in order to check how efficiently a system performs.
E.g.
To calculate turnaround time for transactions.
Used when:
Early in the development process, to see whether the performance criteria are met.
Recovery Testing
To see if a system can recover to its original state after a failure.
E.g.
A very common everyday example is System Restore in the Windows OS, which keeps restore points used for recovery, as one would well know.
Used when:
When an application that is critical to the user at that point in time has stopped working and needs to keep working, so the user performs a recovery.
Other types of testing which you could find use of include:-
Operations Testing
Compliance Testing
Security Testing
FUNCTIONAL TESTING Techniques include:
Requirements Testing
Regression Testing
Error-Handling Testing
Manual-Support Testing
Intersystem testing
Control Testing
Parallel Testing
There is a very good book titled “Effective Methods for Software Testing” by William Perry of the Quality Assurance Institute (QAI), which I would suggest is a must-read if you want to go in depth on software testing.
More on the above mentioned testing types would surely be available in this book.
There are also two other very broad categories of testing, namely:
Manual Testing: This is done for user interfaces.
Automated Testing: Testing which basically involves white-box testing, or testing done through software testing tools like LoadRunner, QTP, etc.
Lastly I would like to mention a particular type of testing called
Exhaustive Testing
Here you try to test every possible condition, hence the name. As one would note, this is pretty much infeasible, since the number of test conditions could be effectively infinite.
First, there are various tests one can perform; the question is how to organize them. Testing is a vast and enjoyable process.
Start testing with:
1. Smoke testing. Once it passes, go ahead with functionality testing. This is the backbone of testing: if the functionality works well, 80% of the testing effort has paid off.
2. Now move on to user interface testing, as at times the user interface attracts the client more than the functionality does; it is the look and feel that the client is drawn to.
3. Now it's time to look at cosmetic bugs. Generally these bugs are ignored because of time constraints, but they can play a major role depending on the page where they are found: a spelling mistake becomes major when it appears on the splash screen, your landing page, or in the app name itself. Hence these cannot be overlooked either.
4. Do conduct compatibility testing, i.e. testing on various browsers and browser versions, and perhaps on different devices and operating systems for responsive applications.
Happy testing :)
I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?
The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly, if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month.
And consider the risk involved: if a mission-critical application that takes a few weeks, and multiple departments, to test is deployed, is it realistic to expect to constantly regression-test that application?
One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time.
These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but that doesn't exist at the moment.
Any advice?
The idea behind a framework is that it's supposed to be the "slow moving code". You shouldn't be changing the framework as frequently as the applications it supports. Try getting the framework on a slower development cycle: perhaps a release no more often than every three or six months.
My guess is that you're still working out some of the architectural decisions in this framework. If you think the framework changes really need to be that dynamic, find out what parts of the framework are being changed so often, and try to refactor those out to the applications that need them.
Agile doesn't have to mean unlimited changes to everything. Your architect could place boundaries on what constitutes the framework, and keep people from tweaking it so readily for what are likely application shortcuts. It may take a few iterations to get it settled down, but after that it should be more stable.
I wouldn't call it an Agile approach unless you have (unit) test coverage. One of the key tenets of Agile is that you have robust unit tests that provide a safety net for frequent refactoring and new feature development. There is a lot of risk in your scenario. Deploying twenty to thirty applications a month when 1) most of them don't add any new business value to their users; and 2) there are no tests in place would not qualify as a good idea in my book. And I'm a strong believer in Agile. But you can't pick and choose only the parts of it that are convenient.
If the business application has not changed, I wouldn't release it just to compile in a new framework. Imagine every .NET application needing to be re-released every time the framework changed. Reading into your question, I wonder if the common database is driving the need for this. If your framework is isolating the schema and you're finding you need to rebuild apps whenever the schema changes, then you need to tackle that problem first. Check out Refactoring Databases, by Scott Ambler for some tips.
As another aside, there's a big difference between integration tests and unit tests. Your regression tests are integration tests, and it's very difficult to automate at that level. I think the breakthroughs happening in testing are all about writing highly testable code that makes unit testing more and more of the code base possible.
Here are some tips I can think of:
1. Break the framework into independent parts, so that changing one part requires running only a small portion of the test cases.
2. Employ a test case prioritization technique; that is, rerun only a small portion of each application's test pool, selected by some strategy. The "additional" branch-coverage strategy and adaptive random testing (ART) usually perform better than others, though they require branch coverage information for each test case (see the sketch after this list).
3. Update the framework less frequently. If an application doesn't need to change, it's OK not to change it, so I'd say it's fine for those applications to keep using the old version of the framework. You can update the framework for them, say, every 3 months.
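As a rough sketch of the "additional" branch-coverage prioritization mentioned in point 2 (my reading of the term): greedily pick, at each step, the test case that covers the most branches not yet covered by the tests already chosen. Names and data here are purely illustrative.

```java
// Greedy "additional coverage" test-case prioritization sketch: each step
// selects the test that adds the most previously uncovered branches.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class AdditionalCoveragePrioritizer {

    /** Orders test names so that each next test adds the most new branch coverage. */
    static List<String> prioritize(Map<String, Set<String>> branchesCoveredByTest) {
        List<String> ordered = new ArrayList<>();
        Set<String> covered = new HashSet<>();
        Set<String> remaining = new HashSet<>(branchesCoveredByTest.keySet());

        while (!remaining.isEmpty()) {
            String best = null;
            int bestGain = -1;
            for (String test : remaining) {
                Set<String> gain = new HashSet<>(branchesCoveredByTest.get(test));
                gain.removeAll(covered);                    // branches this test would add
                if (gain.size() > bestGain) {
                    bestGain = gain.size();
                    best = test;
                }
            }
            ordered.add(best);
            covered.addAll(branchesCoveredByTest.get(best));
            remaining.remove(best);
        }
        return ordered;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> coverage = Map.of(
                "testA", Set.of("b1", "b2"),
                "testB", Set.of("b2", "b3", "b4"),
                "testC", Set.of("b1"));
        // Run only the first N tests of this ordering when time is short.
        System.out.println(prioritize(coverage));           // e.g. [testB, testA, testC]
    }
}
```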
Regression testing is a way of life. You will need to regression test every application before it is released. However, since time and money are not usually infinite, you should focus your testing on the areas with the most changes. A quick and dirty way to identify these areas is to count the lines of code changed in a given business area; say "accounting" or "user management". Those should get the most testing first along with any areas that you have identified as “mission critical”.
Now I know that lines of code changed is not necessarily the best way to measure change. If you have well defined change requests, it is actually better to evaluate these hot spots by looking at the number and complexity of the change requests. But not everyone has that luxury.
When you are talking about making a change to the framework, you probably don't need to test all the code that uses it. If you're talking about a change to something like the DAL, that would basically amount to everything anyway. You just need to test a large enough sample of the code to be reasonably comfortable that the change is solid. Again, start with the "mission critical" areas and the area most heavily affected.
I find it helpful to divide the project into 3 distinct code streams; Development, QA, and Production. Development is open to all changes, QA is feature locked, and Production is code locked (well, as locked as it gets anyway). If you are releasing to production on a monthly cycle, you probably want to branch a QA build from the Development code at least 1 month before the release. Then you spend that month acceptance testing the new changes and regression testing everything else that you can. You'll probably have to complete testing the changes about a week before the release so that the app can be staged and you can dry run the installation a few times. You won't get to regression test everything, so have a strategy ready for releasing patches to Production. Don't forget to merge those patches back into the QA and Development code streams too.
Automating the regression tests would, in theory, be a really great thing. In practice, you end up spending more time updating the testing code than you would spend running the test scripts manually. Besides, you can hire two or three testing monkeys for the price of one really good test script developer. Sad but true.
Each release it seems that our customers find a few old issues with our software. It makes it look like every release has multiple bugs, when in reality our new code is generally solid.
We have tried to implement some additional testing where we have testers do several hours of monthly regression testing on a single app each month in an effort to stay ahead of small issues. We refer to this process as our Software Hardening process, but it does not seem like we are catching enough of the bugs and it feels like a very backburner process since there is always new code to write.
Is there a trick to this kind of testing? Do I need to target one specific feature at a time?
When you develop your testing procedures, you may want to implement these kinds of tests:
unit testing (testing individual components of your project to verify their functionality); these tests are important because they allow you to pinpoint where in the software an error may come from. Basically, each of these tests exercises a single piece of functionality and uses mock objects to simulate the behavior and return values of other objects/entities.
regression testing, which you mentioned
characterization testing; one example could be automatically running the program on generated input (simulating user input), storing the results, and comparing the output of every version against these stored results (see the sketch below).
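A rough sketch of such a characterization (golden master) test, assuming JUnit 4 and Java 11+ file APIs; the report generator and file path are hypothetical stand-ins for your real program and storage location.

```java
// Characterization-test sketch: record the program's output once as a golden
// file, then compare every later version's output against it.
import static org.junit.Assert.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.Test;

public class ReportCharacterizationTest {

    // Stand-in for the real program under test.
    static String generateReport(int seed) {
        return "report for seed " + seed + "\n";
    }

    @Test
    public void outputMatchesRecordedGoldenMaster() throws Exception {
        StringBuilder actual = new StringBuilder();
        for (int seed = 0; seed < 100; seed++) {             // generated inputs
            actual.append(generateReport(seed));
        }

        Path golden = Path.of("src/test/resources/report.golden.txt");   // hypothetical path
        if (!Files.exists(golden)) {
            Files.createDirectories(golden.getParent());
            Files.writeString(golden, actual.toString());    // first run records the baseline
        }

        assertEquals(Files.readString(golden), actual.toString());
    }
}
```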
At the beginning this will be heavy to put in place, but as more releases ship and more bug fixes are added to the automated non-regression tests, you should start to save time.
It is very important that you do not fall into the trap of designing huge numbers of dumb tests. Testing should make your life easier; if you spend too much time understanding where the tests have broken, you should redesign the tests so that they give you better messages and a better understanding of the problem, letting you locate the issue quickly.
Depending on your environment, these tests can be linked to the development process.
In my environment, we use SVN for versioning; a bot runs the tests against every revision and reports the failing tests and messages, together with the name of the revision that broke them and the contributor's login.
EDIT:
In my environment, we use a combination of C++ and C# to deliver analytics used in finance. The code was originally C++ and is quite old; we are trying to migrate the interfaces toward C# while keeping the core of the analytics in C++ (mainly because of speed requirements).
Most of the C++ tests are hand-written unit tests and regression tests.
On the C# side we are using NUnit for unit testing. We have a couple of general tests.
We have a zero-warnings policy: we explicitly forbid people from committing code which generates warnings unless they can justify why it is useful to bypass the warning for that part of the code. We also have conventions about exception safety, the use of design patterns, and many other aspects.
Explicitly setting conventions and best practices is another way to improve the quality of your code.
Is there a trick to this kind of testing?
You said, "we have testers do several hours of monthly regression testing on a single app each month in an effort to stay ahead of small issues."
I guess that by "regression testing" you mean "manually exercising old features".
You ought to decide whether you're looking for old bugs which have never been found before (which means, running tests which you've never run before); or, whether you're repeating previously-run tests to verify that previously-tested functionality is unchanged. These are two opposite things.
"Regression testing" implies to me that you're doing the latter.
If the problem is that "customers find a few old issues with our software", then either your customers are running tests which you've never run before (in which case, to find these problems you need to run new tests on old software), or they're finding bugs which you have previously tested for and found, but which you apparently never fixed after you found them.
Do I need to target one specific feature at a time?
What are you trying to do, exactly:
Find bugs before customers find them?
Convince customers that there's little wrong with the new development?
Spend as little time as possible on testing?
Very general advice is that bugs live in families: so when you find a bug, look for its parents and siblings and cousins, for example:
You might have this exact same bug in other modules
This module might be buggier than other modules (written by someone on an off day, perhaps), so look for every other kind of bug in this module
Perhaps this is one of a class of problems (performance problems, or low-memory problems) which suggests a whole area (or whole type of requirement) which needs better test coverage
Other advice is that it's to do with managing customer expectations: you said, "It makes it look like every release has multiple bugs, when in reality our new code is generally solid" as if the real problem is the mistaken perception that the bug is newly-written.
it feels like a very backburner process since there is always new code to write
Software development doesn't happen in the background, on a back burner: either someone is working on it, or they're not. Management must decide whether to assign anyone to this task (i.e. look for existing previously-unfound bugs, or fix previously-found-but-not-yet-reported bugs), or whether they prefer to concentrate on new development and let the customers do the bug-detecting.
Edit: It's worth mentioning that testing isn't the only way to find bugs. There's also:
Informal design reviews (35%)
Formal design inspections (55%)
Informal code reviews (25%)
Formal code inspections (60%)
Personal desk checking of code (40%)
Unit test (30%)
Component test (30%)
Integration test (35%)
Regression test (25%)
System test (40%)
Low volume beta test (<10 sites) (35%)
High-volume beta test (>1000 sites) (70%)
The percentage next to each is a measure of the defect-removal rate for that technique (taken from page 243 of McConnell's Software Estimation book). The two most effective techniques appear to be formal code inspection and high-volume beta testing.
So it might be a good idea to introduce formal code reviews: that might be better at detecting defects than black-box testing is.
As soon as your coding ends, you should first go for unit testing. There you will find some bugs which should be fixed, and then you should perform another round of unit testing to see whether new bugs have appeared. After you finish unit testing, you should go for functional testing.
You mentioned that your testers perform regression testing on a monthly basis and that old bugs are still coming out. So it would be better to sit with the testers and review the test cases, as I feel they need to be updated regularly. During the review, also note which modules or features the bugs are coming from; stress those areas, add more test cases for them, and add those cases to your regression suite so that they run with every new build.
You can try one more thing: if your project is a long-term one, talk with the testers about automating the regression test cases. That will let you run the tests during off hours, such as overnight, and get the results the next day. The regression test cases should also be kept up to date; a major problem arises when they are not updated regularly, because by running only old regression cases plus new progression cases, you end up with modules that are never tested.
There is a lot of talk here about unit testing and I couldn't agree more. I hope that Josh understands that unit testing is a mechanized process. I disagree with PJ in that unit tests should be written before coding the app and not after. This is called TDD or Test Driven Development.
Some people write unit tests that exercise the middle tier code but neglect testing the GUI code. That is imprudent. You should write unit tests for all tiers in your application.
Since unit tests are also code, there is the question of QA for your test suite. Is the code coverage good? Are there false positives/negatives errors in the unit tests? Are you testing for the right things? How do you assure the quality of your quality assurance process? Basically, the answer to that comes down to peer review and cultural values. Everyone on the team has to be committed to good testing hygiene.
The earlier a bug is introduced into your system, the longer it stays in the system, the harder and more costly it is to remove it. That is why you should look into what is known as continuous integration. When set up correctly, continuous integration means that the project gets compiled and run with the full suite of unit tests shortly after you check in your changes for the day.
If the build or unit tests fail, then the offending coder and the build master gets a notification. They work with the team lead to determine what the most appropriate course correction should be. Sometimes it is just as simple as fix the problem and check the fix in. A build master and team lead needs to get involved to identify any overarching patterns that require additional intervention. For example, a family crisis can cause a developer's coding quality to bottom out. Without CI and some managerial oversight, it might take six months of bugs before you realize what is going on and take corrective action.
You didn't mention what your development environment is. If yours were a J2EE shop, then I would suggest that you look into the following.
CruiseControl for continuous integration
Subversion for the source code versioning control because it integrates well with CruiseControl
Spring because DI makes it easier to mechanize the unit testing for continuous integration purposes
JUnit for unit testing the middle tier
HttpUnit for unit testing the GUI
Apache JMeter for stress testing
Going back and implementing a testing strategy for (all) existing stuff is a pain. It's long, it's difficult, and no one will want to do it. However, I strongly recommend that as each new bug comes in, a test be developed around that bug. If you don't get a bug report on something, then either (a) it works or (b) the users don't care that it doesn't work. Either way, a test would be a waste of your time.
As soon as it's identified, write a test that goes red. Right now. Then fix the bug. Confirm that it is fixed. Confirm that the test is now green. Repeat as new bugs come in.
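A minimal sketch of that step, with hypothetical names and JUnit 4 assumed: the test is named after the bug report, reproduces the reported behavior so it goes red before the fix, and stays in the suite as a regression guard afterwards.

```java
// Regression-test sketch for a reported bug (hypothetical bug number and code).
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class Bug1234OrderTotalTest {

    // Hypothetical code under test; before the fix it ignored the discount.
    static double orderTotal(double subtotal, double discount) {
        return subtotal - discount;
    }

    @Test
    public void discountIsAppliedToTheTotal_bug1234() {
        // Reproduces the customer's exact report; red before the fix, green after.
        assertEquals(90.0, orderTotal(100.0, 10.0), 0.0001);
    }
}
```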
Sorry to say it, but maybe you're just not testing enough, or testing too late, or both.