How to make sure contracts in DbC are tested before rollout?

How do you make sure the contracts you defined for your software components using Design by Contract (DbC) are being tested at some point?
Shall I write unit tests for every single contract I define?
One benefit I see in DbC vs. isolated testing of single units is that I'm able to make sure the contract works between real collaborators. But how can I make sure the contracts are being tested before I roll out the software?
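(For reference, here is a minimal sketch of the kind of contract I mean, written in Java with plain assert statements; the Account class and its invariant are made up purely for illustration.)

    // A minimal DbC-style contract using plain Java assertions, which are only
    // checked when the JVM is started with -ea (enable assertions).
    public class Account {
        private long balanceInCents;

        public Account(long initialBalanceInCents) {
            assert initialBalanceInCents >= 0 : "precondition: initial balance must be non-negative";
            this.balanceInCents = initialBalanceInCents;
            assert invariant();
        }

        public void withdraw(long amountInCents) {
            assert amountInCents > 0 : "precondition: amount must be positive";
            assert amountInCents <= balanceInCents : "precondition: cannot overdraw";
            long before = balanceInCents;

            balanceInCents -= amountInCents;

            assert balanceInCents == before - amountInCents : "postcondition: balance reduced by amount";
            assert invariant();
        }

        // Class invariant: the balance never goes negative.
        private boolean invariant() {
            return balanceInCents >= 0;
        }
    }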

One way is to write a program that simulates users of your application, i.e. a bot. Start up your application with contracts enabled and have the bots exercise the app.
You can implement randomized behavior in your bots to have them exercise a larger set of use cases and edge cases.
Personally, I often extend the bots I write for performance testing so they also cover the sort of contract checking you're after.
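As a rough sketch of the bot idea, assuming a hypothetical AppClient interface standing in for however your application is driven (HTTP calls, UI automation, a driver API):

    import java.util.Random;

    // A randomized "bot" that drives an application started with its contracts
    // (assertions) enabled. AppClient is a hypothetical stand-in for your app's API.
    public class ContractExercisingBot {

        public interface AppClient {
            void createOrder(int quantity);
            void cancelRandomOrder();
            void queryOrders();
        }

        private final Random random = new Random();
        private final AppClient client;

        public ContractExercisingBot(AppClient client) {
            this.client = client;
        }

        // Perform a number of random actions; a contract violation surfaces as an
        // assertion error (or contract exception) somewhere during the run.
        public void run(int steps) {
            for (int i = 0; i < steps; i++) {
                int action = random.nextInt(3);
                if (action == 0) {
                    client.createOrder(random.nextInt(10) + 1);
                } else if (action == 1) {
                    client.cancelRandomOrder();
                } else {
                    client.queryOrders();
                }
            }
        }
    }

Run this against a build started with contracts enabled (e.g. java -ea for assertion-based contracts); any violation shows up as an error during the run.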

Is Contract testing necessary when both consumer and provider are developed by the same company in different scrum teams?

Yes, definitely. Contract testing is particularly useful when you rely on an 'external' service, where by external I mean any service that is not under your direct control, including the case you mentioned. Here is an interesting article from Martin Fowler.
Short answer: no, contract testing isn't strictly necessary in any situation, just as unit testing isn't.
Long answer: not having tests greatly reduces your confidence as a developer to deploy without breaking anything. Unit testing is good for testing an individual function, while contract testing is good at figuring out whether your changes will affect any consumers of the data you provide. The consumers of your data could be anyone: someone across the room from you, a client external to the company, or even yourself. The whole point is to try to segment and simplify the development process so that problems are caught earlier on. It also has the added benefit that you don't need to run the data producer locally just to have the consumer working while developing, which is definitely a great bonus when the consumer doesn't (or can't) have access to the provider code, like an external client.
These tools are meant to make your life as a developer simpler and easier to manage. Pact strives to accomplish this within your workflow, preventing issues from reaching production and giving the developer a quicker feedback loop on potential issues.
The team that wrote Pact in the first place was responsible for both ends of the integration, and they still found contract testing valuable. Just because you're developing both sides now doesn't mean that you will continue to be responsible for both sides in the future. Contract tests will ensure that changes made by future developers will not break anything.
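To make the idea concrete, here is a rough sketch of what a consumer-driven contract check boils down to. Note this is NOT Pact's actual API; the fetchUserFromProvider helper and the field list are made up for illustration.

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.junit.jupiter.api.Test;

    // The consumer records which fields it actually reads; the provider's build
    // verifies that a real response still contains them.
    class UserContractTest {

        private static final String[] FIELDS_THE_CONSUMER_READS = {"id", "name", "email"};

        @Test
        void providerStillSatisfiesConsumerExpectations() throws Exception {
            String json = fetchUserFromProvider();          // hypothetical helper
            JsonNode user = new ObjectMapper().readTree(json);

            for (String field : FIELDS_THE_CONSUMER_READS) {
                assertTrue(user.has(field), "provider no longer returns '" + field + "'");
            }
        }

        private String fetchUserFromProvider() {
            // In a real Pact setup this expectation would come from the consumer's
            // pact file; here it is just a placeholder returning a canned response.
            return "{\"id\": 1, \"name\": \"Ada\", \"email\": \"ada@example.com\"}";
        }
    }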

Need for integration testing

We have an Eclipse UI on the frontend and a non-Java backend.
We generally write unit tests separately for both the frontend and the backend.
We also write PDE tests, which run the Eclipse UI against a dummy backend.
My question is: do we need integration tests which test end to end?
One reason I can see these integration tests being useful is that when I upgrade my frontend or backend, I can run the end-to-end tests and find defects.
I know these kinds of questions depend on the particular scenario.
But I would like to know the general best practice followed here.
cheers,
Saurav
As you say, the best approach is dependent on the application. However, in general it is a good idea to have a suite of integration tests that can test your application end-to-end, to pick up any issues that may occur when you upgrade only one layer of the application without taking those changes into account in another layer. This sounds like it would definitely be worthwhile in your case, given that you have system components written in different languages, which naturally creates more chance of issues arising due to the added complexity around the component interfaces.
One thing to be aware of when writing end-to-end integration tests (which some would call system tests) is that they tend to be quite fragile when compared to unit tests, due to a combination of factors, including:
They require multiple components to be available for the tests, and for the communication between these components to be configured correctly.
They exercise more code than a unit test, and therefore there are more things that can go wrong that can cause them to fail.
They often involve asynchronous communication, which is more difficult to write tests for than synchronous communication.
They often require complex backend data setup before you can drive tests through the entire application.
Because of this fragility, I would advise writing as few tests as possible that go through the whole stack: focus on covering as much functionality as possible in the fewest tests, with a bias towards your most important functional use cases. A good strategy to get started would be:
Pick one key use case (ideally one which touches as many components in the application as possible) and work on getting an end-to-end test for it. Focus on making this test as realistic as possible (i.e. use a production-like deployment), as reliable as possible, and as automated as possible (ideally it should run as part of continuous integration). Even just having this single test brings a lot of value.
Build out tests for other use cases one at a time, again focusing on your most important use cases first.
This approach will help to ensure that your end-to-end tests are of high quality, which is vital for their long-term health and usefulness. Too many times I have seen people try to introduce a comprehensive suite of such tests to an application, only to ultimately fail because the tests are fragile and unreliable; people lose faith in them, stop running or maintaining them, and eventually forget they even had the tests in the first place.
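To make the first step concrete, here is a minimal sketch of such a test against an HTTP API; the test URL, the /orders endpoint and the payload are all hypothetical, and in your case the driving side might be the Eclipse UI via a UI-automation tool rather than raw HTTP.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;

    // An end-to-end test for one key use case, assuming the full stack is deployed
    // at a known URL; the test drives the real frontend/backend, not a dummy.
    class CreateOrderEndToEndTest {

        private static final String BASE_URL = "http://test-env.example.com";  // assumed test deployment

        @Test
        void creatingAnOrderSucceedsAgainstTheFullStack() throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            HttpRequest createOrder = HttpRequest.newBuilder()
                    .uri(URI.create(BASE_URL + "/orders"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"item\": \"widget\", \"quantity\": 2}"))
                    .build();

            HttpResponse<String> response = client.send(createOrder, HttpResponse.BodyHandlers.ofString());

            // A real test would also check the order shows up in the UI or a follow-up query.
            assertEquals(201, response.statusCode());
        }
    }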
Good luck and have fun!

What kinds of tests are there?

I've always worked alone, and my method of testing is usually to compile very often, make sure the changes I made work well, and fix them if they don't. However, I'm starting to feel that that is not enough, and I'm curious about the standard kinds of tests there are.
Can someone please tell me about the basic tests, a simple example of each, and why it is used/what it tests?
Thanks.
Different people have slightly different ideas about what constitutes what kind of test, but here are a few ideas of what I happen to think each term means. Note that this is heavily biased towards server-side coding, as that's what I tend to do :)
Unit test
A unit test should only test one logical unit of code - typically one class for the whole test case, and a small number of methods within each test. Unit tests are (ideally) small and cheap to run. Interactions with dependencies are usually isolated with a test double such as a mock, fake or stub.
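For example, a unit test with a hand-rolled test double might look like this (PriceCalculator and TaxRateProvider are invented names, purely for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // One class under test, one dependency replaced by a stub.
    class PriceCalculatorTest {

        interface TaxRateProvider {
            double rateFor(String country);
        }

        static class PriceCalculator {
            private final TaxRateProvider taxRates;

            PriceCalculator(TaxRateProvider taxRates) {
                this.taxRates = taxRates;
            }

            double grossPrice(double netPrice, String country) {
                return netPrice * (1 + taxRates.rateFor(country));
            }
        }

        @Test
        void addsTaxToNetPrice() {
            // Stub: fixed 20% rate, no real tax service involved.
            PriceCalculator calculator = new PriceCalculator(country -> 0.20);

            assertEquals(120.0, calculator.grossPrice(100.0, "GB"), 0.0001);
        }
    }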
Integration test
An integration test will test how different components work together. External services (ones not part of the project scope) may still be faked out to give more control, but all the components within the project itself should be the real thing. An integration test may test the whole system or some subset.
System test
A system test is like an integration test but with real external services as well. If this is automated, typically the system is set up into a known state, and then the test client runs independently, making requests (or whatever) like a real client would, and observing the effects. The external services may be production ones, or ones set up in just a test environment.
Probing test
This is like a system test, but using the production services for everything. These run periodically to keep track of the health of your system.
Acceptance test
This is probably the least well-defined term - at least in my mind; it can vary significantly. It will typically be fairly high level, like a system test or an integration test. Acceptance tests may be specified by an external entity (a standard specification or a customer).
Black box or white box?
Tests can also be "black box" tests, which only ever touch the public API, or "white box" tests which take advantage of some extra knowledge to make testing easier. For example, in a white box test you may know that a particular internal method is used by all the public API methods, but is easier to test. You can test lots of corner cases by calling that method directly, and then do fewer tests with the public API. Of course, if you're designing the public API you should probably design it to be easily testable to start with - but it doesn't always work out that way. Often it's nice to be able to test one small aspect in isolation of the rest of the class.
On the other hand, black box testing is generally less brittle than white box testing: by definition, if you're only testing what the API guarantees in its contracts, then the implementation can change as much as it wants without the tests changing. White box tests, on the other hand, are sensitive to implementation changes: if the internal method changes subtly - or gains an extra parameter, for example - then you'll need to change the tests to reflect that.
It all boils down to balance, in the end - the higher the level of the test, the more likely it is to be black box. Unit tests, on the other hand, may well include an element of white box testing... at least in my experience. There are plenty of people who would refuse to use white box testing at all, only ever testing the public API. That feels more dogmatic than pragmatic to me, but I can see the benefits too.
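To illustrate the white box idea, here is a sketch where a hypothetical Slugifier funnels all its public methods through one internal helper; the white box test hits the helper directly for corner cases, while the black box test sticks to the public API.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class SlugifierTest {

        static class Slugifier {
            public String slugForTitle(String title) {
                return normalize(title);
            }

            public String slugForFileName(String fileName) {
                return normalize(fileName.replace('.', '-'));
            }

            // Shared internal helper - easier to cover corner cases here directly.
            String normalize(String raw) {
                String slug = raw.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
                return slug.replaceAll("^-+|-+$", "");   // strip leading/trailing dashes
            }
        }

        @Test
        void whiteBox_coversCornerCasesOfTheInternalHelper() {
            Slugifier slugifier = new Slugifier();
            assertEquals("hello-world", slugifier.normalize("  Hello, World!  "));
            assertEquals("", slugifier.normalize("???"));   // nothing left after stripping
        }

        @Test
        void blackBox_onlyUsesThePublicApi() {
            assertEquals("my-file-txt", new Slugifier().slugForFileName("My File.txt"));
        }
    }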
Starting out
Now, as for where you should go next - unit testing is probably the best thing to start with. You may choose to write the tests before you've designed your class (test-driven development) or at roughly the same time, or even months afterwards (not ideal, but there's a lot of code which doesn't have tests but should). You'll find that some of your code is more amenable to testing than others... the two crucial concepts which make testing feasible (IMO) are dependency injection (coding to interfaces and providing dependencies to your class rather than letting them instantiate those dependencies themselves) and test doubles (e.g. mocking frameworks which let you test interaction, or fake implementations which do everything a simple way in memory).
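Here is a small sketch of both concepts together, with all type names invented for illustration: the service takes its dependency through the constructor, and the test hands it a fake that works entirely in memory.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;
    import org.junit.jupiter.api.Test;

    // Dependency injection plus a fake: UserService depends on an interface and is
    // handed an in-memory implementation instead of a real database.
    class UserServiceTest {

        interface UserRepository {
            void save(String id, String name);
            Optional<String> findName(String id);
        }

        static class UserService {
            private final UserRepository repository;   // injected, never instantiated internally

            UserService(UserRepository repository) {
                this.repository = repository;
            }

            String displayNameFor(String id) {
                return repository.findName(id).orElse("<unknown>");
            }
        }

        // Fake: does everything "a simple way in memory".
        static class InMemoryUserRepository implements UserRepository {
            private final Map<String, String> names = new HashMap<>();
            public void save(String id, String name) { names.put(id, name); }
            public Optional<String> findName(String id) { return Optional.ofNullable(names.get(id)); }
        }

        @Test
        void fallsBackToPlaceholderForUnknownUsers() {
            InMemoryUserRepository repository = new InMemoryUserRepository();
            repository.save("42", "Ada");
            UserService service = new UserService(repository);

            assertEquals("Ada", service.displayNameFor("42"));
            assertEquals("<unknown>", service.displayNameFor("99"));
        }
    }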
I would suggest reading at least one book about this, since the domain is quite huge, and books tend to synthesize such concepts better.
E.g. a very good basis might be: Software Testing: Testing Across the Entire Software Development Life Cycle (2007).
I think such a book can explain everything better than a few out-of-context examples we could post here.
Hi, I would like to add to Jon Skeet's answer.
Based on white box testing (or structural testing) and black box testing (or functional testing), the following are further testing techniques under each respective category:
STRUCTURAL TESTING Techniques
Stress Testing
This is used to test bulk volumes of data on the system, more than what the system normally takes. If a system can stand these volumes, it can surely handle normal volumes well.
E.g.
Maybe you can test system overflow conditions: trying to withdraw more than your available bank balance shouldn't work, while withdrawing up to the maximum threshold should work.
Used when:
This is mainly used when you're unsure about the volumes your system can handle.
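The withdrawal example above can be captured as a simple boundary test, sketched here with a hypothetical BankAccount class:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    // Withdrawing up to the balance should work; withdrawing more should be rejected.
    class WithdrawalLimitTest {

        static class BankAccount {
            private long balance;
            BankAccount(long balance) { this.balance = balance; }

            boolean withdraw(long amount) {
                if (amount <= 0 || amount > balance) {
                    return false;          // rejected: overdraw or invalid amount
                }
                balance -= amount;
                return true;
            }
        }

        @Test
        void withdrawingUpToTheBalanceWorks() {
            assertTrue(new BankAccount(100).withdraw(100));
        }

        @Test
        void withdrawingMoreThanTheBalanceIsRejected() {
            assertFalse(new BankAccount(100).withdraw(101));
        }
    }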
Execution Testing
Done in order to check how proficiently a system performs.
E.g.
To calculate turnaround time for transactions.
Used when:
Early in the development process, to see whether performance criteria are met.
Recovery Testing
To see if a system can recover to its original state after a failure.
E.g.
A very common everyday example is the System Restore feature in the Windows OS.
It has restore points used for recovery, as one would know.
Used when:
When a user feels that an application critical to them at that point in time has stopped working and should continue to work, so they perform a recovery.
Other types of testing which you could find useful include:
Operations Testing
Compliance Testing
Security Testing
FUNCTIONAL TESTING Techniques include:
Requirements Testing
Regression Testing
Error-Handling Testing
Manual-Support Testing
Intersystem Testing
Control Testing
Parallel Testing
There is a very good book titled “Effective Methods for Software Testing” by William Perry of the Quality Assurance Institute (QAI), which I would suggest is a must-read if you want to go in depth with software testing.
More on the above-mentioned testing types is available in this book.
There are also two other very broad categories of testing, namely:
Manual Testing: This is done for user interfaces.
Automated Testing: Testing which basically involves white box testing, or testing done through software testing tools like LoadRunner, QTP, etc.
Lastly, I would like to mention a particular type of testing called
Exhaustive Testing
Here you try to test every possible condition, hence the name. As one would note, this is pretty much infeasible, as the number of test conditions could be effectively infinite.
Firstly, there are various tests one can perform. The question is how one organizes them. Testing is a vast and enjoyable process.
Start testing with:
1. Smoke testing. Once it passes, go ahead with functionality testing. This is the backbone of testing: if the functionality works fine, then 80% of the testing effort has paid off.
2. Now move on to user interface testing, as at times the user interface attracts the client more than functionality; it is the look and feel that clients are drawn to.
3. Now it's time to look at cosmetic bugs. Generally these bugs are ignored because of time constraints, but they can play a major role depending on the page where they are found. A spelling mistake becomes major when found on the splash screen, your landing page, or the app name itself, so these cannot be overlooked either.
4. Do conduct compatibility testing, i.e. testing on various browsers and browser versions, and maybe devices and operating systems for responsive applications.
Happy testing :)

Best practices for TDD/BDD with code that uses external services / APIs

I'm using a Twitter gem which basically accesses Twitter and lets me grab tweets, timelines, etc. It's really good, but I have a lot of code that uses the stuff it returns and I need to test it. The things the gem returns aren't exactly simple strings; they're pretty complex objects (scary as well), so I'm left scratching my head.
So basically I'm looking for an answer, book, blog, or open-source project that can show me the rights and wrongs of testing around external services.
Answers that are either not language-centric or are Ruby/Rails-centric would be greatly appreciated.
What you are really talking about are two different kinds of testing that you would want to accomplish - unit tests and integration tests.
Unit tests will test the validity of the methods, independently of any external data. You should look into some sort of mocking framework, based on whatever language you are using. You are basically looking to say, with the tests, something equivalent to "if these assumptions hold, then this test should yield...". The mocking framework will define your assumptions, in terms of saying that certain classes/objects are set up in a particular way and can be assumed to be valid. These are the tests that will not rely on Twitter being alive, or on the third-party library/API being responsive.
Integration tests will perform tests live against the data source, consuming the library/API to perform actual actions. Where it gets tricky, since you are using a third party service, is in writing out to the service (i.e. if you are creating new Tweets). If you are, you could always create an account on Twitter that could be used just for write operations. Generally, if you were testing against a local database - for example - you could then, instead, use transactions to test similar operations; rolling back the transactions instead of committing them.
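The question is Ruby-centric, but the idea is language-neutral; here is a sketch in Java using Mockito, where TwitterClient, Tweet and TweetSummarizer are hypothetical stand-ins for the gem's objects and your own code around them.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class TweetSummarizerTest {

        interface TwitterClient {
            List<Tweet> timeline(String user);
        }

        record Tweet(String text, int retweets) {}

        static class TweetSummarizer {
            private final TwitterClient twitter;
            TweetSummarizer(TwitterClient twitter) { this.twitter = twitter; }

            // The logic we actually want to unit test, independent of the live API.
            int totalRetweets(String user) {
                return twitter.timeline(user).stream().mapToInt(Tweet::retweets).sum();
            }
        }

        @Test
        void sumsRetweetsFromTheTimeline() {
            TwitterClient twitter = mock(TwitterClient.class);
            when(twitter.timeline("ed")).thenReturn(List.of(new Tweet("hi", 3), new Tweet("bye", 4)));

            assertEquals(7, new TweetSummarizer(twitter).totalRetweets("ed"));
        }
    }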
Here are a couple of non-language specific, high-level definitions:
Wikipedia (Software Testing)
Wikipedia (Mock Object)
I am from a .NET stack, so I won't pretend to know much about Ruby. A quick Google search, though, did reveal the following:
Mocha (Ruby Mocking Framework)
You can easily stub at the HTTP layer using something like WireMock (http://wiremock.org/). I've used this on a few projects now and it's quite powerful and fast. It will eliminate all the setup code of code-based mocking: just fire up the jar with the related mappings and Bob's your uncle.
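For example, a rough sketch using WireMock's Java API (the exact setup varies a little between versions, and the Twitter-like path and canned body here are just assumptions):

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    import com.github.tomakehurst.wiremock.WireMockServer;

    // Stub the HTTP layer; you can also run the standalone jar with JSON mapping
    // files instead of setting stubs up programmatically.
    public class TwitterStubExample {

        public static void main(String[] args) {
            WireMockServer server = new WireMockServer(8089);  // stub server on a local port
            server.start();

            server.stubFor(get(urlEqualTo("/1.1/statuses/user_timeline.json?screen_name=ed"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("[{\"text\": \"hello\", \"retweet_count\": 3}]")));

            // Point the code under test at http://localhost:8089 instead of the real API,
            // run the tests, then shut the stub down with server.stop().
        }
    }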

How to get your clients to test

I build web apps for a living.
An important but often painful process is client/user acceptance testing.
How do you manage this process?
i.e. How do you get them to test? Do you give them test scripts?
Do you give them a system to log bugs and change requests/feedback? How do you get the client to understand the difference between a bug and a feature change?
How do you get clients to give you repeatable steps to reproduce a bug/issue?
Any good web apps for managing this process? (I'm thinking a Basecamp-like app would be very useful for this.)
Thanks,
Ed
Don't give them test scripts.
To me that invalidates the testing process to a large degree, because if you're the one thinking up the test cases, your software probably already handles them.
The idea of good testing is that there is a level of independence, so you can't cater only for the test cases you already know about; the client is likely to think of scenarios that you won't, which is the whole point.
But how do you motivate them? Well, honestly I'd be surprised if they weren't motivated. I've generally found that motivating them to comment on func specs, requirements and other preliminary documentation is a far tougher battle. By the time you get to testing, you've eliminated an important psychological hurdle in that the software is now "real".
How you handle this depends to a large extent on the nature of your relationship with the client. If you have a formal process with an agreed-upon spec, you should really be saying that the client has a certain period to sign off and accept the software, and that inaction is implied acceptance.
If it's an internal client, then that's harder. It probably all comes down to who's driving the project and who the stakeholders are. These are the people you need to motivate to drive such activity.
Usually the best method that I've come across for client testing is having them send screenshots of the problem and some of the things they did to create it. By this point, most of the testing should have been done in-house and the egregious bugs should have been weeded out. Having a system that automatically emails me when an error occurs lets me know they are testing, and I get most of the gory details from the stack trace in the email.
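A sketch of that "email me the stack trace" idea, with a hypothetical ErrorNotifier interface standing in for whatever actually sends the email (JavaMail, a chat webhook, an issue tracker, ...):

    import java.io.PrintWriter;
    import java.io.StringWriter;

    // Capture unhandled errors, turn the stack trace into text, and hand it to a
    // notifier; a real web app would hook this into its error-handling filter or
    // the framework's global error handler.
    public class ErrorReporter {

        public interface ErrorNotifier {
            void notify(String subject, String body);
        }

        private final ErrorNotifier notifier;

        public ErrorReporter(ErrorNotifier notifier) {
            this.notifier = notifier;
        }

        public void report(Throwable error, String whatTheUserWasDoing) {
            StringWriter stackTrace = new StringWriter();
            error.printStackTrace(new PrintWriter(stackTrace));

            notifier.notify(
                    "Error during client testing: " + error.getMessage(),
                    "User action: " + whatTheUserWasDoing + "\n\n" + stackTrace);
        }
    }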