What kind of test is that - checking the interaction of two classes?

There is a class A doing some calculations and a class B that calls methods on class A.
Unit tests were fine for both classes, but when I used the classes together I discovered that it does not work: the types of the parameters were incorrect. As this was part of a school assignment, I was supposed to say what kind of test this is. I think it is an integration test; is that correct?
I think so because integration means integrating multiple modules into one system, and here I am integrating two classes together.

Strictly speaking, a unit test would be one module (class) tested in isolation, with any external dependencies stubbed out.
But in reality, so-called unit tests often break this rule. For example, in Rails, unit tests often hit the database (though this can be avoided).
In the situation you describe here, 'integration' is probably the best term to use.
Note that the meaning of these terms can vary a lot depending on the context. I would call what Nathan Hughes describes a system integration test to distinguish it from more granular tests.

Typically "integration test" means a test that touches some other system, like a database or file system or a web service or whatever. Since in your case it's two classes in the same program, I would categorize it as a unit test.
There's an expectation that a unit test have a small scope, but there isn't any hard and fast rule that it be limited to one method or one class.
Unit tests check that the code within the program is internally consistent, which is what needs doing here.
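Whatever label you settle on, a minimal C# sketch may show why the combined test matters; the class names and the degrees/radians mix-up below are invented for illustration, not taken from the question. Each class would pass its own isolated test, but the test that exercises both real classes together fails, exposing the incompatible parameter.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Class A: expects the angle in radians.
public class Calculator
{
    public double Height(double angleInRadians, double distance) =>
        distance * Math.Tan(angleInRadians);
}

// Class B: calls A but passes degrees without converting -- the kind of
// parameter mismatch that only shows up when the real classes are combined.
public class Reporter
{
    private readonly Calculator calculator = new Calculator();

    public string Describe(double angleInDegrees, double distance) =>
        $"Height: {calculator.Height(angleInDegrees, distance):F2}";
}

[TestClass]
public class ReporterWithRealCalculatorTests
{
    [TestMethod]
    public void Describe_WithRealCalculator_ReportsExpectedHeight()
    {
        // tan(45 degrees) = 1, so the correct answer is 10.00; this fails
        // because Reporter skipped the degrees-to-radians conversion.
        Assert.AreEqual("Height: 10.00", new Reporter().Describe(45.0, 10.0));
    }
}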

Related

Code reuse in automated acceptance tests without excessive abstraction

I was recently hired as part of a team at my work whose focus is on writing acceptance test suites for our company's 3D modeling software. We use an in-house C# framework for writing and running them, which essentially amounts to subclassing the TestBase class and overriding the Test() method, where generally all the setup, testing, and teardown is done.
While writing my tests, I've noticed that a lot of my code ends up being boilerplate code I rewrite often. I've been interested in trying to extract that code to be reusable and make my code DRYer, but I've struggled to find a way to do so without overabstracting my tests when they should be largely self-contained. I've tried a number of approaches, but they've run into issues:
Using inheritance: the most naive solution, this works well at first and lets me write tests quickly, but it has run into the usual pitfalls, i.e., test classes becoming too rigid, being unable to share my code across cousin subclasses, and logic being obfuscated within the inheritance tree. For instance, I have abstract RotateVolumeTest and TranslateVolumeTest classes that both inherit from ModifyVolumeTest, but I know relatively soon I'm going to want to rotate and translate a volume, so this is going to be a problem.
Using composition through interfaces: this solves a lot of the problems with the previous approach, letting me reuse code flexibly for my tests, but it leads to a lot of abstraction and seeming 'class bloat' -- now I have IVolumeModifier, ISetsUp, etc., all of which make the code less clear in what it's actually doing in the test.
Helper methods in a static utility class: this has been very helpful, especially for using a Factory pattern to generate the complex objects I need for tests quickly. However, it's felt 'icky' to put some methods in there that I know aren't actually very general, instead being used for a small subset of tests that need to share very specific code.
Using a testing framework like xUnit.net or similar to share code through [SetUp] and [TearDown] methods in a test suite, generally all in the same class: I've strongly preferred this approach, as it offers the reusability I've wanted without abstracting away from the test code, but my team isn't interested in it; I've tried to show the potential benefits of adopting a framework like that for our tests, but the consensus seems to be that it's not worth the refactoring effort in rewriting our existing test classes. It's a valid point and I think it's unlikely I'll be able to convince them further, especially as a relatively new hire, so unless I want to make my test classes vastly different from the rest of the code base, this one's off the table.
Copy and paste code where it's needed: this is the current approach we use, along with #3 and adding methods to TestBase. I know opinions differ on whether copy/paste coding is acceptable for test code where it of course isn't for production, but I feel that using this approach is going to make my tests much harder to maintain or change in the long run, as I now have N places I need to fix logic if a bug shows up (which plenty have already and only needed to be fixed in one).
At this point I'm really not sure what other options I have but to opt for #5, as much as it slows down my ability to write tests quickly or robustly, in order to stay consistent with the current code base. Any thoughts or input are very much appreciated.
I personally believe the most important thing for a successful testing framework is abstractions. Make it as easy as possible to write the test. The key points for me are that you will end up with more tests and the writer focuses more on what they are testing than how to write the test. Every testing framework I have seen that doesn't focus on abstraction has failed in more ways than one and ended up being maintainability nightmares.
If the logic is not used anywhere else but in a single test class, then leave it in that test class, and refactor later if it is needed in more than one place.
I would opt for any of these except #5.
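To make "abstraction that keeps the focus on what is being tested" concrete, here is a small, hypothetical sketch in the spirit of approach #3; Volume, TestVolumes and the parameters are invented stand-ins, not the real framework's API. The helper is limited to object construction, so the test body stays readable while the boilerplate lives in one place.

// Hypothetical stand-in for the real domain type in the 3D modeling framework.
public class Volume
{
    public double Width, Height, Depth, RotationDegrees;
    public void Rotate(double degrees) => RotationDegrees += degrees;
}

// Construction-only helper: sensible defaults, with named methods for the few
// variations tests actually care about.
public static class TestVolumes
{
    public static Volume Default() =>
        new Volume { Width = 10, Height = 10, Depth = 10 };

    public static Volume RotatedBy(double degrees)
    {
        var volume = Default();
        volume.Rotate(degrees);
        return volume;
    }
}

// Inside a Test() override, setup then reads as intent rather than boilerplate:
// var volume = TestVolumes.RotatedBy(90);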

How to unit test non-public logic

In some cases unit testing can be really difficult. Normally people say to only test your public API. But in some cases this is just not possible. If your public API depends on files or databases you can't unit test properly. So what do you do?
Because it's my first time TDD-ing, I'm trying to find "my style" for unit testing, since there seems to be no single way to do it. I found two approaches to this problem, neither of which is flawless. On the one hand, you could try to friend your assemblies and test the features that are internal. On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests. This approach looks quite nice at first but becomes uglier as you try to transport data using these fakes.
Is there any "good" solution to this problem? Which of those is less flawed? Or is there even a third approach?
I made a couple of false starts in TDD, grappling with this exact same problem. For me the breakthrough came when I realized what my mentor meant when he said: "We don't want to test the framework." (In our case that was the .NET framework.)
In your case it sounds as if you have some business logic that interfaces to files and databases. What I would do is abstract the file and database logic into the thinnest layers possible. You can then use mocks (or fakes or stubs) to simulate the file and database layers. This will allow you to test scenarios like if-my-database-returns-this-kind-of-information-does-my-business-logic-handle-it-correctly? Likewise, for file access you can test the code that figures out which file in which path to open, and you can test that your logic can pull apart the contents of any given file correctly and use it correctly.
If for example your file access layer consists of a single function that takes a path name and a file name and returns the contents of the file in a long string then you don't really need to test it because essentially you are making a single call to the framework/OS and there is not a lot that can go wrong there.
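As a concrete sketch of that idea (IFileReader, ReportLoader and FakeFileReader are invented names, not from the original answer): the wrapper is a one-liner that barely needs testing, and the business logic can be unit tested against a hand-rolled fake.

// Thin wrapper: a single call into the framework, so there is little to test here.
public interface IFileReader
{
    string ReadAllText(string path);
}

public class FileReader : IFileReader
{
    public string ReadAllText(string path) => System.IO.File.ReadAllText(path);
}

// Business logic depends on the abstraction, not on the real file system.
public class ReportLoader
{
    private readonly IFileReader files;
    public ReportLoader(IFileReader files) { this.files = files; }

    public int CountLines(string path) =>
        files.ReadAllText(path).Split('\n').Length;
}

// Hand-rolled fake for unit tests: no disk access, fully predictable contents.
public class FakeFileReader : IFileReader
{
    private readonly string contents;
    public FakeFileReader(string contents) { this.contents = contents; }
    public string ReadAllText(string path) => contents;
}

// In a test: Assert.AreEqual(2, new ReportLoader(new FakeFileReader("a\nb")).CountLines("any.txt"));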
At the moment I am working on a system that wraps our database as a bunch of functions that return lists of POCOs. Easy to understand for the business layer and easy to simulate via mocks.
Working this way takes some getting used to but it is absolutely byoo-ti-full once it clicks in your mind.
Finally, from your question I guess that you are working with legacy code and trying to do TDD for a new component. This is quite a bit harder than doing TDD on a completely new development. If it is at all possible, try to do your first TDD attempts on new (or well isolated) systems. Once you have learnt the mechanics it would be a lot easier to introduce partially TDD'd bits to legacy systems.
If your public API depends on files or databases you can't unit test properly. So what do you do?
There is an abstraction level that can be used.
IFileSystem / IFileStorage (for files)
IRepository / IDataStorage (for databases)
Since this level is very thin, its integration tests will be easy to write and maintain. All other code will be unit-test friendly, because it is easy to mock interaction with the filesystem and the database.
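A minimal sketch of what such an abstraction level might look like; the member names are assumptions, not a prescribed API. The concrete implementations behind these interfaces get a handful of integration tests, and everything that depends on them can be unit tested with simple fakes.

using System.Collections.Generic;

// Thin seam over the file system.
public interface IFileStorage
{
    string Read(string path);
    void Write(string path, string contents);
}

// Thin seam over the database.
public interface IDataStorage<TEntity>
{
    IReadOnlyList<TEntity> LoadAll();
    void Save(TEntity entity);
}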
On the one hand, you could try to friend your assemblies and test the features that are internal.
People face this problem when their classes violate the single responsibility principle (SRP) and dependency injection (DI) is not used.
There is a good rule that classes should be tested via their public methods/properties only. If internal methods are used by other classes, then it is acceptable to test them. Private or protected methods should not be made internal just for the sake of testing.
On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests.
Yes, interfaces are easy to mock because of the limitations of mocking frameworks (many of them cannot mock concrete classes with non-virtual members).
If you can already create a fake/stub instance of a type directly, then that dependency does not need to implement an interface.
Sometimes people use interfaces for their domain entities, but I do not recommend that.
To simplify working with fakes there are two patterns used:
Object Mother
Test Data Builder
When I started writing unit tests I used the 'Object Mother' pattern; now I use 'Test Data Builders'.
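For illustration, a Test Data Builder might look like the hypothetical sketch below (Customer and its properties are made-up examples): defaults are defined once, and each test overrides only the values it cares about.

// Hypothetical entity used by the builder.
public class Customer
{
    public Customer(string name, bool isActive) { Name = name; IsActive = isActive; }
    public string Name { get; }
    public bool IsActive { get; }
}

// Test Data Builder: safe defaults plus fluent overrides, so tests read like sentences.
public class CustomerBuilder
{
    private string name = "Jane Doe";
    private bool isActive = true;

    public CustomerBuilder Named(string value) { name = value; return this; }
    public CustomerBuilder Inactive() { isActive = false; return this; }

    public Customer Build() => new Customer(name, isActive);
}

// Usage in a test:
// var customer = new CustomerBuilder().Inactive().Build();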
There are a lot of good ideas that can help you in the book Working Effectively with Legacy Code by Michael Feathers.
Don't let the hard stuff get in your way... If it's inherently hard to test due to database or file integration, just ignore it for the moment. Most likely you can refactor that hard-to-test code into easier-to-test code using mocks, dependency injection, etc. Until then, test the easy stuff and get a good unit test suite built up... when you do refactor the hard-to-test code, you will have much higher confidence that you're not breaking anything else... And refactoring to make something more easily testable IS a good reason to refactor...

functional integration testing

Is software testing done in the following order?
Unit testing
Integration testing
Functional testing
I want to confirm whether functional testing is done after integration testing or not.
Thanks
That is a logical ordering, yes. Often followed by User Acceptance Testing and then any form of public alpha/beta testing before release if appropriate.
In a TDD coding environment, the order in which these tests are made to pass generally follows your order; however, they are often WRITTEN in the reverse order.
When a team gets a set of requirements, one of the first things they should do is turn those requirements into one or more automated acceptance tests, which prove that the system meets all functional requirements defined. When this test passes, you're done (IF you wrote it properly). The test, when first written, obviously shouldn't pass. It may not even compile, if it references new object types that you haven't defined yet.
Once the test is written, the team can usually see what is required to make it pass at a high level, and break up development along these lines. Along the way, integration tests (which test the interaction of objects with each other) and unit tests (which test small, atomic pieces of functionality in near-complete isolation from other code) are written. Using refactoring tools like ReSharper, the code of these tests can be used to create the objects and even the logic of the functionality being tested. If you're testing that the output of A+B is C, then assert that A+B == C, then extract a method from that logic in the test fixture, then extract a class containing that method. You now have an object with a method you can call that produces the right answer.
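As a toy illustration of that flow (Adder is an invented name): the assertion is written first, the logic behind it is then extracted into a method in the test fixture, and finally into its own production class.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AdderTests
{
    [TestMethod]
    public void Add_ReturnsSumOfOperands()
    {
        // Originally written as inline logic in the test; the production class
        // below was extracted from it step by step.
        Assert.AreEqual(5, new Adder().Add(2, 3));
    }
}

// The extracted class: a small unit of production code with a passing unit test.
public class Adder
{
    public int Add(int a, int b) => a + b;
}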
Along the way, you have also tested the requirements: if the requirements assert that an answer, given A and B, must be the logical equivalent of 1+2==5, then the requirements have an inconsistency indicating a need for further clarification (i.e. somebody forgot to mention that D=2 should be added to B before A+B == C) or a technical impossibility (i.e. the calculation requires there to be 25 hours in a day or 9 bits in a byte). It may be impossible (and is generally considered infeasible by Agile methodologies) to guarantee without a doubt that you have removed all of these inconsistencies from the requirements before any development begins.

Am I unit testing or integration testing?

I am starting out with automated testing and I would like to test one of my data access methods. I am trying to test what the code does if the database returns no records.
Is this something that should be done in a unit test or an integration test?
Thanks
If your test code connects to an actual database and relies on the presence of certain data (or lack of data) in order for the test to pass, it's an integration test.
I usually prefer to test something like this by mocking out the component that the "data access method" uses to get the actual data, whether that's a JDBC connection or a web service proxy or whatever else. With a mock, you say "when this method is called, return this" or "make sure that this method is called N times", and then you tell the class under test to use the mock component rather than the real component. This then is a "unit test", because you are testing how the class under test behaves, in a closed system where you've declared exactly how the other components will behave. You've isolated the class under test completely and can be sure that your test results won't be volatile and dependent on the state of another component.
Not sure what language/technology you are working with, but in the Java world, you can use JMock, EasyMock, etc for this purpose.
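Since the original poster's language isn't stated, here is a hedged C# sketch using a hand-rolled fake rather than a mocking library; IOrderDao, OrderService and the "No orders found" behaviour are invented for illustration. The fake simply guarantees the "database returned no records" case, so the test exercises only the class under test.

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Order { }

public interface IOrderDao
{
    IList<Order> FindOrdersFor(int customerId);
}

// Hand-rolled fake: always reports an empty result set, standing in for the database.
public class EmptyOrderDao : IOrderDao
{
    public IList<Order> FindOrdersFor(int customerId) => new List<Order>();
}

// Class under test: its behaviour for "no records" is what we want to verify.
public class OrderService
{
    private readonly IOrderDao dao;
    public OrderService(IOrderDao dao) { this.dao = dao; }

    public string GetOrderSummary(int customerId) =>
        dao.FindOrdersFor(customerId).Count == 0 ? "No orders found" : "Orders exist";
}

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void GetOrderSummary_ReturnsNoOrdersMessage_WhenDatabaseHasNoRecords()
    {
        var service = new OrderService(new EmptyOrderDao());
        Assert.AreEqual("No orders found", service.GetOrderSummary(42));
    }
}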
I think more time has been wasted arguing about what is a unit vs. what is an integration test than value has been added.
I don't care.
Let me put it a different way: If I were testing it, I'd see two ways to do it - fake out the database returning zero rows, or actually connect to a database that has no data for the select. I'd probably start testing with whatever was easiest to do and simplest to implement - if it ran fast enough for me to get meaningful feedback. Then I'd consider the other if I needed it to run faster or thought there would be some advantage.
For example, I'd probably start by connecting to the actual test DB at my work. But if the software needed to work with many different databases - Oracle, Postgres, MySQL, SQL Server and DB2 - or if the test DB at work was down for 'refreshes' a lot, I'd probably write the 'pure/unit' test that existed totally in isolation.
In my old age, I prefer to use the term 'developer-facing' vs. 'customer facing' more often, and do the kind of testing that makes more sense. I find using terms like "unit" extensively, then getting a definition-weenie about it leads to people doing things like mocking out the filesystem or mocking getters and setters - activity that I find unhelpful.
I believe this strongly; I've presented at Google on it (GTAC 2007):
http://www.youtube.com/watch?v=PHtEkkKXSiY
good luck! Let us know how it goes!
Do your test and let other people spend time with taxonomy.
My perspective is that you should categorize the test based on scope:
A unit test can be run standalone, without any external dependencies (file IO, network IO, database, external web services).
An integration test can touch external systems.
If the test requires a real database to run, then call it an integration test and keep it separate from the unit tests. This is important because if you mix integration and unit tests, you make your code less maintainable.
A mixed bag of tests means that new developers may need a whole heap of external dependencies in order to run the test suite. Imagine that you want to make a change to a piece of code that is related to the database but doesn't actually require the database to function; you're going to be frustrated if you need a database just to run the tests associated with the project.
If the external dependency is difficult to mock out (for example, in .NET, if you are using Rhino Mocks and the external classes don't have interfaces), then create a thin wrapper class that touches the external system and mock out that wrapper in the unit tests. You shouldn't need a database to run this simple test, so don't require one!
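For example, such a wrapper can be as thin as the sketch below; VendorFtpClient, IFtpGateway and the member names are made up for illustration. Because the vendor class has no interface (and no virtual members), Rhino Mocks can't stub it directly, but it can easily stub the wrapper's interface.

// Hypothetical vendor class: no interface, non-virtual members, talks to a real server.
public class VendorFtpClient
{
    public void Upload(string remotePath, byte[] contents)
    {
        // ... real network call ...
    }
}

// Interface you own; unit tests of calling code stub this instead.
public interface IFtpGateway
{
    void Upload(string remotePath, byte[] contents);
}

// Thin wrapper: the only piece that needs a couple of integration tests.
public class VendorFtpGateway : IFtpGateway
{
    private readonly VendorFtpClient client = new VendorFtpClient();

    public void Upload(string remotePath, byte[] contents) =>
        client.Upload(remotePath, contents);
}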
There are those (myself included) who have strict rules about what constitutes a unit test vs an integration test.
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can’t run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it
That may be one way to draw the distinction: a unit test uses mocking, for example, rather than any of the real resource providers - filesystem, database, etc.
An integration test can be viewed as a test of the coupling between systems/application layers, so the fundamentals are covered by the unit tests and system interoperability is the focus of an integration test.
It's still a grey area, though, because one can often pinpoint exceptions to these sorts of rules.
I think the important question is "What SHOULD I be doing?"
In this case I think you should be unit testing. Mock the code that talks to the DB and have it return a reliable result (no rows); this way your test checks what happens when there are no rows, and not what happens when the DB returns whatever it happens to contain at the point you run the test.
Definitely unit test it!
[TestMethod]
public void ForgotMyPassword_SendsAnEmail_WhenValidUserIsPassed()
{
    // Stub out the dependencies of the class under test (Rhino Mocks).
    var userRepository = MockRepository.GenerateStub<IUserRepository>();
    var notificationSender = MockRepository.GenerateStub<INotificationSender>();
    userRepository.Stub(x => x.GetUserByEmailAddressAndPassword("me@home.com", "secret"))
        .Return(new User { Id = 5, Name = "Peter Morris" });

    new LoginController(userRepository, notificationSender).ResendPassword("me@home.com", "secret");

    // Verify the interaction: a notification whose text starts with "Changed" was sent.
    notificationSender.AssertWasCalled(x => x.Send(null),
        options => options.Constraints(Text.StartsWith("Changed")));
}
I believe that it is possible to test that as a unit test, without a real database: instead of using a real interface to the database, replace it with a mock/stub/fake object.
If writing it as a unit test proves to be too hard, and you are not able to refactor the code so that testing it would be easy, then you had better write it as an integration test. It will run slower, so you might not be able to run all the integration tests after every code change (unlike unit tests, of which you can run hundreds or thousands per second), but as long as they are run regularly (for example as part of continuous integration), they produce some value.
Most likely a unit test ... but there is a blurred line here. It really depends upon how much code is being executed - if it is contained within a library or class then it's a unit test; if it spans multiple components then it's more of an integration test.
I believe that should be done in a unit test. You aren't testing that it can connect to the database, or that you can call your stored procedures... you are testing the behavior of your code.
I could be wrong, but that's what I think unless someone gives me a reason to think otherwise.
that is a unit test, by definition: you are testing a single isolated element of the code on a specific path

Are scenario tests groups of sequential unit tests?

I read the Wikipedia article on scenario testing, but I am sad to say it is very short. I am left wondering: are scenario tests a collection of sequential unit tests? Or, perhaps, like a single multi-step unit test? Do many frameworks support scenario tests, or are they covered by unit testing?
If they have nothing to do with automation, what are they?
I don't think there's any fixed relationship between scenario tests and the number or distribution of unit tests.
I think the most common code-representation of a scenario is a specific set of business data required to support a specific story (scenario). This is often provided in the form of database data, fake stub data or a combination of both.
The idea is that this dataset has known and well-defined characteristics that will provide well defined results all across a given business process.
For a web application I could have a single web test (or several for variations) that clicks through the full scenario. In other cases the scenario is used at a lower level, possibly testing part of the scenario in a functional test or a unit test. In that case I never group the tests by scenario, but choose the functional grouping of tests I normally use for unit/functional tests. Quite often there's a method within "Subsystem1Test" that is called "testScenario1" or maybe "testScenarioInsufficientCredit". I prefer to give my scenarios names.
In addition to korsenvoid's response, in my experience scenario-based testing will often be automated, as it will be included in regression testing. Regression testing is regularly automated because doing it manually does not scale well with regular releases.
In commercial software, good examples of scenario tests are the tutorials included with the user documentation. These obviously must work in each release or be removed from the docs, and hence must be tested.
While you can carry out scenario testing using sequenced unit tests, my guess is that it is more common to use GUI-based automation tools. For example, I use TestComplete in this role with a scripting framework, to good effect. Scenario tests are typically carried out from a user/client perspective, which can be difficult to accurately replicate at the unit level.
IMHO, scenario testing is a testing activity, as opposed to a development activity; hence it's about testing a product, not unit(s) of that product. The test scenarios are end-to-end scenarios, using the natural interfaces of the product. If the product has programmatic interfaces, then you could use a unit test framework, or FitNesse.