Am I unit testing or integration testing?

I am starting out with automated testing and I would like to test one of my data access methods. I am trying to test what the code does if the database returns no records.
Is this something that should be done in a unit test or an integration test?
Thanks

If your test code connects to an actual database and relies on the presence of certain data (or lack of data) in order for the test to pass, it's an integration test.
I usually prefer to test something like this by mocking out the component that the "data access method" uses to get the actual data, whether that's a JDBC connection, a web service proxy, or whatever else. With a mock, you say "when this method is called, return this" or "make sure this method is called N times", and then you tell the class under test to use the mock component rather than the real one. This is then a "unit test", because you are testing how the class under test behaves in a closed system where you've declared exactly how the other components will behave. You've isolated the class under test completely and can be sure that your test results won't be volatile and dependent on the state of another component.
Not sure what language/technology you are working with, but in the Java world, you can use JMock, EasyMock, etc for this purpose.
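For instance, here is a minimal sketch of that idea in C# (the IOrderRepository, OrderReportService and Order names are hypothetical, invented purely for illustration, and a hand-rolled fake is used instead of a mocking library; JMock, EasyMock or Moq would express the same thing):

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical types, for illustration only.
public class Order { public decimal Total { get; set; } }

public interface IOrderRepository
{
    IReadOnlyList<Order> FindOrdersForCustomer(int customerId);
}

// The class under test: it only cares about what the repository hands back.
public class OrderReportService
{
    private readonly IOrderRepository _repository;
    public OrderReportService(IOrderRepository repository) => _repository = repository;

    public string BuildSummary(int customerId)
    {
        var orders = _repository.FindOrdersForCustomer(customerId);
        return orders.Count == 0 ? "No orders found" : $"{orders.Count} order(s)";
    }
}

// Fake that simulates "the database returned no records".
public class EmptyOrderRepository : IOrderRepository
{
    public IReadOnlyList<Order> FindOrdersForCustomer(int customerId) => new List<Order>();
}

[TestClass]
public class OrderReportServiceTests
{
    [TestMethod]
    public void BuildSummary_ReportsNoOrders_WhenRepositoryReturnsNoRows()
    {
        var service = new OrderReportService(new EmptyOrderRepository());

        Assert.AreEqual("No orders found", service.BuildSummary(customerId: 42));
    }
}

No database is involved, so the test exercises exactly the "no records" branch of the class under test.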

I think more time has been wasted arguing about what is a unit vs. what is an integration test than value has been added.
I don't care.
Let me put it a different way: If I were testing it, I'd see two ways to do it - fake out the database returning zero rows, or actually connect to a database that has no data for the select. I'd probably start testing with whatever was easiest to do and simplest to implement - if it ran fast enough for me to get meaningful feedback. Then I'd consider the other if I needed it to run faster or thought there would be some advantage.
For example, I'd probably start by connecting to the actual test DB at my work. But if the software needed to work with many different databases - Oracle, Postgres, MySQL, SQL Server, DB2 - or if the test DB at work was down for 'refreshes' a lot, I'd probably write the 'pure/unit' test that existed totally in isolation.
In my old age, I prefer to use the terms 'developer-facing' vs. 'customer-facing' more often, and do the kind of testing that makes more sense. I find that using terms like "unit" extensively, and then getting definition-weenie about it, leads to people doing things like mocking out the filesystem or mocking getters and setters - activity that I find unhelpful.
I believe this strongly; I've presented at Google on it.
http://www.google.com/url?sa=t&source=web&oi=video_result&ct=res&cd=1&url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DPHtEkkKXSiY&ei=9-wKSobjEpKANvHT_MEB&rct=j&q=heusser+GTAC+2007&usg=AFQjCNHOgFzsoVss50Qku1p011J4-UjhgQ
good luck! Let us know how it goes!

Do your test and let other people spend their time on taxonomy.

My perspective is that you should categorize the test based on scope:
A unit test can be run standalone, without any external dependencies (File IO, Network IO, Database, External Web Services).
An integration test can touch external systems.
If the test requires a real database to run, then call it an integration test and keep it separate from the unit tests. This is important because if you mix integration and unit tests, you make your code less maintainable.
A mixed bag of tests means that new developers may need a whole heap of external dependencies in order to run the test suite. Imagine that you want to make a change to a piece of code that is related to the database but doesn't actually require the database to function; you're going to be frustrated if you need a database just to run the tests associated with the project.
If the external dependency is difficult to mock out (for example, in .NET, if you are using Rhino Mocks and the external classes don't have interfaces), then create a thin wrapper class that touches the external system. Then mock out that wrapper in the unit tests. You shouldn't need a database to run this simple test, so don't require one!
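As a rough sketch of that wrapper idea (the names are hypothetical, and a hand-rolled fake stands in for a Rhino Mocks stub), the only class that ever touches the database is the thin gateway, and unit tests substitute the fake:

using System.Data.SqlClient;

// Thin wrapper: the only code that talks to the real database.
public interface IDbGateway
{
    int CountRows(string tableName);
}

public class SqlDbGateway : IDbGateway
{
    private readonly string _connectionString;
    public SqlDbGateway(string connectionString) => _connectionString = connectionString;

    public int CountRows(string tableName)
    {
        // Sketch only: a real implementation would validate/whitelist tableName.
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand($"SELECT COUNT(*) FROM {tableName}", connection);
        return (int)command.ExecuteScalar();
    }
}

// Test double used in unit tests; no database required.
public class FakeDbGateway : IDbGateway
{
    public int RowsToReturn { get; set; }
    public int CountRows(string tableName) => RowsToReturn;
}

The gateway itself gets a handful of integration tests; everything built on top of IDbGateway stays a pure unit test.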

There are those (myself included) who have strict rules about what constitutes a unit test vs an integration test.
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can’t run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it
Which may be one way to make the distinction: a unit test exercises the code using mocks, for example, rather than any of the real resource providers - filesystem, DB, etc.
An integration test can be viewed as a test of the actual coupling of systems/application layers, so the fundamentals are tested in the unit tests and system interoperability is the focus of an integration test.
It's still a grey area, though, because one can often pinpoint certain exceptions to these sorts of rules.

I think the important question is "What SHOULD I be doing?"
In this case I think you should be unit testing. Mock the code that talks to the DB and have it return a reliable result (no rows); this way your test checks what happens when there are no rows, and not what happens when the DB returns whatever is in the DB at the point you run the test.
Definitely unit test it!
[TestMethod]
public void ForgotMyPassword_SendsAnEmail_WhenValidUserIsPassed()
{
    var userRepository = MockRepository.GenerateStub<IUserRepository>();
    var notificationSender = MockRepository.GenerateStub<INotificationSender>();

    userRepository.Stub(x => x.GetUserByEmailAddressAndPassword("me@home.com", "secret"))
                  .Return(new User { Id = 5, Name = "Peter Morris" });

    new LoginController(userRepository, notificationSender).ResendPassword("me@home.com", "secret");

    notificationSender.AssertWasCalled(x => x.Send(null),
        options => options.Constraints(Text.StartsWith("Changed")));
}

I believe that it is possible to test that as a unit test, without a real database. Instead of using a real interface to the database, replace it with a mock/stub/fake object.
If writing it as a unit test proves to be too hard, and you are not able to refactor the code so that testing it would be easy, then you'd better write it as an integration test. It will run slower, so you might not be able to run all the integration tests after every code change (unlike unit tests, which you can run by the hundreds or thousands per second), but as long as they are run regularly (for example as part of continuous integration), they produce some value.

Most likely a unit test ... but there is a blurred line here. It really depends upon how much code is being executed - if it is contained to a library or class then it's a unit test; if it spans multiple components then it's more of an integration test.

I believe that should be done in a unit test. You aren't testing that it can connect to the database, or that you can call your stored procedures... you are testing the behavior of your code.
I could be wrong, but that's what I think unless someone gives me a reason to think otherwise.

That is a unit test, by definition: you are testing a single isolated element of the code on a specific path.

Related

Multiple asserts and multiple actions on integration test

I have an integration test that should test the creation of a new account in a CRM application.
The account creation triggers several things:
Creates the basic profile of the company
Creates every user (you can define the number of users on the registration)
Initializes the basic configuration of the account
Sends a welcome email with the starting information
etc
The test checks every aspect with several asserts, but I don't know if this is correct or if I should write a separate test for each one.
If I go for separate tests, the setup would be the same for all, so I feel like it would be a waste of time.
What you explain there sounds more like an end-to-end test. It's OK to have some end-to-end tests, but they are usually very expensive to write and maintain, and brittle.
For me, the tests in a service should give you confidence that the software you are delivering will work in production. So maybe it's OK to have a very small number of end-to-end tests that check that everything is glued together properly, but most of the actual functionality should be covered by normal tests. An example of what I would try to avoid is an end-to-end test that checks what happens when a downstream service is down.
Another very important aspect is that tests are written for other developers, not for the compiler, so keeping tests simple is important for maintainability. I want to stress this because if a test has 10 lines of assertions, it will be unreadable for most developers; even a test of 10 lines of code is difficult to grok.
Here's how I try to build services:
If you are familiar with ATDD and hexagonal architecture, most of the features should be tested by stubbing the adapters, which allows the tests to run super fast and lets you fiddle with the adapters using test doubles. These tests shouldn't interact with anything outside the JVM, and they give you a good level of confidence that the features will work. If the feature has too many side effects, I try to pick the assertions carefully. For example, if a feature is to create an account, I won't check that the account is actually in the DB (because the chances of that breaking are minuscule), but I would check that all the messages that need to be triggered are sent (a sketch of this style follows below). Sometimes I do create multiple tests if a single test starts to become unclear, for example one test that checks the returned value and another that verifies the side effects (e.g. messages being produced).
Having at minimum good coverage of the critical code with unit tests and integration tests (here I mean test classes that interact with external services) builds up the confidence that the classes work as expected, so the end-to-end tests don't need to cover the myriad of combinations.
And last, a very small number of end-to-end tests to ensure everything is glued together nicely.
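Here is a minimal sketch of the "assert on the messages, not on the database" style mentioned above (shown in C# to match the other examples in this thread, though the answer is JVM-flavoured; AccountService, IMessagePublisher and the topic names are hypothetical):

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Port through which the service publishes messages; an adapter implements it in production.
public interface IMessagePublisher
{
    void Publish(string topic, string payload);
}

// Recording test double standing in for the real adapter.
public class RecordingPublisher : IMessagePublisher
{
    public List<(string Topic, string Payload)> Published { get; } = new List<(string Topic, string Payload)>();
    public void Publish(string topic, string payload) => Published.Add((topic, payload));
}

// Simplified feature under test.
public class AccountService
{
    private readonly IMessagePublisher _publisher;
    public AccountService(IMessagePublisher publisher) => _publisher = publisher;

    public void CreateAccount(string companyName)
    {
        // ... the account would be persisted through another port here ...
        _publisher.Publish("account-created", companyName);
        _publisher.Publish("welcome-email", companyName);
    }
}

[TestClass]
public class AccountServiceTests
{
    [TestMethod]
    public void CreateAccount_PublishesTheExpectedMessages()
    {
        var publisher = new RecordingPublisher();
        new AccountService(publisher).CreateAccount("Acme");

        CollectionAssert.AreEqual(
            new[] { "account-created", "welcome-email" },
            publisher.Published.Select(m => m.Topic).ToList());
    }
}

The test never looks at a database; it only checks the side effects that other systems actually depend on.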
Bottom line: create multiple tests with the same setup if it helps with understanding the code.
edit
About integration tests: it's just terminology. I call an integration test one that exercises a class or a small group of classes that interact with an external service (database, queue, files, etc.); a component test is something that verifies a single service or module; and an end-to-end test is something that tests all the services or modules working together.
What you mentioned about stored procs changes the approach. Do you have unit tests for them? If not, you could write some sort of integration tests that verify the stored procs work as expected.
About readability of the test: for me, the real test is to ask someone from another team, or a product owner, whether the test name, the setup, what is asserted, and the relationship between those things are clear. If they struggle, it means that the test should be simplified.

Unit testing with a "real" database using EF Core

I've got a fairly complex application to test that uses EF Core, including things like NetTopologySuite. It runs on MS SQL Server locally and Azure SQL in the cloud and has to be tested against these, so in-memory databases and SQLite are not enough. Is anybody aware of some helper framework to automate the creation/wiping of test databases? I've found EfCore.TestSupport, but it doesn't seem to support NetTopologySuite, which is a must-have for us.
By definition a unit test cannot use a real database, as you're then no longer testing a single unit of functionality. Using a real database makes your test a systems test, which is a much higher-level test that should not be part of your build/CI pipeline. This is because the real database could fail, which would then fail your unit test even if there's nothing wrong with your code (i.e. false negatives).
Additionally, you're "testing the framework" here. If you're going to use a library like NetTopologySuite, you should ensure that it is itself well-tested, but you should not test it yourself. That is not your code. Likewise with EF Core: EF Core has an extensive test suite already; you do not need to, nor should you, test it yourself. Your responsibility is to test the code you write, the code that is unique to your application. Mocks and fakes should be used to abstract away third-party libraries like this. In other words, you'd simply use a fake/mocked data result of the kind you'd get from NetTopologySuite and run that through your code, rather than actually using that library directly.
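As a hedged sketch of that last point (the DeliveryPlanner, ILocationSource and Site names are hypothetical, and this is not EF Core's or NetTopologySuite's API), the spatial query can sit behind a small port whose real implementation uses EF Core + NetTopologySuite, while the unit test feeds plain data through your own logic:

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Plain DTO: only what your code needs, independent of NetTopologySuite types.
public record Site(string Name, double DistanceKm);

// Port whose production implementation would use EF Core + NetTopologySuite; faked here.
public interface ILocationSource
{
    IReadOnlyList<Site> SitesNear(double latitude, double longitude);
}

// The logic that is actually yours to test: pick the closest site.
public class DeliveryPlanner
{
    private readonly ILocationSource _locations;
    public DeliveryPlanner(ILocationSource locations) => _locations = locations;

    public string ClosestSite(double latitude, double longitude) =>
        _locations.SitesNear(latitude, longitude)
                  .OrderBy(s => s.DistanceKm)
                  .Select(s => s.Name)
                  .FirstOrDefault() ?? "none";
}

[TestClass]
public class DeliveryPlannerTests
{
    private class FakeLocations : ILocationSource
    {
        public IReadOnlyList<Site> SitesNear(double latitude, double longitude) =>
            new List<Site> { new Site("Depot B", 12.5), new Site("Depot A", 3.2) };
    }

    [TestMethod]
    public void ClosestSite_PicksTheSmallestDistance()
    {
        var planner = new DeliveryPlanner(new FakeLocations());
        Assert.AreEqual("Depot A", planner.ClosestSite(51.5, -0.1));
    }
}

Whether the real query actually returns correct spatial results is then a separate, much smaller set of integration tests against SQL Server/Azure SQL.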

How to unit test non-public logic

In some cases unit testing can be really difficult. Normally people say to only test your public API. But in some cases this is just not possible. If your public API depends on files or databases you can't unit test properly. So what do you do?
Because it's my first time TDD-ing, I'm trying to find "my style" for unit testing, since it seems there is just not one way to do so. I found two approaches to this problem, neither of which is flawless. On the one hand, you could friend your assemblies and test the features that are internal. On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests. This approach looks quite nice at first but becomes uglier as you try to transport data using these fakes.
Is there any "good" solution to this problem? Which of those is less flawed? Or is there even a third approach?
I made a couple of false starts in TDD, grappling with this exact same problem. For me the breakthrough came when I realized what my mentor meant when he said: "We don't want to test the framework." (In our case that was the .NET framework.)
In your case it sounds as if you have some business logic that interfaces to files and databases. What I would do is abstract the file and database logic into the thinnest layers possible. You can then use mocks (or fakes or stubs) to simulate the file and database layers. This will allow you to test scenarios like: if my database returns this kind of information, does my business logic handle it correctly? Likewise, for file access you can test the code that figures out which file in which path to open, and you can test that your logic is able to pull apart the contents of any given file correctly and use it correctly.
If, for example, your file access layer consists of a single function that takes a path name and a file name and returns the contents of the file as a long string, then you don't really need to test it, because essentially you are making a single call to the framework/OS and there is not a lot that can go wrong there.
At the moment I am working on a system that wraps our database as a bunch of functions that return lists of POCOs. Easy to understand for the business layer and easy to simulate via mocks.
Working this way takes some getting used to but it is absolutely byoo-ti-full once it clicks in your mind.
Finally, from your question I guess that you are working with legacy code and trying to do TDD for a new component. This is quite a bit harder than doing TDD on a completely new development. If it is at all possible, try to do your first TDD attempts on new (or well-isolated) systems. Once you have learnt the mechanics, it will be a lot easier to introduce partially TDD'd bits into legacy systems.
If your public API depends on files or databases you can't unit test properly. So what do you do?
There is an abstraction level that can be used:
IFileSystem / IFileStorage (for files)
IRepository / IDataStorage (for databases)
Since this level is very thin, its integration tests will be easy to write and maintain. All other code will be unit-test friendly because it is easy to mock interaction with the filesystem and database.
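A minimal sketch of what those seams might look like (the member names are hypothetical):

// Thin seams around the outside world: their real implementations get a few
// integration tests, and everything that depends on them gets plain unit tests.
public interface IFileStorage
{
    string ReadAllText(string path);
    void WriteAllText(string path, string contents);
}

public interface IDataStorage<T>
{
    T FindById(int id);
    void Save(T entity);
}

The production implementations simply delegate to System.IO.File and to your data access technology; unit tests substitute in-memory fakes.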
On the one hand, you could try to friend your assemblies and test the features that are internal.
People face this problem when their classes violate the single responsibility principle (SRP) and dependency injection (DI) is not used.
There is a good rule that classes should be tested via their public methods/properties only. If internal methods are used by other classes, then it is acceptable to test them. Private or protected methods should not be made internal just for the sake of testing.
On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests.
Yes, interfaces are easy to mock; mocking frameworks are much more limited when it comes to concrete classes.
If you can create an instance (a fake/stub) of a type directly, then that dependency does not need to implement an interface.
Sometimes people use interfaces for their domain entities, but I do not support that practice.
To simplify working with fakes, two patterns are commonly used:
Object Mother
Test Data Builder
When I started writing unit tests I used 'Object Mother'. Now I use 'Test Data Builder's.
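For example, a minimal Test Data Builder might look like this (the User type and its defaults are hypothetical):

// Hypothetical entity used by the tests.
public class User
{
    public string Name { get; set; }
    public string Email { get; set; }
    public bool IsAdmin { get; set; }
}

// Test Data Builder: sensible defaults, override only what the test cares about.
public class UserBuilder
{
    private string _name = "Jane Doe";
    private string _email = "jane@example.com";
    private bool _isAdmin;

    public UserBuilder Named(string name) { _name = name; return this; }
    public UserBuilder WithEmail(string email) { _email = email; return this; }
    public UserBuilder AsAdmin() { _isAdmin = true; return this; }

    public User Build() => new User { Name = _name, Email = _email, IsAdmin = _isAdmin };
}

// Usage in a test:
// var admin = new UserBuilder().Named("Peter Morris").AsAdmin().Build();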
There are a lot of good ideas that can help you in the book Working Effectively with Legacy Code by Michael Feathers.
Don't let the hard stuff get in your way... If it's inherently hard to test due to DB or file integration, just ignore it for the moment. Most likely you can refactor that hard-to-test stuff into easier-to-test stuff using mocks, dependency injection, etc. Until then, test the easy stuff and get a good unit test suite built up... When you do refactor the hard-to-test stuff, you will have much higher confidence that it's not breaking anything else... And refactoring to make something more easily testable IS a good reason to refactor...

Dependencies in Acceptance Testing

A co-worker and I are having a debate. We are on a craptastic legacy project and are slowly adding acceptance tests. He believes that we should be doing the work in the GUI/WatiN and then using a low-level library to query the database directly to get an 'end to end' test, as he puts it.
We are using NHibernate, and I advocate using the GUI/WatiN and then the NHibernate objects to do the assertions in the acceptance testing. He dislikes the dependency on NHibernate in the test. My assertion was that we have (or should have) integration tests against the NHibernate objects to make sure they are working with the DB the way we intend, at which point there is no downside to using them in the acceptance tests to assert proper operation. I also think his low-level SQL dependence will make the tests fragile and duplicate business logic in a lot of cases.
Integration testing in our shop basically means a single component with a dependency, e.g. FileRepository/FileSystem or NHibernate domain object/Database. Acceptance testing means coming in through the GUI. Unit means all dependencies have been (or can be) mocked/stubbed out and you've got a pure test in memory, with only the method under test actually doing any real work. Let me know if my definitions are off.
Anyway any articles/docs/parchment with opinions on this subject you can point me at would be appreciated.
The only reason you'd ever automate tests is to make things easier to change. If you weren't changing them, you could get away with manual testing. Tying the tests to the database will make the database much harder to change.
Tying them to the NHibernate objects won't help very much either, I'm afraid!
The users of your system won't be using the database or NHibernate. How do they get the benefit (or provide the benefit to other stakeholders)? How will they be able to tell that it's working well? If you can capture that in the Acceptance Tests, you'll be able to change the underlying code and data while still maintaining the value of your application. If someone generates reports from the data, why not generate the same reports and check that their contents are what you expect? If the data is read by another system, can you get a copy of that system and see what it outputs to its users?
Anyway, that's my opinion - keep acceptance tests as close to the business value as possible - and here's a blog post I wrote which might help. You could also try the Behavior Driven Development group on Yahoo, who have a fair bit of experience amongst them.
Oh, and doing integration tests to check that your (N)Hibernate bindings are good is an excellent idea. Saved us on a couple of projects.

When not to Use Integration Tests

I am writing an application that uses 3rd-party libraries to instantiate and perform some operations on virtual machines.
At first I was writing integration tests for every piece of functionality of the application. But then I found that these tests were not really helping, since my environment had to be in a determined state, which made the tests more and more difficult to write. So I decided to write only unit and acceptance tests.
So, my question: is there a method or a clue for noticing when integration tests should not be used? (Or am I wrong, and they should be written in all cases?)
When you don't plan on actually hooking your application up to anything "real": no real containers, databases, resources or actual services. That's what an integration test is supposed to verify: that everything works properly together.
Integration tests are good for testing a full system that has well-defined inputs and outputs that are unlikely to change. If your expected inputs/outputs change often, then maintaining the tests may become a challenge, or, worse, you may choose against improving an interface because of the amount of work that would be required to update the integration tests.
The easy and short rule is: Test in integration test what breaks due to integration and test the rest in unit tests in isolation.
You can even come to hate integration tests. Writing a unit test for a function that takes only one integer parameter is hard enough. All the possible combinations of state (internal and external (time, external systems)) and input can make integration testing practically impossible (for a decent-sized application).