Quarkus tests with multiple databases - JUnit 5

I need to run my Quarkus tests against multiple different databases. Right now I write an abstract class containing the tests and, for each database, a class that inherits from it with
@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@TestProfile(OracleProfile.class)
where OracleProfile is my test profile with the QuarkusTestResourceLifecycleManager implementation that starts Oracle in Testcontainers and sets up the environment. The same exists for PostgreSQL and MySQL.
This works, but it leads to an inflation of test classes containing nothing but annotations. Is there a better way to run tests against multiple profiles?
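For reference, a minimal sketch of the pattern described above (OracleTestResource stands in for the QuarkusTestResourceLifecycleManager implementation; names are illustrative):

    import io.quarkus.test.junit.QuarkusTest;
    import io.quarkus.test.junit.QuarkusTestProfile;
    import io.quarkus.test.junit.TestProfile;
    import org.junit.jupiter.api.MethodOrderer;
    import org.junit.jupiter.api.Order;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.TestMethodOrder;
    import java.util.List;

    // Shared test logic lives in the abstract base class.
    public abstract class AbstractDatabaseTest {

        @Test
        @Order(1)
        void writesAndReadsBack() {
            // assertions that run identically against every database
        }
    }

    // One near-empty subclass per database, carrying only annotations.
    @QuarkusTest
    @TestMethodOrder(MethodOrderer.OrderAnnotation.class)
    @TestProfile(OracleProfile.class)
    class OracleDatabaseTest extends AbstractDatabaseTest {
    }

    // The profile registers the Testcontainers-backed lifecycle manager.
    public class OracleProfile implements QuarkusTestProfile {
        @Override
        public List<TestResourceEntry> testResources() {
            return List.of(new TestResourceEntry(OracleTestResource.class));
        }
    }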

Related

If I write tests for database calls, am I writing unit tests or integration tests?

I am starting to write tests for database calls and queries. But I was wondering: since they don't depend on any other function, are tests for database calls unit tests?
Edit: This is in a Node.js environment.
The database is separate from the application that you are testing, and as such, the tests would be integration tests rather than unit tests.
Note that unit tests are limited to dealing with single pieces of software in isolation. If you explicitly want to unit test your code as it stands (without making actual database calls), you can use a mocking framework such as Moq.

Integration and Unit tests

In the last few days I have been studying testing with Jest, but I don't understand the following.
When I write integration tests, should I not use mocks? Are mocks used only in unit tests?
UPDATE
Today, in my company, the approach we follow is: unit tests always mock all external data access, and integration tests should not mock.
Interestingly, we associate integration tests with an HLG (staging) environment, because that makes it easy to discover what broke the software, and where.
Mocks are a kind of Test Double - a test-specific replacement of a dependency, for purposes of making automated tests deterministic.
There's no universally accepted formal definition of what constitutes a unit test, but in this context, I find the following definition (essentially my own wording) useful:
A unit test is an automated test that tests a unit in isolation of its dependencies.
This definition, however, conveniently avoids defining what a unit is, but that's less important in this context.
Likewise, we can define an integration test as an automated test that exercises the System Under Test (SUT) with its real dependencies. Thus, instead of replacing the database dependency with a Test Double, the test exercises the SUT integrated with a real database, and so on.
Thus, with this view of integration testing, no Test Doubles are required because all real dependencies are integrated.
There's another view of integration testing that considers integration testing the exercise of various software components (units, if you will) with each other, while still replacing out-of-process resources like databases or web services with Test Doubles. This is often easier to accomplish, and can be a valuable technique, but whether you decide to call these unit tests or integration tests is largely a question of personal preference.
Unfortunately, there's no universally accepted consistent definition of these terms. I usually try to stick with the vocabulary documented in xUnit Test Patterns, which is the most comprehensive and internally consistent body of work on the topic (that I know of).
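To make the distinction concrete, here is a minimal, hand-rolled sketch in Java (the UserRepository interface and all names are invented for illustration): a unit test hands the SUT a Test Double, while an integration test hands it the real, database-backed implementation.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // The dependency the System Under Test talks to, expressed as an interface.
    interface UserRepository {
        Optional<String> findNameByEmail(String email);
    }

    // A Test Double (here, a fake): deterministic and in-memory, no database.
    class InMemoryUserRepository implements UserRepository {
        private final Map<String, String> users = new HashMap<>();
        void add(String email, String name) { users.put(email, name); }
        @Override
        public Optional<String> findNameByEmail(String email) {
            return Optional.ofNullable(users.get(email));
        }
    }

    // A unit test constructs the SUT with the fake; an integration test
    // constructs it with the real, database-backed implementation instead.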
By the ISTQB definition, integration testing is "a test level that focuses on interactions between components or systems."
So you can have integration tests between units, between different components, or between subsystems. You may also integrate systems of systems.
You can read about unit testing on Wikipedia.
So you can use a unit test framework (with mocks/stubs) to do integration testing as well, but integration testing of a whole application usually requires a full environment setup, which a unit test framework cannot provide.
Here are my 2 cents:
Unit tests - always use mocks. The "unit" under test is a method.
Integration tests - never use mocks. The "unit" under test is a class.
End-to-end tests - use the actual program. The "unit" under test is a single "happy path".

GemFire JUnit tests for a multi-module project are taking too long

I have a project which is composed of several modules; each module holds its own region and repository classes.
The problem is that each module has its own GemFire gfe:cache in its own Spring context.
So when I run mvn test, every module starts its own GemFire instance and closes it after its tests, which makes my test run take almost 10 minutes; every instance of GemFire takes 40 s to start.
So I would like to know: what is the best way to avoid that?
I was thinking about having the parent module (which holds all repositories and regions) create the regions, and then using lookups in the child modules to use them. But I also need the individual modules to run by themselves, in case I want to run only one module's tests.
Is there a way to use a lookup and, if it fails, create a cache with that same region? Or to have the cache created once (by the first test) and have the regions added to it while the other contexts are started, instead of being closed?
Thanks
In certain cases, you cannot avoid it, especially if you dirty the Spring context or your GemFire instance in such a way that it would lead to conflicts when running subsequent tests in your test suite.
I.e. it may be more work to try to manage isolation/separation in each new test class and/or test case you write, having to keep track of which test touches what (e.g. which Region), than perhaps just restarting the GemFire instance per test class (or, in the worst case, per test case). E.g., think of an OQL query pulling in unexpected results due to a previous test's actions, or something of that nature.
The Spring Data GemFire test suite is very similar in that it starts up GemFire instances and stops them, either per test case, or in most cases, per test class. The entire build with tests (900+ tests) runs on avg in ~15 minutes. GemFire's own test suite (unit + integration/distributed tests, etc) runs in about 8-12 hours depending on how effectively you can "parallelize" the tests, o.O
I am a firm believer in the 10 minute build, but GemFire is a complex beast and writing tests, especially distributed tests, effectively takes careful planning.
Most of the tests in Spring Data GemFire are peer cache applications (i.e. the test JVM embeds the GemFire instance; see, for example, RegionDataPolicyShortcutsIntegrationTest and its associated Spring context configuration file).
I have found that if you set a few GemFire properties (such as setting log-level to "warning" and, in particular, mcast-port to 0), you can reduce the runtime substantially. Setting mcast-port to 0 sets up a "loner" GemFire node and substantially improves startup time.
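For illustration, a sketch of those two settings applied programmatically to an embedded (peer) cache, assuming the classic com.gemstone API (the same values can equally go in gemfire.properties or the Spring XML configuration):

    import com.gemstone.gemfire.cache.Cache;
    import com.gemstone.gemfire.cache.CacheFactory;

    public class LonerCacheExample {
        public static void main(String[] args) {
            // mcast-port=0 disables multicast discovery, creating a standalone
            // "loner" node; log-level=warning trims the logging overhead.
            Cache cache = new CacheFactory()
                .set("mcast-port", "0")
                .set("log-level", "warning")
                .create();
            cache.close();
        }
    }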
There are other Spring Data GemFire tests, the "ClientCache" tests, that even spawn a separate JVM with a GemFire server process to test the client/server interaction. You can imagine those taking even longer to start/stop, and in fact they do. For example, see the ClientCacheFunctionExecutionWithPdxIntegrationTest and the associated Spring context configuration files (server and client). Note, the test is the GemFire ClientCache VM in this case.
Now, you might be able to get away with using mocks in certain tests. Many of the Spring Data GemFire tests mock the interactions between SDG and GemFire using the mocks provided in the org.springframework.data.gemfire.test package.
The test classes in SDG that use mocks declare a special kind of Spring ApplicationContextInitializer, as in the GemfireTemplateTest, which uses the SDG GemfireTestApplicationContextInitializer. The real magic is in the GemfireTestBeanPostProcessor, though. If you trace the code through, you will get the idea.
In time, I hope to formalize these mocks a bit and create a separate project with these mocks, the GemfireTestApplicationContextInitializer, and the associated GemfireTestBeanPostProcessor, for developers' testing purposes involving Spring and GemFire (and also Apache Geode).
Hope this gives you a few ideas to ease the pain of the test setup and runtime.
Cheers!

What kind of test is that - checking interaction of two classes

There is a class A doing some calculations and a class B which calls methods of class A.
Unit tests were fine for both classes, but when I used the classes together, I discovered that it did not work. The issue was that the parameter types were incorrect. As this was part of a school assignment, I was supposed to say what kind of test it is. I think it is an integration test; is that correct?
I think so because integration means integrating multiple modules into one system, and I am integrating two classes here.
Strictly speaking, a unit test would be one module (class) tested in isolation, with any external dependencies stubbed out.
But in reality, so-called unit tests will often break this rule. For example in Rails, unit tests often hit the database (but this can be avoided).
In the situation you describe here, 'integration' is probably the best term to use.
Note that the meaning of these terms can vary a lot depending on the context. I would call what Nathan Hughes describes as a system integration test, to distinguish it from more granular tests.
Typically "integration test" means a test that touches some other system, like a database or file system or a web service or whatever. Since in your case it's two classes in the same program, I would categorize it as a unit test.
There's an expectation that a unit test have a small scope, but there isn't any hard and fast rule that it be limited to one method or one class.
Unit tests check that the code within the program is internally consistent, which is what needs doing here.

Am I unit testing or integration testing?

I am starting out with automated testing and I would like to test one of my data access methods. I am trying to test what the code does if the database returns no records.
Is this something that should be done in a unit test or an integration test?
Thanks
If your test code connects to an actual database and relies on the presence of certain data (or lack of data) in order for the test to pass, it's an integration test.
I usually prefer to test something like this by mocking out the component that the "data access method" uses to get the actual data, whether that's a JDBC connection or web service proxy or whatever else. With a mock, you say "when this method is called, return this" or "make sure that this method is called N times", and then you tell the class under test to use the mock component rather than the real component. This then is a "unit test", because you are testing how the class under test behaves, in a closed system where you've declared exactly how the other components will behave. You've isolated the class under test completely and can be sure that your test results won't be volatile and dependent on the state of another component.
Not sure what language/technology you are working with, but in the Java world, you can use JMock, EasyMock, etc for this purpose.
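For example, here is a sketch of both moves with Mockito (JMock and EasyMock support the same ideas; OrderDao and OrderService are invented names for illustration):

    import static org.mockito.Mockito.*;
    import java.util.Collections;
    import org.junit.jupiter.api.Test;

    public class OrderServiceTest {

        @Test
        void summarizeQueriesTheDaoExactlyOnce() {
            // "When this method is called, return this":
            OrderDao dao = mock(OrderDao.class);
            when(dao.findOrders("customer-42")).thenReturn(Collections.emptyList());

            // Exercise the class under test with the mock standing in for the DAO.
            OrderService service = new OrderService(dao);
            service.summarize("customer-42");

            // "Make sure that this method is called N times":
            verify(dao, times(1)).findOrders("customer-42");
        }
    }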
I think more time has been wasted arguing about what is a unit vs. what is an integration test than value has been added.
I don't care.
Let me put it a different way: If I were testing it, I'd see two ways to do it - fake out the database returning zero rows, or actually connect to a database that has no data for the select. I'd probably start testing with whatever was easiest to do and simplest to implement - if it ran fast enough for me to get meaningful feedback. Then I'd consider the other if I needed it to run faster or thought there would be some advantage.
For example, I'd probably start by connecting to the actual test DB at my work. But if the software needed to work with many different databases - Oracle, Postgres, MySQL, SQL Server and DB2 - or if the test DB at work was down for 'refreshes' a lot, I'd probably write the 'pure/unit' test that existed totally in isolation.
In my old age, I prefer to use the term 'developer-facing' vs. 'customer facing' more often, and do the kind of testing that makes more sense. I find using terms like "unit" extensively, then getting a definition-weenie about it leads to people doing things like mocking out the filesystem or mocking getters and setters - activity that I find unhelpful.
I believe this strongly; I've presented at Google on it.
http://www.youtube.com/watch?v=PHtEkkKXSiY
good luck! Let us know how it goes!
Do your test and let other people spend time with taxonomy.
My perspective is that you should categorize the test based on scope:
A unit test can be run standalone, without any external dependencies (file IO, network IO, database, external web services).
An integration test can touch external systems.
If the test requires a real database to run, then call it an integration test and keep it separate from the unit tests. This is important because if you mix integration and unit tests, you make your code less maintainable.
A mixed bag of tests means that new developers may need a whole heap of external dependencies in order to run the test suite. Imagine that you want to make a change to a piece of code that is related to the database but doesn't actually require the database to function; you're going to be frustrated if you need a database just to run the tests associated with the project.
If the external dependency is difficult to mock out (for example, in .NET, if you are using Rhino Mocks and the external classes don't have interfaces), then create a thin wrapper class that touches the external system, and mock out that wrapper in the unit tests. You shouldn't need a database to run this simple test, so don't require one! A sketch of this wrapper pattern follows.
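For example, in Java (the names are invented for illustration; the same idea applies in .NET): the wrapper is the only code that touches the external system, and the unit tests substitute a stub for it.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Thin wrapper around the external system (here, the file system).
    interface FileStore {
        String read(String path);
    }

    // The real implementation -- deliberately trivial, so it needs no unit test of its own.
    class DiskFileStore implements FileStore {
        @Override
        public String read(String path) {
            try {
                return Files.readString(Path.of(path));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    // In a unit test, substitute a stub so no disk access is needed:
    //   FileStore stub = path -> "canned file contents";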
There are those (myself included) who have strict rules about what constitutes a unit test vs an integration test.
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can’t run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it
This may be one way to draw the distinction: a unit test uses mocking, for example, rather than any of the real resource providers (filesystem, DB, etc.).
An integration test can be viewed as a test of the coupling between systems/application layers, so the fundamentals are tested in the unit tests and system interoperability is the focus of an integration test.
It's still a grey area, though, because one can often pinpoint certain exceptions to these sorts of rules.
I think the important question is "What SHOULD I be doing?"
In this case I think you should be unit testing. Mock the code that talks to the DB and have it return a reliable result (no rows); this way your test checks what happens when there are no rows, and not what happens when the DB returns whatever happens to be in it at the time you run the test.
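Concretely, the "no rows" case from the question might look like this (a Mockito sketch; CustomerDao, CustomerReport, and the expected message are invented, since the original code isn't shown):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;
    import java.util.Collections;
    import org.junit.jupiter.api.Test;

    public class CustomerReportTest {

        @Test
        void rendersPlaceholderWhenDatabaseReturnsNoRecords() {
            // Stub the data access method so it reliably returns no rows.
            CustomerDao dao = mock(CustomerDao.class);
            when(dao.findAll()).thenReturn(Collections.emptyList());

            // Assert what the code under test does with an empty result.
            CustomerReport report = new CustomerReport(dao);
            assertEquals("No customers found", report.render());
        }
    }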
Definitely unit test it!
[TestMethod]
public void ForgotMyPassword_SendsAnEmail_WhenValidUserIsPassed()
{
    // Stub the controller's dependencies (Rhino Mocks).
    var userRepository = MockRepository.GenerateStub<IUserRepository>();
    var notificationSender = MockRepository.GenerateStub<INotificationSender>();
    userRepository.Stub(x => x.GetUserByEmailAddressAndPassword("me@home.com", "secret"))
        .Return(new User { Id = 5, Name = "Peter Morris" });

    new LoginController(userRepository, notificationSender).ResendPassword("me@home.com", "secret");

    // Verify that a notification starting with "Changed" was sent.
    notificationSender.AssertWasCalled(x => x.Send(null),
        options => options.Constraints(Text.StartsWith("Changed")));
}
I believe that it is possible to test that as a unit test, without a real database. Instead of using a real interface to the database, replace it with a mock/stub/fake object.
If writing it as a unit test proves to be too hard, and you are not able to refactor the code so that testing it would be easy, then you had better write it as an integration test. It will run slower, so you might not be able to run all the integration tests after every code change (unlike unit tests, of which you can run hundreds or thousands per second), but as long as they are run regularly (for example, as part of continuous integration), they produce some value.
Most likely a unit test ... but there is a blurred line here. It really depends upon how much code is being executed - if it is contained within a library or class, it's a unit test; if it spans multiple components, it's more of an integration test.
I believe that should be done in a unit test. You aren't testing that it can connect to the database, or that you can call your stored procedures... you are testing the behavior of your code.
I could be wrong, but that's what I think unless someone gives me a reason to think otherwise.
That is a unit test, by definition: you are testing a single, isolated element of the code on a specific path.