How does one write integration tests in JUnit which use jOOQ as the data access layer, and which roll back after test completion?
It seems that jOOQ provides only very limited transaction management. It offers the method
DSLContext.transaction(TransactionalRunnable transactional)
This method will roll back the transaction if an exception is thrown by the passed lambda, but it is not obvious how to make this work with JUnit integration tests. What I really want is an equivalent of the @Transactional annotation, without using Spring.
@Transactional
class UserApiTest
{
    UserApi api;

    @Test
    public void testUpdateUserLastName() {
        User user = getUserByUsername();
        user.setLastName("NewLastName");
        api.updateUser(user);
        assertEquals("NewLastName", user.getLastName());
        // database should be unchanged after test completion because of automatic rollback
    }
}
class UserApiImpl implements UserApi
{
    private final DSLContext db;

    @Override
    public void updateUser(User user) {
        UserRecord userRecord = db.newRecord(USER, user);
        db.executeUpdate(userRecord);
    }
}
Any suggestions would be appreciated.
A better approach than rolling back
First off, I would recommend you do not roll back your transactions in such integration tests, but instead, reset the database to a known state after your test. There are so many ways a rollback can fail, such as:
Calling stored procedures that have autonomous transactions that commit anyway
Integration testing some service that commits the transaction anyway
Rollbacks causing trouble because of the transaction model of the database
If you reset the database to a well known state, you will avoid all of the above problems, which are likely going to cost you much more in test maintenance.
Use something like https://www.testcontainers.org instead. Here's an example showing how to set it up: https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-testcontainers-example
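For illustration, here's a minimal sketch of that setup, assuming JUnit 5, the PostgreSQL Testcontainers module, and a jOOQ-generated USER table (all names here are placeholders for your own schema and seed data):

import static com.example.generated.Tables.USER; // hypothetical generated table

import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.testcontainers.containers.PostgreSQLContainer;

class UserApiIT {

    static final PostgreSQLContainer<?> POSTGRES = new PostgreSQLContainer<>("postgres:15");

    static DSLContext db;

    @BeforeAll
    static void startDatabase() throws Exception {
        POSTGRES.start();
        Connection connection = DriverManager.getConnection(
            POSTGRES.getJdbcUrl(), POSTGRES.getUsername(), POSTGRES.getPassword());
        db = DSL.using(connection, SQLDialect.POSTGRES);
        // Apply the schema and seed data here, e.g. with Flyway or plain SQL scripts.
    }

    @AfterEach
    void resetToKnownState() {
        // Instead of rolling back, put the database back into its known state after each test.
        db.truncate(USER).restartIdentity().cascade().execute();
        // Re-insert the seed data here.
    }
}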
Rolling back at the end of a test
You shouldn't use jOOQ for this. You could use Spring's JUnit support and annotate all your tests with @Transactional. Each test then runs inside its own transaction, which Spring rolls back automatically when the test completes, whether it passes or fails.
This works in simple scenarios, but again, I'm sure you'll run into one of the aforementioned issues.
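If you do go the Spring route, a minimal sketch could look like the following. It assumes JUnit 5 with spring-test on the classpath, plus a hypothetical TestConfig class that exposes the DSLContext, the UserApi, and a transaction manager; note that jOOQ must participate in Spring-managed transactions (e.g. via a TransactionAwareDataSourceProxy) for the rollback to cover its statements:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;
import org.springframework.transaction.annotation.Transactional;

@SpringJUnitConfig(TestConfig.class)
@Transactional // spring-test rolls the test transaction back by default
class UserApiTest {

    @Autowired
    UserApi api;

    @Test
    void testUpdateUserLastName() {
        User user = api.getUserByUsername("alice"); // hypothetical lookup method
        user.setLastName("NewLastName");
        api.updateUser(user);

        assertEquals("NewLastName", api.getUserByUsername("alice").getLastName());
        // The surrounding transaction is rolled back after the test method,
        // so the database is left unchanged.
    }
}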
Related
Lately I have been thinking about the proper use of loggers in our applications.
For example, I have a controller which returns a stream of users, but in the log I can see that "Fetch users" is logged by a different thread than the one running the processing pipeline. Is that a good approach?
@Slf4j
class AwesomeController {

    @GetMapping(path = "/users")
    public Flux<User> getUsers() {
        log.info("Fetch users..");
        return Flux.just(...).subscribeOn(Schedulers.newParallel("my-custom"));
    }
}
In this case two threads are used, which from my perspective is not a good option, but I can't find good practices for loggers in reactive applications. I think the approach below is better, because the logging work (and its memory allocation) happens on the processing thread rather than on the Spring WebFlux thread, where a potentially blocking logger could be a problem.
@GetMapping(path = "/users")
public Flux<User> getUsers() {
    return Flux.defer(() -> {
        return Mono.fromCallable(() -> {
            log.info("Fetch users..");
            .....
        });
    }).subscribeOn(Schedulers.newParallel("my-custom"));
}
The normal thing to do would be to configure the logger as asynchronous (this usually has to be configured explicitly, but all modern logging frameworks support it) and then just include it "normally" (either as a separate line as you have there, or in a side-effect method such as doOnNext() if you want it halfway through the reactive chain).
If you want to be sure that the logger's call isn't blocking, then use BlockHound to make sure (this is never a bad idea anyway.) But in any case, I can't see a use case for your second example there - that makes the code rather difficult to follow with no real advantage.
One final thing to watch out for: remember that if you include the logging statement separately as you have above, rather than as part of the reactive chain, it will execute when the method is called rather than at subscription time. That may not matter in scenarios like this where the two happen almost simultaneously, but it would be rather confusing if (for example) you're returning a publisher which may be subscribed to multiple times - in that case, you'd only ever see the "Fetch users.." statement once, which isn't obvious when glancing through the code.
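For reference, here is a sketch of the in-chain placement mentioned above, using doOnSubscribe()/doOnNext() so the log statements fire per subscription rather than at call time (userService is a hypothetical reactive service):

@GetMapping(path = "/users")
public Flux<User> getUsers() {
    return userService.findAll()                                    // hypothetical reactive lookup
        .doOnSubscribe(subscription -> log.info("Fetch users.."))   // runs once per subscription
        .doOnNext(user -> log.debug("Emitting user {}", user))      // runs per emitted element
        .subscribeOn(Schedulers.newParallel("my-custom"));
}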
I am using NHibernate, and the NHibernate Profiler keeps throwing this alert:
"Use of implicit transactions is discouraged"
I actually wrap everything in a transaction through Ninject:
public class NhibernateModule : NinjectModule
{
    public override void Load()
    {
        Bind<ISessionFactory>().ToProvider<NhibernateSessionFactoryProvider>().InSingletonScope();

        Bind<ISession>().ToMethod(context => context.Kernel.Get<ISessionFactory>().OpenSession()).InRequestScope()
            .OnActivation(StartTransaction)
            .OnDeactivation(CommitTransaction);
    }

    public void CommitTransaction(ISession session)
    {
        if (session.Transaction.IsActive)
        {
            session.Transaction.Commit();
        }
    }

    public void StartTransaction(ISession session)
    {
        if (!session.Transaction.IsActive)
        {
            session.BeginTransaction();
        }
    }
}
So this should wrap everything in a transaction, and it seems to work with anything that is not lazy loading.
If it is lazy loading, though, I get the error. What am I doing wrong?
This is, in fact, still an implicit transaction, or relatively close to it. The injector is blissfully ignorant of everything that's happened between activation and deactivation and will happily try to commit all your changes even if the state is incorrect or corrupted.
What I see is that you're essentially trying to cheat and just have Ninject automatically start a transaction at the beginning of every request, and commit the transaction at the end of every request, hoping that it will stop NH from complaining. This is extremely bad design for several reasons:
You are forcing a transaction even if the session is not used at all (i.e. opening spurious connections).
There is no exception handling - if an operation fails or is rolled back, the cleanup code simply ignores that and tries to commit anyway.
This will wreak havoc if you ever try to use a TransactionScope, because the scope will be completed before the NH transaction is.
You lose all control over when the transactions actually happen, and give up your ability to (for example) have multiple transactions within a single request.
The NH Profiler is exactly right. This isn't appropriate use of NH transactions. In fact, if you're lazy loading, the transaction might end up being committed while you're still iterating the results - not a good situation to be in.
If you want a useful abstraction over the transactional logic and don't want to have to twiddle with ISession objects then use the Unit Of Work pattern - that's what it's designed for.
Otherwise, please code your transactions correctly, with a using clause around the operations that actually represent transactions. Yes, it's extra work, but you can't cheat your way out of it so easily.
I've previously written some Selenium tests using Ruby/RSpec, and found it quite powerful. Now, I'm using Selenium with PHPUnit, and there are a couple of things I'm missing; it might just be because of inexperience. In Ruby/RSpec, I'm used to being able to define a "global" setup for each test case, where I, among other things, open up the browser window and log into my site.
I feel that PHPUnit is a bit lacking here, in that 1) you only have setUp() and tearDown(), which are run before and after each individual test, and that 2) it seems that the actual browser session is set up between setUp() and the test, and closed before tearDown().
This makes for a bit more clutter in the tests themselves, because you explicitly have to open the page at the beginning and perform cleanups at the end. In every single test. It also seems like unnecessary overhead to close and reopen the browser for every single test, instead of just going back to the landing page.
Are there any alternative ways of achieving what I'm looking for?
What I have done in the past is to make a protected method that returns an object for the session like so:
protected function initBrowserSession() {
    if (!$this->browserSession) {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://www.example.com/');
        //Initialize Session
        $this->open('http://www.example.com/login.php');
        // Do whatever other setup you need here
    }
    $this->browserSession = true;
}

public function testSomePage() {
    $this->initBrowserSession();
    //Perform your test here
}
You can't really use the setUpBeforeClass()/tearDownAfterClass() methods, since they are static (and as such you won't have access to the instance).
Now, with that said, I would question your motivation for doing so. By having a test that re-uses a session between tests, you're introducing the possibility of side-effects between the tests. By re-opening a new session for each test, you're isolating the effects down to just that of the test. Who cares about the performance (to a reasonable extent at least) of re-opening the browser? Doing so actually increases the validity of the test since it's isolated. Then again, there could be something to be said for testing a prolonged session. But if that were the case, I would make that a separate test case/class from the individual functionality tests...
Although I agree with @ircmaxell that it might be best to reset the session between tests, I can see the case where tests would go from taking minutes to taking hours just because of restarting the browser.
Therefore, I did some digging, and found out that you can override the start() method in a base class. In my setup, I have the following:
<?php

require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

class SeleniumTestCase extends PHPUnit_Extensions_SeleniumTestCase
{
    public function setUp() {
        parent::setUp();

        // Set browser, URL, etc.
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://www.example.com');
    }

    public function start() {
        parent::start();

        // Perform any setup steps that depend on
        // the browser session being started, like logging in/out
    }
}
This will automatically affect any classes that extend SeleniumTestCase, so you don't have to worry about setting up the environment in every single test.
I haven't tested, but it seems likely that there is a stop() method called before tearDown() as well.
Hope this helps.
I'd like to ensure that when I'm persisting any data to the database, using Fluent NHibernate, the operations are executed inside a transaction. Is there any way of checking that a transaction is active via an interceptor? Or any other eventing mechanism?
More specifically, I'm using System.Transactions.TransactionScope for transaction management, and just want to stop myself from forgetting to use it.
If you had one place in your code that built your session, you could start the transaction there and fix the problem at a stroke.
I haven't tried this, but I think you could create a listener implementing IFlushEventListener. Something like:
public void OnFlush(FlushEvent @event)
{
    if (!@event.Session.Transaction.IsActive)
    {
        throw new Exception("Flushing session without an active transaction!");
    }
}
It's not clear to me (and Google didn't help) exactly when OnFlush is called. There also may be an implicit transaction that could set IsActive to true.
If you had been using Spring.Net for your transaction handling, you could use an anonymous inner object to ensure that your DAOs/ServiceLayer Objects are always exposed with a TransactionAdvice around their service methods.
See the spring documentation for an example.
To what end? NHProf will give you warnings if you're not executing inside a transaction. Generally you should be developing with this tool open anyway...
In integration tests, asynchronous processes (methods, external services) make for very tough test code. If instead I factored out the async part, created a dependency, and replaced it with a synchronous one for the sake of testing, would that be a "good thing"?
By replacing the async process with a synchronous one, am I not testing in the spirit of integration testing? I guess I'm assuming that integration testing refers to testing close to the real thing.
Nice question.
In a unit test this approach would make sense, but for integration testing you should be testing the real system as it will behave in real life. This includes any asynchronous operations and any side-effects they may have - this is the most likely place for bugs to exist and is probably where you should concentrate your testing, not factor it out.
I often use a "waitFor" approach where I poll to see if an answer has been received and time out after a while if not. A good implementation of this pattern is the JUnitConditionRunner; it is Java-specific, but you can get the gist. For example:
conditionRunner = new JUnitConditionRunner(browser, WAIT_FOR_INTERVAL, WAIT_FOR_TIMEOUT);

protected void waitForText(String text) {
    try {
        conditionRunner.waitFor(new Text(text));
    } catch(Throwable t) {
        throw new AssertionFailedError("Expecting text " + text + " failed to become true. Complete text [" + browser.getBodyText() + "]");
    }
}
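If you don't have that class available, the same polling idea can be hand-rolled in a few lines; here is a sketch (the timeout and poll interval are arbitrary values):

// Polls a condition until it becomes true or the timeout expires.
protected void waitFor(java.util.function.Supplier<Boolean> condition, long timeoutMillis)
        throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
        if (condition.get()) {
            return;
        }
        Thread.sleep(100); // poll interval
    }
    throw new AssertionError("Condition not met within " + timeoutMillis + " ms");
}

// Usage, mirroring the waitForText() helper above:
waitFor(() -> browser.getBodyText().contains(text), WAIT_FOR_TIMEOUT);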
We have a number of automated unit tests that send off asynchronous requests and need to test the output/results. The way we handle it is to actually perform all of the testing as if it were part of the actual application; in other words, asynchronous requests remain asynchronous. But the test harness acts synchronously: it sends off the asynchronous request, sleeps for [up to] a period of time (the maximum in which we would expect a result to be produced), and if still no result is available, then the test has failed. There are callbacks, so in almost all cases the test is awakened and continues running before the timeout has expired, but the timeouts mean that a failure (or change in expected performance) will not stall/halt the entire test suite.
This has a few advantages:
The unit test is very close to the actual calling patterns of the application
No new code/stubs are needed to make the application code (the code being tested) run synchronously
Performance is tested implicitly: If the test slept for too short a period, then some performance characteristic has changed, and that needs looking into
The last point may need a small amount of explanation. Performance testing is important, and it is often left out of test plans. The way these unit tests are run, they end up taking a lot longer (running time) than if we had rearranged the code to do everything synchronously. However this way, performance is tested implicitly, and the tests are more faithful to their usage in the application. Plus all of our message queueing infrastructure gets tested "for free" along the way.
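A sketch of that harness pattern using a CountDownLatch, where service, request, Result, and expected are all hypothetical placeholders:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

@Test
public void asyncRequestCompletesInTime() throws Exception {
    CountDownLatch done = new CountDownLatch(1);
    AtomicReference<Result> result = new AtomicReference<>();

    // The callback wakes the test up as soon as the response arrives.
    service.sendRequest(request, response -> {
        result.set(response);
        done.countDown();
    });

    // Upper bound on acceptable latency; a failure here also signals a performance regression.
    assertTrue("No response within 5 seconds", done.await(5, TimeUnit.SECONDS));
    assertEquals(expected, result.get());
}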
What are you testing? The behaviour of your class in response to certain stimuli? In which case don't suitable mocks do the job?
class Orchestrator implements AsynchCallback {

    TheAsynchService myDelegate; // initialised by injection

    public void doSomething(Request aRequest) {
        myDelegate.doTheWork(aRequest, this);
    }

    public void tellMeTheResult(Response aResponse) {
        // process response
    }
}
Your test can do something like
Orchestrator orch = new Orchestrator(mockAsynchService);
orch.doSomething(request);
// assertions here that the mockAsychService received the expected request
// now either the mock really does call back
// or (probably more easily) make explicit call to the tellMeTheResult() method
// assertions here that the Orchestrator did the right thing with the response
Note that there's no true asynch processing here, and the mock itself need have no logic other than to allow verification of the receipt of the correct request. For a Unit test of the Orchestrator this is sufficient.
I used this variation on the idea when testing BPEL processes in WebSphere Process Server.
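For completeness, here is a sketch of how that test could look with Mockito, keeping the same hypothetical Orchestrator / TheAsynchService names (request and cannedResponse are placeholders):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class OrchestratorTest {

    @Test
    public void forwardsRequestAndHandlesResponse() {
        TheAsynchService mockService = mock(TheAsynchService.class);
        Orchestrator orch = new Orchestrator(mockService);

        orch.doSomething(request);

        // Verify the orchestrator forwarded the request and registered itself as the callback.
        verify(mockService).doTheWork(request, orch);

        // Drive the callback explicitly instead of waiting for real asynchronous work.
        orch.tellMeTheResult(cannedResponse);

        // ...assertions here that the Orchestrator did the right thing with the response
    }
}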