Integration testing when the data layer has multiple implementations - kotlin

I have a Kotlin/Vertx API as follows, into which a graph-DB-based data repository is injected:
class MyServiceAPI : AbstractVerticle() {
    // dependency injection using Kodein
    val graphRepository: GraphRepository = kodein.instance()

    fun addFriend(friend: User) {
        graphRepository.addFriend(friend)
    }
}
GraphRepository has several implementations like OrientDBGraphRepository, JanusGraphRepository etc.
I want to add integration tests using in-memory DB instances. I have the following questions:
Do I need to write separate integration tests for each DB implementation, or should I just pass the DB configuration params (hostname, user, password) to switch the implementation accordingly?
Since I have coded against the repo interface, do I need to instantiate each implementation for each case when testing them?
Apologies if this is opinion-based; any insight would be appreciated.
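One way to avoid duplicating the test logic itself is to keep a single, implementation-agnostic test suite and add one thin subclass per repository implementation that only knows how to build its in-memory instance. A minimal sketch of that shape in Java/JUnit 4 terms (the same structure carries over to Kotlin; GraphRepository, User and OrientDBGraphRepository come from the question, while newInMemoryRepository(), the constructor arguments and the test itself are illustrative assumptions):

// GraphRepositoryContractTest.java - shared, implementation-agnostic tests
import org.junit.Before;
import org.junit.Test;

public abstract class GraphRepositoryContractTest {

    protected GraphRepository repository;

    // Each implementation-specific subclass decides how to build its in-memory instance.
    protected abstract GraphRepository newInMemoryRepository();

    @Before
    public void setUp() {
        repository = newInMemoryRepository();
    }

    @Test
    public void addFriendPersistsTheFriend() {
        User friend = new User("alice"); // assumed constructor
        repository.addFriend(friend);
        // assert through whatever query method the repository interface exposes
    }
}

// OrientDBGraphRepositoryIT.java - one small subclass per implementation;
// the shared tests above run against each of them.
public class OrientDBGraphRepositoryIT extends GraphRepositoryContractTest {

    @Override
    protected GraphRepository newInMemoryRepository() {
        // hypothetical constructor for an in-memory OrientDB-backed repository
        return new OrientDBGraphRepository("memory:test", "admin", "admin");
    }
}

With this layout, adding a JanusGraph variant is just another small subclass, and the test bodies are written once against the interface.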

Arquillian persistence extension - @UsingDataset seed once for all tests

I have a JPA-based application that only reads from the database, so it makes sense to seed the database only once before the tests start. Is it possible to do this using the Arquillian persistence extension? I believe it is currently trying to reseed/clean for every test.
I have tried the following:
@RunWith(Arquillian.class)
@UsingDataset("mydataset.xml")
@Cleanup(phase = TestExecutionPhase.NONE)
public class MyArquillianTest {
    // deployment method and tests
}
I've also set the defaultDataSeedStrategy to REFRESH in the arquillian.xml.

What is the purpose of creating an Application and a Loader in Lagom?

I am reading the following tutorial on Lagom.
I understand DI, but the section also talks about an Application and a Loader, and I am unable to understand the purpose of creating these classes. So far, I have been able to run basic services (e.g., the hello world service from Getting Started) without creating an Application and a Loader class.
Let us consider a sample ApplicationLoader (this is not the only way to do it, but an example for the sake of the question):
abstract class FriendModule(context: LagomApplicationContext)
  extends LagomApplication(context)
    with AhcWSComponents
    with CassandraPersistenceComponents {

  persistentEntityRegistry.register(wire[FriendEntity])

  override def jsonSerializerRegistry = FriendSerializerRegistry

  override lazy val lagomServer: LagomServer = serverFor[FriendService](wire[FriendServiceImpl])
}
class FriendApplicationLoader extends LagomApplicationLoader {

  override def load(context: LagomApplicationContext): LagomApplication =
    new FriendModule(context) with ConductRApplicationComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new FriendModule(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[FriendService])
}
Firstly, the reason we create a class FriendModule that extends LagomApplication is to mix in all our dependencies. For example:
If the application relies on Cassandra and the persistence API, we mix in CassandraPersistenceComponents; if the application needs to make HTTP calls, we provide it with a WSClient by mixing in AhcWSComponents, and so on.
We, of course, wire in the compile-time dependencies.
By doing the following, we bind the implementation to the declared service:
override lazy val lagomServer: LagomServer = serverFor[FriendService](wire[FriendServiceImpl])
But notice that we still haven't coupled our microservice to a Service Locator.
The role of a service locator is to provide the ability to discover application services and communicate with them. For example, if an application has five different microservices running, then each one needs to know the address of every other one for communication to be possible.
The Service Locator takes on this responsibility of keeping track of the addresses of the microservices concerned. In its absence, we would need to configure the URL of each microservice and make it available to every other microservice (maybe via a properties file?).
So in the class FriendApplicationLoader we bind our implementation with LagomDevModeComponents in the dev case. LagomDevModeComponents registers our service with the service registry. This is how, magically, Lagom microservices can communicate with each other in such a simple manner.

Spring Data GemFire - inserting fake data in Dev env

I am developing an app using GemFire, and it would be great to be able to provide some fake data while in the Dev environment.
So instead of doing it in code like I do today, I was thinking about using the Spring application-context.xml to pre-load some dummy data into the region I am currently working on. Something close to what DBUnit does, but for the DEV scope rather than the Test scope.
Later I could just switch environments in Spring and that data would not be loaded.
Is it possible to add data to a local data grid using Spring Data GemFire?
Thanks!
There is no direct support in Spring Data GemFire for loading data into a GemFire cluster. However, there are several options available to an SDG/GemFire developer for loading data.
The most common approach is to define a GemFire CacheLoader attached to the Region. However, this approach is "lazy" and only loads data from a (potentially) external data source on a cache miss. Of course, you could program the logic in the CacheLoader to "prefetch" a number of entries in a somewhat "predictive" manner based on data access patterns. See GemFire's User Guide for more details.
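To make the shape of that first approach concrete, here is a minimal sketch of such a CacheLoader (the class name and the Map standing in for the external data source are illustrative; the imports assume the classic com.gemstone.gemfire packages of that era):

package example;

import java.util.Map;

import com.gemstone.gemfire.cache.CacheLoader;
import com.gemstone.gemfire.cache.CacheLoaderException;
import com.gemstone.gemfire.cache.LoaderHelper;

// Invoked by GemFire on a cache miss for the Region this loader is attached to.
// Here the "external data source" is just a Map; in practice it could be a DAO, a JDBC call, etc.
public class FakeDataCacheLoader implements CacheLoader<String, Object> {

    private final Map<String, Object> externalSource;

    public FakeDataCacheLoader(Map<String, Object> externalSource) {
        this.externalSource = externalSource;
    }

    @Override
    public Object load(LoaderHelper<String, Object> helper) throws CacheLoaderException {
        // helper.getKey() is the key that missed; the value returned here is put into the Region
        return externalSource.get(helper.getKey());
    }

    @Override
    public void close() {
        // release any resources held by the loader (nothing to release in this sketch)
    }
}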
Still, we can do better than this since it is more likely that you want to "preload" a particular data set for development purposes.
Another, more effective, technique is to use a Spring BeanPostProcessor registered in your Spring ApplicationContext that post-processes your "Region" bean after initialization. For instance...
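A bean definition along these lines would do it (sketched here; the Region bean id and the sample entries are placeholders, and the property names follow the implementation shown next):

<bean class="example.RegionPutAllBeanPostProcessor">
  <property name="targetRegionBeanName" value="Example"/>
  <property name="regionData">
    <map>
      <entry key="dummyKey1" value="dummyValue1"/>
      <entry key="dummyKey2" value="dummyValue2"/>
    </map>
  </property>
</bean>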
Where the RegionPutAllBeanPostProcessor is implemented as...
package example;

import java.util.Collections;
import java.util.Map;

import com.gemstone.gemfire.cache.Region;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;

public class RegionPutAllBeanPostProcessor implements BeanPostProcessor {

    private Map<Object, Object> regionData;

    private String targetRegionBeanName;

    protected Map<Object, Object> getRegionData() {
        return (regionData != null ? regionData : Collections.<Object, Object>emptyMap());
    }

    public void setRegionData(final Map<Object, Object> regionData) {
        this.regionData = regionData;
    }

    protected String getTargetRegionBeanName() {
        Assert.state(StringUtils.hasText(targetRegionBeanName), "The target Region bean name was not properly specified!");
        return targetRegionBeanName;
    }

    public void setTargetRegionBeanName(final String targetRegionBeanName) {
        Assert.hasText(targetRegionBeanName, "The target Region bean name must be specified!");
        this.targetRegionBeanName = targetRegionBeanName;
    }

    @Override
    public Object postProcessBeforeInitialization(final Object bean, final String beanName) throws BeansException {
        return bean;
    }

    @Override
    @SuppressWarnings("unchecked")
    public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
        if (beanName.equals(getTargetRegionBeanName()) && bean instanceof Region) {
            // seed the target Region with the configured data once the Region bean is fully initialized
            ((Region<Object, Object>) bean).putAll(getRegionData());
        }
        return bean;
    }
}
It is not too difficult to imagine that you could inject a DataSource of some type to pre-populate the Region. The RegionPutAllBeanPostProcessor was designed to accept a specific Region (based on the Region bean's ID) to populate, so you could define multiple instances, each taking a different Region and (perhaps) a different DataSource to populate the Region(s) of choice. This BeanPostProcessor just takes a Map as the data source but, of course, it could be any Spring-managed bean.
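For instance, a hypothetical variant that pulls its seed entries from a JDBC DataSource through Spring's JdbcTemplate, rather than from a configured Map, might look something like this (the seed_data table, its columns and the class name are assumptions for illustration):

package example;

import com.gemstone.gemfire.cache.Region;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.jdbc.core.JdbcTemplate;

public class JdbcRegionSeedingBeanPostProcessor implements BeanPostProcessor {

    private final JdbcTemplate jdbcTemplate;
    private final String targetRegionBeanName;

    public JdbcRegionSeedingBeanPostProcessor(JdbcTemplate jdbcTemplate, String targetRegionBeanName) {
        this.jdbcTemplate = jdbcTemplate;
        this.targetRegionBeanName = targetRegionBeanName;
    }

    @Override
    public Object postProcessBeforeInitialization(final Object bean, final String beanName) throws BeansException {
        return bean;
    }

    @Override
    @SuppressWarnings("unchecked")
    public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
        if (targetRegionBeanName.equals(beanName) && bean instanceof Region) {
            final Region<Object, Object> region = (Region<Object, Object>) bean;
            // copy each row of the (hypothetical) seed_data table into the Region
            jdbcTemplate.query("SELECT entry_key, entry_value FROM seed_data", rs -> {
                region.put(rs.getString("entry_key"), rs.getString("entry_value"));
            });
        }
        return bean;
    }
}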
Finally, it is a simple matter to ensure that this (or multiple instances of the RegionPutAllBeanPostProcessor) is only used in your DEV environment by taking advantage of Spring bean profiles...
<beans>
  ...
  <beans profile="DEV">
    <bean class="example.RegionPutAllBeanPostProcessor">
      ...
    </bean>
    ...
  </beans>
</beans>
Usually, loading pre-defined data sets is very application-specific in terms of the "source" of the pre-defined data. As my example illustrates, the source could be as simple as another Map. However, it could be a JDBC DataSource, or perhaps a Properties file, or, well, anything for that matter. It is usually up to the developer's preference.
Though, one thing that might be useful to add to Spring Data GemFire would be the ability to load data from a GemFire Cache Region Snapshot, i.e. data that may have been dumped from a QA or UAT environment, or perhaps even scrubbed from PROD for testing purposes. See the GemFire Snapshot Service for more details.
Also see the JIRA ticket (SGF-408) I just filed to add this support.
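For reference, loading a previously saved snapshot file into a Region through GemFire's Snapshot Service API looks roughly like this (the region name and file path are placeholders; the package names assume the com.gemstone.gemfire releases of that era):

package example;

import java.io.File;

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.snapshot.RegionSnapshotService;
import com.gemstone.gemfire.cache.snapshot.SnapshotOptions.SnapshotFormat;

public class RegionSnapshotLoader {

    // Imports a .gfd snapshot file (e.g. one exported from QA/UAT) into the given Region.
    public static void loadSnapshot(Cache cache, String regionName, File snapshotFile) throws Exception {
        Region<Object, Object> region = cache.getRegion(regionName);
        RegionSnapshotService<Object, Object> snapshotService = region.getSnapshotService();
        snapshotService.load(snapshotFile, SnapshotFormat.GEMFIRE);
    }
}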
Hopefully this gives you enough information and/or ideas to get going. Later, I will add first-class support into SDG's XML namespace for preloading data sets.
Regards,
John

Testing a custom ASP.NET membership provider

I'm using a custom ASP.NET membership provider with underlying NHibernate data access code, which is fine. Now I need to exercise these methods using tests.
Is anyone interested in suggesting how these methods should be tested? Perhaps with some links explaining testing approaches that follow some standards?
This is my first question, so be gentle :)
When it comes to unit testing any code that does something with the database or a 3rd-party library, you should decouple these dependencies so that your tests only test your code.
For example, if we have a method in our membership provider for adding a single user, what we want to be testing is that our code for this single method works correctly and not that the database is up and running or that methods called by this method work. Our unit test should still pass even if the database is offline or if method calls on other classes fail.
This is where Mocking comes into play. You'll want to mock out your data context and set up any methods you'll be using on it so that you can control its response.
Look closely at the methods you have in your membership provider. What should each one do? That's the only thing you really want to test: does this method, as a standalone unit, do the job I'm expecting it to?
Membership providers are pretty difficult to mock and test, so personally I don't bother. What I do however is place all my membership code in classes that are easily testable.
Most of my custom providers look something like this:
public class CustomMembershipProvider : MembershipProvider
{
    private readonly IUserService _userService;

    public CustomMembershipProvider()
    {
        _userService = DI.Resolve<IUserService>();
    }

    public override bool ValidateUser(string username, string password)
    {
        return _userService.Authenticate(username, password);
    }
}
In this example, I would write integration tests to verify the behavior of the user service. I don't test the provider.

IQueryable Repository with StructureMap (IoC) - how do I implement IDisposable?

If I have the following Repository:
public IQueryable<User> Users()
{
    var db = new SqlDataContext();
    return db.Users;
}
I understand that the connection is opened only when the query is fired:
public class ServiceLayer
{
    public IRepository repo;

    public ServiceLayer(IRepository injectedRepo)
    {
        this.repo = injectedRepo;
    }

    public List<User> GetUsers()
    {
        return repo.Users().ToList(); // connection opened, query fired, connection closed. (or is it??)
    }
}
If this is the case, do I still need to make my Repository implement IDisposable?
The Visual Studio Code Metrics certainly think I should.
I'm using IQueryable because I give control of the queries to my service layer (filters, paging, etc.), so please no architectural discussions over the fact that I'm using it.
BTW - SqlDataContext is my custom class which extends Entity Framework's ObjectContext class (so I can have POCO parties).
So the question - do I really HAVE to implement IDisposable?
If so, I have no idea how this is possible, as each method shares the same repository instance.
EDIT
I'm using Dependency Injection (StructureMap) to inject the concrete repository into the service layer. This pattern is followed down the app stack - I'm using ASP.NET MVC, and the concrete service is injected into the Controllers.
In other words:
The user requests a URL.
A Controller instance is created, which receives a new ServiceLayer instance, which is created with a new Repository instance.
The Controller calls methods on the service (all calls use the same Repository instance).
Once the request is served, the controller is gone.
I am using Hybrid mode to inject dependencies into my controllers, which, according to the StructureMap documentation, causes the instances to be stored in HttpContext.Current.Items.
So, I can't do this:
using (var repo = new Repository())
{
    return repo.Users().ToList();
}
As this defeats the whole point of DI.
A common approach used with NHibernate is to create your session (ObjectContext) in begin_request (or some other similar lifecycle event) and then dispose of it in end_request. You can put that code in an HttpModule.
You would need to change your Repository so that it has the ObjectContext injected. Your Repository should get out of the business of managing the ObjectContext lifecycle.
I would say you definitely should. Unless Entity Framework handles connections very differently than LinqToSql (which is what I've been using), you should implement IDisposable whenever you are working with connections. It might be true that the connection automatically closes after your transaction successfully completes. But what happens if it doesn't complete successfully? Implementing IDisposable is a good safeguard for making sure you don't have any connections left open after you're done with them. A simpler reason is that it's a best practice to implement IDisposable.
Implementation could be as simple as putting this in your repository class:
public void Dispose()
{
    SqlDataContext.Dispose();
}
Then, whenever you do anything with your repository (e.g., with your service layer), you just need to wrap everything in a using clause. You could do several "CRUD" operations within a single using clause, too, so you only dispose when you're all done.
Update
In my service layer (which I designed to work with LinqToSql, but hopefully this would apply to your situation), I do new up a repository each time. To allow for testability, I have the dependency injector pass in a repository provider (instead of a repository instance). Each time I need a new repository, I wrap the call in a using statement, like this:
using (var repository = GetNewRepository())
{
    ...
}

public Repository<TDataContext, TEntity> GetNewRepository()
{
    return _repositoryProvider.GetNew<TDataContext, TEntity>();
}
If you do it this way, you can mock everything (so you can test your service layer in isolation), yet still make sure you are disposing of your connections properly.
If you really need to do multiple operations with a single repository, you can put something like this in your base service class:
public void ExecuteAndSave(Action<Repository<TDataContext, TEntity>> action)
{
    using (var repository = GetNewRepository())
    {
        action(repository);
        repository.Save();
    }
}
action can be a series of CRUD actions or a complex query, but you know that if you call ExecuteAndSave(), when it's all done, your repository will be disposed of properly.
EDIT - Advice Received From Ayende Rahien
Got an email reply from Ayende Rahien (of Rhino Mocks, Raven, Hibernating Rhinos fame).
This is what he said:
Your problem is that you initialize your context like this:
_genericSqlServerContext = new GenericSqlServerContext(new EntityConnection("name=EFProfDemoEntities"));
That means that the context doesn't own the entity connection, which means that it doesn't dispose it. In general, it is vastly preferable to have the context create the connection. You can do that by using:
_genericSqlServerContext = new GenericSqlServerContext("name=EFProfDemoEntities");
Which definitely makes sense - however, I would have thought that disposing of a SqlServerContext would also dispose of the underlying connection; guess I was wrong.
Anyway, that is the solution - now everything is getting disposed of properly.
So I no longer need to do a using on the repository:
public ICollection<T> FindAll<T>(Expression<Func<T, bool>> predicate, int maxRows) where T : Foo
{
    // don't need this anymore
    // using (var cr = ObjectFactory.GetInstance<IContentRepository>())
    return _fooRepository.Find().OfType<T>().Where(predicate).Take(maxRows).ToList();
}
And in my base repository, I implement IDisposable and simply do this:
Context.Dispose(); // Context is an instance of my custom sql context.
Hope that helps others out.