Is there any way to have different configurations for different databases in dbUnit / Arquillian Persistence extension? - jboss-arquillian

I am trying to make integration tests that access the database using Arquillian Persistence Extension / DBunit.
It works well and I have this configured to test the part of the system that accesses MySQL:
<extension qualifier="persistence-dbunit">
<property name="qualifiedTableNames">true</property>
<property name="escapePattern">`?`</property>
</extension>
The escapePattern is important because I have tables with names like "user", "key" and so on.
Now I want to test the part of the system that accesses Vertica. Vertica uses a different escape character (") and does not recognize ` as an escape character. Every time I run the test, I get an error caused by the backticks.
Is there any way to have two different configurations that are activated depending on which test is run (or which database connection is used)?

The limitation of APE (Arquillian Persistence Extension) at the moment is that it cannot control more than one database from within a single test. I understand that your case is different: you would like to run different suites of tests against different databases (or even the same tests against different databases, but with different configuration). I solved this for APE itself using Maven profiles, and I test the code base against several different combinations of containers and databases (there is Docker involved in between, which you will see in the referenced example, but that is not really important here). My approach boils down to the following:
I have separate test-resource folders for every configuration
Each one contains a dedicated arquillian.xml relevant for the given DB
Maven profiles add those special test-resource folders on demand
This way I keep the tests portable, but I can shuffle some things around transparently.
I hope this will help you. Have a look at the config here.
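As a sketch of the Maven side (profile ids, directory names, and the Vertica escape pattern are illustrative assumptions, not the exact config from the linked example):

```xml
<!-- pom.xml: one profile per database adds its own test-resource folder,
     each containing a dedicated arquillian.xml. -->
<profiles>
  <profile>
    <id>mysql</id>
    <build>
      <testResources>
        <testResource>
          <!-- arquillian.xml here uses <property name="escapePattern">`?`</property> -->
          <directory>src/test/resources-mysql</directory>
        </testResource>
      </testResources>
    </build>
  </profile>
  <profile>
    <id>vertica</id>
    <build>
      <testResources>
        <!-- arquillian.xml here would use <property name="escapePattern">"?"</property> -->
        <testResource>
          <directory>src/test/resources-vertica</directory>
        </testResource>
      </testResources>
    </build>
  </profile>
</profiles>
```

Running `mvn verify -Pmysql` or `mvn verify -Pvertica` then picks up the matching arquillian.xml without touching the tests themselves.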

Related

Environment specific migrations cause issues with copying a database to different environments

We've made use of environment specific migrations for things like seeding data, data correction, applying table grants. There are times when we'd like to take a copy of production, for example, and import it to another lower environment, either as a periodic refresh, or to start a new test environment. However, as expected, we end up with various failures like "Detected applied migration not resolved locally" and "Detected resolved migration not applied to database". I see there are various flags (ignoreIgnoredMigrations, ignoreMissingMigrations and outOfOrder) to allow us to bypass these issues.
Are there best practices for handling scenarios like the one I described? Is there a way to run an environment specific migration that doesn't record an entry in the flyway_schema_history table? Are there other approaches to this issue that I haven't mentioned?
Thanks in advance for any insights.
We have used ignoreMissingMigrations as one approach around this issue.
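As a sketch of that approach (file names, location paths, and the per-environment split are illustrative assumptions; ignoreMissingMigrations is the setting mentioned above):

```properties
# flyway-test.conf (illustrative): common migrations only, plus tolerance
# for history entries copied over from production that have no matching
# local file (e.g. prod-only seed/grant migrations).
flyway.locations=filesystem:sql/common,filesystem:sql/test
flyway.ignoreMissingMigrations=true
```

Invoked per environment with `flyway -configFiles=flyway-test.conf migrate`, so the lower environment accepts a production schema history without requiring the production-only migration files.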

Run selenium in different environments

The issue I am running into: I have one Selenium test that needs to run in different environments, one by one. In one environment (SIT) the test types a keyword and generates a list of terms; another environment (PROD) does the same thing but generates a different list. I need to validate the first term appearing in the list in both SIT and PROD. The test is failing because what is in SIT is different from PROD. Is there a generic way to run one test on both environments even if they generate different results? Can you please direct me?
There are several ways to achieve that.
One of the most appropriate (imho) ways to achieve environment independence is to use environment variables.
Another is to use property files holding different properties for different environments
Another one is to use your execution environment specific properties (like jvm properties in Java).
Options 1 and 3 are, imho, the most suitable for integrating your code into a CI process.
You can pass those values via a config file and read them in your test code.
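The options above can be combined into a single lookup order. Here is a minimal sketch, assuming a hypothetical `EnvConfig` helper (the class and key names are illustrative, not from any framework):

```java
import java.util.Properties;

public class EnvConfig {
    /**
     * Resolves a configuration value in priority order:
     * JVM system property (e.g. -Denv.name=prod), then environment
     * variable (key "env.name" maps to "ENV_NAME" by convention),
     * then a loaded properties file, then the supplied default.
     */
    public static String resolve(String key, Properties fileProps, String defaultValue) {
        String value = System.getProperty(key);
        if (value == null) {
            value = System.getenv(key.replace('.', '_').toUpperCase());
        }
        if (value == null && fileProps != null) {
            value = fileProps.getProperty(key);
        }
        return value != null ? value : defaultValue;
    }
}
```

A test could then branch on `EnvConfig.resolve("env.name", props, "sit")` to load the expected term list for SIT or PROD, while the Selenium steps themselves stay identical.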

How can I use IntelliJ IDEA to "group" data source connections into arbitrary categories?

The place I'm currently working at has gone all-in on the micro-services approach. There are a hundred or so services, many with their own database, each requiring a different data source with different connection details.
Then there's multiple environments (local development, CI, SIT, UAT, etc.)
I don't have connections for each one of those but I do have, so far, about 30 connections in my database tool window, spread across a few of those environments (mostly local and CI, but a few from other environments).
I'd like to start organising/grouping those connections (probably by environment, though a different categorisation might make more sense later). More than once, while jumping around between databases, I've landed in the DB for the right service but in the wrong environment.
I've looked at the tool window buttons, and the right-click menu - but nothing there seems to be relevant to grouping the datasource nodes in the tool window.
My current workaround is to make sure each connection description starts with a prefix that identifies the environment - _local svc-blah, CI svc-blah, etc. IDEA sorts these alphabetically, so the connections for the same environment tend to stay together (note the underscore at the beginning of the local connections so they sort to the top of the list). This works ok, but if IDEA has a grouping mechanism, it would presumably also have expand/collapse functionality, which would help me out too.
The question: How can I use IntelliJ IDEA to "group" data source connections into arbitrary categories?
Sure, they can be grouped. Just select one and hit F6; you will be prompted to enter a group name, and you will then see it as a 'folder' in the tree, into which you can drag and drop the others.

Dynamically change connection string for UI tests

I'm using the WebAii library for UI testing. I want to test whether my component displays the same records as there are in the database, so I need to switch my application's connection string to point to the test database just for the duration of the test run. What is the best way to do it? How can I dynamically change the connection string prior to running the app? Thanks
Are you storing the connection string in the Web.config file? If so, I would deploy a new Web.config just before starting the test and then use the command line to send an IISRESET.
FYI, these are the kinds of questions we answer all day long on our public forum dedicated to WebAii.
Cody
Telerik Technical Support
What kind of application is it? First, this is probably an indication of not-well-factored code. Next, it is common to have a separate environment for testing code.
If you are, for example, deploying to ASP.NET with Visual Studio, you can use Web.config file transformations to set a different value when you deploy to e.g. test.contoso.com vs. www.contoso.com. The transformation syntax allows you to define a new connection string, or change an existing one from the base Web.config, when deploying a different configuration.
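A sketch of that transformation syntax (the connection name, server, and database are placeholders):

```xml
<!-- Web.Test.config: applied when publishing with the "Test" build
     configuration; overrides the matching entry in the base Web.config. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Data Source=testdb-server;Initial Catalog=AppTestDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```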
If you have a single environment, and control over it, you could probably write a couple of (Power)shell scripts to copy a web.config with "test" connection strings to your app root prior to the test. Then run a second script to reset the original web.config after the test is run.
If you have access to your deploy directory within the context you will be running your tests, you could even simply have a Web.test.config file included in your unit test project. In [AssemblyInitialize]:
File-copy \\{your app server}\{your app directory}\Web.config to \\{your app server}\{your app directory}\Web.config.orig.
File-copy Web.test.config to \\{your app server}\{your app directory}\Web.config.
Sleep for a few seconds?
Then do the reverse in [AssemblyCleanup].
Other strategies exist, too. You could build in an override to your application when in debug mode, that checks various things (special file, additional config, cookies, extra query string). Or you could have a Settings manager in your app that you can instrument in test setup when arranging your test (click through UI to change DB settings).
Very likely, however, you may get the best compounding rewards by factoring your code to reduce dependencies. Then you can write unit tests which stub/mock/fake the database. You can use code coverage tools to verify that you've tested specific scenarios, or to see that additional integration tests would be duplication of coverage at that point.

How do I configure persistence.xml for ORM on glassfish v3 with Derby and Eclipselink

I'm using the internal glassfish 3.1 plugin for Eclipse, along with a derby database I installed (it shows up on the datasource explorer in the Database Developer view in Eclipse), and I'm fumbling at the "last" step of getting the ORM working so that I can develop an app that persists data with EJBs using Eclipselink for the JPA implementation.
I know I need to configure the persistence.xml file, but I'm at a loss for what needs to be in it, what the individual field names mean. I feel like the purpose of the persistence.xml is to tell Glassfish where to find the database to store everything in, and which JPA implementation to use to do that storing.
I have a bunch of questions.
Do I have to have a persistence entry for each class that represents an object in the database? So if I had a Book class and a Library class, would I need two entries in persistence.xml, or could I just have one entry that services them both?
Where can I find more information about how to configure the persistence.xml file IN GENERAL. I have found tons of very specific tutorials with information on how to configure it in X, Y, or Z setting, but nothing that explains the individual bits, and how you'd configure them from a high level.
Once I've set up my persistence.xml file correctly, what else do I need to do to ensure that my annotated classes are serviced by the ORM implementation correctly? Is there something I need to configure in Glassfish?
I'm not an expert but...
1) Yes, in my experience you list each entity class with its own <class> element. There can be exceptions (when deployed in a container, annotated entities are often discovered automatically), but I am unfamiliar with the details.
2) [http://wiki.eclipse.org/EclipseLink/] is a good place to start.
[http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Configuration/JPA/persistence.xml] has some details that you may already know. I've had trouble finding a perfect resource myself; I've tended to find information fragmented all over the place.
3) In general most of my persistence.xml file has been generated automatically by eclipselink.
After creating a connection pool and a JDBC resource from the Glassfish Administration Console, I had to add
<jta-data-source>jdbc/your_name</jta-data-source>
to persistence.xml.[1]
I also added these properties so my tables would be generated and my identity columns would auto-increment using JPA:
<property name="eclipselink.ddl-generation" value="create-tables"/>
<property name="eclipselink.ddl-generation.output-mode" value="database"/>
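Putting the pieces together, a minimal persistence.xml for GlassFish with EclipseLink might look like this (the unit name, data-source name, and entity classes are placeholders, not a definitive configuration):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="myLibraryPU" transaction-type="JTA">
    <!-- EclipseLink as the JPA provider -->
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <!-- JDBC resource created in the GlassFish Administration Console -->
    <jta-data-source>jdbc/your_name</jta-data-source>
    <!-- One entry per entity class -->
    <class>com.example.Book</class>
    <class>com.example.Library</class>
    <properties>
      <property name="eclipselink.ddl-generation" value="create-tables"/>
      <property name="eclipselink.ddl-generation.output-mode" value="database"/>
    </properties>
  </persistence-unit>
</persistence>
```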
Try these two tutorials to get a better understanding of how it works:
[1] http://programming.manessinger.com/tutorials/an-eclipse-glassfish-java-ee-6-tutorial/#heading_toc_j_24
http://itsolutionsforall.com/index.php
[*apologies I can't post more than 2 links at the moment]