How to write automated integration tests when using JTA?

I want to write integration tests for the application I'm working on. It uses JTA (multiple resources) and runs inside an application server. What is the best way to write an automated test for such a scenario: using a standalone transaction manager like Atomikos, or somehow leveraging the app server's API/tools for transaction handling?

Sounds like a good plan, though it might be a bit complicated to satisfy the environment required by all those resources.
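For illustration, here is a minimal sketch of what such a test could look like with Atomikos as a standalone transaction manager. JUnit 4 and an H2 in-memory XA datasource are assumptions here, not part of the original question; swap in your real XA resources:

```java
// A minimal sketch, assuming Atomikos, JUnit 4, and H2 on the classpath.
import com.atomikos.icatch.jta.UserTransactionManager;
import com.atomikos.jdbc.AtomikosDataSourceBean;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.sql.Connection;

public class JtaIntegrationTest {

    private UserTransactionManager tm;
    private AtomikosDataSourceBean ds;

    @Before
    public void setUp() throws Exception {
        // Standalone JTA transaction manager; no application server needed
        tm = new UserTransactionManager();
        tm.init();

        // One XA-capable resource; a second datasource or a JMS connection
        // factory would be enlisted in the same transaction automatically
        ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName("testDb");
        ds.setXaDataSourceClassName("org.h2.jdbcx.JdbcDataSource");
        ds.getXaProperties().setProperty("URL", "jdbc:h2:mem:test");
    }

    @Test
    public void commitSpansEnlistedResources() throws Exception {
        tm.begin();
        try (Connection con = ds.getConnection()) {
            con.createStatement().execute("CREATE TABLE t(id INT)");
            con.createStatement().execute("INSERT INTO t VALUES (1)");
        }
        tm.commit(); // two-phase commit across all enlisted XA resources
    }

    @After
    public void tearDown() {
        ds.close();
        tm.close();
    }
}
```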

Related

Which Environments Should Integration Tests Be Run In?

Given a development pipeline with playground, staging, and production environments, which environment is most appropriate for integration tests? What is the best practice around this?
My thinking is that it should be in the playground environment, to get the earliest results (i.e., shift left). However, I have also seen some examples of re-running integration tests for each environment.
Is there value in running integration tests multiple times, or does it make more sense to just run it once in an appropriate environment?
There might not be a standard best practice; it also depends on the application and the testing setup you have.
You can skip running tests in the production environment, as they will affect performance for your users. It is also not a good idea to put test data into your production environment. To check whether functionality works in production, you can create an environment that mimics production.
Since different environments like QA/staging can have different configurations and different CPU/memory settings, it is a good idea to run the integration tests in multiple environments.

Automated Testing of NiFi Flows using Jenkins

Is there any way to automatically run regression/functional tests on NiFi flows using a Jenkins pipeline?
I searched for it without any success.
Thanks for your help.
With the recent releases of NiFi 1.5.0 and NiFi Registry 0.1.0, the community has come together to produce a number of SDLC/CI/CD integration tools to make using things like Jenkins Pipeline easier.
There are both Python (NiPyAPI) and Java (NiFi Toolkit CLI) API wrappers being produced by a team of collaborators to allow scripted manipulation of NiFi flows across different environments.
Common functions include interaction with integrated version control, import/export of flows as JSON documents, deployment between environments, start/stop of flows, etc.
So we are working quickly towards supporting things like an integrated wrapper for declarative Jenkins Pipelines, and I would add that this is being done fully in a public codebase under the Apache license, so we (I am the lead NiPyAPI author) would welcome your collaboration.

How to test Service Contracts implemented as OSGi Bundles?

We are in the process of transitioning towards SOA.
Our current goal is to try to ensure that more of the application is developed as "services" (mainly to improve visibility of capabilities, encourage re-use, and de-risk change). Some of those services will be exposed as web services, but many (probably the majority) will not, and will be used "internally" only, to help reap some of the benefits of SOA.
For those "internal" services we currently intend to implement them as OSGi bundles; however, we are struggling to understand how best to test them. Our goal is to enable the current system test team to test all types of services, and we have been investigating tools like SoapUI and SOA Test. However, it is becoming clear that we may face some challenges in testing services implemented as OSGi bundles using tools like these, and indeed in asking the test team to do so.
So we're looking for some advice on how best to test aspects of our capability designed to act as a "service", but implemented as an OSGi bundle instead of a web service.
What tools would people recommend, and is this a type of testing that's traditionally done by a developer during unit testing, or can it be done by a less technical tester applying the same basic principles of testing interfaces (i.e. inputs, processing, outputs)?
You could theoretically use a Remote Service Admin implementation (like Aries RSA or Eclipse ECF) to expose your internal services during testing, so that an external system test tool can access them.
I would not recommend letting an external team test your OSGi services, though. It is much better to test the services in your own build using an integration testing tool like Pax Exam. It lets you define which bundles and other configuration to install; it then boots an OSGi framework with your setup and runs JUnit tests against it. The advantage is that such tests are quite realistic and still quite simple.
See here for some Pax Exam tests in Aries RSA or Apache Karaf.
The first example uses the Pax Exam forked container for very fast tests (<1s per test), while the second uses the Apache Karaf container (~10s per test) for tests that are very near a production system.
So you get much faster feedback than with an external system test team that will always lag a bit behind your current development. It also allows you to establish the policy that each team member runs the tests locally before committing.
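To give a flavor of the approach, here is a minimal Pax Exam 4 sketch. The Maven coordinates and the MyService interface are illustrative assumptions, not from the original answer:

```java
// A minimal sketch, assuming Pax Exam 4 with the JUnit 4 driver.
import static org.ops4j.pax.exam.CoreOptions.junitBundles;
import static org.ops4j.pax.exam.CoreOptions.mavenBundle;
import static org.ops4j.pax.exam.CoreOptions.options;

import javax.inject.Inject;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.PaxExam;

@RunWith(PaxExam.class)
public class MyServiceTest {

    // The OSGi service under test, injected from the service registry
    @Inject
    private MyService service; // hypothetical service interface

    @Configuration
    public Option[] config() {
        // Bundles to provision into the freshly booted OSGi framework
        return options(
            mavenBundle("com.example", "my-service-bundle", "1.0.0"),
            junitBundles()
        );
    }

    @Test
    public void processesInput() {
        Assert.assertEquals("expected", service.process("input"));
    }
}
```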

What is the best way to test clients of different programming languages with a server?

We have written clients in different programming languages (Java, .NET/Silverlight, Flash, JavaScript) that communicate with a server, as our target is to support various technologies on the client side. The functionality they are supposed to perform is the same.
One of the main challenges we are having now is finding a simple and effective approach for testing this variety of client technologies against the server. Currently we use Maven, hooked up with many Maven plugins such as JSTestDriver, Flexmojo, NPanday, and others which we have developed on our own. Is there a better approach?
Any help would be appreciated, whether it is a recommendation of available frameworks/tools or innovative ideas for doing this.
Thanks
What you need is a clean design; otherwise everything is a mess and you have to test everything together.
Your server should expose one interface to all client systems (browsers, desktop applications, mobile apps); test that API thoroughly. You can do that using an appropriate framework, depending on the technology used for the server. This should be your main test effort. Then try to keep the API stable, so that for every new version of the server you just run a regression test.
Meanwhile, you can test each client application alone by creating a mock server that implements the same API.
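A mock server can be as simple as a canned HTTP endpoint that honors the same API contract. Here is a minimal sketch using the JDK's built-in HttpServer; the /api/echo endpoint and its JSON payload are illustrative assumptions:

```java
// A minimal mock-server sketch using the JDK's built-in HttpServer.
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MockServer {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Canned response matching the real server's API contract, so each
        // client (Java, .NET, Flash, JavaScript) can be tested in isolation
        server.createContext("/api/echo", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
    }
}
```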
The last step would be integration testing, where you run a live version of your server together with your client applications.
Expect is a good framework for testing a program's external text interfaces, such as client-server interaction. Tests are written in Tcl and operate at a purely black-box level.

Why test EJB3 in an embedded container?

This could be a stupid question, since almost everyone prefers the embedded container technique to test EJBs, but I have to ask because of my lack of experience.
Also, some may argue that embedded containers may not reproduce the real-life situation of deploying to a real app server.
So, when testing EJB3, why is it recommended to use embedded containers instead of a standalone container?
Thanks in advance.
Time.
Testing EJBs in a full-blown application server usually takes a lot of time, because the app server has to "spin up" whenever changes are made, so a lot of time is wasted. Because of that, embedded containers such as OpenEJB can save you a lot of time. Embedded GlassFish is also an option these days, although I haven't personally tried it.
Zero turnaround is a kind of holy grail in Java EE.
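To make the time argument concrete, here is a minimal sketch using the standard EJB 3.1 embeddable API (javax.ejb.embeddable.EJBContainer), which OpenEJB and embedded GlassFish both implement. GreeterBean and its JNDI name are illustrative assumptions:

```java
// A minimal sketch of booting an embedded EJB container in-process.
import javax.ejb.embeddable.EJBContainer;

public class GreeterBeanEmbeddedTest {

    public static void main(String[] args) throws Exception {
        // Boots an embedded EJB container in the current JVM; there is no
        // external app server to deploy to, which is where the time is saved
        EJBContainer container = EJBContainer.createEJBContainer();
        try {
            GreeterBean greeter = (GreeterBean) container.getContext()
                .lookup("java:global/test-app/GreeterBean"); // JNDI name is illustrative
            System.out.println(greeter.greet("world"));
        } finally {
            container.close();
        }
    }
}
```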
Here are the most relevant arguments that I've found. Please comment on these, or add your own reasons for or against testing with embeddable containers vs. a real application server container. Thank you.
Using an embedded container for testing gives you flexibility (you just need to add the new libs to the classpath). As far as I understand, if we want to be able to deliver the testing project for several application servers, we must not be bound to the application server container in the test implementation. Some app servers use specific annotations or deployment descriptors; if those are used, you are bound to that app server.
Embedded containers are lighter, which means reduced time for running the tests. Real app servers have difficulties starting and stopping automatically, or can hang, so building a fully automated testing process with a real app server can be too difficult.
Another problem is the stateless nature of most Java EE applications. After a method invocation crosses a transaction boundary (for example, a stateless session bean), all JPA entities become detached and the client loses its state. This forces you to transport the entire context back and forth between the client and the server (a heavy load), and every change of the client's state has to be merged back on the server.
With an embedded container you have one process that runs everything (tests and EJBs); with a real app server you have to coordinate two processes (the app server and the tests).
For full testing, of course, you also need tests on a real app server. Different servers have their own particularities, for example in class loading. Embedded containers, however, help with testing the logic (unit testing and integration of units), so for daily automated testing this can be enough, and it is easier.
An embedded container is much faster to start and stop than a full container, which certainly matters to the developer. Setup/configuration is easier to automate, especially with continuous integration. On the other hand, as some core features are disabled in an embedded container, you can't test everything.
You may want to investigate http://www.jboss.org/arquillian to have both options. From the site:
Arquillian enables you to test your business logic in a remote or embedded container. Alternatively, it can deploy an archive to the container so the test can interact as a remote client.
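As an illustration of that model, a typical Arquillian test (JUnit 4 here; GreeterBean and its greet method are assumptions) packages only what it needs and lets the container inject the bean:

```java
// A minimal Arquillian sketch, assuming JUnit 4 and a container
// adapter (embedded or remote) on the classpath.
import javax.ejb.EJB;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterBeanArquillianTest {

    @Deployment
    public static JavaArchive createDeployment() {
        // Package only the classes under test; Arquillian deploys this
        // archive to whichever container adapter is configured
        return ShrinkWrap.create(JavaArchive.class)
            .addClass(GreeterBean.class) // hypothetical bean under test
            .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @EJB
    private GreeterBean greeter; // injected by the container

    @Test
    public void greets() {
        Assert.assertEquals("Hello, world", greeter.greet("world"));
    }
}
```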
In the end, it depends on the kind of EJBs you want to test. Certain complex scenarios will not work in an embedded container without mocks of some external services. In my projects we test EJBs with a custom mock container we created (ultra fast and easy to use) and, if all proceeds well, we then test in the real thing, a full JBoss, using a remote-control API much like Arquillian's.
Hope it helps.