Integration testing of functionality in an OSGi container

I'm using FuseESB to run my app, which is essentially an OSGi container (Felix). I'd like to figure out an approach to test my OSGi services in integration mode (including external dependencies like the DB, external services, etc.). My first thought is to deploy a dedicated bundle into the container that pulls in all the app services and runs the tests defined in that bundle. Can somebody help with this kind of issue? Thanks!

There are different ways of testing this.
Since FuseESB is based on Apache Karaf, you might use the Apache karaf-pax-exam tools to test a complete container setup automatically.
Another way of testing your OSGi bundles in an OSGi container is to use pax-exam directly. Last but not least, if you just want to test your service look-up functionality, you might test with pojosr; it's quite nice for testing but has its limits, especially if you depend on container features.
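For illustration, a minimal pax-exam test can look like the following sketch (assuming pax-exam 3.x with the JUnit 4 driver; MyService and the Maven coordinates are placeholders for your own service and bundle):

    import static org.ops4j.pax.exam.CoreOptions.junitBundles;
    import static org.ops4j.pax.exam.CoreOptions.mavenBundle;
    import static org.ops4j.pax.exam.CoreOptions.options;

    import javax.inject.Inject;

    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.Configuration;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.junit.PaxExam;

    @RunWith(PaxExam.class)
    public class MyServiceIntegrationTest {

        // Injected from the OSGi service registry once the container is up.
        @Inject
        private MyService myService; // placeholder for one of your own services

        @Configuration
        public Option[] config() {
            return options(
                // Provision the bundle under test plus whatever it needs.
                mavenBundle("com.example", "my-service-bundle", "1.0.0"),
                junitBundles()
            );
        }

        @Test
        public void serviceIsAvailable() {
            Assert.assertNotNull(myService);
        }
    }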
That said, you'll find more information at the following pages:
Pax-Exam
Apache Karaf
a sample of how Pax-Web uses pax-exam in its iTests

You may find http://www.javabeat.net/2011/11/how-to-test-osgi-applications/ helpful as an overview of the various OSGi test options. Configuring PAX-Exam to pull in your whole FuseESB container and get all your app services present will involve certain challenges, but once you've got the knack it can be very handy.
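As a rough sketch of what pulling in a whole Karaf-based distribution looks like, here is the shape of such a test using pax-exam's Karaf container support (org.ops4j.pax.exam.karaf); the Maven coordinates below are placeholders you would point at your FuseESB distribution artifact:

    import static org.ops4j.pax.exam.CoreOptions.maven;
    import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.karafDistributionConfiguration;
    import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.keepRuntimeFolder;

    import java.io.File;

    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.Configuration;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.junit.PaxExam;

    @RunWith(PaxExam.class)
    public class ContainerSetupTest {

        @Configuration
        public Option[] config() {
            return new Option[] {
                // Unpack and boot the full distribution, then run the tests inside it.
                karafDistributionConfiguration()
                    .frameworkUrl(maven()
                        .groupId("org.example")            // placeholder coordinates for
                        .artifactId("my-esb-distribution") // your FuseESB distribution
                        .type("zip")
                        .version("1.0.0"))
                    .unpackDirectory(new File("target/exam"))
                    .useDeployFolder(false),
                keepRuntimeFolder() // keep the unpacked container for post-mortem inspection
            };
        }

        // @Test methods then execute inside the provisioned container.
    }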

bndtools has the ability to run JUnit tests inside the container.

Related

Running integration/e2e tests on top of a Kubernetes stack

I've been digging a bit into the way people run integration and e2e tests in the context of Kubernetes and have been quite disappointed by the lack of documentation and feedback. I know there are amazing tools such as kind or minikube that allow you to run resources locally. But in the context of a CI, and with a bunch of services, they do not seem to be a good fit, for obvious resource reasons. I think there are great opportunities in running tests for:
Validating manifests or helm charts
Validating that a component behaves well as part of a bigger whole
Validating the global behaviour of a product
The point here is not really about the testing framework but more about the environment on top of which the tests could be run.
Do you share my view? Have you ever run these kinds of tests? Do you have any feedback or insights about it?
Thanks a lot
Interesting question and something that I have worked on over the last couple of months for my current employer. Essentially we ship a product as docker images with manifests. When writing e2e tests I want to run the product as close to the customer environment as possible.
To solve this, we have built scripts that interact with our standard cloud provider (GCloud) to create a cluster, deploy the product and then run the tests against it.
For the major cloud providers this is not a difficult task, but it can be time consuming. There are a couple of things that we have learnt the hard way to keep in mind while developing the tests.
Concurrency: this may sound obvious, but do think about the number of concurrent builds your CI can run.
Latency from the cloud: don't assume that you will get an instant response to every command that you run in the cloud, and think about the timeouts. If you bring up a product with lots of pods and services, what is an acceptable start-up time?
Errors causing build failures: this is an interesting one. We have seen build errors due to network problems when communicating with our test deployment. These are nearly always transient, and it is best to stop them from failing the build; a small retry wrapper, like the sketch below, goes a long way.
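As an illustration of that last point, a minimal retry helper in Java (the attempt count, delay and health-check URL are arbitrary placeholders) could look like this:

    import java.util.concurrent.Callable;

    public final class Retry {

        // Retries the given call a few times before letting the failure
        // propagate and fail the build; transient network errors usually
        // disappear on the second or third attempt.
        public static <T> T withRetries(Callable<T> call, int maxAttempts, long delayMillis)
                throws Exception {
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return call.call();
                } catch (Exception e) {
                    last = e;
                    Thread.sleep(delayMillis);
                }
            }
            throw last;
        }

        public static void main(String[] args) throws Exception {
            // Example: poll an endpoint of the deployed product until it answers.
            String body = withRetries(() -> {
                java.net.HttpURLConnection conn = (java.net.HttpURLConnection)
                        new java.net.URL("http://my-test-cluster/healthz").openConnection(); // placeholder URL
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                try (java.util.Scanner s = new java.util.Scanner(conn.getInputStream()).useDelimiter("\\A")) {
                    return s.hasNext() ? s.next() : "";
                }
            }, 5, 10000);
            System.out.println(body);
        }
    }

Retrying a handful of times with a fixed delay keeps the build green through one-off network blips while still surfacing persistent failures.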
One thing to look at: GitLab provides documentation on how to build and test images in its CI pipeline.
On my side I use Travis CI: I build my container image inside it, then run k8s with kind (https://kind.sigs.k8s.io/) inside Travis CI, and then launch my e2e tests.
Here is some additional information on this blog post: https://k8s-school.fr/resources/en/blog/k8s-ci/
And here are the scripts to install kind inside Travis CI in 2 lines: https://github.com/k8s-school/kind-travis-ci.git. It allows lots of customization on the k8s side (enable PSP, change the CNI plugin).
Here is an example: https://github.com/lsst/qserv-operator
Or I use GitHub Actions CI, which makes it easy to install kind (https://github.com/helm/kind-action), provides plenty of features, and offers free worker nodes for open-source projects.
Here is an example: https://github.com/xrootd/xrootd-k8s-operator
Please note that GitHub Actions workers may not scale for large builds/e2e tests. Travis CI scales pretty well.
In my understanding, this workflow could be moved to an on-premise GitLab CI where your application can interact with other services located inside your network.
One interesting thing is that you do not have to maintain a k8s cluster for your CI; kind will do it for you!

How to Selenium-test web sites that depend on each other? (OAuth2 IdS, protected sites)

I have an IdS (Thinktecture Identity Server3) and various web sites trusting the IdS.
I have selenium tests for IdS and for each of the sites.
I use TeamCity and Octopus Deploy.
Changes in the IdS should trigger tests of the dependent web sites. Changes in individual sites should trigger only the tests of that site (as it is).
What is the best way of ensuring this? I should think this is a common problem? ;)
BR, Anders
One way to do this is to use the app settings configuration options of .NET itself. You can use config transformations to create a different configuration per site and change; you will have to map each one, though. This allows you to keep everything in the project. There is an example of such a script creating transformed config files using the command-line transform execution tool. Or, if you prefer, use TeamCity with XML pokes. I've used the latter with great success on a Selenium, multi-site platform test framework. Before each chained test build we modified the XMLs, so the execution was dedicated to the related Git branch or repo that TeamCity was set to monitor.
I found what I was looking for in the most obvious of places. On the web site builds, I added a Finish Build Trigger pointing to the IdS build. This way all my sites (I have only one :)) get Selenium-tested.

Getting coverage using OpenCover for Selenium tests

The background:
We have a project starting a service that gets controlled from the web interface GUI.
We're not using an off-the-shelf commercial web server, but an in-house wrapper around the Windows service that manages all the web interface interactions.
What we have:
Now we've started using Selenium & MSTest for testing the web interface, and we're trying to get coverage for these kinds of tests; OpenCover seemed to do the trick. The problem is that it doesn't (or we're doing something wrong).
The only code coverage that I'm getting is for the method used to start the Windows service and all the others that get called in the process (I have access to all the PDBs too); afterwards nothing is covered for the actions that take place through Selenium's interaction with the browser.
Any hints/ideas or maybe other tools that are able to do the job (if even possible) are appreciated.
If you're running an ASP.NET app, you're going to need to attach OpenCover to IIS or IIS Express to get accurate code coverage with Selenium. That makes it a little hard to use MSTest with. You may want to consider moving as much logic as possible into your services and writing unit tests against those.
Here's a quick example of how to attach OpenCover to IIS:
OpenCover.Console.exe -target:C:\Windows\System32\inetsrv\w3wp.exe -targetargs:-debug -targetdir:C:\Inetpub\wwwroot\MyWebApp\bin\ -filter:+[*]* -register:user

How to do integration testing with front-end code and API in separate repositories?

Any strategies for doing this? We have a Rails codebase that is currently fully integrated (the same app serves up the JS assets and does the back-end heavy lifting), but we are thinking of extracting the two into separate services, each in its own git repository and running on separate servers.
I'm planning on unit/acceptance testing the API with a small Ruby HTTP client that will also act as documentation for the API endpoints, and the JS front-end (Brunch.io, Backbone, Chaplin) will have unit/acceptance testing internally as well... but I feel like I should be writing Cucumber tests that integrate the two, right? Where do those cukes live? In which repo?
Appreciate any insight here. Thanks!
In a general sense, if you have code that is for both your server and your client, then the "right" place to keep it depends on which side of your app is more central or "heavier": the client or the server.
That being said, from the way you describe things in your question, it sounds like you consider the Rails app "primary". For instance, you mention that you currently have "JS assets" integrated into/served by your "Rails codebase" ... not Rails assets being served by your JS server ;-)
So that answers things on the theoretical level, but I also think it makes sense to put the code in your Rails codebase for a practical reason: Cucumber is a Ruby tool, not a JS one. You might use it to test some non-Ruby code, but ultimately it's being run by Ruby.
I don't know for sure, but I suspect you'll create headaches for yourself if you try to put your Cucumber specs in your JS codebase and then run them from your Rails codebase. Plus, that tightly couples the two codebases: to run your tests you need both codebases on your test runner, whereas if you keep the Cucumber stuff in Rails-land your test runner only needs your Rails code, and it can run against a different server that hosts your JS code.
So ultimately it sounds to me like the Cucumber stuff belongs in Rails-land ... but going the other way (and storing it with your JS repo) doesn't seem horrible to me either, just potentially more problematic.

Jakarta Cactus alternative?

Greetings, we have a project with loads of beans, JSPs, etc. There is a desperate need for automated tests in our environment (we use Maven). Now, we can easily write tests for the database layer of the project and for the various security utilities we implemented. But the JSP pages remain untested.
I searched for utilities for server-side testing and Cactus seems the best option. However, according to their changelist, the last release was 1.8, more than two years ago!
So the question is: what happened to Cactus, is it still being developed? And what are the recent alternatives to Jakarta Cactus (if any exist)?
I've used a combination of Spring, JUnit and HttpClient with some success in recent projects.
Apache HttpClient provides a powerful and flexible API for constructing and sending HTTP requests to your application. It cannot replicate a web browser, say by running client-side scripts; however, if there is sufficient content within the resulting HTTP responses (headers, URI, body), then you can use this information to traverse pages within the application and validate the behavior. You can post forms, follow redirects, process cookies and supply inputs to your application.
JUnit (junit.org) drives the tests, invoking a series of pages with HttpClient; the tests can be deployed alongside the application, run standalone with Ant/Maven, or run separately inside your IDE.
Spring (springsource.org) is, of course, optional, as you may not be using it for your project. I've found it useful to stub/mock out parts of the application so that I can isolate specific areas, from front-end controllers through to the business logic, by substituting DAOs that return specific data values. It provides an excellent Test Context Framework and specialized test runners that hook in well to testing frameworks like JUnit (or TestNG if you prefer).
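To make the JUnit-plus-HttpClient combination concrete, a bare-bones test might look like this sketch (assuming HttpClient 4.x and JUnit 4; the URL and the asserted content are placeholders):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;
    import org.junit.Test;

    public class LoginPageTest {

        @Test
        public void loginPageIsServed() throws Exception {
            try (CloseableHttpClient client = HttpClients.createDefault()) {
                HttpGet get = new HttpGet("http://localhost:8080/myapp/login"); // placeholder URL
                try (CloseableHttpResponse response = client.execute(get)) {
                    assertEquals(200, response.getStatusLine().getStatusCode());
                    String body = EntityUtils.toString(response.getEntity());
                    assertTrue(body.contains("<form")); // crude check that the login form rendered
                }
            }
        }
    }

From there you can post the form, follow the redirect and assert on the landing page in the same style.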
Cactus served as a good server-side testing framework in the EJB 2 era, but it's not supported anymore.
You can use a combination of both a mock testing (fine-grained) and an in-container testing (coarse-grained) strategy to test your application completely.
Mock testing frameworks: Mockito, JMockit, EasyMock, etc.
Integration testing frameworks (Java EE): Arquillian, the embeddable EJB container API, etc.
I prefer Mockito and Arquillian for server-side testing.
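For a flavour of the in-container (coarse-grained) side, an Arquillian test is typically shaped like the sketch below; GreetingService is a placeholder for a real CDI/EJB bean:

    import javax.inject.Inject;

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.EmptyAsset;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(Arquillian.class)
    public class GreetingServiceTest {

        @Deployment
        public static JavaArchive createDeployment() {
            // Package only what this test needs and deploy it to the container.
            return ShrinkWrap.create(JavaArchive.class)
                    .addClass(GreetingService.class) // placeholder bean under test
                    .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
        }

        @Inject
        private GreetingService service;

        @Test
        public void greets() {
            Assert.assertEquals("Hello, Duke", service.greet("Duke"));
        }
    }

The test runs inside the container, so the bean gets real injection, transactions and so on, while mocks stay in the fast unit-test layer.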
How about Arquillian? I haven't used it and it doesn't even have a stable version yet, but at least it's in active development.
You might want to try Selenium. I'm finding that it combines well with JBehave. And the more support both of those projects get, the less likely they are to go defunct (like Cactus).