How to test Service Contracts implemented as OSGi Bundles?

We are in the process of transitioning towards SOA.
Our current goal is to ensure that more of the application is developed as "Services" (mainly to improve visibility of capabilities, encourage re-use, and de-risk change). Some of those services will be exposed as web services, but many (probably the majority) will not; they will be for "internal" use only, to help reap some of the benefits of SOA.
We currently intend to implement those "internal" services as OSGi bundles; however, we are struggling to understand how best to test them. Our goal is to enable the current System Test team to test all types of services, and we have been investigating tools like SoapUI and SOAtest; however, it is becoming clear that we may face challenges both in testing our OSGi-bundle services with tools like these and in asking the test team to do so.
So we're looking for some advice on how best to test aspects of our capability designed to act as a "service", but implemented as an OSGi bundle instead of a web service.
What tools would people recommend, and is this a type of testing that's traditionally done by a developer during unit testing, or can it be done by a less technical tester applying the same basic principles of testing interfaces (i.e. inputs, processing, outputs)?

You could theoretically use a Remote Service Admin implementation (like Aries RSA or Eclipse ECF) to expose your internal services externally during testing, so that an external system test tool can access them.
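As a rough sketch of that idea (GreetingService and its implementation are hypothetical, and the config type depends on your RSA provider - org.apache.cxf.ws is the CXF DOSGi SOAP transport), exporting an existing service is just a matter of registering it with the standard Remote Services properties:

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    @Override
    public void start(BundleContext context) {
        // Standard OSGi Remote Services properties that mark the service
        // for export by the Remote Service Admin implementation.
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("service.exported.interfaces", "*");
        props.put("service.exported.configs", "org.apache.cxf.ws");
        context.registerService(GreetingService.class, new GreetingServiceImpl(), props);
    }

    @Override
    public void stop(BundleContext context) {
        // Services registered above are unregistered automatically on stop.
    }
}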
I would not recommend letting an external team test your OSGi services, though. It is much better to test the services in your own build using an integration testing tool like Pax Exam. It lets you define which bundles and other configuration to install; it then boots an OSGi framework with your setup and runs modified JUnit tests against it. The advantage is that such tests are quite realistic and still quite simple.
For examples, see the existing Pax Exam tests in Aries RSA or Apache Karaf.
The first example uses the Pax Exam forked container for very fast tests (<1 s per test), while the second example uses the Apache Karaf container (~10 s per test) for tests that are very close to a production system.
So you get much faster feedback than from an external system test team, which will always lag a bit behind your current development. It also allows you to establish the policy that each team member runs the tests locally before committing.
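For illustration, a minimal Pax Exam test might look like the sketch below (the GreetingService interface and the Maven coordinates are made-up placeholders):

import static org.junit.Assert.assertEquals;
import static org.ops4j.pax.exam.CoreOptions.junitBundles;
import static org.ops4j.pax.exam.CoreOptions.mavenBundle;
import static org.ops4j.pax.exam.CoreOptions.options;

import javax.inject.Inject;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.PaxExam;

@RunWith(PaxExam.class)
public class GreetingServiceIT {

    @Inject
    GreetingService greetingService; // injected from the OSGi service registry

    @Configuration
    public Option[] config() {
        // Provision the bundles under test into a freshly booted OSGi framework.
        return options(
                junitBundles(),
                mavenBundle("com.example", "greeting-api", "1.0.0"),
                mavenBundle("com.example", "greeting-impl", "1.0.0"));
    }

    @Test
    public void greetsByName() {
        // Exercise the service contract exactly as a consumer would.
        assertEquals("Hello, Alice", greetingService.greet("Alice"));
    }
}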


API Automation through Java or Postman

In my company we use Ruby to create a framework for API automation, and I have heard we can automate using Postman or SoapUI. So why do we have to create an automation framework when we already have tools for it?
It is like buying a suit off the rack at a clothes shop versus going to a tailor to get a suit measured especially for your needs.
Using an existing tool will require less initial setup, and you will have access to a lot of commonly needed features without reinventing the wheel. For instance, Postman offers ready-made test snippets that you can use with little or no programming knowledge. Tools such as ReadyAPI, Katalon Studio, Robot Framework, SoapUI, etc. usually don't have too steep a learning curve compared to developing a custom automation framework from scratch.
Using tools is fine, especially if you understand how they work in the background and have analysed the testing needs for your particular project. For example, a tool like REST Assured makes writing tests for RESTful webservices very easy, but it's actually very complex in the background.
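For instance, a REST Assured check can be as short as the sketch below (the endpoint and the expected body are made up):

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class FooListApiTest {

    @Test
    public void fooListReturnsFirstFoo() {
        given()
            .baseUri("https://api.example.com") // hypothetical service under test
        .when()
            .get("/foo-list.json")
        .then()
            .statusCode(200)
            .body("[0].name", equalTo("foo1")); // first element of the JSON array
    }
}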
You would build an in-house automation framework if, after researching the existing solutions, you realize they don't fully provide all that you need. A well-designed/architected framework will be far more customizable than any off-the-shelf tool, although it will require more initial work and maintenance as well.
With a custom test automation framework your testers will generally have to be more technical, more like SDETs than typical testers, but that does not always have to be the case - I have seen automation frameworks built by developers where the testers would only write tests inside them by re-using the methods in the framework.
Lastly, I would advise you to do some experimentation: try one of the commercial or open-source tools for API testing, and after doing some testing with it, try doing the same with a more hands-on approach, like using Python's Requests library or the Apache HttpClient for Java - every language has its equivalent.
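As a taste of that hands-on approach, a bare-bones GET with the Apache HttpClient (4.x) might look like this (the URL is a placeholder):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class SimpleGet {
    public static void main(String[] args) throws Exception {
        // Fire a GET request and print the status code and body.
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(
                     new HttpGet("https://api.example.com/status"))) {
            System.out.println(response.getStatusLine().getStatusCode());
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}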

Client - server integration testing: mock or not?

I'm working on a project with two applications: an Android app (client) and a REST service (server). My Android app consumes my REST service.
Both applications are tested separately to ensure they're doing their business as expected.
During server tests I prepare requests and check server responses.
During client tests I set up a simple http mock server and test client's requests against different mocked responses.
Now, this technique works pretty well. It gives me flexibility I like. I can use different test frameworks and continuous integration environments. But there is one weak point: in both the client and server test cases I specify the same API. I assume that e.g.
GET /foo-list.json
will return HTTP 200 with json
[{
  "id": 1,
  "name": "foo1"
}, {
  "id": 2,
  "name": "foo2"
}]
So I repeat myself, and if I change the response format on the server, my client tests won't fail.
My question is about good practices in testing this kind of scenario. How to make true integration tests without sacrificing flexibility of independent tests. Should I test client with mocked server or with a real instance of my rest service?
Please share your professional experience.
In your scenario you should continue to write unit tests to test individual classes, and integration tests to test the inter-operation between multiple application layers (e.g. business and database layers).
You ask:
"How to make true integration tests without sacrificing flexibility of independent tests"
All of your code should use abstractions, so that you can use dependency injection to unit test classes in complete isolation using mock dependencies. The use of mocks will ensure that these tests remain independent, i.e. not coupled to any other classes. Hence, taking this approach, the integration tests, which would use your final concrete classes, would not affect the unit tests, which use the mocked classes.
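As a small illustration of that idea (all names here are hypothetical), the client code depends only on an abstraction, and the unit test substitutes a mock - e.g. with Mockito:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.Test;

public class FooPresenterTest {

    // The abstraction the client depends on instead of a concrete HTTP client.
    interface FooApi {
        List<String> fetchFooNames();
    }

    // Class under test; its dependency is injected through the constructor.
    static class FooPresenter {
        private final FooApi api;
        FooPresenter(FooApi api) { this.api = api; }
        String firstFooName() { return api.fetchFooNames().get(0); }
    }

    @Test
    public void returnsFirstNameFromApi() {
        FooApi api = mock(FooApi.class);
        when(api.fetchFooNames()).thenReturn(List.of("foo1", "foo2"));
        assertEquals("foo1", new FooPresenter(api).firstFooName());
    }
}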
Also:
"Should I test client with mocked server or with a real instance of my rest service?"
In addition to unit and integration tests you should also perform client-server integration testing; I use automated acceptance testing for this. Using a test framework such as Cucumber (also check out calabash-android, which is written specifically for testing mobile applications), you can write tests for specific features and scenarios which interact with both the client (your Android application) and the server (your RESTful service). These client-server integration tests would start up and stop concrete instances of the client and server.
Mocks are for unit testing, and your description of the tests with mocks is exactly that: you test the client and server as separate units.
Integration testing checks whether the units work well together. Since the interface is a REST interface, mocking makes no sense there; you have to test the real thing over HTTP.
See also What is the difference between integration and unit tests?
If your service is based on Java, I'd strongly recommend looking into the Spock framework for mocking any sort of calls that might be coming from the client. Since Spock is just an extension of JUnit, you might also be able to use it for Android (though, to be fair, I've never done Android development).
I'd say you want to do two things: integration testing and unit testing. Integration testing would attempt to bring up the Android application and cause it to make service calls, ensuring the two sides interact with each other nicely.
However, in your regular commits, I'd suggest unit testing that mocks away everything but the class under test. Spock makes this pretty easy to do, and since it's built on top of JUnit, all it takes is a jar.
There is no reason you can't run automated end-to-end tests with a real service instance. You can run a real service instance on the same test machine you use to run the unit tests, perhaps even in the same container, and you can configure the tests to use a different URL for the server instance when running automated end-to-end testing.
Why would you want to do the extra work of creating a mock service if you can run the tests against the real service?
I would only create a mock service if the service was an external one over which I had no control!
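A minimal sketch of that configuration idea (the property name and the default are assumptions):

public final class TestConfig {

    // Base URL of the service under test; point the same suite at a mock,
    // a local instance, or staging with -Dapi.baseUrl=https://staging.example.com
    public static String baseUrl() {
        return System.getProperty("api.baseUrl", "http://localhost:8080");
    }

    private TestConfig() { }
}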

What is the best way to test clients of different programming languages with a server?

We have written clients in different programming languages (Java, .NET/Silverlight, Flash, Javascript) that communicate with a server, as our target is to support various technologies on the client side. The functionality they are supposed to perform is the same.
One of the main challenges we are having now is finding a simple and effective approach for testing this variety of client technologies against the server. Currently we use Maven, hooked up with many Maven plugins such as JSTestDriver, Flexmojo, NPanday, and others which we have developed ourselves, to do this. Is there any better approach?
Any help would be appreciated, whether it is recommendation for available frameworks/tools or innovative ideas to do this.
Thanks
What you need is a clean design, otherwise everything is a mess and you have to test everything together.
Your server should have an interface for other systems (browsers, desktop applications, mobile apps); test that API thoroughly, using an appropriate framework for the technology the server uses. This should be your main test effort. Then try to keep the API stable, so that for every new version of the server you just run a regression test.
Meanwhile you can test the client applications alone by creating a mock server that uses the same API.
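One lightweight way to build such a mock server, if your toolchain is Java-based, is the JDK's built-in HttpServer; a sketch (the path and payload are made up):

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MockApiServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Serve a canned response that matches the real server's API contract.
        server.createContext("/api/items", exchange -> {
            byte[] body = "[{\"id\":1,\"name\":\"item1\"}]".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start(); // clients in any language can now hit http://localhost:8080
    }
}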
Last would be your integration tests, where you run a live version of your server together with each client application.
expect is a good framework for testing a program's external text interfaces, such as client-server interaction. Tests are written in Tcl and operate at a purely black-box level.

Why would I use Apache ServiceMix over just ActiveMQ

I am starting to plan a new platform which needs to integrate various services from various externals platforms. Essentially I'm tying together a bunch of internal, homegrown services and several outside services we license from 3rd parties.
Generally speaking the external services are all web services but they are a mishmash of REST, SOAP and XML-RPC.
Some of our internal services have REST API's but there are many things that aren't so easy: XMPP, Hessian, custom socket protocols, Java RPC, uWSGI, and the list goes on.
From my research it seems like an ESB like Apache ServiceMix might be a good fit for my needs. However it looks REALLY complex. I'm not launching rockets but I do need transactional messaging (mostly for eCommerce and entitlement stuff). I feel like the message queue ServiceMix uses under the hood (ActiveMQ) might be enough on its own.
Can anyone explain what ServiceMix provides above and beyond ActiveMQ? I know there is a lot, but it is hard for an ESB n00b like me to really grasp the tangible difference when I'm waist-deep in buzzwords.
Thanks!
ServiceMix is an OSGi-based container that allows you to deploy and run applications in a controlled runtime environment (like a J2EE container, but less heavyweight and without having to program to, e.g., J2EE contracts).
Thanks to OSGi you can partition your applications into parts and update/evolve these parts independently of each other. You can upgrade parts of your application without having to take down the entire application. There is far better life-cycle management in OSGi than you get with standalone Java processes.
If you think of creating an application that will evolve over time, then OSGi is something you should consider. And ServiceMix provides you a runtime OSGi container to deploy your applications to. I highly recommend the book "OSGi in Action" from Manning.
For tying together different external services that might even use different transport protocols I recommend Apache Camel, which btw also deploys nicely into ServiceMix.
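For a flavour of what Camel gives you, here is a minimal route sketch (the endpoints are made up, and the camel-activemq and camel-http components are assumed to be available):

import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume orders from an ActiveMQ queue and forward them to an
        // external REST endpoint, bridging the two transports in one route.
        from("activemq:queue:orders")
            .log("Forwarding order ${header.JMSMessageID}")
            .to("http://partner.example.com/api/orders");
    }
}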
Btw, existing applications can be deployed into an OSGi container with fairly little effort (without requiring code changes).
Torsten Mielke
FuseSource
Web: www.fusesource.com
Blog: http://tmielke.blogspot.com

What to know before setting up a new Web Dev Env?

Say you want to create a new environment for a team of developers to build a large website on a LAMP stack.
I am not interested in the knowledge needed for coding the website (php,js,html,css,etc.). This stuff I know.
I am interested in what you need to know to set up a good environment and workflow with a test server, production server, version control, backups, etc.
What would be a good learning path?
As someone who has led this process at several companies, my recommendation is to gradually raise the "maturity" of your organisation as a software factory by incrementally consolidating a set of practices in an order that makes sense for your needs. The order I tend to follow (starting with things that I consider more basic and moving to the more advanced stuff):
Version control - control your sources. I used to work with SVN but I'm gradually migrating my team to Mercurial (I agree with meagar's recommendation of a distributed VCS). A great Mercurial tutorial is hginit
Establish a clear release process, label your releases in VCS, do clean builds in a controlled environment, test and release from these.
Defect tracking - be systematic about your bugs and feature requests. I tend to use Trac because it gives me a more or less complete solution for project management plus a wiki that I use as a knowledge base. But you have choices galore (Jira, Bugzilla, etc...)
Establish routine testing practices: unit tests, e.g. by using one of the xUnit frameworks (make it a habit to write unit tests at least for new functions you write and old code you modify), and integration/system tests (for webapps, use a tool like Selenium); a minimal unit-test sketch follows this list.
Make your tests run frequently, as a part of an automated build process
Eventually, write your tests before you code (Test-Driven Development) and strive to increase coverage.
Go a step forward in your build/test/release cycle by setting up some continuous integration system (to make sure your build and tests are run regularly, at least nightly). I recently started using Hudson and it is great for our Java/Maven projects, but you can use it for any other build process as well
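A bare-bones xUnit example of the habit described in the testing item above (PriceCalculator is a hypothetical class, shown inline to keep the sketch self-contained):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical class under test.
    static class PriceCalculator {
        double discounted(double price, double rate) {
            return price * (1 - rate);
        }
    }

    @Test
    public void appliesTenPercentDiscount() {
        assertEquals(90.0, new PriceCalculator().discounted(100.0, 0.10), 0.001);
    }
}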
In terms of testing environments, I agree with meagar's recommendations. We have these layers:
Test at developers' workstations (each should contain a full setup to run your code)
Staging environment: clone your production environment as closely as possible and deploy and run your app there. We also use VMs.
Production preview: we deploy our app to the production servers with production data but in a different "preview" URL for our internal use only. We run part of our automated Integration tests against this server, and do some additional manual testing with internal users
Production - and keep fingers crossed ;)
In terms of backup, at least for your source code, distributed VCSs give you the advantage that your full repositories are replicated on many machines, minimising the risk of data loss (which is much more critical with centralised repos, as is the case with SVN).
Before you do anything else, ask your developers what they want out of a test/production environment. You shouldn't be making this decision, they should. The answer to this depends entirely on what kind of workflow they're familiar with and what kind of software they'll be developing.
I'd personally recommend a distributed VCS like git or mercurial, local WAMP/LAMP stacks on each developer's workstation (shared "development" servers are silly), and a server running some testing VMs which are duplicates of your production environment. You won't get more specific advice than that without involving your developers.