Testing microservices?

I know this question is a little subjective but I am lost on what to do here. At the moment I am using Go + Go-kit to write some microservices. I'd like to test the endpoints of these microservices in an integration-test fashion, but I am unsure how to go about it. The only thing I can think of is to have shell scripts that hit the endpoints and check for responses. But this seems like a kludge and not a real smart practice. I feel like there should be a better way to do this. Does anyone have any suggestions?

An alternative approach to end-to-end testing is Consumer-Driven Contract (CDC).
Although it is useful to have some end-to-end tests, they have some disadvantages, like:
the consumer service must know how to start the provider service. This sounds like unnecessary information, likely difficult to maintain when the number of services starts ramping up;
starting up a service can be slow. Even if we’re only talking a few seconds, this is adding overhead to build times. If a consumer depends on multiple services, this all starts adding up;
the provider service might depend on a data store or other services to work as expected. This means that now not only the provider needs to be started, but also a few other services, and maybe a database.
The idea of CDC can be described briefly as:
The consumer defines what it expects from a specific request to a service
The provider and the consumer agree on this contract
The provider continuously verifies that the contract is fulfilled
This information is taken from here. Read more in that article; it can be useful even though it is specific to Java.
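To make the idea concrete, a contract can be as small as a shared test that the provider runs on every build. Below is a minimal sketch in Go using only the standard library; the /users/1 endpoint, the userHandler, and the expected fields are hypothetical placeholders, and a real setup would more likely use a dedicated contract-testing tool such as Pact.

    // contract_test.go - a minimal consumer-driven contract sketch using only the
    // standard library. The /users/1 endpoint, userHandler and the expected fields
    // are hypothetical placeholders, not part of any real project.
    package contract

    import (
        "encoding/json"
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // contract captures what the consumer expects from one request.
    type contract struct {
        method     string
        path       string
        wantStatus int
        wantFields []string // top-level JSON keys the consumer relies on
    }

    // userContract is the consumer's side of the agreement.
    var userContract = contract{
        method:     http.MethodGet,
        path:       "/users/1",
        wantStatus: http.StatusOK,
        wantFields: []string{"id", "name"},
    }

    // userHandler stands in for the provider's real handler.
    func userHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]interface{}{"id": 1, "name": "alice"})
    }

    // TestProviderHonoursContract runs in the provider's build, continuously
    // verifying that the agreed contract is still fulfilled.
    func TestProviderHonoursContract(t *testing.T) {
        req := httptest.NewRequest(userContract.method, userContract.path, nil)
        rec := httptest.NewRecorder()

        userHandler(rec, req)

        if rec.Code != userContract.wantStatus {
            t.Fatalf("status = %d, want %d", rec.Code, userContract.wantStatus)
        }
        var body map[string]interface{}
        if err := json.NewDecoder(rec.Body).Decode(&body); err != nil {
            t.Fatalf("decoding response: %v", err)
        }
        for _, f := range userContract.wantFields {
            if _, ok := body[f]; !ok {
                t.Errorf("response is missing field %q required by the consumer", f)
            }
        }
    }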

You can do this in a standard Go unit test using the httptest package. This allows you to create mock Request and ResponseWriter objects that can be passed to any Handler or HandleFunc. You create the appropriate Request, pass it to your handler, then read the response out of the ResponseRecorder and check it against the expected response.
If you're using the default mux (calling http.Handle() to register handlers) you can test against http.DefaultServeMux. I've used it for microservices in the past with good results. Works for benchmarking handlers, routing, and middleware as well.
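As a rough sketch of what that looks like (the healthHandler and the /health route below are invented for illustration):

    // handler_test.go - exercising an HTTP handler with net/http/httptest.
    // healthHandler and the /health route are invented for illustration.
    package main

    import (
        "io"
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    func healthHandler(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        io.WriteString(w, `{"status":"ok"}`)
    }

    func TestHealthHandler(t *testing.T) {
        // Build a fake request and a recorder that captures the response.
        req := httptest.NewRequest(http.MethodGet, "/health", nil)
        rec := httptest.NewRecorder()

        healthHandler(rec, req)

        if rec.Code != http.StatusOK {
            t.Fatalf("got status %d, want %d", rec.Code, http.StatusOK)
        }
        if !strings.Contains(rec.Body.String(), `"status":"ok"`) {
            t.Errorf("unexpected body: %s", rec.Body.String())
        }
    }

To exercise routing as well, register the handler with http.Handle or http.HandleFunc and call http.DefaultServeMux.ServeHTTP(rec, req) instead of invoking the handler directly.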

You should always use Go's native testing framework to test each individual service (please, no shell scripts!). httptest seems fine, but I would argue it is helpful to have finer-grained test boundaries -- you should really have one _test.go for each functional block of your code. Smaller tests are easier to maintain.
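For example, a small _test.go for one functional block might be a table-driven test like the sketch below (the pricing package and Discount function are invented for illustration):

    // pricing_test.go - one small, focused _test.go for one functional block.
    // The pricing package and Discount function are invented business logic.
    package pricing

    import "testing"

    // Discount returns the price after applying a percentage discount.
    func Discount(price, percent float64) float64 {
        return price - price*percent/100
    }

    func TestDiscount(t *testing.T) {
        cases := []struct {
            name    string
            price   float64
            percent float64
            want    float64
        }{
            {"no discount", 100, 0, 100},
            {"ten percent", 100, 10, 90},
            {"full discount", 50, 100, 0},
        }
        for _, c := range cases {
            t.Run(c.name, func(t *testing.T) {
                if got := Discount(c.price, c.percent); got != c.want {
                    t.Errorf("Discount(%v, %v) = %v, want %v", c.price, c.percent, got, c.want)
                }
            })
        }
    }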
In terms of overall integration tests that involve multiple microservices, you shouldn't do them at development time. Set up a staging area and run the tests over there.
My 2 cents.

Related

Windows Workflow 4.5 Paradigm Questions

I've been digging into the technical details and implementation of Windows Workflow 4.5 as a beginner and having decent results. My question is more of a "why and when" vs. a "how to" question so bear with me.
I've taken a concept familiar to us all and abstracted the business logic into WF, namely the universal log-on process. What I wanted to accomplish was having reusable logic that I can call from an MVC website, a Windows Forms application, etc., and have everything run through the same workflow, and I have achieved that.
Now I have 2 conceptual questions as to "when" to apply WF and when to use code.
1 - Take simple validation as an example. I'm trying to log in but I've passed an empty user name or password string. Obviously, I want to send a message back to the end user, "UserName Required" and "Password Required", which I've done. The way I did that is with a validation class (the FluentValidation NuGet package, if it matters), but the important thing is that I'm doing this in code. So, in WF I call my validation code via an ExecuteMethod and everything works just fine. My question is: is this the wrong approach with a WF mindset? Should I be doing inline WF "If" actions/decisions and building up the validation messages inside WF directly, rather than calling out to some chunk of code? I'm using validation here only as a concept we can all relate to; more generally, should I be attempting to put anything and everything I can into WF itself, or is it better to call custom code? I'm looking for best practice with reasoning from seasoned software architects with WF experience, rather than just someone's opinion, if possible.
2 - Picking up a workflow on another machine. Part of the same login workflow activity requires a service method call. I've written the code and workflow in such a way that the workflow receives an In parameter of ILogOnService, which has an interface method "AuthenticateUser". The concrete implementation I'm passing in calls out to an MVC4 Web API post method, asynchronously, to do a standard ASP.NET membership ValidateUser. Again, should I be calling this Web API PostAsync from inside the WF workflow? If so, doesn't that tightly couple my workflow to ASP.NET Membership and my particular service choice? It seems there are ways to get the workflow to a certain point and then resume the process on another machine, e.g. where a service is running, but I'm not able to find good examples of attempting that.
Just looking for some guidelines and ideas from the pros at this technology but I will pick the most informative answer.
There is nothing wrong with using C# code to implement details of a workflow. In fact, I always tell people that if they are using WF4 with just the standard out-of-the-box activities, they are probably doing things wrong. You really need to be creating, or have someone else create for you, custom activities that model business activities for your business. If that means creating an activity that validates a login using FluentValidation, that is perfectly fine. Other times you might build a higher-level business activity out of lower-level WF4 activities; combine them in whatever way works best in your case.
Calling a service with something like PostAsync can work well if you know the action is short-lived and the service is normally available. However, when you get into SOA styles you really want to start using temporal decoupling, so one service is not dependent on another service being available right away. And when you get into temporal decoupling you really want to be using queues, maybe MSMQ or another similar technology. In that case you really want to send a one-way message with a response queue and have the workflow go idle and wait for the response message to arrive. This would reload the workflow, possibly on another machine. That might not always be appropriate; for example, for your login it would not be much use to grant the login a day later because the membership service was unavailable. But this approach can result in very scalable and fault-tolerant systems. Of course there is no free lunch, as these systems are very hard to design properly.

Is it possible to unit test service contracts without having to run the actual service?

I work on an application that uses a WCF service (which in fact is a service-client solution).
The problem that came up was that when we did a bit of refactoring, it turned out that some of the service contracts became invalid. This did not show up until the service and application were running.
Now, I would like to write test cases that simply test the service contracts, so that the test cases fail when they mismatch. Is this possible to do without having to run the actual service?
That is, can I somehow simulate the service part and make the client calls at the same time in the test case?
I would say this is not possible, because there are many traps you can run into when you execute functions over the network.
E.g. timeouts, connection failures, authorization problems and so on.
Writing offline unit tests for the code in the methods themselves should be possible, but IMHO that is just one small part of the work.
I'm just speaking my mind here. Would .NET Reflection solve your problem? I mean, inspecting the binaries that contain the service contracts that you want to verify.

good practice: REST API as the interface between the interface layer and business layer?

I was thinking about the architecture of a web application that I am planning on building and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an android application to access it, I was already thinking about having an API.
Given the fact that I will want to have an external API to my application from day one, is it a good idea to use that API as an interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra effort of developing an API. Often that simply means more than one client, as the benefits of having an API will become evident when you come to release changes or bug fixes. Also consider the added complexity, the extra code maintenance overhead, and any benefits that might come from separating the interface and business layers, such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architecture, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the Business Layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a Web Service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this, however I consider the correct architectural guideline to follow is to have a well-defined Service Interface on the server that is accessed by a Gateway on the client. You would then use POCO DTOs (plain old DTOs) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid having to deal with serialization, as it should be handled transparently by the Client Gateway and Service Interface libraries.
It really depends on how big your project/app is whether or not you want to go through the effort of mapping your DTOs to the client and server domain models. For large applications, the general approach would be, on the client, to map your DTOs to your UI models and have your UI views bind to that. On the server, you would map your DTOs to your domain models and, depending on the implementation of the service, persist that.
REST is an architectural pattern which for small projects I consider additional overhead/complexity, as it is not as good a programmatic fit compared to RPC / document-centric web services. In short, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred content type indicated by your HTTP client (i.e. in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically, REST is a nice approach promoting a loosely coupled architecture and re-use, but it requires more effort to adhere to than standard RPC/document-based web services, and its benefits are unlikely to be visible in small projects.
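To make the resource idea concrete, here is a minimal sketch (in Go, to stay consistent with the examples earlier in this thread); the /customers/reports/1 route and the Report type are invented, and a real service would route the report ID dynamically:

    // A resource-oriented endpoint with naive content negotiation. The
    // /customers/reports/1 route and the Report type are invented; a real
    // service would route the report ID dynamically.
    package main

    import (
        "encoding/json"
        "encoding/xml"
        "net/http"
        "strings"
    )

    type Report struct {
        ID    int    `json:"id" xml:"id"`
        Title string `json:"title" xml:"title"`
    }

    func reportHandler(w http.ResponseWriter, r *http.Request) {
        report := Report{ID: 1, Title: "Quarterly summary"}

        // Pick a representation based on the client's Accept header.
        switch {
        case strings.Contains(r.Header.Get("Accept"), "application/xml"):
            w.Header().Set("Content-Type", "application/xml")
            xml.NewEncoder(w).Encode(report)
        default:
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(report)
        }
    }

    func main() {
        // Canonical, resource-oriented URL rather than /GetCustomerReports?Id=1.
        http.HandleFunc("/customers/reports/1", reportHandler)
        http.ListenAndServe(":8080", nil)
    }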
If you're still evaluating what web service technology you should use, you may want to consider using my open source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly-typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling each web service you create to be exposed as SOAP 1.1/1.2, XML and JSON web services automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, i.e. native desktop or web app, etc.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true, it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself, does it make sense? REST is a tool like any other, and while a good one, not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data - maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is, what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info - and it's very extensible, so you can add a lot of custom metadata and still be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud [Azure] platform to talk between the data producers and consumers. Just a thought.

SOA Services Testing

What is the best way to test SOA services? Should I write my own tests using WCF, or should I be using a testing framework such as SoapUI? What are the limitations of each method, and are there better tools?
You definitely should be using SoapUI, especially in a mixed environment (Java, Delphi, WCF, etc.), where SoapUI will be your common tool that can confirm what works and what doesn't. It can also be used to set up mock services so you can test against a service that isn't yet built, i.e. from the WSDL you can build something in minutes that will log requests and give responses. That's hugely beneficial. Down the road, you'll be able to verify what works and what doesn't using the common tool, rather than fighting about "it works here in technology X, so it must be a problem at YOUR end".
Look into the mock services demo where they show how to do simple canned responses based on XPath. Very simple, and effective. You can send a request and get back a variety of predictable responses. For example, you send updates for employees Tom, Dick, and Harry. Configure your SoapUI mock service to return success for Tom, a soft error for Dick, and a catastrophic error for Harry.
IMO, the best place to start before building any web service is to build a mockservice in SoapUI. Then you can test with sample payloads and see if everybody is seeing what they expect. i.e. HR sends a new employee to Payroll, using the WSDL that everyone agreed to. The Payroll dev hasn't even coded his part yet, but by looking at the transaction in SoapUI, he sees that the EmpID format is "totally not going to work on our end". Now HR can make a change. The Payroll dev also sees that the Termination Dates are 12/31/1889 for employees that haven't been fired yet. He expected ''. Now a discussion can ensue between the devs and analysts, instead of later on during integration or startup, when the discussion would likely involve several layers of PMs, "situation leads", etc..
I suggest you also take a look at the brand new SO-Aware from Tellago Studios: http://www.tellagostudios.com/ . One of its features is automatic service testing.
SOA testing ensures that all independent services behave in the expected manner, while adhering to the input and output contracts established by those services. The tool should not just limit itself to web service testing.
SOA testing tools:
Soap UI
SOArite.

How to Test an Undocumented Web Service?

I came across this question recently; could anyone please help me with what my approach should be as a tester?
Suppose there is a web service whose functionality has been changed and there is no documentation available for it. What would your approach be to testing it?
Update: Does the same answer hold if database functionality has changed and there is no documentation?
It seems you might be asking one of two different questions:
1) How to discover the API of a black-box web service.
In this case, the best source would be the source of the web service (given the absence of documentation); alternatively, look at existing clients, or at the ?wsdl of the service.
2) How to discover what are correct and incorrect responses from the web service.
For this you need either requirements, documentation, or correct clients. Probably the most likely to exist in this case is a client. Alternatively, the web service might implement some function whose results can be confirmed externally.
You can't test something with no documentation. How would you know what results to expect?
Maybe you're looking for "documentation" in the wrong place. Somebody made these changes. They had some information telling them what changes to make to the database and to the service. There may even be a requirements document, but maybe also some design documents.
Get those, and use them to figure out what changed. Use that information to decide how to change your tests.
If you are using the service in a useful way, then presumably you have some calls which return some known results, even though this may not be documented. If this is the case then I would write tests which validate my expectations of the service as it is currently. Then at least if changes are made you'll have more chance of knowing which bits have changed that affect you.
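One way to capture those expectations is a small characterization test that simply pins down what the service returns today. Here is a sketch in Go against a hypothetical /status endpoint; the SERVICE_URL variable and the version field are assumptions, not part of the original question:

    // characterization_test.go - pin down the service's current behaviour so
    // later changes become visible. The SERVICE_URL variable, the /status
    // endpoint and the version field are assumptions for illustration.
    package servicetest

    import (
        "encoding/json"
        "net/http"
        "os"
        "testing"
    )

    func TestKnownCallStillBehavesAsBefore(t *testing.T) {
        base := os.Getenv("SERVICE_URL")
        if base == "" {
            t.Skip("SERVICE_URL not set; skipping live characterization test")
        }

        resp, err := http.Get(base + "/status")
        if err != nil {
            t.Fatalf("calling service: %v", err)
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            t.Fatalf("status = %d, want %d", resp.StatusCode, http.StatusOK)
        }

        var body map[string]interface{}
        if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
            t.Fatalf("decoding response: %v", err)
        }
        // These checks simply record what the service returns today.
        if _, ok := body["version"]; !ok {
            t.Errorf("response no longer contains a version field: %v", body)
        }
    }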
Generally speaking, a web service provides a consistent contract between the providing service and callers. It specifies that whilst the back-end implementation might change, the interface for the service will remain consistent.
If you are interested in discovering what functions are available for the service, it may well provide metadata that documents its available functions and message types. Usually, this is accessible by appending "?wsdl" to the web service URL, although other schemes exist.
Once you have a good idea of the available functions, you can begin to invoke them through your testing framework and evaluate the responses in accordance with your usual test processes.