We're overhauling the reliability of our frontend and backend service contracts and are investigating two tools/techniques that seem to conflict: consumer and provider code generation from an OpenAPI Spec (OAS) with a tool like openapi-generator, versus consumer-driven contract (CDC) testing with a tool like Pact.
OAS Code Generation
OAS works great for generating the consumer code, and we're working on integrating provider-side generation to complete the contract confidence on both sides. As long as contract alterations start with the OAS, and providers and consumers generate their code from it, is this a suitable strategy?
Pact Unit Testing
Pact CDC testing doesn't seem to involve an OAS at all, but instead programmatically builds contracts between the consumer and provider via unit testing. When using a Pact broker, the can-i-deploy tool seems like a nice addition to a CI/CD pipeline. One nice thing with Pact is that it appears to support Kafka event mocking, which is something openapi-generator doesn't cover.
If every service, front and back, is using OAS code generation, is Pact useful? I could see its utility in an environment without codegen, but otherwise it starts to feel redundant, if not conflicting.
Thanks for any insight or anecdotes you can provide!
Pact is a contract testing framework that uses specification by example to ensure providers actually implement what the consumer needs. This removes ambiguity, but comes at a cost (writing and maintaining tests). It should be noted that the status quo here is end-to-end tests, which are more expensive than contract tests for this purpose.
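To make "specification by example" concrete, here is a minimal sketch of a Pact consumer test using Pact-JVM with JUnit 5. The UserService/WebApp names, the /users/42 endpoint, and the response fields are all hypothetical; the interaction defined in the @Pact method gets recorded into a pact file that the real provider is later verified against.

```java
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "UserService")
class UserClientPactTest {

    // The "example": given this provider state, this request yields this response
    @Pact(consumer = "WebApp")
    public RequestResponsePact userExists(PactDslWithProvider builder) {
        return builder
            .given("user 42 exists")
            .uponReceiving("a request for user 42")
            .path("/users/42")
            .method("GET")
            .willRespondWith()
            .status(200)
            .body(new PactDslJsonBody()
                .integerType("id", 42)
                .stringType("name", "Alice"))
            .toPact();
    }

    @Test
    void getUser(MockServer mockServer) throws Exception {
        // In a real test you would point your (possibly generated) client
        // at mockServer.getUrl() rather than calling it by hand like this.
        HttpRequest request = HttpRequest
            .newBuilder(URI.create(mockServer.getUrl() + "/users/42"))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```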
This article talks about the differences between schemas and contract tests, and how the two might be used together.
If every service, front and back, is using OAS code generation, is Pact useful?
The short answer is another question: how do you ensure that compatible versions of your consumers and providers are in sync? If there are breaking changes between versions of your provider, you now need to synchronise the release of all consumers. And if there is a problem with the release, you then need to reverse it all out. This is a key problem that contract testing addresses.
Pact CDC testing doesn't seem to involve an OAS at all
Pactflow has a feature that combines these two approaches; the philosophy behind it is outlined here.
When using a Pact broker, the can-i-deploy tool seems like a nice addition to a CI/CD pipeline. One nice thing with Pact is that it appears to support Kafka event mocking, which is something openapi-generator doesn't cover.
Yes, that's more of a practical benefit of the Pact Broker and the Pact ecosystem. If you need to expand beyond REST (and what OAS can document), you will need a different strategy; Pact might be the more general-purpose option for those use cases.
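On the Kafka point: Pact's message (asynchronous) interactions let you contract-test events without a real broker. A minimal Pact-JVM sketch follows; the OrderEvents/OrderProcessor names and the event fields are hypothetical, and the consumer-side handling is stubbed for brevity.

```java
import au.com.dius.pact.consumer.MessagePactBuilder;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.consumer.junit5.ProviderType;
import au.com.dius.pact.core.model.annotations.Pact;
import au.com.dius.pact.core.model.messaging.Message;
import au.com.dius.pact.core.model.messaging.MessagePact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.util.List;

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "OrderEvents", providerType = ProviderType.ASYNCH)
class OrderEventConsumerTest {

    // The contract: what an "order created" event must look like
    @Pact(consumer = "OrderProcessor")
    public MessagePact orderCreated(MessagePactBuilder builder) {
        return builder
            .expectsToReceive("an order created event")
            .withContent(new PactDslJsonBody()
                .stringType("orderId", "o-123")
                .decimalType("total", 19.99))
            .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "orderCreated")
    void handlesOrderCreated(List<Message> messages) {
        // Feed the contract's example payload into your real event handler;
        // here we only assert that reading the payload doesn't blow up.
        for (Message message : messages) {
            assertDoesNotThrow(() -> new String(message.contentsAsBytes()));
        }
    }
}
```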
Related
I am in a situation where I can use Service Fabric (locally) but cannot leverage Azure Service Bus (or anything "cloud"). What would be the equivalent for queuing/pub-sub? Service Fabric is allowed since it is able to run in a local container and is "free". Other third-party messaging infrastructure, like RabbitMQ, is also off the table (at the moment).
I've built systems using a home-grown bus, built on MSMQ and WCF, but I don't see how to accomplish the same thing in SF. I suspect I can have SF services use a custom ICommunicationListener that exposes MSMQ, but that would only be available inside the cluster (the way I understand it). I can build an HTTP bridge (in SF) in front of those to make them available outside the cluster, but then I'd lose the lifetime decoupling (a client being able to call a service, via queues, even if that service isn't online at the time), since the bridge itself wouldn't benefit from any of the aspects of queuing.
I have a few possibilities, but all of them suffer from some malady that only exists because I'm running SF locally. Also, the same code needs to deploy easily to full Azure SF (where I can use ASB and this issue disappears), so I don't want to build two separate systems just because of where it's hosted in some instances.
Thanks for any tips.
You can build this yourself, for example like this. It uses a BrokerService that distributes message data to subscribed services and actors.
You can also run a containerized queuing platform like RabbitMQ with volumes.
By running the queue system inside the cluster, you won't introduce an external dependency.
The problem is not SF. The main issue with your design is that you are coupling architectural requirements to implementations. SF runs on top of virtual machines; in the end, the only difference is that SF places the services on those machines, whereas with another solution you would have an agent deploying the services there, or you would deploy them manually. The challenges are the same.
It is clear from the description that your design requires a message queue, and the concept of a queue is the same whether it is Service Bus, RabbitMQ, or MSMQ. Each of them provides the basic foundations of queuing plus implementation-specific extras: some add transactions, some implement multiple patterns, and so on.
If you design around a specific implementation, you couple your solution to that implementation, make your solution hard to maintain, and face challenges like the ones you describe.
Solutions like NServiceBus and MassTransit remove much of this coupling from your code, and if you think they are not enough, you can create your own abstraction. You then use configuration to tie your business logic to a concrete implementation.
Despite the above advice, I would not recommend using different solutions per environment, because, as said previously, each solution has its own implementation details and they do not map cleanly onto each other. For example, you might face issues in production because you developed against MSMQ in the DEV and TEST environments, but Production uses Service Bus; they have different limitations, such as message size, retention period, and so on.
If you are willing to use MSMQ, you can add MSMQ to the VMs running your cluster and connect from your services without any issue. Take a look at this SO question first: How can I use MSMQ in Azure Service Fabric
I would like to know whether these are the same or different feature-wise. Could you also mention the pros and cons of each? Please also mention real-world use cases for an embedded BrokerService versus an installed ActiveMQ broker. Thanks in advance!
ActiveMQ is just a Java application, and the embedded version offers essentially the same features as the stand-alone version. In fact, you can configure an embedded broker to take its configuration from an XML file, in which case it will look very similar to the stand-alone broker.
Embedding a broker is a reasonable thing to do if you need the benefit of programmatic configuration; that is, you want to configure things according to rules which are hard to implement in an XML file. It also makes sense if you want close-coupled operation between the broker and the application components, with message data being passed in memory. This might be the situation if you're using JMS as an inter-module communication mechanism within the application.
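For instance, here is a minimal sketch of starting an embedded broker programmatically (ActiveMQ 5.x API; the connector URI and persistence settings are just illustrative choices):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

import javax.jms.Connection;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        // Configure the broker in code instead of activemq.xml
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);                  // in-memory only, no message store
        broker.addConnector("tcp://localhost:61616"); // optional: expose to external clients
        broker.start();

        // In-JVM components can skip the network entirely via the vm:// transport,
        // passing message data in memory as described above.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
        broker.stop();
    }
}
```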
Embedding a broker has the disadvantage -- and it can be a profound one -- of making it difficult to disentangle problems in the broker from problems in your application. Figuring out the cause of, say, runaway memory consumption could be very difficult. You can get commercial support for ActiveMQ, should you need it, but it will be hard for any commercial organization to support a hybrid broker+application installation.
We manage an application that consumes a number of external services as part of its general operation. Some are SOAP services, others RESTful APIs. Some services are managed by us; others are third-party services. Some are central to the application's functionality; others are auxiliary/non-mandatory.
Each external service exposes a 'test' and 'live' environment. We currently follow the policy that when our application is under test (that's development, testing and staging phases), it should consume the test version of the external service. It is only in our live environment that the live versions of the services are consumed.
There is a not-insignificant amount of overhead in managing which version of the service to consume between environments, but this is not the issue. My question is whether or not this policy is a good idea? Would we be better served instead by always consuming live versions of external services? Have we made the mistake of exposing the test versions of the external services we manage ourselves, i.e. should test environments remain private?
We have not (yet) been burned by not pointing to live external services until the application reaches 'live' but I accept that part of our problem is that we lack the granularity in our environments - by grouping development, testing and stage under the 'test' umbrella, we lose the ability to test against live external services.
All I can see at the moment is that there is little to be gained by consuming the test services in the test environments, and there is negligible cost involved in consuming live third-party external services. Also, our own services might need to be aware that they are being consumed by a client in the 'test' phase, but this could probably be accommodated.
I understand that this scenario is somewhat open-ended, but there only seem to be two ways to go?
My concern would be accidentally modifying production data when running non-production instances of your application. As soon as you do one SetX(), POST/PUT, insert into/update, what have you, you are up the creek. That's a sneaky kind of bug that can be very hard to find.
If you're strictly consuming, then in theory it doesn't make a difference. In practice, I'd still be concerned. In your position, I'd probably be quite happy to have a non-live option. Otherwise I'd be thinking about stubbing out all those external services.
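On the stubbing option: here is a minimal sketch using WireMock to stand in for a third-party API in non-live environments (the port, path, and payload are hypothetical):

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class ExternalServiceStub {
    public static void main(String[] args) {
        // Start a local HTTP server that impersonates the external service
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Canned response for the endpoint our application consumes;
        // nothing the application does here can touch production data.
        server.stubFor(get(urlEqualTo("/rates/latest"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"gbpUsd\": 1.27}")));

        // Point the application's external-service endpoint at http://localhost:8089
    }
}
```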
I am working on a project which has following requirements:
Perform sticky load balancing (based on SOAP session ID) across multiple backend servers.
Possibility to plug in my own custom load balancer.
Easy to write and deploy.
A central configuration file (possibly XML) to keep track of all the backend servers.
Easy extraction of a node from this configuration file (possibly with XPath).
I tried working with Camel for a while but wasn't able to perform certain tasks with it.
So I thought of giving Akka a try.
Will Akka be able to satisfy the above requirements?
If so, is there a load balancing or proxy example in Akka?
Would really appreciate some feedback.
You can do everything you've described with Akka.
You don't mention what language you're working with, Scala or Java. I've included links to the Scala documentation.
Before you do anything with Akka you HAVE TO read the documentation and understand how Akka works.
http://doc.akka.io/docs/akka/2.0.3/
Doing so, you'll find Akka is perfect for the project you've described with some minor caveats.
Once you read the documentation the following answers should make a lot of sense.
Perform sticky load balancing (based on SOAP session ID) across multiple backend servers.
Load balancing is already part of the framework (it's called Routing in Akka http://doc.akka.io/docs/akka/2.0.3/scala/routing.html) and Remoting (http://doc.akka.io/docs/akka/2.0.3/scala/remoting.html) will take care of the backend servers. You can easily combine the two.
To my knowledge, sticky load balancing is not part of Akka out of the box, but I can envision accomplishing it with a Map using the session ID as the key and the actor name (or path) as the value. A quick actorFor will take care of the rest. It's not fully thought out, but it should give you a good idea of where to start; see the sketch below.
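A minimal sketch of that idea as a routing actor (written against the modern classic Akka Java API rather than the 2.0-era one; SessionedRequest and the round-robin choice of the first backend are assumptions for illustration):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical envelope carrying the SOAP session ID extracted upstream
class SessionedRequest {
    final String sessionId;
    final Object payload;

    SessionedRequest(String sessionId, Object payload) {
        this.sessionId = sessionId;
        this.payload = payload;
    }
}

// Sticky router: the first message for a session picks a backend
// (simple round-robin); later messages for that session reuse it.
class StickyRouter extends AbstractActor {
    private final List<ActorRef> backends;
    private final Map<String, ActorRef> sessions = new HashMap<>();
    private int next = 0;

    StickyRouter(List<ActorRef> backends) {
        this.backends = backends;
    }

    static Props props(List<ActorRef> backends) {
        return Props.create(StickyRouter.class, () -> new StickyRouter(backends));
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(SessionedRequest.class, req -> {
                ActorRef target = sessions.computeIfAbsent(
                    req.sessionId,
                    id -> backends.get(next++ % backends.size()));
                target.forward(req, getContext()); // preserve the original sender
            })
            .build();
    }
}
```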
Possibility to plug in my own custom load balancer.
Refer to the Routing documentation.
Easy to write and deploy.
This depends on your aptitude and effort, but after you read certain parts of the documentation you should be able to build a proof of concept in a couple of hours.
Deployment can be a bit frustrating, mostly because the documentation isn't great with respect to deploying Akka networks with remote components. However, there are enough examples on the web that you can figure out how to get it done...eventually. Once you've done it once, it's no big deal.
A central configuration file (possibly XML) to keep track of all the backend servers.
Akka uses Typesafe Config (https://github.com/typesafehub/config), which is a lot easier to work with than XML (but I hate XML, so take that with a grain of salt). As far as central configuration goes, I'm not sure exactly what you're trying to accomplish, but it sounds like something that can be solved using remote actor creation. Again, see the Remoting documentation.
Easy extraction of a node from this configuration file (possibly with XPath).
Akka provides a lookup method .actorFor. There's no need to go to the configuration file once the system is up and running.
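For completeness, here is a minimal sketch of keeping the backend list in Akka's own configuration format and reading it at startup (the backends.hosts key is hypothetical; no XPath required):

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

import java.util.List;

public class BackendConfig {
    public static void main(String[] args) {
        // application.conf (HOCON) on the classpath might contain:
        //   backends { hosts = ["host1:2552", "host2:2552"] }
        Config config = ConfigFactory.load();
        List<String> hosts = config.getStringList("backends.hosts");
        hosts.forEach(System.out::println); // e.g. feed these into remote actor lookups
    }
}
```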
If so, is there a load balancing or proxy example in Akka?
Google is your friend.
I understand to an extent that it helps applications communicate regardless of their location. Why is it important and what is an example of a real-world use of WCF?
WCF is a generic communication mechanism that allows you to set up generic client/host communication between two parties. The neat thing about WCF is that it allows you to configure service properties such as transport (HTTP, named pipes, TCP, Tibco EMS), security models (any of the W3C standards), compression, encoding, timeouts, etc., without changing ANY code. That is powerful. Best of all, you can configure it so that you have a service in C# and a client in Java (or any other language, or the other way around), as long as they both talk using the same mechanisms.
You can create a standard HTTP SOAP web service using WCF and one day decide to switch it to use the faster named pipes for local communication. You can create web services that talk over TibcoEMS and have easy failover on the queue level. You can create a file streaming web service that distributes all kinds of images/videos to your application.
Here is a brain dump that I think might be useful for understanding the whole scenario.
Reason for Creating WCF: Background
In modern (distributed) application development, we use various architectures and technologies for communication, e.g.:
COM+
.NET Enterprise Services
MSMQ
.NET Remoting
Web Services
These technologies all have different architectures, so learning all of them is tricky and tedious, and a developer ends up focusing on each technology rather than on the application's business logic.
So Microsoft unified these capabilities into a single, common, general service-oriented programming model for communication. WCF provides a common approach, using a common API, with which developers can focus on their application rather than on the communication protocol.
Nowadays we call it WCF.
What Exactly Is a WCF Service?
WCF lets you send asynchronous messages from one service endpoint to another. A message can be as simple as a single character or word sent as XML, or as complex as a stream of binary data.
Windows Communication Foundation (WCF) supports multiple languages and platforms. WCF provides a runtime environment for your services, enabling you to expose CLR types as services and to consume other services as CLR types.
A few sample scenarios include:
A secure service to process business transactions.
A service that supplies current data to others, such as a traffic report or other monitoring service.
A chat service that allows two people to communicate or exchange data in real time.
A dashboard application that polls one or more services for data and presents it in a logical presentation.
Exposing a workflow implemented using Windows Workflow Foundation as a WCF service.
A Silverlight application to poll a service for the latest data feeds.
Why on Earth Should We Use WCF?
From a CodeProject article (thanks to Mehta Priya), I found the following scenarios to illustrate the concept. Let us consider two scenarios:
The first client uses a Java app to interact with our service, so for interoperability this client wants messages in XML format and the protocol to be HTTP.
The second client uses .NET, so for better performance this client wants messages in binary format and the protocol to be TCP.
Without WCF Services
Now, for the stated scenarios, what happens if we don't use WCF? (The original article illustrates Scenario 1 and Scenario 2 with diagrams.) We end up implementing each endpoint with a different technology.
Those are two different technologies with completely different programming models, so developers have to learn multiple technologies.
To unify them and bring all these technologies under one roof, Microsoft came up with a new programming model called WCF.
How Does WCF Make Things Easy?
You implement the service once, and you can configure as many endpoints as are required to support all the client needs.
To support the two client requirements above:
- we would configure two endpoints;
- we can specify the protocols and message formats that we want to use in each endpoint's configuration, as sketched below.
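A minimal sketch of what that configuration might look like (the service/contract names and addresses are hypothetical): one basicHttpBinding endpoint serves SOAP/XML over HTTP for the Java client, and one netTcpBinding endpoint serves binary messages over TCP for the .NET client, with no change to the service code.

```xml
<system.serviceModel>
  <services>
    <service name="MyCompany.OrderService">
      <!-- Endpoint 1: interoperable SOAP/XML over HTTP for the Java client -->
      <endpoint address="http://localhost:8080/orders"
                binding="basicHttpBinding"
                contract="MyCompany.IOrderService" />
      <!-- Endpoint 2: binary messages over TCP for the .NET client -->
      <endpoint address="net.tcp://localhost:8081/orders"
                binding="netTcpBinding"
                contract="MyCompany.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```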
References:
WCF : What , Why and When https://vishalnayan.wordpress.com/2010/12/31/wcf-what-why-when/
Why we use WCF Service? http://www.codeproject.com/Tips/815742/Why-We-Use-WCF-Service-and-Sample-of-WCF-Service
What Is Windows Communication Foundation https://msdn.microsoft.com/en-us/library/ms731082(v=vs.110).aspx
Windows Communication Foundation Basics http://www.codeproject.com/Articles/255114/Windows-Communication-Foundation-Basics
There's little to add to the responses so far, especially the one from "siz".
One thing to add is that WCF is the current way to do web services on the .NET platform. It's not the "new" way, it's the current way. ASMX web services are the old and just barely maintained way. One Microsoft employee has publicly stated that only critical security fixes will be made to the ASMX platform, so if you intend for your services to be useful more than a year from now, don't use ASMX.
In addition to the typical "web service" use cases, WCF handles atypical cases, like binary communication over named pipes, message queues, etc. To a very large extent, the service you write to support something simple like SOAP over SSL can also support these other protocols, with no changes to the code.
To answer the "real world" bit: I'm just finishing up a dispatch system in which a Visual Basic 6.0/Access alarm receiver, a WPF/SQL ERP system, and an iPhone application all share information to schedule and execute jobs.
Essentially, the use case is where you want two separate applications to talk to each other somehow when their locations are unknown (they could be on the same machine but in different application domains, on the same network, or on the other side of the internet).
You can easily embed it into a Windows Forms application. That was a nice thing to discover. It is so much easier than .NET Remoting too.
There are a number of reasons why it is advantageous over classic ASP.NET web services (.asmx).
A couple of these off the top of my head are:
The ability to have multiple bindings for the same service call means the message doesn't have to serialise into XML and back if you simply want to communicate inside a web farm.
The way contracts are defined is much more forgiving when it comes to multiple versions of the same contract.