I've seen several questions revolving around that theme on SO, but no answer that really satisfies me.
I'm trying to put words on things I feel without always being able to express them clearly enough to convince people around me. Might be that I'm wrong. Might be that my understanding is not deep enough to find proper arguments.
How would you contrast developing applications according to a "service oriented approach" instead of a "traditional" API approach?
Let's be totally clear here that, by services, I don't necessarily mean Web Services.
Here are some differences I see. Please correct me if I'm wrong:
a service is a "living thing" that you can talk to, according to a given and explicit protocol. A service has its own runtime while a library uses the runtime of your application. You can move that "living thing" wherever you want
a library allows code-based integration, while services traditionally use message-based integration (however, nothing really prevents you from writing a library based on exchanging messages)
services are discoverable
contracts are explicit and expressed "outside" the running code
services are autonomous (but here again, you could write autonomous APIs, couldn't you?)
boundaries are explicit
What am I missing here? What else really distinguishes services from a high-level API?
Service-oriented architecture implies that the exposed interface does not live on the same host where the client runs, and that the service is completely decoupled from the client code (loose coupling). With an API, you can simply load the necessary library and execute your code on the same node. Rather than defining an API, service-oriented architecture focuses on the functionality; in many cases you can access the same feature using different protocols.
If there is one thing that distinguishes a service-oriented approach from an API-oriented one, I would say it is this loose code coupling.
You have covered the most important points. I would add one:
Usually, a Service is stateless. Each Service request is independent. This is in contrast to a library interface where you may make certain calls in a sequence to get the desired result.
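As a rough illustration (the interfaces and types below are invented for the example, not taken from any framework), a library often expects the caller to drive a stateful call sequence, whereas a service takes one self-contained request:

    // Library style: the caller drives a stateful call sequence.
    public interface IOrderBuilder
    {
        void Open(string customerId);      // must be called first
        void AddItem(string sku, int qty); // depends on Open having been called
        decimal Submit();                  // result depends on the preceding calls
    }

    // Service style: each request is independent and carries everything it needs.
    public class OrderItem
    {
        public string Sku { get; set; }
        public int Quantity { get; set; }
    }

    public class SubmitOrderRequest
    {
        public string CustomerId { get; set; }
        public OrderItem[] Items { get; set; }
    }

    public interface IOrderService
    {
        // One self-contained message in, one result out; no hidden session state.
        decimal SubmitOrder(SubmitOrderRequest request);
    }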
PROBLEM: The application uses Axon Framework and org.axonframework.eventsourcing.EventSourcingRepository, and responses need to include _links in HAL format.
RESEARCH: This can be done with Spring HATEOAS, but a lot has to be hand-coded in the REST controller. Spring Data REST offers auto-generation of links with a single annotation on a CRUD repository, but the project is not RDBMS- and JPA-based, so Spring Data REST is not an option.
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
Gotcha, so you are essentially looking to expose a service's capabilities when it comes to which commands can be handled by a given Command Handling Component, disregarding whether that component is an Aggregate or an External Command Handler.
Note that the interaction between a component which dispatches commands and one which handles them resides within the CommandBus. When an Axon application starts up, it is the CommandBus that receives the registrations for all known command handlers.
That way, the CommandBus provides location transparency for this part of the application. And it is this location transparency that gives you clear and cleanly segregated components; essentially, that is what will help you take an evolutionary approach to microservices (as AxonIQ describes here).
I would thus question the necessity of sharing all known command handlers of a given service/aggregate through REST.
Regardless, whether it makes sense is always a question of "it depends". I for one have created a means to share the known commands a service could handle as a JSON schema, as you can see here in a sample project I helped build between AxonIQ and Pivotal.
So, to come round to your question:
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
No, Axon does not provide something like this out of the box, as it expects you to use the CommandBus for communication. I do know you might need a starting point somewhere, for which REST makes sense, but even then exposing all known commands can be regarded as exposing your internal domain to the outside world. In the majority of scenarios that would be undesirable, but as stated, this highly "depends" on your use case.
I'm in the middle of writing my first web app. Just wondering what the conventions are when it comes to REST API design. Is it better to have the API reflect my server-side architecture, or whatever seems easier to reason about?
I'm thinking of either doing:
/serviceProvider/product
or
/product/serviceProvider
My server-side architecture is separated into modules organized by service provider; however, they all expose a product query API.
APIs should ideally be designed to make the most sense for their consumers. There isn't really a good reason to reflect your "server architecture" at all. In fact, doing so is usually called a leaky abstraction or a leaky API and is considered bad practice, mainly because your application structure may change, and then you have these possible scenarios:
you need to change your API, which is a non-trivial task when it's already being used by someone;
your API stops being reflective of your application structure which leads to inconsistencies;
exposing your application structure or database schema to the world may have security implications.
With these things in mind, you might as well design the API with focus on ease of use in the first place. The consumer of your API doesn't need to know or care about your application architecture.
I believe that keeping the API aligned with the architecture is important, because it forces you to offer a simple API, and that in turn enforces a simplified architecture on the server side.
That said, you obviously don't want to expose every server-side method, or even every server-side property of the returned objects.
At Kaltura we also believe in flat (not nested) paths to simplify the API.
For more guidelines, see my blog: http://restafar.com/create-new-rest-server/
I'm starting to find myself getting more and more into using WCF for projects I implement for internal use (automating company tasks, making sure all clients are on the same page, etc.). This is largely due to the 3-10 clients I am automating at once whenever I implement a solution, and (even if that is a small sample) the company is growing, which continually adds more clients to the pool and thus raises the demand for reliability and consistency. With that said, I'm recognizing how important it is to make things expandable, as pushing a release was previously getting harder the more clients depended on the service.
My latest project has the potential of being externalized. Until now I've done it the way I know works, but I'd still like to travel down the "right" path in terms of future updates. How should I be setting up my project to make it as easy and seamless as possible to keep maintained, up to date, and expandable? Should I be placing version numbers into the namespace (as in Company.Interfaces.Contracts.June2011.IMyService), using pseudo-folders, ...?
I just don't feel confident in this aspect of moving forward. I'd like to know that whatever groundwork I put in place now won't place burdens on future expansion and customization later. I'd also like to stick to the "development norm" as much as possible, as it's getting more plausible that we'll hire additional programmers to help with the workload.
Does anyone with this kind of experience have any thoughts, suggestions, guidance in this field? I would really appreciate any examples, books, documentation, etc. that you can provide.
Update (06-17-2011)
To give some insight, I also have some specific questions. These include:
How do you decorate a service class vs. a DTO in terms of namespace? I've seen http://service.domain.com/ServerName/Version used on the service class itself and http://types.domain.com/ServiceName/Version used on the DTOs. Is this common? (Separating the namespace into a type collection and a service collection?)
Should I be implementing IExtensibleDataObject on all my objects on the basis that they could potentially be evolved in future releases? (Lay the groundwork now.)
If my database has constraints on it for (e.g.) string length, should I be implementing IParameterInspector and using it for validation (keeping logic and validation separate)? See the sketch after this list.
Should the "actual service" be broken out into its own class so that, as I version, the service contract classes just call into that code (keeping each new version release as minimal as possible)? Or should I keep it within the service class and inherit from it for any new methods (and likewise, what happens when you remove a method)?
I'm sorry if I have a lot of questions; I just see two ends of the spectrum in the documentation. I see "setting up WCF" and then it jumps directly to "this is a versioned WCF service", with no segue or steps in between. I'm assuming it's going to just "click" once I get enough information, but I'm (sadly) not there yet.
tl;dr
When you start writing a WCF service that you know is going to go through several iterations, how do you set up your project(s) to make it as easy as possible in the future (on yourself and your teammates)?
I have had success using a "strict" versioning policy (it seems from past experience you are heading in this direction anyway), where you simply create new endpoint(s) each time a new definition is released. This means you won't have any contract backwards-compatibility concerns for legacy clients; older versions can easily be turned off once logging indicates all clients have upgraded. It is generally necessary, however, to write bridging code for any legacy endpoint(s) so they can continue to call into the modified business logic.
In terms of project organisation, I would create a new project for each version so they can easily be deployed separately. Namespaces using v1, v2 normally work well enough. The endpoint names can also include a version number, which should easily distinguish them from each other.
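As a rough sketch of what that looks like in code (the service name, dates, and operations below are placeholders, not a recommendation), the version appears both in the CLR namespace and in the contract namespace, so the two definitions stay distinguishable on the wire:

    using System.ServiceModel;

    namespace Company.Interfaces.Contracts.June2011
    {
        [ServiceContract(Namespace = "http://service.domain.com/OrderService/2011/06")]
        public interface IOrderService
        {
            [OperationContract]
            string GetOrderStatus(int orderId);
        }
    }

    namespace Company.Interfaces.Contracts.August2011
    {
        // V2 lives beside V1; legacy clients keep calling the June2011 endpoint
        // until logging shows they have all upgraded.
        [ServiceContract(Namespace = "http://service.domain.com/OrderService/2011/08")]
        public interface IOrderService
        {
            [OperationContract]
            string GetOrderStatus(int orderId);

            [OperationContract]
            void CancelOrder(int orderId);
        }
    }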
Alternatively, you could try using a "lax" versioning policy, where you have the ability to add or remove data members by implementing the IExtensibleDataObject interface in all your services. Some useful MSDN article links can be found in a popular response to a similar question: WCF client's and versioning.
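A minimal sketch of that approach on a data contract (the type and members below are made up): the ExtensionData property is where the serializer parks any data members it does not recognise, so unknown fields survive a round trip between differently versioned contracts.

    using System.Runtime.Serialization;

    [DataContract(Namespace = "http://types.domain.com/OrderService/2011/06")]
    public class OrderDto : IExtensibleDataObject
    {
        [DataMember]
        public int OrderId { get; set; }

        [DataMember]
        public string Status { get; set; }

        // Unrecognised data members are stored here during deserialization
        // and written back out when the object is serialized again.
        public ExtensionDataObject ExtensionData { get; set; }
    }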
Another "lax" kind of option is to move more towards a messaging solution (which WCF can support through message contracts and/or the MSMQ binding). Here podcast by SOA guru Udi Dahan that provides an interesting perspective and is definitely worth a listen - there is no IDog2.
Finally here is a good blog post with some further more fine-grained guidelines on whichever strategy you end up using:
http://wcfpro.wordpress.com/2010/12/21/wcf-versioning-guidelines-2/.
I'm just about getting into WCF; but from what I've read so far, like the sample scenarios I found on MSDN and some other sites, I can do all that with web services and applications that call those web services. So why the need for an elaborate layer like WCF?
Most of the comparisons I've googled for explain it more from a programming point of view. I'm still trying, without much success, to find answers as to when it makes business sense (and, of course, programming sense) to use the WCF layer as opposed to the traditional application-to-web-services model.
Is there anyone here with experience of both who can advise on how to choose between plain web services and going the WCF way? What are the things that absolutely can't be done using plain old web services called by applications, where the WCF layer will save the day?
You've fallen for the Microsoft trap of "it's just about web services" :-)
It's actually a lot more:
it's about service-oriented programming in general (not just web services - you can also write TCP/IP based services, MSMQ queue-based messaging and a lot more)
it's about unifying all the diverse programming models that existed so far (ASMX, Enterprise Services, DCOM, .NET remoting)
it's about providing a lot of ready-made and ready-to-use plumbing which can handle things like reliable messaging, transaction support, security in any shape or form you'd like, service discovery, and a lot more
it's about separating the service implementation from the details of how clients will call it and making this a configurable stack of protocols, encodings etc.
Sure - most of this stuff can be done in ASMX, or .NET remoting - but try to convert an ASMX web service to be callable in your intranet using TCP/IP and transport security... Many of those "older" technologies have a very intricate and direct link to how they're being used - you can't easily change that without changing the whole service code.
WCF separates all these "plumbing details", like what endpoint to call, what protocol to use to call it, and how to handle security, out into a "WCF stack" that's configurable and composable, so you can easily switch your service XYZ from being called over HTTP by anonymous users to being called over TCP/IP with Windows credentials required; your service code won't change a bit, it's only configuration of the plumbing.
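As a rough illustration of that point (the contract, implementation, and addresses below are invented for the example; in practice you would normally declare the endpoints in app.config rather than in code), the same service class can be exposed over two completely different bindings without being touched:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        int Add(int a, int b);
    }

    public class CalculatorService : ICalculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    class Program
    {
        static void Main()
        {
            ServiceHost host = new ServiceHost(typeof(CalculatorService));

            // Endpoint 1: plain HTTP, anonymous callers.
            host.AddServiceEndpoint(typeof(ICalculator),
                new BasicHttpBinding(),
                "http://localhost:8080/calc");

            // Endpoint 2: TCP with transport security (Windows credentials by default).
            host.AddServiceEndpoint(typeof(ICalculator),
                new NetTcpBinding(SecurityMode.Transport),
                "net.tcp://localhost:8081/calc");

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }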
That to me is the most compelling reason for WCF - I can totally concentrate on my actual service code, and not pollute it with lots of plumbing stuff - how to handle transports and text encodings and all that. And I can easily change that and adapt to new requirements and needs in deployment without having to touch my actual service code.
Plus, the second major point is extensibility - most of the older technologies just had their one, set way of doing things and many didn't lend themselves to being extended. You had to either adapt to use it the way they did it - or forget about it. WCF has a vast and very intricate system for extending just about anything - you can create your own transport protocol (people have created UDP or SMTP based bindings), you can create your own message encoders (like I had to do to talk to a web service which could only understand ISO-8859-1 encoded messages), and you can extend just about anything else in WCF - all in an organized, well-documented, very stable and safe way.
So these two things - separating out plumbing into configurable layers, and extensibility to the maximum - are the most compelling reasons for me to use WCF.
Edit: Kobi's link above is a far better answer than mine.
WCF is basically a better architecture for supporting communications. It breaks many dependencies, such as hosting (it is not IIS-dependent), transport, security, and addressing, into plug-in components, and allows customisation to a very high degree.
Yes, you can do a lot with traditional technologies; however, you can do more with WCF. If you don't need the features now, then of course you can continue with legacy technologies. If you prefer, though, you can opt for a better architecture now with an eye on the future, at the cost of having to switch technologies today.
Take this example: if you have a legacy ASMX web service, how easily can you offer the same service via an MSMQ endpoint? With WCF it's as simple as adding new config settings.
I assume that you are not asking "why not just stick with SOAP/HTTP". WCF allows you to choose a number of different transports rather than just simple HTTP, but as you observe, the WS-* technologies allow you to do all that. So I think you're asking why you should use a powerful but complex framework when the raw technologies are not impossibly complex?
You could ask this same question of any Framework. You could just use the basic technologies and avoid the learning curve of adopting the framework.
Frameworks such as WCF do have a learning curve, but consider what happens if you don't use them:
You find that you write boilerplate code for each service invocation. You then either accept duplication or begin to refactor and build your own libraries. Before long you've developed your own framework, but it's not the same as anybody else's. So then any new team member has to learn your local framework, which is a serious learning curve.
Note also that WCF addresses issues such as the monitoring of the deployed solution.
The biggest appeal to me is testability. Services are defined by a CLR interface, which is quite easy to mock inside a test harness. Some words of warning, however: with great flexibility comes some pain in the configuration process, along with a few "gotchas". An example of a gotcha is that WCF, adhering closely to a "best practice", requires an active SSL connection in order to pass SOAP authentication credentials over HTTP. This hinders testing quite a bit.
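To illustrate the testability point (the contract, fake, and consumer below are invented for the example), the service contract is just a CLR interface, so consuming code can be tested against a hand-rolled fake instead of a live endpoint:

    using System.ServiceModel;

    [ServiceContract]
    public interface IQuoteService
    {
        [OperationContract]
        decimal GetQuote(string symbol);
    }

    // Test double: no WCF channel, no network, just the interface.
    public class FakeQuoteService : IQuoteService
    {
        public decimal GetQuote(string symbol) { return 42.0m; }
    }

    // The consuming code depends only on the interface.
    public class PortfolioValuer
    {
        private readonly IQuoteService _quotes;

        public PortfolioValuer(IQuoteService quotes) { _quotes = quotes; }

        public decimal ValueOf(string symbol, int shares)
        {
            return _quotes.GetQuote(symbol) * shares;
        }
    }

    // In a test: new PortfolioValuer(new FakeQuoteService()).ValueOf("XYZ", 10) returns 420.0m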
I've inherited this really weird codebase where they've built an external web service over a bunch of internal web services just to add authentication/authorization using WS-Security, WS-Encryption, et al. Less than a month into this engagement, I'm already feeling the pain of coupling volatile components through rigid WSDL, especially considering some of them use WCF while others choose to go WSDL-first. Managing various versions of generated proxies and wrappers at various levels is a nightmare!
I'll admit the design is over-complicated and could have been much better, but my question essentially is:
Would you ever build a web service just to provide a cross-cutting concern over a bunch of services?
Would this be better implemented as web service handlers?
and lastly...
Would you categorize this under the Web Service Gateway pattern?
I saw that very thing being built one year ago. I almost cried when the team took months to build 4 web services, 2 of which simply wrapped other internal ones, using WCF and some serious encryption. The only reason they wrapped the internal ones was to change the potential error numbers coming back.
So, would I ever intentionally do that? Nope.
Would it be better implemented as almost anything else? Yep.
Would I categorize it under the WTF pattern? Absolutely.
UPDATE:
One thing I just remembered is that there is an architecture called the "Enterprise Service Bus". Its purpose is to provide a common interface into other SOA systems. This way it doesn't matter what the different applications use for their endpoint mechanisms (WCF, WSE 1/2/3, RESTful, etc.).
BizTalk is one example of an ESB, and there are many other off-the-shelf programs that can be used. Basically, your app passes a message to the ESB and it handles sending that message, in a reliable way, to the other systems, as well as marshalling any responses back.
This also means that you can insulate other applications from many types of changes to the endpoints. Of course, if the new endpoints require additional information, then you'd have to modify the callers. However, if all that changes is the mechanism, then a good ESB will be able to handle those changes without impacting your app.
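As a conceptual sketch (the interface and class names below are hypothetical, not any particular ESB's API), the calling application only ever codes against the bus abstraction, so swapping the endpoint mechanism behind it never touches the caller:

    // The application depends only on this abstraction.
    public interface IServiceBus
    {
        void Send(string destination, object message);
    }

    // One implementation might route over MSMQ, another over HTTP, another via BizTalk.
    // Swapping them is invisible to the code below.
    public class OrderPublisher
    {
        private readonly IServiceBus _bus;

        public OrderPublisher(IServiceBus bus) { _bus = bus; }

        public void PlaceOrder(int orderId)
        {
            _bus.Send("orders", new { OrderId = orderId });
        }
    }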
I have seen similar implementations when you are exposing the services to the outside world and need to tighten down security... check this MSDN column.