SOA Solution Design and Service Encapsulation: Should I keep service codebases in separate solutions, or all together in one? - nservicebus

I am developing an SOA project using principles I learned in Udi Dahan's Advanced Distributed Systems Design course. So far the project has 6 completely independent services, as well as an IT/Ops service for aggregation and integration with third-party services. I also have a set of libraries I've developed for use across all of these services, containing infrastructure code, I/O, and utilities.
Is it better to keep these services' code together under one solution, or to keep them in entirely separate solutions? Or both? Integration testing will be much simpler with one solution, but I'm concerned with service encapsulation - my gut tells me I don't want a developer working on one service to know how another service works, because it can lead to unintentional coupling. Has anyone run into real-world issues with service encapsulation when keeping all codebases under one solution?

I would keep the services in separate solutions or even separate repositories.
I don't think this has much to do with service encapsulation; it's more about reducing code contention and making it harder to take accidental dependencies between services.
As for common code (like contracts), put it in a separate repository and use NuGet (or similar) to reference it from your solutions.
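For illustration, a minimal sketch of what such a shared contracts package might contain (the MyCompany.Contracts namespace and the SubmitOrder message are hypothetical); each service solution references the published package rather than the project, so taking a new contract version is always an explicit step:

    // Shared contracts assembly, published as its own NuGet package.
    using System;

    namespace MyCompany.Contracts.Sales
    {
        // A message/contract type that several services may reference.
        public class SubmitOrder
        {
            public Guid OrderId { get; set; }
            public string CustomerId { get; set; }
            public decimal Total { get; set; }
        }
    }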
Make sense?

Related

Web apps architecture: 1 or n API

Background:
I'm thinking about web application organisation. I will separate the front end (the browser-facing web site) from the back end (the API): 2 apps, 2 repositories, 2 hosts. The front end will call the API for almost everything.
So, if I have two separate domain services within my API (for example: a learning context and a booking context) with no direct link between them, should I build 2 APIs (with 2 repositories, 2 build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with traffic.
(I hope this question will not be closed as not constructive... I think it's a real question for a concrete case, sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, the priorities you have, and so on. Generally you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple the domain model across separate modules/bundles/libraries
Create a distributed architecture (like Service-Oriented Architecture (SOA) or Event-Driven Architecture (EDA))
One monolithic application
It's the easiest and cheapest way to develop an application in its early stages. You don't have to worry about complex architecture or complex deployment and development processes. It also works better when there aren't many developers around.
Once the application grows, this model becomes problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially with a lot of people working on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment is also much, much harder.
Within this approach you have one repo and one build process for all your APIs.
One monolithic application with a decoupled domain model
Within this approach you still have one big platform, but you connect logically separate modules as if they were third-party dependencies. For example, you may extract one module and create a library from it.
Thanks to that, you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, though it may be hard to keep backward compatibility across libraries over the application's lifespan.
Regarding your question: in this approach you have a separate API, dev process, and repository for each "type of action", as long as you move its domain logic into a separate library.
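As a rough sketch of that extraction (all names invented), the booking logic moves into its own class library and the still-monolithic web application only sees its interface:

    using System;

    // Booking.Domain.dll -- a class library extracted from the monolith.
    namespace Booking.Domain
    {
        public interface IBookingService
        {
            BookingResult Book(int roomId, DateTime from, DateTime to);
        }

        public class BookingResult
        {
            public bool Confirmed { get; set; }
            public string Reference { get; set; }
        }
    }

    // The web application references Booking.Domain.dll, so the booking
    // module gets its own dev/QA cycle even though deployment is still
    // one big unit.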
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process gets more reliable as it involves smaller projects. You can version the communication (the API) between services, which makes them even more independent. Moreover, you have a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. There are many more components you have to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data-driven (a separate front end which uses APIs to consume data) and particular services don't need to communicate with each other, it may not be that complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In that approach you have a separate API, with separate repositories and separate processes, for each "type of action" (which I understand as a separate domain model / service).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, back to your original question: my suggestion is to keep APIs as separate as you can. Even if you have one monolithic application, you should be able to version the APIs separately and keep their domain logic separate. Separating repositories and/or processes depends on the approach you choose (e.g. among those I mentioned above).
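To make "version APIs separately" concrete, here is a hedged sketch using WCF-style contracts (since WCF comes up elsewhere on this page; the interfaces and URIs are invented). Each API carries its own explicitly versioned namespace, so the two contexts can evolve independently even inside one deployable:

    using System.ServiceModel;

    // The learning and booking APIs each own a versioned contract.
    [ServiceContract(Namespace = "http://example.com/api/learning/v1")]
    public interface ILearningApiV1
    {
        [OperationContract]
        string GetCourse(int courseId);
    }

    [ServiceContract(Namespace = "http://example.com/api/booking/v2")]
    public interface IBookingApiV2
    {
        [OperationContract]
        string GetBooking(int bookingId);
    }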
If I missed your point, please describe in more detail what answer you expect.
Best!

Keep business/data logic for wcf library in separate assembly?

I was thinking of having my WCF interface in its own assembly and the data/business logic in its own assembly. Is this over-architecture, or is it just fine? Does it make updating the services easier? Or, if there is an issue/bug, does it make fixing bugs easier?
This is a good way to design your program.
This allows you to focus on business logic or display logic independently, which is called Separation of Concerns and is one of the most important principles in the development of quality software.
This doesn't help with "fixing" bugs so much as it helps in avoiding bugs altogether.
It also allows you to create different front-ends for the same business objects, just in case you would also like to have a scriptable Console interface or a web or Silverlight interface later on.
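As a minimal sketch of the split being discussed (assembly and type names are illustrative only):

    using System.ServiceModel;

    // MyApp.Contracts.dll -- the WCF interface in its own assembly.
    namespace MyApp.Contracts
    {
        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            string GetCustomerName(int customerId);
        }
    }

    // MyApp.Business.dll -- the data/business logic in a second assembly.
    // It references MyApp.Contracts but knows nothing about hosting, so
    // the same logic can later sit behind a console, web, or Silverlight
    // front end.
    namespace MyApp.Business
    {
        public class CustomerService : MyApp.Contracts.ICustomerService
        {
            public string GetCustomerName(int customerId)
            {
                // Business/data-access logic goes here.
                return "customer-" + customerId;
            }
        }
    }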

Why limit WCF ServiceContracts to 10-20 OperationContracts?

I've seen recommendations (Juval Lowy, et al) that a service contract should have "no more than 20 members...twelve is probably the practical limit". Why? It seems that if you wish to provide a service as the interface to a relatively large db (50-100 tables) you're going to go way past that in just CRUD alone. I've worked with plenty of other services that provided hundreds of 'OperationContracts'...is there something peculiar about WCF? Is there something I'm missing here?
Probably the fact that you should not expose CRUD in the SOA world... the idea is to expose business processes. Individual CRUD operations lead to a TERRIBLY slow and granular interface. Look at how RIA Services / Astoria handle the CRUD side of things.
I don't think this is a technical limit. The idea is that a service defines all the contracts necessary for a business operation (order management, account management) and should not be TOO complicated.
I think it's related to the principles of SOA. Many people would regard a service with hundreds of OperationContracts as a poorly designed service, even if technically correct.
You should not simply expose a web interface for a bunch of tables. Rather, the services should expose abstract operations (probably mapped to business processes) that interact with those tables under the hood.
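To make the contrast concrete, a hedged sketch (all names invented): instead of exposing an insert/update/delete quartet per table, the contract exposes one business operation that touches several tables under the hood:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // One business-process operation instead of per-table CRUD calls.
    [ServiceContract]
    public interface IOrderProcessing
    {
        [OperationContract]
        OrderConfirmation PlaceOrder(PlaceOrderRequest request);
    }

    // The request DTO carries everything the process needs, so a single
    // round trip replaces a chatty sequence of granular CRUD calls.
    [DataContract]
    public class PlaceOrderRequest
    {
        [DataMember] public string CustomerId { get; set; }
        [DataMember] public int[] ProductIds { get; set; }
    }

    [DataContract]
    public class OrderConfirmation
    {
        [DataMember] public string OrderNumber { get; set; }
    }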
I've seen similar recommendations in the past and I think it depends on who's going to use your service.
If, like me, you're writing it to link an app to a remote data source then the most abstract interface you can write will still include a "get" and a "save" method for each logical object in your database.
My latest project has a service contract with 246 operation contracts in it. It's a monstrosity of a file and a pig to edit, but the client-side code is neat and tidy and it's just easier to work with. It's not like anyone but me is ever going to see it.
In short, there are no technical or performance implications to either approach. Go with whatever seems most appropriate to your project.
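If a contract that size does have to live in one interface, one small mitigation (my sketch, not something the answers above mention) is C#'s partial interfaces, which at least let the definition be split into one file per functional area:

    using System.ServiceModel;

    // File: IBigService.Orders.cs
    [ServiceContract]
    public partial interface IBigService
    {
        [OperationContract]
        void SaveOrder(int orderId);
    }

    // File: IBigService.Customers.cs -- the parts merge into one contract.
    public partial interface IBigService
    {
        [OperationContract]
        void SaveCustomer(int customerId);
    }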

Jumping into N-Tier architecture with WCF?

I work for a large state government agency that is a tad behind the times. Our skill sets are outdated and budgetary freezes prevent any training or hiring of new employees/consultants (firing people is also impossible). Designing business objects, implementing design patterns, establishing code libraries and services, unit testing, source control, etc. are all things that you will not find being done here. We are as much of a 0 on the Joel Test as you can possibly get. The good news is that we can only go up from here!
We develop desktop CRUD applications (in C++, C#, or Java) that hit the Oracle database directly through an ODBC connection. We basically have GUI's littered with SQL statements and patchwork code. We have been told to move towards a service-oriented n-tier architecture to prevent direct access to the database and remove the Oracle Client need on user machines.
Is WCF the path we should be headed down? We've done a few of the n-tier application walkthroughs (like this one) and they seem easy to implement, but we just don't know enough to understand if we are even considering the right technologies. Utilizing the .NET-generated typed DataSets seems like a nice stopgap to save us months/years of work (as opposed to creating new business objects from the ground up for numerous projects). Is this canned approach viable for a first step?
I recently started using WCF services for my Data Layer in some web applications and I must say, it's frustrating at the beginning (the first week or so), but it is totally worth it once the code is deployed.
You should first try it out with a small existing app, or maybe a proof of concept to make sure it will fit your needs.
From the description of the environment you are in, I'm sure you'll realize the benefit almost immediately.
The last company I worked for chose WCF for almost the exact reason you describe above. There is lots of good documentation and there are books for WCF, it's relatively easy to get working, and WCF supports a lot of configuration options.
There can be some headaches when you start trying to bend WCF to work in a way not specifically designed out of the box. These are generally configuration issues. But sites like this or IDesign can help you through those.
First of all, I would definitely not (sorry for the emphasis) worry about the time you'll save using typed DataSets versus creating your own business objects. That is usually not where you will spend most of your development time. I prefer using business objects myself.
In your situation I would want to implement a proof of concept first, one that addresses all the issues you may encounter. This proof of concept should implement an entire use case, starting at the client, retrieving data from the database, and returning it to the client. You should feel confident about your implementation before continuing.
Then, about the choice of technology: WCF is definitely a good choice for communication between your client applications and the service layer. I suppose that both your clients and your service layer will be C# applications? That makes things a lot easier, since interoperability between different platforms (Java/C#, for example) is still not trivial, although it should work in most cases.
Take a look at Entity Framework (as there are a couple Oracle providers available for it already) in conjunction with .NET 3.5 SP1 which enables built-in WCF serialization of your EF generated classes.
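A hedged sketch of the kind of service that combination enables (the Employee type here stands in for an EF-generated entity, and the data access is stubbed; in a real service the method would query the EF context mapped to the Oracle database):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Stand-in for an EF-generated entity; with .NET 3.5 SP1 the classes
    // EF generates are already attributed for WCF serialization like this.
    [DataContract]
    public class Employee
    {
        [DataMember] public int EmployeeId { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface IEmployeeService
    {
        [OperationContract]
        Employee GetEmployee(int employeeId);
    }

    public class EmployeeService : IEmployeeService
    {
        public Employee GetEmployee(int employeeId)
        {
            // Real implementation: query the EF ObjectContext here.
            return new Employee { EmployeeId = employeeId, Name = "stub" };
        }
    }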
Here is a good blog to get started: http://blogs.msdn.com/dsimmons
CSLA might be a good fit for your N-Tier desktop apps. It supports WCF, has a large dev community, and is well documented. It is very object oriented.

Is it advisable to build a web service over other web services?

I've inherited this really weird codebase where they've built an external web service over a bunch of internal web services just to add authentication/authorization using WS-Security, WS-Encryption, et al. Less than a month into this engagement, I'm already feeling the pain of coupling volatile components through rigid WSDL, especially considering some of them use WCF and others choose to go WSDL-first. Managing various versions of generated proxies and wrappers at various levels is a nightmare!
I'll admit the design is over-complicated and could have been much better, but my question essentially is:
Would you ever build a web service just to provide a cross cutting concern over a bunch of services?
Would this be better implemented as web service handlers?
and lastly...
Would you categorize this under the Web Service Gateway pattern?
I saw that very thing being built one year ago. I almost cried when the team took months to build 4 web services, 2 of which simply wrapped other internal ones, using WCF and some serious encryption. The only reason they wrapped the internal ones was to change the potential error numbers coming back.
So, would I ever intentionally do that? Nope.
Would it be better implemented as almost anything else? Yep.
Would I categorize it under the WTF pattern? Absolutely.
UPDATE:
One thing I just remembered is that there is an architecture called the "Enterprise Service Bus" (ESB). Its purpose is to provide a common interface into other SOA systems. This way it doesn't matter what the different applications use for their endpoint mechanisms (WCF, WSE 1/2/3, RESTful, etc.).
BizTalk is one example of an ESB, and there are many other off-the-shelf products that can be used. Basically, your app passes a message to the ESB, and it handles sending that message, in a reliable way, to the other systems, as well as marshalling any responses back.
This also means that you could insulate other applications from many types of changes to the endpoints. Of course, if the new endpoints require additional information, then you'd have to modify the callers. However, if all that changes is the mechanism, then a good ESB would be able to handle those changes without impacting your app.
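For contrast with the ESB route, the hand-rolled version of "wrap services for a cross-cutting concern" that the question describes usually boils down to a facade like this sketch (all names invented; the authorization check is a placeholder):

    using System.ServiceModel;

    [ServiceContract]
    public interface IInternalOrders
    {
        [OperationContract]
        string GetOrderStatus(int orderId);
    }

    // The external-facing service implements the same contract and just
    // delegates to the internal one, adding the cross-cutting concern
    // (here, an authorization check) in exactly one place.
    public class ExternalOrdersGateway : IInternalOrders
    {
        private readonly IInternalOrders inner;

        public ExternalOrdersGateway(IInternalOrders inner)
        {
            this.inner = inner;
        }

        public string GetOrderStatus(int orderId)
        {
            Authorize(); // e.g. validate WS-Security credentials/claims.
            return inner.GetOrderStatus(orderId);
        }

        private void Authorize()
        {
            // Placeholder: throw a FaultException if the caller is not
            // permitted to reach the internal service.
        }
    }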
I have seen similar implementations where the services are exposed to the outside world and security needs to be tightened down. Check this MSDN column.