I am presently designing a message queue system that will be used by various applications, and I am having a difficult time deciding whether to use WCF to provide the services or to build a shared class library (DLL) and deploy the DLL with the clients.
For some additional information:
• The queue is stored in a SQL database.
• We expect to have approximately 3-7 different applications/clients using this message queue system.
• The clients/applications may or may not run all on one machine.
• We do not expect a heavy load of messages being queued on a daily basis (approximately 1,000-10,000 per day, as a rough estimate)
• Somewhat “mission critical” – several clients/applications cannot do their jobs if this service is unavailable.
• Everything operates within the corporate network – no access to Internet required.
I have given some thought to the pros and cons of each decision:
WCF Service
Pros:
• Can update the logic in the Queue system without having to update the clients
• Room for scalability – but most likely not going to be an issue.
Cons:
• More difficult to diagnose/debug
• More effort required when deploying
• Queue System unavailable if the service is down (unless we cluster/farm)
Shared Class Library (DLL)
Pros:
• Easier to debug
• Easier development efforts
• Only have to ensure that the DB is available – no dependence on another service/machine.
Cons:
• Deployment headaches when we make updates to the DLL – we may forget to update all the applications that depend on the DLL.
If anyone can provide more arguments for a solution, that would be helpful! If you have an opinion on which direction is best, please do tell! I’ll appreciate any input that will help me make a decision.
Thanks for your time,
Adrian
You can do both:
Implement a class library that encapsulates all the functionality you wish to expose. Consumers that want that level of control can reference the library directly.
Develop a WCF service that only acts as a Remote Facade for the class library. It would only expose and translate the class library into DTOs/messages, and contain no logic in itself.
In other words, the class library is your Domain Model, and the service is just a thin Facade in front of that model. That's the way any WCF service should be developed in any case.
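As a rough illustration of that layering, here is a minimal sketch; the names (MessageQueue, IQueueService, QueueMessageDto) are invented placeholders, not anything from an actual codebase:

using System.Runtime.Serialization;
using System.ServiceModel;

// Domain model, shipped in the shared class library:
public class MessageQueue
{
    public void Enqueue(string body)
    {
        // Writes the message to the SQL-backed queue table.
    }
}

// Data contract exposed by the service:
[DataContract]
public class QueueMessageDto
{
    [DataMember]
    public string Body { get; set; }
}

// Thin WCF facade: translates DTOs and delegates, no logic of its own.
[ServiceContract]
public interface IQueueService
{
    [OperationContract]
    void Enqueue(QueueMessageDto message);
}

public class QueueService : IQueueService
{
    private readonly MessageQueue queue = new MessageQueue();

    public void Enqueue(QueueMessageDto message)
    {
        queue.Enqueue(message.Body);
    }
}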
That said, if you must choose one or the other, I'll try to add my own thoughts to yours (which I find quite reasonable).
One disadvantage of WCF that you forgot is that it adds some processing overhead, because it must serialize and deserialize messages.
Even so, selecting the best model is not only about counting the number of bullets in each pro/con section, as each bullet carries a different weight.
Without knowing your exact situation and requirements, I would still consider the deployment/versioning issue that relates to direct use of a class library to be a very strong argument against that strategy.
A web service interface will allow you to vary the service and each client independently of each other, as long as the contract remains stable. That could also be made possible with a class library, but it is more difficult.
An interoperable web service interface also gives you a better ability to grow and respond to new business opportunities, as the clients are not constrained to .NET applications. You may have only .NET applications today, but are you sure it will stay that way forever?
If, on the other hand, you decide to go the class library route, make sure to have each client consume abstract base classes, because that will provide you with the most flexible options for changing the implementation without breaking existing clients.
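A minimal sketch of that advice, with hypothetical names: clients compile against the abstract base class only, so the concrete implementation behind it can change in a later version of the DLL without breaking them.

// Shipped in the shared class library; clients reference only this type.
public abstract class QueueClientBase
{
    public abstract void Enqueue(string body);
}

// Today's implementation; swappable in a future release of the library.
public class SqlQueueClient : QueueClientBase
{
    public override void Enqueue(string body)
    {
        // INSERT the message into the queue table.
    }
}

// A factory in the library hides the concrete type from clients:
public static class QueueClientFactory
{
    public static QueueClientBase Create()
    {
        return new SqlQueueClient();
    }
}

// Client code depends only on the abstraction:
public class ClientApp
{
    public static void Main()
    {
        QueueClientBase client = QueueClientFactory.Create();
        client.Enqueue("hello");
    }
}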
Given the information you provided, I don't think there's a clear-cut winner here. I lean slightly towards WCF despite the added complexity, because the flexibility it provides gives you better options for responding to unforeseen changes.
How is the recent buzzword of API/APIfication different from having an SOA-based architecture?
Apart from the technical difference of APIs being REST-based web services and SOA being SOAP-based web services, are there any other benefits or advantages of this new buzzword API/APIfication?
...APIs being rest based WebServices and SoA being SOAP based Webservices
This is probably the least accurate definition of both terms I have ever heard.
I think the question you are trying to ask is "What is the difference between REST and SOAP web services?"
In this case there. Are. Many. Answers.
But, I was trying to understand the recent buzzword of APIfication of traditional/legacy Enterprise Apps
APIfication is meaningless. A Google search of this term returns mixed results.
The concept seemed to be similar to the SOA architecture style
API and SOA are unrelated concepts. Both terms have been around for years and their meaning has stayed fairly constant over time.
So, I was trying to clarify if I was missing anything
It's unlikely you're missing anything other than clarity about what exactly it is you want to ask.
My understanding of SOA architecture:
All code belongs to a service, regardless of what tier it runs in - whether it's UI, middle-tier or data access. It belongs to whichever service owns the data it operates on or displays.
Microservices never, ever call each other. Instead, their UIs are composed together at runtime, and business processes that cross service boundaries are "emergent" rather than being orchestrated at a high level. The only communication that crosses service boundaries consists of events, not data.
These events can be versioned, with newer versions extending older versions, so that the publisher can publish a new version of an event while subscribers still receive the old version of the event (which decouples the services and prevents multiple services from having to be modified and deployed in lockstep); a code sketch follows below.
The "IT/Ops" service composes UI components from multiple services together at runtime to create the front-end interface of an application.
Since an API implies coupling between the provider and the consumer, API calls are only ever made within a service, not across service boundaries.
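Here is a hedged sketch of that version-by-extension idea; the event names are invented for illustration. The new version subclasses the old one, so subscribers compiled against the old event keep working unmodified:

using System;

// Version 1 of the event, as originally published:
public class OrderPlaced
{
    public Guid OrderId { get; set; }
}

// Version 2 extends version 1 with a new field; old subscribers
// simply see it as an OrderPlaced and ignore the extra data.
public class OrderPlacedV2 : OrderPlaced
{
    public string CustomerSegment { get; set; }
}

// An old subscriber; it need not be modified or redeployed when
// the publisher starts emitting OrderPlacedV2.
public class ShippingSubscriber
{
    public void Handle(OrderPlaced e)
    {
        Console.WriteLine("Shipping order {0}", e.OrderId);
    }
}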
I'm starting to find myself getting more and more into using WCF for projects I implement for internal use (automating company tasks, making sure all clients are on the same page, etc.). This is largely due to the 3-10 clients I am automating at once whenever I implement a solution, and (even if it was a small sample) the company is growing, which continually adds more clients to the pool and thus a higher demand for reliability/consistency. With that said, I'm recognizing how important it is to make sure things are expandable, as (previously) pushing a release was getting harder the more clients depended on the service.
My latest project has the potential of being externalized. Until now I've done things the way I know works, but I'd still like to travel down the "right" path in terms of future updates. How should I be setting up my project files to make this as easy and seamless as possible to keep maintained, up-to-date, and expandable? Should I be placing version numbers into the namespace (as in Company.Interfaces.Contracts.June2011.IMyService), using pseudo folders, ...?
I just don't feel confident in this aspect of moving forward. I'd like to know that whatever groundwork I have in place now won't place burdens on future expansion/customization later. I'd also like to stick to the "development norm" as much as possible, as it's getting more plausible that we'll hire additional programmers to help with the workload.
Does anyone with this kind of experience have any thoughts, suggestions, guidance in this field? I would really appreciate any examples, books, documentation, etc. that you can provide.
Update (06-17-2011)
To give some insight, I'm also looking for some specific questions. These include:
How do you decorate a service class vs. a DTO in terms of namespace? I've seen http://service.domain.com/ServerName/Version used on the service class itself and http://types.domain.com/ServiceName/Version used on the DTOs. Is this common? (Separating the namespace into a type collection and a service collection? See the sketch after this list.)
Should I be implementing IExtensibleDataObject on all my objects, on the basis that they could potentially be evolved in future releases? (Lay the groundwork out now.)
If my database has constraints on it (e.g. string length), I should be extending IParameterInspector and using that mechanism for validation (keeping logic and validation separate), correct?
Should the "actual service" be broken out in to its own class so, as I version, the Service Contract classes just call the code (keeping each new version release with an minimal code as possible?) Or should I keep it within the service class and inherit from it with any new methods (likewise, what happens should you remove a method?)
I'm sorry if I have a lot of questions; I just see two ends of the spectrum in the documentation. I see "setting up WCF", then directly "this is a versioned WCF", with no segue/steps in between. I'm assuming it's going to just "click" once I get enough information, but I'm (sadly) not there yet.
tl;dr
When you start writing a WCF service that you know is going to hit several iterations, how do you setup your project(s) to make it as easy as possible in the future (on yourself and teammates)?
I have had success using a "strict" versioning policy (it seems from past experience that you are heading in this direction anyway), where you simply create new endpoints each time a new definition is released. This means you won't have any contract backwards-compatibility concerns for legacy clients; older versions can easily be turned off once logging indicates all clients have upgraded. It is generally necessary, however, to write bridging code for any legacy endpoints so they can continue to call into the modified business logic.
In terms of project organisation, I would create a new project for each version so they can easily be deployed separately. Namespaces using v1, v2, etc. normally work well enough. The endpoint names can also include a version number, which should easily distinguish them from each other.
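A sketch of what that strict layout might look like (all names are hypothetical): one project and CLR namespace per version, with the version repeated in the contract namespace and, typically, the endpoint address.

using System.ServiceModel;

namespace MyCompany.QueueService.V1
{
    [ServiceContract(Namespace = "http://service.domain.com/QueueService/v1")]
    public interface IQueueService
    {
        [OperationContract]
        void Enqueue(string body);
    }
}

namespace MyCompany.QueueService.V2
{
    // A breaking change gets a brand new contract and endpoint; the v1
    // endpoint stays up (bridging into the new logic) until logging shows
    // every client has upgraded.
    [ServiceContract(Namespace = "http://service.domain.com/QueueService/v2")]
    public interface IQueueService
    {
        [OperationContract]
        void Enqueue(string body, int priority);
    }
}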
Alternatively, you could try using a "lax" versioning policy, where you gain the ability to add or remove data members by implementing the IExtensibleDataObject interface in all your services. Some useful MSDN article links can be found in a popular response to a similar question: WCF client's and versioning.
Another "lax" kind of option is to move more towards a messaging solution (which WCF can support through message contracts and/or the MSMQ binding). Here podcast by SOA guru Udi Dahan that provides an interesting perspective and is definitely worth a listen - there is no IDog2.
Finally, here is a good blog post with some further, more fine-grained guidelines on whichever strategy you end up using:
http://wcfpro.wordpress.com/2010/12/21/wcf-versioning-guidelines-2/.
In subsystem design, I sometimes see software designs that have one high-level class with only one feature: it routes calls from a client using the class to a certain class the client would like to use. However, it has no functionality of its own. Take this scenario:
Say there are five classes in the bowling alley subsystem: an alley, a lane, a bowler, a control desk, and a score. Anytime a client outside the subsystem wants any data to display to a user, it would communicate only to the control desk (the router) that would call any of the classes it holds to get the client's requested data (a score for example: Client calls control desk with getScore(), which calls a Lane's getScore(), which calls a Bowler's getScore()).
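In code, that chain might look something like this sketch (method bodies invented for illustration):

public class ControlDesk
{
    private readonly Lane lane = new Lane();

    // Pure pass-through: the desk adds nothing of its own.
    public int GetScore() { return lane.GetScore(); }
}

public class Lane
{
    private readonly Bowler bowler = new Bowler();

    // Another pass-through layer.
    public int GetScore() { return bowler.GetScore(); }
}

public class Bowler
{
    // The only class that actually owns the data.
    public int GetScore() { return 42; }
}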
I understand this is a bad design decision, but I'd like to hear real-world examples, with the consequences you discovered, of having this router class (also known as a "middleman"). What issues did you run into as the system you were working on evolved? What arguments would you make to persuade software designers to avoid router classes?
I'd argue that in some designs a router is the preferred design pattern, such as in MVC frameworks to delegate handlers for URLs. In that situation it's really helpful because it provides a very clean separation between what the client "sees" and the actual logic behind it.
Anytime a client outside the subsystem wants any data to display to a user, it would communicate only to the control desk (the router) that would call any of the classes it holds to get the client's requested data
This sounds like the Facade pattern.
As for the middleman, in the following example, wouldn't the Lane be the culprit?
a score for example: Client calls control desk with getScore(), which calls a Lane's getScore(), which calls a Bowler's getScore())
Simplifying the interface to a subsystem for the benefit of clients outside the subsystem could be considered good design.
The Facade pattern, and the Mediator pattern perform similar tasks to what you are describing. Your use of the Middleman moniker implies the Mediator pattern over the Facade pattern, as a Middleman is responsible for negotiating between two entities with neither entity needing to know the specifics of how to communicate with the other.
You can use either of these patterns to reduce coupling for the client class, which needs to use the system the Mediator or Facade is masking. In the case of the Facade pattern, the intention is to provide a convenient way to interface a system of classes. For the Mediator pattern, the purpose is to abstract the steps required to perform a complex task from the client.
I don't know that routing method calls is always such a bad idea.
It seems that you'd just have the problem associated with any additional layer of abstraction - that the abstraction can break, or that it's one more thing that can potentially misbehave if there's a change made to something underlying.
I've never seen anything that called more than a few layers deep, but I just imagine that adding extra calls would make it more difficult to trace the path information takes, and make troubleshooting more difficult.
One potential problem, though, is if each layer implements its own error handling or retry process, making something that's insignificant at each level overwhelming as a whole. For example, if the Lane makes two attempts to check the bowler's score, and the desk makes three attempts to check the score, then a failure of the bowler to return a score will result in six queries being made. Add a 30-second timeout at the bowler, and you're suddenly waiting three minutes for what should take 30 seconds.
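As a sketch of that multiplication (the retry counts are the ones from the example above; all names are invented): each layer retries independently, so a total failure costs desk attempts × lane attempts calls, each waiting out its own timeout.

public static class ScoreLookup
{
    static bool TryGetBowlerScore()
    {
        // Simulates an unresponsive bowler: every call burns the full
        // 30-second timeout before failing.
        return false;
    }

    static bool LaneGetScore()
    {
        for (int attempt = 0; attempt < 2; attempt++)   // lane-level retries
        {
            if (TryGetBowlerScore()) return true;
        }
        return false;
    }

    static bool DeskGetScore()
    {
        for (int attempt = 0; attempt < 3; attempt++)   // desk-level retries
        {
            if (LaneGetScore()) return true;
        }
        // 3 x 2 = 6 bowler queries; at 30 seconds each, the caller waited
        // 3 minutes for a failure that should have surfaced in 30 seconds.
        return false;
    }
}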
OldNewThing had an article about an example of this in the Windows OS, and the problems it caused, but now I can't seem to find it.
I think that both the ASP.NET MVC and MVP patterns utilize this type of concept, where you end up with something simply handling the logic that is executed from one end to the other, or on behalf of a lower layer to a higher layer. This certainly makes testing easier to perform, so that in and of itself is a MAJOR benefit. This type of pattern does create some manual or tedious work, in that you could click a button and have it do a task, rather than click the button, have something intercept that click, then call into some service-managing class that does some work. But on the front of keeping your code clean and readable, the more separation there is, oftentimes the better.
If you are not a tester, or couldn't care less about patterns directly, then think of it in another format. You have a link that takes a user to a page. This link is scattered across your site all over the place, as the destination is very important or used a lot. The destination changes. This could be a find-and-replace operation... or you could insert a RedirectService (call it what you will) that, when someone clicks a link, takes charge and directs the clicker to the right location. This allows the location to be defined once, in one place, and therefore changed once. Find and replace oftentimes changes things that weren't meant to be changed!
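A rough sketch of that RedirectService idea (the class name, keys, and URL are made up): every link on the site resolves through the service with a key, so the real destination is defined exactly once.

using System.Collections.Generic;

public static class RedirectService
{
    // The single place the destination is defined; change it here once
    // instead of find-and-replacing every page that links to it.
    private static readonly Dictionary<string, string> destinations =
        new Dictionary<string, string>
        {
            { "help", "https://example.com/support" },
        };

    public static string Resolve(string linkKey)
    {
        return destinations[linkKey];
    }
}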
No matter how you look at this, separation of concerns is a good thing. The UI is one concern. The controller of activities is another concern. The activity itself is yet another concern!
In my analysis of newer web platforms/applications, such as Drupal, WordPress, and Salesforce, many of them create their software based on the concept of modularization, where developers can create new extensions and applications without needing to change code in the "core" system maintained by the lead developers. In particular, I know Drupal uses a "hook" system, but I don't know much about the engine or design that implements it.
If you were to go down the path of creating an application and you wanted a system that allowed for modularization, where would you start? Is this a particular design pattern that everyone knows about? Is there a handbook that this paradigm tends to subscribe to? Are there any websites that discuss this type of development from the ground up?
I know some people point directly to OOP, but that doesn't seem to be the same thing, entirely.
This particular system I'm planning leans more towards something like Salesforce, but it is not a CRM system.
For the sake of the question, please ignore the Buy vs. Build argument, as that consideration is already in the works. Right now, I'm researching the build aspect.
There are two ways to go here; which one to take depends on how your software will behave.
One way is the plugin route, where people can install new code into the application, modifying the relevant aspects. This route demands that your application be installable and not only offered as a service (or else that you install and review code sent in by third parties, a nightmare).
The other way is to offer an API, which can be called by the relevant parties, and either have the application transfer control to code located elsewhere (a la Facebook apps) or have the application do what the API commands enable the developer to do (a la Google Maps).
Even though the mechanisms vary, and how to actually implement them differs, you have to, in any case, define:
What freedom will I let the users have?
What services will I offer for programmers to customize the application?
and the most important thing:
How do I enable this in my code while remaining secure and robust? This is usually done by sandboxing the code, validating inputs, and potentially offering limited capabilities to the users.
In this context, hooks are predefined places in the code that call all the registered plugins' hook functions, if defined, modifying the standard behavior of the application. For example, if you have a function that renders a background, you can have:
void renderBackground() {
    bool handledByPlugin = false;
    foreach (Plugin p in getRegisteredPlugins()) {
        if (p.rendersBackground) {
            p.renderBackground();
            handledByPlugin = true;
        }
    }
    // Standard background code if no plugin handled it
    // (or it could always run, according to needs)
    if (!handledByPlugin) renderStandardBackground();
}
In this case you have the 'renderBackground' hook that plugins can implement to change the background.
In an API way, the user application would call your service to get the background rendered:

// other code
Background b = Salesforce2.AjaxRequest("getBackground", RGB(255, 10, 0));
// the app now has the result of calling you
This is all also related to the Hollywood principle, which is a good thing to apply, but sometimes it's just not practical.
The Plugin pattern from P of EAA is probably what you are after: create a public interface for your service, to which plugins (modules) can integrate ad hoc at runtime.
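A minimal sketch of that idea (the interface name and plugin directory are hypothetical): the host publishes one interface and discovers implementations in external assemblies at runtime, so the "core" never changes when a module is added.

using System;
using System.IO;
using System.Reflection;

public interface IPlugin
{
    void Initialize();
}

public static class PluginLoader
{
    public static void LoadAll(string pluginDirectory)
    {
        foreach (string file in Directory.GetFiles(pluginDirectory, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    // Instantiate the module and hand it control.
                    ((IPlugin)Activator.CreateInstance(type)).Initialize();
                }
            }
        }
    }
}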
This is called a component architecture. It's really quite a big area, but some of the key important things here are:
composition of components (container components can contain any other component)
for example a grid should be able to contain other grids, or any other components
programming by interface (components are interacted with through known interfaces)
for example, a view system might ask a component to render itself (say in HTML), or the component might be passed a render area and asked to draw into it directly
extensive use of dynamic registries (when a plugin is loaded, it registers itself with the appropriate registries; see the sketch after this list)
a system for passing events to components (such as mouse clicks, cursor enter, etc.)
a notification system
user management
and much much more!
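To make a couple of those bullets concrete, here is a hedged sketch of a dynamic registry with event passing (all names are invented for illustration):

using System.Collections.Generic;

// Programming by interface: the host only knows this contract.
public interface IComponent
{
    void Render();
    void OnEvent(string eventName);
}

public static class ComponentRegistry
{
    private static readonly List<IComponent> components = new List<IComponent>();

    // Called by each plugin/component as it loads.
    public static void Register(IComponent component)
    {
        components.Add(component);
    }

    // Event passing: mouse clicks, cursor enter, notifications, etc.
    public static void Dispatch(string eventName)
    {
        foreach (IComponent component in components)
        {
            component.OnEvent(eventName);
        }
    }
}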
If you're hosting the application, publish (and dogfood) a RESTful API.
If you're distributing software, look at OSGi.
Here's a small video that will at least give you some hints: the Lego Process [less than 2 minutes long].
There's also a complete recipe for how to create your own framework based extensively on Modularization...
The most important key element in making modularized software is to remember that it's purely [mostly] a matter of how loosely coupled you can make your systems. The more loosely coupled they are, the easier it is to modularize...
I've inherited this really weird codebase where they've built an external web service over a bunch of internal web services just to add authentication/authorization using WS-Security, WS-Encryption, et al. Less than a month into this engagement, I'm already feeling the pain of coupling volatile components through rigid WSDL, especially considering some of them use WCF and others choose to go WSDL-first. Managing various versions of generated proxies and wrappers at various levels is a nightmare!
I'll admit the design is over-complicated and could have been much better, but my question essentially is:
Would you ever build a web service just to provide a cross-cutting concern over a bunch of services?
Would this be better implemented as web service handlers?
and lastly...
Would you categorize this under the Web Service Gateway pattern?
I saw that very thing being built one year ago. I almost cried when the team took months to build 4 web services, 2 of which simply wrapped other internal ones, using WCF and some serious encryption. The only reason they wrapped the internal ones was to change the potential error numbers coming back.
So, would I ever intentionally do that? Nope.
Would it be better implemented as almost anything else? Yep.
Would I categorize it under the WTF pattern? Absolutely.
UPDATE:
One thing I just remembered is that there is an architecture called an "Enterprise Service Bus" (ESB). Its purpose is to provide a common interface into other SOA systems. This way, it doesn't matter what the different applications use for their endpoint mechanisms (WCF, WSE 1/2/3, RESTful, etc.).
BizTalk is one example of an ESB, and there are many other off-the-shelf programs that can be used. Basically, your app passes a message to the ESB, and it handles sending that message, in a reliable way, to the other systems, as well as marshalling any responses back.
This also means that you could insulate other applications from many types of changes to the endpoints. Of course, if the new endpoints require additional information, then you'd have to modify the callers. However, if all that changes is the mechanism, then a good ESB would be able to handle those changes without impacting your app.
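A hedged sketch of that insulation (the interface is invented for illustration, not any particular ESB's API): the application depends only on the bus, so swapping an endpoint's transport requires no change to the caller.

public interface IServiceBus
{
    // The bus handles transport, reliability, and response marshalling.
    void Send(string destination, object message);
}

public class OrderSubmitter
{
    private readonly IServiceBus bus;

    public OrderSubmitter(IServiceBus bus)
    {
        this.bus = bus;
    }

    public void Submit(object order)
    {
        // Whether "erp-system" sits behind WCF, WSE, or REST is the
        // bus's concern, not this caller's.
        bus.Send("erp-system", order);
    }
}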
I have seen similar implementations where the services are exposed to the outside world and the security needs to be tightened down... check this MSDN column.