Keep business/data logic for WCF library in separate assembly? - wcf

I was thinking of having my WCF interface in its own assembly and the data/business logic in its own assembly. Is this over-architecture, or is it just fine? Does it make updating the services easier? Or if there is an issue/bug, does it make fixing the bugs easier?

This is a good way to design your program.
This allows you to focus on business logic or display logic independently, which is called Separation of Concerns and is one of the most important principles in the development of quality software.
This doesn't help with "fixing" bugs so much as it helps in avoiding bugs altogether.
It also allows you to create different front-ends for the same business objects, just in case you would also like to have a scriptable Console interface or a web or Silverlight interface later on.
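To make the split concrete, here is a minimal sketch of what the assemblies might look like (all assembly and type names here are invented for illustration): the service contract lives in one assembly, the business logic in another with no WCF references at all, and the WCF implementation is just a thin adapter between them.

    // MyApp.Contracts.dll - the WCF surface, nothing but the contract.
    using System.ServiceModel;

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        string GetCustomerName(int customerId);
    }

    // MyApp.Business.dll - plain class with no WCF references; easy to unit test
    // and reusable from a console, web, or Silverlight front-end.
    public class CustomerLogic
    {
        public string LookupName(int customerId)
        {
            // real data access would go here
            return "Customer " + customerId;
        }
    }

    // MyApp.Services.dll - thin WCF layer that only delegates to the business assembly.
    public class CustomerService : ICustomerService
    {
        private readonly CustomerLogic _logic = new CustomerLogic();

        public string GetCustomerName(int customerId)
        {
            return _logic.LookupName(customerId);
        }
    }

With this layout, a bug in the business rules is fixed and tested in MyApp.Business without touching the service surface, which is the main maintenance win the question asks about.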

Related

SOA Solution Design and Service Encapsulation: Should I keep service codebases in separate solutions, or all together in one?

I am developing an SOA project using principles I learned in Udi Dahan's Advanced Distributed Systems Design course. So far the project has 6 completely independent services, as well as an IT/Ops service for aggregation and integration with third-party services. I also have a set of libraries I've developed for use across all of these services with infrastructure code, I/O, and utilities.
Is it better to keep these services' code together under one solution, or to keep them in entirely separate solutions? Or both? Integration testing will be much simpler with one solution, but I'm concerned with service encapsulation - my gut tells me I don't want a developer working on one service to know how another service works, because it can lead to unintentional coupling. Has anyone run into real-world issues with service encapsulation when keeping all codebases under one solution?
I would keep the services in separate solutions or even separate repositories.
I don't think it has much to do with service encapsulation; it's more about dealing with code contention and making it harder to take dependencies between services.
As for common code (like contracts), put it in a separate repository and use NuGet (or similar) to reference it from your solutions.
Make sense?
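To sketch what the shared piece might look like (all names here are invented): the contracts repository contains only message/contract types with no behavior, and each service solution pulls them in as a package instead of referencing another service's code.

    using System;

    // Published from its own repository as a NuGet package, e.g. MyCompany.Contracts.
    namespace MyCompany.Contracts
    {
        // A message both services agree on: just shape, no behavior.
        public class OrderPlaced
        {
            public Guid OrderId { get; set; }
            public DateTime PlacedAtUtc { get; set; }
        }
    }

    // In the Billing service's solution, only the package is referenced;
    // Billing never sees how the Sales service is implemented.
    public class OrderPlacedHandler
    {
        public void Handle(MyCompany.Contracts.OrderPlaced message)
        {
            // billing-specific reaction to the event
        }
    }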

What logic should go in the domain class and what should go in a service in Grails?

I am working on my first Grails application, which involves porting over an old Struts web application. There is a lot of existing functionality, and as I move things over I'm having a really hard time deciding what should go in a service and what should be included directly on the model.
Coming from a background of mostly Ruby on Rails development I feel strongly inclined to put almost everything in the domain class that it's related to. But, with an application as large as the one I'm porting, some classes will end up being thousands upon thousands of lines long.
How do you decide what should go in the domain versus what should go in a service? Are there any established best practices? I've done some looking around, but most people just seem to acknowledge the issue.
In general, the main guideline I follow is:
If a chunk of logic relates to more than one domain class, put it in a service, otherwise it goes in the domain class.
Without more details about what sort of logic you are dealing with, it's hard to go much deeper than that, but here's a few more general thoughts:
For things that relate to view presentation, those go in tag libs (or perhaps services)
For things that deal with figuring out what to send to a view and what view to send it to, that goes in the controllers (or perhaps services)
For things that talk to external entities (e.g. the file system, queues, etc.), those go in services
In general, I tend to err on the side of having too many services rather than having things too lumped together. In the end, it's all about what makes the most sense to you, how you think about the code, and how you want to maintain it.
The one thing I would look out for, however, is that there's probably a high level of code duplication in what you are porting over. With Grails being built on Groovy and having access to more powerful programming methods like Closures, it's likely that you can clean up and simplify much of your code.
I personally think it is not only a decision between service and domain class but also plugins or injected code.
Think of the dynamic finders like Book.findAllByName(). It's great that Grails injects them into the domain classes. Otherwise they would have to be copied to each domain class or they would be callable through a service (dynamicFinderService.findAllByName('Book') -argh).
So in my projects, I have quite a few things which I will move to a plugin and inject into the domain classes...
From a Java EE point of view: no logic at all within domain classes. Keep your domain classes as simple and short as possible. Your domain model is the core of your application and must be easy to grasp.
Grails is built around dependency injection. Logic that resides in a domain class is hard to test and hard to refactor:
You cannot exchange implementations like with services (DI).
You cannot test the code of your domain class easily.
You cannot scale up your system, since your implementations cannot be pooled like services.
My recommendation: Keep every logic in services. Services are easy to test, easy to reuse, easy to maintain, easy to re-configure.
Specific to Grails: if you change your domain class, the in-memory DB will be dropped, so you lose your test data, which gets in the way of rapid prototyping.

API library decoupling approaches?

Imagine a set of libraries that represent some APIs. By using inversion of control mechanisms, concrete implementations are injected into a consuming project.
Here is the situation: some of the API libraries depend on other API libraries for certain functionality, so the API libraries themselves are coupled at some point. This coupling can become an issue later, because changing one API results in changes to the dependent APIs, and the corresponding implementations also need to change; in the worst case we end up with quite a number of projects that need to be modified to reflect a change from only one of them.
Now I have in mind two possible solutions for this:
Create a monolith API project that unites the related API libraries.
Further decouple the APIs by making each library provide interfaces for all functionality that depends on the other API, so the direct dependency is removed. This might result in similar code in both libraries, but it gives freedom to the implementations chosen via the IoC mechanisms and also allows the APIs to evolve independently of each other (when an API is changed, the changes affect only its implementation libraries, not other APIs or their implementations).
The problem with the second approach is the duplication of code, and the result might be too many API libraries that need to be referenced (for instance, in a .NET application each API will be a separate DLL; in some scenarios, like Silverlight applications, this can be an issue for app size - download time and overall client performance).
Is there a better solution for this situation? When is it better to merge some API libraries into one bigger library, and when not? I know this is a very general question, but let's ignore the due dates, estimates, client requirements and technologies for a moment; I want to be able to determine the right approach based on achieving maximum scalability and minimum maintenance time. So, what could be a good reason to choose either approach, or another one you might suggest?
Edit:
I feel like I must clarify something about the question. I have in mind decoupling APIs from each other, not decoupling an API from its implementation. So, for instance, if I have a security API for validating access permissions, and a user accounts API that uses (references) the security API, then changing the security API will require changing the user accounts API and the implementations of both of them. The more APIs that are coupled this way, the more changes will have to be applied. That is what I want to avoid.
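A sketch of what the second option could look like for this exact example (all type names invented): the user accounts API declares its own small interface for the single capability it needs, and only the composition root knows about both APIs, adapting the security implementation to that interface.

    // UserAccounts.Api - no reference to the security API at all.
    public interface IPermissionChecker
    {
        bool CanAccess(string userId, string resource);
    }

    public class AccountService
    {
        private readonly IPermissionChecker _permissions;

        public AccountService(IPermissionChecker permissions)
        {
            _permissions = permissions;
        }

        public bool CanViewAccount(string userId, string accountId)
        {
            return _permissions.CanAccess(userId, "account:" + accountId);
        }
    }

    // Stub standing in for whatever the security API actually exposes.
    public class SecurityService
    {
        public bool ValidatePermission(string userId, string resource)
        {
            return true; // placeholder
        }
    }

    // Composition root - the only project that references both APIs.
    public class SecurityPermissionAdapter : IPermissionChecker
    {
        private readonly SecurityService _security = new SecurityService();

        public bool CanAccess(string userId, string resource)
        {
            return _security.ValidatePermission(userId, resource);
        }
    }

With this shape, a change to the security API ripples only into the adapter, not into the user accounts API itself.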
The choice is between a few huge libraries and a myriad of small libraries.
If you have a huge library, the code within will tend to be tightly coupled simply because there's no force providing pressure to design the various elements in a loosely coupled way. The risk is that it becomes harder and harder to evolve that library because there are so many interdependencies that must be coordinated. Think about the .NET Base Class Library as an example.
If you have a myriad of small libraries, you might risk DLL hell. Yes, we were promised many years ago that this was over, but it's not. Just try to consume a lot of fine-grained open source libraries in your application code base and you'll know what I mean.
Still, the Single Responsibility Principle also applies at the package level, so I'd recommend small, focused libraries instead of huge general-purpose libraries. This also makes it easier to always pick best-of-breed libraries.
Small libraries can always be composed/compiled into larger libraries (in .NET with an Assembly Linker / Merger / Repacker utility), while it's much harder to split a big library.
No matter what you do, the most important thing to keep in mind is backwards compatibility. The fewer breaking changes you introduce, the easier those libraries will be to manage.
I don't see this as a problem, really.
Some libraries will depend on other libraries, and this is fine by me: improving one library will improve all the dependents! The "owner" of a library has the responsibility not to break existing code when making a change, but this is normal and can easily be handled if the code is well designed.
If you have changes rippling through all dependent code you should reconsider your design. If your library surfaces a certain API it should isolate its consumers from changes to underlying classes or libraries.
Update 1:
If your application uses Library1 with API1 it should not have to deal with the fact that Library1 uses Lib2, Lib3, .. , LibX.
E.g. the Moq mocking library depends on Castle DynamicProxy. Why should you have to care about that? You get an assembly where DynamicProxy is already merged in, and you can just use Moq. You never see, use or have to care about DynamicProxy. So if the DP API changes, that does not affect your tests written using Moq. Moq isolates your code from changes in the API of the underlying DP.
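For example, a typical Moq usage looks like this; nothing in it mentions DynamicProxy, even though a dynamic proxy is generated behind the scenes (IPriceService is just an example interface):

    using Moq;

    public interface IPriceService
    {
        decimal GetPrice(string symbol);
    }

    public class PriceTests
    {
        public void Example()
        {
            var mock = new Mock<IPriceService>();
            mock.Setup(s => s.GetPrice("ABC")).Returns(42m);

            // The proxy comes from the DynamicProxy code merged into Moq,
            // but this code never references DynamicProxy directly.
            IPriceService service = mock.Object;
        }
    }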
Update 2:
Finding a problem valid for more than one branch causes modifications of all of them
If that is the case, you don't have a library but a helper for a very specific problem, and that should NEVER be forced upon other projects. Shared libraries tend to degenerate into a collection of "might be useful somewhere in the distant future". Don't! This will always bite you in the a**! If you have a solution for a problem that occurs in more than one place (like Guard classes): share it. If you believe that you might find a use for some solution to a problem: leave it in the project until you really have that situation. Then share it. Never do it "just in case".

Jumping into N-Tier architecture with WCF?

I work for a large state government agency that is a tad behind the times. Our skill sets are outdated and budgetary freezes prevent any training or hiring of new employees/consultants (firing people is also impossible). Designing business objects, implementing design patterns, establishing code libraries and services, unit testing, source control, etc. are all things that you will not find being done here. We are as much of a 0 on the Joel Test as you can possibly get. The good news is that we can only go up from here!
We develop desktop CRUD applications (in C++, C#, or Java) that hit the Oracle database directly through an ODBC connection. We basically have GUIs littered with SQL statements and patchwork code. We have been told to move towards a service-oriented n-tier architecture to prevent direct access to the database and remove the need for the Oracle Client on user machines.
Is WCF the path we should be headed down? We've done a few of the n-tier application walkthroughs (like this one) and they seem easy to implement, but we just don't know enough to understand if we are even considering the right technologies. Utilizing the .NET generated typed DataSets seems like a nice stopgap to save us months/years of work (as opposed to creating new business objects from the ground up for numerous projects). Is this canned approach viable for a first step?
I recently started using WCF services for my Data Layer in some web applications and I must say, it's frustrating at the beginning (the first week or so), but it is totally worth it once the code is deployed.
You should first try it out with a small existing app, or maybe a proof of concept to make sure it will fit your needs.
From the description of the environment you are in, I'm sure you'll realize the benefit almost immediately.
The last company I worked for chose WCF for almost the exact reason you describe above. There is lots of good documentation and books for WCF, it's relatively easy to get working, and WCF supports a lot of configuration options.
There can be some headaches when you start trying to bend WCF to work in a way not specifically designed out of the box. These are generally configuration issues. But sites like this or IDesign can help you through those.
First of all, I would definitely not (sorry for the emphasis) worry about the time you'll save using typed DataSet's versus creating your own business objects. That is usually not where you will spend most of your development time. I prefer using business objects myself.
In your situation I would want to implement a proof of concept first, one that addresses all the issues you may encounter. This proof of concept should implement an entire use case, starting at the client, retrieving data from the database, and returning it to the client. You should feel confident about your implementation before continuing.
Then about choice of technology. WCF is definitely a good choice for communication between your client applications and the service layer. I suppose that both your clients as well as your service layer will become C# applications? That makes things a lot easier since interoperability between different platforms (Java/C# for example) is still not trivial although it should work in most cases.
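As a starting point for that proof of concept, a minimal WCF contract with a hand-rolled business object might look like this (the names are illustrative, not prescriptive):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // A simple business object, serialized over the wire via DataContract.
    [DataContract]
    public class Employee
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface IEmployeeService
    {
        [OperationContract]
        Employee GetEmployee(int id);
    }

    public class EmployeeService : IEmployeeService
    {
        public Employee GetEmployee(int id)
        {
            // In the real service this call would go through the data layer to Oracle,
            // keeping all SQL out of the client applications.
            return new Employee { Id = id, Name = "Sample" };
        }
    }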
Take a look at Entity Framework (as there are a couple of Oracle providers available for it already) in conjunction with .NET 3.5 SP1, which enables built-in WCF serialization of your EF-generated classes.
Here is a good blog to get started: http://blogs.msdn.com/dsimmons
CSLA might be a good fit for your N-Tier desktop apps. It supports WCF, has a large dev community, and is well documented. It is very object oriented.

Do you use AOP (Aspect Oriented Programming) in production software?

AOP is an interesting programming paradigm in my opinion. However, there haven't been discussions about it here on Stack Overflow yet (at least I couldn't find them). What do you think about it in general? Do you use AOP in your projects? Or do you think it's rather a niche technology that won't be around for long or won't make it into the mainstream (like OOP did, at least in theory ;))?
If you do use AOP then please let us know which tools you use as well. Thanks!
Python supports AOP by letting you dynamically modify classes at runtime (which in Python is typically called monkeypatching rather than AOP). Here are some of my AOP use cases:
I have a website in which every page is generated by a Python function. I'd like to take a class and make all of the webpages generated by that class password-protected. AOP comes to the rescue; before each function is called, I do the appropriate session checking and redirect if necessary.
I'd like to do some logging and profiling on a bunch of functions in my program during its actual usage. AOP lets me calculate timing and print data to log files without actually modifying any of these functions.
I have a module or class full of non-thread-safe functions and I find myself using it in some multi-threaded code. Some AOP adds locking around these function calls without having to go into the library and change anything.
This kind of thing doesn't come up very often, but whenever it does, monkeypatching is VERY useful. Python also has decorators which implement the Decorator design pattern (http://en.wikipedia.org/wiki/Decorator_pattern) to accomplish similar things.
Note that dynamically modifying classes can also let you work around bugs or add features to a third-party library without actually having to modify that library. I almost never need to do this, but the few times it's come up it's been incredibly useful.
Yes.
Orthogonal concerns, like security, are best done with AOP-style interception. Whether that is done automatically (through something like a dependency injection container) or manually is unimportant to the end goal.
One example: the "before/after" attributes in xUnit.net (an open source project I run) are a form of AOP-style method interception. You decorate your test methods with these attributes, and just before and after that test method runs, your code is called. It can be used for things like setting up a database and rolling back the results, changing the security context in which the test runs, etc.
Another example: the filter attributes in ASP.NET MVC also act like specialized AOP-style method interceptors. One, for instance, allows you to say how unhandled errors should be treated, if they happen in your action method.
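To illustrate, here is a sketch of a custom ASP.NET MVC action filter (the attribute name and response header are made up); the interception logic runs before and after the action without the action method knowing anything about it:

    using System.Diagnostics;
    using System.Web.Mvc;

    // Times every action it decorates and reports the result in a response header.
    public class StopwatchAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Stash the stopwatch per request; filter instances can be shared.
            filterContext.HttpContext.Items["stopwatch"] = Stopwatch.StartNew();
        }

        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var watch = (Stopwatch)filterContext.HttpContext.Items["stopwatch"];
            watch.Stop();
            filterContext.HttpContext.Response.AppendHeader(
                "X-Action-Ms", watch.ElapsedMilliseconds.ToString());
        }
    }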
Many dependency injection containers, including Castle Windsor and Unity, support this behavior either "in the box" or through the use of extensions.
I don't understand how one can handle cross-cutting concerns like logging, security, transaction management, exception-handling in a clean fashion without using AOP.
Anyone using the Spring framework (probably about 50% of Java enterprise developers) is using AOP whether they know it or not.
At Terracotta we use AOP and bytecode instrumentation pretty extensively to integrate with and instrument third-party software. For example, our Spring integration is accomplished in large part by using AspectWerkz. In a nutshell, we need to intercept calls to Spring beans and bean factories at various points in order to cluster them.
So AOP can be useful for integrating with third party code that can't otherwise be modified. However, we've found there is a huge pitfall - if possible, only use the third party public API in your join points, otherwise you risk having your code broken by a change to some private method in the next minor release, and it becomes a maintenance nightmare.
AOP and transaction demarcation are a match made in heaven. We use Spring AOP @Transactional annotations; it makes for easier and more intuitive transaction demarcation than I've ever seen anywhere else.
We used AspectJ in one of my big projects for quite some time. The project was made up of several web services, each with several functions, which together formed the front end for a complicated document processing/querying system. Somewhere around 75k lines of code. We used aspects for two relatively minor pieces of functionality.
First was tracing application flow. We created an aspect that ran before and after each function call to print out "entered 'function'" and "exited 'function'". With the function selector thing (pointcut maybe? I don't remember the right name) we were able to use this as a debugging tool, selecting only functions that we wanted to trace at a given time. This was a really nice use for aspects in our project.
The second thing we did was application specific metrics. We put aspects around our web service methods to capture timing, object information, etc. and dump the results in a database. This was nice because we could capture this information, but still keep all of that capture code separate from the "real" code that did the work.
I've read about some nice solutions that aspects can bring to the table, but I'm still not convinced that they can really do anything that you couldn't do (maybe better) with "normal" technology. For example, I couldn't think of any major feature or functionality that any of our projects needed that couldn't be done just as easily without aspects - where I've found aspects useful are the kind of minor things that I've mentioned.
I use AOP heavily in my C# applications. I'm not a huge fan of having to use attributes, so I used Castle DynamicProxy and Boo to apply aspects at runtime without polluting my code.
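A bare-bones version of the DynamicProxy approach in plain C# (without the Boo part; IGreeter and the wiring are invented for illustration) looks roughly like this; the aspect is an ordinary interceptor class, so the target type carries no attributes:

    using System;
    using Castle.DynamicProxy;

    public interface IGreeter
    {
        string Greet(string name);
    }

    public class Greeter : IGreeter
    {
        public string Greet(string name) { return "Hello, " + name; }
    }

    // The aspect: runs around every intercepted call.
    public class LoggingInterceptor : IInterceptor
    {
        public void Intercept(IInvocation invocation)
        {
            Console.WriteLine("Entering " + invocation.Method.Name);
            invocation.Proceed(); // call through to the real implementation
            Console.WriteLine("Leaving " + invocation.Method.Name);
        }
    }

    class Program
    {
        static void Main()
        {
            // Wiring like this usually hides inside the IoC container configuration.
            var generator = new ProxyGenerator();
            IGreeter greeter = generator.CreateInterfaceProxyWithTarget<IGreeter>(
                new Greeter(), new LoggingInterceptor());
            Console.WriteLine(greeter.Greet("world"));
        }
    }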
We use AOP in our session facade to provide a consistent framework for our customers to customize our application. This allows us to expose a single point of customization without having to add manual hook support in for each method.
Additionally, AOP provides a single point of configuration for additional transaction setup and teardown, and the usual logging things. All told, much more maintainable than doing all of this by hand.
The main application I work on includes a script host. AOP allows the host to examine the properties of a script before deciding whether or not to load the script into the Application Domain. Since some of the scripts are quite cumbersome, this makes for much faster loading at run-time.
We also use and plan to use a significant number of attributes for things like compiler control, flow control and in-IDE debugging, which do not need to be part of the final distributed application.
We use PostSharp for our AOP solution. We have caching, error handling, and database retry aspects that we currently use and are in the process of making our security checks an Aspect.
Works great for us. Developers really do like the separation of concerns. The Architects really like having the platform level logic consolidated in one location.
The PostSharp library is a post-compiler that does the injection of the code. It has a library of pre-defined intercepts that are brain-dead easy to implement. It feels like wiring in event handlers.
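For a flavor of what such an aspect looks like, here is a sketch of a PostSharp method-boundary aspect for error handling (the aspect and method names are made up):

    using System;
    using PostSharp.Aspects;

    // PostSharp weaves this in after compilation, so the decorated
    // method body stays plain business code.
    [Serializable]
    public class LogAndSwallowAttribute : OnMethodBoundaryAspect
    {
        public override void OnException(MethodExecutionArgs args)
        {
            Console.WriteLine("Error in " + args.Method.Name + ": " + args.Exception.Message);
            args.FlowBehavior = FlowBehavior.Continue; // swallow after logging
        }
    }

    public class BillingService
    {
        [LogAndSwallow]
        public void ChargeCustomer(int customerId)
        {
            // ordinary business code; failures are logged by the aspect
        }
    }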
Yes, we do use AOP in application programming. I prefer AspectJ for integrating AOP into my Spring applications. Have a look at this article for a broader perspective on the same:
http://codemodeweb.blogspot.in/2018/03/spring-aop-and-aspectj-framework.html