Fluent validation or EntLib Validation Application Block for WCF services - wcf

I am looking for a standard way to add validation of the input parameters to a set of WCF services.
Can anyone give a comparison of FluentValidation (http://fluentvalidation.codeplex.com/) and the EntLib Validation Application Block?
What are the advantages/disadvantages of each of them?
What are the scenarios when one or the other should be used?
My question is similar to Which validation framework would you recommend for .net projects? and Which validation framework to choose: Spring Validation or Validation Application Block (Enterprise Library 4.0)?, but the answers to those questions do not provide a detailed comparison.
I would also appreciate recommendations of other similar technologies (with reasoning why).
Does anyone have experience with both frameworks and selected one for their projects? What were the reasons for the decision?

After a few months I can answer that the EntLib Validation Application Block (VAB) is a mature library which supports code, attribute and configuration validation.
In most cases developers should start with attribute validation of the DataMember properties in the DataContract request, as the simplest and most concise approach.
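For example, attribute validation on a data contract might look like the following sketch (the `CustomerRequest` type and its rules are hypothetical; the validator attributes come from the VAB assembly):

```csharp
using System.Runtime.Serialization;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

[DataContract]
public class CustomerRequest   // hypothetical request contract
{
    [DataMember]
    [NotNullValidator]             // value must be supplied
    [StringLengthValidator(1, 50)] // between 1 and 50 characters
    public string Name { get; set; }

    [DataMember]
    [RegexValidator(@"^\d{5}$")]   // e.g. a simple five-digit zip code rule
    public string ZipCode { get; set; }
}
```

Calling `Validation.Validate(request)` then returns a `ValidationResults` object whose `IsValid` property tells you whether all the attribute rules passed.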
If you expect that validation rules will change frequently, or that different installations of the application will need different rules for the same property (e.g. zip code rules differ between countries), you should choose configuration. It is not straightforward and requires some learning, but the flexibility is an advantage. The EntLib config editor can be helpful to make it easier.
Only for complex rules that can't be expressed using attributes or configuration should you write code.
If you are repeating the same rules a few times, consider creating a custom validator and validation attribute.
The FluentValidation library only supports adding validation in code, which is the less desirable method, so I don't understand why FluentValidation is so popular. I was also surprised that the FluentValidation author is not familiar with EntLib VAB.
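For comparison, here is how the same kind of rules look in FluentValidation, expressed entirely in code (the `CustomerRequest` type is hypothetical):

```csharp
using FluentValidation;

public class CustomerRequest   // hypothetical request contract
{
    public string Name { get; set; }
    public string ZipCode { get; set; }
}

public class CustomerRequestValidator : AbstractValidator<CustomerRequest>
{
    public CustomerRequestValidator()
    {
        // Rules live in code, so changing them means recompiling
        RuleFor(x => x.Name).NotEmpty().Length(1, 50);
        RuleFor(x => x.ZipCode).Matches(@"^\d{5}$");
    }
}
```

Calling `new CustomerRequestValidator().Validate(request)` returns a result with an `IsValid` flag and a list of failures.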
My original question was about input parameters for WCF operations. However, best practice recommends using a single request parameter as a data contract rather than multiple RPC-style simple parameters.
In any case, VAB provides attributes for individual parameters of WCF operations, which gives a more concise view
(e.g. See http://www.codeproject.com/Articles/259327/Integrate-Validation-Block-with-WCF) 

Related

NestJS Schema First GraphQL Serialization

I've done some research into the subject of response serialization for NestJS/GraphQL. There's some helpful information to be found here, but the documentation seems to be completely focused on a code first approach. My project happens to be taking schema first approach, and from what I've read across a few sources, the option available for a schema-first project would be to implement interceptors for the resolvers, and carry out the serialization there.
Before I run off and start writing these interceptors, my question is this: are there any better options provided by NestJS to implement serialization for a schema first approach?
If it's just transformation of values, then an interceptor is a great tool for that. Everything shown for "code-first" should work for "schema-first" in terms of the high-level ideas of the framework (interceptors, pipes, filters, etc.). In fact, once the server is running, there shouldn't be a distinguishable difference between the two approaches and how they operate. The big thing you'd need to be concerned with is that you won't easily be able to take advantage of class-transformer and class-validator, because the original class definitions are created via the gql-codegen, but you can still extend those types and add on the decorators necessary if you choose.

What makes Struts tightly coupled

I've been trying to find a clear answer to this question for a while but haven't had any luck. I need to know why Struts is tightly coupled. Which component of Struts makes it tightly coupled?
As this great Mkyong article discusses, what makes Struts tightly coupled is not the presence of something but rather the absence of something. Struts is basically a web UI framework, and it can be compared to Spring MVC. However, unlike Spring, Struts has no out-of-the-box support for dependency injection. As a result, when using Struts your entire code may have to be changed if a given dependency changes. Another way of saying this is that the components you use in Struts are tightly coupled to the framework.
There are two ways of looking at this question:
Struts 1 itself is tightly-coupled, and
Struts 1 is tightly-coupled to the servlet spec
Issue #1
Eliminating this is trivial; use a supported version of Spring, and you have all the DI you need at the application level, e.g., you can inject your services, wire things with AOP, etc.
At the framework level you don't have the same flexibility. You cannot arbitrarily replace Struts 1 framework components. That's why custom request processors and action base classes are the first approach when framework-level functionality is needed; there's nowhere else to put it.
Issue #2
Eliminating issue #2 is less trivial: Struts artifacts all reference servlet spec artifacts, like the HTTP request and response. If you want to abstract that away, e.g., for easier testing, or business logic reuse, you must do so manually.
An example would be to marshal request parameters (e.g., form values) into a domain object or a simple map, and pass that to your domain logic.
Struts 1 was written early, before DI/IoC was what the cool kids were doing and before strict layer isolation was common. It carried that baggage through the years because backwards compatibility is a thing.
You could argue that the coupling to the servlet spec is good or bad: it's really a matter of where the isolation between business logic and the web app occurs, who is responsible for it, and how you want to test it.
Unit-testing a Struts 1 action is kind of a PITA. If all it does is handle the web-app-to-business-logic marshalling, it's arguable it doesn't need to be unit tested: the business logic does, but the Struts 1 layer could be tested via an integration test at the web app level. (I'd argue that even makes sense, and I feel much the same way about Struts 2 action testing; but S2 actions are so easy to test (barring heavy interceptor interaction and a few other less-common things) that the differentiation is less important.)

Should I use Datasets or Entity Framework to transfer data via WCF services?

I am working on an application design and investigating passing data between WCF services and an ASP.NET web application. The options I am looking at are either "Datasets" or Entity Framework.
I have a bunch of questions:
If I use Entity Framework to pass data, would it add more overhead to the WCF communication?
Are Datasets considered "lightweight" in terms of communication overhead?
If I use Entity Framework, how can I maintain object models if I use stored procedures to return complex data?
Overall I need to figure out pros and cons of using these technologies.
Your question touches on the principles of service-oriented architecture (SOA). In SOA, a service should expose operations (methods) that apply to business objects through contracts. The business objects should be defined in a standard way which is why WCF uses WSDL and XML Schema to do this.
An SOA tenet is that business objects are shared through a schema and/or a contract but not as a specific implementation. By this principle, Dataset is a heavyweight, .NET-specific, database-oriented object which should not be used to represent a business object. The Microsoft Patterns & Practices group apparently disregards SOA tenets since it shows how to use Datasets as Data Transfer Objects. Whether they're just promoting vendor lock-in or just trashing the whole concept of SOA is anybody's guess. If there is even the remotest chance your service will ever be consumed by a non-.NET client, please do not use a Dataset.
If you decide on using the Entity Framework then I'd recommend using Code First to define the Entity Framework data model and just expose those "code first" business objects in your service. This SO question & answers provide a good discussion on using Entity Framework with WCF.
Are you really considering passing entire datasets up and down? This will destroy your object model and be more difficult to maintain, though I suppose you could implement the Repository pattern. Still, even sending just the changes, you would need to do strange things like ensure you do not transfer the schema, and perhaps use the compress option. It is a very poor choice compared to Entity Framework Code First with nice clean POCOs in a separate assembly, without any of the EF infrastructure polluting the DTOs.
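A minimal sketch of that approach: a plain POCO DTO in its own assembly, exposed through a WCF service contract (the `CustomerDto` type and service name here are hypothetical):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Plain DTO in a separate assembly, free of any EF infrastructure
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // The service exposes only the clean DTO, never a DataSet
    // or an EF-attached entity
    [OperationContract]
    CustomerDto GetCustomer(int id);
}
```

The EF context stays behind the service implementation; only the serializable DTO crosses the wire.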
EF should not add overhead as such, but it depends on how it is implemented and what data objects are being passed around. If you pass data into a context and use the correct option when SaveChanges is called, such as SaveChangesOptions.ReplaceOnUpdate, only the changed entities will be updated. The queries executed are efficient so long as you are mindful not to lazy-load stuff you don't need. You need to understand LINQ to Entities well and batch your updates as you would any other potentially expensive method call. Run some tests with your database profiler running and try to improve the efficiency of your interactions with EF; monitor your IIS logs for data sizes and transmission times, etc.
Datasets are not considered lightweight due to the need to encapsulate a schema, and potentially somebody might make a mistake and send up a whole heap of data in multiple tables, including tables that are dependencies. These might need to be pulled in anyway, either on the client or the server: very messy! EF supports stored procedures in a sensible manner, as they can be part of your model and get called when specific entities need to be saved. An ORM will complement your OO design and lead to cleaner code.
Also, if you are doing something simple and only require CRUD without much in the way of business logic, consider WCF Data Services.

What is Aspect Oriented Programming? [duplicate]

Duplicate of: What is aspect-oriented programming? (closed 14 years ago)
Every time I hear a podcast or read a blog entry about it, even here, they make it sound like string theory or something. Is the best way to describe it OOP with dependency injection on steroids?
Every time someone tries to explain it, it's like: Aspects, [adults-from-Peanuts-cartoon sound], orthogonal, [more noise], cross-cutting concerns, etc. Seriously, can anyone describe it in layman's terms?
Layman's terms, so let me give you an example. Let's say you have a web application, and you need to add error logging / auditing. One implementation would be to go into every public method and add your try/catch blocks etc...
Well, aspect-oriented programming says hogwash to that: let me inject my method around your method. So, for example, instead of calling YourClass.UpdateModel(), the system might call LoggingHandler.CallMethod(); this method might then redirect the call to UpdateModel but wrap it in a try/catch block to handle logging errors.
Now the trick is that this redirection happens automagically, through configuration or by applying attributes to methods.
This works, as you said, for cross-cutting things which are very common programming elements that exist in every domain, such as logging, auditing, transaction management and authorization.
The idea behind it is to move all this common plumbing code out of your business / app tier so you can focus on solving the problem, not worrying about logging this method call or that method call.
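The redirection described above can be sketched by hand; an AOP framework generates this kind of wrapper for you from attributes or configuration (the `LoggingHandler` and `UpdateModel` names follow the example in the answer and are hypothetical):

```csharp
using System;

public class LoggingHandler
{
    // Wraps any call in a try/catch so the caller's code
    // stays free of logging plumbing.
    public static void CallMethod(Action method, string name)
    {
        try
        {
            method();
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine("Error in {0}: {1}", name, ex.Message);
            throw; // rethrow so normal error handling still applies
        }
    }
}

// Instead of calling model.UpdateModel() directly, the system calls:
// LoggingHandler.CallMethod(() => model.UpdateModel(), "UpdateModel");
```

The business code never mentions logging; the wrapper is applied around it.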
Class and method attributes in .NET are a form of aspect-oriented programming. You decorate your classes/methods with attributes. Behind the scenes this adds code to your class/method that performs the particular functions of the attribute. For example, marking a class serializable allows it to be serialized automatically for storage or transmission to another system. Other attributes might mark certain properties as non-serializable, and these would be automatically omitted from the serialized object. Serialization is an aspect, implemented by other code in the system, and applied to your class by the application of a "configuration" attribute (decoration).
AOP is all about managing common functionality (which spans across the application, hence cross-cutting) such that it is not embedded within the business logic.
Examples of such cross-cutting concerns are logging, security management, transaction management, etc.
Frameworks allow this to be managed automatically with the help of some configuration files.
I currently use PostSharp; I would read the info from their website. I use it to provide security around method calls.
"PostSharp is an open platform for the analysis and transformation of .NET assemblies. It comes with PostSharp Laos, a powerful yet simple plug-in that let you develop custom attributes that actually adds behavior of your code. PostSharp Laos is the leading aspect-oriented programming (AOP) solution for the .NET Framework."
The classic examples are security and logging. Instead of writing code within your application to log the occurrence of x, or to check object z for security access control, there is a language construct "out of band" of normal code which can systematically inject security or logging into routines that don't natively have them, in such a way that even though your code doesn't supply it, it's taken care of.
A more concrete example is the operating system providing access controls to a file. A software program does not need to check for access restrictions because the underlying system does that work for it.
If you think you need AOP, in my experience you actually really need to be investing more time and effort into appropriate metadata management within your system, with a focus on well-thought-out structural / systems design.

Where to place language translations in an multitiered architecture

I am building an app with the following layers:
WEB presentation layer
Business Logic Layer - BLL - called from WEB UI through HTTP web services
WindowsService - Runtime - called from BLL through net.pipe
The BLL can also be called by 3rd parties for integration into other customers' systems.
Let's say that there is a validation error that happens in the Runtime or even the BLL. Where would it be better to put translations?
in the exception message - which means that we must send the UICulture from the WEB layer to the lower layers
BLL and Runtime return error codes or custom Exception-derived types, and translation is performed in the WEB UI layer
some other method
What is the best practice for supporting multiple languages in SOA architectures?
EDIT:
I should probably use the term tiers instead of layers.
WEB UI tier is implemented in ASP.NET web forms and will be deployed on server A under IIS.
The BLL and Runtime will be deployed on server B but are separated by process boundaries (the BLL runs under the ASP.NET worker process because of WCF services, and the Runtime runs as a separate Windows service process).
My advice for your issues is general because I do not know the specifics of the .NET platform you are working with.
From your question, I see two distinct problems.
Your web presentation layer will be language-dependent. It will require custom CSS, fonts, spacing and even custom content. Do not fool yourself that this will not be needed; this is one of the biggest mistakes people make when first internationalizing web applications. You will need a separate style for each language. So, if you are using a template approach, you can put most of your language content right in the language-dependent template.
From the description of your problem, it sounds like you will also need to handle localized error messages. There are two approaches to this: you can localize at the point where the error is thrown, using a resource file solution; or you can have your error messages use a common identifier and parameters, and have another layer catch the message and localize it. I myself prefer the former solution since it is simpler.
Also remember that if you are writing your error messages to a log file, the error messages should be in a language the developers can read. Likewise, for errors displayed to users in the GUI, you will want some way for the users to identify the errors to developers who do not speak the user's language. This could be done by using numbers; I prefer using short keys like DATABASE_ERROR_BAD_QUERY.
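A sketch of the short-key approach using .NET resource files (the resource base name `MyApp.ErrorMessages` and the keys are hypothetical; one .resx file per language would hold the translated text):

```csharp
using System.Globalization;
using System.Resources;

public static class ErrorMessages
{
    // The stable, readable key (e.g. DATABASE_ERROR_BAD_QUERY) is what
    // goes into the log; the translated string is what the user sees.
    private static readonly ResourceManager Resources =
        new ResourceManager("MyApp.ErrorMessages", typeof(ErrorMessages).Assembly);

    public static string Translate(string errorKey, CultureInfo culture)
    {
        // e.g. Translate("DATABASE_ERROR_BAD_QUERY", new CultureInfo("de-DE"))
        // Fall back to the raw key if no translation exists.
        return Resources.GetString(errorKey, culture) ?? errorKey;
    }
}
```

The lower tiers only ever throw or return the key; the presentation tier picks the culture and calls `Translate`.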
The translation should be handled by the presentation layer as it relates to the view. The more context that can be added to the message the better and the business logic might not be aware of the context nor should it be.
An approach that I've used is as follows:
The business logic throws unique, defined error codes which can be used as keys into a resource bundle to get a translated message.
The presentation layer maintains a text package containing all the error code translations, separate from the general presentation layer code.
The presentation layer depends on both the business logic and the text package.
3rd-party clients of the business logic, like a web service, can choose to include the text package as a dependency if they want the standard error code translations. Otherwise they can handle the error codes thrown by the business logic on their own.
I'm not quite sure what your definition of the WEB UI is. If you use the MVC pattern, the Controller would be the place to take care of showing the right language version in the UI, while the text itself would live in the View layer. What I didn't understand is what plays the role of the controller in your architecture. Does BLL mean it only includes processing logic, with no communication between the UI and services? If so, then probably the Web UI layer would contain the localization logic.
I would also say that it depends a lot on the technology you use in the project. ASP.NET, for example, has a built-in localization model that you can use. I'm not saying it should serve as an example; on the contrary, ASP.NET breaks separation of concerns. But I think this is an issue which would have very different solutions in different custom architecture models, as in your case.
Based on how your application is structured, you may need internationalization in two locations:
1) The software itself: menus, dialogs, messages, labels, reports, etc.
2) Content: your application, when running in more than one language, will likely need to handle content in more than one language.
I have had good luck (so far) separating the management tools and the publishing logic into different layers.
First, consider placing the language translation management and generation logic (resource bundles, etc.) in the business logic layer. For all your translations, you want a way to quickly sync all data entries as they get added to the system, in all languages, from a master language (English), and then populate and manage those. If you're using resource bundles, for example, generate the rb files from a database for all the languages.
Second, publish the language translations at the presentation tier, since this has more to do with presentation and formatting. You will inevitably run into issues with localization for dates, times, currencies, etc. that you can handle pretty well here. You may or may not build your own library to publish these things, as the confines or flexibility of your programming language may or may not allow.
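In .NET, for example, dates and currencies are usually best formatted at the presentation tier with the request's culture, along these lines:

```csharp
using System;
using System.Globalization;

class CultureFormattingDemo
{
    static void Main()
    {
        decimal price = 1234.5m;
        DateTime date = new DateTime(2013, 4, 1);

        // Same values, different cultures, different output
        Console.WriteLine(price.ToString("C", new CultureInfo("en-US"))); // e.g. $1,234.50
        Console.WriteLine(price.ToString("C", new CultureInfo("de-DE"))); // e.g. 1.234,50 €
        Console.WriteLine(date.ToString("d", new CultureInfo("en-US")));  // e.g. 4/1/2013
        Console.WriteLine(date.ToString("d", new CultureInfo("de-DE")));  // e.g. 01.04.2013
    }
}
```

Lower tiers can then work with raw `decimal` and `DateTime` values and leave all culture-specific rendering to this layer.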
If you can give more details I'm sure there are other insights available from everyone here.
Good Luck!