AutoFixture adhering to FluentValidation

I'm currently learning AutoFixture and can't figure out whether there is a neat way to let AutoFixture generate a specimen that adheres to the rules defined in my FluentValidation validator.
Desired Solution
Let AutoFixture generate specimens following the rules defined in the validator.
Current Situation
AutoFixture generates random properties that violate the validation rules.
Question
Can AutoFixture create specimens adhering to an AbstractValidator from FluentValidation?

Out of the box, AutoFixture supports only DataAnnotations attributes as a means of generating models that would pass validation.
To my knowledge there is currently no glue library that would integrate AutoFixture with FluentValidation.
As a solution, you might want to create customizations for your models that adhere to the rules defined in your validators, then compose the customizations in whatever way makes sense for the tested scenario.
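For example, a customization can pin properties to values that satisfy the validator (a minimal sketch; the Customer model and its rules are hypothetical stand-ins for your own types):

```csharp
using AutoFixture; // Ploeh.AutoFixture in older versions

public class Customer
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class ValidCustomerCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        // Mirror the validator's rules by hand:
        // assume it requires a non-empty name and an age between 18 and 60.
        fixture.Customize<Customer>(c => c
            .With(x => x.Name, "Jane Doe")
            .With(x => x.Age, 42));
    }
}

// Usage:
// var fixture = new Fixture().Customize(new ValidCustomerCustomization());
// var customer = fixture.Create<Customer>();
```

The drawback is that the rules now live in two places (validator and customization), so the customization must be kept in sync by hand.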

Related

What is a JSON Hyper-Schema?

I am a beginner here and I have questions regarding JSON Hyper-Schema.
What is the purpose of links in Hyper-Schema and how do I validate them?
JSON Hyper-Schema is an extension to JSON Schema designed to support application-level semantics, in a similar vein to something like Swagger or RAML.
The JSON Schema standard was originally designed to have the same scope as something like XSD; that is, it's primarily about type definitions. Type definitions are important for things like API service contracts, as they allow you to remove ambiguity about the resources your API deals with.
However, like XSD, JSON Schema says nothing about what kinds of operations your types will be exposed over. In the REST world, tools such as Swagger were created to plug this gap. Hyper-Schema would appear to be another tool for this purpose.
Onto your questions:
what is the purpose of links in hyper schema
Links are the mechanism by which a schema author can specify, without ambiguity, the means by which the defined resources can be accessed.
how to validate them
You don't. A contract is a contract and does not require validation at the point of consumption. If your question is more about how to validate schema instances against a schema which contains links, again the answer is you do not. The links are there to tell any consumer how to semantically communicate with the resource.
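For illustration, a draft-04-style hyper-schema describing a hypothetical Person resource might declare its links like this (a minimal sketch; the resource and paths are made up):

```json
{
  "$schema": "http://json-schema.org/draft-04/hyper-schema#",
  "title": "Person",
  "type": "object",
  "properties": {
    "id":   { "type": "integer" },
    "name": { "type": "string" }
  },
  "links": [
    { "rel": "self", "href": "/people/{id}" },
    { "rel": "collection", "href": "/people" }
  ]
}
```

A client that understands the `links` keyword can discover from the schema alone how to fetch an individual Person or the whole collection.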

Fluent validation or EntLib Validation Application Block for WCF services

I am looking for a standard way to add validation of the input parameters to the set of WCF services.
Can anyone give a comparison of FluentValidation (http://fluentvalidation.codeplex.com/) and the EntLib Validation Application Block?
What are advantages/disadvantages of each of them?
What are scenarios when one or another should be used?
My question is similar to Which validation framework would you recommend for .net projects? and Which validation framework to choose: Spring Validation or Validation Application Block (Enterprise Library 4.0)?, but the answers to those questions do not contain a detailed comparison.
I would also appreciate recommendations of other similar technologies (with reasoning).
Does anyone have experience with both frameworks and selecting one for their projects? What were the reasons for the decision?
After a few months, I can answer that the EntLib Validation Application Block (VAB) is a mature library which supports code, attribute, and configuration validation.
In most cases, developers should start with attribute validation of DataMember properties in the DataContract request, as the simplest and most concise approach.
If you expect that validation rules will change frequently, or that different installations of the application will need different rules for the same property (e.g. zip-code rules differ between countries), you should choose configuration. It is not straightforward and requires some learning, but the flexibility is an advantage. The EntLib config editor can help make it easier.
Only for complex rules that can't be expressed using attributes or configuration should you write code.
If you are repeating the same rules several times, consider creating a custom validator and validation attribute.
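As a sketch of the attribute approach on a DataContract request (the request type and rules here are illustrative, using validators from the Microsoft.Practices.EnterpriseLibrary.Validation.Validators namespace):

```csharp
using System.Runtime.Serialization;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

[DataContract]
public class CreateCustomerRequest
{
    // Name is required and limited to between 1 and 50 characters.
    [DataMember]
    [NotNullValidator]
    [StringLengthValidator(1, 50)]
    public string Name { get; set; }
}
```

The rules are visible right next to the data contract, which is what makes this the most concise option for simple constraints.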
The FluentValidation library supports adding validation in code, which is the less desirable method, so I don't understand why FluentValidation is so popular. I was also surprised that the FluentValidation author is not familiar with EntLib VAB.
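For comparison, the code-based fluent style looks like this (a minimal sketch; the Customer type is an illustrative stand-in):

```csharp
using FluentValidation;

public class Customer
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class CustomerValidator : AbstractValidator<Customer>
{
    public CustomerValidator()
    {
        RuleFor(x => x.Name).NotEmpty().Length(1, 50);
        RuleFor(x => x.Age).InclusiveBetween(18, 120);
    }
}

// Usage:
// var result = new CustomerValidator().Validate(customer);
// if (!result.IsValid) { /* inspect result.Errors */ }
```

Whether the rules belong in code or in attributes/configuration is exactly the trade-off discussed above.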
My original question was about input parameters for WCF operations; however, best practice recommends using a single request parameter as a data contract rather than multiple RPC-style simple parameters.
In any case, VAB provides attributes for individual parameters of WCF operations, which gives a more concise view (e.g. see http://www.codeproject.com/Articles/259327/Integrate-Validation-Block-with-WCF).

NHibernate validator or Fluent Validation?

I use NHibernate and need to decide how to validate domain entities. What do you recommend? Are there any problems with using NHibernate together with FluentValidation?
Of the O/RM tools I know, NHibernate has the smallest footprint in the C# code of domain classes. It almost allows working with POCOs that are totally oblivious of their dependencies. That is exactly what FluentValidation allows too, so it seems like a happy marriage to me.
But I wouldn't dare recommend or advise against any validation tool or framework without knowing more of your context. There are many candidates and they would work with NHibernate as well. Data access and validation are two different concerns that should (and can) be separated from one another.
FluentValidation is really good for user-input validation and can be used for simple business rules, but it has no integration with NHibernate. That means nothing except your custom code would prevent NHibernate from saving an invalid entity.
On the other hand, there is the NHibernate Validator project. It integrates with NHibernate and won't let you save an invalid entity.
Generally, your domain shouldn't know about your ORM; it should be kept in isolation. So my answer is: I can't see any issue with NHibernate and FluentValidation, but keep the domain as isolated as possible.

WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

Thank you for reading.
I am implementing a WCF Service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema, and working my way back to code. There are a number of large Schema documents (which import yet more Schema documents) related to my implementation, provided by this specification.
I am able to generate code using xsd.exe, by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach.
there are literally hundreds of classes - the code file is half a megabyte in size
duplicate classes (e.g. Type and Type1, which both represent the same type)
there are classes declared as inheriting from a base class, but that base class is not generated/defined
I understand that there are limitations to the types of Schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even XmlSerializer. My question is two-fold:
Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success with generating data contracts from OAGIS or HR-XML schema?
Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I would be losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some middle ground between working with .NET types and pure XML?
Thank you very much!
-Sasha Borodin
DFWHC.org
Sasha, if you are going to use code generation, you likely should never start with the modular schemas. When you put a code generator against the modular schemas, you'll generate a class for all the common components in the HR-XML library and a good bit of the common components in OAGIS. You don't want this. HR-XML is distributed with standalone schemas, which are a better starting point. An even better starting point would be to create a flattened package XSD containing only the types brought in by the WSDL. If you use a couple of standalone schemas, you are going to have at least some duplication among your generated code.
Well, you could try and do something like this:
convert your XSD to C# code separately, using something like the xsd.exe tool from Microsoft, or something like Xsd2Code as a Visual Studio plugin
once you have your C# classes, weed out any inconsistencies, duplications, and so forth
package everything up into a separate class library assembly
now, when generating your WCF service from the WSDL - either using Add Service Reference in Visual Studio or the svcutil.exe tool - reference the assembly with all the data classes. Doing so, WCF should skip re-creating the whole set of classes and use whatever is available in that data assembly
With this, you might be able to get this mess under control.
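The code-generation and reuse steps might look roughly like this on the command line (a sketch; the file, namespace, and assembly names are placeholders):

```
rem Generate C# classes from the flattened schema set
xsd.exe /classes /namespace:HrXml.Contracts Main.xsd Supporting.xsd

rem Reference the cleaned-up data assembly so svcutil reuses its types
rem instead of regenerating them from the WSDL
svcutil.exe /reference:HrXml.Contracts.dll Service.wsdl
```

The /reference switch is what tells svcutil.exe to reuse existing types from the supplied assembly.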

Validation Block vs Nhibernate.Validator

I am looking for a validation framework. Since I am already using NHibernate, I am thinking of using NHibernate.Validator from the contrib project; however, I have also looked at the MS Validation Application Block, which seems robust. I have not yet gone into the details of either, so I wonder: has anyone tried these two frameworks, and what was the experience like?
NHibernate Validator does not require you to use NHibernate for persistence. Usage can be as simple as:
var engine = new ValidatorEngine();
InvalidValue[] errors = engine.Validate(someModelObjectWithAttributes);
foreach (var error in errors)
{
    Console.WriteLine(error.Message);
}
Of course it can hook into NHibernate and prevent persistence of invalid objects, but you may use it to validate non-persistent objects as well.
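The attributes mentioned above come from the NHibernate.Validator.Constraints namespace; a sketch with an illustrative model (the constraint names are from NHibernate Validator, but the Customer type is made up):

```csharp
using NHibernate.Validator.Constraints;

public class Customer
{
    // Name must be present and no longer than 50 characters.
    [NotEmpty]
    [Length(Max = 50)]
    public string Name { get; set; }
}

// Validation (standalone, no persistence required):
// var engine = new ValidatorEngine();
// InvalidValue[] errors = engine.Validate(new Customer());
```

Each InvalidValue returned describes one violated constraint, as shown in the loop above.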
For the most part, I would say that Spring.NET is pretty independent, meaning it should not force you to re-architect; you can use as much or as little as you want. It should be pretty easy to write an object that you can inject into classes needing validation using Spring. You would then wire this object up in Castle to take the name of the "Validation Group" or "Validators" you need, and have Spring inject the validators into that object, where your form/business object/service would then use them.
Here is a link to the docs; validation is section 12:
http://www.springframework.net/docs/1.2.0-M1/reference/html/index.html
Are you just using Castle, or are you using MonoRail?
Of course, you can try to write your own validation framework. For example, Karl Seguin's series will help you:
http://codebetter.com/blogs/karlseguin/archive/2009/04/26/validation-part-1-getting-started.aspx
http://codebetter.com/blogs/karlseguin/archive/2009/04/27/validation-part-2-client-side.aspx
http://codebetter.com/blogs/karlseguin/archive/2009/04/28/validation-part-3-server-side.aspx
It's a really nice solution :)
How about D) None of the above.
I remember evaluating this last year and decided on going with Spring.NET's validation framework.
If you're using NHibernate, you'll probably want to use Spring.NET's facilities for NHibernate as well.