Using FluentValidation's WithMessage method with parameters with a WarningMessage - fluentvalidation

I am new to FluentValidation in general. I am writing a validator, and I can't seem to figure out how to make .WithMessage produce a warning message instead of an error message while still using parameters.
I can do this:
RuleFor(x => x.Endorsement)
    .Must((coverage, endorsement) => HaveCoveragePerAcreOverMinimum(_coverage, coverage))
    .When(x => (!HaveSpecialRequest(_coverage) && !HavePermissionsToOverrideLimits()))
    .WithMessage("Some error message {0}", x => x.MyError);
But that sets it as an ErrorMessage and I need a Warning Message. I tried this but no dice:
RuleFor(x => x.Endorsement)
    .Must((coverage, endorsement) => HaveCoveragePerAcreOverMinimum(_coverage, coverage))
    .When(x => (!HaveSpecialRequest(_coverage) && !HavePermissionsToOverrideLimits()))
    .WithMessage(new WarningMessage("Some warning message {0}", x => x.MyError));

There's no direct implementation of a warning message in FluentValidation (or in MVC's ModelState). FluentValidation does, however, provide a WithState() extension method that you can use for this purpose.
First, you can create an enum
public enum ValidationErrorLevel
{
    Error,
    Warning
}
Then, you can write a few extension methods in a static class to work with warnings and errors.
The first is used in your validator classes:
public static IRuleBuilderOptions<T, TProperty> AsWarning<T, TProperty>(this IRuleBuilderOptions<T, TProperty> rule)
{
    return rule.WithState<T, TProperty>(x => ValidationErrorLevel.Warning);
}
You can use it this way:
RuleFor(x => x.Endorsement)
    .Must((coverage, endorsement) => HaveCoveragePerAcreOverMinimum(_coverage, coverage))
    .When(x => (!HaveSpecialRequest(_coverage) && !HavePermissionsToOverrideLimits()))
    .WithMessage("Some warning message {0}", x => x.MyError)
    .AsWarning();
A few others help you manage your validation results:
public static IList<ValidationFailure> GetWarnings(this ValidationResult result)
{
    return result.Errors
        .Where(m => m.CustomState != null && Convert.ToInt32(m.CustomState) == (int)ValidationErrorLevel.Warning)
        .ToList();
}

public static IList<ValidationFailure> GetErrors(this ValidationResult result)
{
    return result.Errors.Except(result.GetWarnings()).ToList();
}
Then, you should use
validator.Validate(<someclass>)
instead of
validator.ValidateAndThrow(<someclass>)
so that warnings come back in the result rather than surfacing as an exception:
var results = validator.Validate(<someclass>);
You can then put the errors in ModelState, for example:
foreach (var error in results.GetErrors())
{
    ModelState.AddModelError(error.PropertyName, error.ErrorMessage);
}
and do something else for the warnings, for example put them in TempData:
TempData["warnings"] = new List<string>();
((List<string>)TempData["warnings"]).AddRange(results.GetWarnings().Select(warning => warning.ErrorMessage));
Then you can display them like any other TempData.
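Putting it together, a controller action might look like this (a sketch only; the Save action, _validator field, and EndorsementModel are hypothetical names, not from the original post):

[HttpPost]
public ActionResult Save(EndorsementModel model)
{
    var results = _validator.Validate(model);

    // errors block the request
    foreach (var error in results.GetErrors())
        ModelState.AddModelError(error.PropertyName, error.ErrorMessage);

    // warnings are surfaced to the next view but do not block
    TempData["warnings"] = results.GetWarnings()
        .Select(w => w.ErrorMessage)
        .ToList();

    if (!ModelState.IsValid)
        return View(model);

    return RedirectToAction("Index");
}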

Validation (in general) attaches errors to the ModelState, which is itself an object, so if you put a breakpoint on the if (ModelState.IsValid) line you can look at the other properties of ModelState.
The thing with errors (and error messages) is that they are either there or not: if there is no issue there is no error message, and if there is an issue an error message will be added.
If I were you, when the model state is not valid in your controller, I would get all the error messages using
var allErrors = ModelState.Values.SelectMany(v => v.Errors);
and then iterate over each one and decide what you want to do with it.
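For example, a rough sketch of that iteration (the handling inside the loop is purely illustrative; note that ModelState only carries message strings, so FluentValidation's CustomState is not available here):

var allErrors = ModelState.Values.SelectMany(v => v.Errors);
foreach (var error in allErrors)
{
    // decide per message: log it, re-display it, treat it as a warning, etc.
    System.Diagnostics.Debug.WriteLine(error.ErrorMessage);
}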
I hope this answers the question, or at least helps, as I am not totally sure what you are asking / trying to get to.

Related

Build/Test verification for missing implementations of query/commands in MediatR

We're using MediatR heavily in our LoB application, where we use the command & query pattern.
Often, to keep development moving, we create the commands and the queries first, since they are simple POCOs.
This can sometimes lead to forgetting to create the actual command/query handler. Since there is no compile-time validation that an implementation actually exists for a query/command, I was wondering what the best approach would be to detect a missing implementation and raise an error before the change can be merged into master.
My idea so far:
Create two tests, one for queries and one for commands, that scan all the assemblies for implementations of IRequest<TResponse>, and then scan the assemblies for an associated implementation of IRequestHandler<TRequest, TResponse>.
But this would still require the tests to be executed first (which happens in the build pipeline), which in turn depends on the developer running them manually (or configuring VS to run them after compiling).
I don't know if there's a compile-time solution for this, or even whether that would be a good idea.
We've gone with test-based (and thus build-time) verification.
Sharing the code here for the actual test, which we have once per domain project. The mediator modules contain our query/command (handler) registrations; the infrastructure modules contain our query handlers:
public class MissingHandlersTests
{
    [Fact]
    public void Missing_Handlers()
    {
        var assemblies = new List<Assembly>();
        assemblies.Add(typeof(MediatorModules).Assembly);
        assemblies.Add(typeof(InfrastructureModule).Assembly);

        var missingTypes = MissingHandlersHelpers.FindUnmatchedRequests(assemblies);

        Assert.Empty(missingTypes);
    }
}
The helper class:
public class MissingHandlersHelpers
{
    public static IEnumerable<Type> FindUnmatchedRequests(List<Assembly> assemblies)
    {
        // All closed implementations of IRequest<T>
        var requests = assemblies.SelectMany(x => x.GetTypes())
            .Where(t => t.IsClass && t.IsClosedTypeOf(typeof(IRequest<>)))
            .ToList();

        // All handler interfaces implemented by the concrete handlers
        var handlerInterfaces = assemblies.SelectMany(x => x.GetTypes())
            .Where(t => t.IsClass && (t.IsClosedTypeOf(typeof(IRequestHandler<>)) || t.IsClosedTypeOf(typeof(IRequestHandler<,>))))
            .SelectMany(t => t.GetInterfaces())
            .ToList();

        var missingRegistrations = new List<Type>();
        foreach (var request in requests)
        {
            // Find the TResponse of the request's IRequest<TResponse> interface
            var responseType = request.GetInterfaces()
                .Single(i => i.IsClosedTypeOf(typeof(IRequest<>))
                             && i.GetGenericArguments().Any()
                             && !i.IsClosedTypeOf(typeof(ICacheableRequest<>)))
                .GetGenericArguments()
                .First();

            // The handler interface we expect someone to have implemented
            var handler = typeof(IRequestHandler<,>).MakeGenericType(request, responseType);

            if (!handlerInterfaces.Any(x => x == handler))
                missingRegistrations.Add(handler);
        }

        return missingRegistrations;
    }
}
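Note that IsClosedTypeOf is an extension method from Autofac. If you are not using Autofac, a minimal reflection-only stand-in could look like this (a sketch that only covers the interface case used above):

using System;
using System.Linq;

public static class TypeExtensions
{
    // True if the type is, or implements, a closed version of the given open generic type.
    public static bool IsClosedTypeOf(this Type type, Type openGeneric)
    {
        return new[] { type }.Concat(type.GetInterfaces())
            .Any(t => t.IsGenericType
                      && !t.ContainsGenericParameters
                      && t.GetGenericTypeDefinition() == openGeneric);
    }
}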
If you are using .NET Core, you could use Microsoft.AspNetCore.TestHost to create an endpoint your tests can hit. It works sort of like this:
var builder = WebHost.CreateDefaultBuilder()
    .UseStartup<TStartup>()
    .UseEnvironment(EnvironmentName.Development)
    .ConfigureTestServices(services =>
    {
        services.AddTransient((a) => this.SomeMockService.Object);
    });

this.Server = new TestServer(builder);
this.Services = this.Server.Host.Services;
this.Client = this.Server.CreateClient();
this.Client.BaseAddress = new Uri("http://localhost");
So we mock any HTTP calls (or anything else we want), but the real Startup gets called.
Our tests then look like this:
public SomeControllerTests(TestServerFixture<Startup> testServerFixture)
    : base(testServerFixture)
{
}

[Fact]
public async Task SomeController_Returns_Titles_OK()
{
    var response = await this.GetAsync("/somedata/titles");
    response.StatusCode.Should().Be(HttpStatusCode.OK);

    var responseAsString = await response.Content.ReadAsStringAsync();
    var actualResponse = Newtonsoft.Json.JsonConvert.DeserializeObject<IEnumerable<string>>(responseAsString);

    actualResponse.Should().NotBeNullOrEmpty();
    actualResponse.Should().HaveCount(20);
}
So when this test runs, if you have not registered your handler(s), it will fail! We use this to assert what we need (db records added, response matching expectations, etc.), but it is a nice side effect that forgetting to register a handler gets caught at the test stage.
https://fullstackmark.com/post/20/painless-integration-testing-with-aspnet-core-web-api
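For completeness, handler registration in this kind of setup typically happens in Startup via MediatR's DI extension (a sketch, assuming the MediatR.Extensions.Microsoft.DependencyInjection package of that era):

public void ConfigureServices(IServiceCollection services)
{
    // Scans the assembly and registers every IRequestHandler<,> found in it;
    // a handler living in an assembly that is not scanned here still compiles
    // and would only be caught by the tests above.
    services.AddMediatR(typeof(Startup).Assembly);
}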

Checking exceptions with TestCaseData parameters

I'm using NUnit 3 TestCaseData objects to feed test data to tests and Fluent Assertions library to check exceptions thrown.
Typically my TestCaseData object contains two parameters param1 and param2 used to create an instance of some object within the test and upon which I then invoke methods that should/should not throw exceptions, like this:
var subject = new Subject(param1, param2);
subject.Invoking(s => s.Add()).Should().NotThrow();
or
var subject = new Subject(param1, param2);
subject.Invoking(s => s.Add()).Should().Throw<ApplicationException>();
Is there a way to pass NotThrow() and Throw<ApplicationException>() parts as specific conditions in a third parameter in TestCaseData object to be used in the test? Basically I want to parameterize the test's expected result (it may be an exception of some type or no exception at all).
TestCaseData is meant for test case data, not for assertion methods.
I would keep the NotThrow and Throw cases in separate tests to maintain readability.
If they share a lot of setup logic, I would extract it into shared methods to reduce the size of the test method bodies.
TestCase attributes accept only compile-time constants, whereas a TestCaseSource produces its TestCaseData at runtime, which is what makes it possible to pass Throw and NotThrow along as parameters.
Here's a way to do it by misusing TestCaseSource.
The result is an unreadable test method, so please don't use this anywhere.
Anyway here goes:
[TestFixture]
public class ActionTests
{
    private static IEnumerable<TestCaseData> ActionTestCaseData
    {
        get
        {
            yield return new TestCaseData(
                (Action)(() => throw new Exception()),
                (Action<Action>)(act => act.Should().Throw<Exception>()));
            yield return new TestCaseData(
                (Action)(() => { }),
                (Action<Action>)(act => act.Should().NotThrow()));
        }
    }

    [Test]
    [TestCaseSource(typeof(ActionTests), nameof(ActionTestCaseData))]
    public void Calculate_Success(Action act, Action<Action> assert)
    {
        assert(act);
    }
}
I ended up using this:
using ExceptionResult = System.Action<System.Func<UserDetail>>;

[Test]
[TestCaseSource(typeof(UserEndpointTests), nameof(AddUserTestCases))]
public void User_Add(string creatorUsername, Role role, ExceptionResult result)
{
    var endpoint = new UserEndpoint(creatorUsername);
    var person = GeneratePerson();
    var request = GenerateCreateUserRequest(person, role);

    // Assertion comes here
    result(endpoint.Invoking(e => e.Add(request)));
}
private static IEnumerable AddUserTestCases
{
    get
    {
        yield return new TestCaseData(TestUserEmail, Role.User,
                new ExceptionResult(x => x.Should().Throw<ApplicationException>()))
            .SetName("{m} (Regular User => Regular User)")
            .SetDescription("User with Regular User role cannot add any users.");
        yield return new TestCaseData(TestAdminEmail, Role.Admin,
                new ExceptionResult(x => x.Should().NotThrow()))
            .SetName("{m} (Admin => Admin)")
            .SetDescription("User with Admin role adds another user with Admin role.");
    }
}
No big issues with readability; besides, the SetName() and SetDescription() calls in the test case source help with that.

How to resolve a Collection of Types from within the IoC Container

We're using MvvmCross in our app, and using the MvxSimpleIoCContainer
In the app startup, we register all of our Migrations.
It's easy to do, since all migrations implement IMigration:
typeof(IMigration)
    .Assembly
    .CreatableTypes()
    .Inherits<IMigration>()
    .AsTypes()
    .RegisterAsLazySingleton();
After the migrations are registered, we need to run them consecutively, and therefore the MigrationRunner looks a little something like this.
Mvx.Resolve<IMigrationRunner>().RunAll(SystemRole.Client, new List<IMigration>
{
    Mvx.IocConstruct<Migration001>(),
    Mvx.IocConstruct<Migration002>()
});
As you can see, I'm explicitly constructing each migration using Mvx. This gets tedious and is prone to mistakes once a bunch of migrations end up in the app.
What I'd prefer to be able to do is resolve the entire collection in one fell swoop, and not have to touch it every time I create a new Migration.
Is there a way to do this via MvvmCross?
Pseudo code:
Mvx.Resolve<IMigrationRunner>()
    .RunAll(SystemRole.Client, Mvx.ResolveAll<IMigration>());
I would use LINQ to get the list of types. Unfortunately there's no way to get a list of registered types, so you'll have to enumerate the types again like you do for registration. You can even sort by type name. Now that you have a list of types, you can create a new list of instantiated/resolved types to pass into RunAll(). Something like:
var migrationTypes = typeof(IMigration)
    .Assembly
    .CreatableTypes()
    .Inherits<IMigration>()
    .AsTypes()
    .OrderBy(t => t.Name)
    .ToList();

Mvx.Resolve<IMigrationRunner>()
    .RunAll(SystemRole.Client,
        migrationTypes.Select(t => Mvx.Resolve(t)).ToList());
This is "browser" code, so no guarantees, but you get the gist.
OK, so reflection is the answer to this problem for now. Eventually, I'd like to extend our custom MvxServiceLocator : IServiceLocator to include something like
public IEnumerable<object> GetAllInstances(Type serviceType) { ... }
but for now I've just got a RunMigrations() method in the app:
private void RunMigrations()
{
    var migrationType = typeof(IMigration); // IMigration is in a separate assembly
    var migrations = GetType().Assembly
        .GetTypes()
        .Where(t => migrationType.IsAssignableFrom(t) && !t.IsAbstract)
        .OrderBy(t => t.Name)
        .Select(m => _serviceLocator.GetInstance(m) as IMigration)
        .ToList();

    var migrationRunner = new MigrationRunner(Mvx.Resolve<IDbProvider>());
    migrationRunner.RunAll(SystemRole.Client, migrations);
}
where _serviceLocator.GetInstance(m) just lives in our custom MvxServiceLocator
public object GetInstance(Type serviceType)
{
    return _ioCProvider.Resolve(serviceType);
}
Edit: here's how I extended our service locator wrapper.
public class MvxServiceLocator : IServiceLocator
{
    private readonly IMvxIoCProvider _ioCProvider;

    public MvxServiceLocator(IMvxIoCProvider ioCProvider)
    {
        _ioCProvider = ioCProvider;
    }

    public IEnumerable<TService> GetAllInstances<TService>()
    {
        var serviceType = typeof(TService);
        var registrations = GetType().Assembly
            .GetTypes()
            .Where(t => serviceType.IsAssignableFrom(t) && !t.IsAbstract)
            .Select(m => (TService)_ioCProvider.Resolve(m));

        return registrations;
    }
}
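With that in place, the RunMigrations() method from earlier can collapse to something like this (a sketch against the wrapper above; the OrderBy assumes the Migration001, Migration002, ... naming convention keeps migrations in run order):

private void RunMigrations()
{
    var migrations = _serviceLocator.GetAllInstances<IMigration>()
        .OrderBy(m => m.GetType().Name)
        .ToList();

    Mvx.Resolve<IMigrationRunner>().RunAll(SystemRole.Client, migrations);
}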

Ninject Moqing Kernel (what does reset do?)

I have an interface that I'm using with a couple of different concrete classes. I wish there was something like this...
_kernel.GetMock<ISerializeToFile>().Named("MyRegisteredName")
    .Setup(x => x.Read<ObservableCollection<PointCtTestDataInput>>(
        It.IsAny<string>()));
The project I'm working on uses the service locator pattern, an anti-pattern I'm growing less fond of all the time...
Originally I tried:
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
    _kernel = new MoqMockingKernel();
}

[TestInitialize]
public void TestInitialize()
{
    _kernel.Reset();
    ServiceLocator.SetLocatorProvider(
        () => new NinjectServiceLocator(_kernel));

    _kernel.Bind<ISerializeToFile>().ToMock()
        .InSingletonScope().Named("ObjectToFile");

    _kernel.GetMock<ISerializeToFile>()
        .Setup(x => x.Read<ObservableCollection<PointCtTestDataInput>>(
            It.IsAny<string>()));
    _kernel.GetMock<ISerializeToFile>()
        .Setup(x => x.Save<ObservableCollection<PointCtTestDataInput>>(
            It.IsAny<ObservableCollection<PointCtTestDataInput>>(),
            It.IsAny<string>()));
}
I got the standard Ninject error stating that more than one matching binding is available. So, I moved _kernel = new MoqMockingKernel(); into the TestInitialize, and then that error went away... Perhaps I'm guessing incorrectly at what _kernel.Reset() does?
Reset removes any instances from the cache; it does not delete existing bindings. So the second test ends up with the ISerializeToFile binding registered twice.
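In other words, either recreate the kernel per test (as you discovered), or replace the binding instead of adding a new one. A minimal sketch, assuming Ninject's Rebind, which removes existing bindings for the service type before registering the new one:

[TestInitialize]
public void TestInitialize()
{
    _kernel.Reset(); // clears cached instances only; bindings survive

    // Rebind instead of Bind, so re-running this per test
    // does not stack a second ISerializeToFile binding
    _kernel.Rebind<ISerializeToFile>().ToMock()
        .InSingletonScope().Named("ObjectToFile");
}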

Rhino Mocks: Repeat.Once() not working?

Can anyone tell me why in the world the following test is not failing?
[Test]
public void uhh_what()
{
    var a = MockRepository.GenerateMock<IPrebuiltNotifier>();
    a.Expect(x => x.Notify()).Repeat.Once();

    a.Notify();
    a.Notify();

    a.VerifyAllExpectations();
}
I really need a second pair of eyes to confirm I'm not crazy... Now I'm worried that all my tests are unreliable.
There is already a thread on the RhinoMocks group.
GenerateMock creates a dynamic mock. A dynamic mock allows calls that are not specified (i.e. expected); when that happens, it just returns null (or the default value of the return type).
Note: Repeat specifies behaviour (like Stub); it does not tighten the expectation, even when written inside one.
If you want to avoid having more than a certain number of calls, you could write:
[Test]
public void uhh_what()
{
    var a = MockRepository.GenerateMock<IPrebuiltNotifier>();
    a.Expect(x => x.Notify()).Repeat.Once();
    a.Stub(x => x.Notify()).Throw(new InvalidOperationException("gotcha"));

    a.Notify();
    // this fails
    a.Notify();

    a.VerifyAllExpectations();
}
Or
[Test]
public void uhh_what()
{
    var a = MockRepository.GenerateMock<IPrebuiltNotifier>();

    a.Notify();
    a.Notify();

    // this fails
    a.AssertWasCalled(
        x => x.Notify(),
        o => o.Repeat.Once());
}
When using GenerateMock (or dynamic mocks in general), I always mentally read Repeat.Once() as if it said:
a.Expect(x => x.Notify()).Repeat.AtLeastOnce();
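If you want unexpected calls to fail outright, a strict mock behaves that way by default (a sketch, assuming Rhino Mocks' GenerateStrictMock factory method):

[Test]
public void uhh_what_strict()
{
    // A strict mock rejects any call that was not explicitly expected,
    // so the second Notify() throws an ExpectationViolationException.
    var a = MockRepository.GenerateStrictMock<IPrebuiltNotifier>();
    a.Expect(x => x.Notify()).Repeat.Once();

    a.Notify();
    a.Notify(); // fails here

    a.VerifyAllExpectations();
}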