Does Binding Order Matter When Using WhenInjectedExactlyInto and a Default Binding?

With multiple Ninject modules, I end up having a binding order for a particular interface which looks like this:
Kernel.Bind<ILogger>().To<Logger>().WhenInjectedExactlyInto(typeof(TroubleshootingLogger), typeof(RegularAndStashLogger), typeof(LogStashLogger), typeof(KafkaSendClient));
Kernel.Bind<ILogger>().To<TroubleshootingLogger>();
Kernel.Bind<ILogger>().To<RegularAndStashLogger>().WhenInjectedExactlyInto<ProcessConfiguration>();
My question: when I ask the Kernel for an instance of ProcessConfiguration, will it inject TroubleshootingLogger (the default binding) or RegularAndStashLogger (the exact binding)?

I went ahead and built a small test program to determine this myself (I acknowledge I should have done this first).
As it turns out, Ninject does appear to check all "WhenInjectedExactlyInto" bindings before falling back to a default binding.
The program (which depends on Ninject to run, duh): pastebin.com/9Kpsb25h
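For reference, here is a minimal sketch along the same lines (not the linked pastebin program; the constructor bodies and the Console output are my own reconstruction, using the type names from the question):
using System;
using Ninject;

public interface ILogger { }
public class Logger : ILogger { }

public class TroubleshootingLogger : ILogger
{
    // Receives the plain Logger via the WhenInjectedExactlyInto binding below.
    public TroubleshootingLogger(ILogger inner) { }
}

public class RegularAndStashLogger : ILogger
{
    public RegularAndStashLogger(ILogger inner) { }
}

public class ProcessConfiguration
{
    public ILogger Log { get; private set; }
    public ProcessConfiguration(ILogger log) { Log = log; }
}

public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel();
        kernel.Bind<ILogger>().To<Logger>()
              .WhenInjectedExactlyInto(typeof(TroubleshootingLogger), typeof(RegularAndStashLogger));
        kernel.Bind<ILogger>().To<TroubleshootingLogger>();
        kernel.Bind<ILogger>().To<RegularAndStashLogger>().WhenInjectedExactlyInto<ProcessConfiguration>();

        // ProcessConfiguration itself resolves via Ninject's implicit self-binding.
        var config = kernel.Get<ProcessConfiguration>();
        Console.WriteLine(config.Log.GetType().Name); // prints "RegularAndStashLogger"
    }
}
The conditional binding wins: when a conditional binding's condition matches the request, Ninject does not consider the unconditional (default) bindings at all.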

Related

Spring WebFlux difference between DTOs

I use Spring Boot reactive WebFlux for my REST endpoint.
What is the difference between:
@PostMapping
public Mono someMethod(@RequestBody SomeDTO someDto) {
.....
to
@PostMapping
public Mono someMethod(@RequestBody Mono<SomeDTO> someDTO) {
....
I don't understand the difference in the input argument to my controller method. I know one is a POJO and the other is a Mono, but what does it mean from a reactive point of view?
First, some background. You are using the Annotated Controllers style of WebFlux (section 1.4 of the Spring reference docs). Those implementations are built on top of the Functional Endpoints (section 1.5). I would recommend using the Functional Endpoints directly.
From a reactive point of view, you have to understand that you need to create the reactive flow. In the Mono<SomeDTO> variant it has been created for you; in the SomeDTO variant you should probably use Mono.just(someDto) to create it yourself.
What is important to understand is that creation statements are executed during the assembly (build) phase of the reactive pipeline, not during its execution (subscription) phase. The build phase is not executed asynchronously.
So, consider two mono creation statements.
return Mono.just(Thread.sleep(1000));
and
return Mono.just(1000).map(Thread::sleep);
Yes, I know neither will compile because of the checked InterruptedException, but in the first case the Mono won't be returned to the client until one second has passed, and it will then do nothing when subscribed to. In the second case the Mono is returned to the client right away and waits one second after it is subscribed to. The second one is what you are striving for.
What does it mean to you? Consider
return Mono.just(repo.save(someDto));
and
return someDto.map(repo::save);
In the first case, as above, someDto is saved to the repo before the Mono is returned to the client, and the Mono does nothing when subscribed to. Wrong! In the second case the Mono is returned to the client, the thread is released back to the WebFlux framework for use by another request, and someDto is saved only when the client subscribes to the returned Mono. That is what you are striving for.
Do it correctly with your first case by doing
return Mono.just(someDto).map(repo::save);
This is doing Mono.just(someDto) yourself, whereas in your second case the WebFlux framework is doing it for you.
Which to choose? If you are just going to wrap someDto in a Mono and use it, you might as well have the framework do it for you, or use the functional endpoints. If you are going to create a Mono for some other reason and then use someDto during a mapping step, use your first case. That second reason is, IMHO, a rare use case.
Typically, when using the functional endpoints, you will end up calling request.bodyToMono(SomeDto.class), which is equivalent to your second case: it is exactly what the framework does for you there.

SimpleRetryStrategy Failed<TMessage>

The interface IHandleMessages has a contravariant type parameter TMessage:
IHandleMessages<in TMessage>
This makes it possible to register IHandleMessages<DerivedType> in the IoC container while the implementation is Handler : IHandleMessages<BaseType>. That is OK.
The problem lies in the Failed<TMessage> wrapper for failed messages, where TMessage is not contravariant. That makes it impossible to have an implementation like
Handler : IHandleMessages<Failed<Base>>
with a registration in the IoC container like
.As<IHandleMessages<Failed<DerivedType>>>()
I think it's reasonable to have Failed<in TMessage> rather than Failed<TMessage>.
What do you think?
I did not consider this scenario when I implemented the second-level retries mechanism in Rebus, but I would like to support it.
I've added the feature to 0.99.36 (which will be on NuGet in a few days if the tests pass and everything else looks good).
It looks slightly different from what you proposed though, since co- and contra-variance can only be had with interfaces.
Therefore, Rebus now dispatches an IFailed<out TMessage>, because then you can implement e.g. IHandleMessages<IFailed<AbstractBaseClass>> when the failed message is DerivedFromAbstractBaseClass.
Keep an eye on NuGet.org - it'll be out in a few days :)
In the meantime you can see what the code looks like in the accompanying test.
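To illustrate what the covariant IFailed<out TMessage> enables, here is a hedged sketch; the message types and handler name are invented for the example, and the namespaces follow current Rebus conventions (details may differ slightly in the 0.99.x builds):
using System.Threading.Tasks;
using Rebus.Handlers;
using Rebus.Retry.Simple;

public abstract class OrderMessage { }
public class PlaceOrder : OrderMessage { }

// Because IFailed<out TMessage> is covariant, this one handler also receives
// failed PlaceOrder messages (and any other OrderMessage subtype).
public class FailedOrderMessageHandler : IHandleMessages<IFailed<OrderMessage>>
{
    public Task Handle(IFailed<OrderMessage> failedMessage)
    {
        // failedMessage.Message is the original message; ErrorDescription
        // explains why the second-level retry mechanism kicked in.
        System.Console.WriteLine(failedMessage.ErrorDescription);
        return Task.FromResult(0);
    }
}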

Ninject InRequest Scope Losing Binding

I'm having a frustrating issue with Ninject and MVC 4.
Here is the code block:
Kernel.Bind<IUserInfo>().To<UserInfo>().InRequestScope();
var userInfo = Kernel.Get<IUserInfo>();
Most of the time, this is fine, and I have a user info. Sometimes, though, I get the following error:
Error activating IUserInfo
No matching bindings are available, and the Type is not self-bindable.
Activation path:
1) Request for IUserInfo
Suggestions:
1) Ensure that you have defined a binding for IUserInfo.
2) If the binding was defined in a module, ensure that the module has been loaded into the kernel.
3) Ensure you have not accidentally created more than one kernel.
4) If you are using constructor arguments, ensure that the parameter name matches the constructor's parameter name.
5) If you are using automatic module loading, ensure the search path and filters are correct.
I've pared down everything I can think of, and am at a loss. I don't know why this would fail intermittently. Based on my admittedly limited knowledge of Ninject, there should be no way for the binding to be missing.
I see a lot of references to using the Ninject MVC NuGet packages, but the app as I inherited it does not use those; it initializes Ninject using an ActionFilter. Is this pattern just broken at its core, and somehow interfering with proper binding?
Help?
Take a look at the BindFilter option
https://github.com/ninject/ninject.web.mvc/wiki/Filter-configurations
There is some sort of caching issue, I believe, that makes filters behave differently from controllers. This means that the binding can fail, usually under heavy load, but unpredictably.
It turns out that newer versions of Ninject need more setup for InRequestScope to work. By removing Ninject entirely, and then re-adding references to Ninject, Ninject.Web.Common, and Ninject.Web.MVC, the Ninject.Web.Common.cs file that was necessary for InRequestScope to work was added back.
Previously, it was actually binding InTransientScope, which meant the instance could be garbage collected, which is non-deterministic, which explains my intermittent issues. I wish it had thrown an exception when I tried to bind InRequestScope, but c'est la vie.
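For reference, a minimal sketch of the kind of bootstrapper the Ninject.Web.Common package generates in App_Start/Ninject.Web.Common.cs (the real generated file also wires itself up through WebActivator attributes and registers the request-scope HTTP module; treat this as an outline, not the exact file):
using Ninject;
using Ninject.Web.Common;

public static class NinjectWebCommon
{
    private static readonly Bootstrapper bootstrapper = new Bootstrapper();

    // Called at application start (via WebActivator in the generated file).
    public static void Start()
    {
        bootstrapper.Initialize(CreateKernel);
    }

    public static void Stop()
    {
        bootstrapper.ShutDown();
    }

    private static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();
        // With the bootstrapper in place, InRequestScope is tied to the
        // HTTP request instead of silently degrading to transient behaviour.
        kernel.Bind<IUserInfo>().To<UserInfo>().InRequestScope();
        return kernel;
    }
}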

Disable implicit binding/injection of non explicitly bound classes in Ninject 2+

If you request an unbound object from Ninject, the default behaviour appears to be (if a suitable constructor is available) to create an instance of the requested type.
I'd like to disable this behaviour (I had a difficult-to-debug issue because something was auto-bound instead of picking up my custom binding in a module). This question hints that it is possible, but I'm unable to find the answer on the Ninject wiki.
Remove the SelfBindingResolver from the kernel components after creation:
kernel.Components.RemoveAll<IMissingBindingResolver>();
kernel.Components.Add<IMissingBindingResolver, DefaultValueBindingResolver>();
The following is a better, more direct way of removing the SelfBindingResolver, without assuming that the DefaultValueBindingResolver is the only other IMissingBindingResolver component:
kernel.Components.Remove<IMissingBindingResolver, SelfBindingResolver>();
It's possible the Remove<T, TImplementation>() method was only added in a recent version of Ninject, but this works for me using Ninject 3.2.2.0.
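A quick way to verify the effect (a sketch; SomeConcreteService is a made-up type with no explicit binding):
using System;
using Ninject;
using Ninject.Planning.Bindings.Resolvers;

public class SomeConcreteService { }

public static class Demo
{
    public static void Main()
    {
        var kernel = new StandardKernel();
        kernel.Components.Remove<IMissingBindingResolver, SelfBindingResolver>();

        try
        {
            // Would normally succeed via implicit self-binding; now it throws.
            kernel.Get<SomeConcreteService>();
        }
        catch (ActivationException ex)
        {
            Console.WriteLine(ex.Message); // "No matching bindings are available..."
        }
    }
}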

Can someone help me set up Ninject 2 with Log4net?

I've been (happily) using Ninject for a while now with some basic scenarios, and would like to give it control of my logging. I noted the existence of the Ninject.Extensions.Logging namespace, and would like to use it, but I'm running into two issues:
I want the logger to be initialized with the type of the class running it (as if I ran LogManager.GetLogger with the GetCurrentMethod().DeclaringType).
I want to be able to easily mock or "nullify" the logger for unit testing (i.e. I don't want the logger to actually do anything), without running into NullReferenceExceptions for not initializing it.
Now, I know there are some questions (and even answers) around here, but I couldn't seem to find any that pointed me in the right direction.
I'd appreciate any help (even a "you bonehead, it's here!" linking to something I should have noticed).
This is the default behaviour of the extension: the injected logger is created for the type of the class it is injected into.
Don't use Ninject to create the object under test in your unit tests. Create an instance manually and pass whatever you want for the logger.
Best have a look at the unit tests: https://github.com/ninject/ninject.extensions.logging/blob/master/src/Ninject.Extensions.Logging.Tests/Infrastructure/CommonTests.cs
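As a sketch of how that looks in practice (OrderProcessor is a made-up consumer class; the ILogger here is Ninject.Extensions.Logging.ILogger, not log4net's):
using Ninject;
using Ninject.Extensions.Logging;

public class OrderProcessor
{
    private readonly ILogger logger;

    // The logging extension resolves this with a logger created for
    // typeof(OrderProcessor), i.e. the type it is being injected into.
    public OrderProcessor(ILogger logger)
    {
        this.logger = logger;
    }

    public void Process()
    {
        logger.Info("Processing order");
    }
}
In a unit test, skip the kernel entirely and pass a stub, e.g. new OrderProcessor(Mock.Of<ILogger>()) with Moq, so the logger "works" as a no-op and nothing is null.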