SimpleRetryStrategy Failed<TMessage> - servicebus

The IHandleMessages interface has a contravariant type parameter TMessage:
IHandleMessages<in TMessage>
This makes it possible to register IHandleMessages<DerivedType> in an IoC container while the implementation is Handler : IHandleMessages<BaseType>. That is OK.
The problem is the Failed<TMessage> wrapper for failed messages, where TMessage is not contravariant. That makes it impossible to have
a handler implementation like Handler : IHandleMessages<Failed<Base>>
with an IoC container registration like .As<IHandleMessages<Failed<DerivedType>>>()
I think it would be reasonable to have Failed<in TMessage> rather than Failed<TMessage>.
What do you think?

I did not consider this scenario when I implemented the second-level retries mechanism in Rebus, but I would like to support it.
I've added the feature to 0.99.36 (which will be on NuGet in a few days if the tests pass and everything else looks good).
It looks slightly different from what you proposed, though, since co- and contravariance can only be had with interfaces.
Therefore, Rebus now dispatches an IFailed<out TMessage>, because then you can implement e.g. IHandleMessages<IFailed<AbstractBaseClass>> when the failed message is DerivedFromAbstractBaseClass.
Keep an eye on NuGet.org - it'll be out in a few days :)
In the meantime you can see what the code looks like in the accompanying test.
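The variance trick translates beyond C#. As a rough Java analogue (Java has use-site rather than declaration-site variance, and the class names below are hypothetical), a handler written against a covariant failed-message wrapper can accept failures of derived message types:

```java
// Hypothetical Java analogue of Rebus's IFailed<out TMessage>.
// Java expresses variance at the use site ("? extends"), not in the
// interface declaration, but the effect for the handler is the same.
class Base {}
class Derived extends Base {}

interface Failed<T> {
    T getFailedMessage();
    String getErrorDescription();
}

class FailedImpl<T> implements Failed<T> {
    private final T message;
    private final String error;
    FailedImpl(T message, String error) { this.message = message; this.error = error; }
    public T getFailedMessage() { return message; }
    public String getErrorDescription() { return error; }
}

public class VarianceDemo {
    // A handler declared for failed *base* messages...
    static String handleFailedBase(Failed<? extends Base> failed) {
        Base msg = failed.getFailedMessage(); // covariant read is safe
        return "handled " + msg.getClass().getSimpleName()
                + ": " + failed.getErrorDescription();
    }

    public static void main(String[] args) {
        // ...accepts a failed *derived* message, the scenario from the question.
        Failed<Derived> failed = new FailedImpl<>(new Derived(), "boom");
        System.out.println(handleFailedBase(failed)); // handled Derived: boom
    }
}
```

The wrapper only ever produces a TMessage (it never consumes one), which is exactly why covariance is safe here.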

Related

Spring WebFlux difference between DTOs

I use Spring Boot reactive WebFlux for my REST endpoint.
What is the difference between:
@PostMapping
public Mono<SomeDTO> someMethod(@RequestBody SomeDTO someDto) {
.....
and
@PostMapping
public Mono<SomeDTO> someMethod(@RequestBody Mono<SomeDTO> someDTO) {
....
I don't understand the difference in the input argument to my controller method. I know one is a POJO and the other a Mono, but what does it mean from a reactive point of view?
First, some background. You are using the Annotated Controllers classes for WebFlux. These are based on the Functional Endpoints classes. I would recommend using the Functional Endpoints directly.
From a reactive point of view, you have to understand that you need to create the reactive flow. In the Mono<SomeDTO> method this has been created for you; in the SomeDTO method you would typically use Mono.just(someDto) to create it.
What is important to understand is that creation statements are executed during the build phase, not the execution phase, of the reactive flow. The build phase is not executed asynchronously.
So, consider two mono creation statements.
return Mono.just(Thread.sleep(1000));
and
return Mono.just(1000).map(Thread::sleep);
Yes, I know it won't compile because of InterruptedException, but in the first case the Mono won't be returned to the caller until one second has passed, and then it will do nothing when subscribed to. In the second case the Mono will be returned to the caller right away and will wait one second after it is subscribed to. The second one is what you are striving for.
What does it mean to you? Consider
return Mono.just(repo.save(someDto));
and
return someDto.map(repo::save);
In the first case, as above, someDto will be saved in the repo before the Mono is returned to the client, and the Mono will do nothing when subscribed to. Wrong! In the second case the Mono will be returned to the client, the thread is released back to the WebFlux framework for use in another request, and someDto will be saved when the client subscribes to the returned Mono. That is what you are striving for.
Do it correctly with your first case by doing
return Mono.just(someDto).map(repo::save);
This is doing Mono.just(someDto) yourself whereas in your second case the webflux framework is doing it for you.
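The build-phase vs. subscribe-phase distinction can be demonstrated without Spring at all. Here is a toy stand-in for Mono (just a deferred Supplier; TinyMono and save are made up for illustration) showing that work wrapped in just(...) runs while the pipeline is being built, while work inside map(...) waits for a subscriber:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

// Toy stand-in for Reactor's Mono, just enough to show the
// build-phase vs. subscribe-phase distinction from the answer above.
class TinyMono<T> {
    private final Supplier<T> source;
    private TinyMono(Supplier<T> source) { this.source = source; }

    static <T> TinyMono<T> just(T value) { return new TinyMono<>(() -> value); }

    <R> TinyMono<R> map(Function<T, R> f) {
        // Nothing runs here; we only compose a bigger deferred computation.
        return new TinyMono<>(() -> f.apply(source.get()));
    }

    T block() { return source.get(); } // "subscribe" and wait for the value
}

public class BuildVsSubscribe {
    static final List<String> log = new ArrayList<>();
    static String save(String dto) { log.add("saved " + dto); return dto; }

    public static void main(String[] args) {
        // Eager: save() runs while *building* the mono, before any subscriber.
        TinyMono<String> eager = TinyMono.just(save("a"));
        // Deferred: save() runs only when the mono is subscribed to.
        TinyMono<String> deferred = TinyMono.just("b").map(BuildVsSubscribe::save);

        System.out.println(log); // [saved a]  -- only the eager one has run
        deferred.block();
        System.out.println(log); // [saved a, saved b]
    }
}
```

Real Reactor behaves the same way for this case: the argument to Mono.just is evaluated immediately, while map callbacks run per subscription.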
Which to choose? If you are just going to wrap someDto in a mono and use it, then you might as well have the framework do it for you, or use the functional endpoints. If you are going to create a mono for some other reason and then use someDto during a mapping process, use your first case. This second reason is, IMHO, a rare use case.
Typically when using the functional endpoints you will end up doing request.bodyToMono(SomeDto.class), which is equivalent to what the framework does for you in your second case.

Akka Remote shared classes

I have two different Java 8 projects that will live on different servers and which will both use Akka (specifically Akka Remoting) to talk to each other.
For instance, one app might send a Fizzbuzz message to the other app:
public class Fizzbuzz {
private int foo;
private String bar;
// Getters, setters & ctor omitted for brevity
}
I've never used Akka Remoting before. I assume I need to create a third project, a library/JAR holding the shared messages (such as Fizzbuzz and others), and then pull that library into both projects as a dependency.
Is it that simple? Are there any serialization (or other Akka and/or networking) considerations that affect the design of these "shared" messages? Thanks in advance!
A shared library is the way to go for sure, except there are indeed serialization concerns.
From the Akka remoting docs:
When using remoting for actors you must ensure that the props and messages used for those actors are serializable. Failing to do so will cause the system to behave in an unintended way.
For more information please see Serialization.
Basically, you'll need to provide and configure the serialization for actor props and the messages sent (including all the nested classes, of course). If I'm not mistaken, the default settings will get you up and running without any configuration on your side, provided that everything you send over the wire is Java-serializable.
However, the default config uses default Java serialization, which is known to be quite inefficient, so you might want to switch to Protobuf, Kryo, or maybe even JSON. In that case, it would make sense to provide the serialization implementation and bindings as a shared library, either a dedicated one or part of the "shared models" one that you mentioned in the question, depending on whether you want to reuse it elsewhere and whether you mind serialization-related transitive dependencies popping up all over the place.
Finally, if you'll allow some personal opinion, I would suggest trying Protobuf first: it's a binary format (read: efficient) and is widely supported (there are bindings for other languages). Kryo works well too (I have a few closed-source Akka Cluster apps with Kryo serialization in production), but it has a few quirks with regard to collection/map handling.
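To make the "java-serializable" point concrete, here is a sketch of what the shared-library message might look like under the default setup, plus a round trip through the byte form it would take on the wire (the round-trip harness is only for illustration; Akka does this internally):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Shared-library message: immutable, with a serialVersionUID so both
// apps agree on the class version. With Akka's default configuration,
// anything implementing java.io.Serializable can go over the wire.
public class Fizzbuzz implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int foo;
    private final String bar;

    public Fizzbuzz(int foo, String bar) { this.foo = foo; this.bar = bar; }
    public int getFoo() { return foo; }
    public String getBar() { return bar; }

    // Round trip through bytes, mimicking what remoting does internally.
    public static void main(String[] args) throws Exception {
        Fizzbuzz original = new Fizzbuzz(42, "hello");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(original);
        oos.flush();
        Fizzbuzz copy = (Fizzbuzz) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.getFoo() + " " + copy.getBar()); // 42 hello
    }
}
```

Immutability matters independently of serialization: actor messages should never be mutated after they are sent, since sender and receiver may touch them concurrently.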

Generate a Mock object with a Method which raises an event

I am working on a VB.NET project which requires extensive use of unit tests, but I am having problems mocking one of the classes.
Here is a breakdown of the issue:
Using NUnit and Rhino Mocks 3.6
VS2010 & VB.NET
I have an interface which contains a number of methods and an Event.
The class which implements that Interface raises the event when one of the methods is called.
When I mock the object in my tests I can stub methods and create/assert expectations on the methods with no problems.
How do I configure the mock object so that when a method is called the event is raised, so that I can assert that it was raised?
I have found numerous posts using C# which suggest code like this
mockObject.MyEvent += null...
When I try this 'MyEvent' does not appear in Intellisense.
I'm obviously not configuring my test/mock correctly but with so few VB.NET examples out there I'm drawing a blank.
Sorry for my lack of VB syntax; I'm a C# guy. Also, I think you should be congratulated for writing tests at all, regardless of test first or test last.
I think your code needs refactoring. It sounds like you have an interface that requires implementations to contain an event, and then another class (which you're testing) depends on this interface. The code under test then executes the event when certain things happen.
The question in my mind is, "Why is it a publicly exposed event?" Why not just a method that implementations can define? I suppose the event could have multiple delegates being added to it dynamically somewhere, but if that's something you really need, then the implementation should figure out how that works. You could replace the event with a pair of methods: HandleEvent([event parameters]) and AddEventListener(TheDelegateType listener). I think the meaning and usage of those should be obvious enough. If the implementation wants to use events internally, it can, but that's an implementation detail that users of the interface should not care about. All they should care about is adding their listener and that all the listeners get called. Then you can just assert that HandleEvent or AddEventListener were called. This is probably the simplest way to make this more testable.
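The suggested refactoring is language-neutral. Here is a compact sketch (in Java rather than VB.NET, with made-up names) of the AddEventListener/HandleEvent pair, and how a plain test double replaces the mocked event:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the refactoring: replace a public event with an explicit
// listener-registration pair, which is trivial to assert against in tests.
interface Notifier {
    void addEventListener(Consumer<String> listener);
    void doWork(String input);
}

class NotifierImpl implements Notifier {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    public void addEventListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    private void handleEvent(String payload) { // fires all registered listeners
        for (Consumer<String> l : listeners) l.accept(payload);
    }

    public void doWork(String input) {
        handleEvent("done: " + input); // "raise the event" when the method runs
    }
}

public class ListenerTest {
    public static void main(String[] args) {
        List<String> received = new ArrayList<>();
        Notifier n = new NotifierImpl();
        n.addEventListener(received::add); // test double instead of a mocked event
        n.doWork("x");
        System.out.println(received); // [done: x]
    }
}
```

No mocking framework is needed to verify the notification: the recording listener serves as the assertion point.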
If you really need to keep the event, then see here for information on mocking delegates. My advice would be to mock a delegate, add it to the event during set up, and then assert it was called. This might also be useful if you need to test that things are added to the event.
Also, I wouldn't rely on Intellisense too much. Mocking is done via some crafty IL code, I believe. I wouldn't count on Intellisense to keep up with members of its objects, especially when you start getting beyond normal methods.

NServiceBus: need to configure channels for my Gateway with code

I'm building an NServiceBus Gateway handler, and I need to avoid config files so that all configuration is defined inside C# classes. As a result I have to convert the following section to C# code:
<GatewayConfig>
<Channels>
<Channel Address="http://localhost:25899/SiteB/" ChannelType="Http" Default="true"/>
</Channels>
</GatewayConfig>
I've found GatewayConfig, ChannelCollection and ChannelConfig in the NServiceBus.Config namespace, but I cannot link them together, because GatewayConfig refers to ChannelCollection, but ChannelCollection has nothing to do with ChannelConfig. Please help.
Just create a class implementing IProvideConfiguration<GatewayConfig>. That gives you a way to provide your own config. Look at the pubsub sample for the exact details of how to do this.
Well, I've found a way to do it after installing Reflector and looking into the implementation. There is a ChannelCollection.CreateNewElement() method returning a System.Configuration.ConfigurationElement. NServiceBus overrides the method, instantiating a ChannelConfig inside it, so all I have to do is cast the ConfigurationElement to ChannelConfig, which is far from an intuitive interface. This NServiceBus.Config.ChannelCollection looks like unfinished work: other collections, such as NServiceBus.Config.MessageEndpointMappingCollection, have all the necessary type-safe methods for working with their child elements (NServiceBus.Config.MessageEndpointMapping), so I suspect the same was simply never done for ChannelCollection.
UPDATE: as the CreateNewElement() method is protected, I had to implement my own class inheriting from ChannelCollection to make a method that adds a new ChannelConfig element publicly available.
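The workaround pattern itself is simple to picture. In this Java sketch (all names hypothetical; the real types are NServiceBus's C# classes), a subclass exposes the protected, weakly typed factory through a public, type-safe add method:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for ConfigurationElement / ChannelConfig.
class ConfigElement {}

class ChannelElement extends ConfigElement {
    String address;
    String channelType;
    boolean isDefault;
}

class ChannelCollectionBase {
    protected final List<ConfigElement> elements = new ArrayList<>();

    // Protected and weakly typed, like ChannelCollection.CreateNewElement().
    protected ConfigElement createNewElement() {
        ChannelElement e = new ChannelElement();
        elements.add(e);
        return e;
    }
}

class MyChannelCollection extends ChannelCollectionBase {
    // Publicly expose a type-safe way to add a channel; the downcast here
    // is the non-intuitive step the answer above complains about.
    public ChannelElement addChannel(String address, String type, boolean isDefault) {
        ChannelElement e = (ChannelElement) createNewElement();
        e.address = address;
        e.channelType = type;
        e.isDefault = isDefault;
        return e;
    }
}

public class GatewayConfigDemo {
    public static void main(String[] args) {
        MyChannelCollection channels = new MyChannelCollection();
        ChannelElement c = channels.addChannel("http://localhost:25899/SiteB/", "Http", true);
        System.out.println(c.address + " " + c.channelType + " " + c.isDefault);
    }
}
```

The subclass keeps the cast in one place, so callers get a typed API mirroring what MessageEndpointMappingCollection already offers.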

How to use an IoC Container?? I don't get it

Here's what I know so far:
DI lets me build reusable, unit-testable components
DI is verbose because it requires me to explicitly set the dependencies (via constructor or method; I still don't understand interface injection, though). This is why a container or a service locator is needed.
A container is better than a service locator because classes won't need to be aware of its existence.
But I found these problems:
Some classes will now depend on the container? If I don't use the default configuration for every class, as described in my services file, some classes will need to call the container to reconfigure the needed object.
On page 79 of these slides (http://www.slideshare.net/fabpot/dependency-injection-with-php-53), Fabien Potencier said that a container does not manage all the objects, only those with a single instance (yet not singletons). I'm even more confused now.
Any help is greatly appreciated. =)
Some classes will now depend on the container?
No. That's why you use dependency injection as opposed to service location.
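A minimal sketch of that difference (Java, hypothetical names): with service location a class asks a locator for its dependency, so it depends on the locator; with constructor injection the class depends only on the abstraction, and only the composition root, at the edge of the application, ever touches the container:

```java
// The abstraction and one implementation.
interface Mailer {
    String send(String msg);
}

class SmtpMailer implements Mailer {
    public String send(String msg) { return "sent: " + msg; }
}

// Constructor injection: Newsletter depends only on Mailer.
// It has no idea whether a container, a factory, or a test wired it up.
class Newsletter {
    private final Mailer mailer;

    Newsletter(Mailer mailer) { this.mailer = mailer; }

    String publish() { return mailer.send("issue #1"); }
}

public class CompositionRoot {
    public static void main(String[] args) {
        // Wiring happens here, once. In a real app a container would build
        // this object graph, but Newsletter itself never sees the container.
        Newsletter n = new Newsletter(new SmtpMailer());
        System.out.println(n.publish()); // sent: issue #1
    }
}
```

In tests you pass a fake Mailer to the constructor; no container is involved at all, which is exactly the point.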
On page 79 from this slide...
See page 82: it says "Unlike model objects". Honestly, I'd never explain it like that ("Objects with only one instance (!= singletons)" is either wrong or something very PHP-specific; it doesn't apply to dependency injection or IoC/DI containers in general), but I bet what he was trying to explain is that the container usually manages service-like things, not model-like things.