Can anyone point me to a protobuf-net serializer for NEventStore 3.0?
I'm having trouble, I think mainly because the serialization in EventStore 3 wraps the event body and headers in an EventMessage.
I'm not sure how to set up the custom serializer correctly.
This is entirely untested guesswork based on a very brief glance at GitHub, but it looks like you want to use the wire-up API to specify a custom serializer, for example:
var store = Wireup.Init()
    .UsingSqlPersistence("Name Of EventStore ConnectionString In Config File")
    .InitializeStorageEngine()
    .UsingCustomSerialization(mySerializer)
    // ... etc
where mySerializer is an instance of a type that implements the ISerialize interface. It looks like this should work:
using System.IO;

class ProtobufSerializer : EventStore.Serialization.ISerialize
{
    public void Serialize<T>(Stream output, T graph)
    {
        // Delegate straight to protobuf-net
        ProtoBuf.Serializer.Serialize<T>(output, graph);
    }

    public T Deserialize<T>(Stream input)
    {
        return ProtoBuf.Serializer.Deserialize<T>(input);
    }
}
(so obviously mySerializer here would be a new ProtobufSerializer())
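One caveat: the EventMessage envelope (and the Commit type that contains it) carries no protobuf-net attributes, so protobuf-net has to be told how to handle it at runtime. Here is a hedged, untested sketch using protobuf-net v2's RuntimeTypeModel; the member names Headers and Body are read off the EventStore source and may differ in your version:

using ProtoBuf.Meta;

// Describe EventMessage to protobuf-net manually, since it has no
// [ProtoContract] attributes. The field numbers are arbitrary but must
// stay stable once commits have been persisted.
RuntimeTypeModel.Default
    .Add(typeof(EventStore.EventMessage), applyDefaultBehaviour: false)
    .Add(1, "Headers")
    .Add(2, "Body");

Note that Body is typed as object, which protobuf-net only supports with extra configuration (its DynamicType support), so expect some experimentation here.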
I have written a custom serializer and attached it to my RestClient, and I am now trying to implement a custom deserializer as well. I noticed that the serializer gets called when I add it to my client like so:
RestClient Client = new RestClient(options).UseSerializer<CustomJsonSerializer>();
However, I am not sure what code to add to point to my custom deserializer, or where to add it.
I am trying to call a method that essentially hijacks the response content, changes the string, and then sends the modified string back as the new response content to then be deserialized.
Where would I add the code to call my custom deserializer? What would the code snippet look like? Is it even possible to alter response.Content before deserialization happens, and if so, how do I implement that?
UseSerializer<T> expects T to implement IRestSerializer, which exposes both an ISerializer and an IDeserializer. The Deserializer property needs to return your custom deserializer.
public interface IRestSerializer {
ISerializer Serializer { get; }
IDeserializer Deserializer { get; }
...
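To make this concrete, here is a minimal, untested sketch of the deserializer half, assuming RestSharp v107+ where IDeserializer exposes Deserialize<T>(RestResponse); the class name and the string rewrite are hypothetical placeholders:

using RestSharp;
using RestSharp.Serializers;
using System.Text.Json;

// A deserializer that rewrites the raw response body before parsing it.
public class ContentRewritingDeserializer : IDeserializer
{
    public T? Deserialize<T>(RestResponse response)
    {
        // Hijack the response content and change the string first...
        var content = (response.Content ?? string.Empty)
            .Replace("\"old_name\"", "\"newName\"");

        // ...then deserialize the modified string instead of the original.
        return JsonSerializer.Deserialize<T>(content);
    }
}

Because the deserializer receives the whole RestResponse, you can transform the content before any deserialization happens; your CustomJsonSerializer's Deserializer property would simply return an instance of this class.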
I'm trying to write a test for a class that has a constructor dependency on Func<T>. In order to complete successfully, the method under test needs to create a number of separate objects of type T.
When running in production, Autofac generates a new T every time factory() is called; however, when writing a test using AutoMock, it returns the same object on every call.
The test cases below show the difference in behaviour between Autofac and AutoMock. I'd expect both of these to pass, but the AutoMock one fails.
public class TestClass
{
private readonly Func<TestDep> factory;
public TestClass(Func<TestDep> factory)
{
this.factory = factory;
}
public TestDep Get()
{
return factory();
}
}
public class TestDep
{}
[TestMethod()]
public void TestIt()
{
using var autoMock = AutoMock.GetStrict();
var testClass = autoMock.Create<TestClass>();
var obj1 = testClass.Get();
var obj2 = testClass.Get();
Assert.AreNotEqual(obj1, obj2);
}
[TestMethod()]
public void TestIt2()
{
var builder = new ContainerBuilder();
builder.RegisterSource(new AnyConcreteTypeNotAlreadyRegisteredSource());
var container = builder.Build();
var testClass = container.Resolve<TestClass>();
var obj1 = testClass.Get();
var obj2 = testClass.Get();
Assert.AreNotEqual(obj1, obj2);
}
AutoMock (from the Autofac.Extras.Moq package) is primarily useful for setting up complex mocks. Which is to say, you have a single object with a lot of dependencies and it's really hard to set that object up because it doesn't have a parameterless constructor. Moq doesn't let you set up objects with constructor parameters by default, so having something that fills the gap is useful.
However, the mocks you get from it are treated like any other mock you might get from Moq. When you set up a mock instance with Moq, you're not getting a new one every time unless you also implement the factory logic yourself.
AutoMock is not for mocking Autofac behavior. The Func<T> support where Autofac calls a resolve operation on every call to the Func<T> - that's Autofac, not Moq.
It makes sense for AutoMock to use InstancePerLifetimeScope because, just like setting up mocks with plain Moq, you need to be able to get the mock instance back to configure it and validate against it. That would be much harder if it were new every time.
Obviously there are ways to work around that, and with a non-trivial number of breaking changes you could probably implement InstancePerDependency semantics in there, but there's really not much value in doing that at this point, since that's not really what AutoMock is for... and you could always create two different AutoMock instances to get two different mocks.
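That said, if you really need fresh instances from a single AutoMock, newer versions of Autofac.Extras.Moq (5.x and later) let you customize the underlying container before it is built. An untested sketch, assuming the beforeBuild overload:

using Autofac;
using Autofac.Extras.Moq;

// Override the default InstancePerLifetimeScope registration so that each
// Func<TestDep> invocation resolves a fresh TestDep.
using var autoMock = AutoMock.GetLoose(builder =>
    builder.RegisterType<TestDep>().InstancePerDependency());

var testClass = autoMock.Create<TestClass>();
Assert.AreNotEqual(testClass.Get(), testClass.Get()); // distinct instances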
A much better way to go, in general, is to provide useful abstractions and use Autofac with mocks in the container.
For example, say you have something like...
public class ThingToTest
{
public ThingToTest(PackageSender sender) { /* ... */ }
}
public class PackageSender
{
public PackageSender(AddressChecker checker, DataContext context) { /* ... */ }
}
public class AddressChecker { }
public class DataContext { }
If you're trying to set up ThingToTest, you can see how also setting up a PackageSender is going to be complex, and you'd likely want something like AutoMock to handle that.
However, you can make your life easier by introducing an interface there.
public class ThingToTest
{
public ThingToTest(IPackageSender sender) { /* ... */ }
}
public interface IPackageSender { }
public class PackageSender : IPackageSender { }
By hiding all the complexity behind the interface, you now can mock just IPackageSender using plain Moq (or whatever other mocking framework you like, or even creating a manual stub implementation). You wouldn't even need to include Autofac in the mix because you could mock the dependency directly and pass it in.
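For example, a minimal sketch with plain Moq:

using Moq;

// Stub only the interface; none of PackageSender's dependencies are needed.
var sender = new Mock<IPackageSender>();
var thing = new ThingToTest(sender.Object);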
Point being, you can design your way into making testing and setup easier, which is why, in the comments on your question, I asked why you were doing things that way (which, at the time of this writing, never did get answered). I would strongly recommend designing things to be easier to test if possible.
I'm injecting my dependencies into my classes fine, but I'm wondering if it's possible to get the class name I'm injecting into?
For example:
Bind<ISomething>().ToMethod(c => new Something([GIVE INJECTING *TO* CLASS NAME]));
So, if I had:
public class Blah{
public Blah(ISomething something) { /**/ }
}
When injecting, Ninject would in effect call:
new Blah(new Something("Blah"));
Can this be done?
Yes, it can be done. You use the IContext you're given in the ToMethod method to get the name of the type you're being injected into like this:
Bind<ISomething>().ToMethod(c => new Something(GetParentTypeName(c)));
Which uses this little helper method (which could also be turned into a nice extension method):
private string GetParentTypeName(IContext context)
{
return context.Request.ParentRequest.ParentRequest.Target.Member.DeclaringType.Name;
}
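And, as hinted above, the helper repackaged as an extension method, with the same traversal:

using Ninject.Activation;

public static class ContextExtensions
{
    // Mirrors the double ParentRequest walk above; the number of hops that
    // works may differ across Ninject versions (see the v3.2 note below).
    public static string GetParentTypeName(this IContext context)
    {
        return context.Request.ParentRequest.ParentRequest.Target.Member.DeclaringType.Name;
    }
}

Usage then becomes Bind<ISomething>().ToMethod(c => new Something(c.GetParentTypeName()));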
It has probably changed in later versions of Ninject; as of version 3.2.0, the accepted solution didn't work for me.
The following does though:
Bind<ISomething>().ToMethod((ctx)
=> new Something(ctx.Request.Target?.Member?.DeclaringType?.Name ?? ""));
I have looked at Dozer's FAQs and docs, including the SourceForge forum, but I haven't seen a good tutorial or even a simple example of how to implement a custom BeanFactory.
Everyone says, "Just implement a BeanFactory." How exactly do you implement one?
I've Googled, and all I see are JARs and the sources of JARs.
Here is one of my BeanFactories; I hope it helps explain the common pattern:
public class LineBeanFactory implements BeanFactory {
    @Override
    public Object createBean(final Object source, final Class<?> sourceClass, final String targetBeanId) {
        final LineDto dto = (LineDto) source;
        return new Line(dto.getCode(), dto.getElectrified(), dto.getName());
    }
}
And the corresponding XML mapping:
<mapping>
<class-a bean-factory="com.floyd.nav.web.ws.mapping.dozer.LineBeanFactory">com.floyd.nav.core.model.Line</class-a>
<class-b>com.floyd.nav.web.contract.dto.LineDto</class-b>
</mapping>
This way I declare that whenever a new instance of Line is needed, it should be created by my BeanFactory. Here is a unit test that illustrates it:
@Test
public void Line_is_created_with_three_arg_constructor_from_LineDto() {
    final LineDto dto = createTransientLineDto();
    final Line line = (Line) this.lineBeanFactory.createBean(dto, LineDto.class, null);
    assertEquals(dto.getCode(), line.getCode());
    assertEquals(dto.getElectrified(), line.isElectrified());
    assertEquals(dto.getName(), line.getName());
}
So Object source is the source bean being mapped, Class<?> sourceClass is the class of the source bean (I'm ignoring it, because it will always be a LineDto instance), and String targetBeanId is the ID of the destination bean (also ignored).
A custom bean factory is a class with a method that creates a bean. There are two "flavours":
a) static create method
SomeBean x = SomeBeanFactory.createSomeBean();
b) instance create method
SomeBeanFactory sbf = new SomeBeanFactory();
SomeBean x = sbf.createSomeBean();
You would create a bean factory if creating and setting up your bean requires some tricky logic, for example when the initial values of certain properties depend on an external configuration file. A bean factory class lets you centralize the "knowledge" of how to create such a tricky bean. Other classes just call the create method without worrying about how to create the bean correctly.
Here is an actual implementation. Obviously it does not make a lot of sense as-is, since Dozer would do the same without the BeanFactory, but instead of just returning the object you could initialize it differently.
public class ComponentBeanFactory implements BeanFactory {
    @Override
    public Object createBean(Object source, Class<?> sourceClass,
            String targetBeanId) {
        return new ComponentDto();
    }
}
Why do you need a BeanFactory anyway? Maybe that would help us understand your question.
I'm working on a framework extension which handles dynamic injection using Ninject as the IoC container, but I'm having some trouble trying to work out how to achieve this.
The expectation of my framework is that you'll pass in the IModule(s) so it can easily be used in MVC, WebForms, etc. So I have the class structured like this:
public class NinjectFactory : IFactory, IDisposable {
readonly IKernel kernel;
public NinjectFactory(IModule[] modules) {
kernel = new StandardKernel(modules);
}
}
This is fine; I can create an instance in a unit test and pass in a basic implementation of IModule (using the built-in InlineModule, which seems to be recommended for testing).
The problem is that it's not until runtime that I know the type(s) I need to inject, and they are requested through the framework I'm extending, in a method like this:
public IInterface Create(Type neededType) {
}
And here's where I'm stumped, I'm not sure the best way to check->create (if required)->return, I have this so far:
public IInterface Create(Type neededType) {
if(!kernel.Components.Has(neededType)) {
kernel.Components.Connect(neededType, new StandardBindingFactory());
}
}
This adds it to the components collection, but I can't work out whether it has created an instance, or how to create an instance and pass in arguments for the .ctor.
Am I going about this the right way, or is Ninject not even meant to be used that way?
Unless you want to alter or extend the internals of Ninject, you don't need to add anything to the Components collection on the kernel. To determine if a binding is available for a type, you can do something like this:
Type neededType = ...;
IKernel kernel = ...;
var registry = kernel.Components.Get<IBindingRegistry>();
if (registry.Has(neededType)) {
// Ninject can activate the type
}
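If the binding is available (or once you have added one), resolving an instance and passing .ctor arguments would look roughly like this in later Ninject versions. This is a sketch assuming Ninject 2+/3's GetBindings, Get and ConstructorArgument APIs rather than the 1.x component model the snippet above targets; the parameter name and value are hypothetical placeholders:

using System;
using System.Linq;
using Ninject;
using Ninject.Parameters;

public IInterface Create(Type neededType)
{
    // If nothing is bound yet, self-bind the concrete type on the fly.
    if (!kernel.GetBindings(neededType).Any())
        kernel.Bind(neededType).ToSelf();

    // ConstructorArgument supplies a .ctor parameter by name at resolution time.
    return (IInterface)kernel.Get(neededType,
        new ConstructorArgument("name", "some value"));
}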
Very, very late answer, but Microsoft.Practices.Unity allows late binding via App.config.
Posting just in case someone comes across this question.