Akka.NET persistence custom serializer is not getting invoked

I am working on Akka.NET persistence and using MongoDB as the persistent store. One of the properties on the events we persist is of the custom struct type Rational. We have configured a custom serializer that serializes a Rational value into a decimal value. However, the custom serializer never gets invoked: MongoDB shows the newly inserted document with the Rational value stored as an object instead of a decimal.
Below is the akka.hocon configuration:
akka {
  actor {
    serializers {
      my-rational = "RationalTypePersistence.RationalSerializer, RationalTypePersistence"
    }
    serialization-bindings {
      "RationalTypePersistence.Rational, RationalTypePersistence" = "my-rational"
    }
  }
}
During debugging, the breakpoints set in the custom serializer's ToBinary and FromBinary methods are never hit, although the breakpoint in its constructor is hit multiple times.
The custom serializer extends Akka.Serialization.Serializer and overrides the Identifier property and the FromBinary and ToBinary methods.
Are we missing any configuration?

Related

SignalR client not raising On event with JSON-serialized complex objects

I am trying to implement a SignalR client and server with JSON serialization.
Currently I am targeting .NET 5 and using the Microsoft JSON serializer implementation.
My messages are represented by complex objects, and a JsonConverter is used for reading and writing them.
What I see is that on the client the On event is never raised, except when the handler parameter is declared as object:
connection.On("EntityEventAsync", (object obj) =>
{
//obj will be json object here
});
On the client side I can clearly see that the messages are received: the JsonConverter is called and reads the messages as it should, BUT the On event is never raised.
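For reference, a sketch of the typed registration that never fires in my case (DetailedMessage is the message type from the hub interface below):
connection.On<DetailedMessage>("EntityEventAsync", message =>
{
    // never reached in my case
});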
Typed client hub code
public interface IEventsClient
{
    Task EntityEventAsync(DetailedMessage message);
}

[Authorize(AuthenticationSchemes = "Basic,Bearer")]
public class EventHub : Hub<Clients.IEventsClient>
{
    #region CONSTRUCTOR
    public EventHub()
    {
    }
    #endregion
}
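For completeness, this is roughly how the event is pushed from the server; EventNotifier is a hypothetical wrapper, and the typed IHubContext<EventHub, Clients.IEventsClient> (from Microsoft.AspNetCore.SignalR) is injected by ASP.NET Core DI:
public class EventNotifier
{
    private readonly IHubContext<EventHub, Clients.IEventsClient> _hubContext;

    public EventNotifier(IHubContext<EventHub, Clients.IEventsClient> hubContext)
    {
        _hubContext = hubContext;
    }

    // Pushes the message to every connected client through the typed hub.
    public Task NotifyAsync(DetailedMessage message)
        => _hubContext.Clients.All.EntityEventAsync(message);
}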
What can I be missing here?
If someone else struggles with the same problem, it might be an issue with the JsonConverter implementation, as it was in my case.
It's possible to enable SignalR client logging as described at https://learn.microsoft.com/en-us/aspnet/core/signalr/diagnostics?view=aspnetcore-5.0, and that should make it easier to figure out the problem.
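A minimal sketch of turning on client-side debug logging (the console provider from Microsoft.Extensions.Logging.Console is an assumption; the hub URL is hypothetical):
var connection = new HubConnectionBuilder()
    .WithUrl("https://localhost:5001/eventHub") // hypothetical hub URL
    .ConfigureLogging(logging =>
    {
        logging.AddConsole();                    // assumes the console logging provider is referenced
        logging.SetMinimumLevel(LogLevel.Debug); // surfaces transport and handler diagnostics
    })
    .Build();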

Where does Akka.NET store its messages by default?

I have downloaded sample code from GitHub and run AtLeastOnceDelivery.sln.
On every new run it sends the messages from previous runs again, and if I change the message namespace it shows an error starting with:
Error loading snapshot [SnapshotMetadata<pid: delivery, seqNr: 0, timestamp: 2018/09/24>], remaining attempts: [0]
If I could clear the persisted data, hopefully it would accept the changed namespace and restart the message IDs.
By default, all snapshots are stored as files directly in the ./snapshots directory of the application, while events are stored in memory. Because of that, you should consider using one of the akka.persistence plugins for production purposes.
Your problem happens because you're using Akka.NET's default serializers (dedicated to networking), which are not very version tolerant: changing any fields, their types, class names or namespaces makes the previous version of the class non-deserializable, and the wire format itself is subject to change in the future. This is also why it's strongly discouraged to use the default serializers for persistence.
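For illustration, a minimal HOCON sketch that pins snapshots to a directory and swaps the in-memory journal for a persistence plugin; SQLite stands in here for whichever plugin you pick, and the class name and connection string are assumptions to adapt:
akka.persistence {
  journal {
    plugin = "akka.persistence.journal.sqlite"
    sqlite {
      class = "Akka.Persistence.Sqlite.Journal.SqliteJournal, Akka.Persistence.Sqlite"
      connection-string = "Data Source=journal.db"
      auto-initialize = on
    }
  }
  snapshot-store {
    plugin = "akka.persistence.snapshot-store.local"
    local.dir = "./snapshots"
  }
}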
How to make a custom Akka.NET Serializer
While there are plans to improve the serializers API, at the current moment (Akka.NET v1.3.9), to make your own serializer you simply inherit from the Akka.Serialization.Serializer class:
public sealed class MySerializer : Serializer
{
    public MySerializer(ExtendedActorSystem system) : base(system) { }

    public override int Identifier => /* globally unique serializer id */;

    public override bool IncludeManifest => true;

    public override byte[] ToBinary(object obj)
    {
        // serialize object
    }

    public override object FromBinary(byte[] bytes, Type type)
    {
        // deserialize object
    }
}
Keep in mind that the Identifier value must be unique in cluster scope; values below 100 are usually taken by Akka.NET's internal serializers, so it's better to use higher values.
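As a concrete illustration, here is a hypothetical serializer for the Rational struct from the question above; it assumes Rational defines conversions to and from decimal, and it round-trips the value through decimal.GetBits:
public sealed class RationalSerializer : Serializer
{
    public RationalSerializer(ExtendedActorSystem system) : base(system) { }

    public override int Identifier => 101; // any cluster-unique value above the reserved range

    public override bool IncludeManifest => false;

    public override byte[] ToBinary(object obj)
    {
        var value = (decimal)(Rational)obj;  // assumed Rational -> decimal conversion
        var bits = decimal.GetBits(value);   // a decimal is four 32-bit integers
        var bytes = new byte[16];
        Buffer.BlockCopy(bits, 0, bytes, 0, 16);
        return bytes;
    }

    public override object FromBinary(byte[] bytes, Type type)
    {
        var bits = new int[4];
        Buffer.BlockCopy(bytes, 0, bits, 0, 16);
        return (Rational)new decimal(bits);  // assumed decimal -> Rational conversion
    }
}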
How to bind serializer to be used for a given type
By convention, Akka.NET uses empty interfaces to mark message types that are supposed to be serialized. Then you can set up your HOCON configuration to use a specific serializer for a given interface:
akka.actor {
  serializers {
    my-serializer = "MyNamespace.MySerializer, MyAssembly"
  }
  serialization-bindings {
    "MyNamespace.MyInterface, MyAssembly" = my-serializer
  }
}
Where MyInterface is the interface implemented by the message types you want to serialize/deserialize with MySerializer.
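For example, a minimal marker interface and a message type bound to it (names are placeholders matching the HOCON above):
// Empty marker interface referenced in serialization-bindings.
public interface IMyInterface { }

// Any message implementing the marker interface goes through MySerializer.
public sealed class MyMessage : IMyInterface
{
    public string Payload { get; set; }
}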

jdto superclass boolean field binding incorrect value

public class Model {
}

public class SuperclassDTO {
    private boolean funny = true;

    public boolean isFunny() {
        return funny;
    }

    public void setFunny(boolean f) {
        this.funny = f;
    }
}

public class SubclassDTO extends SuperclassDTO {
}
new SubclassDTO().isFunny(); // returns true

SubclassDTO dto = binder.bindFromBusinessObject(SubclassDTO.class, new Model());
dto.isFunny(); // returns false!!!!
Isn't this weird? The Model class does not have a "funny" field, but somehow the DTO is bound with a wrong value. First I thought jDTO required the "getFunny" convention, so it couldn't read the value and just set it to false, but changing the getter name to "getFunny" does not resolve the issue, plus I'm not allowed to modify SuperclassDTO. How can I bind the correct value?
jDTO version 1.4, by the way...
The behavior you're experiencing is a side effect of the convention-over-configuration approach. All the fields on the DTO are configured unless you mark them as transient, either by using the @DTOTransient annotation or the transient configuration in the XML file. If a configured field does not have a corresponding field on the source bean, it will be set to default values, and that is the reason why you're experiencing this behavior.
You have some options to overcome this issue:
Add the @DTOTransient annotation to the field on the DTO.
Since you're not able to modify the DTO, you could configure it through the XML file instead.
Use the binding lifecycle to restore the value, by adding code to the subclass.
You might as well submit a bug report on the jDTO issue tracker on GitHub.

Ninject Factory - "new" object being passed in instead of one called in factory method

I am using the Ninject Factory Extensions so that I can create objects that have services injected plus custom values, like so:
public interface IGameOperationsFactory
{
    ISpinEvaluator Create(GameArtifact game);
    IReelSpinner CreateSpinner(GameArtifact game);
}
Then in module:
Bind<IGameOperationsFactory>().ToFactory().InSingletonScope();
Bind<ISpinEvaluator>().To<DefaultSpinEvaluatorImpl>();
Bind<IReelSpinner>().To<DefaultReelSpinnerImpl>();
The actual factory gets injected into a class's constructor and is then used like:
_spinner = _factory.CreateSpinner(_artifact);
_spinEval = _factory.Create(_artifact);
Where _artifact is of type GameArtifact.
Then, in each implementation's constructor, the services plus the passed-in objects are injected. The GameArtifact is successfully passed into the first constructor, but in the second one a "new" GameArtifact is passed in, i.e. not a null one, but one with just default values, as if the framework had simply called
new GameArtifact()
instead of passing in the already existing one!
The constructor for the two objects is very similar, but the one that doesn't work looks like:
[Inject]
public DefaultReelSpinnerImpl(GameArtifact ga, IGameOperationsFactory factory, IRandomService serv)
{
    _rand = serv;
    _ra = ga.Reels;
    _mainReels = factory.Create(_ra);
    _winLine = ga.Config.WinLine;
}
Where factory and serv are injected by Ninject, and ga is SUPPOSED to be passed in via the factory.
Anyone have a clue why a new "fresh" object is passed in rather than the one I passed in?
I have rewritten your sample a little bit, and it seems to work fine. Could you provide a more detailed code sample?
My implementation
I have changed the verb Create to Get to match Ninject naming conventions:
public interface IGameOperationsFactory
{
    ISpinEvaluator GetSpinEvaluator(GameArtifact gameArtifact);
    IReelSpinner GetReelSpinner(GameArtifact gameArtifact);
}
Ninject configuration
I have added named bindings to configure the factory:
Bind<ISpinEvaluator>()
    .To<DefaultSpinEvaluatorImpl>()
    .Named("SpinEvaluator");

Bind<IReelSpinner>()
    .To<DefaultReelSpinnerImpl>()
    .Named("ReelSpinner");

Bind<IGameOperationsFactory>()
    .ToFactory();
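With the factory extension, a method named GetSpinEvaluator resolves the binding named "SpinEvaluator", and the GameArtifact argument is passed through to the implementation's constructor. A minimal usage sketch, where GameModule is a hypothetical module containing the bindings above:
var kernel = new StandardKernel(new GameModule()); // GameModule is hypothetical
var factory = kernel.Get<IGameOperationsFactory>();

var artifact = new GameArtifact();                  // populated with real values in practice
var spinner = factory.GetReelSpinner(artifact);     // resolves the "ReelSpinner" binding
var evaluator = factory.GetSpinEvaluator(artifact); // resolves the "SpinEvaluator" binding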
PS: full sample with tests

IronPython Lists, Tuples, Dictionaries crash WCF communications

I am attempting to use WCF to execute IronPython remotely from C#. Everything in my system functions beautifully as long as it is local.
I have isolated the problem to passing certain objects to the client via WCF. If you try to pass these from a WCF server to a WCF client, the communication channel crashes:
PythonDictionaries containing values that are Tuples or Lists
Tuples of any kind
Strangely, dictionaries containing dictionaries are OK (as long as the nested dictionary doesn't meet these two conditions). Here is my example code:
try
{
    PythonFlow localPython = new PythonFlow();
    IPythonFlow remotePython = new IronTesterWcfClient("localhost", "8000");

    string tuple = "(1,2,3)";
    string list = "[1,2,3]";
    string complexDict0 = "{'a':'b','c':{'d':'f'}}";
    string complexDict1 = "{'a':'b','c':(1,2,3),'e':'e'}";
    string complexDict2 = "{'a':'b','c':[1,2,3],'d':'e'}";
    string complexDict3 = "{'a':'b','c':[1,2,3],'d':(1,2,3),'e':{'a':'b','c':[1,2,3],'d':(1,2,3)}}";

    localPython.OpenFlow(args[2]);
    //OK
    IronPython.Runtime.List list1 = localPython.PythonListFromString(list);
    //OK
    IronPython.Runtime.PythonDictionary dict0 = localPython.PythonDictionaryFromString(complexDict0);
    //OK
    IronPython.Runtime.PythonDictionary dict1 = localPython.PythonDictionaryFromString(complexDict1);
    //OK
    IronPython.Runtime.PythonDictionary dict2 = localPython.PythonDictionaryFromString(complexDict2);
    //OK
    IronPython.Runtime.PythonDictionary dict3 = localPython.PythonDictionaryFromString(complexDict3);
    //OK
    IronPython.Runtime.PythonTuple tuple1 = localPython.PythonTupleFromString(tuple);

    remotePython.OpenFlow(args[2]);
    //OK
    IronPython.Runtime.List list2 = remotePython.PythonListFromString(list);
    //OK
    IronPython.Runtime.PythonDictionary dict5 = remotePython.PythonDictionaryFromString(complexDict0);
    //Fail!!!
    IronPython.Runtime.PythonDictionary dict6 = remotePython.PythonDictionaryFromString(complexDict1);
    //Fail!!!
    IronPython.Runtime.PythonDictionary dict7 = remotePython.PythonDictionaryFromString(complexDict2);
    //Fail!!!
    IronPython.Runtime.PythonDictionary dict8 = remotePython.PythonDictionaryFromString(complexDict3);
    //Fail!!!
    IronPython.Runtime.PythonTuple tuple2 = remotePython.PythonTupleFromString(tuple);
}
catch (Exception ex)
{
    // The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state.
    Console.WriteLine(ex.ToString());
}
I am using NetTcpBinding with SecurityMode.None on the WCF server side. I should also mention that the Python call ultimately accesses a simple object in Python which returns the result of eval().
This basically makes it impossible to use Python with WCF. Any ideas?
More info... I was finally able to extract the exceptions inside WCF when this happens:
Outer Exception:
There was an error while trying to serialize parameter http://Intel.ServiceModel.Samples:TestResult.
The InnerException message was 'Type 'IronPython.Runtime.PythonTuple' with data contract name 'ArrayOfanyType:http://schemas.microsoft.com/2003/10/Serialization/Arrays' is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.'. Please see InnerException for more details.
Inner Exception:
Type 'IronPython.Runtime.PythonTuple' with data contract name 'ArrayOfanyType:http://schemas.microsoft.com/2003/10/Serialization/Arrays' is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.
You're getting a SerializationException, indicating that .NET doesn't know how to deserialize some chunk of the data you're sending. In this case, it's choking on ArrayOfanyType, which is any kind of non-generic collection (an ArrayList or a plain array, for instance).
I've reviewed the source for IronPython 2.7.1 (what version are you using?), looking at the implementation of List and PythonTuple. Both contain an Object array, pretty much identically declared; List has a few other random instance fields.
// IronPython.Runtime.List
internal volatile object[] _data;
private const int INITIAL_SIZE = 20;
internal int _size;
// IronPython.Runtime.PythonTuple
internal readonly object[] _data;
I don't know why the serializer isn't happy with the PythonTuple class, when it's fine with List. What this probably indicates, however, is that .NET's type resolver can't resolve some element of the serialized object.
There are two ways that I know of to resolve this.
You can try to convince .NET to consider a given type during deserialization, using the KnownTypeAttribute. From MSDN:
When data arrives at a receiving endpoint, the WCF runtime attempts to deserialize the data into an instance of a common language runtime (CLR) type. The type that is instantiated for deserialization is chosen by first inspecting the incoming message to determine the data contract to which the contents of the message conform. The deserialization engine then attempts to find a CLR type that implements a data contract compatible with the message contents. The set of candidate types that the deserialization engine allows for during this process is referred to as the deserializer's set of "known types."
You'd want to apply this attribute to the class being transferred over the wire, which isn't convenient when you don't control the class, as is the case here. So this is probably a non-starter.
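For reference, a minimal sketch of what the KnownTypeAttribute approach looks like when you do control the transferred class (Order and SpecialOrder are hypothetical types; attributes come from System.Runtime.Serialization):
[DataContract]
[KnownType(typeof(SpecialOrder))] // tells the deserializer to also consider SpecialOrder
public class Order
{
    [DataMember]
    public string Id { get; set; }
}

[DataContract]
public class SpecialOrder : Order
{
    [DataMember]
    public string Reason { get; set; }
}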
You can specify a custom DataContractResolver to resolve your problematic types:
A data contract resolver allows you to configure known types dynamically. Known types are required when serializing or deserializing a type not expected by a data contract.
You can do this without controlling the class to be serialized, but it takes a bit more work. This MSDN blog post has a great writeup.
In summary, you'd create a DataContractResolver and override its two methods, TryResolveType and ResolveName. The first is used during serialization, and the second during deserialization. From the MSDN sample, with my comments:
public class MyCustomerResolver : DataContractResolver
{
    public override bool TryResolveType(Type dataContractType, Type declaredType, DataContractResolver knownTypeResolver, out XmlDictionaryString typeName, out XmlDictionaryString typeNamespace)
    {
        if (dataContractType == typeof(Customer)) // a type I recognize
        {
            XmlDictionary dictionary = new XmlDictionary();
            typeName = dictionary.Add("SomeCustomer");
            typeNamespace = dictionary.Add("http://www.FPSTroller.com");
            return true;
        }
        else // I don't know what this is; defer to the built-in type resolver
        {
            return knownTypeResolver.TryResolveType(dataContractType, declaredType, null, out typeName, out typeNamespace);
        }
    }

    public override Type ResolveName(string typeName, string typeNamespace, Type declaredType, DataContractResolver knownTypeResolver)
    {
        // my type
        if (typeName == "SomeCustomer" && typeNamespace == "http://www.FPSTroller.com")
        {
            return typeof(Customer);
        }
        else // I don't know what this is; defer to the built-in type resolver
        {
            return knownTypeResolver.ResolveName(typeName, typeNamespace, declaredType, null);
        }
    }
}
The blog post I mentioned above has some sample resolvers that might give .NET a better shot at handling your classes without you writing anything custom (look for the "Useful resolvers" heading).
You'd use DataContractSerializerOperationBehavior to plug your resolver into WCF; see the sample in the MSDN documentation.
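A minimal sketch of that wiring, applied before opening a host for a hypothetical MyService; it relies on the DataContractResolver property exposed by DataContractSerializerOperationBehavior (System.ServiceModel.Description namespace):
public static class ResolverInstaller // hypothetical helper
{
    // Attaches MyCustomerResolver to every operation of every endpoint on the host.
    public static void Install(ServiceHost host)
    {
        foreach (ServiceEndpoint endpoint in host.Description.Endpoints)
        {
            foreach (OperationDescription operation in endpoint.Contract.Operations)
            {
                var behavior = operation.Behaviors.Find<DataContractSerializerOperationBehavior>();
                if (behavior != null)
                    behavior.DataContractResolver = new MyCustomerResolver();
            }
        }
    }
}
Call it before host.Open() so every serializer the runtime creates picks up the resolver.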
Finally, before going down this path, you might consider changing your WCF operations interface. Do you really need to pass these custom, non-generic types over the wire? What I've read implies that non-generic types often run into this kind of issue. Consider using a plain old System.Collections.Generic.Dictionary<K,V> and (if you're using .NET 4+) System.Tuple. Lock down your types; don't make the resolver guess.
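For instance, a hypothetical conversion helper that copies IronPython containers into plain CLR types before a WCF operation returns them (dictionary keys are assumed to be strings; uses System.Linq):
internal static class PythonInterop // hypothetical helper
{
    // Recursively replaces IronPython containers with plain CLR equivalents.
    internal static object ToClrValue(object value)
    {
        if (value is IronPython.Runtime.PythonTuple tuple)
            return tuple.Cast<object>().Select(ToClrValue).ToArray();
        if (value is IronPython.Runtime.List list)
            return list.Cast<object>().Select(ToClrValue).ToList();
        if (value is IronPython.Runtime.PythonDictionary dict)
            return dict.ToDictionary(kv => (string)kv.Key, kv => ToClrValue(kv.Value)); // keys assumed to be strings
        return value; // primitives pass through unchanged
    }
}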