I already have a service working using the DataContract attributes. We would like to switch to the protobuf implementation, but if we have to change all the attributes, it would be a lot of hard work.
Is it possible to NOT use ProtoMember and ProtoContract, and to have protobuf-net use the DataMember and DataContract attributes instead?
thanks
Sure; protobuf-net is perfectly happy with [DataContract] / [DataMember] as long as it can still get valid numbers, which it does by looking for the Order property of DataMemberAttribute.
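For example, a minimal sketch (the type and members are illustrative) that protobuf-net can handle with no Proto* attributes at all:
[DataContract]
public class Person
{
    // Order doubles as the protobuf field number
    [DataMember(Order = 1)]
    public int Id { get; set; }

    [DataMember(Order = 2)]
    public string Name { get; set; }
}

// Usage is the same as for attributed types:
// Serializer.Serialize(stream, person);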
There is, however, a small problem: tools like svcutil don't guarantee the actual numbers - just the order. This can make it problematic to ensure that you have the same numbers on both sides. In addition, svcutil tends to start at zero, not one - and zero is not a valid field number for protobuf. If the numbers you get all turn out to be off-by-one, you can tweak this by adding a partial class in a separate file with a fixup, for example:
[ProtoContract(DataMemberOffset = 1)]
partial class Whatever { }
However, if the numbers are now all over the place (because they weren't sequential originally), then you might want to either use multiple [ProtoPartialMember(...)] attributes to tell it how to map each one (remembering that you can use nameof rather than hard-coding the member names):
[ProtoContract]
[ProtoPartialMember(1, nameof(SomeStringValue))]
[ProtoPartialMember(2, nameof(WhateverId))]
partial class Whatever { }
or just share the original type definition, which might be easier.
I'm building an app that generates random sequences of musical notes and displays them to the user as musical notation. These sequences can be generated according to several parameters, including density and maximum consecutive notes of the same pitch.
Musical sequences are captured by a sequence object whose notes property is a simple string of notes such as "abcdaba".
My early attempts to generate random sequences involved a SequenceGenerator class that compiled random sequences using several private methods. This looks like a service to me. But I'm trying to honour the principle expressed in Domain-Driven Design (Evans 2003) to only use services where necessary and to prefer associating behaviour with domain objects.
So my question is:
Should the job of producing random sequences be taken care of by a public method on sequence itself (such as generateRandom()) or should it be kept separate?
I considered the possibility that my original design is more along the lines of a builder or factory pattern than a service, but the code for creating a random sequence is very different from the code for creating one from a supplied string of notes.
One concern I have with the method route is that generateRandom() as a method on sequence changes the content of sequence but isn't actually generating a new sequence object. This just feels wrong, but I can't express why.
I'm still getting my head around some of the core OO design principles, so any help is greatly appreciated.
Should the job of producing random sequences be taken care of by a public method on sequence itself (such as generateRandom()) or should it be kept separate?
I usually find that I get cleaner designs if I treat "random" the same way that I treat "time", or "I/O" -- as an input to the model, rather than as an aspect of the model itself.
If you don't consider time an input value, think about it until you do -- it is an important concept (John Carmack, 1998).
Within the constraints of DDD, that could either mean passing a "domain service" as an argument to your method, allowing your aggregate to invoke the service as needed, or it could mean having a method on the aggregate, so that the application can pass in random numbers when needed.
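As a minimal sketch of the second option (in C#; the names and note range are mine), where the randomness is supplied from outside:
// A sketch, assuming a Sequence whose notes property is a simple string
// as described above. The Random instance is an input: the sequence
// never creates its own randomness, which keeps it easy to test.
public sealed class Sequence
{
    public string Notes { get; }

    public Sequence(string notes)
    {
        Notes = notes;
    }

    // The caller decides where the randomness comes from.
    public static Sequence CreateRandom(Random random, int length)
    {
        var notes = new char[length];
        for (int i = 0; i < length; i++)
        {
            notes[i] = (char)('a' + random.Next(7)); // pitches a..g
        }
        return new Sequence(new string(notes));
    }
}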
So any creation of a sequence would involve passing in some pattern or seed, but whether that is random or not is decided outside of the sequence itself?
Yes, exactly.
The creation of an object is not usually considered part of the logic for the object.
How you do that technically is a different matter. You could potentially use delegation. For example:
public interface NoteSequence {
    void play();
}

public final class LettersNoteSequence implements NoteSequence {
    public LettersNoteSequence(String letters) {
        ...
    }
    ...
}

public final class RandomNoteSequence implements NoteSequence {
    ...
    @Override
    public void play() {
        new LettersNoteSequence(generateRandomLetters()).play();
    }
}
This way you don't have to have a "service" or a "factory", but this is only one alternative and may or may not fit your use-case.
I have POCO classes, and I use Newtonsoft.Json for serialization. Now I want to migrate to Google Protocol Buffers. Is there any way I can migrate all my classes (not manually) so that I can use Protocol Buffers for serialization and deserialization?
Do you just want it to work? The absolute simplest way to do this would be to use protobuf-net and add [ProtoContract(ImplicitFields = ImplicitFields.AllPublic)]. What this does is tell protobuf-net to make up the field numbers, which it does by taking all the public members, sorting them alphabetically, and just counting upwards. Then you can use your type with ProtoBuf.Serializer and it should behave in the way you expect.
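As a minimal sketch (the type is illustrative):
// protobuf-net infers the field numbers here: public members are
// sorted alphabetically and numbered upwards.
[ProtoContract(ImplicitFields = ImplicitFields.AllPublic)]
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Usage:
// Serializer.Serialize(stream, customer);
// var clone = Serializer.Deserialize<Customer>(stream);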
This is simple, but it isn't very robust. If you add, remove or rename members it can all get out of sync. The problem here is that the protocol buffers format doesn't include names - just field numbers, and it is much harder to guarantee numbers over time. If your type is likely to change, you probably want to define field numbers explicitly. For example:
[ProtoContract]
public class Foo {
    [ProtoMember(1)]
    public int Id {get;set;}

    [ProtoMember(2)]
    public List<string> Names {get;} = new List<string>();
}
One other thing to watch out for would be non-zero default values. By default protobuf-net assumes certain things about implicit default values. If you are routinely using non-zero default values without doing it very carefully, protobuf-net may misunderstand you. You can turn that off globally if you desire:
RuntimeTypeModel.Default.UseImplicitZeroDefaults = false;
I have been exploring the CQRS/DDD-principles and patterns for a while now and have started implementing a sample project where I have split my storage-model into a WriteModel and a ReadModel. The WriteModel will use a simple NoSQL-like database where aggregates are stored in a key-value style, with value being just a serialized version of the aggregate.
I am now looking at ProtoBuf-Net for serializing and deserializing my domain model aggregates in and out of storage. Other than this post I haven't found any guidance or tips for using ProtoBuf-Net in this area. The point is that the (ideal) requirements for serialization and deserialization of aggregates are that the domain model should have as little knowledge as possible about this infrastructural concern, which implies the following:
No attributes on the classes
No constructors, getters, setters or any other piece of code just for the sake of serialization.
Ability to use any (custom) type possible and have it serialized/deserialized.
Thus far I have implemented just the serialization of the first versions of my aggregates which works perfectly fine. I use the RuntimeTypeModel.Default-instance to configure the MetaModel at runtime and have UseConstructor = false everywhere, which enables me to completely separate the serialization mechanics from my domain-assembly. I have even implemented a custom post-deserialization mechanism that enables me to just-in-time initialize fields after ProtoBuf-Net has deserialized it into a valid instance. So suppose I have class AggregateA like so:
[Version(1)]
public sealed class AggregateA
{
    private readonly int _x;
    private readonly string _y;
    ...
}
Then in my serialization-library I have code something along the following lines:
var metaType = RuntimeTypeModel.Default.Add(typeof(AggregateA), false);
metaType.UseConstructor = false;
metaType.AddField(1, "_x");
metaType.AddField(2, "_y");
...
However, I realize that up to this point I have only implemented the basic scenario, and I am now starting to think about how to approach versioning of my model. I am particularly interested in larger refactoring scenarios, where type A has been split into types A1 and A2, for example:
[Version(2)]
public sealed class AggregateA1
{
    private readonly int _x;
    ...
}

[Version(2)]
public sealed class AggregateA2
{
    private readonly string _y;
    ...
}
Suppose I have a bunch of serialized instances of AggregateA, but now my domain model knows only AggregateA1 and AggregateA2; how would you handle this scenario with ProtoBuf-Net?
A second question deals with point 3: is ProtoBuf-Net capable of handling arbitrary types if you're willing to put in some extra configuration effort? I've read about exceptions raised when using the DateTimeOffset type, which makes me think not all types can be serialized by the framework out of the box. But can I serialize these types by registering them in the RuntimeTypeModel? Should I even want to go there? Or is it better to forget about serializing common .NET types other than the simple ones?
protobuf-net is intended to work with predictable known models. It is true that everything can be configured at runtime, but I have not put any thought as to how to handle your A1/A2 scenario, precisely because that is not a supported scenario (in my defense, I can't see that working nicely with most serializers). Thinking off the top of my head, if you have the configuration/mapping data somewhere, then you could simply deserialize twice; i.e. as long as we still tell it that AggregateA1._x maps to 1 and AggregateA2._y maps to 2, you can do:
object a1 = model.Deserialize(source, null, typeof(AggregateA1));
source.Position = 0; // rewind
object a2 = model.Deserialize(source, null, typeof(AggregateA2));
However, more complex tweaks would require additional thought.
Re "arbitrary types"... define "arbitrary" ;p In particular, there is support for "surrogate" types which can be useful for some transformations - but without a very specific "problem statement" it is hard to answer completely.
Summary:
protobuf-net has an intended usage, which includes both serialization-aware (attributed, etc) and non-aware scenarios (runtime configuration, etc) - but it also works for a range of more bespoke scenarios (letting you drop to the raw reader/writer API if you want to). It does not and cannot guarantee to be a direct fit for every serialization scenario imaginable, and how well it behaves will depend on how far from that scenario you are.
I'd like to use for table storage an entity like this:
public class MyEntity
{
    public String Text { get; private set; }
    public Int32 SomeValue { get; private set; }

    public MyEntity(String text, Int32 someValue)
    {
        Text = text;
        SomeValue = someValue;
    }
}
But it's not possible, because ATS needs:
A parameterless constructor
All properties public and read/write
Inheritance from TableServiceEntity
The first two are things I don't want to do. Why should I want anybody to be able to change data that should be read-only, or to create objects of this kind in an inconsistent way (what are .ctors for, then?), or even worse, to alter the PartitionKey or the RowKey? Why are we still constrained by these deserialization requirements?
I don't like developing software that way. How can I use the table storage library in a way that lets me serialize and deserialize the objects myself? I think that as long as the objects inherit from TableServiceEntity it shouldn't be a problem.
So far I've managed to save an object, but I don't know how to retrieve it:
Message m = new Message("message XXXXXXXXXXXXX");
CloudTableClient tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("Messages");
TableServiceContext tcontext = new TableServiceContext(account.TableEndpoint.AbsoluteUri, account.Credentials);
var list = tableClient.ListTables().ToArray();
tcontext.AddObject("Messages", m);
tcontext.SaveChanges();
Is there any way to avoid those deserialization requirements or get the raw object?
Cheers.
If you want to use the Storage Client Library, then yes, there are restrictions on what you can and can't do with your objects that you want to store. Point 1 is correct. I'd expand point 2 to say "All properties that you want to store must be public and read/write" (for integer properties you can get away with having read only properties and it won't try to save them) but you don't actually have to inherit from TableServiceEntity.
TableServiceEntity is just a very light class that has the properties PartitionKey, RowKey, Timestamp and is decorated with the DataServiceKey attribute (take a look with Reflector). All of these things you can do to a class that you create yourself and doesn't inherit from TableServiceEntity (note that the casing of these properties is important).
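So a hand-rolled equivalent could look something like the following sketch (using the Message type from the question; the extra properties are illustrative):
[DataServiceKey("PartitionKey", "RowKey")]
public class Message
{
    // The three "system" properties - note the exact casing.
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }

    // Your own data.
    public string Text { get; set; }
}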
If this still doesn't give you enough control over how you build your classes, you can always ignore the Storage Client Library and just use the REST API directly. This will give you the ability to serialize and deserialize the XML any which way you like. You will lose all of the nice things that come with using the library, like the ability to create queries in LINQ.
The constraints around that ADO.NET wrapper for the Table Storage are indeed somewhat painful. You can also adopt a Fat Entity approach as implemented in Lokad.Cloud. This will give you much more flexibility concerning the serialization of your entities.
Just don't use inheritance.
If you want to use your own POCOs, create your class as you want it and create a separate table-entity wrapper/container class that holds the PartitionKey and RowKey and carries your class as a serialized byte array.
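A minimal sketch of that wrapper (the names and the serializer are placeholders):
public class PocoContainer
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }

    // The POCO travels as an opaque, pre-serialized payload.
    public byte[] Payload { get; set; }
}

// Storing:  container.Payload = MySerializer.Serialize(myPoco);
// Loading:  MyEntity poco = MySerializer.Deserialize<MyEntity>(container.Payload);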
You can use composition to achieve what you want.
Create your Table Entities as you need to for storage and create your POCOs as wrappers on those providing the API you want the rest of your application code to see.
You can even mix in some interfaces for better code.
How about generating the POCO wrappers at runtime using System.Reflection.Emit? http://blog.kloud.com.au/2012/09/30/a-better-dynamic-tableserviceentity/
Soliciting feedback/options/comments regarding a "best" pattern to use for reference data in my services.
What do I mean by reference data?
Let's use Northwind as an example. An Order is related to a Customer in the database. When I implement my Orders service, in some cases I'll want to reference a "full" Customer from an Order, and in other cases only a lightweight reference to the Customer (for example, a Key/Value pair).
For example, if I were doing a GetAllOrders(), I wouldn't want to return a fully filled out Order, I'd want to return a lightweight version of an Order with only reference data for each order's Customer. If I did a GetOrder() method, though, I'd probably want to fill in the Customer details because chances are a consumer of this method might need it. There might be other situations where I might want to ask that the Customer details be filled in during certain method calls, but left out for others.
Here is what I've come up with:
[DataContract]
public class OrderDTO
{
    [DataMember(IsRequired = true)]
    public CustomerDTO Customer;
    //etc..
}

[DataContract]
public class CustomerDTO
{
    [DataMember(IsRequired = true)]
    public ReferenceInfo ReferenceInfo;

    [DataMember(IsRequired = false)]
    public CustomerInfo CustomerInfo;
}

[DataContract]
public class ReferenceInfo
{
    [DataMember(IsRequired = true)]
    public string Key;

    [DataMember(IsRequired = true)]
    public string Value;
}

[DataContract]
public class CustomerInfo
{
    [DataMember(IsRequired = true)]
    public string CustomerID;

    [DataMember(IsRequired = true)]
    public string Name;
    //etc....
}
The thinking here is that since ReferenceInfo (which is a generic Key/Value pair) is always required in CustomerDTO, I'll always have ReferenceInfo. It gives me enough information to obtain the Customer details later if needed. The downside to having CustomerDTO require ReferenceInfo is that it might be overkill when I am getting the full CustomerDTO (i.e. with CustomerInfo filled in), but at least I am guaranteed the reference info.
Is there some other pattern or framework piece I can use to make this scenario/implementation "cleaner"?
The reason I ask is that although we could simply say in Northwind to ALWAYS return a full CustomerDTO, that might work fine in the simplistic Northwind situation. In my case, I have an object with 25-50 fields that are reference/lookup-type data. Some are more important to load than others in different situations, but I'd like to have as few definitions of these reference types as possible (so that I don't get into "DTO maintenance hell").
Opinions? Feedback? Comments?
Thanks!
We're at the same decision point on our project. As of right now, we've decided to create three levels of DTOs to handle a Thing: SimpleThing, ComplexThing, and FullThing. We don't know how it'll work out for us, though, so this is not yet an answer grounded in reality.
One thing I'm wondering is if we might learn that our services are designed at the "wrong" level. For example, is there ever an instance where we should bust a FullThing apart and only pass a SimpleThing? If we do, does that imply we've inappropriately put some business logic at too high of a level?
The Amazon Product Advertising API web service is a good example of the same problem that you are experiencing.
They use different DTOs to provide callers with more or less detail depending on their circumstances. For example, there is the small response group, the medium response group in the middle, and the large response group.
Having different DTOs is a good technique if, as you say, you don't want a chatty interface.
It seems like a complicated solution to me. Why not just have a customer ID field in the OrderDTO class and then let the application decide at runtime whether it needs the customer data? Since it has the customer ID, it can pull the data down when it so decides.
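In other words (a sketch):
[DataContract]
public class OrderDTO
{
    // Just the identifier; the application fetches the full
    // customer through a separate call only when it needs it.
    [DataMember(IsRequired = true)]
    public string CustomerID;
    //etc..
}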
I've decided against the approach I was going to take. I think much of my initial concerns were a result of a lack of requirements. I sort of expected this to be the case, but was curious to see how others might have tackled this issue of determining when to load up certain data and when not to.
I am flattening my Data Contract to contain the most used fields of reference data elements. This should work for a majority of consumers. If the supplied data is not enough for a given consumer, they'll have the option to query a separate service to pull back the full details for a particular reference entity (for example a Currency, State, etc). For simple lookups that really are basically Key/Value pairs, we'll be handling them with a generic Key/Value pair Data Contract. I might even use the KnownType attribute for my more specialized Key/Value pairs.
[DataContract]
public class OrderDTO
{
    [DataMember(IsRequired = true)]
    public CustomerDTO Customer;

    //in this case, I think consumers will need currency data,
    //so I pass back a full currency item
    [DataMember(IsRequired = true)]
    public Currency Currency;

    //in this case, I think consumers are not likely to need full StateRegion data,
    //so I pass back a "reference" to it
    //Users can call a separate service method to get full details if needed
    [DataMember(IsRequired = true)]
    public KeyValuePair ShipToStateRegion;
    //etc..
}

[DataContract]
[KnownType(typeof(Currency))]
public class KeyValuePair
{
    [DataMember(IsRequired = true)]
    public string Key;

    [DataMember(IsRequired = true)]
    public string Value;

    //enum consisting of all possible reference types,
    //such as "Currency", "StateRegion", "Country", etc.
    [DataMember(IsRequired = true)]
    public ReferenceType ReferenceType;
}

[DataContract]
public class Currency : KeyValuePair
{
    [DataMember(IsRequired = true)]
    public decimal ExchangeRate;

    [DataMember(IsRequired = true)]
    public DateTime ExchangeRateAsOfDate;
}

[DataContract]
public class CustomerDTO
{
    [DataMember(IsRequired = true)]
    public string CustomerID;

    [DataMember(IsRequired = true)]
    public string Name;
    //etc....
}
Thoughts? Opinions? Comments?
We've faced this problem in object-relational mapping as well. There are situations where we want the full object and others where we want a reference to it.
The difficulty is that by baking the serialization into the classes themselves, the data contract pattern enforces the idea that there's only one right way to serialize an object. But there are lots of scenarios where you might want to partially serialize a class and/or its child objects.
This usually means that you have to have multiple DTOs for each class. For example, a FullCustomerDTO and a CustomerReferenceDTO. Then you have to create ways to map the different DTOs back to the Customer domain object.
As you can imagine, it's a ton of work, most of it very tedious.
One other possibility is to treat the objects as property bags. Specify the properties you want when querying, and get back exactly the properties you need.
Changing the properties to show in the "short" version then won't require multiple round trips; you can get all of the properties for a set in one call (avoiding chatty interfaces), and you don't have to modify your data or operation contracts if you decide you need different properties for the "short" version.
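A sketch of that property-bag shape (the contract and the field-selection parameter are hypothetical):
[DataContract]
public class EntityBagDTO
{
    [DataMember(IsRequired = true)]
    public string Key;

    // Only the properties the caller asked for are populated.
    [DataMember(IsRequired = true)]
    public Dictionary<string, string> Properties;
}

// e.g. service.GetOrders(keys, properties: new[] { "Customer.Name", "Currency" });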
I typically build lazy loading into my complex web services (i.e. web services that send/receive entities). If a Person has a Father property (also a Person), I send just an identifier for the Father instead of the nested object, and I make sure my web service has an operation that can accept an identifier and respond with the corresponding Person entity. The client can then call the web service back if it wants to use the Father property.
I've also expanded on this so that batching can occur. If an operation sends back 5 Persons, then when the Father property is accessed on any one of those Persons, a request is made for all 5 Fathers by their identifiers. This helps reduce the chattiness of the web service.
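A sketch of what that can look like on the wire (the names are mine):
[DataContract]
public class PersonDTO
{
    [DataMember(IsRequired = true)]
    public int Id;

    [DataMember(IsRequired = true)]
    public string Name;

    // Identifier only; resolved on demand via a GetPerson(id) operation,
    // or in bulk (e.g. GetPeople(ids)) to keep the interface from
    // getting chatty.
    [DataMember(IsRequired = false)]
    public int? FatherId;
}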