For table storage I'd like to use an entity like this:
public class MyEntity
{
    public String Text { get; private set; }
    public Int32 SomeValue { get; private set; }

    public MyEntity(String text, Int32 someValue)
    {
        Text = text;
        SomeValue = someValue;
    }
}
But that's not possible, because Azure Table Storage (ATS) needs:
A parameterless constructor
All properties public and read/write
Inheritance from TableServiceEntity
The first two are things I don't want to do. Why would I want anybody to be able to change data that should be read-only, or to create objects of this kind in an inconsistent state (what are constructors for, then?), or, even worse, to alter the PartitionKey or the RowKey? Why are we still constrained by these deserialization requirements?
I don't like developing software that way. How can I use the table storage library in a way that lets me serialize and deserialize the objects myself? I think that as long as the objects inherit from TableServiceEntity it shouldn't be a problem.
So far I've managed to save an object, but I don't know how to retrieve it:
Message m = new Message("message XXXXXXXXXXXXX");
CloudTableClient tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("Messages");
TableServiceContext tcontext = new TableServiceContext(account.TableEndpoint.AbsoluteUri, account.Credentials);
var list = tableClient.ListTables().ToArray();
tcontext.AddObject("Messages", m);
tcontext.SaveChanges();
Is there any way to avoid those deserialization requirements or get the raw object?
Cheers.
If you want to use the Storage Client Library, then yes, there are restrictions on what you can and can't do with the objects you want to store. Point 1 is correct. I'd expand point 2 to say "all properties that you want to store must be public and read/write" (for integer properties you can get away with having read-only properties and it won't try to save them), but you don't actually have to inherit from TableServiceEntity.
TableServiceEntity is just a very light class that has the properties PartitionKey, RowKey, Timestamp and is decorated with the DataServiceKey attribute (take a look with Reflector). All of these things you can do to a class that you create yourself that doesn't inherit from TableServiceEntity (note that the casing of these properties is important).
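For instance, here's a minimal sketch of such a self-made entity (assuming the v1 storage client, where DataServiceKey comes from System.Data.Services.Common; the Text property is just illustrative):

using System;
using System.Data.Services.Common;

[DataServiceKey("PartitionKey", "RowKey")]
public class Message
{
    // Casing matters: these must be exactly PartitionKey, RowKey, Timestamp.
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public string Text { get; set; }
}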
If this still doesn't give you enough control over how you build your classes, you can always ignore the Storage Client Library and just use the REST API directly. This will give you the ability to serialize and deserialize the XML any which way you like. You will lose all of the nice things that come with using the library, like the ability to create queries in LINQ.
The constraints around that ADO.NET wrapper for the Table Storage are indeed somewhat painful. You can also adopt a Fat Entity approach as implemented in Lokad.Cloud. This will give you much more flexibility concerning the serialization of your entities.
Just don't use inheritance.
If you want to use your own POCOs, create your class as you want it, and create a separate table-entity wrapper/container class that holds the PartitionKey and RowKey and carries your class as a serialized byte array.
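A minimal sketch of that wrapper idea (the PocoContainer name and the choice of serializer are illustrative, not prescribed):

using Microsoft.WindowsAzure.StorageClient;

public class PocoContainer : TableServiceEntity
{
    public PocoContainer() { }  // parameterless constructor for the client library

    public PocoContainer(string partitionKey, string rowKey, byte[] payload)
        : base(partitionKey, rowKey)
    {
        Payload = payload;
    }

    // Your POCO, serialized however you like (JSON, protobuf, BinaryFormatter...).
    public byte[] Payload { get; set; }
}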
You can use composition to achieve what you want.
Create your Table Entities as you need to for storage and create your POCOs as wrappers on those providing the API you want the rest of your application code to see.
You can even mix in some interfaces for better code.
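A rough sketch of that composition approach (type names are illustrative):

using Microsoft.WindowsAzure.StorageClient;

// Storage-facing entity: mutable, public read/write properties.
public class MyEntityRow : TableServiceEntity
{
    public string Text { get; set; }
    public int SomeValue { get; set; }
}

// Domain-facing wrapper: exposes only the read-only API you want the rest
// of the application to see.
public class MyEntity
{
    private readonly MyEntityRow _row;

    public MyEntity(MyEntityRow row) { _row = row; }

    public string Text { get { return _row.Text; } }
    public int SomeValue { get { return _row.SomeValue; } }
}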
How about generating the POCO wrappers at runtime using System.Reflection.Emit? See http://blog.kloud.com.au/2012/09/30/a-better-dynamic-tableserviceentity/
I have POCO classes and I use Newtonsoft JSON for serialization. Now I want to migrate to Google Protocol Buffers. Is there any way I can migrate all my classes (not manually) so that I can use Protocol Buffers for serialization and deserialization?
Do you just want it to work? The absolute simplest way to do this would be to use protobuf-net and add [ProtoContract(ImplicitFields = ImplicitFields.AllPublic)]. What this does is tell protobuf-net to make up the field numbers, which it does by taking all the public members, sorting them alphabetically, and just counting upwards. Then you can use your type with ProtoBuf.Serializer and it should behave in the way you expect.
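A quick sketch of that (the Customer type is illustrative):

using System.IO;
using ProtoBuf;

[ProtoContract(ImplicitFields = ImplicitFields.AllPublic)]
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class Demo
{
    public static byte[] ToBytes(Customer customer)
    {
        using (var ms = new MemoryStream())
        {
            // Field numbers are inferred from the alphabetical order of public members.
            Serializer.Serialize(ms, customer);
            return ms.ToArray();
        }
    }
}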
This is simple, but it isn't very robust. If you add, remove or rename members it can all get out of sync. The problem here is that the protocol buffers format doesn't include names - just field numbers, and it is much harder to guarantee numbers over time. If your type is likely to change, you probably want to define field numbers explicitly. For example:
[ProtoContract]
public class Foo
{
    [ProtoMember(1)]
    public int Id { get; set; }

    [ProtoMember(2)]
    public List<string> Names { get; } = new List<string>();
}
One other thing to watch out for would be non-zero default values. By default protobuf-net assumes certain things about implicit default values. If you are routinely using non-zero default values without doing it very carefully, protobuf-net may misunderstand you. You can turn that off globally if you desire:
RuntimeTypeModel.Default.UseImplicitZeroDefaults = false;
Assume we have a class Car whose main field is VIN (Vehicle Identification Number). The VIN gives us a lot of information, such as:
owner
place of registration
country of production
year of production
color
engine type
etc.
I can continue and add more information:
last known GPS coordinates
fine list
is stolen (boolean)
etc.
It seems reasonable to store some of this information (for example, year of production and engine type) right inside the Car object. However, storing all of it inside the Car object would make the class too complicated, "overloaded", and hard to manage. Moreover, as the application evolves I can add more and more information.
So where is the boundary? What should be stored inside the Car object, and what should be stored outside in something like Dictionary<Car, GPSCoordinates>?
I think I should probably store "static" data inside the Car object, making it immutable, and store "dynamic" data in separate storage.
I would use a class called CarModel for the base attributes shared by every possible car in your application (engine size, color, registration #, etc). You can then extend this class with any number of more specific subclasses like Car, RentalCar, or whatever fits your business logic.
This way you have one clear definition of what all cars share and additional definitions for the different states cars can be in (RentalCar with its unique parameters, for example).
Update:
I guess what you're looking for is something like this (although I would recommend against it):
public class Car
{
    // mandatory
    protected int engineSize;
    protected int color;

    // optional
    protected Map<String, Object> attributes = new HashMap<String, Object>();

    public void set(String name, Object value)
    {
        attributes.put(name, value);
    }

    public Object get(String name)
    {
        return attributes.get(name);
    }
}
Why this is not a good solution:
Good luck trying to persist this class to a database or design anything that relies on a well known set of attributes for it.
Nightmare to debug potential problems.
Not a very good use of OOP with regard to type definitions. This can be abused to turn the Car class into something it is not.
Just because your Car class provides a GPSCoordinates property does not mean it needs to hold those coordinates internally. Essentially, that's what encapsulation is all about.
And yes, you can then add properties such as "IsInGarageNow", "WasEverDrivedByMadonna" or "RecommendedOil".
Is it a violation of persistence ignorance to inject a repository interface into an entity object like this? If I weren't using an interface I would clearly see a problem, but when using an interface is there really a problem? Is the code below a good or bad pattern, and why?
public class Contact
{
    private readonly IAddressRepository _addressRepository;

    public Contact(IAddressRepository addressRepository)
    {
        _addressRepository = addressRepository;
    }

    private IEnumerable<Address> _addressBook;

    public IEnumerable<Address> AddressBook
    {
        get
        {
            if (_addressBook == null)
            {
                _addressBook = _addressRepository.GetAddresses(this.Id);
            }
            return _addressBook;
        }
    }
}
It's not exactly a good idea, but it may be OK for some limited scenarios. I'm a little confused by your model, as I have a hard time believing that Address is your aggregate root, and therefore it wouldn't be ordinary to have a full-blown Address repository. Based on your example, you are probably actually using a table data gateway or DAO rather than a repository.
I prefer to use a data mapper to solve this problem (an ORM or similar solution). Basically, I would take advantage of my ORM to treat address-book as a lazy loaded property of the aggregate root, "Contact". This has the advantage that your changes can be saved as long as the entity is bound to a session.
If I weren't using an ORM, I'd still prefer that the concrete Contact repository implementation set the property of the AddressBook backing store (list, or whatever). I might have the repository set that enumeration to a proxy object that does know about the other data store, and loads it on demand.
You can inject the load function from outside. The new Lazy<T> type in .NET 4.0 comes in handy for that:
public Contact(Lazy<IEnumerable<Address>> addressBook)
{
    _addressBook = addressBook;
}

private Lazy<IEnumerable<Address>> _addressBook;

public IEnumerable<Address> AddressBook
{
    get { return this._addressBook.Value; }
}
Also note that IEnumerable<T>s might be intrinsically lazy anyhow when you get them from a query provider. But for any other type you can use the Lazy<T>.
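Wiring it up might look like this (a sketch; addressRepository and contactId are assumed from the question's context):

var contact = new Contact(
    new Lazy<IEnumerable<Address>>(() => addressRepository.GetAddresses(contactId)));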
Normally when you follow DDD you always operate with the whole aggregate. The repository always returns you a fully loaded aggregate root.
It doesn't make much sense (in DDD at least) to write code as in your example. A Contact aggregate will always contain all the addresses (if it needs them for its behavior, which I doubt to be honest).
So typically the ContactRepository is supposed to construct the whole Contact aggregate, where Address is an entity or, most likely, a value object inside this aggregate.
Because Address is an entity/value object that belongs to (and is therefore managed by) the Contact aggregate, it will not have its own repository, as you are not supposed to manage entities that belong to an aggregate from outside that aggregate.
In summary: always load the whole Contact and call its behavior methods to do something with its state.
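A minimal sketch of that whole-aggregate shape (names here are illustrative, not from the original post):

using System;
using System.Collections.Generic;

public class Address { /* value object */ }

public class Contact
{
    private readonly List<Address> _addresses = new List<Address>();

    // Fully loaded by the repository; no lazy lookups inside the entity.
    public IEnumerable<Address> AddressBook { get { return _addresses; } }

    // State changes go through behavior methods on the aggregate root.
    public void AddAddress(Address address) { _addresses.Add(address); }
}

public interface IContactRepository
{
    Contact GetById(Guid id);  // returns the whole aggregate
    void Save(Contact contact);
}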
Since it's been two years since I asked the question, and the question was somewhat misunderstood, I will try to answer it myself.
Rephrased question:
"Should Business entity classes be fully persistance ignorant?"
I think entity classes should be fully persistence ignorant, because you will instantiate them in many places in your code base, so it quickly becomes messy to always have to inject the repository class into the entity constructor; nor does it look very clean. This becomes even more evident if you need to inject several repositories. Therefore I always use a separate handler/service class to do the persistence work for the entities. These classes are instantiated far less frequently, and you usually have more control over where and when that happens. Entity classes are kept as lightweight as possible.
I now always have one repository per aggregate root, and if I need some extra business logic when entities are fetched from repositories I usually create one service class for the aggregate root.
Taking a tweaked example of the code in the question (as it was a bad example), this is how I would do it now:
Instead of:
public class Contact
{
    private readonly IContactRepository _contactRepository;

    public Contact(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save()
    {
        _contactRepository.Save(this);
    }
}
I do it like this:
public class Contact
{
}

public class ContactService
{
    private readonly IContactRepository _contactRepository;

    public ContactService(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save(Contact contact)
    {
        _contactRepository.Save(contact);
    }
}
Soliciting feedback/opinions/comments regarding a "best" pattern to use for reference data in my services.
What do I mean by reference data?
Let's use Northwind as an example. An Order is related to a Customer in the database. When I implement my Orders service, in some cases I'll want to reference a "full" Customer from an Order, and in other cases I'll just want a reference to the Customer (for example, a key/value pair).
For example, if I were doing a GetAllOrders(), I wouldn't want to return a fully filled out Order, I'd want to return a lightweight version of an Order with only reference data for each order's Customer. If I did a GetOrder() method, though, I'd probably want to fill in the Customer details because chances are a consumer of this method might need it. There might be other situations where I might want to ask that the Customer details be filled in during certain method calls, but left out for others.
Here is what I've come up with:
[DataContract]
public class OrderDTO
{
    [DataMember(IsRequired = true)]
    public CustomerDTO Customer;
    //etc..
}

[DataContract]
public class CustomerDTO
{
    [DataMember(IsRequired = true)]
    public ReferenceInfo ReferenceInfo;

    [DataMember(IsRequired = false)]
    public CustomerInfo CustomerInfo;
}

[DataContract]
public class ReferenceInfo
{
    [DataMember(IsRequired = true)]
    public string Key;

    [DataMember(IsRequired = true)]
    public string Value;
}

[DataContract]
public class CustomerInfo
{
    [DataMember(IsRequired = true)]
    public string CustomerID;

    [DataMember(IsRequired = true)]
    public string Name;
    //etc....
}
The thinking here is that since ReferenceInfo (which is a generic Key/Value pair) is always required in CustomerDTO, I'll always have ReferenceInfo. It gives me enough information to obtain the Customer details later if needed. The downside to having CustomerDTO require ReferenceInfo is that it might be overkill when I am getting the full CustomerDTO (i.e. with CustomerInfo filled in), but at least I am guaranteed the reference info.
Is there some other pattern or framework piece I can use to make this scenario/implementation "cleaner"?
The reason I ask is that although we could simply say in Northwind to ALWAYS return a full CustomerDTO, that might work fine in the simplistic Northwind situation. In my case, I have an object that has 25-50 fields that are reference/lookup type data. Some are more important to load than others in different situations, but I'd like to have as few definitions of these reference types as possible (so that I don't get into "DTO maintenance hell").
Opinions? Feedback? Comments?
Thanks!
We're at the same decision point on our project. As of right now, we've decided to create three levels of DTOs to handle a Thing: SimpleThing, ComplexThing, and FullThing. We don't know how it'll work out for us, though, so this is not yet an answer grounded in reality.
One thing I'm wondering is if we might learn that our services are designed at the "wrong" level. For example, is there ever an instance where we should bust a FullThing apart and only pass a SimpleThing? If we do, does that imply we've inappropriately put some business logic at too high of a level?
The Amazon Product Advertising API web service is a good example of the same problem you are experiencing.
They use different DTOs to provide callers with more or less detail depending on their circumstances. For example, there is the small response group, the large response group, and, in the middle, the medium response group.
Having different DTOs is a good technique if, as you say, you don't want a chatty interface.
It seems like a complicated solution to me. Why not just have a customer ID field in the OrderDTO class and let the application decide at runtime whether it needs the customer data? Since it has the customer ID, it can pull the data down when it decides to.
I've decided against the approach I was going to take. I think many of my initial concerns were a result of a lack of requirements. I sort of expected this to be the case, but was curious to see how others might have tackled this issue of determining when to load up certain data and when not to.
I am flattening my Data Contract to contain the most used fields of reference data elements. This should work for a majority of consumers. If the supplied data is not enough for a given consumer, they'll have the option to query a separate service to pull back the full details for a particular reference entity (for example a Currency, State, etc). For simple lookups that really are basically Key/Value pairs, we'll be handling them with a generic Key/Value pair Data Contract. I might even use the KnownType attribute for my more specialized Key/Value pairs.
[DataContract]
public class OrderDTO
{
    [DataMember(IsRequired = true)]
    public CustomerDTO Customer;

    //in this case, I think consumers will need currency data,
    //so I pass back a full currency item
    [DataMember(IsRequired = true)]
    public Currency Currency;

    //in this case, I think consumers are not likely to need full StateRegion data,
    //so I pass back a "reference" to it
    //Users can call a separate service method to get full details if needed
    [DataMember(IsRequired = true)]
    public KeyValuePair ShipToStateRegion;
    //etc..
}

[DataContract]
[KnownType(typeof(Currency))]
public class KeyValuePair
{
    [DataMember(IsRequired = true)]
    public string Key;

    [DataMember(IsRequired = true)]
    public string Value;

    //enum consisting of all possible reference types,
    //such as "Currency", "StateRegion", "Country", etc.
    [DataMember(IsRequired = true)]
    public ReferenceType ReferenceType;
}

[DataContract]
public class Currency : KeyValuePair
{
    [DataMember(IsRequired = true)]
    public decimal ExchangeRate;

    [DataMember(IsRequired = true)]
    public DateTime ExchangeRateAsOfDate;
}

[DataContract]
public class CustomerDTO
{
    [DataMember(IsRequired = true)]
    public string CustomerID;

    [DataMember(IsRequired = true)]
    public string Name;
    //etc....
}
Thoughts? Opinions? Comments?
We've faced this problem in object-relational mapping as well. There are situations where we want the full object and others where we want a reference to it.
The difficulty is that by baking the serialization into the classes themselves, the data contract pattern enforces the idea that there's only one right way to serialize an object. But there are lots of scenarios where you might want to partially serialize a class and/or its child objects.
This usually means that you have to have multiple DTOs for each class. For example, a FullCustomerDTO and a CustomerReferenceDTO. Then you have to create ways to map the different DTOs back to the Customer domain object.
As you can imagine, it's a ton of work, most of it very tedious.
One other possibility is to treat the objects as property bags. Specify the properties you want when querying, and get back exactly the properties you need.
Changing the properties to show in the "short" version then won't require multiple round trips, you can get all of the properties for a set at one time (avoiding chatty interfaces), and you don't have to modify your data or operation contracts if you decide you need different properties for the "short" version.
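A hypothetical sketch of such a property-bag style contract (all names here are illustrative, not from the original posts):

using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class PropertyBagRequest
{
    [DataMember]
    public string EntityId;

    // The caller names exactly the properties it wants back,
    // e.g. { "Id", "Name" } for the "short" version.
    [DataMember]
    public string[] Properties;
}

[DataContract]
public class PropertyBagResponse
{
    [DataMember]
    public Dictionary<string, string> Values;
}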
I typically build lazy loading into my complex web services (i.e. web services that send/receive entities). If a Person has a Father property (also a Person), I send just an identifier for the Father instead of the nested object, then I make sure my web service has an operation that can accept an identifier and respond with the corresponding Person entity. The client can then call the web service back if it wants to use the Father property.
I've also expanded on this so that batching can occur. If an operation sends back 5 Persons, then if the Father property is accessed on any one of those Persons, then a request is made for all 5 Fathers with their identifiers. This helps reduce the chattiness of the web service.
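The contract might look something like this (a sketch; PersonDto and the operation names are mine, not the answerer's):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class PersonDto
{
    [DataMember]
    public int Id;

    [DataMember]
    public string Name;

    // Just the Father's identifier; hydrate it on demand via GetPerson.
    [DataMember]
    public int? FatherId;
}

[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    PersonDto GetPerson(int id);

    [OperationContract]
    PersonDto[] GetPersons(int[] ids);  // batch lookup to reduce chattiness
}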
I am working on a little pinball-game project as a hobby and am looking for a pattern to encapsulate constant variables.
I have a model, within which there are values which will be constant over the life of that model e.g. maximum speed/maximum gravity etc. Throughout the GUI and other areas these values are required in order to correctly validate input. Currently they are included either as references to a public static final, or just plain hard-coded. I'd like to encapsulate these "constant variables" in an object which can be injected into the model, and retrieved by the view/controller.
To clarify, the values of the "constant variables" may not necessarily be defined at compile time; they could come from reading a file, user input, etc. What is known at compile time is which ones are needed. Another way to put it is that whatever this encapsulation is, the values it provides are immutable.
I'm looking for a way to achieve this which:
has compile time type-safety (i.e. not mapping a string to variable at runtime)
avoids anything static (including enums, which can't be extended)
I know I could define an interface with methods such as:
public int getMaximumSpeed();
public int getMaximumGravity();
... and inject an instance of that into the model, and make it accessible in some way. However, this results in a lot of boilerplate code, which is pretty tedious to write/test etc (I am doing this for funsies :-)).
I am looking for a better way to do this, preferably something which has the benefits of being part of a shared vocabulary, as with design patterns.
Is there a better way to do this?
P.S. I've thought some more about this, and the best trade-off I could find would be to have something like:
public class Variables {
    enum Variable {
        MaxSpeed(100),
        MaxGravity(10);

        private final Object value;

        Variable(Object variableValue) {
            this.value = variableValue;
        }
    }

    public Object getVariable(Variable v) {
        return v.value;  // look up the enum member and return its value
    }
}
I could then do something like:
Model m = new Model(new Variables());
Advantages: the lookup of a variable is protected by having to be a member of the enum in order to compile, variables can be added with little extra code
Disadvantages: enums cannot be extended, brittleness (a recompile is needed to add a variable), variable values would have to be cast from Object (to Integer in this example), which again isn't type safe, though generics may be an option for that... somehow
Are you looking for the Singleton or, a variant, the Monostate? If not, how does that pattern fail your needs?
Of course, here's the mandatory disclaimer that Anything Global Is Evil.
UPDATE: I did some looking, because I've been having similar debates/issues. I stumbled across a list of "alternatives" to classic global/scope solutions. Thought I'd share.
Thanks for all the time spent by you guys trying to decipher what is a pretty weird question.
I think, in terms of design patterns, the closest to what I'm describing is the Factory pattern, where I have a factory of pseudo-constants. Technically it's not creating an instance on each call, but rather always providing the same instance (in the sense of a Guice provider). But I can create several factories, each of which can provide different pseudo-constants, and inject each into a different model, so the model's UI can validate input a lot more flexibly.
If anyone's interested, I've come to the conclusion that an interface providing a method for each pseudo-constant is the way to go:
public interface IVariableProvider {
    int maxGravity();
    int maxSpeed();
    // and everything else...
}

public class VariableProvider implements IVariableProvider {
    private final int maxGravity, maxSpeed;

    public VariableProvider(int maxGravity, int maxSpeed) {
        this.maxGravity = maxGravity;
        this.maxSpeed = maxSpeed;
    }

    public int maxGravity() { return maxGravity; }
    public int maxSpeed() { return maxSpeed; }
}
Then I can do:
Model firstModel = new Model(new VariableProvider(2, 10));
Model secondModel = new Model(new VariableProvider(10, 100));
I think as long as the interface doesn't provide a prohibitively large number of variable getters, it wins over some parameterised lookup (which will either be vulnerable at run-time, or will prohibit extension/polymorphism).
P.S. I realise some have been questioning what my problem is with static final values. I made the statement (tongue in cheek) to a colleague that anything static is inherently not object-oriented. So in my hobby project I used that as the basis for a thought exercise in which I try to remove anything static from the project (next I'll be trying to remove all 'if' statements ;-D). If I were on a deadline and satisfied that public static final values wouldn't hamstring testing, I would have used them pretty quickly.
If you're just using Java/IoC, why not simply dependency-inject the values?
e.g. have Spring inject the values via a map and specify the object as a singleton:
<property name="values">
    <map>
        <entry><key><value>a1</value></key><value>b1</value></entry>
        <entry><key><value>a2</value></key><value>b3</value></entry>
    </map>
</property>
Your class is a singleton that holds an immutable copy of the map set in Spring:
private Map<String, String> m;

public String getValue(String s)
{
    return m.containsKey(s) ? m.get(s) : null;
}

public void setValues(Map<String, String> m)
{
    this.m = Collections.unmodifiableMap(m);
}
From what I can tell, you probably don't need to implement a pattern here -- you just need access to a set of constants, and it seems to me that's handled pretty well through the use of a publicly accessible static interface to them. Unless I'm missing something. :)
If you simply want to "objectify" the constants, though, for some reason, then the Singleton pattern would probably be called for, if any; I know you mentioned in a comment that you don't mind creating multiple instances of this wrapper object, but in response I'd ask: then why even introduce the sort of confusion that could arise from having multiple instances at all? What practical benefit are you looking for that would be satisfied by having the data in object form?
Now, if the values aren't constants, then that's different -- in that case, you probably do want a Singleton or Monostate. But if they really are constants, just wrap a set of enums or static constants in a class and be done! Keep-it-simple is as good a "pattern" as any.