What is the difference between these two properties?
I can use HttpContext.Items instead of HttpContext.Features to share data between middleware. The only difference I see is that with Items I ask for a key and get back an object that I have to cast myself, while Features does this cast for me automatically.
Is there something else behind them?
The biggest difference is that HttpContext.Items is designed to store key-value pairs, while HttpContext.Features is designed to store type-instance pairs.
To be more clear, HttpContext.Items is designed to share items within the scope of the current request, while HttpContext.Features, which is an instance of IFeatureCollection, is by no means meant to be used like that.
The IFeatureCollection interface represents a collection of HTTP features, such as:
IAuthenticationFeature, which stores the original PathBase and Path.
ISessionFeature, which stores the current Session.
IHttpConnectionFeature, which stores the underlying connection.
and so on.
To help store and retrieve a type-instance pair, the interface has three important members:
public interface IFeatureCollection : IEnumerable<KeyValuePair<Type, object>>
{
    // ...
    object this[Type key] { get; set; }
    TFeature Get<TFeature>();
    void Set<TFeature>(TFeature instance);
}
and the implementation (FeatureCollection) simply casts the value to the required type:
public class FeatureCollection : IFeatureCollection
{
    // ... get the required type of feature
    public TFeature Get<TFeature>()
    {
        return (TFeature)this[typeof(TFeature)]; // note: cast here!
    }

    public void Set<TFeature>(TFeature instance)
    {
        this[typeof(TFeature)] = instance; // note!
    }
}
This is by design: there's no need to store two IHttpConnectionFeature instances or two ISession instances.
While you can store arbitrary type-value pairs in a FeatureCollection, you'd better not. As you can see, Set<TFeature>(TFeature instance) simply replaces the old instance if the same type already exists in the collection, which means you'll have a bug if you ever need two instances of the same type.
HttpContext.Items is designed to share short-lived per-request data, as you mentioned.
HttpContext.Features is designed to share various HTTP features that allow middleware to create or modify the application's hosting pipeline. It's already filled with several features from .NET, such as IHttpSendFileFeature.
You should use HttpContext.Items to store data, and HttpContext.Features to add any new HTTP features that another middleware class might need.
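To make the contrast concrete, here is a minimal sketch; the IWeatherFeature type and the middleware class are hypothetical illustrations, not part of the original answer:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// A hypothetical feature type; at most one instance of it lives in the collection.
public interface IWeatherFeature
{
    string Forecast { get; }
}

public class WeatherFeature : IWeatherFeature
{
    public string Forecast { get; set; }
}

public class ExampleMiddleware
{
    private readonly RequestDelegate _next;

    public ExampleMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Items: arbitrary keys, and you cast the value back yourself.
        context.Items["StartTime"] = DateTime.UtcNow;

        // Features: keyed by type; a second Set<IWeatherFeature> call
        // would replace this instance.
        context.Features.Set<IWeatherFeature>(new WeatherFeature { Forecast = "Sunny" });

        await _next(context);

        var started = (DateTime)context.Items["StartTime"];    // manual cast
        var weather = context.Features.Get<IWeatherFeature>(); // typed access
    }
}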
I am designing a process that should be run daily at work, and I've written a class to do the work. It looks something like this:
class LeadReport {
    public $posts = array();
    public $fields = array();

    protected function _getPosts() {
        // get posts of a certain type
        // set them to the property $this->posts
    }

    protected function _getFields() {
        // use $this->posts to get fields
        // set $this->fields
    }

    protected function _writeCsv() {
        // use the properties to write a csv
    }

    protected function _sendMail() {
        // attach a csv and send it
    }

    public function main() {
        $this->out('Lead Report');
        $this->out("Getting Yesterday's Posts...");
        $this->_getPosts();
        $this->out("Done.");
        $this->out("Next, Parsing the Fields...");
        $this->_getFields();
        $this->out("Done.");
        $this->out("Next, Writing the CSVs...");
        $this->_writeCsv();
        $this->out("Done.");
        $this->out("Finally, Sending Mail");
        $this->_sendMail();
        $this->out('Bye!');
    }
}
After showing this code to one of my colleagues, he commented that the _get() methods should have return values, and that the _write() and _sendMail() methods should use those values as parameters.
So, two questions:
1) Which is "correct" in this case (properties or return values)?
2) Is there a general rule or principle that governs when to use properties over when to use return values in object oriented programming?
I think maybe the source of your question is that you are not entirely convinced that using accessor methods is better than having public fields. Common practice says you should not expose posts and fields as public: you should use the _getFields() method, or make $fields protected, to regulate access to that data. Having a protected _getFields() alongside a public $fields doesn't really make sense.
In this case your colleague may be pointing at two things: the fact that you should use accessor methods rather than public fields, and the fact that it is probably better to pass the posts into a method than to have the method read a property, if you can. That way you don't have to set a property before calling the method. Think of it as a way of documenting what the method needs in order to operate: another developer doesn't need to know which properties must be set for the method to work, because everything the method needs is passed in.
Regarding why we need accessor methods in the first place: why shouldn't you use public fields? Isn't it more convenient? It sure is. The reason we use accessors instead of public fields is that, as with most other concepts in OOP, you want your object to hide its details from the outside world and only expose well-defined interfaces to its state. Why? Ultimately to hide implementation details and keep internal changes from rippling outward (encapsulation). Accessors also help with debugging: you can simply set a breakpoint in a getter or setter to see when a variable changes, or check for a certain value in one place instead of littering your code with that check. There are many more goodies that come with this: read-only values, access control, etc.
To sum up, fields are thought of as internal state, and accessor methods as the way to interact with that state. Having an outside object interact with interfaces is smiley face. Having an outside class interact with internal state directly is frowny face.
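To make the colleague's suggestion concrete, here is a rough sketch of how main() could pass data explicitly instead of going through properties; the method bodies are placeholders, as in the question, and the CSV path is made up:

class LeadReport {
    protected function _getPosts() {
        $posts = array();
        // get posts of a certain type
        return $posts;
    }

    protected function _getFields(array $posts) {
        $fields = array();
        // derive fields from the posts passed in
        return $fields;
    }

    protected function _writeCsv(array $posts, array $fields) {
        // write a csv from the arguments, return its path
        return '/tmp/lead_report.csv';
    }

    protected function _sendMail($csvPath) {
        // attach the csv and send it
    }

    public function main() {
        $posts  = $this->_getPosts();
        $fields = $this->_getFields($posts);
        $csv    = $this->_writeCsv($posts, $fields);
        $this->_sendMail($csv);
    }
}

Each method now declares exactly what it consumes and produces, so no hidden property needs to be set before calling it.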
I'm using protocol buffers as the wire data format in a client-server architecture. Domain objects (Java beans) will go through the following life-cycle.
Used in client side business logic
Converted to protobuf format
Transmitted to the server
Converted back to domain object
Used in server side business logic
"Protocol Buffers and O-O Design" section in ProtoBuf documentation recommends wrapping generated class inside proper domain model.
I'd like to find-out the best appoach.
For e.g. I have a simple proto definition.
package customer;
option java_package = "com.example";
option java_outer_classname = "CustomerProtos";
message Customer {
    required string name = 1;
    optional string address = 2;
}
This is how the domain model is defined. As you can see, the data is stored entirely in the proto builder object.
package com.example;

public class CustomerModel
{
    private CustomerProtos.Customer.Builder builder = CustomerProtos.Customer.newBuilder();

    public String getName()
    {
        return builder.getName();
    }

    public void setName(String name)
    {
        builder.setName(name);
    }

    public String getAddress()
    {
        return builder.getAddress();
    }

    public void setAddress(String address)
    {
        builder.setAddress(address);
    }

    public byte[] serialize()
    {
        return builder.build().toByteArray();
    }
}
Is this a good practice? These objects are used in all phases of the life-cycle, but we only require the protobuf format during the client-server transmission phase.
Is there any performance issue when accessing the proto builder's getter/setter methods, especially when the proto definition is complex and nested?
I have no experience with protocol buffers, but I would not recommend implementing your domain objects tailored to a specific serialization/transfer framework. You might regret that in the future.
The domain objects and logic of a software application should be as independent as possible from specific implementation issues (in your case serialization/transfer), because you want your domain to be easy to understand and be reusable/maintainable in the future.
If you want to define your domain objects independent of serialization/transfer, you have two options:
Option 1: before serialization/transfer, you copy the information into protocol-buffers-specific objects and send them to your server. There you would have to copy the information back into your domain objects.
Option 2: use another serialization library, like Kryo or ProtoStuff, to directly transfer your domain objects to the server.
The disadvantages of option 1 are that your domain is defined twice (which is undesirable with respect to modifications) and that the information must be copied back and forth (which produces error-prone, hard-to-maintain code).
The disadvantages of option 2 are that you lose schema evolution (although ProtoStuff apparently supports it) and that the complete (potentially large) object graph is serialized and transferred, although you could prune the object graph (manually or with JGT) before serialization/transfer.
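For illustration, here is a minimal sketch of option 1's copying step, using the Customer message from the question. It assumes a plain CustomerModel POJO with ordinary fields and getters/setters (unlike the builder-backed version above); the mapper class is made up:

public class CustomerMapper {

    // Copy the domain object into a protobuf message before transfer.
    // (name is a required field and is assumed to be set.)
    public static CustomerProtos.Customer toProto(CustomerModel model) {
        CustomerProtos.Customer.Builder builder = CustomerProtos.Customer.newBuilder()
                .setName(model.getName());
        if (model.getAddress() != null) {
            builder.setAddress(model.getAddress());
        }
        return builder.build();
    }

    // Copy the received message back into a domain object on the server.
    public static CustomerModel fromProto(CustomerProtos.Customer proto) {
        CustomerModel model = new CustomerModel();
        model.setName(proto.getName());
        if (proto.hasAddress()) {
            model.setAddress(proto.getAddress());
        }
        return model;
    }
}

This is exactly the duplicated, mechanical code the answer warns about, which is why the next answer proposes automating it.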
We've made protobuf-converter to solve the problem of transforming your domain model objects into Google Protobuf messages and vice versa.
How to use it:
Domain model classes that are to be transformed into protobuf messages must satisfy these conditions:
The class has to be marked with the @ProtoClass annotation, which contains a reference to the related protobuf message class.
Class fields have to be marked with the @ProtoField annotation. These fields must have getters and setters.
E.g.:
@ProtoClass(ProtobufUser.class)
public class User {

    @ProtoField
    private String name;
    @ProtoField
    private String password;

    // getters and setters for 'name' and 'password' fields
    ...
}
Code for converting a User instance into the related protobuf message:
User userDomain = new User();
...
ProtobufUser userProto = Converter.create().toProtobuf(ProtobufUser.class, userDomain);
Code for backward conversion:
User userDomain = Converter.create().toDomain(User.class, userProto);
Conversion of lists of objects is similar to single object conversion.
In my Google Web Toolkit project, I got the following error:
com.google.gwt.user.client.rpc.SerializationException: Type 'your.class.Type' was not included in the set of types which can be serialized by this SerializationPolicy or its Class object could not be loaded. For security purposes, this type will not be serialized.
What are the possible causes of this error?
GWT keeps track of a set of types which can be serialized and sent to the client. your.class.Type apparently was not on this list. Lists like this are stored in .gwt.rpc files. These lists are generated, so editing these lists is probably useless. How these lists are generated is a bit unclear, but you can try the following things:
Make sure your.class.Type implements java.io.Serializable
Make sure your.class.Type has a public no-args constructor
Make sure the members of your.class.Type do the same
Check if your program does not contain collections of a non-serializable type, e.g. ArrayList<Object>. If such a collection contains your.class.Type and is serialized, this error will occur.
Make your.class.Type implement IsSerializable. This marker interface was specifically meant for classes that should be sent to the client. This didn't work for me, but my class also implemented Serializable, so maybe both interfaces don't work well together.
Another option is to create a dummy class with your.class.Type as a member, and add a method to your RPC interface that gets and returns the dummy. This forces the GWT compiler to add the dummy class and its members to the serialization whitelist.
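A hypothetical sketch of that trick, with MyDto standing in for your.class.Type (the class and method names are made up):

// In its own file; MyDto stands in for your.class.Type.
public class DummyWhitelist implements java.io.Serializable {
    public MyDto dummyField; // forces MyDto onto the serialization whitelist

    public DummyWhitelist() { } // GWT needs a public no-args constructor
}

// In the RPC service interface:
public interface MyService extends com.google.gwt.user.client.rpc.RemoteService {
    // Never actually called at runtime; it only forces the GWT compiler to
    // include DummyWhitelist (and thus MyDto) in the serialization policy.
    DummyWhitelist neverCalled(DummyWhitelist dummy);
}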
I'll also add that if you want to use a nested class, use a static member class.
I.e.,
public class Pojo {
public static class Insider {
}
}
Non-static member classes get the SerializationException in GWT 2.4.
I had the same issue in a RemoteService like this
public List<X> getX(...);
where X is an interface. The only implementation did conform to the rules, i.e. it implements Serializable or IsSerializable, has a default constructor, and all its (non-transient and non-final) fields follow those rules as well.
But I kept getting that SerializationException until I changed the result type from List<X> to X[], so
public X[] getX(...);
worked. Interestingly, the only argument, a List<Y> where Y is an interface, was no problem at all...
I have run into this problem, and if by chance you are using JPA or Hibernate, it can be the result of returning the query object directly instead of creating a new object and copying the relevant fields into it. Check out the following, which I saw in a Google group.
@SuppressWarnings("unchecked")
public static List<Article> getForUser(User user)
{
    List<Article> articles = null;
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try
    {
        Query query = pm.newQuery(Article.class);
        query.setFilter("email == emailParam");
        query.setOrdering("timeStamp desc");
        query.declareParameters("String emailParam");
        List<Article> results = (List<Article>) query.execute(user.getEmail());
        articles = new ArrayList<Article>();
        for (Article a : results)
        {
            a.getEmail();
            articles.add(a);
        }
    }
    finally
    {
        pm.close();
    }
    return articles;
}
This helped me out a lot; hopefully it points others in the right direction.
Looks like this question is very similar to "IsSerializable or not in GWT?"; see that question for more links to related documentation.
When your class has JDO annotations, this fixed it for me (in addition to the points in bspoel's answer): https://stackoverflow.com/a/4826778/1099376
I'd like to use an entity like this for table storage:
public class MyEntity
{
    public String Text { get; private set; }
    public Int32 SomeValue { get; private set; }

    public MyEntity(String text, Int32 someValue)
    {
        Text = text;
        SomeValue = someValue;
    }
}
But it's not possible, because ATS needs:
a parameterless constructor;
all properties public and read/write;
to inherit from TableServiceEntity.
The first two are things I don't want to do. Why would I want anybody to be able to change data that should be read-only, or to create objects of this kind in an inconsistent state (what are constructors for, then?), or, even worse, to alter the PartitionKey or the RowKey? Why are we still constrained by these deserialization requirements?
I don't like developing software that way. How can I use the table storage library while serializing and deserializing the objects myself? I think that as long as the objects inherit from TableServiceEntity it shouldn't be a problem.
So far I got to save an object, but I don't know how retrieve it:
Message m = new Message("message XXXXXXXXXXXXX");
CloudTableClient tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("Messages");
TableServiceContext tcontext = new TableServiceContext(account.TableEndpoint.AbsoluteUri, account.Credentials);
var list = tableClient.ListTables().ToArray();
tcontext.AddObject("Messages", m);
tcontext.SaveChanges();
Is there any way to avoid those deserialization requirements or get the raw object?
Cheers.
If you want to use the Storage Client Library, then yes, there are restrictions on what you can and can't do with the objects you want to store. Point 1 is correct. I'd expand point 2 to say "All properties that you want to store must be public and read/write" (for integer properties you can get away with having read-only properties, and it won't try to save them), but you don't actually have to inherit from TableServiceEntity.
TableServiceEntity is just a very light class that has the properties PartitionKey, RowKey, Timestamp and is decorated with the DataServiceKey attribute (take a look with Reflector). All of these things you can do to a class that you create yourself and doesn't inherit from TableServiceEntity (note that the casing of these properties is important).
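For example, a minimal sketch of such a self-made class (the Message name and the Text property are illustrative):

using System;
using System.Data.Services.Common;

// No TableServiceEntity base class; just the members the library expects.
// Note the exact casing of PartitionKey, RowKey and Timestamp.
[DataServiceKey("PartitionKey", "RowKey")]
public class Message
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }

    public string Text { get; set; }
}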
If this still doesn't give you enough control over how you build your classes, you can always ignore the Storage Client Library and use the REST API directly. This will give you the ability to serialize and deserialize the XML any way you like. You will lose all of the nice things that come with using the library, like the ability to create queries in LINQ.
The constraints around that ADO.NET wrapper for the Table Storage are indeed somewhat painful. You can also adopt a Fat Entity approach as implemented in Lokad.Cloud. This will give you much more flexibility concerning the serialization of your entities.
Just don't use inheritance.
If you want to use your own POCOs, create your class as you want it, and create a separate table-entity wrapper/container class that holds the PartitionKey and RowKey and carries your class as a serialized byte array.
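A minimal sketch of that container idea; the class name and the choice of BinaryFormatter are assumptions, not from the original answer:

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using Microsoft.WindowsAzure.StorageClient;

// The wrapped POCO must be [Serializable] for BinaryFormatter;
// any serializer that yields a byte array would do.
public class PocoContainerEntity : TableServiceEntity
{
    public PocoContainerEntity() { } // required by the library

    public byte[] Payload { get; set; } // the serialized POCO

    public static PocoContainerEntity Wrap(string partitionKey, string rowKey, object poco)
    {
        using (var stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, poco);
            return new PocoContainerEntity
            {
                PartitionKey = partitionKey,
                RowKey = rowKey,
                Payload = stream.ToArray()
            };
        }
    }

    public T Unwrap<T>()
    {
        using (var stream = new MemoryStream(Payload))
        {
            return (T)new BinaryFormatter().Deserialize(stream);
        }
    }
}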
You can use composition to achieve what you want.
Create your table entities as you need to for storage, and create your POCOs as wrappers over them, providing the API you want the rest of your application code to see.
You can even mix in some interfaces for better code.
How about generating the POCO wrappers at runtime using System.Reflection.Emit http://blog.kloud.com.au/2012/09/30/a-better-dynamic-tableserviceentity/
Is it a violation of persistence ignorance to inject a repository interface into an entity object like this? If I didn't use an interface I would clearly see a problem, but when using an interface, is there really a problem? Is the code below a good or bad pattern, and why?
public class Contact
{
    private readonly IAddressRepository _addressRepository;

    public Contact(IAddressRepository addressRepository)
    {
        _addressRepository = addressRepository;
    }

    private IEnumerable<Address> _addressBook;

    public IEnumerable<Address> AddressBook
    {
        get
        {
            if (_addressBook == null)
            {
                _addressBook = _addressRepository.GetAddresses(this.Id);
            }
            return _addressBook;
        }
    }
}
It's not exactly a good idea, but it may be OK for some limited scenarios. I'm a little confused by your model, as I have a hard time believing that Address is your aggregate root, and therefore it wouldn't be ordinary to have a full-blown address repository. Based on your example, you are probably actually using a table data gateway or DAO rather than a repository.
I prefer to use a data mapper to solve this problem (an ORM or similar solution). Basically, I would take advantage of my ORM to treat address-book as a lazy loaded property of the aggregate root, "Contact". This has the advantage that your changes can be saved as long as the entity is bound to a session.
If I weren't using an ORM, I'd still prefer that the concrete Contact repository implementation set the property of the AddressBook backing store (list, or whatever). I might have the repository set that enumeration to a proxy object that does know about the other data store, and loads it on demand.
You can inject the load function from outside. The new Lazy<T> type in .NET 4.0 comes in handy for that:
public Contact(Lazy<IEnumerable<Address>> addressBook)
{
    _addressBook = addressBook;
}

private Lazy<IEnumerable<Address>> _addressBook;

public IEnumerable<Address> AddressBook
{
    get { return this._addressBook.Value; }
}
Also note that IEnumerable<T>s might be intrinsically lazy anyhow when you get them from a query provider. But for any other type you can use the Lazy<T>.
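A hypothetical composition-root snippet showing how the Lazy<T> constructor above might be wired up; addressRepository and contactId are assumed to exist:

// The repository stays outside the entity; only the deferred load is injected.
var contact = new Contact(
    new Lazy<IEnumerable<Address>>(
        () => addressRepository.GetAddresses(contactId)));

// The repository is not hit until AddressBook is first enumerated.
foreach (var address in contact.AddressBook)
{
    Console.WriteLine(address);
}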
Normally when you follow DDD you always operate with the whole aggregate. The repository always returns you a fully loaded aggregate root.
It doesn't make much sense (in DDD at least) to write code as in your example. A Contact aggregate will always contain all the addresses (if it needs them for its behavior, which I doubt to be honest).
So typically ContactRepository is supposed to construct the whole Contact aggregate for you, where Address is an entity or, most likely, a value object inside this aggregate.
Because Address is an entity/value object that belongs to (and is therefore managed by) the Contact aggregate, it will not have its own repository, as you are not supposed to manage entities that belong to an aggregate from outside that aggregate.
To sum up: always load the whole Contact and call its behavior methods to do something with its state.
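A brief sketch of what that looks like (all names are illustrative):

using System;
using System.Collections.Generic;

public class Contact
{
    private readonly List<Address> _addressBook = new List<Address>();

    public IEnumerable<Address> AddressBook
    {
        get { return _addressBook; }
    }

    // State changes go through behavior methods on the aggregate root.
    public void AddAddress(Address address)
    {
        _addressBook.Add(address);
    }
}

public interface IContactRepository
{
    // Returns the fully loaded aggregate, addresses included.
    Contact GetById(Guid id);
}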
Since it's been two years since I asked the question, and the question was somewhat misunderstood, I will try to answer it myself.
Rephrased question:
"Should Business entity classes be fully persistance ignorant?"
I think entity classes should be fully persistence ignorant, because you will instantiate them in many places in your code base, so it quickly becomes messy to always have to inject the repository class into the entity constructor, and it doesn't look very clean either. This becomes even more evident if you need to inject several repositories. Therefore I always use a separate handler/service class to do the persistence jobs for the entities. These classes are instantiated far less frequently, and you usually have more control over where and when this happens. Entity classes are kept as lightweight as possible.
I now always have one repository per aggregate root, and if I need some extra business logic when entities are fetched from repositories, I usually create one service class for the aggregate root.
Taking a tweaked example of the code in the question (as it was a bad example), I would do it like this now.
Instead of:
public class Contact
{
    private readonly IContactRepository _contactRepository;

    public Contact(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save()
    {
        _contactRepository.Save(this);
    }
}
I do it like this:
public class Contact
{
}

public class ContactService
{
    private readonly IContactRepository _contactRepository;

    public ContactService(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save(Contact contact)
    {
        _contactRepository.Save(contact);
    }
}