I'm having trouble defining what my OperationContract should be when adding / updating an entity. I want to send an entity (or list of entities) to the ObjectContext via the WCF Service (which will instantiate a Business Manager for me to do the actual validation).
If the entity passes all of the validation rules (which could very well require querying the database to determine pass/fail for the more complex business rules), it'll be saved to the database, and I'll need to be able to pass back its ID (identity-column primary key) and the value of the concurrency token (timestamp column). If it fails, obviously we want a message or messages saying what was wrong. In the case of an update, all we would need is the new value of the concurrency token, but again we'd want the validation message(s).
To make it trickier, an entity could have multiple child/grandchild entities as well. For instance, a Trip will have Stops, which could potentially have Orders.
I'm just wondering how people handle this in the real world. The simplest examples just show the WCF service's operations like:
[OperationContract]
bool AddEntity(Entity e);
[OperationContract]
bool UpdateEntity(Entity e);
Does anyone have any great ideas for handling this? I guess I'm really just looking for practical advice here.
Should we be trying to save a collection of objects in one service call?
Should we be conveying the validation messages through a fault contract?
Any advice/input would be helpful, thanks!
Should we be trying to save a collection of objects in one service call?
If you mean saving a whole object graph in one call, then the answer is definitely yes. If you mean saving multiple independent object graphs (a collection) in one call, then the answer is probably yes. It is a good idea to reduce the number of roundtrips between client and service to a minimum, but at the same time doing so can introduce complications. You must decide whether the whole collection must be saved as an atomic operation, or whether you are happy with saving only part of the collection and returning errors for the rest. This decision will influence the rest of your architecture.
Should we be conveying the validation messages through a fault contract?
Yes, but only if the save operation is atomic, because a fault contract is an exception, and an exception should abort the current operation and return only the validation errors. A single fault contract that carries all validation errors should be enough. Don't throw an exception for each individual validation error, because that would make your application pretty annoying and nearly unusable.
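As an illustration, here is a minimal sketch of what a single fault contract carrying all validation errors might look like. ValidationFault and ITripService are illustrative names, not from the original; Trip is borrowed from the question above.

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// One fault contract that carries every validation error at once.
[DataContract]
public class ValidationFault
{
    [DataMember]
    public List<string> Errors { get; set; }
}

[ServiceContract]
public interface ITripService
{
    // Declaring the fault lets the client catch FaultException<ValidationFault>.
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]
    void AddTrip(Trip trip);
}

// In the service implementation, when validation fails:
//   throw new FaultException<ValidationFault>(
//       new ValidationFault { Errors = allErrors },
//       new FaultReason("Validation failed"));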
If you want to save only the part of the collection that passes validation and return errors for the rest, you should not use fault contracts. Instead of fault contracts, you should have a container data contract for the response which carries both the IDs and timestamps for the saved data and the IDs and errors for the unsaved data.
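For that partial-save case, a sketch of what such a container contract might look like (all names are illustrative):

using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class SaveEntityResult
{
    [DataMember]
    public int ClientKey { get; set; }      // correlates the result to the entity the caller sent

    [DataMember]
    public bool Saved { get; set; }

    [DataMember]
    public int NewId { get; set; }          // identity value, set when Saved is true

    [DataMember]
    public byte[] Timestamp { get; set; }   // new concurrency token, set when Saved is true

    [DataMember]
    public List<string> ValidationErrors { get; set; }  // populated when Saved is false
}

[DataContract]
public class SaveEntitiesResponse
{
    [DataMember]
    public List<SaveEntityResult> Results { get; set; }
}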
One little note on STEs (self-tracking entities): passing back just the IDs and timestamps can be tricky. I'm not sure whether you have to turn tracking off while you set them and turn it back on afterwards.
I'm having some message design headaches. I want to start up an NServiceBus saga for a long-running process. Part of the data needed for the initialization is a list of constraints, which are implementations of an abstract base class. As I've understood the design philosophy, messages should ideally be:
Self-contained, that is, contain all the data needed to process them. Following this, I would pass the full list of constraints along in the message.
Versionable. NServiceBus achieves this by using an XML serializer which does not pass along type information (see this thread, answer by Udi). In my case, that means I cannot pick up the specifics of the constraints on the receiving end.
The serialization problem can be "solved" by using the BinarySerializer, but this does not seem to be recommended practice since it breaks versioning. The alternative is to send along some identifier so that the constraints can be retrieved from a datastore, but that would remove the self-containedness.
Is there a third way here, or do I simply have to choose some "least bad" solution?
There is also the option of having these objects injected into your saga via DI.
Just create a bootstrapping class that at startup will call:
Configure.Instance.Configurer.ConfigureProperty<yourSaga>(s => s.SomeProperty = value);
Following this post, I am using a data context per call, so in each method of my WCF service I use a using block to create a new data context.
But I have some doubts about working this way.
For example, I use a method getAllClients() from my repository to get all the clients from the database; the service then sends the calling application a list with all the clients. The user then modifies the information of some of them, three for example, and the modified clients can be added to a list of changed clients.
When I want to update these three clients, I can call a method updateClients() which receives the list of modified clients. Since I use a new data context for each method, in updateClients() I get a new data context, without entities, so I think I have to follow these steps:
1. Create a new data context which has the clients that I want to update, so I need to specify the query conditions for that. This is an extra operation (I already fetched the clients with the getAllClients() method), so I need to fetch the clients again.
2. Go through the clients collection of the DbSet (I use EF 4.1) and change the information. This forces me to also go through the list that I received from the client application, so I have to traverse two lists. This consumes resources.
3. Save the changes. This is needed anyway, so it requires no extra work.
Is there any way to make step 2 easier? Does the data context have some method to copy the values from my modified client onto the client tracked in the data context? I use POCO entities; perhaps an easy way exists to do that.
The other question is about concurrency. If I control concurrency with the pessimistic concurrency that EF allows (with a timestamp field, for example), is it better to call updateClient() once for each client, or to pass a list with all the clients? I mean, if I use a list as the parameter and there is a concurrency issue with one client, the second for example, the first client will be updated correctly, but the second will not, and neither will the third. How can I notify the user that there are problems with some clients?
To sum up, I would like to know the best way to make updates when I have a short-lived data context.
Thanks.
Daimroc.
The service is a disconnected scenario, so when your client passes back the modified records you just need to process them as modified. You don't need to load all the records from the database first.
public void SaveClients(List<Client> modifiedClients)
{
    using (var context = new Context())
    {
        // Attach each detached client and mark it as changed; EF will
        // update all of its columns without loading the original row.
        modifiedClients.ForEach(c =>
        {
            context.Entry(c).State = EntityState.Modified;
        });

        // One transaction for the whole batch.
        context.SaveChanges();
    }
}
If you are using a per-call service and every service operation needs the context, you can move the context instantiation to the service constructor, because the service instance will live only to serve a single service call = you don't need a using block in every operation. If you do that, don't forget to implement IDisposable on your service to dispose the context, as sketched below.
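A minimal sketch of that pattern, assuming a DbContext-derived class named Context and a service contract named IClientService (both names illustrative):

using System;
using System.Collections.Generic;
using System.Data;
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ClientService : IClientService, IDisposable
{
    // Lives exactly as long as one call because the service is per-call.
    private readonly Context _context = new Context();

    public void SaveClients(List<Client> modifiedClients)
    {
        modifiedClients.ForEach(c => _context.Entry(c).State = EntityState.Modified);
        _context.SaveChanges();
    }

    // WCF disposes the service instance after the call completes.
    public void Dispose()
    {
        _context.Dispose();
    }
}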
The other question is about concurrency. If I control concurrency with the pessimistic concurrency that EF allows (with a timestamp field, for example), is it better to call updateClient() once for each client, or to pass a list with all the clients?
EF doesn't support pessimistic concurrency out of the box. Using a timestamp is optimistic concurrency, because it still allows others to use the record; pessimistic concurrency is application logic where another client is not able to select a locked record for update at all.
Concurrency is resolved per record, but the problem in this case is the transaction. Each call to SaveChanges results in a single transaction used to process all the changes in the database, so if any of your modified records is not up to date, you will get a concurrency exception and the whole transaction is rolled back = no record is updated.
You can still overcome the issue by passing the list of modified records to the service (reducing roundtrips between client and service is a best practice) but processing each record separately, calling SaveChanges for every single record, as sketched below. Anyway, this should be considered very carefully, because each call to SaveChanges acts as a separate unit of work - is that really what you want?
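A sketch of that per-record variant, collecting the IDs of the clients that hit a concurrency conflict so they can be reported back to the user (Client, Context and ClientId are assumed names, not from the original question):

using System.Collections.Generic;
using System.Data;
using System.Data.Entity.Infrastructure;

public List<int> SaveClientsOneByOne(List<Client> modifiedClients)
{
    var conflictedIds = new List<int>();
    using (var context = new Context())
    {
        foreach (var client in modifiedClients)
        {
            try
            {
                context.Entry(client).State = EntityState.Modified;
                context.SaveChanges(); // one transaction per record
            }
            catch (DbUpdateConcurrencyException)
            {
                // The timestamp didn't match - someone else changed the row.
                conflictedIds.Add(client.ClientId);
                // Detach the failed entity so later saves don't retry it.
                context.Entry(client).State = EntityState.Detached;
            }
        }
    }
    return conflictedIds; // send these back so the client app can notify the user
}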
Btw. the best practice is to make your service stateless. You should avoid maintaining data between service calls, and this example really doesn't need it.
I am creating a brand new application, including the database, and I'm going to use Entity Framework Code First. This will also use WCF for services, which opens it up to multiple UIs for different devices, as well as making the services API usable by other, unknown apps.
I have seen this batted around in several posts here on SO, but I don't see direct questions or answers pertaining to Code First, although there are a few mentioning POCOs. I am going to ask the question again, so here it goes - do I really need DTOs with Entity Framework Code First, or can I use the model as a set of common entities for all boundaries? I am really trying to follow the YAGNI train of thought, so while I have a clean sheet of paper I figured I would get this out of the way first.
Thanks,
Paul Speranza
There is no definitive answer to this problem, which is also the reason why you didn't find one.
Are you going to build services providing CRUD operations? That generally means your services will be able to return, insert, update and delete entities as they are = you will always expose the whole entity, or a single exactly defined serializable part of the entity, to all clients. If you do this, it is probably worth checking out WCF Data Services.
Are you going to expose a business facade working with entities? The facade will provide real business methods instead of just CRUD operations. These business methods will receive some data object and decompose it into multiple entities inside the wrapped business logic. Here it makes sense to use a specific DTO for every operation. The DTO will transfer only the data needed for the operation and return only the data allowed to the client.
A very simple example: suppose that your entities keep information like LastModifiedBy. This is probably information you want to pass back to the client. In the first scenario you have a single serializable set, so you pass it to the client and the client passes it, modified, back to the service. Now you must verify that the client didn't change the field, because he probably didn't have permission to do that - and you must do this for every single field the client didn't have permission to change. In the second scenario, your DTO with the updated data will simply not include this property (= a specialized DTO for your operation), so the client will not be able to send you a new value at all.
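To make the second scenario concrete, a sketch of such a specialized DTO (all names are illustrative):

using System.Runtime.Serialization;

// A purpose-built DTO for one update operation. LastModifiedBy simply has
// no slot here, so the client cannot tamper with it; the service stamps
// it server-side.
[DataContract]
public class UpdateCustomerAddressDto
{
    [DataMember]
    public int CustomerId { get; set; }

    [DataMember]
    public string Street { get; set; }

    [DataMember]
    public string City { get; set; }

    // Note: no LastModifiedBy, no CreatedOn - the contract itself
    // restricts what the client can send back.
}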
It is also related to the way you want to work with the data and where your real logic will be applied. Will it be in the service or in the client? How will you ensure that the client will not post invalid data? Do you want to restrict passing invalid data by logic or by specifically shaped transferred objects?
I strongly recommend a dedicated view model.
Doing this means:
You can design the UI (and iterate on it) without having to wait to design the data model first.
There is less friction when you want to change the UI.
You can avoid security problems with auto-mapping/model binding "accidentally" updating fields which shouldn't be editable by the user -- just don't put them in the view model (see the sketch after this list).
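A minimal sketch of that last point, with illustrative names - the view model simply omits the field, so model binding has nothing to write to:

public class UserEntity
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
    public bool IsAdmin { get; set; }    // must never be user-editable
}

public class EditUserViewModel
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
    // No IsAdmin property: an over-posted "IsAdmin=true" cannot bind to anything.
}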
However, with a WCF Data Service, it's hard to ignore the advantage of being able to write the service in essentially one line when you expose entities directly. So that might make the most sense for the WCF/server side.
But when it comes to UI, you're "gonna need it."
do I really need DTOs with Entity Framework Code First or can I use the model as a set of common entities for all boundaries?
Yes, the same set of POCOs / entities can be used for all boundaries.
But a set of mappers / converters / configurators will be needed to adapt the entities to the generic structures of each layer.
For example, when entities are configured with DataContract and DataMember attributes, WCF is able to transfer domain objects' state without creating any special classes.
Similarly, when entities are mapped using Entity Framework fluent mapping api, EF is able to persist domain objects' state in database without creating any special classes.
The same way, entities can be configured to be used in any layer by means of the layer infrastructure without creating any special classes.
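For illustration, a sketch of one POCO serving both boundaries: DataContract attributes make it a WCF wire format, while a fluent EF Code First mapping keeps the persistence concerns outside the class (Customer and CustomerConfiguration are illustrative names):

using System.Data.Entity.ModelConfiguration;
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}

// The persistence mapping lives in a separate configuration class,
// so the entity itself stays a plain object.
public class CustomerConfiguration : EntityTypeConfiguration<Customer>
{
    public CustomerConfiguration()
    {
        HasKey(c => c.Id);
        Property(c => c.Name).HasMaxLength(100).IsRequired();
        ToTable("Customers");
    }
}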
We are using a WCF Data Service to broker our data server side, and give third parties easy OData access to our data. The server side of things has been relatively easy. The client side, on the other hand, is giving us fits.
We are converting from regular Entity Framework to Data Services, and we've created an assembly which contains the generated client objects that talk to the data service (via a Service Reference). Those classes are partial, so we've added some logic and extended properties to them. This all works great.
The issue we are having is that we need to process our objects at save time, because they need to do some advanced serialization before they are sent over the wire. The DataServiceContext class contains two events: WritingEntity and ReadingEntity. The ReadingEntity event actually happens at the correct time for us (post object deserialization). The WritingEntity event happens at the WRONG time for us (post object serialization).
Is there any way to catch an object before it's written to the request, so that we can call a method on the entity that is about to be written?
Obviously we could just loop through the Entities list, looking for any entity that is not in a state of Unchanged or Deleted, and call the appropriate method there...but this would require me to add special code every time I wanted to call SaveChanges on the context. This may be what we need to do, but it would be nice if there was a way to catch the entities before they are written to XML for sending to the service.
Currently there's no hook in DataServiceContext to do what you want. The closest I can think of is the approach you suggested: walking all the entities and finding those which were modified. You could do this in your own SaveChanges-like method on the context class (which is also partial), as sketched below.
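A sketch of that workaround. MyServiceContext stands in for your generated (partial) context class, and IPreparableEntity / PrepareForSave are hypothetical names for the pre-serialization hook your partial entity classes would implement:

using System.Data.Services.Client;

// Hypothetical hook implemented by the partial entity classes.
public interface IPreparableEntity
{
    void PrepareForSave();
}

public partial class MyServiceContext
{
    public DataServiceResponse SaveChangesWithPreparation()
    {
        foreach (EntityDescriptor descriptor in this.Entities)
        {
            // Skip entities that won't be serialized into the request body.
            if (descriptor.State == EntityStates.Unchanged ||
                descriptor.State == EntityStates.Deleted)
                continue;

            var preparable = descriptor.Entity as IPreparableEntity;
            if (preparable != null)
                preparable.PrepareForSave(); // custom pre-serialization step
        }
        return this.SaveChanges();
    }
}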
Is it OK, based on your real-world experience, to define a service contract with one method which accepts some object as a form of request and returns some other object as the result of that request? What I mean is that instead of having methods for creating, deleting, editing and searching customers, I would have these activities encapsulated within DataContracts, and the service, after receiving such a DataContract, would take the appropriate action. The service interface would be as simple as:
interface ISomeService
{
    IMessageResult Process(IMessageRequest msg);
}
So IMessageRequest would have a field named OperationType = OperationTypes.CreateCustomer, and the rest of the fields would provide enough information for the service to create a Customer object or a database record or whatever. And IMessageResult could have a field with some code indicating whether the customer was created or not.
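To make the idea concrete, one possible shape for these contracts (all names are illustrative; note that the WCF serializer needs concrete types, so the request is sketched as an abstract base class with KnownType registrations rather than an interface):

using System.Runtime.Serialization;

[DataContract]
[KnownType(typeof(CreateCustomerRequest))]
public abstract class MessageRequest
{
}

[DataContract]
public class CreateCustomerRequest : MessageRequest
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public string Email { get; set; }
}

[DataContract]
public class MessageResult
{
    [DataMember]
    public int ResultCode { get; set; }  // e.g. 0 = customer created

    [DataMember]
    public string Description { get; set; }
}

In this shape, the concrete request type itself plays the role of the proposed OperationType field.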
What I'm trying to achieve by such a design is the ability to easily delegate an IMessageRequest to other internal services that the client side wouldn't even know about. Another benefit I see is that if we have to add some operation on customers, we only provide an additional DataContract for that operation and don't have to change anything on the service interface side (I want to avoid this at all costs - I mean not new operations, but changing the service interface :)
So, what do you think? Is it a good way of handling complicated business processes? What are the pitfalls, and what could be better?
If I've duplicated some other thread and there are answers to my question, please provide me with links, because I didn't find them.
Short answer: yes, this could be a very good idea (and one I have implemented in one form or another a couple of times).
A good starting point for this type of approach are the posts by Davy Brion on what he calls the request/response layer. He consolidated his initial ideas & thoughts into a very usable OSS project called Agatha, which I am proposing at a customer site as I write this.
This is exactly what we're doing here where I work. It works great and is easy for all the developers to understand, and it's really easy to wire up new methods/classes/etc.