WCF/RIA with one common set of CRUD methods

I am very new to WCF/RIA services. I am looking to build an application using PRISM/MEF where I can offer new plug-ins for the application from time to time. Now, my database structure is pretty much static. It will not see many changes during its life (but there still might be a few). The new plug-ins will use the entity classes exposed by the database.
My question is: when I create new plug-in controls, these controls might need some special server side methods to be run, which would mean updating my WCF/RIA service to account for the new methods. I really want to avoid that, and was wondering if it is possible to create a WCF service that has just 4 CRUD methods. I could pass any entity to these methods and, depending upon its type, the entity would get saved, updated or deleted. It would also let me pass any kind of LINQ query to the get method and return the appropriate results. The goal is to avoid making changes to the WCF service unless the underlying DB structure changes.
Whatever special methods I add to my plug-ins would then simply mean passing complex LINQ queries to the generic Get method and getting the results on the client side. Most of the entity management happens on the client. WCF becomes a simple (yet powerful) layer over my database that lets me access any entity and process any complex query built from client-side LINQ.

Have these 4 CRUD operations in a separate Domain Service.
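In case it helps, here is a rough sketch of the shape that separate service's internals could take: a generic repository that the Domain Service's four concrete CRUD methods delegate to. Everything below is hypothetical and assumes EF's DbContext API; note that RIA's code generation will not expose open generic methods, and a client-side LINQ query cannot be serialized over the wire as-is - instead, RIA composes the client's LINQ on top of IQueryable<T>-returning query methods.

    using System.Data.Entity; // EF DbContext API
    using System.Linq;

    // Hypothetical generic repository that a Domain Service's four concrete
    // CRUD methods could delegate to. RIA's code generation will not expose
    // open generic methods directly, so this lives behind the service surface.
    public class GenericCrudRepository
    {
        private readonly DbContext _context;

        public GenericCrudRepository(DbContext context) // any EF model's context
        {
            _context = context;
        }

        // Client-side LINQ composes on top of IQueryable-returning query methods.
        public IQueryable<T> Query<T>() where T : class
        {
            return _context.Set<T>();
        }

        public void Insert<T>(T entity) where T : class
        {
            _context.Set<T>().Add(entity);
            _context.SaveChanges();
        }

        public void Update<T>(T entity) where T : class
        {
            _context.Set<T>().Attach(entity);
            _context.Entry(entity).State = EntityState.Modified;
            _context.SaveChanges();
        }

        public void Delete<T>(T entity) where T : class
        {
            _context.Set<T>().Attach(entity);
            _context.Set<T>().Remove(entity);
            _context.SaveChanges();
        }
    }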


Best Practice For Updating Entity in Web Api

I'm researching the best practice for updating an entity from an action called by the client. There are several ways to do it, but none of them seems like best practice.
1- Getting the data to be updated from the request model via reflection and updating the entity with those properties. But using reflection is not recommended in Web API.
2- Sending all of the entity's data to the client and getting its updated version back in the request. That seems to create unnecessary traffic.
3- Getting the data to be updated and checking with if/else conditions which properties changed. It's basic and not generic, and seems unprofessional.
The request model I am talking about is a clone of the entity model.
First off, don't use Reflection. It's slow as hell and makes your code extra fragile.
When it comes to EF, usually there are 3 possible solutions:
1) The client sends the whole updated entity, and only the updated entity. In this case, you simply attach the entity to the corresponding entity set and mark the entity state as Modified.
2) The client sends both the original entity and the updated entity. You attach the original and set its CurrentValues to the updated entity.
3) The client only sends the modified properties, not the whole entity. In this case you have to query the original entity from the db and set the properties either one by one or, again, by overwriting the CurrentValues.
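For illustration, here is a minimal sketch of the first two approaches using EF's DbContext API; Product and ShopContext are hypothetical names:

    using System.Data.Entity;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
    }

    public class ProductUpdater
    {
        // Approach 1: the client sent the whole updated entity.
        public void UpdateWholeEntity(Product updated)
        {
            using (var db = new ShopContext())
            {
                db.Products.Attach(updated);                    // no query issued
                db.Entry(updated).State = EntityState.Modified; // every column is written
                db.SaveChanges();                               // the single db round trip
            }
        }

        // Approach 2: the client sent both the original and the updated entity.
        public void UpdateFromOriginal(Product original, Product updated)
        {
            using (var db = new ShopContext())
            {
                db.Products.Attach(original);                        // attach the unchanged copy
                db.Entry(original).CurrentValues.SetValues(updated); // diff computed here
                db.SaveChanges();                                    // only changed columns written
            }
        }
    }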
The 3 approaches differ in their bandwidth requirements and the number of queries they make.
1) If we take this as the baseline, it has the bandwidth cost of sending one entity from the client to the server, and then sending this one entity from the server to the db. That makes 1 db query altogether (attaching does not require querying, so only the save initiates a query).
2) This has the bandwidth cost of sending two entities from the client to the server. You send less data from the server to the db, because the changed properties are calculated when you set the CurrentValues. Again, just 1 query (attaching and setting the CurrentValues don't initiate queries, so only the save creates one).
3) This has the lowest bandwidth requirement both from the client to the server and from the server to the db (both times only the changed properties are sent). However, it needs one more query besides the save, because you have to fetch the original values from the db before applying the changes.
I usually find the first approach a good trade-off between the other two. It sends more data than the third but still less than the second, and it only initiates the one query for saving the data. Also, I like to minimize the traffic between the client and the server even if it means more traffic between the server and the db. The clients (for me at least) are usually mobile: no guaranteed bandwidth, no guaranteed battery lifetime. The server and the db are much "closer" and don't have these restrictions. But of course this can be different for your application.

Where should we calculate fields?

I'm currently working on a Silverlight / MS SQL project where the Entity Framework has not been implemented, and I would like to know the best practice for dealing with calculated fields in this particular situation.
Considering that some external system might also consume my data directly in the DB or through a web service, here are the 3 options I can see right now.
1) Force any external system to consume data thru a web service and create all the calculated fields in the objects only.
2) Create the calculated fields in a DB view and resync your object with the server each time a value needs to be calculated.
3) Replicate the calculation rules in the object and the database view.
Any other suggestions would also be welcomed.
I would recommend following two principles: data decoupling and minimum duplication of functionality. Both suggest putting your calculations in one place only and serving them already calculated. So I would implement the calculations in the DB and serve them via a web service.
However, you have to consider your particular case. For example, if the calculations are VERY heavy, you could delegate them to the client to spare server resources. This could even be the reason you are using Silverlight. I am in a similar situation on a project, and I found that the best compromise is to push raw data to the client and have it do the heavy computations.
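Since EF is not in play in this project, here is a plain ADO.NET sketch of the "calculate in the DB, serve it pre-calculated" recommendation; the view name dbo.OrderTotals and its columns are hypothetical:

    using System.Data.SqlClient;

    public class OrderTotalsReader
    {
        // The calculation lives in a database view, so every consumer
        // (web service, reports, direct DB access) sees the same result.
        public decimal GetLineTotal(string connectionString, int orderId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT LineTotal FROM dbo.OrderTotals WHERE OrderId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", orderId);
                conn.Open();
                return (decimal)cmd.ExecuteScalar();
            }
        }
    }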
Having a best practice or approach for this kind of problem is difficult: circumstances change, and what was formerly a good approach might start to seem less useful. That said, where possible I would do anything data related at the DB level, including calculated fields. That way, no matter where you look at the data from, you will see the same results: your web service, SQL reporting and anything else that needs to look at or receive the data will all agree.

Entity Framework for two applications and common database

I have two applications (a web app and a desktop app) that use Entity Framework against a common SQL Server database. They implement the unit of work pattern and keep the context in the session or in the relevant thread. My question is: how can one application's context be updated when the other application changes something in the database?
As an example, let's say the windows service has added a row to a table. How can the web application's context pick that row up as soon as it is inserted?
In a web application, the context should only last for the duration of a request. From what I can see, you have to implement something like an event at the database level, since that is the common ground between the two applications. This can be done using triggers.
In your scenario, you could perform the following steps (just a drawing-board sketch):
Add triggers at the database level for each table, which will basically raise an event towards the application layer.
Somehow surface those triggers through stored procedures, so that you can use them with EF.
Thereafter, implement a layer that sits on both applications whose primary responsibility is to notify the user of a change made to the database by the other application; the user can then refresh, e.g. by clicking a button, which in turn updates the context. Basically, the database-level trigger triggers something on the respective UI.
The meat of the work lies in the third point, and you can achieve it in many ways. One alternative is writing a service that polls another service (which accepts alerts from the db triggers) to check for modifications. So the logical separation could be: db --> service that accepts the change notification --> service that polls the notification service --> application.
The above works logically and theoretically; I hope it helps you out, and I would be keen to know how you go about doing this.
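As a footnote to the answer above: on SQL Server specifically, there is a built-in alternative to hand-rolled triggers - query notifications via SqlDependency (backed by Service Broker). A minimal sketch, with dbo.Widgets as a hypothetical table; the notification query must list its columns explicitly and use two-part table names, and each subscription fires only once, so you re-register after every notification:

    using System.Data.SqlClient;

    public class WidgetWatcher
    {
        private readonly string _connectionString;

        public WidgetWatcher(string connectionString)
        {
            _connectionString = connectionString;
            SqlDependency.Start(_connectionString); // opens the notification listener
        }

        public void Subscribe()
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                "SELECT Id, Name FROM dbo.Widgets", conn)) // explicit columns, two-part name
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (sender, e) =>
                {
                    // Refresh or rebuild the context here, then re-subscribe:
                    // a SqlDependency notification fires only once.
                    Subscribe();
                };

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    // Executing the command registers the subscription.
                }
            }
        }
    }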

WCF SOA: CRUD Data Access Service...why bother (or is our design wrong)?

We have a Data Access Service in our SOA WCF system. This service is responsible for doing CRUD (create, read, update, delete) operations on "system wide" database tables, and is also the source of this data for queries. Any other service in the system wanting to access the tables under the control of the DAS has to go to the DAS to get or modify that data. We use Entity Framework and built our own POCO state-tracking system for this DAS.
We have other tables in our database that belong to single services and store data only for their own use, i.e. state information they can access if they crash and resume, or recording of business information. We have a rule that any one table cannot be accessed by more than one service, so data needed by multiple services ends up in the DAS.
Truth is, I have never really understood why a Data Access Service is a good idea as opposed to just accessing the tables directly. It seems to me to be slower, our DAS is not transactional as it cannot send back a POCO graph for database update (only single POCOs at a time), and we also have issues where the DAS is actually a client to another service which needs data from it... a circular dependency.
Why bother with a DAS? Why is a DAS so important when it comes to SOA? What am I missing here? Single point of control?
Is it also an SOA design flaw that not all tables are part of a DAS and that some services have their own "private" tables?
Any discussion about this welcome.
You're correct in thinking that this is the proper way to do things, and you're also correct that it slows things down and can occasionally be cumbersome. SOA necessarily trades off some efficiency in exchange for ensuring single points of control for all data associated with a service. In fact, even the idea of having a "common DAS" service is slightly smelly in some SOA circles.
By centralizing all CRUD operations to one service in an SOA application, you can ensure data integrity and that business rules are being acted upon properly. To give an example, think of an entity you'd like to store that has some business rules associated with it that are difficult to approach from a pure SQL perspective - for example, let's say a table that stores file references, and create / update services that ensure that these files exist.
With SOA and a single access point to those tables, you can code the logic into the create / update methods and be reasonably assured that the data you're receiving from the service is valid - i.e. the files referenced exist. If anyone were capable of writing to these tables or retrieving data from them directly, no such assurance would exist - even if you're calling the service yourself, you don't know whether some other programmer, through malice or just plain forgetfulness, skipped that critical business rule. That leads to defensive programming where every bit of client code ensures the business logic independently, and ultimately a tangled mess of business logic scattered throughout your application.
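To make the file-reference example concrete, here is a toy sketch of the rule living only in the DAS; the names and the fault type are illustrative, not a prescription:

    using System.IO;
    using System.ServiceModel;

    public class FileReferenceService
    {
        // The rule "a stored reference must point at an existing file" is
        // enforced here and nowhere else, so no caller can forget it.
        public void CreateFileReference(string path)
        {
            if (!File.Exists(path))
                throw new FaultException("Referenced file does not exist: " + path);

            SaveReferenceRow(path); // hypothetical persistence call
        }

        private void SaveReferenceRow(string path)
        {
            // Insert into the table owned by this service.
        }
    }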
Another benefit is scalability and maintainability. Let's say one of your services is accessing a huge chunk of data. With SOA, everything is "black-boxed" so that your client code doesn't have much knowledge of how the data is ultimately obtained. You could change your RDBMS, partition tables, or implement caching, and make all of that invisible to the client code calling it - ensuring your painful updates only need to be made in one place. With database code scattered throughout your app, this sort of upgrade becomes extremely painful.

WCF: sharing cached data across multiple services

We are developing a project that involves about 10 different WCF services with several endpoints each. One of the services keeps a few big tables of data cached in memory.
We have found we need access to that data from another service. Rather than keeping 2 copies of the cache, I'd like to be able to share those tables across all services.
I have done some research and found some articles about using an IExtension attached to the service hosts to store the shared data.
Provided that all the services are running under the same web site, will that work? And is it the right approach? Or should I be looking elsewhere?
If the data that you're caching is required by more than one service, it sounds like - from a Service Oriented Architecture perspective, anyway - it doesn't belong in either of the services you have calling it.
If the data being cached isn't really related to either service, but is something both services need, then perhaps it belongs in its own separate service. Have you considered encapsulating your cache in a third service, and performing a service-to-service call to retrieve the data you need? Benefits include...
It solves your original dilemma, avoiding the need to read the whole cache from the database several times;
It encapsulates the cache in one place for easy maintenance/change later.
It allows you to abstract the implementation of the cache away from the other services by putting another service interface in the way.
All in all, I'd suggest that's the best approach. The only downside is the extra overhead of making the service-to-service call, but that surely outperforms having to read the whole cache from the database.
Alternatively, if the data in your cache is very closely related to BOTH of the services that are calling the cache, i.e. both services add/change the data in the cache, etc. then perhaps the two existing services should be combined into a single service.
If what I'm saying makes some sense, the principle of SOA I'm drawing on is Service Autonomy.
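For what it's worth, the contract of that third service could be as small as this; every name here is hypothetical:

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface IReferenceDataService
    {
        // Other services call this instead of loading their own copy of the cache.
        [OperationContract]
        IList<ReferenceItem> GetReferenceData(string tableName);
    }

    [DataContract]
    public class ReferenceItem
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Value { get; set; }
    }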
Provided all your services are part of the same application there doesn't seem to be any reason why you can't share the cache directly via a shared object reference. The simplest way of doing this is via a static field.
If you choose this approach, one thing to be very careful about is thread safety. If your cache is concurrently accessed by two WCF sessions, you must ensure that the two sessions do not interfere with each other by both changing the cache at the same time. If the cache is read-only the need for this is lessened, but you might still need to synchronise initialisation of the cache.
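For the static-field approach, Lazy<T> plus a concurrent collection covers both concerns: initialisation runs exactly once, and concurrent readers/writers are safe. A minimal sketch with hypothetical names:

    using System;
    using System.Collections.Concurrent;

    public static class SharedCache
    {
        // Lazy<T> guarantees the load runs exactly once, even if several
        // WCF sessions touch the cache at the same moment.
        private static readonly Lazy<ConcurrentDictionary<int, string>> _items =
            new Lazy<ConcurrentDictionary<int, string>>(LoadFromDatabase);

        public static ConcurrentDictionary<int, string> Items
        {
            get { return _items.Value; }
        }

        private static ConcurrentDictionary<int, string> LoadFromDatabase()
        {
            // Hypothetical load; fill the dictionary from the real database here.
            return new ConcurrentDictionary<int, string>();
        }
    }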