WCF insert into DB using EF

I have created a WCF service with the operation below to insert into the DB (pseudocode):
// InstanceContextMode is PerCall
public void AddUser(Record record)
{
    // MyEntity is the EF DbContext from the model; Records is its DbSet<Record>
    using (var dbCtx = new MyEntity())
    {
        dbCtx.Records.Add(record);
        dbCtx.SaveChanges();
    }
}
This method is called many times by the client asynchronously. How can I improve its performance? How can I perform a group insert and call SaveChanges across multiple calls?

For performance improvement, you first need to benchmark your method calls, e.g. how long does the method take when it is called by 'n' users?
As one option you can use the Visual Studio Instrumentation profiler (https://msdn.microsoft.com/en-us/library/dd264994.aspx) to find the hot path and then work on improving it.
On the WCF side you can also definitely make some improvements; see these links:
1. https://msdn.microsoft.com/en-us/library/vstudio/Hh273113%28v=VS.100%29.aspx
2. Performance Tuning WCF Service
For EF, you can apply optimizations such as precompiled queries. More details: https://msdn.microsoft.com/en-us/data/hh949853.aspx
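To make the "group insert" part of the question concrete, one common approach is to buffer incoming records and flush them with a single context and a single SaveChanges call. The sketch below reuses the placeholder names from the question (MyEntity as the DbContext, Record/Records as the entity and its DbSet); the flush policy (timer, size threshold) and error handling are left out, so treat it as an illustration rather than a drop-in implementation.

using System.Collections.Generic;

// Assumed shape of the context from the question:
// public class MyEntity : DbContext { public DbSet<Record> Records { get; set; } }
public class RecordBatcher
{
    private readonly List<Record> _pending = new List<Record>();
    private readonly object _sync = new object();

    // Called from AddUser instead of writing to the database immediately
    public void Add(Record record)
    {
        lock (_sync)
        {
            _pending.Add(record);
        }
    }

    // Call periodically, or when the buffer reaches a size threshold, so that many
    // records share one context, one change-detection pass and one transaction
    public void Flush()
    {
        List<Record> batch;
        lock (_sync)
        {
            batch = new List<Record>(_pending);
            _pending.Clear();
        }
        if (batch.Count == 0) return;

        using (var dbCtx = new MyEntity())
        {
            dbCtx.Configuration.AutoDetectChangesEnabled = false; // speeds up bulk Add in EF 4.1+
            foreach (var record in batch)
            {
                dbCtx.Records.Add(record);
            }
            dbCtx.SaveChanges();
        }
    }
}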

Related

Entity Framework and WCF. Best approach?

I'm developing an n-layer architecture, and for the data access layer I am using Entity Framework 4.1.
The database only exposes stored procedures. I also have an additional service layer, developed in WCF.
For each service call I use a new data context in a using statement.
Considering that service calls will reach 1000 per second, is this approach right?
Best Regards.
1000 per second: if your service really has to do something, you will need either a very good server or a load-balanced environment.
If your database exposes only stored procedures and you cannot execute direct SQL (= you cannot use LINQ), there is no reason to use EF. Actually there are many reasons why you should not, because it will not give you any additional value, only much worse performance. Also, if your stored procedures use, for example, multiple result sets, table-valued parameters or some other advanced techniques, you will not be able to use them from EF 4.1 at all.
Using direct ADO.NET will allow you to execute queries asynchronously, which can lead to asynchronous WCF service operations = better utilization of your computing power and better throughput.
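To illustrate the direct ADO.NET route, here is a sketch assuming .NET 4.5+, where OpenAsync/ExecuteNonQueryAsync exist (on earlier frameworks you would use the Begin/End pattern instead, which pairs naturally with asynchronous WCF operations). The stored procedure name and parameter are made up:

using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

public class UserRepository
{
    private readonly string _connectionString;

    public UserRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Calls a (hypothetical) dbo.InsertUser stored procedure without blocking
    // the service thread while SQL Server does the work
    public async Task InsertUserAsync(string name)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("dbo.InsertUser", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Name", name);

            await connection.OpenAsync();
            await command.ExecuteNonQueryAsync();
        }
    }
}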
You should worry more about server load balancing, scalability, and WCF performance issues, including but not limited to concurrency and throttling. You should choose a binding that can scale easily as your needs grow, with minimal downtime in the future.
Additionally, you should make sure that you have a good multithreaded design on your backend to support your benchmark of 1000 calls/sec (I am still wondering what it is, though) and to increase the throughput of your service.
EF doesn't play any part in your case. You need raw performance here. Don't kill it by adding another layer of unnecessary stuff.
For load balancing you can start here: Loadbalancing

WCF/RIA with one common set of CRUD methods

I am very new to WCF/RIA services. I am looking to build an application using PRISM/MEF where I can offer new plug-ins for the application from time to time. Now, my database structure is pretty much static. It will not see many changes during its life (but there still might be a few). The new plug-ins will use the entity classes exposed by the database.
My question is: when I create new plug-in controls, these controls might need some special server-side methods to be run, which would mean updating my WCF/RIA service to account for the new methods. I really want to avoid that, and was wondering if it is possible to create a WCF service that has just 4 CRUD methods. I could pass any entity to these methods and, depending on the type, the entity gets saved, updated or deleted. It would also let me pass any kind of LINQ query to the Get method and return the appropriate results. The goal is to avoid making changes to the WCF service unless the underlying DB structure changes.
Whatever special methods I add to my plug-ins could simply mean passing complex LINQ queries to the generic Get method and getting the results on the client side. Most entity management happens on the client. WCF becomes a simple (yet powerful) layer over my database that lets me access any entity and process any complex query based on client-side LINQ queries.
Thanks,
M
Have these 4 CRUD operations in a separate Domain Service.
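For illustration only: RIA Services generates code from concrete entity types, so a fully generic domain service is awkward there, but the "four methods" idea can be sketched as a plain WCF contract. Every name below (EntityBase, Customer, IGenericCrudService) is made up, and each concrete entity type has to be registered as a known type so the DataContractSerializer can round-trip it. Note also that an arbitrary client-side LINQ expression cannot be serialized over plain WCF; the Get method would have to take a filter string, or you would use something like WCF Data Services/OData for the query part.

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public abstract class EntityBase { }

[DataContract]
public class Customer : EntityBase
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
[ServiceKnownType(typeof(Customer))] // add one line per entity type the service can handle
public interface IGenericCrudService
{
    [OperationContract]
    IList<EntityBase> Get(string entityTypeName, string filter);

    [OperationContract]
    void Insert(EntityBase entity);

    [OperationContract]
    void Update(EntityBase entity);

    [OperationContract]
    void Delete(EntityBase entity);
}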

Strange behaviour of code inside TransactionScope?

We are facing a very complex issue in our production application.
We have a WCF method which creates a complex Entity in the database with all its relations.
public void InsertEntity(Entity entity)
{
    using (TransactionScope scope = new TransactionScope())
    {
        EntityDao.Create(entity);
        scope.Complete(); // without this the transaction rolls back when the scope is disposed
    }
}
The EntityDao.Create(entity) method is very complex and contains a lot of logic. During the creation process it creates several child entities and also runs several queries against the database.
During the entire WCF request for entity creation, the connection is usually kept in a ThreadStatic variable and reused by the DAOs, although some of the queries in the DAO described above use a new connection and close it after use.
Overall, we have seen that the behaviour of the above process is erratic. Some of the queries in the inner DAO do not even return the actual data from the database; the same query, when run against the actual data store, gives the correct result.
What can be the possible reason for this behaviour?
ThreadStatic is not recommended; use CallContext instead. I have code at http://code.google.com/p/softwareishardwork/ which demos the correct way to handle connections in the manner you describe (tested in severe high-performance scenarios). Try a test case using this code.
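A minimal sketch of what the answer suggests: keep the per-request connection in CallContext instead of a [ThreadStatic] field, so it travels with the logical request even if WCF continues the work on another thread. The ConnectionContext helper and key name are made up; the linked project shows a fuller treatment.

using System.Data.SqlClient;
using System.Runtime.Remoting.Messaging;

public static class ConnectionContext
{
    private const string Key = "Current.DbConnection";

    // All DAOs read the request's connection from here instead of a [ThreadStatic] field
    public static SqlConnection Current
    {
        get { return (SqlConnection)CallContext.GetData(Key); }
        set { CallContext.SetData(Key, value); }
    }
}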

Write-through caching of large data sets in WCF?

We've got a smart client that talks to a SQL Server database via WCF, displaying the entities in the database, and allowing the user to edit those entities.
Some of the WCF calls return a large data set. Since this data set doesn't change very often, I'm considering some sort of write-through cache on the client, and only getting the deltas from the WCF service.
That is: the client both reads from the service and writes to the service.
I'm not looking for disconnected/offline operation, but since the majority of the data doesn't change very often, I'd probably implement this with a local data store.
I don't want the local store to get too stale, and I don't think I'm too concerned about conflict resolution, because updates will always go straight to the WCF service -- think of it as a write-through cache.
Would Microsoft's Sync Framework be good for this? Could I use a local SQL-CE cache and perform the updates over WCF? The service end has a SQL Server 2005/2008 backend, but I don't want to talk to it directly. Does Sync Framework integrate well with WCF?
Are there other solutions out there? Should I roll something myself?
I don't think you have to couple it to WCF at all. FeedSync allows you to publish directly to an RSS feed.
The only thing I'm not too sure about is whether it would be suitable for a "large dataset", though. Since you don't need two-way replication, if your dataset is extremely large you might want to write your own WCF implementation to optimize it, especially for the initial population.

WCF/Silverlight/SQL DB Caching Strategies

Ok, I have a pretty complex Silverlight app that gets its data from a WCF service (ASP.NET-hosted service layer) which in turn calls into a data layer that calls stored procedures in a SQL 2005 DB to extract the needed data. So the round trip goes like this:
Silverlight App --> WCF Service --> Data Layer --> DB --> Data Layer --> WCF Service transforms Data Entity into corresponding DTO (Data Transfer Object) or List<> thereof --> Silverlight App
Much of the data is highly relational (so it needs to exist in the DB), but it will change infrequently. It seems that I have several choices of locations to cache this "semi-constant" data:
1. I can cache it in the data layer. My data layer is already set up to use the SQLDependency class and cache the results from a stored procedure call. I think that this is, or can be, an application-level cache.
2. I can cache the resulting DTO in an application-level (or session-level, depending on the call) cache within the WCF service itself.
2(a). I could even take this a step further by serializing the XML for the resulting DTO(s) into a file on the WCF service side, so that I could (a) check the memory cache, then (b) check the file cache and (c) hit the data layer (see the sketch after this list).
3. I could do something similar to 2(a) with isolated storage on the client side within the SL app. I could serialize the data to local isolated storage with a hash (or a moddate or something) and then just make a call to check that.
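To make option 2(a) concrete, here is a rough sketch of the lookup order (memory cache, then file cache, then the data layer). All names are made up, GetFromDataLayer stands in for the real stored procedure call, and MemoryCache requires .NET 4 plus a reference to System.Runtime.Caching:

using System;
using System.IO;
using System.Runtime.Caching;

public static class DtoCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string GetCustomersXml()
    {
        const string key = "Customers.Dto.Xml";
        string cachePath = Path.Combine(Path.GetTempPath(), key + ".xml");

        // (a) memory cache
        var xml = Cache.Get(key) as string;
        if (xml != null) return xml;

        // (b) file cache
        if (File.Exists(cachePath))
        {
            xml = File.ReadAllText(cachePath);
        }
        else
        {
            // (c) data layer
            xml = GetFromDataLayer();
            File.WriteAllText(cachePath, xml); // refresh the file cache
        }

        Cache.Set(key, xml, DateTimeOffset.Now.AddMinutes(30));
        return xml;
    }

    private static string GetFromDataLayer()
    {
        return "<Customers />"; // placeholder for the real stored procedure call
    }
}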
One more thing to add: I am hosting this WCF service in IIS7 with dynamic compression turned on so that the (often very large and easily compressed) XML response gets gzip-ed. Ideally, it would seem, I would like IIS to cache this gzip-ed result to avoid all the extra processing. I think that it may do this already but I am not sure.
I am pretty sure that the final answer to this is some flavor of "it depends", but I would love to hear how others are approaching this. A good tactical recipe of "do X, test performance with tool Y, then do Z if needed" would be great to have.
A few links (I will add to this as I research this):
WCF Caching Approach
If you have user data that changes quite rarely and needs a fast response, going for a custom mechanism based on local storage is a great advantage, and much faster than having to wait for a server round trip.
Dino Esposito published an interesting article about local storage and caching in MSDN Magazine; there you can also find an approach for caching assemblies (imagine loading just the minimum package required and loading the rest of the assemblies in the background ... performance rockets, more complexity in your code :)).
As you said, it's a matter of putting it all in the balance and deciding.
HTH
Braulio
My approach would be this:
Determine if there actually is a problem with performance (isn't it already acceptable to my users?)
Measure the performance at each tier (how long does it take the database to come up with the data? how long does it take the service to respond with the data? how much time does it take from the service to the client?)
Based on the measurements I would then determine where to do my caching. Remember that the closer to your data storage you cache, the easier it is, but the closer to the client you cache, the better the performance gain (usually).
Also remember that caching should not be the first thing you do to improve performance. You should look into other performance gains as well. Are the stored procedures slow? Is there a lot of overhead in the WCF messages? Is there some inefficient processing in the service? Do I really need all that data in one message?
HTH,
Jonathan
I think #2 is your best bet for maintainability and architecture. IIS provides caching, why not use it?
You don't want to have to reference System.Web from a data layer. Client side is not the best option either, because you'd have to write a bunch of additional code to keep the data synchronized.
Is System.Web caching even available to WCF when it's not running in ASP.NET compatibility mode? It's probably best not to depend on it and to write your own.
On the other hand, look into Microsoft's Velocity project, which looks like it will produce a very interesting caching technology that is not dependent on ASP.NET.
We just recently implemented #3, the client-side caching using Isolated Storage.
In our app we have a lot of drop-downs and custom fields which the app used to get from the server every time it loaded. Moving this data to IS really helped. The app now makes a call to check whether there were any changes on the server and, if not, loads the data from IS; otherwise (which is pretty rare) it refreshes IS.
That eliminated a lot of WCF calls and data transfers, the SL pages' loading time is shorter, and the app in general became more scalable because of the reduced network traffic and DB access.
Yes, there is some coding involved, but the benefits for the end users are essential.
Andrew
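A rough sketch of the Isolated Storage pattern described above, as it might look on a Silverlight client. LookupData and the file name are made up, and the WCF call that checks the server for changes is omitted (in Silverlight it would be asynchronous anyway); this only shows the local save/load half:

using System.Collections.Generic;
using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization;

[DataContract]
public class LookupData
{
    [DataMember] public string Version { get; set; }
    [DataMember] public List<string> DropDownValues { get; set; }
}

public static class LookupCache
{
    private const string FileName = "lookups.xml";

    public static void Save(LookupData data)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.OpenFile(FileName, FileMode.Create))
        {
            new DataContractSerializer(typeof(LookupData)).WriteObject(stream, data);
        }
    }

    // Returns null if nothing has been cached yet; the caller then falls back to the WCF service
    public static LookupData Load()
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (!store.FileExists(FileName)) return null;
            using (var stream = store.OpenFile(FileName, FileMode.Open))
            {
                return (LookupData)new DataContractSerializer(typeof(LookupData)).ReadObject(stream);
            }
        }
    }
}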
If you use RIA Services, then a simple approach is to have two separate edmx definitions: one for cached entities, one for transactional ones.
One domain context can reference the entities of another domain context via AddReference.
The cached entities could be loaded immediately after user has authenticated. For simplicity, transactional data should not load until cached entities have loaded.
Depending on the size of the cache, you may also wish to consider serializing these values to local storage.