We were using System.Data.Services.Client (version 4, I believe) from Microsoft WCF Data Services. When we updated to version 5.2 (the Microsoft.Data.Services.Client DLL), it seems some caching mechanism was introduced in the new version of WCF Data Services.
When we query the data service (OData) through the browser, fresh data is returned; but when we add a service reference to our UI project and use that reference (proxy) to retrieve data, fresh data only shows up after 10 minutes or so.
Resetting IIS (iisreset.exe) makes fresh data available again, which probably means that caching is happening somewhere in the UI project.
We don't do anything extraordinary in our code; we use the OData service reference in its simplest form:
public List<Customer> GetCustomers()
{
    // Assumes the generated context exposes a Customers entity set
    CustomersODataModel customersData = new CustomersODataModel("Url");
    return customersData.Customers.ToList();
}
Consider disabling client-side caching in the DataServiceContext and see if that helps. I had the same problem, and setting MergeOption to MergeOption.OverwriteChanges kept the data service refreshing the object on each change and get.
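A minimal sketch of that setting, assuming the generated CustomersODataModel context from the question (the service URL and the Customers entity set are placeholders):

using System;
using System.Data.Services.Client;
using System.Linq;

// OverwriteChanges makes every query overwrite locally tracked entities
// with the values returned by the service, instead of keeping stale copies.
var context = new CustomersODataModel(new Uri("http://server/Customers.svc"));
context.MergeOption = MergeOption.OverwriteChanges;
var customers = context.Customers.ToList();

The default, MergeOption.AppendOnly, never updates an entity the context is already tracking, which looks exactly like a cache that refuses to refresh.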
I am currently working on a .NET Core project where I use the Microsoft.Azure.ServiceBus version 1.0 NuGet package found here: https://github.com/Azure/azure-service-bus
The problem I have is that I haven't found any method to get a queue's number of active messages. This used to be pretty easy with the .NET Framework: use the NamespaceManager, refer to a queue, and read its .ActiveMessageCount.
Is this possible in some other way in this library with .NET Core 1.1?
It is now possible using the latest version of the Service Bus library (3.1.1):
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

var client = new ManagementClient(connectionString);

// Queue counts (active, dead-letter, scheduled, etc.)
var queue = await client.GetQueueRuntimeInfoAsync(queuePath);
var counts = queue.MessageCountDetails;

// Subscription count (returned as a long)
var subs = await client.GetSubscriptionRuntimeInfoAsync(topic, subscription);
var countForThisSubscription = subs.MessageCount;
The .NET Standard client (Microsoft.Azure.ServiceBus) deliberately did not provide management operations at first. The stated position was that management operations should not be performed at run time, because they are extremely slow.
Is this possible in some other way in this library with .NET Core 1.1?
Yes, it is possible.
Instead of the NamespaceManager that was available with the old client (WindowsAzure.ServiceBus), there's a Service Bus management library (Microsoft.Azure.Management.ServiceBus.Fluent).
You will need to do the following:
Authenticate using ServiceBusManager
Access the namespace you're interested in via ServiceBusManager.Namespaces
Filter out the entity you're interested in by locating it under ServiceBusManager.Namespaces.Queues or ServiceBusManager.Namespaces.Topics. For a subscription, you'll need to locate it via an ITopic object.
Once you've got your entity (IQueue, ITopic, or ISubscription), you'll be able to access the message counts.
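A sketch of those steps with the Fluent library; the credentials file, subscription ID, resource group, namespace, and queue names are all placeholders:

using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ServiceBus.Fluent;

// Authenticate with a service principal credentials file
var credentials = SdkContext.AzureCredentialsFactory.FromFile("azureauth.properties");
var serviceBusManager = ServiceBusManager.Authenticate(credentials, "subscription-id");

// Navigate to the namespace, then to the queue
var ns = serviceBusManager.Namespaces.GetByResourceGroup("my-rg", "my-namespace");
var queue = ns.Queues.GetByName("my-queue");

// IQueue exposes the message counts directly
long activeCount = queue.ActiveMessageCount;

Note this goes through the Azure Resource Manager API rather than the messaging endpoint, so it needs ARM credentials, not just a Service Bus connection string.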
I'm not a big fan of this approach. Rather than each developer reinventing this wheel, the Azure Service Bus team should have provided a helper library to replace NamespaceManager. You can always raise an issue or vote for an issue that was closed.
Management operations were introduced back in version 3.1.1 with pull request #481.
I have a WCF Data Service that is wrapping an Entity Framework 4 data model. I am connecting a WPF client to the service using the WCF Data Services Client library.
Is it possible in WCF Data Services to undo / cancel changes to tracked objects ?
Scenario: in the UI I allow a user to edit an object. I have Save and Cancel buttons. If the user chooses to save, I call SaveChanges() on my WCF context and the changes are sent to the database via the WCF service. If the user clicks Cancel, I want to undo the changes and revert to the original property values of the current object.
I know that the WCF Data Services client library has change tracking built in, but I cannot find any way of accessing this information.
In Entity Framework, the context supports the Refresh method: you can specify RefreshMode.StoreWins and pass in the object, which effectively cancels/undoes any changes.
Documented here: http://msdn.microsoft.com/en-us/library/bb896255.aspx
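In EF terms, that looks roughly like this (context is assumed to be an ObjectContext and customer a tracked entity):

using System.Data.Objects;

// Discard local edits: reload the entity's values from the store.
// StoreWins means the database values overwrite any pending changes.
context.Refresh(RefreshMode.StoreWins, customer);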
Any suggestions on how I can achieve the same thing in WCF Data Services in my client application?
cheers
Chris
The only "solution" I know of is:
var oldMergeOption = _service.MergeOption;
_service.MergeOption = MergeOption.OverwriteChanges;
try
{
    // Re-query the entity; OverwriteChanges replaces the tracked
    // object's values with the values from the store.
    _service.YourQueryable.Where(x => x.Id == oldObject.Id).Single();
}
finally
{
    _service.MergeOption = oldMergeOption;
}
This should replace the values of oldObject with the values stored in the DB. However, I'm not sure the object returned by Single() will always be the same instance as oldObject.
I typically refrain from operating on entities within the DataServiceContext until I'm ready to commit those changes to the database. I don't treat my entities as part of my domain model; instead I create a specific domain model that adapts my model objects to entity objects, using adapters and a repository class. This way, all operations within my domain model are self-contained until I'm ready to commit them to the database. A fantastic article from Ben Day on what I'm referring to can be found here: http://visualstudiomagazine.com/articles/2011/04/01/pfcov_silverlight-mvvm-tips.aspx
I have a Silverlight 5 app. This app has been in development for 18 months. This app calls back to a WCF service. I just had a support request.
Before today, the service returned ObservableCollection<T> results. Now, all of a sudden and out of nowhere, it started returning T[] results after I updated the service reference in the Silverlight app.
My question is, what could have happened to cause this change? It has produced approximately 70 errors due to type conflicts. Am I overlooking a basic setting?
Thank you!
If you're using a service reference to communicate with the service, make sure the collection data type hasn't been changed. Right-click the service in the Service References folder, select Configure Service Reference..., and look at Data Type - Collection type. If it's System.Array, this may be your problem. Change it to ObservableCollection and see if that helps.
I have an AutoComplete AJAX control that calls a WCF service method automatically.
This method takes a string as an input parameter and returns a list of strings.
I don't make that call from the client; the control does it, but I write the content of this method.
Can I store some data somewhere in the WCF service when that method is called, and extract it later when I use the WCF client?
I mean, I want the control to call the method, have the method store some data, and later extract that data through the WCF client object.
Is there such a "WCF cache" mechanism?
No, there's no such thing as a 'WCF cache' mechanism. But WCF is .NET, and in a .NET application you can use the Caching Application Block.
Caching has very little, if anything, to do with WCF. You cache inside your application, and the caching mechanism for a WCF service is fundamentally the same as for a Windows Forms application, a managed Windows service, an ASMX web service, an ASP.NET application, or any other .NET application. The only difference is how you use and rely on the cache, and how the application's lifecycle is managed.
If your WCF service is hosted in IIS (as is very popular), then when the application pool is recycled (or the website is restarted) you will lose everything in the cache. Will this be a problem?
The typical use case for a cache is when you have a set of data stored (in, say, a database) that will need to be retrieved over and over again. Rather than hitting the database every time, you get the data from the cache; if it's not in the cache, you get it from the database and put it in the cache so it's there for next time. It sounds like you want to store some data from the client application in the cache. You can do this, but what will happen if it's not there when you go to retrieve it?
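The cache-aside pattern described above can be sketched with the built-in MemoryCache; CustomerCache and LoadFromDatabase are hypothetical names for illustration:

using System;
using System.Runtime.Caching;

public static class CustomerCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string GetCustomerName(int id)
    {
        string key = "customer:" + id;

        // Try the cache first
        var cached = Cache.Get(key) as string;
        if (cached != null)
            return cached;

        // Cache miss: fall back to the database and populate the cache
        string value = LoadFromDatabase(id);
        Cache.Set(key, value, DateTimeOffset.Now.AddMinutes(10));
        return value;
    }

    private static string LoadFromDatabase(int id)
    {
        // Placeholder for the real database lookup
        return "name-" + id;
    }
}

The crucial property here is the fallback: the cache can be emptied at any time (for example by an IIS app-pool recycle) and the code still works, just more slowly. Data that exists only in the cache has no such safety net.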
I'm new to WCF Data Services so I've been playing. After some initial tests I am disappointed by the performance of my test data service.
I realize that because a WCF DS is HTTP-based there is overhead inherent in the protocol but my tests are still way slower than I would expect:
Environment:
All on one box: Quad core 64-bit laptop with 4GB RAM running W7. Decent machine.
Small SQL database (SQLExpress 2008 R2) with 16 tables... the table under test has 243 rows.
Hosted my test service in IIS with all defaults.
Code:
I've created an Entity Framework model (DataContext) for this database (stock codegen by VS2010).
I've created a data service based on this model.
I've created a client which has a direct service reference (ObjectContext) for this service (stock codegen by VS2010).
In the client I am also able to call the EF model directly and also use native SQL (ADO.NET SqlConnection).
Test Plan:
Each iteration connects to the database (there is an option to reuse connections), queries for all rows in the target table ("EVENTS"), and then counts them (thus forcing any deferred fetches to be performed).
Run for 25 iterations each for Native SQL (SqlConnection/SqlCommand), Entity Framework (DataContext) and WCF Data Services (ObjectContext).
Results:
25 iterations of Native SQL: 436ms
25 iterations of Entity Framework: 656ms
25 iterations of WCF Data Services: 12110ms
Ouch. That's about 20x slower than EF.
Since WCF Data Services is HTTP-based, there's no opportunity for HTTP connection reuse, so the client is forced to reconnect to the web server for each iteration. But surely there's more going on here than that.
EF itself is fairly fast, and the same EF code/model is reused for both the service and the direct-to-EF client tests. There's going to be some overhead for XML serialization and deserialization in the data service, but that much?! I've had good performance with XML serialization in the past.
I'm going to run some tests with JSON and Protocol-Buffer encodings to see if I can get better performance, but I'm curious if the community has any advice for speeding this up.
I'm not strong with IIS, so perhaps there are some IIS tweaks (caches, connection pools, etc.) that can be set to improve this?
Consider deploying as a Windows service instead? IIS may have ISAPI filters, rewrite rules, etc. that it runs through. Even if none of these are active, the IIS pipeline is long enough that something may slow you down marginally.
A self-hosted service should give you a good baseline of how long it takes a request to run, be packed, etc., without the IIS slowdowns.
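For the baseline measurement, a data service can be self-hosted with DataServiceHost; EventsDataService and the port are placeholders (in a real Windows service this would go in OnStart/OnStop rather than a console loop):

using System;
using System.Data.Services;

class Program
{
    static void Main()
    {
        // Host the same DataService class IIS would host,
        // but without the IIS request pipeline in the way.
        var baseAddresses = new Uri[] { new Uri("http://localhost:8080/events") };
        using (var host = new DataServiceHost(typeof(EventsDataService), baseAddresses))
        {
            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}

Comparing timings between this host and IIS isolates how much of the 12-second figure is the IIS pipeline versus the data service itself.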
The link below has video that has some interesting WCF benchmarks and comparisons between WCF data services and Entity Framework.
http://www.relationalis.com/articles/2011/4/10/wcf-data-services-overhead-performance.html
I increased the performance of our WCF Data Service API by 41% simply by enabling compression. It was really easy to do. Follow this link that explains what to do on your IIS server: Enabling dynamic compression (gzip, deflate) for WCF Data Feeds, OData and other custom services in IIS7
Don't forget to run iisreset after your change!
On the client-side:
// This is your context; you should have this code throughout your app.
var context = new YourEntities("YourServiceURL");
context.SendingRequest2 += OnSendingRequest2;

// Add the following method somewhere in a static utility library
public static void OnSendingRequest2(object sender, SendingRequest2EventArgs e)
{
    // Tell the server the client accepts compressed responses
    var request = ((HttpWebRequestMessage)e.RequestMessage).HttpWebRequest;
    request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
}
Try setting security to "none" in the binding section of the configuration. This should make a big improvement.
In order to eliminate most of the connection overhead, you can try to batch all operations to the WCF DS to see if that makes a significant difference.
NorthwindEntities context = new NorthwindEntities(svcUri);

// Send both queries to the service in a single HTTP request
var batchRequests = new DataServiceRequest[] { someCustomerQuery, someProductsQuery };
var batchResponse = context.ExecuteBatch(batchRequests);
For more info see here.
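Reading the batch results might look like this; each response in the DataServiceResponse corresponds to one query in the batch:

using System.Data.Services.Client;

foreach (QueryOperationResponse response in batchResponse)
{
    foreach (object entity in response)
    {
        // Process each entity returned by the corresponding query
    }
}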
WCF Data Services exist to expose your data to disparate clients via the OData protocol, so that you don't have to write/refactor multiple web service methods for each change request. I never advise using them when the entire system is based on the Microsoft technology stack; they're meant for remote clients.
How do you run those 25 iterations against WCF?
var WCFobj = new ...Service();
foreach (var calling in CallList)
    WCFobj.Call(...);
If you call it like that, you call the WCF service 25 times, which consumes too many resources.
For me, I used to build everything up into a DataTable, using the table name as the stored procedure to call and each DataRow as its parameters. When calling, just pass the DataTable in encoded form:
var table = new DataTable("PROC_CALLING");
// ... populate a row per call ...

// Serialize the table to XML, then to bytes
var sb = new StringBuilder();
using (var xml = System.Xml.XmlWriter.Create(sb))
{
    table.WriteXml(xml);
}
var bytes = System.Text.Encoding.UTF8.GetBytes(sb.ToString());
// optionally GZip-compress the bytes here
WCFobj.Call(bytes);
The point is that you pass all 25 calls at once, which can improve performance significantly. If the return objects all have the same structure, pass them back as a DataTable in byte form as well and convert it back to a DataTable on the client.
I used to implement this method with GZip for import/export data modules. Passing a large number of bytes will make WCF unhappy; it depends on whether you would rather spend computing resources or networking resources.
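The optional GZip step above can be sketched with the standard GZipStream; these helpers round-trip any byte array:

using System.IO;
using System.IO.Compression;

public static class GZipHelper
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(data, 0, data.Length);
            }
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] data)
    {
        using (var input = new MemoryStream(data))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }
}

XML-serialized DataTables are highly repetitive, so they compress well; the trade-off is CPU time on both ends for fewer bytes on the wire.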
Things to try:
1) Results encoding: use binary encoding on your WCF channel if possible (see http://msdn.microsoft.com/en-us/magazine/ee294456.aspx), or alternatively use compression: http://programmerpayback.com/2009/02/18/speed-up-your-app-by-compressing-wcf-service-responses/
2) Change your service instance behavior (see http://msdn.microsoft.com/en-us/magazine/cc163590.aspx#S6): try InstanceContextMode = InstanceContextMode.Single with ConcurrencyMode = ConcurrencyMode.Multiple, if you can verify that your service is built in a thread-safe way.
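Those two instancing settings are applied as an attribute on the service class; MyService and IMyService are placeholder names:

using System.ServiceModel;

// One shared service instance handles all callers concurrently,
// avoiding per-call instantiation; any shared state must be thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MyService : IMyService
{
    // service operations here
}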
Regarding your benchmark, I think you should simulate a more realistic load (including concurrent users) and ignore outliers; the first request to IIS will be really slow (it has to load all the DLLs).