How to handle large data from a WCF service

I have a WCF service, and one of its methods returns a list. The data comes from an Oracle database and is large (records in the lakhs, i.e. hundreds of thousands). The method works fine when tested with a WCF client, but when I consume the same service in a Silverlight application I get timeout exceptions. Please suggest steps to handle large data or avoid this issue.

An application I worked on a few years ago had similar requirements. If my memory serves me correctly, we created some custom WCF behaviours that compressed/decompressed the data set and transported it as binary data. You could also stream the data, but this is a little more brittle in my opinion and requires more work on the client. HTH.
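Before reaching for compression or streaming, it is also worth ruling out the default binding limits, since Silverlight clients time out or fault on large responses with the out-of-the-box settings. A minimal sketch of a server-side binding with raised timeouts and quotas (the binding name and the exact values are illustrative, not recommendations):

```xml
<bindings>
  <basicHttpBinding>
    <!-- "LargeDataBinding" is an illustrative name; tune values to your payload -->
    <binding name="LargeDataBinding"
             sendTimeout="00:10:00"
             receiveTimeout="00:10:00"
             maxReceivedMessageSize="67108864">
      <readerQuotas maxArrayLength="67108864"
                    maxStringContentLength="67108864" />
    </binding>
  </basicHttpBinding>
</bindings>
```

Note that the Silverlight client has its own copy of the binding in ServiceReferences.ClientConfig, and its sendTimeout/maxReceivedMessageSize must be raised as well, otherwise the client still times out regardless of the server settings.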

You can do this by holding the data in an object collection and using the Silverlight DataGrid's pagination; with proper coding you can show, say, 1000 records at a time, because in my view a user cannot realistically scroll through lakhs of records anyway.
If you don't want pagination, then use background threading: as the user scrolls up or down, fetch data by index. Handle as much of the data as possible at the coding level.
I did the same thing in my last project.
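Whichever UI approach is used, the key is that the service exposes a paged operation so only one page of records crosses the wire at a time. A minimal sketch, assuming a LINQ-to-SQL data context; the Order, OrderId, and OrdersDataContext names are illustrative:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    List<Order> GetOrdersPage(int pageIndex, int pageSize);
}

public class OrderService : IOrderService
{
    public List<Order> GetOrdersPage(int pageIndex, int pageSize)
    {
        using (var db = new OrdersDataContext())
        {
            // A stable OrderBy is required for Skip/Take to page consistently.
            // Only one page crosses the wire, so the Silverlight client never
            // holds lakhs of records at once.
            return db.Orders
                     .OrderBy(o => o.OrderId)
                     .Skip(pageIndex * pageSize)
                     .Take(pageSize)
                     .ToList();
        }
    }
}
```

The client then requests page N as the user pages or scrolls, which also sidesteps the timeout from the original question because each response is small.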

Related

Knockout API with multiple endpoints

I have a question regarding working with Knockout and an API over which I have no control. Forgive my vagueness, but to avoid IP issues, I can't give too much more information. Basically, I have a page in my application that requests data from multiple endpoints (10+) which send back a ton of data, much of which is unnecessary. I've developed a way to make observable those fields I care about, so that's not an issue.
My issue is with assembling the UI. I tried merging all the responses into my view model and then creating the UI in the typical Knockout way. This makes extracting the necessary data into new objects and posting back to the respective endpoints quite difficult, though, unless I manually code it all.
I then thought of possibly namespacing the responses to keep them separated and iterate over them when it comes time to post back, since they'll be encapsulated for their own endpoints, but I'm hoping someone out there has more experience with a non-REST API and, specifically, working with multiple endpoints in a single view model. Thanks!
Nothing is stopping you from splitting your separate endpoints out into different services or objects. You could use something like RequireJS to set up dependencies for your viewmodel.

MonoTouch, WCF, Large XML response, how to capture progress

I'm developing an app using MonoTouch and have to get some data from a 3rd party web service. I have a WCF binding to do such.
The particular web service method I am calling could potentially return an XML string in the range of several hundred megabytes, which could take a while to download to a mobile device.
I'm looking for some way to capture how many bytes have been read from the network at the system level, with the end-game being to display a progress indicator to the user. Is there some way I can achieve this using Behaviors?
Note, I don't have any way to modify the web service code to return a Stream object, which is what most of the articles I have found require doing.
Any help or direction would be much appreciated. Thanks
As a last resort, I can always fall back on using an NSURLConnection to do this, instead of WCF, because I know there are NSURLConnectionDelegate methods to hook into that will provide this. I wanted to avoid NSURLConnection, so that I will be able to easily drop this code into an Android project in the future.
Is there a reason to use WCF instead of the plain HttpWebRequest?
WCF is not exactly efficient at parsing data; it keeps multiple copies in memory while deserializing the various chunks of information.
There is no system-level API that can provide this information; the Stream that you get back from the HttpWebRequest is the best handle you will get.
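Building on that suggestion, a progress indicator can be driven by reading the response stream in chunks and comparing bytes read against Content-Length. A minimal sketch; url and reportProgress are illustrative placeholders you would wire to your own UI:

```csharp
using System;
using System.IO;
using System.Net;

// Download the large XML response manually so progress can be reported.
var request = (HttpWebRequest)WebRequest.Create(url);
using (var response = (HttpWebResponse)request.GetResponse())
using (var stream = response.GetResponseStream())
using (var body = new MemoryStream())
{
    long total = response.ContentLength;   // -1 if the server omits the header
    long readSoFar = 0;
    var buffer = new byte[16 * 1024];
    int n;
    while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        body.Write(buffer, 0, n);
        readSoFar += n;
        if (total > 0)
            reportProgress((double)readSoFar / total); // marshal to the UI thread
    }
    // body now holds the full XML; hand it to your parser
}
```

This keeps the code portable to an Android project later, which was the stated reason for avoiding NSURLConnection.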

Using DTOs - Guidance

Looking for some guidance.
I'm building an application, SL4 with WCF as the backend service. My WCF Service layer sits over a Domain Model and I'm converting my Domain Entities to screen specific DTOs using an assembler.
I have a screen (security related) which shows a User and the Groups that they are a member of, now the user can add and remove groups for the user after which they can hit the apply button. Only when this apply button is hit will the changes be submitted.
Currently I have a UserDetailDto which is sent to the client to populate the screen and my intention was on hitting apply to send a UserDetailUpdateDto back to the server to perform the actual update to the domain model.
Does this sound ok to start?
If so when the user is making changes client-side should my UserDetailUpdateDto be sending back the changes, ie. whats been added and whats been removed.
Not sure, guidance would be great.
Guidance is always tricky when so much is unknown about the requirements and the deployment environment. However, your approach sounds reasonable to me. The key things I like about it:
1) You are keeping your DTOs separate from your Domain Entities. In small simple apps it can be fine to send entities over the wire, but they can start to get in each other's way as complexity and function increase.
2) You are differentiating between a Query object (UserDetailDto) and a Command object (UserDetailUpdateDto). Again, the two can often be satisfied with a single object, but you will start to see it bloat as complexity/function increases, because the object is serving two masters (the Query object is consumed at the client and the Command object is consumed at the server). I use a convention where all command DTOs start with a verb (e.g. UpdateUserDetail); it just makes it easier to sort 'data' from 'methods' at the client end.
If the SL application is likely to become large with more complex screen logic it may be worth looking at the Model-View-ViewModel (MVVM) pattern. It allows you to separate screen design from screen function. It provides a bit more flexibility in distributing work around a development team and better supports unit testing.
As far as what gets sent back in the UpdateUserDetail object, I think this should be guided by what is going to be easiest to work with at the domain model (or the WCF service sitting over your domain model). Generally, smaller is better when it comes to DTOs.
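To make the "send back the changes" idea concrete, the command DTO can carry just the deltas rather than the full group list. A minimal sketch using the verb-first naming convention mentioned above; all member names are illustrative:

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;

// Command DTO: carries only what changed on the security screen,
// which keeps the payload small and maps cleanly onto domain
// operations like user.AddGroup(id) / user.RemoveGroup(id).
[DataContract]
public class UpdateUserDetail
{
    [DataMember] public int UserId { get; set; }
    [DataMember] public List<int> AddedGroupIds { get; set; }
    [DataMember] public List<int> RemovedGroupIds { get; set; }
}
```

The alternative of sending the complete desired group list and letting the server diff it is also valid; which is "easiest to work with at the domain model" depends on whether the domain exposes add/remove operations or a set-replacement operation.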

Global Filtering on Odata Provider

I'm connecting to a multi-tenant database through an OData service (my client is an iOS app using the Obj-C OData SDK). My question is: is there a way to apply a global filter to all data calls? Every data call should be filtered by TenantID=?, so instead of going to every single data call and adding TenantID=? to the filter string (my app was developed for a single database and I am now refactoring it for multi-tenancy), I was hoping there is a way to catch it in, say, the OnBeforeSend event and manipulate the URL to add the filter, so that all data calls are filtered. Any ideas, or any suggestions on approaching this?
Thanks in advance
There's nothing wrong with that approach.
Another approach, which may not be applicable in your situation, is to filter on the OData service side using change and query interceptors.
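The interceptor approach has the advantage that the tenant filter cannot be forgotten or bypassed by any client. A minimal sketch, assuming a WCF Data Services service over an entity set named "Orders"; the entity names and GetCurrentTenantId helper are illustrative:

```csharp
using System;
using System.Data.Services;
using System.Linq.Expressions;

public class MyDataService : DataService<MyEntities>
{
    // Applied automatically to every query against the Orders set,
    // regardless of what filter string the client sends.
    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> OnQueryOrders()
    {
        int tenantId = GetCurrentTenantId(); // e.g. resolved from the auth token
        return o => o.TenantID == tenantId;
    }

    // Illustrative stub: resolve the tenant from the current request context.
    private int GetCurrentTenantId()
    {
        throw new NotImplementedException();
    }
}
```

One interceptor is needed per entity set, but that is still a fixed, server-side list rather than a filter scattered across every client-side call.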

Understanding LINQ-to-SQL entities? Please help

I'm having a little difficulty understanding some architectural principles when developing a service.
Suppose a call to a WCF service returns a collection of items (Orders), which are custom classes built from LINQ-to-SQL entity data, to a client, and each item has a one-to-many collection of items (OrderItems) that is also built from the same LINQ-to-SQL context. If I make another call to the service, request a particular OrderItem, and modify its details on the client side, how does the first collection realise that one of its Orders' OrderItems has changed?
My current approach: when changing the OrderItem I send the OrderItem object to the WCF service for storage via LINQ-to-SQL commands, and to update the collection the client first retrieved I use the IList interface to search for and replace each instance of the OrderItem. Subscribing each item to the PropertyChanged event also gives some control. This works, with certain obvious limitations, but how would one 'more correctly' approach this, perhaps by managing all of the data changes from the service itself? An ORM? Static classes? If this is too difficult a question to answer here, perhaps there is a link or a chat group where I can discuss it, as I understand this site is geared towards quick Q/A topics rather than guided tutorial discussions.
Thanks all the same.
Chris Leach
If you have multiple clients changing the same data at the same time, then at the end of the day your system must implement some sort of concurrency control. Broadly, that's going to fall into one of two categories: pessimistic or optimistic.
In your case it sounds like you are venturing down the optimistic route, whereby anyone can access the resource via the service - it does not get locked or accessed exclusively. What that means is ultimately you need to detect and resolve conflicts that will arise when one client changes the data before another.
The second architectural requirement you seem to be describing is some way to synchronize changes between clients. This is a very difficult problem. One way is to build some sort of publish/subscribe system whereby, after a client retrieves some resources from the service, it also subscribes to updates to those resources. You can do this in either a push- or pull-based fashion (pull is probably simpler, i.e. just poll for changes).
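A pull-based version of that idea can be as simple as a timer that asks the service for anything changed since the last check. A rough sketch for a Silverlight client; GetChangesSince, ApplyChange, and the client proxy are illustrative names, not an existing API:

```csharp
using System;
using System.Windows.Threading;

// Poll the service every 30 seconds for changes made by other clients.
var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(30) };
DateTime lastSync = DateTime.UtcNow;

client.GetChangesSinceCompleted += (s, e) =>
{
    foreach (var change in e.Result)
        ApplyChange(change);       // merge into the local Orders collection,
                                   // e.g. via the IList replace described above
    lastSync = DateTime.UtcNow;
};

timer.Tick += (s, e) =>
{
    // Silverlight proxies are async-only, hence the event-based call pattern.
    client.GetChangesSinceAsync(lastSync);
};
timer.Start();
```

The polling interval trades freshness against server load; a push-based design (e.g. a WCF duplex/callback contract) removes the delay but is considerably more work to build and operate.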
Fundamentally you are trying to solve a reasonably complex problem, but it's also one that pops up quite frequently in software.