I have thousands of records in a SQL Server database table. To search them quickly on a web page, I created a WCF REST service that returns a list of records fetched from the database by keyword, serialized to JSON and displayed in a DIV just below an HTML textbox (like the Google search box).
I used a server-side cache object to avoid database hits to some extent.
But I am forced to hit the REST GET URL on every text change.
Any suggestions to make it faster?
There are ways to reduce your REST calls. Client-side caching techniques let you cache the AJAX responses so that the next time the same request is repeated, the results are served from the cache. But you have to be very careful using such techniques, as they may end up giving wrong results and behavior.
See this answer. It is similar to your question, but the discussion is really interesting and will give you insight into client-side cache implementation to reduce AJAX round trips.
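To make the idea concrete, here is a minimal sketch of both techniques on the client: debouncing keystrokes so you don't fire a request on every character, and caching responses per keyword. The endpoint URL and `renderResults` are assumptions for illustration, not part of your service.

```javascript
// Debounce: delay calling fn until the user pauses typing for delayMs.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// makeSearcher wraps any async "fetch results" function with a per-keyword
// cache, so a repeated query never hits the REST service again.
function makeSearcher(fetchResults) {
  const cache = new Map();
  return async keyword => {
    if (cache.has(keyword)) return cache.get(keyword); // cache hit: no REST call
    const results = await fetchResults(keyword);
    cache.set(keyword, results);
    return results;
  };
}

// Usage in the page (hypothetical names):
// const search = makeSearcher(kw =>
//   fetch("/SearchService/search?q=" + encodeURIComponent(kw)).then(r => r.json()));
// textbox.addEventListener("input",
//   debounce(e => search(e.target.value).then(renderResults), 300));
```

Note the caveat from above still applies: a cached entry can go stale if the underlying data changes, so you may want to expire entries after a short interval.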
Since you're using REST, you're making an HTTP request to your service, so you can take advantage of ASP.NET output caching.
The call will still hit the server, but it will automatically respond to your request without running your code.
You do it like this:
[AspNetCacheProfile("CachePolicyName")]
[WebGet(UriTemplate = "{userName}")]
public string GetData(string userName)
{ // your code }
If required, you need to enable ASP.NET compatibility in your configuration file:
<system.serviceModel>
<serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
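You also need to define the cache profile the attribute refers to in web.config; a sketch (the profile name and 60-second duration are illustrative, and the name must match the string passed to AspNetCacheProfile):

```xml
<system.web>
  <caching>
    <outputCacheSettings>
      <outputCacheProfiles>
        <!-- varyByParam="*" caches a separate copy per query string value -->
        <add name="CachePolicyName" duration="60" varyByParam="*" />
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>
```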
See more here: https://msdn.microsoft.com/en-us/library/vstudio/ee230443%28v=vs.100%29.aspx
And here: http://blogs.msdn.com/b/endpoint/archive/2010/01/28/integrating-asp-net-output-caching-with-wcf-webhttp-services.aspx
Hope it helps.
I am designing my first REST API.
Suppose I have a (SOAP) web service that takes MyData1 and returns MyData2.
It is a pure function with no side effects, for example:
MyData2 myData2 = transform(myData1);
transform() does not change the state of the server. My question is: what REST call do I use? MyData1 can be large, so I will need to put it in the body of the request, which seems to require POST. However, POST seems to be used only to change server state rather than to return anything, which transform() is not doing. So POST might not be correct? Is there a specific REST technique for pure functions that take and return something, or should I just use POST, use the response body, and not worry about it?
I think POST is the way to go here, because of the sheer fact that you need to pass data in the body. The GET method is used when you need to retrieve information (in the form of an entity), identified by the Request-URI. In short, that means that when processing a GET request, a server is only required to examine the Request-URI and Host header field, and nothing else.
See the pertinent section of the HTTP specification for details.
It is okay to use POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”
It's not a great answer, but it's the right answer. The real issue here is that HTTP, which is a protocol for the transfer of documents over a network, isn't a great fit for document transformation.
If you imagine this idea on the web, how would it work? Well, you'd click through a bunch of links to get to some web form, and that web form would allow you to specify the source data (including perhaps attaching a file); submitting the form would send everything to the server, and you'd get the transformed representation back as the response.
But - because of the payload, you would end up using POST, which means that general purpose components wouldn't have the data available to tell them that the request was safe.
You could look into the WebDav specifications to see if SEARCH or REPORT is a satisfactory fit -- every time I've looked into them for myself I've decided against using them (no, I don't want an HTTP file server).
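The POST approach described above can be sketched as a small client helper. The `/transform` endpoint name and JSON payload shape are assumptions for illustration:

```javascript
// Build the options for a POST to a hypothetical /transform endpoint.
// POST is used because the (possibly large) input travels in the body.
function buildTransformRequest(myData1) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(myData1),
  };
}

// Usage with fetch (browser or Node 18+):
// fetch("https://example.com/transform", buildTransformRequest({ text: "hello" }))
//   .then(res => res.json())
//   .then(myData2 => console.log(myData2));
```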
We call WCF services (not ours), and we're using GETs to search a product database.
Example:
http://foo.com/SearchProducts.svc?$skip=0&$take=10&$includeTotalCount=true
We were passing the OData parameters to page the results of the SearchProducts service. The service has been changed to a POST because one of our filters, "skus", is sometimes huge (hundreds of SKUs), which causes the GET to break because the URI is too long. The easiest solution, we thought, was to just change the call to a POST, but now the OData params don't seem to be used.
Do these params need to be sent in a different manner when doing a POST?
A compliant OData service will not support the POST verb for queries (unless you use POST tunneling, but then you're going to hit the URL limit anyway). So I wonder how it works for you at all.
The URL size limit can be overcome using several approaches:
Simplify the query expression. Obviously this can only go so far, but it's usually the best solution as it will likely speed up the query execution as well.
Use batch instead. You can send the GET request inside a batch. The length of the URL is not an issue in this case, since the query URL is sent in the payload of the batch.
Define a service operation for the complex query you're using (but since you don't own the service this is probably not a good solution for you).
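To illustrate the batch approach: the long GET is wrapped inside the payload of a POST to the service's $batch endpoint, so the URL length limit no longer applies. A rough sketch of such a request (the endpoint, boundary, and query are illustrative):

```http
POST http://foo.com/SearchProducts.svc/$batch HTTP/1.1
Content-Type: multipart/mixed; boundary=batch_1

--batch_1
Content-Type: application/http
Content-Transfer-Encoding: binary

GET SearchProducts?$skip=0&$top=10&$filter=... HTTP/1.1
Host: foo.com

--batch_1--
```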
I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned, the results are bound to the grid, and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit a max message size problem, so I'm thinking it's time to convert from fetch-and-cache to fetching only the current page.
At face value this seems easy enough, but there's a small catch. The user is allowed to export the entire result set at any point. This means that for grid viewing purposes fetching the current page is fine, but when they want to do an export, I still need to make a call for all data.
This puts me back into the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure the exact details for your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
I would probably do something like this in your case
create a service with a "paged" GetData() method - where you specify the page index and the page size as additional parameters. This should give you a nice clean interface for "regular" use, and that should not hit the maxMessageSize limits
create a second service (or method) that would send all data - ideally, you could bundle that up into a ZIP file or something on the server, before sending it. If that ZIP file is still too large, you might want to check out WCF streaming for handling large files, as Andy already pointed out
The maxMessageSizeLimit is in place for a good reason: to avoid Denial of Service attacks where a WCF service would just get flooded with large messages and thus brought to its knees. If you can, always try to keep that in mind and don't just jack up the maxMessageSize to 2 GB - it might come back to bite you :-)
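The paged method can also cover the export case: instead of one giant call, the client pulls the full set one page at a time, so no single message exceeds the size limit. A sketch, where `fetchPage(pageIndex, pageSize)` is a hypothetical async function wrapping the paged GetData() call:

```javascript
// Export the full result set by repeatedly calling a paged endpoint,
// so no single response exceeds maxReceivedMessageSize.
// fetchPage resolves to an array of rows (shorter than pageSize on the last page).
async function exportAll(fetchPage, pageSize = 500) {
  const rows = [];
  for (let page = 0; ; page++) {
    const batch = await fetchPage(page, pageSize);
    rows.push(...batch);
    if (batch.length < pageSize) break; // last (possibly partial) page
  }
  return rows;
}
```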
I have a WCF service that has to return data sets that can be 10 MB or more. I want some visual feedback for the user on progress; is there a way to track the download progress?
My client is Silverlight 3 and ultimately I would like to be able to bind a progress bar to this; any ideas?
EDIT: After the bounty, SO automatically selected the answer with the most upvotes as the correct answer, when this is not the case.
There is an example of this on CodeProject; see:
http://www.codeproject.com/KB/WCF/WCF_FileTransfer_Progress.aspx
If you have one giant WCF call, then you only have two states: everything or nothing. Also, WCF has a maximum message size, so returning a large dataset runs the risk of going over this limit.
In order to solve these problems in my projects, I split the one big request into many smaller requests. I then check how many responses I have vs. original requests to get an indication of progress.
Edit: added better explanation.
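The splitting approach above can be sketched like this, where `fetchChunk(i)` is a hypothetical async function returning the i-th slice of the data set:

```javascript
// Issue many small requests instead of one big one, and report progress
// as the fraction of completed responses.
async function fetchWithProgress(fetchChunk, chunkCount, onProgress) {
  let completed = 0;
  const chunks = await Promise.all(
    Array.from({ length: chunkCount }, (_, i) =>
      fetchChunk(i).then(result => {
        completed++;
        onProgress(completed / chunkCount); // fraction done, 0..1
        return result;
      })
    )
  );
  return chunks.flat(); // Promise.all preserves chunk order
}
```

Binding `onProgress` to a progress bar's value gives the visual feedback; the finer you split the request, the smoother the bar moves, at the cost of more round trips.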
The CodeProject article may be tricky to get working with Silverlight since Silverlight only has access to the BasicHttpBinding -- although it looks like BasicHttpBinding has a TransferMode="Streamed" so perhaps it is possible -- I don't know.
If you can get it to return a Stream, that seems like it would be the best approach.
Still, I thought that I would put forward a random "other" approach.
Perhaps you could serialize the data into a file and use the WebClient to download it. So basically, you would have a WS.GetData() which would save a file on the server and return its filename -- then the Silverlight app would use WebClient to download it (which has a DownloadProgressChanged event).
I know it's not what you're looking for -- just an idea...
EDIT: I answered this thinking you wanted a silverlight uploader, but it actually looks like you want a silverlight downloader. You can do the same thing I suggested for the uploader except use HTTP GET, or Binary WCF, or Sockets.
I have written a Silverlight 2 uploader with a progress bar and I modeled it after this one. It uses HTTP POST to sent the file to the server one piece at a time. The tricky part is that the bigger your POST, the faster the file will be uploaded, but your progress bar only gets updated once per POST. So I wrote an algorithm that dynamically tries to find the biggest POST size that takes less than a second.
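The adaptive sizing idea can be sketched as a small helper (an illustrative reconstruction, not the original algorithm): grow the POST size while a chunk uploads in under a second, back off when it takes longer.

```javascript
// Pick the next chunk size from how long the last chunk took.
// The 1-second target, bounds, and doubling/halving steps are assumptions.
function nextChunkSize(currentSize, elapsedMs, maxSize = 4 * 1024 * 1024) {
  if (elapsedMs < 1000) {
    return Math.min(currentSize * 2, maxSize); // fast: try a bigger chunk
  }
  return Math.max(Math.floor(currentSize / 2), 4096); // slow: back off
}
```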
If you want to use WCF instead of HTTP POST, that's probably better because Silverlight 3 now supports binary message encoding:
<customBinding>
<binding name="MyBinaryBinding" maxBufferSize="2147483647"
maxReceivedMessageSize="2147483647">
<binaryMessageEncoding />
<httpTransport />
</binding>
</customBinding>
OR you could write a sockets implementation. Silverlight does support this, but it can be a little tricky to set up and requires your server to have a port open in the range 4502-4532, plus port 943 open for a policy file.
I'm pushing the bounds of what one should ask of others with this one, but I'm totally stuck, so here goes...
This is my first web service. Not only that, it's my company's first web service - nobody I work with has ever written or consumed anything like this one. I know these things are not complicated, but for a first kick at the can, this is killing me because the API is so large.
WSDL is here: https://fast.uspspostalone.com/USPSMLXMLWeb/services/UspsMailXmlMailingServices/wsdl/UspsMailXmlMailing70.wsdl
I need to get a "FullServiceNixieDetail". Should be an XML doc. The documentation provided by USPS says I need to invoke FullServiceNixieDetailQueryRequest, and I will get back a FullServiceNixieDetailQueryResponse, which contains a FullServiceNixieDetail.
I cannot for the life of me get anything that seems to work. The code I currently have is:
Imports USPSACSProcessor.UPSPMailXML
Dim c As New UspsMailXmlMailingServiceClient
Dim request As New FullServiceNixieDetailQueryRequest
Dim response As FullServiceNixieDetailQueryResponse
'Assume I populate the Request object correctly here
response = c.FullServiceNixieDetailQuery(request)
But my response object has no FullServiceNixieDetail. Just a bunch of summary properties like TotalMessageCount etc.
How do I get my FullServiceNixieDetail XML?
Did you populate your request with the proper authentications?
I suspect it is response.Item that holds the FullServiceNixieDetail, but without usage knowledge of this particular web service it's hard to confirm; you will need to find this out from the service host. You can also try casting the item, e.g. TryCast(response.Item, FullServiceNixieDetail), to verify this.