WCF very slow in LAN file transfer

I have a service with a method that sends a file from the client to the service. I have noticed that when I run the client and the service on the same machine, and the file I want to send is also on the local machine, everything works very fast.
However, if the client and the service are on the same machine but the file is on another computer, the transfer is very slow.
If I copy the file from one computer to the other, the copy is fast, so the problem does not seem to be bandwidth.
I have tried both the TCP and basicHttp bindings, but the results are the same.
The problem also occurs when the client is on another computer.
Thanks.
EDIT: If I open Task Manager on the computer that runs the client, the Networking tab shows network utilization of about 0.5%. Why?

WCF is not the optimal method for transmitting large files, because WCF has a lot of layers whose overhead adds up and delays the file transfer. Moreover, you may not have written the WCF service to continuously read chunks of bytes and write them to the response. You might be doing a File.ReadAll and returning the whole string, which causes a large synchronous read on the server and a lot of memory allocation, followed by writing the large string to the WCF buffer, which in turn writes to the IIS buffer, and so on.
The best way to transmit large files is with an HttpHandler. You can just use Response.TransmitFile to transfer the file, and IIS will transmit it in the most optimal way. Otherwise, you can read 8 KB at a time, write it to the Response stream, and call Flush after every 8 KB write.
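Here is a minimal sketch of that handler approach, assuming an ASP.NET site; the handler name and file path are illustrative:

    using System.Web;

    // Minimal download handler: IIS does the heavy lifting via TransmitFile.
    public class FileDownloadHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Hypothetical file; map this handler to a URL in web.config.
            string path = context.Server.MapPath("~/files/report.xlsx");

            context.Response.ContentType = "application/octet-stream";

            // Let IIS transmit the file without buffering it in managed memory.
            context.Response.TransmitFile(path);

            // Alternative: stream 8 KB chunks manually, flushing after each write.
            // using (var fs = System.IO.File.OpenRead(path))
            // {
            //     var buffer = new byte[8192];
            //     int read;
            //     while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            //     {
            //         context.Response.OutputStream.Write(buffer, 0, read);
            //         context.Response.Flush();
            //     }
            // }
        }
    }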
If you cannot go for an HttpHandler for any weird reason, can you show me the WCF code?
Another thing: you might be expecting performance that is simply not possible when IIS is in the picture. First, you should measure how long it takes IIS to transmit the file if you just host the file directly on a website and download it with WebClient.
Another thing: how are you downloading? Via a browser, or via client-side code? Client-side code can be suboptimal as well if you try to transmit the whole file in one shot and hold it in a string. For example, WebClient.DownloadString would be the worst approach.
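For that baseline measurement, a sketch like the following would do; it uses WebClient.DownloadFile so the content streams to disk rather than into a string (the URL is hypothetical):

    using System;
    using System.Diagnostics;
    using System.Net;

    class BaselineTest
    {
        static void Main()
        {
            // Hypothetical URL: host the file directly on a website first.
            const string url = "http://server/files/large.bin";

            var sw = Stopwatch.StartNew();
            using (var client = new WebClient())
            {
                // DownloadFile streams to disk; avoid DownloadString for binary content.
                client.DownloadFile(url, "large.bin");
            }
            sw.Stop();
            Console.WriteLine("IIS baseline: {0:F1} s", sw.Elapsed.TotalSeconds);
        }
    }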

Related

SQL FILESTREAM and Connection Pooling

I am currently enhancing a product to support web delivery of large file content.
I would like to store it in the database, and whether I choose FILESTREAM or BLOB storage, the following question still holds.
My WCF method will return a stream, meaning that the file stream will remain open while the content is read by the client. If the connection is slow, the stream could be open for some time.
Question: connection pooling assumes that connections are held exclusively, and only for a short period of time. Am I correct in assuming that, given a connection pool of finite size, there could be a contention problem if slow network connections are used to download files?
Under this assumption, I really want to use FILESTREAM and open the file directly from the file system rather than through the SQL connection. However, if the database is remote, I will have no choice but to pull the content through the SQL connection (until I have a local cache of the file, anyway).
I realise I have other options, such as buffering the stream on the server, but that has implications as well. For now, I wish to discuss only the issues relating to returning a stream obtained from a DB connection.
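For concreteness, here is a rough sketch of the pattern in question (table and column names are hypothetical). Note that the SqlFileStream, and therefore the pooled connection and its transaction, must stay open until the client finishes reading, which is exactly the contention risk described above:

    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;

    public static class FileStreamRepository
    {
        // Returns a stream backed by a pooled SQL connection; the caller must
        // dispose the stream, then commit/dispose the transaction and connection.
        public static Stream OpenFileStream(SqlConnection conn, SqlTransaction tx, int fileId)
        {
            // FILESTREAM access requires an open transaction.
            var cmd = new SqlCommand(
                "SELECT Content.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                "FROM Files WHERE Id = @id", conn, tx);
            cmd.Parameters.AddWithValue("@id", fileId);

            using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SingleRow))
            {
                reader.Read();
                string path = reader.GetString(0);
                byte[] txContext = reader.GetSqlBytes(1).Value;

                // The connection and transaction stay busy for the stream's lifetime.
                return new SqlFileStream(path, txContext, FileAccess.Read);
            }
        }
    }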

WCF streamed message download time

I have a streamed service. The message returned from the operation has a stream as its only body member, which is a stream to a file in the file system. I wonder if there's a way to record, from the server, how much time it takes the client to consume that file?
One way to go: return not only the stream from the server, but also a data structure that contains the file size.
On the client, you can use a timer and compare the bytes already read against the elapsed time and the full file size.
See this example: http://www.codeproject.com/Articles/20364/Progress-Indication-while-Uploading-Downloading-Fi
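A rough sketch of that idea (names are illustrative): the file length travels as a message header alongside the streamed body, and the client times its reads:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.ServiceModel;

    // The response carries the file length in a header so the client can
    // compute progress while reading the streamed body.
    [MessageContract]
    public class FileDownloadResponse
    {
        [MessageHeader]
        public long FileLength;

        [MessageBodyMember]
        public Stream FileContent;
    }

    public static class DownloadTimer
    {
        // Client side: compare bytes read against elapsed time and total size.
        public static void Consume(FileDownloadResponse response)
        {
            var sw = Stopwatch.StartNew();
            var buffer = new byte[8192];
            long totalRead = 0;
            int read;
            while ((read = response.FileContent.Read(buffer, 0, buffer.Length)) > 0)
            {
                totalRead += read;
                double pct = 100.0 * totalRead / response.FileLength;
                Console.WriteLine("{0:F1}% in {1:F1} s", pct, sw.Elapsed.TotalSeconds);
            }
        }
    }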

OS and/or IIS Caching

Is there a way to force caching of files at the OS level and/or web server level (IIS)?
The problem I am facing is that there are many static files (XSLTs, for example) that need to be loaded again and again, and I want to load all these files into memory so that no time is wasted on hard disk I/O.
(1) I want to cache at the OS level, so that every program that runs on my OS and tries to read a file reads it from memory. I don't want any changes to program source code; it must happen transparently. For example, read("c:\abc.txt") must not cause disk I/O; it must be read from memory.
(2) Achieving a similar thing in IIS. I've read a few things about output caching for database queries, but how do I achieve it for files?
All suggestions are welcome!
Thanks
You should look into some tricks used by SO itself. One was that they moved all their static content off to another domain for efficiency.
The problem with default setups for Apache (at a minimum) is that the web server will pass all requests through to an app server to see if the content is meant to be dynamic. That's a huge waste for content that you know to be static.
It is far better to set up a separate domain for static content without an app server. That way, static requests are not sent unnecessarily to another layer, and the web server can run much faster.
Even in a setup where another layer isn't invoked every time, there are other reasons for a separate domain, as you'll see from that link (specifically, removing cookies, which both reduces traffic and improves the chances of the Internet caching your data).
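For part (2) of the question, one possibility is IIS 7's built-in caching for static files, configured in web.config. This is a sketch with illustrative values; it enables user-mode and kernel-mode output caching for a given extension and sets client cache headers:

    <!-- Illustrative web.config fragment for IIS 7+. -->
    <system.webServer>
      <caching>
        <profiles>
          <!-- Cache .xslt responses in user mode and the http.sys kernel cache
               until the underlying file changes. -->
          <add extension=".xslt" policy="CacheUntilChange"
               kernelCachePolicy="CacheUntilChange" />
        </profiles>
      </caching>
      <staticContent>
        <!-- Ask clients and proxies to cache static content for seven days. -->
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
      </staticContent>
    </system.webServer>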

How to upload a file in WCF along with identifying credentials?

I've got an issue with WCF, streaming, and security that isn't the biggest deal but I wanted to get people's thoughts on how I could get around it.
I need to allow clients to upload files to a server, and I'm allowing this by using the transferMode="StreamedRequest" feature of the BasicHttpBinding. When they upload a file, I'd like to transactionally place this file in the file system and update the database with the metadata for the file (I'm actually using Sql Server 2008's FILESTREAM data type, that natively supports this). I'm using WCF Windows Authentication and delegating the Kerberos credentials to SQL Server for all my database authentication.
The problem is that, as the exception I get helpfully notes, "HTTP request streaming cannot be used in conjunction with HTTP authentication." So, for my upload file service, I can't pass the Windows authentication token along with my message call. Even if I weren't using SQL Server logins, I wouldn't even be able to identify my calling client by their Windows credentials.
I've worked around this temporarily by leaving the upload method unsecured, and having it dump the file to a temporary store and return a locator GUID. The client then makes a second call to a secure, non-streaming service, passing the GUID, which uploads the file from the temporary store to the database using Windows authentication.
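In sketch form, the two-step workaround looks something like this (contract and member names are hypothetical):

    using System;
    using System.IO;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFileUpload
    {
        // Step 1: unsecured, streamed endpoint; dumps the bytes to a
        // temporary store and returns a locator GUID.
        [OperationContract]
        Guid UploadToTempStore(Stream fileContent);
    }

    [ServiceContract]
    public interface IFileCommit
    {
        // Step 2: secured, non-streamed endpoint; moves the temporary file
        // into the database under the caller's Windows identity.
        [OperationContract]
        void CommitUpload(Guid locator, string fileName);
    }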
Obviously, this isn't ideal. From a performance point of view, I'm doing an extra read/write to the disk. From a scalability point of view, there's (in principle, with a load balancer) no guarantee that I hit the same server with the two subsequent calls, meaning that the temporary file store needs to be in a shared location, which is not a scalable design.
Can anybody think of a better way to deal with this situation? Like I said, it's not the biggest deal, since a) I really don't need to scale this thing out much, there aren't too many users, and b) it's not like these uploads/downloads are getting called a lot. But still, I'd like to know if I'm missing an obvious solution here.
Thanks,
Daniel

How to transfer large files using WCF

I need to transfer large Excel files over a WCF service. Our project requires generating some reports for the clients, and we use Excel to generate the reports.
Right now the project uses net.tcp binding, but we are considering switching over to http binding.
I read another post on SO about transferring large images, and the answers all suggested using streaming. However, I'm wondering what the best approach would be considering it's an Excel file. The file sizes can sometimes approach ~10 MB.
Yes, streaming will work over TCP or HTTP; you should use it. Streaming removes the need for large in-memory buffers holding the entire file at once, which will increase the scalability of your application.
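Streaming is switched on through the binding's transferMode; here is a configuration sketch with illustrative quota values:

    <!-- Illustrative bindings; maxReceivedMessageSize here is 50 MB. -->
    <bindings>
      <basicHttpBinding>
        <binding name="streamedHttp"
                 transferMode="Streamed"
                 maxReceivedMessageSize="52428800" />
      </basicHttpBinding>
      <netTcpBinding>
        <binding name="streamedTcp"
                 transferMode="Streamed"
                 maxReceivedMessageSize="52428800" />
      </netTcpBinding>
    </bindings>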
As JP says, streaming is a good option; I would generally recommend going for that. The book "Essential Windows Communication Foundation" suggests that if you need reliable messaging, digital signatures, or resuming after failure, another option is to manually chunk the data into smaller messages and then reconstitute them on the server.
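A bare-bones sketch of what such a chunking contract might look like (names are hypothetical):

    using System.ServiceModel;

    // Split the file into small messages so reliable messaging, signatures,
    // and resume-on-failure can apply per chunk; the server reassembles them.
    [ServiceContract]
    public interface IChunkedUpload
    {
        [OperationContract]
        void BeginFile(string fileId, string fileName, long totalLength);

        [OperationContract]
        void UploadChunk(string fileId, int chunkIndex, byte[] data);

        [OperationContract]
        void EndFile(string fileId);
    }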