How to show feedback while streaming large files with WCF

I'm sending large files over WCF and I'm using transferMode="Streamed" in order to get this working, and it is working fine.
The thing is sometimes those files are just too big, and I want to give the client some sort of feedback about the progress.
Does anybody have a good solution or idea for how to accomplish this?
EDIT: I don't control the reading of the file on either side (client or server); if I did, I could simply report progress from the stream's Read function.
EDIT2: part of my code to help others understand my problem
Here's my contract
[OperationContract]
FileTransfer Update(FileTransfer request);
and here's the definition of FileTransfer
[System.ServiceModel.MessageContractAttribute(WrapperName = "FileTransfer", WrapperNamespace = "http://tempuri.org/", IsWrapped = true)]
public class FileTransfer : IDisposable
{
    [System.ServiceModel.MessageBodyMemberAttribute(Namespace = "dummy", Order = 0)]
    public Stream FileByteStream;

    public void Dispose()
    {
        if (FileByteStream != null)
        {
            FileByteStream.Close();
            FileByteStream = null;
        }
    }
}
so, in my service (hosted on IIS) I just have something like this:
request.FileByteStream;
and WCF itself reads the stream, right?
I hope this helps people out to understand my problem... please let me know if you need further info

What about adding the total stream size as a custom SOAP header (use MessageContracts)? Then you can process the stream on the client in chunks (e.g. reading into a buffer of a defined size in a loop), and for each chunk report the processed increment against the expected total size.
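A minimal sketch of the chunked-processing part of this idea, assuming the total size has already been obtained from the custom SOAP header (the header itself and the MessageContract plumbing are omitted; the helper name and signature are mine, not a WCF API):

```csharp
using System;
using System.IO;

public static class ChunkedCopy
{
    // Copies 'source' to 'destination' in fixed-size chunks, invoking
    // 'onProgress' with the running percentage after each chunk.
    // 'totalSize' is assumed to come from the custom SOAP header.
    public static void CopyWithProgress(
        Stream source, Stream destination, long totalSize,
        Action<int> onProgress, int bufferSize = 64 * 1024)
    {
        var buffer = new byte[bufferSize];
        long totalRead = 0;
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            destination.Write(buffer, 0, read);
            totalRead += read;
            onProgress((int)(totalRead * 100 / totalSize));
        }
    }
}
```

The key point is that progress can only be computed because the expected total travels out-of-band in the header; the stream itself has no length once it is being streamed by WCF.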

The only way I can see right now is to create another operation that reports the number of bytes read so far by the streamed operation. This would require enabling sessions and multi-threading on the server side, and implementing an asynchronous call from the client together with calls to the "progress reporting" operation.
The client knows the size of the stream (assuming the client is the sender), so it can derive a progress percentage from the known total size and the byte count reported by the server.
EDIT:
My answer works under the assumption that the client is uploading data, so the server knows how much data it has already read from the stream while the client knows the total size.
If the server exposes an operation that reports the volume of data it has read so far, the client can calculate the progress percentage by calling this operation.
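A sketch of the server-side bookkeeping behind such a "progress reporting" operation. The class and member names are illustrative; the actual operation contract, session configuration, and upload loop are omitted:

```csharp
using System.Threading;

public class UploadProgress
{
    private long _bytesRead;

    // Called from the server's upload loop after each chunk is read.
    public void Add(int count) => Interlocked.Add(ref _bytesRead, count);

    // What the "progress reporting" operation would return to the client.
    public long BytesRead => Interlocked.Read(ref _bytesRead);

    // Client side: percentage from the known total and the reported count.
    public static int Percentage(long reported, long totalSize) =>
        (int)(reported * 100 / totalSize);
}
```

Interlocked is used because, as noted above, the streamed upload and the reporting operation run on different threads.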

Best practice for sending large messages on ServiceBus

We need to send large messages on ServiceBus Topics. Current size is around 10MB. Our initial take is to save a temporary file in BlobStorage and then send a message with reference to the blob. The file is compressed to save upload time. It works fine.
Today I read this article: http://geekswithblogs.net/asmith/archive/2012/04/10/149275.aspx
The suggestion there is to split the message in smaller chunks and on the receiving side aggregate them again.
I admit that is a "cleaner approach", avoiding the round trip to Blob storage. On the other hand, I prefer to keep things simple; the splitting mechanism introduces extra complexity, and there must have been a reason why it wasn't included in the Service Bus from the beginning ...
Has anyone tried the splitting approach in real life situation?
Are there better patterns?
I wrote that blog article a while ago; the intention was to implement the splitter and aggregator patterns using the Service Bus. I found this question by chance when searching for a better alternative.
I agree that the simplest approach may be to use Blob storage to store the message body, and send a reference to that in the message. This is the scenario we are considering for a customer project right now.
I remember a couple of years ago, there was some sample code published that would abstract Service Bus and Storage Queues from the client application, and handle the use of Blob storage for large message bodies when required. (I think it was the CAT team at Microsoft, but I'm not sure).
I can't find the sample with a quick Google search, but as it's probably a couple of years old it would be out of date anyway, since the Service Bus client library has been improved a lot since then.
I have used the splitting of messages when the message size was too large, but as this was for batched telemetry data there was no need to aggregate the messages, and I could just process a number of smaller batches on the receiving end instead of one large message.
Another disadvantage of the splitter-aggregator approach is that it requires sessions, and therefore a session enabled Queue or Subscription. This means that all messages will require sessions, even smaller ones, and also the Session Id cannot be used for another purpose in the implementation.
If I were you I would not trust the code on the blog post, it was written a long time ago, and I have learned a lot since then :-).
The Blob Storage approach is probably the way to go.
Regards,
Alan
In case someone stumbles into the same scenario, the Claim Check approach would help.
Details:
Implement the Claim Check pattern
Use ServiceBus.AttachmentPlugin (assuming you use C#; optionally, you can create your own)
Use external storage, e.g. an Azure Storage account (optionally, you can use other storage)
C# Code Snippet
using System;
using System.Text;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;
using Newtonsoft.Json;
using ServiceBus.AttachmentPlugin;
...
// Get connection information from the environment
var serviceBusConnectionString = Environment.GetEnvironmentVariable("SERVICE_BUS_CONNECTION_STRING");
var queueName = Environment.GetEnvironmentVariable("QUEUE_NAME");
var storageConnectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");

// Configure where large message bodies will be stored
var config = new AzureStorageAttachmentConfiguration(storageConnectionString);

// Create the sender and register the attachment plugin
var sender = new MessageSender(serviceBusConnectionString, queueName);
sender.RegisterAzureStorageAttachmentPlugin(config);

// Create the payload
var payload = new { data = "random data string for testing" };
var serialized = JsonConvert.SerializeObject(payload);
var payloadAsBytes = Encoding.UTF8.GetBytes(serialized);
var message = new Message(payloadAsBytes);

// Send the message; the plugin moves the body to Blob storage and
// replaces it with a reference (the "claim check")
await sender.SendAsync(message);
References:
https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check
https://learn.microsoft.com/en-us/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/
https://www.enterpriseintegrationpatterns.com/patterns/messaging/StoreInLibrary.html
https://github.com/SeanFeldman/ServiceBus.AttachmentPlugin
https://github.com/mspnp/cloud-design-patterns/tree/master/claim-check/code-samples/sample-3

Sending a file from a Java client to a server using a WCF method?

I want to build a WCF web service so that the client and the server can transfer files to each other. Do you know how I can achieve this? I think I should turn the file into a byte array, but I have no idea how to do that. The file is also quite big, so I must turn on streamed transfer.
It sounds like you're on the right track. A quick search of the interwebz yielded this link: http://www.codeproject.com/Articles/166763/WCF-Streaming-Upload-Download-Files-Over-HTTP
Your question title indicates that you want to send a file from a Java client to a WCF endpoint, but the contents of your question indicate that this should be a bidirectional capability. If that is the case, then you'll need to implement a service endpoint on your client as well. As far as that is concerned, I cannot be of much help, but there are resources out there, like this SO question: In-process SOAP service server for Java
As far as practical implementation, I would think that using these two links you should be able to produce some code for your server and client.
As far as reading all bytes of a file, in C# you can use: File.ReadAllBytes It should work as in the following code:
// Read the contents of the indicated file
string fileName = "/path/to/some/file";
// Store the binary contents in a byte array
byte[] buffer = File.ReadAllBytes(fileName);
// ... do something with those bytes!
Be sure to use the search function in the future:

Wcf integration task, need to transfer large amount of data via soap service

I have to design an integration solution that transfers large amount of data and works once a day. The company X we work with will invoke the service / services and give the data as parameters.
Do you have any suggestions for this solution?
For example do you think that I have to tell the company X that they have to send compressed (gzip?) data?
Or do I have to realise this usage scenario:
while (!allDataSent)
{
    SendData(List<object> objects);
}
TransferCompleted();
How do you develop this kind of tasks?
A good starting point is to have a separate endpoint for the data transfer, because you need to change the timeouts and raise the maximum limits on how much data can be sent and received within a connection. If it's not time-critical, you can use IEnumerable<YourType> as the return type of the operation; on the client end it can be consumed as a stream, and the client can save the data in batches while it is still arriving, so there is no need to hold it all in memory.
On the clientside it could look something like this:
foreach (var bytes in serviceClient.GetLargeAmountOfData())
    SaveByteToDisc(bytes);
Information about the binding properties can be found at MSDN
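To illustrate the batching idea without any WCF plumbing, here is a self-contained sketch: the "server" side yields fixed-size batches lazily, and the "client" side writes each batch out as it arrives, so the full data set is never in memory at once. The names GetLargeAmountOfData and SaveAll are illustrative, not a real API:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public static class BatchedTransfer
{
    // "Server" side: stream the source in fixed-size batches via yield return,
    // so batches are produced only as the consumer iterates.
    public static IEnumerable<byte[]> GetLargeAmountOfData(Stream source, int batchSize = 64 * 1024)
    {
        var buffer = new byte[batchSize];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            var batch = new byte[read];
            Array.Copy(buffer, batch, read);
            yield return batch;
        }
    }

    // "Client" side: persist each batch as it arrives.
    public static void SaveAll(IEnumerable<byte[]> batches, Stream target)
    {
        foreach (var bytes in batches)
            target.Write(bytes, 0, bytes.Length);
    }
}
```

With WCF's streamed transfer mode the same shape applies: the client's foreach drives the enumeration, pulling one batch at a time over the wire.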

Memory exception using WCF wsHttpBinding

I have an application that uploads files to a server. I am using netTcpBinding and wsHttpBinding. When the file is larger than 200 MB, I get a memory exception. Working around this, I have seen people recommend streaming, and of course it works with netTcpBinding for large files (>1 GB), but when using wsHttpBinding, what would be the approach? Should I change to basicHttpBinding? Thanks.
I suggest you expose another endpoint just to upload such large data. This endpoint can have a binding that supports streaming. In a previous project we needed to do file uploads to the server as part of a business process, and we ended up creating two endpoints: one dedicated to file upload, and another for all other business functionality.
The streaming upload service can be a generic service that streams any data to the server and returns a token identifying the data on the server. For subsequent requests this token can be passed along to manipulate the data on the server.
If you don't want to (or cannot, for legitimate reasons) change the binding or use streaming, what you can do is have a method with a signature along the lines of the following:
void UploadFile(string fileName, long offset, byte[] data)
Instead of sending the whole file, you send little packets, and tell where the data should be placed. You can add more data, of course, like the whole filesize, CRC of the file to know if the transfer was successful, etc.
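A sketch of this offset-based approach, with the service operation stood in by a method that writes each packet into a stream at its offset (the real UploadFile would live behind the WCF contract; the names and chunk size here are illustrative):

```csharp
using System;
using System.IO;

public static class ChunkedUpload
{
    // Stand-in for the service operation UploadFile(fileName, offset, data):
    // writes the packet at the given offset in the server-side file.
    public static void UploadFile(Stream serverFile, long offset, byte[] data)
    {
        serverFile.Seek(offset, SeekOrigin.Begin);
        serverFile.Write(data, 0, data.Length);
    }

    // Client side: slice the file into packets of 'chunkSize' bytes and
    // send each one with its offset.
    public static void Send(byte[] file, Stream serverFile, int chunkSize = 4096)
    {
        for (long offset = 0; offset < file.Length; offset += chunkSize)
        {
            int count = (int)Math.Min(chunkSize, file.Length - offset);
            var packet = new byte[count];
            Array.Copy(file, offset, packet, 0, count);
            UploadFile(serverFile, offset, packet);
        }
    }
}
```

Because each call carries its own offset, the client can also retry or resume individual packets, which is a side benefit of this scheme over a single large upload.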

AdapterStream never flushes properly in WCF service?

I've written a windows service application that exposes a WCF REST web service. The call signature is as follows:
[OperationContract]
[WebInvoke(
    Method = "POST",
    UriTemplate = "/render",
    BodyStyle = WebMessageBodyStyle.Bare)]
Stream Render(Stream input);
It's implemented like so:
public Stream Render(Stream input)
{
    return new AdapterStream(output => DoActualWork(input, output));
}
But immediately when Render() returns, the underlying stream to the calling client is cut off, so the client gets an HTTP 200 OK with 0 bytes of body data. Immediately after that, a breakpoint inside DoActualWork() is hit, so the server actually does its job and dumps data onto the AdapterStream over the following seconds, but by then the caller has disconnected and is long gone.
Any ideas why this happens? Since DoActualWork() actually IS called, it would seem that the framework really did try to fetch data from the AdapterStream, but ... well, too late or something.
If nothing else, do you have any other suggestions for achieving the same thing (i.e. circumventing the fact that the result stream is a return value rather than a method parameter) without having to dump the (huge) results into a MemoryStream first and returning that? That works like a charm, by the way, except that it eats lots of memory.
Even though this question is quite old, I solved the problem like this:
First, ensure that your WebHttpBinding has TransferMode set to Streamed.
Second, ensure that you flush and dispose the stream given to you in the callback.
Third, block the callback (not the Render method!) until you have finished writing to the stream.
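Points two and three can be hard to picture, so here is a minimal sketch of the shape the callback should take. AdapterStream itself is a third-party helper, so this imitates only the callback contract: write everything, flush and dispose the output stream, and block until the writing work (which may run on another thread) has finished. All names here are illustrative:

```csharp
using System.IO;
using System.Threading;

public static class RenderCallback
{
    // The body of the lambda you would pass to AdapterStream: it must not
    // return until all data has been written to 'output'.
    public static void FillOutput(Stream output, byte[] data)
    {
        using (var done = new ManualResetEventSlim(false))
        {
            // The actual work may happen on a worker thread ...
            ThreadPool.QueueUserWorkItem(_ =>
            {
                output.Write(data, 0, data.Length);
                output.Flush();     // point 2: flush ...
                output.Dispose();   // ... and dispose the stream you were given
                done.Set();
            });
            done.Wait();            // point 3: block the callback until done
        }
    }
}
```

If the callback returns early, WCF considers the response body complete, which is exactly the truncated-response symptom described in the question.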