Resuming files when uploading to a server using WCF

I'm using WCF and I am trying to resume my upload with the following code on the server app:
class DataUploader : IDataUploader
{
    public void Upload(UploadMessage msg)
    {
        int speed = msg.AvgSpeed * 1024; // convert from KB to bytes
        Stream stream = msg.DataStream;
        string name = msg.VirtualPath;
        int startPoint; // obtained by reading the partially uploaded file
        using (FileStream fs = new FileStream(@"C:\savedfile.dat", FileMode.Append))
        {
            int bufferSize = 4 * 1024; // 4KB buffer
            byte[] buffer = new byte[bufferSize];
            int bytes;
            while ((bytes = stream.Read(buffer, startPoint, bufferSize)) > 0)
            {
                fs.Write(buffer, 0, bytes);
                fs.Flush();
            }
            stream.Close();
            fs.Close();
        }
    }
}
I'm trying to begin reading the stream from a specified point (startPoint), because the first bytes have already been uploaded; that way I could append only the remaining bytes to the partially uploaded file. With this approach I get an error about the buffer size, and I can't seek because seeking throws a NotSupportedException, so I think maybe this approach is not right. Help!!
My service contract:
[ServiceContract]
interface IDataUploader
{
    [OperationContract]
    void Upload(UploadMessage msg);
}
My message contract:
[MessageContract]
public class UploadMessage
{
    [MessageHeader(MustUnderstand = true)]
    public string VirtualPath { get; set; }

    [MessageHeader(MustUnderstand = true)]
    public int AvgSpeed { get; set; }

    [MessageBodyMember(Order = 1)]
    public Stream DataStream { get; set; }
}

It seems like you are using a standard SOAP message rather than the streaming binding. Check out this link.
If you don't want to use WCF's streaming API, which is proprietary to WCF, I would consider creating a 'chunking' method driven from the client, if the client is uploading the file. Similar to how FTP can resume, I would query the server for the current offset, send up a block or set of blocks, write them to my persistence (memory, db, file, etc.), and then continue with multiple calls from the client sending smaller blocks (be careful of serialization, as that can introduce unnecessary delays). This may be a technique you want to investigate, since it sounds like the client is 'streaming' to the server; a sketch follows below.
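To make that chunking idea concrete, here is a minimal sketch of what the contract could look like. None of this is from the question; the interface and operation names (IChunkedUploader, GetUploadedLength, UploadChunk) are invented for illustration:

using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IChunkedUploader
{
    // Ask the server how many bytes it already holds for this file,
    // so the client knows where to resume from.
    [OperationContract]
    long GetUploadedLength(string virtualPath);

    // Send one block; the server appends it to its persisted copy.
    [OperationContract]
    void UploadChunk(string virtualPath, long offset, byte[] data);
}

On the client, resuming then becomes a loop: seek the local file to the offset reported by GetUploadedLength and keep calling UploadChunk until the end of the file.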
Btw, you may want to look at the following article to determine if your use of MessageContract is appropriate, as opposed to a DataContract.
http://blogs.msdn.com/b/drnick/archive/2007/07/25/data-contract-and-message-contract.aspx

If you want resume functionality you cannot do it this way. Your client must send the file in chunks, and it must keep track of the id of the last successfully uploaded chunk. The service must process chunks and append them to storage.
In the most basic implementation this means that your client must divide the file into chunks of a well-known size and call the upload operation for each chunk. The message must also contain the chunk id and probably the chunk size (or something identifying the last chunk). This can also be combined with a reliable session to allow automatic resending of lost chunks and to enforce in-order delivery.
There is also an example of a channel implementation which does chunking internally.
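For the reliable-session part, a minimal sketch of the binding setup (assuming wsHttpBinding; the values are illustrative, not prescriptive):

var binding = new WSHttpBinding(SecurityMode.None);
binding.ReliableSession.Enabled = true;   // WS-ReliableMessaging: lost chunks are resent
binding.ReliableSession.Ordered = true;   // chunks are delivered in order
binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10);
binding.MaxReceivedMessageSize = 4 * 1024 * 1024; // room for chunk messages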

Related

Returning Azure BLOB from WCF service as a Stream - Do we need to close it?

I have a simple WCF service that exposes a REST endpoint and fetches files from a BLOB container. The service returns the file as a stream. I stumbled upon this post about closing the stream after the response has been made:
http://devdump.wordpress.com/2008/12/07/disposing-return-values/
This is my code:
public class FileService
{
    [OperationContract]
    [WebGet(UriTemplate = "{*url}")]
    public Stream ServeHttpRequest(string url)
    {
        var fileDir = Path.GetDirectoryName(url);
        var fileName = Path.GetFileName(url);
        var blobName = Path.Combine(fileDir, fileName);
        return getBlob(blobName);
    }

    private Stream getBlob(string blobName)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("ConnectingString");
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("data");
        var blob = container.GetBlobReference(blobName);
        MemoryStream ms = new MemoryStream();
        blob.DownloadToStream(ms);
        ms.Seek(0, SeekOrigin.Begin);
        return ms;
    }
}
So I have two questions:
Should I follow the pattern mentioned in the post?
If I change my return type to byte[], what are the cons/pros?
(My client is Silverlight 4.0, just in case that has any effect.)
I'd consider changing your return type to byte[]. It's tidier.
Stream implements IDisposable, so in theory the consumer of your method will need to call your code in a using block:
using (var receivedStream = new FileService().ServeHttpRequest(someUrl))
{
    // do something with the stream
}
If your client definitely needs access to something that Stream provides, then by all means go ahead and return that, but by returning a byte[] you keep control of any unmanaged resources that are hidden under the covers.
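For reference, a byte[] variant of the operation might look like the following. This is a sketch built on the question's own getBlob helper, not tested code:

[OperationContract]
[WebGet(UriTemplate = "{*url}")]
public byte[] ServeHttpRequest(string url)
{
    var blobName = Path.Combine(Path.GetDirectoryName(url), Path.GetFileName(url));
    using (var ms = (MemoryStream)getBlob(blobName))
    {
        return ms.ToArray(); // whole payload buffered; nothing left open for the caller to dispose
    }
}

The trade-off is exactly that buffering: the entire blob sits in memory before the response goes out, so for large files the Stream version scales better.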
OperationBehaviorAttribute.AutoDisposeParameters is set to true by default, which calls Dispose on all inputs/outputs that are disposable. So everything just works.
This link :
http://devdump.wordpress.com/2008/12/07/disposing-return-values/
explains how to manually control the process.
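One way to take that manual control, sketched from memory of the pattern in the linked post (treat the details as an assumption, not verified code):

[OperationBehavior(AutoDisposeParameters = false)]
public Stream ServeHttpRequest(string url)
{
    var stream = getBlob(url); // the question's helper, reused here for illustration
    // Dispose the stream ourselves once WCF has finished writing the reply.
    OperationContext.Current.OperationCompleted += (sender, args) => stream.Dispose();
    return stream;
}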

Does protobuf-net have built-in compression for serialization?

I was doing some comparison between BinaryFormatter and protobuf-net serializer and was quite pleased with what I found, but what was strange is that protobuf-net managed to serialize the objects into a smaller byte array than what I would get if I just wrote the value of every property into an array of bytes without any metadata.
I know protobuf-net supports string interning if you set AsReference to true, but I'm not doing that in this case, so does protobuf-net provide some compression by default?
Here's some code you can run to see for yourself:
var simpleObject = new SimpleObject
{
    Id = 10,
    Name = "Yan",
    Address = "Planet Earth",
    Scores = Enumerable.Range(1, 10).ToList()
};

using (var memStream = new MemoryStream())
{
    var binaryWriter = new BinaryWriter(memStream);
    // 4 bytes for the int
    binaryWriter.Write(simpleObject.Id);
    // 3 bytes + 1 length-prefix byte
    binaryWriter.Write(simpleObject.Name);
    // 12 bytes + 1 length-prefix byte
    binaryWriter.Write(simpleObject.Address);
    // 40 bytes for 10 ints
    simpleObject.Scores.ForEach(binaryWriter.Write);
    // 61 bytes, which is what I expect
    Console.WriteLine("BinaryWriter wrote [{0}] bytes",
        memStream.ToArray().Count());
}

using (var memStream = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(memStream, simpleObject);
    // 41 bytes!
    Console.WriteLine("Protobuf serialize wrote [{0}] bytes",
        memStream.ToArray().Count());
}
EDIT: forgot to add, the SimpleObject class looks like this:
[Serializable]
[DataContract]
public class SimpleObject
{
    [DataMember(Order = 1)]
    public int Id { get; set; }

    [DataMember(Order = 2)]
    public string Name { get; set; }

    [DataMember(Order = 3)]
    public string Address { get; set; }

    [DataMember(Order = 4)]
    public List<int> Scores { get; set; }
}
No it does not; there is no "compression" as such specified in the protobuf spec; however, it does (by default) use "varint encoding" - a variable-length encoding for integer data that means small values use less space; so 0-127 take 1 byte plus the header. Note that varint by itself goes pretty loopy for negative numbers, so "zigzag" encoding is also supported which allows small magnitude numbers to be small (basically, it interleaves positive and negative pairs).
Actually, in your case for Scores you should also look at "packed" encoding, which requires either [ProtoMember(4, IsPacked = true)] or the equivalent via TypeModel in v2 (v2 supports either approach). This avoids the overhead of a header per value, by writing a single header and the combined length. "Packed" can be used with varint/zigzag. There are also fixed-length encodings for scenarios where you know the values are likely large and unpredictable.
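For example, in attribute form the packed hint on the question's type would look something like this sketch (switching to protobuf-net's own attributes so the field numbers are explicit):

[ProtoContract]
public class SimpleObject
{
    [ProtoMember(1)]
    public int Id { get; set; }

    [ProtoMember(2)]
    public string Name { get; set; }

    [ProtoMember(3)]
    public string Address { get; set; }

    [ProtoMember(4, IsPacked = true)] // one header + length for all ten ints
    public List<int> Scores { get; set; }
}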
Note also: if your data has lots of text you may benefit from additionally running it through gzip or deflate; if it doesn't, then both gzip and deflate could cause it to get bigger.
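That extra step is ordinary stream wrapping; a minimal sketch using System.IO.Compression:

using (var memStream = new MemoryStream())
{
    using (var gzip = new GZipStream(memStream, CompressionMode.Compress, leaveOpen: true))
    {
        ProtoBuf.Serializer.Serialize(gzip, simpleObject);
    }
    Console.WriteLine("protobuf + gzip wrote [{0}] bytes", memStream.Length);
}

For the tiny object in the question this will almost certainly come out larger than the 41 bytes, which is exactly the point above.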
An overview of the wire format is here; it isn't very tricky to understand, and may help you plan how best to further optimize.
At least the C++ library does support writing to and from compressed streams:
https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/io/gzip_stream.h
I'm not sure, though, whether that has been ported to the .NET implementation.

WCF 4.0 REST Upload MS-Excel File

I am trying to upload an MS-Excel file through a WCF REST service.
I used the solution given in the post below:
RESTful WCF service image upload problem
My POST Method is declared as:
[OperationContract]
[WebInvoke(Method = "POST", UriTemplate = "/RFQ")]
[WebContentType("application/octet-stream")]
void UploadRFQDoc(Stream fileContents);
When I debug, the stream content is fine up to the point the call goes out, but when I attach to the service to debug, the Stream fileContents parameter is null, and the service returns [Bad Request]. I am not sending a large file (it is just 50 KB). I am using HttpClient to make the POST.
Here is the client code (RestClient is an HttpClient):
protected void Post(string uri, Stream stream, int length)
{
    var content = HttpContent.Create(output => CopyToStream(stream, output, length),
        "application/octet-stream", length);
    Uri relativeUri = new Uri(uri, UriKind.Relative);
    var resp = RestClient.Post(relativeUri, content);
    ProcessResponse(resp);
}

void CopyToStream(Stream input, Stream output, int length)
{
    var buffer = new byte[length];
    var read = input.Read(buffer, 0, length);
    output.Write(buffer, 0, read);
}
Any clue what else could be going wrong?
Many thanks.
The [WebContentType("application/octet-stream")] attribute was unnecessary here. I commented it out, and everything worked fine :).
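So the working declaration is simply the original one minus that attribute:

[OperationContract]
[WebInvoke(Method = "POST", UriTemplate = "/RFQ")]
void UploadRFQDoc(Stream fileContents);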

WCF streaming files

I need to pass a memory stream to the WCF server; how do I declare this data type in my data contract?
I will eventually need to convert it back to a memory stream and pass it on to my service layer.
[DataContract]
public class FileMessage // class name assumed; it is not shown in the original snippet
{
    [DataMember]
    Stream str = null;

    public Stream File
    {
        get { return str; }
        set { str = value; }
    }
}
Here is the WCF Streaming page. I'm not really sure if (or how) you can do this with a DataContract; the normal way is to specify streams in the OperationContract. Wouldn't that work for you? See the sketch after the summary below.
Short summary:
Sender produces the Stream
Sender does not Close the Stream
Receiver does close the stream
Set the MaxReceivedMessageSize property of the binding to a value larger than the largest item you wish to transfer.
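A minimal sketch of that operation-contract approach (the type and member names here are illustrative, not from the question):

[MessageContract]
public class FileTransferMessage
{
    [MessageHeader(MustUnderstand = true)]
    public string FileName { get; set; }

    [MessageBodyMember(Order = 1)]
    public Stream FileData { get; set; } // the single streamed body member
}

[ServiceContract]
public interface IFileTransfer
{
    [OperationContract]
    void UploadFile(FileTransferMessage msg);
}

// Binding side (sketch): the transfer mode must be streamed, and
// MaxReceivedMessageSize raised above the largest file you expect.
var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.Streamed,
    MaxReceivedMessageSize = 100L * 1024 * 1024
};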

Silverlight changes the io.Stream to byte[]

I have created a WCF service for uploading images, which accepts System.IO.Stream as an input parameter, and I am using streaming. When I added the service reference in the Silverlight project, it automatically changed the parameter of my WCF method from System.IO.Stream to byte[]. Can anyone suggest a way around this so that I get the System.IO.Stream type rather than byte[]?
Thanks in advance.
Silverlight does not support transfer mode streamed: http://forums.silverlight.net/forums/t/119340.aspx
So I think that you are stuck with getting a byte array.
Can you verify that you're not hitting one of the reader quotas in the service? You can try increasing all of them to see if this solves your problem.
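If you want to try that, here is a sketch of raising the quotas in code (assuming basicHttpBinding; the sizes are arbitrary):

var binding = new BasicHttpBinding();
binding.MaxReceivedMessageSize = 10 * 1024 * 1024;              // 10 MB
binding.ReaderQuotas.MaxArrayLength = 10 * 1024 * 1024;         // covers byte[] bodies
binding.ReaderQuotas.MaxStringContentLength = 10 * 1024 * 1024;
binding.ReaderQuotas.MaxBytesPerRead = 64 * 1024;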
I think you should set the transferMode property of your basicHttpBinding to the correct value, as described in this article. And then add the service reference to your Silverlight application again.
http://blogs.msdn.com/b/carlosfigueira/archive/2010/07/08/using-transfermode-streamedresponse-to-download-files-in-silverlight-4.aspx
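In code, that binding change amounts to one property; a sketch (StreamedResponse matches the download scenario in the linked article, which is the mode Silverlight 4 can handle):

var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.StreamedResponse // response streamed, request buffered
};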
I was struggling with the same issue as well, and at last worked out a solution myself. What you can do is:
declare the accepting parameter as a string array in the WCF service;
convert the byte array into a string array on the client side;
after sending, convert the string array back into a byte array on the service side.
E.g. on the WCF side:
[DataContract]
public class FileInfo
{
    [DataMember]
    public string filename;

    [DataMember]
    public string[] StrArr;
}
The receiving function:
public void uploadFile(FileInfo fi)
{
    int len = fi.StrArr.Length;
    byte[] myFileByte = new byte[len];
    for (int i = 0; i < len; i++)
    {
        myFileByte[i] = Convert.ToByte(fi.StrArr[i]);
    }
    // your uploaded file buffer is ready as myFileByte
    // proceeding operations are most welcome here...
}
At the client side:
public void UploadMyFile()
{
    // Take the InputStream from the selected file as iStream
    int len = (int)iStream.Length;
    byte[] buffer = new byte[len];
    iStream.Read(buffer, 0, len); // read the file contents into the buffer
    string[] MyStrArr = new string[len];
    for (int i = 0; i < len; i++)
    {
        MyStrArr[i] = Convert.ToString(buffer[i]);
    }
    // Here your string array is ready to send to the WCF service...
    // I'm confident this code will work, with some file size limitations taken into consideration.
}