How can I unlock a FileStream lock? - wcf

I am implementing a module to upload files in chunks from a client machine to a server. On the server side I am using a WCF SOAP service.
In order to upload files in chunks, I have implemented this sample from Microsoft: http://msdn.microsoft.com/en-us/library/aa717050.aspx. I have been able to get it working in a simple scenario, so it does upload files in chunks. This chunking module uses WSDualHttpBinding.
I need to implement a feature to re-upload a file in case the upload process is stopped for any reason (user choice, machine turned off, etc.) while that specific file is being uploaded.
At my WCF service I have a method that handles the file writing at the server side:
public void UploadFile(RemoteFileInfo request)
{
    FileInfo fi = new FileInfo(Path.Combine(Utils.StorePath(), request.FileName));
    if (!fi.Directory.Exists)
    {
        fi.Directory.Create();
    }

    FileStream file = new FileStream(fi.FullName, FileMode.Create, FileAccess.Write);
    int count;
    byte[] buffer = new byte[4096];
    while ((count = request.FileByteStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        file.Write(buffer, 0, count);
        file.Flush();
    }

    request.FileByteStream.Close();
    file.Position = 0;
    file.Close();

    if (request.FileByteStream != null)
    {
        request.FileByteStream.Close();
        request.FileByteStream.Dispose();
    }
}
The chunking module keeps sending chunks while the service consumes them through request.FileByteStream.Read(buffer, 0, buffer.Length).
Once the FileStream is initialized, the file is locked (the normal behavior when opening a FileStream for writing). The problem is that if I stop the upload while the send/receive is in progress, the channel used by the chunking module is not cancelled, so the file stays locked: the WCF service keeps waiting for more data until the send timeout expires (the timeout is 1 hour because I need to upload files larger than 2.5 GB). On the next attempt to upload the same file, I get an exception at the WCF service because the FileStream cannot be initialized again for that file.
I would like to know if there is a way to avoid or remove the file lock, so that on the next run I can re-upload the same file even though the previous FileStream already locked it.
Any help would be appreciated. Thanks.

I don't personally like this sort of solution. Maintaining the connection is not ideal.
Using your example, you could be halfway through a 2.5 GB file when the process is aborted, and you end up in the situation you describe above. To make matters worse, you then need to resubmit all of the data that has already been submitted.
I would go the route of handling the blocks myself and appending them to the same file server side: call a WCF method that indicates a file is starting, upload the data in blocks, and then call another method when the upload is complete. If you are confident that the file names are unique, you could even accomplish this with a single method call.
Something like:
ulong StartFile(string filename) // This returns the data already uploaded
void UploadFile(string filename, ulong start, byte[] data)
void EndFile(string filename) // Just as a safety net
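A rough sketch of how that contract and its server side might look; the interface and class names, the storage path, and the resume logic are illustrative assumptions, not code from the question:

using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IChunkedUpload   // hypothetical contract matching the signatures above
{
    [OperationContract]
    ulong StartFile(string fileName);                              // returns bytes already stored

    [OperationContract]
    void UploadFile(string fileName, ulong start, byte[] data);

    [OperationContract]
    void EndFile(string fileName);                                 // safety net / finalize
}

public class ChunkedUploadService : IChunkedUpload
{
    private static string PathFor(string fileName)
    {
        // Illustrative store path; strip any client-supplied directory components.
        return Path.Combine(@"C:\Uploads", Path.GetFileName(fileName));
    }

    public ulong StartFile(string fileName)
    {
        var fi = new FileInfo(PathFor(fileName));
        return fi.Exists ? (ulong)fi.Length : 0UL;                 // tells the client where to resume
    }

    public void UploadFile(string fileName, ulong start, byte[] data)
    {
        // The stream is opened and closed per block, so no lock outlives a single call.
        using (var fs = new FileStream(PathFor(fileName), FileMode.OpenOrCreate, FileAccess.Write))
        {
            fs.Seek((long)start, SeekOrigin.Begin);
            fs.Write(data, 0, data.Length);
        }
    }

    public void EndFile(string fileName)
    {
        // e.g. verify a checksum or rename from a temporary name, as discussed below.
    }
}

Because StartFile reports how many bytes are already on disk, an interrupted transfer can resume from that offset instead of resubmitting everything.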
If you don't want to go the route outlined above, a simple mitigation (it doesn't answer your question directly) is to use a temporary file name while the upload is in progress and rename the file once it is complete. You should really adopt this approach anyway, to prevent an application on the server from picking up the file before the upload is finished.
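For the temporary-name approach, the rename on completion can be as simple as the sketch below (the .partial suffix and the paths are just an illustrative convention):

using System.IO;

string finalPath = Path.Combine(@"C:\Uploads", "bigfile.bin");
string tempPath = finalPath + ".partial";

// ... the upload writes to tempPath ...

if (File.Exists(finalPath))
    File.Delete(finalPath);          // discard any stale copy from an earlier attempt
File.Move(tempPath, finalPath);      // consumers only ever see completed files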

Related

How to upload file to stream from swagger?

I was wondering how you would get the Swagger UI to upload a file as a stream to an ASP.NET Core controller.
Here is a link describing the difference between uploading small files vs. big files.
Here is a link describing how to implement the upload for a small file, but it doesn't elaborate on how to implement a stream:
https://www.janaks.com.np/upload-file-swagger-ui-asp-net-core-web-api/
Thanks,
Derek
I'm not aware of any capability to work with Stream types in the request parameters directly, but the IFormFile interface does give you access to the underlying stream. So I would keep the IFormFile type in your request params, then either:
Copy it in full to a memory stream OR
Open the stream for read
In my case I wanted the full Base64 string (and my files are only a few hundred KB), so I used something like this:
string fileBase64 = null;
using (var memStream = new MemoryStream())
{
    model.FormFile.CopyTo(memStream);
    fileBase64 = Convert.ToBase64String(memStream.ToArray());
}
The MemoryStream is probably not appropriate in your case, since you mentioned large files that you will not want to keep in memory in their entirety.
So I would suggest opening the stream for reading, e.g.:
using (var fileStream = model.FormFile.OpenReadStream())
{
    // Do yo' thang
}
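For example, if the goal is just to get the upload onto disk without buffering it all in memory, a sketch along these lines should work inside an async controller action (the target path and the model.FormFile property are assumptions carried over from above):

// Stream the posted file to disk in small buffered chunks; the whole file is never held in memory.
var fileName = Path.GetFileName(model.FormFile.FileName);      // strip any client-supplied directories
var targetPath = Path.Combine(Path.GetTempPath(), fileName);

using (var input = model.FormFile.OpenReadStream())
using (var output = new FileStream(targetPath, FileMode.Create, FileAccess.Write))
{
    await input.CopyToAsync(output);
}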

How to check whether Azure Blob Storage upload was successful?

I'm using an Azure SAS URL to upload a file to a blob storage:
var blockBlob = new Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob(new System.Uri(sasUrl));
blockBlob.UploadFromFile(filePath);
The file exists on my disk, and the URL should be correct since it is automatically retrieved from the Windows Store Ingestion API (and, if I slightly change one character in the URL's signature part, the upload fails with HTTP 403).
However, when checking
var blobs = blockBlob.Container.ListBlobs();
the result is Count = 0, so I'm wondering whether the upload was successful. Unfortunately, the UploadFromFile method (like the UploadFromStream method) has no return value, so I'm not sure how to retrieve the result of the upload.
If I try to connect to the SAS URL using Azure Storage Explorer, listing blob containers fails with the error "Authentication Error. Signature fields not well formed". I tried URL-escaping the URL's signature part, since that seems to be the cause of that error in some similar cases, but it doesn't solve the problem.
Is there any way to check the status of a blob upload? Does anybody have an idea why an auto-generated URL (delivered by one of Microsoft's official APIs) cannot be connected to using Azure Storage Explorer?
Please examine the sp field of your SAS. It shows the permissions you are granted on the blob. For example, sp=rw means you can read the blob and write content to it using this SAS; sp=w means you can only write content to the blob using this SAS.
If you have the read right, you can copy the SAS URL to the browser address bar. The browser will download or show the blob content for you.
Is there any way to check the status of a blob upload?
If no exception is thrown from your code, it means the blob has been uploaded successfully. Otherwise, an exception will be thrown.
try
{
    blockBlob.UploadFromFile(filePath);
}
catch (Exception ex)
{
    // upload failed
}
You can also confirm it using a web debugging proxy tool (e.g. Fiddler) to capture the response message from the storage server. A 201 Created status code is returned if the blob has been uploaded successfully.
Does anybody have an idea why an auto-generated URL (delivered by one of Microsoft's official APIs) cannot be connected to using Azure Storage Explorer?
Azure Storage Explorer only allows us to connect to a storage account using a SAS, or to attach a storage service (blob container, queue, or table) using a SAS. It doesn't allow us to connect to an individual blob using a SAS.
In the case of a synchronous upload, we can use the exception-based approach and also cross-check blockBlob.Properties.Length. Before the file is uploaded it is -1; after the upload completes, it becomes the size of the uploaded file.
So we can add a check on the blob length, which tells us the state of the upload.
try
{
    blockBlob.UploadFromFile(filePath);
    if (blockBlob.Properties.Length >= 0)
    {
        // File uploaded successfully.
        // You can take any action here.
    }
}
catch (Exception ex)
{
    // upload failed
}

Cannot call handler ashx more than once when using Response.TransmitFile

I've created an HttpHandler (.ashx) for clients to download content (videos) from my website. At first I was using the WriteFile method, but I realized it required too much memory, so I decided to change to the TransmitFile method.
But one weird thing happened: I wasn't able to run more than one download at a time anymore. I had to wait for a download to finish before starting the next one.
Basically the code is like this:
System.IO.FileInfo file = new System.IO.FileInfo(file_path);
context.Response.Clear();

if (flagH264)
{
    context.Response.ContentType = "video/mp4";
}
else
{
    context.Response.ContentType = "video/x-ms-wmv";
}

context.Response.AddHeader("Content-Length", file.Length.ToString());
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + name);
//context.Response.WriteFile(file_path.Trim());
context.Response.TransmitFile(file_path.Trim());
context.Response.Flush();
Does anyone know what this problem might be?
I found out what the problem was.
The HttpHandler (.ashx) I was using for the download page implemented the IRequiresSessionState interface, which gives read/write access to session data. While a request holds read/write access to the session, ASP.NET serializes all other requests from the same session, so the second download was blocked until the first one finished (TransmitFile keeps the request, and therefore the session lock, alive for the whole transfer).
The solution was to replace IRequiresSessionState with IReadOnlySessionState, which grants only read access to session data; I didn't need to modify the session during the download anyway, and requests are no longer serialized.
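A minimal sketch of what the fixed handler declaration looks like; the class name, content type, and file path are illustrative:

using System.Web;
using System.Web.SessionState;

// IReadOnlySessionState takes only a read lock on the session, so several downloads
// from the same session can run in parallel while TransmitFile is streaming.
public class VideoDownloadHandler : IHttpHandler, IReadOnlySessionState
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string filePath = @"C:\Videos\sample.mp4";   // illustrative path
        context.Response.Clear();
        context.Response.ContentType = "video/mp4";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=sample.mp4");
        context.Response.TransmitFile(filePath);
        context.Response.Flush();
    }
}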

memory exception using wcf wshttpbinding

I have an application that uploads files to a server. I am using netTcpBinding and wsHttpBinding. When a file is larger than 200 MB, I get a memory exception. Looking for a workaround, I have seen people recommend streaming, and it does work with netTcpBinding for large files (>1 GB), but what would the approach be when using wsHttpBinding? Should I change to basicHttpBinding? Thanks.
I suggest you expose another endpoint just to upload such large data, with a binding that supports streaming. In a previous project we needed to upload files to the server as part of a business process, and we ended up creating two endpoints: one dedicated to file upload and another for all other business functionality.
The streaming data service can be a generic service that streams any data to the server and perhaps returns a token identifying the data on the server. For subsequent requests, this token can be passed along to manipulate the data on the server.
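A rough sketch of such a dedicated streaming contract; the names, the token scheme, and the temp-file storage are assumptions for illustration:

using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IFileUploadService   // hypothetical upload-only endpoint
{
    [OperationContract]
    Guid Upload(Stream data);         // with a streamed binding the request body is just the stream
}

public class FileUploadService : IFileUploadService
{
    public Guid Upload(Stream data)
    {
        var token = Guid.NewGuid();
        var path = Path.Combine(Path.GetTempPath(), token.ToString());

        using (var file = File.Create(path))
        {
            data.CopyTo(file);        // copies in small buffers, never holds the whole file in memory
        }
        return token;                 // later business calls can reference the upload by this token
    }
}

The binding for this endpoint would set transferMode="Streamed" (or "StreamedRequest") and a large maxReceivedMessageSize; basicHttpBinding and netTcpBinding support this, while wsHttpBinding does not, which is why a separate endpoint is needed.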
If you don't want to (or cannot, for legitimate reasons) change the binding or use streaming, what you can do is have a method with a signature along the lines of the following:
void UploadFile(string fileName, long offset, byte[] data)
Instead of sending the whole file in one call, you send small packets and tell the service where the data should be placed. You can of course add more data, such as the total file size or a CRC of the file, so you can tell whether the transfer was successful.
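A minimal client-side sketch of that loop, assuming client is a generated proxy exposing the UploadFile operation above and that localPath and fileName are already known (the chunk size is arbitrary):

// Read the local file in fixed-size chunks and push each one with its offset.
const int chunkSize = 64 * 1024;
byte[] buffer = new byte[chunkSize];

using (var fs = File.OpenRead(localPath))
{
    ulong offset = 0;
    int read;
    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        byte[] chunk = buffer;
        if (read < buffer.Length)                       // the last block may be shorter
        {
            chunk = new byte[read];
            Array.Copy(buffer, chunk, read);
        }
        client.UploadFile(fileName, offset, chunk);     // one small, self-contained call per chunk
        offset += (ulong)read;
    }
}

On the server, each call opens the target file, seeks to start, writes the block, and closes the file again, so an aborted transfer can be resumed from the last acknowledged offset.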

How to show feedback while streaming large files with WCF

I'm sending large files over WCF and I'm using transferMode="Streamed" in order to get this working, and it is working fine.
The thing is sometimes those files are just too big, and I want to give the client some sort of feedback about the progress.
Does anybody have a good solution/idea on how to accomplish this?
EDIT: I don't control the reading of the file on either side (client or server); if I did, I could simply report progress from the stream's Read function.
EDIT2: Here is part of my code, to help others understand my problem.
Here's my contract
[OperationContract]
FileTransfer Update(FileTransfer request);
and here's the definition of FileTransfer
[System.ServiceModel.MessageContractAttribute(WrapperName = "FileTransfer", WrapperNamespace = "http://tempuri.org/", IsWrapped = true)]
public class FileTransfer : IDisposable
{
    [System.ServiceModel.MessageBodyMemberAttribute(Namespace = "dummy", Order = 0)]
    public Stream FileByteStream;

    public void Dispose()
    {
        if (FileByteStream != null)
        {
            FileByteStream.Close();
            FileByteStream = null;
        }
    }
}
so, in my service (hosted on IIS) I just have something like this:
request.FileByteStream;
and WCF itself reads the stream, right?
I hope this helps people understand my problem... please let me know if you need further info.
What about adding the total stream size as a custom SOAP header (using MessageContracts)? Then you can process the stream on the receiving side in chunks (e.g. reading into a buffer of a defined size in a loop), and for each chunk you can notify the client of the processed increment relative to the expected size.
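A sketch of what such a message contract with a size header and a chunked read loop could look like; the type and member names are illustrative, not from the question's code:

using System;
using System.IO;
using System.ServiceModel;

[MessageContract]
public class FileTransferWithSize        // hypothetical variant of the FileTransfer contract
{
    [MessageHeader]
    public long TotalSize;               // travels as a SOAP header alongside the streamed body

    [MessageBodyMember]
    public Stream FileByteStream;
}

public static class StreamReceiver
{
    // Receiving side: read in fixed-size chunks and report progress against TotalSize.
    public static void Receive(FileTransferWithSize request, Action<double> reportProgress)
    {
        var buffer = new byte[64 * 1024];
        long received = 0;
        int read;
        while ((read = request.FileByteStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            // ... write the chunk wherever it needs to go ...
            received += read;
            reportProgress(100.0 * received / request.TotalSize);
        }
    }
}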
The only way I see right now is to create another operation that reports the number of bytes read so far by the streamed operation. This would require enabling sessions and multi-threading on the server side, and implementing an asynchronous call from the client together with periodic calls to the "progress reporting" operation.
The client knows the size of the stream (assuming the client is the sender), so it can derive a progress percentage from the known total size and the byte count reported by the server.
EDIT:
My comment works under the assumption that the client is uploading data. So the server knows how much data it has already read from the stream, while the client knows the total size of the data.
If the server exposes an operation that reports the volume of data it has read so far, the client will be able to calculate the progress percentage by calling this operation.
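A rough sketch of that idea, with a per-session service where the streamed upload and a progress query share state; all names are illustrative, and whether a particular binding allows a second call on the same session while a streamed call is in flight needs to be verified for your configuration:

using System.IO;
using System.ServiceModel;
using System.Threading;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IUploadWithProgress     // hypothetical contract
{
    [OperationContract]
    void Upload(Stream data);

    [OperationContract]
    long GetBytesReceived();             // the client polls this while Upload is running
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class UploadWithProgressService : IUploadWithProgress
{
    private long bytesReceived;

    public void Upload(Stream data)
    {
        var buffer = new byte[64 * 1024];
        int read;
        while ((read = data.Read(buffer, 0, buffer.Length)) > 0)
        {
            // ... persist the chunk ...
            Interlocked.Add(ref bytesReceived, read);   // visible to concurrent GetBytesReceived calls
        }
    }

    public long GetBytesReceived()
    {
        return Interlocked.Read(ref bytesReceived);
    }
}

The client, knowing the total size it is sending, divides the reported byte count by that total to get a progress percentage.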