How to upload a file as a stream from Swagger? - file-upload

I was wondering how you would generate the Swagger UI to upload a file as a stream to an ASP.NET Core controller.
Here is a link describing the difference between uploading small files vs. big files.
Here is a link describing how to implement the upload for a small file, but it doesn't elaborate on how to implement a stream:
https://www.janaks.com.np/upload-file-swagger-ui-asp-net-core-web-api/
Thanks,
Derek

I'm not aware of any capability to work with Stream types in the request parameters directly, but the IFormFile interface exposes the underlying stream. So I would keep the IFormFile type in your request params, then either:
copy it in full to a memory stream, or
open the stream for reading.
In my case I wanted the base64 bytes in full (and my files are only a few hundred KB), so I used something like this:
string fileBase64 = null;
using (var memStream = new MemoryStream())
{
    model.FormFile.CopyTo(memStream);
    fileBase64 = Convert.ToBase64String(memStream.ToArray());
}
The MemoryStream is probably not appropriate in your case, since you mentioned large files, which you will not want to keep in memory in their entirety. So I would suggest opening the stream for reading, e.g.:
using (var fileStream = model.FormFile.OpenReadStream())
{
    // Read from fileStream in chunks here, without buffering the whole file
}
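For completeness, here is a minimal controller action sketch (the action name, route, and target path are illustrative, not from the question). Keeping the IFormFile parameter lets Swagger UI render a file picker, and the body is then copied to disk in small buffers rather than held in memory:
[HttpPost("upload")]
public async Task<IActionResult> Upload(IFormFile formFile)
{
    if (formFile == null || formFile.Length == 0)
        return BadRequest("No file supplied.");

    // Copy the upload to disk in small buffers instead of reading it all at once.
    var targetPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
    using (var target = System.IO.File.Create(targetPath))
    using (var upload = formFile.OpenReadStream())
    {
        await upload.CopyToAsync(target);
    }

    return Ok(new { formFile.FileName, formFile.Length });
}
Note that model binding may still buffer a large IFormFile to a temporary file on the server; for truly unbounded streaming, ASP.NET Core's MultipartReader is the usual route.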

Related

Streaming mPDF Output for Download

This is more of a conceptual question, but I'm wondering if it's possible to stream the output of mPDF directly to the user in a download (e.g. without saving in a temp folder on the server or loading into the user's browser).
I'm using a similar method successfully for downloading a zip file of S3 photos using ZipStream and AWS PHP S3 Stream Wrapper which works very well, so I would like to employ a similar method for my PDF generation.
I use the mPDF library on Heroku to generate reports that include S3 images. The mPDF documentation shows four output options, including inline and download; inline loads the PDF right into the user's browser, while download forces the download prompt (the desired behavior).
I've enabled the S3 Stream Wrapper and embedded images in the PDF per the mPDF Image() documentation, like this:
$mpdf->imageVars['myvariable'] = '';
while (!feof($streamRead)) {
    // Read 1,024 bytes from the stream
    $mpdf->imageVars['myvariable'] .= fread($streamRead, 1024);
}
fclose($streamRead);
$imageHTML = '<img src="var:myvariable" class="report-img" />';
$mpdf->WriteHTML($imageHTML);
I've also added the header('X-Accel-Buffering: no'); call, which was required to get ZipStream working in the Heroku environment, but the script always times out if there are more than a couple of images.
Is it possible to immediately prompt the download and just have the data stream directly to the user? I'm hoping this method can be used for more than just zip downloads but haven't had luck with this particular application yet.

Upload large file with Multi Part Form Data using RestSharp

I am developing a small application to upload a large file using a REST API that accepts multipart form data. The upload should happen as a stream. I am using RestSharp to invoke this API. I can upload small files, but when I tried to upload a 1 GB file it took a long time and no file ever appeared at the destination. I am not sure whether I am streaming or loading the complete file into memory.
Please find my code below, and please share C# sample code that uploads a large file as a multipart form data stream. Is there anything I am missing in the code below?
var client = new RestClient("https://xxxxxxxxxxxxx/UploadFile");
RestRequest request = new RestRequest("/", Method.POST);
request.AddHeader("FileName", txtFileName.Text.Trim());
request.AddHeader("Content-Type", "multipart/form-data");
request.AddFile("fileData", txtFilePath.Text.Trim());
request.ReadWriteTimeout = 2147483647;
request.Timeout = 2147483647;
var response = client.Execute(request);
Thanks,
Rajkumar
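For what it's worth, one way to be certain the body is streamed rather than buffered is HttpClient with StreamContent; below is a minimal sketch (not RestSharp), reusing the placeholder URL and form field name from the question:
using System;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class Uploader
{
    static async Task UploadLargeFileAsync(string filePath)
    {
        using (var client = new HttpClient { Timeout = Timeout.InfiniteTimeSpan })
        using (var fileStream = File.OpenRead(filePath))
        using (var form = new MultipartFormDataContent())
        {
            // StreamContent reads from the FileStream while the request is being
            // sent, so the 1 GB file is never loaded into memory in one piece.
            form.Add(new StreamContent(fileStream), "fileData", Path.GetFileName(filePath));

            using (var response = await client.PostAsync("https://xxxxxxxxxxxxx/UploadFile", form))
            {
                response.EnsureSuccessStatusCode();
            }
        }
    }
}
Note that MultipartFormDataContent sets the Content-Type header (including the boundary) itself, so it should not be added by hand as in the RestSharp code above.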

Reading Xml from an absolute path

I need to access a remote XML document from a WCF service. Right now I have:
XmlReader reader = XmlReader.Create("path");
But since the XML doc is elsewhere on our network, I need to give the XmlReader an absolute path rather than having it look deeper in the project folder. How do I do this? I've found surprisingly little information about it, and it seems like it should be a simple thing to do. Any help is appreciated!
Thanks
You can use the overload of XmlReader.Create that accepts a Stream, as follows:
using (FileStream fileStream = new FileStream(@"\\computername\shared path", FileMode.Open, FileAccess.Read))
using (XmlReader reader = XmlReader.Create(fileStream))
{
    // perform your custom code with the XmlReader
}
Please note that you need appropriate permissions to open the remote stream. In a WCF service context you may need to use impersonation.
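If impersonation is needed, a hedged sketch of how the operation might be attributed (the method name is hypothetical, and the binding must be configured to allow impersonation):
[OperationBehavior(Impersonation = ImpersonationOption.Required)]
public string GetRootElementName()
{
    // Runs under the caller's Windows identity, so the share's ACLs apply to them.
    using (var fileStream = new FileStream(@"\\computername\shared path", FileMode.Open, FileAccess.Read))
    using (var reader = XmlReader.Create(fileStream))
    {
        reader.MoveToContent();
        return reader.Name;
    }
}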

How can I unlock a FileStream lock?

I am implementing a module to upload files in chunks from a client machine to a server. On the server side I am using a WCF SOAP service.
In order to upload files in chunks, I have implemented this sample from Microsoft http://msdn.microsoft.com/en-us/library/aa717050.aspx. I have been able to get it working in a simple scenario, so it does upload the files in chunks. This chunking module is using a WSDualHttpBinding.
I need to implement a feature to re-upload a file in case the upload process is stopped for any reason (user choice, machine turned off, etc) while that specific file is being uploaded.
At my WCF service I have a method that handles the file writing at the server side:
public void UploadFile(RemoteFileInfo request)
{
    FileInfo fi = new FileInfo(Path.Combine(Utils.StorePath(), request.FileName));
    if (!fi.Directory.Exists)
    {
        fi.Directory.Create();
    }
    FileStream file = new FileStream(fi.FullName, FileMode.Create, FileAccess.Write);
    int count;
    byte[] buffer = new byte[4096];
    while ((count = request.FileByteStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        file.Write(buffer, 0, count);
        file.Flush();
    }
    request.FileByteStream.Close();
    file.Position = 0;
    file.Close();
    if (request.FileByteStream != null)
    {
        request.FileByteStream.Close();
        request.FileByteStream.Dispose();
    }
}
The chunking module keeps sending chunks while request.FileByteStream.Read(buffer, 0, buffer.Length) consumes them.
Once the FileStream is initialized, the file is locked (the normal behavior when opening a FileStream for writing). The problem is that if I stop the upload while the send/receive is in progress, the channel used by the chunking module is not cancelled, so the file stays locked: the WCF service keeps waiting for more data until the send timeout expires (the timeout is 1 hour, since I need to upload files of 2.5 GB and more). If I then try to upload the same file again, I get an exception at the WCF service because the FileStream cannot be initialized again for the same file.
I would like to know if there is a way to avoid/remove the file lock, so that on the next run I can re-upload the same file even though the previous FileStream had locked it.
Any help would be appreciated. Thanks.
I don't personally like this sort of solution. Maintaining the connection is not ideal.
Using your example, you could be halfway through a 2.5 GB file when the process is aborted, and you end up in the situation you describe. To make matters worse, you then need to resubmit all of the data that has already been submitted.
I would go the route of handling the blocks myself and appending them to the same file server side. Call a WCF method that indicates a file is starting, upload the data in blocks, and then call another method when the upload is complete. If you are confident that the file names are unique, you could even accomplish this with a single method call.
Something like:
ulong StartFile(string filename)    // returns the number of bytes already uploaded
void UploadFile(string filename, ulong start, byte[] data)
void EndFile(string filename)       // just as a safety net
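A hedged sketch of what those operations might look like server side (Utils.StorePath() is borrowed from your code; real code would also sanitise the file name and handle IO errors):
// Returns how many bytes of this file were already received,
// so the client can resume from that offset.
public ulong StartFile(string filename)
{
    var path = Path.Combine(Utils.StorePath(), filename);
    return File.Exists(path) ? (ulong)new FileInfo(path).Length : 0;
}

public void UploadFile(string filename, ulong start, byte[] data)
{
    var path = Path.Combine(Utils.StorePath(), filename);
    using (var file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
    {
        file.Seek((long)start, SeekOrigin.Begin);
        file.Write(data, 0, data.Length);
    }
    // The FileStream is disposed after every block, so no lock outlives a call.
}

public void EndFile(string filename)
{
    // Safety net: verify the final length, rename from a temporary name, etc.
}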
If you don't want to go that route, a simple mitigation for your problem above (it doesn't answer your question directly) is to use a temporary file name while you are doing the upload and then rename the file once the upload is complete. You should really adopt this approach anyway, to prevent an application on the server from picking up the file before the upload is complete.
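That rename is only a couple of lines (paths illustrative, reusing names from your code):
// Write all chunks to a temporary name, then publish it on completion so
// nothing on the server ever sees a half-written file.
string finalPath = Path.Combine(Utils.StorePath(), request.FileName);
string tempPath = finalPath + ".partial";
// ... upload completes ...
File.Move(tempPath, finalPath);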

Getting a content length for an HttpWebRequest upload that streams from another web file, when the source doesn't supply Content-Length?

Background: I'm trying to stream an existing web page to a separate web application, using HttpWebRequest/HttpWebResponse in C#. The issue is that I set the upload request's content length from the file download's content length, but this fails when the source web page is on a web server for which the HttpWebResponse doesn't provide a content length.
HttpWebRequest downloadRequest = WebRequest.Create(new Uri("downloaduri")) as HttpWebRequest;
using (HttpWebResponse downloadResponse = downloadRequest.GetResponse() as HttpWebResponse)
{
    var uploadRequest = (HttpWebRequest)WebRequest.Create(new Uri("uripath"));
    uploadRequest.Method = "POST";
    uploadRequest.ContentLength = downloadResponse.ContentLength; // #### fails when this is -1
    // ...
}
QUESTION: How could I update this approach to cater for the case where the download response doesn't have a content length set? Would it be to somehow use a MemoryStream, perhaps? Any sample code would be appreciated.
If you're happy to download the response from the other web server completely, that would indeed make life easier. Just repeatedly write into a MemoryStream as you fetch from the first web server; then you know the length to set for the second request, and you can write the data in easily, especially as MemoryStream has a WriteTo method to write its contents to another stream (see the sketch after this list).
The downside is that you'll use a lot of memory if it's a large file. Is that likely to be a problem in your situation? Alternatives include:
Writing to a file instead of using a MemoryStream. You'll need to clean up the file afterwards, of course - you're basically using the file system as bigger memory :)
Using a chunked transfer encoding to "read a chunk, write a chunk"; this may be fiddly to get right - it's certainly not something I've done before.
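Here's a minimal sketch of the buffer-then-upload approach described above, reusing the placeholder URIs from the question:
var downloadRequest = (HttpWebRequest)WebRequest.Create(new Uri("downloaduri"));
using (var downloadResponse = (HttpWebResponse)downloadRequest.GetResponse())
using (var buffer = new MemoryStream())
{
    // Pull the whole download into memory so its length becomes known.
    downloadResponse.GetResponseStream().CopyTo(buffer);

    var uploadRequest = (HttpWebRequest)WebRequest.Create(new Uri("uripath"));
    uploadRequest.Method = "POST";
    uploadRequest.ContentLength = buffer.Length;
    using (var requestStream = uploadRequest.GetRequestStream())
    {
        buffer.WriteTo(requestStream); // WriteTo copies the MemoryStream's contents
    }
    using (var uploadResponse = uploadRequest.GetResponse())
    {
        // inspect uploadResponse as needed
    }
}
For the chunked alternative, HttpWebRequest exposes SendChunked = true, in which case ContentLength doesn't need to be set at all.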