Is there any difference between uploading an IFormFile vs a Base64 string in a .NET Core Web API? - file-upload

I'm using a .NET Core Web API to accept, upload, and download file content.
I've already tried IFormFile and a simple Base64-encoded file content string:
UploadFile(IFormFile file)
UploadFile([FromBody] string base64Filecontentstring)
I'm just wondering whether there is any difference between the two. If there is, which one should you use, and when?

For small files Base64 works fine: it's easy to handle and avoids a dependency on Microsoft.AspNetCore.Http's IFormFile in your Domain layer.
But sending large files as Base64 in JSON is not a good idea: the encoded payload is roughly a third larger, and converting it back to the actual bytes for copying on the server takes a lot of memory and time.
I suggest this excellent article: https://medium.com/@ma1f/file-streaming-performance-in-dotnet-4dee608dd953, which shows that Base64 performs 5x-20x worse than streaming.
Beyond that, it's up to you.
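For concreteness, a minimal sketch of the two endpoint shapes; the controller and DTO names here are illustrative, not from the question:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Sketch only: the two upload shapes side by side in an ASP.NET Core controller.
[ApiController]
[Route("api/files")]
public class FilesController : ControllerBase
{
    // multipart/form-data: the framework hands you a stream, so large files
    // never need to sit fully in memory.
    [HttpPost("form")]
    public async Task<IActionResult> UploadForm(IFormFile file)
    {
        // Real code should sanitize file.FileName before using it in a path.
        var path = Path.Combine(Path.GetTempPath(), Path.GetFileName(file.FileName));
        await using var target = System.IO.File.Create(path);
        await file.CopyToAsync(target);
        return Ok(file.Length);
    }

    public record Base64Upload(string FileName, string Content);

    // JSON body: the whole Base64 string is buffered and decoded in memory,
    // which is exactly what hurts with large files.
    [HttpPost("base64")]
    public async Task<IActionResult> UploadBase64([FromBody] Base64Upload upload)
    {
        byte[] bytes = Convert.FromBase64String(upload.Content);
        var path = Path.Combine(Path.GetTempPath(), Path.GetFileName(upload.FileName));
        await System.IO.File.WriteAllBytesAsync(path, bytes);
        return Ok(bytes.Length);
    }
}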

Related

Implementing a basic S3-compatible API with akka-http

I'm trying to implement a file storage service with a basic S3-compatible API using akka-http.
I use the S3 Java SDK to test my service API, and I hit a problem with the putObject(...) method: I can't consume the file properly on my akka-http backend. I wrote a simple route for test purposes:
def putFile(bucket: String, file: String) = put {
  extractRequestEntity { ent =>
    val finishedWriting = ent.dataBytes.runWith(FileIO.toPath(new File(s"/tmp/${file}").toPath))
    onComplete(finishedWriting) { ioResult =>
      complete("Finished writing data: " + ioResult)
    }
  }
}
It saves the file, but the file is always corrupted. Looking inside the file I found lines like this:
"20000;chunk-signature=73c6b865ab5899b5b7596b8c11113a8df439489da42ddb5b8d0c861a0472f8a1"
When I PUT a file with any other REST client it works exactly as expected.
I know S3 uses the "Expect: 100-continue" header, and maybe that is what causes the problem. I really can't figure out how to deal with this. Any help is appreciated.
This isn't exactly corrupted. Your service is not accounting for one of the four¹ ways S3 supports uploads to be sent on the wire, using Content-Encoding: aws-chunked and x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD.
It's a non-standards-based mechanism for streaming an object, and includes chunks that look exactly like this:
string(IntHexBase(chunk-size)) + ";chunk-signature=" + signature + \r\n + chunk-data + \r\n
...where IntHexBase() is pseudocode for a function that formats an integer as a hexadecimal number as a string.
This chunk-based algorithm is similar to, but not compatible with, Transfer-Encoding: chunked, because it embeds checksums in the stream.
Why did they make up a new HTTP transfer encoding? It's potentially useful on the client side because it eliminates the need to either "read your payload twice or buffer [the entire object payload] in memory [concurrently]" -- one or the other of which is otherwise necessary if you are going to calculate the x-amz-content-sha256 hash before the upload begins, as you otherwise must, since it's required for integrity checking.
I am not overly familiar with the internals of the Java SDK, but this type of upload might be triggered by using .withInputStream(), or it might be standard behavior for files too, or for files over a certain size.
Your minimum workaround would be to throw an HTTP error if you see x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD in the request headers since you appear not to have implemented this in your API, but this would most likely only serve to prevent storing objects uploaded by this method. The fact that this isn't already what happens automatically suggests that you haven't implemented x-amz-content-sha256 handling at all, so you are not doing the server-side payload integrity checks that you need to be doing.
For full compatibility, you'll need to implement the algorithm supported by S3 and assumed to be available by the SDKs, unless the SDKs specifically support a mechanism for disabling this algorithm -- which seems unlikely, since it serves a useful purpose, particularly (it appears) for streams whose length is known but that aren't seekable.
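To make the framing concrete, here is a minimal sketch of stripping it from a request body, written in C# (the language this page opened with) rather than Scala, and deliberately skipping signature verification, which a real S3-compatible server must perform:

using System;
using System.IO;
using System.Text;

// Sketch only: strips aws-chunked framing; chunk signatures are NOT verified here.
static class AwsChunkedDecoder
{
    public static byte[] Decode(Stream body)
    {
        using var payload = new MemoryStream();
        while (true)
        {
            // Header line, e.g. "20000;chunk-signature=73c6b865ab5899b5..."
            string header = ReadCrLfLine(body);
            int size = Convert.ToInt32(header.Split(';')[0], 16);
            if (size == 0)
                break;                          // zero-size chunk terminates the stream
            var chunk = new byte[size];
            int total = 0;
            while (total < size)                // Stream.Read may return short counts
            {
                int n = body.Read(chunk, total, size - total);
                if (n == 0) throw new EndOfStreamException();
                total += n;
            }
            payload.Write(chunk, 0, size);
            ReadCrLfLine(body);                 // consume the \r\n after chunk-data
        }
        return payload.ToArray();
    }

    static string ReadCrLfLine(Stream s)
    {
        var sb = new StringBuilder();
        int b;
        while ((b = s.ReadByte()) != -1 && b != '\n')
            if (b != '\r') sb.Append((char)b);
        return sb.ToString();
    }
}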
¹ one of four -- the other three are a standard PUT, a web-based html form POST, and the multipart API that is recommended for large files and mandatory for files larger than 5 GB.

Uploading a file via the JAX-RS Client interface, with a third-party server

I need to invoke a remote REST interface handler and submit a file in the request body. Please note that I don't control the server: I cannot change the request to be multipart, and the client has to work in accordance with an external specification.
So far I've managed to make it work like this (omitting headers etc. for brevity):
byte[] data = readFileCompletely();
client.target(url).request().post(Entity.entity(data, "file/mimetype"));
This works, but it will fail with huge files that don't fit into memory. And since I have no restriction on file size, this is a concern.
Question: is it somehow possible to use streams or something similar to avoid reading the whole file into memory?
If possible, I'd prefer to avoid implementation-specific extensions. If not, a solution that works with RESTEasy (on Wildfly) is also acceptable.
RESTEasy as well as Jersey support InputStream out of the box, so simply use Entity.entity(inputStream, "application/octet-stream"); or whatever Content-Type header you want to set.
You can also go low-level and construct the HTTP request yourself using plain java.net.URLConnection.
I have not tried it myself, but there is example code that reads a local file and writes it to the request stream without loading it into a byte array:
Upload files from Java client to a HTTP server
Of course this solution requires more manual coding, but it should work (unless java.net.URLConnection loads the whole file into memory).
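For readers coming from the first question, the same stream-instead-of-buffer idea sketched in C#; the URL and file name are placeholders:

using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

// Sketch: stream a file as the raw request body instead of buffering it.
using var client = new HttpClient();
await using var file = File.OpenRead("big-file.bin");        // placeholder path
using var body = new StreamContent(file);                    // read lazily, not into a byte[]
body.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
using var response = await client.PostAsync("https://example.invalid/upload", body);
response.EnsureSuccessStatusCode();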

OutOfMemoryException with RestSharp downloading big files

I got an OutOfMemoryException using RestSharp when downloading big files (around 1 GB). After reading some articles, it seems the cause could be that RestSharp internally uses the HttpWebRequest class. Here's some explanation: http://support.microsoft.com/kb/908573 (cause: "This issue occurs because the .NET Framework buffers the outgoing data by default when you use the HttpWebRequest class.")
My question is whether it's possible to download files of this size with RestSharp, or whether I must look for other options. I've tried several combinations of code, but I haven't found a way to do it with the RestSharp API alone.
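The usual way around this kind of buffering in .NET is to stream the response straight to disk. A minimal sketch with plain HttpClient (URL and path are placeholders); older RestSharp versions expose a ResponseWriter callback on the request that serves the same purpose:

using System.IO;
using System.Net.Http;

// Sketch: stream a large download to disk; ResponseHeadersRead stops
// HttpClient from buffering the whole body in memory first.
using var client = new HttpClient();
using var response = await client.GetAsync(
    "https://example.invalid/big-file.bin",
    HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();
await using var body = await response.Content.ReadAsStreamAsync();
await using var file = File.Create("big-file.bin");
await body.CopyToAsync(file);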

Base64-encode very large files in Objective-C to upload a file to SharePoint

I have a requirement where the user can upload files present in the app to SharePoint via the same app.
I tried using the http://schemas.microsoft.com/sharepoint/soap/ CopyIntoItems method of SharePoint, but it needs the file in Base64-encoded format, embedded into the body of the SOAP request. My code crashed on the device when I tried to convert even a 30 MB file to a Base64-encoded string; the same code executed just fine on the simulator.
Is there any other alternative (like file streaming etc.) to upload files to SharePoint? I may have to upload files up to 500 MB. Is there a more efficient library to convert NSData into a Base64-encoded string for large files?
Should I read the file in chunks, convert those into Base64-encoded strings, and upload the file once the complete file is converted? Any other approaches?
First off, your code probably crashed because it ran out of memory. I would use a loop that reads chunks, converts them, and pushes them to an open socket. This probably means you need to go to a lower level than NSURLConnection; I have searched for NSURLConnection with chunked upload without much success.
Some suggest using ASIHTTPRequest, but looking at the homepage it seems abandoned by its developer, so I can't recommend it.
AFNetworking looks really good: it has blocks support, and the example on its front page shows how it could be used for you. Look at the streaming request example; basically, create an NSInputStream that you push chunked data to and use it in an AFHTTPURLConnectionOperation. (See the sketch below for the chunk-size detail that makes chunked Base64 work.)
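The arithmetic that makes chunked encoding work: Base64 maps every 3 input bytes to 4 output characters, so as long as each chunk read (except possibly the last) is a multiple of 3 bytes, the per-chunk encodings concatenate into one valid Base64 string. A language-neutral sketch, in C# like this page's other examples, with placeholder file names:

using System;
using System.IO;

// Sketch: Base64-encode a large file chunk by chunk, never holding it whole.
// Chunk size is a multiple of 3 so per-chunk outputs concatenate correctly.
const int ChunkSize = 3 * 64 * 1024;
using var input = File.OpenRead("big-file.bin");       // placeholder
using var output = new StreamWriter("big-file.b64");   // or push to the socket
var buffer = new byte[ChunkSize];
int filled;
while ((filled = FillBuffer(input, buffer)) > 0)
    output.Write(Convert.ToBase64String(buffer, 0, filled));

// Read until the buffer is full (streams may return short reads); only the
// final chunk may come back smaller than a multiple of 3.
static int FillBuffer(Stream stream, byte[] buffer)
{
    int total = 0;
    while (total < buffer.Length)
    {
        int n = stream.Read(buffer, total, buffer.Length - total);
        if (n == 0) break;
        total += n;
    }
    return total;
}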

Decoding base64 for a large file in Objective-C

I'm consuming a web service that streams a PDF file to my iOS device. I use a SOAP message to interact with the web service and NSXMLParser:foundCharacters() after the stream is complete, and I want to get the content of the PDF file from the streamed XML file which was created in the first step. The data I get is Base64-encoded, and I have methods to decode the content back. For a small file the easiest approach is to read/collect all the content with NSXMLParser:foundCharacters from the first streamed file and call the Base64-decoding method once I have the whole data in parser:didEndElement
(the above approach works fine; I tested it for this case and produced the correct PDF file out of it).
Now my question is: what is the best approach (optimizing memory/speed) to read/decode/write in order to produce the final PDF from a big streamed file?
Is there any code available, or any thoughts on how to accomplish this in Objective-C?
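Decoding streams the same way in reverse: every 4 Base64 characters decode to exactly 3 bytes, so slices whose length is a multiple of 4 decode independently and can be appended to the output file as they arrive. Again a C# sketch with placeholder names, assuming any whitespace has been stripped from the Base64 text first:

using System;
using System.IO;

// Sketch: decode a large Base64 text file to binary slice by slice.
// Slice length is a multiple of 4, so each slice decodes independently;
// the final (possibly padded) slice is then a multiple of 4 as well.
const int SliceChars = 4 * 48 * 1024;
using var reader = new StreamReader("streamed.b64");   // placeholder
using var output = File.Create("document.pdf");        // placeholder
var slice = new char[SliceChars];
int read;
while ((read = reader.ReadBlock(slice, 0, slice.Length)) > 0)
{
    byte[] bytes = Convert.FromBase64CharArray(slice, 0, read);
    output.Write(bytes, 0, bytes.Length);
}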