error writing mime multipart body part to output stream - file-upload

I have code that does async file uploads. It works fine on my dev VM, but after deploying it to the client's system I keep getting this error:
"error writing mime multipart body part to output stream"
I know this is the line that is throwing the error but I can't seem to figure out why:
//Read the form data and return an async task.
await Request.Content.ReadAsMultipartAsync(provider);
The file size was only 1MB, and I even tried different file types with much smaller sizes. Why would this occur? I need ideas.

Since the error message mentions a failure while writing to the output stream, check whether the folder the uploaded file is written to has the necessary permissions for your application to write to it.

You can also get this error if a file with the same name already exists in the destination folder.

I had this issue but I had already set permissions on the destination folder.
I fixed the problem by setting permissions on the App_Data folder (I think this is where the file gets temporarily stored after being uploaded).
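For anyone else hitting this, here is a minimal sketch of the kind of Web API 2 upload action involved, assuming a MultipartFormDataStreamProvider pointed at a folder under App_Data (the folder name is a placeholder, not necessarily what the original code uses). The provider's root folder must exist and be writable by the application identity, otherwise ReadAsMultipartAsync fails with exactly this "error writing MIME multipart body part to output stream" message.

// Requires: System.IO, System.Net, System.Net.Http, System.Threading.Tasks, System.Web, System.Web.Http
public async Task<HttpResponseMessage> PostUpload()
{
    if (!Request.Content.IsMimeMultipartContent())
        return Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);

    // Placeholder path; point this at whatever folder your provider actually uses.
    string root = HttpContext.Current.Server.MapPath("~/App_Data/uploads");
    Directory.CreateDirectory(root); // make sure the folder exists before reading the parts

    var provider = new MultipartFormDataStreamProvider(root);

    // This is the call that throws when the target folder cannot be written to.
    await Request.Content.ReadAsMultipartAsync(provider);

    return Request.CreateResponse(HttpStatusCode.OK);
}

If the application runs under IIS on the client system, granting the application pool identity modify rights on that folder is usually what fixes it.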

Related

JSZip reports missing bytes when reading back previously uploaded zip file

I am working on a webapp where the user provides an image file + text sequence. I am compressing the sequence into a single ZIP file using JSZip.
On the server I simply use PHP's move_uploaded_file to move it to the desired location after having checked the file upload error status.
A test ZIP file created in this way can be found here. I have downloaded the file, expanded it in Windows Explorer and verified that its contents (two images and some HTML markup in this instance) are all present and correct.
So far so good. The trouble begins when I try to fetch that same ZIP file and expand it using JSZip.loadAsync, which consistently reports "Corrupted zip: missing 210 bytes". My PHP code for squirting back the ZIP file is actually pretty simple. Shorn of the various security checks I have in place, the essential bits of that code are listed below:
if (file_exists($file)) {
    ob_clean();
    readfile($file);
    http_response_code(200);
    die();
} else {
    http_response_code(399);
}
where the 399 code is interpreted in my webapp as a need to create a new resource locally instead of trying to read existing resource data. The trouble happens when I use the result text (on an HTTP response of 200) and feed it to JSZip.loadAsync.
What am I doing wrong here? I assume there is something too naive about the way I am using readfile at the PHP end but I am unable to figure out what that might be.
What we set out to do
Attempt to grab a server-side ZIP file from JavaScript
If it does not exist, send back a reply (I simply set a custom HTTP response code of 399 and interpret it) telling the client to go prepare its own new local copy of that resource
If it does exist, send back that ZIP file
Good so far. However, reading the existing ZIP file into PHP and sending it back does not make sense and is fraught with problems. My approach now is to send back an http_response_code of 302, which the client interprets as an instruction to "go get that ZIP for yourself directly".
At that point, to get the ZIP "directly", simply follow the instructions in this tutorial on MDN.

Request Body Too Large - Kestrel - fails at context.Request.Body.CopyToAsync

I am working on an ASP.NET Core application where one of the actions is responsible for uploading large files. The Limits.MaxRequestBodySize property is set to 100MB in Startup.cs for Kestrel. The action that uploads the file is already decorated with [DisableRequestSizeLimit] and [RequestFormLimits(MultipartBodyLengthLimit = int.MaxValue)].
Despite ignoring the limit at the action level and setting the maximum body size to 100MB at the global level, the request fails with a 500 BadHttpRequestException: "Request body too large", even though the file I am trying to upload is only 34MB. The exception occurs in one of the middleware components at "await context.Request.Body.CopyToAsync(stream)", and the exception stack trace also shows a Content-Length of 129MB. The exception does not occur if I set Limits.MaxRequestBodySize to 200MB or to null.
Questions:
Why is the request size 129MB when I am uploading only a 34MB file? What makes up the remaining ~95MB?
When the request is already in context.Request.Body, why does it throw an error while copying it ("await context.Request.Body.CopyToAsync(stream)") to a new stream?
I really appreciate any help with these. Please let me know if anything is unclear; I can provide more details.
Regards,
Siva
The issue could be that the default request limit in your application or the web server is too low. It looks like the default maxAllowedContentLength is approximately 30MB (30,000,000 bytes).
Perhaps these links can help you out:
https://stackoverflow.com/a/59840618/432074
https://github.com/dotnet/aspnetcore/issues/20369#issuecomment-607057822
Here is the solution:
There was nothing wrong with MaxRequestBodySize or maxAllowedContentLength. It was the size of the request that was causing the issue. Even though I was uploading a ~34MB file, it was being converted to a byte array and then to base64, which inflated the request body well beyond the raw file size. I used the IFormFile interface to send the file instead of the byte array/base64 payload, and it is working fine now.
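For reference, a minimal sketch of that working approach, assuming an ASP.NET Core controller action (the route and destination path are placeholders): the file is posted as multipart/form-data and bound to IFormFile, so the request body stays roughly the size of the file instead of an inflated byte-array/base64 payload.

// If the global Kestrel limit still needs raising (100MB here), it can be set in Program.cs/Startup.cs:
// webBuilder.ConfigureKestrel(o => o.Limits.MaxRequestBodySize = 100L * 1024 * 1024);

[HttpPost("upload")] // placeholder route
[DisableRequestSizeLimit]
[RequestFormLimits(MultipartBodyLengthLimit = int.MaxValue)]
public async Task<IActionResult> Upload(IFormFile file)
{
    // Placeholder destination; stream the uploaded file straight to disk.
    var path = Path.Combine(Path.GetTempPath(), file.FileName);
    using (var stream = System.IO.File.Create(path))
    {
        await file.CopyToAsync(stream);
    }
    return Ok(new { file.FileName, file.Length });
}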

Nest.js and Archiver - pipe stream zip file into http response

In my Node.js application I'm downloading multiple user files from AWS S3, compressing them into a single zip file (using the Archiver npm library) and sending it back to the client. I'm operating on streams the whole way, and yet I can't send the files to the client (so that the client would start the download after a successful HTTP request).
const filesStreams = await this.awsService.downloadFiles(
  document?.documentFiles,
);
const zipStream = await this.compressService.compressFiles(filesStreams);
// @ts-ignore
response.setHeader('Content-Type', 'application/octet-stream');
response.setHeader(
  'Content-Disposition',
  'attachment; filename="files.zip"',
);
zipStream.pipe(response);
where response is the Express response object. zipStream is created using Archiver:
public async compressFiles(files: DownloadedFileType[]): Promise<Archiver> {
  const zip = archiver('zip');
  for (const { stream, name } of files) {
    zip.append((stream as unknown) as Readable, {
      name,
    });
  }
  // Note: zip.finalize() still has to be called once the archive has been piped
  // to its destination, otherwise the output stream never ends.
  return zip;
}
And I know the archive is correct, because when I pipe it into a WriteStream to some file on my file system it works correctly (I'm able to unzip the written file and it has the correct content). I could probably write the file to the file system temporarily, send it back to the client using response.download and remove the saved file afterwards, but that looks like a very inefficient solution. Any help will be greatly appreciated.
So I found the source of the problem; I'll post it here just for the record, in case anyone has the same problem. The source of the problem was something totally different: I was trying to initiate the download with an AJAX request, which of course won't work. I changed my frontend code and, instead of AJAX, used an HTML anchor element with its href attribute set to exactly the same endpoint, and it worked just fine.
I had a different problem; mine came from the linter. I needed to read all the files from a directory and then send them to the client in one zip. Maybe someone finds it useful.
There were 2 issues:
I mismatched cwd with root; see the glob docs.
Because I used PassThrough as a proxy object between Archiver and the output, the linter flagged stream.pipe(response) with a type issue, which is misleading: it works fine.

Cloudconvert File not found (upload failed)

I plan on using CloudConvert's API for converting docx files to PDF, but I'm stuck with a "File not found (upload failed)" error each time I start a conversion process and request the status of the conversion.
To make sure the file can be reached, I ran a test using their API console and executed my request, which was a success.
I'm testing the conversion using Google's Advanced REST Client, and my header and payload are as follows:
Requesting a process:
I'm getting a URL for my conversion process and all is good. So, time to start the process of converting my file. I'm using the option to let CloudConvert download the docx from my domain.
Starting my process:
The request for starting my process is also a success, and I now want to check the status of my conversion by calling the previous URL as a GET. But this gives me an error message in the response saying: "File not found (Upload failed)".
As written at the beginning of my post, I tried using their API console to test whether the file could be downloaded from my site, which it could, and the PDF was created successfully. So I guess I'm missing something somewhere, I just can't see it...
So yeah,
The first problem was that the wrong content-type header was set. For a JSON payload it should be "application/json". With an "application/x-www-form-urlencoded" content-type header the server expected a different payload, so the call resulted in an error.
The second one was about JSON parsing. JSON is not the same as a JavaScript object: keys in JSON must be wrapped in double quotes.
Finally, I'm not sure what you mean by a success response. If you are talking about the status code, well, it's just bad API configuration/design.
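To illustrate those two points, here is a rough C# sketch of sending the process request as real JSON with an explicit application/json content type. The endpoint and payload fields are placeholders, not CloudConvert's actual API contract.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CloudConvertSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Valid JSON: every key is wrapped in double quotes (unlike a JavaScript object literal).
        var json = "{\"apikey\":\"YOUR_API_KEY\",\"inputformat\":\"docx\",\"outputformat\":\"pdf\"}";

        // StringContent sets the Content-Type header to application/json,
        // instead of the application/x-www-form-urlencoded header that caused the error above.
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = await client.PostAsync("https://api.example.com/process", content);

        Console.WriteLine((int)response.StatusCode);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}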

Streaming PDF file from .NET Generic Handler generates Adobe Reader corruption error

I have a generic handler that serves files for download:
Dim request As HttpRequest = context.Request
Dim response As HttpResponse = context.Response
response.ContentType = "application/octet-stream"
response.AddHeader("content-disposition", "inline; filename=" & filename)
response.Buffer = True
response.OutputStream.Write(fileBytes, 0, fileBytes.Length)
response.Flush()
response.Close()
('fileBytes' is my byte array, 'filename' is my file name).
When fileBytes is, say, a .txt file, the download is triggered and the file is read perfectly.
I discovered, however, that .pdf and .docx files were being corrupted - In the case of .docx, Word was saying that the file needed to be recovered and asked me for permission to do so. When I granted this permission it fixed it immediately and displayed perfectly.
Obviously I didn't want users to see this corruption dialogue, and after researching for a while I discovered this: http://forums.asp.net/t/1301978.aspx/1/10 - which suggested that the reason for the corruption was that one extra empty byte was being written at the end of the byte array. I checked by dropping the length by one byte:
response.OutputStream.Write(fileBytes, 0, fileBytes.Length - 1)
and like magic, .docx downloads now work! (This is not my current problem, I include it for context and in case anybody else has the same issue)
My current problem is that although .docx files are now streaming correctly, .pdf files are not. They seem to transfer in one piece (at the correct KB size) but when I try and open the downloaded file Adobe Reader X tells me:
Adobe Reader could not open xxxx because it is either not a supported file type
or because the file has been damaged (for example, it was sent as an email
attachment and wasn't correctly decoded).
There was a fairly long unresolved discussion on the Adobe forums dated 2008 (http://forums.adobe.com/thread/391712) that addresses this exact issue, but it is now dead. I have tried all of the workarounds that users have posted (content type: application/pdf instead of application/octet-stream, disposition: attachment instead of inline, different content-encodings and charsets, etc.) but all to no avail.
I wonder if anybody has encountered this problem before that could point me somewhere vaguely approximating something that even remotely resembles the right direction!
(Answer in the comments and edits. See Question with no answers, but issue solved in the comments (or extended in chat) )
The OP wrote:
Having stared at it for hours, the answer appeared shortly after posting this question - which is so often the way! At any rate, here is the resolution for anybody else stuck with my specific issue:
When I added a file into the database, I also allowed it to be renamed. I would select a file, give it a name and store it in the DB as [Fileblob],[Filename] - I could pick any arbitrary name, I assumed, because it was no longer tied to a specific location in the file system. - Wrong! - With .txt and .docx files, this was fine, the original name was never invoked.
Obviously something has embedded the name of the file into the binary object when it was saved and checks the name provided in the content-disposition against the name embedded into the document when it is opened. It then throws a corruption error if they do not match.
Now, I am storing the file in the database as [Fileblob],[Filename],[originalFilename] and when opening it I am using:
response.AddHeader("content-disposition", "inline;filename=" & originalFilename)
..to give it a name it understands. I suppose the more elegant way would be to strip the original name from the PDF when storing it in the database as it is no longer needed but as a workaround this works just fine.
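For completeness, here is a rough C# equivalent of that fix (the original handler is VB.NET, and the database lookups below are hypothetical placeholders): the blob is served under the filename it was originally stored with, rather than the user-supplied rename.

using System.Web;

public class FileDownloadHandler : IHttpHandler
{
    public bool IsReusable => false;

    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical helpers standing in for reading [Fileblob] and [originalFilename] from the DB.
        byte[] fileBytes = LoadBlob(context);
        string originalFilename = LoadOriginalFilename(context);

        var response = context.Response;
        response.ContentType = "application/octet-stream";
        // Send the original name so the viewer does not treat the file as damaged.
        response.AddHeader("content-disposition", "inline; filename=" + originalFilename);
        response.OutputStream.Write(fileBytes, 0, fileBytes.Length);
        response.Flush();
    }

    // Placeholder implementations; replace with the real database access.
    private static byte[] LoadBlob(HttpContext context) => new byte[0];
    private static string LoadOriginalFilename(HttpContext context) => "document.pdf";
}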