JSZip reports missing bytes when reading back a previously uploaded zip file (PHP 7)

I am working on a webapp where the user provides an image file-text sequence. I am compressing the sequence into a single ZIP file using JSZip.
On the server I simply use PHP's move_uploaded_file to move the upload to the desired location, after having checked the file upload error status.
A test ZIP file created in this way can be found here. I have downloaded the file, expanded it in Windows Explorer and verified that its contents (two images and some HTML markup in this instance) are all present and correct.
So far so good. The trouble begins when I try to fetch that same ZIP file and expand it using JSZip.loadAsync, which consistently reports Corrupted zip: missing 210 bytes. My PHP code for squirting back the ZIP file is actually pretty simple. Shorn of the various security checks I have in place, the essential bits of that code are listed below:
if (file_exists($file)) {
    ob_clean();
    readfile($file);
    http_response_code(200);
    die();
} else {
    http_response_code(399);
}
where the 399 code is interpreted by my webapp as a need to create a new resource locally instead of trying to read existing resource data. The trouble happens when I take the response text (on an HTTP status of 200) and feed it to JSZip.loadAsync.
What am I doing wrong here? I assume there is something too naive about the way I am using readfile at the PHP end, but I am unable to figure out what that might be.
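A readfile handler of this shape is usually written with explicit Content-Type and Content-Length headers so the client receives binary data of a known length; a minimal sketch, assuming $file points at the ZIP (not the exact code in use):

if (file_exists($file)) {
    // Sketch only: clear any buffered output, then declare the binary payload explicitly.
    ob_clean();
    header('Content-Type: application/zip');
    header('Content-Length: ' . filesize($file));
    readfile($file);
    exit;
} else {
    http_response_code(399);
}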

What we set out to do
- Attempt to grab a server-side ZIP file from JavaScript.
- If it does not exist, send back a reply (I simply set a custom HTTP response code of 399 and interpret it) telling the client to go prepare its own new local copy of that resource.
- If it does exist, send back that ZIP file.
Good so far. However, reading the existing ZIP file into PHP and sending it back does not make sense and is fraught with problems. My approach now is to send back an HTTP response code of 302, which the client interprets as an instruction to "go get that ZIP for yourself directly".
At this point, to get the ZIP "directly", simply follow the instructions in this tutorial on MDN.
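In outline, that boils down to fetching the ZIP as binary rather than as text; a minimal sketch (the URL and error handling are assumptions):

// Sketch only: read the response as an ArrayBuffer, not as text,
// and hand the raw bytes to JSZip.
async function loadRemoteZip(url) {
    const response = await fetch(url);
    if (!response.ok) {
        throw new Error('ZIP not available: ' + response.status);
    }
    const buffer = await response.arrayBuffer(); // binary-safe, unlike response.text()
    return JSZip.loadAsync(buffer);              // no "missing bytes" from text mangling
}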

Related

Updated JSON file is not being read during runtime

Team,
I have a service to register a user with certain data, along with a unique mail id and phone number, sent as a JSON body (for example: registerbody.json).
Before the POST call I generate a unique mail id and phone number and update those fields in the same JSON file (registerbody.json), which sits in the same folder as the feature file. I can see that the file is updated with the required data during runtime.
I used the read() method and performed the POST request.
Surprisingly, the read method does not pick up the updated JSON file; instead it reads the old data in registerbody.json.
Do you have any idea why it is picking up old data even though the file has been updated with the latest information?
Please assist me with this.
Karate uses the Java classpath, which is typically target/test-classes. So if you edit a file in src/test/java, Karate won't see it unless it is copied. This copying is done automatically when you build / compile your code.
My suggestion is to use target/ as a temp folder; then you can read it using the file: prefix:
* def payload = read('file:some.json')
"Before Post call I am generating unique mail id, phone no and updating the same json file (registerbody.json)"
You are making a big mistake here: Karate specializes in updating JSON based on variables. I suggest you take 5 minutes and read this part of the docs VERY carefully: https://github.com/intuit/karate#reading-files
Especially the part about embedded expressions: https://github.com/intuit/karate#embedded-expressions
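To make that concrete, a minimal sketch of the embedded-expression approach (baseUrl and the exact payload fields are assumptions; only email and phone follow the question):

registerbody.json (a template that never needs to be rewritten on disk):
{ "email": "#(email)", "phone": "#(phone)" }

In the feature file:
* def System = Java.type('java.lang.System')
* def email = 'user' + System.currentTimeMillis() + '@test.com'
* def phone = '9' + System.currentTimeMillis()
* def payload = read('registerbody.json')
Given url baseUrl
And request payload
When method post
Then status 200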

Nextcloud: in which source file is file uploading handled?

I am going to make an app, but I am stuck on one issue. I am unable to find which file of Nextcloud contains the code that uploads files.
I want to find the file in which the file-upload code is located.
I am going to make an app which will make a duplicate of the uploaded file and save it in the same directory with a slightly changed name.
The public API for handling files lives in the \OCP\Files namespace; the implementation is in the \OC\Files namespace (https://github.com/nextcloud/server/tree/master/lib/private/Files).
Rather than modifying this code, you should use the hooks functionality (never use classes or functions in the \OC\* namespace!): https://docs.nextcloud.com/server/12/developer_manual/app/hooks.html. This way you can execute your own code when a file is created or updated, etc.
I guess you need the postWrite hook. Some sample code (untested):
\OC::$server->getRootFolder()->listen('\OC\Files', 'postWrite', function (\OCP\Files\Node $node) {
    $node->copy('my/path');
});
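Building on that, an untested sketch of the "duplicate with a slightly changed name" idea from the question (the _copy suffix and the guard against re-copying the copy are assumptions):

\OC::$server->getRootFolder()->listen('\OC\Files', 'postWrite', function (\OCP\Files\Node $node) {
    $name = $node->getName();

    // Skip copies we made ourselves, otherwise the hook fires again for them.
    if (strpos($name, '_copy') !== false) {
        return;
    }

    // Build "name_copy.ext" and place it next to the original file.
    $dot = strrpos($name, '.');
    $copyName = ($dot === false)
        ? $name . '_copy'
        : substr($name, 0, $dot) . '_copy' . substr($name, $dot);

    $node->copy($node->getParent()->getPath() . '/' . $copyName);
});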

CloudConvert: File not found (upload failed)

I plan on using CloudConvert's API for converting docx files to PDF, but I'm stuck with a "File not found (upload failed)" error each time I start a conversion process and request the status of the conversion.
To make sure the file can be reached, I ran a test using their API and executed my request, which was a success.
I'm testing the conversion using Google's Advanced REST Client, and my header and payload are as follows:
Requesting a process:
I'm getting a URL for my conversion process and all is good. So, time to start the process of converting my file. I'm using the option to let CloudConvert download the docx from my domain.
Starting my process:
The request for starting my process is also a success, and I now want to check the status of my conversion by calling the previous URL as a GET. But this gives me an error message in the response saying: File not found (Upload failed).
As written at the beginning of my post, I tried using their API console to test whether the file could be downloaded from my site, which it could, and the PDF was created successfully. So I guess I'm missing something somewhere, I just can't see it...
So yeah:
The first problem was that the wrong Content-Type header was set. For a JSON payload it should be "application/json". With the "application/x-www-form-urlencoded" content type header the server expected a different payload, so the call resulted in an error.
The second was about JSON parsing. JSON is not the same as a JavaScript object literal: keys in JSON must be wrapped in double quote characters.
Finally, I'm not sure what you mean by a success response. If you are talking about the status code, well, that's just bad API configuration/design.
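To make the first two points concrete, a small sketch of a JSON POST (the endpoint and field names are placeholders, not CloudConvert's actual API):

// Placeholder endpoint and fields, for illustration only.
fetch('https://api.example.com/process', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },                    // not application/x-www-form-urlencoded
    body: JSON.stringify({ inputformat: 'docx', outputformat: 'pdf' })  // produces valid JSON with double-quoted keys
});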

error writing mime multipart body part to output stream

I have code that does async file uploads which works fine on my dev VM, but after I deployed it to the client's system I keep getting this error:
"error writing mime multipart body part to output stream"
I know this is the line that is throwing the error but I can't seem to figure out why:
//Read the form data and return an async task.
await Request.Content.ReadAsMultipartAsync(provider);
The file size was only 1 MB and I even tried different file types with much smaller sizes. Why would this occur? I need ideas.
Since the error message mentions an error while writing to an output stream, can you check whether the folder the file is being written to has the necessary permissions for your application to write to it?
You can also get this error if a file with the same name already exists in the destination folder.
I had this issue but I had already set permissions on the destination folder.
I fixed the problem by setting permissions on the App_Data folder (I think this is where the file gets temporarily stored after being uploaded).
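For reference, a minimal sketch of the kind of setup involved, assuming the multipart parts are buffered under App_Data (the controller, route and folder are assumptions, not taken from the question):

// Sketch of a Web API upload action; the App_Data target folder is an assumption.
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web;
using System.Web.Http;

public class UploadController : ApiController
{
    public async Task<HttpResponseMessage> Post()
    {
        if (!Request.Content.IsMimeMultipartContent())
            return Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);

        // The app pool account needs write access here, or ReadAsMultipartAsync
        // fails with "error writing MIME multipart body part to output stream".
        string root = HttpContext.Current.Server.MapPath("~/App_Data");
        var provider = new MultipartFormDataStreamProvider(root);

        await Request.Content.ReadAsMultipartAsync(provider);
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}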

.NET ZipPackage vs DotNetZip when getting streams to entries

I have been using the ZipPackage-class in .NET for some time and I really like the simple and intuitive API it has. When reading from an entry I do entry.GetStream() and I read from this stream. When writing/updating an entry I do entry.GetStream(FileAccess.ReadWrite) and write to this stream. Very simple and useful because I can hand over the reading/writing to some other code not knowing where the Stream comes from originally.
Now, since the ZipPackage API doesn't contain support for entry properties such as LastModified etc., I have been looking into other zip APIs such as DotNetZip. But I'm a bit confused over how to use it. For instance, when wanting to read from an entry I first have to extract the entire entry into a MemoryStream, seek to the beginning and hand this stream over to my other code. And to write to an entry I have to supply a stream that the ZipEntry itself can read from. This seems very backwards to me. Am I using this API in the wrong way?
Isn't it possible for the ZipEntry to deliver the file straight from the disk where it is stored and extract it as the reader reads it? Does it really need to be fully extracted into memory first? I'm no expert but it seems wrong to me.
Using the DotNetZip library does not require you to read the entire zip file into a memory stream. When you instantiate an instance of ZipFile as shown below, the library only reads the zip file headers. The zip file headers contain properties such as last modified, etc. Here is an example of opening a zip file; the DotNetZip library reads the headers and constructs a list of all the entries in the zip:
using (Ionic.Zip.ZipFile zipFile = Ionic.Zip.ZipFile.Read(this.FileAbsolutePath))
{
    ...
}
It's up to you to then extract zip entries to a stream, to the file system, etc. In the example below, I'm using the string indexer on zipFile to get the entry named SomeFile.txt. The matching ZipEntry object is then extracted to a memory stream:
MemoryStream memStr = new MemoryStream();
zipFile["SomeFile.txt"].Extract(memStr); // or e.g. Response.OutputStream
Zip entries must be read into the .NET process space in order to be inflated (decompressed); there's no way to bypass that by going straight to the filesystem. This is similar to how a Windows Explorer shell zip extractor works: the 7-Zip shell extension or Windows' built-in Compressed Folders have to read entries into memory and then write them to the file system in order for you to be able to open an entry.
Okay, I'm answering this myself because I found the answers. There are apparently methods for both of the things I wanted in DotNetZip. For opening a read stream -> myZipEntry.OpenReader(), and for opening a write stream -> myZipFile.UpdateEntry(e, (fn, obj) => Serialize(obj)). This works fine.
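To round this off, a short sketch of both calls (the entry name and contents are placeholders):

// Sketch only: read an entry as a stream, then update it in place.
using System.IO;
using System.Text;
using Ionic.Zip;

using (ZipFile zip = ZipFile.Read("archive.zip"))
{
    // OpenReader() decompresses on the fly; no temporary MemoryStream needed.
    using (Stream entryStream = zip["SomeFile.txt"].OpenReader())
    using (var reader = new StreamReader(entryStream))
    {
        string contents = reader.ReadToEnd();
    }

    // UpdateEntry hands you the output stream to write the new content to.
    zip.UpdateEntry("SomeFile.txt", (entryName, stream) =>
    {
        byte[] bytes = Encoding.UTF8.GetBytes("updated contents");
        stream.Write(bytes, 0, bytes.Length);
    });
    zip.Save();
}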