When I upload files via Dropzone and save them, I append an order number to the file names (file_name_1, file_name_2, ...). How do I determine that number? Currently it is the number of files already in the folder + 1. But the files in Dropzone are uploaded in parallel, so two files can end up with the same number and one will overwrite the other. I think it would be a good idea to take each file's order number from Dropzone and send it to the server. How can I do that?
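One way to avoid the race is to stop counting files on the server and have each upload carry its own index instead: Dropzone's sending event lets the client append extra fields to the outgoing FormData, so a per-file order value can travel with the file. Below is a minimal server-side sketch in Python/Flask (the framework, the /upload route, the order field name, and the target folder are all assumptions for illustration); it simply names the file with the index the client sent rather than counting existing files.

```python
# Hypothetical Flask upload handler: the client (e.g. via Dropzone's "sending"
# event) is assumed to append an "order" field to each file's FormData, so the
# server never has to count existing files and parallel uploads cannot collide.
import os
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_DIR = "/var/uploads"  # assumed target folder

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["file"]                  # Dropzone's default field name
    order = int(request.form.get("order", 0))  # index sent by the client
    base, ext = os.path.splitext(secure_filename(f.filename))
    f.save(os.path.join(UPLOAD_DIR, f"{base}_{order}{ext}"))
    return "", 204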
I need to upload files and folders to the server while preserving the hierarchy. At the moment I am using the multiFileUpload add-on, which allows uploading multiple files at the same time, but it ignores selected folders. I know that neither Vaadin nor HTML5 has a universal, works-everywhere solution for uploading folders.
I'm ready to write my own solution, but after scouring the Internet I can't find a way to display a folder-selection dialog (perhaps there is a JavaScript call for it). The main question: is it possible to POST a request to Vaadin and upload the files in a way that recreates the subfolders they came from?
You can only upload files, not folders. It's simply not doable.
You can upload any number of files, but they won't be structured into folders.
I see two possibilities for still achieving what you need, even if they change the user experience a bit:
Let the user upload a .zip file of their folder structure. When they upload it, you unzip it on the server side and then have access to all the files in the correct folder structure (see the sketch after these two options).
Let the user upload all of their files from within their folder structure. After all files have been uploaded, you display them in a TreeGrid where the user can recreate the original structure using drag-and-drop or similar.
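For the first option, the server-side part is simple once the archive has arrived. Here is a minimal sketch in Python (the paths are illustrative assumptions; in a Vaadin application the equivalent could be done with java.util.zip):

```python
# Minimal sketch: unpack an uploaded .zip while preserving its internal
# folder hierarchy. Paths are illustrative assumptions.
import zipfile
from pathlib import Path

def extract_upload(zip_path: str, target_dir: str) -> None:
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        # extractall() recreates the directory structure stored in the archive.
        # For untrusted uploads you may want to validate member names first.
        zf.extractall(target)

extract_upload("/tmp/user_upload.zip", "/srv/app/uploads/user42")
```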
I'm using the flow.js library for file uploads in a project.
I'm confused about the files parameter received by the filesAdded and filesSubmitted events. Will the parameter always contain the files belonging to a particular directory on disk (in the case of directory uploads) or a single file (in the case of file uploads), or can it also contain files from seemingly unrelated uploads?
To give you an example, consider the scenario where a user adds two files in sequential fashion.
Can the filesAdded and filesSubmitted events be triggered with the files parameter containing both these files even when they were part of unrelated uploads?
The problem arises when you have multiple upload sections on a page and use a single Flow instance to handle all uploads. In that case, if files uploaded in distinct upload sections appear together in events like filesAdded or filesSubmitted, the files belonging to separate upload actions get mixed up. One solution is to create a new Flow instance for every upload section on the page, but I wanted to understand the behavior of filesAdded and filesSubmitted and whether it would allow solving the problem with just one Flow instance.
Is it possible to ensure that all files created in an iCloud container have unique filenames? I can imagine that with many devices running I will eventually end up with files created under the same name on several devices.
Certainly the easiest solution would be to prepend all filenames with hour/minute/second, but I'd like to maintain a nice file structure where files in conflict would be renamed with version #.
In my case I organize file storage by month and year, so within each month files are named File 1, File 2, File 3 ... File n.
I found that iCloud automatically renames files created with the same name on different devices. iCloud numbers files if it spots a digit at the end of the file name; e.g. if File - 1337.jpg was created twice on different devices, then after sync one of the two will automatically be renamed to File - 1338.jpg.
I am wondering about the best way to achieve de-duplicated (single-instance storage) file storage on Amazon S3. For example, if I have 3 identical files, I would like to store the file only once. Is there a library, API, or program out there to help implement this? Is this functionality present in S3 natively? Perhaps something that checks the file hash, etc.
I'm wondering what approaches people have used to accomplish this.
You could probably roll your own solution to do this. Something along the lines of:
To upload a file:
Hash the file first, using SHA-1 or stronger.
Use the hash to name the file. Do not use the actual file name.
Create a virtual file system of sorts to save the directory structure - each file can simply be a text file that contains the calculated hash. This 'file system' should be placed separately from the data blob storage to prevent name conflicts - like in a separate bucket.
To upload subsequent files:
Calculate the hash, and only upload the data blob file if it doesn't already exist.
Save the directory entry with the hash as its content, just as for any other file.
To read a file:
Open the file from the virtual file system to discover the hash, and then get the actual file using that information.
You could also make this technique more efficient by uploading files in fixed-size blocks and de-duplicating, as above, at the block level rather than the whole-file level. Each file in the virtual file system would then contain one or more hashes, representing the chain of blocks that make up that file. That would also have the advantage that uploading a large file which is only slightly different from a previously uploaded one would involve much less storage and data transfer.
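A minimal sketch of the whole-file variant of this scheme using boto3 (the bucket names, the choice of SHA-256, and the plain-text "directory entry" format are all assumptions for illustration, not an existing API):

```python
# Sketch of file-level de-duplication on S3: blobs are stored under their
# SHA-256 hash in one bucket, and a small "directory entry" object in a
# second bucket maps each logical path to that hash.
import hashlib
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BLOB_BUCKET = "my-dedup-blobs"  # assumed bucket names
FS_BUCKET = "my-dedup-fs"

def _hash_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def upload(local_path: str, virtual_path: str) -> None:
    digest = _hash_file(local_path)
    try:
        s3.head_object(Bucket=BLOB_BUCKET, Key=digest)  # blob already stored?
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            s3.upload_file(local_path, BLOB_BUCKET, digest)
        else:
            raise
    # the "directory entry" is just a text object whose body is the hash
    s3.put_object(Bucket=FS_BUCKET, Key=virtual_path, Body=digest.encode())

def download(virtual_path: str, local_path: str) -> None:
    digest = s3.get_object(Bucket=FS_BUCKET, Key=virtual_path)["Body"].read().decode()
    s3.download_file(BLOB_BUCKET, digest, local_path)
```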
We put hundreds of image files on Amazon S3 that our users need to synchronize to their local directories. In order to save storage space and bandwidth, we zip the files stored on S3.
On the user's end they have a Python script that runs every 5 minutes to get a current list of files and download new/updated ones.
My question is: what's the best way to determine what is new or changed and needs to be downloaded?
Currently we attach an additional metadata header to the compressed file that contains the MD5 value of the uncompressed file...
We start with a file like this:
image_file_1.tif 17MB MD5 = xxxx1234
We compress it (with 7zip) and put it to S3 (with Python/Boto):
image_file_1.tif.z 9MB MD5 = yyy3456 x-amz-meta-uncompressedmd5 = xxxx1234
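For reference, attaching that metadata looks roughly like this in boto3 (the question mentions the older boto library; the bucket name and the separately produced .z file are placeholders):

```python
# Sketch: upload the compressed file and attach the uncompressed file's MD5 as
# user metadata, which S3 surfaces as x-amz-meta-uncompressedmd5.
import hashlib
import boto3

s3 = boto3.client("s3")

with open("image_file_1.tif", "rb") as f:
    uncompressed_md5 = hashlib.md5(f.read()).hexdigest()

s3.upload_file(
    "image_file_1.tif.z",   # compressed separately, e.g. with 7zip
    "my-sync-bucket",       # bucket name is an assumption
    "image_file_1.tif.z",
    ExtraArgs={"Metadata": {"uncompressedmd5": uncompressed_md5}},
)
```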
The problem is that we can't get a listing of files from S3 that includes the x-amz-meta-uncompressedmd5 header without an additional API call for EACH file (SLOW for hundreds/thousands of files).
Our most practical solution so far is to have users get a full list of files (without the extra headers) and download any file that does not exist locally. If a file does exist locally, we make an additional API call to get its full headers and compare the local MD5 checksum against x-amz-meta-uncompressedmd5.
I'm thinking there must be a better way.
You could include the MD5 hash of the uncompressed image into the compressed filename.
So image_file_1.tif could become image_file_1.xxxx1234.tif.z
Your users' Python script that does the synchronising would then have the information it needs to decide whether to fetch the file again from S3, and it could either strip the MD5 part out of the filename or keep it, depending on what you want to do.
Alternatively, you could maintain a single file on S3 containing the full file list, including the MD5 metadata. The Python script then only needs to fetch that one file, parse it, and decide what to do.
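A minimal sketch of that single-manifest idea with boto3 (the bucket name, manifest key, and JSON layout are all assumptions): the publisher writes one JSON object listing every file with its uncompressed MD5, and the sync script fetches just that object to decide what to download.

```python
# Sketch of the single-manifest approach: one JSON object on S3 lists every
# file together with its uncompressed MD5, so the sync script needs only one
# GET to decide what changed.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-sync-bucket"      # assumed names
MANIFEST_KEY = "manifest.json"

def publish_manifest(entries: dict) -> None:
    # entries: {"image_file_1.tif.z": "xxxx1234", ...}
    s3.put_object(Bucket=BUCKET, Key=MANIFEST_KEY, Body=json.dumps(entries).encode())

def files_to_download(local_md5s: dict) -> list:
    # local_md5s: MD5 of each uncompressed local file, keyed by remote name
    body = s3.get_object(Bucket=BUCKET, Key=MANIFEST_KEY)["Body"].read()
    remote = json.loads(body)
    return [name for name, md5 in remote.items() if local_md5s.get(name) != md5]
```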