I am trying to upload large files to an S3 bucket (pre-signed URL) using an axios PUT request. The upload pauses when the system goes to sleep or I close the lid, and it resumes after I wake the system. Is there any way to keep the upload alive even on system sleep or lid close?
I am expecting a way to keep the upload alive.
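A single long-running PUT is unlikely to survive the OS suspending the network during sleep. A common mitigation is to split the file into parts and, on wake or retry, skip the parts that already finished. This is a minimal sketch, assuming a hypothetical `presignedUrlFor(partIndex)` helper on your backend (e.g. backed by an S3 multipart upload) and `axios` available as a global:

```javascript
// Sketch: resumable chunked upload. `presignedUrlFor` is hypothetical --
// your backend would have to hand out one pre-signed URL per part
// (for example via an S3 multipart upload).
const CHUNK_SIZE = 10 * 1024 * 1024; // 10 MB per part

function chunkRanges(fileSize, chunkSize = CHUNK_SIZE) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push({ start, end: Math.min(start + chunkSize, fileSize) });
  }
  return ranges;
}

async function uploadResumable(file, presignedUrlFor, done = new Set()) {
  // `done` holds indices of parts that already succeeded, so a retry
  // after wake-from-sleep skips them instead of restarting from zero.
  const ranges = chunkRanges(file.size);
  for (let i = 0; i < ranges.length; i++) {
    if (done.has(i)) continue;
    const { start, end } = ranges[i];
    await axios.put(await presignedUrlFor(i), file.slice(start, end), {
      headers: { "Content-Type": "application/octet-stream" },
    });
    done.add(i); // persist this set (e.g. localStorage) to survive reloads
  }
  return done;
}
```

Persisting the `done` set means a wake-from-sleep retry only re-sends the part that was in flight, not the whole file.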
Hi, I am testing a file upload scenario in JMeter. The way the upload works is that when I upload a file greater than 10 MB, say a 100 MB file, the upload is broken into 10 MB chunks. In the browser developer tools I see 5 threads, each uploading a 10 MB chunk, and the thread that completes first picks up the next chunk (the 6th), and so on until the entire 100 MB is uploaded. This is not handled in the app code but rather at the browser level. How can I simulate the same behavior from JMeter?
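The scheduling described above (a fixed pool of 5 uploaders, each pulling the next 10 MB chunk as soon as it finishes) is a worker pool over a shared counter. A minimal sketch of that logic, where `uploadChunk` is a stand-in for the actual HTTP part upload:

```javascript
// Sketch of the scheduling the browser does: N parallel workers pull
// chunk indices from a shared counter; whichever finishes first claims
// the next chunk. `uploadChunk` stands in for the real HTTP request.
async function parallelChunkUpload(totalChunks, workers, uploadChunk) {
  let next = 0;
  const order = []; // records which worker took which chunk
  async function worker(id) {
    while (next < totalChunks) {
      const chunk = next++;       // claim the next unclaimed chunk
      order.push({ worker: id, chunk });
      await uploadChunk(chunk);   // upload the 10 MB part `chunk`
    }
  }
  await Promise.all(Array.from({ length: workers }, (_, id) => worker(id)));
  return order;
}
```

The JMeter analogue would be a thread group of 5 threads that loop, each claiming the next chunk index from a shared counter until all chunks are taken.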
I have read the Google Drive API documentation but I'm not able to understand the following:
Can files and folders be created and modified on Drive while the app is in the background?
My application needs Drive to work in the background.
For files:
If you check "Working with File Contents":
Lifecycle of a Drive file
The Drive Android API lets your app access files even if the device is offline. To support offline cases, the API implements a sync engine, which runs in the background to upstream and downstream changes as network access is available and to resolve conflicts.
Check this image from the document.
The lifecycle of a DriveFile object:
Perform an initial download request if the file is not yet synced to the local context but the user wants to open the file. The API handles this automatically when a file is requested.
Open the contents of a file. This creates a temporary duplicate of the file's binary stream which is only available to your application.
Read or modify the file contents, making changes to the temporary duplicate.
Commit or discard any file content changes that have been made.
If there are changes, the file contents are queued for upload to sync them back to the server.
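The lifecycle steps above can be modeled as a small state machine. This is an illustrative sketch with made-up names, not the actual Drive Android API classes:

```javascript
// Illustrative model of the DriveFile content lifecycle quoted above.
// States: "remote" -> (download) -> "synced" -> (open) -> "open"
//         -> (commit with changes) -> "queued-for-upload"
//         -> (discard) -> "synced"
class DriveFileModel {
  constructor() { this.state = "remote"; this.contents = null; }
  open(download) {
    // Initial download if not yet synced locally (handled automatically).
    if (this.state === "remote") this.state = "synced";
    // Opening creates a temporary private duplicate of the binary stream.
    this.contents = { data: download(), dirty: false };
    this.state = "open";
    return this.contents;
  }
  modify(data) { this.contents.data = data; this.contents.dirty = true; }
  commit() {
    // Changed contents are queued for upload by the sync engine.
    this.state = this.contents.dirty ? "queued-for-upload" : "synced";
    this.contents = null;
  }
  discard() { this.state = "synced"; this.contents = null; }
}
```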
The Google API does support running in the background. For folders there is no documentation stating that folder creation can be done in the background, but I think the same implementation can be used.
I hope this helps.
Using FineUploader in S3 mode we're seeing uploads just pause periodically. Clicking pause and then restart allows the upload to finish. I'd be glad to post relevant snippets of the implementation if it would help, but it's pretty much stock. We're uploading large PDF files with scanned page images.
Thanks
This is most definitely a network issue between your client and S3 (or your signature server). In some cases, if there is a connection issue, the upload won't fail until the browser times out the request, and the default browser timeout is a large value. Pausing actually aborts the request, and "continue" re-creates it, starting with the chunk after the last successful one.
This was discussed in #743 in the issue tracker, and we ultimately decided not to allow custom timeouts to be specified, for reasons detailed in the issue.
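The pause/continue behavior described above can be sketched generically: pausing aborts the in-flight request, and resuming restarts with the chunk after the last successful one. This is illustrative logic, not FineUploader's internal implementation:

```javascript
// Generic sketch of pause/resume semantics, not FineUploader internals.
// Pausing aborts the hung request instead of waiting for the browser's
// (very long) default timeout; resuming continues after the last success.
function nextChunkAfter(succeeded) {
  // `succeeded` is a Set of completed chunk indices; chunks complete in
  // order, so resume from the first index not yet finished.
  let i = 0;
  while (succeeded.has(i)) i++;
  return i;
}

function makePausableUpload(totalChunks, putChunk) {
  const succeeded = new Set();
  let controller = null;
  return {
    async start() {
      controller = new AbortController();
      for (let i = nextChunkAfter(succeeded); i < totalChunks; i++) {
        await putChunk(i, controller.signal); // an abort rejects this promise
        succeeded.add(i);
      }
    },
    pause() { controller.abort(); }, // kills the stuck request immediately
  };
}
```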
I'm developing an application that uses (lots) of image processing.
The general overview of the system is:
User uploads photos to the server (raw photo, with FULL resolution)
Server fetches new photos and applies image processing to them
Server resizes the image and serves those photos (delete the full one?)
My current situation is that I have almost no expertise in image hosting or in uploading and managing large files.
What I plan to do is:
User uploads directly from Browser to Amazon S3 (Full Image)
User notifies my server, and the uploaded file is added to the queue for my workers
When a worker receives a job, it downloads the full image (from Amazon) and processes it, updates the database, and then re-uploads the image to Cloudinary (resize on the server?)
Use the hosted image on Cloudinary from now on.
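The plan above can be sketched end to end. Every name here (`buildJob`, `download`, `processImage`, `uploadToCloudinary`, `db`) is a placeholder for whatever queue, processing, and storage stack you choose, not a specific API:

```javascript
// Sketch of the proposed pipeline. All helpers are placeholders.
function buildJob(s3Key) {
  // The queue payload: enough for a worker to locate the original on S3.
  return { type: "process-image", s3Key, attempts: 0 };
}

async function worker(job, deps) {
  // deps = { download, processImage, uploadToCloudinary, db } -- stand-ins
  // for your actual S3 client, processing code, and database.
  const original = await deps.download(job.s3Key);       // full-res from S3
  const processed = await deps.processImage(original);   // resize/transform
  const hosted = await deps.uploadToCloudinary(processed);
  await deps.db.update(job.s3Key, { url: hosted.secure_url });
  return hosted.secure_url;
}
```

Keeping the worker's dependencies injected like this also makes the processing step easy to test without touching S3 or Cloudinary.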
My doubts are about processing time. I don't want to upload directly to my server, because that would require a lot of traffic and create a bottleneck, so using Amazon S3 would reduce that. But hosting the images on Amazon would not be ideal, since it doesn't provide image-specific APIs the way Cloudinary does.
Is it OK to use separate servers for uploading and only notify my server when the browser finishes the upload? Does using Cloudinary to host the images also make sense? Should uploading directly to my own server be avoided in favor of sending to Amazon?
(This is more a guidance/design question)
Why wouldn't you prefer uploading directly to Cloudinary?
The image can be uploaded directly from the browser to your Cloudinary account, without any additional servers involved. Cloudinary then notifies you about the uploaded image and its details, and you can perform all the image processing in the cloud via Cloudinary. You can either manipulate the image while keeping the original, or replace the original with the manipulated one.
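A direct browser-to-Cloudinary upload can be done with an unsigned upload preset (configured in your Cloudinary settings) against Cloudinary's upload endpoint. In this sketch, `CLOUD_NAME` and `PRESET` are placeholders for your own account values:

```javascript
// Sketch of a direct browser-to-Cloudinary upload using an *unsigned*
// upload preset. CLOUD_NAME and PRESET are placeholders.
function cloudinaryUploadRequest(cloudName, preset, file) {
  const url = `https://api.cloudinary.com/v1_1/${cloudName}/image/upload`;
  const form = new FormData();
  form.append("file", file);
  form.append("upload_preset", preset);
  return { url, form };
}

async function uploadDirect(file) {
  const { url, form } = cloudinaryUploadRequest("CLOUD_NAME", "PRESET", file);
  const res = await fetch(url, { method: "POST", body: form });
  return res.json(); // response includes public_id, secure_url, dimensions
}
```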
Ok, so the question should be clear just from the title.
I'm uploading directly to S3 from plupload 2 beta.
Chrome/Firefox/Safari, of course, are all ok.
IE9 completes the upload (it appears in the bucket, everything is fine) but eventually plupload returns an error. Watching the network activity through Fiddler2, you can see the request to S3 does not get a response for a very long time, well after the progress reported by plupload has hit 100%.
When the response comes back, plupload has marked the file as FAILED (status code 4) even though the file has actually completed.
So, to summarise, the file upload does complete, but plupload doesn't get a response (quickly enough?) from S3 and errors out.
Code is here.
EDIT: It looks like the progress hits 100% while the waiting time is the file actually uploading. The problem seems to be plupload throwing an error for a correctly uploaded file.
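One way to act on that conclusion is to double-check with S3 before trusting the error: if a HEAD request for the object succeeds, the upload really completed and the late error can be ignored. This is a sketch, not part of plupload's API; it assumes your bucket's CORS policy allows HEAD from the browser, and `keyFor` is a hypothetical mapping from the plupload file to its S3 object key:

```javascript
// Sketch: when plupload reports a failure for a file that may have
// actually landed in the bucket, verify with S3 before surfacing it.
// `keyFor` and the bucket URL are placeholders.
function shouldTreatAsSuccess(headStatus) {
  // S3 answers 200 to HEAD for an existing object, 403/404 otherwise.
  return headStatus === 200;
}

function installErrorCheck(uploader, keyFor, bucketBase) {
  uploader.bind("Error", async (up, err) => {
    const res = await fetch(`${bucketBase}/${keyFor(err.file)}`, { method: "HEAD" });
    if (shouldTreatAsSuccess(res.status)) {
      console.log("Object exists; treating late error as success:", err.file.name);
    } else {
      console.error("Genuine upload failure:", err.message);
    }
  });
}
```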