Splitting a PDF file after uploading in an Amplify app with S3 storage - amazon-s3

I'm trying to create a web app for uploading some files. I created an Amplify app (React) with storage hooked up, and now I would like to process the files, either before or after they are uploaded, to split them and keep only some pages.
I confess that I don't know where to start to get this result; could you advise me where to begin, without using Lambda?
I followed the Amplify guides to build the app and storage, and I used this component to upload files:
https://ui.docs.amplify.aws/react/connected-components/storage/fileuploader
How can I get this result? Where should I start?
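Since Lambda is ruled out, one option is to split the PDF in the browser before it is uploaded, so only the pages you want ever reach S3. Below is a minimal sketch, assuming the pdf-lib package for the splitting and Amplify v5's Storage.put call for the upload; the function name uploadSelectedPages and the example page indices are placeholders, not part of the original question, and newer Amplify versions expose an equivalent uploadData API instead.

import { PDFDocument } from 'pdf-lib';
import { Storage } from 'aws-amplify';

// Copy the selected pages of a PDF File into a new document in the browser,
// then upload only those pages to the Amplify storage bucket.
async function uploadSelectedPages(file, pageIndices, key) {
  const srcBytes = await file.arrayBuffer();
  const srcDoc = await PDFDocument.load(srcBytes);

  const outDoc = await PDFDocument.create();
  const pages = await outDoc.copyPages(srcDoc, pageIndices); // e.g. [0, 2] keeps pages 1 and 3
  pages.forEach((p) => outDoc.addPage(p));

  const outBytes = await outDoc.save(); // Uint8Array
  await Storage.put(key, new Blob([outBytes], { type: 'application/pdf' }), {
    contentType: 'application/pdf',
  });
}

You could call something like this from your own file input handler, or adapt it to whatever pre-processing hook the uploader component you linked exposes; the same pdf-lib code would also work later on the server side if you decide to split after upload instead.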

Related

How to view the list of files in AWS S3 using Vuejs API?

I upload the files by referring to https://www.youtube.com/watch?v=9x5LGaL2W7E, but I can't find any reference videos or links for viewing the files in the bucket using an access key and secret key rather than a user ID and password. I am specifically looking to build this API in Vue.js (Vue 2).
Please point me in the right direction.
You could achieve that from the app, but the simplest solution is probably to use the AWS CLI and run something like aws s3 ls on your bucket.
Here is the reference: https://docs.aws.amazon.com/cli/latest/reference/s3/
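If you do want to list the objects from the Vue app itself with an access key and secret key, here is a minimal sketch using the AWS SDK for JavaScript (v2); the region, bucket name, and credential values are placeholders, and embedding long-lived keys in a browser app is generally discouraged.

import AWS from 'aws-sdk';

// Client configured with explicit credentials (placeholders) instead of a Cognito user.
const s3 = new AWS.S3({
  region: 'us-east-1',
  accessKeyId: 'YOUR_ACCESS_KEY_ID',
  secretAccessKey: 'YOUR_SECRET_ACCESS_KEY',
});

// Return the keys of the objects in the bucket (first page of results only).
async function listFiles() {
  const res = await s3.listObjectsV2({ Bucket: 'your-bucket-name' }).promise();
  return res.Contents.map((obj) => obj.Key);
}

In a Vue 2 component you could call listFiles() from mounted() and render the returned keys in the template.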

How to upload an image to IBM Cloud object storage (COS) using Node.js?

I am using ibm-cos-sdk, but I am only able to store text files; I want to upload images and PDF files as well. Can someone help me with that?
The IBM Cloud solution tutorial on applying end-to-end security to a cloud app features a file-sharing app. The files are stored in COS, and the app is built with Node.js. See these lines of code from the example.
const IBM = require('ibm-cos-sdk'); // S3-compatible client for IBM COS
const fs = require('fs');

const cos = new IBM.S3({ /* endpoint, apiKeyId, serviceInstanceId, ... */ });

// upload to COS
await cos.upload({
  Bucket: COS_BUCKET_NAME,
  Key: `${fileDetails.userId}/${fileDetails.id}/${fileDetails.name}`,
  Body: fs.createReadStream(file.path), // a read stream works for images and PDFs, not just text
  ContentType: fileDetails.type, // e.g. 'image/png' or 'application/pdf'
}).promise();
The snippet uses the mentioned ibm-cos-sdk with the S3 interface. I have used the app and code to upload images and PDFs.

Migrate videos from Vimeo to S3

I have a large quantity of videos on my Vimeo account that I would like to migrate to my AWS S3 account.
Rather than go through the time-consuming process of downloading from Vimeo to my local machine and then uploading from my local machine to S3, is there a way to transfer directly from Vimeo to S3?
If possible, I would want to create a script to iterate through each video via Vimeo API and set up the path to where it would go into S3 then initiate a direct transfer. Any ideas or suggestions would be much appreciated!
If you have a PRO account or higher, you can use the API to get download links for videos on your account, including a link to the original source file. Those download links can then be used to import the files into S3. Note that the links provided by the Vimeo API are expiring HTTP 302 redirects to the video file resource, so make sure you take note of the expiration time also provided in the response.
Download links are returned with the rest of a video's metadata, so I suggest using the fields parameter to only return the metadata needed.
http://developer.vimeo.com/api/common-formats#json-filter
https://developer.vimeo.com/api/reference/videos#GET/users/{user_id}/videos
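To illustrate that approach, here is a minimal sketch that lists videos through the Vimeo API, picks a download link, and streams each file straight into S3 without touching the local disk. It assumes the axios and aws-sdk (v2) packages, a PRO-level access token in VIMEO_TOKEN, and a placeholder bucket name and key prefix; pagination of the video list is left out.

const axios = require('axios');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

async function migrateVideos() {
  // Ask only for the fields we need: the video name and its download links.
  const res = await axios.get('https://api.vimeo.com/me/videos?fields=name,download', {
    headers: { Authorization: `bearer ${process.env.VIMEO_TOKEN}` },
  });

  for (const video of res.data.data) {
    // Prefer the original source file if it is present in the download list.
    const source = video.download.find((d) => d.quality === 'source') || video.download[0];

    // Stream the file from Vimeo and pipe it into S3 as a managed (multipart) upload.
    const download = await axios.get(source.link, { responseType: 'stream' });
    await s3.upload({
      Bucket: 'my-video-bucket',
      Key: `vimeo/${video.name}.mp4`,
      Body: download.data,
    }).promise();
  }
}

Remember that the download links expire, so fetch each link shortly before transferring that video rather than collecting them all up front.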

Storing a remotely hosted image on S3 directly using the Java SDK

I know I can download the image to my server and then upload it to S3 or any other cloud hosting service, but is there any way to store the image asset directly on S3 by supplying the URL of the asset instead of a file? I don't want to add an unnecessary download and upload on my server.
Note: I am assured that the URI will be up 99.9% of the time and that the image file will be there. I am also OK with using services other than S3 if they offer such a feature.
No. There is no API call for Amazon S3 that will retrieve content from another location.
You must supply the content as part of the API call.

Amazon S3: how to use/load an mp3 file simultaneously

I'm using Amazon S3 to store some mp3 files.
My web application uses the SoundManager 2 JavaScript library to load the files from the Amazon bucket and play them for users.
When the first user clicks on an mp3, SoundManager starts playing the file and, as intended, caches the rest of the song as it plays.
The problem is that if a second user clicks on the same mp3, they must wait until the first user has cached the whole song, which is unacceptable for my website.
I understand that Amazon S3 somehow 'streams' the file exclusively to the first request. Is there a way to be able to use that file simultaneously, i.e. users be able to play the same mp3's at the same time?
Also, would the CloudFront functionality solve this issue?
Thank you for your help!
Alex
(By the way, my application is built on Ruby on Rails 3, and hosted on Heroku)
There is no limitation in S3 that restricts simultaneous downloads of a single object.
I would suggest using a tool such as Charles to inspect the HTTP requests and see whether something else is causing the second client's request to be delayed.