I'd like to write a script that can take a list of urls for some files on S3 and upload them directly to a LightSail instance. I know I can download the files from S3 and use sftp to upload to LightSail, but I'm hoping there is a way that I can trigger the file transfer directly from S3 to LightSail. Has anyone done this before?
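One approach that avoids routing the data through your own machine (a sketch, assuming the LightSail instance runs Linux with the AWS CLI installed and credentials that can read the bucket; the key file, user, IP address, and URL list file are all placeholders) is to SSH into the instance and run the copy there, so each file travels from S3 straight to LightSail:

# run from your workstation; urls.txt holds one s3:// URL per line
while read -r url; do
  ssh -i LightsailDefaultKey.pem ubuntu@203.0.113.10 "mkdir -p ~/uploads && aws s3 cp '$url' ~/uploads/"
done < urls.txt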
Is there a way to autosave AutoCAD files, or changes to those files, directly to an S3 bucket? Is there perhaps an API I can use for this workflow?
While I was not able to quickly find a plug-in that does this for you, you can do one of the following:
Mount S3 bucket as a drive. You can read more at CloudBerry Drive - Mount S3 bucket as Windows drive
This might create some performance issues with AutoCAD.
Sync saved files to S3
You can set up a script that runs every n minutes and automatically syncs your files to S3 using aws s3 sync. You can read more about aws s3 sync here. Your command might look something like:
aws s3 sync /path/to/cad/files s3://bucket-with-cad/project/test
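For the "every n minutes" part, a minimal sketch using cron on a Unix-like host, reusing the path and bucket from the command above (on Windows you would use Task Scheduler instead):

# every 5 minutes; assumes aws is on cron's PATH
*/5 * * * * aws s3 sync /path/to/cad/files s3://bucket-with-cad/project/test --quiet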
How to download Simple Storage Service (S3) bucket files directly to a user's local machine?
You can use the AWS CLI's aws s3 commands to copy a file from S3.
The following cp command copies a single object to a specified file locally:
aws s3 cp s3://mybucket/test.txt test2.txt
Make sure to use quotes (") in case you have spaces in your key:
aws s3 cp "s3://mybucket/test with space.txt" "./test with space.txt"
I have uploaded images from my hosting to Amazon S3 using the following command:
s3cmd put -r --acl-public --guess-mime-type folder_name s3://abc/path/
Now I want to download the images from Amazon S3 to my local system. Please suggest a command so that I can download them.
Thanks
Use get.
s3cmd get s3://abc/path/
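Since the upload was recursive, you will likely want the recursive form to pull down everything under the path (a sketch; the local directory name is a placeholder):

s3cmd get --recursive s3://abc/path/ local_folder/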
I have a 17.7GB file on S3. It was generated as the output of a Hive query, and it isn't compressed.
I know that by compressing it, it'll be about 2.2GB (gzip). How can I download this file locally as quickly as possible when transfer is the bottleneck (250kB/s)?
I've not found any straightforward way to compress the file on S3, or enable compression on transfer in s3cmd, boto, or related tools.
S3 does not support stream compression, nor is it possible to compress an uploaded file remotely.
If this is a one-time process, I suggest downloading it to an EC2 machine in the same region, compressing it there, and then uploading it to your destination.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
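A minimal sketch of that one-time process, run on the EC2 instance (the key names are placeholders; aws s3 cp can stream to stdout and from stdin with -, so the 17.7GB file never needs to touch the instance's disk):

aws s3 cp s3://mybucket/hive-output.csv - | gzip | aws s3 cp - s3://mybucket/hive-output.csv.gz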
If you need this more frequently, see:
Serving gzipped CSS and JavaScript from Amazon CloudFront via S3
Late answer, but I found this works perfectly.
aws s3 sync s3://your-pics .
find . -name "*.jpg" | while read -r file; do gzip "$file"; echo "$file"; done
aws s3 sync . s3://your-pics --content-encoding gzip --dryrun
This will download all the files in the S3 bucket to the machine (or EC2 instance), compress the image files, and upload them back to the S3 bucket.
Verify the data before removing dryrun flag.
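One caveat: gzip replaces each file.jpg with file.jpg.gz, so the second sync would upload new keys rather than replacing the originals. A sketch that keeps the original key names, which is what --content-encoding gzip expects:

find . -name "*.jpg" | while read -r f; do gzip "$f" && mv "$f.gz" "$f"; done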
There are now pre-built apps in Lambda that you can use to compress images and files in S3 buckets. Just create a new Lambda function, select a pre-built app of your choice, and complete the configuration.
Step 1 - Create a new Lambda function
Step 2 - Search for a pre-built app
Step 3 - Select the app that suits your needs and complete the configuration process by providing the S3 bucket names.