AWS S3 download files with exec permission

I've been struggling with this one for quite a while. I thought it would work out of the box, given the AWS documentation's support for the ACL header.
I'm using the AWS S3 CLI to download files from my S3 bucket. Some of the files need to have 'exec' permissions (they run on Linux).
I can chmod the files but I would like to control that during the upload rather than during the download.
So, the question is whether the AWS CLI can automatically grant execute (or other) permissions based on something I can set during the upload, or afterwards on the uploaded object.
Thanks,
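For what it's worth, an S3 ACL controls who may access the object within S3; it doesn't translate into a POSIX file mode on download, so there is no out-of-the-box way to do this. One possible approach, sketched below with made-up bucket and file names, is to record the desired mode as user-defined object metadata at upload time and re-apply it after download:

# upload, recording the desired mode as custom metadata (the 'mode' key name is arbitrary)
aws s3 cp ./deploy.sh s3://my-bucket/deploy.sh --metadata mode=0755
# download, read the metadata back, and restore the mode
aws s3 cp s3://my-bucket/deploy.sh ./deploy.sh
mode=$(aws s3api head-object --bucket my-bucket --key deploy.sh --query 'Metadata.mode' --output text)
chmod "$mode" ./deploy.sh

The chmod still happens on the download side, but the mode travels with the object rather than being hard-coded in a script.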

Related

Migrate Moodle data to Amazon S3

I have migrated the Moodle data directory to Amazon S3. Now I am trying to access all the files from S3 storage using the moodle-tool_objectfs plugin.
I'm attaching my settings screenshot. I want to serve all the media files from Amazon S3 instead of from the server file system, for example the site logo, course materials in PDF format, etc.
Thanks for the shout-out Russell!
It sounds like you have manually migrated the content to S3 rather than relying on this plugin to do the work for you. I'd guess that your manual migration has put the files into a structure/path that the plugin isn't expecting, especially if you have copied your complete moodledata folder into S3 and not just the uploaded user files. (The tool_objectfs plugin does not replace the need for a normal moodledata directory; it just allows the majority of your files to be stored in S3.)
Usually you would have a Moodle site set up with a normal moodledata directory, and then you would install our tool_objectfs plugin, which would migrate files from moodledata to your S3 storage, relying on the plugin to perform the migration for you.
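In case it helps, the migration the plugin performs is driven by Moodle's scheduled tasks, so once tool_objectfs is installed and configured you can also trigger it manually from the CLI. The task class name below is an assumption based on the plugin's documentation and may differ between versions:

# run the objectfs push task by hand (task class name may vary by plugin version)
php admin/cli/scheduled_task.php --execute='\tool_objectfs\task\push_objects_to_storage'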

How to download a file from S3 into an EC2 instance using Packer to build a custom AMI

I am trying to create a custom AMI using Packer.
I want to install some specific software on the custom AMI, and my setup files are in an S3 bucket. But it seems there is no direct way to download an S3 file in Packer the way cfn-init can.
So is there any way to download a file onto the EC2 instance using Packer?
Install the awscli in the instance and use iam_instance_profile to give the instance permissions to get the files from S3.
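As a rough sketch of that setup (role, policy, and profile names are made up for illustration, and the trust policy file is not shown):

# create a role that EC2 instances can assume
aws iam create-role --role-name packer-s3-read --assume-role-policy-document file://ec2-trust.json
# let it read objects from S3 (or attach a narrower bucket-specific policy)
aws iam attach-role-policy --role-name packer-s3-read --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# wrap the role in an instance profile that Packer's iam_instance_profile option can reference
aws iam create-instance-profile --instance-profile-name packer-s3-read
aws iam add-role-to-instance-profile --instance-profile-name packer-s3-read --role-name packer-s3-read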
I can envisage a case where this is ineffective.
When building the image on AWS you use your local credentials, but while the image is building, the temporary instance Packer launches runs as its own user; it is not you, does not have your credentials, and so cannot access the S3 bucket (if it is private).
Option one: https://github.com/enmand/packer-provisioner-s3
Option two: use a shell-local provisioner to pull the S3 files down to your machine with aws s3 cp, then a file provisioner to upload them to the correct folder in the builder image; you can then use a shell provisioner to do any other work on the files (a sketch follows the list). I chose this because, although it's more code, it is more universal when I share my build: others have no need to install anything extra.
Option three: wait. There is an enhancement discussed in 2019 on the Packer GitHub to offer an S3 pass-through using local credentials, but it isn't on the official roadmap.
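A minimal sketch of the local pull from option two (bucket and paths are placeholders; the file provisioner step lives in the Packer template and is only described in comments here):

# shell-local provisioner: fetch the setup files onto the machine running Packer,
# using your own local AWS credentials
aws s3 cp s3://my-setup-bucket/installers/ ./installers/ --recursive
# a file provisioner then copies ./installers/ into the builder instance,
# and a shell provisioner can install from it there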
Assuming the awscli is already installed on the EC2 instance, use the sample command below in a shell provisioner.
sudo aws s3 cp s3://bucket-name/path_to_folder/file_name /home/ec2-user/temp

GoReplay - Upload to S3 does not work

I am trying to capture all incoming traffic on a specific port using GoReplay and to upload it directly to S3 servers.
I am running a simple file server on port 8000 and a gor instance using the (simple) command
gor --input-raw :8000 --output-file s3://<MyBucket>/%Y_%m_%d_%H_%M_%S.log
It does create a temporary file at /tmp/, but other than that, it does not upload anything to S3.
Additional information:
The OS is Ubuntu 14.04.
The AWS CLI is installed.
The AWS credentials are defined within the environment.
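One thing worth double-checking, assuming gor picks up credentials through the standard AWS SDK environment variables, is that the gor process actually sees them, particularly if it is started with sudo, which strips the environment by default:

# SDK-style variables (placeholder values)
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=examplesecretkey
export AWS_REGION=us-east-1
# -E preserves the environment if root is needed to capture on the port
sudo -E gor --input-raw :8000 --output-file s3://<MyBucket>/%Y_%m_%d_%H_%M_%S.log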
It seems the information you are providing, or the scenario you explained, is not complete. However, uploading a file from your EC2 machine to S3 is as simple as the command below.
aws s3 cp yourSourceFile s3://your-bucket
To see your file, use the command below.
aws s3 ls s3://your-bucket
However, S3 is object storage, and you can't use it for files that are continually being edited, appended to, or updated.
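Since existing S3 objects can't be appended to, one workaround (just a sketch, not a GoReplay feature) is to let gor write rotated chunks to a local directory and periodically push the completed files up with the AWS CLI, for example from cron:

# crontab entry: sync finished capture files to S3 once a minute (paths are placeholders)
* * * * * aws s3 sync /var/log/gor/ s3://my-capture-bucket/gor/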

AWS FTP behavior

I'm having an issue with my AWS S3 bucket and vsftpd.
I've created a vsftpd instance and mounted an AWS S3 bucket. My issue is that every time I upload a file and the connection is disrupted, the FTP client's retry appends to the existing file on the S3 bucket instead of overwriting it. What should I set in the S3 bucket policy to get it to overwrite instead of append?
There are no Amazon S3 configuration settings that would impact this behaviour -- it is totally the result of the software you are using.
It's also worth mentioning that FTP is a rather old protocol and these days there are much better alternatives, such as uploads via the browser or Dropbox-like shared folders.
One of the easiest options is to have your users upload directly to Amazon S3 -- that way, you don't need to run any servers. This could be done by uploading via a browser, or by providing users with some software, such as Cloudberry Explorer or the AWS Command-Line Interface (CLI).
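For instance, once a user has credentials configured, a direct upload with the CLI is a one-liner (bucket and path are placeholders):

aws s3 cp report.pdf s3://my-upload-bucket/incoming/report.pdf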
I highly encourage you to stop using FTP these days.

Can I cUrl a file from the web to Amazon S3?

Is there a way to transfer a file from the web directly to my Amazon S3 account?
For example, I want to transfer a large RDF file from www.data.gov directly to Amazon S3 without having to download the file to my local machine first.
You need a server somewhere that will execute the curl command. The easiest way is probably to use a tool that I wrote for AWS EC2: https://github.com/mjhm/cURLServer. You can check out the docs on a live version at http://ec2-204-236-157-181.us-west-1.compute.amazonaws.com/
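If you already have an EC2 instance (or any server) with the AWS CLI configured, you can also stream the download straight into S3 without writing it to disk, since aws s3 cp accepts - as a source and reads from stdin (the URL and bucket below are placeholders):

curl -L https://www.data.gov/path/to/large-file.rdf | aws s3 cp - s3://my-bucket/large-file.rdf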