s3fs mount in Red Hat 7.7 with IAM role doesn't work - amazon-s3

I am trying to mount a bucket with s3fs using an IAM role on an EC2 instance spun up from the RHEL 7.7 AMI, following the link below.
https://medium.com/tensult/aws-how-to-mount-s3-bucket-using-iam-role-on-ec2-linux-instance-ad2afd4513ef
The problem is that the s3fs mount is not working after following the steps. Can anyone help me figure out what is wrong?
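For reference, the guide's steps boil down to roughly the following (the bucket name, mount point, and role name here are placeholders, not my actual values):
sudo mkdir -p /mnt/my-bucket
sudo s3fs my-bucket /mnt/my-bucket -o iam_role="my-ec2-role" -o allow_other -o url="https://s3.amazonaws.com"
If the mount fails silently, running s3fs in the foreground with -f -o dbglevel=info usually shows why (for example, missing role permissions or a blocked instance metadata service).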
Regards
Pradeep

Related

An error occurred (InvalidParameter) when calling the ExportImage operation: The given S3 object is not local to the region

I want to export a copy of my Amazon Machine Image (AMI) as a virtual machine (VM) to deploy in my on-site virtualization environment.
I followed the steps mentioned in the link below:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-export-vm-using-import-export/
While running the export-image command below:
aws ec2 export-image --image-id ami-id --disk-image-format VMDK --s3-export-location S3Bucket=my-export-bucket,S3Prefix=exports/
I encountered the "S3 object is not local to the region" error. I checked, and my S3 bucket and AMI are both in the same region.
What could be the solution to this error? I tried changing the AMI and buckets, but it didn't work.
Can you check your AWS config file? If it has a different region, then that might be the issue.
Location:
Home_Directory/.aws/config
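For example, a quick way to check (the region value shown is just an example; yours should be wherever the AMI and bucket live):
cat ~/.aws/config
aws configure get region
The region line under the [default] profile, for example region = us-east-1, needs to match the region of both the AMI and the export bucket.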

aws s3 ls gives error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>

Recently I installed the AWS CLI on a Linux machine following the documentation on the official AWS website. At first I was able to run the S3 commands without any issue. As part of my development, I uninstalled the AWS CLI and re-installed it. After that I got the error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>
when I execute aws s3 ls
I figured it out.
I just needed to add the region:
aws configure
AWS Access Key ID [******************RW]:
AWS Secret Access Key [******************7/]:
Default region name [None]: us-east-1
Then it works!
Thanks.
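If you'd rather not rerun the interactive prompt, the same fix can be applied non-interactively (the region value is just an example):
aws configure set region us-east-1
aws configure list
aws configure list also confirms which region and credentials the CLI actually resolves, which is handy for verifying the fix.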

How to download a file from S3 into an EC2 instance using Packer to build a custom AMI

I am trying to create a custom AMI using Packer.
I want to install some specific software on the custom AMI, and my setup files are in an S3 bucket. But it seems there is no direct way to download an S3 file in Packer the way cfn-init can.
So is there any way to download a file onto the EC2 instance using Packer?
Install the awscli in the instance and use iam_instance_profile to give the instance permissions to get the files from S3.
I can envisage a case where this is ineffective.
When building the image on AWS you use your local creds, but while the image is building, the instance Packer launches runs as its own Packer user, not as you, so it does not have your creds and can't access the S3 bucket (if it is private).
Option one: https://github.com/enmand/packer-provisioner-s3
Option two: use the shell-local provisioner to pull the S3 files down to your machine with aws s3 cp, then the file provisioner to upload them to the correct folder in the builder image; you can then use a remote shell provisioner to do any other work on the files (a rough sketch follows below). I chose this because, although it's more code, it is more universal when I share my build: others have no need to install anything extra.
Option three: wait and wait. There is an enhancement discussed on the Packer GitHub in 2019 to offer an S3 passthrough using local creds, but it isn't on the official roadmap.
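A rough sketch of option two's local half, assuming the AWS CLI is configured on your workstation (the bucket name, key, and local filename are placeholders):
aws s3 cp s3://my-setup-bucket/installers/setup.tar.gz ./setup.tar.gz
From there, Packer's file provisioner can upload ./setup.tar.gz to, say, /tmp/setup.tar.gz on the builder instance, and a remote shell provisioner can unpack and install it.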
Assuming the AWS CLI is already installed on the EC2 instance, use the sample command below in a shell provisioner.
sudo aws s3 cp s3://bucket-name/path_to_folder/file_name /home/ec2-user/temp

How to use Amazon S3 as Moodle Data Root

I am trying to move my moodledata folder content into Amazon S3. I didn't find any document (or guide) on configuring this setup.
I am using MOODLE 3.3 STABLE build version.
Can anyone help me set this up?
You could use s3fs and mount it on your webserver.
I suggest using local directories (for performance) for:
cache, localcache and sessions
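If you do go the s3fs route with access keys rather than an instance role, the mount is roughly as follows (the bucket name, mount point, and key values are placeholders, not Moodle-specific settings):
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs
sudo s3fs my-moodle-bucket /var/moodledata -o passwd_file=/etc/passwd-s3fs -o allow_other -o use_cache=/tmp
Then point $CFG->dataroot in Moodle's config.php at the mount point, while keeping the cache, localcache and sessions directories on local disk as suggested above.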

Change user ownership of s3fs mounted buckets

How can I modify the user:group ownership of an s3fs-mounted bucket?
I have a git installation that I would essentially like to store in a bucket on my Amazon S3 account, and then, using SparkleShare via my web host, sync this data across multiple machines.
I have set up SparkleShare to successfully sync three machines. Works like a charm.
This is syncing to a folder at /home/git/dropbox. No problems there.
I want the sync folder to be a mounted S3 bucket, though.
I can mount the buckets right next to that dropbox folder, but no luck changing ownership to git:git.
Problem: when you create the mount with root:root user, only that user has access to the bucket.
I tried to create the mount with S3FS logged in as the GIT user, but no luck, it still mounts and assigns permissions as the root:root user.
Do I uninstall S3FS and re-install using the GIT user?
Any help would be greatly appreciated!
Rick
You simply want to mount it as that user. You can also automount it by adding the uid and gid that you want it mounted as. For example, your /etc/fstab would have an entry such as the following:
s3fs#s3bucketName /mnt/point fuse defaults,noatime,allow_other,uid=500,gid=48,use_cache=/tmp,default_acl=public-read 0 0
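If you are not sure which numeric uid and gid to use for the git user in this question (the uid=500,gid=48 values above are just examples), id will print them:
id git
Put the reported uid and gid into the uid= and gid= options so the mount shows up owned by git:git.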
On Ubuntu I am finding that whichever user does the s3fs mount will own it, even though ls will show the owner as root:root, and in fact root cannot use it. When you did the mount as the git user are you sure you could not write to it?
s3fs 1.69 seems to have fixed a uid/gid issue:
https://code.google.com/p/s3fs/downloads/detail?name=s3fs-1.69.tar.gz&can=2&q=