S3 FUSE mount not listed in the mounts in ansible_facts

I have an AWS S3 bucket mounted via FUSE on my machine, but it is not listed in the ansible_facts mounts. Is there any way to get it listed?
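For anyone debugging the same thing, a quick way to see exactly which mounts the setup module reports, assuming Ansible is installed and you are checking the local machine:

```shell
# Print only the mounts fact gathered by the setup module
ansible localhost -m setup -a 'filter=ansible_mounts'

# Compare with what the kernel actually has mounted
# (an s3fs mount shows up with filesystem type fuse.s3fs)
grep fuse /proc/mounts
```

If the mount appears in /proc/mounts but not in the fact output, the gap is in fact gathering rather than in the mount itself.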

Related

An error occurred (InvalidParameter) when calling the ExportImage operation: The given S3 object is not local to the region

I want to export a copy of my Amazon Machine Image (AMI) as a virtual machine (VM) to deploy in my on-site virtualization environment.
I followed the steps described in the link below:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-export-vm-using-import-export/
When running the export-image command as below
aws ec2 export-image --image-id ami-id --disk-image-format VMDK --s3-export-location S3Bucket=my-export-bucket,S3Prefix=exports/
I encountered the "S3 object is not local to the region" error. I checked, and my S3 bucket and the AMI are both in the same region.
What could be the solution to this error? I tried changing the AMI and the bucket, but that didn't work.
Can you check your AWS config file? If it specifies a different region, that might be the issue.
Location:
~/.aws/config
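For reference, the default region the CLI uses is set in that file; the profile name and region below are only examples, so substitute the region where your AMI and bucket actually live:

```ini
# ~/.aws/config - the CLI falls back to this region when --region is not passed
[default]
region = us-east-1
output = json
```

A `--region` flag on the command itself, or the `AWS_DEFAULT_REGION` environment variable, takes precedence over this file.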

move files to s3 in EC2

I have an S3 bucket that I access from EC2. I want to remove multiple files across S3 folders. However, the command reports the files as deleted, but the files are still there.
command:
aws s3 rm s3://mybucket/path1/publish/test/dummyfile_*.dat
and got the message below:
delete: s3://mybucket/path1/publish/test/dummyfile_*.dat
But the files are still present.
Can anyone please help?
"Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions."
from https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#CoreConcepts
If you make a copy of an S3 object to an EC2 instance, you have simply made a copy of it.
You can use aws s3 sync to synchronize S3 objects (files) between S3 and your EC2 instance, see https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
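Separately from consistency, one common gotcha with the command in the question: `aws s3 rm` does not expand `*` wildcards in the object key, so the glob is treated as a literal key name. The supported pattern is `--recursive` combined with `--exclude`/`--include` filters. A minimal sketch, with the bucket, prefix, and pattern taken from the question; the helper function just assembles the command so the flags are easy to see (add `--dryrun` first to preview what would be deleted):

```shell
# aws s3 rm takes a glob in the key literally; to delete by pattern,
# exclude everything and then include only the matching keys.
build_s3_rm() {
  bucket=$1; prefix=$2; pattern=$3
  printf 'aws s3 rm s3://%s/%s --recursive --exclude "*" --include "%s"\n' \
    "$bucket" "$prefix" "$pattern"
}

# Prints the full delete command for the files from the question
build_s3_rm mybucket path1/publish/test 'dummyfile_*.dat'
```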

How to Mount an S3 Bucket on an EC2 Ubuntu Server, Store Web Application Uploads Directly in That Bucket, and Retrieve the Files When Users Access Them

I have an Amazon EC2 instance with Ubuntu 16.04 x64 and host a web application on it.
I need to mount an S3 bucket as one of the folders, save user-uploaded files directly to that S3 bucket, and retrieve the files when users access them.
I mounted S3 and tried uploading files, but the files are not uploading.
This might be what you're looking for: https://github.com/s3fs-fuse/s3fs-fuse
BTW, network-based file systems can be slow for servers, so do watch out for performance issues!
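For completeness, a minimal sketch of the s3fs-fuse setup, assuming a hypothetical bucket `my-app-uploads` and mount point `/var/www/uploads`, with credentials in the standard s3fs password file:

```shell
# Install s3fs-fuse (package name on Ubuntu: s3fs)
sudo apt-get install -y s3fs

# Credentials in ACCESS_KEY_ID:SECRET_ACCESS_KEY format, owner-readable only
echo 'AKIAEXAMPLE:wJalrEXAMPLEKEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket; allow_other lets the web server's user see the files
# (allow_other may require uncommenting user_allow_other in /etc/fuse.conf)
sudo s3fs my-app-uploads /var/www/uploads -o passwd_file=$HOME/.passwd-s3fs -o allow_other
```

On EC2 it may be cleaner to attach an IAM role to the instance and mount with `-o iam_role=<role-name>` instead of storing keys on disk.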

Copying aws snapshot to S3 bucket

I want to copy an EBS snapshot to my S3 bucket, but I cannot find a way to do it after trying and researching.
I would be grateful for any information that could get me started on a solution.
There is an answer within the AWS forums, but it's rather roundabout:
1. Create a temporary EBS volume from the snapshot. (Snapshots: Actions: Create volume)
2. Create a temporary EC2 Linux instance and install the AWS CLI.
3. Attach the volume to the instance and mount it. (EBS Volumes: Actions: Attach volume - must be in the same availability zone)
4. Find the name of the mounted snapshot volume with lsblk - e.g. /dev/xvdj
5. Copy the volume contents to your system - e.g. sudo bash -c "dd if=/dev/xvdj bs=8M | gzip > /home/ubuntu/volbk.gz"
6. Copy your .gz file to S3 - aws s3 cp ~/volbk.gz s3://my-bucket-name
7. Check that your S3 bucket contents arrived OK, then unmount the snapshot volume.
8. Terminate the instance.
9. Delete the temporary EBS volume.
from here, with my additions (Nov 2 answer):
https://forums.aws.amazon.com/thread.jspa?messageID=151285
copy-snapshot is the AWS CLI command that copies a snapshot of an EBS volume and stores it in Amazon S3. You can copy the snapshot within the same region or from one region to another.
This example command copies a snapshot with an arbitrary ID from one region to another:
aws --region us-east-1 ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id snap-066877671789bd71b --description "This is my copied snapshot."
For more info, refer to https://docs.aws.amazon.com/cli/latest/reference/ec2/copy-snapshot.html
With Amazon EBS, you can create point-in-time snapshots of volumes, which can be stored in Amazon S3. After you've created a snapshot and it has finished copying to Amazon S3, you can copy it from one AWS region to another, or within the same region. The snapshot copy ID is different from the ID of the original snapshot.
EBS snapshots are stored in Amazon S3. However, you will not find your snapshots in any of your S3 buckets. AWS uses the S3 infrastructure to store your EBS snapshots, but you cannot access them while they reside in S3.
You can copy an AWS EBS snapshot using either the AWS EC2 console or the command line.
i) Copy an EBS snapshot using the console:
Open the EC2 console -> choose Snapshots in the navigation pane -> choose Copy from the Actions list -> in the Copy Snapshot dialog box, provide the necessary details (destination region, description, encryption, etc.) and select Copy.
ii) Copy an EBS snapshot using the command line:
Run the command below in the AWS CLI:
aws --region <destination-region> ec2 copy-snapshot --source-region <source-region> --source-snapshot-id <snapshot-id> --description "<description>"

AWS S3 auto save folder

Is there a way I can autosave AutoCAD files, or changes to AutoCAD files, directly to an S3 bucket? Is there perhaps an API I can utilize for this workflow?
While I was not able to quickly find a plug-in that does that for you, you can do one of the following:
Mount the S3 bucket as a drive. You can read more at CloudBerry Drive - Mount S3 bucket as Windows drive.
This might create some performance issues with AutoCAD.
Sync saved files to S3.
You can set a script to run every n minutes that automatically syncs your files to S3 using aws s3 sync. You can read more about aws s3 sync here. Your command might look something like:
aws s3 sync /path/to/cad/files s3://bucket-with-cad/project/test
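On Linux or macOS, the "every n minutes" part can be a cron entry; the five-minute interval and the paths below are placeholders:

```
# Run the sync every 5 minutes (crontab -e to install this line)
*/5 * * * * aws s3 sync /path/to/cad/files s3://bucket-with-cad/project/test
```

Since AutoCAD typically runs on Windows, the equivalent there would be a Task Scheduler job running the same aws s3 sync command on a schedule.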