Azure DevOps - pipeline to delete a single S3 file

I would like a pipeline setup that I can run manually. The idea here is that it deletes a single file held within an AWS S3 account. I know technically there are many ways to do this, but what is best practice?
Thank you!

You can add an AWS CLI task to the pipeline to delete a single file held within an AWS S3 bucket.
You can follow the steps below:
1. Create an AWS service connection before adding the AWS CLI task to the pipeline.
Create AWS service connection
2. Add the AWS CLI task to the pipeline and configure the required parameters. For the meaning of the AWS CLI parameters, you can refer to the document:
Command structure in the AWS CLI
The command structure is like:
aws <command> <subcommand> [options and parameters]
In this example, you can use the command below to delete a single S3 file:
aws s3 rm s3://BUCKET_NAME/uploads/file_name.jpg
"s3://BUCKET_NAME/uploads/file_name.jpg" is the path of the file you saved in S3.
AWS CLI in pipeline
3. Run the pipeline, and the single S3 file will be deleted. A minimal YAML sketch of such a pipeline is shown below.
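For reference, here is a minimal sketch of a manually-run YAML pipeline using the AWS CLI task from the AWS Toolkit for Azure DevOps. The service connection name, region, and bucket path are placeholders, and the task input names may vary slightly between toolkit versions:
trigger: none   # no CI trigger; run the pipeline manually

pool:
  vmImage: ubuntu-latest

steps:
  - task: AWSCLI@1
    displayName: Delete a single file from S3
    inputs:
      awsCredentials: 'MyAWSServiceConnection'   # AWS service connection from step 1 (placeholder name)
      regionName: 'us-east-1'                    # region of the bucket (placeholder)
      awsCommand: 's3'
      awsSubCommand: 'rm'
      awsArguments: 's3://BUCKET_NAME/uploads/file_name.jpg'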

Related

Providing credentials to the AWS CLI in ECS/Fargate

I would like to create an ECS task with Fargate, and have that upload a file to S3 using the AWS CLI (among other things). I know that it's possible to create task roles, which can provide the task with permissions on AWS services/resources. Similarly, in OpsWorks, the AWS SDK is able to query instance metadata to obtain temporary credentials for its instance profile. I also found these docs suggesting that something similar is possible with the AWS CLI on EC2 instances.
Is there an equivalent for Fargate—i.e., can the AWS CLI, running in a Fargate container, query the metadata service for temporary credentials? If not, what's a good way to authenticate so that I can upload a file to S3? Should I just create a user for this task and pass in AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables?
(I know it's possible to have an ECS task backed by EC2, but this task is short-lived and run maybe monthly; it seemed a good fit for Fargate.)
"I know that it's possible to create task roles, which can provide the
task with permissions on AWS services/resources."
"Is there an equivalent for Fargate"
You already know the answer. The ECS task role isn't specific to EC2 deployments, it works with Fargate deployments as well.
You can get the task metadata, including IAM access keys, through the ECS metadata service. But you don't need to worry about that, because the AWS CLI, and any AWS SDK, will automatically pull that information when it is running inside an ECS task.
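For illustration, here is a minimal sketch you could run inside the container to confirm this, assuming the task has a task role attached; the metadata endpoint and environment variable are the standard ECS credential-provider mechanism, and the bucket/key names are placeholders:
# ECS (including Fargate) injects AWS_CONTAINER_CREDENTIALS_RELATIVE_URI into the container
# when the task has a task role; the AWS CLI and SDKs query this endpoint automatically,
# so this call is only a manual check of the temporary credentials.
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"

# With the task role in place, the CLI just works without access keys (placeholder names):
aws s3 cp ./report.csv s3://my-bucket/reports/report.csv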

Terraform resource for AWS S3 Batch Operation

I couldn't find a Terraform resource for AWS S3 Batch Operations. I was able to create an AWS S3 inventory through Terraform, but couldn't create an S3 batch operation.
Did anyone create an S3 batch operation through Terraform?
No, there is no Terraform resource for an S3 batch operation. In general, most Terraform providers only have resources for things that are actually resources (they hang around), not things that could be considered "tasks". For the same reason, there's no CloudFormation resource for S3 batch operations either.
Your best bet is to use a module that allows you to run shell commands and use the AWS CLI for it. I like to use this module for these kinds of tasks. You would use it in combination with the AWS CLI command for S3 batch jobs (see the sketch below).
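For reference, the AWS CLI side would be an aws s3control create-job call. The sketch below assumes an S3PutObjectTagging operation; the account ID, ARNs, bucket names, and manifest ETag are placeholders:
# Sketch only: replace the account ID, role ARN, bucket names, and manifest ETag.
aws s3control create-job \
  --account-id 111111111111 \
  --operation '{"S3PutObjectTagging":{"TagSet":[{"Key":"processed","Value":"true"}]}}' \
  --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::my-bucket/manifest.csv","ETag":"MANIFEST_ETAG"}}' \
  --report '{"Bucket":"arn:aws:s3:::my-report-bucket","Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks"}' \
  --priority 10 \
  --role-arn arn:aws:iam::111111111111:role/s3-batch-role \
  --no-confirmation-required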

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files from GCS to S3 to leverage an AWS Lambda function. I have edited ~/.boto.cfg and commented out the two AWS authentication parameters, but a simple gsutil ls s3://mybucket fails from either a GCE or an EC2 VM.
The error is: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version 4.28, and the locations of the GCS and S3 buckets are respectively US-CENTRAL1 and US East (Ohio), in case this is relevant.
I am clueless, as the AWS key is valid and I enabled http/https. Downloading from GCS and uploading to S3 using Cyberduck on my laptop is impractical (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures by adding a new [s3] section to ~/.boto, like:
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
The use of that section is inherited from boto3 but is currently not created by gsutil config, so it needs to be added explicitly for the target endpoint.
For S3-to-GCS transfers, I will consider the more serverless Storage Transfer Service API.
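With that [s3] section in place, the failing command from the question should work again, and gsutil can copy straight from GCS to S3. Bucket names below are placeholders:
gsutil ls s3://mybucket
# Parallel, recursive copy from GCS to S3 (can also be run in the other direction):
gsutil -m cp -r gs://my-gcs-bucket/uploads s3://mybucket/uploads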
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied files from GCS to my GCE hard drive
Step 2: Using aws cli (aws s3 cp ...), I copied files from GCE hard drive to s3 bucket
The above methodology has worked reliably for me. I tried using gsutil rsync, but it failed unexpectedly.
Hope this helps
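A minimal sketch of that two-step approach on the GCE VM, with placeholder bucket names and staging directory:
# Step 1: copy from GCS to the VM's local disk (parallel, recursive)
gsutil -m cp -r gs://my-gcs-bucket/uploads /tmp/staging/
# Step 2: copy from the local disk to the S3 bucket with the AWS CLI
aws s3 cp /tmp/staging/ s3://my-s3-bucket/uploads/ --recursive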

GoReplay - Upload to S3 does not work

I am trying to capture all incoming traffic on a specific port using GoReplay and to upload it directly to S3 servers.
I am running a simple file server on port 8000 and a gor instance using the (simple) command
gor --input-raw :8000 --output-file s3://<MyBucket>/%Y_%m_%d_%H_%M_%S.log
It does create a temporary file at /tmp/, but other than that, it does not upload anything to S3.
Additional information :
The OS is Ubuntu 14.04
The AWS CLI is installed.
The AWS credentials are defined within the environment.
It seems the information you are providing, or the scenario you explained, is not complete. However, uploading a file from your EC2 machine to S3 is as simple as the command below:
aws s3 cp yourSourceFile s3://your-bucket
To see your file, you can use the command below:
aws s3 ls s3://your-bucket
However, S3 is object storage, so you can't use it for files that are continually being edited, appended to, or updated.
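Regarding the original gor command itself: the file output buffers chunks under /tmp, and the S3 writer only uploads a chunk after it has been closed, so the credentials must be visible to the gor process and the chunk has to rotate before anything shows up in the bucket. A minimal sketch under those assumptions (the size-limit flag name should be checked against your gor version, and all values are placeholders):
# Placeholders: replace the key values, region, and bucket name with your own.
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1

# Chunks are uploaded only once they are closed; a size limit forces rotation sooner.
gor --input-raw :8000 \
    --output-file s3://MyBucket/%Y_%m_%d_%H_%M_%S.log \
    --output-file-size-limit 32mb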

Code deploy from Bitbucket to an AWS S3 bucket through TeamCity

I am trying to set up some continuous integration in TeamCity, which deploys code from Bitbucket to an AWS S3 bucket.
We have a repository in Bitbucket, and it contains a couple of folders, one of which is the build folder.
I just need to deploy this build folder, with all of its contents, into one of our S3 buckets.
We can overwrite the existing copy of the build folder that is already in the S3 bucket.
My initial approach was to create an IAM user or a role with sufficient permissions for TeamCity to access AWS services, and
to install an AWS CodeDeploy plugin in TeamCity.
But my only question is: how can I get the code from Bitbucket into TeamCity and deploy it to the S3 bucket?
Is there any way around this? Please help.
Thanks in advance.
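For illustration, once the Bitbucket repository is attached as a VCS root (so the checkout directory contains the build folder) and the agent has AWS credentials available, e.g. from the IAM user mentioned above, a TeamCity command-line build step could run something like the following. The bucket name and prefix are placeholders:
# Sync the checked-out build folder to the S3 bucket, overwriting changed files
# and removing objects that no longer exist locally (placeholder bucket/prefix).
aws s3 sync build/ s3://my-deploy-bucket/build/ --delete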