AWS CodeBuild ECR CannotPullContainerError - aws-codebuild

CodeBuild project fails at the Provisioning phase with the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image. CannotPullContainerError: Error response from daemon: pull access denied for <image-name>, repository does not exist or may require 'docker login': denied: User: arn:aws:sts::<id>

The issue was with the image pull credentials.
CodeBuild was using the default AWS CodeBuild credentials to pull the image, while the ECRAccessPolicy was attached to the project service role.
I fixed it by updating the image pull credentials to use the project service role.
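If you manage the project from the CLI, here is a minimal sketch of the same change, assuming a Linux container project named my-codebuild-project (hypothetical); update-project replaces the whole environment block, so keep your project's real type, image, and compute values:
# Switch image pull credentials from the default CodeBuild credentials to the service role
aws codebuild update-project \
  --name my-codebuild-project \
  --environment 'type=LINUX_CONTAINER,computeType=BUILD_GENERAL1_SMALL,image=ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:latest,imagePullCredentialsType=SERVICE_ROLE'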

To add additional clarity (not enough reputation yet to comment on an existing answer), the CodeBuild project service role needs to have the following permissions if trying to pull from a private repository:
{
  "Action": [
    "ecr:BatchCheckLayerAvailability",
    "ecr:BatchGetImage",
    "ecr:GetDownloadUrlForLayer"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:ecr:us-east-1:ACCOUNT_ID:repository/REPOSITORY_NAME*"
  ]
}
The ECR repository policy should also look something like this (scope down root if desired):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:root"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}

FWIW, I stumbled across this issue when using Terraform to create my CodeBuild pipeline.
The setting to change for this is image_pull_credentials_type, which should be set to SERVICE_ROLE rather than CODEBUILD in the environment block of the resource "aws_codebuild_project" (see the sketch below).
Thanks to Chaitanya, whose accepted answer pointed me in this direction.
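A minimal HCL sketch of that environment block; the project name, image, and compute type are hypothetical placeholders, and the source/artifacts blocks are omitted for brevity:
resource "aws_codebuild_project" "example" {
  name         = "my-codebuild-project"      # hypothetical name
  service_role = aws_iam_role.codebuild.arn  # role holding the ECR pull permissions (assumed to exist)

  environment {
    compute_type                = "BUILD_GENERAL1_SMALL"
    image                       = "ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:latest"
    type                        = "LINUX_CONTAINER"
    image_pull_credentials_type = "SERVICE_ROLE"  # default is CODEBUILD
  }

  # source and artifacts blocks omitted
}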

Related

s3/gcs - Copy s3 subdirectory to gcs without access to root? [duplicate]

I'm looking to set up a transfer job to take files stored within an S3 bucket and load them to a GCS bucket. The credentials that I have give me access to the folder that contains the files that I need from S3 but not to the higher level folders.
When I try to set up the transfer job with the S3 bucket name under 'Amazon S3 bucket' and the access key ID & secret access key filled in, access is denied, as you would expect given the limits of my credentials. However, access is still denied if I add the extra path information as a Prefix item (e.g. 'Production/FTP/CompanyName'), and I do have access to this folder.
It seems as though I can't get past the fact that I do not have access to the root directory. Is there any way around this?
According to the documentation:
The Storage Transfer Service uses the project-[$PROJECT_NUMBER]@storage-transfer-service.iam.gserviceaccount.com service account to move data from a Cloud Storage source bucket.
The service account must have the following permissions for the source bucket:
storage.buckets.get - Allows the service account to get the location of the bucket. Always required.
storage.objects.list - Allows the service account to list objects in the bucket. Always required.
storage.objects.get - Allows the service account to read objects in the bucket. Always required.
storage.objects.delete - Allows the service account to delete objects in the bucket. Required if you set deleteObjectsFromSourceAfterTransfer to true.
The roles/storage.objectViewer and roles/storage.legacyBucketReader roles together contain the permissions that are always required. The roles/storage.legacyBucketWriter role contains the storage.objects.delete permissions. The service account used to perform the transfer must be assigned the desired roles.
You have to set these permissions on your AWS bucket.
Paul,
Most likely your IAM role is missing the s3:ListBucket permission. Can you update your IAM role to have s3:ListBucket and s3:GetBucketLocation and try again?
On the AWS side, the permission policy should look like the one below in case you want to give access to a subfolder:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": "arn:aws:s3:::<bucketname>",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "<subfolder>/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/<subfolder>",
        "arn:aws:s3:::<bucketname>/<subfolder>/*"
      ],
      "Condition": {}
    }
  ]
}
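Before wiring the credentials into the transfer job, a quick way to sanity-check the prefix-scoped policy is to exercise the same calls with the AWS CLI (the <bucketname>/<subfolder> placeholders match the policy above):
# Listing under the allowed prefix should succeed
aws s3 ls s3://<bucketname>/<subfolder>/
# The transfer service also needs the bucket location
aws s3api get-bucket-location --bucket <bucketname>
# Listing the bucket root falls outside the allowed prefix and should be denied
aws s3 ls s3://<bucketname>/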

Terraform 403 Forbidden Error while executing terraform plan

We are currently using S3 as our backend for storing the tf state file. While executing terraform plan we receive the below error:
Error: Forbidden: Forbidden
status code: 403, request id: 18CB0EA827E6FE0F, host id: 8p0TMjzvooEBPNakoRsO3RtbARk01KY1KK3z93Lwyvh1Nx6sw4PpRyfoqNKyG2ryMNAHsdCJ39E=
We enabled debug mode, and below is the error message we noticed:
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: Accept-Encoding: gzip
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4:
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4:
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: -----------------------------------------------------
2020/05/31 20:02:20 [ERROR] <root>: eval: *terraform.EvalRefresh, err: Forbidden: Forbidden
status code: 403, request id: 2AB56118732D7165, host id: 5sM6IwjkufaDg1bt5Swh5vcQD2hd3fSf9UqAtlL4hVzVaGPRQgvs1V8S3e/h3ta0gkRcGI7GvBM=
2020/05/31 20:02:20 [ERROR] <root>: eval: *terraform.EvalSequence, err: Forbidden: Forbidden
status code: 403, request id: 2AB56118732D7165, host id: 5sM6IwjkufaDg1bt5Swh5vcQD2hd3fSf9UqAtlL4hVzVaGPRQgvs1V8S3e/h3ta0gkRcGI7GvBM=
2020/05/31 20:02:20 [TRACE] [walkRefresh] Exiting eval tree: aws_s3_bucket_object.xxxxxx
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx": visit complete
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx: dynamic subgraph encountered errors
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx": visit complete
We have tried reverting the code and the tfstate file to a known working version, and also deleted the local tfstate file. Still the same error.
The S3 bucket policy is as below:
{
  "Sid": "DelegateS3Access",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::xxxxxx:role/Administrator"
  },
  "Action": [
    "s3:ListBucket",
    "s3:GetObject",
    "s3:GetObjectTagging"
  ],
  "Resource": [
    "arn:aws:s3:::xxxxxx/*",
    "arn:aws:s3:::xxxxxx"
  ]
}
The same role is being assumed by Terraform for execution, and it still fails. I have also tried emptying the bucket policy, without any success. I understand it is something to do with the bucket policy itself, but I am not sure how to fix it.
Any pointers to fix this issue are highly appreciated.
One thing to check is who you are (from an AWS API perspective) before running Terraform:
aws sts get-caller-identity
If the output is like this, then you are authenticated as an IAM User, who will not have access to the bucket since the bucket policy grants access to an IAM Role and not an IAM User:
{
  "UserId": "AIDASAMPLEUSERID",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:user/DevAdmin"
}
In that case, you'll need to configure the AWS CLI to assume arn:aws:iam::xxxxxx:role/Administrator, for example with a profile like this in ~/.aws/config:
[profile administrator]
role_arn = arn:aws:iam::xxxxxx:role/Administrator
source_profile = user1
Read more on that process here:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html
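With that profile in place (and a valid user1 profile as its source_profile), you can confirm the assumed identity and run Terraform under it; the profile name administrator follows the example above:
# Should now return the assumed-role ARN
aws sts get-caller-identity --profile administrator
# Run Terraform with the same profile
AWS_PROFILE=administrator terraform plan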
If get-caller-identity returns something like this, then you are assuming the IAM Role and the issue is likely with the actions in the Bucket policy:
{
  "UserId": "AIDASAMPLEUSERID",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:assumed-role/Administrator/role-session-name"
}
According to the Backend type: S3 documentation, you also need s3:PutObject:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    }
  ]
}
While I can't see why PutObject would be needed for plan, it is conceivably what is causing this Forbidden error.
You can also look for denied S3 actions in CloudTrail if you have enabled that.
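To reproduce the failure outside of Terraform, you can hit the state object directly with the same credentials; the bucket and key below are the placeholders from the policy example above:
# A 403 here confirms the problem is S3 permissions rather than Terraform itself
aws s3api head-object --bucket mybucket --key path/to/my/key
aws s3api get-object --bucket mybucket --key path/to/my/key /tmp/terraform.tfstate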
The issue is fixed now. We had performed an S3 copy prior to this, which copied all the S3 objects from account A to account B. The problem is that the copy command carries objects over with the source account's permissions, which left the current role unable to access the newly copied objects, resulting in the 403 Forbidden error.
We cleared all the objects in this bucket and ran the aws s3 sync command instead of cp, which fixed the issue for us. Thank you Alain for the elaborate explanation; it helped point us to the right issue.
Steps followed:
1. Back up all the S3 objects.
2. Empty the bucket.
3. Run terraform plan.
4. Once the changes are made to the bucket, run the aws s3 sync command (sketched below).
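A sketch of that final step, assuming the backup lives in another bucket (both bucket names are placeholders); granting the bucket owner full control avoids recreating the original ownership problem on the copied objects:
# Restore the backed-up objects so the destination account's role can read them
aws s3 sync s3://my-backup-bucket s3://my-state-bucket --acl bucket-owner-full-control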

AWS CloudWatch logstream data completely missing

I am looking for help in troubleshooting a curious CloudWatch issue.
I have a CloudWatch log-shipping agent set up for /var/log/syslog on Ubuntu.
The aws log is full of happy INFO messages: no errors, no exceptions.
The log stream in the CloudWatch console is completely empty: no data whatsoever. I tried manually creating the log stream using the instance name (matching the name from the log), with no improvement.
The instance is using an IAM role with the following policy, which works perfectly fine for other instances:
{
  "Sid": "VisualEditor0",
  "Effect": "Allow",
  "Action": [
    "cloudwatch:PutMetricData",
    "ec2:DescribeTags",
    "cloudwatch:GetMetricStatistics",
    "cloudwatch:ListMetrics",
    "logs:CreateLogGroup",
    "logs:CreateLogStream",
    "logs:PutLogEvents",
    "logs:DescribeLogStreams"
  ],
  "Resource": "*"
}
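One way to check whether any events are reaching CloudWatch Logs at all is to query the log group from the CLI; the group and stream names below are placeholders for whatever the agent configuration actually uses:
# List the streams the agent should be writing to
aws logs describe-log-streams --log-group-name /var/log/syslog
# Pull the most recent events from a specific stream
aws logs get-log-events --log-group-name /var/log/syslog --log-stream-name <instance-name> --limit 10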

Cloudwatch monitoring script errors when attempting to send metrics for ASG

I am trying to use the CloudWatch monitoring script to send metrics for memory, disk and swap utilization from an EC2 instance to CloudWatch. In order to run the script I need to provide it with AWS credentials or an IAM role. When attempting to use an IAM role, I get the below error:
[ec2-user@ip-x-x-x-x aws-scripts-mon]$ /home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --auto-scaling=only --verbose --aws-iam-role=ACCT-CloudWatch-service-role
Using AWS credentials file </home/ec2-user/aws-scripts-mon/awscreds.conf>
WARNING: Failed to call EC2 to obtain Auto Scaling group name. HTTP Status Code: 0. Error Message: Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
WARNING: The Auto Scaling metrics will not be reported this time.
[ec2-user@ip-x-x-x-x aws-scripts-mon]$
This is what my IAM policy looks like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingNotificationTypes",
        "autoscaling:DescribeAutoScalingInstances",
        "ec2:DescribeTags",
        "autoscaling:DescribePolicies",
        "logs:DescribeLogStreams",
        "autoscaling:DescribeTags",
        "autoscaling:DescribeLoadBalancers",
        "autoscaling:*",
        "ssm:GetParameter",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "ssm:PutParameter",
        "logs:CreateLogStream",
        "cloudwatch:*",
        "autoscaling:DescribeAutoScalingGroups",
        "ec2:*",
        "kms:*",
        "autoscaling:DescribeLoadBalancerTargetGroups"
      ],
      "Resource": "*"
    }
  ]
}
What could I be missing?
The message states that the problem comes from the role it tries to use:
Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
Modify this part of your command to --aws-iam-role=ACCT-service-role (I am assuming that this role is the one configured correctly).
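In other words, the corrected invocation would look like this (same flags as before, only the role name changes):
/home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail \
  --auto-scaling=only --verbose --aws-iam-role=ACCT-service-role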

403 access denied error for files uploaded to s3 using aws command line tool

I'm trying to upload images to s3 using the aws command line tool. I keep getting a 403 access denied error.
I think the --acl flag from http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html should fix this, but none of the options I've tried have helped.
I have a django app running which uploads to s3 and I can access those images fine.
Did you set any IAM or bucket policy to allow uploads to your bucket?
There are (at least) 3 ways to enable a user to access a bucket:
1. Specify an ACL at the bucket level (go to the bucket page, select your bucket and click "Properties"; there you can grant more access).
2. Attach a policy to the bucket itself, e.g. something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn of user or role"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::your bucket name",
        "arn:aws:s3:::your bucket name/*"
      ]
    }
  ]
}
3. Attach a policy to your IAM user, e.g. to give admin rights: go to IAM > Users > your user > Attach policy > AmazonS3FullAccess.
If you want to build your own policy, you can use the AWS Policy Generator.
If you want more specific help, please provide more details (which users should have which permissions on your bucket, etc.).
Hope this helps.
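For example, once one of the above is in place, a basic test upload (bucket name and paths are placeholders); the second command also sets a canned ACL via the --acl flag mentioned in the question:
# Plain upload -- needs s3:PutObject on the destination key
aws s3 cp ./image.jpg s3://your-bucket-name/images/image.jpg
# Same upload with a canned ACL
aws s3 cp ./image.jpg s3://your-bucket-name/images/image.jpg --acl bucket-owner-full-control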