How do I log into AWS before pulling image from ECR? - gitlab-ci

I'm pulling an image into my pipeline from ECR. First I need to authenticate using
eval $(aws ecr get-login --region eu-west-2 --no-include-email | sed 's|https://||')
however, I don't know how to use this within the CI workflow, since it has to run
before the image is pulled:
e2e:
  stage: test
  image: 2xxxxxxxxxxxxx8.dkr.ecr.eu-west-2.amazonaws.com/training-users_db
  services:
    - postgres:12.2-alpine
  variables:
    AWS_ACCOUNT_ID: $AWS_ACCOUNT_ID
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_KEY: $AWS_SECRET_ACCESS_KEY
    POSTGRES_DB: users
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    DATABASE_TEST_URL: postgres://postgres:postgres@postgres:5432/users
    FLASK_ENV: development
    SECRET_KEY: $SECRET_KEY
  script:
    - eval $(aws ecr get-login --region eu-west-2 --no-include-email | sed 's|https://||')
How can I log in to ECR right before the image is pulled?

This is the closest thing I've found to documentation on this:
https://mherman.org/blog/gitlab-ci-private-docker-registry/
But I haven't been able to get it to work as described yet. I still get authentication errors when it tries to pull my image. I have a support ticket open to try and figure it out, but maybe you can get it working?
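Since the job's image: is pulled before before_script or script ever runs, the login cannot happen inside the job itself; the GitLab-documented route is the DOCKER_AUTH_CONFIG CI/CD variable. A minimal sketch, assuming the amazon-ecr-credential-helper (docker-credential-ecr-login) is installed on the runner host and AWS credentials are available to it; the registry host mirrors the one in the job above:

{
  "credHelpers": {
    "2xxxxxxxxxxxxx8.dkr.ecr.eu-west-2.amazonaws.com": "ecr-login"
  }
}

Set that JSON as the value of DOCKER_AUTH_CONFIG (per project under Settings > CI/CD > Variables, or per runner in its environment). A static auths entry built from the get-login output can also work, but the embedded ECR token expires after roughly 12 hours, so the credential helper is the more durable option.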

Related

ECR login fails in gitlab runner

I'm trying to deploy to ECS with a task definition, and I'm using ECR to store my Docker image in AWS. When I try to log in to ECR in GitLab CI/CD with a shared runner, I get errors.
image: docker:19.03.10

services:
  - docker:dind

variables:
  REPOSITORY_URL: <REPOSITORY_URL>
  TASK_DEFINITION_NAME: <Task_Definition>
  CLUSTER_NAME: <CLUSTER_NAME>
  SERVICE_NAME: <SERVICE_NAME>

before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - echo "Building image..."
    - docker build -t $REPOSITORY_URL:latest .
    - echo "Tagging image..."
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
    - echo "Pushing image..."
    - docker push $REPOSITORY_URL:latest
    - docker push $REPOSITORY_URL:$IMAGE_TAG
Error details:
There are two approaches that you can take to access a private registry. Both require setting the CI/CD variable DOCKER_AUTH_CONFIG with appropriate authentication information.
Per-job: To configure one job to access a private registry, add DOCKER_AUTH_CONFIG as a CI/CD variable.
Per-runner: To configure a runner so all its jobs can access a private registry, add DOCKER_AUTH_CONFIG as an environment variable in the runner’s configuration.
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#access-an-image-from-a-private-container-registry
I see the following issues in your config:
- docker login is missing
- without DOCKER_HOST, docker:dind will not work
Please try to follow this tutorial - link; a YouTube video about the tutorial is here.
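A rough sketch of those two fixes applied to the config above (hedged: ECR_REGISTRY is a placeholder for the registry host, i.e. the <account_id>.dkr.ecr.<region>.amazonaws.com part of REPOSITORY_URL, and get-login-password assumes a reasonably recent awscli):

variables:
  REPOSITORY_URL: <REPOSITORY_URL>
  # without DOCKER_HOST the job cannot reach the docker:dind service
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

before_script:
  # install awscli as in the original before_script, then log in so docker push can authenticate
  - aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS --password-stdin "$ECR_REGISTRY"
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"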

Push existing tarball image with kaniko

I want to build a Docker image (tarball) in my GitLab CI pipeline using kaniko, then scan it with trivy and push it to an AWS ECR using kaniko.
Step 1: kaniko build (tarball)
Step 2: trivy scan
Step 3: kaniko push (to AWS ECR!)
Unfortunately I can't find a way to push an existing tarball image with kaniko without rebuilding it.
I also tried crane for the push, but I can't log in because of the missing credHelper.
I don't actually want to do big installations, nor do I want to create a custom image for this.
Is this possible? What would be potential solutions?
Coincidentally, I did exactly this a while ago. Here is how I did it:
docker:build:
  stage: build
  image:
    name: Kaniko image
    entrypoint: [""]
  script:
    - mkdir tar_images
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --no-push --destination $CI_REGISTRY/<image_name>:<image_tag> --tarPath tar_images/$file_name.tar
  artifacts:
    paths:
      - tar_images
    when: on_success

# scan all built images
# currently bug with grype as docker registry is not public!
docker:scan:
  stage: scan
  image:
    name: trivy
    entrypoint: [""]
  script:
    - mkdir scan_result
    - cd tar_images
    - |
      for tar_image in *.tar;
      do
        [ -e "$tar_image" ] || continue;
        file_name=${tar_image%.*};
        echo $file_name;
        if [ "$vulnerability_scanner" = "trivy" ]; then
          trivy image --timeout 15m --offline-scan --input $tar_image -f json -o ../scan_result/$file_name.json --severity CRITICAL;
        fi
      done
  artifacts:
    paths:
      - scan_result
    expire_in: 1 month

# push all images without detected security issues
docker:push:
  stage: push
  image:
    name: gcr.io/go-containerregistry/crane:debug
    entrypoint: [""]
  rules:
    - if: $UPDATE
  script:
    - cd tar_images
    - |
      for tar_image in *.tar;
      do
        file_name=${tar_image%.*};
        vulnerabilities=`awk -F '[:,]' '/"Vulnerabilities"/ {gsub("[[:blank:]]+", "", $2); print $2}' ../scan_result/$file_name.json`; # find vulnerabilities in json file
        if ! [ -z "$vulnerabilities" ]; then # if vulnerabilities found in image
          echo "There are security issues with the image $img.Dockerfile. Image is not pushed to registry!";
        else # push image
          crane auth login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY;
          crane push $tar_image $CI_REGISTRY_IMAGE/<image_name>:<image_tag>;
        fi
      done
What happens here is that in the first job the images are built using kaniko. They are stored as tar files and made accessible to the next job via artifacts. In the next job they are scanned using trivy and the scan results are stored as artifacts. Then the scan reports are analyzed, and if no vulnerabilities were detected, the images are pushed using crane.
The code above probably does not work out of the box, as I copied it out of a bigger YAML file.
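To aim the push job at ECR instead of the GitLab registry, one hedged variation (the ECR_REGISTRY and ECR_PASSWORD variables are assumptions: the registry host and the output of aws ecr get-login-password, produced in an earlier job or a project variable, since the crane image ships without the AWS CLI) is to treat the ECR token as an ordinary username/password pair:

docker:push:
  stage: push
  image:
    name: gcr.io/go-containerregistry/crane:debug
    entrypoint: [""]
  script:
    - cd tar_images
    # ECR_REGISTRY = <account_id>.dkr.ecr.<region>.amazonaws.com (assumption)
    # ECR_PASSWORD = output of `aws ecr get-login-password` (assumption)
    - crane auth login -u AWS -p "$ECR_PASSWORD" "$ECR_REGISTRY"
    - crane push <image_name>.tar "$ECR_REGISTRY/<repository>:<image_tag>"

The :debug variant of the crane image includes a shell, which is why a multi-line script section works there.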

Bitbucket Pipelines EKS container image change

I need to deploy an ECR image to EKS via Bitbucket Pipelines.
So I have created the step below, but I am not sure about the correct KUBECTL_COMMAND to change (set) the deployment image to the new one in a namespace of the EKS cluster:
- step:
    name: 'Deployment to Production'
    deployment: Production
    trigger: 'manual'
    script:
      - pipe: atlassian/aws-eks-kubectl-run:2.2.0
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          CLUSTER_NAME: 'xxx-zaferu-dev'
          KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest'
      - echo "Deployment has been finished successfully..."
So I am looking for the correct command for this step!
If this is not the best way to do the CI/CD deployment, I am planning to use a basic command to change the container image:
image: python:3.8
pipelines:
  default:
    - step:
        name: Update EKS deployment
        script:
          - aws eks update-kubeconfig --name <cluster-name>
          - kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<tag> -n <namespace>
          - aws eks describe-cluster --name <cluster-name>
I tried to use:
KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest'
but it gives an error:
INFO: Successfully updated the kube config.
Error from server (NotFound): deployments.apps "xxx-app" not found
Sorry, I found my bug: a missing namespace :)
- kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<tag> -n <namespace>
I forgot to add -n, and then I realized it.
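Assuming the pipe passes the command string straight through to kubectl, the fixed step from the question would look roughly like this (the namespace is a placeholder, and the deployment name must match what actually exists in that namespace):

- pipe: atlassian/aws-eks-kubectl-run:2.2.0
  variables:
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
    CLUSTER_NAME: 'xxx-zaferu-dev'
    # -n <namespace> is the missing piece; <namespace> is a placeholder
    KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest -n <namespace>'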

.gitlab-ci.yml pipeline runs only on one branch

I have a .gitlab-ci.yml file. When I push to the stage branch it runs the stage commands (only: stage), but when I merge to main it still runs the "only: stage" job.
What am I missing?
variables:
  DOCKER_REGISTRY: 036470204880.dkr.ecr.us-east-1.amazonaws.com
  AWS_DEFAULT_REGION: us-east-1
  APP_NAME: apiv6
  APP_NAME_STAGE: apiv6-test
  DOCKER_HOST: tcp://docker:2375

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage
Itamar, I believe this is a YAML limitation. See this GitLab issue as reference.
The problem is that you have two jobs with the same name. But when the YAML file is parsed, you're actually overriding the first job.
Also, from the official GitLab documentation:
Use unique names for your jobs. If multiple jobs have the same name, only one is added to the pipeline, and it’s difficult to predict which one is chosen
Please try renaming one of your jobs and test again.
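A minimal sketch of that fix, giving each job a unique name so both end up in the pipeline (the names below are arbitrary):

publish-main:
  # same image, services, before_script and script as the first job above
  only:
    - main

publish-stage:
  # same image, services, before_script and script as the second job above
  only:
    - stage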

Running parallel builds in codebuild

I am trying to run commands in parallel in CodeBuild using the batch build-list feature; however, I cannot get it to work as intended. The commands are executed sequentially, not in parallel. Below is the buildspec file.
version: 0.2

phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_URI
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
  build:
    commands:
      - echo "hi world" && sleep 10
      - echo "hey" && sleep 20

batch:
  fast-fail: false
  build-list:
    - identifier: build1
    - identifier: build2
Can anyone please guide me on what I am doing wrong or what I am missing here?
P.S.: this is just a sample buildspec.
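For context (hedged, since no answer is included here): commands inside a single build phase always run sequentially; a batch build-list instead fans out into separate builds that run in parallel, each identified by its identifier and usually differentiated by an env or buildspec override, and the project has to be started as a batch build (for example with aws codebuild start-build-batch) rather than a regular build. A sketch of what that could look like, where BUILD_TARGET is a made-up variable the shared build phase branches on:

version: 0.2

batch:
  fast-fail: false
  build-list:
    - identifier: build1
      env:
        variables:
          BUILD_TARGET: "hi world"   # hypothetical per-build variable
    - identifier: build2
      env:
        variables:
          BUILD_TARGET: "hey"

phases:
  build:
    commands:
      - echo "$BUILD_TARGET" && sleep 10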