I created Spinnaker from the Helm chart (https://github.com/helm/charts/tree/master/stable/spinnaker).
Now I want to add ECR to my Spinnaker. I connect to Halyard:
kubectl exec -it -n spinnaker spinnaker-spinnaker-halyard-0 bash
Then I run this command:
hal config provider docker-registry account add ecr-registry --repositories REPOSITORY_NAME --address https://ID.dkr.ecr.REGION.amazonaws.com --username AWS --password-command "aws --region REGION ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'"
But the AWS CLI is not installed on spinnaker-spinnaker-halyard-0, so the ECR registry cannot be added.
Any ideas?
The AWS CLI is not installed in the Halyard container by default, I believe. If required, you can access the container as root and install it. However, if you are configuring ECR with Spinnaker using --password-command, you won't need the AWS CLI installed in the Halyard pod; as far as I know the password command is executed by Clouddriver, not Halyard. Hope this helps.
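If you want to sanity-check the password command itself, you can run it wherever the AWS CLI is actually available (this is the same command as in the question; REGION is a placeholder). If it prints a long token, Spinnaker will get a valid password from it:
aws --region REGION ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'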
I'm trying to deploy to ECS with a task definition, and I'm using ECR to store my Docker image in AWS. When I try to log in to ECR from GitLab CI/CD on a shared runner, I get errors.
image: docker:19.03.10
services:
  - docker:dind
variables:
  REPOSITORY_URL: <REPOSITORY_URL>
  TASK_DEFINITION_NAME: <Task_Definition>
  CLUSTER_NAME: <CLUSTER_NAME>
  SERVICE_NAME: <SERVICE_NAME>
before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - echo "Building image..."
    - docker build -t $REPOSITORY_URL:latest .
    - echo "Tagging image..."
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
    - echo "Pushing image..."
    - docker push $REPOSITORY_URL:latest
    - docker push $REPOSITORY_URL:$IMAGE_TAG
Error details:
There are two approaches that you can take to access a private registry. Both require setting the CI/CD variable DOCKER_AUTH_CONFIG with appropriate authentication information.
Per-job: To configure one job to access a private registry, add DOCKER_AUTH_CONFIG as a CI/CD variable.
Per-runner: To configure a runner so all its jobs can access a private registry, add DOCKER_AUTH_CONFIG as an environment variable in the runner’s configuration.
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#access-an-image-from-a-private-container-registry
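As a rough sketch of what DOCKER_AUTH_CONFIG would contain for ECR (ID and REGION are placeholders; the "auth" field is base64 of "user:password", and note that ECR tokens expire after roughly 12 hours, so the value has to be refreshed). This assumes an AWS CLI recent enough to provide ecr get-login-password:
# build the JSON value GitLab expects in DOCKER_AUTH_CONFIG
TOKEN=$(aws ecr get-login-password --region REGION)
printf '{"auths":{"ID.dkr.ecr.REGION.amazonaws.com":{"auth":"%s"}}}\n' "$(printf 'AWS:%s' "$TOKEN" | base64 | tr -d '\n')"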
I see the following issues in your config:
- docker login is missing
- without DOCKER_HOST, docker:dind will not work
Please try to follow this tutorial (link); a YouTube video about the same tutorial is here. A sketch of the missing pieces follows below.
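A minimal sketch of those two fixes in the .gitlab-ci.yml from the question (assumptions: the docker:dind service is reachable at tcp://docker:2375 with TLS disabled, and the pip-installed awscli is recent enough to have ecr get-login-password; ID.dkr.ecr.REGION.amazonaws.com stands for the registry part of your REPOSITORY_URL):
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""   # assumption: disable dind TLS so plain port 2375 is used
before_script:
  # log in to ECR before building/pushing; replaces the deprecated $(aws ecr get-login ...) line
  - aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS --password-stdin ID.dkr.ecr.REGION.amazonaws.com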
I have a .gitlab-ci.yml file. When I push to the stage branch it runs the stage job (only: stage), but when I merge to main it still runs only the "stage" job. What am I missing?
variables:
  DOCKER_REGISTRY: 036470204880.dkr.ecr.us-east-1.amazonaws.com
  AWS_DEFAULT_REGION: us-east-1
  APP_NAME: apiv6
  APP_NAME_STAGE: apiv6-test
  DOCKER_HOST: tcp://docker:2375

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage
Itamar, I believe this is a YAML limitation. See this GitLab issue as a reference.
The problem is that you have two jobs with the same name, so when the YAML file is parsed the second definition overrides the first.
Also, from the official GitLab documentation:
Use unique names for your jobs. If multiple jobs have the same name, only one is added to the pipeline, and it’s difficult to predict which one is chosen
Please, try renaming one of your jobs and test it again.
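A minimal sketch of the rename (the job names publish_prod and publish_stage are just illustrative; each body stays exactly as in your current "publish" jobs):
publish_prod:
  # ...same image/services/before_script/script as your current main job...
  only:
    - main

publish_stage:
  # ...same image/services/before_script/script as your current stage job...
  only:
    - stage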
I was setting up my new Mac for my EKS environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the command block below.
My cluster uses the v1alpha1 client auth API version, so I basically wanted to use the same one on my Mac.
I tried the latest version of kubectl (1.23.0) as well and still got the same error. I am on aws-iam-authenticator 0.5.5 and was not able to download a lower version.
Can someone help me to resolve it?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
Thanks and Regards,
Saravana
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed its behaviour in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try to change the Kubernetes context you're using to v1beta1
by editing your kubeconfig file (usually in ~/.kube/config) from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise switch back to aws-iam-authenticator 0.5.3; you might need to build it from source if you're on the M1 architecture, as there's no darwin-arm64 binary built for it.
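For reference, the relevant part of ~/.kube/config looks roughly like this (the user and cluster names are illustrative placeholders):
users:
- name: my-eks-user   # placeholder
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # was client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "my-cluster"]   # placeholder cluster name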
This worked for me on an M1 chip:
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
I fixed the issue with the command below (a sufficiently recent AWS CLI rewrites the cluster's kubeconfig entry to use the v1beta1 exec API version):
aws eks update-kubeconfig --name mycluster
I also solved this by updating the apiVersion value in my kube config file (~/.kube/config).
from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1
Also make sure the AWS CLI version is up-to-date. Otherwise, AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
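Since the original question is about a Mac, the macOS equivalent of that update (per the AWS CLI v2 install instructions) would be roughly:
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
aws --version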
This might be helpful for anyone hitting this issue with GitHub Actions.
In my case I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to every step that uses kodermax/kubectl-aws-eks, to pin them to fixed versions.
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
This question is a duplicate of: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" (CircleCI).
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1
Sometimes this can happen if the kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me (note that this also removes your kubeconfig at ~/.kube/config, so back it up first).
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube
I'm trying to remove the objects (empty the bucket) and then copy new ones into an AWS S3 bucket:
aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive
aws s3 cp ./ s3://BUCKET_NAME/ --region us-east-2 --recursive
The first command fails with the following error:
An error occurred (InvalidRequest) when calling the ListObjects
operation: You are attempting to operate on a bucket in a region that
requires Signature Version 4. You can fix this issue by explicitly
providing the correct region location using the --region argument, the
AWS_DEFAULT_REGION environment variable, or the region variable in the
AWS CLI configuration file. You can get the bucket's location by
running "aws s3api get-bucket-location --bucket BUCKET". Completed 1
part(s) with ... file(s) remaining
Well, the error prompt is self-explanatory, but the problem is that I've already applied the suggested solution (I've added the --region argument) and I'm completely sure that it is the correct region (I got the region the same way the error message suggests).
Now, to make things even more interesting, the error happens in a GitLab CI environment (let's just say some server). But just before this error occurs, the exact same command runs against other buckets and works. It's worth mentioning that those other buckets are in different regions.
Now, to top it all off, I can execute the command on my personal computer with the same credentials as on the CI server. So to summarize:
server$ aws s3 rm s3://OTHER_BUCKET --region us-west-2 --recursive <== works
server$ aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive <== fails
my_pc$ aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive <== works
Does anyone have any pointers what might the problem be?
For anyone else facing the same problem: make sure your AWS CLI is up to date!
server$ aws --version
aws-cli/1.10.52 Python/2.7.14 Linux/4.13.9-coreos botocore/1.4.42
my_pc$ aws --version
aws-cli/1.14.58 Python/3.6.5 Linux/4.13.0-38-generic botocore/1.9.11
Once I updated the server's aws cli tool, everything worked. Now my server is:
server$ aws --version
aws-cli/1.14.49 Python/2.7.14 Linux/4.13.5-coreos-r2 botocore/1.9.2
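If your aws is the pip-installed v1 CLI (which the Python version in the output suggests it may be), the update itself is typically just:
pip install --upgrade awscli
aws --version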
I am getting an exception (Determine Target Server Group): Path parameter "app" value must not be null. when enabling a server group. Can anyone tell me what I could be doing wrong? I can enable the server group manually, but when I put it in a stage it fails with this error.
Please upgrade to Spinnaker version 1.17. That version solves issues with the Enable Server Group stage.
To upgrade Spinnaker:
1. Get the Halyard pod name:
export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
2. Access the Halyard pod with bash:
kubectl -n spinnaker exec -it ${HALYARD} /bin/bash
3. Obtain the version by running a Halyard command:
hal version bom
4. Set the version you want to use; refer to the Versions releases page:
export UPGRADE_VERSION=1.17.6
hal config version edit --version $UPGRADE_VERSION
5. Deploy and apply the new version with hal:
hal deploy apply
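Optionally, you can watch the new pods roll out afterwards (same spinnaker namespace as above):
kubectl -n spinnaker get pods -w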