aws Decrypted Variables Error Message: parameter does not exist: JWT_SECRET - amazon-s3

I am new to AWS. I was trying to create a pipeline, but it throws this error once it builds:
[Container] 2020/05/23 04:32:56 Phase context status code: Decrypted Variables Error Message: parameter does not exist: JWT_SECRET
even though the token was stored by running this command:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString
I tried to fix that by adding this line to the buildspec.yml post-build commands, but it still does not fix the problem:
- kubectl set env deployment/simple-jwt-api JWT_SECRET=$JWT_SECRET
My buildspec.yml contains this added section to pass my JWT secret to the app:
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Check my GitHub repos for more details about the code.
Also, once I run this under cmd to test the API endpoints, kubectl get services simple-jwt-api -o wide, I get this error:
Error from server (NotFound): services "simple-jwt-api" not found
Well, it is obvious since the pipeline failed to build. How can I fix it?

In my case I got this error because I had created my stack in a different region than the cluster, so whenever it searched for the variable it did not find it. So, be careful to point to the same region in every creation action :).

The best solution I found was to add a region tag when declaring the env variables.
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString --region <your-cluster-region>

I also encountered this same issue.
Changing the kubectl version in the buildspec.yml file worked for me:
- curl -LO https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl
# Download the kubectl checksum file (published under the same /release/ path as the binary)
- curl -LO "https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl.sha256"
Note that <YOUR_KUBERNETES_VERSION> must be the same as what you have on your created cluster dashboard.
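The answer above downloads the checksum file but never compares it against the binary. The following is an added example, not part of the answer or of any official buildspec, that pins kubectl to the cluster's version and refuses to install a binary whose SHA-256 does not match the digest dl.k8s.io publishes next to it. It assumes KUBECTL_VERSION (e.g. v1.23.6) is set in the buildspec env block; the version, paths and function names are illustrative.

```shell
#!/bin/sh
# Added example, not from the original answer: pin kubectl to the cluster's
# Kubernetes version and verify the download against the published SHA-256
# before installing. A pinned client stays inside the supported version skew;
# the digest check catches a corrupted or substituted download.
set -eu

# Compare the SHA-256 of file $1 with the digest stored in file $2.
# dl.k8s.io's .sha256 files hold a bare 64-hex-character digest.
verify_sha256() {
  file="$1"
  sumfile="$2"
  expected=$(tr -d '[:space:]' < "$sumfile")
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$expected" != "$actual" ]; then
    echo "checksum mismatch for $file: expected $expected, got $actual" >&2
    return 1
  fi
}

# Download the pinned release and its checksum, verify, then install.
# KUBECTL_VERSION is assumed to come from the buildspec env block
# (e.g. KUBECTL_VERSION=v1.23.6). Needs network access; only runs when called.
install_kubectl() {
  version="${1:-${KUBECTL_VERSION:?set KUBECTL_VERSION, e.g. v1.23.6}}"
  base="https://dl.k8s.io/release/${version}/bin/linux/amd64"
  curl -fsSLO "${base}/kubectl"
  curl -fsSLO "${base}/kubectl.sha256"
  verify_sha256 kubectl kubectl.sha256
  install -m 0755 kubectl /usr/local/bin/kubectl
  kubectl version --client
}

# Usage in a buildspec install phase:
#   - KUBECTL_VERSION=v1.23.6 sh -c '. ./install_kubectl.sh && install_kubectl'
```

If the digest does not match, the build stops before the binary is ever placed on PATH, which is the behaviour you want in a pipeline that goes on to run kubectl against a production cluster.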

Related

aws codebuild errors with unknown flag: --platform

I'm trying to build an image on AWS CodeBuild; the buildspec file contains the following command:
- docker buildx build --platform=linux/amd64 -t $IMAGE_REPO_NAME:$IMAGE_TAG .
This errors out with unknown flag: --platform. I think it might be because of a Docker version or something like that, but I am not sure how to fix it.
If I don't provide the buildx and build flag, then when I try to run the container on ECS I get the error
standard_init_linux.go:228: exec user process caused: exec format error
Does anyone know how to fix this?

kubectl versions Error: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1

I was setting up my new Mac for my EKS environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the command block below.
My cluster uses the v1alpha1 client auth API version, so basically I wanted to use the same one on my Mac as well.
I tried with the latest version (1.23.0) of kubectl as well, still the same error. Whereas when I tried with aws-iam-authenticator (version 0.5.5), I was not able to download a lower version.
Can someone help me to resolve it?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
Thanks and Regards,
Saravana
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed the way it behaves in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try to change the K8s context you're using to v1beta1
by checking your kubeconfig file (usually in ~/.kube/config) and changing client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise switch back to aws-iam-authenticator 0.5.3; you might need to build it from source if you're using the M1 architecture, as there's no darwin-arm64 binary built for it.
This worked for me using an M1 chip:
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
I fixed the issue with the command below:
aws eks update-kubeconfig --name mycluster
I also solved this by updating the apiVersion value in my kube config file (~/.kube/config) from
client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Also make sure the AWS CLI version is up to date. Otherwise, AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
This might be helpful for fixing this issue for those who are using GitHub Actions.
In my situation I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step that uses kodermax/kubectl-aws-eks to keep them at fixed versions:
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
This question is a duplicate of error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" CircleCI
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1
Sometimes this can happen if the kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me:
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube
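Several of the answers above edit ~/.kube/config by hand or with a blanket sed over the whole file. The following is an added example, not taken from any of the answers, that rewrites only the exec plugin's apiVersion line, keeps a backup, and restores mode 0600 on the result because the file carries cluster credentials. The function names are illustrative; the kubeconfig path defaults to ~/.kube/config.

```shell
#!/bin/sh
# Added example, not from the answers above: migrate the exec credential
# plugin stanza in a kubeconfig from v1alpha1 to v1beta1, touching only the
# exec apiVersion line. A plain s/v1alpha1/v1beta1/ over the file would also
# rewrite any other field that happens to contain the string. Works with both
# BSD (macOS) and GNU sed by writing to a temp file instead of using -i.
set -eu

# Print how many lines in kubeconfig $1 still declare the v1alpha1 exec API.
count_alpha_exec() {
  grep -c 'apiVersion:[[:space:]]*client\.authentication\.k8s\.io/v1alpha1[[:space:]]*$' "$1" || true
}

# Rewrite those lines to v1beta1, keeping a backup at <file>.bak.
migrate_exec_api_version() {
  cfg="${1:-$HOME/.kube/config}"
  if [ ! -r "$cfg" ]; then
    echo "$cfg: not found or not readable" >&2
    return 1
  fi
  before=$(count_alpha_exec "$cfg")
  if [ "$before" -eq 0 ]; then
    echo "$cfg: no v1alpha1 exec stanza found, nothing to do"
    return 0
  fi
  cp -p "$cfg" "$cfg.bak"
  sed 's|^\([[:space:]]*apiVersion:[[:space:]]*\)client\.authentication\.k8s\.io/v1alpha1[[:space:]]*$|\1client.authentication.k8s.io/v1beta1|' \
    "$cfg" > "$cfg.tmp"
  # kubeconfig holds credentials; do not let the umask widen its permissions.
  chmod 0600 "$cfg.tmp"
  mv "$cfg.tmp" "$cfg"
  echo "$cfg: rewrote $before exec stanza(s) to v1beta1 (backup in $cfg.bak)"
}

# Usage: migrate_exec_api_version            # edits ~/.kube/config
#        migrate_exec_api_version ./kubeconfig
```

Changing the apiVersion only helps if the plugin named in the stanza actually speaks v1beta1, which, as the answers above note, means aws-iam-authenticator 0.5.4 or later or a current AWS CLI. If the stanza was generated by the AWS CLI, running aws eks update-kubeconfig --name <cluster> regenerates it with the version the installed CLI supports and makes the edit unnecessary.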

How to use podman's ssh build flag?

I have been using the docker build --ssh flag to give builds access to my keys from ssh-agent.
When I try the same thing with podman, it does not work. I am working on macOS Monterey 12.0.1, Intel chip. I have also reproduced this on Ubuntu and WSL2.
❯ podman --version
podman version 3.4.4
This is an example Dockerfile:
FROM python:3.10
RUN mkdir -p -m 0600 ~/.ssh \
    && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git
When I run DOCKER_BUILDKIT=1 docker build --ssh default . it works, i.e. the build succeeds, the repo is cloned and the SSH key is not baked into the image.
When I run podman build --ssh default . the build fails with:
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Error: error building at STEP "RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git": error while running runtime: exit status 128
I have just begun playing around with podman. Looking at the docs, that flag does appear to be supported. I have tried playing around with the format a little, specifying the ID directly for example, but no variation of specifying the flag or the mount has worked so far. Is there something about how podman works that I may be missing that explains this?
Adding this line as suggested in the comments:
RUN --mount=type=ssh ssh-add -l
results in this error:
STEP 4/5: RUN --mount=type=ssh ssh-add -l
Could not open a connection to your authentication agent.
Error: error building at STEP "RUN --mount=type=ssh ssh-add -l": error while running runtime: exit status 2
Edit:
I believe this may have something to do with this issue in buildah. A fix has been merged but has not been released yet as far as I can see.
The error while running runtime: exit status 2 does not appear to me to be necessarily related to SSH or --ssh for podman build. It's hard to say really, and I've successfully used --ssh like you are trying to do, with some minor differences that I can't relate to the error.
I am also not sure that running ssh-add as part of building the container is what you really meant to do. If you want it to talk to an agent, you need two environment variables exported from the environment in which you run ssh-add; these define where to find the agent to talk to and are as follows:
SSH_AUTH_SOCK, specifying the path to a socket file that a program uses to communicate with the agent
SSH_AGENT_PID, specifying the PID of the agent
Again, without these two variables present in the set of exported environment variables, the agent is not discoverable and might as well not exist at all, so ssh-add will fail.
Since your agent is probably running as part of the set of processes to which your podman build also belongs, at the minimum the PID denoted by SSH_AGENT_PID should be valid in that namespace (meaning it's normally invalid in the set of processes that container building is isolated to, so defining the variable as part of building the container would be a mistake). Similar story with SSH_AUTH_SOCK: the path to the socket file dumped by starting the agent program would not normally refer to a file that exists in the mount namespace of the container being built.
Now, you can run both the agent and ssh-add as part of building a container, but ssh-add reads keys from ~/.ssh, and if you had key files there as part of the container image being built you wouldn't need --ssh in the first place, would you?
The value of --ssh lies in allowing you to transfer your authority to talk to remote services, defined through your keys on the host, to the otherwise very isolated container building procedure, through use of nothing else but an SSH agent designed for this very purpose. That removes the need to do things like copying key files into the container. They (keys) should also normally not be part of the built container, especially if they were only to be used during building. The agent, on the other hand, runs on the host, securely encapsulates the keys you add to it, and since the host is where you'd have your keys, that's where you're supposed to run ssh-add to add them to the agent.
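The answer above explains that --ssh forwards the host's agent, reachable through SSH_AUTH_SOCK, into the RUN --mount=type=ssh step, and that the agent must hold keys before the build starts. The following is an added example, not part of the question or answer, of a preflight check run on the host before podman build. It fails fast with a specific message for each of the three ways the forwarding can silently break: the variable is unset, it points at a stale path that is no longer a socket, or the agent is running but empty. The function names and the build context path are illustrative.

```shell
#!/bin/sh
# Added example, not from the original post: preflight checks on the HOST
# before `podman build --ssh default`. The --ssh flag hands the host agent
# socket named by SSH_AUTH_SOCK to the RUN --mount=type=ssh step; if that
# variable is unset, points at a dead socket, or the agent holds no keys,
# the clone inside the build fails with "Permission denied (publickey)".
set -eu

# Return 0 only if SSH_AUTH_SOCK names a live agent socket with >= 1 identity.
agent_ready() {
  sock="${SSH_AUTH_SOCK:-}"
  if [ -z "$sock" ]; then
    echo "SSH_AUTH_SOCK is not set; start an agent with: eval \"\$(ssh-agent -s)\"" >&2
    return 1
  fi
  if [ ! -S "$sock" ]; then
    echo "SSH_AUTH_SOCK=$sock is not a socket (stale agent from an old shell?)" >&2
    return 1
  fi
  if ! command -v ssh-add >/dev/null 2>&1; then
    echo "ssh-add not found on PATH; install the OpenSSH client" >&2
    return 1
  fi
  # ssh-add -l exits 1 when the agent has no identities and 2 when it
  # cannot reach the agent at all.
  if ! ssh-add -l >/dev/null 2>&1; then
    echo "agent at $sock has no identities or is unreachable; run: ssh-add ~/.ssh/<key>" >&2
    return 1
  fi
}

# Run the rootless build only once the agent is usable. The Containerfile in
# the context is expected to use RUN --mount=type=ssh for the step that
# needs the key, so the key itself never enters a layer.
build_with_ssh() {
  ctx="${1:-.}"
  agent_ready || return 1
  podman build --ssh default "$ctx"
}

# Usage: build_with_ssh .          # or: build_with_ssh ./path/to/context
```

Running ssh-add -l on the host, rather than inside a RUN step as the question tried, is the right place for the check: inside the build the agent is only visible through the mount, and the error you get there tells you nothing about which of the three host-side conditions is missing.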

podman CentOS 8 not starting container as non-root user

I am trying to start a busybox container as a non-root user on a CentOS 8 server, but it gives the message below.
What is the correct way to start the container as a non-root user?
podman run -it --name busy docker.io/library/busybox sh
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob bdbbaa22dec6 done
Copying config 6d5fcfe5ff done
Writing manifest to image destination
Storing signatures
ERRO[0003] Error pulling image ref //busybox:latest: Error committing the finished image: error adding layer with blob "sha256:bdbbaa22dec6b7fe23106d2c1b1f43d9598cd8fc33706cc27c1d938ecd5bffc7": Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Failed
Error: unable to pull docker.io/library/busybox: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:bdbbaa22dec6b7fe23106d2c1b1f43d9598cd8fc33706cc27c1d938ecd5bffc7": Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Yes, the command you run is correct. On my Fedora 31 system it works just fine.
[testuser@fedora31 ~]$ podman run -it --name busy docker.io/library/busybox sh
Trying to pull docker.io/library/busybox...
Getting image source signatures
Copying blob bdbbaa22dec6 done
Copying config 6d5fcfe5ff done
Writing manifest to image destination
Storing signatures
/ # exit
[testuser@fedora31 ~]$ podman --version
podman version 1.8.0
[testuser@fedora31 ~]$
The flag --rm is also often useful.
It seems the error you get is related to the UID mapping.
Here is some information regarding running "rootless" podman:
https://github.com/containers/libpod/blob/master/docs/tutorials/rootless_tutorial.md
What also might be interesting:
"Does not work on NFS or parallel filesystem homedirs"
Quote from
https://github.com/containers/libpod/blob/master/rootless.md
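The error in the question ("there might not be enough IDs available in the namespace (requested 65534:65534 for /home)") means the layer contains a file owned by UID/GID 65534 and the user namespace podman built for the rootless user has no subordinate ID range covering it. The following is an added example, not taken from the answer or from podman, that parses the /etc/subuid and /etc/subgid format (name:first_id:count) and reports whether the user's ranges are big enough. The 65536 threshold is the per-user allocation the rootless tutorial linked above recommends; the function names are illustrative.

```shell
#!/bin/sh
# Added example, not from the original answer: check the subordinate UID and
# GID ranges that rootless podman maps into its user namespace. With no
# /etc/subuid entry for the user, or one too small to contain 65534, the
# lchown during layer extraction fails exactly as shown in the question.
set -eu

# Sum the ID counts of every range assigned to user $2 in subuid-style
# file $1. Each line is name:first_id:count; a user may have several.
subid_total() {
  if [ ! -r "$1" ]; then
    echo 0
    return 0
  fi
  awk -F: -v u="$2" '$1 == u { total += $3 } END { print total + 0 }' "$1"
}

# Return 0 if user $2 has at least $3 (default 65536) IDs in file $1.
check_subid_file() {
  f="$1"
  user="$2"
  need="${3:-65536}"
  total=$(subid_total "$f" "$user")
  if [ "$total" -lt "$need" ]; then
    echo "$f: $user has $total subordinate IDs, need at least $need" >&2
    return 1
  fi
}

# Check both files for the current (or given) user. After fixing the ranges
# (for example with usermod --add-subuids/--add-subgids), run
# `podman system migrate` so podman rebuilds its user namespace.
check_rootless_ids() {
  user="${1:-$(id -un)}"
  status=0
  check_subid_file /etc/subuid "$user" || status=1
  check_subid_file /etc/subgid "$user" || status=1
  return $status
}

# Usage: check_rootless_ids            # current user
#        check_rootless_ids alice
```

Run this before filing the error as a podman bug: the same image pulls fine for the answerer on Fedora because Fedora's useradd assigns a 65536-ID range automatically, while a CentOS user created without one, or a home directory on NFS as the quoted note warns, hits the mapping failure.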

How to build docker image from github repository

In the official docs we can see:
# docker build github.com/creack/docker-firefox
It just works fine for me. docker-firefox is a repository and has a Dockerfile in its root dir.
Then I want to build a redis image at exact version 2.8.10:
# docker build github.com/docker-library/redis/tree/99c172e82ed81af441e13dd48dda2729e19493bc/2.8.10
2014/11/05 16:20:32 Error trying to use git: exit status 128 (Initialized empty Git repository in /tmp/docker-build-git067001920/.git/
error: The requested URL returned error: 403 while accessing https://github.com/docker-library/redis/tree/99c172e82ed81af441e13dd48dda2729e19493bc/2.8.10/info/refs
fatal: HTTP request failed
)
I got the error above. What's the right format for building a docker image from a GitHub repo?
docker build url#ref:dir
Git URLs accept context configuration in their fragment section,
separated by a colon :. The first part represents the reference that
Git will check out, this can be either a branch, a tag, or a commit
SHA. The second part represents a subdirectory inside the repository
that will be used as a build context.
For example, run this command to use a directory called docker in the
branch container:
docker build https://github.com/docker/rootfs.git#container:docker
https://docs.docker.com/engine/reference/commandline/build/
The thing you specified as the repo URL is not a valid git repository. You will get an error when you try
git clone github.com/docker-library/redis/tree/99c172e82ed81af441e13dd48dda2729e19493bc/2.8.10
The valid URL for this repo is github.com/docker-library/redis. So you may want to try the following:
docker build github.com/docker-library/redis
But this will not work either. To build from GitHub, docker requires a Dockerfile in the repository root; however, this repo doesn't provide one. So, I suggest you clone this repo and build the image using the local Dockerfile.
One can use the following example, which sets up a CentOS 7 container for testing the ORC file format. Make sure to escape the # sign:
$ docker build https://github.com/apache/orc.git\#:docker/centos7 -t orc-centos7