how to remove operator-lifecycle-manager from operatorhub.io - kubernetes-operator

I have installed an operator from operatorhub.io. Now how do you remove the operator lifecycle manager again?
Install:
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
Delete:?

If you have the operator-sdk installed, you can simply run:
operator-sdk olm uninstall
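If you want to be explicit about which OLM release gets removed, the SDK also lets you check and pin the version; a small sketch, assuming the olm status subcommand and the --version flag behave as described in the operator-sdk CLI help:
# Sketch: inspect what the SDK sees, then uninstall the version installed above
# (flag names assumed; verify with `operator-sdk olm uninstall --help`)
operator-sdk olm status
operator-sdk olm uninstall --version v0.22.0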
Otherwise, based on the OLM GitHub repo's Makefile, you can follow these steps (a verification sketch follows the list):
- kubectl delete -f deploy/upstream/quickstart/crds.yaml
- kubectl delete -f deploy/upstream/quickstart/olm.yaml
- kubectl delete catalogsources.operators.coreos.com
- kubectl delete clusterserviceversions.operators.coreos.com
- kubectl delete installplans.operators.coreos.com
- kubectl delete operatorgroups.operators.coreos.com subscriptions.operators.coreos.com
- kubectl delete apiservices.apiregistration.k8s.io v1.packages.operators.coreos.com
- kubectl delete ns olm
- kubectl delete ns openshift-operator-lifecycle-manager
- kubectl delete ns openshift-operators
- kubectl delete ns operators
- kubectl delete clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit
- kubectl delete clusterrole.rbac.authorization.k8s.io/aggregate-olm-view
- kubectl delete clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager
- kubectl delete clusterroles.rbac.authorization.k8s.io "system:controller:operator-lifecycle-manager"
- kubectl delete clusterrolebindings.rbac.authorization.k8s.io "olm-operator-binding-openshift-operator-lifecycle-manager"
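Afterwards you can confirm that nothing OLM-related is left; a minimal check built only from the resources named in the list above:
# All of these should come back empty or NotFound once OLM is fully removed
kubectl get crd -o name | grep operators.coreos.com
kubectl get ns olm operators
kubectl get apiservices.apiregistration.k8s.io v1.packages.operators.coreos.com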

Related

.gitlab-ci.yml pipeline runs only on one branch

I have a .gitlab-ci.yml file. When I push to the stage branch it runs the stage job (the one with only: stage), but when I merge to main it still runs the "only stage" job.
What am I missing?
variables:
  DOCKER_REGISTRY: 036470204880.dkr.ecr.us-east-1.amazonaws.com
  AWS_DEFAULT_REGION: us-east-1
  APP_NAME: apiv6
  APP_NAME_STAGE: apiv6-test
  DOCKER_HOST: tcp://docker:2375

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage
Itamar, I believe this is a YAML limitation. See this GitLab issue as reference.
The problem is that you have two jobs with the same name, so when the YAML file is parsed you're actually overriding the first job.
Also, from the official GitLab documentation:
Use unique names for your jobs. If multiple jobs have the same name, only one is added to the pipeline, and it’s difficult to predict which one is chosen
Please try renaming one of your jobs and test again.
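You can see the override with any YAML parser: both definitions collapse into a single publish key, so one of them is silently dropped. A minimal demonstration, assuming python3 with PyYAML is available (with PyYAML the later definition happens to win; GitLab only guarantees that a single job remains):
python3 - <<'EOF'
# Duplicate mapping keys: the parser keeps only one "publish" entry
import yaml
doc = """
publish:
  only: [main]
publish:
  only: [stage]
"""
print(yaml.safe_load(doc))  # {'publish': {'only': ['stage']}}
EOF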

GitLab CI denies access to push using a deploy key with write access

I added a deploy key with write access to my GitLab repository. My .gitlab-ci.yml file contains:
- git clone git@gitlab.domain:user/repo.git
- git checkout master
- git add myfile.pdf
- git commit -m "Generated PDF file"
- git push origin master
The deploy key works when cloning the repository.
Pushing is not possible, even if the deploy key has write access.
remote: You are not allowed to upload code.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@domain/user/repo.git/': The requested URL returned error: 403
I just encountered the same problem and saw this question without an answer, so here is my solution.
Problem
The problem is caused by the fact that the remote url used by git to push the code is in the form http(s)://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@git.mydomain.com/group/project.git.
This URL uses the http(s) protocol, so git doesn't use the SSH deploy key that you set up.
Solution
The solution is to change the push url of the remote origin so it matches ssh://git@git.mydomain.com/group/project.git.
The easiest way to do so is to use the predefined variable CI_REPOSITORY_URL.
Here is an example of code doing so by using sed:
# Change url from http(s) to ssh
url_host=$(echo "${CI_REPOSITORY_URL}" | sed -e 's|https\?://gitlab-ci-token:.*@|ssh://git@|g')
echo "${url_host}"
# ssh://git@git.mydomain.com/group/project.git
# Set the origin push url to the new one
git remote set-url --push origin "${url_host}"
Also, those using the docker executor may want to verify the SSH host key, as suggested by the GitLab documentation on deploy keys for the docker executor.
So here is a more complete example for that executor.
The code is mainly from gitlab documentation on ssh deploy keys.
In this example, the private deploy key is stored inside a variable named SSH_PRIVATE_KEY.
create:push:pdf:
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "${SSH_PRIVATE_KEY}" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - git config --global user.email "email@example.com"
    - git config --global user.name "User name"
    - gitlab_hostname=$(echo "${CI_REPOSITORY_URL}" | sed -e 's|https\?://gitlab-ci-token:.*@||g' | sed -e 's|/.*||g')
    - ssh-keyscan "${gitlab_hostname}" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - git checkout master
    - git add myfile.pdf
    - git commit -m "Generated PDF file"
    - url_host=$(echo "${CI_REPOSITORY_URL}" | sed -e 's|https\?://gitlab-ci-token:.*@|ssh://git@|g')
    - git remote set-url --push origin "${url_host}"
    - git push origin master
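After the set-url call you can confirm that only the push URL changed; a quick check with no project-specific assumptions:
# fetch still uses the CI token URL, push now goes over SSH with the deploy key
git remote -v
# origin  https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@git.mydomain.com/group/project.git (fetch)
# origin  ssh://git@git.mydomain.com/group/project.git (push)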

Unable to finish Gitlab-CI job

I have a CI setup that deploys changes to the server. Everything works perfectly and the changes are pulled to the server, but when all tasks have ended the runner is still waiting:
What is wrong? It should finish with success.
Here is the .gitlab-ci.yml:
stages:
  - deploy

before_script:
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 --decode)

deploy_staging:
  type: deploy
  environment:
    name: staging
    url: example.com
  script:
    - ssh -o StrictHostKeyChecking=no root@example.com "cd public_html/gitlab-test && git checkout master && git pull origin master && exit"
  only:
    - master
Update:
Output:
Running with gitlab-runner 11.6.1 (8d829975)
on Shared heeGPy6w
Using Shell executor...
Running on demeter...
Fetching changes...
HEAD is now at 4eaccda Update .gitlab-ci.yml
From https://git.example.com/user/ssh-test
4eaccda..ce1729c master -> origin/master
Checking out ce1729c4 as master...
Skipping Git submodules setup
$ which ssh-agent || ( apt-get install -qq openssh-client )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 14151
$ ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 --decode)
Identity added: /dev/fd/63 (/dev/fd/63)
$ ssh -o StrictHostKeyChecking=no root@example.com "cd public_html/gitlab-test && git checkout master && git pull origin master"
Already on 'master'
From https://git.example.com/user/ssh-test
* branch master -> FETCH_HEAD
4eaccda..ce1729c master -> origin/master
Updating 4eaccda..ce1729c
Fast-forward
.gitlab-ci.yml | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
$ exit 0
and after this, still waiting...
Finally, I resolved my problem.
The reason was the line - eval $(ssh-agent -s) - when I commented it out, the job could finish (but of course the connection didn't work). So I added a kill command at the end of the script:
- eval $(ssh-agent -k)
That was the solution. Now everything works.
Final code:
stages:
  - deploy

before_script:
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent)
  - ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 --decode)

deploy_staging:
  type: deploy
  environment:
    name: staging
    url: example.com
  script:
    - ssh -o StrictHostKeyChecking=no root@example.com "cd public_html/gitlab-test && git checkout master && git pull origin master && exit 0"
    - eval $(ssh-agent -k)
  only:
    - master
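If the script can fail partway through, the agent would again be left running and the job would hang; one way to make the cleanup unconditional is a shell trap. This is a sketch on top of the fix above, not part of the original answer:
# Start the agent and guarantee it is killed when the shell exits,
# even if a later command fails (trap is plain shell, nothing GitLab-specific)
eval "$(ssh-agent -s)"
trap 'eval "$(ssh-agent -k)"' EXIT
ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 --decode)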

Setting up SSL between Helm and Tiller

I am following these instructions to set up SSL between Helm and Tiller.
When I run helm init like this, I get an error:
helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
$HELM_HOME has been configured at /Users/Koustubh/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
When I check my pods, I get
tiller-deploy-6444c7d5bb-chfxw 0/1 ContainerCreating 0 2h
and after describing the pod, I get
Warning FailedMount 7m (x73 over 2h) kubelet, gke-myservice-default-pool-0198f291-nrl2 Unable to mount volumes for pod "tiller-deploy-6444c7d5bb-chfxw_kube-system(3ebae1df-e790-11e8-98ae-42010a9800f9)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"tiller-deploy-6444c7d5bb-chfxw". list of unmounted volumes=[tiller-certs]. list of unattached volumes=[tiller-certs default-token-9x886]
Warning FailedMount 1m (x92 over 2h) kubelet, gke-myservice-default-pool-0198f291-nrl2 MountVolume.SetUp failed for volume "tiller-certs" : secrets "tiller-secret" not found
If I try to delete the running tiller pod like this, it just gets stuck
helm reset --debug --force
How can I solve this issue? I have also tried the --upgrade flag with helm init, but that doesn't work either.
I had this issue but resolved it by deleting both the Tiller deployment and the service and re-initialising.
I'm also using RBAC, so I have added those commands too:
# Remove existing tiller:
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
# Re-init with your certs
helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
# Add RBAC service account and role
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
# Re-initialize
helm init --service-account tiller --upgrade
# Test the pod is up
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
tiller-deploy-69775bbbc7-c42wp 1/1 Running 0 5m
# Copy the certs to `~/.helm`
cp tiller.cert.pem ~/.helm/cert.pem
cp tiller.key.pem ~/.helm/key.pem
Validate that Helm only responds via TLS:
$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: cannot connect to Tiller
$ helm version --tls
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
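It is also worth confirming that the TLS secret the pod mounts actually exists, since the original FailedMount error complained that it was missing; a minimal check, with the secret name taken from that error message:
# helm init --tiller-tls should have created this secret; if it is missing,
# the tiller-deploy pod stays stuck in ContainerCreating
kubectl get secret tiller-secret -n kube-system
kubectl get pods -n kube-system | grep tiller-deploy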
Thanks to
https://github.com/helm/helm/issues/4691#issuecomment-430617255
https://medium.com/@pczarkowski/easily-install-uninstall-helm-on-rbac-kubernetes-8c3c0e22d0d7

Make kubectl work in gitlab ci

I am searching for a way to use kubectl in gitlab.
So far I have the following script:
deploy_to_dev:
  stage: deploy
  image: docker:dind
  environment:
    name: dev
  script:
    - mkdir -p $HOME/.kube
    - echo $KUBE_CONFIG | base64 -d > $HOME/.kube/config
    - kubectl config view
  only:
    - develop
But it says that GitLab does not know kubectl. Can you point me in the right direction?
You are using the docker:dind image, which does not have the kubectl binary. You should either bring your own image with the binary or download it as part of the job:
deploy_to_dev:
  stage: deploy
  image: alpine:3.7
  environment:
    name: dev
  script:
    - apk update && apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl && mv ./kubectl /usr/local/bin/kubectl
    - mkdir -p $HOME/.kube
    - echo -n $KUBE_CONFIG | base64 -d > $HOME/.kube/config
    - kubectl config view
  only:
    - develop
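Note that storage.googleapis.com/kubernetes-release is the legacy download endpoint; the current upstream install docs point at dl.k8s.io, so an equivalent download step would look roughly like this (a sketch based on those instructions):
# Download the latest stable kubectl from the current official endpoint
# and verify the binary runs (no cluster access needed for --client)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl && mv ./kubectl /usr/local/bin/kubectl
kubectl version --client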
Alternatively, use the google/cloud-sdk image, which has gcloud and kubectl preinstalled.
build:
  stage: build
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    # Make gcloud available
    - source /root/.bashrc