tkn - could not find the requested resource - tekton

Running through these examples, when I get to:
tkn pipelinerun logs hello-goodbye-run -f -n default
I get this error:
Error: the server could not find the requested resource (get pipelineruns.tekton.dev hello-goodbye-run)
The pipeline is there:
kubectl get pipelinerun -A
NAMESPACE   NAME                SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
default     hello-goodbye-run   True        Succeeded   9h
kubectl api-resources | grep pipelineruns
pipelineruns   pr,prs   tekton.dev/v1beta1   true   PipelineRun
tkn version
Client version: 2020-04-01T10:57:38Z
Pipeline version: unknown
Info: I am running on Apple arm64, in case that could be an issue. I don't think it is, as all pods are running.
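A hedged diagnostic, assuming the failure is a client/server API version mismatch: the dated client build may predate the v1beta1 API that kubectl reports above, so tkn could still be requesting pipelineruns at v1alpha1. kubectl can query the resource at explicit API versions to check; the commands below are standard kubectl, but the mismatch itself is an assumption.

# Which tekton.dev API versions does the cluster serve?
kubectl api-versions | grep tekton.dev
# Query pipelineruns at explicit versions; if v1beta1 succeeds while
# v1alpha1 fails with "could not find the requested resource", an old
# tkn client built against v1alpha1 would fail the same way
kubectl get pipelineruns.v1beta1.tekton.dev -n default
kubectl get pipelineruns.v1alpha1.tekton.dev -n default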

Related

Docker Tag Error 25 on gitlab-ci.yml trying to start GitLab Pipeline

I'm going through the "Scalable FastAPI Application on AWS" course. My gitlab-ci.yml file is below.
stages:
  - docker

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

cache:
  key: ${CI_JOB_NAME}
  paths:
    - ${CI_PROJECT_DIR}/services/talk_booking/.venv/

build-python-ci-image:
  image: docker:19.03.0
  services:
    - docker:19.03.0-dind
  stage: docker
  before_script:
    - cd ci_cd/python/
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim .
    - docker push registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim
My Pipeline fails with this error:
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker build -t registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim .
invalid argument "registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 125
It may or may not be relevant but the Container Registry for the GitLab project says there's a Docker connection error.
Thanks
I created a new GitLab account with a new username and things are working now. The underscore does appear to have been the issue.
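For what it's worth, GitLab's predefined $CI_REGISTRY_IMAGE variable expands to the project's registry path, which avoids hand-typing it (a sketch; note it would not have helped here, since the invalid reference came from the underscore in the account name itself, which appears in the expanded value too):

script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # CI_REGISTRY_IMAGE expands to registry.gitlab.com/<namespace>/<project>
  - docker build -t "$CI_REGISTRY_IMAGE:cicd-python3.9-slim" .
  - docker push "$CI_REGISTRY_IMAGE:cicd-python3.9-slim"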

AWS-S3 orb - Circle CI - Unexpected argument(s): arguments

I'm getting the following error in my build:
#!/bin/sh -eo pipefail
# Error calling workflow: 'build-deploy'
# Error calling job: 'build_test_es'
# Error calling command: 'aws-s3/sync'
# Unexpected argument(s): arguments
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
This is how my config.yml file looks; I've suppressed some parts.
version: 2.1
orbs:
  aws-s3: circleci/aws-s3@1.0.0
jobs:
  build_test_es:
    docker:
      - image: circleci/node:10.15
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: NPM install
          command: |
            cd app
            pwd
            npm install
      - run:
          name: NPM build
          command: |
            cd app
            pwd
            npm run build
      - run: mkdir bucket && echo "lorum ipsum" > bucket/build_asset.txt
      - aws-s3/sync:
          from: bucket
          to: 's3://my-s3-bucket-name/prefix'
          arguments: |
            --acl public-read \
            --cache-control "max-age=86400"
          overwrite: true
As you can see I'm using the default command from the docs:
https://circleci.com/orbs/registry/orb/circleci/aws-s3#commands-sync
Is the orb broken? Have I misspelled something?
Fixed it by updating the orb. Nice way to waste time.
version: 2.1
orbs:
  aws-s3: circleci/aws-s3@1.0.3
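If you hit something similar, the CircleCI CLI can list an orb's published versions before you pin one (a sketch; assumes the circleci CLI is installed locally):

# Show the orb's commands and latest published version
circleci orb info circleci/aws-s3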

Setting up SSL between Helm and Tiller

I am following these instructions to set up SSL between Helm and Tiller.
When I run helm init like this, I get an error:
helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
$HELM_HOME has been configured at /Users/Koustubh/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
When I check my pods, I get
tiller-deploy-6444c7d5bb-chfxw 0/1 ContainerCreating 0 2h
and after describing the pod, I get
Warning FailedMount 7m (x73 over 2h) kubelet, gke-myservice-default-pool-0198f291-nrl2 Unable to mount volumes for pod "tiller-deploy-6444c7d5bb-chfxw_kube-system(3ebae1df-e790-11e8-98ae-42010a9800f9)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"tiller-deploy-6444c7d5bb-chfxw". list of unmounted volumes=[tiller-certs]. list of unattached volumes=[tiller-certs default-token-9x886]
Warning FailedMount 1m (x92 over 2h) kubelet, gke-myservice-default-pool-0198f291-nrl2 MountVolume.SetUp failed for volume "tiller-certs" : secrets "tiller-secret" not found
If I try to delete the running tiller pod like this, it just gets stuck
helm reset --debug --force
How can I solve this issue? I have also tried the --upgrade flag with helm init, but that doesn't work either.
I had this issue but resolved it by deleting both the tiller deployment and the service and re-initialising.
I'm also using RBAC so have added those commands too:
# Remove existing tiller:
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
# Re-init with your certs
helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
# Add RBAC service account and role
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
# Re-initialize
helm init --service-account tiller --upgrade
# Test the pod is up
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
tiller-deploy-69775bbbc7-c42wp 1/1 Running 0 5m
# Copy the certs to `~/.helm`
cp tiller.cert.pem ~/.helm/cert.pem
cp tiller.key.pem ~/.helm/key.pem
Validate that Helm is only responding via TLS:
$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: cannot connect to Tiller
$ helm version --tls
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Thanks to
https://github.com/helm/helm/issues/4691#issuecomment-430617255
https://medium.com/@pczarkowski/easily-install-uninstall-helm-on-rbac-kubernetes-8c3c0e22d0d7
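For reference, the cert files used above (ca.cert.pem, tiller.cert.pem, tiller.key.pem) can be generated with openssl along these lines; a minimal sketch assuming a self-signed CA, with illustrative subject names:

# Self-signed CA
openssl genrsa -out ca.key.pem 4096
openssl req -new -x509 -key ca.key.pem -sha256 -days 365 -out ca.cert.pem -subj "/CN=tiller-ca"
# Tiller server key and cert signed by that CA
openssl genrsa -out tiller.key.pem 4096
openssl req -new -key tiller.key.pem -sha256 -out tiller.csr.pem -subj "/CN=tiller-server"
openssl x509 -req -in tiller.csr.pem -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -days 365 -out tiller.cert.pem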

Cannot get TCP port information from Kubernetes host - OpenShift Origin - oc

I was following Openshift's Local Cluster Management documentation.
After I ran oc cluster up
[root@user ~]# oc cluster up
Starting OpenShift using openshift/origin:v3.6.0 ...
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ...
WARNING: Docker version is 1.21, it needs to be >= 1.22
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v3.6.0 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... FAIL
Error: Cannot get TCP port information from Kubernetes host
Caused By:
Error: cannot create container using image openshift/origin:v3.6.0
Caused By:
Error: Error response from daemon: SHM size must be greater then 0
[root@ip-172-31-0-186 ~]# oc cluster up --loglevel=5
-- Checking OpenShift client ...
-- Checking Docker client ...
I0803 04:30:33.543172 1417 up.go:590] No Docker environment variables found. Will attempt default socket.
I0803 04:30:33.543221 1417 up.go:595] No Docker host (DOCKER_HOST) configured. Will attempt default socket.
-- Checking Docker version ...
I0803 04:30:33.543240 1417 helper.go:114] Retrieving Docker version
I0803 04:30:33.554087 1417 helper.go:120] Docker version results: &types.Version{Version:"1.9.1", APIVersion:"1.21", GitCommit:"78ee77d/1.9.1", GoVersion:"go1.4.2", Os:"linux", Arch:"amd64", KernelVersion:"3.10.0-693.el7.x86_64", Experimental:false, BuildTime:""}
I0803 04:30:33.554126 1417 helper.go:124] APIVersion: 1.21
I0803 04:30:33.554158 1417 up.go:686] Checking that docker API version is at least 1.22
WARNING: Docker version is 1.21, it needs to be >= 1.22
-- Checking for existing OpenShift container ...
I0803 04:30:33.554181 1417 helper.go:171] Inspecting docker container "origin"
I0803 04:30:33.555084 1417 helper.go:175] Container "origin" was not found
-- Checking for openshift/origin:v3.6.0 image ...
I0803 04:30:33.555101 1417 helper.go:143] Inspecting Docker image "openshift/origin:v3.6.0"
I0803 04:30:33.556444 1417 helper.go:146] Image "openshift/origin:v3.6.0" found: &types.ImageInspect{ID:"c6d16974c8a3a5da3ab799533daa2dbd54e56b1f0ebbad59345154fc8e836ff2", RepoTags:[]string{"docker.io/openshift/origin:v3.6.0"}, RepoDigests:[]string{}, Parent:"395d30169bc02cca2e7083926b0fd6f2e6b7034a6de41a811cce0ab7c7473fca", Comment:"", Created:"2017-08-01T18:34:13.736398725Z", Container:"ae53137cc1b98b2f93051589d6aee252e505ac82f8e7a31f5ab49bfc0e9dc91a", ContainerConfig:(*container.Config)(0xc420277b00), DockerVersion:"1.12.6", Author:"", Config:(*container.Config)(0xc4202e2120), Architecture:"amd64", Os:"linux", Size:611206034, VirtualSize:974248741, GraphDriver:types.GraphDriverData{Name:"devicemapper", Data:map[string]string{"DeviceId":"7", "DeviceName":"docker-202:2-25214823-c6d16974c8a3a5da3ab799533daa2dbd54e56b1f0ebbad59345154fc8e836ff2", "DeviceSize":"107374182400"}}, RootFS:types.RootFS{Type:"", Layers:[]string(nil), BaseLayer:""}}
-- Checking Docker daemon configuration ...
I0803 04:30:33.556503 1417 helper.go:65] Retrieving Docker daemon info
I0803 04:30:33.681753 1417 helper.go:71] Docker daemon info: &types.Info{ID:"IITV:S6LY:XNQS:LA63:VAH6:POZR:RGCW:MFWK:OTI7:DEII:AQK5:FDC6", Containers:0, ContainersRunning:0, ContainersPaused:0, ContainersStopped:0, Images:6, Driver:"devicemapper", DriverStatus:[][2]string{[2]string{"Pool Name", "docker-202:2-25214823-pool"}, [2]string{"Pool Blocksize", "65.54 kB"}, [2]string{"Base Device Size", "107.4 GB"}, [2]string{"Backing Filesystem", ""}, [2]string{"Data file", "/dev/loop0"}, [2]string{"Metadata file", "/dev/loop1"}, [2]string{"Data Space Used", "1.091 GB"}, [2]string{"Data Space Total", "107.4 GB"}, [2]string{"Data Space Available", "18.09 GB"}, [2]string{"Metadata Space Used", "1.339 MB"}, [2]string{"Metadata Space Total", "2.147 GB"}, [2]string{"Metadata Space Available", "2.146 GB"}, [2]string{"Udev Sync Supported", "true"}, [2]string{"Deferred Removal Enabled", "false"}, [2]string{"Deferred Deletion Enabled", "false"}, [2]string{"Deferred Deleted Device Count", "0"}, [2]string{"Data loop file", "/var/lib/docker/devicemapper/devicemapper/data"}, [2]string{"Metadata loop file", "/var/lib/docker/devicemapper/devicemapper/metadata"}, [2]string{"Library Version", "1.02.140-RHEL7 (2017-05-03)"}}, SystemStatus:[][2]string(nil), Plugins:types.PluginsInfo{Volume:[]string(nil), Network:[]string(nil), Authorization:[]string(nil)}, MemoryLimit:true, SwapLimit:true, KernelMemory:false, CPUCfsPeriod:true, CPUCfsQuota:true, CPUShares:false, CPUSet:false, IPv4Forwarding:true, BridgeNfIptables:true, BridgeNfIP6tables:true, Debug:false, NFd:15, OomKillDisable:true, NGoroutines:25, SystemTime:"2017-08-03T04:30:33.681150233-04:00", ExecutionDriver:"native-0.2", LoggingDriver:"json-file", CgroupDriver:"", NEventsListener:0, KernelVersion:"3.10.0-693.el7.x86_64", OperatingSystem:"Red Hat Enterprise Linux Server 7.4 (Maipo)", OSType:"", Architecture:"", IndexServerAddress:"https://index.docker.io/v1/", RegistryConfig:(*registry.ServiceConfig)(0xc4210fb700), NCPU:2, MemTotal:3973541888, DockerRootDir:"/var/lib/docker", HTTPProxy:"", HTTPSProxy:"", NoProxy:"", Name:"ip-172-31-0-186.us-west-2.compute.internal", Labels:[]string(nil), ExperimentalBuild:false, ServerVersion:"1.9.1", ClusterStore:"", ClusterAdvertise:"", SecurityOptions:[]string(nil)}
I0803 04:30:33.681847 1417 helper.go:42] Looking for "172.30.0.0/16" in []*registry.NetIPNet{(*registry.NetIPNet)(0xc4210f1a10), (*registry.NetIPNet)(0xc4210f1a70)}
I0803 04:30:33.681859 1417 helper.go:46] Found "172.30.0.0/16"
-- Checking for available ports ...
I0803 04:30:33.681920 1417 run.go:181] Creating container named ""
config:
image: openshift/origin:v3.6.0
entry point:
/bin/bash
command:
-c
cat /proc/net/tcp && ( [ -e /proc/net/tcp6 ] && cat /proc/net/tcp6 || true)
host config:
pid mode: host
user mode:
network mode: host
FAIL
Error: Cannot get TCP port information from Kubernetes host
Caused By:
Error: cannot create container using image openshift/origin:v3.6.0
Caused By:
Error: Error response from daemon: SHM size must be greater then 0
I have placed the kubernetes config file in .kube/config but I am still getting the same error. Does the Kubernetes cluster need to be on the same machine?
UPDATE 1
Installed the latest version of Docker following the Docker docs.
To resolve a dependency, I installed container-selinux (sudo yum install ftp://fr2.rpmfind.net/linux/centos/7.3.1611/extras/x86_64/Packages/container-selinux-2.9-4.el7.noarch.rpm).
After that I tried to bring up the cluster with oc cluster up again. This time it fails at the Docker daemon configuration check.
[root@ip-172-31-0-186 ~]# oc cluster up
Starting OpenShift using openshift/origin:v3.6.0 ...
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v3.6.0 image ... OK
-- Checking Docker daemon configuration ... FAIL
Error: did not detect an --insecure-registry argument on the Docker daemon
Solution:
Ensure that the Docker daemon is running with the following argument:
--insecure-registry 172.30.0.0/16
The docs say to add --insecure-registry 172.30.0.0/16 in /etc/sysconfig/docker, but for newer versions of Docker there is no file at that location. I created and updated /etc/sysconfig/docker anyway, but I am still getting the above error.
OK, the problem is the insecure-registry configuration. Specify the insecure registry in daemon.json in /etc/docker with the config below:
{
  "insecure-registries": [
    "172.30.0.0/16"
  ]
}
This works with the latest Docker as well.
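After writing daemon.json, the daemon needs a restart before the setting takes effect; one way to apply and verify it (a sketch, assuming systemd and a recent docker info output format):

sudo systemctl restart docker
# The registry should now be listed under "Insecure Registries"
docker info | grep -A 3 "Insecure Registries"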
For any particular version of Kubernetes or OpenShift, the supported Docker version lags a little behind the latest. So I would advise you not to install the latest Docker from the Docker documentation, but to install it using your Linux distribution's package manager. For Fedora and CentOS just do:
sudo yum install -y docker
Once you have done that, all the dependency management will be taken care of and you don't need to manually install anything else.
Now that you have installed Docker using the package manager, you will find /etc/sysconfig/docker, and you can add the --insecure-registry 172.30.0.0/16 line there.
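For example, the line in /etc/sysconfig/docker might end up looking like this (a sketch; the existing OPTIONS value varies by installation):

OPTIONS='--selinux-enabled --log-driver=journald --insecure-registry 172.30.0.0/16'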
HTH.

How to check that httpd is enabled and running using InSpec with kitchen-docker on CentOS?

Running my tests with InSpec, I am unable to verify that httpd is enabled and running.
InSpec test
describe package 'httpd' do
  it { should be_installed }
end

describe service 'httpd' do
  it { should be_enabled }
  it { should be_running }
end

describe port 80 do
  it { should be_listening }
end
The output for kitchen verify is:
System Package
  ✔ httpd should be installed
Service httpd
  ✖ should be enabled
    expected that `Service httpd` is enabled
  ✖ should be running
    expected that `Service httpd` is running
Port 80
  ✖ should be listening
    expected `Port 80.listening?` to return true, got false

Test Summary: 1 successful, 3 failures, 0 skipped
Recipe for httpd installation:
if node['platform'] == 'centos'
  # do centos installation
  package 'httpd' do
    action :install
  end

  execute "chkconfig httpd on" do
    command "chkconfig httpd on"
  end

  execute 'apache start' do
    command '/usr/sbin/httpd -DFOREGROUND &'
    action :run
  end
end
I do not know what I am doing wrong.
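As an aside, the same recipe is usually written with Chef's package and service resources instead of shelling out to chkconfig and backgrounding httpd; a sketch, though as the answer below explains, it still assumes a working init system inside the container:

package 'httpd'

service 'httpd' do
  action [:enable, :start]
end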
More info
CentOS version on the Docker instance:
kitchen exec --command 'cat /etc/centos-release'
-----> Execute command on default-centos-72.
CentOS Linux release 7.2.1511 (Core)
Chef version installed on my host:
Chef Development Kit Version: 1.0.3
chef-client version: 12.16.42
delivery version: master (83358fb62c0f711c70ad5a81030a6cae4017f103)
berks version: 5.2.0
kitchen version: 1.13.2
UPDATE 1: Kitchen yml with driver attributes
The platform has the configuration recommended by coderanger:
---
driver:
  name: docker
  use_sudo: false

provisioner:
  name: chef_zero

verifier: inspec

platforms:
  - name: centos-7.2
    driver:
      platform: rhel
      run_command: /usr/lib/systemd/systemd
      provision_command:
        - /bin/yum install -y initscripts net-tools wget

suites:
  - name: default
    run_list:
      - recipe[apache::default]
    verifier:
      inspec_tests:
        - test/integration
    attributes:
And this is the output when running kitchen test:
... some docker steps...
Step 16 : RUN echo ssh-rsa\ AAAAB3NzaC1yc2EAAAADAQABAAABAQDIp1HE9Zbtl3zAH2KKL1mVzb7BU1WxK7mi5xpIxNRBar7EZAAzxi1pVb1JwUXFSCVoAmUyfn/lBsKlgXnUD49pKrqkeLQQW7NoG3uCFiXBUTof8nFVuLYtw4CTiAudplyMvu5J7HQIP1Hve1caY27tFs/kpkQaXHCEuIkqgrM2rreMKK0n8im9b36L2SwWyM/GwqcIS1z9mMttid7ux0\+HOWWHqZ\+7gumOauh6tLRbtjrm3YYoaIAMyv945MIX8BFPXSQixThBVOlXGA9iTwUZWjU6WvZThxVFkKPR9KZtUTuTCT7Y8\+wFtQ/9XCHpPR00YDQvS0Vgdb/LhZUDoNqV\ kitchen_docker_key >> /home/kitchen/.ssh/authorized_keys
---> Using cache
---> c0e6b9e98d6a
Successfully built c0e6b9e98d6a
d486d7ebfe000a3138db06b1424c943a0a1ee7b2a00e8a396cb8c09f9527fb4b
0.0.0.0:32841
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
.....
You cannot, at least not out of the box. This is one area where kitchen-docker shows its edges. We try to pretend that a container is like a tiny VM, but in reality it isn't, and one notable place where the pretending breaks down is init systems. With CentOS 7, that means systemd. It is possible to get systemd to run inside the container (see https://github.com/poise/yolover-example/blob/master/.kitchen.yml#L17-L33), but not all features are supported and it can generally be a bit odd :-/ That example should be enough to make your tests work, though. For completeness, CentOS 6 uses Upstart, which just flat out won't run inside Docker, so no love there either.
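If running systemd in the container is not an option, a hedged fallback is to assert on the process and port directly and skip the service resource (a sketch using InSpec's processes resource):

# Check httpd without consulting the init system
describe processes('httpd') do
  it { should exist }
end

describe port 80 do
  it { should be_listening }
end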