Traefik Ingress Controller on Kubernetes, receiving permissions error

I've rolled out the Traefik Ingress Controller on my AKS cluster (Kubernetes on Azure). I've got the TOML file talking to the pod, but I see the following in the logs:
ime="2018-12-21T00:09:36Z" level=error msg="Failed to read new
account, ACME data conversion is not available : permissions 755 for
certs are too open, please use 600" time="2018-12-21T00:09:36Z"
level=error msg="Unable to add ACME provider to the providers list:
unable to get ACME account : permissions 755 for certs are too open,
please use 600"
I have resolved this before in a Docker environment, where I simply ran chmod 600 on the directory in question. However, I can't do that here, as I don't have direct access to the underlying storage.
If I open a shell into the container and try to chmod that way, the system tells me it's a read-only filesystem.
Any help is really appreciated.

Rutnet solved the issue by deploying a custom Traefik container which includes the required permissions.
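An alternative to a custom image, in case it helps others: since only the image's filesystem is read-only, an init container can tighten the permissions on the mounted volume before Traefik starts. A minimal sketch, assuming the volume is named certs and the ACME storage file is /certs/acme.json (both names are assumptions, not from the original setup):

initContainers:
- name: fix-acme-perms                # hypothetical name
  image: busybox
  command: ["sh", "-c", "touch /certs/acme.json && chmod 600 /certs/acme.json"]
  volumeMounts:
  - name: certs                       # must match the volume the Traefik container mounts
    mountPath: /certs

Because the init container writes only to the mounted volume, it works even though the container image itself is read-only.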

Related

MinIO operator on minikube is not working

I'm trying to use the MinIO operator on a minikube (1 node) deployed on an EC2 machine.
The operator is deployed correctly, and so is the tenant; it all seems fine until I try to connect to the created tenant.
At that point I receive a 500 internal server error, and then I'm unable to create buckets or to use the mc client that MinIO provides.
I tried both the MinIO console (using a port-forward) and the command-line minio command to create the tenant, and both worked.
This is what I see with kubectl:
mc test
kubectl get all -n minio-tenant-aisync
kubectl get all --all-namespaces
I am new to Kubernetes and MinIO then I don't know if I am missing something, could you help me please?
The first mc command that you are running shows there is something listening on port 9000 of your localhost. However, you are getting a TLS verification error because MinIO by default uses a certificate issued by the local Kubernetes certificate authority, and the returned certificate is not valid for the localhost domain. The solution is to add the --insecure flag to your mc command (and include it in all subsequent commands unless you use a valid certificate), i.e.:
./mc alias set minio https://localhost:9000 [accesskey] [secretkey] --insecure
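With the alias registered, subsequent commands take the same flag; for example, creating a bucket (the bucket name is just illustrative):
./mc mb minio/test-bucket --insecure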

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with one master and one worker.
Here's my .kube\config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster was built using kubeadm with the default certificates in the pki directory, and kubectl fails with:
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via the Docker Desktop UI 2.1.0.1
the installer created the config file at ~/.kube/config
the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
using a proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure this URL doesn't go through the proxy; in my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster:
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy-paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (those lines included) into a new text file, say... myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly, it will trust the cluster (I tried renaming the file, and it was no longer trusted afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
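For what it's worth, certificate-authority-data is normally just the base64-encoded PEM certificate, so in a bash-style shell you can also recover the file directly (assuming the first cluster entry in your config is the one you want):
# decode the embedded CA certificate out of the current kubeconfig
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > myCert.crt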
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
here devops1-218400 is my project name. Replace it with your project name.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by exporting kubelet.conf to the KUBECONFIG environment variable:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify to the kubectl command, as a one-time workaround.
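For example (this disables TLS verification entirely, so treat it as a diagnostic rather than a fix):
kubectl --insecure-skip-tls-verify get nodes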
Sorry I wasn't able to provide this earlier; I just realized the cause.
On the master node we were running a kubectl proxy:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this, and voilà, the error was gone.
I'm now able to do
kubectl get nodes
NAME                    STATUS    AGE       VERSION
centos-k8s2             Ready     3d        v1.7.5
localhost.localdomain   Ready     3d        v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case, I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you who were late to the thread like I was, and for whom none of these answers worked, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (a Windows 10 machine connecting to a Raspberry Pi cluster on the same network). Make sure that you do this, and it may fix your problem like it did mine.
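Concretely, the cluster entry in .kube/config should point at the master rather than loopback (192.168.x.x being the example above):
server: https://192.168.x.x:6443    # not https://127.0.0.1:6443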
On GCP
Check your gcloud version:
gcloud version
Then run:
gcloud container clusters get-credentials <clusterName> --zone=<zoneName>
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list
ref: x509, Marketplace deployments on GCP, Kubernetes
I got this because I was not connected to the office's VPN
In case of this error, you should export the kubecfg, which contains the certs:
export KOPS_STATE_STORE=s3://<paste your S3 store>
kops export kubecfg <your cluster name>
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume that you have deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so kubectl can load the configuration from the .kube dir.
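If you'd rather stay root, you can instead point KUBECONFIG at that user's file (the username x is from the example above):
export KUBECONFIG=/home/x/.kube/config
kubectl get nodes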
Update: when copying the ~/.kube/config file content from a master node to a local PC, make sure to replace the load balancer's hostname with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.

OpenShift Origin Build - unable to use git as a source

I'm trying to do a simple build of a Node.js app I wrote in OpenShift Origin, using the following YAML:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "dyn-kickstart"
spec:
triggers:
- type: "GitHub"
github:
secret: "secret101"
source:
git:
uri: git#bitbucket.org:serverninja02/dynamic-kickstart.git
sourceSecret:
name: "github"
strategy:
type: Docker
dockerStrategy:
dockerfilePath: .
forcePull: true
noCache: true
output:
to:
kind: "DockerImage"
name: "docker-registry-default.apps.reedfamily.local/serverninja/dynamic-kickstart:v0.0.1
The command I'm running to create the build:
$ cat dynamic-kickstart.yml | oc create -f -
What I'm running into is that the build service account doesn't seem to be able to access the git URL to clone:
Cloning "git#bitbucket.org:serverninja02/dynamic-kickstart.git" ...
error: build error: Warning: Permanently added 'bitbucket.org,192.168.1.81' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I did follow the instructions for creating the ssh-privatekey secret, placing it in the secret store, and linking it to the build service account. I also double-checked the key, and verified via SSH agent forwarding that I can log into the OpenShift node and ssh git@bitbucket.org.
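For reference, the secret-and-link steps look roughly like this on recent oc clients (the key path is illustrative; the secret name github matches the sourceSecret in the BuildConfig above):
# create the SSH-auth source secret and attach it to the builder service account
oc create secret generic github --from-file=ssh-privatekey=$HOME/.ssh/id_rsa --type=kubernetes.io/ssh-auth
oc secrets link builder github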
I'm not sure what I'm doing wrong, but even when using the https git URL and making the repo public, it still doesn't work; it complains about the peer certificate not being trusted:
Cloning "https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git" ...
error: build error: fatal: unable to access 'https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git/': Peer's certificate issuer has been marked as not trusted by the user.
At this point, I'm unsure where to go with this as OpenShift Origin doesn't seem to want to build anything from git as a source.
Any help or suggestions would be greatly appreciated!
OpenShift Version: 1.3.0
OpenShift Kubernetes Version: v1.3.0+52492b4
This is a flat network behind a router. DNS is on Active Directory with a wildcard entry for *.apps.reedfamily.local.
This is a test-bed environment in a .local domain. However, I'm using this build as a potential POC for hosting OpenShift at my company.
I figured out the answer to my problem!!! So I'll share:
The /etc/resolv.conf was configured automatically during the build of my OpenShift nodes when I ran openshift-ansible. Unfortunately, there was a search domain placed in /etc/resolv.conf that must have been causing issues.
# Generated by NetworkManager
search apps.reedfamily.local
nameserver 192.168.1.40
Once I removed "search apps.reedfamily.local", the very next build worked. (That search domain, combined with the wildcard DNS entry for *.apps.reedfamily.local, explains why bitbucket.org was resolving to the local address 192.168.1.81 in the clone error above.)
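For anyone hitting the same thing, the line can be removed in place (note that NetworkManager may regenerate /etc/resolv.conf, so the search domain should also be removed from the interface/NetworkManager configuration to make the fix stick):
sed -i '/^search apps.reedfamily.local/d' /etc/resolv.conf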

how docker-machine uses docker api to copy certificates

As I understand it, docker-machine uses the Docker remote API to do whatever it does, for example to regenerate certificates. I have checked the Docker API but couldn't find how it's possible to send certificates to that machine using only the API. Can someone help, please?
The TLS files are hosted locally on the Docker client. For this reason you should protect the files as if they were a root password.
This page will walk you through generating the files needed to negotiate a connection over TLS. Note that the remote daemon must be running TLS.
https://docs.docker.com/engine/security/https/
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$HOST:2376 version
Note: Docker over TLS should run on TCP port 2376.
Warning: As shown in the example above, you don't have to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!
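Equivalently, the client can pick up the certificate files from environment variables instead of flags (the paths and host here are illustrative):
# the Docker client reads these standard variables
export DOCKER_HOST=tcp://$HOST:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker    # directory containing ca.pem, cert.pem, key.pem
docker version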

SSL error while cloning from github enterprise on AWS EC2 instance

My ultimate goal is to be able to do pip installs from our GitHub Enterprise server using Elastic Beanstalk. The issue is that the EC2 instances will not trust our SSL certificate from Network Solutions.
Traceback from an Elastic Beanstalk Python EC2 instance:
>> git clone https://my.ghe.com/some/repo.git
Cloning into 'squire'...
fatal: unable to access 'https://my.ghe.com/some/repo.git/': Peer's Certificate issuer is not recognized.
I've tried a half-dozen possible fixes to no avail. Has anyone had any success cloning over https? I'd like to avoid cloning over ssh so I don't have to deal with SSH keys in EB.
Check out this answer about using git config to disable SSL verification and other SSL-overriding options:
How can I make git accept a self signed certificate?
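In short, the options from that answer are either to point git at a CA bundle that includes your issuer, or to disable verification outright (the bundle path is illustrative):
# preferred: tell git which CA bundle to trust
git config --global http.sslCAInfo /path/to/ca-bundle.crt
# blunt fallback: skip verification for a single command
GIT_SSL_NO_VERIFY=true git clone https://my.ghe.com/some/repo.git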