Jenkins/Kubernetes cannot clone from Gitea repository with valid cert - ssl

I am using Jenkins with Kubernetes agents, trying to build a Gitea-hosted git repository using an organizational folder configuration. When my build triggers, I get
stderr: fatal: unable to access 'https://<servername.com>/homelab/java-spring-microservice.git/': server certificate verification failed. CAfile: none
The repository (also hosted within the Kubernetes cluster) has a valid Let's Encrypt certificate on its ingress (managed by cert-manager). I'm able to clone this repo fine from the git command line (without disabling TLS verification).
My Jenkinsfile looks like this:
podTemplate(containers: [
    containerTemplate(
        name: 'maven',
        image: 'maven:3.8.4-openjdk-11',
        command: 'sleep',
        args: '30d'
    )
]) {
    node(POD_LABEL) {
        stage('Checkout') {
            checkout scm
            container('maven') {
                stage('Build') {
                    sh '''
                        mvn clean package
                    '''
                }
            }
        }
    }
}
I've looked around and seen ways to get around this by disabling TLS verification for the git operation, but that seems wrong-headed to me, since TLS appears to be working. I'll admit to being a bit uncertain of how exactly all this works under Kubernetes (where should I be looking to see if the CA trust chain is correct, etc.).

After some digging, I determined that both the jenkins/inbound-agent image and the Jenkins image itself were built against base images that did not have up-to-date CA trust chains. I was able to resolve the problem by updating to the latest Jenkins, updating the pod template for the Kubernetes agents, and updating the Kubernetes plugin for Jenkins.
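If you want to confirm the stale trust chain before upgrading, you can poke at it from inside the agent pod. A minimal sketch, assuming a hypothetical agent pod name and that curl is present in the jnlp (inbound-agent) container:

# hypothetical pod name; the jnlp container is the inbound agent
kubectl exec jenkins-agent-abc12 -c jnlp -- curl -sv https://servername.com/ -o /dev/null
# on Debian-based images the bundled CA store lives here; a very old
# file suggests a stale base image
kubectl exec jenkins-agent-abc12 -c jnlp -- ls -l /etc/ssl/certs/ca-certificates.crt

If curl reports the same verification failure git does, the image's CA bundle simply doesn't contain the current Let's Encrypt chain; this bit a lot of older images after the DST Root CA X3 cross-sign expired in September 2021.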

Related

GitHub self-hosted action runner git LFS fails x509 certificate signed by unknown authority

I am trying to create a GitHub Action that runs on a Windows Server self-hosted runner, and I'm stuck on my checkout failing at the LFS download portion.
I'm using
- uses: actions/checkout@v3
  with:
    lfs: true
The checkout for the normal code works fine, but when it gets to the LFS download step I get a lot of messages complaining about x509: certificate signed by unknown authority.
LFS: Get "https://github-cloud.githubusercontent.com/alambic/details_changed_to_protect_the_innocent": x509: certificate signed by unknown authority
The self-hosted runner is on a domain that is behind a firewall that interrogates https traffic and inserts its own certificate into the chain, so I'm guessing that the unknown authority is that certificate, but I don't know where that certificate needs to be trusted so that things work.
The certificate is trusted by the OS and is installed in the certificate store through a group policy, but it seems that git LFS verifies the certificate chain separately from that and complains anyway because the certificate is unexpected.
A common solution I've seen floating around for things like this is to just turn off SSL checking, but that feels like a temporary hack rather than a real solution. I would like this to work with all security in place.
As an additional note, this is running on a server that is also running TeamCity, and the TeamCity GitHub config is able to clone repos with LFS from that same server, so these problems are just inside of the GitHub action runner environment that gets set up.
Since the firewall only inserts its certificate into HTTPS traffic, I was able to get things working using an SSH key. I added the private key as a secret and the public key to the repo's deploy keys, and now everything is working as expected.
- uses: actions/checkout@v3
  with:
    lfs: true
    ssh-key: ${{ secrets.repo_ssh }}
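If you'd rather stay on HTTPS, git LFS reads its TLS settings from git config, so pointing git at the firewall's certificate should also work. A sketch, assuming the inspection certificate has been exported to a hypothetical path C:\certs\firewall-ca.crt:

# run as the account the runner service executes under
git config --global http.sslCAInfo C:/certs/firewall-ca.crt
# on Windows, recent git and git-lfs versions can instead use the OS
# certificate store directly, which picks up the group-policy cert:
git config --global http.sslBackend schannel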

helm: x509: certificate signed by unknown authority

I'm using Kubernetes and I recently updated my admin certs used in the kubeconfig. However, after I did that, all the helm commands fail thus:
Error: Get https://cluster.mysite.com/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: x509: certificate signed by unknown authority
kubectl works as expected:
$ kubectl get nodes
NAME                                           STATUS    ROLES     AGE   VERSION
ip-10-1-0-34.eu-central-1.compute.internal     Ready     master    42d   v1.7.10+coreos.0
ip-10-1-1-51.eu-central-1.compute.internal     Ready     master    42d   v1.7.10+coreos.0
ip-10-1-10-120.eu-central-1.compute.internal   Ready     <none>    42d   v1.7.10+coreos.0
ip-10-1-10-135.eu-central-1.compute.internal   Ready     <none>    27d   v1.7.10+coreos.0
ip-10-1-11-71.eu-central-1.compute.internal    Ready     <none>    42d   v1.7.10+coreos.0
ip-10-1-12-199.eu-central-1.compute.internal   Ready     <none>    8d    v1.7.10+coreos.0
ip-10-1-2-110.eu-central-1.compute.internal    Ready     master    42d   v1.7.10+coreos.0
As far as I've been able to read, helm is supposed to use the same certificates as kubectl, which makes me curious as to how kubectl works but helm doesn't.
This is a production cluster with internal releases handled through helm charts, so it being solved is imperative.
Any hints would be greatly appreciated.
As a workaround you can try to disable certificate verification. Helm uses the kube config file (by default ~/.kube/config). You can add insecure-skip-tls-verify: true to the cluster section:
clusters:
- cluster:
    server: https://cluster.mysite.com
    insecure-skip-tls-verify: true
  name: default
Did you already try to reinstall helm/tiller?
kubectl delete deployment tiller-deploy --namespace kube-system
helm init
Also check if you have configured an invalid certificate in the cluster configuration.
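One way to check the certificate actually recorded in your kubeconfig is to decode it and look at the issuer and validity dates. A sketch using the first cluster entry (adjust the index for yours):

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d | openssl x509 -noout -subject -issuer -dates

If the issuer no longer matches the CA that signed the API server's certificate after a cert rotation, that mismatch is exactly what produces the x509 error.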
In my case, I was running a single self-managed cluster and the config file also contained certificate-authority-data, so following the above answer threw the error below:
Error: Kubernetes cluster unreachable: Get "https://XX.XX.85.154:6443/version?timeout=32s": x509: certificate is valid for 10.96.0.1, 172.31.25.161, not XX.XX.85.154
And my config was:
- cluster:
    certificate-authority-data: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    server: https://54.176.85.154:6443
    insecure-skip-tls-verify: true
So I had to remove the certificate-authority-data:
- cluster:
    server: https://54.176.85.154:6443
    insecure-skip-tls-verify: true
Use --insecure-skip-tls-verify to skip TLS verification on the command line:
helm repo add stable --insecure-skip-tls-verify https://charts.helm.sh/stable
In my case the error was caused by an untrusted certificate from the Helm repository.
Downloading the certificate and specifying it using the --ca-file option solved the issue (at least in Helm version 3).
helm repo add --ca-file /path/to/certificate.crt repoName https://example/repository
--ca-file string, verify certificates of HTTPS-enabled servers using this CA bundle
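If you don't have the certificate file handy, you can capture whatever chain the server presents. A sketch, reusing the hypothetical repository host from above:

openssl s_client -showcerts -connect example:443 -servername example </dev/null 2>/dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > /path/to/certificate.crt

Note this captures the served chain (leaf plus intermediates); for a private CA you may still need to obtain the root certificate from whoever issued it.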
Adding the line below under - cluster: in my /home/centos/.kube/config file fixed my issue:
insecure-skip-tls-verify: true
My config file now looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/centos/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Nov 2021 20:51:44 EDT
        provider: minikube.sigs.k8s.io
        version: v1.23.2
      name: cluster_info
    server: https://192.168.49.2:8443
    insecure-skip-tls-verify: true
  name: minikube
contexts:
I encountered an edge case for this. You can also get this error if you have multiple kubeconfig files referenced in the KUBECONFIG variable, and more than one file has clusters with the same name.
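A quick way to spot that collision is to print the cluster names from each file in the variable. A rough sketch (bash, hypothetical file paths):

for f in ${KUBECONFIG//:/ }; do
  echo "== $f"
  grep -E '^[[:space:]]*name:' "$f"
done

The grep also picks up context and user names, but duplicate cluster names across files will stand out; renaming one of them resolves the bad merge.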
In my case it was an old version of Helm (v3.6.3). After I upgraded to Helm v3.9.0 (brew upgrade helm), everything worked again.
Although adding the repo with --ca-file did the trick, when Helm tried to download from that repo with the command below, I still got the x509: certificate signed by unknown authority error:
helm dependency update helm/myStuff
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "myRepo" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 18 charts
Downloading myService from repo https://myCharts.me/
Save error occurred: could not download https://myCharts.me/stuff.tgz ...
x509: certificate signed by unknown authority
Deleting newly downloaded charts, restoring pre-update state
What I needed to do, apart from adding the repo with --ca-file, was to download the repository certificate and install it as Current User, selecting "Place all certificates in the following store: Trusted Root Certification Authorities".
After installing the certificate I also needed to restart the computer. After the restart, when you open the browser and paste the repo URL, it should connect without giving a warning and the site is trusted (this way you know you installed the certificate successfully).
You can then go ahead and run the command again; it should pick up the certificate this time.
helm dependency update helm/myStuff
....
Saving 18 charts
Downloading service1 from repo https://myCharts.me/
Downloading service2 from repo https://myCharts.me/
....
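For what it's worth, the same import can be scripted rather than clicked through the wizard. A sketch using Windows' certutil, assuming the downloaded certificate was saved as myRepo-ca.crt:

certutil -user -addstore Root myRepo-ca.crt

The -user flag targets the Current User store, and Root is the "Trusted Root Certification Authorities" store the wizard refers to.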

OpenShift Origin Build - unable to use git as a source

I'm trying to do a simple build of a nodejs app I wrote in OpenShift Origin using the following yaml:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "dyn-kickstart"
spec:
triggers:
- type: "GitHub"
github:
secret: "secret101"
source:
git:
uri: git#bitbucket.org:serverninja02/dynamic-kickstart.git
sourceSecret:
name: "github"
strategy:
type: Docker
dockerStrategy:
dockerfilePath: .
forcePull: true
noCache: true
output:
to:
kind: "DockerImage"
name: "docker-registry-default.apps.reedfamily.local/serverninja/dynamic-kickstart:v0.0.1
The command I'm running to create the build:
$ cat dynamic-kickstart.yml | oc create -f -
What I'm running into is that the build service account doesn't seem to be able to access the git URL to clone:
Cloning "git#bitbucket.org:serverninja02/dynamic-kickstart.git" ...
error: build error: Warning: Permanently added 'bitbucket.org,192.168.1.81' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I did follow the instructions on creating the ssh-privatekey secret, placing it in the secret store, and linking it to the builder service account. I also double-checked the key and verified through SSH agent forwarding that I can log into the OpenShift node and ssh git@bitbucket.org.
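For reference, on that vintage of Origin the secret setup looks roughly like this (a sketch; the secret name is assumed to match the sourceSecret in the BuildConfig above):

oc secrets new-sshauth github --ssh-privatekey=$HOME/.ssh/id_rsa
oc secrets add serviceaccount/builder secrets/github

Newer oc releases spell the second step as oc secrets link builder github.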
I'm not sure what I'm doing wrong, but even when using the HTTPS git URL and making the repo public, it still doesn't work; it complains about the peer certificate not being trusted:
Cloning "https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git" ...
error: build error: fatal: unable to access 'https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git/': Peer's certificate issuer has been marked as not trusted by the user.
At this point, I'm unsure where to go with this as OpenShift Origin doesn't seem to want to build anything from git as a source.
Any help or suggestions would be greatly appreciated!
OpenShift Version: 1.3.0
OpenShift Kubernetes Version: v1.3.0+52492b4
This is a flat network behind a router. DNS is on Active Directory with a wildcard entry for *.apps.reedfamily.local.
This is a test bed environment in a .local domain. However I'm using this build to potentially build this out as a POC for my company to host OpenShift.
I figured out the answer to my problem! So I'll share:
The /etc/resolv.conf was configured automatically during the build of my OpenShift nodes when I ran openshift-ansible. Unfortunately, it picked up a search domain that was causing the issue: combined with the wildcard DNS entry for *.apps.reedfamily.local, it made bitbucket.org resolve to a local address (note the 192.168.1.81 in the error output above).
# Generated by NetworkManager
search apps.reedfamily.local
nameserver 192.168.1.40
Once I removed "search apps.reedfamily.local", the problem was fixed immediately on the next build!
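You can confirm this failure mode before touching resolv.conf. A sketch of the check, run from the affected node:

dig +short bitbucket.org
dig +short bitbucket.org.apps.reedfamily.local

If both return the wildcard's address (the 192.168.1.81 seen in the build log), the search domain is hijacking the lookup.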

Self-signed certificate SSL error using self-hosted GitLab

I have a hosted Git repo on my company intranet. I can clone, pull, push, etc. successfully with command-line Git by disabling sslVerify. I know this is not ideal, but I have no control over our certificate or IT infrastructure, so it is what it is.
I paid for GitLab EE, set up the omnibus package, and I'm trying to clone the repo via HTTPS. However, I get an error that it cannot verify the SSL certificate. This is not entirely unexpected, but I cannot figure out how to bypass the SSL verification with GitLab EE. In the HTTP settings I set the self-signed option to true and pointed it to my .pem in /etc/gitlab/ssl, but I get the same error.
Can I just set sslverify to false like I did command line git?
Since GitLab fails to pull from a repo when the certificate check fails, you can set Git-specific settings in your /etc/gitlab/gitlab.rb. There is a key called omnibus_gitconfig['system']; your config should look something like:
omnibus_gitconfig['system'] = { "http" => ["sslVerify = false"]}
This is bad practice and you should use it with caution.
You could specify the domain to disable certificate checks for with:
omnibus_gitconfig['system'] = { "http \"https://example.com\"" => ["sslVerify = false"]}
You can define it in the omnibus configuration package as Fairy says. Or you can use a git command:
git config --global http.sslVerify false
This disables HTTPS certificate verification; with --global it applies to all of that user's repositories (drop --global to affect only the current one).
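If you want the git-level equivalent of the per-domain omnibus setting above, git also accepts URL-scoped keys. A sketch for a hypothetical host:

git config --global http.https://example.com.sslVerify false

This leaves verification on for every other host.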

SSL error while cloning from github enterprise on AWS EC2 instance

My ultimate goal is to be able to do pip installs from our GitHub Enterprise server using Elastic Beanstalk. The issue is that the EC2 instances will not trust our SSL certificate from Network Solutions.
Traceback from an Elastic Beanstalk Python EC2 instance:
>> git clone https://my.ghe.com/some/repo.git
Cloning into 'squire'...
fatal: unable to access 'https://my.ghe.com/some/repo.git/': Peer's Certificate issuer is not recognized.
I've tried a half-dozen possible fixes to no avail. Has anyone had any success cloning over HTTPS? I'd like to avoid cloning over SSH so I don't have to deal with SSH keys in EB.
Check out this answer about using git config to disable SSL verification and other SSL-overriding options:
How can I make git accept a self signed certificate?
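If you'd rather keep verification on, the usual fix is to hand git the issuing chain explicitly. A sketch, assuming the chain can be fetched from the GHE host itself:

# capture the chain the server presents
openssl s_client -showcerts -connect my.ghe.com:443 -servername my.ghe.com </dev/null 2>/dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > /tmp/ghe-chain.pem
# point git (and therefore pip's git+https installs) at it
git config --global http.sslCAInfo /tmp/ghe-chain.pem

On Amazon Linux you could instead append the issuing CA to the system bundle under /etc/pki/tls/certs/, which fixes it for every TLS client on the instance.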