GitHub self-hosted Actions runner: Git LFS fails with "x509: certificate signed by unknown authority"

I am trying to create a GitHub Actions workflow that runs on a Windows Server self-hosted runner, and I'm stuck on my checkout failing at the LFS download step.
I'm using:
- uses: actions/checkout@v3
  with:
    lfs: true
The checkout for the normal code works fine, but when it gets to the LFS download step I get a lot of messages complaining about x509: certificate signed by unknown authority.
LFS: Get "https://github-cloud.githubusercontent.com/alambic/details_changed_to_protect_the_innocent": x509: certificate signed by unknown authority
The self-hosted runner is on a domain behind a firewall that inspects HTTPS traffic and inserts its own certificate into the chain, so I'm guessing the unknown authority is that certificate, but I don't know where that certificate needs to be trusted for things to work.
The certificate is trusted by the OS and is installed in the certificate store through a group policy, but it seems that Git LFS verifies the certificate chain separately from the OS store and complains anyway because the certificate is unexpected.
A common solution I've seen floating around for things like this is to just turn off SSL verification, but that feels like a temporary hack rather than a real solution. I would like this to work with all security in place.
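From what I've read, Git and Git LFS can be pointed at an extra CA bundle through gitconfig, so I'd expect something along these lines to be the "proper" fix (the path is a placeholder for wherever the firewall's root certificate is exported; I haven't confirmed where the runner picks this up):
git config --global http.sslCAInfo "C:\certs\firewall-root-ca.pem"
# Git for Windows can alternatively be told to use the Windows certificate store,
# though I'm not sure Git LFS honors this setting:
git config --global http.sslBackend schannel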
As an additional note, this is running on a server that is also running TeamCity, and the TeamCity GitHub config is able to clone repos with LFS from that same server, so the problem is confined to the environment the GitHub Actions runner sets up.

Since the firewall only inserts its certificate into HTTPS traffic, I was able to get things working using an SSH key. I added the private key as a repository secret and the public key to the repo's deploy keys, and now everything is working as expected.
- uses: actions/checkout@v3
  with:
    lfs: true
    ssh-key: ${{ secrets.repo_ssh }}
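For anyone reproducing this: the key pair can be generated with ssh-keygen; the file and secret names below are just examples.
# generate a dedicated deploy key with no passphrase
ssh-keygen -t ed25519 -f repo_deploy_key -N "" -C "self-hosted runner deploy key"
# repo_deploy_key.pub goes into the repo's Settings > Deploy keys
# repo_deploy_key (the private half) goes into a repository secret, e.g. repo_ssh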

Related

How to stop repo tool verifying SSL certificates?

I'm building an elderly Yocto project. repo (https://gerrit.googlesource.com/git-repo) is used to pull all the sources from their respective repositories & keep on top of changes. Recently, repo has started checking CA certs for the https locations of these sources & refusing to download them, as the host machine doesn't have them in its CA store (it's an elderly host, to match the elderly Yocto project). Under the hood, repo uses curl to download the sources, which handily provides a -k or --insecure option to bypass SSL certificate validation.
Is there a way to instruct the repo init command to do the same? Or is my only option to keep visiting the sites, downloading the certs, and adding them to the host cert store on a case-by-case basis? (This, whilst protecting me from MITM attacks, is impractical in this case.)
I've tried export PYTHONHTTPSVERIFY=0 in my script that calls repo, but that doesn't help.
The errors from bitbake look like this:
17:18:25 fatal: unable to access 'https://git.openembedded.org/meta-openembedded/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
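Since the actual fetches (and the error above) come from git rather than curl, git's own knobs are presumably what apply here; these are untested assumptions on my part:
# per-invocation equivalent of curl's -k (insecure, but at least scoped):
GIT_SSL_NO_VERIFY=true repo sync
# or, keeping verification on, point git at a bundle that includes the missing CAs
# (path is an example):
git config --global http.sslCAInfo /path/to/extra-ca-bundle.pem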

Allow kubernetes storageclass resturl HTTPS with self-signed certificate

I'm currently trying to setup GlusterFS integration for a Kubernetes cluster. Volume provisioning is done with Heketi.
GlusterFS-cluster has a pool of 3 VMs
The 1st node has the Heketi server and client configured. The Heketi API is secured with a self-signed OpenSSL certificate and can be accessed.
e.g. curl https://heketinodeip:8080/hello -k
returns the expected response.
StorageClass definition sets the "resturl" to Heketi API https://heketinodeip:8080
The StorageClass is created successfully, but when I try to create a PVC, it fails:
"x509: certificate signed by unknown authority"
This is expected, as usually one has to either allow this insecure HTTPS connection or explicitly import the issuer CA (e.g. a file simply containing the PEM string).
But: how is this done for Kubernetes? How do I allow this insecure connection to Heketi from Kubernetes (allowing insecure self-signed-cert HTTPS), or where/how do I import a CA?
It is not a DNS/IP problem; that was resolved with correct subjectAltName settings.
(It seems that everybody is using Heketi, and it still seems to be a standard use case for GlusterFS integration, but always without SSL when connected to Kubernetes.)
Thank you!
To skip verification of the server certificate, the caller just needs to specify InsecureSkipVerify: true. See this GitHub issue for more information (https://github.com/heketi/heketi/issues/1467).
This page specifies a way to use a self-signed certificate. It is not explained thoroughly, but it can still be useful (https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/tls-security.md#self-signed-keys).
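If you'd rather keep verification enabled, another option (untested here; file names are placeholders) is to add the Heketi certificate to the system trust store on the nodes where the provisioning component runs, since Go programs generally fall back to the system CA pool:
# on a Debian/Ubuntu node:
sudo cp heketi-ca.pem /usr/local/share/ca-certificates/heketi-ca.crt
sudo update-ca-certificates
# then restart whichever component talks to Heketi (e.g. kube-controller-manager)
# so it reloads the CA pool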

OpenSSL in GitLab, what verification for self-signed certificate?

On Debian, using GitLab, I ran into issues with my self-signed certificate.
Reading through the code after a lot of searching on the Internet (I guess it's the last resort; FOSS is helpful), I found the following lines in gitlab-shell/lib/gitlab_net.rb, which left me... perplexed.
if config.http_settings['self_signed_cert']
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
Most Stack Overflow responses about the various issues I've had until now have led me to believe that VERIFY_NONE, as you'd expect, doesn't verify anything. VERIFY_PEER seems, based on my reading, to be the correct setting for self-signed certificates.
As I read it, this feels like taking steps to secure my connection using a certificate and then just deciding not to use it. Is it a bug, or am I misreading the source?
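(For what it's worth, you can check outside of GitLab whether a self-signed certificate verifies against a given CA file; the hostname and path below are placeholders:)
openssl s_client -connect gitlab.example.com:443 -CAfile /path/to/self-signed-cert.pem
# look for "Verify return code: 0 (ok)" in the output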
gitlab-shell (on the GitLab server) has to communicate with the GitLab instance through an HTTPS or SSH API URL.
If it is a self-signed certificate, gitlab-shell doesn't want any error/warning when trying to access those GitLab URLs, hence the SSL::VERIFY_NONE.
But, that same certificate is also used by clients (outside of the GitLab server), using those same GitLab HTTPS URLs from their browser.
For them, the self-signed certificate is useful, provided they install it in their browser keystore.
For those transactions (clients to GitLab), the certificate will be "verified".
The OP Kheldar points out in Mislav's post:
OpenSSL expects to find each certificate in a file named by the certificate subject’s hashed name, plus a number extension that starts with 0.
That means you can’t just drop My_Awesome_CA_Cert.pem in the directory and expect it to be picked up automatically.
However, OpenSSL ships with a utility called c_rehash which you can invoke on a directory to have all certificates indexed with appropriately named symlinks.
(See for instance OpenSSL Verify location)
cd /some/where/certs
c_rehash .
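If c_rehash isn't available, the same symlink can be created by hand for a single certificate (the file name here is the example from above):
cd /some/where/certs
ln -s My_Awesome_CA_Cert.pem "$(openssl x509 -hash -noout -in My_Awesome_CA_Cert.pem).0"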

The command heroku ssl says my domains have no certificate installed

I just want to say that this is not normally something I do, but I have been tasked with it recently...
I have followed the heroku documentation for setting up SSL closely, but I am still encountering a problem.
I have added my cert to heroku using the following command:
heroku certs:add path_to_crt path_to_key
This part seems to work. I receive a message saying:
Adding SSL Endpoint to my_app ... done
I have also set up a CNAME for my hosting service to point to the endpoint associated with the cert command above. However, when I browse to the site I still receive an SSL error. It says my certificate isn't trusted and points to the *.heroku.com certificate, not the one I have just uploaded.
I have noticed that when I execute the following command:
heroku ssl
I receive the following:
my_domain_name has no certificate
My assumption is that there should be a certificate associated with this domain at this point.
Any ideas?
Edit: It appears that I did not wait long enough for the certificate stuff to trickle through the internets... however, the behavior of the "heroku ssl" command still puzzles me.
The Heroku ssl command is for legacy certificates:
$ heroku ssl -h
Usage: heroku ssl
list legacy certificates for an app
The command you need is heroku certs, which will output the relevant certificate info for that app.
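For example (the app name is a placeholder, and exact output varies by CLI version):
$ heroku certs --app my_app
$ heroku certs:info --app my_app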

Issuer details are not valid. Issuer details should be registered in advance

I am trying to run a test of the SAML2 SSO using WSO2 Identity Server 4.0.0 M7 but am not successful.
I tried to use the 3.2.3 binary but ran into the bug about long hostnames and the identity.xml file (http://stackoverflow.com/questions/9600392/unable-to-configure-wso2-identity-server-for-openid).
These are the examples I'm using:
http://sureshatt.blogspot.com/2012/08/saml20-sso-with-wso2-identity-server.html
http://wso2.org/library/articles/2010/07/saml2-web-browser-based-sso-wso2-identity-server
I've stood up a new Tomcat7 server and configured it for HTTPS, which works cleanly in the browser. The certs are signed by our trusted enterprise CA and both the private key and chain certs are installed.
Same for the WSO2-IS host which has a new wso2carbon.jks with the private key signed by the same CA. I've exported the host cert from wso2carbon.jks and imported same into the client-truststore.jks. The trusted CA-signed certs are also in client-truststore.jks (at this point just to be sure). They are also in wso2carbon.jks (used to trust the CA reply).
I've changed the HostName and MgtHostName in carbon.xml to match the CN in the private key; the Carbon console comes up cleanly with no SSL issues and I can log in using the 'admin' user with no problem. From there I've updated the SSO configuration using the above example links as guides. That works with no errors.
When I go to each site (e.g., saml2.demo, avis.com, etc.) they redirect perfectly to IS to authenticate. However, when I log in, I get this error in the log: "Issuer details are not valid. Issuer details should be registered in advance". And then I'm stuck.
What have I missed?
Have you done the 5th step of topic 2, "Configuring the WSO2 Identity Server"? Please check that the value you've registered as the Issuer is the same as the one that comes in the SAML authentication request message.
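If you need to see exactly which Issuer value the service provider sends, you can decode the SAMLRequest query parameter from the redirect (in the HTTP-Redirect binding it is DEFLATE-compressed and then base64-encoded); this sketch assumes the parameter has already been URL-decoded into $SAML_REQUEST:
# inflate the raw-deflate payload and print the XML; look for the <saml:Issuer> element
echo "$SAML_REQUEST" | base64 -d | python3 -c 'import sys,zlib; sys.stdout.write(zlib.decompress(sys.stdin.buffer.read(), -15).decode())'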