Allow kubernetes storageclass resturl HTTPS with self-signed certificate - ssl

I'm currently trying to set up GlusterFS integration for a Kubernetes cluster. Volume provisioning is done with Heketi.
The GlusterFS cluster has a pool of 3 VMs.
The 1st node has the Heketi server and client configured. The Heketi API is secured with a self-signed OpenSSL certificate and can be accessed,
e.g. curl https://heketinodeip:8080/hello -k
returns the expected response.
The StorageClass definition sets the "resturl" to the Heketi API, https://heketinodeip:8080.
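For reference, the StorageClass looks roughly like this (the restuser/secret parameters are placeholders here, not my exact values):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-heketi                 # placeholder name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "https://heketinodeip:8080"   # Heketi API over HTTPS with the self-signed certificate
  restuser: "admin"                      # placeholder
  secretNamespace: "default"             # placeholder
  secretName: "heketi-secret"            # placeholder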
The StorageClass was created successfully, but when I try to create a PVC, it fails with:
"x509: certificate signed by unknown authority"
This is expected, as usually one either has to allow the insecure HTTPS connection or explicitly import the issuer CA (e.g. a file simply containing the PEM string).
But: how is this done for Kubernetes? How do I allow this insecure connection to Heketi from Kubernetes, i.e. allow HTTPS with a self-signed certificate, or where/how do I import a CA?
It is not a DNS/IP problem; that was resolved with correct subjectAltName settings.
(It seems that everybody uses Heketi, and it still seems to be a standard use case for GlusterFS integration, but always without SSL when connecting to Kubernetes.)
Thank you!

To skip verification of the server certificate, the caller just needs to specify InsecureSkipVerify: true. Refer to this GitHub issue for more information (https://github.com/heketi/heketi/issues/1467).
This page describes a way to use a self-signed certificate. It is not explained thoroughly, but it can still be useful (https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/tls-security.md#self-signed-keys).

Related

HAProxy ingress controller setup using mTLS with ConfigMap, with just the ingress load balancer because it's SSL offloaded. No need for backend check

I was able to achieve SSL offloading with HAProxy. Great product, and I appreciate that capability!
With that said, I need to do mutual TLS but am a little confused about how that will work with the ingress controller ConfigMap.
Going through this reference I've created a client cert, an intermediate cert and a root cert.
To note, I am currently terminating the SSL cert (which is from Let's Encrypt) on the load balancer.
However, the client cert and org CA are different from the Let's Encrypt TLS/SSL cert that I have assigned as the SSL now; does that matter?
So, the first question I have is: does the ssl-certificate have to be set to the CA that will sign the client and server certs, or can I just use the new ones I created following the instructions?
Setting up the ConfigMap.
This is the part I'm confused about.
You can set up server-ca and server-crt, but I don't think that applies here because after the SSL offloading there is nothing meant to be checked. However, I do want mTLS via the SSL termination.
So there is a configuration option, client-ca:
Sets the client certificate authority enabling HAProxy to check clients certificate (TLS authentication), thus enabling client mTLS.
NB, ssl-offloading should be enabled for TLS authentication to work.
The client in this case is the actual client I want, i.e. the device/frontend, not the load balancer acting as a client to the backend server.
When I look at how this is setup:
frontend mysite
bind 192.168.56.20:80
bind 192.168.56.20:443 ssl crt /etc/haproxy/certs/ssl.crt verify required ca-file /etc/haproxy/certs/intermediate-ca.crt ca-verify-file /etc/haproxy/certs/root-ca.crt
http-request redirect scheme https unless { ssl_fc }
default_backend apiservers
Is it possible to do the same with the controller ConfigMap as what is listed below? There's a lot more going on there than what I'm seeing as flags/configuration options in this approach to applying client mTLS. Is there a way to achieve this in Kubernetes without the ConfigMap? (I've sketched my guess after the argument explanations below.)
The ssl parameter enables SSL termination for this listener. The crt parameter identifies the location of the PEM-formatted SSL certificate. This certificate should contain both the public certificate and private key.
You can restrict who can access your application by giving trusted clients a certificate that they must present when connecting. HAProxy will check for this if you add a verify required parameter to the bind line, as shown:
the ssl argument enables HTTPS
the crt argument specifies the server SSL certificate, which you will typically obtain from a certificate provider like Let’s Encrypt
the verify required argument requires clients to send a client certificate
the ca-file argument specifies the intermediate certificate with which we will verify that the client’s certificate has been signed with our organization’s CA
the ca-verify-file argument (introduced in HAProxy 2.2) includes the root CA certificate, allowing HAProxy to send a shorter list of CAs to the client in the SERVER HELLO message that will be used for verification, but keeping upper level CAs, such as the root, out of that list. HAProxy requires the root CA to be set with this argument or else included in the intermediate-ca.crt file (compatibility with older versions of HAProxy).
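For illustration, this is roughly how I imagine the equivalent might look in the controller's ConfigMap, assuming client-ca takes a namespace/secret reference to a secret holding the CA bundle (all names, and the ssl-certificate key, are hypothetical here, and I'm not sure this fully maps to the bind line above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress      # hypothetical: the controller's config ConfigMap
  namespace: haproxy-controller         # hypothetical namespace
data:
  ssl-certificate: "haproxy-controller/default-tls"   # assumption: the Let's Encrypt cert secret used for SSL offloading
  client-ca: "haproxy-controller/client-ca"           # assumption: namespace/secret holding the CA that signs client certs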
Also, my reasoning for not wanting to use Let's Encrypt, but rather a private CA, is that I can't renew device certificates every 60-90 days. That would not be efficient. In this case, and please let me know otherwise, I think it is better to use either a real key/cert provider or, for development testing, the OpenSSL certs like in the HAProxy instructions.
It's odd, but you really have to think about what a "client" is with these abstractions. I would never use this for a normal web page login, but rather for server-to-server communication, where one server is a client to the other server. Or, in my case, the device is a client to the load balancer.

Kubernetes: mount certificate to pod

I'd like to deploy an LDAP server on my Kubernetes cluster. The server itself is up and running, but I'd like to enable SSL encryption for it as well.
I already have cert-manager up and running, and I also use a multitude of SSL certificates with my ingresses for my HTTP traffic. It would be really nice if I could just use a CertificateRequest with my LDAP server as well, managed and updated by cert-manager.
My problem is that I have no idea how to mount a Certificate to my Kubernetes pod. I know that cert-manager creates a secret and puts the certificate data in it. The problem with that is that I have no idea of the validity of that certificate this way, and can't remount/reapply the new certificate.
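What I've pieced together so far is something like the following (names and paths are just examples; the secret name would be whatever the Certificate's secretName is set to), but I don't know whether this is the right approach or how renewals would be picked up:

apiVersion: v1
kind: Pod
metadata:
  name: ldap                         # example name
spec:
  containers:
  - name: ldap
    image: my-ldap-image             # placeholder for the actual LDAP server image
    volumeMounts:
    - name: ldap-tls
      mountPath: /etc/ldap/tls       # example path; the server would read tls.crt, tls.key and ca.crt from here
      readOnly: true
  volumes:
  - name: ldap-tls
    secret:
      secretName: ldap-tls           # placeholder: the secret cert-manager writes for the Certificate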
Has anybody done anything like this? Is there a non-hacky way to incorporate ingresses to terminate SSL encryption?

Empty "ca.crt" file from cert-manager

I use cert-manager to generate TLS certificates for my application on Kubernetes with Let's Encrypt.
It is running and I can see "ca.crt", "tls.crt" and "tls.key" inside the container of my application (in /etc/letsencrypt/).
But "ca.crt" is empty, and the application complains about it (Error: Unable to load CA certificates. Check cafile "/etc/letsencrypt/ca.crt"). The two other files look like normal certificates.
What does that mean?
With cert-manager you have to use the nginx ingress controller, which will work as the exposure point.
The ingress-nginx controller will create one load balancer, and you can set up your application's TLS certificate there.
There is nothing regarding the certificate inside the cert-manager pod itself.
So set up nginx ingress with cert-manager; that will help manage the TLS certificate, and the certificate will be stored in a Kubernetes secret.
Please follow this guide for more details:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
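For reference, an Ingress wired up to cert-manager generally looks something like this (issuer name, host and secret name are placeholders; the guide above covers the full setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                        # placeholder
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # name of your ClusterIssuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com                                 # placeholder host
    secretName: my-app-tls                            # cert-manager stores the issued certificate in this secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app                              # placeholder service
            port:
              number: 80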
I noticed this:
$ kubectl describe certificate iot-mysmartliving -n mqtt
...
Status:
Conditions:
...
Message: Certificate issuance in progress. Temporary certificate issued.
and a related line in the docs:
https://docs.cert-manager.io/en/latest/tasks/issuing-certificates/index.html?highlight=gce#temporary-certificates-whilst-issuing
They explain that the two existing certificates are generated for some compatibility, but they are not valid until the issuer has done its work.
So that suggests that the issuer is not properly set up.
Edit: yes, that was it. The DNS challenge was failing; the debug command that helped was
kubectl describe challenge --all-namespaces=true
More generally,
kubectl describe clusterissuer,certificate,order,challenge --all-namespaces=true
According to the documentation, cafile is for something else (trusted root certificates), and it would probably be more correct to use capath /etc/ssl/certs on most systems.
You can follow this guide if you are running the Windows operating system:
tls.
The article is about how to enable Mosquitto and its clients to use the TLS protocol.
Establishing a secure TLS connection to the Mosquitto broker requires key and certificate files. Creating all these files with the correct settings is not the easiest thing, but is rewarded with a secure way to communicate with the MQTT broker.
If you want to use TLS certificates you've generated using the Let's Encrypt service, you need to be aware that current versions of Mosquitto never update listener settings while running, so when you regenerate the server certificates you will need to completely restart the broker.
If you use DigitalOcean Kubernetes, try following this instruction: ca-ninx. You can use cert-manager and the ingress-nginx controller; they will work like certbot.
Another solution is to create the certificate locally on your machine, upload it to a Kubernetes secret, and use that secret on the ingress.
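As a rough sketch of that manual approach (file names, secret name and namespace are just examples): create the secret from the locally generated files with
kubectl create secret tls my-manual-tls --cert=tls.crt --key=tls.key -n default
and then set secretName in the ingress tls section to my-manual-tls, like in the ingress example shown earlier.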

Confusion about HTTPS --> How is the SSL handshake happening

I've always been an end consumer of HTTPS and have never really understood it that well but am looking to change that.
I am calling a RESTful web service over HTTPS. For example...
curl -X GET \
https://myCompanydns/rest/connect/v1.4/myEndpoint
With all my requests I send a basic authentication header, i.e. a username and password.
When I make these calls via my application I was expecting to have to add a certificate to something like a JKS (which I've had to do in the past), but on this occasion I've found that I can call the HTTPS web service without that.
For HTTPS to work I believe there is an SSL handshake? How is that happening successfully in this scenario without a JKS?
Again, sorry for this beginner type question.
When doing an https://... request the client needs to verify that the server's certificate is the expected one - and not some man in the middle. This is done (among other things) by making sure that the server's certificate was issued by a trusted certificate authority (CA). Which CAs are trusted is set up in the local trust store (i.e. local to the client). In the above call, where no explicit trust store is given, curl uses its default trust store. In the case where you've explicitly given a JKS, you've provided the application with a specific trust store it should use.
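For example, if the server used a certificate from a private CA you could point curl at that CA explicitly instead of the default trust store (the file name here is just an example):
curl --cacert my-company-ca.pem https://myCompanydns/rest/connect/v1.4/myEndpoint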
For more on how the server certificate gets validated, see SSL Certificate framework 101: How does the browser actually verify the validity of a given server certificate?.

Issuer details are not valid. Issuer details should be registered in advance

I am trying to run a test of the SAML2 SSO using WSO2 Identity Server 4.0.0 M7 but am not successful.
I tried to use the 3.2.3 binary but ran into the bug about long hostnames and the identity.xml file (http://stackoverflow.com/questions/9600392/unable-to-configure-wso2-identity-server-for-openid).
These are the examples I'm using:
http://sureshatt.blogspot.com/2012/08/saml20-sso-with-wso2-identity-server.html
http://wso2.org/library/articles/2010/07/saml2-web-browser-based-sso-wso2-identity-server
I've stood up a new Tomcat7 server and configured it for HTTPS, which works cleanly in the browser. The certs are signed by our trusted enterprise CA and both the private key and chain certs are installed.
Same for the WSO2-IS host which has a new wso2carbon.jks with the private key signed by the same CA. I've exported the host cert from wso2carbon.jks and imported same into the client-truststore.jks. The trusted CA-signed certs are also in client-truststore.jks (at this point just to be sure). They are also in wso2carbon.jks (used to trust the CA reply).
I've changed the HostName and MgtHostName in carbon.xml to match the CN in the private key; the Carbon console comes up cleanly with no SSL issues and I can log in using the 'admin' user with no problem. From there I've updated the SSO configuration using the above example links as guides. That works with no errors.
When I go to each site (e.g., saml2.demo, avis.com, etc.) they redirect perfectly to IS to authenticate. However when I log in I get the error in the log "Issuer details are not valid. Issuer details should be registered in advance". And then I'm stuck.
What have I missed?
Have you done the 5th step of topic 2, Configuring the WSO2 Identity Server? Please check that the value you've registered as the Issuer is the same as the one that comes in the SAML Authentication Request message.