Hyperledger Fabric mutual TLS authentication causes orderer error: "tls: bad certificate"

A section of the values.yaml file we use to provision our orderers and peers:
orderers:
  # cert/key pair generated by Let's Encrypt for a single orderer
  # DNS name (e.g. ord0.network.example.com)
  # ORDERER_GENERAL_TLS_CERTIFICATE & ORDERER_GENERAL_TLS_PRIVATEKEY
  # mounted on /var/hyperledger/tls/server/pair/tls.crt
  # mounted on /var/hyperledger/tls/server/pair/tls.key
  tls: <k8s secret holding both tls.crt and tls.key>
  # ORDERER_GENERAL_TLS_ROOTCAS
  # mounted on /var/hyperledger/tls/server/cert/cert.pem
  tlsRootCert: <k8s secret holding the Let's Encrypt X3 cross-signed certificate>
  # ORDERER_GENERAL_TLS_CLIENTROOTCAS
  # same as tlsRootCert
  # mounted on /var/hyperledger/tls/client/cert/cert.pem
  tlsClientRootCert: <k8s secret holding the Let's Encrypt X3 cross-signed certificate>
  # cert/key generated by fabric-ca-client enroll for the
  # NON-admin identity "ord0"
  # mounted on /var/hyperledger/msp/signcerts
  cert: ord0-idcert
  # mounted on /var/hyperledger/msp/keystore
  key: ord0-idkey
  # also generated by fabric-ca-client enroll for the
  # NON-admin identity "ord0"
  # mounted on /var/hyperledger/admin_msp/cacerts/cert.pem
  caCert: ord-ca-cert
Additional orderer environment variables related to mutual TLS authentication:
ORDERER_GENERAL_TLS_ENABLED=true
ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true
peers:
  # cert/key pair generated by Let's Encrypt for a single peer
  # e.g. peer0.org1.network.example.com
  # CORE_PEER_TLS_CERT_FILE & CORE_PEER_TLS_KEY_FILE
  # mounted on /var/hyperledger/tls/server/pair/tls.crt
  # mounted on /var/hyperledger/tls/server/pair/tls.key
  tls: <k8s secret holding both tls.crt and tls.key>
  # CORE_PEER_TLS_ROOTCERT_FILE
  # mounted on /var/hyperledger/tls/server/cert/cert.pem
  tlsRootCert: <k8s secret holding the Let's Encrypt X3 cross-signed certificate>
  # CORE_PEER_TLS_CLIENTROOTCAS_FILES
  # same as tlsRootCert
  # mounted on /var/hyperledger/tls/client/cert/cert.pem
  tlsClientRootCert: <k8s secret holding the Let's Encrypt X3 cross-signed certificate>
  # CORE_PEER_TLS_CLIENTCERT_FILE & CORE_PEER_TLS_CLIENTKEY_FILE
  # mounted on /var/hyperledger/tls/client/pair/tls.crt
  # mounted on /var/hyperledger/tls/client/pair/tls.key
  tlsClient: <k8s secret holding both tls.crt and tls.key>
  # cert/key generated by fabric-ca-client enroll for the
  # NON-admin identity "peer0"
  # mounted on /var/hyperledger/msp/signcerts
  cert: peer0-idcert
  # mounted on /var/hyperledger/msp/keystore
  key: peer0-idkey
  # also generated by fabric-ca-client enroll for the
  # NON-admin identity "peer0"
  caCert: peer-ca-cert
Additional environment variables related to TLS mutual authentication:
CORE_PEER_TLS_ENABLED=true
CORE_PEER_TLS_CLIENTAUTHREQUIRED=true
When issuing a command from within the peer0 pod that involves communication with one orderer (namely ord0), we get the bad certificate error:
Full peer channel join command:
CORE_PEER_MSPCONFIGPATH=/var/hyperledger/admin_msp \
peer channel join -o ord0.network.example.com:443 \
-b /var/hyperledger/mychannel.block \
--tls \
--cafile /var/hyperledger/tls/server/cert/cert.pem \
--certfile /var/hyperledger/tls/server/cert/cert.pem \
--keyfile /var/hyperledger/tls/client/pair/tls.key \
--clientauth
Log line from the orderer:
2019-07-03 14:04:09.717 UTC [core.comm] ServerHandshake -> ERRO 68c TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=10.0.3.97:43398
2019-07-03 14:04:09.717 UTC [grpc] handleRawConn -> DEBU 68d grpc: Server.Serve failed to complete security handshake from "10.0.3.97:43398": remote error: tls: bad certificate
2019-07-03 14:04:10.599 UTC [core.comm] ServerHandshake -> ERRO 68e TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=10.0.3.97:43404
2019-07-03 14:04:10.599 UTC [grpc] handleRawConn -> DEBU 68f grpc: Server.Serve failed to complete security handshake from "10.0.3.97:43404": remote error: tls: bad certificate
2019-07-03 14:04:12.274 UTC [core.comm] ServerHandshake -> ERRO 690 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=10.0.3.97:43420
Note: 10.0.3.97 is the POD IP of the ingress controller.
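Since both sides here are supposed to trust the same Let's Encrypt chain, one sanity check worth running is whether the client pair actually verifies against the configured client root CA. Below is a minimal, self-contained sketch of that check with openssl, using a throwaway demo CA and leaf; the in-cluster equivalent (using the mount paths listed above) is shown in the trailing comment.

```shell
# Throwaway demo CA and leaf certificate, purely to illustrate the check.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -subj "/CN=demo-root-ca" -days 1
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
    -subj "/CN=peer0.org1.network.example.com"
openssl x509 -req -in leaf.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out leaf.pem -days 1
# The actual check: does the leaf chain to the CA? Prints "leaf.pem: OK".
openssl verify -CAfile ca.pem leaf.pem
# In the cluster, the equivalent check would be something like:
#   openssl verify -CAfile /var/hyperledger/tls/client/cert/cert.pem \
#       /var/hyperledger/tls/client/pair/tls.crt
```

If the in-cluster check fails, the orderer's `bad certificate` rejection is expected, since ORDERER_GENERAL_TLS_CLIENTROOTCAS is what it validates client certificates against.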

Related

Traefik V2 (armv6) - Reverse Proxy with SSL termination - without Docker

I have a raspberry pi that I want to use for SSL termination and as a reverse proxy for other pis running services.
Why? I was able to get HTTPS using my Synology NAS but ACME recently got upgraded in Let's Encrypt and my NAS version is too old. I also want to use Traefik as a learning experience.
I have managed to successfully install Traefik V2 (outside of Docker) and it is running fine and redirecting to the appropriate back-end servers. However, SSL doesn't work, and I'm not sure what I might have configured incorrectly. I appreciate anyone's help.
I see the following error on the web interface, but I can't find any log files being created in the specified path.
After burning my eyes reading their docs and anything I could find online, I have the following 'traefik.yaml' file:
#################################
# Traefik V2 Static Configuration
#################################
# Global Configurations
global:
  # Check for Update
  checkNewVersion: true
# Configure the transport between Traefik and your servers
serversTransport:
  # Skip the check of server certificates
  insecureSkipVerify: true
  # How many connections per server
  maxIdleConnsPerHost: 42
  # Define timeouts
  forwardingTimeouts:
    dialTimeout: 42
    responseHeaderTimeout: 42
    idleConnTimeout: 42
# Configure the network entrypoints into Traefik V2: which port will receive packets, and whether TCP/UDP
entryPoints:
  # HTTP Entry Point
  web:
    # Listen on TCP port 80 (80/tcp)
    address: ":80"
    # Redirect HTTP to HTTPS
    http:
      redirections:
        entryPoint:
          # Where to redirect
          to: web-secure
          # Scheme to use
          scheme: https
          # Make it always happen
          permanent: true
    # Specify the timeouts for the transports
    transport:
      # Controls the behavior during the shutdown phase
      lifeCycle:
        requestAcceptGraceTimeout: 42
        graceTimeOut: 42
      # Timeouts for incoming requests to the Traefik V2 instance. No effect on UDP.
      respondingTimeouts:
        readTimeout: 42
        writeTimeout: 42
        idleTimeout: 42
    # Define how the Proxy Protocol should behave and what to trust.
    proxyProtocol:
      # Specify IPs for secure mode
      trustedIPs:
        - 10.0.0.1
        - 127.0.0.1
    forwardedHeaders:
      # Specify IPs for secure mode
      trustedIPs:
        - 10.0.0.1
        - 127.0.0.1
  # HTTPS Entry Point
  web-secure:
    # Listen on TCP port 443 (443/tcp)
    address: ":443"
    # Define TLS with Let's Encrypt for all
    http:
      tls:
        certResolver: letsencrypt
    # Specify the timeouts for the transports
    transport:
      # Controls the behavior during the shutdown phase
      lifeCycle:
        requestAcceptGraceTimeout: 42
        graceTimeOut: 42
      # Timeouts for incoming requests to the Traefik V2 instance. No effect on UDP.
      respondingTimeouts:
        readTimeout: 42
        writeTimeout: 42
        idleTimeout: 42
    # Define how the Proxy Protocol should behave and what to trust.
    proxyProtocol:
      # Specify IPs for secure mode
      trustedIPs:
        - 10.0.0.1
        - 127.0.0.1
    forwardedHeaders:
      # Specify IPs for secure mode
      trustedIPs:
        - 10.0.0.1
        - 127.0.0.1
# Configure the providers
providers:
  providersThrottleDuration: 42
  # If using a dynamic file
  file:
    filename: "/etc/traefik/traefik-dynamic.yaml"
    watch: true
    debugLogGeneratedTemplate: true
  rest:
    insecure: true
# Traefik's Dashboard, located at http://<ip>/dashboard/ (trailing / required)
api:
  # Enable the dashboard
  dashboard: true
# Location of log files
log:
  # Logging levels are: DEBUG, PANIC, FATAL, ERROR, WARN, INFO
  level: DEBUG
  filePath: "/etc/traefik/traefik.log"
# SSL Certificates
certificatesResolvers:
  # Use Let's Encrypt for SSL Certificates
  letsencrypt:
    # Enable ACME (Let's Encrypt automatic SSL)
    acme:
      # E-mail used for registration
      email: <my e-mail>
      # Leave commented for PROD servers, uncomment for non-prod
      #caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
      # File or key used for certificate storage.
      storage: acme.json
      # Optional
      #keyType: RSA4096
      # Use the HTTP-01 ACME challenge
      httpChallenge:
        entryPoint: web
And the following 'traefik-dynamic.yaml' file:
#################################
# Traefik V2 Dynamic Configuration
#################################
# Definition of how to handle HTTP requests
http:
  # Define the routers
  routers:
    # Map Traefik Dashboard requests to the Service
    Traefik:
      middlewares:
        - BasicAuth
      rule: "Host(`traefik.subdomain.dns1.us`)"
      service: api@internal
      tls:
        certResolver: letsencrypt
    # Map PLEX to the Server
    # No entryPoints defined, so it listens on all of them
    PLEX:
      rule: "Host(`plex.subdomain.dns1.us`)"
      service: PLEX
      tls:
        certResolver: letsencrypt
  # Define the middlewares
  middlewares:
    # Basic auth for the dashboard
    BasicAuth:
      basicAuth:
        # Specify user and password (generator: https://www.web2generators.com/apache-tools/htpasswd-generator)
        users:
          - "<user>:<password>"
  # Define the services
  services:
    # PLEX Service
    PLEX:
      loadBalancer:
        # Backend URLs
        servers:
          - url: "http://10.0.0.21:32400"
        # Enable sticky sessions
        sticky:
          cookie: {}
        # Pass the client Host header to the server
        passHostHeader: true
The issue was the /etc/traefik/acme.json file.
I removed it and restarted the Raspberry Pi. Traefik re-created the file and no errors showed up.
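For anyone reproducing the fix by hand rather than rebooting: Traefik refuses to use an ACME storage file it cannot secure, so the recreated acme.json must have 0600 permissions. A sketch (the file path and the `traefik` service name are assumptions to adapt to your setup):

```shell
# Recreate an empty ACME storage file with the 0600 permissions Traefik expects.
rm -f acme.json
install -m 600 /dev/null acme.json
stat -c %a acme.json
# Then restart Traefik (service name depends on your setup), e.g.:
#   sudo systemctl restart traefik
```

On restart, Traefik repopulates the file with freshly requested certificates.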

Kubevirt virtctl image-upload gives "remote error: tls: bad certificate error"

I am trying to upload a Windows 10 image to a PVC in order to create a Windows 10 VM using KubeVirt.
I used the virtctl command below:
$ virtctl image-upload --image-path=/Win10_20H2_v2_English_x64.iso --pvc-name=win10-vm --access-mode=ReadWriteMany --pvc-size=5G --uploadproxy-url=https://<cdi-uploadproxy IP>:443 --insecure
Result :
pvc is created
$ kubectl get pvc
NAME               STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
win10-vm           Bound    pv0002   10Gi       RWX                           145m
win10-vm-scratch   Bound    pv0003   10Gi       RWX                           145m
cdi-image-upload pod is created.
[root@master kubevirt]# kubectl get pods -A
NAMESPACE   NAME                               READY   STATUS    RESTARTS   AGE
cdi         cdi-apiserver-847d4bc7dc-l6fz7     1/1     Running   1          135m
cdi         cdi-deployment-66d7555b79-d57bm    1/1     Running   1          135m
cdi         cdi-operator-895bb5c74-hpk44       1/1     Running   1          135m
cdi         cdi-uploadproxy-6c8698cd8b-z67xc   1/1     Running   1          134m
default     cdi-upload-win10-vm                1/1     Running   0          53s
But the upload times out. When I checked the logs of the cdi-upload-win10-vm pod, I got the following errors:
I0413 10:58:38.695097 1 uploadserver.go:70] Upload destination: /data/disk.img
I0413 10:58:38.695263 1 uploadserver.go:72] Running server on 0.0.0.0:8443
2021/04/13 10:58:40 http: TLS handshake error from [::1]:57710: remote error: tls: bad certificate
2021/04/13 10:58:45 http: TLS handshake error from [::1]:57770: remote error: tls: bad certificate
2021/04/13 10:58:50 http: TLS handshake error from [::1]:57882: remote error: tls: bad certificate
2021/04/13 10:58:55 http: TLS handshake error from [::1]:57940: remote error: tls: bad certificate
2021/04/13 10:59:00 http: TLS handshake error from [::1]:58008: remote error: tls: bad certificate
2021/04/13 10:59:05 http: TLS handshake error from [::1]:58066: remote error: tls: bad certificate
2021/04/13 10:59:10 http: TLS handshake error from [::1]:58136: remote error: tls: bad certificate
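When chasing a handshake failure like this, it can help to look at the certificate the endpoint actually presents (subject and validity window). The snippet below demonstrates the inspection on a locally generated, self-signed demo certificate; the commented line shows the equivalent check against the live upload proxy, keeping the placeholder address from the question.

```shell
# Demo: generate a certificate, then inspect its subject and validity dates.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
    -subj "/CN=cdi-uploadproxy" -days 1
openssl x509 -in tls.crt -noout -subject -dates
# Against the live endpoint it would be something like:
#   openssl s_client -connect <cdi-uploadproxy IP>:443 </dev/null 2>/dev/null \
#       | openssl x509 -noout -subject -dates
```

An expired or rotated certificate on either side of the upload path would surface as exactly this kind of `bad certificate` handshake error.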

TLS Connection - Beats to Redis

I have Winlogbeat installed on a Windows box using Redis output.
The Redis server is configured for TLS on port 6380.
Both ends start their services successfully, but the connection does not succeed. I have tried different combinations of protocols and cipher suites, but no luck. What am I missing? The error messages:
Redis:
Error accepting a client connection: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
Windows:
ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to redis(tcp://10.1.1.4:6380): remote error: tls: handshake failure
Here is the redis-server config. The CA is self-signed, with a 2048-bit key and an X.509 certificate. The server certificate is also X.509. I think I may need to rebuild the CA; feedback on this is appreciated.
# TLS configs
tls-port 6380
tls-cert-file /etc/ssl/redis.crt
tls-key-file /etc/ssl/private/redis.key
tls-ca-cert-file /usr/local/share/ca-certificates/ca.crt
tls-auth-clients no
tls-prefer-server-ciphers no
tls-protocols "TLSv1.2"
tls-dh-params-file /etc/ssl/redis.dh
tls-ciphers DEFAULT
tls-ciphersuites ECDHE-ECDSA-CHACHA20-POLY1305
And the Beats config.
output.redis:
  hosts: ["10.1.1.4:6380"]
  password: "redispass"
  key: "winlogbeat"
  db: 0
  timeout: 5
  data_type: "list"
  ssl:
    enabled: true
    certificate_authorities: ["C:\\Program Data\\Winlogbeat\\ca.crt"]
    insecure: true
    supported_protocols: [TLSv1.2]
    cipher_suites: [ECDHE-ECDSA-CHACHA20-POLY1305]
    curve_types: [P-256]
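One observation worth checking against this config (a possibility, not a confirmed diagnosis): ECDHE-ECDSA suites can only be negotiated when the server's certificate contains an ECDSA key, so pinning that single suite against an RSA server certificate would produce exactly a "no shared cipher" error on the Redis side. You can inspect what the pinned suite requires with openssl:

```shell
# Show the key-exchange (Kx) and authentication (Au) requirements of the suite.
# "Au=ECDSA" means the server certificate must contain an ECDSA key.
openssl ciphers -v 'ECDHE-ECDSA-CHACHA20-POLY1305'
```
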
Finally worked it out, in case anyone is interested.
I needed to install and configure the CA with proper parameters, and then create a SAN server certificate for Redis using IP addresses and a hostname. Here are the articles I followed.
On the server that hosts Redis, create the root CA: https://blog.devolutions.net/2020/07/tutorial-how-to-generate-secure-self-signed-server-and-client-certificates-with-openssl (just the root CA from this article).
Create the Redis server certificate using IP address subject alternative names: https://www.golinuxcloud.com/openssl-generate-csr-create-san-certificate/
Install Redis from source with TLS support: https://godfrey-tutu.medium.com/redis-6-deployment-with-tls-authentication-on-centos-7-8b6e34d11cd0
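The SAN step from the second article can be condensed roughly as follows. This sketch is self-signed for brevity and uses example names (the article signs with the root CA via a CSR instead); it needs OpenSSL 1.1.1+ for `-addext`:

```shell
# Issue a certificate whose SANs cover both the hostname and the IP that
# clients will use to reach Redis (example values).
openssl req -x509 -newkey rsa:2048 -nodes -keyout redis.key -out redis.crt \
    -days 365 -subj "/CN=redis-server" \
    -addext "subjectAltName=DNS:redis-server,IP:10.1.1.4"
# Confirm the SANs made it into the certificate.
openssl x509 -in redis.crt -noout -ext subjectAltName
```

The SAN matters because Beats validates the certificate against the address in `hosts`, an IP here, so a cert with only a CN and no IP SAN fails verification.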
Beats and Redis will negotiate the best encryption. If you keep the existing Redis config and add the TLS piece on a different unused port, Redis will start up and listen on both.
Here is the new Beat config:
output.redis:
  hosts: ["10.1.1.4:6380"]
  password: "redispass"
  key: "winlogbeat"
  db: 0
  timeout: 5
  data_type: "list"
  ssl:
    enabled: true
    certificate_authorities: ["C:\\Program Data\\Winlogbeat\\ca.crt"]
    insecure: true
    supported_protocols: [TLSv1.1, TLSv1.2]
And Redis...
tls-port 6380
tls-cert-file /etc/pki/tls/certs/redis.crt
tls-key-file /etc/pki/tls/private/redis.key
tls-ca-cert-file /etc/pki/tls/certs/ca.crt
tls-auth-clients no
tls-protocols "TLSv1.1 TLSv1.2"
tls-prefer-server-ciphers yes

How to run remote commands as a user with a certificate from a worker node

I created a user on the master.
First I created a key and certificate for it: dan.key and dan.crt.
Then I registered the credentials with kubectl:
kubectl config set-credentials dan \
--client-certificate=/tmp/dan.crt \
--client-key=/tmp/dan.key
This is the ~/.kube/config:
users:
- name: dan
  user:
    as-user-extra: {}
    client-certificate: /tmp/dan.crt
    client-key: /tmp/dan.key
I want to be able to run commands from a remote worker as the user I created.
I know how to do it with service account token:
kubectl --server=https://192.168.0.13:6443 --insecure-skip-tls-verify=true --token="<service_account_token>" get pods
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
I followed this question:
kubectl unable to connect to server: x509: certificate signed by unknown authority
I tried like he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
Unable to connect to the server: x509: certificate signed by unknown authority
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
You were missing the critical piece of data telling kubectl how to trust the https: part of that request, namely --certificate-authority=/path/to/kubernetes/ca.pem
You didn't encounter that error while using --token=... because of the --insecure-skip-tls-verify=true which you should definitely, definitely not do.
I tried like he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
You have followed the wrong piece of advice from whatever article you were reading; the --accept-hosts flag only controls the remote hostnames from which kubectl proxy will accept connections, and has nothing to do with TLS.
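Putting the answer's point into kubeconfig form, a complete entry would look something like the sketch below. The cluster and context names are made up for illustration; the server address comes from the question, and the CA path is the same placeholder the answer uses:

```yaml
clusters:
- name: my-cluster
  cluster:
    server: https://192.168.0.13:6443
    # The missing piece: the CA that signed the apiserver's serving cert.
    certificate-authority: /path/to/kubernetes/ca.pem
users:
- name: dan
  user:
    client-certificate: /tmp/dan.crt
    client-key: /tmp/dan.key
contexts:
- name: dan@my-cluster
  context:
    cluster: my-cluster
    user: dan
current-context: dan@my-cluster
```

Equivalently, the one-shot command can pass --certificate-authority=/path/to/kubernetes/ca.pem alongside the --client-certificate and --client-key flags.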

Why does the kubelet need a password when communicating with the apiserver over TLS? (v1.3)

I deployed the apiserver using TLS on the master node and it worked fine. My problem appeared when deploying the kubelet and trying to communicate with the apiserver.
The kubelet configuration is as follows:
/opt/bin/kubelet \
--logtostderr=true \
--v=0 \
--api_servers=https://kube-master:6443 \
--address=0.0.0.0 \
--port=10250 \
--allow-privileged=false \
--tls-cert-file="/var/run/kubernetes/kubelet_client.crt" \
--tls-private-key-file="/var/run/kubernetes/kubelet_client.key" \
--kubeconfig="/var/lib/kubelet/kubeconfig"
/var/lib/kubelet/kubeconfig is the following:
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /var/run/kubernetes/kubelet_client.crt
    client-key: /var/run/kubernetes/kubelet_client.key
clusters:
- name: kube-cluster
  cluster:
    certificate-authority: /var/run/kubernetes/ca.crt
contexts:
- context:
    cluster: kube-cluster
    user: kubelet
  name: ctx-kube-system
current-context: ctx-kube-system
I want to achieve two-way (both client and server) CA authentication, but instead of a successful reply the apiserver asks me for a username and password, which I have never configured. Some command output:
> kubectl version
> Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.2", GitCommit:"9bafa3400a77c14ee50782bb05f9efc5c91b3185", GitTreeState:"clean", BuildDate:"2016-07-17T18:30:39Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
> Please enter Username: kubelet
> Please enter Password: kubelet
> error: You must be logged in to the server (the server has asked for the client to provide credentials)
I tried all of this on both master and minion. Could anyone please resolve this conundrum? Thanks in advance.
You have to enable client certificate authentication via the --client-ca-file flag on the apiserver.
From http://kubernetes.io/docs/admin/authentication/:
Client certificate authentication is enabled by passing the --client-ca-file=SOMEFILE option to apiserver. The referenced file must contain one or more certificate authorities to use to validate client certificates presented to the apiserver. If a client certificate is presented and verified, the common name of the subject is used as the user name for the request.
From http://kubernetes.io/docs/admin/kube-apiserver/:
--client-ca-file="": If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
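Concretely, on the master this means adding one flag to the apiserver start-up. The sketch below assumes the /opt/bin and /var/run/kubernetes paths used elsewhere in the question, and hypothetical apiserver.crt/apiserver.key names for the serving pair:

```shell
# The CA file passed here must be the one that signed kubelet_client.crt,
# otherwise the apiserver falls back to asking for basic-auth credentials.
/opt/bin/kube-apiserver \
    --tls-cert-file=/var/run/kubernetes/apiserver.crt \
    --tls-private-key-file=/var/run/kubernetes/apiserver.key \
    --client-ca-file=/var/run/kubernetes/ca.crt
```

With the flag in place, the kubelet's certificate is accepted and its CommonName becomes the request's user name, so no password prompt appears.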