Hyperledger Fabric CA: x509: certificate is valid for rca-ord, not localhost

We have started an instance of fabric-ca-server with the following settings in docker-compose.yml:
version: '2'
networks:
  test:
services:
  myservice:
    container_name: my-container
    image: hyperledger/fabric-ca
    command: /bin/bash -c "fabric-ca-server start -b admin:adminpw"
    environment:
      - FABRIC_CA_SERVER_HOME=/etc/hyperledger/fabric-ca
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=rca-ord
      - FABRIC_CA_SERVER_CSR_HOSTS=rca-ord
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - ./scripts:/scripts
      - ./data:/data
    networks:
      - test
    ports:
      - 7054:7054
but when we try to enroll a user against this server using the command below:
root@fd85cc416f52:/# fabric-ca-client enroll -u https://user:userpw@localhost:7054 --tls.certfiles $FABRIC_CA_SERVER_HOME/tls-cert.pem
we get the error below:
2018/12/08 22:18:03 [INFO] TLS Enabled
2018/12/08 22:18:03 [INFO] generating key: &{A:ecdsa S:256}
2018/12/08 22:18:03 [INFO] encoded CSR
Error: POST failure of request: POST https://localhost:7054/enroll
{"hosts":["fd85cc416f52"],"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBQDCB6AIBADBcMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGggQ2Fyb2xp\nbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMQ8wDQYDVQQLEwZGYWJyaWMxDTALBgNV\nBAMTBHVzZXIwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATREdPvOeaWG9TzaEyk\nhFXRnJFJouDXShr0D1745bCt/0n3qjpqviZiApd1t62VrpMX0j8DBa6tkF7C+rEr\nRvwnoCowKAYJKoZIhvcNAQkOMRswGTAXBgNVHREEEDAOggxmZDg1Y2M0MTZmNTIw\nCgYIKoZIzj0EAwIDRwAwRAIgASXupobxJia/FFlLiwYzYpacvSA6RiIc/LR/kvdB\nT8ICIA1nJ2RfHrwMhOWocxMAIuLUsBvKS3S5DIwCHp0/gBpn\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","CAName":""}: Post https://localhost:7054/enroll: x509: certificate is valid for rca-ord, not localhost
On the server side we can see the following message printed when the request is sent:
my-container | 2018/12/08 22:18:03 http: TLS handshake error from 127.0.0.1:56518: remote error: tls: bad certificate
We have also tried:
root@fd85cc416f52:/# ls $FABRIC_CA_SERVER_HOME
IssuerPublicKey  IssuerRevocationPublicKey  ca-cert.pem  fabric-ca-server-config.yaml  fabric-ca-server.db  msp  tls-cert.pem
root@fd85cc416f52:/# fabric-ca-client enroll -u https://user:userpw@localhost:7054 --tls.certfiles $FABRIC_CA_SERVER_HOME/ca-cert.pem
with the same result.
We are wondering if someone can tell us what is wrong here and how we can fix it. Thanks!

You have generated a TLS certificate on the server using FABRIC_CA_SERVER_CSR_HOSTS=rca-ord, but then you are sending your request to localhost in the URL you specify in the enroll command.
To get this to work, you should change your environment variable to also include 'localhost'. For example: FABRIC_CA_SERVER_CSR_HOSTS=rca-ord,localhost.
Delete the old TLS certificate and generate a new one, and it should work.
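A minimal sketch of the fix, keeping everything else in the compose file as in the question:

environment:
  - FABRIC_CA_SERVER_HOME=/etc/hyperledger/fabric-ca
  - FABRIC_CA_SERVER_TLS_ENABLED=true
  - FABRIC_CA_SERVER_CSR_CN=rca-ord
  - FABRIC_CA_SERVER_CSR_HOSTS=rca-ord,localhost
  - FABRIC_CA_SERVER_DEBUG=true

Then remove the stale certificate and restart, so the server regenerates tls-cert.pem with both hostnames in its SANs (the path follows the FABRIC_CA_SERVER_HOME value above):

# Delete the old TLS certificate inside the running container:
docker exec my-container rm /etc/hyperledger/fabric-ca/tls-cert.pem
# Restart so fabric-ca-server issues a fresh TLS certificate on startup:
docker restart my-container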

Related

Kubevirt virtctl image-upload gives "remote error: tls: bad certificate error"

I am trying to upload a Windows 10 image to a PVC in order to create a Windows 10 VM using KubeVirt.
I used the virtctl command below:
$ virtctl image-upload --image-path=/Win10_20H2_v2_English_x64.iso --pvc-name=win10-vm --access-mode=ReadWriteMany --pvc-size=5G --uploadproxy-url=https://<cdi-uploadproxy IP>:443 --insecure
Result:
The PVC is created:
$ kubectl get pvc
NAME               STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
win10-vm           Bound    pv0002   10Gi       RWX                           145m
win10-vm-scratch   Bound    pv0003   10Gi       RWX                           145m
The CDI upload pod is created:
[root@master kubevirt]# kubectl get pods -A
NAMESPACE   NAME                               READY   STATUS    RESTARTS   AGE
cdi         cdi-apiserver-847d4bc7dc-l6fz7     1/1     Running   1          135m
cdi         cdi-deployment-66d7555b79-d57bm    1/1     Running   1          135m
cdi         cdi-operator-895bb5c74-hpk44       1/1     Running   1          135m
cdi         cdi-uploadproxy-6c8698cd8b-z67xc   1/1     Running   1          134m
default     cdi-upload-win10-vm                1/1     Running   0          53s
But the upload times out. When I checked the logs of the cdi-upload-win10-vm pod, I got the following errors:
I0413 10:58:38.695097 1 uploadserver.go:70] Upload destination: /data/disk.img
I0413 10:58:38.695263 1 uploadserver.go:72] Running server on 0.0.0.0:8443
2021/04/13 10:58:40 http: TLS handshake error from [::1]:57710: remote error: tls: bad certificate
2021/04/13 10:58:45 http: TLS handshake error from [::1]:57770: remote error: tls: bad certificate
2021/04/13 10:58:50 http: TLS handshake error from [::1]:57882: remote error: tls: bad certificate
2021/04/13 10:58:55 http: TLS handshake error from [::1]:57940: remote error: tls: bad certificate
2021/04/13 10:59:00 http: TLS handshake error from [::1]:58008: remote error: tls: bad certificate
2021/04/13 10:59:05 http: TLS handshake error from [::1]:58066: remote error: tls: bad certificate
2021/04/13 10:59:10 http: TLS handshake error from [::1]:58136: remote error: tls: bad certificate

Hashicorp Consul - How to do verified TLS from Pods in Kubernetes cluster

I'm having some difficulty understanding Consul end-to-end TLS. For reference, I'm using Consul in Kubernetes (via the hashicorp/consul Helm chart). Only one datacenter and Kubernetes cluster - no external parties or concerns.
I have configured my override values.yaml file like so:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
All other values are as default from the shipped values.yaml file.
This works, and the Consul client logs suggest that all agents are connecting nicely using TLS, with the relevant certs and keys being created by (as I understand it) the auto-encryption feature of Consul.
What I don't understand is how to initiate an HTTPS connection from an application on Kubernetes, running in a Pod, to a Consul server. Since the Pod's container does not (presumably) have the Consul root CA cert in its trust store, all HTTPS calls fail, as per the wget example below:
# Connect to Pod:
laptop$> kubectl exec -it my-pod sh
# Attempt valid HTTPS connection:
my-pod$> wget -q -O - https://consul.service.consul:8501
Connecting to consul.service.consul:8501 (10.110.1.131:8501)
ssl_client: consul.service.consul: certificate verification failed: unable to get local issuer certificate
wget: error getting response: Connection reset by peer
# Retry, but ignore certificate validity issues:
my-pod$> wget --no-check-certificate -q -O - https://consul.service.consul:8501/v1/status/leader
"10.110.1.131:8300"
How am I supposed to enforce end-to-end (verified) HTTPS connections from my apps on Kubernetes to Consul if the container does not recognize the certificate as valid?
Am I misunderstanding something about certificate propagation?
Many thanks - Aaron
Solved with thanks to Hashicorp on their Consul discussion forum.
Create a Kubernetes secret named consul with a key named CONSUL_GOSSIP_ENCRYPTION_KEY and an appropriate encryption key value.
Generate the value using consul keygen, as sketched below.
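A sketch of those two steps, assuming a default kubectl context pointing at the target cluster:

# Generate a gossip encryption key:
consul keygen

# Store it in the Kubernetes secret referenced by the Helm values below:
kubectl create secret generic consul \
  --from-literal=CONSUL_GOSSIP_ENCRYPTION_KEY="<output of consul keygen>"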
Install the hashicorp/consul Helm chart with a values-override.yaml such as below:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
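A sketch of the install itself; the repo URL and release name here are assumptions, so adjust them to your setup:

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul -f values-override.yaml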
Create an example Pod spec to represent our application.
Ensure it mounts the Consul server CA cert secret.
Ensure the Pod’s container has HOST_IP exposed as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: test-pod
spec:
  volumes:
    - name: consul-consul-ca-cert
      secret:
        secretName: consul-consul-ca-cert
  hostNetwork: false
  containers:
    - name: consul-test-pod
      image: alpine
      imagePullPolicy: IfNotPresent
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 24h; done"]
      volumeMounts:
        - name: consul-consul-ca-cert
          mountPath: /consul/tls/ca
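A sketch of creating the Pod and opening a shell in it (the manifest file name test-pod.yaml is an assumption):

# Save the manifest above as test-pod.yaml, then create the Pod:
kubectl apply -f test-pod.yaml
# Open a shell in the running container:
kubectl exec -it test-pod -- sh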
Upon creation of the Pod, kubectl exec into it and ensure the ca-certificates and curl packages are installed (I'm using Alpine Linux in this example; curl is purely for testing purposes).
#> apk update
#> apk add ca-certificates curl
Copy the mounted Consul server CA certificate into /usr/local/share/ca-certificates/ and execute update-ca-certificates to add it to the system root CA store.
#> cp /consul/tls/ca/tls.crt /usr/local/share/ca-certificates/consul-server-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul server is now accessible (and trusted) over HTTPS as below:
#> curl https://consul.service.consul:8501/v1/status/leader
## No TLS errors ##
We also want to talk to the Consul client (instead of the server) over HTTPS, for performance reasons.
Since the Consul client has its own CA cert, we need to retrieve that from the server.
This requires the consul-k8s binary, so we need to get that.
#> cd /usr/local/bin
#> wget https://releases.hashicorp.com/consul-k8s/0.15.0/consul-k8s_0.15.0_linux_amd64.zip # (or whatever the latest version is)
#> unzip consul-k8s_0.15.0_linux_amd64.zip
#> rm consul-k8s_0.15.0_linux_amd64.zip
Get the Consul client CA cert and install it via update-ca-certificates:
#> consul-k8s get-consul-client-ca -server-addr consul.service.consul -server-port 8501 -output-file /usr/local/share/ca-certificates/consul-client-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul client is now accessible (and trusted) over HTTPS as below:
#> curl https://$HOST_IP:8501/v1/status/leader
## No TLS errors ##
We can also access the Consul KV service from the client without issue:
#> curl https://$HOST_IP:8501/v1/kv/foo/bar/baz
## No TLS errors ##
Naturally, all of the above should be automated by the implementer. These manual steps are purely for demonstration purposes.

How to run remote commands as a user with a certificate from a worker node

I created a user in the Master.
First I created a key and certificate for him: dan.key and dan.crt
Then I created it inside Kubernetes:
kubectl config set-credentials dan \
--client-certificate=/tmp/dan.crt \
--client-key=/tmp/dan.key
This is the ~/.kube/config:
users:
- name: dan
  user:
    as-user-extra: {}
    client-certificate: /tmp/dan.crt
    client-key: /tmp/dan.key
I want to be able to run commands from a remote worker as the user I created.
I know how to do it with service account token:
kubectl --server=https://192.168.0.13:6443 --insecure-skip-tls-verify=true --token="<service_account_token>" get pods
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
I followed this question:
kubectl unable to connect to server: x509: certificate signed by unknown authority
I tried like he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
Unable to connect to the server: x509: certificate signed by unknown authority
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
You were missing the critical piece of data telling kubectl how to trust the https: part of that request, namely --certificate-authority=/path/to/kubernetes/ca.pem
You didn't encounter that error while using --token=... because of the --insecure-skip-tls-verify=true which you should definitely, definitely not do.
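A corrected invocation would look like the sketch below; the CA file path is an assumption, and it must be the CA that signed the apiserver's serving certificate:

kubectl --server=https://192.168.0.13:6443 \
  --certificate-authority=/tmp/ca.crt \
  --client-certificate=/tmp/dan.crt \
  --client-key=/tmp/dan.key \
  get pods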
I tried like he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
You have followed the wrong piece of advice from whatever article you were reading; that --accept-hosts flag only controls the remote hostnames from which kubectl proxy will accept connections, and has nothing to do with SSL at all.
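For context, kubectl proxy authenticates to the apiserver itself using your kubeconfig and exposes a plain-HTTP endpoint locally, which is why its flags cannot influence x509 errors on direct https:// connections. A sketch:

# kubectl proxy handles authentication itself and serves plain HTTP on localhost:
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods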

Kubernetes authentication with certificate

I am trying to authenticate with a locally hosted Kubernetes cluster (v1.6.4) using a certificate.
This takes part in the context of using the Kubernetes plugin for Jenkins.
I am following the guidelines for Minikube in the Kubernetes-plugin README file which I adapted to my scenario:
Convert the client certificate to PKCS#12:
$ sudo openssl pkcs12 -export -out kubernetes.pfx -inkey /etc/kubernetes/pki/apiserver.key -in /etc/kubernetes/pki/apiserver.crt -certfile /etc/kubernetes/pki/ca.crt -passout pass:jenkins
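An optional sanity check on the resulting bundle (-nokeys merely suppresses the private key output):

# List the certificates bundled in the PKCS#12 file:
$ openssl pkcs12 -info -in kubernetes.pfx -passin pass:jenkins -nokeys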
In Jenkins, create credentials using a certificate
Kind: Certificate
Certificate: Upload PKCS#12 certificate and upload file kubernetes.pfx
Password: jenkins (as specified during certificate creation)
Manage Jenkins -> Add new cloud -> Kubernetes
Kubernetes URL: https://10.179.1.121:6443 (as output by kubectl config view)
Kubernetes server certificate key: paste the contents of /etc/kubernetes/pki/ca.crt.
Disable https certificate check: checked because the test setup does not have a signed certificate
Kubernetes Namespace: tried both default and kubernetes-plugin
Credentials: CN=kube-apiserver (i.e. the credentials created above)
Now when I click on Test Connection, this is the error message shown in the Jenkins Web UI:
Error connecting to https://10.179.1.121:6443: Failure executing: GET at: https://10.179.1.121:6443/api/v1/namespaces/kubernetes-plugin/pods. Message: Unauthorized.
The Jenkins logs show this message:
Sep 05, 2017 10:22:03 AM io.fabric8.kubernetes.client.Config tryServiceAccount
WARNING: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
The documentation is, unfortunately, mostly limited to Kubernetes running on Minikube and to Google Cloud Engine, but I do not see a conceptual difference between the former and a locally hosted Kubernetes cluster.
The following Curl call for testing results in a very different error message:
$ curl --insecure --cacert /etc/kubernetes/pki/ca.crt --cert kubernetex.pfx:secret https://10.179.1.121:6443
User "system:anonymous" cannot get at the cluster scope.
More verbose:
$ curl -v --insecure --cacert /etc/kubernetes/pki/ca.crt --cert kubernetex.pfx:secret https://10.179.1.121:6443
* About to connect() to 10.179.1.121 port 6443 (#0)
* Trying 10.179.1.121...
* Connected to 10.179.1.121 (10.179.1.121) port 6443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found: kubernetex.pfx
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=kube-apiserver
* start date: Jun 13 11:33:55 2017 GMT
* expire date: Jun 13 11:33:55 2018 GMT
* common name: kube-apiserver
* issuer: CN=kubernetes
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.179.1.121:6443
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Content-Type: text/plain
< X-Content-Type-Options: nosniff
< Date: Tue, 05 Sep 2017 10:34:23 GMT
< Content-Length: 57
<
* Connection #0 to host 10.179.1.121 left intact
I have also set up a ServiceAccount:
$ kubectl describe serviceaccount --namespace=kubernetes-plugin
Name: default
Namespace: kubernetes-plugin
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-6qwj1
Tokens: default-token-6qwj1
Name: jenkins
Namespace: kubernetes-plugin
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: jenkins-token-1d623
Tokens: jenkins-token-1d623
This question deals with a related problem, recommending to use either a ServiceAccount or a certificate, but the answer for the latter approach lacks the details about how to tie an RBAC profile to that certificate. The Kubernetes documentation about authentication does not seem to cover this use case.
The WARNING: Error reading service account token indicates that the key used to encrypt ServiceAccount tokens is different between kube-apiserver (--service-account-key-file) and kube-controller-manager (--service-account-private-key-file). If your kube-apiserver command-line doesn't specify --service-account-key-file then the value of --tls-private-key-file is used and I suspect that this is the issue.
I'd suggest always explicitly setting kube-apiserver --service-account-key-file to match the kube-controller-manager --service-account-private-key-file value.

Why does a kubelet communicating with the apiserver over TLS need a password? (v1.3)

I deployed the apiserver using TLS on the master node and it worked fine. My problem appeared when deploying the kubelet and trying to communicate with the apiserver.
The kubelet config is as follows:
/opt/bin/kubelet \
  --logtostderr=true \
  --v=0 \
  --api_servers=https://kube-master:6443 \
  --address=0.0.0.0 \
  --port=10250 \
  --allow-privileged=false \
  --tls-cert-file="/var/run/kubernetes/kubelet_client.crt" \
  --tls-private-key-file="/var/run/kubernetes/kubelet_client.key" \
  --kubeconfig="/var/lib/kubelet/kubeconfig"
/var/lib/kubelet/kubeconfig is the following:
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /var/run/kubernetes/kubelet_client.crt
    client-key: /var/run/kubernetes/kubelet_client.key
clusters:
- name: kube-cluster
  cluster:
    certificate-authority: /var/run/kubernetes/ca.crt
contexts:
- context:
    cluster: kube-cluster
    user: kubelet
  name: ctx-kube-system
current-context: ctx-kube-system
I want the communication to use two-way (both client and server) CA authentication, but the apiserver asks me to provide a username and password, which I have never used before. Some command lines follow:
> kubectl version
> Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.2", GitCommit:"9bafa3400a77c14ee50782bb05f9efc5c91b3185", GitTreeState:"clean", BuildDate:"2016-07-17T18:30:39Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
> Please enter Username: kubelet
> Please enter Password: kubelet
> error: You must be logged in to the server (the server has asked for the client to provide credentials)
I tried all of these on the master and the minion. Could anyone please resolve this conundrum? Thanks in advance.
You have to enable client certificate authentication via the --client-ca-file flag on the apiserver.
From http://kubernetes.io/docs/admin/authentication/:
Client certificate authentication is enabled by passing the --client-ca-file=SOMEFILE option to apiserver. The referenced file must contain one or more certificates authorities to use to validate client certificates presented to the apiserver. If a client certificate is presented and verified, the common name of the subject is used as the user name for the request.
From http://kubernetes.io/docs/admin/kube-apiserver/:
--client-ca-file="": If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
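A minimal sketch, assuming the CA that signed kubelet_client.crt is the /var/run/kubernetes/ca.crt already referenced in the kubeconfig above:

# Start the apiserver with client certificate authentication enabled:
kube-apiserver ... --client-ca-file=/var/run/kubernetes/ca.crt ...

# Sanity checks: the client cert must chain to that CA, and its CN becomes the user name:
openssl verify -CAfile /var/run/kubernetes/ca.crt /var/run/kubernetes/kubelet_client.crt
openssl x509 -in /var/run/kubernetes/kubelet_client.crt -noout -subject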