KubeVirt virtctl image-upload gives "remote error: tls: bad certificate" error - ssl

I am trying to upload a Windows 10 image to a PVC in order to create a Windows 10 VM using KubeVirt.
I used the following virtctl command:
$ virtctl image-upload --image-path=/Win10_20H2_v2_English_x64.iso --pvc-name=win10-vm --access-mode=ReadWriteMany --pvc-size=5G --uploadproxy-url=https://<cdi-uploadproxy IP>:443 --insecure
Result:
The PVC is created:
$ kubectl get pvc
NAME               STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
win10-vm           Bound    pv0002   10Gi       RWX                           145m
win10-vm-scratch   Bound    pv0003   10Gi       RWX                           145m
The CDI image-upload pod is created:
[root@master kubevirt]# kubectl get pods -A
NAMESPACE   NAME                               READY   STATUS    RESTARTS   AGE
cdi         cdi-apiserver-847d4bc7dc-l6fz7     1/1     Running   1          135m
cdi         cdi-deployment-66d7555b79-d57bm    1/1     Running   1          135m
cdi         cdi-operator-895bb5c74-hpk44       1/1     Running   1          135m
cdi         cdi-uploadproxy-6c8698cd8b-z67xc   1/1     Running   1          134m
default     cdi-upload-win10-vm                1/1     Running   0          53s
But the upload times out. When I checked the logs of the cdi-upload-win10-vm pod, I got the following errors:
I0413 10:58:38.695097 1 uploadserver.go:70] Upload destination: /data/disk.img
I0413 10:58:38.695263 1 uploadserver.go:72] Running server on 0.0.0.0:8443
2021/04/13 10:58:40 http: TLS handshake error from [::1]:57710: remote error: tls: bad certificate
2021/04/13 10:58:45 http: TLS handshake error from [::1]:57770: remote error: tls: bad certificate
2021/04/13 10:58:50 http: TLS handshake error from [::1]:57882: remote error: tls: bad certificate
2021/04/13 10:58:55 http: TLS handshake error from [::1]:57940: remote error: tls: bad certificate
2021/04/13 10:59:00 http: TLS handshake error from [::1]:58008: remote error: tls: bad certificate
2021/04/13 10:59:05 http: TLS handshake error from [::1]:58066: remote error: tls: bad certificate
2021/04/13 10:59:10 http: TLS handshake error from [::1]:58136: remote error: tls: bad certificate
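For reference, one way to double-check the address that belongs in --uploadproxy-url is to look up the cdi-uploadproxy Service (a sketch; this assumes CDI is installed in the cdi namespace, as in the pod listing above):
$ kubectl get svc -n cdi cdi-uploadproxy
# point --uploadproxy-url at this Service address (its ClusterIP, or a NodePort/route
# that forwards the TLS stream to it unmodified), e.g. https://<service address>:443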

Related

Kubernetes Dashboard TLS cert issue

I am deploying the standard Kubernetes Dashboard (Jetstack) to the K3s cluster I have deployed on my RPi cluster. I am using Let's Encrypt to provision the TLS cert and setting the following options on the dashboard deployment:
spec:
  args:
    - --tls-cert-file=/tls.crt
    - --tls-key-file=/tls.key
  volumeMounts:
    - mountPath: /certs
      name: kubernetes-dashboard-certs
  volumes:
    - name: kubernetes-dashboard-certs
      secret:
        secretName: cluster.smigula.com-tls
The cert is valid when I visit the URL in my browser; however, the pod logs this:
http: TLS handshake error from 10.42.0.8:43704: remote error: tls: bad certificate
It appears that the ingress is terminating the TLS connection when the pod expects to terminate it. What should I do? Thanks.
[edit] I changed the resource kind from Ingress to IngressRouteTCP and set passthrough: true in the tls: section. Still same result.
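For context, a minimal sketch of the IngressRouteTCP passthrough approach mentioned in the edit (resource name, namespace, entrypoint and hostname are assumptions, not taken from the post); with passthrough, Traefik routes on the SNI only and the pod terminates TLS itself:
kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kubernetes-dashboard-passthrough   # hypothetical name
  namespace: kubernetes-dashboard          # assumed namespace
spec:
  entryPoints:
    - websecure                            # Traefik's default TLS entrypoint
  routes:
    - match: HostSNI(`dashboard.example.com`)   # placeholder hostname
      services:
        - name: kubernetes-dashboard
          port: 443
  tls:
    passthrough: true                      # hand the TLS stream to the pod unterminated
EOF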

How to configure ssl for ldap/opendj while using ISTIO service mesh

I have a couple of microservices and our backend is OpenDJ/LDAP. It has been configured to use SSL. Now we are trying to use Istio as our k8s service mesh. Every other service works fine, but the LDAP server - OpenDJ - does not. My guess is that it's because of the SSL configuration; it's meant to use a self-signed cert.
I have a script that creates a self-signed cert in the istio namespace, and I have tried to use it like this in the gateway.yaml:
- port:
    number: 4444
    name: tcp-admin
    protocol: TCP
  hosts:
    - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
I have also tried the following
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: opendj-istio-mtls
spec:
  host: opendj.{{.Release.Namespace }}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      credentialName: tls-certificate
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: opendj-receive-tls
spec:
  targets:
    - name: opendj
  peers:
    - mtls: {}
for the LDAP server, but it's not connecting. While trying to use the tls spec in gateway.yaml I am getting this error:
Error: admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: server cannot have TLS settings for non HTTPS/TLS ports
And these are the logs from the OpenDJ server:
INFO - entrypoint - 2020-06-17 12:49:44,768 - Configuring OpenDJ.
WARNING - entrypoint - 2020-06-17 12:49:48,987 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
WARNING - entrypoint - 2020-06-17 12:49:53,293 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
Can someone please help me with how I should approach this?
To enable non-HTTPS traffic over TLS connections you have to use protocol TLS. TLS implies the connection will be routed based on the SNI header to the destination without terminating the TLS connection. You can check this in the Istio documentation.
- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
    - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
Please also check the Istio documentation on Gateway servers.
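As a side note, the credentialName above refers to a Kubernetes TLS secret that must live in the same namespace as the ingress gateway (usually istio-system); a minimal sketch of creating it, with placeholder file paths:
kubectl create -n istio-system secret tls tls-certificate \
  --key=/path/to/ldap.key \
  --cert=/path/to/ldap.crt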

Istio remote error: tls: error decrypting message

I am starting out with Istio and trying to enable TLS on north-south traffic by creating a gateway resource enabled with TLS, following this doc: https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/.
I have followed everything to the dot, but I keep getting this error from the Istiod pod logs:
2020-05-21T04:41:44.467181Z info grpc: Server.Serve failed to complete security handshake from "10.x.x.x:34774": remote error: tls: bad certificate
2020-05-21T04:41:54.416502Z info grpc: Server.Serve failed to complete security handshake from "10.x.x.x:56768": remote error: tls: error decrypting message
2020-05-21T04:42:00.305269Z info grpc: Server.Serve failed to complete security handshake from "10.x.x.x:56834": remote error: tls: error decrypting message
Any idea why this is happening? I did check for typos while creating the certs but cannot find any.
This works when I disable TLS and use HTTP, so I am assuming that the error comes from using the certificates; the logs suggest the same.
Details about the cluster:
AWS EKS Version: 1.14
Istio Version: 1.5.1
Any help would be greatly appreciated!
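For comparison, the flow in that task boils down to creating a TLS secret for the ingress gateway and pointing the Gateway at the mounted cert; a rough sketch from memory (file names are placeholders, and exact paths/names can differ between Istio versions):
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key=httpbin.example.com.key --cert=httpbin.example.com.crt   # placeholder files

kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway   # default ingress gateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-certs/tls.key
      hosts:
        - "httpbin.example.com"
EOF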

Hyperledger Fabric CA: x509: certificate is valid for rca-ord, not localhost

We have started an instance of fabric-ca-server with the following settings in docker-compose.yml:
version: '2'
networks:
  test:
services:
  myservice:
    container_name: my-container
    image: hyperledger/fabric-ca
    command: /bin/bash -c "fabric-ca-server start -b admin:adminpw"
    environment:
      - FABRIC_CA_SERVER_HOME=/etc/hyperledger/fabric-ca
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=rca-ord
      - FABRIC_CA_SERVER_CSR_HOSTS=rca-ord
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - ./scripts:/scripts
      - ./data:/data
    networks:
      - test
    ports:
      - 7054:7054
but when we try to enroll a user against this server using the command below:
root@fd85cc416f52:/# fabric-ca-client enroll -u https://user:userpw@localhost:7054 --tls.certfiles $FABRIC_CA_SERVER_HOME/tls-cert.pem
we get the error below:
2018/12/08 22:18:03 [INFO] TLS Enabled
2018/12/08 22:18:03 [INFO] generating key: &{A:ecdsa S:256}
2018/12/08 22:18:03 [INFO] encoded CSR
Error: POST failure of request: POST https://localhost:7054/enroll
{"hosts":["fd85cc416f52"],"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBQDCB6AIBADBcMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGggQ2Fyb2xp\nbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMQ8wDQYDVQQLEwZGYWJyaWMxDTALBgNV\nBAMTBHVzZXIwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATREdPvOeaWG9TzaEyk\nhFXRnJFJouDXShr0D1745bCt/0n3qjpqviZiApd1t62VrpMX0j8DBa6tkF7C+rEr\nRvwnoCowKAYJKoZIhvcNAQkOMRswGTAXBgNVHREEEDAOggxmZDg1Y2M0MTZmNTIw\nCgYIKoZIzj0EAwIDRwAwRAIgASXupobxJia/FFlLiwYzYpacvSA6RiIc/LR/kvdB\nT8ICIA1nJ2RfHrwMhOWocxMAIuLUsBvKS3S5DIwCHp0/gBpn\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","CAName":""}: Post https://localhost:7054/enroll: x509: certificate is valid for rca-ord, not localhost
on the server side we can see the following message printed when the request is sent:
my-container | 2018/12/08 22:18:03 http: TLS handshake error from 127.0.0.1:56518: remote error: tls: bad certificate
we have also tried:
root@fd85cc416f52:/# ls $FABRIC_CA_SERVER_HOME
IssuerPublicKey IssuerRevocationPublicKey ca-cert.pem fabric-ca-server-config.yaml fabric-ca-server.db msp tls-cert.pem
root@fd85cc416f52:/# fabric-ca-client enroll -u https://user:userpw@localhost:7054 --tls.certfiles $FABRIC_CA_SERVER_HOME/ca-cert.pem
with the same result.
Wondering if someone can help us figure out what is wrong here and how we can fix it? Thanks.
You have generated a TLS certificate on the server using FABRIC_CA_SERVER_CSR_HOSTS=rca-ord, but then you are sending your request to localhost in the URL you specify in the enroll command.
To get this to work, you should change your environment variable to also include 'localhost'. For example: FABRIC_CA_SERVER_CSR_HOSTS=rca-ord,localhost.
Delete the old TLS certificate and generate a new one, and it should work.
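Concretely, a sketch of those steps under the compose file above (the container name and FABRIC_CA_SERVER_HOME path come from it; the restart command is an assumption about your workflow):
# docker-compose.yml: extend the CSR hosts
#   - FABRIC_CA_SERVER_CSR_HOSTS=rca-ord,localhost

# remove the previously generated TLS certificate so it is re-issued with the new SANs
docker exec my-container rm /etc/hyperledger/fabric-ca/tls-cert.pem

# recreate the container so fabric-ca-server starts with the new settings
docker-compose up -d --force-recreate myservice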

How to run remote code as user with certificate from a worker node

I created a user on the master.
First I created a key and certificate for him: dan.key and dan.crt.
Then I added the credentials to kubectl:
kubectl config set-credentials dan \
  --client-certificate=/tmp/dan.crt \
  --client-key=/tmp/dan.key
This is the ~/.kube/config:
users:
- name: dan
  user:
    as-user-extra: {}
    client-certificate: /tmp/dan.crt
    client-key: /tmp/dan.key
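(As an aside, a minimal sketch of binding this user to a context so plain kubectl on the master uses it; the cluster name kubernetes is an assumption:)
kubectl config set-context dan-context --cluster=kubernetes --user=dan
kubectl config use-context dan-context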
I want to be able to run commands from a remote worker as the user I created.
I know how to do it with service account token:
kubectl --server=https://192.168.0.13:6443 --insecure-skip-tls-verify=true --token="<service_account_token>" get pods
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
I followed this question:
kubectl unable to connect to server: x509: certificate signed by unknown authority
I tried what he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
Unable to connect to the server: x509: certificate signed by unknown authority
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
You were missing the critical piece of data telling kubectl how to trust the https: part of that request, namely --certificate-authority=/path/to/kubernetes/ca.pem
You didn't encounter that error while using --token=... because of --insecure-skip-tls-verify=true, which you should definitely, definitely not use.
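For illustration, a hedged sketch of the corrected invocation (assuming you copy the cluster CA to the worker as /tmp/ca.crt; on kubeadm-based masters it typically lives at /etc/kubernetes/pki/ca.crt):
kubectl --server=https://192.168.0.13:6443 \
  --certificate-authority=/tmp/ca.crt \
  --client-certificate=/tmp/dan.crt \
  --client-key=/tmp/dan.key \
  get pods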
I tried what he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
You have followed the wrong piece of advice from whatever article you were reading; that --accept-hosts flag only controls the remote hostnames from which kubectl proxy will accept connections, and has nothing to do with SSL at all.