PostgreSQL fails to start when SSL is enabled on Windows 10 - ssl

I am not sure this topic fits here, but I don't know where else to ask.
I am trying to enable SSL in PostgreSQL 10.16 on Windows 10.
I have read a lot of documents about creating an SSL certificate for PostgreSQL, but documents covering Windows 10 are rare and lack detail.
These are the steps which I did:
Step 1: I downloaded the OpenSSL build for Windows from
https://slproweb.com/products/Win32OpenSSL.html
and installed it to C:\OpenSSL-Win64, adding it to the system PATH variable.
Step 2: From cmd.exe run as administrator, I created the server key:
genrsa -out server.key 4096
then set appropriate permissions and ownership on the private key file (per https://stackoverflow.com/a/51463654):
icacls server.key /reset
icacls server.key /inheritance:r /grant:r "CREATOR OWNER:F"
cmd.exe responded:
C:\WINDOWS\system32>icacls server.key /reset
processed file: server.key
Successfully processed 1 file; Failed processing 0 files
C:\WINDOWS\system32>icacls server.key /inheritance:r /grant:r "CREATOR OWNER:F"
processed file: server.key
Successfully processed 1 files; Failed processing 0 files
Then I created the server certificate:
req -new -x509 -days 1826 -key server.key -out server.crt
Step 3: Since I am self-signing, I use the server certificate as the trusted root certificate, so I have three files: server.key, server.crt, and root.crt (a copy of server.crt).
I moved these three files to C:\Program Files\PostgreSQL\10\data.
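For reference, steps 2 and 3 boil down to this OpenSSL script (a sketch of what I ran; the -subj value is a placeholder so no interactive prompts are needed):

```shell
# Generate a 4096-bit private key, a self-signed certificate valid for
# 1826 days, and a root.crt that is simply a copy of server.crt.
openssl genrsa -out server.key 4096
openssl req -new -x509 -days 1826 -key server.key -out server.crt \
  -subj "/CN=my-postgres-host"   # placeholder subject
cp server.crt root.crt
# Sanity check: the self-signed cert verifies against itself.
openssl verify -CAfile root.crt server.crt
```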
Step 4: I set the following in postgresql.conf:
listen_addresses = '*'
port = 5432
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
ssl_ca_file = 'root.crt'
and added this line to the end of pg_hba.conf:
# IPv4 remote connections for authenticated users
hostssl all postgres 0.0.0.0/0 md5 clientcert=1
Finally, I get the error below when I restart PostgreSQL:
The postgresql-x64-10 - PostgreSQL Server 10 service on the Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.
and in my log of PostgreSQL:
2021-03-28 16:35:44.735 +07 [7624] LOG: database system was shut down at 2021-03-28 16:34:51 +07
2021-03-28 16:35:45.099 +07 [7044] LOG: database system is ready to accept connections
2021-03-28 17:39:37.827 +07 [7044] LOG: received fast shutdown request
2021-03-28 17:39:37.834 +07 [7044] LOG: aborting any active transactions
2021-03-28 17:39:37.839 +07 [7044] LOG: worker process: logical replication launcher (PID 7972) exited with exit code 1
2021-03-28 17:39:37.843 +07 [7880] LOG: shutting down
2021-03-28 17:39:37.877 +07 [7044] LOG: database system is shut down
I suspect PostgreSQL did not read the 3 files that I put in its data directory.
I have consulted these documents:
https://www.howtoforge.com/postgresql-ssl-certificates
how to do chmod og-rwx server.key on Windows
https://www.postgresql.org/docs/10/ssl-tcp.html
I have been stuck on this for days and don't know how to solve the problem.

Related

Libvirt and TLS ignoring CA file settings

I have 3 KVM/libvirt hypervisors that I would like to have communicate with each other.
I have my own CA and a subordinate CA.
I created certificates for each machine, and each one's /etc/libvirt/libvirtd.conf contains the following:
listen_tls = 1
key_file = "/etc/pki/tls/private/serverX_libvirt_key.pem"
cert_file = "/etc/pki/tls/certs/serverX_crt.pem"
ca_file = "/etc/pki/tls/certs/CA_chain.pem"
The CA_chain.pem file obviously contains the chain certificates (Int-CA & CA).
The key file and the certificates validate correctly:
openssl verify -CAfile /etc/pki/tls/certs/CA_chain.pem /etc/pki/tls/certs/serverX_crt.pem
/etc/pki/tls/certs/serverX_crt.pem: OK
Client certificates are set up as in the documentation:
ls -lrt /etc/pki/libvirt/private/clientkey.pem
-r--------. 1 root root 3243 Apr 30 09:45 /etc/pki/libvirt/private/clientkey.pem
And of course it verifies against our CA:
openssl verify -CAfile /etc/pki/tls/certs/CA_chain.pem /etc/pki/libvirt/clientcert.pem
/etc/pki/libvirt/clientcert.pem: OK
ls -lrt /etc/pki/libvirt/clientcert.pem
-rw-r--r--. 1 root root 2297 Apr 30 10:07 /etc/pki/libvirt/clientcert.pem
However, I cannot connect to the hypervisors!
Using virsh, I get the following error:
[root@serverX ~]# virsh -c qemu+tls://serverY list
error: failed to connect to the hypervisor
error: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
The certificate permissions are correct, and the SELinux contexts are correct.
But obviously, something is missing and I cannot connect to the servers.
Any help would be appreciated.
OK, I found it.
The issue seems to be that the libvirt client has no option for configuring TLS file locations, so we are forced to use the defaults. Although I used the default locations for the private key and the certificate, I didn't do the same for the CA.
Thus, I created a link from /etc/pki/tls/certs/CA_chain.pem to /etc/pki/CA/cacert.pem and now everything is fine.
mkdir -p /etc/pki/CA/
ln -s /etc/pki/tls/certs/CA_chain.pem /etc/pki/CA/cacert.pem
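The fix can be sanity-checked by confirming the symlink resolves to a readable certificate. Sketched here with a throwaway CA in a scratch directory, since the real paths are root-owned:

```shell
# Recreate the layout in a scratch dir: a CA chain file in the tls/certs
# location, symlinked to the default path the libvirt client expects.
mkdir -p scratch/etc/pki/tls/certs scratch/etc/pki/CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-CA" \
  -keyout scratch/ca.key -out scratch/etc/pki/tls/certs/CA_chain.pem
ln -sf "$PWD/scratch/etc/pki/tls/certs/CA_chain.pem" scratch/etc/pki/CA/cacert.pem
# The link resolves and the cert parses:
openssl x509 -noout -subject -in scratch/etc/pki/CA/cacert.pem
```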

VS Code Remote - windows path with whitespace in it - ssh error

I want to connect to my server via SSH. I have installed the Remote Development package in VS Code; I can connect via ssh in the VS Code terminal, but not via the SSH panel. When I do so, I get:
[10:45:40.155] Spawned 9044
[10:45:40.266] > local-server> Spawned ssh: 7472
[10:45:40.292] stderr> OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
[10:45:41.149] stderr> debug1: Server host key: ecds...56 SHA256:5SDO....
[10:45:41.183] stderr> 'C:\Users\Name' is not recognized as an internal or external command,
[10:45:41.183] stderr> operable program or batch file.
[10:45:41.186] stderr> Host key verification failed.
[10:45:41.189] > local-server> ssh child died, shutting down
[10:45:41.197] Local server exit: 0
[10:45:41.198] Received install output: OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
debug1: Server host key: ecdsa-s.....
'C:\Users\Name' is not recognized as an internal or external command,
operable program or batch file.
Host key verification failed.
As you can see, my user directory is C:\Users\Name Surname\..., which causes trouble: the path gets split at the whitespace between Name and Surname.
It probably happens when the extension tries to run this:
[10:45:40.091] Local server env: {"DISPLAY":"1","ELECTRON_RUN_AS_NODE":"1","SSH_ASKPASS":"c:\\Users\\Name Surname\\.vscode\\extensions\\ms-vscode-remote.remote-ssh-0.50.0\\out\\local-server\\askpass.bat","VSCODE_SSH_ASKPASS_NODE":"C:\\Users\\Name Surname\\AppData\\Local\\Programs\\Microsoft VS Code\\Code.exe","VSCODE_SSH_ASKPASS_MAIN":"c:\\Users\\Name Surname\\.vscode\\extensions\\ms-vscode-remote.remote-ssh-0.50.0\\out\\askpass-main.js","VSCODE_SSH_ASKPASS_HANDLE":"\\\\.\\pipe\\vscode-ssh-askpass-1e1200d27-sock"}
My question is, what can I do about it?
In the extension settings, search for: #ext:ms-vscode-remote.remote-ssh Path
Then under Path, specify an absolute path to an ssh installation. On my Windows install it was located at C:\Windows\System32\OpenSSH\ssh.exe
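If it helps, the same thing expressed as a settings.json entry (the setting ID is remote.SSH.path; the path below is the example from this answer):

```json
{
  // Absolute path to ssh.exe, avoiding the space-in-username problem
  "remote.SSH.path": "C:\\Windows\\System32\\OpenSSH\\ssh.exe"
}
```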

How to fix "Service [XXX]: SSL server needs a certificate" on a Stunnel server?

I had a Stunnel server configuration that was working fine last week. It seems that after a sudo apt-get update && sudo apt-get upgrade, that is no longer the case.
Version:
$ ls -la /usr/bin/stunnel
?????????? 1 root root 8 Xxx XX 2016 /usr/bin/stunnel -> stunnel4
$ stunnel -version
stunnel 5.30 on x86_64-pc-linux-gnu platform
Compiled with OpenSSL 1.0.2e 3 Dec 2015
Running with OpenSSL 1.0.2g 1 Mar 2016
Update OpenSSL shared libraries or rebuild stunnel
And this is my server stunnel.conf
verify = 2
debug = 7
output = stunnel.log
options = NO_SSLv3
[XXX]
client = no
verify = 0
accept = 9888
connect = localhost:9879
key = path/to/file.key
CAfile = path/to/ca.pem
What was working before now gives the following error:
$ sudo stunnel stunnel.conf
[ ] Initializing service [XXX]
[!] Service [XXX]: SSL server needs a certificate
Why do I need a certificate now? Isn't this a server? I already provided a private key and a CA certificate and thought that was enough.
Please correct me if I'm wrong, but I think a Stunnel server doesn't need the clients' certs in its configuration in order to start listening for sessions.
Whatever the issue is, I appreciate any help.
I got it fixed by changing CAfile to cert:
[XXX]
...
cert = path/to/ca.pem
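For completeness, the corrected service section then looks like this (a sketch; the paths are the question's placeholders). stunnel's cert option names the certificate the server presents, with key holding its private key, while CAfile only supplies CAs for verifying peer certificates:

```
[XXX]
client = no
verify = 0
accept = 9888
connect = localhost:9879
cert = path/to/ca.pem   ; certificate presented to clients
key = path/to/file.key  ; matching private key
```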

Unable to ssh into app: "Application 'prod' not found" but it is in the apps list. How can I fix this?

$ rhc apps
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
dev # http://
...
prod # http://
$ rhc ssh prod
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Application 'prod' not found.
$ rhc ssh --app dev
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Application 'dev' not found.
I'm not sure what else to say. I don't want to delete my ssh keys because I use them elsewhere; plus the error doesn't seem to be related to ssh keys.
I have found that I can log in to apps that I created but not apps that are shared with me. Even when using the ssh address provided for that app (rhc ssh 565fc20989f5cfec5f111012@...):
$ rhc ssh 565fc20989f5cfec5fddfd12@prod-xyzdomain.rhcloud.com
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Application '565fc20989f5cfec5fddfd12@prod-xyzdomain.rhcloud.com' not found.
If your application is "prod-myapp.rhcloud.com" your application name is actually just "prod". The naming scheme is <application name>-<domain>.rhcloud.com.
So the command to ssh into your application would be "rhc ssh prod" or "rhc ssh prod -n myapp" if the application is not in your default domain.

Docker Registry incorrectly claims an expired CA cert

I followed the Docker Registry installation docs precisely, and have a registry running on a remote Ubuntu VM. On that VM, the Docker container is running with the following command:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
registry:2
On the remote VM, I have the following directory structure:
/home/myuser/
    certs/
        registry.crt
        registry.key
/etc/docker/certs.d/myregistry.example.com:5000/
    ca.crt
    ca.key
The ca.crt is the exact same cert as ~/certs/registry.crt, just renamed; same goes for ca.key and registry.key. I created the ca.* files per a suggestion in the error output you'll see below.
I am almost 100% sure the CA cert is still valid, although any help ruling that out (e.g. how can I actually tell?) would be appreciated. When I start the container and look at the Docker logs, I don't see any errors.
I then attempt to login from my local laptop (Mac):
docker login myregistry.example.com:5000
It queries me for my username, password and email (although I don't recall ever specifying an email when setting up Basic Auth). After entering these correctly (I have checked and double checked...) I get the following error:
myuser@mymachine:~/tmp$ docker login myregistry.example.com:5000
Username: my_ciuser
Password:
Email: myuser@example.com
Error response from daemon: invalid registry endpoint https://myregistry.example.com:5000/v0/:
unable to ping registry endpoint https://myregistry.example.com:5000/v0/ v2 ping attempt failed with error:
Get https://myregistry.example.com:5000/v2/: x509: certificate has expired or is not yet valid
v1 ping attempt failed with error: Get https://myregistry.example.com:5000/v1/_ping: x509:
certificate has expired or is not yet valid. If this private registry supports only HTTP or
HTTPS with an unknown CA certificate, please add
`--insecure-registry myregistry.example.com:5000` to the daemon's
arguments. In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
So from my perspective, I guess the following are possible:
The CA cert is invalid (if so, why?!?)
The CA cert is an intermediary cert (if so, how can I tell?)
The CA cert is expired (if so, how do I tell?)
This is a bad error message, and some other facet of the registry is not configured properly (if so, how do I troubleshoot further?)
Perhaps my cert is not located in the correct place on the server, or doesn't have the right permissions set (if so, where does the cert need to be?)
Something else that I would never expect in a million years
Any ideas/thoughts?
As the error message says:
... In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
where myregistry.example.com:5000 is your CN with the port.
You should copy your ca.crt to each Docker daemon host that will connect to your Docker Registry and put it in this folder: /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
After this, you need to restart the Docker daemon, for example via sudo service docker stop && sudo service docker start on CentOS (or the equivalent procedure on your OS).
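To rule out actual expiry (points 1 and 3 in the question), the certificate's validity window can be printed with openssl. A sketch, using a throwaway cert here since the real /home/myuser/certs/registry.crt lives on the VM:

```shell
# Stand-in for registry.crt: a short-lived self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout demo.key -out demo.crt
# Print the notBefore/notAfter dates (substitute the real registry.crt path).
openssl x509 -noout -dates -in demo.crt
# -checkend exits successfully only if the cert is still valid in N seconds:
openssl x509 -checkend 0 -in demo.crt && echo "still valid"
```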
I had a similar error. I fixed it by adding my private registry to the insecure-registries list (in Docker Desktop this setting is under the Docker Engine configuration).
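The insecure-registries setting lives in the Docker daemon's daemon.json (a sketch; the hostname is the question's placeholder). Note this disables TLS verification for that registry, so the ca.crt approach above is preferable:

```json
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
```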