Puppet: could not retrieve catalog from remote server - SSL

Running sudo puppet agent -t from host: host.internaltest.com
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Another local or imported resource exists with the type and title Host[host.internaltest.com] on node host.internaltest.com
This machine had its SSL certs messed with, so I cleaned it off the master and then, using autosign (bad, bad, I know!), I ran sudo puppet agent -t, which regenerated the SSL cert but also threw this error. Let me know if you need more information; I haven't dealt with this aspect of Puppet much.

Most likely the puppet master still has this cert cached in memory. You need to clean the cert on both the client and the master:
# On the client machine (assuming the Puppet libdir is /var/lib/puppet)
rm -rf /var/lib/puppet/ssl/*/*.pem
# On the puppet master
puppet cert clean host.internaltest.com
# Restart the puppet master
/sbin/service puppetmasterd restart
# If you run the puppet master behind Passenger, you may need to restart httpd instead
/sbin/service httpd restart
# Then run the agent on the client to regenerate the cert
sudo puppet agent -t

If you use an stunnel and a globally set http_proxy, this error can also occur when the request is redirected to the wrong endpoint.
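If that is your situation, a quick mitigation is to exclude the puppet master from the proxy for the agent run (a hedged sketch; the master hostname here is an assumption, and no_proxy is the conventional variable most HTTP clients honor):
# Bypass the global proxy for the puppet master (assumed hostname)
export no_proxy=puppetmaster.internaltest.com
# -E preserves the environment, including no_proxy, across sudo
sudo -E puppet agent -t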

Related

Certbot unable to find AWS credentials when issuing certificate via dns for route53

I need to get a certificate for my domain hosted on AWS Route 53 from Let's Encrypt. I do not have port 80 or 443 exposed, since the server is used for VPN and has no public access.
So the only way to do this is via DNS validation with Route 53.
So far I have installed certbot and the dns-route53 plugin:
sudo snap install --beta --classic certbot
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install --beta certbot-dns-route53
sudo snap connect certbot:plugin certbot-dns-route53
I created a dedicated user in my AWS account with access to Route 53 and added the access key ID and secret access key to both ~/.aws/config and ~/.aws/credentials, which look like this:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
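A quick way to confirm which credentials (if any) a given user can actually see is to call STS as that user (a hedged diagnostic; it assumes the AWS CLI is installed):
aws sts get-caller-identity          # as your normal user
sudo aws sts get-caller-identity     # as root, i.e. what certbot sees under sudo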
Basically followed every step given here: https://certbot-dns-route53.readthedocs.io/en/stable/
Now when I run the following command:
sudo certbot certonly -d mydomain.com --dns-route53
It gives the following output:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator dns-route53, Installer None
Requesting a certificate for mydomain.com
Performing the following challenges:
dns-01 challenge for mydomain.com
Cleaning up challenges
Unable to locate credentials
To use certbot-dns-route53, configure credentials as described at https://boto3.readthedocs.io/en/latest/guide/configuration.html#best-practices-for-configuring-credentials and add the necessary permissions for Route53 access.
I went to the documentation given in the error message: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#best-practices-for-configuring-credentials
but I do not see anything wrong in what I am doing.
I even switched to root with sudo su, exported the AWS keys as environment variables there, and exported them in my own home as well, but it still throws the same error.
I ran into this same issue, and it's likely because you are running certbot with sudo: when you do that, your own user's ~/ is ignored and the credentials are looked up in /root/ instead.
I fixed it with a symlink; centos is my user, which owns the .aws/ directory with the config and credentials files:
sudo -s
ln -s /home/centos/.aws/ ~/.aws
ls -lsa ~/.aws
... /root/.aws -> /home/centos/.aws/
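An alternative to the symlink is to point boto3 at your user's credentials file explicitly (a hedged sketch; AWS_SHARED_CREDENTIALS_FILE is the standard boto3/AWS SDK variable, and the /home/centos path is from the answer above):
sudo env AWS_SHARED_CREDENTIALS_FILE=/home/centos/.aws/credentials \
    certbot certonly -d mydomain.com --dns-route53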

How to resolve peer unverified exception in a secure nifi cluster?

I set up a secured NiFi cluster with TLS certificates provided by the organisation. On accessing the UI I get the error "javax.net.ssl.SSLPeerUnverifiedException: Hostname abc.com not verified: certificate: sha256/abc/abcabc= DN: CN=abc.com, OU=Abc Operations, O=Abc Corporation Limited, C=SG subjectAltNames: [abc.com]". I have followed https://nifi.apache.org/docs/nifi-docs/html/walkthroughs.html#securing-nifi-with-provided-certificates.
Is there anything I missed to enable peer-to-peer communication while using SSL?
I had the same problem and found a solution in the NiFi TLS Toolkit.
Note: on my cluster, authentication worked correctly; the problem was only in Java's SSL verification.
In short: the problem is indeed in --subjectAlternativeNames.
Generating SSL keys with my own root CA did not work for me. A good (but old) instruction: https://community.cloudera.com/t5/Community-Articles/How-to-create-user-generated-keys-for-securing-NiFi/ta-p/245551
CentOS Linux 8
NiFi 1.14.0
nifi-toolkit 1.15.2
My approach with the NiFi TLS Toolkit:
Download nifi-toolkit-*.tar.gz to a Linux machine (let's say the machine's IP is 0.0.0.1; we need it because this VM will act as the certificateAuthorityHostname). The link is on the Apache NiFi downloads page.
sudo wget https://dlcdn.apache.org/nifi/1.15.2/nifi-toolkit-1.15.2-bin.tar.gz
Unarchive it
sudo tar -xvf nifi-toolkit-1.15.2-bin.tar.gz
Generate all the keys with one long command
../security_output - this dir (or any other name) must be created before running the main command (it's useful to store all the key files in one place)
sudo ./bin/tls-toolkit.sh standalone -h - this help command explains the args
OU - matches the VM names in my cluster
!!! --subjectAlternativeNames - this is the main reason the error javax.net.ssl.SSLPeerUnverifiedException: Hostname <ip / dns> not verified is raised
-O - this arg overwrites the keys already in the folder, so be careful
The generate command: sudo ./bin/tls-toolkit.sh standalone --hostnames '0.0.0.1,0.0.0.2,0.0.0.3' -c '0.0.0.1' -C 'CN=0.0.0.1,OU=nifi-prod-cluster-01' -C 'CN=0.0.0.2,OU=nifi-prod-cluster-02' -C 'CN=0.0.0.3,OU=nifi-prod-cluster-03' -O -o ../security_output --subjectAlternativeNames '0.0.0.1,0.0.0.2,0.0.0.3,nifi-prod-cluster-01,nifi-prod-cluster-02,nifi-prod-cluster-03'
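Before distributing the keys, you can verify that the SANs really made it into each certificate (a hedged check; the keystore password is generated per node and written to ../security_output/0.0.0.X/nifi.properties as nifi.security.keystorePasswd, and '<keystorePasswd>' below is a placeholder for it):
keytool -list -v -keystore ../security_output/0.0.0.1/keystore.jks \
    -storepass '<keystorePasswd>' | grep -A1 SubjectAlternativeName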
After generating the keys, I archive the whole security_output dir:
sudo tar -zcvf security_output.tar.gz security_output
And copy this tar to the other VMs of the cluster: 0.0.0.2 and 0.0.0.3 in my example.
Then we need to move keystore.jks and truststore.jks to the nifi/conf/ directory, next to nifi.properties; a sketch of the copy-and-place steps follows.
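For node 0.0.0.2 that might look like this (a hedged sketch; the centos user and the /opt/nifi install path are assumptions, adjust them to your setup):
# From the CA machine (0.0.0.1):
sudo scp security_output.tar.gz centos@0.0.0.2:/tmp/
# On 0.0.0.2:
sudo tar -xzf /tmp/security_output.tar.gz -C /tmp/
sudo cp /tmp/security_output/0.0.0.2/keystore.jks /tmp/security_output/0.0.0.2/truststore.jks /opt/nifi/conf/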
Edit nifi.properties. The passwords for the keys are in security_output/0.0.0.X/nifi.properties. I replaced only these params:
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=34dgsOBKdS+9DGHIm849ALK3JaNBdd738ddsgjfghb4J
nifi.security.keyPasswd=34dgsOBKdS+9DGHIm849ALK3Jaddsgjfghb4J
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=/n1xI9AjcwutNBdd738uOQeQL5O9ALK3i3KwylEYMW5
nifi.security.user.authorizer=single-user-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=single-user-provider
nifi.security.user.jws.key.rotation.period=PT1H
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
Restart nifi:
sudo service nifi restart && tail -f /opt/nifi/logs/nifi-app.log
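Once each node is back up, you can confirm the served certificate now carries the SANs (a hedged check; 8443 is assumed as your HTTPS port):
openssl s_client -connect 0.0.0.1:8443 </dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'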
UPD: maybe you want to set one password for the keys on all machines (it's easier to set up) or set the number of days the keys are valid: https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#standalone
Links:
Useful link for my guide (but old): https://pierrevillard.com/tag/tls-toolkit/
This one helped me find the right idea: https://community.cloudera.com/t5/Community-Articles/Using-the-TLS-Toolkit-to-simplify-security/ta-p/247531

svn commands fail with SSL: bad packet length on Solaris

I have a host with Solaris installed:
# uname -a
SunOS <hostname> 5.10 Generic_147147-26 sun4u sparc SUNW,Sun-Fire-V250
When I try to execute any svn command as a non-root user against a remote repository, I get an SSL error:
<username># svn log https://svn.example.com/repository/file
SSL negotiation failed: SSL error: bad packet length
But if I switch to a root user everything seems to work fine:
root# svn log https://svn.example.com/repository/file
Authentication realm: <https://svn.example.com:443>
Password for 'root':
I've tried removing the svn.ssl.server/<hash> file for the remote host, and this works for the root account (I have to accept the certificate again), but I still get the same SSL error for the non-root user.
Any ideas how to fix this?
Is it possible that the regular user "username" had accepted a previous/different certificate from that host?
You may try removing or renaming the ~/.subversion directory in your "username" home and see if that solves the issue: if it does, your "username" has probably stored a wrong setting in its Subversion settings files.
Is it possible that root and "username" are using different svn executables due to $PATH order? (You can check with the which svn command.) Both checks are sketched below.
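A hedged sketch of both checks, run as the non-root user (~/.subversion is the standard per-user Subversion config directory; the repository URL is the one from the question):
# Compare with the binary root gets from its $PATH
which svn
# Back up the per-user Subversion settings, including cached server certs
mv ~/.subversion ~/.subversion.bak
# Retry; you should be asked to accept the server certificate again
svn log https://svn.example.com/repository/file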
Hope it helps.

How to start httpd service using ansible module on Centos

I am using a passphrase-protected SSL certificate for my Apache server. Whenever I restart Apache it asks for the passphrase; I can enter it manually, but how do I restart the service with an Ansible module (either systemd or command)? How can I pass my passphrase in Ansible to start the service?
To make Apache receive the passphrase every time it restarts, add this to httpd.conf:
SSLPassPhraseDialog exec:/path/to/passphrase-file
and in your /path/to/passphrase-file file:
#!/bin/sh
echo "the passphrase"
answer from https://serverfault.com/questions/71043/ssl-password-on-apache2-restart

Cannot validate certificate for ip because it doesn't contain any IP SANs

I have installed OpenShift 3 with Docker and Kubernetes using the Ansible installer.
After the installation I want to create my Docker registry on my master, but I get the following error (I read it has something to do with SSL, but I can't find a solution):
commands (from the sample):
[root@ip-10-0-0-x centos]# export CURL_CA_BUNDLE=`pwd`/openshift.local.config/master/ca.crt
[root@ip-10-0-0-x centos]# sudo chmod a+rwX openshift.local.config/master/admin.kubeconfig
[root@ip-10-0-0-x centos]# sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
[root@ip-10-0-0-x centos]# oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
error:
error: error getting client: couldn't read version from server: Get https://10.0.0.x:8443/api: x509: cannot validate certificate for 10.0.0.x because it doesn't contain any IP SANs
additional info
[root@ip-10-0-0-x centos]# kubectl version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
[root@ip-10-0-0-191 centos]# oc get services
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   172.30.0.1   <none>        443/TCP   <none>     1d
[root@ip-10-0-0-x centos]# kubernetes apiserver
F0924 12:15:13.674745 75545 server.go:223] No --service-cluster-ip-range specified
The Ansible installer should generate certs for you that have the right IPs in them. Your local kubeconfig file (the one oadm uses to connect to the server) should also have been generated by the Ansible installer; can you verify that is the case? The file is ~/.kube/config. Does it point to the system that the Ansible installer used? Are you using an IaaS for OpenShift, deploying to local machines, or Vagrant?
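To see which names and IPs the serving certificate actually contains, you can inspect it directly (a hedged check; 10.0.0.x is the placeholder master IP from the question, and 8443 is the API port shown in the error):
openssl s_client -connect 10.0.0.x:8443 </dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
If the Subject Alternative Name extension is missing or lists no IP entries, the certificate needs to be regenerated with the master's IP included.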