I've been struggling with signing Puppet agents for two days now. The problem is as follows:
On the master I remove the agent's existing certificate with
puppet cert clean esx-poc-1.xxx.de
On the agent I delete the whole ssl directory with
rm -rf /var/lib/puppet/ssl/
After running one of the following commands on the agent...
puppet certificate generate esx-poc-1.xxx.de --ca-location remote
puppet agent --server puppetmaster.int.xxx.com --waitforcert 60 --test
...I can list the certificates on the master with:
puppet cert list --all
The output is:
"esx-poc-1.xxx.de" (SHA256)
71:72:D8:3E:09:9E:B1:5C:DA:78:A8:B8:A1:2B:E4:09:B8:00:8A:AF:49:02:CC:B2:B5:C3:25:79:59:0D:A8:F5
+ "puppetmaster.int.xxx.com" (SHA256) 7B:00:8C:4F:CE:B2:0D:2F:A1:BB:A7:C4:25:B0:11:01:2B:EC:90:46:D1:CB:BE:AA:AD:3F:B4:70:0C:83:3F:78
(alt names: "DNS:puppet", "DNS:puppet.xxx.de",
"DNS:puppetmaster.int.xxx.com")
After signing the agent with:
puppet cert sign esx-poc-1.xxx.de
The fingerprint differs from the one above:
"esx-poc-1.xxx.de" (SHA256) 49:F6:59:FD:3C:28:C6:54:7F:6E:A7:56:56:DB:64:9A:E2:08:10:90:11:83:7A:A6:0E:E1:CD:39:F0:E0:1C:25
Is that correct?
Performing an agent run afterwards results in the following error:
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 400 on SERVER: Could not retrieve facts for esx-poc-1.xxx.de: Failed to submit 'replace facts' command for esx-poc-1.xxx.de to PuppetDB at puppetmaster.int.xxx.com:8081: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [certificate signature failure for /CN=puppetmaster.int.xxx.com]
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/last_run.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppi_projects.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/iptables.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to submit 'replace facts' command for esx-poc-1.xxx.de to PuppetDB at puppetmaster.int.xxx.com:8081: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [certificate signature failure for /CN=puppetmaster.int.xxx.com]
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Thanks for any help!
I ran into this exact same issue myself. The problem ended up being that the puppetdb-terminus package was at version 1.1.0 while puppetdb itself was still at 1.0.5.
After downgrading puppetdb-terminus to 1.0.5, everything operated normally.
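A rough sketch of how to check and align the versions, assuming a Debian/Ubuntu master with the Puppet Labs apt repository (the exact package revision below is illustrative; use whatever 1.0.5 build your repository provides):
# compare the installed versions of puppetdb and its terminus
dpkg -l puppetdb puppetdb-terminus
# pin the terminus package to the same release as puppetdb
apt-get install puppetdb-terminus=1.0.5-1puppetlabs1
The key point is simply that the terminus package on the master must match the PuppetDB release it talks to.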
In Puppet 3.4 I noticed that this error can be thrown if the hostnames are not set properly.
For example, I had two Debian boxes: one was named debian1 and the other debian2 in the hosts file, but both had /etc/hostname set to debian. After I changed their names with hostname and set them in /etc/hostname, they worked just fine.
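A minimal sketch of that fix for one of the boxes (the name is just the one from my setup; substitute your own so it matches the certname the agent uses):
# make the running hostname and /etc/hostname agree with the certname
hostname debian1
echo debian1 > /etc/hostname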
This might be a dumb question, but do you have a node definition for this machine? I.e.,
node 'esx-poc-1.xxx.de' {
.....
}
I had this error after changing the permissions of files in /etc/puppet.
Changing the owner back to 'pe-puppet' (this was Puppet Enterprise) solved it for me.
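A hedged example of restoring that ownership on a Puppet Enterprise master (the user/group and the exact path may differ on your installation):
# restore ownership of the Puppet configuration tree to the PE service account
chown -R pe-puppet:pe-puppet /etc/puppet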
After completely reinstalling puppetdb it's finally working...
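For reference, a rough sketch of such a reinstall on a Debian/Ubuntu master (package names are the Puppet Labs ones; adjust for your distribution, and note the SSL helper's name differs between PuppetDB releases):
# remove and reinstall PuppetDB and its terminus
apt-get purge puppetdb puppetdb-terminus
apt-get install puppetdb puppetdb-terminus
# regenerate PuppetDB's SSL files from the master's certificates
puppetdb-ssl-setup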
I am running a knife solo cook and started getting the error below. Some searching suggests it is due to the Let's Encrypt CA certificates needing updates. I updated those on the server and can wget the URL below just fine, but I still get this error with Chef. I wasn't sure if there is some cache (I did clear out the local-mode-cache dir) or something else I am missing here. Any help would be great! Thanks.
================================================================================
Error executing action add on resource 'postgresql_repository[pg repo]'
================================================================================
OpenSSL::SSL::SSLError
----------------------
apt_repository[postgresql_org_repository] (/home/ubuntu/chef-solo/local-mode-cache/cache/cookbooks/postgresql/resources/repository.rb line 76) had an error: OpenSSL::SSL::SSLError: remote_file[/home/ubuntu/chef-solo/local-mode-cache/cache/https___download_postgresql_org_pub_repos_apt_ACCC4CF8_asc] (/opt/chef/embedded/lib/ruby/gems/2.5.0/gems/chef-14.1.1/lib/chef/provider/apt_repository.rb line 199) had an error: OpenSSL::SSL::SSLError: SSL Error connecting to https://download.postgresql.org/pub/repos/apt/ACCC4CF8.asc - SSL_connect returned=1 errno=0 state=error: certificate verify failed (certificate has expired)
Resource Declaration:
---------------------
# In /home/ubuntu/chef-solo/local-mode-cache/cache/cookbooks/rails_app/recipes/postgresql_server_single.rb
Update - I was able to solve this by editing /opt/chef/embedded/ssl/certs/cacert.pem on the server and removing the DST Root CA X3 certificate.
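One hedged way to do that edit (back up the bundle first; the block to delete is the one whose heading names DST Root CA X3):
# keep a copy of the original bundle before editing it
cp /opt/chef/embedded/ssl/certs/cacert.pem /opt/chef/embedded/ssl/certs/cacert.pem.bak
# then open cacert.pem and delete the expired "DST Root CA X3" entry, i.e. the
# -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- block under that heading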
I want to add a new control plane node to the cluster.
So I run this on an existing control plane server:
kubeadm token create --print-join-command
Then I run this command on the new control plane node:
kubeadm join 10.0.0.151:8443 --token m3g8pf.gdop9wz08yhd7a8a --discovery-token-ca-cert-hash sha256:634db22bc69b47b8f2b9f733d2f5e95cf8e56b349e68ac611a56d9da0cf481b8 --control-plane --apiserver-advertise-address 10.0.0.10 --apiserver-bind-port 6443 --certificate-key 33cf0a1d30da4c714755b4de4f659d6d5a02e7a0bd522af2ebc2741487e53166
I got this message:
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase control-plane-prepare/download-certs: error downloading certs: the Secret does not include the required certificate or key - name: external-etcd.crt, path: /etc/kubernetes/pki/apiserver-etcd-client.crt
I run this on an existing production control plane node:
kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0a3f5486c3b9303a4ace70ad0a9870c2605d67eebcd500d68a5e776bbd628a3b
I re-run this command on the new control plane node:
kubeadm join 10.0.0.151:8443 --token m3g8pf.gdop9wz08yhd7a8a --discovery-token-ca-cert-hash sha256:634db22bc69b47b8f2b9f733d2f5e95cf8e56b349e68ac611a56d9da0cf481b8 --control-plane --apiserver-advertise-address 10.0.0.10 --apiserver-bind-port 6443 --certificate-key 0a3f5486c3b9303a4ace70ad0a9870c2605d67eebcd500d68a5e776bbd628a3b
I got the same message:
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase control-plane-prepare/download-certs: error downloading certs: the Secret does not include the required certificate or key - name: external-etcd.crt, path: /etc/kubernetes/pki/apiserver-etcd-client.crt
To see the stack trace of this error execute with --v=5 or higher
What am I doing wrong?
I had all the certs installed on the new node before doing this:
# ls /etc/kubernetes/pki/
apiserver.crt apiserver.key ca.crt front-proxy-ca.crt front-proxy-client.key
apiserver-etcd-client.crt apiserver-kubelet-client.crt ca.key front-proxy-ca.key sa.key
apiserver-etcd-client.key apiserver-kubelet-client.key etcd front-proxy-client.crt sa.pub
I didn't see how to specify the etcd cert files:
Usage:
kubeadm init phase upload-certs [flags]
Flags:
--certificate-key string Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
--config string Path to a kubeadm configuration file.
-h, --help help for upload-certs
--kubeconfig string The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. (default "/etc/kubernetes/admin.conf")
--skip-certificate-key-print Don't print the key used to encrypt the control-plane certificates.
--upload-certs Upload control-plane certificates to the kubeadm-certs Secret.
Global Flags:
--add-dir-header If true, adds the file directory to the header of the log messages
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--one-output If true, only write logs to their native severity level (vs also writing to each lower severity level)
--rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
-v, --v Level number for the log level verbosity
You also need to pass the --config flag to your kubeadm init phase command (use sudo if needed). So instead of:
kubeadm init phase upload-certs --upload-certs
you should for example run:
kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml
This topic is also covered in the Uploading control-plane certificates to the cluster section of the kubeadm docs.
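Since the error mentions external-etcd.crt, the config you pass in needs to describe the external etcd cluster so the matching client certificate and key get uploaded into the kubeadm-certs Secret. A minimal, illustrative sketch of such a kubeadm-config.yaml (the Kubernetes version, etcd endpoint, and file paths are assumptions and must match your cluster; the control-plane endpoint is the one from the join command above):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.1
controlPlaneEndpoint: "10.0.0.151:8443"
etcd:
  external:
    endpoints:
      - https://10.0.0.20:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
With a file like that in place, kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml knows to include the external etcd client certificate and key in the Secret, which is what the joining node was complaining about.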
I am trying to index this page using the Apache Nutch Selenium driver, but when running the parsechecker command it throws an SSLHandshakeException.
bin/nutch parsechecker -Dplugin.includes='protocol-selenium|parse-tika' -Dselenium.grid.binary=/usr/bin/geckodriver -Dselenium.enable.headless=true -followRedirects -dumpText https://us.vwr.com/store/product?partNum=68300-353
Fetch failed with protocol status: exception(16), lastModified=0: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
When I tried protocol-httpclient, Nutch was able to crawl the content of the page, but it does not crawl the dynamic content because httpclient does not support it. I have also tried protocol-interactiveselenium, but with that I get the same SSL handshake issue.
I have also downloaded the certificate and installed it in the JRE, but I am still facing the same issue.
Version: Nutch 1.16
Update-1
When I checked hadoop.log, it shows the error below:
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
... 12 more
I think this is related to NUTCH-2649. For protocol-httpclient and protocol-http, Nutch currently has a dummy TrustManager for the connection (i.e. we don't validate the certificates). As described in NUTCH-2649, protocol-selenium does not use the custom TrustManager and tries to properly validate the certificate.
That being said, adding the certificate to the JVM should solve the issue for this specific domain. Perhaps Selenium does not have access to the list of allowed certificates.
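A hedged example of importing the downloaded site certificate into the JVM's default trust store (the certificate file name, alias, and JRE path are illustrative and depend on your environment):
# import the server certificate into the JRE cacerts trust store
keytool -importcert -alias vwr-us -file us.vwr.com.crt -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
Make sure the JVM that actually runs the Selenium-driven fetch is the one whose cacerts you modified.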
I am running Puppet 3.7. The certs are expiring for me, so I have updated them (after creating a backup, so I am able to get back to the original state, and that's fine). After updating the certs on the puppetmaster using this, updating the certs on the agent using this, and updating the certs on puppetdb using this, I am unable to run the puppet agent successfully on a client box. It gives me the following error:
root@ip-10-181-36:/var/lib/puppet# sudo puppet agent -t
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in 'issue_deprecation_warning')
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /node/ip-10-181-36 [find] authenticated at :39
Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /catalog/ip-10-181-36 [find] authenticated at :1
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /report/ip-10-181-36 [save] authenticated at :91
I am stuck at this point, and no amount of googling, reading the docs, or digging through the logs is helping. Does anyone have any ideas?
Trying to update dependencies on a Phoenix app by running: mix deps.get
The only STDOUT is:
07:20:21.642 [error] SSL: :certify: ssl_handshake.erl:1507:Fatal error: certificate expired
07:20:21.674 [error] SSL: :certify: ssl_handshake.erl:1507:Fatal error: certificate expired
Registry update failed (http_error)
{:failed_connect, [{:to_address, {'repo.hex.pm', 443}}, {:inet, [:inet], {:tls_alert, 'certificate expired'}}]}
** (Mix) Failed to fetch registry
I have updated elixir and erlang with brew update but that hasn't helped.
Since the certificate for repo.hex.pm is not actually expired but was issued very recently, the error might be caused by a wrong time on your computer. Make sure your system clock is correct and try again.
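A hedged way to check and correct this on macOS (assuming a Mac, since the question mentions Homebrew; the NTP server is illustrative):
# show the current system time, then re-sync it from an NTP server
date
sudo sntp -sS time.apple.com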