Can't reload Puppet configuration -> unable to connect to Puppet Server (SSL)

I'm using two Vagrant VMs to test some things with Puppet, but when I go to request a cert, I get a cryptic error message that I can't find any information about.
I should note that, in keeping with good Linux server administration, I'm using /var/ and /opt/ for storing sensitive cert info, but otherwise it's a standard Puppet setup.
# Client node details
IP: 192.168.250.10
Hostname: client.example.com
Puppet version: 4.3.2
OS: CentOS Linux release 7.0.1406 (on Vagrant)
# Puppet server details
IP: 192.168.250.6
Hostname: puppet-server.example.com
Puppet version: 4.3.2
OS: CentOS Linux release 7.0.1406 (on Vagrant)
# client's and server's /etc/hosts files are identical
192.168.250.5 puppetmaster.example.com
192.168.250.6 puppet.example.com puppet-server.example.com
192.168.250.7 dashserver.example.com dashboard.example.com
192.168.250.10 client.example.com
192.168.250.20 webserver.example.com
# /etc/puppetlabs/puppet/puppet.conf on both client and server
[main]
logdest = syslog
[user]
bucketdir = $clientbucketdir
vardir = /var/opt/puppetlabs/server
ssldir = $vardir/ssl
[agent]
server = puppet.example.com
[master]
certname = puppet.example.com
vardir = /var/opt/puppetlabs/puppetserver
ssldir = $vardir/ssl
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
trusted_server_facts = true
reports = store
cacert = /var/opt/puppetlabs/puppetserver/ssl/certs/ca.pem
cacrl = /var/opt/puppetlabs/puppetserver/ssl/crl.pem
hostcert = /var/opt/puppetlabs/puppetserver/ssl/certs/{puppet, client}.example.com.pem # respectively, obviously
hostprivkey = /var/opt/puppetlabs/puppetserver/ssl/private_keys/{puppet, client}.example.com.pem # respectively, obviously
Finally, the error I get:
$ sudo puppet resource service puppet ensure=stopped enable=false
Notice: /Service[puppet]/ensure: ensure changed 'running' to 'stopped'
service { 'puppet':
ensure => 'stopped',
enable => 'false',
}
$ sudo puppet resource service puppet ensure=running enable=true
Notice: /Service[puppet]/ensure: ensure changed 'stopped' to 'running'
service { 'puppet':
ensure => 'running',
enable => 'true',
}
$ puppet agent --test --server=puppet.example.com
Error: Could not request certificate: Permission denied @ dir_initialize - /etc/puppetlabs/puppet/ssl/private_keys
Exiting; failed to retrieve certificate and waitforcert is disabled
First of all, with this setup Puppet should not be using /etc/puppetlabs/puppet/ssl/private_keys. It's not using my configuration file correctly:
$ puppet config print ssldir
/etc/puppetlabs/puppet/ssl
Next, I went through and regenerated the keys on BOTH the server and the client nodes as prescribed in the Puppet docs. However, I still get the same error, and both the client AND the server still think my $ssldir is /etc/puppetlabs/puppet/ssl when it should be /var/opt/puppetlabs/puppetserver/ssl.
Any thoughts?

You need to specify the ssldir and vardir settings in the [agent] section as well as in [master].
The [user] section only applies to commands such as puppet apply.
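A minimal sketch of what that could look like in /etc/puppetlabs/puppet/puppet.conf, assuming you want the agent to keep its SSL data under a /var/opt path similar to the ones already in the question (the exact directory is an assumption, pick whatever fits your layout):
[agent]
server = puppet.example.com
# assumed agent-side locations, not taken from the original post
vardir = /var/opt/puppetlabs/puppet
ssldir = $vardir/ssl
You can then check which value each section resolves to:
$ puppet config print ssldir --section agent
$ puppet config print ssldir --section master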

Related

Where is Capistrano retrieving the IP Address from?

I have to deploy a rails app after the server had a problem, and the IP Address has changed.
I've updated the IP address in deploy/production.rb, and also in git's remotes, to the correct value, namely 192.168.30.24, but as you can see from the following output, the deployment is failing because it tries to connect to 192.168.30.23.
Where is Capistrano retrieving 192.168.30.23 from?
INFO [fa83a838] Running /usr/bin/env git remote update as code@192.168.30.24
DEBUG [fa83a838] Command: cd /var/www/paperless_office/repo && ( export RBENV_ROOT="~/.rbenv" RBENV_VERSION="2.3.0" GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/paperless_office/git-ssh.sh" ; /usr/bin/env git remote update )
DEBUG [fa83a838] Fetching origin
DEBUG [fa83a838] ssh: connect to host 192.168.30.23 port 22: No route to host
Capfile
# Load DSL and set up stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
require 'capistrano/rbenv'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
production.rb as follows:
role :app, %w{192.168.30.24}
role :web, %w{192.168.30.24}
role :db, %w{192.168.30.24}
server '192.168.30.24', user: 'code', roles: %w{web app}
after 'deploy:publishing', 'deploy:restart'
Thanks
Fixed this by removing the remote repo that Capistrano builds, so that on the next deploy it was rebuilt using the correct IP address.
I was deploying to /var/www/app_name, so the repo to remove was /var/www/app_name/repo.
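For reference, a rough sketch of that cleanup on the deploy target, using the paths from the log above (check the remote first so you can see the stale 192.168.30.23 before deleting anything):
git -C /var/www/paperless_office/repo remote -v   # shows the origin URL Capistrano cached
rm -rf /var/www/paperless_office/repo             # Capistrano re-clones this on the next deploy
Then run cap production deploy again from your workstation.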

Cannot validate certificate for ip because it doesn't contain any IP SANs

I have installed OpenShift3 with Docker and Kubernetes with the ansible installer.
After the installation I want to create my Docker registry on my master, but I get the following error (I read it is something SSL-related, but I can't find a solution):
commands (from the sample):
[root@ip-10-0-0-x centos]# export CURL_CA_BUNDLE=`pwd`/openshift.local.config/master/ca.crt
[root@ip-10-0-0-x centos]# sudo chmod a+rwX openshift.local.config/master/admin.kubeconfig
[root@ip-10-0-0-x centos]# sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
[root@ip-10-0-0-x centos]# oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
error:
error: error getting client: couldn't read version from server: Get https://10.0.0.x:8443/api: x509: cannot validate certificate for 10.0.0.x because it doesn't contain any IP SANs
additional info
[root@ip-10-0-0-x centos]# kubectl version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
[root@ip-10-0-0-191 centos]# oc get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 172.30.0.1 <none> 443/TCP <none> 1d
[root@ip-10-0-0-x centos]# kubernetes apiserver
F0924 12:15:13.674745 75545 server.go:223] No --service-cluster-ip-range specified
The Ansible installer should generate certs for you that include the right IPs. Your local kubeconfig file (the one oadm is using to connect to the server) should also have been generated by the Ansible installer - can you verify that is the case? The file is in ~/.kube/config - does it point to the system that the Ansible installer used? Are you using an IaaS for OpenShift, deploying to local machines, or Vagrant?
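As a quick way to verify that, you could check which API endpoint the client config actually points at and which names/IPs the server certificate carries (a rough sketch; the placeholder 10.0.0.x matches the redacted address in the question):
grep 'server:' ~/.kube/config
oc config view --minify | grep server
echo | openssl s_client -connect 10.0.0.x:8443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'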

Vagrant remote box with libvirt issue

I am trying to make vagrant work with the following setup:
Two machines - one controller and one host
Installed vagrant + the vagrant-nodemaster plugin on the controller (vagrant 1.5.4)
Installed vagrant + vagrant-node + vagrant-libvirt on the host machine
After installation I started the node server on the host machine on an unused port.
Then I pushed the following configuration from the controller to the host (using vagrant remote config upload):
config.vm.define :vm3 do |vm3|
vm3.vm.network :private_network,
:ip => "192.168.170.57",
:libvirt__network_name => "vagrantnw",
:libvirt__dhcp_enabled => false
end
config.vm.provider :libvirt do |libvirt|
libvirt.driver = "qemu"
# leave out host to connect directly with qemu:///system
#libvirt.host = "localhost"
libvirt.connect_via_ssh = false # also needed
libvirt.username = "root"
libvirt.storage_pool_name = "default"
end
config.ssh.username = 'vagrant'
config.ssh.password = 'vagrant'
config.ssh.insert_key = 'true'
config.ssh.private_key_path = '/home/kk/ssh_privkey'
I am expecting that with the above configuration libvirt will create a VM with the IP address 192.168.170.57 and a valid NFS share that can be mapped to the host. Now, these are the issues I am facing:
The VM is always created on the 192.168.121.xx network, with a dynamic IP address assigned in that subnet. I am not able to create the VM on the specific network that I want.
I would like to remotely SSH into the VM using the command 'vagrant remote ssh', or connect to the VM created above from a different host.
I would like to FTP a file to the guest once remote SSH is working. I believe we can do this using Ansible, but I wanted to check whether there is a quick and dirty way to do it through Vagrant.
Thanks
You can change the management network by adding the following lines in the provider definition:
config.vm.provider :libvirt do |libvirt|
...
libvirt.management_network_name = "vagrant-libvirt"
libvirt.management_network_address = "10.75.250.0/25"
end
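If the VM was already created before this change, the management network is typically only picked up when the domain is rebuilt, so (as an assumption about your workflow) you may need to recreate it:
vagrant destroy -f
vagrant up --provider=libvirt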

`ssh` executable not found in any directories in the %PATH%

ERROR:
c:\Users\dhawal.vora>vagrant ssh
`ssh` executable not found in any directories in the %PATH% variable. Is an
SSH client installed? Try installing Cygwin, MinGW or Git, all of which
contain an SSH client. Or use your favorite SSH client with the following
authentication information shown below:
Host: 127.0.0.1
Port: 2222
Username: vagrant
Private key: c:/Users/dhawal.vora/.vagrant/machines/default/virtualbox/private_key
Kindly help????
My Vagrantfile is below:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "precise32"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision "shell", inline: <<-SHELL
sudo apt-get install -y apache2
SHELL
end
Adding C:\Program Files\Git\usr\bin to the PATH environment variable fixes this.
Add it manually, or I believe you could run this in cmd (32-bit Git):
set PATH=%PATH%;C:\Program Files (x86)\Git\usr\bin
updated from @Ygor Thomaz's comments
or (64 bits):
set PATH=%PATH%;C:\Program Files\Git\usr\bin
If this doesn't fix your problem, go through:
Get SSH working on Vagrant/Windows/Git
You can alternatively install OpenSSH from here, and then add ssh.exe to your PATH with:
set PATH=%PATH%;C:\Program Files (x86)\OpenSSH\bin
or
set PATH=%PATH%;C:\Program Files\OpenSSH\bin
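Note that set only changes PATH for the current cmd session. To make it persist, you can either edit the PATH through System Settings or (a rough sketch, using the 64-bit Git path from above) write the user-level PATH with setx:
setx PATH "%PATH%;C:\Program Files\Git\usr\bin"
rem setx truncates values longer than 1024 characters, so prefer the
rem System Settings dialog if your PATH is already long.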
With Windows 10 I also couldn't get the 'set PATH' option to work. But when I amended the PATH variable through System Settings and started a new command prompt it worked fine.
Also, putty worked perfectly after I read the screen which told me to use a username of 'core'.
'core' was a requirement of my configuration which was trying to launch CoreOS.
Adding C:\Program Files\Git\usr\bin to the PATH environment variable didn't work for me, so I configured PuTTY for the SSH connection.
This well-written, illustrative tutorial gives a great overview of ways to set up Vagrant SSH. The first way is via Git, and the second describes how to use PuTTY. It is very easy to follow.
Running Vagrant SSH on Windows
In my case even adding ssh to the PATH didn't solve the problem. What I had to do was connect to the Vagrant box with ssh manually. After executing vagrant up, instead of executing vagrant ssh, I do this:
ssh vagrant@127.0.0.1 -p 2222
And the password is "vagrant"
For getting all the information about the ip, port and user you can use
vagrant ssh-config
Hope this helps somebody...
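If you'd rather not type the password every time, the same manual connection can use the private key that the original error message points at (host, port, user, and key path are all taken from the question; adjust for your machine):
ssh -p 2222 -i C:/Users/dhawal.vora/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1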

Puppet: could not retrieve catalog from remote server

Running sudo puppet agent -t from host: host.internaltest.com
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Another local or imported resource exists with the type and title Host[host.internaltest.com] on node host.internaltest.com
This machine had its SSL certs messed with, so I cleaned it off the master and then, using autosign (bad, bad, I know!), I ran sudo puppet agent -t, which regenerated the SSL cert but also threw this error. Let me know if you need more information; I haven't dealt with this aspect of Puppet much.
Most likely the puppetmaster has this cert in memory. You need to clean the cert both on the client and on the master:
# On the client machine, do this (assuming the Puppet vardir is /var/lib/puppet)
rm -rf /var/lib/puppet/ssl/*/*.pem
# On the puppet master
puppet cert clean host.internaltest.com
# Restart puppet-master
/sbin/service puppetmasterd restart
# If you are using puppet-master behind passenger, you may need to restart httpd
/sbin/service httpd restart
# then run puppet agent on the client to regenerate the cert
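# For example, the same command as in the question:
sudo puppet agent -t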
If one uses an stunnel and a globally set http_proxy, this error can also occur when the request is redirected to the wrong endpoint.
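A rough sketch of how you might confirm and work around that from the shell you run the agent in (assuming the proxy is set via environment variables; variable handling under sudo differs between systems):
echo $http_proxy                 # confirm a global proxy is in play
unset http_proxy https_proxy     # drop it for this shell only
sudo -E puppet agent -t          # -E keeps the now proxy-free environment for the run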