Chef Provisioning SSH times out when used with chef-zero

I am using chef-zero on my Windows machine to SSH into a Red Hat Linux machine and execute a command that is inside a recipe. When I run the code below, it tries to SSH for 120 seconds and then times out. Any idea why this is happening?
require 'chef/provisioning'
require 'chef/provisioning/ssh_driver'

with_driver 'ssh'

machine "ssh" do
  attribute "short_dns", new_resource.short_dns
  attribute "long_dns", load_balancer_name
  recipe "mycookbook::add_short_dns"
  machine_options :transport_options => {
    'is_windows' => false,
    'ip_address' => '10.16.99.124',
    'username' => 'myusername',
    'ssh_options' => {
      'password' => 'mypassword'
    }
  }
  converge true
end
Here is the error:
- been waiting 110/120 -- sleeping 10 seconds for ssh (10.16.99.124 on ssh:C:/Users/user/.chef/provisioning/ssh) to be connectable ...[2015-06-23T14:54:33-05:00] INFO: Executing sudo pwd on myusername@10.16.99.124
================================================================================
Error executing action `converge` on resource 'machine[ssh]'
================================================================================
RuntimeError
------------
Machine ssh (10.16.99.124 on ssh:C:/Users/user/.chef/provisioning/ssh) did not become ready within 120 seconds

I'm still fighting with Chef Provisioning myself, so this may not be as helpful as I would like. One thing is that each of these options is a key/value pair, so you'll want to declare your variables differently (see below):
require 'chef/provisioning/ssh_driver'

with_driver 'ssh'

with_machine_options :transport_options => {
  :username => 'centos',
  :ssh_options => {
    :password => 'password'
  }
}

Amir, does the C:/Users/user/.chef/provisioning/ssh directory exist on your workstation? If not, try creating it, make sure its permissions are correct, and then try again.

Try the snippet below, and note the extra options that will help you debug the issue:
1) The DEBUG log level lets you see the SSH communication.
2) If you don't override the prefix, sudo is used by default.
3) Sometimes when you recreate a remote server, your known_hosts file still remembers the old host key, and the next time you SSH in after the recreation you get the message "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED". The SSH session actually hangs, but you don't see that on the client side, so it's better to ignore known_hosts.
with_machine_options :transport_options => {
  :is_windows => false,
  :username => 'YOURUSER',
  :ssh_options => {
    :password => 'YOURPASSWRD',
    :verbose => Logger::DEBUG,               # DEBUG level shows the SSH communication
    :user_known_hosts_file => '/dev/null'    # ignore known_hosts between recreations
  },
  :options => {
    :prefix => ''                            # empty prefix disables the default sudo
  }
}

Related

LDAP with starttls on redmine

Redmine does not use StartTLS by default. When I configured my LDAP server to require TLS, Redmine failed to authenticate users.
With OpenLDAP you might see a "Confidentiality required" error message in the Redmine logs.
Make sure LDAPS is NOT enabled: ldaps:// is a different encryption scheme from StartTLS. With StartTLS, an unencrypted connection is upgraded to an encrypted one over the same port.
With Redmine 3.2.4, open the file redmine/app/models/auth_source_ldap.rb, search for "encryption", and find:
options = { :host => self.host,
            :port => self.port,
            :encryption => (self.tls ? :simple_tls : nil)
          }
When LDAPS is unchecked, we want to use StartTLS:
:encryption => (self.tls ? :simple_tls : :start_tls)
Save and restart your web server. Redmine should now use an encrypted connection.
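To confirm outside Redmine that the directory accepts StartTLS, a quick test with the net-ldap gem (the same library Redmine uses underneath) might look like this; the host is a placeholder:

require 'net/ldap'

ldap = Net::LDAP.new(
  :host => 'ldap.example.com',               # placeholder host
  :port => 389,                              # plain LDAP port; StartTLS upgrades it in place
  :encryption => { :method => :start_tls }
)

# An anonymous bind is enough to prove the TLS upgrade works.
puts ldap.bind ? 'StartTLS bind OK' : ldap.get_operation_result.message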
I know this is old, but I just had a similar problem with Redmine 4.1.2. I had to make a similar change to get StartTLS to work without LDAPS:
In redmine/app/models/auth_source_ldap.rb, search for this block of code:
if tls
  options[:encryption] = {
    :method => :simple_tls,
    # Always provide non-empty tls_options, to make sure, that all
    # OpenSSL::SSL::SSLContext::DEFAULT_PARAMS as well as the default cert
    # store are used.
    :tls_options => { :verify_mode => verify_peer? ? OpenSSL::SSL::VERIFY_PEER : OpenSSL::SSL::VERIFY_NONE }
  }
end
and update it with an else clause, as follows:
if tls
  options[:encryption] = {
    :method => :simple_tls,
    # Always provide non-empty tls_options, to make sure, that all
    # OpenSSL::SSL::SSLContext::DEFAULT_PARAMS as well as the default cert
    # store are used.
    :tls_options => { :verify_mode => verify_peer? ? OpenSSL::SSL::VERIFY_PEER : OpenSSL::SSL::VERIFY_NONE }
  }
else
  options[:encryption] = {
    :method => :start_tls,
    :tls_options => { :verify_mode => OpenSSL::SSL::VERIFY_NONE }
  }
end

Install a package from tarball using puppet

I'm trying to install ActiveMQ using Puppet. The package comes as a tarball. How can I make Puppet push every file (recursively) and make sure the service is running? It has its own executable in the 'bin' directory.
I would ask whether it is essential to install ActiveMQ from a tarball. It would probably be easier to manage as a package, such as a yum or apt install.
Managing tarballs is always going to be more difficult, especially when updating versions or dealing with issues like failed downloads.
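For comparison, a minimal sketch of the package route, assuming an 'activemq' package is available in your configured yum/apt repositories (the package name is an assumption; on RHEL it typically requires an extra repository):

package { 'activemq':
  ensure => installed,
}

service { 'activemq':
  ensure  => running,
  enable  => true,
  require => Package['activemq'],
}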
I would recommend using an existing ActiveMQ module from the Forge:
https://forge.puppet.com/modules?utf-8=%E2%9C%93&sort=latest_release&q=activemq
To give you a general idea of how it might look, here's some basic code that could work:
$activemq_home = "/usr/local/activemq"
$activemq_version = "5.4.3"

package { "java-1.6.0-openjdk":
  ensure => installed,
}

user { "activemq":
  ensure     => present,
  home       => $activemq_home,
  managehome => false,
  shell      => "/bin/sh",
}

group { "activemq":
  ensure  => present,
  require => User["activemq"],
}
Exec { path => ["/usr/local/bin", "/usr/bin", "/bin"] }

file { $activemq_home:
  ensure => directory,
  owner  => "root",
  group  => "root",
}

# Download the archive (double quotes are needed so ${...} interpolates)
exec { 'download_amq_src':
  unless  => "/usr/bin/test -e /tmp/apache-activemq-${activemq_version}-bin.tar.gz",
  command => "cd /tmp && /usr/bin/wget http://archive.apache.org/dist/activemq/apache-activemq/${activemq_version}/apache-activemq-${activemq_version}-bin.tar.gz",
  require => File[$activemq_home],
}

# Unpack the archive into the activemq home directory
exec { 'unpack_amq_src':
  unless  => "/usr/bin/test -d ${activemq_home}/apache-activemq-${activemq_version}",
  command => "cd ${activemq_home} && /bin/tar -xzf /tmp/apache-activemq-${activemq_version}-bin.tar.gz",
  require => Exec['download_amq_src'],
}
file {"/etc/init.d/activemq":
ensure => file,
mode => 755,
owner => "root",
group => "root",
content => template("activemq/etc/init.d/activemq.erb"),
require => File["/etc/activemq.conf"],
}
service{"activemq":
enable => true,
ensure => running,
require => File["/etc/init.d/activemq"],
}
file { "activemq.xml":
path => "$activemq_home/conf/activemq.xml",
ensure => present,
mode => 644,
owner => "activemq",
group => "activemq",
content => template("activemq/activemq.xml.erb"),
require => File["/etc/init.d/activemq"],
notify => Service["activemq"],
}
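As an aside, if you do stick with the tarball, the puppet/archive module from the Forge can replace the download/unpack exec pair above; a rough sketch, assuming that module is installed and reusing the variables from the snippet:

archive { "/tmp/apache-activemq-${activemq_version}-bin.tar.gz":
  ensure       => present,
  source       => "http://archive.apache.org/dist/activemq/apache-activemq/${activemq_version}/apache-activemq-${activemq_version}-bin.tar.gz",
  extract      => true,
  extract_path => $activemq_home,
  creates      => "${activemq_home}/apache-activemq-${activemq_version}",
  cleanup      => true,   # remove the downloaded tarball after extraction
  require      => File[$activemq_home],
}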

Puppet - Adding default nodes

I'm new to Puppet and following this tutorial to get into it:
http://www.pindi.us/blog/getting-started-puppet
I created an SSH module (/modules/ssh/manifests/init.pp) and added the following to the base nodes.pp (in puppet/manifests/):
node default {
  include ssh
}
The ssh module looks like this:
class ssh {
  include ssh::install, ssh::config, ssh::service
}

class ssh::install {
  package { "ssh":
    ensure => present,
  }
}

class ssh::config {
  file { "/etc/ssh/sshd_config":
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0600',
    source => "puppet:///modules/ssh/sshd_config",
    notify => Class["ssh::service"],
  }
}

class ssh::service {
  service { "ssh":
    ensure     => running,
    hasstatus  => true,
    hasrestart => true,
    enable     => true,
  }
}

Class["ssh::install"] -> Class["ssh::config"] -> Class["ssh::service"]
On the Puppet machine I linked in the module path with:
sudo puppet apply --modulepath=/vagrant/modules /vagrant/manifests/site.pp
which works.
If I then apply nodes.pp, I get the error:
Could not find class ssh for precise32 at /vagrant/manifests/nodes.pp:2 on node precise32...
Everything looks right, but I don't know where my error is. It worked before (I installed SSH on the machine via Puppet yesterday), so I must have messed something up.
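For what it's worth, the error reads as if the module path was not passed on the second run. Applying nodes.pp with the same flag (an assumption based on the working site.pp invocation above) would look like:
sudo puppet apply --modulepath=/vagrant/modules /vagrant/manifests/nodes.pp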

How to use Vagrant & Puppet with https

I have been trying for hours, but I just can't figure out how to enable an HTTPS connection with Vagrant and Puppet.
I have a folder files/httpd which contains different config files, like vhosts. It was a preset, with an empty ssl and an empty vhosts_ssl folder. I put my SSL certificate in the ssl folder and my httpd-ssl.conf in the vhosts_ssl folder. Those files were working locally with my MAMP web server.
In the Puppet config I wrote the following:
file { "/etc/httpd/vhosts":
replace => true,
ensure => present,
source => "/vagrant/files/httpd/vhosts",
recurse => true,
}
file { "/etc/httpd/vhosts_ssl":
replace => true,
ensure => present,
source => "/vagrant/files/httpd/vhosts_ssl/httpd-ssl.conf",
}
file { "/etc/httpd/ssl":
replace => true,
ensure => present,
source => "/vagrant/files/httpd/ssl",
recurse => true,
}
The normal vhosts are working, so I thought I could copy the structure and just enter the new paths for ssl and vhosts_ssl.
But it's not working. Maybe you know how to fix this.
Thanks.
I think I found a solution, but I have no time to test it right now.
Here is the link to the possible solution:
https://forge.puppetlabs.com/puppetlabs/apache
I will update my question/answer once I have tried it.
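For illustration, an SSL vhost with that module might look roughly like this (domain, docroot, and certificate paths are placeholders):

class { 'apache': }

apache::vhost { 'example.com ssl':
  servername => 'example.com',                # placeholder domain
  port       => 443,
  docroot    => '/var/www/example',           # placeholder docroot
  ssl        => true,
  ssl_cert   => '/etc/httpd/ssl/server.crt',  # placeholder certificate
  ssl_key    => '/etc/httpd/ssl/server.key',  # placeholder key
}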

How Do You Expire A Memcached Entry On Heroku

I am using action caching on my Rails 3 app on Heroku with the :expires_in option. I've tried calling expire_action directly in the controller upon update, and also within a sweeper. Nothing seems to expire the cache entry properly.
In my controller:
caches_action :embed, :if => Proc.new { |c| c.request.format.js? || c.request.format.rss? }, :expires_in => 5.minutes
In my action:
expire_action :action => :embed, :format => :js
And I've also attempted it in a sweeper, using the URL generator to get the exact key:
expire_action obj_embed_url(@obj.unique_token)
I wonder if it is Heroku's Varnish cache layer, which you can't expire. (The cache clearly expires after the 5 minutes, because I can see the content update.) It appears that I have the Memcached add-on configured correctly (using the Dalli gem; config.cache_store = :dalli_store), and I can see the appropriate environment variables:
$ heroku config |grep MEM
MEMCACHE_PASSWORD => xxxxxxxxxxxxxxxxx
MEMCACHE_SERVERS => xxx.xxx.northscale.net
MEMCACHE_USERNAME => appxxxxxx%40heroku.com
What am I missing here?
Finally figured this out.
Heroku's paths must not have been matching up between the cache create and expire calls. If you specify the path explicitly when the cache entry is created, and call that same path in the expire, it works. I also had to use "expire_fragment" instead of "expire_action". Here's my code:
in your controller:
caches_action :load, :up, :juice, :fresh, :cache_path => :custom_cache_path.to_proc

def custom_cache_path
  path = "#{params[:controller]}/#{params[:action]}"
  path += "/#{params[:id]}" if params[:id]
  path += "/#{params[:sha]}" if params[:sha]
  path
end
in the expiring method:
expire_fragment "serve/up/#{#site.id}"
expire_fragment "serve/fresh/#{#site.secret}"