I have a Chef (solo) recipe which generates a CSS file and puts it somewhere inside the web root directory (/vagrant/css on the VM in my case). The problem is that the recipe needs to know the absolute path of the Vagrant synced directory on the VM - that is, the folder where the Vagrantfile lives, which by default maps to /vagrant inside the VM.
I know how to set that path:
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", "/synced/dir/on/vm"
end
But the problem is how to let the recipe know about /synced/dir/on/vm.
Currently I use this:
Vagrant.configure("2") do |config|
config.vm.provision :chef_solo do |chef|
chef.json = {
"base_directory" => "/vagrant" # THIS IS IT
}
end
end
It lets me use node["base_directory"] inside the recipe code, but there is a downside: if I were to write multiple recipes, it would be inconvenient to use node["base_directory"] in every one of them. It is much better than hardcoding the path, but it forces me to use the same key in chef.json for every recipe.
Furthermore, if I wished to share my recipe, I would force users to put that "base_directory" => "/vagrant" key/value pair in their Vagrantfile.
Is there an API method to get this synced directory path on the VM from the recipe code? Or more generally: is there a way to get Vagrant-specific properties from Chef recipes?
I scoured Vagrant docs, but there seems to be just a single page on that topic, and because it is specific to Vagrant, there is no related information in Chef docs either.
So it seems there's some disconnect on the understanding of how this is meant to work.
When writing a recipe, it's common to use node attributes to define where things will end up - such as your web root directory.
I can conceive of the recipe's attributes file containing:
default['base_directory'] = '/var/www/html'
Which would apply to many production servers out there.
Then, when writing your recipes, use this attribute to send the files where you want them to, e.g.:
cookbook_file "#{node['base_directory']/css/myfile.css" do
owner "root"
...
end
When sharing your cookbook, anyone executing this on a server that has the /var/www/html directory will receive your file in the correct place.
In your Vagrantfile, in order to override the node's base_directory attribute to the synced directory, you can do something like this:
SYNCED_FOLDER = "/synced/dir/on/vm"
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", SYNCED_FOLDER
config.vm.provision :chef_solo do |chef|
chef.json = {
"base_directory" => SYNCED_FOLDER
}
end
end
However, you mentioned that you didn't want to have to specify base_directory in your recipes, so I'd ask: what node attribute are you using to drive the target location of your web root?
If you're using something like the apache2 cookbook from the Community site, then there's already an attribute for this: node['apache']['docroot_dir'], so using that you can control where things are referenced.
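For example, if you were going the apache2 cookbook route, the override in the Vagrantfile could look something like the sketch below; the recipe name and the node['apache']['docroot_dir'] attribute come from that community cookbook, and the synced path is only an illustration:
SYNCED_FOLDER = "/synced/dir/on/vm"

Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", SYNCED_FOLDER

  config.vm.provision :chef_solo do |chef|
    chef.add_recipe "apache2"
    # override the cookbook's existing attribute instead of inventing a new key
    chef.json = {
      "apache" => { "docroot_dir" => SYNCED_FOLDER }
    }
  end
end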
Is there a way to migrate my Host's public ssh key to my VM?
The use case is:
I have a user who has a public SSH key which has access to a certain repository.
I am creating a VM that will be distributed to other developers (who have access with their SSH keys to this repository)
I would like to automate git cloning of the repository so it happens during exec-once...
What should I do that involves as few manual steps as possible?
PS: I am using https://puphpet.com/ to generate the Vagrant machine for me - I am not editing the Vagrantfile directly.
As mentioned in the comments, you can use a Vagrant file provisioner to copy your public key to your VM.
In your Vagrantfile, add:
Vagrant.configure("2") do |config|
# ... other configuration
config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "/home/vagrant/id_rsa.pub"
end
I'm using Terraform to automate the build-out of an AWS EC2-based Docker host and then using its remote-exec provisioner to download a Dockerfile, build and run it.
I'd hoped to integrate this with Serverspec but am struggling to work out two things:
The best way to pass the external dns of the newly created AWS EC2 instance to Serverspec.
How to configure the SSH options for Serverspec so that it executes correctly on an Amazon Linux AMI using the ec2-user account.
I would normally connect to the EC2 instance using a pre-defined key pair and never use a password; however, Serverspec seems to run commands on the server in a sudo -p format.
Any advice much appreciated.
Contents of spec_helper.rb
require 'serverspec'
require 'net/ssh'
set :ssh_options, :user => 'ec2-user'
I'm also using an edited Rakefile, as follows, to force the correct EC2 external DNS (masked):
require 'rake'
require 'rspec/core/rake_task'
hosts = %w(
ec2-nn-nn-nn-nnn.eu-west-1.compute.amazonaws.com
)
set :ssh_options, :user => 'ec2-user'
task :spec => 'spec:all'
namespace :spec do
task :all => hosts.map {|h| 'spec:' + h.split('.')[0] }
hosts.each do |host|
short_name = host.split('.')[0]
role = short_name.match(/[^0-9]+/)[0]
desc "Run serverspec to #{host}"
RSpec::Core::RakeTask.new(short_name) do |t|
ENV['TARGET_HOST'] = host
t.pattern = "spec/Nexus/*_spec.rb"
end
end
end
You could make the address an output in Terraform. The Terraform documentation on outputs gives an example doing just that to expose the public DNS of an AWS instance, named web in this case:
output "address" {
value = "${aws_instance.web.public_dns}"
}
Then you can get this value from the command line after a terraform apply with terraform output address.
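One way to feed that into the Rakefile above, as a sketch (it assumes terraform has already been applied in the working directory and is on the PATH; on newer Terraform versions you may need terraform output -raw address), is to shell out for the value instead of hard-coding the hosts list:
# in the Rakefile, replacing the hard-coded hosts array
hosts = [ `terraform output address`.strip ]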
You can set the sudo password with the config option :sudo_password. If the ec2-user can run sudo without a password, set this to ''. (See this blog post for an example.) Or pass it in the SUDO_PASSWORD environment variable, described here: http://serverspec.org/tutorial.html
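Putting the SSH pieces together, a minimal spec_helper.rb sketch could look like this; the key file path is an assumption (use the pem file for your EC2 key pair), and TARGET_HOST is the environment variable your Rake task already sets:
require 'serverspec'
require 'net/ssh'

set :backend, :ssh

host = ENV['TARGET_HOST']

options = Net::SSH::Config.for(host)
options[:user] = 'ec2-user'
options[:keys] = ['~/.ssh/my-ec2-keypair.pem']  # assumption: path to your key pair's private key

set :host, host
set :ssh_options, options

# ec2-user can typically sudo without a password on Amazon Linux
set :sudo_password, ''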
I'm new to Puppet. First I installed and configured Puppet.
If I check my certs on my master:
+ "puppet" (SHA256) FB:57:B2:B7:18:99:0F:15:DB:F0:E1:E8:12:31:99:75:BF:05:46:8D:78:A9:C4:DD:68:9E:A4:xxx (alt names: "DNS:puppet", "DNS:puppetmaster.example.com")
+ "puppetclient.example.com" (SHA256) 64:4F:0C:B2:EA:53:6B:2D:E3:5B:11:DB:80:E3:DF:AD:A6:AF:B5:B9:DB:05:6F:79:5D:E5:8Exxx
I try to apply some site.pp.
Here is my init.pp
class apache2 {
package { 'apache2':
ensure => installed,
}
service { 'apache2':
ensure => true,
enable => true,
require => Package['apache2'],
}
}
Here is my site.pp
node 'puppetclient.example.com' {
include apache2
}
I try:
sudo puppet apply site.pp and I get the following error
Error: Could not find default node or by name with 'puppet, puppet.example.com, puppet.example' on node puppet
Error: Could not find default node or by name with 'puppet, puppet.example.com, puppet.example' on node puppet
It seems it tries to execute my .pp on a host which does not exist (probably default hostnames). What am I doing wrong? I want it to be executed on my puppetclient.example.com.
Thanks
The error and the hostname from your comment imply that you are running the apply command on the wrong host, i.e. the master, not the remote client.
If you want to execute the command on a different host than the puppet master (server), you need to install the puppet agent on the remote client and run the command on the client, i.e. sudo puppet agent -t. This requires the agent to be configured.
Puppet uses data from Facter to determine the node name. Facter data is populated from the actual hostname, /etc/hosts and /etc/sysconfig/network, plus other information. You can read more about it on Puppet's Facter page.
The easiest way to check the hostname is to run the hostname command, or facter hostname, or facter fqdn.
Below is how Puppet checks the node name, from the official website:
A given node will only get the contents of one node definition, even if two node statements could match a node’s name. Puppet will do the following checks in order when deciding which definition to use:
If there is a node definition with the node’s exact name, Puppet will use it.
If there is a regular expression node statement that matches the node’s name, Puppet will use it. (If more than one regex node matches, Puppet will use one of them, with no guarantee as to which.)
If the node’s name looks like a fully qualified domain name (i.e. multiple period-separated groups of letters, numbers, underscores and dashes), Puppet will chop off the final group and start again at step 1. (That is, if a definition for www01.example.com isn’t found, Puppet will look for a definition matching www01.example.)
Puppet will use the default node.
Thus, for the node www01.example.com, Puppet would try the following, in order:
www01.example.com -- A regex that matches www01.example.com
www01.example -- A regex that matches www01.example
www01 -- A regex that matches www01
default
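For completeness, given the matching rules above, puppet apply would only succeed on the master itself if site.pp contained a definition matching the master's name, or a default node. A minimal sketch of the latter:
node 'puppetclient.example.com' {
  include apache2
}

# catch-all for hosts that match nothing else (such as the master); it does nothing
node default {
}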
The command puppet apply is for constructing a catalog from local manifest files and data and applying it to the local machine. You are running it on your master, and your site manifest does not provide a node block that can be matched to that machine, so Puppet errors out. If you want to use puppet apply then you must arrange for the needed manifests and data to be present on the machine you want to configure, and you must run puppet apply there.
If you want to use a master / agent configuration, then you must run the master service or the puppetserver service on some designated machine, and all the manifests and data must reside there. Other machines do not need to have manifests or data, and they configure themselves by running puppet agent (locally), not puppet apply. The agent is often run as a daemon, but it can also be run in one-off mode, which many people use to run it under control of a separate scheduler, such as cron.
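For illustration only (the schedule and binary path are assumptions), such a cron entry might look like */30 * * * * /usr/bin/puppet agent --onetime --no-daemonize, which performs a single agent run every 30 minutes.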
Obviously, you cannot configure a remote machine simply by running a command on the master, without some form of cooperation from the remote machine -- neither with Puppet nor with any other system you might imagine. Nor would you want your machines to be susceptible to such unrestricted remote control.
If you're looking for bona fide remote control then you could consider Puppet's "MCollective" product. It requires cooperation from the machines to be controlled, just as Puppet does, but it provides for ad hoc and on-demand control, which Puppet does not do. Among many other things, you can use it to run puppet agent remotely, on demand.
I'm not very familiar with ansible.
The problem I have at the moment is the following:
I have a master-nodes environment with multiple nodes.
Ansible on my master needs to access my nodes but can't:
SSH Error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm able to SSH from my master to each node but only by using a key:
ssh -i key-to-node.pem centos@ec2...
Is it possible to set something up to allow Ansible to connect to the created hosts?
You can define your pem file in your ansible.cfg:
[defaults]
private_key_file = key-to-node.pem
If you don't have one, create one in the same location as your playbook, or in /etc/ansible/ansible.cfg.
If you have different keys for your hosts, you can also define the key in your inventory:
ansible_ssh_private_key_file=key-to-node.pem
Also, if you had configured ssh to work without explicitly passing the private key file (in your .ssh/config), Ansible would work automatically.
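For reference, such an ~/.ssh/config entry might look like the following; the host pattern and key path are placeholders for your environment:
Host ec2-*
  User centos
  IdentityFile ~/path/to/key-to-node.pem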
Adding an example from the OpenShift page, as mentioned in the comments.
I personally have never configured it this way (as I have set up everything via ~/.ssh/config), but according to the docs it should work like this:
[masters]
master.example.com ansible_ssh_private_key_file=1.pem
# host group for nodes, includes region info
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" ansible_ssh_private_key_file=2.pem
Alternatively, since you have multiple nodes and maybe the same key for all of them, you can define a separate [nodes:vars] section:
[nodes:vars]
ansible_ssh_private_key_file=2.pem
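Whichever way the key is supplied, a quick check such as ansible nodes -m ping (using the [nodes] group from the inventory above) should confirm that Ansible can reach the hosts.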
I use Vagrant with a third-party Linux box.
The box has the default vagrant/vagrant credentials.
In my Vagrantfile I want it to use SSH, so I have this:
config.vm.provision :shell, :path => "bootstrap.sh"
config.ssh.private_key_path = "~/.ssh/id_rsa"
config.ssh.forward_agent = true
In my bootstrap script I want to add my public key to authorized_keys. This works if I do it post VM creation.
But when I re-provision the VM from scratch, the VM has not yet received the public key through my bootstrap shell script.
How can I have vagrant install my public key in authorized_keys and authenticate with vagrant/vagrant until this has happened? Or is there a better way?
I found something that works.
It is based on this question: Vagrant insecure by default?, where we have:
config.ssh.private_key_path = ["#{ENV['HOME']}/.ssh/id_rsa", \
"#{ENV['HOME']}/.vagrant.d/insecure_private_key"]
This seems to have the effect that Vagrant tries keys until it finds one that works (the linked example enumerates host filesystem paths too - very nice indeed).
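Combining that with the original provisioning goal, a sketch of a full Vagrantfile might look like this; the key paths and the vagrant user's home directory are assumptions, and the shell step simply appends the copied public key to authorized_keys once the initial insecure-key (or vagrant/vagrant) login has succeeded:
Vagrant.configure("2") do |config|
  # try the personal key first, then fall back to the stock insecure key
  config.ssh.private_key_path = ["#{ENV['HOME']}/.ssh/id_rsa",
                                 "#{ENV['HOME']}/.vagrant.d/insecure_private_key"]
  config.ssh.forward_agent = true

  # copy the host's public key into the guest...
  config.vm.provision "file", source: "~/.ssh/id_rsa.pub",
                              destination: "/home/vagrant/host_id_rsa.pub"

  # ...and append it to authorized_keys
  config.vm.provision "shell", privileged: false, inline: <<-SHELL
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    cat ~/host_id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
  SHELL

  config.vm.provision :shell, :path => "bootstrap.sh"
end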