Puppet conditional only if file exists in a particular directory

I have the following file resource in my puppet manifest:
file { 'example.dat':
  path   => '/path/to/example.dat',
  owner  => 'devops',
  mode   => '0644',
  source => '/path/to/example.txt',
}
I want to apply the above snippet only when the .txt file is present in a particular directory. Otherwise, I do not want the snippet to run.
How do I go about doing that in Puppet?

In general, you would create a custom fact that returns true when the directory in question satisfies your condition. For example:
Facter.add('contains_txt') do
  setcode do
    ! Dir.glob('/path/to/dir/*.txt').empty?
  end
end
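A note on placement: for the agent to receive this fact via pluginsync, the Ruby file would normally live under a module's lib/facter directory; the module name below is just a placeholder:

# e.g. modules/mymodule/lib/facter/contains_txt.rb
# verify on the agent with: facter -p contains_txt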
Then you would write:
if $facts['contains_txt'] {
  file { 'example.dat':
    path   => '/path/to/example.dat',
    owner  => 'devops',
    mode   => '0644',
    source => '/path/to/example.txt',
  }
}
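For a quick local test you can override the fact from the environment, since Facter picks up FACTER_-prefixed variables (the manifest name here is just an example; note that facts set this way are strings, so use this only to exercise the truthy branch):

FACTER_contains_txt=true puppet apply example.pp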

Related

How to update Puppet ini_setting or ini_subsetting resource without section header in conf file?

I wonder if someone can help me with my conf file problem. I need to get the output shown below, but I am running into problems using the inifile module. I have put my code and test output below. My service won't start because of the '[]' section header. Your comments and ideas are highly appreciated. Thanks!
Expected output (without a section header):
cat /etc/service.conf
info something something...
setting1=value1
Tests
testscript1.pp
ini_setting { 'setx':
  ensure            => present,
  path              => '/etc/service.conf',
  key_val_separator => '=',
  setting           => 'setting1',
  value             => 'value1',
}
output of testscript1.pp
cat /etc/service.conf
info something something...
[setx]
setting1=value1
testscript2.pp
$defaults = {
  ensure            => present,
  path              => '/etc/service.conf',
  key_val_separator => '=',
}
$settings = {
  ' ' => {
    'setting1' => 'value1',
  },
}
create_ini_settings($settings, $defaults)
output of testscript2.pp
cat /etc/service.conf
info something something...
[ ]
setting1=value1
Since I really wanted to get rid of the [] characters, because they cause an error during the service restart, I used section_prefix => '#'. The first Puppet agent run is smooth and working. The problem now is that when the agent runs on its schedule (say, every hour), it appends the settings to the conf file again, because it cannot find a section header. I decided to use ini_subsetting instead, but I'm getting errors with it.
testscript3.pp
ini_subsetting { 'subset':
  ensure            => present,
  path              => '/etc/service.conf',
  section           => '',
  key_val_separator => '=',
  setting           => 'setting1',
  subsetting        => '',
  value             => 'value1',
}
output of testscript3.pp
Error: Failed to apply catalog: Parameter path failed on Ini_subsetting[subset]: File paths must be fully qualified, not '/etc/service.conf'.
Any suggestions or advice are highly appreciated.
Thank you.
If the file you are managing does not have section markers of some kind, then it is not an INI file, not even in the generalized sense that the puppetlabs/inifile module supports. To the best of my knowledge, you'll need to choose a different approach to managing the file.
You could consider templating the whole file, or writing a custom type and provider for it, but before going to so much trouble, you should consider whether a good old file_line resource from puppetlabs/stdlib would be adequate for your needs.
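For example, a minimal file_line sketch for the setting in question (assuming puppetlabs/stdlib is installed; the match parameter keeps the resource from appending a duplicate line when the value changes):

file_line { 'service.conf setting1':
  ensure => present,
  path   => '/etc/service.conf',
  line   => 'setting1=value1',
  match  => '^setting1=',
}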
Have you tried your testscript1.pp with section => ''?
It would look like this:
ini_setting { 'setx':
  ensure            => present,
  path              => '/etc/service.conf',
  key_val_separator => '=',
  section           => '',
  setting           => 'setting1',
  value             => 'value1',
}
And the output would be:
cat /etc/service.conf
info something something...
setting1=value1
Or you could try to use force_new_section_creation => false, as it is true by default and forces the creation of a section, as stated in the module’s reference.
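That second suggestion would look roughly like this (a sketch, assuming the parameter behaves as described in the puppetlabs/inifile reference):

ini_setting { 'setx':
  ensure                     => present,
  path                       => '/etc/service.conf',
  key_val_separator          => '=',
  setting                    => 'setting1',
  value                      => 'value1',
  force_new_section_creation => false,
}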
As for your third example, it probably fails because of the blank subsetting parameter; the ini_subsetting resource type requires both the setting and subsetting parameters to be set.

Logstash current date logstash.conf as backup_add_prefix (s3 input plugin)

I want to add the current date to every filename that is incoming to my s3 bucket.
My current config looks like this:
input {
  s3 {
    access_key_id       => "some_key"
    secret_access_key   => "some_access_key"
    region              => "some_region"
    bucket              => "mybucket"
    interval            => "10"
    sincedb_path        => "/tmp/sincedb_something"
    backup_add_prefix   => '%{+yyyy.MM.dd.HH}'
    backup_to_bucket    => "mybucket"
    additional_settings => {
      force_path_style => true
      follow_redirects => false
    }
  }
}
Is there a way to make the current date work in backup_add_prefix => '%{+yyyy.MM.dd.HH}'? The syntax above is not interpreted; it produces a literal "%{+yyyy.MM.dd.HH}test_file.txt" prefix in my bucket.
Though the s3 input plugin does not support this directly, it can be achieved by patching the plugin. Use the following steps:
1. Go to the Logstash home path.
2. Open the file vendor/bundle/jruby/2.3.0/gems/logstash-input-s3-3.4.1/lib/logstash/inputs/s3.rb. The exact path will depend on your Logstash version.
3. Look for the method backup_to_bucket.
4. There is a line: backup_key = "#{@backup_add_prefix}#{object.key}"
5. Add the following lines before that line:
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")
6. Now change backup_key to "#{@backup_add_prefix}#{date_s3}#{object.key}".
That's it. Restart your Logstash pipeline and it should produce the desired result.
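Putting the steps together, the patched lines inside backup_to_bucket would look roughly like this (a sketch against logstash-input-s3 3.4.1; @backup_add_prefix and object come from the surrounding plugin code):

# current date, e.g. "2024.05.01", inserted between the prefix and the object key
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")
backup_key = "#{@backup_add_prefix}#{date_s3}#{object.key}"

Keep in mind that this patch lives inside the installed gem, so it will be lost whenever the plugin is upgraded or reinstalled.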

Install a package from tarball using puppet

I'm trying to install ActiveMQ using Puppet. The package comes as a tarball. How can I make sure that every file is pushed (recursively) by Puppet, and that the service is running? It has its own executable in its 'bin' dir.
I would ask: is it essential to install ActiveMQ from a tarball? It'd probably be easier to manage as a package, such as a yum or apt install.
Managing tarballs is always going to be more difficult, especially when updating versions or dealing with issues like failed downloads.
I would recommend using an existing activemq module from the forge:
https://forge.puppet.com/modules?utf-8=%E2%9C%93&sort=latest_release&q=activemq
To give you a general idea of how it might look, here's some basic code that could work:
$activemq_home    = '/usr/local/activemq'
$activemq_version = '5.4.3'

package { 'java-1.6.0-openjdk':
  ensure => installed,
}

user { 'activemq':
  ensure     => present,
  home       => $activemq_home,
  managehome => false,
  shell      => '/bin/sh',
}

group { 'activemq':
  ensure  => present,
  require => User['activemq'],
}

Exec { path => ['/usr/local/bin', '/usr/bin', '/bin'] }

$puppet_cache = '/usr/local/src/gitorious'

file { $puppet_cache:
  ensure => directory,
  owner  => 'root',
  group  => 'root',
}

file { $activemq_home:
  ensure => directory,
  owner  => 'activemq',
  group  => 'activemq',
}

# Download the archive to /tmp unless it is already there
exec { 'download_amq_src':
  unless  => "/usr/bin/test -e /tmp/apache-activemq-${activemq_version}-bin.tar.gz",
  command => "cd /tmp && /usr/bin/wget http://archive.apache.org/dist/activemq/apache-activemq/${activemq_version}/apache-activemq-${activemq_version}-bin.tar.gz",
  require => File[$activemq_home],
}

# Unpack the archive into the ActiveMQ home directory; 'creates' keeps this idempotent
exec { 'unpack_amq_src':
  creates => "${activemq_home}/apache-activemq-${activemq_version}",
  command => "cd ${activemq_home} && /bin/tar -xzf /tmp/apache-activemq-${activemq_version}-bin.tar.gz",
  require => Exec['download_amq_src'],
}

file { '/etc/init.d/activemq':
  ensure  => file,
  mode    => '0755',
  owner   => 'root',
  group   => 'root',
  content => template('activemq/etc/init.d/activemq.erb'),
  require => Exec['unpack_amq_src'],
}

service { 'activemq':
  ensure  => running,
  enable  => true,
  require => File['/etc/init.d/activemq'],
}

file { 'activemq.xml':
  ensure  => present,
  path    => "${activemq_home}/conf/activemq.xml",
  mode    => '0644',
  owner   => 'activemq',
  group   => 'activemq',
  content => template('activemq/activemq.xml.erb'),
  require => File['/etc/init.d/activemq'],
  notify  => Service['activemq'],
}
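As an aside, if you do stay with the tarball, the download-and-unpack execs could be replaced with a single archive resource from the puppet/archive forge module, which handles both steps idempotently (a sketch, assuming that module is installed):

archive { "/tmp/apache-activemq-${activemq_version}-bin.tar.gz":
  ensure       => present,
  source       => "http://archive.apache.org/dist/activemq/apache-activemq/${activemq_version}/apache-activemq-${activemq_version}-bin.tar.gz",
  extract      => true,
  extract_path => $activemq_home,
  creates      => "${activemq_home}/apache-activemq-${activemq_version}",
  require      => File[$activemq_home],
}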

Resolve duplicate declaration on puppet

I'm trying to call a defined type from my Puppet module several times, to deploy multiple files from a given repository, but I'm getting this error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: File[/bin/deploy_artifacts.rb] is already declared in file /etc/puppet/modules/deploy_artifacts/manifests/init.pp:11; cannot redeclare at /etc/puppet/modules/deploy_artifacts/manifests/init.pp:11 on node node.example.com
This is the init.pp manifest of the module:
define deploy_artifacts (
  $repository,
) {
  notify { "The UUAA is in the repository: ${repository}": }

  file { '/bin/deploy_artifacts.rb':
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0700',
    source => 'puppet:///modules/deploy_artifacts/deploy_artifacts.rb',
  }

  exec { 'Deployment':
    command   => "/usr/bin/time /bin/deploy_artifacts.rb ${repository}",
    logoutput => true,
    require   => File['/bin/deploy_artifacts.rb'],
  }
}
Now the node manifest:
node "node.example.com" {
deploy_artifacts {'test-ASO':
repository => 'test-ASO',
}
deploy_artifacts {'PRUEBA_ASO':
repository => 'PRUEBA_ASO',
}
}
I tried to rewrite the whole module, putting the common piece of code (the file resource) into init.pp and the exec into another manifest, but when I call deploy_artifacts more than once it throws the same duplicate-declaration error.
How can I rewrite the code to ensure that the file is on the client node before all instances of the defined type execute, without duplicate declarations?
Is there another solution rather than declaring a dedicated class only for the file? Thank you!
Try this:
The file:
class deploy_artifacts {
  file { '/bin/deploy_artifacts.rb':
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0700',
    source => 'puppet:///modules/deploy_artifacts/deploy_artifacts.rb',
  }
}
The type:
define deploy_artifacts::repository ($repository) {
  include deploy_artifacts

  exec { "Deployment of ${repository}":
    command   => "/usr/bin/time /bin/deploy_artifacts.rb ${repository}",
    logoutput => true,
    require   => File['/bin/deploy_artifacts.rb'],
  }
}
Because the class is pulled in with include (rather than a resource-like declaration), every instance of the defined type can include it safely and the file is declared exactly once. Note that the exec title now contains ${repository}; with the original fixed title 'Deployment', a second instance of the type would have triggered the same duplicate-declaration error for the exec.
The node definition:
node 'node.example.com' {
  deploy_artifacts::repository { 'test-ASO':
    repository => 'test-ASO',
  }
  deploy_artifacts::repository { 'PRUEBA_ASO':
    repository => 'PRUEBA_ASO',
  }
}
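If you would rather avoid the wrapper class entirely, puppetlabs/stdlib also offers the ensure_resource function, which declares a resource only if it is not already in the catalog (a sketch, assuming stdlib is available):

define deploy_artifacts::repository ($repository) {
  ensure_resource('file', '/bin/deploy_artifacts.rb', {
    'ensure' => 'present',
    'owner'  => 'root',
    'group'  => 'root',
    'mode'   => '0700',
    'source' => 'puppet:///modules/deploy_artifacts/deploy_artifacts.rb',
  })

  exec { "Deployment of ${repository}":
    command   => "/usr/bin/time /bin/deploy_artifacts.rb ${repository}",
    logoutput => true,
    require   => File['/bin/deploy_artifacts.rb'],
  }
}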

How to use Vagrant & Puppet with https

I have been trying for hours, but I just can't figure out how to enable an HTTPS connection with Vagrant and Puppet.
I have a folder files/httpd which contains different config files, like the vhosts. It came as a preset, with an empty ssl and an empty vhosts_ssl folder. I put my SSL certificate in the ssl folder and my httpd-ssl.conf in the vhosts_ssl folder. Those files were working locally with my MAMP web server.
In the Puppet config I wrote the following:
file { "/etc/httpd/vhosts":
replace => true,
ensure => present,
source => "/vagrant/files/httpd/vhosts",
recurse => true,
}
file { "/etc/httpd/vhosts_ssl":
replace => true,
ensure => present,
source => "/vagrant/files/httpd/vhosts_ssl/httpd-ssl.conf",
}
file { "/etc/httpd/ssl":
replace => true,
ensure => present,
source => "/vagrant/files/httpd/ssl",
recurse => true,
}
The normal vhosts are working, therefore I thought I could copy the structure and just enter the new paths for ssl and vhosts_ssl.
But it's not working. Maybe you know how to fix this.
Thanks.
I think I found a solution, but I have no time to test it right now.
Here is the link to the possible solution:
https://forge.puppetlabs.com/puppetlabs/apache
I will update my question/answer when I have tried it.
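For reference, serving a vhost over HTTPS with that module looks roughly like this (a sketch; the hostname, docroot, and certificate paths are placeholders):

class { 'apache': }

apache::vhost { 'example.local':
  port     => 443,
  docroot  => '/var/www/example',
  ssl      => true,
  ssl_cert => '/etc/httpd/ssl/server.crt',
  ssl_key  => '/etc/httpd/ssl/server.key',
}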