I have a backup script that stores the latest backup timestamp at a URL like http://myaws.com/LATEST; the file contains only a string representing the timestamp, for instance "201402230400". The same script stores the actual backups at http://myaws.com/201402230400/mycompany-dump-201402230400.gz and http://myaws.com/201402230400/mycompany-data-201402230400.tgz.
The thing is, I'm creating a Puppet class that will read those URLs and restore the files on my new VM based on the LATEST timestamp value. What I'm missing is how to build a URL from content stored in a file:
define download ($uri, $timeout = 300) {
  exec { "download $uri":
    path    => '/usr/bin',
    command => "wget --timestamping -q '$uri' -O $name",
    creates => $name,
    timeout => $timeout,
  }
}

download { "$latest_file":
  uri     => "http://myaws.com/LATEST",
  timeout => 900;
}

download { "$data_file":
  uri     => "http://myaws.com/file($latest_file)/mycompany-data-file($latest_file).tgz",
  timeout => 900;
}
The call file($latest_file) is not working as expected. What am I doing wrong?
I think you'll want to use generate to get the LATEST timestamp rather than exec inside of a custom type. Something like this perhaps (note that you need to change the format of the uri for the download as well):
$latest_file = generate(
  '/usr/bin/curl',
  '-s',
  'http://myaws.com/LATEST'
)
define download ($uri, $timeout = 300) {
  exec { "download $uri":
    path    => '/usr/bin',
    command => "wget --timestamping -q '$uri' -O $name",
    creates => $name,
    timeout => $timeout,
  }
}

download { "$data_file":
  uri     => "http://myaws.com/${latest_file}/mycompany-data-${latest_file}.tgz",
  timeout => 900;
}
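One caveat: generate() returns the command's raw stdout, so if the LATEST file ends with a newline, that newline will end up inside the interpolated URL. A minimal sketch of stripping it, assuming the puppetlabs-stdlib module (which provides the chomp function) is available:

$latest_file = chomp(generate(
  '/usr/bin/curl',
  '-s',
  'http://myaws.com/LATEST'
))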
I have the following file resource in my puppet manifest:
file { 'example.dat':
  path   => "/path/to/example.dat",
  owner  => devops,
  mode   => "0644",
  source => "/path/to/example.txt",
}
I want to run the above snippet only when the .txt file is present in a particular directory. Otherwise, I do not want the snippet to run.
How do I go about doing that in puppet?
In general, you would create a custom fact that returns true when the directory in question satisfies your condition. For example:
Facter.add('contains_txt') do
  setcode do
    ! Dir.glob("/path/to/dir/*.txt").empty?
  end
end
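For the agents to see this fact, it would typically live in a module's lib/facter directory (for example modules/mymodule/lib/facter/contains_txt.rb, a hypothetical path) so that pluginsync distributes it before catalog compilation.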
Then you would write:
if $facts['contains_txt'] {
  file { 'example.dat':
    path   => "/path/to/example.dat",
    owner  => devops,
    mode   => "0644",
    source => "/path/to/example.txt",
  }
}
I want to add the current date to the filename of every object coming into my S3 bucket.
My current config looks like this:
input {
  s3 {
    access_key_id => "some_key"
    secret_access_key => "some_access_key"
    region => "some_region"
    bucket => "mybucket"
    interval => "10"
    sincedb_path => "/tmp/sincedb_something"
    backup_add_prefix => '%{+yyyy.MM.dd.HH}'
    backup_to_bucket => "mybucket"
    additional_settings => {
      force_path_style => true
      follow_redirects => false
    }
  }
}
Is there a way to use the current date in backup_add_prefix => '%{+yyyy.MM.dd.HH}'? The current syntax does not work; it produces "%{+yyyy.MM.dd.HH}test_file.txt" in my bucket.
Though it's not supported in the s3 input plugin directly, it can be achieved. Use the following steps (a sketch of the patched method follows the steps):
Go to the Logstash home path.
Open the file vendor/bundle/jruby/2.3.0/gems/logstash-input-s3-3.4.1/lib/logstash/inputs/s3.rb. The exact path will depend on your Logstash version.
Look for the method backup_to_bucket.
There is a line backup_key = "#{@backup_add_prefix}#{object.key}"
Add the following lines before the above line:
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")
Now change the backup_key to "#{@backup_add_prefix}#{date_s3}#{object.key}"
Now you are done. Restart your Logstash pipeline. It should be able to achieve the desired result.
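For reference, a minimal sketch of what the patched section of backup_to_bucket might look like; the surrounding plugin code is elided, and the exact placement is from logstash-input-s3 3.4.1, so treat it as illustrative:

def backup_to_bucket(object)
  # ... existing plugin code ...
  t = Time.new
  date_s3 = t.strftime("%Y.%m.%d")  # current date, e.g. "2019.03.08"
  backup_key = "#{@backup_add_prefix}#{date_s3}#{object.key}"
  # ... rest of the method unchanged ...
end

Keep in mind that a patch like this lives inside the installed gem, so it will be overwritten whenever the plugin is upgraded or reinstalled.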
I'm trying to install ActiveMQ using Puppet. The package comes as a tarball. How can I make sure each and every file is pushed (recursively) from Puppet, and that the service is running? It has its own executable in the 'bin' dir.
I would ask: is it essential to install ActiveMQ from a tarball? It'd probably be easier to manage as a package, via a yum or apt install.
Managing tarballs is always going to be more difficult, especially when updating versions or dealing with issues like failed downloads.
I would recommend using an existing activemq module from the forge:
https://forge.puppet.com/modules?utf-8=%E2%9C%93&sort=latest_release&q=activemq
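If a distro or vendor repository provides ActiveMQ, the package route can be as simple as this sketch (the package and service names here are assumptions; check what your platform actually ships):

# assumes a repository that provides an 'activemq' package and service
package { 'activemq':
  ensure => installed,
}

service { 'activemq':
  ensure  => running,
  enable  => true,
  require => Package['activemq'],
}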
To give you a general idea of how it might look, here's some basic code that could work:
$activemq_home = "/usr/local/activemq"
$activemq_version = "5.4.3"

package { "java-1.6.0-openjdk":
  ensure => installed,
}

user { "activemq":
  ensure     => present,
  home       => $activemq_home,
  managehome => false,
  shell      => "/bin/sh",
}

group { "activemq":
  ensure  => present,
  require => User["activemq"],
}

# Declared so the requires below can resolve.
file { $activemq_home:
  ensure  => directory,
  owner   => "activemq",
  group   => "activemq",
  require => User["activemq"],
}

Exec { path => ["/usr/local/bin", "/usr/bin", "/bin"] }

$puppet_cache = "/usr/local/src/gitorious"

file { $puppet_cache:
  ensure => directory,
  owner  => "root",
  group  => "root",
}

# Note the double quotes: Puppet does not interpolate variables
# inside single-quoted strings.
exec { 'download_amq_src':
  unless  => "/usr/bin/test -e /tmp/apache-activemq-${activemq_version}-bin.tar.gz",
  command => "cd /tmp && /usr/bin/wget http://archive.apache.org/dist/activemq/apache-activemq/${activemq_version}/apache-activemq-${activemq_version}-bin.tar.gz",
  require => File[$activemq_home],
}

# Unpack the archive into the activemq home directory; the tarball
# extracts to apache-activemq-<version>, so skip if that already exists.
exec { 'unpack_amq_src':
  unless  => "/usr/bin/test -d ${activemq_home}/apache-activemq-${activemq_version}",
  command => "cd ${activemq_home} && /bin/tar -xf /tmp/apache-activemq-${activemq_version}-bin.tar.gz",
  require => Exec['download_amq_src'],
}

# Assumes File["/etc/activemq.conf"] is declared elsewhere.
file { "/etc/init.d/activemq":
  ensure  => file,
  mode    => "0755",
  owner   => "root",
  group   => "root",
  content => template("activemq/etc/init.d/activemq.erb"),
  require => File["/etc/activemq.conf"],
}

service { "activemq":
  ensure  => running,
  enable  => true,
  require => File["/etc/init.d/activemq"],
}

file { "activemq.xml":
  path    => "${activemq_home}/conf/activemq.xml",
  ensure  => present,
  mode    => "0644",
  owner   => "activemq",
  group   => "activemq",
  content => template("activemq/activemq.xml.erb"),
  require => File["/etc/init.d/activemq"],
  notify  => Service["activemq"],
}
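If you do stick with the tarball, the puppet/archive module from the Forge can manage the download and extraction declaratively instead of hand-rolled execs. A minimal sketch, assuming that module is installed and the target directory exists:

archive { '/tmp/apache-activemq-5.4.3-bin.tar.gz':
  ensure       => present,
  source       => 'http://archive.apache.org/dist/activemq/apache-activemq/5.4.3/apache-activemq-5.4.3-bin.tar.gz',
  extract      => true,
  extract_path => '/usr/local/activemq',
  creates      => '/usr/local/activemq/apache-activemq-5.4.3',
  cleanup      => true,
}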
I'm trying to call a defined type from a Puppet module several times to deploy multiple files from a given repository, but I'm getting this error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate declaration: File[/bin/deploy_artifacts.rb] is already declared in file /etc/puppet/modules/deploy_artifacts/manifests/init.pp:11; cannot redeclare at /etc/puppet/modules/deploy_artifacts/manifests/init.pp:11 on node node.example.com
This is the init.pp manifest of the module:
define deploy_artifacts (
  $repository
) {
  notify { "The UUAA is in repository: $repository": }

  file { "/bin/deploy_artifacts.rb":
    ensure => present,
    owner  => root,
    group  => root,
    mode   => 700,
    source => "puppet:///modules/deploy_artifacts/deploy_artifacts.rb",
  }

  exec { "Deployment":
    require   => File["/bin/deploy_artifacts.rb"],
    command   => "/usr/bin/time /bin/deploy_artifacts.rb $repository",
    logoutput => true,
  }
}
Now the node manifest:
node "node.example.com" {
deploy_artifacts {'test-ASO':
repository => 'test-ASO',
}
deploy_artifacts {'PRUEBA_ASO':
repository => 'PRUEBA_ASO',
}
}
I tried to rewrite the whole module, putting the common piece of code (the file resource) into init.pp and the exec into another manifest, but when I call deploy_artifacts more than once it throws the same duplication error.
How can I rewrite the code to ensure the file is on the client node before executing all the instances of the defined deploy_artifacts, without duplications?
Is there another solution rather than declaring a dedicated class only for the file? Thank you!
Try this:
The file:
class deploy_artifacts {
  file { "/bin/deploy_artifacts.rb":
    ensure => present,
    owner  => root,
    group  => root,
    mode   => 700,
    source => "puppet:///modules/deploy_artifacts/deploy_artifacts.rb",
  }
}
The type:
define deploy_artifacts::repository ($repository) {
  include deploy_artifacts

  # Give each exec a unique title; a fixed title like "Deployment" would
  # hit the same duplicate-declaration error on the second instance.
  exec { "Deployment ${repository}":
    command   => "/usr/bin/time /bin/deploy_artifacts.rb $repository",
    logoutput => true,
    require   => File["/bin/deploy_artifacts.rb"],
  }
}
The node definition:
node "node.example.com" {
deploy_artifacts::repository {'test-ASO':
repository => 'test-ASO',
}
deploy_artifacts::repository {'PRUEBA_ASO':
repository => 'PRUEBA_ASO',
}
}
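This works because classes in Puppet are singletons: every instance of deploy_artifacts::repository can include deploy_artifacts, but the file resource it contains is only ever declared once. Defined types, by contrast, declare their contents anew on every instantiation, which is why the original code collided on File[/bin/deploy_artifacts.rb].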
We're receiving logs using Logstash with the following configuration:
input {
  udp {
    type => "logs"
    port => 12203
  }
}

filter {
  grok {
    type => "tracker"
    pattern => '%{GREEDYDATA:message}'
  }
  date {
    type => "tracker"
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}

output {
  tcp {
    type => "logs"
    host => "host"
    port => 12203
  }
}
We're then picking the logs up on the machine "host" with the following settings:
input {
  tcp {
    type => "logs"
    port => 12203
  }
}

output {
  pipe {
    command => "python /usr/lib/piperedis.py"
  }
}
From here, we parse the lines and put them into a Redis database. However, we've discovered an interesting problem.
Logstash 'wraps' the log message in a JSON-style package, i.e.:
{\"@source\":\"source/\",\"@tags\":[],\"@fields\":{\"timestamp\":[\"2013-09-16 15:50:47,440\"],\"thread\":[\"ajp-8009-7\"],\"level\":[\"INFO\"],\"classname\":[\"classname\"],\"message\":[\"message\"]}}
Then, on receiving it and passing it on, the next machine takes that whole package as the message and puts it in another wrapper! We're only interested in the actual log message and none of the other stuff (source path, source, tags, fields, timestamp, etc.).
Is there a way we can use filters or something to do this? We've looked through the documentation but can't find any way to just pass the raw log lines between instances of Logstash.
Thanks,
Matt
The logstash documentation is wrong - it indicates that the default "codec" is plain but in fact it doesn't use a codec - it uses an output format.
To get a simpler output, change your output to something like
output {
  pipe {
    command => "python /usr/lib/piperedis.py"
    message_format => "%{message}"
  }
}
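On newer Logstash versions, where message_format has been removed, the plain codec's format option achieves the same thing; a sketch, untested against your setup:

output {
  pipe {
    command => "python /usr/lib/piperedis.py"
    codec   => plain { format => "%{message}" }
  }
}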
Why not just extract those messages from the JSON your script already receives on stdin?

import sys
import json

line = sys.stdin.readline()
line_json = json.loads(line)
line_json['@message']  # will be your @message