Checking changed status of ansible-pull - automation

I cannot find any documentation on what variables (if any) are set by ansible-pull after it pulls the remote repository.
Example:
$ /usr/bin/ansible-pull -U ssh://git@gitlab/project/ansible.git
...
localhost | CHANGED => {
"after": "349ead2ffacb278baaff9bacdfd0629f30dc4c5a",
"before": "328e9450a6ea408ec684ace3f0a557fa13b8db01",
"changed": true,
"remote_url_changed": false
}
and I want a task (or handler) that reloads a service only if "changed" is true.
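Since the variables set by ansible-pull itself do not appear to be documented, one possible workaround (a sketch only, assuming the playbook that ansible-pull runs may touch the checkout again; the repository URL, checkout path, and service name are placeholders) is to add an explicit git task whose notify fires the handler only when the task reports a change:

# local.yml (sketch) - the playbook run by ansible-pull
- hosts: localhost
  tasks:
    - name: Re-sync the repository so its changed status can be acted on
      git:
        repo: ssh://git@gitlab/project/ansible.git
        dest: /var/lib/ansible-pull-checkout
      register: repo_sync          # repo_sync.changed mirrors the before/after hashes
      notify: reload myservice     # handlers only run when the task reports changed

  handlers:
    - name: reload myservice
      service:
        name: myservice
        state: reloaded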

Cro run throwing connection reset by peer

I am trying to implement the Cro service from the Cro getting started documentation. It compiled fine, but when I tried to access the link in a browser, it showed that the site cannot be reached and threw a "Connection reset by peer" error with no other details. The code is below:
use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Routes;

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    host => '0.0.0.0',
    port => 3001,
    application => routes(),
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]
);
$http.start;
say "Listening at http://server:3001";
react {
    whenever signal(SIGINT) {
        say "Shutting down...";
        $http.stop;
        done;
    }
}
Is there a way to troubleshoot this so that I can identify what the actual error is?
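One way to get more detail (assuming the service is started with the cro development tool, as in the getting-started guide) is to enable Cro's pipeline tracing, which logs what each pipeline component receives and emits:

CRO_TRACE=1 cro run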

icinga2 notifications to cachet

I would like to share with you a way to send notifications from icinga2 to cachet via the API.
Icinga2 version: 2.4.10-1
Cachet version: 2.3.9
First of all, you have to know the ID of the component you want to update (the script below updates the component by ID).
To get the component ID, you can use the curl command:
curl --insecure --request GET --url https://URL/api/v1/components -H "X-Cachet-Token: TOKEN"
URL: the URL of your Cachet installation
TOKEN: the API token of the Cachet member
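The response is JSON; the component IDs appear in the "data" array, roughly in this shape (an illustrative sketch, not real output):

{
  "data": [
    { "id": 1, "name": "My component", "status": 1 }
  ]
}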
Create command in /etc/icinga2/conf.d/commands.conf
object NotificationCommand "cachet-incident-notification-v2" {
  import "plugin-notification-command"
  command = [ PluginDir + "/cachet-notification-v2.sh" ]
  env = {
    "SERVICESTATE" = "$service.state$"
  }
}
Create notification template in /etc/icinga2/conf.d/templates.conf
template Notification "cachet-incident-notification-v2" {
  command = "cachet-incident-notification-v2"
  states = [ OK, Warning, Critical, Unknown ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]
  /*
  period = "24x7"
  */
  interval = 0
}
Create notification in /etc/icinga2/conf.d/notifications.conf
apply Notification "cachet-incident-notification-v2" to Service {
  import "cachet-incident-notification-v2"
  user_groups = host.vars.notification.pager.groups
  assign where service.vars.cachetv2 == "1" && host.vars.cachetv2 == "1"
  interval = 0 # Disable re-notification
}
Add the variable to your service check in /etc/icinga2/conf.d/service/your/service.conf
[...]
vars.cachetv2 = "1"
[...]
Add the variable to your host config file in /etc/icinga2/conf.d/hosts/your/host
[...]
vars.cachetv2 = "1"
[...]
Create the script in /usr/lib/nagios/plugins/cachet-notification-v2.sh
#!/bin/bash
# Some constants
NOW="$(date +'%d/%m/%Y')"
CACHETAPI_URL="https://URL/api/v1/components/<COMPONENT ID>"
CACHETAPI_TOKEN="<TOKEN>"

# Map Icinga2 notification states to Cachet component statuses
# OK       - 1 Operational
# Warning  - 3 Partial outage
# Critical - 4 Major outage
# Unknown  - 2 Performance issues
case "$SERVICESTATE" in
  'OK')
    COMPONENT_STATUS=1
    ;;
  'WARNING')
    COMPONENT_STATUS=3
    ;;
  'CRITICAL')
    COMPONENT_STATUS=4
    ;;
  'UNKNOWN')
    COMPONENT_STATUS=2
    ;;
esac

curl -X PUT -H "Content-Type: application/json;" -H "X-Cachet-Token: ${CACHETAPI_TOKEN}" -d '{"status": "'"${COMPONENT_STATUS}"'"}' "${CACHETAPI_URL}" -k
PS: give the script execute permission.
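For example:

chmod +x /usr/lib/nagios/plugins/cachet-notification-v2.sh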
Check the syntax and reload
/etc/init.d/icinga2 checkconfig && /etc/init.d/icinga2 reload
The result:
When your check results in "CRITICAL", the component status in Cachet will be Major Outage
When your check results in "WARNING", the component status in Cachet will be Partial Outage
When your check results in "OK", the component status in Cachet will be Operational
When your check results in "UNKNOWN", the component status in Cachet will be Performance Issues
I hope it will help.
Nicolas B.

json-server: can we use another key instead of id for POST and PUT requests

I have a fake API for testing on the frontend side.
I have seen that an id is required to POST or PUT your data with the json-server package. My question is: can I use a different key instead of id? For example:
{
  id: 1,        // ---> I want to change this to my custom id
  name: 'Test'
}
Let's look at the CLI options of the json-server package:
$ json-server -h
...
--id, -i Set database id property (e.g. _id) [default: "id"]
...
Let's try starting json-server with a new id called 'customId' (for example):
json-server --id customId testDb.json
Structure of the testDb.json file ($ cat testDb.json):
{
  "messages": [
    {
      "customId": 1,
      "description": "somedescription",
      "body": "sometext"
    }
  ]
}
Make a simple POST request via the $.ajax function (or via Fiddler/Postman/etc.). The Content-Type of the request should be set to application/json; an explanation can be found on the project's GitHub page:
A POST, PUT or PATCH request should include a Content-Type: application/json header to use the JSON in the request body. Otherwise it will result in a 200 OK but without changes being made to the data.
So... make a request from the browser:
$.ajax({
  type: "POST",
  url: 'http://127.0.0.1:3000/messages/',
  contentType: 'application/json',  // send the body as JSON, as the docs require
  data: JSON.stringify({ body: 'body', description: 'description' }),
  success: resp => console.log(resp),
  dataType: 'json'
});
Go to testDb.json and see the results. A new record has been added, and the id was automatically created with the name specified in the --id option of the console command.
{
  "body": "body",
  "description": "description",
  "customId": 12
}
Voila!
I've come up with using custom routes in cases where I need a custom id:
json-server --watch db.json --routes routes.json
routes.json:
{ "/customIDroute/:customID": "/customIDroute?customID=:customID" }
If you start your server using a server.js file (read more about it in the docs), you can define custom ID routes in server.js like this:
// server.js
const jsonServer = require('json-server')
const server = jsonServer.create()
const router = jsonServer.router('db.json')
const middlewares = jsonServer.defaults()

server.use(middlewares)

// custom routes
server.use(jsonServer.rewriter({
  "/route/:id": "/route?customId=:id"
}))

server.use(router)
server.listen(3000, () => {
  console.log('JSON Server is running')
})
And you would start your server with the command:
node server.js
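With that rewrite in place, a request such as the following (an illustrative example using the /route path from the snippet above) is rewritten by json-server to a customId filter query:

curl http://localhost:3000/route/12
# equivalent to GET /route?customId=12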
Internally, getById from lodash-id is used.
If you use the file-based server version, the equivalent of the CLI --id, -i option
is router.db._.id = "customId";
If you want to do it per resource, you can do it with a middleware like this (put it before the others):
server.use((req, res, next) => {
  if (req.url.includes("/resourceName/")) {
    router.db._.id = "code";
  } else {
    router.db._.id = "pk";
  }
  next();
});

Puppet - Adding default nodes

I'm new to Puppet and following this tutorial to get into it:
http://www.pindi.us/blog/getting-started-puppet
I created an SSH module (/modules/ssh/manifests/init.pp) and added the following to the base nodes.pp (puppet/manifests/):
node default {
include ssh
}
The ssh module looks like this:
class ssh {
  include ssh::install, ssh::config, ssh::service
}

class ssh::install {
  package { "ssh":
    ensure => present,
  }
}

class ssh::config {
  file { "/etc/ssh/sshd_config":
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => 600,
    source => "puppet:///modules/ssh/sshd_config",
    notify => Class["ssh::service"],
  }
}

class ssh::service {
  service { "ssh":
    ensure     => running,
    hasstatus  => true,
    hasrestart => true,
    enable     => true,
  }
}

Class["ssh::install"] -> Class["ssh::config"] -> Class["ssh::service"]
On the Puppet machine I linked the module path with:
sudo puppet apply --modulepath=/vagrant/modules /vagrant/manifests/site.pp
which works.
If I then apply the nodes.pp I get the error:
Could not find class ssh for precise32 at /vagrant/manifests/nodes.pp:2 on node precise32...
Everything looks right, but I don't know where my error is.
It worked before, as I installed SSH on the Puppet machine yesterday, but I must have messed something up.
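One thing worth checking (an assumption, since the exact command used for nodes.pp isn't shown) is whether --modulepath was also passed when applying nodes.pp; without it, puppet apply cannot resolve the ssh class:

sudo puppet apply --modulepath=/vagrant/modules /vagrant/manifests/nodes.pp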

Chef provisioning ssh times out when used with chef zero

I am using Chef Zero on my Windows machine to SSH into a Red Hat Linux machine and execute a command that's inside a recipe. When I run the code below, it tries to SSH for 120 seconds and then times out. Any idea why this is happening?
require 'chef/provisioning'
require 'chef/provisioning/ssh_driver'

with_driver 'ssh'

machine "ssh" do
  attribute "short_dns", new_resource.short_dns
  attribute "long_dns", load_balancer_name
  recipe "mycookbook::add_short_dns"
  machine_options :transport_options => {
    'is_windows' => false,
    'ip_address' => '10.16.99.124',
    'username' => 'myusername',
    'ssh_options' => {
      'password' => 'mypassword'
    }
  }
  converge true
end
Here is the error:
- been waiting 110/120 -- sleeping 10 seconds for ssh (10.16.99.124 on ssh:C:/Users/user/.chef/provisioning/ssh) to be connectable ...[2015-06-23T14:54:33-05:00] INFO: Executing sudo pwd on myusername@10.16.99.124
================================================================================
Error executing action `converge` on resource 'machine[ssh]'
================================================================================
RuntimeError
------------
Machine ssh (10.16.99.124 on ssh:C:/Users/user/.chef/provisioning/ssh) did not become ready within 120 seconds
I'm still fighting with Chef Provisioning myself, so this may not be as helpful as I would like. One thing is that each of these is a key/value pair, so you want to declare your variables differently (see below):
require 'chef/provisioning/ssh_driver'

with_driver 'ssh'

with_machine_options :transport_options => {
  :username => 'centos',
  :ssh_options => {
    :password => 'password'
  }
}
Amir,
Does the C:/Users/user/.chef/provisioning/ssh directory exist on your workstation? If not, try creating it and making sure the permissions are correct, then try again.
Try the snippet below, and notice the extra options that will help you debug the issue:
1) The DEBUG level will let you see the SSH communication.
2) If you don't override prefix, it will use sudo by default.
3) Sometimes when you recreate a remote server, your "known_hosts" file remembers the old host key, and the next time you try to SSH into the server after recreation you receive the message "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED". In fact the SSH session hangs, but you don't see that on the client side, so it is better to ignore the known_hosts file.
:transport_options => {
  :is_windows => false,
  :username => 'YOURUSER',
  :ssh_options => {
    :password => 'YOURPASSWRD',
    :verbose => Logger::DEBUG,
    :user_known_hosts_file => '/dev/null'
  },
  :options => {
    :prefix => ''
  }
},