Ansible dynamic inventory does not refresh

I have created a simple dynamic inventory Python script, shown below, which prints JSON to standard output, but the Ansible inventory does not refresh.
Command: ansible-playbook playbooks/deploy.yaml -i playbook/inventory_test.py
Inventory JSON:
{
    "python_hosts": {
        "hosts": ["10.220.21.122", "10.220.21.278"],
        "vars": {
            "ansible_ssh_user": "projectuser"
        }
    },
    "_meta": {
        "hostvars": {
            "10.220.21.122": {
                "host_specific_var": "testhost"
            },
            "10.220.21.278": {
                "host_specific_var": "towerhost"
            }
        }
    }
}
I also tried this:
- hosts: localhost
  tasks:
    - name: test
      script: ./inventory_test.py
    - name: Refresh inventory
      meta: refresh_inventory
    - name: print new inventory
      debug:
        var: groups
But the inventory still does not refresh automatically.
The Ansible version is 2.6.4.
Any help on this is really appreciated.

This is because of the cache.
Go to ~/.ansible/tmp and delete the inventory_test.cache file.
Also, this answer is helpful if you are able to edit the playbook.
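For reference, a minimal sketch of such a dynamic inventory script, emitting valid JSON (double quotes, no trailing commas) and answering the --list/--host calls Ansible makes, could look like this; the group and host values are copied from the question, everything else is an assumption:
#!/usr/bin/env python
# Minimal dynamic inventory sketch (illustrative, not the asker's actual script).
# Prints the full inventory for `--list`; per-host vars already live under
# `_meta`, so `--host <name>` can simply return an empty dict.
import json
import sys

def build_inventory():
    return {
        "python_hosts": {
            "hosts": ["10.220.21.122", "10.220.21.278"],
            "vars": {"ansible_ssh_user": "projectuser"},
        },
        "_meta": {
            "hostvars": {
                "10.220.21.122": {"host_specific_var": "testhost"},
                "10.220.21.278": {"host_specific_var": "towerhost"},
            }
        },
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        print(json.dumps({}))
    else:
        print(json.dumps(build_inventory()))
The script also has to be executable (chmod +x inventory_test.py) for Ansible to run it as a dynamic inventory rather than parse it as a static file.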

Accessing values of nested dictionaries

I'm working on some separate tasks for automating VM deployments through Tower.
Basically, I just need a quick rundown on how to gather and use the various properties of a registered return value from a task.
I've got this.
tasks:
  - name: Gather disk info from virtual machine using name
    vmware_guest_disk_info:
      hostname: "{{ vcenter }}"
      username: "{{ username }}"
      password: "{{ esxipassword }}"
      datacenter: "{{ datacenter }}"
      name: "{{ fqdn }}"
    register: disk_info
  - debug:
      var: disk_info
This spits out the information I want, but for the life of me I can't figure out how to select a single property. Can someone tell me how to do that (particularly for the backing_filename property)?
I mean, in PowerShell it would just be disk_info.backing_filename or something like backing = $disk_info | select -expandproperty backing_filename. I'm just looking for the equivalent of that.
Snip of output
{
    "disk_info": {
        "guest_disk_info": {
            "0": {
                "key": 2000,
                "label": "Hard disk 1",
                "summary": "104,857,600 KB",
                "backing_filename": "[datastorex] vmname/vmname.vmdk",
To be fair, this one is not as simple as it looks, because your dictionary has a key that is the string '0'. If you did disk_info.guest_disk_info.0.backing_filename, you would be trying to access element 0 of a list, not the dictionary key '0'.
Here would be an example playbook solving your issue:
- hosts: all
  gather_facts: yes
  tasks:
    - debug:
        var: disk_info.guest_disk_info['0'].backing_filename
      vars:
        disk_info:
          guest_disk_info:
            '0':
              key: 2000
              label: Hard disk 1
              summary: 104,857,600 KB
              backing_filename: "[datastorex] vmname/vmname.vmdk"
That gives:
{
    "disk_info.guest_disk_info['0'].backing_filename": "[datastorex] vmname/vmname.vmdk"
}
While the following also works, you can see that the YAML represents a totally different structure, one containing a list rather than only nested dictionaries:
- hosts: all
  gather_facts: yes
  tasks:
    - debug:
        var: disk_info.guest_disk_info.0.backing_filename
      vars:
        disk_info:
          guest_disk_info:
            - key: 2000
              label: Hard disk 1
              summary: 104,857,600 KB
              backing_filename: "[datastorex] vmname/vmname.vmdk"
To give you an equivalent in JSON, since you seem to have trouble reading the YAML constructs: your output is
{
    "disk_info": {
        "guest_disk_info": {
            "0": {
                "backing_filename": "[datastorex] vmname/vmname.vmdk"
            }
        }
    }
}
That would be accessible via disk_info.guest_disk_info['0'].backing_filename.
While
{
    "disk_info": {
        "guest_disk_info": [
            {
                "backing_filename": "[datastorex] vmname/vmname.vmdk"
            }
        ]
    }
}
would be accessible via disk_info.guest_disk_info.0.backing_filename.
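As a side note beyond the original answer: if you want the backing_filename of every disk rather than just '0', a loop over dict2items is one option. This is only a sketch, assuming guest_disk_info stays a dict keyed by '0', '1', ... as in the registered output above:
- debug:
    msg: "Disk {{ item.key }}: {{ item.value.backing_filename }}"
  loop: "{{ disk_info.guest_disk_info | dict2items }}"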

Jenkins job doesn't continue after starting selenium server

I am building a Jenkins job that needs to run some tests against a Selenium server. I have defined a stage where I start the Selenium server in one container, and after that I want to run my tests from another container against the Selenium server.
The selenium server seems to start fine, but after that the job just hangs, displaying a spinner:
This is what my pipeline script looks like:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: node
    image: node:12.14.1
  - name: selenium
    image: vvoyer/selenium-standalone
"""
    }
  }
  stages {
    stage('Checkout codebase') {
      // do checkout
    }
    stage('start selenium') {
      steps {
        container('selenium') {
          sh '''
            selenium-standalone install
            selenium-standalone start  # the job hangs after this command
          '''
        }
      }
    }
    stage('test') {
      steps {
        container('node') {
          // build test project & run tests
        }
      }
    }
  }
}
Can anyone tell me what I am doing wrong here and how I can fix it?
Yeah, thanks. I solved a similar error by adding '&' at the end of the shell script step inside the Jenkinsfile.
For example:
sh label: '',script: 'docker-compose -f TestAutomation_UI_API/docker-compose-v3.yml up --scale chrome=3 &'
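Applied to the pipeline in the question, the 'start selenium' stage could be backgrounded the same way. This is only a sketch; the nohup, log redirect, and sleep are assumptions, not part of the original answer:
stage('start selenium') {
  steps {
    container('selenium') {
      sh '''
        selenium-standalone install
        # start the server in the background so the sh step can return;
        # the sleep is a crude wait for the server to come up
        nohup selenium-standalone start > selenium.log 2>&1 &
        sleep 10
      '''
    }
  }
}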

PhantomJS timeout issue when running in headless mode in GitLab CI

I am trying to use GitLab CI to run some client-side unit tests written with QUnit. To run the QUnit tests I am using the grunt-contrib-qunit plugin, which hosts them on a localhost server and runs them in headless mode. When I run the project locally, all the unit tests pass, but when I check in my code, which kicks off the CI process on GitLab, it fails to start the PhantomJS server and gives a timeout error. I am also providing jsbin links to two text files, which are the unit-test output from my console: one from my local system and one from the GitLab CI run that starts when I check in my code.
Local Console Output File Dump
Gitlab CI Output Dump
Adding my gitlab-ci.yaml file
image: node:4.2.2

before_script:
  - dir
  - cd sapui5_repo
  - dir
  - cd app-with-tests

build:
  stage: build
  script:
    - npm i
    - npm run test
  cache:
    policy: push
    paths:
      - node_modules
  artifacts:
    paths:
      - built
Also adding my gruntfile if that helps
/* global module */
module.exports = function (grunt) {
    grunt.initConfig({
        qunit: {
            all: {
                options: {
                    timeout: 9000,
                    urls: [
                        "http://localhost:9000/webcontent/test/unit/unitTests.qunit.html"
                    ]
                }
            },
            //all: ["webcontent/test/unit/unitTests.qunit.html"],
            options: {
                timeout: 2000,
            }
        },
        connect: {
            options: {
                //open: true,
            },
            first: {
                options: {
                    port: 9000,
                    //livereload: 3500,
                    base: "./"
                }
            },
            second: {
                options: {
                    open: {
                        target: "http://localhost:9000/webcontent"
                    },
                    keepalive: true,
                    port: 9000,
                    livereload: 3501,
                    base: "./",
                }
            }
        },
    });

    grunt.loadNpmTasks("grunt-contrib-connect");
    grunt.loadNpmTasks("grunt-contrib-qunit");

    grunt.registerTask("test", [
        "connect:first", "qunit"
    ]);
    grunt.registerTask("default", [
        "connect:second"
    ]);
};

How to use jsdoc options in jsdoc-grunt task

I want to add the -p flag in order to generate documentation for private methods using grunt-jsdoc. How can I do that?
According to the grunt-jsdoc documentation, we can use any of the options specified in useJsDocCli, but I do not know how they should be specified in the grunt task. Here is my current grunt task:
jsdoc : {
    dist : {
        src: ['app/scripts/directives/*.js', 'README.md'],
        options: {
            destination: 'doc'
        }
    }
}
How can I specify that I want the task to run with the -p flag (or any other flags)?
There's an example in the documentation under the templates section: https://github.com/krampstudio/grunt-jsdoc#templates
Just use the flag name without the dash, for example:
jsdoc : {
    dist : {
        src: ['app/scripts/directives/*.js', 'README.md'],
        options: {
            destination: 'doc',
            private: true
        }
    }
}
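Following the same rule (flag name without the dash), other jsdoc CLI flags should presumably map the same way; this is an unverified sketch and the config file path is a placeholder:
options: {
    destination: 'doc',
    private: true,                // -p
    recurse: true,                // -r
    configure: 'jsdoc.conf.json'  // -c <file>, hypothetical path
}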

Grunt watch: only upload files that have changed

I was able to set up a Grunt task to SFTP files up to my dev server using grunt-ssh:
sftp: {
    dev: {
        files: {
            './': ['**', '!{node_modules,artifacts,sql,logs}/**'],
        },
        options: {
            path: '/path/to/project',
            privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa'),
            host: '111.111.111.111',
            port: 22,
            username: 'marksthebest',
        }
    }
},
But this uploads everything when I run it. There are thousands of files. I don't have time to wait for them to upload one-by-one every time I modify a file.
How can I set up a watch to upload only the files I've changed, as soon as I've changed them?
(For the curious, the server is a VM on the local network. It runs on a different OS and the setup is more similar to production than my local machine. Uploads should be lightning quick if I can get this working correctly)
What you need is grunt-newer, a task designed specifically to update the configuration of another task based on which files just changed, and then run it. An example configuration could look like the following:
watch: {
    all: {
        files: ['**', '!{node_modules,artifacts,sql,logs}/**'],
        tasks: ['newer:sftp:dev']
    }
}
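One assumption worth spelling out: grunt-newer has to be installed and loaded for the newer: prefix to resolve, roughly like this:
// assumed setup, not shown in the answer
// npm install grunt-newer --save-dev
grunt.loadNpmTasks('grunt-newer');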
You can do that using the watch event of grunt-contrib-watch.
You basically need to handle the watch event, modify the sftp files config to only include the changed files, and then let grunt run the sftp task.
Something like this:
module.exports = function(grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        secret: grunt.file.readJSON('secret.json'),
        watch: {
            test: {
                files: 'files/**/*',
                tasks: 'sftp',
                options: {
                    spawn: false
                }
            }
        },
        sftp: {
            test: {
                files: {
                    "./": "files/**/*"
                },
                options: {
                    path: '/path/on/the/server/',
                    srcBasePath: 'files/',
                    host: 'hostname.com',
                    username: '<%= secret.username %>',
                    password: '<%= secret.password %>',
                    showProgress: true
                }
            }
        }
    }); // end grunt.initConfig

    // on watch events configure sftp.test.files to only run on changed file
    grunt.event.on('watch', function(action, filepath) {
        grunt.config('sftp.test.files', {"./": filepath});
    });

    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-ssh');
};
Note the "spawn: false" option, and the way you need to set the config inside the event handler.
Note 2: this code will upload one file at a time; there is a more robust method described at the same link.
You can achieve that with Grunt using:
grunt-contrib-watch
grunt-rsync
First things first: I am using a Docker container, and I added a public SSH key to it. So I am uploading to my "remote" container only the files that have changed in my local environment, with this Grunt task:
'use strict';

module.exports = function(grunt) {
    grunt.initConfig({
        rsync: {
            options: {
                args: ['-avz', '--verbose', '--delete'],
                exclude: ['.git*', 'cache', 'log'],
                recursive: true
            },
            development: {
                options: {
                    src: './',
                    dest: '/var/www/development',
                    host: 'root@www.localhost.com',
                    port: 2222
                }
            }
        },
        sshexec: {
            development: {
                command: 'chown -R www-data:www-data /var/www/development',
                options: {
                    host: 'www.localhost.com',
                    username: 'root',
                    port: 2222,
                    privateKey: grunt.file.read("/Users/YOUR_USER/.ssh/id_containers_rsa")
                }
            }
        },
        watch: {
            development: {
                files: [
                    'node_modules',
                    'package.json',
                    'Gruntfile.js',
                    '.gitignore',
                    '.htaccess',
                    'README.md',
                    'config/*',
                    'modules/*',
                    'themes/*',
                    '!cache/*',
                    '!log/*'
                ],
                tasks: ['rsync:development', 'sshexec:development']
            }
        },
    });

    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-rsync');
    grunt.loadNpmTasks('grunt-ssh');

    grunt.registerTask('default', ['watch:development']);
};
Good Luck and Happy Hacking!
I recently ran into a similar issue where I wanted to upload only the files that have changed. I'm only using grunt-exec. Provided you have SSH access to your server, you can do this with much greater efficiency. I also created an rsync.json that is ignored by git, so collaborators can have their own rsync data.
The benefit is that if anyone makes a change, it automatically uploads to their stage.
// Watch - runs tasks when any changes are detected.
watch: {
    scripts: {
        files: '**/*',
        tasks: ['deploy'],
        options: {
            spawn: false
        }
    }
}
My deploy task is a registered task that compiles scripts then runs exec:deploy
// Showing exec:deploy task
// Using rsync with ssh keys instead of login/pass
exec: {
    deploy: {
        cmd: 'rsync public_html/* <%= rsync.options %> <%= rsync.user %>@<%= rsync.host %>:<%= rsync.path %>'
    }
}
See all of the <%= rsync %> stuff? I use that to grab info from rsync.json, which is ignored by git. I only have this because this is a team workflow.
// rsync.json
{
    "options": "-rvp --progress -a --delete -e 'ssh -q'",
    "user": "mmcfarland",
    "host": "example.com",
    "path": "~/stage/public_html"
}
Make sure your rsync.json is defined in Grunt:
module.exports = function(grunt) {
    var rsync = grunt.file.readJSON('path/to/rsync.json');
    var pkg = grunt.file.readJSON('path/to/package.json');

    grunt.initConfig({
        pkg: pkg,
        rsync: rsync,
I think it's not a good idea to upload everything that has changed to the staging server right away, and working directly on the staging server is not a good idea either. You should configure your local machine's server to be the same as staging/production.
It's better to upload once, when you do a deployment.
You can archive all the files using grunt-contrib-compress, push them with grunt-ssh as one file, and then extract the archive on the server; that will be much faster.
Here's an example of the compress task:
compress: {
    main: {
        options: {
            archive: 'build/build.tar.gz',
            mode: 'tgz'
        },
        files: [
            {cwd: 'build/', src: ['sites/all/modules/**'], dest: './'},
            {cwd: 'build/', src: ['sites/all/themes/**'], dest: './'},
            {cwd: 'build/', src: ['sites/default/files/**'], dest: './'}
        ]
    }
}
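The push-and-extract half isn't shown above; a rough sketch of it using grunt-ssh's sftp and sshexec tasks could look like the following, where the host, username, key path, and server paths are placeholders rather than anything from the answer:
// hypothetical companion config: upload the archive, then unpack it remotely
sftp: {
    deploy: {
        files: { './': 'build/build.tar.gz' },
        options: {
            path: '/var/www/releases/',
            host: 'example.com',
            username: 'deploy',
            privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
        }
    }
},
sshexec: {
    extract: {
        command: 'tar -xzf /var/www/releases/build.tar.gz -C /var/www/current',
        options: {
            host: 'example.com',
            username: 'deploy',
            privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
        }
    }
}
A registered task along the lines of grunt.registerTask('deploy', ['compress', 'sftp:deploy', 'sshexec:extract']); would then tie the three steps together.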
PS: I never looked at the rsync Grunt modules.
I understand that this might not be what you are looking for, but I decided to post my answer as a standalone answer.