Is it possible to run this command with the uri module in Ansible?
- shell: unset http_proxy && curl -X POST -H "Cache-Control:no-cache" -F "access_key={{ api_user }}" -F "secret_key={{ api_pass }}" "http://mydomain.bla/api/v1/login_check"
I tried it like this:
- uri:
    url: http://mydomain.bla/api/v1/login_check
    method: POST
    user: "{{ api_user }}"
    password: "{{ api_pass }}"
  environment:
    http_proxy: ''
And like this:
- uri:
    url: http://mydomain.bla/api/v1/login_check
    method: POST
    body: "access_key={{ api_user }}&secret_key={{ api_pass }}"
  environment:
    http_proxy: ''
And it still does not work.
I'm trying to get a token and store it in an Ansible variable.
Neither use_proxy: false nor use_proxy: no works - that's why I'm using this ugly environment workaround.
You forgot to use return_content: yes.
From the module's docs:
return_content (default: no) – Whether or not to return the body of the request as a "content" key in the dictionary result.
- uri:
    url: http://mydomain.bla/api/v1/login_check
    method: POST
    body: "access_key={{ api_user }}&secret_key={{ api_pass }}"
    return_content: yes
  register: token_response
  environment:
    http_proxy: ''

- debug: var=token_response.content
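Since the goal is to store the token in an Ansible variable, a set_fact task can pull it out of the registered response. A minimal sketch, assuming the API returns JSON with a token field (the uri module exposes the parsed body as a json key on the result when the response is JSON; adjust the field name to your actual payload):

- set_fact:
    api_token: "{{ token_response.json.token }}"

- debug: var=api_token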
I need some help retrieving child values from a var file.
Below is my var file:
api_user: root
api_password: !vault |
  $ANSIBLE_VAULT;1.2;AES256;isi
  33835326238346338613761366531376662613865376263656262
  6138
Huston:
  onefs_host: 10.88.55.00
Phoenix:
  onefs_host: 10.76.52.01
Below is my playbook:
---
- name: isi_increase
  hosts: localhost
  connection: local
  vars_files:
    - isilonvars.yml
  tasks:
    - name: Print
      debug:
        msg:
          - "{{ Huston.onefs_host }}"
          - "{{ api_user }}"
          - "{{ api_password }}"
This code works perfectly:
TASK [Print] ********************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": [
        "10.88.55.00",
        "root",
        "Sh00tm3!"
    ]
}
But as per my requirement I have to retrieve the onefs_host IP per location in my playbook. I am passing the location as an extra var: -e "location=Huston"
- name: Print
  debug:
    msg:
      # - "{{ wcc.onefs_host }}"
      - "{{ {{location}}.onefs_host }}"
      - "{{ api_user }}"
      - "{{ api_password }}"
I am getting the below error.
fatal: [localhost]: FAILED! => {"msg": "template error while templating string: expected token ':', got '}'. String: {{ {{isilon_location}}.onefs_host }}"}
Can you try this way:
- name: Print
  debug:
    msg:
      - "{{ vars[location]['onefs_host'] }}"
      - "{{ api_user }}"
      - "{{ api_password }}"
Currently we monitor Splunk dashboards manually during our deploys. We would like to automate this with an Ansible playbook that runs the Splunk queries during deployment.
I am able to make a connection to Splunk, but I am not able to get the search query working.
####
# type: task
#
# vars:
#   5xxcheck_output(str,command): raw output from command
#   5xxcheck_response(str,command): raw output to json
#
# desc:
#   uses splunk to get 5xxcheck
---
- name: Tasks to query splunk
  hosts: localhost
  connection: local
  tasks:
    - name: get search_id for 5xx check from splunk
      uri:
        url: https://<splunk_instance>/services/search/jobs
        follow_redirects: all
        method: POST
        user: xxxxxx
        password: xxxxxxx
        force_basic_auth: yes
        body: "search host=tc1* ResponseCode=500 earliest=-15m"
        body_format: raw
        validate_certs: no
        status_code: 201
        return_content: true
      register: search_id

    - debug: msg="{{ search_id.status }}"

    - name: use the search_id to get the 5xx check results
      uri:
        url: https://<splunk_instance>/services/search/jobs/{{ search_id }}/results/
        method: GET
        user: xxxxxx
        password: xxxxxxx
        force_basic_auth: yes
        body_format: raw
        return_content: true
      register: 5xxcheck_output
      until: 5xxcheck_output.status > 0 and 5xxcheck_output.status != 500

    - name: Put results into 5xxcheck_response
      set_fact:
        5xxcheck_response: "{{ 5xxcheck_output.json }}"

    - name: Print 5xxcheck_response if -v
      debug:
        var: 5xxcheck_response
        verbosity: 1
I would like to use the uri module to parameterize the Splunk search. I am able to execute the following two steps from the terminal to get the response.
Step 1: Get the SID (Search ID)
curl -u user:pwd -k https://<splunk-instance>/services/search/jobs -d search="search host=t1* ResponseCode=200 earliest=-15m"
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <sid>1604947864.xxxxxx</sid>
</response>
Step 2: Use the SID to get the response
curl -u user:pwd -k https://<splunk-instance>/services/search/jobs/<SID>/results/ --get -d output_mode=raw
---
- name: Tasks to query splunk
  hosts: localhost
  connection: local
  tasks:
    - name: get search_id for 5xx check from splunk
      uri:
        url: https://splunk_instance/services/search/jobs/
        follow_redirects: all
        method: POST
        user: xxxxx
        password: xxxxx
        force_basic_auth: yes
        body_format: form-urlencoded
        status_code: [200, 201, 202]
        body:
          - [ search, "search host=t1* ResponseCode=500 earliest=-15m" ]
          - [ output_mode, "json" ]
        validate_certs: no
        return_content: true
      register: search_id

    - debug: msg="{{ search_id }}"
This worked for me. Now I get a valid sid as the response when I run this playbook.
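For the second step, the sid has to be extracted from the registered result instead of interpolating the whole dict into the URL. A sketch under the same placeholders, assuming output_mode=json so the id is available as search_id.json.sid, and assuming the instance returns 204 until the search job has finished (adjust retries and delay to taste):

- name: use the sid to get the 5xx check results
  uri:
    url: "https://splunk_instance/services/search/jobs/{{ search_id.json.sid }}/results?output_mode=json"
    method: GET
    user: xxxxx
    password: xxxxx
    force_basic_auth: yes
    validate_certs: no
    return_content: true
    status_code: [200, 204]   # 204 while the search job is still running
  register: check_output
  until: check_output.status == 200
  retries: 10
  delay: 5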
I'm running a playbook against a host and getting this error:
"msg": "Unable to connect to vCenter or ESXi API at 192.11.11.111 on TCP/443: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)"
We are using vCenter 6.5. I have a playbook that should let the Ansible controller talk to the vSphere vCenter. I exported the trusted root SSL certificates from the vSphere home page, copied them over to my Ansible controller, and installed them with:
sudo mv 9dab0099.0.crt 9dab0099.r0.crl 11ec582d.0.crt /etc/pki/ca-trust/source/anchors
sudo update-ca-trust force-enable
sudo update-ca-trust extract
I am able to telnet to the host on 443 and to ping the host continuously. I have tried changing validate_certs to no, to yes, and to false.
My playbook:
- name: Add an additional cpu to virtual machine server
  hosts: '{{ target }}'
  tasks:
    - name: Login into vCenter and get cookies
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        folder: "{{ vm_folder }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"

    - name:
      uri:
        url: https://{{ vcenter_hostname }} #/rest/com/vmware/cis/session
        force_basic_auth: yes
        validate_certs: no
        method: POST
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        folder: "{{ vm_folder }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
      #register: login

    - name: Stop virtual machine
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        folder: "{{ vm_folder }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
        state: "poweredoff"

    - name: reconfigure CPU and RAM of VM
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        cluster: "{{ vcenter_cluster }}"
        datacenter: "{{ vcenter_datacenter }}"
        name: "{{ vm_name }}"
        state: "present"
        validate_certs: "false"
        folder: "{{ vm_folder }}"
        hardware:
          memory_gb: "{{ memory }}"
          num_cpus: "{{ cpu }}"
          scsi: "lsilogic"
ESXi firewall rules are open.
I reproduced the error with Python 2.7.5 and Python 3.6, with the newest version of pyVmomi installed.
Can someone point me in the right direction from here?
Try putting
validate_certs: no
into the tasks.
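For example, the first task in the playbook above would become something like this (a sketch; validate_certs is a module-level parameter, so it has to be set inside each vmware_guest task rather than once per play):

- name: Login into vCenter and get cookies
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no   # skip certificate verification for this task
    folder: "{{ vm_folder }}"
    cluster: "{{ vcenter_cluster }}"
    datacenter: "{{ vcenter_datacenter }}"
    name: "{{ vm_name }}"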
I have a set of variables and a task as follows. My intent is to dynamically do a healthcheck based on the URL the user chose.
vars:
  current_hostname: "{{ ansible_hostname }}"
  hc_url1: "https://blah1.com/healthcheck"
  hc_url2: "https://blah2.com/healthcheck"

tasks:
  - name: Notification Msg For Healthcheck
    shell: "echo 'Performing healthcheck at the URL {{ lookup('vars', component) }} on host {{ current_hostname }}'"
I run the playbook in Ansible 2.3:
ansible-playbook ansible_playbook.yml -i inventory -k -v --extra-vars "component=hc_url1"
Error
fatal: [hostname]: FAILED! => {"failed": true, "msg": "lookup plugin (vars) not found"}
I know this happens because the lookup plugin "vars" was introduced in Ansible v2.5. Is there a way to do this in Ansible 2.3? I want to get the value of {{ component }}, and then the value of {{ hc_url1 }}.
PS - upgrading to 2.5 is not an option because of org restrictions
Alternatively, maybe you can do this using a dictionary.
For example,
vars:
  current_hostname: "{{ ansible_hostname }}"
  urls:
    hc_url1: "https://blah1.com/healthcheck"
    hc_url2: "https://blah2.com/healthcheck"

tasks:
  - name: Notification Msg For Healthcheck
    shell: "echo 'Performing healthcheck at the URL {{ urls[component] }} on host {{ current_hostname }}'"
That way, the user-provided value of component will just be looked up as a key in the dictionary.
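The invocation from the question stays unchanged; component now selects a key in the urls dictionary:

ansible-playbook ansible_playbook.yml -i inventory -k -v --extra-vars "component=hc_url1"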
I am using ansible to create several ec2 instances, copy files into those newly created servers and run commands on those servers. The issue is that after creating the servers I still have to enter yes in the following ssh prompt:
TASK [Adding /etc/rc.local2 to consul servers] *********************************
changed: [localhost -> 172.31.52.147] => (item={u'ip': u'172.31.52.147', u'number': 0})
The authenticity of host '172.31.57.20 (172.31.57.20)' can't be established.
ECDSA key fingerprint is 5e:c3:2e:52:10:29:1c:44:6f:d3:ac:10:78:10:01:89.
Are you sure you want to continue connecting (yes/no)? yes
changed: [localhost -> 172.31.57.20] => (item={u'ip': u'172.31.57.20', u'number': 1})
The authenticity of host '172.31.57.19 (172.31.57.19)' can't be established.
ECDSA key fingerprint is 4e:71:15:fe:c9:ec:3f:54:65:e8:a1:66:74:92:f4:ff.
Are you sure you want to continue connecting (yes/no)? yes
How can I have ansible ignore this prompt and just answer yes automatically? For reference here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  sudo: yes
  vars_files:
    - ami-keys.yml
    - ami-image.yml
  tasks:
    - name: create 3 consul servers
      ec2:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        key_name: terra
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        region: '{{ region }}'
        wait: true
        exact_count: 3
        count_tag:
          Name: consul-server
        instance_tags:
          Name: consul-server
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item }} port=22 delay=1 timeout=480 state=started
      with_items:
        - "{{ ec2['tagged_instances'][0]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][1]['private_ip'] }}"
        - "{{ ec2['tagged_instances'][2]['private_ip'] }}"

    # shows the json data for the instances created
    - name: consul server ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"

    # bootstrapping
    - name: Adding /etc/rc.local2 to consul servers
      template:
        src: template/{{ item.number }}.sh
        dest: /etc/rc.local2
      delegate_to: "{{ item.ip }}"
      with_items:
        - ip: "{{ ec2['tagged_instances'][0]['private_ip'] }}"
          number: 0
        - ip: "{{ ec2['tagged_instances'][1]['private_ip'] }}"
          number: 1
        - ip: "{{ ec2['tagged_instances'][2]['private_ip'] }}"
          number: 2
      ignore_errors: true

    - name: give /etc/rc.local2 permissions to run and starting swarm
      shell: "{{ item[1] }}"
      delegate_to: "{{ item[0] }}"
      with_nested:
        - [ "{{ ec2['tagged_instances'][0]['private_ip'] }}",
            "{{ ec2['tagged_instances'][1]['private_ip'] }}",
            "{{ ec2['tagged_instances'][2]['private_ip'] }}" ]
        - [ "sudo chmod +x /etc/rc.local2",
            "sleep 10",
            "consul reload",
            "docker run --name swarm-manager -d -p 4000:4000 --restart=unless-stopped \
             swarm manage -H :4000 \
             --replication --advertise \
             $(hostname -i):4000 \
             consul://$(hostname -i):8500" ]
      ignore_errors: true
Note: I have already tried running:
ansible-playbook -e 'host_key_checking=False' consul-server.yml
and it does not remove the prompt.
Going into /etc/ansible/ansible.cfg and uncommenting the line host_key_checking=False does remove the prompt, but I want to avoid that; I would rather put something into my playbook, or onto the command line when I run it.
The common recommendation is to set host_key_checking=False in the Ansible configuration. This is a bad idea, because it assumes your network connection will never be compromised.
A much better idea that only assumes the network isn't MitMed when you first create the servers is to use ssh-keyscan to add the servers' fingerprints to the known hosts file:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: '{{ ec2.instances }}'
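Note that the playbook above consumes the registered ec2 result through tagged_instances and addresses the hosts by private_ip, so the same task adapted to it might look like this (a sketch under that assumption; run it after the wait_for task and before the first delegated task, so the fingerprints are already in known_hosts when Ansible connects):

- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.private_ip }} >> ~/.ssh/known_hosts
  with_items: "{{ ec2.tagged_instances }}"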