Starting and stopping SSH in Ansible

I tried the following code:
---
- name: Stop ssh
  service:
    name: sshd
    state: stopped

- name: Start ssh
  service:
    name: sshd
    state: started
and it failed saying it could not find sshd.
I even tried the following code:
- name: "Stop ssh"
  service:
    name: ssh
    state: stopped

- name: "start ssh"
  service:
    name: ssh
    state: started
I am not supposed to use restarted or with_items, and I still could not stop and start the service.
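The usual cause of "could not find sshd" is that the service name differs by distribution: it is sshd on RedHat-family systems and ssh on Debian/Ubuntu. A minimal sketch, assuming a webservers host group (hypothetical) and gathered facts, that picks the right name from ansible_os_family:
---
- hosts: webservers        # hypothetical group name
  become: yes
  vars:
    # sshd on RedHat-family hosts, ssh on Debian-family hosts
    ssh_service: "{{ 'ssh' if ansible_os_family == 'Debian' else 'sshd' }}"
  tasks:
    - name: Stop ssh
      service:
        name: "{{ ssh_service }}"
        state: stopped

    - name: Start ssh
      service:
        name: "{{ ssh_service }}"
        state: started
On most systemd distributions, stopping sshd does not terminate sessions that are already established, so a play like this can usually manage the service over the same SSH connection it runs on.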

Related

Ansible: How to check SSH access

Good morning all,
I'm racking my brains over a simple subject.
I'm on a "master" server and I would like to check if he manages to connect in SSH on a server list.
Example
ansible-playbook -i inventaire_test test_ssh.yml
---
- hosts: all        # play header added; the original snippet started at tasks:
  tasks:
    - name: test unreachable
      ansible.builtin.ping:
      register: test_ssh
      ignore_unreachable: true

    - name: test
      fail:
        msg: "test"
      when: test_ssh.unreachable is defined

    - name: header CSV
      lineinfile:
        insertafter: EOF
        dest: /home/list.csv
        line: "Server;OS;access"
      delegate_to: localhost

    - name: Info
      lineinfile:
        dest: /home/list.csv
        line: "{{ inventory_hostname }};OK"
        state: present
      when: test_ssh is successful
      delegate_to: localhost

    - name: Info csv
      lineinfile:
        dest: /home/list.csv
        line: "{{ inventory_hostname }};KO"
        state: present
      when: test_ssh.unreachable is undefined
      delegate_to: localhost
I can't find a check_ssh module. There is ansible.builtin.ssh but I can't use it.
Do you have an idea?
Thanks in advance.
Regarding
I'm on a "master" server and I would like to check whether it can connect over SSH to each server in a list. ... I can't find a check_ssh module.
According to the documentation there is a
ping module – Try to connect to host, verify a usable python and return pong on success
... test module, this module always returns pong on successful contact. It does not make sense in playbooks, but it is useful from /usr/bin/ansible to verify the ability to login and that a usable Python is configured.
which seems to be doing what you are looking for.
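As the documentation hints, the quickest way to use it is ad hoc rather than in a playbook, for example against the inventory from the question:
ansible all -i inventaire_test -m ansible.builtin.ping -o
Hosts that are reachable answer with "pong"; hosts Ansible cannot log into are reported as UNREACHABLE, which gives the per-server result you are after.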

Installing httpd through ansible roles - ok=1 instead of changed=1

I have created roles to install httpd,
but the status is always 'ok=1'
instead of 'changed=1'.
How should I actually install httpd and get a status of 'changed=1'?
master.yml ->
- name: playbook
  hosts: webservers
  become: yes
  roles:
    - tasks
tasks.yml ->
- name: installing apache latest
  yum:
    name: httpd
    state: present
Have you started your service?
- name: service httpd started
  service:
    name: "httpd"
    state: started
This is because you have state: present in the yum module description.
How it works: if the package is already installed, the status will be "ok"; if it is not, it will be "changed".
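In other words, the task is idempotent: the first run on a host without httpd reports changed=1, and every run after that reports ok=1. If you want the task to report changed whenever a newer package version is available, state: latest would do that, at the cost of pulling in upgrades:
- name: installing apache latest
  yum:
    name: httpd
    state: latest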

Ansible: how to ignore unreachable hosts before Ansible 2.7.x

I'm using ansible to run a command against multiple servers at once. I want to ignore any hosts that fail because of the "SSH Error: data could not be sent to remote host '1.2.3.4'. Make sure this host can be reached over ssh" error, because some of the hosts in the list will be offline. How can I do this? Is there a default option in ansible to ignore offline hosts without failing the playbook? Is there an option to do this in a single ansible cli argument outside of a playbook?
Update: I am aware that the ignore_unreachable: true works for ansible 2.7 or greater, but I am working in an ansible 2.6.1 environment.
I found a good solution here. You ping each host locally to see if you can connect and then run commands against the hosts that passed:
---
- hosts: all
  connection: local
  gather_facts: no
  tasks:
    - block:
        - name: determine hosts that are up
          wait_for_connection:
            timeout: 5
          vars:
            ansible_connection: ssh
        - name: add devices with connectivity to the "running_hosts" group
          group_by:
            key: "running_hosts"
      rescue:
        - debug: msg="cannot connect to {{ inventory_hostname }}"

- hosts: running_hosts
  gather_facts: no
  tasks:
    - command: date
With the current version of Ansible (2.8) something like this is possible:
- name: identify reachable hosts
  hosts: all
  gather_facts: false
  ignore_errors: true
  ignore_unreachable: true
  tasks:
    - block:
        - name: this does nothing
          shell: exit 1
          register: result
      always:
        - add_host:
            name: "{{ inventory_hostname }}"
            group: reachable

- name: Converge
  hosts: reachable
  gather_facts: false
  tasks:
    - debug: msg="{{ inventory_hostname }} is reachable"
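As for doing it in a single CLI invocation outside a playbook: as far as I know there is no pre-2.7 flag for it, but an ad-hoc command simply reports each offline host as UNREACHABLE and keeps going on the rest, for example:
ansible all -i inventory -m command -a date
(the overall exit code is still non-zero when any host is unreachable, which may matter in CI).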

vmware_vm_facts vCenter password validation failing

I am using Ansible and vCenter to provision a VM. When I run my playbook, I get an authentication error:
Cannot complete login due to an incorrect user name or password.
However, using the same credentials, I am able to log into vCenter manually.
Here is my simplified playbook:
---
- name: create a new VM on an ESX server
  hosts: localhost
  connection: local
  tasks:
    - name: include vars
      include_vars:
        dir: 'group_vars/prod'
        files_matching: 'secret-esx.yml'
    - name: gather facts from target host
      local_action:
        module: vmware_vm_facts
        hostname: vi-devops-esx9.lab.vi.local
        username: "{{ esx_username }}"
        password: "{{ esx_password }}"
        validate_certs: no
      register: qe_facts
Why can I access vCenter, but vmware_vm_facts cannot with the same credentials?
My hostname was incorrect. Fixing my hostname fixed the authentication error.
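Worth noting: a wrong endpoint can surface as an authentication error rather than a connection error. A quick sanity check, as a sketch using the hostname from the playbook above, is to confirm the vSphere API port answers before gathering facts:
- name: verify the vSphere endpoint answers on the API port
  wait_for:
    host: vi-devops-esx9.lab.vi.local   # endpoint from the playbook above
    port: 443
    timeout: 10
  delegate_to: localhost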

Deploy/run a Redis service using Ansible and Docker

I'm using the Ansible docker module to set up a Redis service (see the playbook below):
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        volumes_from:
          - redis-data
After provisioning, the redis-service container is up, but when I try to connect to Redis using redis-cli I get the following error:
vagrant@dev1:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
NOTE: redis-service seems to be up and running:
vagrant@dev1:~$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS      NAMES
3e8f27b14479   redis:3   "/entrypoint.sh redis"   12 minutes ago   Up 12 minutes   6379/tcp   redis-service
vagrant@dev1:~$ docker logs 3e8f27b14479
...
1:M 02 Sep 15:41:16.532 * The server is now ready to accept connections on port 6379
Do you have any idea of what might cause the problem?
I finally found the problem: the ports attribute must be set too (not only expose):
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        ports:
          - "6379:6379"
        volumes_from:
          - redis-data
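The distinction: expose only makes the port available to other containers (for example via links), while ports publishes it on the host's network interface. That is why docker ps showed 6379/tcp yet redis-cli on the VM could not reach 127.0.0.1:6379. After re-provisioning, a check from the host should now succeed:
vagrant@dev1:~$ redis-cli ping
PONG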