I'm using Ansible to set up an instance of Ubuntu 18.04 (remote) and run certain programs within the user environment. I have a command I'd like to execute inside a terminal on the remote machine that requires the terminal to stay open.
If I'm on Ubuntu and run the following command I get exactly what I expect.
# DISPLAY=:0 nohup gnome-terminal -- roscore
Use the current display for the user
nohup so the terminal won't close if the parent terminal closes
start a new gnome-terminal instance
-- = run a command inside the new gnome-terminal instance
roscore can be replaced by any command that requires an open stream to a terminal window
My Ansible task looks like this when trying to recreate the same command
- name: Start terminal on remote machine
  shell:
  args:
    cmd: DISPLAY=:0 nohup gnome-terminal -- roscore
    executable: /bin/bash
When running this command I get the following verbose output
changed: [] => {
    "changed": true,
    "cmd": "DISPLAY=:0 nohup gnome-terminal -- roscore",
    "delta": "0:00:00.243119",
    "end": "",
    "invocation": {
        "module_args": {
            "_raw_params": "DISPLAY=:0 nohup gnome-terminal -- roscore",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": "/bin/bash",
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "rc": 0,
    "start": "",
    "stderr": "nohup: ignoring input",
    "stderr_lines": [
        "nohup: ignoring input"
    ],
    "stdout": "",
    "stdout_lines": []
}
When I execute this, the terminal appears to open on the remote machine for just a moment, but it does not stay open. What is Ansible doing that would close the remote terminal session after running the command?
What I want is an Ansible task that will open a terminal window on a remote Ubuntu 18.04 machine. A stretch goal would be to get the command running in the now-open terminal.
Any help would be appreciated and glad to clarify where needed. Thank you!
I've decided to go a different direction but wanted to post what I learned.
Executing a command from Ansible that opens a terminal window on the remote Ubuntu 18.04 machine requires the following task:
- name: Start terminal on remote machine
  shell:
  args:
    cmd: DISPLAY=:0 nohup gnome-terminal </dev/null >/dev/null 2>&1 &
    executable: /bin/bash
Notice the </dev/null >/dev/null 2>&1 &. This is what lets Ansible disown the process while the terminal remains open on the remote machine.
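To see why this works, it can help to try the same thing by hand over a plain SSH session before putting it in a task. This is only a sketch: user@remote is a placeholder, and it assumes that user owns the active display :0.

# Manual test over plain SSH: the redirections plus the trailing & detach
# gnome-terminal from the SSH session, so it stays open after ssh exits.
ssh user@remote "DISPLAY=:0 nohup gnome-terminal </dev/null >/dev/null 2>&1 &"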
In theory (I haven't proven this), running a command inside the terminal would require the extra gnome-terminal argument -e:
-e, --command=STRING
Execute the argument to this option inside the terminal.
Example
- name: Start terminal on remote machine
  shell:
  args:
    cmd: DISPLAY=:0 nohup gnome-terminal -e "bash -c 'whoami'" </dev/null >/dev/null 2>&1 &
    executable: /bin/bash
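Purely as a hedged follow-up (not something from the original post), a quick way to confirm the terminal actually survived the Ansible run is to look for its process in a later task:

- name: Check that gnome-terminal is still running (hypothetical verification)
  shell: pgrep -a gnome-terminal
  register: term_procs
  changed_when: false
  failed_when: false

- debug:
    var: term_procs.stdout_lines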
This is my main.yml file in tasks:
- name: Use npm
  shell: >
    /bin/bash -c "source $HOME/.nvm/nvm.sh && nvm use 16.16.0"
  become: yes
  become_user: root

- name: Run build-dev
  shell: |
    cd /home/ec2-user/ofiii
    npm install
    npm run build-dev
  become: yes
  become_user: root
  when: platform == "dev"
And the output when running the script:
fatal: [172.31.200.13]: FAILED! => {
    "changed": true,
    "cmd": "cd /home/ec2-user/ofiii\nnpm install\nnpm run build-stag\n",
    "delta": "0:00:00.061363",
    "end": "2022-11-09 09:45:17.917829",
    "msg": "non-zero return code",
    "rc": 127,
    "start": "2022-11-09 09:45:17.856466",
    "stderr": "/bin/sh: line 1: npm: command not found\n/bin/sh: line 2: npm: command not found",
    "stderr_lines": ["/bin/sh: line 1: npm: command not found", "/bin/sh: line 2: npm: command not found"],
    "stdout": "",
    "stdout_lines": []
}
The error is "npm: command not found", but I am sure npm is installed and the PATH is set appropriately on the machine; the thing I doubt is the script.
I don't know how to modify my script. I tried to use the npm module, but I failed.
The problem is that each task's environment is separate, and you are setting up the nvm environment in a separate task.
The "Run build-dev" task knows nothing about the paths set up in "Use npm".
I'd suggest combining these two tasks, with a few additional changes explained below:
- name: Run build-dev
  shell: |
    source $HOME/.nvm/nvm.sh
    nvm use 16.16.0
    npm install
    npm run build-dev
  args:
    executable: /bin/bash
    chdir: /home/ec2-user/ofiii
  become: yes
  become_user: root
  when: platform == "dev"
Additional changes:
Using bash -c "..." in the shell module would result in /bin/sh -c "/bin/bash -c '...'"; it's better to use executable: /bin/bash instead.
The shell module has a chdir argument to specify the directory the script runs in.
Check the shell module documentation for other arguments and examples.
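If you want to double-check that npm really comes from nvm inside the task environment, a small sanity-check task along these lines (my own sketch, not part of the original answer) can be added:

- name: Verify npm is picked up from nvm (hypothetical sanity check)
  shell: |
    source $HOME/.nvm/nvm.sh
    nvm use 16.16.0
    command -v npm
    npm --version
  args:
    executable: /bin/bash
  register: npm_check
  changed_when: false

- debug:
    var: npm_check.stdout_lines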
This might seem a little confusing, but I have a server I want to SSH into, and the password is long and complex. I understand that for security I shouldn't save my password in plain text, but for my convenience I'm not too worried. Regardless, I'm trying to make it so I can start up a Linux distro in WSL that automatically connects me to my SSH server and logs in, but I'm having trouble. My settings.json block looks like this:
{
    "guid": "{46ca431a-3a87-5fb3-83cd-11ececc031d2}",
    "hidden": false,
    "name": "SSH",
    "source": "Windows.Terminal.Wsl",
    "commandline": "/usr/bin/sshpass -p 'password' ssh -o StrictHostKeyChecking=no user@host"
},
then when I start that distro I get:
[error 0x80070002 when launching `/usr/bin/sshpass -p 'password' ssh -o StrictHostKeyChecking=no user@host']
Is there another way to do this? And yes, I have sshpass installed on literally all my distros just to be 100% sure. I googled around for a Windows version of sshpass but can't find one. I've also tried just using sshpass instead of /usr/bin/sshpass, but that doesn't work either.
As far as I know, error 0x80070002 is a file-not-found error, but I don't know where the command isn't being found. Does the "commandline" setting get loaded before the Linux kernel even starts? Is there a way to launch the command AFTER Linux initializes?
You should add the host's fingerprint to the ~/.ssh/known_hosts file first.
Then add the sshpass command line to the Windows Terminal settings.
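One way to add the fingerprint ahead of time from inside the distro is ssh-keyscan. This is only a sketch; host is a placeholder for your server's name or IP:

# Append the server's host key so the first sshpass/ssh run is not blocked
# by the fingerprint prompt.
ssh-keyscan -H host >> ~/.ssh/known_hosts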
I have three tasks in my playbook. For all of those, Ansible needs to connect to hosts specified in the inventory file. The first two tasks executed well. The third task says
<10.182.1.23> ESTABLISH SSH CONNECTION FOR USER: root
<10.182.1.23> SSH: EXEC sshpass -d12 ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 10.182.1.23 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1485219301.67-103341754305609 `" && echo ansible-tmp-1485219301.67-103341754305609="` echo $HOME/.ansible/tmp/ansible-tmp-1485219301.67-103341754305609 `" ) && sleep 0'"'"''
fatal: [10.182.1.23]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
Here is my playbook.yml:
---
- hosts: all
  strategy: debug
  gather_facts: no
  vars:
    contents: "{{ lookup('file','/etc/redhat-release')}}"
    mydate: "{{lookup('pipe','date +%Y%m%d%H%M%S.%5N')}}"
  tasks:
    - name: cat file content
      debug: msg='the content of file is {{contents}} at date {{mydate}}.'
    - name: second task
      debug: msg='this is second task at time {{mydate}}.'
    - name: fourth task
      command: sudo cat /etc/redhat-release
      register: result
    - debug: var=result
Here is my inventory file
[hosts]
10.182.1.23 ansible_connection=ssh ansible_ssh_user=username ansible_ssh_pass=passphrase
I am not able to understand how it is able to connect to the host for the top two tasks and why it threw an error for the third.
I am new to using Ansible. Please help me with this.
I have three tasks in my playbook. For all of those, Ansible needs to connect to hosts specified in the inventory file.
That's not true. All lookup plugins perform their actions locally on the control machine.
"Lookups: Like all templating, these plugins are evaluated on the Ansible control machine"
I am not able to understand how it is able to connect to the host for the top two tasks and why it threw an error for the third.
Because your first two tasks use the debug module with lookup plugins. They just print the value of the msg argument to the output and do not require a connection to the remote host.
So your first two tasks display the contents of the local file /etc/redhat-release and local date-time.
The third task tries to run the code on the target machine and fails to connect.
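If the intent is to read /etc/redhat-release from the target machine rather than from the control machine, use a module that executes remotely. One possible sketch (not part of the original answer) uses slurp:

- name: Read redhat-release on the target host
  slurp:
    src: /etc/redhat-release
  register: release_file

- debug:
    msg: "remote contents: {{ release_file.content | b64decode }}"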
Try the Linux command below to determine whether SSH itself works:
ssh remoteuser@remoteserver
It shouldn't prompt for a password.
I am a complete Ansible newbie, so apologies in advance!
I am trying to run an Ansible playbook whose role is to enable file sharing/transfer/synchronicity between programs I build locally and on a remote machine (as you can guess, I didn't write the playbook).
My issue is that I cannot ping the remote host unless I use --connection=local. I can, however, SSH to the remote host. When I run the playbook, it throws the error:
host1 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true}
If I do
ansible-playbook cvfms.yml --connection=local
then I don't get the SSH error, but the playbook can't accomplish anything, since I suspect the connection needs to be something other than local for it to run.
For further information, here is my /etc/ansible/hosts file:
[group_name]
host1 ansible_ssh_host=lengau.chpc.ac.za
I also have an /etc/ansible/host_var file giving my username on that machine.
Any help on this issue would be deeply appreciated!
In answer to the comments: when I run ansible-playbook -vvv cvmfs.yml, I get the output:
PLAYBOOK: cvmfs.yml ************************************************************
2 plays in /home/testuser/Documents/DevOps-master/Ansible/cvmfs.yml
PLAY [Enable CVMFS] ************************************************************
TASK [setup] *******************************************************************
Using module file /usr/lib/python2.6/site-packages/ansible-2.2.0-py2.6.egg/ansible/modules/core/system/setup.py
<lengau.chpc.ac.za> ESTABLISH SSH CONNECTION FOR USER: khenninger
<lengau.chpc.ac.za> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/testuser/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=khenninger -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r lengau.chpc.ac.za '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468582573.96-15730857177484 `" && echo ansible-tmp-1468582573.96-15730857177484="` echo $HOME/.ansible/tmp/ansible-tmp-1468582573.96-15730857177484 `" ) && sleep 0'"'"''
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
to retry, use: --limit @/home/testuser/Documents/DevOps-master/Ansible/cvmfs.retry
PLAY RECAP *********************************************************************
host1 : ok=0 changed=0 unreachable=1 failed=0
In response to the other question:
I set up the private keys like this:
On my "home" machine I now have a file /home/testuser/.ssh/id_rsa, which contains the private key I obtained via
ssh-keygen -t rsa.
This private key is stored in the directory /home/user/.ssh/ on the remote machine as well.
As far as I can gather, that was the right thing to do.
I still get the same issues as above when I run ansible-playbook or when I ping.
And to add some weirdness, all of this only happens if I am root. If I am a normal user on my home machine, the ssh works fine, and the playbook runs on the local connection with a new error message, as below:
ansible-playbook cvmfs.yml --connection=local
PLAY [Enable CVMFS] ************************************************************
TASK [setup] *******************************************************************
ok: [196.24.44.83]
ok: [host1]
TASK [Inform the team] *********************************************************
fatal: [host1]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'slack_token' is undefined\n\nThe error appears to have been in '/home/testuser/Documents/DevOps-master/Ansible/cvmfs.yml': line 7, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n pre_tasks:\n - name: Inform the team\n ^ here\n"}
fatal: [196.24.44.83]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'slack_token' is undefined\n\nThe error appears to have been in '/home/testuser/Documents/DevOps-master/Ansible/cvmfs.yml': line 7, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n pre_tasks:\n - name: Inform the team\n ^ here\n"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @cvmfs.retry
PLAY RECAP *********************************************************************
196.24.44.83 : ok=1 changed=0 unreachable=0 failed=1
host1 : ok=1 changed=0 unreachable=0 failed=1
There is something seriously wrong with something on my local machine, I think...
I found the answer! I feel both elated and very dumb (blush)...
The problem was fixed by (after doing all the steps above) running:
ansible-playbook cvmfs.yml --ask-pass
After which point it runs fine. The output of
ansible all -m ping --ask-pass
was then "success" in all cases, not just over the local network. And the playbook runs fine. Yay!
I'm trying to run an Ansible role on multiple servers, but I get an error:
fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh.", "unreachable": true}
My /etc/ansible/hosts file looks like this:
192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user
I have no idea what's going on - everything looks fine - I can log in via SSH, but ansible ping returns the same error.
The log from verbose execution:
<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user
<192.168.0.10> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10 '/bin/sh -c '"'"'( umask 22 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829`" && echo "`echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829`" )'"'"''
Can you help me somehow? If I have to use Ansible in local mode (-c local), then it's useless.
I've tried deleting ansible_sudo_pass and ansible_ssh_user, but it didn't help.
You need to set ansible_ssh_pass as well (or use an SSH key). For example, I am using this in my inventory file:
192.168.33.100 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant
After that I can connect to the remote host:
ansible all -i tests -m ping
With the following result:
192.168.33.100 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Hope that helps you.
EDIT: In newer versions of Ansible, ansible_ssh_pass & ansible_ssh_user have been replaced by ansible_password & ansible_user.
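For example, the inventory line above would look like this with the newer variable names (my adaptation, not part of the original answer):

192.168.33.100 ansible_user=vagrant ansible_password=vagrant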
mkdir /etc/ansible
cat > hosts
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
Go to your playbook directory and run ansible all -m ping or ansible "server-group-name" -m ping.
I had this issue, but it was for a different reason than was documented in other answers. My host that I was trying to deploy to was only available by going through a jump box. Originally, I thought that it was because Ansible wasn't recognizing my SSH config file, but it was. The solution for me was to make sure that the user that was present in the SSH config file matched the user in the Ansible playbook. That resolved the issue for me.
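For illustration only (the host names, IP, and user below are made up), the kind of ~/.ssh/config entry involved looks like this; the User line is what has to match the user the playbook connects as:

# Hypothetical ~/.ssh/config entry for a host reachable only through a jump box.
Host app-server
    HostName 10.0.5.20
    User deploy
    ProxyJump jumpuser@jump.example.com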
Try modifying your hosts file to:
192.168.0.10
192.168.0.11
192.168.0.12
$ ansible -m ping all -vvv
After installing Ansible on Ubuntu or CentOS, you may see "Authentication or permission failure" messages.
Do not panic: the remote user must have access rights to its Ansible temp directory [/home/user_name/.ansible/tmp/].
Fixing those permissions solves the problem.
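The kind of permission fix meant here might look like this on the remote host (a sketch using the path from the message above; user_name is a placeholder):

# Make sure the remote user owns and can write to its Ansible temp directory.
chown -R user_name:user_name /home/user_name/.ansible
chmod -R u+rwX /home/user_name/.ansible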
[Your_server ~]$ ansible -m ping all
rusub-bm-gt | SUCCESS => {
"changed": false,
"ping": "pong"
}
Your_server | SUCCESS => {
"changed": false,
"ping": "pong"
}
Best practice for me: I'm using SSH keys to access the server hosts.
1. Create a hosts file in the inventories folder:
[example]
example1.com
example2.com
example3.com
2. Create the playbook file playbook.yml:
---
- hosts: all
  roles:
    - example
3. Now deploy the playbook against the server hosts in the example group:
ansible-playbook playbook.yml -i inventories/hosts --limit example --user vagrant
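Since this approach relies on SSH keys, the key pair needs to exist and the public key must be on each host first. A typical preparation (a sketch with the placeholder host names from the inventory above) is:

# Generate a key pair (if you don't already have one) and copy the public key
# to each host in the inventory.
ssh-keygen -t rsa
ssh-copy-id vagrant@example1.com
ssh-copy-id vagrant@example2.com
ssh-copy-id vagrant@example3.com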
The ansible_ssh_port changed when the VM was reloaded.
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
So I had to update the inventory/hosts file as follows:
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='centos' ansible_ssh_private_key_file=<key path>
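A convenient way to see the current forwarded port and key path after a reload is vagrant ssh-config; the output shown as comments below is only a sketch and will differ per machine:

vagrant ssh-config
# Host default
#   HostName 127.0.0.1
#   User vagrant
#   Port 2222
#   IdentityFile .vagrant/machines/default/virtualbox/private_key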