Ansible loop lookup issue - variables

I'm trying to install Apache HTTP Server from source using the following task, which results in the error below. Any help, please?
- name: Install Apache, Version n -- {{ apache_version }})
  command: "{{ item }} chdir={{ apache_source_dir }}"
  with_items:
    - >
      ./configure --with-apr={{ apr-dir }} --with-apr-util={{ apr_util_dir }}
      --enable-mods-shared=all --enable-ssl --enable-so
      --with-pcre={{ pcre_dir}}/pcre-config --prefix={{ apache_install_dir }}`
    - /usr/bin/make
    - /usr/bin/make install
  become: yes
Error:
TASK [Install Apache, Version n -- 2.4.43)] ***************************************************
fatal: [physoaapp03-tst]: FAILED! => {"msg": "Unable to look up a name or access an attribute in template string (./configure --with-apr={{ apr-dir }} --with-apr-util={{ apr_util_dir }} --enable-mods-shared=all --enable-ssl --enable-so --with-pcre={{ pcre_dir}}/pcre-config --prefix={{ apache_install_dir }}`\n).\nMake sure your variable name does not contain invalid characters like '-': unsupported operand type(s) for -: 'StrictUndefined' and 'StrictUndefined'"}
Can anyone let me know where I need to improve the code?
Thanks

Make sure your variable name does not contain invalid characters like
'-'
The error already describes the issue and how to resolve it: change {{ apr-dir }} to {{ apr_dir }}, and so on, and change the declaration to match.
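For example, a corrected version of the task might look like this (a sketch assuming the variables are declared with underscores; the stray backtick at the end of the original --prefix line should also be removed):

- name: Install Apache, Version n -- {{ apache_version }}
  command: "{{ item }} chdir={{ apache_source_dir }}"
  with_items:
    - >
      ./configure --with-apr={{ apr_dir }} --with-apr-util={{ apr_util_dir }}
      --enable-mods-shared=all --enable-ssl --enable-so
      --with-pcre={{ pcre_dir }}/pcre-config --prefix={{ apache_install_dir }}
    - /usr/bin/make
    - /usr/bin/make install
  become: yes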
Please go through the Ansible documentation on variable names to avoid this type of problem in the future.

Related

SonarScanner.MSBuild.exe is not recognized on windows agent - GitHub Actions

I'm getting the error below on a Windows agent when starting the SonarQube scanner.
MSBuild.SonarQube.Runner.exe : The term 'MSBuild.SonarQube.Runner.exe' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the
path is correct and try again.
Below is the command I have used inside my workflow.
- name: Build and analyze
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  shell: powershell
  run: |
    MSBuild.SonarQube.Runner.exe begin /k:"${{ env.PROJECT_KEY }}" /v:"${{ env.YEAR }}.${{ env.PERIOD }}.${{ env.REVISION }}.${{ github.run_number }}" /n:${{ env.PROJECT_NAME }} /d:sonar.host.url=${{ env.SONARQUBE_HOST_URL }} /d:sonar.login=${{ secrets.SONAR_TOKEN }} /d:sonar.verbose="false"
    msbuild ProjectName.sln /p:SkipInvalidConfigurations=true /p:TrackFileAccess=false /p:Configuration=Release /p:Platform="x86"
    MSBuild.SonarQube.Runner.exe end /d:sonar.login="${{ secrets.SONAR_TOKEN }}"
Does anyone have an idea how to resolve this? Please help.

Ansible ssh fails with error: Data could not be sent to remote host

I have an Ansible playbook that executes a shell script on the remote host "10.8.8.88" as many times as the number of files provided as a parameter:
ansible-playbook test.yml -e files="file1,file2,file3,file4"
The playbook looks like this:
- name: Call ssh
  shell: ~/execute.sh {{ item }}
  with_items: "{{ files.split(',') }}"
This works fine for a small number of files, say 10 to 15. But I have 145 files in the argument, and that is when the execution broke: the play failed mid-way with the error message below:
TASK [shell] *******************************************************************
[WARNING]: conditional statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: entrycurrdb.stdout.find("{{ BASEPATH
}}/{{ vars[(item | splitext)[1].split('.')[1]] }}/{{ item | basename }}") == -1
and actualfile.stat.exists == True
[WARNING]: sftp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
[WARNING]: scp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
fatal: [10.8.8.88]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"10.8.8.88\". Make sure this host can be reached over ssh: ", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.8.8.88 : ok=941 changed=220 unreachable=1 failed=0 skipped=145 rescued=0 ignored=0
localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I have the latest Ansible, and the pipelining and ssh settings in ansible.cfg are at their defaults.
I have the following questions.
How can I resolve the above issue?
I guess this could be due to a network issue. For testing purposes, is it possible to run an endless SSH "ping" against the remote server to see whether the Ansible command line breaks? It would help me prove my case. A sample command that keeps probing the remote over SSH is what I'm looking for.
Is it possible to force Ansible to retry the SSH connection a few times on such failures, so that it may connect during the retries? If so, I would appreciate knowing where and how that can be set as a vars variable in the playbook, rather than in ansible.cfg.
https://docs.ansible.com/ansible/2.4/intro_configuration.html#retries
Something similar to:
vars:
  ansible_ssh_private_key_file: "{{ key1 }}"
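(For reference, and as an assumption to verify against your Ansible version: recent releases of the ssh connection plugin expose a retries option that, if I recall correctly, can be set per play or host as a variable, which would fit the pattern above:)

vars:
  ansible_ssh_retries: 5  # assumed variable name for the ssh plugin's "retries" option; check your version's docs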
Many Thanks !!

How to use ssh identity file with Github Actions

I'm in the throes of setting up a GitHub Action that should run an SSH command to connect to a private server. The connection settings I have for the private server specify an identityFile, which I own. After this connection I will run a ProxyCommand, so this is essentially a bastion setup, for context.
What I cannot quite figure out at this point is how, and which, GitHub Action supports this configuration. I see the commands on this one (similar to others): https://github.com/appleboy/ssh-action/blob/master/action.yml and no mention of an identityFile property. Is there another way to execute this, or an ssh command that can make this possible?
Would appreciate some pointers, thanks!
If you need some explanation of how to write your action, you can read this article: How to create Github Actions to run tests with docker services.
You just have to create your workflow file and use the appleboy action under the steps keyword:
- name: executing remote ssh commands using password
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    key: ${{ secrets.KEY }}
    key_path: ${{ secrets.KEY_PATH }}
    password: ${{ secrets.PASSWORD }}
    port: ${{ secrets.PORT }}
    script: whoami
With the script line you can execute whatever you want to run on the server, connecting with the parameters set above. For multiple lines, do it like this:
script: |
  pwd
  ls -al
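Since the question mentions connecting through a bastion with a ProxyCommand: the action.yml linked in the question also appears to document proxy_* inputs for exactly that case. A sketch (input names taken from the action's README, so verify them against the version you pin):

- name: ssh via bastion
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    key: ${{ secrets.KEY }}
    proxy_host: ${{ secrets.BASTION_HOST }}        # jump/bastion host
    proxy_username: ${{ secrets.BASTION_USER }}
    proxy_key: ${{ secrets.BASTION_KEY }}          # contents of the identity file for the bastion
    script: whoami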
Hope this helps.

Package installation in salt-stack including --allow-unauthenticated

During deployment, I am trying to install a package.
It works fine in some environments and does not work in others.
I added a flag that is set to True on the problematic environments, and I want the state to add the --allow-unauthenticated option when the flag is on. I found that when I install the package manually with that option, the problem is solved; now I need that to happen automatically.
This is the command that makes it work properly when installing manually:
sudo salt <minion name> cmd.run "sudo apt-get -y --allow-unauthenticated install zabbix-agent"
This is the package installation during deployment:
zabbix-agent-installed:
  pkg.installed:
    - name: zabbix-agent
    - zabbix-agent: '>=4.0.0'
    - ignore_epoch: True
    <Add here>:
    {% if flag == 'True' %}
    - ** allowing unauthenticated syntax **
    {% endif %}
    - require:
      - pkgrepo: zabbix-agent-repo-added
What is the right way to do it there?
Have you tried the skip_verify option?
- skip_verify: True
skip_verify (bool) -- Skip the GPG verification check for the package to be installed
--allow-unauthenticated
Ignore if packages can't be authenticated and don't prompt about it.
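Wired into the state from the question with the conditional flag, that might look like this (a sketch; flag is the question's own variable, and version replaces the original zabbix-agent: '>=4.0.0' line):

zabbix-agent-installed:
  pkg.installed:
    - name: zabbix-agent
    - version: '>=4.0.0'
    - ignore_epoch: True
    {% if flag == 'True' %}
    - skip_verify: True  # maps to apt-get --allow-unauthenticated
    {% endif %}
    - require:
      - pkgrepo: zabbix-agent-repo-added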
If that doesn't work, then you need to convert the pkg.installed state into a cmd.run state so that you can give the extra arguments for apt-get.
I should add that skipping verification checks is dangerous and you should fix the cause of the verification failures instead of skipping the check.

Ansible: how to pass multiple commands

I tried this:
- command: ./configure chdir=/src/package/
- command: /usr/bin/make chdir=/src/package/
- command: /usr/bin/make install chdir=/src/package/
which works, but I was hoping for something neater.
So I tried this, from https://stackoverflow.com/questions/24043561/multiple-commands-in-the-same-line-for-bruker-topspin, which gives me back "no such file or directory":
- command: ./configure;/usr/bin/make;/usr/bin/make install chdir=/src/package/
I tried this too: https://u.osu.edu/hasnan.1/2013/12/16/ansible-run-multiple-commands-using-command-module-and-with-items/
but I couldn't find the right syntax to put:
- command: "{{ item }}" chdir=/src/package/
with_items:
./configure
/usr/bin/make
/usr/bin/make install
That does not work, saying there is a quote issue.
To run multiple shell commands with Ansible, you can use the shell module with a multi-line string (note the pipe after shell:), as shown in this example:
- name: Build nginx
  shell: |
    cd nginx-1.11.13
    sudo ./configure
    sudo make
    sudo make install
If a value in YAML begins with a curly brace ({), the YAML parser assumes it is a dictionary. So, for cases like this where the value contains a (Jinja2) variable, one of the following two strategies needs to be adopted to avoid confusing the YAML parser:
Quote the whole command:
- command: "{{ item }} chdir=/src/package/"
with_items:
- ./configure
- /usr/bin/make
- /usr/bin/make install
or change the order of the arguments:
- command: chdir=/src/package/ {{ item }}
  with_items:
    - ./configure
    - /usr/bin/make
    - /usr/bin/make install
Thanks to Ramon de la Fuente for the alternative suggestion.
Shell works for me.
Simply put, shell is the same as running a shell script.
Notes:
Make sure to use | when running multiple commands.
shell won't return an error if the last command succeeds (just like a normal shell).
Control it with exit 0/1 if you want to stop Ansible when an error occurs.
The following example contains an error mid-script, but the task succeeds because the final command exits cleanly:
- name: test shell with an error
  become: no
  shell: |
    rm -f /test1 # This should be an error.
    echo "test2"
    echo "test1"
    echo "test3" # success
This example shows stopping the script with an exit 1 error:
- name: test shell with exit 1
  become: no
  shell: |
    rm -f /test1 # This should be an error.
    echo "test2"
    exit 1 # this stops ansible due to returning an error
    echo "test1"
    echo "test3" # success
Reference: https://docs.ansible.com/ansible/latest/modules/shell_module.html
You can also do it like this:
- command: "{{ item }}"
args:
chdir: "/src/package/"
with_items:
- "./configure"
- "/usr/bin/make"
- "/usr/bin/make install"
Hope that might help others.
Here is one that works. \o/
- name: "Exec items"
shell: "{{ item }}"
with_items:
- echo "hello"
- echo "hello2"
I faced the same issue. In my case, part of my variables were in a dictionary, i.e. a with_dict variable (looping), and I had to run three commands on each item.key. This solution is most relevant where you have to use a with_dict dictionary and run multiple commands (without requiring with_items).
Using with_dict and with_items in one task didn't help, as it did not resolve the variables.
My task was like:
- name: Make install git source
  command: "{{ item }}"
  with_items:
    - cd {{ tools_dir }}/{{ item.value.git_tar_dir }}
    - make prefix={{ tools_dir }}/{{ item.value.git_tar_dir }} all
    - make prefix={{ tools_dir }}/{{ item.value.git_tar_dir }} install
  with_dict: "{{ git_versions }}"
roles/git/defaults/main.yml was:
---
tool: git
default_git: git_2_6_3
git_versions:
  git_2_6_3:
    git_tar_name: git-2.6.3.tar.gz
    git_tar_dir: git-2.6.3
    git_tar_url: https://www.kernel.org/pub/software/scm/git/git-2.6.3.tar.gz
The above resulted in an error similar to the following for each {{ item }} (for the three commands above). As you can see, the value of tools_dir is not populated (tools_dir is a variable defined in a common role's defaults/main.yml), and the item.value.git_tar_dir value was not populated/resolved either.
failed: [server01.poc.jenkins] => (item=cd {# tools_dir #}/{# item.value.git_tar_dir #}) => {"cmd": "cd '{#' tools_dir '#}/{#' item.value.git_tar_dir '#}'", "failed": true, "item": "cd {# tools_dir #}/{# item.value.git_tar_dir #}", "rc": 2}
msg: [Errno 2] No such file or directory
The solution was easy: instead of using the command module, I used the shell module and created a variable in roles/git/defaults/main.yml.
So, now roles/git/defaults/main.yml looks like:
---
tool: git
default_git: git_2_6_3
git_versions:
  git_2_6_3:
    git_tar_name: git-2.6.3.tar.gz
    git_tar_dir: git-2.6.3
    git_tar_url: https://www.kernel.org/pub/software/scm/git/git-2.6.3.tar.gz

#git_pre_requisites_install_cmds: "cd {{ tools_dir }}/{{ item.value.git_tar_dir }} && make prefix={{ tools_dir }}/{{ item.value.git_tar_dir }} all && make prefix={{ tools_dir }}/{{ item.value.git_tar_dir }} install"
#or use this if you want git installation to work in ~/tools/git-x.x.x
git_pre_requisites_install_cmds: "cd {{ tools_dir }}/{{ item.value.git_tar_dir }} && make prefix=`pwd` all && make prefix=`pwd` install"
#or use this if you want git installation to use the default prefix during make
#git_pre_requisites_install_cmds: "cd {{ tools_dir }}/{{ item.value.git_tar_dir }} && make all && make install"
and the task roles/git/tasks/main.yml looks like:
- name: Make install from git source
  shell: "{{ git_pre_requisites_install_cmds }}"
  become_user: "{{ build_user }}"
  with_dict: "{{ git_versions }}"
  tags:
    - koba
This time, the values were successfully substituted because the module was shell, and the Ansible output echoed the correct values. This didn't require a with_items: loop:
"cmd": "cd ~/tools/git-2.6.3 && make prefix=/home/giga/tools/git-2.6.3 all && make prefix=/home/giga/tools/git-2.6.3 install",