Multiple entries for the same bucket(default) in the passwd file - amazon-s3

I am trying to re-run an Ansible script for an old third-party integration; the task looks like this:
- name: "mount s3fs Fuse FS on boot from [REDACTED] on [REDACTED]"
mount:
name: "{{ [REDACTED] }}/s3/file_access"
src: "{{ s3_file_access_bucket }}:{{ s3_file_access_key }}"
fstype: fuse.s3fs
opts: "_netdev,uid={{ uid }},gid={{ group }},mp_umask=022,allow_other,nonempty,endpoint={{ s3_file_access_region }}"
state: mounted
tags:
- [REDACTED]
I'm receiving this error:
fatal: [REDACTED]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /home/[REDACTED]: s3fs: there are multiple entries for the same bucket(default) in the passwd file.\n"}
I'm trying to find a passwd file to clean out, but I don't know where to find one.
Does anyone recognize this error?

s3fs checks /etc/passwd-s3fs and $HOME/.passwd-s3fs for credentials. It appears that one of these files has duplicate entries that you need to remove.
Your Ansible src stanza also attempts to supply credentials, but I do not believe this will work. Instead, you can supply them via the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
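For reference, here is a minimal sketch of a task that writes a single, bucket-scoped entry to /etc/passwd-s3fs (the s3_file_access_key_id and s3_file_access_secret variable names are placeholders, not variables from your playbook):

- name: Write s3fs credentials file with exactly one entry for the bucket
  copy:
    dest: /etc/passwd-s3fs
    # s3fs expects one line per bucket in the form bucket:ACCESS_KEY_ID:SECRET_ACCESS_KEY;
    # a line without a bucket prefix is treated as the "default" entry, and having more
    # than one such line produces exactly the error you are seeing
    content: "{{ s3_file_access_bucket }}:{{ s3_file_access_key_id }}:{{ s3_file_access_secret }}\n"
    owner: root
    group: root
    mode: "0600"

With the credentials coming from that file (or from the environment variables above), the src of the mount task would then be just the bucket name rather than bucket:key.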

Related

Ansible Playbook Task - make from source fail

I am having a strange issue that has driven me nuts for days - I'm hoping someone may be able to point me in the right direction. I am attempting to run a simple playbook: download a git repository and then build from source. In one of my playbooks this works fine, but in my 2nd playbook I get an error every time I attempt to run the make command.
Keeping it simple for brevity...
tasks:
  - name: Set Python PATH
    become: yes
    shell: export PYTHONPATH=/usr/local/lib/python3/dist-packages
  - name: Update bashrc with PYTHONPATH
    lineinfile:
      path: /home/vagrant/.bashrc
      line: export PYTHONPATH=/usr/local/lib/python3/dist-packages
  - name: cmake
    become_user: vagrant
    shell: cmake ..
    args:
      chdir: /home/vagrant/application/build
This works fine, though I had to add become_user: vagrant even though I did not in my other playbook. (I've split the cmake, make, and make install commands up for troubleshooting.) Then I run:
- name: make
  become_user: vagrant
  shell: make
  args:
    chdir: /home/vagrant/application/build
This fails every time with a VERY large amount of red text. I have logged in to the target and can run make successfully there, but not via Ansible.
I have tried the community make module and many variations of this, including become: yes, but I get the errors every time. This is the beginning of the error:
fatal: [server.local]: FAILED! => {"changed": true, "cmd": ["make", "all"], "delta": "0:00:01.218515", "end": "2021-12-30 16:39:36.586928", "msg": "non-zero return code", "rc": 2, "start": "2021-12-30 16:39:35.368413", "stderr": "ERROR:gnuradio.grc.core.FlowGraph:Failed to evaluate variable block variable_ax25_decoder_0_0\nTraceback (most recent call last):\n File \"/usr/lib/python3/dist-packages/gnuradio/grc/core/FlowGraph.py\", line 227, in renew_namespace\n
Do you have any suggestions as to why make would fail in this instance when it runs fine in another? I've tried multiple VMs, fresh installs, etc., but am having no joy.
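One thing worth checking, given that the failure comes from gnuradio failing to evaluate a Python block: the export in the "Set Python PATH" task only applies to that single shell invocation, and .bashrc is not read by the non-interactive shell Ansible uses, so PYTHONPATH is most likely unset when make runs under Ansible even though it is set in your login shell. A sketch of passing it explicitly to the task (the path is the one from your own playbook):

- name: make
  become_user: vagrant
  shell: make
  args:
    chdir: /home/vagrant/application/build
  # each Ansible task runs in a fresh, non-interactive shell, so the variable
  # has to be supplied per task rather than via export or .bashrc
  environment:
    PYTHONPATH: /usr/local/lib/python3/dist-packages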

Ansible moving files

I am creating a role to deploy a Jira instance. My question is: how can I move files from one directory to another? I was trying something like this:
- name: Create Jira installation directory
  command: mv "/tmp/atlassian-jira-software-{{ jira_version }}-standalone/*" "{{ installation_directory }}"
  when: not is_jira_installed.stat.exists
But it's not working. I want to copy all files from one directory to another without copying the directory itself.
From the synopsis of the command module:
The command(s) will not be processed through the shell, so variables like $HOSTNAME and operations like "*", "<", ">", "|", ";" and "&" will not work. Use the ansible.builtin.shell module if you need these features.
So your issue is that the command module does not expand the wildcard * as you expect; you should use the shell module instead:
- name: Create Jira installation directory
  shell: "mv /tmp/atlassian-jira-software-{{ jira_version }}-standalone/* {{ installation_directory }}"
  when: not is_jira_installed.stat.exists
Note that you can also do this without resorting to command or shell, by using the copy module:
- copy:
    src: "/tmp/atlassian-jira-software-{{ jira_version }}-standalone/"
    dest: "{{ installation_directory }}"
    remote_src: yes

Ansible task includes undefined var, despite being defined in defaults/main.yml

I am trying to create a Galaxy role for our org's internal galaxy, which I am testing first locally. In our org we use a common list of defaults across all roles.
Ansible throws a "The task includes an option with an undefined variable. The error was: 'redis_download_url' is undefined" error when running my playbook, despite having defined the variable in defaults/main.yml:
# Download
redis_version: "6.2.3"
redis_download_url: "https://download.redis.io/releases/redis-{{ redis_version }}.tar.gz"
When running my simple role/playbook.yml:
---
- hosts: all
  become: true
  tasks:
    - include: tasks/main.yml
This links to tasks/main.yml:
---
- name: Check ansible version
  assert:
    that: "ansible_version.full is version_compare('2.4', '>=')"
    msg: "Please use Ansible 2.4 or later"
- include: download.yml
  tags:
    - download
- include: install.yml
  tags:
    - install
It should pull the tar file via tasks/download.yml, as follows:
---
- name: Download Redis
  get_url:
    url: "{{ redis_download_url }}"
    dest: /usr/local/src/redis-{{ redis_version }}.tar.gz
- name: Extract Redis tarball
  unarchive:
    src: /usr/local/src/redis-{{ redis_version }}.tar.gz
    dest: /usr/local/src
    creates: /usr/local/src/redis-{{ redis_version }}/Makefile
    copy: no
The redis_download_url var is defined in defaults/main.yml, which, as I understand it, Ansible should be able to locate. I also have similar vars defined in defaults/task.yml, e.g.:
redis_user: redis
redis_group: "{{ redis_user }}"
redis_port: "6379"
redis_root_dir: "/opt/redis"
redis_config_dir: "/etc/redis"
redis_conf_file: "{{ redis_config_dir }}/{{ redis_port }}.conf"
redis_password: "change-me"
redis_protected_mode: "yes"
and I assume they also cannot be found/seen by Ansible (though it does not get that far). I have also checked all file permissions, and they seem fine.
Apologies in advance if the question is badly formatted.
As per documentation:
If you include a task file from a role, it will NOT trigger role behavior, this only happens when running as a role, include_role will work.
To get the role functionality of reading variables from defaults/main.yml, you'll need to use include_role or roles: [].
- hosts: all
  become: true
  tasks:
    - include_role:
        name: myrole
OR
- hosts: all
  become: true
  roles:
    - myrole

How to use a Variable in Ansible aws_ec2 plugin

I want to filter EC2 instances according to the Environment tag, which I define when I run the playbook, e.g. ansible-playbook start.yml -e env=dev
However, it seems that the plugin is not parsing variables. Any idea how to achieve this?
My aws_ec2.yml:
---
plugin: aws_ec2
regions:
  - eu-central-1
filters:
  tag:Secure: 'yes'
  tag:Environment: "{{ env }}"
hostnames:
  - private-ip-address
strict: False
groups:
keyed_groups:
  - key: tags.Function
    separator: ''
Edit
There is no error message when running the playbook. The only problem is that Ansible handles the variable literally as the string tag:Environment: "{{ env }}" instead of the value tag:Environment: dev.
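One workaround that sidesteps templating inside the inventory file entirely is to let the plugin build one group per Environment tag with keyed_groups and then select the group at play time. A sketch (the env prefix and the resulting group names are assumptions, not something the plugin creates by default):

plugin: aws_ec2
regions:
  - eu-central-1
filters:
  tag:Secure: 'yes'
keyed_groups:
  # builds groups such as env_dev and env_prod from the Environment tag
  - key: tags.Environment
    prefix: env
hostnames:
  - private-ip-address

Then in start.yml target the group dynamically:

- hosts: "env_{{ env }}"

so that ansible-playbook start.yml -e env=dev only acts on the dev instances.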

Ansible not picking up custom module

I'm having issues with Ansible picking up a module that I've added.
The module is called 'passwordstore' https://github.com/morphje/ansible_pass_lookup/.
I'm using Ansible 2.2.
In my playbook, I've added a 'library' folder and have added the contents of that GitHub directory to that folder. I've also tried uncommenting library = /usr/share/ansible/modules and adding the module files there, but it still doesn't get picked up.
I have also tried setting the environment variable ANSIBLE_LIBRARY=/usr/share/ansible/modules.
My Ansible playbook looks like this:
---
- name: example play
  hosts: all
  gather_facts: false
  tasks:
    - name: set password
      debug: msg="{{ lookup('passwordstore', 'files/test create=true')}}"
And when I run this I get this error:
ansible-playbook main.yml
PLAY [example play] ******************************************************
TASK [set password] ************************************************************
fatal: [backend.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
fatal: [mastery.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
to retry, use: --limit #/etc/ansible/roles/test-role/main.retry
Any guidance on what I'm missing? It may just be the way I'm trying to add the custom module, but any help would be appreciated.
It's a lookup plugin (not a module), so it should go into a directory named lookup_plugins (not library).
Alternatively, add the path to the cloned repository in ansible.cfg using the lookup_plugins setting.
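For example, with the playbook at main.yml, a layout along these lines should be picked up without any configuration changes (the plugin file has to be named passwordstore.py so that it matches the lookup name):

main.yml
lookup_plugins/
    passwordstore.py    # the plugin file from the cloned repository

Or, if you prefer a shared location, point ansible.cfg at it (the relative path here is just an example):

[defaults]
lookup_plugins = ./lookup_plugins:/usr/share/ansible/plugins/lookup

The equivalent environment variable is ANSIBLE_LOOKUP_PLUGINS, not ANSIBLE_LIBRARY, which only covers modules.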