I am creating a role to deploy a Jira instance. My question is: how can I move files from one directory to another? I was trying something like this:

- name: Create Jira installation directory
  command: mv "/tmp/atlassian-jira-software-{{ jira_version }}-standalone/*" "{{ installation_directory }}"
  when: not is_jira_installed.stat.exists

But it's not working. I want to move all the files from one directory to another, without the enclosing directory itself.
From the synopsis of the command module:
The command(s) will not be processed through the shell, so variables like $HOSTNAME and operations like "*", "<", ">", "|", ";" and "&" will not work. Use the ansible.builtin.shell module if you need these features.
So your issue is that the command module does not expand the wildcard * the way you expect; you should use the shell module instead:
- name: Create Jira installation directory
  shell: "mv /tmp/atlassian-jira-software-{{ jira_version }}-standalone/* {{ installation_directory }}"
  when: not is_jira_installed.stat.exists
Now, please note that you can also do this without resorting to command or shell at all, by using the copy module:
- copy:
    src: "/tmp/atlassian-jira-software-{{ jira_version }}-standalone/"
    dest: "{{ installation_directory }}"
    remote_src: yes
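One thing to keep in mind: copy with remote_src copies rather than moves, so the extracted files stay behind in /tmp. A hedged sketch of the full sequence with a cleanup step added (the when condition is carried over from the original task):

```yaml
- name: Copy Jira files into the installation directory
  copy:
    src: "/tmp/atlassian-jira-software-{{ jira_version }}-standalone/"
    dest: "{{ installation_directory }}"
    remote_src: yes
  when: not is_jira_installed.stat.exists

# copy leaves the source in place; remove the leftover extraction directory
- name: Remove the temporary extraction directory
  file:
    path: "/tmp/atlassian-jira-software-{{ jira_version }}-standalone"
    state: absent
  when: not is_jira_installed.stat.exists
```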
I really don't know if this is a simple (it must be), common, or complex task.
I have a buildspec.yml file in my CodeBuild project, and I am trying to append the version written in the package.json file to the output artifact.
I have already seen a lot of tutorials that teach how to append the date (not really useful to me), and others that tell me to execute a version.sh file with this:

echo $(sed -nr 's/^\s*"version": "([0-9]{1,}.[0-9]{1,}.*)",$/\1/p' package.json)

and set it in a variable (it doesn't work).
I'm ending up with a build folder called: "my-project-$(version.sh)".
The CodeBuild environment uses Ubuntu and Node.js.
Update (solved):
My version.sh file:

#!/usr/bin/env bash
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}\.[0-9]{1,}.*)",$/\1/p' package.json)
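As a quick sanity check, the sed expression can be exercised locally against a throwaway package.json (the sample file and paths below are illustrative):

```shell
# Create a minimal package.json to test the extraction against (sample data)
cat > /tmp/package.json <<'EOF'
{
  "name": "my-project",
  "version": "1.2.3",
  "private": true
}
EOF

# The same sed expression as in version.sh, pointed at the sample file
version=$(sed -nr 's/^\s*"version": "([0-9]{1,}\.[0-9]{1,}.*)",$/\1/p' /tmp/package.json)
echo "$version"
```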
Then I just found out two things:
Allow execute access to your version.sh file:

git update-index --add --chmod=+x version.sh

Declare a variable in any phase of the buildspec; I did it in the build phase (just to make sure the repository is already copied into the environment):

TAG=$($CODEBUILD_SRC_DIR/version.sh)
Then reference it in the artifact's versioned name:

artifacts:
  files:
    - '**/*'
  name: workover-frontend-$TAG
As a result, my build artifact's name is: myproject-1.0.0
In my case this script did not want to fetch data from package.json. On my local machine it worked great, but on AWS it didn't. I had to use chmod in a different way, because I got a message saying I didn't have the right permissions. My buildspec:
version: 0.2

env:
  variables:
    latestTag: ""

phases:
  pre_build:
    commands:
      - "echo sed version"
      - sed --version
  build:
    commands:
      - chmod +x version.sh
      - latestTag=$($CODEBUILD_SRC_DIR/version.sh)
      - "echo $latestTag"

artifacts:
  files:
    - '**/*'
  discard-paths: yes
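The permission problem is easy to reproduce outside CodeBuild; this sketch (paths are illustrative) shows why the chmod +x build command is needed before calling the script:

```shell
# A freshly written script has no executable bit by default (umask 022)
cat > /tmp/version.sh <<'EOF'
#!/usr/bin/env bash
echo 1.0.0
EOF

# Direct execution fails until the bit is set; this mirrors the
# "Permission denied" message seen in the CodeBuild console
/tmp/version.sh 2>/dev/null && before=ok || before=denied
echo "$before"

# chmod +x is the fix applied in the build phase above
chmod +x /tmp/version.sh
latestTag=$(/tmp/version.sh)
echo "$latestTag"
```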
And the results in the console: [CodeBuild console screenshot]
I should also mention that when I put just, for example, echo 222 into the version.sh file, I get the right answer in the CodeBuild console.
I am trying to create a Galaxy role for our org's internal Galaxy, which I am first testing locally. In our org we use a common list of defaults across all roles.
Ansible is throwing a "The task includes an option with an undefined variable. The error was: 'redis_download_url' is undefined" error when running my playbook, despite having defined the variable in defaults/main.yml:
# Download
redis_version: "6.2.3"
redis_download_url: "https://download.redis.io/releases/redis-{{ redis_version }}.tar.gz"
When running my simple role/playbook.yml:
---
- hosts: all
  become: true
  tasks:
    - include: tasks/main.yml
This links to tasks/main.yml:
---
- name: Check ansible version
  assert:
    that: "ansible_version.full is version_compare('2.4', '>=')"
    msg: "Please use Ansible 2.4 or later"

- include: download.yml
  tags:
    - download

- include: install.yml
  tags:
    - install
It should pull the tar file down in tasks/download.yml, as follows:
---
- name: Download Redis
  get_url:
    url: "{{ redis_download_url }}"
    dest: /usr/local/src/redis-{{ redis_version }}.tar.gz

- name: Extract Redis tarball
  unarchive:
    src: /usr/local/src/redis-{{ redis_version }}.tar.gz
    dest: /usr/local/src
    creates: /usr/local/src/redis-{{ redis_version }}/Makefile
    copy: no
The redis_download_url var is defined in defaults/main.yml, which, as I understand it, is where Ansible should be able to locate it. I also have similar vars defined in defaults/task.yml, e.g.:
redis_user: redis
redis_group: "{{ redis_user }}"
redis_port: "6379"
redis_root_dir: "/opt/redis"
redis_config_dir: "/etc/redis"
redis_conf_file: "{{ redis_config_dir }}/{{ redis_port }}.conf"
redis_password: "change-me"
redis_protected_mode: "yes"
and I assume Ansible cannot find/see them either (though it does not get that far). I have also checked all the file permissions, and they seem to be fine.
Apologies in advance if the question is badly formatted.
As per documentation:
If you include a task file from a role, it will NOT trigger role behavior, this only happens when running as a role, include_role will work.
To get the role functionality of reading variables from defaults/main.yml, you'll need to use include_role or the roles: keyword:
- hosts: all
  become: true
  tasks:
    - include_role:
        name: myrole

OR

- hosts: all
  become: true
  roles:
    - myrole
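As a side note, the bare include action used in tasks/main.yml has been deprecated for some time (and is removed in recent ansible-core releases); inside a role, import_tasks is the closest replacement, since tags applied to it propagate to the imported tasks. A sketch of the role's tasks/main.yml updated accordingly:

```yaml
# tasks/main.yml with the deprecated bare include replaced
- import_tasks: download.yml
  tags:
    - download

- import_tasks: install.yml
  tags:
    - install
```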
I want to filter EC2 instances according to the Environment tag, which I define when I run the script, e.g. ansible-playbook start.yml -e env=dev
However, it seems that the plugin is not parsing variables. Any idea how to achieve this?
my aws_ec2.yml:
---
plugin: aws_ec2
regions:
  - eu-central-1
filters:
  tag:Secure: 'yes'
  tag:Environment: "{{ env }}"
hostnames:
  - private-ip-address
strict: False
groups:
keyed_groups:
  - key: tags.Function
    separator: ''
Edit
There is no error message when running the playbook. The only problem is that Ansible handles the variable literally as the string tag:Environment: "{{ env }}" instead of the value tag:Environment: dev
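One possible workaround: inventory files are parsed before extra vars are applied, so -e env=dev is generally not visible to the plugin (newer ansible-core releases have a use_extra_vars inventory setting that may change this). Reading a shell environment variable with the env lookup sidesteps the problem; the ENVIRONMENT variable name below is illustrative, not part of the original config:

```yaml
plugin: aws_ec2
regions:
  - eu-central-1
filters:
  tag:Secure: 'yes'
  # run as: ENVIRONMENT=dev ansible-playbook start.yml
  tag:Environment: "{{ lookup('env', 'ENVIRONMENT') | default('dev', true) }}"
hostnames:
  - private-ip-address
```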
I am trying to re-run an Ansible script on an old 3rd party integration, the command looks like this:
- name: "mount s3fs Fuse FS on boot from [REDACTED] on [REDACTED]"
  mount:
    name: "{{ [REDACTED] }}/s3/file_access"
    src: "{{ s3_file_access_bucket }}:{{ s3_file_access_key }}"
    fstype: fuse.s3fs
    opts: "_netdev,uid={{ uid }},gid={{ group }},mp_umask=022,allow_other,nonempty,endpoint={{ s3_file_access_region }}"
    state: mounted
  tags:
    - [REDACTED]
I'm receiving this error:
fatal: [REDACTED]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /home/[REDACTED]: s3fs: there are multiple entries for the same bucket(default) in the passwd file.\n"}
I'm trying to find a passwd file to clean out, but I don't know where to find it.
Does anyone recognize this error?
s3fs checks /etc/passwd-s3fs and $HOME/.passwd-s3fs for credentials. It appears that one of these files has duplicate entries that you need to remove.
Your Ansible src stanza also attempts to supply credentials, but I do not believe this will work. Instead, you can supply them via the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
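For reference, entries in the passwd files take the form bucket:ACCESS_KEY_ID:SECRET_ACCESS_KEY; a bare ACCESS_KEY_ID:SECRET_ACCESS_KEY line acts as the default entry, and two default entries trigger exactly the "multiple entries for the same bucket(default)" message. If the playbook should supply the credentials itself, a hedged sketch using the environment keyword on the mount task (all variable names below are illustrative, not from the original playbook):

```yaml
- name: mount s3fs Fuse FS on boot
  mount:
    name: "{{ mount_root }}/s3/file_access"   # mount_root is illustrative
    src: "{{ s3_file_access_bucket }}"        # bucket name only, no key here
    fstype: fuse.s3fs
    opts: "_netdev,allow_other,endpoint={{ s3_file_access_region }}"
    state: mounted
  environment:
    AWSACCESSKEYID: "{{ s3_access_key_id }}"          # illustrative var names
    AWSSECRETACCESSKEY: "{{ s3_secret_access_key }}"
```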
Hi, I have a task which is as follows:
- name: Replace log directory in configuration
  lineinfile:
    path: $HOME/amsible_test/test.txt
    regexp: '^dataDir='
    line: 'dataDir=$HOME/.zookeeper_log'
It runs fine, but the issue is that it writes the line literally as dataDir=$HOME/.zookeeper_log.
As per my understanding, it should expand $HOME to /home/username on Ubuntu 16.04, i.e. it should write dataDir=/home/username/.zookeeper_log, but it is not doing so.
Any suggestion on what I am doing wrong? I have tried many alternatives for the string parsing, but no luck.
Thanks in advance.
Hi, this worked for me:
- name: test connection
  shell: echo $HOME
  register: user_home

- name: Replace log directory in configuration
  lineinfile:
    path: $HOME/amsible_test/test.txt
    regexp: '^dataDir='
    line: 'dataDir={{ user_home.stdout }}/.zookeeper_log'
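For context on why the workaround is needed: Ansible modules do not pass their arguments through a shell, so $HOME inside a module option is written literally; echo $HOME works because the shell module does expand it. Assuming fact gathering is enabled, the same result can be had without the extra task via the ansible_env fact, e.g.:

```yaml
# ansible_env.HOME is the remote user's home directory, collected as a fact
- name: Replace log directory in configuration
  lineinfile:
    path: "{{ ansible_env.HOME }}/amsible_test/test.txt"
    regexp: '^dataDir='
    line: "dataDir={{ ansible_env.HOME }}/.zookeeper_log"
```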