I'm using a block like the one below:
- name: Ensure that the existing certificate has a certain domain in its subjectAltName
openssl_certificate:
path: /etc/ssl/crt/example.com.crt
provider: selfsigned
subject_alt_name:
- www.example.com
- test.example.com
To generate a self-signed cert with Ansible, I'd like to use the IPs in my inventory file as subject_alt_names, something like:
- name: Generate cert
openssl_certificate:
path: ssl/mongo-test.crt
privatekey_path: ssl/mongo-test.pem
csr_path: ssl/mongo-test.csr
provider: selfsigned
subject_alt_name:
- IP:{{hostvars[item].ansible_host}}
So that I end up with
- name: Generate cert
openssl_certificate:
path: ssl/mongo-test.crt
privatekey_path: ssl/mongo-test.pem
csr_path: ssl/mongo-test.csr
provider: selfsigned
subject_alt_name:
- IP:10.136.31.37
- IP:10.136.29.52
- IP:10.136.30.53
How do I get all my inventory IPs into the subject_alt_name list?
I've tried using with_items, but that creates a new cert per IP address, and each iteration overwrites the last.
I know I am not answering your question directly, but I had the same problem and chose another approach, hoping it could apply to you too.
I created an openssl.conf file that is templated with Jinja:
[ req ]
prompt = no
distinguished_name = req_distinguished_name
{% if letsencrypt_sans_domains[item] is defined and letsencrypt_sans_domains[item] | length > 0 %}
req_extensions = req_ext
{% endif %}
string_mask = utf8only
default_md = sha256
[ req_distinguished_name ]
O=Organization
L=Boston
ST=Massachusetts
C=US
CN={{ item }}
{% if letsencrypt_sans_domains[item] is defined and letsencrypt_sans_domains[item] | length > 0 %}
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = {{ item }}
{# a set counter would not survive loop iterations in Jinja2; loop.index does the counting, and DNS.1 is already taken by the CN above #}
{% for domain in letsencrypt_sans_domains[item] %}
DNS.{{ loop.index + 1 }} = {{ domain }}
{% endfor %}
{% endif %}
Then I deploy the file using the template module and call:
- name: "Generate CSR"
command: "openssl req -config openssl_req_{{ item }}.conf -nodes -new -newkey rsa:4096 -out {{ item }}.csr -keyout {{ item }}.key"
with_items: "{{ letsencrypt_domains | default([]) }}"
The variables letsencrypt_sans_domains and letsencrypt_domains point to:
letsencrypt_domains: [
  "a.b.com"
]
letsencrypt_sans_domains: {
  "a.b.com": [ "b.b.com", "c.b.com", "d.b.com" ]
}
Of course, if Let's Encrypt is your use case, you'll need to answer the challenge for all SAN domains.
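The template deployment step mentioned above could be sketched as follows (the src filename is an assumption; only the dest name is fixed by the -config argument of the CSR command):

```yaml
# Sketch: render one OpenSSL config per domain before generating CSRs.
# The template source name openssl_req.conf.j2 is assumed.
- name: "Deploy per-domain OpenSSL config"
  template:
    src: openssl_req.conf.j2
    dest: "openssl_req_{{ item }}.conf"
  with_items: "{{ letsencrypt_domains | default([]) }}"
```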
This can be done via
- name: Generate an OpenSSL CSR with subjectAltName extension with dynamic list
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
subject_alt_name: "{{ item.value | map('regex_replace', '^', 'IP:') | list }}"
with_dict:
ips:
- 10.10.0.11
- 10.10.0.12
- 10.10.0.13
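To feed the original question's inventory IPs into the same parameter, here is a sketch assuming every host in the inventory defines ansible_host:

```yaml
# Sketch: build the SAN list from all inventory hosts' ansible_host values.
- name: Generate CSR with all inventory IPs as SANs
  openssl_csr:
    path: ssl/mongo-test.csr
    privatekey_path: ssl/mongo-test.pem
    subject_alt_name: "{{ groups['all']
      | map('extract', hostvars, 'ansible_host')
      | map('regex_replace', '^', 'IP:')
      | list }}"
```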
I have an inventory file containing 200 servers and their respective variables, as shown in the sample below:
[myhost1.mrsh.com]
myhost1.mrsh.com ORACLE_HOME=/u/orahome12/middleware/12c_db1 ansible_user=wladmin
[myhost2.mrsh.com]
myhost2.mrsh.com ORACLE_HOME=/u/orahome12/middleware/12c_db1 ansible_user=wladmin
..........
........
I ask the user to enter any hostname, which is passed to the hostnames variable as below:
ansible-playbook /web/playbooks/automation/applycpupatch/applycpupatch.yml -i /web/playbooks/automation/applycpupatch/applycpupatch.hosts -f 5 -e action=status -e hostnames='myhost1
myhost2' -e patch_file='p33286132_122130_Generic.zip'
If myhost1 is present in the applycpupatch.hosts file, I then wish to create a dynamic inventory using add_host containing only myhost1 and its variables, like ORACLE_HOME.
Below is my code:
- name: "Play 1 - Set Destination details"
hosts: all
tasks:
- add_host:
name: "{{ item | upper }}"
groups: dest_nodes
ansible_user: "{{ hostvars[item + '*'].ansible_user }}"
ORACLE_HOME: "{{ hostvars[item + '*'].ORACLE_HOME }}"
when: inventory_hostname | regex_search(item)"
with_items: "{{ hostnames.split() }}"
Unfortunately, I get the error below:
TASK [add_host] ****************************************************************
Saturday 20 November 2021 19:05:38 -0600 (0:00:00.059) 0:00:23.532 *****
[0;31mfatal: [myhost222.mrsh.com]: FAILED! => {"msg": "The conditional check 'inventory_hostname | regex_search(item)\"' failed. The error was: template error while templating string: unexpected char '\"' at 45. String: {% if inventory_hostname | regex_search(item)\" %} True {% else %} False {% endif %}\n\nThe error appears to be in '/web/playbooks/automation/applycpupatch/applycpupatch.yml': line 36, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - add_host:\n ^ here\n"}[0m
I also tried the below, but it fails with an error:
ORACLE_HOME: "{{ hostvars['all'][item + '*'].ORACLE_HOME }}"
Thus my dynamic inventory constructed runtime dest_nodes in this example should have ONLY the below.
myhost1.mrsh.com ORACLE_HOME=/u/orahome12/middleware/12c_db1 ansible_user=wladmin
myhost2.mrsh.com ORACLE_HOME=/u/orahome12/middleware/12c_db1 ansible_user=wladmin
I don't fully understand what you want, but you have a lot of errors to fix in your playbook:
1- Launch your playbook with -e hostnames='myhost1,myhost2'.
2- Fix your playbook: you have to test the result of your regex_search, use the variable inventory_hostname, and use split(',').
A sample:
- name: "Play 1 - Set Destination details"
  hosts: all
  tasks:
    - debug:
        msg: "{{ item }} - {{ hostvars[inventory_hostname].ORACLE_HOME }}"
      # regex_search returns None on no match, so plain truthiness is enough
      when: inventory_hostname | regex_search(item)
      with_items: "{{ hostnames.split(',') }}"
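Applied back to the add_host task from the question, those fixes would look roughly like this (an untested sketch; it keeps the question's dest_nodes group and variables):

```yaml
- name: "Play 1 - Set Destination details"
  hosts: all
  tasks:
    - add_host:
        name: "{{ inventory_hostname }}"
        groups: dest_nodes
        # each host already knows its own vars, so no item + '*' pattern is needed
        ansible_user: "{{ hostvars[inventory_hostname].ansible_user }}"
        ORACLE_HOME: "{{ hostvars[inventory_hostname].ORACLE_HOME }}"
      when: inventory_hostname | regex_search(item)
      with_items: "{{ hostnames.split(',') }}"
```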
In other words, I want the content from an Ansible variable to be written to a particular "column" in a config file, and I don't want that content to be wrapped.
The task:
- name: build cloud-init file
local_action:
module: ansible.builtin.template
src: cloud_config.j2
dest: ./cloud_config.yml
mode: 0640
tags: ['containers', 'containers:configuration']
The variable, which lives in a vault file:
machineuser_key: |
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
...
The template:
write_files:
{# ssh config to allow our deployment user to fetch the Ansible repo #}
{# See https://docs.github.com/en/developers/overview/managing-deploy-keys#machine-users #}
- path: /home/{{ gcp_deploy_user }}/.ssh/{{ gcp_deploy_user}}_github_rsa
permissions: 600
owner: {{ gcp_deploy_user }}
content: |
{{ machineuser_key }}
The output of that template task:
write_files:
- path: /home/omegasphinx/.ssh/omegasphinx_github_rsa
permissions: 600
owner: omegasphinx
content: |
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
...
I know that blockinfile is a recommended module when one wants to write a config file, and that I should use the "Block Indentation Indicator" to do this. But when I do these things, it still wraps the content of the SSH key. That blockinfile task looks like this:
- name: add ssh private key to cloud-init file
local_action:
module: ansible.builtin.blockinfile
path: cloud_config.yml
mode: 0440
insertafter: "# machineuser_ssh_block"
block: |2
- path: /home/{{ gcp_deploy_user }}/.ssh/{{ gcp_deploy_user}}_github_rsa
permissions: 600
owner: {{ gcp_deploy_user }}
content: |
{{ machineuser_key }}
tags: ['containers', 'containers:configuration']
And the .j2 template looks like this:
write_files:
# machineuser_ssh_block
The output then looks like this (exactly the same):
write_files:
- path: /home/omegasphinx/.ssh/omegasphinx_github_rsa
permissions: 600
owner: omegasphinx
content: |
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
...
Other things I've tried:
I disabled tabs-to-spaces in my editor and then added tabs before each line of the SSH key in the machineuser_key variable.
The docs say "You can manually disable the lstrip_blocks behavior by putting a plus sign (+) at the start of a block"; so I added a block and then indented the variable inside the block:
{%+ if machineuser_key is defined %}
{{ machineuser_key }}
{% endif %}
My understanding of whitespace in Jinja2:
To give you some context on what I'm expecting to happen. The "Whitespace Control" section of the Jinja docs states that "a single trailing newline is stripped if present" but "other whitespace (spaces, tabs, newlines etc.) is returned unchanged." In my case, I have "other whitespace": spaces to the left of each line of the variable machineuser_key. But obviously I'm missing something.
As far as I can tell from your question, it's not Jinja whitespace control that's jamming up your outcome; it's the fact that machineuser_key itself has embedded newlines. A lot of Helm charts have the same problem, which lets you take advantage of the same workarounds:
take advantage of the fact that YAML is a superset of JSON
use the indent filter to get you and the YAML block on the same page
JSON as YAML approach
- path: /home/{{ gcp_deploy_user }}/.ssh/{{ gcp_deploy_user}}_github_rsa
permissions: 600
owner: {{ gcp_deploy_user }}
content: {{ machineuser_key | to_json }}
resulting in
owner: gcpawesomeuser
content: "-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn\nETC ETC ETC\n"
Using the indent filter
- path: /home/{{ gcp_deploy_user }}/.ssh/{{ gcp_deploy_user}}_github_rsa
permissions: 600
owner: {{ gcp_deploy_user }}
content: |
{{ machineuser_key | indent(2) }}
# with the (2) matching the number of spaces under "content" where the mustaches start
producing:
owner: gcpawesomeuser
content: |
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
ETC ETC ETC
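If you'd rather keep the blockinfile task from the question, the same indent trick should work there as well; a sketch (untested, and the indent width must match however deep the mustaches sit below content:):

```yaml
block: |2
    - path: /home/{{ gcp_deploy_user }}/.ssh/{{ gcp_deploy_user }}_github_rsa
      permissions: 600
      owner: {{ gcp_deploy_user }}
      content: |
        {{ machineuser_key | indent(8) }}
```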
I have Terraform code that deploys the frontend application and includes the ingress.yaml Helm template below.
ingress.yaml
{{- if .Values.ingress.enabled -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ .Values.global.namespace }}-ingress
namespace: {{ .Values.global.namespace }}
labels:
{{- include "test-frontend.labels" . | nindent 4 }}
annotations:
kubernetes.io/ingress.class: "gce-internal"
kubernetes.io/ingress.allow-http: "false"
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
backend:
serviceName: {{ .servicename }}
servicePort: {{ .serviceport }}
{{- end }}
{{- end }}
{{- end }}
values.yaml
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "gce-internal"
kubernetes.io/ingress.regional-static-ip-name : "ingress-internal-static-ip"
kubernetes.io/ingress.allow-http: "false"
hosts:
- host: test-dev.test.com
paths:
- path: "/*"
servicename: test-frontend-service
serviceport: 80
- path: "/api/*"
servicename: test-backend-service
serviceport: 80
tls:
- hosts:
- test-dev.test.com
secretName: ingress-tls-credential-file
type: kubernetes.io/tls
crt: <<test.pem value>>
key: <<test.key value>>
The terraform apply command ran successfully; in GCP the certificate is accepted, and the ingress is up and running in the Kubernetes service. But if I pass the .crt and .key as files in values.yaml in the Terraform code:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "gce-internal"
kubernetes.io/ingress.regional-static-ip-name : "ingress-internal-static-ip"
kubernetes.io/ingress.allow-http: "false"
hosts:
- host: test-dev.test.com
paths:
- path: "/*"
servicename: test-frontend-service
serviceport: 80
- path: "/api/*"
servicename: test-backend-service
serviceport: 80
tls:
- hosts:
- test-dev.test.com
secretName: ingress-tls-credential-file
type: kubernetes.io/tls
crt: file(../../.secret/test.crt)
key: file(../../.secret/test.key)
values.yaml sends the certificate to helm -> templates -> secret.yaml, which creates the secret (ingress-tls-credential-file).
secret.yaml
{{- if .Values.ingress.tls }}
{{- $namespace := .Values.global.namespace }}
{{- range .Values.ingress.tls }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .secretName }}
namespace: {{ $namespace }}
labels:
{{- include "test-frontend.labels" $ | nindent 4 }}
type: {{ .type }}
data:
tls.crt: {{ toJson .crt | b64enc | quote }}
tls.key: {{ toJson .key | b64enc | quote }}
{{- end }}
{{- end }}
We are getting the error below in GCP -> Kubernetes Engine -> Services & Ingress. How do I pass the files to the values.yaml file?
Error syncing to GCP: error running load balancer syncing routine:
loadbalancer 6370cwdc-isp-isp-ingress-ixjheqwi does not exist: Cert
creation failures - k8s2-cr-6370cwdc-q0ndkz9m629eictm-ca5d0f56ba7fe415
Error:googleapi: Error 400: The SSL certificate could not be parsed.,
sslCertificateCouldNotParseCert
For Google to accept your cert and key files, you need to make sure they have the proper format, per the next steps.
First, format them by creating a self-managed SSL certificate resource from your existing files using your GCP Cloud Shell:
gcloud compute ssl-certificates create CERTIFICATE_NAME \
--certificate=CERTIFICATE_FILE \
--private-key=PRIVATE_KEY_FILE \
--region=REGION \
--project=PROJECT_ID
Then you need to complete a few more steps to make sure you have all the parameters required in your .yaml file and that you have the proper services enabled to accept the information coming from it (you may already have completed them):
Enable Kubernetes Engine API by running the following command:
gcloud services enable container.googleapis.com \
--project=PROJECT_ID
Create a GKE cluster:
gcloud container clusters create CLUSTER_NAME \
--release-channel=rapid \
--enable-ip-alias \
--network=NETWORK_NAME \
--subnetwork=BACKEND_SUBNET_NAME \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--region=REGION --machine-type=MACHINE_TYPE \
--project=PROJECT_ID
The cluster is created in the BACKEND_SUBNET_NAME.
The cluster uses GKE version 1.18.9-gke.801 or later.
The cluster is created with the Cloud Platform scope.
The cluster is created with the desired service account you would like to use to run the application.
The cluster uses an n1-standard-4 machine type or better.
Enable IAP by doing the following steps:
Configure the OAuth consent screen.
Create OAuth credentials.
Convert the ID and Secret to base64 by running the following commands:
echo -n 'CLIENT_ID' | base64
echo -n 'CLIENT_SECRET' | base64
Reserve an internal static IP address for your load balancer:
gcloud compute addresses create STATIC_ADDRESS_NAME \
--region=REGION --subnet=BACKEND_SUBNET_NAME \
--project=PROJECT_ID
Get the static IP address by running the following command:
gcloud compute addresses describe STATIC_ADDRESS_NAME \
--region=REGION \
--project=PROJECT_ID
Create the values YAML file by copying gke_internal_ip_config_example.yaml and renaming it to PROJECT_ID_gke_config.yaml, then fill in:
clientIDEncoded: Base64 encoded CLIENT_ID from earlier step.
clientSecretEncoded: Base64 encoded CLIENT_SECRET from earlier step.
certificate.name: CERTIFICATE_NAME that you have created earlier.
initialEmail: The INITIAL_USER_EMAIL email of the initial user who will set up Custom Governance.
staticIpName: STATIC_ADDRESS_NAME that you created earlier.
Try your deployment again after completing the steps above.
You seem to be mixing a secret and a direct definition.
You first need to create the ingress-tls-credential-file secret, then reference it in your ingress definition, as in the example at https://kubernetes.io/fr/docs/concepts/services-networking/ingress/#tls
apiVersion: v1
data:
tls.crt: file(../../.secret/test.crt)
tls.key: file(../../.secret/test.key)
kind: Secret
metadata:
name: ingress-tls-credential-file
namespace: default
type: kubernetes.io/tls
Then clean your ingress
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "gce-internal"
kubernetes.io/ingress.regional-static-ip-name : "ingress-internal-static-ip"
kubernetes.io/ingress.allow-http: "false"
hosts:
- host: test-dev.test.com
paths:
- path: "/*"
servicename: test-frontend-service
serviceport: 80
- path: "/api/*"
servicename: test-backend-service
serviceport: 80
tls:
- hosts:
- test-dev.test.com
secretName: ingress-tls-credential-file
type: kubernetes.io/tls
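Note also that a Secret's data fields must hold base64 of the raw bytes. In the secret.yaml template from the question, toJson wraps the PEM in literal quotes and escapes its newlines before b64enc, which is one plausible way to end up with "The SSL certificate could not be parsed". A minimal sketch of the fix, inside the same range block:

```yaml
data:
  tls.crt: {{ .crt | b64enc | quote }}
  tls.key: {{ .key | b64enc | quote }}
```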
I have to create a number of users using Ansible. I pass the users as a list inside my play in the vars section:
vars:
users: ['user1', 'user2']
I then have to create a script that uses these users' IDs as an argument. The command inside the script is something like this:
blobfuse $1 --tmp-path=/mnt/resource/{{ item }}/ -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120 -o uid=$USER_ID -o allow_other --container-name={{ item }} --file-cache-timeout-in-seconds=120 --config-file=/root/connection-{{ item }}.cfg
Everything works fine, with the exception of uid=
I have tried the pipe lookup plugin, but this doesn't get me the correct UID:
{{ lookup('pipe', 'grep -w {{ item }} /etc/passwd | cut -d : -f3') }}
My end goal is to get the UID of each of the created users, and pass that to the blobfuse command above.
Q: "Get the UID of each created user."
A: Module getent serves precisely this purpose. For example,
- hosts: localhost
vars:
my_users: [root, admin]
tasks:
- getent:
database: passwd
- debug:
msg: |
{{ item }} uid: {{ getent_passwd[item].1 }}
{{ item }} gid: {{ getent_passwd[item].2 }}
{{ item }} home: {{ getent_passwd[item].4 }}
{{ item }} shell: {{ getent_passwd[item].5 }}
loop: "{{ my_users }}"
gives (abridged)
TASK [debug] ************************************************************
ok: [localhost] => (item=root) =>
msg: |-
root uid: 0
root gid: 0
root home: /root
root shell: /bin/bash
ok: [localhost] => (item=admin) =>
msg: |-
admin uid: 1002
admin gid: 1002
admin home: /home/admin
admin shell: /bin/bash
The getent module automatically created the dictionary getent_passwd, which can be used in a template. For example, the template below
shell> cat template.j2
{% for user in my_users %}
{{ user }} {{ getent_passwd[user].1 }}
{% endfor %}
- template:
src: template.j2
dest: my_users.txt
gives
shell> cat my_users.txt
root 0
admin 1002
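Fed back into the blobfuse script from the original question, the getent_passwd dictionary can supply the UID as the $USER_ID the script expects; a sketch (the per-user wrapper script name is hypothetical):

```yaml
- getent:
    database: passwd
- name: Run the mount script with each user's UID
  # /root/mount-<user>.sh is a hypothetical wrapper around the blobfuse command
  shell: "/root/mount-{{ item }}.sh"
  environment:
    USER_ID: "{{ getent_passwd[item].1 }}"  # field 1 of getent_passwd is the UID
  loop: "{{ users }}"
```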
I had the same question.
Unfortunately, the lookup plugin is only useful on the control node.
In other words, the users in question (with the same UIDs) would need to exist on both the control node and all the managed nodes, or the task will fail with return code 1.
For me, using the getent module is a decent solution.
Alternatively, everything is already there in the return value of the user module; I only need to register it to access the values.
The playbook below demonstrates how I compute the UID from the return value of the user module, both in an arbitrary subsequent task (debug) and in a Jinja template.
- name: Demonstrate How to Compute UID after User Creations
hosts: all
become: true
gather_facts: no
vars:
my_users:
- u1
- u2
- u3
tasks:
- name: Create users in loop
user:
name: "{{ item }}"
loop: "{{ my_users }}"
register: created_users
- name: Display created users
debug:
msg: "User: uid:{{ item.uid }} name:{{ item.name }} group:{{ item.group }} home:{{ item.home }} shell:{{ item.shell }}"
loop: "{{ created_users.results }}"
# Use loop_control-label to suppress item verbosity
loop_control:
label: "{{ item.name }}"
- name: Work with template
template:
src: myusers.j2
dest: /tmp/myusers.txt
And here is myusers.j2 template.
{{ ansible_managed | comment }}
{% for item in created_users.results %}
User: uid:{{ item.uid }} name:{{ item.name }} group:{{ item.group }} home:{{ item.home }} shell:{{ item.shell }}
{% endfor %}
The output is as follows:
$ ansible-playbook compute-uid.yml
PLAY [Demonstrate How to Compute UID after User Creations] **********************************************************************************************
TASK [Create users in loop] *****************************************************************************************************************************
changed: [ansible1] => (item=u1)
changed: [ansible1] => (item=u2)
changed: [ansible1] => (item=u3)
TASK [Display created users] ****************************************************************************************************************************
ok: [ansible1] => (item=u1) => {
"msg": "User: uid:1001 name:u1 group:1001 home:/home/u1 shell:/bin/bash"
}
ok: [ansible1] => (item=u2) => {
"msg": "User: uid:1002 name:u2 group:1002 home:/home/u2 shell:/bin/bash"
}
ok: [ansible1] => (item=u3) => {
"msg": "User: uid:1003 name:u3 group:1003 home:/home/u3 shell:/bin/bash"
}
TASK [Work with template] *******************************************************************************************************************************
changed: [ansible1]
PLAY RECAP **********************************************************************************************************************************************
ansible1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ ansible all -a "cat /tmp/myusers.txt"
ansible1 | CHANGED | rc=0 >>
#
# Ansible managed
#
User: uid:1001 name:u1 group:1001 home:/home/u1 shell:/bin/bash
User: uid:1002 name:u2 group:1002 home:/home/u2 shell:/bin/bash
User: uid:1003 name:u3 group:1003 home:/home/u3 shell:/bin/bash
What about using the id command and command substitution? You can then do something like:
blobfuse $1 --tmp-path=/mnt/resource/{{ item }}/ -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120 -o uid="$(id -u {{ item }})" -o allow_other --container-name={{ item }} --file-cache-timeout-in-seconds=120 --config-file=/root/connection-{{ item }}.cfg
If you are using the command module, you'll have to replace it with shell, since command substitution needs a shell.
Edit: If you are using templates and want to use the lookup plugin, which seems cleaner, you can do something like this (tested on a local Linux machine):
template.yaml
{% for item in users %}
{{ item }} {{ lookup('pipe', "id -u " + item) }}
{% endfor %}
ansible command
ansible -m template -i localhost, all -c local -a "src=template.yaml dest=result.txt" -e "{ users: [nobody,root]}"
result.txt
nobody 65534
root 0
In your case, the error was using {{ item }} inside the lookup: mustaches don't nest, so inside a {{ }} block you should use plain variable names and string concatenation.
I have four systems from which I need to extract facts and then use them as variables in a Jinja2 template.
In Ansible i have:
vars:
  office1:
    web01:
      myip: 10.10.10.10 # or fact
      peer: 10.10.10.20
    web02:
      myip: 10.10.10.20 # or fact
      peer: 10.10.10.10
  office2:
    web01:
      myip: 10.20.20.30 # or fact
      peer: 10.20.20.40
    web02:
      myip: 10.20.20.40 # or fact
      peer: 10.20.20.30
On the jinja 2 template I have:
# Config File:
host_name: {{ ansible_hostname }} // web01
host_ip: {{ ansible_eth0.ipv4.address }}
host_peer: {{ office1."{{ ansible_hostname }}".peer }}
However, I get an error that the Ansible variable office1.ansible_hostname.peer is not defined.
Any help with this would be greatly appreciated.
Expansion in Ansible is not recursive; you can't nest mustaches. Try the expansion below:
host_peer: {{ office1[ansible_hostname].peer }}
For example the play below
- hosts: test_01
gather_facts: yes
vars:
office1:
test_01:
myip: 10.20.20.30
peer: 10.20.20.40
tasks:
- template:
src: template.j2
dest: /scratch/test_01.cfg
with template.j2
# Config File:
host_name: {{ ansible_hostname }}
host_peer: {{ office1[ansible_hostname].peer }}
gives
# cat /scratch/test_01.cfg
# Config File:
host_name: test_01
host_peer: 10.20.20.40
To answer the question
Q: "Create Variable From Ansible Facts"
A: An option would be to use the vars lookup. For example, the play below
vars:
var1: var1
var2: var2
var3: var3
tasks:
- debug:
msg: "{{ lookup('vars', 'var' + item) }}"
with_sequence: start=1 end=3
gives (abridged)
"msg": "var1"
"msg": "var2"
"msg": "var3"
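The two techniques combine naturally: the office dict itself can be chosen by name and then indexed by hostname. A sketch, assuming a hypothetical variable office_name holding e.g. 'office1':

```yaml
# office_name is an assumed variable naming which office dict to use
host_peer: {{ lookup('vars', office_name)[ansible_hostname].peer }}
```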