Ansible Cisco IOS: shut down interfaces that are not connected (automation)

So here is my current playbook
---
- hosts: SWITCHES
  gather_facts: no
  tasks:
    - name: Show Interface Status
      ios_command:
        commands:
          - show int status
      register: out

    - debug: var=out.stdout_lines
I basically want to take this script and then disable all the ports in the "notconnect" state, meaning all the ports with nothing connected to them. Is there a way I can add a "when" statement, so that when "show interface status" comes back, it looks at all the ports that are not connected and disables them by applying the "shutdown" command to each interface? I think a "when" statement is what I need, but I'm not sure where to get started with it. Or is there a better way to accomplish this?
Is there a python script that could accomplish this as well?

You should use ios_facts to retrieve a dictionary containing all the interfaces, then iterate over that dictionary to shut down the interfaces that are not connected.
If you run your playbook with the -vvv switch, you will see all the variables collected by ios_facts.
I believe in Ansible 2.9 and later, Ansible gathers the actual network device facts if you specify "gather_facts: yes". With Ansible 2.8 or older, you need to use the "ios_facts" module.
---
- hosts: SWITCHES
  gather_facts: no
  tasks:
    - name: Gather IOS facts
      ios_facts:

    - name: Shutdown notconnect interfaces
      ios_config:
        lines: shutdown
        parents: "interface {{ item.key }}"
      with_dict: "{{ ansible_net_interfaces }}"
      when: item.value.operstatus == "down"
Here is an example from part of a collected "ansible_net_interfaces" variable:
{
    "ansible_net_interfaces": {
        "GigabitEthernet0/0": {
            "bandwidth": 1000000,
            "description": null,
            "duplex": "Full",
            "ipv4": [],
            "lineprotocol": "down",
            "macaddress": "10b3.d507.5880",
            "mediatype": "RJ45",
            "mtu": 1500,
            "operstatus": "administratively down",
            "type": "RP management port"
        },
        "GigabitEthernet1/0/1": {
            "bandwidth": 1000000,
            "description": null,
            "duplex": null,
            "ipv4": [],
            "lineprotocol": null,
            "macaddress": "10b3.d507.5881",
            "mediatype": "10/100/1000BaseTX",
            "mtu": 1500,
            "operstatus": "down",
            "type": "Gigabit Ethernet"
        },
        "GigabitEthernet1/0/10": {
            "bandwidth": 1000000,
            "description": "Telefon/PC",
            "duplex": null,
            "ipv4": [],
            "lineprotocol": null,
            "macaddress": null,
            "mediatype": "10/100/1000BaseTX",
            "mtu": 1500,
            "operstatus": "down",
            "type": "Gigabit Ethernet"
        },
        "GigabitEthernet1/0/11": {
            "bandwidth": 1000000,
            "description": null,
            "duplex": null,
            "ipv4": [],
            "lineprotocol": null,
            "macaddress": "10b3.d507.588b",
            "mediatype": "10/100/1000BaseTX",
            "mtu": 1500,
            "operstatus": "down",
            "type": "Gigabit Ethernet"
        }
    }
}
The value of the "ansible_net_interfaces" variable is a dictionary. Each key in that dictionary is an interface name, and each value is another dictionary of key/value pairs. The "operstatus" key will have the value "down" when the interface is not connected.
Using "with_dict" in the "ios_config" task loops through all top-level key/value pairs in the dictionary, and you can use the variables in each key/value pair by referring to "{{ item.key }}" or "{{ item.value }}".
Using "when" in the "ios_config" task, you set a condition for when the task is to be executed. In this case we only want it to run when "operstatus" has a value of "down".
The "parents" parameter in the "ios_config" task specifies a new section where the configuration is to be entered, in this case the section is the interface configuration mode. The interface name is returned for each interface in the "ansible_net_interfaces" using the "{{ item.key }}" variable.
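As for the follow-up question about a Python script: here is a minimal standard-library sketch (not tested against real hardware, and the sample output is invented for illustration) that parses "show interface status" text and collects the notconnect ports. On a real switch you would retrieve the text over SSH, for example with a library such as netmiko, and then push "interface <name>" / "shutdown" for each hit.

```python
# Sketch: parse "show interface status" output and collect ports to shut down.
# SAMPLE_OUTPUT is invented for illustration; capture real output over SSH.
SAMPLE_OUTPUT = """\
Port      Name               Status       Vlan       Duplex  Speed Type
Gi1/0/1                      connected    10           full   1000 10/100/1000BaseTX
Gi1/0/2                      notconnect   10           auto   auto 10/100/1000BaseTX
Gi1/0/3   Telefon/PC         notconnect   20           auto   auto 10/100/1000BaseTX
"""

def notconnect_ports(show_int_status: str) -> list[str]:
    """Return interface names whose Status column is 'notconnect'."""
    ports = []
    for line in show_int_status.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if "notconnect" in fields:
            ports.append(fields[0])
    return ports

for port in notconnect_ports(SAMPLE_OUTPUT):
    # On a real device, send: "interface <port>" followed by "shutdown".
    print(port)  # prints Gi1/0/2 then Gi1/0/3
```

Splitting on whitespace is a simplification: a Name column containing spaces would shift the fields, so a production script should parse fixed column offsets instead.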
Refer to Ansible's documentation for these modules to get a better understanding of them:
https://docs.ansible.com/ansible/latest/collections/cisco/ios/ios_facts_module.html
https://docs.ansible.com/ansible/latest/collections/cisco/ios/ios_config_module.html

Accessing values of nested dictionaries

I'm working on some separate tasks for automating VM deployments through Tower.
Basically I just need a quick run down on how to gather/use the various properties of a registered return from a task.
I've got this.
tasks:
  - name: Gather disk info from virtual machine using name
    vmware_guest_disk_info:
      hostname: "{{ vcenter }}"
      username: "{{ username }}"
      password: "{{ esxipassword }}"
      datacenter: "{{ datacenter }}"
      name: "{{ fqdn }}"
    register: disk_info

  - debug:
      var: disk_info
This spits out the information I want. But, for the life of me, I can't figure out how to select a single property. Can someone tell me how to do that (particularly for the backing_filename property)?
I mean in powershell it would just be disk_info.backing_filename or something like backing = $disk_info | select -expandproperty backing_filename. Just looking for something like the equivalent of that.
Snip of output
{
    "disk_info": {
        "guest_disk_info": {
            "0": {
                "key": 2000,
                "label": "Hard disk 1",
                "summary": "104,857,600 KB",
                "backing_filename": "[datastorex] vmname/vmname.vmdk",
To be fair, this one is not as simple as it looks, because your dictionary has a key that is the string '0'. If you wrote disk_info.guest_disk_info.0.backing_filename, you would be trying to access element 0 of a list, not the dictionary key '0'.
Here would be an example playbook solving your issue:
- hosts: all
  gather_facts: yes
  tasks:
    - debug:
        var: disk_info.guest_disk_info['0'].backing_filename
      vars:
        disk_info:
          guest_disk_info:
            '0':
              key: 2000
              label: Hard disk 1
              summary: 104,857,600 KB
              backing_filename: "[datastorex] vmname/vmname.vmdk"
That gives:
{
    "disk_info.guest_disk_info['0'].backing_filename": "[datastorex] vmname/vmname.vmdk"
}
The following works as well, but notice that its YAML represents a totally different structure, one containing a list rather than only nested dictionaries:
- hosts: all
  gather_facts: yes
  tasks:
    - debug:
        var: disk_info.guest_disk_info.0.backing_filename
      vars:
        disk_info:
          guest_disk_info:
            - key: 2000
              label: Hard disk 1
              summary: 104,857,600 KB
              backing_filename: "[datastorex] vmname/vmname.vmdk"
To give you an equivalent in JSON, since the YAML constructions seem to be the source of the confusion: your output is
{
    "disk_info": {
        "guest_disk_info": {
            "0": {
                "backing_filename": "[datastorex] vmname/vmname.vmdk"
            }
        }
    }
}
That would be accessible via disk_info.guest_disk_info['0'].backing_filename.
While
{
    "disk_info": {
        "guest_disk_info": [
            {
                "backing_filename": "[datastorex] vmname/vmname.vmdk"
            }
        ]
    }
}
Would be accessible via disk_info.guest_disk_info.0.backing_filename
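The same distinction can be reproduced in plain Python (data structures copied from the JSON shapes above), which may make the string-key-versus-list-index difference easier to see:

```python
# A dictionary keyed by the *string* "0" -- the shape vmware_guest_disk_info returns.
as_dict = {
    "disk_info": {
        "guest_disk_info": {
            "0": {"backing_filename": "[datastorex] vmname/vmname.vmdk"}
        }
    }
}
# Access requires the string key, like disk_info.guest_disk_info['0'] in Ansible.
print(as_dict["disk_info"]["guest_disk_info"]["0"]["backing_filename"])

# A list indexed by the *integer* 0 -- what dotted .0. access implies.
as_list = {
    "disk_info": {
        "guest_disk_info": [
            {"backing_filename": "[datastorex] vmname/vmname.vmdk"}
        ]
    }
}
print(as_list["disk_info"]["guest_disk_info"][0]["backing_filename"])

# Mixing them up fails: as_dict["disk_info"]["guest_disk_info"][0] raises KeyError.
```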

Web Activity endlessly running in Azure Data Factory

Currently it seems that web activity is broken.
When using a simple pipeline:
{
    "name": "pipeline1",
    "properties": {
        "activities": [
            {
                "name": "Webactivity",
                "type": "WebActivity",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "url": "https://www.microsoft.com/",
                    "connectVia": {
                        "referenceName": "AutoResolveIntegrationRuntime",
                        "type": "IntegrationRuntimeReference"
                    },
                    "method": "GET",
                    "body": ""
                }
            }
        ],
        "annotations": []
    }
}
When debugging, it never finishes; it stays "In progress" for several minutes.
I tried a Webhook activity instead and it works.
Is there something else I could try?
A quick note on the "never finishes" issue: one of my pet peeves with Data Factory is that the default timeout for all activities is 7 DAYS. While I've had a few activities that needed to run for 7 hours, a WEEK is a ridiculous default timeout value. One of the first things I do in any production scenario is address the timeout values of all the activities.
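To illustrate the point above, the activity's policy block can be given a much tighter timeout. The format is d.hh:mm:ss, and the ten-minute value below is an arbitrary example, not a recommendation for any particular workload:

```json
"policy": {
    "timeout": "0.00:10:00",
    "retry": 0,
    "retryIntervalInSeconds": 30,
    "secureOutput": false,
    "secureInput": false
}
```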
As to the Web activity question: I set up a quick example in my test bed and it returned just fine.
Looking at the generated code, the only real difference I see is the absence of the "connectVia" property that was in your example.
OK, I've found it.
The default AutoResolveIntegrationRuntime only had a managed private network, which I couldn't change. So I created a new Integration Runtime with the public network setting.
This is a little bit strange, as I started today with a brand new Azure Data Factory.
I wonder why I cannot change the default Integration Runtime to disable the virtual network.

Ansible debug module print undesired details [duplicate]

I'm trying to get a debug message from the results of a previous loop, but I can't get just the bit that I want from it. Ansible keeps giving me the entire result instead of just the line I'm asking for.
Here are the two tasks I'm using:
- name: Make the curl call
  uri:
    url: https://www.theurl.com
    method: GET
    return_content: yes
    headers:
      XXXX: "xxxxxxxxxx"
  loop: "{{ simplelist }}"
  register: this

- name: just testing
  debug:
    msg: "I just want to see: {{ item.json.section.test }}"
  loop: "{{ this.results }}"
As you can see from the msg, I'm just trying to output that specific value, but what Ansible gives me is:
{
    "ansible_loop_var": "item",
    "_ansible_no_log": false,
    "item": {
        "content_length": "",
        "cookies": {},
        "via": "1.1 varnish",
        "connection": "",
        "vary": "Accept-Encoding",
        "x_timer": "",
        "access_control_allow_methods": "OPTIONS, HEAD, GET, PATCH, DELETE",
        "x_cache_hits": "0",
        "failed": false,
        "access_control_allow_credentials": "true",
        "content": blah blah blah,
        "json": { the json },
        "changed": false
    },
    "msg": "I just want to see: False",
So it is setting the message, as you can see from the last line, and it is getting the correct value, but it's not outputting only that message. How can I get just the message to be output? I know the value is reachable, because the msg shows False, and a fail/when using that value worked when I tested it.
What you are seeing looks like the verbose output of ansible-playbook running with the -v[vv] option. You can drop that option to decrease verbosity.
Meanwhile, even in non-verbose mode, and whatever module you are using, when going over a loop, Ansible outputs a label for each iteration, roughly looking like the following (watch for the (item=...) part of the output).
TASK [test] *******************************************************************************
ok: [localhost] => (item={'a': 1, 'b': 2}) => {
    "msg": "This is the value of a: 1"
}
ok: [localhost] => (item={'a': 3, 'b': 4}) => {
    "msg": "This is the value of a: 3"
}
By default, the label is the full item you are currently looping over, which can be a little too verbose for complex data structures. You can change this label with the loop_control parameter. If you really want an empty label, you can use the following example, but you will still get ok: [server1] => (item=) => prepended to each iteration's output.
- name: just testing
  debug:
    msg: "I just want to see: {{ item.json.section.test }}"
  loop: "{{ this.results }}"
  loop_control:
    label: ""
For more info see limiting loop output with label

Ansible if nested value doesn't exist in nested array

I'd like to make my Ansible EIP creation idempotent. In order to do that, I only want the task to run when no address has a "Name" tag with the value "tag_1".
However I'm not sure how I could add this as a 'when' at the end of a task.
"eip_facts.addresses": [
    {
        "allocation_id": "eipalloc-blablah1",
        "domain": "vpc",
        "public_ip": "11.11.11.11",
        "tags": {
            "Name": "tag_1"
        }
    },
    {
        "allocation_id": "eipalloc-blablah2",
        "domain": "vpc",
        "public_ip": "22.22.22.22",
        "tags": {
            "Name": "tag_2"
        }
    },
    {
        "allocation_id": "eipalloc-blablah3",
        "domain": "vpc",
        "public_ip": "33.33.33.33",
        "tags": {
            "Name": "tag_3"
        }
    }
]
(Tags are added later) I'm looking for something like:
- name: create elastic ip
  ec2_eip:
    region: eu-west-1
    in_vpc: yes
  when: eip_facts.addresses[].tags.Name = "tag_1" is not defined
What is the correct method of achieving this? Bear in mind the value must not exist anywhere in the entire array, not just in a single element.
Ok, I found a semi-decent solution
- name: Get list of EIP Name Tags
  set_fact:
    eip_facts_Name_tag: "{{ eip_facts.addresses | map(attribute='tags.Name') | list }}"
which extracts the Name tags and puts them into a list:
ok: [localhost] => {
    "msg": [
        "tag_1",
        "tag_2",
        "tag_3"
    ]
}
and then...
- debug:
    msg: "Hello"
  when: '"tag_1" in "{{ eip_facts_Name_tag }}"'
This will work. Beware, though: because the list is templated into a string, this does a substring search rather than an exact match. So if you searched for just 'tag', that would count as a hit too.
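A small Python illustration of that caveat (the data is copied from the example above): templating the list into a quoted string makes the comparison a substring search, while testing membership against the list itself (as a condition like when: '"tag_1" in eip_facts_Name_tag' would do) only matches whole elements.

```python
tags = ["tag_1", "tag_2", "tag_3"]

# Substring search against the rendered string: "tag" matches even though
# no element is exactly "tag".
print("tag" in str(tags))   # True

# Membership test against the list itself: only whole elements match.
print("tag" in tags)        # False
print("tag_1" in tags)      # True
```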

How to parameterize ports in OpenShift JSON Project Template

I'm trying to create a custom project template in OpenShift Origin. The Service configuration specifically, looks like below:
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "${NAME}",
        "annotations": {
            "description": "Exposes and load balances the node.js application pods"
        }
    },
    "spec": {
        "ports": [
            {
                "name": "web",
                "port": "${APPLICATION_PORT}",
                "targetPort": "${APPLICATION_PORT}",
                "protocol": "TCP"
            }
        ],
        "selector": {
            "name": "${NAME}"
        }
    }
},
where, APPLICATION_PORT is supplied as a user parameter:
"parameters": [
    {
        "name": "APPLICATION_PORT",
        "displayName": "Application Port",
        "description": "The exposed port that will route to the node.js application",
        "value": "8000"
    },
When I try to use this template to create a project, I get the following error:
spec.ports[0].targetPort: Invalid value: "8000": must be an IANA_SVC_NAME (at most 15 characters, matching regex [a-z0-9]([a-z0-9-]*[a-z0-9])*...
I get a similar error in my DeploymentConfig as well, for the http ports in the liveness and readiness probes:
"readinessProbe": {
    "timeoutSeconds": 3,
    "initialDelaySeconds": 3,
    "httpGet": {
        "path": "/Info",
        "port": "${APPLICATION_ADMIN_PORT}"
    }
},
"livenessProbe": {
    "timeoutSeconds": 3,
    "initialDelaySeconds": 30,
    "httpGet": {
        "path": "/Info",
        "port": "${APPLICATION_ADMIN_PORT}"
    }
},
where, APPLICATION_ADMIN_PORT, again, is user-supplied.
Error:
spec.template.spec.containers[0].livenessProbe.httpGet.port: Invalid value: "8001": must be an IANA_SVC_NAME...
spec.template.spec.containers[0].readinessProbe.httpGet.port: Invalid value: "8001": must be an IANA_SVC_NAME...
I've been following https://blog.openshift.com/part-2-creating-a-template-a-technical-walkthrough/ to understand templates, and it, unfortunately, does not have any examples of ports being parameterized anywhere.
It almost seems as if strings are not allowed as the values of these ports. Is that the case? What's the right way to parameterize these values? Should I switch to YAML?
Versions:
OpenShift Master: v1.1.6-3-g9c5694f
Kubernetes Master: v1.2.0-36-g4a3f9c5
Edit 1: I tried the same configuration in YAML format, and got the same error. So, JSON vs YAML is not the issue.
Unfortunately it is not currently possible to parameterize non-string field values: https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
" Parameters can be referenced by placing values in the form "${PARAMETER_NAME}" in place of any string field in the template."
Templates are in the process of being upstreamed to Kubernetes and this limitation is being addressed there:
https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/templates.md
The proposal is being implemented in PRs 25622 and 25293 in the kubernetes repo.
edit:
Templates now support non-string parameters as documented here: https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
I don't know if this option was available in 2016 when this post was added, but now you can use ${{PARAMETER_NAME}} to parameterize non-string field values.
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: ${NAME}-port
      port: ${{PORT_PARAMETER}}
      protocol: TCP
      targetPort: ${{PORT_PARAMETER}}
  sessionAffinity: None
This may be a bad practice, but I'm using sed to substitute int parameters:
cat template.yaml | sed -e 's/PORT/8080/g' > proxy-template-subst.yaml
Template:
apiVersion: template.openshift.io/v1
kind: Template
objects:
  - apiVersion: v1
    kind: Service
    metadata:
      name: ${NAME}
      namespace: ${NAMESPACE}
    spec:
      externalTrafficPolicy: Cluster
      ports:
        - name: ${NAME}-port
          port: PORT
          protocol: TCP
          targetPort: PORT
      sessionAffinity: None
      type: NodePort
    status:
      loadBalancer: {}
parameters:
  - description: Desired service name
    name: NAME
    required: true
    value: need_real_value_here
  - description: IP address
    name: IP
    required: true
    value: need_real_value_here
  - description: namespace where to deploy
    name: NAMESPACE
    required: true
    value: need_real_value_here