Where to put common variables for groups in Ansible - ssh

We have some scripts to help us set up VPCs with up to 6 VMs in AWS. Now I want to log in to each of these machines. For security reasons we can only access one of them via SSH and then tunnel/proxy through that to the other machines. So in our inventory we have the IP address of the SSH host (we call it Redcarpet) and some other hosts like Elasticsearch, Mongodb and Worker:
#inventory/hosts
[redcarpet]
57.44.113.25
[services]
10.0.1.2
[worker]
10.0.5.77
10.0.1.200
[elasticsearch]
10.0.5.30
[mongodb]
10.0.1.5
Now I need to tell each of the groups EXCEPT redcarpet to use certain SSH settings. If these settings applied to all groups, I would put them in inventory/group_vars/all.yml, but now I will have to put them in:
inventory/group_vars/services.yml
inventory/group_vars/worker.yml
inventory/group_vars/elasticsearch.yml
inventory/group_vars/mongodb.yml
This leads to duplication. Therefore I would like to use an include or include_vars to pull one or two variables from a common file (e.g. inventory/common.yml). However, when I try to do this in any of the group_vars files above, the variables are not picked up. What is the best practice for variables that are common to multiple groups?

If you want to go with the group_vars approach, I would suggest you add another group and add the dependent groups as children of that group.
#inventory/hosts
[redcarpet]
57.44.113.25
[services]
10.0.1.2
[worker]
10.0.5.77
10.0.1.200
[elasticsearch]
10.0.5.30
[mongodb]
10.0.1.5
[redcarpet_deps:children]
mongodb
elasticsearch
worker
services
And now you can have a group_vars file called redcarpet_deps.yml, and the hosts in those child groups should pick up the vars from there.
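For the SSH settings themselves, that shared file is a natural place for the proxy configuration through the redcarpet host. A minimal sketch of inventory/group_vars/redcarpet_deps.yml, assuming Ansible 2.x (for ansible_ssh_common_args) and a placeholder remote user and key; adjust those to your setup:

# inventory/group_vars/redcarpet_deps.yml (sketch; user and key are placeholders)
ansible_ssh_user: ec2-user
ansible_ssh_private_key_file: ~/.ssh/vpc.pem
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q ec2-user@57.44.113.25"'

With this in place, connections to the services, worker, elasticsearch and mongodb hosts are tunnelled through 57.44.113.25, and only the redcarpet group needs its own direct-connection settings.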

Related

Fluentbit Cloudwatch templating with EKS and Fargate

I have an EKS cluster running purely on Fargate and I'm trying to set up logging to CloudWatch.
I have a lot of [OUTPUT] sections that could be unified using some variables. I'd like to unify the logs of each deployment into a single log_stream and separate the log_streams by environment (namespace). With a couple of variables I would only need to write a single [OUTPUT] section.
From what I understand, the new Fluent Bit plugin cloudwatch_logs doesn't support templating, but the old plugin cloudwatch does.
I've tried to set up a section like the one in the documentation example:
[OUTPUT]
    Name cloudwatch
    Match *container_name*
    region us-east-1
    log_group_name /eks/$(kubernetes['namespace_name'])
    log_stream_name test_stream
    auto_create_group on
This generates a log_group called fluentbit-default, which according to the README.md is the fallback name used when the variables cannot be parsed.
The old cloudwatch plugin does seem to be supported (though not mentioned in the AWS documentation), because if I replace the variable $(kubernetes['namespace_name']) with any plain string it works perfectly.
Fluent Bit on Fargate manages the INPUT section automatically, so I don't really know which variables are sent to the OUTPUT section; I suppose the kubernetes variable is not there, or it has a different name or a different array structure.
So my questions are:
Is there a way to get the list of the variables (or input) that Fargate + Fluentbit are generating?
Can I solve this in a different way? (I don't want to write more than 30 different OUTPUT sections, one per service/log_stream_name; it would also be difficult to maintain.)
Thanks!
After a few days of testing, I realised that you need to enable the kubernetes filter so that the kubernetes variables reach the cloudwatch plugin.
This is the result: I can now generate the log_group based on the environment label and the log_stream based on the namespace and container names.
filters.conf: |
    [FILTER]
        Name kubernetes
        Match *
        Merge_Log Off
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match *
        region eu-west-2
        log_group_name /aws/eks/cluster/$(kubernetes['labels']['app.environment'])
        log_stream_name $(kubernetes['namespace_name'])-$(kubernetes['container_name'])
        default_log_group_name /aws/eks/cluster/others
        auto_create_group true
        log_key log
Please note that app.environment is not a "standard" label; I've added it to all my deployments. The default_log_group_name is necessary in case that value is not present.
Please note also that if you use log_retention_days or new_log_group_tags, the setup stops working. To be honest, log_retention_days never worked for me even with the new cloudwatch_logs plugin.
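For anyone wiring this up on Fargate: these snippets are keys of the logging ConfigMap that the built-in log router reads. If your setup follows the standard EKS Fargate logging convention, that ConfigMap is aws-logging in the aws-observability namespace; a trimmed sketch of the wrapper (the full filter and output bodies are the ones shown above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging            # read by the Fargate log router
  namespace: aws-observability
data:
  filters.conf: |
    [FILTER]
        Name kubernetes
        Match *
  output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match *
        region eu-west-2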

Parse a group of cisco switches, compile a list of IPs and interfaces, and then point a netmiko script to that new list. Possible?

I think my choice of words is correct. I want to take a group of switches and compile a list of IP addresses and specific interfaces for netmiko to push commands to. For instance, scan all Cisco switches and put together a list of all interfaces that are in VLAN X and not being used. Can someone point me in the right direction on how to do this?
Sounds like you need to figure out the different steps to work out your solution.
Maybe something like this:
Connect to switch
run show commands
config interface to vlan xx
I don't see any code or anything you have attempted so far, but here is a simple flow for looping through a list of IP addresses.
# Python 3.7
from netmiko import ConnectHandler

username = "user"
password = "password"
IPlist = ["192.168.1.10", "192.168.1.11"]  # your compiled list of switch IPs

for ip in IPlist:
    # netmiko device profile
    cisco = {
        "host": ip,
        "username": username,
        "password": password,
        "device_type": "cisco_ios",
    }
    with ConnectHandler(**cisco) as ssh_conn:
        print(ssh_conn.find_prompt())
        # do stuff here.
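To get closer to the original goal (interfaces in VLAN X that are not in use), one option is to run show interface status inside that loop and filter on the VLAN and the notconnect state. A rough sketch only, with the VLAN number as a placeholder and very simplistic column handling (port descriptions shift the columns, so treat this as a starting point):

# Sketch: collect unused ports in a given VLAN across all switches
target_vlan = "10"   # placeholder VLAN number
unused_ports = []    # collected as (switch_ip, interface) pairs

for ip in IPlist:
    cisco = {
        "host": ip,
        "username": username,
        "password": password,
        "device_type": "cisco_ios",
    }
    with ConnectHandler(**cisco) as ssh_conn:
        output = ssh_conn.send_command("show interface status")
        for line in output.splitlines():
            fields = line.split()
            if "notconnect" in fields and target_vlan in fields:
                unused_ports.append((ip, fields[0]))

print(unused_ports)

From there, unused_ports can be fed into whatever configuration pass you want netmiko to run next.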

SLURM: Should there be a different gres.conf for each node?

When configuring a Slurm cluster you need to have a copy of the configuration file slurm.conf on all nodes, and these copies are identical. If you need to use GPUs in your cluster, there is an additional configuration file that must also be present on all nodes: gres.conf. My question is: will this file be different on each node depending on that node's hardware, or will it be identical on all nodes (like slurm.conf)? Assume that the nodes have different GPU configurations and are not identical.
Since Slurm version 14.3.0, gres.conf accepts a NodeName parameter so that the same file can be set up on all nodes.
From the NEWS file:
gres.conf - Add "NodeName" specification so that a single gres.conf file
can be used for a heterogeneous cluster.
It will thus look something like this:
NodeName=node001 Name=gpu File=/dev/nvidia0
NodeName=node002 Name=gpu File=/dev/nvidia[0-1]
...
Before that, the gres.conf file had to be distinct for each node.
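The matching node definitions in slurm.conf (still identical everywhere) then advertise those resources. A minimal sketch, using the GPU counts from the gres.conf example above and placeholder CPU/memory values:

# slurm.conf (same file on all nodes); Gres counts must match gres.conf
GresTypes=gpu
NodeName=node001 Gres=gpu:1 CPUs=16 RealMemory=64000 State=UNKNOWN
NodeName=node002 Gres=gpu:2 CPUs=16 RealMemory=64000 State=UNKNOWN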

Adding a TFS server group to access levels via command line

I am creating a group of users within TFS 2013 and I want to add them to a non-default access level (e.g. the Full access level), but I noticed I am only able to do this through the web interface by adding a TFS group under that level. I am wondering if there is a way to do this via the developer command line, as everything I am doing is being done in a batch script.
Any input would be appreciated. Thanks!
Create 3 TFS server groups and add them to the different access levels (e.g. TFS_ACCESS_LEVEL_(NONE|STANDARD|FULL)). Now use the TFSSecurity command-line tool to add groups to these existing, mapped groups (tfssecurity /g+ TFS_ACCESS_LEVEL_NONE GroupYouWantToHaveThisAccessLevel). There is no other way to directly add people to the access levels, except perhaps through the Object Model using C#.
For the record, tfssecurity may require the URI, which can be obtained via the API. This is easy to do in PowerShell; here is how to create a TFS group:
# get-tfs is a custom helper that returns the collection object for $collection
[psobject] $tfs = get-tfs -serverName $collection
# look up the project URI, then create the group in that project
$projectUri = ($tfs.CSS.ListAllProjects() | where { $_.Name -eq $project }).Uri
& $TFSSecurity /gc $projectUri $groupName $groupDescription /collection:$collection
Full script at TfsSecurity wrapper.
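In a plain batch script, the membership call from the first answer might then look like this (the collection URL and group names are placeholders):

REM Add an existing TFS group to the group mapped to the Full access level
tfssecurity /g+ TFS_ACCESS_LEVEL_FULL "[MyProject]\Contributors" /collection:http://tfsserver:8080/tfs/DefaultCollection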

Multiple tasks with same sudo_user in Ansible role?

I have a bunch of tasks in a role that repeatedly use:
sudo: yes
sudo_user: my_user
Isn't there a way that I can set these attributes for multiple tasks, so it will be more DRY?
I know I can change the user in the playbook, but other tasks need user root, so I can't change that.
In your inventory file you can have multiple groups, e.g. a root_access group or a deploy_user group. So you define your hosts, say, like this:
[web]
webby-1 ansible_ssh_host=192.168.1.1
webby-2 ansible_ssh_host=ec2-192-168-1-1.compute-1.amazonaws.com
[foo:children]
web
[foo:vars]
ansible_ssh_user=foo
ansible_ssh_private_key_file=~/.ssh/foo.pem
[bar:children]
web
[bar:vars]
ansible_ssh_user=bar
ansible_ssh_private_key_file=~/.ssh/bar.pem
and then you can target them based on the inventory groups.
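A sketch of how that plays out, sticking with the pre-2.0 sudo syntax used in the question (play names and tasks are placeholders; setting sudo/sudo_user at play level is also what removes the per-task repetition):

# site.yml (sketch)
- hosts: foo                 # connects as ansible_ssh_user=foo with foo.pem
  sudo: yes
  sudo_user: my_user         # applies to every task in this play
  tasks:
    - name: runs as my_user
      command: whoami

- hosts: bar                 # same web hosts, reached as the bar user
  sudo: yes                  # no sudo_user, so these tasks run as root
  tasks:
    - name: runs as root
      command: whoami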