I am parsing a config file, and on some lines the output has '42D' added to the beginning of the line.
Code example:
import re

with open('config.txt') as file:
    for line in file:
        match = re.search(r'port default vlan \d{1,4}.*', line)
        if match:
            print(match.group())
File example:
'port default vlan 111'
Output example:
'42D port default vlan 111'
I have an ansible playbook, which has a variable defined in it like this:
- hosts: dev-web
  become: yes
  vars:
    - web_dir: /opt/abc/example.com/xyz
I want the string inside the variable ("/opt/abc/example.com/xyz") to be built dynamically from the host_vars file in host_vars/dev-web.
The host_vars file looks like this:
vhosts:
  dev1:
    name: 'example.com'
  dev2:
    name: 'xyz.com'
Expected outcome for dev1:
vars:
  web_dir: /opt/abc/"{{ vhosts.dev1.name }}"/xyz
which should resolve to
web_dir: /opt/abc/example.com/xyz
And for dev2:
vars:
  web_dir: /opt/abc/"{{ vhosts.dev2.name }}"/xyz
which should resolve to
web_dir: /opt/abc/xyz.com/xyz
Any help would be appreciated.
You have to approach the problem from a different perspective:
In the playbook, the variable should be identical for all hosts (e.g. vhost.name); it will simply take a different value on every host.
In the host_vars/ directory, you should have a different file for each host.
File host_vars/dev1:
vhost:
  name: dev1
File host_vars/dev2:
vhost:
  name: dev2
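With those per-host files in place, the playbook can use one and the same expression for every host; a minimal sketch along the lines of the question (with the files above it resolves to /opt/abc/dev1/xyz and /opt/abc/dev2/xyz, so put the real site names into host_vars as needed):
- hosts: dev-web
  become: yes
  vars:
    web_dir: "/opt/abc/{{ vhost.name }}/xyz"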
On another note, if possible, I'd rather reuse the real hostname via an automatically generated variable such as ansible_host or inventory_hostname.
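For example (a hedged variant, assuming the directory on disk really is named after the host):
web_dir: "/opt/abc/{{ inventory_hostname }}/xyz"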
I want to configure OpenShift authentication through a request header. I have tried modifying the master-config.yaml file as described in the Request Header documentation, but it gives certificate errors, so I need help on how to get past the errors or how to obtain the certificates that OpenShift expects. I updated only the following stanza.
identityProviders:
- challenge: true
  login: true
  mappingMethod: claim
  name: my_request_header_provider
  provider:
    apiVersion: v1
    kind: RequestHeaderIdentityProvider
    challengeURL: https://host:port/api/user/oauth/authorize?${query}
    loginURL: https://host:port/api/user/oauth/authorize?${query}
    headers:
    - x-auth-token
I used the following command to restart OpenShift:
openshift start master --config=/etc/origin/master/reqheadauthconfig/master-config.yaml
I am getting the following errors:
Warning: oauthConfig.identityProvider[0].provider.clientCA: Invalid value: "": if no clientCA is set, no request verification is done, and any request directly against the OAuth server can impersonate any identity from this provider, master start will continue.
Invalid MasterConfig /etc/origin/master/reqheadauthconfig/master-config.yaml
etcdClientInfo.urls: Required value
kubeletClientInfo.port: Required value
kubernetesMasterConfig.proxyClientInfo.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.proxy-client.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/master.proxy-client.crt: no such file or directory
kubernetesMasterConfig.proxyClientInfo.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.proxy-client.key": could not read file: stat /etc/origin/master/reqheadauthconfig/master.proxy-client.key: no such file or directory
masterClients.openShiftLoopbackKubeConfig: Invalid value: "/etc/origin/master/reqheadauthconfig/openshift-master.kubeconfig": could not read file: stat /etc/origin/master/reqheadauthconfig/openshift-master.kubeconfig: no such file or directory
oauthConfig.masterCA: Invalid value: "/etc/origin/master/reqheadauthconfig/ca.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/ca.crt: no such file or directory
serviceAccountConfig.privateKeyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/serviceaccounts.private.key": could not read file: stat /etc/origin/master/reqheadauthconfig/serviceaccounts.private.key: no such file or directory
serviceAccountConfig.publicKeyFiles[0]: Invalid value: "/etc/origin/master/reqheadauthconfig/serviceaccounts.public.key": could not read file: stat /etc/origin/master/reqheadauthconfig/serviceaccounts.public.key: no such file or directory
serviceAccountConfig.masterCA: Invalid value: "/etc/origin/master/reqheadauthconfig/ca-bundle.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/ca-bundle.crt: no such file or directory
servingInfo.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.server.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/master.server.crt: no such file or directory
servingInfo.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.server.key": could not read file: stat /etc/origin/master/reqheadauthconfig/master.server.key: no such file or directory
servingInfo.clientCA: Invalid value: "/etc/origin/master/reqheadauthconfig/ca.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/ca.crt: no such file or directory
controllerConfig.serviceServingCert.signer.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/service-signer.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/service-signer.crt: no such file or directory
controllerConfig.serviceServingCert.signer.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/service-signer.key": could not read file: stat /etc/origin/master/reqheadauthconfig/service-signer.key: no such file or directory
aggregatorConfig.proxyClientInfo.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/aggregator-front-proxy.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/aggregator-front-proxy.crt: no such file or directory
aggregatorConfig.proxyClientInfo.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/aggregator-front-proxy.key": could not read file: stat /etc/origin/master/reqheadauthconfig/aggregator-front-proxy.key: no such file or directory
Two things to note here.
For the provider.clientCA error: clientCA is required for the RequestHeader identity provider, because the OpenShift API needs it to verify the clients that pass requests with the "x-auth-token" HTTP header.
For all the files with the "no such file or directory" error: I think you just made a copy of /etc/origin/master/master-config.yaml into a new directory, but the file references in it are relative paths, so they now resolve against /etc/origin/master/reqheadauthconfig/ and cannot be found; either copy the referenced files as well or keep the config in its original location.
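As a rough sketch (not a verified config), the stanza with clientCA set might look like the following; the clientCA path is an assumption and must point at the CA that signed your authenticating proxy's client certificate:
identityProviders:
- name: my_request_header_provider
  challenge: true
  login: true
  mappingMethod: claim
  provider:
    apiVersion: v1
    kind: RequestHeaderIdentityProvider
    clientCA: /etc/origin/master/requestheader-client-ca.crt
    challengeURL: https://host:port/api/user/oauth/authorize?${query}
    loginURL: https://host:port/api/user/oauth/authorize?${query}
    headers:
    - x-auth-token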
I'm having trouble authenticating against AD to Windows machines from my Ansible host. I have a valid Kerberos ticket:
klist
Credentials cache: FILE:/tmp/krb5cc_1000
Principal: ansible@SOMEDOMAIN.LOCAL
Issued                Expires               Principal
Mar 10 09:15:27 2017  Mar 10 19:15:24 2017  krbtgt/SOMEDOMAIN.LOCAL@SOMEDOMAIN.LOCAL
My Kerberos config looks fine to me:
cat /etc/krb5.conf
[libdefaults]
default_realm = SOMEDOMAIN.LOCAL
# dns_lookup_realm = true
# dns_lookup_kdc = true
# ticket_lifetime = 24h
# renew_lifetime = 7d
# forwardable = true
# The following krb5.conf variables are only for MIT Kerberos.
# kdc_timesync = 1
# forwardable = true
# proxiable = true
# The following encryption type specification will be used by MIT Kerberos
# if uncommented. In general, the defaults in the MIT Kerberos code are
# correct and overriding these specifications only serves to disable new
# encryption types as they are added, creating interoperability problems.
#
# The only time when you might need to uncomment these lines and change
# the enctypes is if you have local software that will break on ticket
# caches containing ticket encryption types it doesn't know about (such as
# old versions of Sun Java).
# default_tgs_enctypes = des3-hmac-sha1
# default_tkt_enctypes = des3-hmac-sha1
# permitted_enctypes = des3-hmac-sha1
# The following libdefaults parameters are only for Heimdal Kerberos.
# v4_instance_resolve = false
# v4_name_convert = {
# host = {
# rcmd = host
# ftp = ftp
# }
# plain = {
# something = something-else
# }
# }
# fcc-mit-ticketflags = true
[realms]
    SOMEDOMAIN.LOCAL = {
        kdc = prosperitydc1.somedomain.local
        kdc = prosperitydc2.somedomain.local
        default_domain = somedomain.local
        admin_server = somedomain.local
    }
[domain_realm]
.somedomain.local = SOMEDOMAIN.LOCAL
somedomain.local = SOMEDOMAIN.LOCAL
When running a test command - ansible windows -m win_ping -vvvvv I get
'Server not found in Kerberos database'.
ansible windows -m win_ping -vvvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/windows/win_ping.ps1
<kerberostest.somedomain.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@SOMEDOMAIN.LOCAL on PORT 5986 TO kerberostest.somedomain.local
<kerberostest.somedomain.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.somedomain.local:5986/wsman
<kerberostest.somedomain.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py", line 154, in _winrm_connect
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 132, in open_shell
res = self.send_message(xmltodict.unparse(req))
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 207, in send_message
return self.transport.send_message(message)
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/transport.py", line 181, in send_message
prepared_request = self.session.prepare_request(request)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/sessions.py", line 407, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 306, in prepare
self.prepare_auth(auth, url)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 543, in prepare_auth
r = auth(self)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 308, in __call__
auth_header = self.generate_request_header(None, host, is_preemptive=True)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 148, in generate_request_header
raise KerberosExchangeError("%s failed: %s" % (kerb_stage, str(error.args)))
KerberosExchangeError: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
kerberostest.somedomain.local | UNREACHABLE! => {
"changed": false,
"msg": "kerberos: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))",
"unreachable": true
}
I am able to ssh to the target machine
ssh -v1 kerberostest.somedomain.local -p 5986
OpenSSH_7.3p1 Ubuntu-1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to kerberostest.somedomain.local [10.10.20.84] port 5986.
debug1: Connection established.
I can also ping all hosts with their hostname. I'm at a loss :(
Here is the Ansible hosts file:
sudo cat /etc/ansible/hosts
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
# Ex 1: Ungrouped hosts, specify before any group headers.
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
# If you have multiple hosts following a pattern you can specify
# them like this:
## www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
## db-[99:101]-node.example.com
[monitoring-servers]
#nagios
10.10.20.75 ansible_connection=ssh ansible_user=nagios
[windows]
#fileserver.somedomain.local#this machine isnt joined to the domain yet.
kerberostest.SOMEDOMAIN.LOCAL
[windows:vars]
#the following works for windows local account authentication
#ansible_ssh_user = prosperity
#ansible_ssh_pass = *********
#ansible_connection = winrm
#ansible_ssh_port = 5986
#ansible_winrm_server_cert_validation = ignore
#vars needed to authenticate on the windows domain using kerberos
ansible_user = ansible@SOMEDOMAIN.LOCAL
ansible_connection = winrm
ansible_winrm_scheme = https
ansible_winrm_transport = kerberos
ansible_winrm_server_cert_validation = ignore
I also tried joining the domain with realmd, which succeeded, but running the Ansible command produced the same result.
This looks like a case of a missing SPN.
Here's the relevant error snippet:
<kerberostest.somedomain.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@SOMEDOMAIN.LOCAL on PORT 5986 TO kerberostest.somedomain.local
<kerberostest.somedomain.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.somedomain.local:5986/wsman
<kerberostest.somedomain.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
That is based on something I noticed in your Ansible inventory file:
[windows]
#fileserver.somedomain.local#this machine isnt joined to the domain yet.
kerberostest.SOMEDOMAIN.LOCAL
I think the "this machine isnt joined to the domain yet" comment in that file is a good indicator that the SPN HTTP/kerberostest.somedomain.local does not exist in Active Directory, which would be causing the "server not found" message. You can SSH to kerberostest.somedomain.local, probably because it exists in DNS or in a hosts file on the client machine, but unless and until the SPN HTTP/kerberostest.somedomain.local is created in Active Directory, you will continue to get that error message. Adding that SPN properly at this point is a whole other topic of discussion.
You could use a command like this to test if you have that SPN defined:
setspn -Q HTTP/kerberostest.somedomain.local
SPNs exist to tell a Kerberos client where to find the instance of a given service on the network.
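If the query shows that the SPN is not registered, it can usually be added to the target machine's computer account; a hedged example (the account name KERBEROSTEST is an assumption):
setspn -S HTTP/kerberostest.somedomain.local KERBEROSTEST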
Also run:
nslookup kerberostest.somedomain.local
on at least two client machines to make sure the FQDN of the host where the Kerberized service runs exists in DNS. DNS is a requirement for Kerberos to run properly in a network.
Finally, you could use Wireshark on the client for further analysis; use the filter kerberos to show only Kerberos traffic.
In my case, the Server not found in Kerberos database error was a result of the target Windows machine's DNS name not being mapped to the right realm, as hinted at in this line from this Microsoft Technet Article:
The error “Server not found in Kerberos database” is common and can be misleading because it often appears when the service principal is not missing. The error can be caused by domain/realm mapping problems or it can be the result of a DNS problem where the service principal name is not being built correctly. Server logs and network traces can be used to determine what service principal is actually being requested.
I had a playbook whoami.yaml:
- hosts: windows-machine.mydomain.com
  tasks:
    - name: Run 'whoami' command
      win_command: whoami
Hosts file:
[windows]
windows-machine.mydomain.com
[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_user=user@FOO.BAR.MYDOMAIN.COM
ansible_password=<password>
ansible_port=5985
Since the DNS name was windows-machine.mydomain.com but the AD realm was FOO.BAR.MYDOMAIN.COM, I had to fix the mapping in the /etc/krb5.conf file on my Ansible host:
INCORRECT
This won't work for our case since this mapping rule won't apply to windows-machine.mydomain.com:
[domain_realm]
foo.bar.mydomain.com = FOO.BAR.MYDOMAIN.COM
CORRECT
This will correctly map windows-machine.mydomain.com to the realm FOO.BAR.MYDOMAIN.COM:
[domain_realm]
.mydomain.com = FOO.BAR.MYDOMAIN.COM
With Ansible I need to copy a script to different clients/hosts, and then modify a line in the script. The line depends on the client and is not the same each time.
The hosts all have the same name; each client's name is different.
Something like that:
lineinfile: >
  state=present
  dest=/path/to/myscript
  line="personal line"
when: {{ clients/hosts }} is {{ client/host }}
As you can see, I have no idea about the way to proceed.
It sounds like there are some clients that have some specific hosts associated to them, and the line in this script will vary based on the client.
In that case, you should use group vars. I've included a simplified example below.
Set up your hosts file like this:
[client1]
host1
host2
[client2]
host3
host4
Use group variables like this:
File group_vars/client1:
variable_script_line: echo "this is client 1"
File group_vars/client2:
variable_script_line: echo "this is client 2"
Create a template file named yourscript.sh.j2:
#!/bin/bash
# {{ ansible_managed }}
script line 1
script line 2
# below is the line that should be dynamic
{{ variable_script_line }}
And then use the template module like this:
---
- hosts: all
  tasks:
    - name: Deploy script to remote hosts
      template:
        src: /path/to/yourscript.sh.j2
        dest: /path/to/location/yourscript.sh
        mode: 0755
Note that the path to your source template will be different if you're using a role.
Ultimately, when the play is run against client1 versus client2, the content of the template will be rendered differently based on the variable (see the Ansible documentation on variable scope and precedence for more).
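For instance, on a host in the client1 group, the deployed /path/to/location/yourscript.sh would render roughly as follows (assuming the default ansible_managed string):
#!/bin/bash
# Ansible managed
script line 1
script line 2
# below is the line that should be dynamic
echo "this is client 1"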
The RabbitMQ documentation states:
Default Virtual Host and User
When the server first starts running, and detects that its database is uninitialised or has been deleted, it initialises a fresh database with the following resources:
a virtual host named /
The API has things like:
/api/exchanges/#vhost#/?name?/bindings
where "?name?" is a specific exchange name.
However, what does one put in for #vhost# for the default vhost?
As written here: http://hg.rabbitmq.com/rabbitmq-management/raw-file/3646dee55e02/priv/www-api/help.html
As the default virtual host is called "/", this will need to be encoded as "%2f".
So:
/api/exchanges/%2f/{exchange_name}/bindings/source
The full URL:
http://localhost:15672/api/exchanges/%2f/test_ex/bindings/source
And the result:
[{"source":"test_ex","vhost":"/","destination":"test_queue","destination_type":"queue","routing_key":"","arguments":{},"properties_key":"~"}]