A way for the client to trigger an Ansible Playbook?

My task is to automate CentOS installs, including a suite of proprietary software, onto bare-metal machines. I've set up a PXE boot server that automates the initial install from a Kickstart file, and the rest gets handed off to an Ansible Playbook.
I've solved all of the above, except that I still have to be on the server to start the Playbook. I haven't found a good way for the Playbook to start at the request of the client (or perhaps the server-side PXE process could hand it off somehow?), in the hope that I can cut myself out of the install process.

I thought I would expand on my comment a little bit.
Depending on what you're trying to accomplish, there are a few options you could consider.
Use ansible-pull
The ansible-pull CLI fetches a git repository from a remote server and then locally runs a playbook (local.yml by default) from the top level of that repository.
This means you can drop something like this into your Kickstart %post script:
ansible-pull -U https://server.example.com/playbooks/client-configuration
This is a great solution if your playbook only requires running tasks on the client.
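As a minimal sketch, the %post section could look like this (the package-install steps are an assumption; adjust them to however you install Ansible on your CentOS version):

%post
# install ansible and git so ansible-pull can run (assumes EPEL provides ansible)
yum -y install epel-release
yum -y install ansible git
# fetch the playbook repository and run it against this machine
ansible-pull -U https://server.example.com/playbooks/client-configuration
%end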
Trigger a playbook run on the server
If your playbook really needs to execute on the server, you could set up a simple web service that allows clients to trigger the playbook run. In this case, you would embed a curl command (or similar) into your Kickstart %post script:
curl https://my.server.com/trigger-playbook
The trigger-playbook service would take care of triggering a playbook run targeting the appropriate client. This would require you to implement the service yourself (or use something like webhook to handle that task for you).
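As a rough sketch, if you go with the webhook tool, the hook could simply execute a small wrapper script like the one below (the script name, playbook path, and the idea of passing the caller's IP as the first argument are illustrative assumptions):

#!/bin/sh
# run-playbook.sh - hypothetical wrapper the webhook service executes;
# the hook definition passes the requesting client's IP as the first argument
CLIENT_IP="$1"
# the "host," inline-inventory syntax targets just that one client
ansible-playbook -i "${CLIENT_IP}," /etc/ansible/client-configuration.yml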

Related

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know I can simply do so using oc rsh, but that assumes I have the OpenShift CLI installed on the node I want to connect from.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster. The node does have access to web applications hosted in a container (just for the sake of example), but instead of web access I would like ssh access.
Is there any way that this can be achieved?
Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process, assigning it to the same cgroups and namespaces as the container's process on the same host, and then presenting that session to you, which is not something ssh is designed for.
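You can see this with plain kernel tooling: nsenter joins an existing process's namespaces, which is roughly what the various exec commands do for you (the PID here is a placeholder for the container's main process):

# join the mount, UTS, IPC, network, and PID namespaces of process 12345
nsenter --target 12345 --mount --uts --ipc --net --pid /bin/sh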
The oc exec, kubectl exec, podman exec, and docker exec CLI commands are the intended way to open a shell session inside a running container.
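For example (pod and container names are placeholders):

# open an interactive shell in a specific container of a pod
oc exec -it mypod -c mycontainer -- /bin/bash
# the kubectl equivalent
kubectl exec -it mypod -c mycontainer -- /bin/bash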

How do I connect to a docker container running Apache Drill remotely

On Machine A, I run
$ docker run -i --name drill-1.14.0 -p 8047:8047 \
    --detach -t drill/apache-drill:1.14.0 /bin/bash
<displays container ID>
$ docker exec -it drill-1.14.0 bash
<connects to container>
$ /opt/drill/bin/drill-localhost
My question is: how do I, from Machine B, run
docker exec -it drill-1.14.0 bash
on Machine A? I've looked through the help pages, but nothing is clicking.
Both machines are Windows (10 x64) machines.
You need to ssh or otherwise securely connect from machine B to machine A, and then run the relevant Docker command there. There isn't a safe shortcut around this.
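For example, assuming OpenSSH is set up on both Windows machines (the username and hostname are placeholders):

# from Machine B: allocate a terminal on Machine A and run the exec there
ssh -t user@machine-a docker exec -it drill-1.14.0 bash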
Remember that being able to run any Docker command at all implies root-level access to the system (you can docker run -u root -v /:/host ... and see or change any host-system files you want). Usually there's some control over who exactly can run Docker commands because of this. It's possible to open up a networked Docker socket, but it's extremely dangerous: now anyone who can reach that socket over the network can, say, change the host's password and sudoers files to allow a passwordless root-equivalent ssh login. (Google News brought me an article a week or two ago about attackers looking for open Docker network sockets and using them to turn machines into cryptocurrency miners, for instance.)
If you're building a service, and you expect users to interact with it remotely, then you probably need to expose whatever interfaces you need as network requests, not local shell commands. For instance, it's common for HTTP-based services to have a set of /admin URL paths that require separate password authentication or otherwise different privileges.
If you're trying to administer a service via its local config files, often the best path is to store the config files on the host system, use docker run -v to inject them into the container, and when you need to change them, docker stop; docker rm; docker run the container to get a new copy of it with a new config file.
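For example (the config path inside the image is an assumption; check where your image actually reads its configuration):

# inject host-side config files into the container at start-up
docker run -d --name drill-1.14.0 -p 8047:8047 \
    -v /srv/drill-conf:/opt/drill/conf \
    drill/apache-drill:1.14.0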
If you're packaging some application, but the primary way to interact with it is via CLI tools and local files, consider whether you actually want to use a tool that isolates the application's filesystem from the host's and requires root-level access to interact with it at all. The tooling for installing semi-isolated tools in your choice of scripting language is pretty mature, and for compiled languages quite well-established; there's nothing wrong with installing software on your host system.

Can't run Ansible in daemon-mode

Can I run Ansible to manage my hosts like a daemon? For example, I sometimes change my playbooks, and I don't want to run "ansible-playbook main.yml" manually. Please don't propose crontab; there is a specific reason I can't use crontab on the production server.
Thank you
What you are talking about here is called pull mode. Architecturally, Ansible is designed to work in push mode: you push changes to servers from a control machine.
If you really want to make Ansible work in pull mode, you can do so with the ansible-pull script; see the docs here: http://docs.ansible.com/playbooks_intro.html#ansible-pull
ansible-pull is a script that fetches your configuration playbooks from a remote repository and runs them against localhost. It does not, however, solve the problem of checking for new configuration changes; you need to solve that yourself, for example with cron.
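For example, a crontab entry like this (repository URL and playbook name are placeholders) would poll for changes every 15 minutes:

# m h dom mon dow command
*/15 * * * * ansible-pull -U https://git.example.com/config.git local.yml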
Another alternative is using Ansible Tower (you need a paid license for it).
Ansible Tower supports provisioning callbacks via its API: the server you want to configure makes an API request to the Tower server, and Tower in turn checks whether the host that sent the request is in its inventory. If it is, Tower will start configuring it.
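With callbacks enabled on a job template, the client-side request is just a curl call, roughly like this (the template ID, hostname, and key are placeholders):

# ask Tower to run job template 1 against this host
curl -s --data "host_config_key=YOUR_CONFIG_KEY" \
    https://tower.example.com/api/v2/job_templates/1/callback/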

How to set up an Apache httpd test instance?

For our continuous integration tests under Ubuntu (run by Jenkins), I'd like to test the Apache httpd configuration especially with regard to the rewrite rules.
My plan of attack was (and is):
create a temporary directory,
copy the configuration there and amend some directives,
fire up an Apache httpd on a non-standard port,
run the tests,
shutdown the httpd,
remove the temporary directory.
The repository of our Apache httpd configuration can be found here, my first stab at the test script here.
The process, however, is very cumbersome: many paths are hardcoded, and even the man page for apachectl just recommends reading the source for the various environment variables.
What is the recommended approach to set up such an isolated Apache httpd instance? Are there instructions or field reports that I have missed?
Rather than trying to rewrite configuration files, I suggest using a tool like Vagrant to create and provision a VM that runs your actual Apache configuration. Running in a VM provides isolation (you can expose and remap TCP ports as needed), and it also gives you a development environment for interactive testing and debugging.
Instead of creating a temporary directory and modifying configuration files, you would run vagrant up as the first build step. With the right configuration, Vagrant will install whatever packages are needed and provision your apache configuration. Once the VM is up, you can run your tests.
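A CI build step could then look roughly like this (the test-runner script is a placeholder for whatever exercises your rewrite rules):

vagrant up              # boot and provision the VM with the real Apache config
./run-rewrite-tests.sh  # run the tests against the VM's forwarded port
vagrant destroy -f      # tear the VM down after the run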
It's easy to get started with Vagrant by walking through the Getting Started Guide to see if it's right for you.

How to access a remote server

I want to create a repository on a remote server.
The access constraints I have are:
(a) the server's IP address
(b) a username/password
I am following this tutorial and am stuck at the first step: "Initial access to mercurial-server".
I am not able to understand the "ssh connection" syntax (especially the my-key part).
How can I connect to the remote server (using ssh-agent) in order to create a new repo?
This is the same problem we see again and again. mercurial-server isn't a part of Mercurial. It's a separate, third-party, not generally necessary piece of software that tries to make Mercurial administration easier without really succeeding.
Start here: https://www.mercurial-scm.org/wiki/PublishingRepositories/
and pick the type of access you want, HTTP or ssh, and then use either hgweb.cgi + Apache (for HTTP) or nothing at all if you just want to use ssh.
Specifically, for any server that has the Mercurial client on it (apt-get install mercurial on Debian or Ubuntu, yum install mercurial on Red Hat, Fedora, or CentOS), you don't need any extra software at all to host Mercurial repositories over ssh. You can just do:
hg clone myLocalrepo ssh://you@thatserver/myRemoteRepo
and poof you're hosting there.
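Since the question mentions ssh-agent: if all you have so far is a username/password, you can set up key-based access like this (the hostname is a placeholder):

# generate a key pair locally and install the public half on the server
ssh-keygen -t ed25519
ssh-copy-id you@thatserver
# load the key into ssh-agent so hg doesn't prompt for the passphrase each time
eval "$(ssh-agent)"
ssh-add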