How do I connect to a docker container running Apache Drill remotely

On Machine A, I run
$ docker run -i --name drill-1.14.0 -p 8047:8047 --detach -t drill/apache-drill:1.14.0 /bin/bash
<displays container ID>
$ docker exec -it drill-1.14.0 bash
<connects to container>
$ /opt/drill/bin/drill-localhost
My question is: how do I, from Machine B, run
docker exec -it drill-1.14.0 bash
on Machine A? I've looked through the help pages, but nothing is clicking.
Both machines are Windows (10 x64) machines.

You need to ssh or otherwise securely connect from machine B to machine A, and then run the relevant Docker command there. There isn't a safe shortcut around this.
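For example, from Machine B (user and hostname are placeholders; since both machines are Windows 10, this assumes the optional OpenSSH Server feature is enabled on Machine A):
ssh user@machine-a
docker exec -it drill-1.14.0 bash
Newer Docker clients (18.09+) can also tunnel the Docker API over ssh in a single step:
docker -H ssh://user@machine-a exec -it drill-1.14.0 bash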
Remember that being able to run any Docker command at all implies root-level access over the system (you can docker run -u root -v /:/host ... and see or change any host-system files you want). Usually there's some control over who exactly can run Docker commands because of this. It's possible to open up a networked Docker socket, but extremely dangerous: now anyone who can reach that socket over the network can, say, change the host's password and sudoers files to allow a passwordless root-equivalent ssh login. (Google News brought me an article a week or two ago about attackers looking for open Docker network sockets and using them to turn machines into cryptocurrency miners, for instance.)
If you're building a service and you expect users to interact with it remotely, then you probably need to expose whatever interfaces they need as network requests, not as local shell commands. For instance, it's common for HTTP-based services to have a /admin set of URL paths that require separate password authentication or otherwise different privileges.
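As a purely hypothetical example of such an endpoint, exercised over the network instead of via docker exec (URL and credentials are illustrative):
curl -u admin:secret https://service.example.com/admin/status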
If you're trying to administer a service via its local config files, often the best path is to store the config files on the host system, use docker run -v to inject them into the container, and when you need to change them, docker stop; docker rm; docker run the container to get a new copy of it with a new config file.
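A sketch of that workflow, with an illustrative image name and config path:
docker run -d --name myapp -v /srv/myapp/conf:/etc/myapp:ro myapp:1.0
# edit the files under /srv/myapp/conf on the host, then recreate the container
docker stop myapp
docker rm myapp
docker run -d --name myapp -v /srv/myapp/conf:/etc/myapp:ro myapp:1.0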
If you're packaging some application, but the primary way to interact with it is via CLI tools and local files, consider whether you actually want to use a tool that isolates the application's filesystem from the host's and requires root-level access to interact with it at all. The tooling for installing semi-isolated tools in your choice of scripting language is pretty mature, and for compiled languages quite well-established; there's nothing wrong with installing software on your host system.

Related

ruby linting with vscode remote + docker

I've managed to set up VSCode remote containers over SSH accessing my docker containers on the remote host (+ docker-compose).
One thing I can't work out, however, is how to use extensions like ruby-rubocop (a linter). I can install it on the remote SSH host, but it doesn't work, because my remote host doesn't directly run Ruby; Ruby runs inside one of my containers...
Is there a way to get it running inside a container on the remote host?

A way for client to trigger Ansible Playbook?

My task is to automate CentOS installs, including a suite of proprietary software, onto bare metal machines. I've set up a PXE boot server which automates initial install from a Kickstart file and the rest gets passed to an Ansible Playbook.
I've solved all of the above, except I have to be in the server to start the Playbook. I haven't found a good way for the Playbook to start at the request of the client (or perhaps the server-side PXE process can hand it off somehow?), in the hopes that I can cut myself out of the install process.
I thought I would expand on my comment a little bit.
Depending on what you're trying to accomplish, there are a few options you could consider.
Use ansible-pull
The ansible-pull CLI fetches a git repository from a remote server and then locally runs ansible-playbook on a playbook in the top level of that repository (by default local.yml).
This means you can drop something like this into your Kickstart %post script:
ansible-pull -U https://server.example.com/playbooks/client-configuration
This is a great solution if your playbook only requires running tasks on the client.
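Wrapped in Kickstart syntax, that might look like this (repository URL as above):
%post
ansible-pull -U https://server.example.com/playbooks/client-configuration
%end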
Trigger a playbook run on the server
If your playbook really needs to execute on the server, you could set up a simple web server that allows clients to trigger the playbook run. In this case, you would embed a curl command or similar into your Kickstart %post script:
curl https://my.server.com/trigger-playbook
The trigger-playbook service would take care of triggering a playbook run targeting the appropriate client. This would require you to implement the service yourself (or use something like webhook to handle that task for you).
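As a rough sketch using the webhook tool (the hook id, port, and script path are assumptions; run-playbook.sh would wrap your actual ansible-playbook invocation):
cat > hooks.json <<'EOF'
[
  {
    "id": "trigger-playbook",
    "execute-command": "/usr/local/bin/run-playbook.sh",
    "command-working-directory": "/etc/ansible"
  }
]
EOF
webhook -hooks hooks.json -port 9000
Clients would then hit https://my.server.com:9000/hooks/trigger-playbook rather than the bare URL above.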

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know that I can simply do so using oc rsh, but that assumes the OpenShift CLI is installed on the node I want to connect from.
What I actually want is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster, and it can already reach web applications hosted in containers (just for the sake of example). But instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using oc exec, kubectl exec, podman exec, or docker exec cli commands to open a shell session inside a running container is the method that should be used to connect with running containers.
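For example (pod and container names are placeholders):
oc exec -it my-pod -c my-container -- /bin/sh
kubectl exec -it my-pod -c my-container -- /bin/sh
Both need only network access to the cluster's API server plus valid credentials, not ssh access to the node the pod runs on.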

Is there a way to access a running docker container on a remote server from my local development environment (Sublime)?

Currently I can use rsub with Sublime to edit remotely, but the container is a second layer of ssh that is only accessible from the host machine.
Just curious: how do you use your remote host machine if you don't even have ssh running on it?
Regarding your question, I think you need to install openssh-server directly inside the container and map the container's port 22 to a custom port on the host. Inside your container you'll have to run some initial process that launches all the processes you need (such as openssh-server).
Consider this comprehensive example of using supervisord inside a Docker container.
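A minimal sketch of the port mapping (image name, port, and user are illustrative; the image must install and start openssh-server, e.g. via supervisord as above):
docker run -d --name myapp -p 2222:22 myapp-with-sshd
ssh -p 2222 -R 52698:localhost:52698 appuser@remote-host
The -R forward carries rsub's default port (52698) through the second ssh layer, so rmate inside the container can reach Sublime on your local machine.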

Where are TLS certificates stored for Docker on Windows Server 2016 TP3

I have a VM running Windows Server 2016 Technical Preview, and have installed the Containers feature, and then run the Install-ContainerHost.ps1 script from Microsoft's container tools repo
https://github.com/Microsoft/Virtualization-Documentation/tree/master/windows-server-container-tools/Install-ContainerHost
I can now run the Docker Daemon on Windows. Next I want to copy the certificates to a client machine so that I can issue commands to the host remotely, but I don't know where the certificates are stored on the host.
In the script the path variable is set to %ProgramData%\docker\certs.d
On Windows, the certificates are located in the .docker folder in the current user's directory.
Running docker --help will show the exact path details.
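Once certificates exist, the standard client invocation looks like this (the host is a placeholder):
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://host:2376 version
By default the client also looks for ca.pem, cert.pem, and key.pem under ~/.docker, so copying them there avoids repeating the flags.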
AFAIK no certificates are generated when you do what you are doing. If you drop certificates in the path you found, then it will use them and be secured; otherwise there are none on the machine, which explains why it isn't exposed by default.
On my setup I connected without TLS, but that was on a VM I could only access from my dev machine. Obviously, anything that can be reached over a network shouldn't do that.
Other people doing this are here: https://social.msdn.microsoft.com/Forums/en-US/84ca60c0-c54d-4513-bc02-14bd57676621/connect-docker-client-to-windows-server-2016-container-engine?forum=windowscontainers and here https://social.msdn.microsoft.com/Forums/en-US/9caf90c9-81e8-4998-abe5-837fbfde03a8/can-i-connect-docker-from-remote-docker-client?forum=windowscontainers
When I dug into the work-in-progress post, I found this:
Docker clients unsecured by default
In this pre-release, docker communication is public if you know where to look.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/work_in_progress#DockermanagementDockerclientsunsecuredbydefault
So eventually this should get better.