How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know that I can simply do this with oc rsh, but that assumes the OpenShift CLI is installed on the node I want to connect from.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster, and it already has access to web applications hosted in a container (just as an example). But instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?

Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is therefore not quite the same as opening an ssh connection to a server. It requires starting a new shell process on the same host as the container process and assigning it to the container's cgroups and namespaces before presenting the session to you, which is not something ssh was designed to do.
The oc exec, kubectl exec, podman exec, and docker exec CLI commands are the intended way to open a shell session inside a running container.
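For example, from any machine that has the relevant CLI installed and is authenticated against the cluster (the pod and container names below are placeholders):

$ oc exec -it my-pod -c my-container -- /bin/sh
$ kubectl exec -it my-pod -c my-container -- /bin/sh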

Related

ruby linting with vscode remote + docker

I've managed to set up VSCode Remote Containers over SSH, accessing my Docker containers on the remote host (plus docker-compose).
One thing I can't work out, however, is how to use extensions like ruby-rubocop (a linter). I can install it on the remote SSH host, but it doesn't work, because the remote host doesn't run Ruby directly. Ruby runs inside one of my containers...
Is there a way to get it running inside a container on the remote host?

How do I connect to a docker container running Apache Drill remotely

On Machine A, I run
$ docker run -i --name drill-1.14.0 -p 8047:8047 \
    --detach -t drill/apache-drill:1.14.0 /bin/bash
<displays container ID>
$ docker exec -it drill-1.14.0 bash
<connects to container>
$ /opt/drill/bin/drill-localhost
My question is: how do I, from Machine B, run
docker exec -it drill-1.14.0 bash
on Machine A? I've looked through the help pages, but nothing is clicking.
Both machines are Windows (10 x64) machines.
You need to ssh or otherwise securely connect from machine B to machine A, and then run the relevant Docker command there. There isn't a safe shortcut around this.
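For example, assuming Machine A has an SSH server enabled (Windows 10 includes an optional OpenSSH server; the user and host names here are placeholders):

$ ssh user@machine-a
machine-a$ docker exec -it drill-1.14.0 bash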
Remember that being able to run any Docker command at all implies root-level access to the system (you can docker run -u root -v /:/host ... and see or change any host-system files you want). Usually there's some control over who exactly can run Docker commands because of this. It's possible to open up a networked Docker socket, but it is extremely dangerous: now anyone who can reach that socket over the network can, say, change the host's password and sudoers files to allow a passwordless root-equivalent ssh login. (Google News brought me an article a week or two ago about attackers looking for open Docker network sockets and using them to turn machines into cryptocurrency miners, for instance.)
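To illustrate that equivalence: anyone who can run Docker commands can read (or edit) any file on the host. A sketch, using busybox only as an example image:

$ docker run --rm -u root -v /:/host busybox cat /host/etc/shadow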
If you're building a service and you expect users to interact with it remotely, then you probably need to expose whatever interfaces they need as network requests, not as local shell commands. For instance, it's common for HTTP-based services to have an /admin set of URL paths that require separate password authentication or otherwise different privileges.
If you're trying to administer a service via its local config files, often the best path is to store the config files on the host system, use docker run -v to inject them into the container, and when you need to change them, docker stop; docker rm; docker run the container to get a new copy of it with a new config file.
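A sketch of that workflow for the Drill container above (the host and container config paths are assumptions):

$ docker stop drill-1.14.0
$ docker rm drill-1.14.0
$ docker run --detach -t --name drill-1.14.0 -p 8047:8047 \
    -v /srv/drill/conf:/opt/drill/conf \
    drill/apache-drill:1.14.0 /bin/bash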
If you're packaging some application, but the primary way to interact with it is via CLI tools and local files, consider whether you actually want to use a tool that isolates the application's filesystem from the host's and requires root-level access to interact with it at all. The tooling for installing semi-isolated tools in your choice of scripting language is pretty mature, and for compiled languages quite well-established; there's nothing wrong with installing software on your host system.

Jenkins selenium docker and application files

I have a Selenium hub and a Selenium node up and running in Docker. I also have a Docker container that includes my application, set up the same way as my PC. I get the following error.
[ConnectionException] Can't connect to Webdriver at http://ip:4444/wd/hub. Please make sure that Selenium Server or PhantomJS is running.
The IP is correct, since I can see the Selenium Grid there as expected. What might be the problem? When I get inside the container that I have in Jenkins, it runs my tests as well.
Have you explicitly instructed the hub Docker container to expose its internal port 4444 as 4444 externally?
Instructing a container to expose ports does not enforce the same port numbers to be used. So in your case, while internally it is running on 4444, externally it could be whatever port Docker chose when it started the container.
How did you start your container? If via the docker command line, did you use -P or -p 4444:4444? (Note the difference in case.) -P simply publishes the exposed ports with no guarantee of the numbers, whereas -p lets you map them exactly as you wish.
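For example, to pin the hub to a fixed external port (selenium/hub is used here for illustration):

$ docker run -d -p 4444:4444 --name selenium-hub selenium/hub

versus letting Docker pick the external port, which you then have to look up:

$ docker run -d -P --name selenium-hub selenium/hub
$ docker port selenium-hub 4444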
There are many ways to orchestrate Docker which may allow you to control this in a different way.
For example, Docker Compose can let your containers talk to each other on 4444 even when those ports are not published externally. It achieves this by putting the services on a shared network, and it is very simple to set up and use.
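You can sketch the same effect with plain docker commands; on a user-defined network, containers reach each other by name on the internal port with nothing published to the host (the image names and HUB_HOST variable follow the old Selenium 3 images and are assumptions here):

$ docker network create grid
$ docker run -d --net grid --name selenium-hub selenium/hub
$ docker run -d --net grid -e HUB_HOST=selenium-hub selenium/node-chrome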

Is there a way to access a running docker container on a remote server from my local development environment (Sublime)?

Currently I can use rsub with Sublime to edit files remotely, but the container is a second layer of ssh that is only accessible from the host machine.
Just curious: how do you use your remote host machine if you have no ssh running on it at all?
Regarding your question, I think you need to install openssh-server directly inside the container and map the container's port 22 to a custom port on the host. Inside the container you'll have to run some initial process that launches all the other processes you need (such as openssh-server).
Consider this comprehensive example of the use of supervisord inside a Docker container.
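A minimal sketch of that approach, assuming a Debian-based image and host port 2222 (both are assumptions):

# inside the image, e.g. in Dockerfile RUN steps or the init process:
$ apt-get update && apt-get install -y openssh-server
$ mkdir -p /var/run/sshd
$ /usr/sbin/sshd -D

# on the remote host, map the container's port 22 to custom host port 2222:
$ docker run -d -p 2222:22 my-app-image

# from your local machine:
$ ssh -p 2222 root@remote-host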

Connect OpsCenter and the DataStax agent running in two docker containers

There are two containers running on two physical machines: one container for OpsCenter, and the other for DataStax Cassandra plus the OpsCenter agent. I have manually installed the OpsCenter agent on each Cassandra container. This setup is working fine.
But OpsCenter cannot upgrade the nodes because the ssh connections to them fail. Is there any way to create an ssh connection between those two containers?
In Docker you should NOT run SSH; read HERE for why. If, after reading that, you still want to run SSH, you can, but it is not the same as running it on Linux/Unix. This article has several options.
If you still want to SSH into your container, read THIS and follow the instructions. It will install OpenSSH. You then configure it and generate an SSH key that you will copy/paste into the DataStax OpsCenter agent upgrade dialog box when prompted for security credentials.
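A sketch of the key-generation step, assuming OpenSSH is already installed in the Cassandra container (the container name is a placeholder):

$ docker exec -it cassandra-node bash
# inside the container:
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_rsa    # the key material to paste into the OpsCenter dialog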
Lastly, upgrading the agent is as simple as moving the latest agent JAR (or whichever version of the agent JAR you want to run) into the datastax-agent bin directory. You can do that manually and redeploy your container, which is much simpler than using SSH.
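The manual route can be as simple as this sketch (the JAR name and target path are hypothetical):

$ docker cp datastax-agent.jar cassandra-node:/usr/share/datastax-agent/bin/
$ docker restart cassandra-node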
Hope that helps,
Pat