Connect OpsCenter and DataStax agent running in two Docker containers - ssh

There are two containers running on two physical machines. One container is for OpsCenter and the other is for (DataStax Cassandra + OpsCenter agent). I have manually installed the OpsCenter agent on each Cassandra container. This setup is working fine.
But OpsCenter cannot upgrade the nodes because the SSH connections to them fail. Is there any way to create an SSH connection between those two containers?

In Docker you should NOT run SSH; read HERE for why. If after reading that you still want to run SSH, you can, but it is not the same as running it on Linux/Unix. That article has several options.
If you still want to SSH into your container, read THIS and follow the instructions. It will install OpenSSH. You then configure it and generate an SSH key that you copy/paste into the DataStax OpsCenter Agent upgrade dialog box when prompted for security credentials.
Lastly, upgrading the Agent is as simple as moving the latest Agent JAR (or whichever version of the Agent JAR you want to run) into the datastax-agent bin directory. Doing that manually and redeploying your container is much simpler than using SSH.
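For example, a minimal sketch of that manual route (the container name "cassandra1" and the install path are assumptions, adjust them to your image):

# Copy the new Agent JAR from the host into the running Cassandra container
docker cp datastax-agent.jar cassandra1:/usr/share/datastax-agent/bin/
# Restart the container so the agent picks up the new JAR
docker restart cassandra1
# Or bake it into the image and redeploy, e.g. in your Dockerfile:
# COPY datastax-agent.jar /usr/share/datastax-agent/bin/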
Hope that helps,
Pat

Related

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know that I can simply do so using oc rsh. But that assumes I have the OpenShift CLI installed on the node from which I want to ssh into the container.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as OpenShift, and it does have access to web applications hosted in a container (just for the sake of example). But instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which is running an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: CGroups, Namespacing, and SELinux. A "fancy" process if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using the oc exec, kubectl exec, podman exec, or docker exec CLI commands is the supported way to open a shell session inside a running container.
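For example (the pod and container names below are placeholders):

# OpenShift
oc exec -it my-pod -c my-container -- /bin/sh
# Kubernetes
kubectl exec -it my-pod -c my-container -- /bin/sh
# Docker or Podman, run on the host where the container lives
docker exec -it my-container /bin/sh
podman exec -it my-container /bin/sh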

connect to Minishift VM from host machine (Windows)

I have created a Minishift environment using the command:
"minishift start --vm-driver=virtualbox -v5 --show-libmachine-logs --alsologtostderr"
Now I have the console for minishift working.
I have to make some changes to the master node config files.
To connect to the Minishift VM you can use the minishift ssh command, which will connect your cmd/PowerShell session to a shell on the VM. You can then make whatever changes you want using sudo, which is password-less on the VM. However, please note that changing the VM directly is not recommended and might cause problems.
To make changes to the master node you could also use the minishift openshift config command, which provides a way to alter OpenShift's cluster configuration. In general, using this command in combination with oc adm is advised for most users, as it is a more secure and robust solution than altering the configs directly in the VM.
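For example (the config subcommands shown are from the Minishift docs of that era, so check minishift openshift config --help on your version; the patch itself is only illustrative):

# Open a shell on the Minishift VM from cmd/PowerShell
minishift ssh
# View the current master configuration
minishift openshift config view --target master
# Patch the master configuration instead of editing files on the VM directly
minishift openshift config set --patch '{"corsAllowedOrigins": [".*"]}'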

Not able to log in after migrating libvirt on-prem boot disk to Google Cloud Platform using CloudEndure migration service

I migrated the VM from libvirt to Google Cloud Platform using CloudEndure. The initial sync is complete and it has been in the Data Replication stage for over a week. Once the VM is launched in test mode and I try to connect with PuTTY over SSH, it throws Connection Refused and exits with error code 255.
I tried to log in with PuTTY using my on-premise local machine username and SSH key, as the CloudEndure documentation says I can log in to the replicated server with the same credentials.
The firewall rules in GCP and on the machine allow incoming connections on port 22. The SSH key is also set properly in the metadata section, yet it says the SSH key is not propagated properly.
I thought there was a problem with my local machine's ufw rules, so I turned the firewall off and replicated again, but with no luck. I also tried adding an SSH rule to ufw allowing connections from 0.0.0.0/0, and I am still not able to connect to the VM that is replicated and launched in test mode.
Steps tried:
I tried the interactive console method, where I tried to log in using the serial port, but the problem is it asks for an ID and password, and I don't have a password since I log in using only SSH keys.
Tried using a static IP for the instance: before replicating the boot disk I added a firewall rule allowing SSH from that static IP, then replicated and tried to log in (assuming it was blocking the connection via this IP).
Followed this article to install the Linux Guest Environment.
Generated an SSH key using ssh-keygen -t rsa -C "" in Cloud Shell (see the sketch below).
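A minimal sketch of that key step, assuming the gcloud CLI and placeholder instance, zone, and user names (note that add-metadata replaces the existing ssh-keys value, so include any keys already present):

# Generate a key pair in Cloud Shell (or locally)
ssh-keygen -t rsa -f ~/.ssh/gcp_key -C my-user
# Attach the public key to the instance metadata
gcloud compute instances add-metadata my-instance --zone us-central1-a \
  --metadata ssh-keys="my-user:$(cat ~/.ssh/gcp_key.pub)"
# Then try to connect
ssh -i ~/.ssh/gcp_key my-user@EXTERNAL_IP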
I cannot SSH into the Linux environment.
Operating System: Ubuntu 18.04 LTS x64
Any help would be greatly appreciated.

Can't access my EC2 chef node with knife ssh

So I've set up my EC2 Chef node in several ways (bootstrapping with knife or through chef-client parameters from my node), and every time I try to access the node through knife ssh I get the following error:
WARNING: Failed to connect to *node's FQDN* -- SocketError: getaddrinfo: nodename nor servname provided, or not known
I use knife ssh mainly to update the node by running sudo chef-client.
From this error I assume that I have no access to the FQDN, as it's an internal address; isn't the Chef server supposed to handle that for me?
I will soon have a private VPC on AWS, so in any case I won't be able to access the internal address from my workstation.
Is there a way to make the Chef server run this SSH command, or to run it some other way?
What I've discovered is basically my own misunderstanding of how Chef works: I was looking for some sort of push mechanism, and Chef does not support push out of the box.
There are 2 workarounds to this:
1) Chef's push jobs - as I'm writing this post, Chef push jobs do not work on Ubuntu 14, and I'm not too keen on letting this service dictate the OS of my choice.
2) Not recommended anywhere, but installing knife on my Chef server worked. Since the Chef server is within the VPC, it's my only point of access, and from there I'll run knife ssh to all my other nodes.
If anyone is looking for more of a push-based service, I'd recommend looking at SaltStack.
Since your node does not have an external IP, you should use an ssh gateway. Please refer to this thread: Using knife ec2 plugin to create VM in VPC private subnet
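If you go the gateway route, the knife ssh invocation typically looks something like this (the gateway host, user names, and search query are placeholders):

# Reach nodes on private IPs through a bastion host
knife ssh 'name:*' 'sudo chef-client' \
  --ssh-gateway ec2-user@bastion.example.com \
  -x ubuntu -a ipaddress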
As you mentioned in your answer, Chef doesn't provide push capability; instead it uses a pull model. And knife ssh does exactly that - it SSHes to the nodes and lets you run the chef-client command, which pulls the configuration from the Chef server.
Please note that in your 2nd solution, any node within the VPC with knife would do. It doesn't have to be the Chef server; or should I say, the Chef server doesn't have to be in this VPC at all. However, a solution like this compromises security, since your authentication with the Chef server and your SSH private key would both be located somewhere outside your workstation.
There is also one more option worth mentioning, which is to add chef-client runs to cron if your strategy is well tested.
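As a rough sketch of that last option (the interval and log path are arbitrary choices, and /usr/bin/chef-client is the usual but not guaranteed install path):

# Root crontab entry pulling configuration every 30 minutes
*/30 * * * * /usr/bin/chef-client >> /var/log/chef-client-cron.log 2>&1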

Where are TLS certificates stored for Docker on Windows Server 2016 TP3

I have a VM running Windows Server 2016 Technical Preview, have installed the Containers feature, and have then run the Install-ContainerHost.ps1 script from Microsoft's container tools repo:
https://github.com/Microsoft/Virtualization-Documentation/tree/master/windows-server-container-tools/Install-ContainerHost
I can now run the Docker daemon on Windows. Next I want to copy the certificates to a client machine so that I can issue commands to the host remotely, but I don't know where the certificates are stored on the host.
In the script the path variable is set to %ProgramData%\docker\certs.d
The certificates on Windows are located in the .docker folder in the current user's directory.
The docker --help command will show the exact path details.
AFAIK there are no certificates generated when you do what you are doing. If you drop certificates in the path you found, then it will use them and be secured; but otherwise there are none on the machine, which explains why it isn't exposed by default.
On my setup I connected without TLS, but that was on a VM that I could only access from my dev machine. Obviously anything that can be accessed over a network shouldn't do that.
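For reference, once certificates are in place on both ends, a remote client connection generally looks like this (the host name and file names are placeholders; 2376 is just the conventional TLS port):

docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://containerhost.example.com:2376 version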
Other people doing this are here: https://social.msdn.microsoft.com/Forums/en-US/84ca60c0-c54d-4513-bc02-14bd57676621/connect-docker-client-to-windows-server-2016-container-engine?forum=windowscontainers and here https://social.msdn.microsoft.com/Forums/en-US/9caf90c9-81e8-4998-abe5-837fbfde03a8/can-i-connect-docker-from-remote-docker-client?forum=windowscontainers
When I dug into the work-in-progress post, it has this:
Docker clients unsecured by default
In this pre-release, docker communication is public if you know where to look.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/work_in_progress#DockermanagementDockerclientsunsecuredbydefault
So eventually this should get better.