connect to Minishift VM from host machine (Windows)

I have created the Minishift environment using the command:
"minishift start --vm-driver=virtualbox -v5 --show-libmachine-logs --alsologtostderr"
Now I have the console for minishift working.
I have to make some changes to the master node config files.

To connect to the Minishift VM you can use the minishift ssh command, which connects your cmd/PowerShell session to a shell on the VM. You can then make whatever changes you want using sudo, which is passwordless on the VM. However, please note that changing the VM directly is not recommended and might cause problems.
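For example, a minimal session might look like this (the config file path below is only a placeholder, not the actual location, which depends on your Minishift version):

minishift ssh
# now inside the VM; sudo needs no password here
sudo vi /path/to/master-config.yaml   # placeholder path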
To make the changes to the master node you could also use the minishift openshift config command, which provides a way to alter OpenShift's cluster configuration. In general, using this command in combination with oc adm is advised for most users, as it is a more secure and robust solution than altering the configs directly in the VM.
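As a hedged sketch (the patch below is the example from the Minishift docs; check minishift openshift config --help for the exact flags in your version):

# view the current master configuration
minishift openshift config view --target master
# apply a JSON patch to the master configuration
minishift openshift config set --target master --patch '{"corsAllowedOrigins": [".*"]}'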

Related

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know that I can simply do so using oc rsh. But this is based on the assumption that I have the OpenShift CLI installed on the node from which I want to ssh into the container.
But what I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster. The node does have access to web applications hosted on a container (just for the sake of example), but instead of web access I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using the oc exec, kubectl exec, podman exec, or docker exec CLI commands to open a shell session inside a running container is the way you should connect to running containers.
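For example (the pod and container names below are placeholders):

# open an interactive shell in the pod's default container
oc rsh my-pod
# the same with exec, choosing a specific container
oc exec -it my-pod -c my-container -- /bin/sh
# the kubectl equivalent
kubectl exec -it my-pod -- /bin/sh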

Not able to log in after migrating libvirt on-prem boot disk to Google Cloud Platform using CloudEndure migration service

I migrated the VM from libvirt to Google Cloud Platform using CloudEndure. The initial sync is complete, and the VM has been in the Data Replication stage for over a week. Once the VM is launched in test mode and I try to connect with PuTTY over SSH, it throws Connection refused and exits with error code 255.
I tried to log in with PuTTY using my on-premises username and SSH key, since the CloudEndure documentation says I can log in to the replicated server with the same credentials.
The firewall rules in GCP and on the machine allow incoming connections on port 22. The SSH key is also set properly in the metadata section, yet it says the SSH key is not propagated properly.
I thought there was a problem with my local machine's ufw rules, so I tried turning off the firewall and replicating again, but it was no use. I also tried adding a ufw rule to allow SSH connections from 0.0.0.0/0, but I'm still not able to connect to the VM that was replicated and launched in test mode.
Steps tried:
I tried the interactive console method, where I tried to log in over the serial port, but the problem is that it asks for an ID and password, and I don't have a password since I log in using only SSH keys.
Tried using a static IP for the instance: before replicating the boot disk I added a firewall rule to allow SSH from that static IP, then replicated and tried to log in (assuming the connection was being blocked on that IP).
Followed this article to install the Linux guest environment.
Generated an SSH key using ssh-keygen -t rsa -C "" in Cloud Shell.
I cannot ssh into the Linux environment.
Operating System: Ubuntu 18.04 LTS x64
Any help would be appreciated.

connect OpsCenter and DataStax agent running in two Docker containers

There are two containers running on two physical machines. One container is for OpsCenter and the other is for DataStax Cassandra plus the OpsCenter agent. I have manually installed the OpsCenter agent on each Cassandra container. This setup is working fine.
But OpsCenter cannot upgrade the nodes because the SSH connections to them fail. Is there any way to create an SSH connection between those two containers?
In Docker you should NOT run SSH; read HERE why. If, after reading that, you still want to run SSH, you can, but it is not the same as running it on Linux/Unix. This article has several options.
If you still want to SSH into your container, read THIS and follow the instructions. It will install OpenSSH. You then configure it and generate an SSH key that you will copy/paste into the DataStax OpsCenter agent upgrade dialog box when prompted for security credentials.
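As a rough sketch only, assuming a Debian/Ubuntu-based Cassandra image (package names and paths vary by distribution):

# inside the Cassandra container, as root
apt-get update && apt-get install -y openssh-server
mkdir -p /var/run/sshd
/usr/sbin/sshd
# generate a key pair; the private key is what you paste into the credentials dialog
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa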
Lastly, upgrading the agent is as simple as moving the latest agent JAR, or whichever version of the agent JAR you want to run, into the datastax-agent bin directory. You can do that manually and redeploy your container, which is much simpler than using SSH.
Hope that helps,
Pat

Start ipython cluster using ssh on windows machine

I have a problem setting up an IPython cluster on a Windows server and connecting to this ipcluster using an SSH connection. I tried following the tutorial at https://ipython.org/ipython/doc/dev/parallel/parallel_process.html#ssh, but I have trouble understanding what the options mean exactly and which parameters to use...
Could anyone help a total noob to set up an ipcluster? (Let's say the remote machine has ip 192.168.0.1 and the local machine has 192.168.0.2)
If you scroll roughly to the middle of the page https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh you will find this:
Current limitations of the SSH mode of ipcluster are:
Untested and unsupported on Windows. Would require a working ssh on Windows. Also, we are using shell scripts to setup and execute commands on remote hosts.
That means there is no easy way to build an ipcluster with an SSH connection on Windows (if it works at all).
Do you really need to connect the machines with an SSH connection? I guess it's possible with an SSH client on each Windows machine, but if you are in a trusted local network you can also decide not to use the loopback interface and just expose the ports...
Sure, you can start the controller and engines separately! For further examples about ports (if you have problems with firewalls) see also How to setup ssh tunnel for ipython cluster (ipcluster)
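A hedged sketch of the manual approach, using the example addresses above (flag names may differ between IPython versions):

# on the Windows server (192.168.0.1): start the controller on its LAN address
ipcontroller --ip=192.168.0.1
# copy the generated ipcontroller-engine.json to each engine machine, then start an engine there
ipengine --file=ipcontroller-engine.json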

Amazon EC2 ssh timeout due to inactivity

I am able to issue commands to my EC2 instances via SSH, and these commands log output which I'm supposed to keep watching for a long time. The bad thing is that the SSH connection is closed after some time due to my inactivity, and I'm no longer able to see what's going on with my instances.
How can I disable/increase timeout in Amazon Linux machines?
The error looks like this:
Read from remote host ec2-50-17-48-222.compute-1.amazonaws.com: Connection reset by peer
You can set a keep-alive option in the ~/.ssh/config file in your home directory on your computer:
ServerAliveInterval 50
Amazon AWS usually drops your connection after only 60 seconds of inactivity, so this option will ping the server every 50 seconds and keep you connected indefinitely.
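If you prefer not to apply the keep-alive globally, you can scope it to your EC2 hosts, for example:

Host *.compute-1.amazonaws.com
    ServerAliveInterval 50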
Assuming your Amazon EC2 instance is running Linux (and the very likely case that you are using SSH-2, not 1), the following should work pretty handily:
Remote into your EC2 instance.
ssh -i <YOUR_PRIVATE_KEY_FILE>.pem <INTERNET_ADDRESS_OF_YOUR_INSTANCE>
Add a "client-alive" directive to the instance's SSH-server configuration file.
echo 'ClientAliveInterval 60' | sudo tee --append /etc/ssh/sshd_config
Restart or reload the SSH server so it recognizes the configuration change (an optional config check is sketched after these steps).
The command for that on Ubuntu Linux would be:
sudo service ssh restart
On any other Linux, though, the following is probably correct:
sudo service sshd restart
Disconnect.
logout
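As an optional extra step (not part of the original answer), you can sanity-check the edited configuration before restarting the server; sshd -t prints nothing when the file is valid:

sudo sshd -t   # use the full path /usr/sbin/sshd if sshd is not on your PATH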
The next time you SSH into that EC2 instance, those super-annoying frequent connection freezes/timeouts/drops should hopefully be gone.
This also helps with Google Compute Engine instances, which come with similarly annoying default settings.
Warning: Do note that TCPKeepAlive settings (which also exist) are subtly, yet distinctly different from the ClientAlive settings that I propose above, and that changing TCPKeepAlive settings from the default may actually hurt your situation rather than help.
More info here: http://man.openbsd.org/?query=sshd_config
Consider using screen or byobu and the problem will likely go away. What's more, even if the connection is lost, you can reconnect and restore access to the same terminal screen you had before, via screen -r or byobu -r.
byobu is an enhancement for screen, and has a wonderful set of options, such as an estimate of EC2 costs.
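For example, a typical screen workflow (byobu works the same way):

# start a named session and run your long-running command inside it
screen -S mylogs
# detach with Ctrl-A d; the command keeps running even if the SSH connection drops
# later, from a new SSH session, reattach:
screen -r mylogs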
I know that in PuTTY you can use a keepalive setting so it will send an activity packet every so often, so the session doesn't go "idle" or "stale".
http://the.earth.li/~sgtatham/putty/0.55/htmldoc/Chapter4.html#S4.13.4
If you are using another client, let me know.
You can use MobaXterm, a free tabbed SSH terminal, with the following setting:
Settings -> Configuration -> SSH -> SSH keepalive
Remember to restart the MobaXterm app after changing the setting.
I have 10+ custom AMIs, all based on Amazon Linux AMIs, and I've never run into any timeout issues due to inactivity on an SSH connection. I've had connections stay open for more than 24 hours without running a single command. I don't think there are any timeouts built into the Amazon Linux AMIs.