Can't access my EC2 chef node with knife ssh

So I've set up my EC2 chef node in several ways (bootstrapping with knife or through chef-client parameters from my node), and every time I try to access the node through knife ssh I get the following error:
WARNING: Failed to connect to *node's FQDN* -- SocketError: getaddrinfo: nodename nor servname provided, or not known
I use knife ssh mainly to update the node and just run sudo chef-client.
From this error I assume that I have no access to the FQDN, as it's an internal address. Isn't the chef server supposed to handle that for me?
I will soon have a private VPC on AWS, so in any case I won't be able to access the internal address from my workstation.
Is there a way to make the chef server run this ssh command, or run it any other way?

What I've discovered is basically my misunderstanding of how Chef works: I was looking for some sort of push mechanism, and Chef does not support push out of the box.
There are 2 workarounds to this:
1) Chef's push jobs - as I'm writing this post, Chef push jobs do not work on Ubuntu 14, and I'm not too keen on letting this service dictate the OS of my choice.
2) Not recommended anywhere, but installing knife on my chef server worked. Since the chef server is within the VPC, it's my only point of access, and from there I'll run knife ssh to all my other nodes (see the example below).
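For reference, a typical invocation from that in-VPC host could look something like this (the search query, SSH user, and key path are placeholders for your own setup):
# converge every registered node, connecting over its internal ipaddress attribute
knife ssh 'name:*' 'sudo chef-client' -x ubuntu -i ~/.ssh/internal-nodes.pem -a ipaddress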
If anyone is looking for more of a push-based service, I'd recommend looking at SaltStack.

Since your node does not have an external IP, you should use an SSH gateway. Please refer to this thread: Using knife ec2 plugin to create VM in VPC private subnet.
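If you would rather keep running knife from your workstation, knife ssh can also hop through a bastion host with its --ssh-gateway option; the bastion address, user, and search query below are made up:
# run chef-client on private-subnet nodes by tunnelling through a bastion host
knife ssh 'name:*' 'sudo chef-client' -x ubuntu --ssh-gateway ec2-user@bastion.example.com -a ipaddress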
As you mentioned in your answer, Chef doesn't provide push capability; instead it uses pull. And knife ssh does exactly that - it SSHes into the nodes and lets you run the chef-client command, which pulls the configuration from the Chef server.
Please note that in your second solution, any node within the VPC with knife installed would do. It doesn't have to be a Chef server; or should I say, the Chef server doesn't have to be in this VPC at all. However, a solution like this compromises security, since your authentication with the Chef server and your SSH private key would both be located somewhere outside your workstation.
There is one more way worth mentioning: adding chef-client runs to cron, if your strategy is well tested.
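As a rough sketch, such a cron-driven pull could be a single /etc/cron.d entry like the one below; the interval, binary path, and log file are arbitrary choices:
# /etc/cron.d/chef-client: converge every 30 minutes
*/30 * * * * root /usr/bin/chef-client >> /var/log/chef-client-cron.log 2>&1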

Related

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I do know that I can simply do so using oc rsh. But this is based on the assumption that I have the OpenShift CLI installed on the node from which I want to ssh into the container.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster. The node does have access to web applications hosted in a container (just for the sake of example), but instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using the oc exec, kubectl exec, podman exec, or docker exec CLI commands to open a shell session inside a running container is the method you should use to connect to running containers.
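For completeness, the exec form looks like this (the pod and container names below are placeholders):
# open an interactive shell in a specific container of a pod
oc exec -it my-pod -c my-container -- /bin/sh
# the equivalent with plain Kubernetes tooling
kubectl exec -it my-pod -c my-container -- /bin/sh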

How do I set up a proxy server that will SSH tunnel into a VPC I have in AWS for a Hibernate MySQL connection for me?

I have a microservice, let's call it RdsConnector, that is normally deployed on a machine in AWS and that I want to test locally. It connects to a MySQL instance, which is also in AWS, without any SSH tunnelling, as they are in the same VPC. To connect to that MySQL instance from my local machine, I can use SSH tunnelling to get into the VPC I have set up in AWS.
I could set up my microservice to also connect through SSH (optionally, perhaps), but I don't want to do that: then I would have a different configuration for running it locally vs. in the cloud. What I want instead is some kind of proxy server on my local machine that takes the SSH credentials and does the SSH tunnelling, exposing the VPC MySQL endpoint locally. Then RdsConnector will just use that local endpoint, and I won't have to keep a different config for RdsConnector just for local testing.
I'm not very familiar with the networking technologies in use here. I just know that there are no public IPs in my VPC, so I have to SSH in. I imagine that what I want is possible, but I have no idea what the moving parts would be.
OK, this turned out to be quite simple actually! The ssh program can do this for you; this is how I configure it with the macOS ssh client:
ssh -N -i "/Users/foo/aws_ssh_key.pem" \
    -L "localhost:5990:stack-name-vpc-db.asdfqwerty.us-east-1.rds.amazonaws.com:3306" \
    foo@12.34.567.890
With the -L flag, ssh forwards the given local endpoint over the SSH connection to the specified remote endpoint on the other side. The -N flag is optional; it just disables the regular remote shell, since we only want the port forwarding. The microservice can then treat localhost:5990 as if it were the regular MySQL endpoint.
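If you use the tunnel a lot, the same forwarding can live in ~/.ssh/config so the command shrinks to ssh -N rds-tunnel; the host alias below is made up, and the addresses simply mirror the example above:
Host rds-tunnel
    HostName 12.34.567.890
    User foo
    IdentityFile /Users/foo/aws_ssh_key.pem
    LocalForward 5990 stack-name-vpc-db.asdfqwerty.us-east-1.rds.amazonaws.com:3306
With the tunnel up, a quick check such as mysql -h 127.0.0.1 -P 5990 -u dbuser -p (the user name is a placeholder) should reach the RDS instance.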

Connect OpsCenter and a DataStax agent running in two Docker containers

There are two containers running on two physical machines: one container for OpsCenter and the other for DataStax Cassandra plus the OpsCenter agent. I have manually installed the OpsCenter agent on each Cassandra container. This setup is working fine.
But OpsCenter cannot upgrade the nodes because the SSH connections to them fail. Is there any way to create an SSH connection between those two containers?
In Docker you should NOT run SSH; read HERE why. If, after reading that, you still want to run SSH, you can, but it is not the same as running it on Linux/Unix. This article covers several options.
If you still want to SSH into your container, read THIS and follow the instructions. It will install OpenSSH. You then configure it and generate an SSH key that you will copy/paste into the DataStax OpsCenter agent upgrade dialog box when prompted for security credentials.
Lastly, upgrading the agent is as simple as moving the latest agent JAR (or whichever version of the agent JAR you want to run) into the DataStax agent's bin directory. You can do that manually and redeploy your container much more simply than by using SSH.
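As a rough illustration of that last point, something like the following would work (the container name and the agent's bin path are made up; check where the agent actually lives in your image):
# copy the new agent JAR into the running container, then restart it
docker cp datastax-agent.jar cassandra-node1:/usr/share/datastax-agent/bin/
docker restart cassandra-node1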
Hope that helps,
Pat

How to make Amazon EC2 instances authenticate each other automatically?

I am using the AWS Java SDK to launch EC2 instances (running Ubuntu 12.04) and run a distributed tool on them. The tool uses Open MPI for message passing between the nodes, and Open MPI uses SSH to connect the nodes to each other.
The problem is that the EC2 instances don't authenticate each other for SSH connections by default. This tutorial shows how to set up SSH by generating keys and adding them to the nodes. However, when I tried to add the generated key to the slaves using the command
$ scp /home/mpiuser/.ssh/id_dsa.pub mpiuser@slave1:.ssh/authorized_keys
I still got permission denied. Also, after generating new keys, I was no longer able to log in using the ".pem" key that I got from Amazon.
I am not experienced with SSH keys, but I would like some way of configuring each EC2 instance (when it is first created) to authenticate the others, for example by copying a key into each of them. Is this possible, and how could it be done?
P.S.: I can connect to each instance once it is launched and can execute any commands on them over SSH.
I found the solution: I added the Amazon private key (.pem) to the image (AMI) that I use to create the EC2 instances, and I changed the /etc/ssh/ssh_config file by adding a new identity file:
IdentityFile /path/to/the/key/file
This made SSH recognize the .pem private key when it tries to connect to any other EC2 instance created with the same key.
I also changed StrictHostKeyChecking to no, which stopped the "authenticity of host xxx can't be established" message that requires user interaction before connecting to that host.
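Putting both changes together, the relevant part of the /etc/ssh/ssh_config baked into the AMI would look something like this (the key path below is just wherever you copied the .pem file; adjust it to your image):
Host *
    IdentityFile /home/ubuntu/.ssh/aws-key.pem
    # skips the interactive host-key prompt; only acceptable inside a trusted cluster
    StrictHostKeyChecking no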

How to access a remote server

I want to create a repository on the remote server.
The access constraints that I have:
(a) IP address (of the server)
(b) username/password
I am following this tutorial and am stuck at the first step: "Initial access to mercurial-server".
I am not able to understand the "ssh connection" syntax there (especially the my-key part).
How can I connect to the remote server (using ssh-agent) in order to create a new repo?
This is the same problem we see again and again. mercurial-server isn't part of Mercurial. It's a separate, third-party, not generally necessary piece of software that tries to make Mercurial administration easier without really succeeding.
Start here: https://www.mercurial-scm.org/wiki/PublishingRepositories/
and pick the type of access you want, http or ssh, and then use either hgweb.cgi + apache (for http) or nothing at all if you just want to use ssh.
Specifically, for any server that has the Mercurial client on it (apt-get install mercurial on Debian or Ubuntu, yum install mercurial on Red Hat, Fedora, or CentOS), you don't need any extra software at all to host Mercurial repositories over ssh. You can just do:
hg clone myLocalrepo ssh://you@thatserver/myRemoteRepo
and poof you're hosting there.
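Once the repository exists there, any other machine with the same SSH access can work against that URL, for example:
hg clone ssh://you@thatserver/myRemoteRepo
cd myRemoteRepo
# ...make changes and commit...
hg push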