How to use Packer to build an AMI without SSH

I would like to use Packer to build AMIs on which SSH is not running. This is for immutable infrastructure. We will be building base/golden images and then building more streamlined images from the base image, but ultimately I don't want SSH or any other means of remote access to the image. Can Packer do this?

I'm not sure about Packer's ability to do this. However, you could use AWS Security Groups to control SSH access to your EC2 instances after they've been launched from your AMIs.
Just create a Security Group with no ingress rules, which blocks all inbound connections, and place your EC2 instances into it.
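As a rough AWS CLI sketch of that idea (the VPC ID, AMI ID, and resource names below are placeholders, not values from the question):

# create a security group with no ingress rules; inbound traffic is blocked by default
aws ec2 create-security-group --group-name no-inbound --description "No inbound access" --vpc-id vpc-0123456789abcdef0
# launch an instance from the baked AMI into that security group
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --security-group-ids sg-0123456789abcdef0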

Related

Cannot ssh to google cloud instance

I'm a newbie on GCP and I need your help. These are the steps I took:
(1) I set up Google Cloud firewall rules to allow SSH on port 22, and I could SSH into my CentOS 7 instance correctly.
(2) After connecting to my instance, I ran a firewall script, and since then I cannot SSH into it anymore. It seems the script blocks the SSH port even though it is allowed in VPC Network > Firewall rules.
(3) Now I cannot connect to my instance at all, including via "Open in browser window" in the SSH menu of the GCP console.
Is there any way to connect to my instance? Please help.
Thanks in advance.
Bom
You probably blocked the SSH port by changing the firewall configuration inside the VM.
So you can consider two options:
1) Recreate the VM, if it holds no sensitive data and not too much work has gone into the existing setup.
2) Detach the boot disk and reuse it on another instance, so you can change the firewall configuration files.
Check the official docs, "Use your disk on a new instance", for that:
gcloud compute instances delete $PROB_INSTANCE --keep-disks=boot
gcloud compute instances create new-instance --disk name=$BOOT_DISK,boot=yes,auto-delete=no
gcloud compute ssh new-instance
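Once you have shell access to the system on that disk again, and assuming the firewall rules were actually persisted (CentOS 7 typically uses firewalld), undoing the block might look roughly like this; the exact commands depend on what your firewall script did:

# re-allow SSH if firewalld is what blocks it
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload
# or, if plain iptables rules were applied, flush them
sudo iptables -F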
Hope it will help you.

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to SSH into a container within an OpenShift pod.
I do know that I can simply do so using oc rsh. But that assumes I have the OpenShift CLI installed on the node from which I want to SSH into the container.
What I actually want is to SSH into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster. The node does have access to web applications hosted in a container (just for the sake of example), but instead of web access, I would like to have SSH access.
Is there any way that this can be achieved?
Unlike a server, which is running an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using the oc exec, kubectl exec, podman exec, or docker exec CLI commands to open a shell session inside a running container is the method that should be used to connect to running containers.
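As a quick illustration of that approach (the pod and container names are placeholders):

# open an interactive shell in a running container
oc exec -it my-pod -c my-container -- /bin/sh
# or, with plain Kubernetes tooling
kubectl exec -it my-pod -c my-container -- /bin/sh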

How do you make an Express.js API accessible from the Internet?

I have an Express API server running on localhost on my own machine. How do I make it accessible from the Internet and not just my own machine?
Preferably, it would be deployed on AWS.
In AWS there are multiple ways of hosting your Express application, trading off flexibility against convenience.
AWS Elastic Beanstalk:
This gives you the most convenience by creating an autoscaling and load-balancing environment with version management and rollback support, all from one place in the AWS web console. It also provides IDE support for deployments and CLI commands for CI/CD.
AWS ECS:
If you plan to dockerize your application (which I highly recommend), you can use AWS ECS to manage your Docker cluster, with container-level autoscaling and load-balancing support for more convenience. This also provides a CLI for CI/CD.
AWS EC2:
If you need more flexibility, you can get a virtual server in AWS and configure autoscaling and load balancing manually. For a simple web app I rank this option last, since you have to do most things yourself.
All of these services will give you a publicly accessible URL if you configure them properly to grant access from outside. You need to configure networking and security groups correctly, exposing either the load balancer or the instance IP/DNS name to the outside.
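For example, the Elastic Beanstalk route with the EB CLI looks roughly like this (the application and environment names are made up for illustration, and the EB CLI plus AWS credentials are assumed to be set up):

# initialize the Beanstalk application in the project directory
eb init my-express-api --platform node.js --region us-east-1
# create a load-balanced environment and deploy the current code
eb create my-express-api-env
# open the environment's public URL in a browser
eb open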

Can't access my EC2 chef node with knife ssh

So I've set up my EC2 Chef node in several ways (bootstrapping with knife or through chef-client parameters from my node), and every time I try to access the node through knife ssh I get the following error:
WARNING: Failed to connect to *node's FQDN* -- SocketError: getaddrinfo: nodename nor servname provided, or not known
I use knife ssh mainly to update the node and just run sudo chef-client.
From this error I assume that I have no access to the FQDN, as it's an internal address. Isn't the Chef server supposed to handle that for me?
I will soon have a private VPC on AWS, so in any case I won't be able to access the internal address from my workstation.
Is there a way to make the Chef server run this SSH command, or to run it any other way?
What I've discovered is basically a misunderstanding on my part of how Chef works: I was looking for some sort of push mechanism, and Chef does not support push out of the box.
There are two workarounds:
1) Chef push jobs. As I'm writing this post, Chef push jobs do not work on Ubuntu 14, and I'm not too keen on letting this service dictate my choice of OS.
2) Not recommended anywhere, but installing knife on my Chef server worked. Since the Chef server is within the VPC, it's my only point of access, and from there I run knife ssh to all my other nodes.
If anyone is looking for more of a push-based service, I'd recommend looking at SaltStack.
Since your node does not have an external IP, you should use an SSH gateway. Please refer to this thread: Using knife ec2 plugin to create VM in VPC private subnet
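As a sketch of what that can look like with knife ssh (the search query, users, and bastion host below are placeholders):

# run chef-client on all nodes, tunnelling through a bastion/gateway host
knife ssh 'name:*' 'sudo chef-client' --ssh-user ubuntu --ssh-gateway ec2-user@bastion.example.com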
As you mentioned in your answer, Chef doesn't provide push capability; instead it uses pull. And knife ssh does exactly that: it SSHes to the nodes and lets you run the chef-client command, which pulls the configuration from the Chef server.
Please note that in your second solution, any node within the VPC with knife installed would do. It doesn't have to be the Chef server; in fact, the Chef server doesn't have to be in this VPC at all. However, a solution like this weakens security, since your credentials for the Chef server and your SSH private key would both be located somewhere outside your workstation.
One more option worth mentioning is to run chef-client from cron, if your strategy is well tested.
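A minimal sketch of such a cron entry (the interval and log path are arbitrary choices for illustration):

# run chef-client every 30 minutes and append its output to a log
*/30 * * * * /usr/bin/chef-client >> /var/log/chef-client-cron.log 2>&1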

how to make amazon EC2 instances authenticate each other automatically?

I am using the AWS Java SDK to launch EC2 instances (running Ubuntu 12.04) and run a distributed tool on them. The tool uses Open MPI for message passing between the nodes, and Open MPI uses SSH to connect the nodes to each other.
The problem is that the EC2 instances don't authenticate each other for SSH connections by default. This tutorial shows how to set up SSH by generating keys and adding them to the nodes. However, when I tried to add the generated key to the slaves using the command
$ scp /home/mpiuser/.ssh/id_dsa.pub mpiuser@slave1:.ssh/authorized_keys
I still got permission denied. Also, after generating new keys, I was not able to log in using the ".pem" key that I got from Amazon.
I am not experienced with SSH keys, but I would like some way of configuring each EC2 instance (when it is first created) to authenticate the others, for example by copying a key onto each of them. Is this possible, and how could it be done?
P.S.: I can connect to each instance once it is launched and can execute any commands on it over SSH.
I found the solution: I added the Amazon private key (.pem) to the image (AMI) that I use to create the EC2 instances, and I changed the /etc/ssh/ssh_config file by adding a new identity file:
IdentityFile /path/to/the/key/file
This makes SSH use the .pem private key when it tries to connect to any other EC2 instance created with the same key pair.
I also changed StrictHostKeyChecking to no, which stopped the "authenticity of host xxx can't be established" message that requires user interaction before proceeding to connect to that host.
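Put together, the relevant stanza in /etc/ssh/ssh_config might look like this (the key path is a placeholder for wherever you baked the .pem file into the AMI):

Host *
    IdentityFile /home/ubuntu/.ssh/cluster-key.pem
    StrictHostKeyChecking no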