I have an instance in Google Cloud Platform's Compute Engine. My goal is to connect to this instance via the ssh command or PuTTY. I followed this guide (except the Ansible part): https://alex.dzyoba.com/blog/gcp-ansible-service-account/
But when I try to connect via
ssh -i generated-key <username of service account>@<external ip>
it gives me
Host key verification failed.
I also tried the keys that were generated by Google, but it made no difference.
Compute Firewall's rules are:
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
default-allow-https default INGRESS 1000 tcp:443 False
default-allow-icmp default INGRESS 65534 icmp False
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp False
default-allow-rdp default INGRESS 65534 tcp:3389 False
default-allow-ssh default INGRESS 65534 tcp:22 False
What should I do to connect to the instance via SSH?
It turns out that you cannot log in via PuTTY/ssh using OS Login (at least that's what a support agent told me). So in my case I added the public key in the instance settings (instead of in the project metadata) and it works.
Edit: Before you downvote, please comment on why, so I can improve next time. Thank you.
I tried to SSH from a pod in Kubernetes to another VM in GCE, mainly because I want to use rsync between the two. At the moment I use gcloud compute scp to copy the file to my local computer and then kubectl cp.
I used kubectl exec to access the pod, set up SSH with ssh-keygen, and copied id_rsa.pub to /home/user/.ssh/ on the designated VM, but when I try ssh -v user@ip it just says the connection timed out.
I tried setting up gcloud inside the pod so I could use gcloud compute ssh, and I also tried gcloud compute config-ssh; the results are the same.
When I SSH from my own computer it works fine.
I think a firewall or network configuration is causing this problem, but I'm not really sure how to fix it. Should I expose the SSH port with a Kubernetes LoadBalancer service, or should I edit my firewall rules in the VPC network?
Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.
You can find the solution HERE
First, find your cluster's network:
gcloud container clusters describe [CLUSTER_NAME] --format=get"(network)"
Then get the cluster's IPv4 CIDR used for the containers:
gcloud container clusters describe [CLUSTER_NAME] --format=get"(clusterIpv4Cidr)"
Finally create a firewall rule for the network, with the CIDR as the source range, and allow all protocols:
gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" --network="[NETWORK]" --source-ranges="[CLUSTER_IPV4_CIDR]" --allow=tcp,udp,icmp,esp,ah,sctp
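For example, a rough end-to-end sketch using a hypothetical cluster name (my-cluster); substitute your own values:
NETWORK=$(gcloud container clusters describe my-cluster --format=get"(network)")
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe my-cluster --format=get"(clusterIpv4Cidr)")
gcloud compute firewall-rules create "my-cluster-to-all-vms-on-network" --network="$NETWORK" --source-ranges="$CLUSTER_IPV4_CIDR" --allow=tcp,udp,icmp,esp,ah,sctp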
On the first day when I created the instance I was able to SSH with no problem, but since yesterday I just couldn't connect to my instance. When I checked the console I got something like this:
Nov 5 15:30:49 my-app kernel: [79738.555434] [UFW BLOCK] IN=ens4 OUT= MAC=42:01:0a:94:00:02:42:01:0a:94:00:01:08:00 SRC=71.15.27.115 DST=10.121.0.7 LEN=60 TOS=0x00 PREC=0x00 TTL=50 ID=38049 PROTO=TCP SPT=37344 DPT=22 WINDOW=60720 RES=0x00 SYN URGP=0
I figured it's a firewall issue, but my firewall rules seem okay (assuming I did not change anything since I first created the instance). I wonder what else could be the problem? Here's my firewall config:
NAME                    TARGETS       SOURCE IP RANGES  PROTOCOLS/PORTS                 ACTION  PRIORITY  NETWORK
default-allow-http      http-server   0.0.0.0/0         tcp:80                          Allow   1000      default
default-allow-https     https-server  0.0.0.0/0         tcp:443                         Allow   1000      default
default-beego-http      http-server   0.0.0.0/0         tcp:8080                        Allow   1000      default
default-jenkins-app     http-server   0.0.0.0/0         tcp:8989                        Allow   1000      default
default-allow-icmp      Apply to all  0.0.0.0/0         icmp                            Allow   65534     default
default-allow-internal  Apply to all  10.128.0.0/9      tcp:0-65535, udp:0-65535, icmp  Allow   65534     default
default-allow-rdp       Apply to all  0.0.0.0/0         tcp:3389                        Allow   65534     default
default-allow-ssh       Apply to all  0.0.0.0/0         tcp:22                          Allow   65534     default
Looking at the output you've provided following your attempt to SSH into your instance, it looks like you're being blocked by UFW (Uncomplicated Firewall), which is installed/enabled on the instance itself, rather than by the project-wide GCP firewall rules you have set (those look okay).
In order to SSH into your VM you will need to open port 22 in UFW on the instance. There are a couple of possible methods that will allow you to do this.
Firstly, see Google Compute Engine - alternative log in to VM instance if ssh port is disabled, specifically the answer by Adrián, which explains how to open port 22 using a startup script. This method requires you to reboot your instance before the firewall rule is applied.
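A sketch of that approach: a minimal startup script that just opens port 22 in UFW (the exact script contents are an assumption based on the linked answer):
#! /bin/bash
/usr/sbin/ufw allow 22/tcp
Save it as startup.sh, attach it as metadata, and reboot the instance so it runs (the instance name is a placeholder):
gcloud compute instances add-metadata [INSTANCE_NAME] --metadata-from-file startup-script=startup.sh
gcloud compute instances reset [INSTANCE_NAME]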
Another method which doesn’t require a reboot of the machine makes use of the Serial Console. However, in order to use this method a password for the VM is required. This method is therefore only possible if you previously set a password on the VM (before losing access).
To connect via the Serial Console the following metadata must be added, either to the instance you are trying to connect to, or to the entire project:
serial-port-enable=1
You can apply the metadata to a specific instance like so:
gcloud compute instances add-metadata [INSTANCE_NAME] \
--metadata=serial-port-enable=1
Or alternatively, to the entire project by running:
gcloud compute project-info add-metadata --metadata=serial-port-enable=1
After setting this metadata you can attempt to connect to the instance via the Serial Console by running the following command from the Cloud Shell:
gcloud compute connect-to-serial-port [INSTANCE_NAME]
When you have accessed the instance you will be able to manage the UFW rules. To open port 22 you can run:
sudo /usr/sbin/ufw allow 22/tcp
Once UFW port 22 is open, you should then be able to SSH into your instance from Cloud Shell or from the Console.
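Before disconnecting from the Serial Console, you can optionally confirm the rule is in place:
sudo ufw status verbose
The output should list 22/tcp as allowed.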
There is some additional info about connecting to instances via the Serial Console here:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
I'm installing Phabricator on AWS and the usual proxy/web/database setup works for HTTP/S. Now I want to add SSH access to the repos. How can I configure SSH access to the repositories? My usage is small and I'd rather not create complex or obscure setups.
If the elastic IP is associated with the proxy server, the proxy would have to proxy SSH requests to Phabricator. Will a SOCKS proxy work? Is there an easy way (i.e. package) to have a socks proxy connect to the web server whenever either one is rebooted?
Without the SOCKS proxy, it seems the alternative is to put everything on one server, except the database of course. This means the web server (running Phabricator) will need to be in the public subnet with an Elastic IP associated with it. That way HTTPS and SSH can connect to the same hostname.
Are there alternatives? Is my only option to setup everything on one server?
This is accomplished with IP and port forwarding. My initial search for port forwarding was overwhelmed by port forwarding in routers and firewalls.
Add the following to /etc/sysctl.d/50-ip_forward.conf. This works on openSUSE Leap and CentOS.
# Enable IP forwarding
net.ipv4.ip_forward = 1
Apply the file with sysctl.
sudo sysctl -p /etc/sysctl.d/50-ip_forward.conf
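To confirm the setting took effect (optional):
sysctl net.ipv4.ip_forward
This should print net.ipv4.ip_forward = 1.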
Then forward the SSH port and test. If you lose access to the server, reboot it from the AWS console. I had already changed the admin SSH port to another port before making this change; it's easier for me to add the Port to my ~/.ssh/config than it is for users to specify a different port in their git client.
sudo firewall-cmd --zone external --add-forward-port port=22:proto=tcp:toaddr=1.2.3.4
If everything checks out, make the change permanent.
sudo firewall-cmd --zone external --add-forward-port port=22:proto=tcp:toaddr=1.2.3.4 --permanent
Don't forget to add the necessary ports in the AWS security group. Reboot the server to make sure everything is kept across reboots.
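For reference, the kind of ~/.ssh/config entry mentioned earlier (for the proxy's relocated admin SSH port) might look like this; the host alias, hostname, port, and user are hypothetical:
Host phabricator-proxy
    HostName proxy.example.com
    Port 2222
    User admin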
NAT instance
As an added bonus let's turn the proxy into a NAT instance! The external zone already has masquerade enabled but is not used. Assign the eth0 interface to the external zone.
sudo firewall-cmd --zone external --add-interface eth0
In your VPC, add a default route for 0.0.0.0/0 with a target of the instance above to the private subnet's route table. Now test internet connectivity from an internal server in the private subnet. Performance isn't an issue because this route is only for server updates, not production traffic.
ping 8.8.8.8
If everything works, make the change permanent.
sudo firewall-cmd --zone external --add-interface eth0 --permanent
I like to reboot after making network/firewall changes to make sure everything is kept across reboots.
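As a side note, the default route described above can also be added with the AWS CLI; the route table ID and instance ID below are hypothetical placeholders:
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0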
I have installed Kubernetes using contrib/ansible scripts.
When I run cluster-info:
[osboxes#kube-master-def ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubedash is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubedash
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
The cluster is exposed on localhost on the insecure port, and on the secure port 443 via SSL:
kube 18103 1 0 12:20 ? 00:02:57 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --service-account-private-key-file=/etc/kubernetes/certs/server.key --root-ca-file=/etc/kubernetes/certs/ca.crt
kube 18217 1 0 12:20 ? 00:00:15 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
root 27094 1 0 12:21 ? 00:00:00 /bin/bash /usr/libexec/kubernetes/kube-addons.sh
kube 27300 1 1 12:21 ? 00:05:36 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://10.57.50.161:2379 --insecure-bind-address=127.0.0.1 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --tls-cert-file=/etc/kubernetes/certs/server.crt --tls-private-key-file=/etc/kubernetes/certs/server.key --client-ca-file=/etc/kubernetes/certs/ca.crt --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --service-account-key-file=/etc/kubernetes/certs/server.crt
I have copied the certificates from the kube-master machine to my local machine and installed the CA root certificate. The Chrome/Safari browsers accept the CA root certificate.
When I try to access https://10.57.50.161/ui
I get 'Unauthorized'.
How can I access the Kubernetes UI?
You can use kubectl proxy.
Depending on whether you are using a config file, run one of the following from the command line:
kubectl proxy
or
kubectl --kubeconfig=kubeconfig proxy
You should see a response similar to:
Starting to serve on 127.0.0.1:8001
Now open your browser and navigate to
http://127.0.0.1:8001/ui/ (deprecated, see kubernetes/dashboard)
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
You need to make sure the ports match up.
This works for me when the proxy needs to be reachable from elsewhere on the network:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Quick-n-dirty (and unsecure) way to access the Dashboard:
$ kubectl edit svc/kubernetes-dashboard --namespace=kube-system
This will load the Dashboard config (yaml) into an editor where you can edit it.
Change line type: ClusterIP to type: NodePort.
Get the tcp port:
$ kubectl get svc kubernetes-dashboard -o json --namespace=kube-system
The line with the tcp port will look like:
"nodePort": 31567
In newer releases of kubernetes you can get the nodeport from get svc:
# This is kubernetes 1.7:
donn#host37:~$ sudo kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.3.0.234 <nodes> 80:31567/TCP 2h
Do kubectl describe nodes to get a node IP address.
Browse to:
http://NODE_IP:31567
Good for testing. Not good for production due to lack of security.
Looking at your apiserver configuration, you will need to either present a bearer token (valid tokens will be listed in /etc/kubernetes/tokens/known_tokens.csv) or client certificate (signed by the CA cert in /etc/kubernetes/certs/ca.crt) to prove to the apiserver that you should be allowed to access the cluster.
https://github.com/kubernetes/kubernetes/issues/7307#issuecomment-96130676 describes how I was able to configure client certificates for a GKE cluster on my Mac.
To pass bearer tokens, you need to pass an HTTP header Authorization with a value Bearer ${KUBE_BEARER_TOKEN}. You can see an example of how this is done with curl here; in a browser, you will need to install an add-on/plugin to pass custom headers.
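As a minimal sketch of the curl approach (the token value comes from the known_tokens.csv mentioned above, the API server address is the one from the question, and -k skips TLS verification for testing only):
curl -k -H "Authorization: Bearer ${KUBE_BEARER_TOKEN}" https://10.57.50.161:443/api/v1/namespaces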
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy &
Run the following command on your local laptop (or wherever you want to access the GUI):
ssh -L 8877:127.0.0.1:8001 -N -f -l root master_IPADDRESS
Get the token from the secret:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
Open the dashboard
http://localhost:8877/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Create the appropriate role binding (RBAC) if you get any authorization errors.
You can use kubectl proxy --address=clusterIP --port 8001 --accept-hosts '.*'
The API server is already accessible on port 6443 on the node, but it does not authorize access to https://<node-ip>:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I've generated client certificates signed by the Kubernetes CA cert, converted them to PKCS#12, and imported them into my browser... but when I try to access this URL it says the user is not authorized to access the URI...
1: Is there a way to log in to an AWS instance without using key pairs? I want to set up a couple of sites/users on a single instance, but I don't want to give out key pairs for clients to log in.
2: What's the easiest way to host sites/users on one AWS instance, with different domains pointing to separate directories?
Answer to Question 1
Here's what I did on an Ubuntu EC2 instance:
A) Log in as root using the key pair
B) Set up the necessary users and their passwords with
# sudo adduser USERNAME
# sudo passwd USERNAME
C) Edit /etc/ssh/sshd_config and set the following.
To allow a valid user to log in with a password (no key):
PasswordAuthentication yes
If you also want root to be able to log in without a key:
PermitRootLogin yes
D) Restart the SSH daemon with
# sudo service ssh restart
(just change ssh to sshd if you are using CentOS)
Now you can log in to your EC2 instance without key pairs.
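To verify from another machine that password authentication works, you can force the SSH client to skip key authentication; the username and host below are placeholders:
ssh -o PubkeyAuthentication=no USERNAME@<instance-public-ip>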
1) You should be able to change the ssh configuration (on Ubuntu this is typically in /etc/ssh or /etc/sshd) and re-enable password logins.
2) There's nothing really AWS-specific about this: Apache can handle virtual hosts (vhosts) out of the box, allowing you to specify that a certain domain is served from a certain directory. I'd Google that for more info on the specifics.
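As a minimal sketch of such vhosts (the domain names and document roots are hypothetical, and this is ordinary Apache configuration, nothing AWS-specific):
<VirtualHost *:80>
    ServerName site-one.example.com
    DocumentRoot /var/www/site-one
</VirtualHost>
<VirtualHost *:80>
    ServerName site-two.example.com
    DocumentRoot /var/www/site-two
</VirtualHost>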
I came here through Google looking for how to set up cloud-init so that it does not disable PasswordAuthentication on AWS. Neither of the other answers addresses this: without it, if you create an AMI, then on instance initialization cloud-init will disable the option again.
The correct method is, instead of manually changing sshd_config, to correct the setting for cloud-init (the open source tool used to configure an instance during provisioning; read more at https://cloudinit.readthedocs.org/en/latest/). The configuration file for cloud-init is found at:
/etc/cloud/cloud.cfg
This file is used for a lot of the configuration applied by cloud-init. Read through it for examples of items you can configure with cloud-init (this includes things like the default username on a newly created instance).
To enable or disable password login over SSH you need to change the value of the parameter ssh_pwauth. After changing ssh_pwauth from 0 to 1 in /etc/cloud/cloud.cfg, bake an AMI. Instances launched from this newly baked AMI will have password authentication enabled after provisioning.
You can confirm this by checking the value of the PasswordAuthentication in the ssh config as mentioned in the other answers.
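For illustration, the relevant line in /etc/cloud/cloud.cfg is a single YAML key (a sketch, assuming a stock cloud.cfg layout):
ssh_pwauth: 1
After launching from the baked AMI you can confirm that sshd picked it up:
sudo sshd -T | grep -i passwordauthentication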
Recently, AWS added a feature called Session Manager to the Systems Manager service that allows you to SSH into an instance without needing to set up a private key or open port 22. I believe authentication is done with IAM and optionally MFA.
You can find out more about it here:
https://aws.amazon.com/blogs/aws/new-session-manager/
su - root
Go to /etc/ssh/ and edit sshd_config:
vi /etc/ssh/sshd_config
# Authentication:
PermitRootLogin yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes
:x!
Then restart the SSH service:
root@cloudera2:/etc/ssh# service ssh restart
ssh stop/waiting
ssh start/running, process 10978
Now go to the sudoers file (/etc/sudoers) and add the privilege specifications:
# User privilege specification
root ALL=(ALL) NOPASSWD:ALL
# yourinstanceuser is the user with which you launch the instance
yourinstanceuser ALL=(ALL) NOPASSWD:ALL
AWS added a new feature to connect to an instance without any open port: AWS SSM Session Manager.
https://aws.amazon.com/blogs/aws/new-session-manager/
I've created a neat SSH ProxyCommand script that temporarily adds your public SSH key to the target instance for the duration of the connection. The nice thing about this is that you can connect without adding the SSH (22) port to your security groups, because the SSH connection is tunneled through an SSM Session Manager session.
AWS SSM SSH ProxyCommand -> https://gist.github.com/qoomon/fcf2c85194c55aee34b78ddcaa9e83a1
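Separately from that gist, the AWS-documented way to tunnel plain ssh over Session Manager is a ProxyCommand entry in ~/.ssh/config (this assumes the AWS CLI and its Session Manager plugin are installed, and the SSM agent is running on the instance):
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
With that in place, ssh ec2-user@i-0123456789abcdef0 connects through SSM (the instance ID is a placeholder).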
Amazon added EC2 Instance Connect.
There is an official script to automate the process: https://pypi.org/project/ec2instanceconnectcli/
pip install ec2instanceconnectcli
Then just
mssh <instance id>
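Under the hood, EC2 Instance Connect pushes a temporary public key to the instance. The same can be done by hand with the AWS CLI, followed by a normal ssh within roughly 60 seconds; the IDs, zone, OS user, and key path below are hypothetical:
aws ec2-instance-connect send-ssh-public-key --instance-id i-0123456789abcdef0 --availability-zone us-east-1a --instance-os-user ec2-user --ssh-public-key file://~/.ssh/id_rsa.pub
ssh -i ~/.ssh/id_rsa ec2-user@<instance-public-ip>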