How to open Apache Airflow through a PuTTY SSH tunnel when port 8080 is already taken by Postgres? - ssh

I have a cluster running on Google Cloud. I have an active connection to the master node using PuTTY SSH (with public and private keys generated by PuTTY Key Generator), so I am logged in as a user and I have a password. At the same time, I have an Apache Airflow server running on the master (I started it over SSH in Google Cloud).
I want to see the graphical interface and the graph view at port 8080. However, that port is already taken on my local machine.
I have typed the following command in cmd: netstat -a -n -o | find "8080"
The result shows that port 8080 is already in use.
I've tried stopping whatever is running on this port, but I can't.
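As a side note, the PID column of the netstat -a -n -o output identifies the owning process; if it were safe to stop (1234 below is a placeholder PID), it could be killed from an elevated cmd with:
taskkill /PID 1234 /F
Killing Postgres is usually not desirable, though, so forwarding through a different local port (see below) is the safer route.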
What should I type in "Source port" and "Destination" in the PuTTY SSH Tunnels configuration to open Apache Airflow?
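A minimal sketch of the answer, assuming the Airflow webserver listens on 8080 on the master node: choose a free local port such as 8081 as the "Source port", set "Destination" to localhost:8080, then browse to localhost:8081. The equivalent plain ssh command (your_user and master_external_ip are placeholders) would be:
ssh -L 8081:localhost:8080 your_user@master_external_ip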

Related

JupyterLab Engine SSH Connection

I have a JupyterLab engine and I want to connect a terminal to the engine via SSH.
DataSpell asks for the host, the password and the port.
For some reason it does not work. Do you know which part of the link is the host, and which port is meant?
The link is:
jupyter.(machinename).(universityurl)
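(Hedged guess, not a confirmed answer: with a link of that shape, the SSH host is normally the full domain and, unless the university documents a different one, the port is the SSH default 22, e.g.:
ssh your_username@jupyter.(machinename).(universityurl) -p 22
where your_username is a placeholder.)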

Error Public Key when trying to ssh into Google Cloud Platform VM

I had been using VS Code's Remote-SSH to access my virtual machines running on Google Cloud. This had been working perfectly fine until I made a snapshot of my most recent instance and created a new instance out of it on a larger VM. Now when I try to connect (through any method) I get: "Permission denied (publickey).". I have spent countless hours deleting, re-adding, and recreating my SSH keys, to no avail. Before, I simply ran "gcloud compute config-ssh" and this created a working config file, but now this no longer works. Please help; I have tried everything and there is simply no way for me to SSH in. On the website I can click the SSH button to open up their shell, but I cannot do it from my terminal.
The problem may be that VS Code is not using your SSH private key during the connection. You can specify your private key by adding an IdentityFile option pointing to it in the host entry of your SSH configuration file:
Host vm_name
HostName external_ip
IdentityFile /path/to/ssh_private_key
Port port_number
Here is the long story, in case you or someone else needs more information.
You can start from scratch to ensure that your SSH keys are not compromised and that they are not the origin of the problem.
Create SSH Key
First, create new SSH keys. On the computer that you will use to access your remote host (that is, the Google VM instance), open your terminal or cmd and go to the .ssh folder to generate the keys.
My ssh config and keys are under my user directory, /home/my_user/.ssh on Linux or C:\Users\my_user\.ssh on Windows.
Then cd to one of these paths, depending on which operating system you are using at the moment.
Linux:
cd /home/my_user/.ssh
Windows:
cd C:\Users\my_user\.ssh
Command to generate SSH key
ssh-keygen -t rsa -f my_ssh_key -C user
my_ssh_key: the name of your key; you can choose whatever name identifies it best.
user: must be the username that you want to use to connect to your Google VM instance.
This will generate a private key named my_ssh_key and a public key named my_ssh_key.pub.
Alternatively, stay in any directory and pass the absolute path where the keys should be generated:
Linux:
ssh-keygen -t rsa -f /home/my_user/.ssh/my_ssh_key -C user
Windows:
ssh-keygen -t rsa -f C:\Users\my_user\.ssh\my_ssh_key -C user
Copy the public key into the authorized_keys file on your Google Cloud VM:
/home/my_user/.ssh/authorized_keys
Do not overwrite any public key that already exists; just append to the authorized_keys file.
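A minimal sketch of appending without overwriting, assuming you still have some working way to reach the VM (user and external_ip are placeholders):
cat my_ssh_key.pub | ssh user@external_ip 'cat >> ~/.ssh/authorized_keys'
On systems that ship ssh-copy-id, ssh-copy-id -i my_ssh_key.pub user@external_ip achieves the same thing.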
Add new ssh Host entry for remote connection
Click on the Remote-SSH manager (the icon at the bottom left of VS Code), click on the Remote-SSH: Open Configuration File option, and choose your SSH configuration file to add another SSH host entry for the remote connection.
The config file should be under the .ssh directory, the same path used in the key-generation step.
Linux:
/home/my_user/.ssh/config
Windows:
C:\Users\my_user\.ssh\config
To add another host, write the following, making the appropriate changes:
Host vm_name
HostName external_ip
IdentityFile /path/to/ssh_private_key
Port port_number
vm_name: an alias for connecting with the ssh command in a practical way; it can be whatever you want.
external_ip: the external IP of your Google VM instance; you can get it from the VM instances panel at https://console.cloud.google.com/
IdentityFile: the path to your private SSH key, the generated file that does not have the .pub extension.
Linux:
/home/my_user/.ssh/my_ssh_key
Windows:
C:\Users\my_user\.ssh\my_ssh_key
Port: the SSH port number of your Google VM instance; 22 is the default port.
Now just choose this host to connect to your Google VM instance.
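With the entry above in place, a quick way to test it from a terminal (same placeholder alias as above) is:
ssh vm_name
If that logs in, choosing the same host entry in VS Code's Remote-SSH should work as well.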
For more details about SSH settings on Google Cloud Platform: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#linux-and-macos_1

SSH to Github not working

SSH has been working fine for the last few weeks since I got my new PC. I've had no problems but today I started getting:
ssh: connect to host github.com port 22: resource temporarily unavailable
I did some googling and found that there is a common issue with WSL which sometimes causes this, but I'm unable to SSH from my bash shell, or from cmd/powershell.
This is the part that confuses me: if I do ssh -T git@192.30.253.113 I am prompted for the passphrase to my key, it successfully authenticates and responds with "Hi alexmk92! You've successfully authenticated".
Great, that at least proves that my firewall isn't blocking SSH on port 22. But why does git@github.com throw the resource error? My initial thought was that this could be a DNS problem.
So I configured my network adapter to use Google's DNS servers (8.8.8.8 and 8.8.4.4); I even configured the IPv6 DNS servers just in case. Following this I did an ipconfig /flushdns and attempted to connect via git@github.com again, and BAM, the same result. However, git@192.30.253.113 still works.
I'm guessing another potential cause is that github.com is behind a load balancer and one of the IPs in the cluster could be blacklisted somewhere on my machine? I'm just pulling guesses out of thin air now; any help would be greatly appreciated, this is driving me insane.
After some further Googling it turned out that my machine did not have a hosts entry for github.com and it was unable to automatically resolve it.
In Windows Subsystem for Linux I created an SSH config file:
touch ~/.ssh/config
(for some reason the base distro of Ubuntu 18.04 on the Windows marketplace didn't have one). I then had to make sure the file permissions were correct:
chmod 755 ~/.ssh/config
Once the file was created, I edited it with
sudo nano ~/.ssh/config
and added github.com as a Host.
Host github.com
Hostname ssh.github.com
Port 22
Upon saving, I ran
sudo /etc/init.d/ssh restart
and attempted
ssh -T git@github.com
Everything now seems to be working.
In my case my ISP did not allow SSH, so it was not working from both cmd and WSL. I got around it using a VPN.
To have a successful SSH connection to GitHub, your SSH key has to be imported into GitHub:
Open Git Bash or a terminal
Run the command ssh-keygen
Accept all the default options.
A private key and a public key get generated in the folder <user_home>/.ssh/
Log in to GitHub.com
Navigate to account settings
Choose item "SSH and GPG Keys" from the side navigation bar
Click "New SSH key".
Copy and paste the public key content from <user_home>/.ssh/id_rsa.pub
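A quick way to print the public key for copying, and then to verify the connection afterwards (assuming the default id_rsa key name from the steps above):
cat ~/.ssh/id_rsa.pub
ssh -T git@github.com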

Putty multihop tunnel replicate in bash

I'm experiencing a problem replicating my PuTTY SSH tunneling with Cmder bash (on a Windows machine).
1. I want to access a web interface on port 7183 on server_2. To get there I have to go through jump_server first and tunnel twice, as from the jump_server the only visible port is 22.
Steps with putty:
1. connect to jump_server with tunnel (L22 server_2:22) using username_1
2. connect to localhost with tunnel (L7183 localhost:7183) using username_2
After that, I'm able to access the web interface when I type localhost:7183 into the browser on my local machine.
Now I'm trying to reproduce this in Cmder, but I haven't been able to do it with one big command, nor with 2 separate commands:
ssh -L 7183:localhost:7183 username_1@jump_server ssh -L 22:localhost:22 -N username_2@server_2 -vvv
This is only the last command I tried, as I experimented with interchanging ports and hosts without success.
2. Is the syntax different when I want to open port 12345 on my local machine and have it forwarded to port 21050 on server_2, or would that be remote tunneling?
I finally managed to solve question 1 with:
ssh username_1@jump_server -L 22:server_2:22 -N -vvv
ssh -L 7183:localhost:7183 username_2@localhost
Now I'm able to access the web interface from server_2 at localhost:7183.
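As a side note, on OpenSSH 7.3 or newer the same multihop forward can be written as a single command with the ProxyJump flag (same placeholder names as above):
ssh -J username_1@jump_server -L 7183:localhost:7183 username_2@server_2
And for question 2: that is still local forwarding, not remote; -L 12345:localhost:21050 through the same chain would expose server_2's port 21050 on local port 12345.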

How to SSH into a GCE Instance created from a custom image?

I'm having issues using ssh to log in to a VM created from a custom image.
I followed the steps for creating an image from an existing GCE instance.
I have successfully created the image, uploaded it to Google Cloud Storage and added it as an image to my project, yet when I try to connect to the new image, I get a "Connection Refused".
I can see other applications running on other ports for the new image, so it seems to be just ssh that is affected.
The steps I did are below:
...create an image from an existing GCE instance (one I can log into fine via ssh)... then:
gcutil --project="river-ex-217" addimage example2 http://storage.googleapis.com/example-image/f41aca6887c339afb0.image.tar.gz
gcutil --project="river-ex-217" addinstance --image=example2 --machinetype=n1-standard-1 anothervm
gcutil --service_version="v1" --project="river-ex-217" ssh --zone="europe-west1-a" "anothervm"
Which outputs:
INFO: Running command line: ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /Users/mark1/.ssh/google_compute_engine -A -p 22 mark1@23.251.133.2 --
ssh: connect to host 23.251.133.2 port 22: Connection refused
I've tried deleting the sshKeys metadata as suggested in another SO answer, and reconnecting, which did this:
INFO: Updated project with new ssh key. It can take several minutes for the instance to pick up the key.
INFO: Waiting 120 seconds before attempting to connect.
INFO: Running command line: ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /Users/mark1/.ssh/google_compute_engine -A -p 22 mark1@23.251.133.2 --
ssh: connect to host 23.251.133.2 port 22: Connection refused
I then tried the first instance in another zone; it works fine with the new key:
gcutil --service_version="v1" --project="river-ex-217" ssh --zone="europe-west1-b" "image1"
Both instances are running on the same "default" network with port 22 open, and ssh works for the first instance, which the image was created from.
I tried the nc command from the other instance and from my local machine; it shows no output:
nc 23.251.133.2 22
...whilst the original VM's IP shows this output:
nc 192.157.29.255 22
SSH-2.0-OpenSSH_6.0p1 Debian-4
I've tried remaking the image again and re-adding the instance, no difference.
I've tried logging in to the first instance, switching to a user on that machine (which should be the same as on the second machine?), and running ssh from there:
WARNING: You don't have an ssh key for Google Compute Engine. Creating one now...
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
INFO: Updated project with new ssh key. It can take several minutes for the instance to pick up the key.
INFO: Waiting 300 seconds before attempting to connect.
INFO: Running command line: ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/mark/.ssh/google_compute_engine -A -p 22 mark@23.251.133.2 -- --zone=europe-west1-a
ssh: connect to host 23.251.133.2 port 22: Connection refused
I'm out of ideas; any help greatly appreciated :) The maddening thing is I can see the new VM is live with the application ready; I just need to add a few files to it and set up some cron jobs. I guess I could do this before making the image, but I would like to be able to log in at a later date and modify it, without needing to spend an hour creating images and launching new instances every time.
Yours faithfully,
Mark
This question appears to be about how to debug SSH connectivity problems with images, so here is my answer to that.
It appears that your instance may not be running the SSH server properly. There may be something amiss with the prepared image.
Possibly useful debugging questions to ask yourself:
Did you use gcimagebundle to bundle up the image, or did you do it manually? Consider using the tool to make sure there isn't something you missed.
Did you change anything about the ssh server configuration before bundling the image?
When the instance is booting, check its console output for SSH messages; it should mention regenerating the keys, starting the sshd daemon, and listening on port 22 (see the example after this list). If it does not, or complains about something SSH-related, you should follow up on that.
You covered these, but for sake of completeness, these should also be checked:
Can you otherwise reach the VM after it comes up? Does it respond on webserver ports (if any) or respond to ping?
Double-check that the network your VM is on allows SSH (port 22) access from the host you are connecting from.
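For the console-output check mentioned above: the question predates it, but with today's gcloud CLI the boot log of the broken instance can be fetched with:
gcloud compute instances get-serial-port-output anothervm --zone=europe-west1-a
Look for lines about host key generation and sshd starting near the end of the boot sequence.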
You can compare your ssh setup to that of a working image:
Create a new disk (disk-mine-1) from your image.
Create a new disk (disk-upstream-1) from any working boot image, for example the Debian Wheezy one.
Attach both of these to a VM you can access (either on console or from cli).
SSH into the VM.
Mount both of the images (sudo mkdir /mnt/{mine,upstream} && sudo mount /dev/sdb1 /mnt/mine && sudo mount /dev/sdc1 /mnt/upstream). Note that whether your image is sdb or sdc depends on the order you attached the images!
Look for differences between the ssh config (diff -waur /mnt/{mine,upstream}/etc/ssh). There should not be any unless you specifically need them.
Also check if your image has proper /mnt/mine/etc/init.d/{ssh,generate-ssh-hostkeys} scripts. They should also be linked from /mnt/mine/etc/rc{S,2}.d (S10generate-ssh-hostkeys and S02ssh respectively).
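A quick sketch for checking those links while the disk is still mounted (same paths as above):
ls -l /mnt/mine/etc/rcS.d/S10generate-ssh-hostkeys /mnt/mine/etc/rc2.d/S02ssh
If either link is missing, that alone would explain sshd not coming up on the new image.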