Connect to my Raspberry Pi from outside with SSL

I run some servers (Gitea, registry, Nextcloud) locally on my Raspberry Pi. Vodafone doesn't let me connect to my Pi from outside, so I use a server as a relay. On my Pi I use:
autossh -M 0 -f -o ConnectTimeout=10 -o ServerAliveInterval=60 -o ServerAliveCountMax=2 -N -R localport:<myserver>:serverport diabolus@raspi.tld
That does the job really well. But the registry wants a connection over SSL.
What do I have to do if I want to reach my Pi over HTTPS?
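A common way to get HTTPS here is to keep the plain reverse tunnel and terminate TLS on the public server with a reverse proxy in front of the tunneled port. A minimal nginx sketch, assuming the tunnel exposes the registry on the server's port 5000 and a certificate already exists (domain and certificate paths are placeholders):
server {
    listen 443 ssl;
    server_name registry.example.com;

    ssl_certificate     /etc/letsencrypt/live/registry.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/registry.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5000;   # the serverport end of the reverse tunnel
        proxy_set_header Host $host;
        client_max_body_size 0;             # registry pushes can be large
    }
}
The registry on the Pi then only ever speaks plain HTTP over the tunnel, while clients talk HTTPS to the server.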

Related

How to connect to expo via private tunnel (not ngrok)

I have the problem that at work I cannot connect to Expo over the network, so I need to use a tunnel, which is fine. However, sometimes the tunnel is really slow, destroying any developer experience.
Since I can also host Expo locally on localhost, I had the idea of simply SSH-tunneling to a remote server that has an open port.
My remote host runs Ubuntu, so I SSH there like so:
ssh -R 0.0.0.0:19000:0.0.0.0:19000 user@ip
In order for this to work I also added
GatewayPorts clientspecified
to my /etc/ssh/sshd_config
...
sudo netstat -plutn
shows me
tcp 0 0 0.0.0.0:19000 0.0.0.0:* LISTEN 20183/2
So it is accepting requests (I also tried forwarding port 19001 to get something back when I enter it in the browser, which worked fine).
However, when I enter exp://serverip:19000 into the Expo client on my Android phone, it can't connect.
Any ideas?
It looks like Expo uses multiple ports: 19000, 19001, and 19002. So you will need to forward all of these.
e.g.
$ ssh -f -N -R 19000:localhost:19000 user@ip
$ ssh -f -N -R 19001:localhost:19001 user@ip
$ ssh -f -N -R 19002:localhost:19002 user@ip
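The three forwards can also be opened in a single background ssh session instead of three separate ones, for example:
$ ssh -f -N \
      -R 19000:localhost:19000 \
      -R 19001:localhost:19001 \
      -R 19002:localhost:19002 user@ip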
Also, you can set the REACT_NATIVE_PACKAGER_HOSTNAME environment variable to use the remote host.
$ export REACT_NATIVE_PACKAGER_HOSTNAME="ip"
$ expo start

Reverse tunneling between a Raspberry Pi and a cloud server?

I have a Raspberry Pi and I have done reverse tunneling with an AWS instance. I ran the following command on my Raspberry Pi.
ssh -N -R 1234:localhost:22 username@instance_IP
and on my Linux instance I am able to SSH using:
ssh -l user_pi -p 1234 localhost
But I am not able to SSH directly into my Pi; instead I first have to log in to AWS and then into my Pi.
How can I log in to my Pi directly using tunneling?
Thanks a lot!!
I found out that for direct remote connections you need to allow TCP forwarding in the server's sshd_config:
AllowTcpForwarding yes
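For the Pi to be reachable directly through the instance's public address (rather than only from a shell on the instance), the remote end of the tunnel also has to bind to all interfaces, which sshd only allows when GatewayPorts is enabled. A sketch, assuming you can edit the instance's /etc/ssh/sshd_config and that its security group allows port 1234:
# on the AWS instance, in /etc/ssh/sshd_config (restart sshd afterwards)
AllowTcpForwarding yes
GatewayPorts yes

# on the Raspberry Pi: bind the forwarded port on all interfaces of the instance
ssh -N -R 0.0.0.0:1234:localhost:22 username@instance_IP

# from any machine that can reach the instance
ssh -p 1234 user_pi@instance_IP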

How to remotely capture traffic across multiple SSH hops?

I want to debug another machine on my network but have to pass through one or more SSH tunnels to get there.
Currently:
# SSH into one machine
ssh -p 22 me@some_ip -i ~/.ssh/00_id_rsa
# From there, SSH into the target machine
# Note that this private key lives on this machine
ssh -p 1234 root@another_ip -i ~/.ssh/01_id_rsa
# Capture debug traffic on the target machine
tcpdump -n -i eth0 -vvv -s 0 -XX -w tcpdump.pcap
But then it's a pain to copy that .pcap back across each hop. Is there a way to write the pcap directly to my local machine, where I have Wireshark installed?
You should use ProxyCommand to chain SSH hosts and pipe the output of tcpdump directly into Wireshark. To achieve that, create the following ssh config file:
Host some_ip
    IdentityFile ~/.ssh/00_id_rsa

Host another_ip
    Port 1234
    ProxyCommand ssh -o 'ForwardAgent yes' some_ip 'ssh-add ~/.ssh/01_id_rsa && nc %h %p'
I tested this with full paths, so be careful with ~.
To see the live capture you should use something like
ssh another_ip "tcpdump -s0 -U -n -w - -i eth0 'not port 1234'" | wireshark -k -i -
If you just want to dump the pcap locally, you can redirect stdout to a filename of your choice:
ssh another_ip "tcpdump -n -i eth0 -vvv -s 0 -XX -w -" > tcpdump.pcap
See also:
https://serverfault.com/questions/337274/ssh-from-a-through-b-to-c-using-private-key-on-b
https://serverfault.com/questions/503162/locally-examine-network-traffic-of-remote-machine/503380#503380
How can I have tcpdump write to file and standard output the appropriate data?

Ansible cannot ssh into VM created by Vagrant

I have a very simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "one" do |one|
    one.vm.box = "centos/7"
  end
  config.ssh.insert_key = false
end
(Note: it was creating the VM but exiting with a failure until I installed the vbguest plugin.)
After the VM was created I wanted to execute a simple Ansible job. My inventory file (Vagrant forwarded port 22 on the guest to 2222 on the host):
[one]
127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=C:/Users/Lukasz/.vagrant.d/insecure_private_key
And here's the Docker command (from Windows cmd):
docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check -vvvv
Finally, here's the output of the command:
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="C:/Users/Lukasz/.vagrant.d/insecure_private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o PreferredAuthentications=privatekey -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" && echo ansible-tmp-1488381378.63-13786642598588="` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" ) && sleep 0'"'"''
fatal: [127.0.0.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/root/.ansible/cp/ansible-ssh-127.0.0.1-2222-vagrant\" does not exist\r\ndebug2: resolving \"127.0.0.1\" port 2222\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 127.0.0.1 port 2222: Connection refused\r\nssh: connect to host 127.0.0.1 port 2222: Connection refused\r\n",
"unreachable": true
}
I can SSH to this VM manually with no problem, specifying the user, port and private key.
Am I doing something wrong?
EDIT 1:
I have mounted the folder with the private key: -v /c/Users/Lukasz/.vagrant.d/:/home/.ssh/ and refer to it from the inventory file: ansible_ssh_private_key_file=/home/.ssh/insecure_private_key. I also assigned a static IP in the Vagrantfile and used it in the docker command. The error now is "Connection timed out".
There's a misunderstanding of how loopback addresses work, and also an underestimation of how complex a system you are actually running.
In the scenario described in your question, you are running four machines with four separate network stacks:
a physical Windows machine
a CentOS VM (supposedly running under VirtualBox, orchestrated by Vagrant)
a Linux machine that Docker for Windows runs in the background (judging from your mention of "the Docker command (from Windows cmd)")
an Ansible container running inside that Docker Linux machine
Each of these machines has its own loopback address (127.0.0.1) which is not accessible from any other machine.
You have one port mapping:
Vagrant set a mapping for the CentOS virtual machine under the control of VirtualBox, so that the VM's port 22 is accessible on the Windows machine's loopback address (127.0.0.1), port 2222.
And thus you can connect with an SSH client from Windows to the CentOS machine.
However, Docker for Windows runs a separate Linux machine and configures the docker command so that when you execute docker from Windows command-line prompt, you actually work directly on this Linux machine (as you run containers, you don't actually need to access this Docker host directly, so you can be unaware of its existence).
As if that was not enough, each container you run has its own loopback address (127.0.0.1).
As a result there is no way an Ansible container would reach the loopback address of your physical Windows machine.
Probably the easiest solution would be to configure the CentOS box to run on a public network, with a static IP address (see Vagrant: Public Networks) by adding for example the following line to the Vagrantfile:
config.vm.network "public_network", ip: "192.168.0.17"
Then you should use this address in the inventory file and follow Konstantin's advice to make the private key available to the container:
[one]
192.168.0.17 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/path/to/insecure_private_key/mapped/inside/container
It seems that you specify a Windows path for ansible_ssh_private_key_file in your inventory, but use this inventory from inside the container.
You should map C:/Users/Lukasz/.vagrant.d/ into your container and set ansible_ssh_private_key_file from the container's perspective.
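For example, the key folder can be mounted read-only into the container (the /keys path below is an arbitrary choice) and referenced from the inventory by its container path:
docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -v /c/Users/Lukasz/.vagrant.d:/keys:ro -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check
with the inventory line pointing at
ansible_ssh_private_key_file=/keys/insecure_private_key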

SCP through ssh tunnel

ssh abc@202.221.23.87 -p 22724 -L 12345:172.21.33.51:18081
I opened the tunnel successfully and then tried on my localhost:
scp -P 12345 abc#127.0.0.1:/testfuel_20140714.zip .
But it's not working; it showed nothing. How can I make scp work? Thanks so much.
You don't appear to be creating the tunnel on localhost. Are you trying to download or upload a file?
ssh -p 22724 -L 12345:localhost:18081 abc@202.221.23.87
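With that tunnel open, the scp from the question should then work in a second terminal, assuming an SSH server is actually listening on port 18081 of the machine you want to copy from:
scp -P 12345 abc@127.0.0.1:/testfuel_20140714.zip .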