HTTP reverse tunnel - ssh

Question
Is it possible to use a SOCKS tunnel (or any other form of tunnel on port 80 or 443) to control the local machine that is creating the tunnel from the remote machine? Basically, an ssh -R [...] when ssh is not an option and only TCP connections on ports 80 and 443 are possible?
Concrete scenario
Due to a very restrictive security policy of one of our customers, we currently have to connect to a Windows jump host without the ability to copy and paste anything there. From there, we download needed files via web browser and copy them via ssh to the target machine, or use ssh directly to do maintenance work on the target machine. However, this workflow is time-consuming and, honestly, quite annoying.
Unfortunately, the firewall seems to be able to distinguish between real HTTP traffic and ssh: instructing sshd on our server to accept connections on port 443 did not work.
    Firewall
   (HTTP only)
┌──────────────┐
│              │
│ ┌─────────┐  │   ???   ┌──────────┐
│ │Jumphost ├──┼────────►│Our Server│
│ │(Windows)│  │         └───▲──────┘
│ └──┬──────┘  │             │
│    │ssh      │             │ssh
│    │         │             │
│ ┌──▼────┐    │         ┌───┴─────┐
│ │Target │    │         │Developer│
│ │(Linux)│    │         │Machine  │
│ └───────┘    │         └─────────┘
│              │
└──────────────┘
Any hints are highly appreciated 👍🏻

The problem seems to be a firewall with deep packet inspection.
You can get around it by running ssh over SSL, using stunnel or openssl.
From the Windows box you tunnel with an stunnel client to an stunnel server on our-server.
That encapsulates all (ssh) data in SSL, so to the firewall it is indistinguishable from an HTTPS connection.
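As a rough sketch (the service name, ports, and file paths below are assumptions, not taken from the question), an stunnel server config on our-server could look like:

; /etc/stunnel/stunnel.conf on our-server (assumed path)
[ssh-over-ssl]
accept = 443
connect = 127.0.0.1:22
cert = /etc/stunnel/stunnel.pem

and the matching stunnel client config on the Windows jump host:

; stunnel.conf on the jump host (assumed)
client = yes
[ssh-over-ssl]
accept = 127.0.0.1:2222
connect = our-server:443

Your ssh client then connects to localhost:2222 and the whole session travels inside TLS on port 443.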
Another option could be ptunnel-ng; it tunnels a TCP connection over ICMP (ping).
Most firewalls ignore ICMP, so if you can ping our-server this should work, too.
But ptunnel-ng sometimes seems a bit unstable.
If you can't install or execute programs on the Windows jump box, you can open ports, redirect them via ssh and use them directly from the target Linux machine.
On your Windows jump box:
ssh target -R target:7070:our-server:443
On the target (Linux) you can then use localhost:7070 to connect to our-server:443.
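That forwarded port can be combined with the ssh-over-SSL idea above; a sketch, assuming openssl is available on the target, an stunnel (or similar SSL) endpoint is listening on our-server:443 in front of sshd, and user is a placeholder account on our-server:

ssh -o ProxyCommand='openssl s_client -quiet -connect localhost:7070' user@our-server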
I would recommend using Docker for the client and server parts.
The only thing I couldn't get working is the ptunnel server inside a container, probably because of the privileges it requires.
Using ptunnel
On the server
The ptunnel binary is built inside Docker but used directly by the host.
This sample expects an Ubuntu server.
Dockerfile.server
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y build-essential autoconf automake git
RUN mkdir -p /workdir
WORKDIR /workdir
RUN git clone https://github.com/lnslbrty/ptunnel-ng.git && cd ptunnel-ng && ./autogen.sh
start-server.sh
#!/bin/sh
# Starts the ICMP tunnel server; this doesn't work inside a docker container
# (or perhaps it does, but I don't know how).
script_dir=$(cd "$(dirname "$0")"; pwd)
if [ ! -f "$script_dir/ptunnel-ng" ]; then
    # Build the ptunnel binary inside docker and copy it to the host
    docker build -t ptunnel-ng-build-server -f "$script_dir/Dockerfile.server" "$script_dir"
    docker run --rm -v "$script_dir:/shared" ptunnel-ng-build-server cp /workdir/ptunnel-ng/src/ptunnel-ng /shared
fi
magic=${1-123456}
sudo "$script_dir/ptunnel-ng" --magic "$magic"
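Usage is then simply (123456 is the default magic value from the script; pick your own shared secret):

./start-server.sh 123456

On the first run this builds the binary via Docker and copies it next to the script; afterwards it starts ptunnel-ng through sudo, which is needed for the raw ICMP socket.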
On the client
FROM alpine:latest as builder
ARG DEBIAN_FRONTEND=noninteractive
RUN apk add --update alpine-sdk bash autoconf automake git
RUN mkdir -p /workdir
WORKDIR /workdir
RUN git clone https://github.com/lnslbrty/ptunnel-ng.git && cd ptunnel-ng && ./autogen.sh
FROM alpine:latest
WORKDIR /workdir
COPY --from=builder /workdir/ptunnel-ng/src/ptunnel-ng .
start-client.sh
#!/bin/sh
image=ptunnel-ng
if ! docker inspect "$image" > /dev/null 2>&1; then
    docker build -t "$image" .
fi
magic=${1-123456}
ptunnel_host=${2-myserver.de}
port=${3-2001}
# The leading double slash prevents path mangling when run from Git Bash / MSYS on Windows
docker run --rm --detach -ti --name 'ptunnel1' -v "$PWD:/shared" -p 2222:2222 "$image" //workdir/ptunnel-ng --magic "${magic}" -p"${ptunnel_host}" -l"${port}"
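Once the client container is running you can ssh through the tunnel; a sketch, assuming the published port 2222 is the client's local listening port and that it forwards to the SSH port behind the ptunnel server (the forwarding flags and defaults vary between ptunnel-ng versions, check ptunnel-ng --help), with user as a placeholder account:

ssh -p 2222 user@localhost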
Running the ptunnel client on Termux is also possible, but it requires some small code changes.

Related

How do I resolve Invalid SSH Key Entry error when starting App with GCE

I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
Once you have deployed your VM instance, no SSH key is configured by default, but you can also configure the SSH key while deploying the VM instance.
To elaborate on the answer of @JohnHanley, I tried to test this in my environment.
Created a VM instance and verified the SSH configuration. As a default configuration there is no SSH key configured; as said earlier, you can configure an SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use the documentation link below for detailed instructions.
Navigate to your VM instance: Turn off > EDIT > Security > Add Item > SSH key 1 - copy and paste the generated public key > Save > Power on the VM instance.
Then test whether the VM instance is accessible.
Documentation link: How to add SSH keys to project metadata.
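For reference, each entry in the ssh-keys metadata is expected to have the form USERNAME:KEY on a single line; a key pasted without the leading username is a common cause of the "Invalid ssh key entry - unrecognized format" message. A sketch (the key file name and username are placeholders):

ssh-keygen -t rsa -f ~/.ssh/gce-key -C your_username
cat ~/.ssh/gce-key.pub
# metadata entry, all on one line:
# your_username:ssh-rsa AAAAB3NzaC1yc2E... your_username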

Dockerized apache server not exposing port 80

I am trying to run a React application inside a docker container. My application image was built with the following Dockerfile:
Dockerfile
FROM node:latest
LABEL autor="Ed de Almeida"
RUN apt-get update
RUN apt-get install -y apache2
RUN mkdir /tmp/myapp
COPY . /tmp/myapp
RUN cd /tmp/myapp && npm install
RUN cd /tmp/myapp && npm run build
RUN cd /tmp/myapp/build && cp -Rvf * /var/www/html
RUN cd /var/www && chown -Rvf www-data:www-data html/
EXPOSE 80
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
CMD /usr/sbin/apache2ctl -D FOREGROUND
As you may see, I create a production build, copy it to the standard directory of the Apache server and then run the Apache server. I even exposed port 80, the Apache default port.
I am creating the image with
docker build -t myimage .
and running the container with
docker run -d -p 80:80 --name myapp myimage
I am probably missing something, because I am new to Docker. The container is there, up and running, but when I point my browser to http://localhost I get nothing.
I entered the container with
docker exec -it myapp bash
and the application is running fine inside it.
Any hints?
When running on Windows, Docker runs inside a virtual machine that is running in the background. Thus you need to connect to this virtual machine and not to localhost.
You can get the machine ip by running:
docker-machine ip default
This will give you the IP address of the machine, which you can use to connect from the browser.
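For example (192.168.99.100 is just the typical default for a VirtualBox-based docker-machine; yours may differ):

$ docker-machine ip default
192.168.99.100

Then browse to http://192.168.99.100 instead of http://localhost.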

ERR_EMPTY_RESPONSE for localhost when running Docker

Here's my Dockerfile:
# CentOs base image
FROM centos:centos6.8
# install python, pip, apache and other packages
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install centos-release-scl; yum clean all
RUN yum -y install python27; yum clean all
RUN yum -y install python-devel.x86_64; yum clean all
RUN yum -y install python-pip; yum clean all
RUN yum -y install gcc; yum clean all
RUN yum -y install httpd httpd-devel mod_ssl; yum clean all
# Make a non root user so I can run mod_wsgi without root
# USER adm
# install Python modules needed by the Python app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt
# copy files required for the app to run
COPY . /usr/src/app/
# tell the port number the container should expose
EXPOSE 80
# run the application
# CMD ["mod_wsgi", "start-server run_apache_server.wsgi"]
# CMD ["cat", "/etc/passwd"]
# CMD ["cat", "/etc/group"]
# CMD ["find", "/"]
CMD ["/bin/sh", "-c", "/usr/bin/mod_wsgi-express start-server run_apache_server.wsgi --user adm --group apache"]
I can run the app:
$ docker run -d -P --name myapp jacobirr/pleromatest
And see tcp port 80:
$ docker port myapp
80/tcp -> 0.0.0.0:32769
Here's my requirements.txt:
Flask==0.10.1
Flask-Restless==0.13.1
Flask-SQLAlchemy==0.16
Jinja2==2.7
MarkupSafe==0.18
SQLAlchemy==0.8.2
Werkzeug==0.9.2
gunicorn==17.5
itsdangerous==0.22
mimerender==0.5.4
python-dateutil==2.1
python-mimeparse==0.1.4
requests==1.2.3
six==1.3.0
wsgiref==0.1.2
setuptools==5.4.2
mod_wsgi==4.5.15
Why can't I get to localhost:32769 in the browser? I suspect this is related to:
• the user/group running apache?
• the fact that I'm installing mod_wsgi but it's nowhere on the docker "filesystem" so I have to use mod_wsgi-express?
Update:
1. Netstat shows:
[root@9003b0d64916 app]# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:irdmi *:* LISTEN
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 113181 /tmp/mod_wsgi-localhost:8000:0/wsgi.1.0.1.sock
2. httpd seems to be running in my container:
[root@9003b0d64916 mod_wsgi-localhost:8000:0]# ps aux | grep httpd
root 1 0.0 0.2 64060 5084 ? Ss 21:17 0:00 httpd (mod_wsgi-express) -f /tmp/mod_wsgi-localhost:8000:0/httpd.conf -k start -DFOREGROUND
adm 6 0.0 0.6 350928 13936 ? Sl 21:17 0:00 (wsgi:localhost:8000:0) -f /tmp/mod_wsgi-localhost:8000:0/httpd.conf -k start -DFOREGROUND
adm 7 0.0 0.1 64192 3248 ? S 21:17 0:00 httpd (mod_wsgi-express) -f /tmp/mod_wsgi-localhost:8000:0/httpd.conf -k start -DFOREGROUND
From all your outputs, your httpd / mod_wsgi process is definitely bound to port 8000, and this is the port you need to expose on the container.
This line in netstat is showing a bind on 8000, and nothing else:
tcp 0 0 *:irdmi *:* LISTEN
It is not obvious here because irdmi is the service name registered for port 8000; if you use the --numeric-ports argument, netstat will print 8000 instead of translating it to that name.
In your Dockerfile, you should therefore
EXPOSE 8000
When launching your container, you can also specify the port to use on the host machine:
docker run -p 8080:8000 --name ...
After this, you should be able to use your browser to hit
localhost:8080 -> container:8000
Add this to your Dockerfile, just before CMD:
WORKDIR /usr/src/app/
Assuming that your run_apache_server.wsgi file is in that directory. This will help mod_wsgi-express find the needed file.
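Putting both answers together, the relevant lines of the Dockerfile might look like this (a sketch based on the Dockerfile above, not a tested image):

WORKDIR /usr/src/app/
EXPOSE 8000
CMD ["/bin/sh", "-c", "/usr/bin/mod_wsgi-express start-server run_apache_server.wsgi --user adm --group apache"]

The container would then be started with docker run -d -p 8080:8000 ... and reached at http://localhost:8080.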

I would like to set up rfc5766-turn-server on Ubuntu 14.04, can anyone give me the set of steps listed all together? I am doing it on AWS EC2

I have tried to install and set up rfc5766-turn-server on AWS EC2 but was unable to do it, as I could not find a proper workflow or the command lines for it. Can someone help me with this? I need to set it up on Ubuntu 14.04.
Do an ssh login to your EC2 instance, then run the commands below to install and start the TURN server.
Commands for installing the turnserver:
sudo apt-get update
sudo apt-get install make gcc libssl-dev libevent-dev wget -y # for installing modules required by turn server
mkdir ~/turn && cd ~/turn # creating temp directory
wget turnserver.open-sys.org/downloads/v3.2.5.9/turnserver-3.2.5.9.tar.gz # downloading the TURN source code
tar -zxvf *.gz # extract
cd turn*
make
sudo make install # installing the rfc5766
cd ../.. && rm -rf turn # cleaning up
Command for starting the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP
Assumptions:
your public IP and internal IP = EXT_IP and INT_IP
desired port for listening: 3478
single credential username:password = user:root
realm: someRealm
In your WebRTC app, you can use the TURN server like:
{
  url: 'turn:user@EXT_IP:3478',
  credential: 'root'
}
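To verify that the server actually relays traffic, you can use the test client that ships with the same source tree (a sketch; option names may vary between versions, check turnutils_uclient --help):

turnutils_uclient -u user -w root -p 3478 EXT_IP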

Running a Docker container that accepts traffic from the host

I have the following config:
Dockerfile
FROM centos
MAINTAINER Eduar Tua <eduartua@gmail.com>
RUN yum -y update && yum clean all
RUN yum -y install httpd && yum clean all
RUN echo "Apache works" >> /var/www/html/index.html
EXPOSE 80
ADD run-apache.sh /run-apache.sh
RUN chmod -v +x /run-apache.sh
CMD ["/run-apache.sh"]
The run-apache.sh script:
#!/bin/bash
rm -rf /run/httpd/* /tmp/httpd*
exec /usr/sbin/apachectl -D FOREGROUND
Then I build the image with:
sudo docker build --rm -t platzi/httpd .
then
sudo docker run -d -p 80:80 platzi/httpd
After that, when I try to run the container accepting connections from the host on port 80, I get this:
67ed31b50133adc7c745308058af3a6586a34ca9ac53299d721449dfa4996657
FATA[0002] Error response from daemon: Cannot start container 67ed31b50133adc7c745308058af3a6586a34ca9ac53299d721449dfa4996657: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Any help?
It is saying port 80 is busy ... run this to see who is using port 80
sudo netstat -tlnp | grep 80 # sudo apt-get install net-tools # to install netstat
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1380/nginx -g daemo
tcp6 0 0 :::80 :::* LISTEN 1380/nginx -g daemo
Scroll to the far right to see the PID of the offending process holding port 80 ... it is PID 1380, so let's do a process list to see that PID:
ps -eaf | grep 1380
root 1380 1 0 11:33 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
So tear down that offending process to free up port 80:
sudo kill 1380 # if you know the pid ( 1380 for example )
or
sudo fuser -k 80/tcp # just kill whatever pid is using port 80 tcp
If, after doing the above, it still says the port is busy, then the process you killed was probably relaunched automatically; in that case you need to kill off its watcher as well. You can walk up the process tree from the netstat output to identify this parent process and kill that too.
Here is how to identify the parent PID of a given process PID:
ps -eafww
eve 2720 2718 0 07:56 ? 00:00:00 /usr/share/skypeforlinux/skypeforlinux --type=zygote
In the above, the PID is 2720 and its parent is in the next column to the right, PID 2718. There are commands to show a process tree and visualize these relationships:
ps -x --forest
or
pstree -p
with sample output of
systemd(1)─┬─ModemManager(887)─┬─{ModemManager}(902)
           │                   └─{ModemManager}(906)
           ├─NetworkManager(790)─┬─{NetworkManager}(872)
           │                     └─{NetworkManager}(877)
           ├─accounts-daemon(781)─┬─{accounts-daemon}(792)
           │                      └─{accounts-daemon}(878)
           ├─acpid(782)
           ├─avahi-daemon(785)───avahi-daemon(841)
           ├─colord(1471)─┬─{colord}(1472)
           │              └─{colord}(1475)
           ├─containerd(891)─┬─containerd-shim(1836)─┬─registry(1867)─┬─{registry}(1968)
           │                 │                       │                ├─{registry}(1969)
           │                 │                       │                ├─{registry}(1970)
The error seems pretty clear:
FATA[0002] Error response from daemon: Cannot start container 67ed31b50133adc7c745308058af3a6586a34ca9ac53299d721449dfa4996657: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
It says, "address already in use". This means that something on your system -- probably a web server like Apache -- is already listening on port 80. You will either need to:
stop the web server,
select a different host port in the -p argument to docker run or
just drop the -p argument.
Because Docker can't set up the requested port forwarding, it does not start the container.
Options (a) and (b) will both publish the container on a port of your host. This is only necessary if you want to access the container from somewhere other than your host.
Option (c) is useful if you only want to access the container from the docker host but do not want to otherwise expose the container on your local network. In this case, you would use the container ip address as assigned by docker, which you can get by running docker inspect and perusing the output, or just running:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_id
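For example (a sketch; container_id is a placeholder, as in the command above):

curl http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_id)/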
If you are running Ubuntu, just stop the conflicting Apache service:
sudo /etc/init.d/apache2 stop
Then run your container again:
sudo docker run -d -p 80:80 platzi/httpd
I found this solution:
$ docker stop container_name
$ docker commit container_name image_name
$ docker rm container_name
Then you can create a new container from the image:
$ docker run -d -P --name container_name_the_same_or_new image_name
and now it works.