Remote Docker Host Authentication - SSL

Hi, I'm currently working on a side project. In this project I'll have a central server that needs to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept connections only on the private networking interface. The problem is that this interface is accessible to all other servers in the same datacenter.
My second thought was to allow only requests from the central server, using the DOCKER_HOST config. The problem is that, if I understand correctly, if the private IP of the central server becomes known, the IP can be spoofed.
My third thought was to enable TLS (https://docs.docker.com/articles/https/). I've never dealt with these things before, and the tutorial is unclear to me; I lack knowledge of the terminology, which it uses heavily.
So basically the problem is that I have a central client and multiple remote Docker hosts. What is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication, by running nginx as a proxy in front of the Docker daemon.
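For reference, a minimal sketch of that nginx setup, with hypothetical paths (the htpasswd file must be created first, e.g. with htpasswd -c; in practice the listener should also sit behind HTTPS, since basic auth alone sends credentials merely base64-encoded):

    # proxy the local Docker socket, requiring HTTP basic auth
    upstream docker {
        server unix:/var/run/docker.sock;
    }
    server {
        listen 2376;
        location / {
            auth_basic           "Docker API";
            auth_basic_user_file /etc/nginx/docker.htpasswd;
            proxy_pass           http://docker;
        }
    }

A client can then call the API with credentials, e.g. curl -u user:password http://remote-host:2376/containers/json, instead of hitting the raw daemon port.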

My understanding is that you are trying to build a Docker cluster that can manage all nodes from one single central server.
This is very likely Docker's Swarm project. Their docs give a simple idea of how it works (a minimal sketch of the modern equivalent follows the list):
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm
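A minimal sketch of what those steps look like with today's built-in "swarm mode" (the successor to the standalone Swarm project referenced above, which generates and rotates the TLS certificates for you; manager IP and token are placeholders):

    # on the central manager
    docker swarm init --advertise-addr <MANAGER-IP>
    # this prints a ready-made "docker swarm join --token ..." command

    # on each remote node (TCP port 2377 must be reachable from the node)
    docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377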
Sorry, this should be posted as a comment, but I do not have enough rep to do that.

Related

Accessing a Client API through SSH Tunnel

I have a scenario whereby I need to access an API hosted on a client machine on the client's network. No port forwarding can be configured on the client's router/network, which is how I would normally tackle such a requirement. Also, the client machine/network will not have a fixed external IP.
After some research, I was pointed towards using an SSH client on the client machine, which registers itself against an SSH server and opens up a 'tunnel'. My application, which resides on our servers, can then communicate with the client API through the SSH server and the open tunnel.
Am I on the right track, and can anyone suggest any tutorials, demos, or sample solutions that I can look over so I can better understand what is needed on my end, and how to configure and/or develop such a setup, please?
Any assistance would be greatly appreciated, Thanks.
Luke
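For reference, the tunnel described above is plain OpenSSH remote port forwarding; a minimal sketch with hypothetical hosts and ports, assuming the client API listens locally on 8080:

    # run on the client machine: expose its local port 8080
    # as port 9000 on the loopback interface of the SSH server
    ssh -N -R 9000:localhost:8080 tunnel@ssh-server.example.com

The application on the server side then calls the API via localhost:9000 on the SSH server. A tool such as autossh can keep the tunnel alive across network drops.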

Using cloudflared for SSH tunneling accessible from the internet, without needing to run cloudflared on the other side

I have a Raspberry Pi behind NAT in my room, and I want to access it from the internet using a URL. I found this article:
https://developers.cloudflare.com/cloudflare-one/tutorials/ssh
However, it requires me to run the cloudflared program on the connecting client. I understand that this is for security purposes. Is it possible to make the connection without running the cloudflared program on the client machine?
A follow-up question: is it possible to SSH into an IPv6 machine using the same technique?
There are various options when it comes to connecting to a machine running on a private network:
Running cloudflared on the client (which you already found)
Installing the WARP client on the user side, then using cloudflared on the server side to expose the service securely, and finally routing the private network's traffic over the tunnel via WARP. This approach is described in a tutorial here.
Cloudflare also started supporting in-browser rendering of an SSH session. I have written a tutorial describing how to set it up here.
Approach (3) does away with the need to run a client, since it relies on nothing more than a browser.
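A minimal sketch of the server-side configuration used by approaches (2) and (3); the tunnel UUID and hostname are hypothetical, the tunnel is assumed to have been created with cloudflared tunnel create, and browser rendering itself is switched on per application in the Zero Trust dashboard:

    # ~/.cloudflared/config.yml on the Raspberry Pi
    tunnel: <TUNNEL-UUID>
    credentials-file: /home/pi/.cloudflared/<TUNNEL-UUID>.json
    ingress:
      - hostname: ssh.example.com
        service: ssh://localhost:22
      - service: http_status:404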

TCP route binding to specific hosts in traefik

We are using Traefik to simulate our production environment. We have multiple services running in Kubernetes in Docker; a few of them are Java applications. In this stack, a developer can deploy code from whichever Git branch they are working on, so at any given point we can have hundreds of full-fledged stacks running. We use Traefik for certificate resolution so that each stack can be hosted under its branch name.
Now I want to give developers the ability to debug their Java applications. That is fairly simple to do in Java: you attach a debug agent when starting up the application's Docker image. Basically, we pass -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=37000 as a JVM argument, and the JVM is ready for remote debuggers to attach.
The JVM uses the JDWP protocol for this and, as far as I understand, it is a TCP protocol. Now my problem is: I want Traefik to create routes dynamically based on my Docker service labels. That I was also able to figure out; I used labels along these lines on the Docker service.
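The exact labels aren't shown here, but a hypothetical reconstruction in Traefik v2 label syntax would look something like this (the router/service names and the jdwp entrypoint are assumptions):

    labels:
      - "traefik.tcp.routers.myapp-debug.rule=HostSNI(`*`)"
      - "traefik.tcp.routers.myapp-debug.entrypoints=jdwp"
      - "traefik.tcp.services.myapp-debug.loadbalancer.server.port=37000"

Note that matching a specific hostname with HostSNI only works for TLS connections carrying an SNI header; plain (non-TLS) TCP traffic such as JDWP can only be matched with HostSNI(*), which is why Traefik cannot tell the debug connections apart by hostname.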
Then you connect the remote debugger to the JVM by pointing it at the routed host and port.
Now, if I use HostSNI(*) in the rule, I am able to connect to the container. But the problem is that when I make a remote connection for debugging, Traefik can direct my request to any container, and then the whole thing won't work as expected.
I believe there must be some other supported matcher for TCP rules, apart from only HostSNI. What is your opinion on this? Or have I missed something here?

Managing Multiple Reverse SSH Tunnels

I want to install a number of Raspberry Pis at remote locations and be able to log in to them remotely. (We will begin with 30-40 boxes and hopefully grow to 1000 individual Raspberry Pis soon.)
I need to be able to remotely manage these boxes. The easier route, forwarding a port on the router and setting a DHCP reservation, requires either IT support from the company we'll be doing the install for (many of which don't have IT), or one of our IT people physically installing each box.
My tentative solution is to have each box create a reverse SSH tunnel to our server. My question is: How feasible would this be? How easy would it be to manage that many connections? Would it be an issue for a small local server to have 1000+ concurrent SSH connections? Is there an easier solution to this problem?
My end goal is to be able to ship someone a box, have them plug it in, and be able to access it.
Thanks,
w
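For what it's worth, a minimal sketch of the reverse tunnel described in the question, with hypothetical hosts and ports; autossh re-establishes the connection when it drops, and each Pi gets its own remote port:

    # run on each Pi (e.g. from a systemd unit)
    autossh -M 0 -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -R 10042:localhost:22 tunnel@central.example.com

    # then, from the central server, this device is reachable with:
    ssh -p 10042 pi@localhost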
An alternate solution would be to:
Install an OpenVPN server on your server machine (How to install OpenVPN Server on the Pi). Additionally, add firewall rules that block everything except traffic from the administrating machine(s) directed at the clients' SSH and other service ports (if desired).
Run OpenVPN clients on your Raspberry Pi client machines; they will connect back to your VPN server. On a side note, the VPN server and the administrating machine(s) need not be the same machine if resources are limited on the VPN server (How to install OpenVPN on the client Raspberry Pis).
SSH from the administrating machine(s) to each client machine. Optionally, you could use RSA key authentication to simplify logins.
Benefits include encryption for the tunnel (on top of SSH's own encryption for administration), as well as being able to monitor other services on their respective ports.
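A minimal sketch of the client side, assuming a hypothetical VPN endpoint vpn.example.com and per-device certificates issued by your CA:

    # /etc/openvpn/client.conf on each Pi
    client
    dev tun
    proto udp
    remote vpn.example.com 1194
    nobind
    persist-key
    persist-tun
    ca ca.crt
    cert pi-0042.crt
    key pi-0042.key

Each Pi then gets a stable VPN address, so the administrating machine can simply ssh pi@10.8.0.42 (or whatever subnet your server hands out).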
I made a web app to manage this exact same setup in about 60 minutes with my Java web template. All I can share are some scripts that I use to list the connections and info about them. You can use those to build your own app; it is really simple to display this in some fancy way in a quick web app.
Take a look at my scripts: https://unix.stackexchange.com/a/625771/332669
Those will let you get the listening ports, as well as the public IPs they're bound from. With that you can easily plan a system where everything is identifiable with a simple database.
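In the same spirit as those scripts, a sketch of the underlying commands on the tunnel server:

    # ports each reverse tunnel is listening on
    sudo ss -tlnp | grep sshd

    # established client connections and their source IPs
    sudo ss -tnp state established '( sport = :22 )'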
You might find this Docker container useful: https://hub.docker.com/r/logicethos/revssh/

Proxy / ServiceBus / Reverse SSH

Trying to figure out the best way to easily connect a bunch of client machines running a WCF service to a LAMP server across a wide area network...
Currently I've just set up each client with DynDNS and port forwarding at the router... absolutely not the best situation for deployment.
Ideally, I would like a simple program they run which automatically connects them to the LAMP server...
Can anyone point me in the right direction?
Should I be looking at reverse SSH, or Windows Azure AppFabric Service Bus?
This is one of the scenarios that the Service Bus relay was created for. With the relay, a sort of tunnel is established via Service Bus between your WCF services and your clients, independently of where each party is deployed (as long as both have internet access, that is).
This article has a tutorial on a scenario that's very similar to what you describe:
http://www.windowsazure.com/en-us/develop/net/tutorials/hybrid-solution/
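For a rough idea of the shape of the WCF side, a sketch of a relay endpoint from that era; the service, contract, and namespace names are hypothetical, and the credential configuration is omitted:

    <!-- in app.config; netTcpRelayBinding comes from the
         Microsoft.ServiceBus assembly -->
    <system.serviceModel>
      <services>
        <service name="MyApp.EchoService">
          <endpoint address="sb://mynamespace.servicebus.windows.net/echo"
                    binding="netTcpRelayBinding"
                    contract="MyApp.IEchoContract" />
        </service>
      </services>
    </system.serviceModel>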
A reverse proxy would certainly be relevant here.
There are a number of ways to provide this. You mention using a LAMP stack, so I'm assuming that you are using Apache as the web server.
You need a couple of optional Apache modules for proxying and reverse proxying: mod_proxy and mod_proxy_http.
Typically you would assign a virtual "folder" to each actual app:
https://server/app1
https://server/app2
The reverse proxy would forward requests through to the actual, internal server/port:
https://server/app1 -> http://localhost:8000/
https://server/app2 -> http://localhost:8001/
(or whatever configuration you want)
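A minimal sketch of that configuration, assuming mod_proxy and mod_proxy_http are enabled (a2enmod proxy proxy_http on Debian-family systems):

    # forward each virtual folder to its internal app
    ProxyPass        "/app1" "http://localhost:8000/"
    ProxyPassReverse "/app1" "http://localhost:8000/"
    ProxyPass        "/app2" "http://localhost:8001/"
    ProxyPassReverse "/app2" "http://localhost:8001/"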