Pretty new to docker / docker-machine / docker-compose, and I'm using them for a Meteor app that needs to connect to a queue and a few other services. I need to set up SSL on localhost because we're using the getUserMedia API (which Chrome is deprecating on insecure connections).
I believe I need to create a self-signed certificate, but I'm not sure what to do with it after that. Do I set it up on my local machine, or do I set it up in the Docker container?
Note that Meteor is actually running in development mode in its container locally.
Any definitive help getting started on this would be great.
EDIT: While the similar question noted in the comments seems to solve the problem for Meteor specifically, I'm more interested in the context of Docker and OS X. Although my actual problem is currently with a Meteor app, I would like to find a solution that isn't Meteor-dependent but is still considerate of this use case.
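One common approach (not Meteor-specific) is to generate a self-signed certificate with openssl and terminate TLS in a small reverse-proxy container in front of the app, so the certificate never has to live inside the app container. A minimal sketch, assuming an nginx proxy and made-up service names and ports:

    # Generate a self-signed certificate and key for localhost (valid about 1 year).
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout certs/localhost.key -out certs/localhost.crt \
      -subj "/CN=localhost"

    # docker-compose.yml fragment (sketch): nginx terminates TLS on 443 and
    # proxies to the Meteor container; service names and ports are assumptions.
    services:
      app:
        build: .
        # Meteor dev server assumed to listen on port 3000 inside the container
      proxy:
        image: nginx:alpine
        ports:
          - "443:443"
        volumes:
          - ./certs:/etc/nginx/certs:ro
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
        depends_on:
          - app

The nginx.conf then only needs ssl_certificate/ssl_certificate_key pointing at the mounted files and a proxy_pass to the app container; since the cert is self-signed, the browser needs a one-time trust exception (or the cert can be added to the OS X keychain).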
I am here to describe an issue I faced recently. My friends and I have a pet project called Wibrant (earlier named winbook), which is a social media website, hosted here. It has a Django-React stack (both repos can be found here) and is hosted on a free-tier EC2 instance on AWS, which is associated with an Elastic IP.
The backend is running in a Docker container on the server itself; however, we decided to host the frontend on Vercel, which was initially hosted here.
But I decided to proxy it using nginx. The nginx conf for both React and Django can be found here.
This configuration was working perfectly until one night, when I suddenly started getting a 502 error on https://winbook.d3m0n1k.engineer/. Upon inspecting the nginx logs, I found an error like:
no live upstreams while connecting to upstream
which I was unable to understand. So I tried to curl the site from both my local machine and the server. I was able to curl it from my local system, but not from the EC2 server, where I got the error:
curl: (35) error:0A000126:SSL routines::unexpected eof while reading
Upon researching, I found that this error can occur due to an OpenSSL version mismatch, so I tried to update it, but couldn't. So I decided to spin up a new EC2 instance, and I was able to curl the site from there. Thinking that fixed the issue, I migrated the whole setup to that instance and re-associated my Elastic IP with it. I tried to test it, only to find that it had stopped working. Confused, I ran the curl command again, and it was failing too. Using a Python script with the requests module to fetch the site (a minimal reconstruction is sketched below), I got this error from my latest setup:
Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed
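The check itself was nothing more elaborate than fetching the site; the original script isn't shown here, so this is a reconstruction for reference, using the deployment URL mentioned above:

    # Reconstructed check (assumption: the original simply fetched the site
    # and surfaced the TLS failure); not the author's exact script.
    import requests

    try:
        resp = requests.get("https://winbook.d3m0n1k.engineer/", timeout=10)
        print(resp.status_code)
    except requests.exceptions.SSLError as exc:
        print("SSL error:", exc)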
However, now the previous setup started to work perfectly fine.
So, I could curl the Vercel deployment when I didn't have the Elastic IP associated with my instance, but couldn't when I did.
So I figured it was some issue with the Elastic IP; I suspected that maybe Vercel had blacklisted it. I reset the whole DNS config of my domain, created and associated a new Elastic IP with the instance, and it worked perfectly.
So, my questions are:
Has anyone faced such an issue before? If yes, what was the fix in your case?
Is it really possible that Vercel has the IP in a blacklist of sorts?
This issue is probably not reproducible, but if someone finds this thread while dealing with the same problem, I hope the post and/or the comments/answers lead you to your solution. Cheers.
We are using Traefik to simulate our production environment. We have multiple services running in Kubernetes (running in Docker); a few of them are Java applications. In this stack, a developer can deploy the code from whichever git branch they are working on, so at a given point we can have hundreds of full-fledged stacks running. We use Traefik for certificate resolution so that each stack can be hosted based on its branch name.
Now I want to give developers the ability to debug their Java applications. It's fairly simple to do in Java: you attach a debug agent when starting up the application's Docker image. Basically, we pass -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=37000 as a JVM argument, and the JVM is ready to accept remote debuggers.
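For example, a sketch of how the agent might be enabled when the container starts (the image name and port mapping are made up; injecting it via JAVA_TOOL_OPTIONS is just one option):

    # Start the app container with the JDWP agent listening on 37000.
    # On JDK 9+ the address needs the "*:" prefix to bind all interfaces.
    docker run -d \
      -e JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:37000" \
      -p 37000:37000 \
      my-java-app:feature-branch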
Now, the JVM is using the JDWP protocol, which, as far as I understand, is a TCP protocol. My problem is this: I want Traefik to create routes dynamically based on my Docker service labels. That part I was also able to figure out; I used these labels on the docker service.
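The labels themselves aren't shown above; for a Traefik v2 setup they would look roughly like this (router, service, and entrypoint names are placeholders, and the "jdwp" entrypoint must also be declared in Traefik's static configuration):

    labels:
      - "traefik.enable=true"
      - "traefik.tcp.routers.myapp-debug.rule=HostSNI(`*`)"
      - "traefik.tcp.routers.myapp-debug.entrypoints=jdwp"
      - "traefik.tcp.services.myapp-debug.loadbalancer.server.port=37000"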
And that is how you connect to the JVM remotely (via the IDE's remote-debug configuration pointing at the exposed host and port).
Now, if I use HostSNI(*) in the rule, then I am able to connect to the container. But the problem is that when I make a remote connection for debugging, Traefik can direct my request to any container, and then this whole thing won't work as expected.
I believe there must be some other supported matcher for TCP rules as well, apart from only HostSNI. What is your opinion on this? Or have I missed something here?
I've connected my Lita bot to a Dialogflow agent via the lita-api-ai plugin and (currently) a Firebase-enabled fulfillment script edited inline on the Dialogflow site.
I'd like to convert that webhook into Ruby and host it as a handler in Lita itself, but Dialogflow requires SSL on the webhook endpoint.
I'm using the standard Docker setup for Lita on CoreOS, and I'd like to use a Let's Encrypt cert. How can I do this? I'm not experienced with the innards of Docker or a Ruby app like Lita (as opposed to a full-blown nginx/Apache setup) -- can I put something around Docker to handle the SSL? Do I need to modify the Docker image itself?
The best way to go about this is to install a web server (nginx, caddy, etc.) to handle SSL termination. It should then proxy requests to the Docker instance. You can use nginx-proxy with the LetsEncrypt companion as the basic setup, although you'll need to alter the Lita systemd script to include config and environment variables (e.g., VIRTUAL_HOST, expose).
nginx-proxy listens for container changes to dynamically update its proxying, but I created systemd services for both nginx-proxy and the LetsEncrypt companion so that they would start on boot.
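For illustration, a rough sketch of the pieces outside of systemd (image tags, the hostname, and the email are placeholders, and exact volume/option names depend on the nginx-proxy and companion versions you use):

    # Reverse proxy: watches the Docker socket and routes by VIRTUAL_HOST.
    docker run -d --name nginx-proxy \
      -p 80:80 -p 443:443 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      -v certs:/etc/nginx/certs \
      -v vhost:/etc/nginx/vhost.d \
      -v html:/usr/share/nginx/html \
      jwilder/nginx-proxy

    # Companion: obtains and renews Let's Encrypt certs for proxied containers.
    docker run -d --name letsencrypt-companion \
      --volumes-from nginx-proxy \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      jrcs/letsencrypt-nginx-proxy-companion

    # Lita container: these env vars tell the proxy and companion to route
    # and certify it; Lita's HTTP port is assumed to be 8080 here.
    docker run -d --name lita \
      -e VIRTUAL_HOST=lita.example.com \
      -e VIRTUAL_PORT=8080 \
      -e LETSENCRYPT_HOST=lita.example.com \
      -e LETSENCRYPT_EMAIL=admin@example.com \
      my-lita-image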
I have a VM running Windows Server 2016 Technical Preview. I have installed the Containers feature and then run the Install-ContainerHost.ps1 script from Microsoft's container tools repo:
https://github.com/Microsoft/Virtualization-Documentation/tree/master/windows-server-container-tools/Install-ContainerHost
I can now run the Docker daemon on Windows. Next, I want to copy the certificates to a client machine so that I can issue commands to the host remotely. But I don't know where the certificates are stored on the host.
In the script, the path variable is set to %ProgramData%\docker\certs.d.
The certificates on Windows are located in the .docker folder in the current user's directory.
The docker --help command will show the exact path details.
AFAIK, no certificates are generated when you do what you are doing. If you drop certificates in the path you found, the daemon will use them and be secured; otherwise there are none on the machine, which explains why it isn't exposed by default.
On my setup, I connected without TLS, but that was on a VM that I could only access from my dev machine. Obviously, anything that can be accessed over a network shouldn't do that.
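If you do generate certificates and drop them into that path, the client side would then look roughly like this (the host name and file names are placeholders):

    # Connect to the remote daemon over TLS on the default secure port 2376,
    # using the CA, client cert, and key copied from the container host.
    docker --tlsverify \
      --tlscacert=ca.pem \
      --tlscert=cert.pem \
      --tlskey=key.pem \
      -H tcp://container-host:2376 version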
Other people doing this are here: https://social.msdn.microsoft.com/Forums/en-US/84ca60c0-c54d-4513-bc02-14bd57676621/connect-docker-client-to-windows-server-2016-container-engine?forum=windowscontainers and here https://social.msdn.microsoft.com/Forums/en-US/9caf90c9-81e8-4998-abe5-837fbfde03a8/can-i-connect-docker-from-remote-docker-client?forum=windowscontainers
When I dug into the work-in-progress post, it had this:
Docker clients unsecured by default
In this pre-release, docker communication is public if you know where to look.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/work_in_progress#DockermanagementDockerclientsunsecuredbydefault
So eventually this should get better.
Hi, I'm currently working on a side project in which I'll have a central server that needs to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept only connections on the private networking interface. The problem is that that interface is accessible to all other servers in the same datacenter.
My second thought was to allow only requests from the central server using the DOCKER_HOST config; the problem is that, if I understand correctly, if the private IP of the central server becomes known, it can be spoofed.
My third thought was to enable TLS (https://docs.docker.com/articles/https/), but I've never dealt with these things before and the tutorial is unclear to me; it leans heavily on terminology I don't know.
So basically the problem is that I have a central client and multiple remote Docker hosts; what is the best way to connect to them securely? Thank you.
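For what it's worth, the TLS route from that guide boils down to generating a CA plus server and client certificates, then starting each remote daemon with flags along these lines (file names are placeholders; on older Docker versions the daemon is started with docker -d rather than dockerd):

    # Sketch: daemon-side TLS with client-certificate verification, so only
    # clients holding a certificate signed by your CA can connect.
    dockerd --tlsverify \
      --tlscacert=ca.pem \
      --tlscert=server-cert.pem \
      --tlskey=server-key.pem \
      -H tcp://0.0.0.0:2376 \
      -H unix:///var/run/docker.sock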
EDIT: I managed to solve the problem using HTTP authentication, by running nginx as a proxy in front of the Docker daemon, roughly as sketched below.
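The idea is the following (a hypothetical config, not the exact one used): nginx terminates TLS, enforces basic auth, and forwards everything to the local Docker socket.

    # Hypothetical nginx config: TLS + HTTP basic auth in front of the
    # local Docker socket; certificate and htpasswd paths are placeholders.
    server {
        listen 2376 ssl;
        server_name docker-host.example.com;

        ssl_certificate     /etc/nginx/ssl/docker-host.crt;
        ssl_certificate_key /etc/nginx/ssl/docker-host.key;

        location / {
            auth_basic           "Docker API";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass           http://unix:/var/run/docker.sock:/;
        }
    }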
My understanding is that you are trying to build a Docker cluster that can manage all nodes from one single central server.
This sounds very much like Docker's Swarm project; their docs give a simple idea of how it works:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm
Sorry, this should be posted as a comment, but I do not have enough rep to do that.