Adding an SSL cert to a Lita bot using Docker

I've connected my Lita bot to a Dialogflow agent via the lita-api-ai plugin and (currently) a Firebase-enabled fulfillment script edited inline on the Dialogflow site.
I'd like to convert that webhook into Ruby and host it as a handler in Lita itself, but Dialogflow requires SSL on the webhook endpoint.
I'm using the standard Docker setup for Lita on CoreOS, and I'd like to use a Let's Encrypt cert. How can I do this? I'm not experienced with the innards of Docker or of a Ruby app like Lita (as opposed to a full-blown nginx/Apache setup). Can I put something in front of Docker to handle the SSL? Do I need to modify the Docker image itself?

The best way to go about this is to run a web server (nginx, Caddy, etc.) in front of Lita to handle SSL termination; it then proxies requests to the Docker container. nginx-proxy with its Let's Encrypt companion container makes a good basic setup, although you'll need to alter the Lita systemd script to include the configuration and environment variables the proxy expects (e.g., VIRTUAL_HOST and an exposed port).
nginx-proxy listens for container changes so it can update its proxying dynamically, but I created systemd services for both nginx-proxy and the Let's Encrypt companion so that they start on boot. The same wiring is sketched below in docker-compose form.
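(A minimal sketch, assuming the current nginxproxy/* image names, a litaio/lita image, and the placeholder hostname lita.example.com; adjust all three for your deployment.)

# docker-compose.yml (sketch only; hostnames and images are assumptions)
version: "3"
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro   # watch container events

  acme-companion:
    image: nginxproxy/acme-companion
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
      - DEFAULT_EMAIL=you@example.com
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro

  lita:
    image: litaio/lita                       # assumed image; use your Lita image
    environment:
      - VIRTUAL_HOST=lita.example.com        # nginx-proxy routes this host here
      - LETSENCRYPT_HOST=lita.example.com    # companion requests a cert for it
    expose:
      - "8080"                               # Lita's default HTTP port

volumes:
  certs:
  vhost:
  html:
  acme: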

Related

TCP route binding to specific hosts in traefik

We are using traefik to simulate our production environment. We have multiple services running in Kubernetes on Docker; a few of them are Java applications. In this stack, a developer can come and deploy code from whichever git branch they are working on, so at a given point we can have hundreds of full-fledged stacks running. We use traefik for certificate resolution so that each stack can be hosted under a hostname based on its branch name.
Now I want to give developers the ability to debug their Java applications. It's fairly simple to do in Java: you attach a Java agent while starting up the Docker image for the application. Basically we need to pass -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=37000 as a JVM argument, and the JVM is ready for remote debuggers to attach.
The JVM uses the JDWP protocol which, as far as I understand, is a TCP protocol. Now my problem is this: I want traefik to create routes dynamically based on my Docker service labels. That part I was also able to figure out; I used labels in the Docker service along the lines of the sketch below.
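(The exact labels aren't reproduced in the post; a traefik v2 TCP router declared through Docker labels typically looks something like this, where the router name jvm-debug and the entrypoint name jdwp are illustrative placeholders.)

# labels on the application's docker service (sketch; names are placeholders)
labels:
  - "traefik.tcp.routers.jvm-debug.entrypoints=jdwp"
  # HostSNI(`*`) matches any incoming connection -- hence the problem below
  - "traefik.tcp.routers.jvm-debug.rule=HostSNI(`*`)"
  # forward matched connections to the JDWP port opened by -agentlib:jdwp
  - "traefik.tcp.services.jvm-debug.loadbalancer.server.port=37000"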
And this is how you connect to the JVM remotely.
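(The connection steps aren't reproduced either; from a command line, attaching to the published JDWP endpoint would look roughly like the following, with the hostname as a placeholder. IDEs expose the same thing as a remote-debug run configuration.)

# attach jdb over a plain TCP socket to the JDWP agent
jdb -connect com.sun.jdi.SocketAttach:hostname=branch-a.example.com,port=37000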
Now, if I use HostSNI(*) in the rule, I am able to connect to the container. But the problem is that when I make a remote connection for debugging, traefik can direct my request to any container, and then the whole thing won't work as expected.
I believe there must be some other supported matcher for TCP rules as well, apart from only HostSNI. What is your opinion on this? Or have I missed something here?

Let's Encrypt certificate with Docker

I'm new to Docker. I've been trying to set up an environment that emulates a standard LAMP stack to develop PHP applications locally and easily deploy them.
So far I've followed this setup for my Docker environment; it seems to be working fine, but I'm having trouble with certificates. On a normal server I would just run Certbot, select the Apache site to enable HTTPS for, and be done with it.
On Docker, however, I have no idea how to do this. My certificates should be placed inside ./cert/. Does that mean that I have to run commands to add the PPA, install Certbot, then create a certificate and place it in the folder I want? Or is there a simpler way to do this?
Googling brought me to a whole lot of Docker images that automatically create a certificate and also create an Apache instance, but I'd like to keep this as vanilla as possible.
What is the process of using a Let's Encrypt certificate with Docker?
Should I even install one locally or is that bad practice?
My certificates should be placed inside ./cert/. Does that mean that I have to run commands to add the PPA, install Certbot, then create a certificate and place it in the folder I want? Or is there a simpler way to do this?
Yes, you can proceed like this and store the certificates in a volume that points to ./cert/.
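For example, Certbot can run from its official Docker image, so there is no PPA to add; a sketch, assuming the placeholder domain example.com:

# obtain a certificate with the standalone authenticator; port 80 must
# be free and reachable for the HTTP-01 challenge
docker run --rm -p 80:80 \
  -v "$(pwd)/cert:/etc/letsencrypt" \
  certbot/certbot certonly --standalone -d example.com
# the files end up under ./cert/live/example.com/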
What is the process of using a Let's Encrypt certificate with Docker?
Should I even install one locally or is that bad practice?
There is no built-in certificate management in Docker. You can manage the certificate inside your container, but it would be hard to maintain (renewal, etc.).
A better approach would be to use traefik as a load balancer: it has a built-in certificate manager that handles all of the necessary steps (issuance, renewal), as in the sketch below.
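(A minimal docker-compose sketch of that approach, assuming traefik v2 and the placeholder domain example.com.)

# docker-compose.yml (sketch) -- traefik terminates TLS and renews the certs
version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--entrypoints.websecure.address=:443"
      # Let's Encrypt resolver; the e-mail and storage path are assumptions
      - "--certificatesresolvers.le.acme.email=you@example.com"
      - "--certificatesresolvers.le.acme.storage=/acme.json"
      - "--certificatesresolvers.le.acme.tlschallenge=true"
    ports:
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./acme.json:/acme.json"   # create first: touch acme.json && chmod 600 acme.json

  web:
    image: php:apache              # stand-in for the LAMP container from the question
    labels:
      - "traefik.http.routers.web.rule=Host(`example.com`)"
      - "traefik.http.routers.web.entrypoints=websecure"
      - "traefik.http.routers.web.tls.certresolver=le"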

How to configure a Flask ws in Kubernetes with SSL?

I have a containerized Flask application (a simple web service exposed to the internet) with SSL enabled by gunicorn through:
CMD ["gunicorn", "--certfile", "/var/tmp/fullchain.pem", "--keyfile", "/var/tmp/key.pem", "__init__:create_app()", "-b", ":8080"]
I have a bot that renews Let's Encrypt certificates in this path every 3 months.
Now I am creating a Kubernetes cluster to put this application in and orchestrate the replicas.
In a related question I've seen that some ingress controllers provide this certificate creation/renewal functionality, so I would not need to map the .pem files in anymore. There is also cert-manager, which does that.
Now I don't know if I still need gunicorn, or what the easiest and recommended way of configuring this to run the application is. I am also in the process of choosing an ingress controller for my cluster.
Now I don't know if I need gunicorn.
Gunicorn is like Tomcat in the Java world: it also improves performance for a Python web server, so using Gunicorn is recommended even without SSL.
If other services in the same cluster want to talk to your Flask server and you want to protect those connections, you should configure Gunicorn with SSL. If not, I think using an ingress controller with a certificate manager is more convenient.
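(In that case, the CMD from the question can simply drop the certificate flags; a sketch.)

# TLS now terminates at the ingress, so gunicorn serves plain HTTP on 8080
CMD ["gunicorn", "__init__:create_app()", "-b", ":8080"]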
I am also in the process of choosing an ingress controller for my cluster.
Well, I think the official cert-manager docs can help you: they walk through deploying cert-manager with the NGINX ingress controller.
Theoretically you don't need to abandon your current setup (a Flask app exposed over HTTPS). For instance, the NGINX ingress controller can pass (encrypted) TLS packets directly through to an upstream server (in your case Gunicorn) using its SSL passthrough feature.
But it would definitely be better to do it the recommended Kubernetes way, with TLS enabled on the Ingress (where the cert-manager add-on can help you obtain certificates from sources like Let's Encrypt).
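A minimal sketch of that recommended way, assuming a ClusterIssuer named letsencrypt-prod, a Service named flask-service on port 8080, and the placeholder host api.example.com (all three are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    # cert-manager watches this annotation and provisions the TLS secret
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts: [api.example.com]
      secretName: flask-tls          # created and renewed by cert-manager
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-service  # hypothetical Service name
                port:
                  number: 8080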

Endpoint Paths for APIs inside Docker and Kubernetes

I am a newbie with Docker and Kubernetes, and I am now developing RESTful APIs which will later be deployed to Docker containers in a Kubernetes cluster.
How will the paths of the endpoints change? I have heard that Docker Swarm and Kubernetes add some words onto the endpoints.
The "path" part of the endpoint URLs themselves (for this SO question, the /questions/53008947/... part) won't change. But the rest of the URL might.
Docker publishes services at a TCP-port level (docker run -p option, Docker Compose ports: section) and doesn't look at what traffic is going over a port. If you have something like an Apache or nginx proxy as part of your stack that might change the HTTP-level path mappings, but you'd probably be aware of that in your environment.
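For example (placeholder image and ports):

# publish container port 8080 as host port 12345; HTTP paths are untouched
docker run -p 12345:8080 my-api:latest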
Kubernetes works similarly, but there are more layers. A container runs in a Pod, and can publish some port out of the Pod. That's not used directly; instead, a Service refers to the Pod (by its labels) and republishes its ports, possibly on different port numbers. The Service has a DNS name service-name.namespace.svc.cluster.local that can be used within the cluster; you can also configure the Service to be reachable on a fixed TCP port on every node in the cluster (NodePort) or, if your Kubernetes is running on a public-cloud provider, to create a load balancer there (LoadBalancer). Again, all of this is strictly at the TCP level and doesn't affect HTTP paths.
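A sketch of that republishing, with placeholder names:

apiVersion: v1
kind: Service
metadata:
  name: my-api            # reachable as my-api.default.svc.cluster.local
spec:
  selector:
    app: my-api           # matches the Pod's labels
  ports:
    - port: 80            # port the Service publishes inside the cluster
      targetPort: 8080    # port the container actually listens on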
There is one other Kubernetes piece, an Ingress controller, which acts as a declarative wrapper around the nginx proxy (or something else with similar functionality). That does operate at the HTTP level and could change paths.
The other corollary to this is that the URL to reach a service might be different in different environments: http://localhost:12345/path in a local development setup, http://other_service:8080/path in Docker Compose, http://other-service/path in Kubernetes, https://api.example.com/other/path in production. You need some way to make that configurable (often an environment variable).
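For example (the variable name is arbitrary):

# the application reads its peer's base URL from an environment variable
docker run -e OTHER_SERVICE_URL=http://other-service:8080/path my-api:latest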

Meteor, docker and SSL on localhost

I'm pretty new to Docker / docker-machine / docker-compose and use them for a Meteor app that needs to connect to a queue and a few other services. I need to set up SSL on localhost, as we're using the getUserMedia API (which Chrome is deprecating on insecure connections).
I believe I need to create a self-signed certificate, but I'm not sure what to do with it after that. Do I set it up on my local machine, or do I set it up in the Docker container?
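(For reference, a self-signed certificate for localhost can be generated with openssl like this; the file names are arbitrary.)

# create a key and self-signed cert for localhost, valid one year, no passphrase
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=localhost"
# on OS X, import cert.pem into Keychain Access and mark it trusted
# so Chrome will accept it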
Note that Meteor is actually running in development mode in its container locally.
Any definitive help getting started on this would be great.
EDIT: While the similar question noted in the comments seems to solve the problem for Meteor specifically, I'm more interested in the context of Docker and OS X. While my actual problem is currently with a Meteor app, I would like to find a solution that isn't Meteor-dependent but is considerate of this use case.