I'm new to Docker. I've been trying to set up an environment that emulates a standard LAMP stack so I can develop PHP applications locally and deploy them easily.
So far I've followed this setup for my Docker environment, and it seems to be working fine, but I'm having trouble with certificates. On a normal server I would just run Certbot, select the Apache site to enable HTTPS for, and be done with it.
On Docker, however, I have no idea how to do this. My certificates should be placed inside ./cert/. Does that mean that I have to run commands to add the PPA, install Certbot, then create a certificate and place it in the folder I want? Or is there a simpler way to do this?
Googling brought me to a whole lot of Docker images that automatically create a certificate and also spin up an Apache instance, but I'd like to keep this as vanilla as possible.
What is the process of using a Let's Encrypt certificate with Docker?
Should I even install one locally or is that bad practice?
My certificates should be placed inside ./cert/. Does that mean that I have to run commands to add the PPA, install Certbot, then create a certificate and place it in the folder I want? Or is there a simpler way to do this?
Yes, you can proceed like that and store the certificate in a volume that points to ./cert/.
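If you'd rather not install Certbot on the host at all, one option is to run it from its official Docker image and mount ./cert/ as the Let's Encrypt directory. A rough sketch, assuming your domain already points at this machine and port 80 is free on the host (example.com is a placeholder):

# one-off issuance; certbot writes live/example.com/fullchain.pem and privkey.pem under ./cert/
docker run --rm -p 80:80 \
  -v "$(pwd)/cert:/etc/letsencrypt" \
  certbot/certbot certonly --standalone -d example.com

You can then point your Apache container's SSLCertificateFile/SSLCertificateKeyFile at the files under ./cert/live/example.com/ and re-run the image with renew instead of certonly from a cron job on the host.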
What is the process of using a Let's Encrypt certificate with Docker?
Should I even install one locally or is that bad practice?
There is no built-in certificate management in Docker itself. You can manage the certificate inside your container, but it would be hard to maintain (renewal, etc.).
A cleaner approach is to use Traefik as a reverse proxy / load balancer in front of your containers: it has a built-in certificate manager (ACME) that handles issuance and renewal for you.
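As a rough illustration (not the only way to set it up), a docker-compose sketch with Traefik v2 handling Let's Encrypt for an Apache/PHP container could look like this; the e-mail address, domain and app image are placeholders:

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com       # placeholder
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro
  web:
    image: php:8.2-apache                                           # placeholder app image
    labels:
      - traefik.enable=true
      - traefik.http.routers.web.rule=Host(`example.com`)           # placeholder domain
      - traefik.http.routers.web.entrypoints=websecure
      - traefik.http.routers.web.tls.certresolver=le

Traefik then obtains and renews the certificate by itself, so nothing ever has to be copied into ./cert/.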
In our company's internal network we have self-signed certificates used for applications that run on DEV or staging environments. For our local machines they are already trusted, because Active Directory provides that using Group Policy Objects. But in the Kubernetes (OpenShift) world, we have to do some additional operations to get SSL/TLS traffic working.
In the Dockerfile of the related application, we copy the certificate into the container and trust it while building the Docker image. After that, requests from the application running in the container to an HTTPS endpoint served with that self-signed certificate succeed. Otherwise we encounter errors like "SSL/TLS secure channel cannot be established".
COPY ./mycertificate.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/mycertificate.crt && update-ca-certificates
However, I don't think this is the best way to do it. It requires a lot of operational work when the certificate expires; in short, it's hard to manage and maintain. I wonder what the most efficient way to handle this is.
Thanks in advance for your support.
Typically that should be configured cluster-wide by your OpenShift administrators using the following documentation so that your containers trust your internal root CA by default (additionalTrustBundle):
https://docs.openshift.com/container-platform/4.6/networking/configuring-a-custom-pki.html#nw-proxy-configure-object_configuring-a-custom-pki
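For reference, the documented procedure boils down to roughly the following two steps (the ConfigMap name is your choice, but the key must be ca-bundle.crt); check the linked docs for your exact OpenShift version:

# put the internal root CA into a ConfigMap in the openshift-config namespace
oc create configmap custom-ca \
  --from-file=ca-bundle.crt=./mycertificate.crt \
  -n openshift-config

# reference it from the cluster-wide proxy object so it gets injected into trust bundles
oc patch proxy/cluster --type=merge \
  -p '{"spec":{"trustedCA":{"name":"custom-ca"}}}'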
"Best" is highly relative, but you could start by pulling the certificate out into a ConfigMap and mounting it into your container(s), as sketched below. That pushes all the work of updating it out to runtime, but introduces a fair bit of complexity. It depends on how often it changes and how much you can automate the rebuilds/redeploys when it does.
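A minimal sketch of that approach, assuming a Debian/Ubuntu-based image where update-ca-certificates can still be run at startup (names, image and start command are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: internal-ca
data:
  mycertificate.crt: |
    -----BEGIN CERTIFICATE-----
    ...your CA certificate (placeholder)...
    -----END CERTIFICATE-----
---
# fragment of the Deployment's pod spec
spec:
  containers:
    - name: app
      image: myregistry/myapp:latest                                        # placeholder
      command: ["sh", "-c", "update-ca-certificates && exec /app/start"]    # placeholder start command
      volumeMounts:
        - name: internal-ca
          mountPath: /usr/local/share/ca-certificates/mycertificate.crt
          subPath: mycertificate.crt
          readOnly: true
  volumes:
    - name: internal-ca
      configMap:
        name: internal-ca

Renewing the CA then means updating the ConfigMap and restarting the pods instead of rebuilding every image. Note that update-ca-certificates needs root, which may not be available under OpenShift's default restricted SCC; in that case, mounting the bundle where your application reads it directly is the safer route.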
I have an app (hosted on Heroku) where customers each have an individual subdomain, e.g. client1.myapp.com, client2.myapp.com. I am using a wildcard SSL cert here.
If, however, a client wants to use their own custom domain, e.g. a CNAME like app.client1.com, how can I automatically provide an SSL cert for them (I'm guessing using Let's Encrypt) without them having to provide me a certificate to upload, similar to the way Firebase etc. provides SSL certs for custom domains?
As long as app.client1.com is reachable from the internet, you are free to set up a Let's Encrypt certificate for it.
If your applications run on a Unix-like system, the Certbot docs describe all the steps for automating the renewal.
You basically have to set up a cronjob that launches the renewal command.
You can edit the cron jobs on your Linux machine with the command crontab -e and place something like this at the end of the file:
0 15 1 * * certbot renew
This will run the command the first day of each month and attempt the renewal of your certificate. Check crontab.guru if you need different settings.
If you want to call the certbot command from a custom script, you can add that script to your cronjob (and save the output to a custom file), e.g.:
0 15 1 * * python cert_autorenew.py >> cron.log 2>&1
Keep in mind that if you want a certificate for app.client1.com, then app.client1.com must resolve to the server from which you send the issuance/renewal request.
That's how you demonstrate control over the domain to Let's Encrypt.
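For the initial issuance, once the client has pointed their CNAME at your server, something along these lines would work (the webroot path and domain are placeholders):

# issue a certificate for the client's custom domain via the webroot challenge
certbot certonly --webroot -w /var/www/myapp -d app.client1.com

The monthly certbot renew cron above then picks this certificate up along with all the others.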
If you are hosting the application on Heroku (i.e. the new domain is directed to your Heroku app), Heroku will take care of the SSL certificates for you, as long as you run the application on a paid dyno. Which you should, as it's a client's application!
https://devcenter.heroku.com/articles/ssl
Heroku provides free Automated Certificate Management (ACM) for all applications running on paid dynos. With ACM, Heroku automatically provisions and renews SSL certificates for your application. If you prefer to upload your own certificate manually, follow the steps in this article.
You only have to create an application on Heroku, upgrade the dyno, and configure the custom domain name (app.client1.com) for the application, following the instructions at https://devcenter.heroku.com/articles/custom-domains
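With the Heroku CLI, that amounts to roughly the following (the app name and domain are placeholders):

# add the client's custom domain and turn on Automated Certificate Management
heroku domains:add app.client1.com -a myapp
heroku certs:auto:enable -a myapp

# Heroku prints a DNS target; the client points their CNAME at it
heroku domains -a myapp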
I have a client site set up on AWS with multiple servers running HTTPS behind an Elastic Load Balancer. At some point, someone from the client's team attempted to update the SSL cert by installing a new one directly on one of the servers (instead of on the ELB).
I was able to upload a new cert to the ELB, but when traffic is directed towards the server with the improperly installed cert, it triggers a security warning.
No one seems to be able to say who attempted this install, how they went about it, or where they installed it.
What's the best way to go about finding and removing it?
Thanks,
ty
If it's installed on the server, it has very little to do with AWS. I see you tagged the question with apache, so I assume the server is running Apache Web Server. You will have to connect to that server and remove the SSL settings from the Apache configuration, just like you would with an Apache install anywhere else.
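A quick way to track it down, assuming a typical Linux install (paths differ between Debian/Ubuntu and RHEL/Amazon Linux):

# find which vhost references a certificate
grep -ril "SSLCertificateFile" /etc/apache2 /etc/httpd 2>/dev/null

# Debian/Ubuntu: disable that HTTPS vhost and reload
sudo a2dissite default-ssl      # use whichever site name the grep turned up
sudo systemctl reload apache2

# RHEL/Amazon Linux: comment out or remove the <VirtualHost *:443> block
# (often /etc/httpd/conf.d/ssl.conf), then: sudo systemctl reload httpd

Once the server only speaks plain HTTP behind the ELB, the certificate on the load balancer is the only one clients ever see.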
I'm quite new to setting up and managing websites, domains and such.
I purchased a domain (let's say example.de) and registered it on my vServer running Parallels Plesk. As I need secure access, I requested and created an SSL certificate at startssl.com. The application (Spring Boot) runs on an EC2 instance at AWS. The product website runs on an Apache web server on an EC2 instance. I need to secure both the app (app.example.de) and the website (example.de) using SSL.
What I want to achieve is a redirect from the domain https://example.de to the EC2 instance. I have already tried several things; some I remember from the trial-and-error marathon:
Configuring Plesk frame forwarding of the traffic on https://example.de to the EC2 IP.
Obviously the browser warns me that the certificate is issued for example.de and not for and classifies the traffic as insecure. The same happens when accessing it via https://...
I also uploaded the certificate in Plesk, also without success.
Is there a solution for my setup? Or do I need (or is it recommended) to use Amazon Route 53 for that task? It would be nice if someone could guide me and provide some tips, as I am pretty new to these topics.
Thanks
It seems there is no way around AWS Route 53.
I figured out that there is an extension for Plesk that is designed to route traffic using Route 53, and there is even a nice manual article on the Plesk homepage about how to use any external DNS and also the Route 53 extension. As this extension requires a newer version of Plesk than the one I am using, I wasn't able to install it. I am pretty much bound to this version, so an update was out of the question. I cannot tell for sure whether this extension would solve my initial problem, but it seems to be a potential solution.
The simplest solution (at least for me):
I ended up moving my domain to AWS: I created a hosted zone, added a record set with the IP of the EC2 instance, and switched the domain over to the DNS servers provided by the hosted zone. Everything is now working like a charm.
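For anyone scripting it instead of clicking through the console, the record set part looks roughly like this with the AWS CLI (the hosted zone ID and IP are placeholders):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.de",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'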
Some more background: the product website and app frontend are served by Apache, where I installed mod_ssl and configured SSL access. The application backend runs as a Spring Boot app in Tomcat, where I also configured SSL using a TomcatConnectorCustomizer.
This setup works for my scenario.
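For the Spring Boot side, the same effect can also be reached declaratively through application.properties instead of a TomcatConnectorCustomizer; a sketch, with the keystore name, password and alias as placeholders:

# serve the embedded Tomcat over HTTPS using a PKCS12 keystore on the classpath
server.port=8443
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-type=PKCS12
server.ssl.key-store-password=changeit
server.ssl.key-alias=app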
I am trying to use SSL offloading to allow HTTPS on our web farm. The only way we can get SSL to work is to install the certificate and bind it in IIS on each server. However, our farm is scalable and we need to be able to create and drop servers as traffic levels change. We can't include the certificate in the server template because it gets corrupted and won't work properly.
If I understand it correctly, we should only have to install the certificate on the ARR server, and SSL offloading should apply to all the other servers. However, this doesn't seem to be working.
While we can install the certificate each time we create a server, this is an added hassle, and it seems like there should be a better way of doing it.
Any thoughts?
You can use SSL offloading, which only requires the SSL certificate to be installed at the ARR level, allowing you to add servers to your web farm without any certificate configuration.
What exactly isn't working when you try to do this?
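In case the missing piece is the HTTPS binding on the ARR machine itself, a rough PowerShell sketch for binding an already-imported certificate (the thumbprint and site name are placeholders):

Import-Module WebAdministration

# add an HTTPS binding to the site ARR listens on
New-WebBinding -Name "Default Web Site" -Protocol https -Port 443

# attach the certificate from the local machine store to that binding
$cert = Get-Item Cert:\LocalMachine\My\<THUMBPRINT>      # placeholder thumbprint
New-Item -Path IIS:\SslBindings\0.0.0.0!443 -Value $cert

The farm members then only need plain HTTP bindings; also check that "Enable SSL offloading" is still ticked under the server farm's Routing Rules, otherwise ARR will try to re-encrypt to the backends and each member will need a certificate again.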