I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4).
I've set up local, remote, and virtual Docker repositories, added Apache as a reverse proxy, and added a DNS entry for the virtual "docker" repo.
The reverse proxy is working, but if I try something like:
docker pull docker.my.company.com/ubuntu:16.04
I'm getting:
https://docker.my.company.com/v1/_ping: x509: certificate is valid for
*.company.com, company.com, not docker.my.company.com
My Artifactory URL is "my.company.com/artifactory", and I want the repositories to be accessible at repo.my.company.com/artifactory.
I also have a wildcard certificate for company.com, so I don't understand what the problem is here.
Or is there a way to access Artifactory over plain HTTP without SSL?
Any Ideas?
According to RFC 2818, a wildcard certificate matches only domains one level down, but not deeper:
E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
In this case, what you should do is map repositories to ports instead of subdomains, so the Docker repository will be accessible at, for example, my.company.com:5001/ instead of docker.my.company.com.
You can find the explanation of the change, and how to do it using the Artifactory reverse proxy settings generator, in the User Guide.
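For reference, the generated Apache config for a port-based Docker setup looks roughly like the sketch below (a sketch only: the repo key docker-local, the backend on localhost:8081, and the certificate paths are assumptions, so use the values from the generator for your setup; mod_ssl, mod_proxy, and mod_proxy_http must be enabled):

Listen 5001
<VirtualHost *:5001>
    ServerName my.company.com

    SSLEngine on
    # The *.company.com wildcard matches my.company.com (one level), unlike docker.my.company.com
    SSLCertificateFile    /etc/ssl/certs/wildcard.company.com.crt
    SSLCertificateKeyFile /etc/ssl/private/wildcard.company.com.key

    ProxyPreserveHost On
    # Map https://my.company.com:5001/v2/... to the docker repo's API in Artifactory
    ProxyPass        /v2 http://localhost:8081/artifactory/api/docker/docker-local/v2
    ProxyPassReverse /v2 http://localhost:8081/artifactory/api/docker/docker-local/v2
</VirtualHost>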
If you are prepared to live with the certificate-name mismatch for now, and understand the security implications of ignoring the mismatch and accessing the repo insecurely, you can apply the following workaround:
Edit /etc/default/docker and add the option DOCKER_OPTS="--insecure-registry docker.my.company.com".
Restart docker: [sudo] service docker restart.
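On newer Docker installs managed by systemd, the equivalent is typically a daemon.json entry rather than DOCKER_OPTS; the same security caveats apply. For example, in /etc/docker/daemon.json:

{
  "insecure-registries": ["docker.my.company.com"]
}

Then restart the daemon: sudo systemctl restart docker.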
I am a bit new to Google Compute Engine and managed to get a web server with nginx working on my Google domain and installed WordPress. HTTP access was working. Now I want to get HTTPS to work as well.
I noticed that I didn't have SSL running, so I ended up using Cloudflare, made the necessary changes to my nginx server, and also changed the nameservers for my web server's IP address on Google Compute Engine. That works fine, although there are still some errors when accessing the IP address instead of the domain name (400 Bad Request: No required SSL certificate was sent, nginx/1.18.0 (Ubuntu)).
So, I heard Google can do SSL on my Google domain, but I am really stuck with the documentation, https://cloud.google.com/appengine/docs/standard/python/securing-custom-domains-with-ssl?authuser=2#upgrading_to_managed_ssl_certificates. It talks about Google App Engine, and I haven't found documentation for applying SSL certificates to my Google Compute Engine instance. I did add a custom domain there, but it points to a different IP address than my web server on Google Compute Engine. That surely can't be the right way?
Hence, does anyone know how I can get SSL from Google to work on my web server using a VM instance on Google Compute Engine?
(Note to myself: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04)
It is very easy to set up SSL on Compute Engine.
STEP 1: Domain names
Determine which domain names you want SSL certificates for. Typically you want two: the naked domain (example.com) and the www subdomain (www.example.com). Replace example.com with your actual domain name.
Note: Let's Encrypt will not issue SSL certificates for an IP address. This also means you cannot access your web server over SSL by specifying an IP address instead of a domain name; trying something like https://my-ip-address will generate an error.
STEP 2: Set Up DNS
Change your DNS servers to point directly to your Compute Engine instance reserved static IP address. At this point, do not use CloudFlare. Let's Encrypt will talk directly to your Nginx web server. Validate that each domain name is configured correctly and that you can access your site via HTTP (http://example.com and http://www.example.com).
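Before requesting certificates, you can confirm the DNS part with dig (assuming dig is installed; both names should return your reserved static IP):

dig +short example.com
dig +short www.example.com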
The following instructions are OS dependent and are for Debian-based systems such as Debian and Ubuntu. There are similar steps for CentOS, Red Hat, etc.
STEP 3: Install Certbot
Certbot is the software agent for Let's Encrypt. It requires Python 3 to be installed on your system; most Google Cloud instances have Python 3 installed.
Run the following commands on your VM instance:
sudo apt update
sudo apt upgrade -y
sudo apt install certbot python3-certbot-nginx
STEP 4: VPC Firewall
Make sure that ports 80 and 443 are allowed in the Google Cloud VPC Firewall.
Using firewall rules
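If you prefer the CLI over the console, a rule along these lines opens both ports (a sketch; the rule name and source range are examples, and it assumes the default network):

gcloud compute firewall-rules create allow-http-https \
    --direction INGRESS \
    --allow tcp:80,tcp:443 \
    --source-ranges 0.0.0.0/0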
STEP 5: Issue the SSL Certificate
Run the following command on your VM instance. Replace example.com with your domain names.
sudo certbot --nginx -d example.com -d www.example.com
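Certbot sets up automatic renewal for you. If you want to verify that renewal will work later, you can run a dry run:

sudo certbot renew --dry-run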
Summary
Your server now has SSL configured. The SSL certificate will auto-renew. Provided that you do not change the domain names or DNS server settings, SSL will continuously function.
In the future, you may decide to offload SSL certificates to another service such as Cloudflare or a Google HTTP(S) Load Balancer. I recommend understanding how to set up SSL directly on your instance so that encryption is end-to-end. Then you can decide on SSL-offloading, caching, load balancing, auto-scaling, and more options.
I'm not even sure I asked the question right...
I have three servers running minio in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/, and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works, and I am able to connect to all three servers from a web browser using https with good certs. However, users will connect to the server using the parent domain "files.example.com" (using distributed DNS).
I already ran certbot and generated the certs for the parent domain, and I copied the certs into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (naming the files "files.example.com-public.crt" and "files.example.com-public.key" respectively). This did not work: when I try to open the parent domain "files.example.com", I get a cert error (which I can bypass) indicating the certificate is for the node I have connected to and not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have an idea that there might be a way to issue a cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up minio's Slack channel and posted on their GitHub, but no one's replying to me. Not even a "this won't work."
Any ideas?
I gave up and ran certbot in manual mode. It had to install Apache on one of the nodes, then certbot had me jump through a couple of minor hoops (namely, it had me create a new TXT record with my DNS provider and then create a file with a text string on the server for verification). I then copied the created certs into my minio config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin, as it allows for automated cert renewal, but I'll live with this for now.
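For anyone following along, the manual run described above looks roughly like this (a sketch; certbot prompts you to create the TXT record, and the copy step assumes minio picks up public.crt and private.key from its certs directory, which is worth double-checking against the minio TLS docs):

sudo certbot certonly --manual --preferred-challenges dns -d files.example.com
# then, on each node:
sudo cp /etc/letsencrypt/live/files.example.com/fullchain.pem /etc/minio/certs/public.crt
sudo cp /etc/letsencrypt/live/files.example.com/privkey.pem /etc/minio/certs/private.key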
You could also run all of them behind a reverse proxy that handles TLS termination using a wildcard domain certificate (i.e., *.files.example.com). The reverse proxy would centralize the certificates, DNS, and certbot scripts on a single node, essentially load balancing the TLS and DNS for the minio nodes. The performance hit of "load-balancing" TLS like this may be acceptable depending on your workload, considering the simplification to your current DNS and TLS cert setup.
[Digital Ocean example using nginx and certbot plugins] https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
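A minimal nginx sketch of that idea, assuming the wildcard cert from the linked tutorial lives under /etc/letsencrypt/live/files.example.com/ and the minio nodes listen on 443 as in the question (names and paths are illustrative, not tested against this setup):

upstream minio_nodes {
    server node1.files.example.com:443;
    server node2.files.example.com:443;
    server node3.files.example.com:443;
}

server {
    listen 443 ssl;
    server_name files.example.com *.files.example.com;

    ssl_certificate     /etc/letsencrypt/live/files.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/files.example.com/privkey.pem;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass https://minio_nodes;
    }
}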
I have registered a free domain name from freenom.com and added nameservers from AWS Route 53. Now my domain <blabla>.ga successfully points to my EC2 Python Flask server. But I really can't figure out how to add SSL using Let's Encrypt. I am following the link https://ivopetkov.com/b/let-s-encrypt-on-ec2/ for SSL-ifying my EC2. After running letsencrypt-auto, I add the domain names and press enter, then I get:
[ec2-user@ip-172-31-40-218 letsencrypt]$ cd /opt/letsencrypt/
[ec2-user@ip-172-31-40-218 letsencrypt]$ ./letsencrypt-auto
Requesting to rerun ./letsencrypt-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated) (Enter 'c' to cancel): iotserver.ga www.iotserver.ga
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for iotserver.ga
http-01 challenge for www.iotserver.ga
Cleaning up challenges
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
A similar question is asked here, but I've already done most of what is explained in both of the answers. Can anyone assist me with what I am missing here?
Try the following tutorials:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps
Make sure that you are able to access the web app without HTTPS, then try to install SSL. As I can see, you are getting the following error:
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
There must be some configuration issue. Please debug it and let me know.
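For context, that error means the Apache plugin cannot find a <VirtualHost> on port 80 that carries your domain. A minimal sketch of such a vhost on Amazon Linux, assuming the Flask app listens on localhost:5000 (the port and file path are assumptions, and mod_proxy/mod_proxy_http must be enabled):

# /etc/httpd/conf.d/iotserver.conf
<VirtualHost *:80>
    ServerName iotserver.ga
    ServerAlias www.iotserver.ga

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>

With a vhost like this in place, re-running letsencrypt-auto should be able to find the port-80 virtual host and complete the http-01 challenge.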
I am trying out the 30-day trial version of the artifactory-registry Docker image to evaluate the Docker repository for our internal use. I am following the documentation https://www.jfrog.com/confluence/display/RTF/Running+with+Docker
After I run the Docker image, I am able to access the UI on port 8081. However, when I try to push an image I get the following error:
"The plain HTTP request was sent to HTTPS port"
Here's how I deploy the image:
sudo docker pull mysql
sudo docker tag mysql localhost:5002/mysql
sudo docker push localhost:5002/mysql
Also, the documentation says that Artifactory can be accessed at the following URLs:
http://localhost/artifactory
http://localhost:8081/artifactory
https://localhost:5000/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-remote/v2)
https://localhost:5001/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v1)
https://localhost:5002/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v1)
https://localhost:5001/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v2)
https://localhost:5002/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v2)
But I get a 404 trying to access any of the HTTPS URLs.
What am I missing?
This appears to be an NGINX configuration issue (as described here) with not forwarding HTTPS requests to Artifactory.
Changing the configuration to forward your requests should fix your issue.
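For reference, the forwarding for one of the port mappings above typically looks something like this sketch (based on the 5002 → docker-dev-local mapping from the question; the certificate paths are placeholders, and the bundled config may differ in detail):

server {
    listen 5002 ssl;
    server_name localhost;

    ssl_certificate     /etc/nginx/ssl/demo.pem;
    ssl_certificate_key /etc/nginx/ssl/demo.key;

    # Forward https://localhost:5002/v2/... to the docker-dev-local repository API
    location /v2 {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8081/artifactory/api/docker/docker-dev-local/v2;
    }
}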
I want to enable SSL on an EC2 instance. I know how to install a third-party SSL certificate, and I have also enabled SSL in the security group.
I just want to use a URL like ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com with https.
I couldn't find the steps anywhere.
It would be great if someone can direct me to some document or something.
Edit:
I have an instance on EC2 on which I have installed LAMP. I have also enabled HTTP, HTTPS, and SSH in the security group policy.
When I open the public DNS URL in a browser, I can see the web server running perfectly.
But when I add https to the URL, nothing happens.
Is there a step I am missing? I really don't want to use any custom domain on this instance because I will terminate it after a month.
For development, demos, and internal testing (which is a common case for me), you can achieve demo-grade HTTPS on EC2 with tunneling tools. Within a few minutes, especially for internal testing purposes, a tool like ngrok gives you HTTPS (demo grade, since traffic goes through the tunnel).
Tool 1: https://ngrok.com Steps:
Download ngrok to your EC2 instance: wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip (the link at the time of writing; you will see it on the ngrok home page once you log in).
Enable 8080, 4443, 443, 22, 80 in your AWS security group.
Register and log in to ngrok, then copy the command to activate it with your token: ./ngrok authtoken shjfkjsfkjshdfs (you will see it on their home page once you log in).
Run your plain HTTP (non-HTTPS) server (Node.js, Python, whatever) on EC2.
Run ngrok: ./ngrok http 80 (or a different port if your HTTP server runs on a different port).
You will get an https link to your server.
Tool 2: Cloudflare Warp
Alternatively, I think you can use an alternative to ngrok called Cloudflare Warp, but I haven't tried that.
Tool 3: localtunnel
A third alternative could be https://localtunnel.github.io which, as opposed to ngrok, can provide you with a subdomain for free. It's not permanent, but you can ask for a specific subdomain instead of a random string.
--subdomain request a named subdomain on the localtunnel server (default is random characters)
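A quick usage sketch, assuming the Node.js lt client is installed via npm and your server listens on port 80 (the subdomain name is just an example):

npm install -g localtunnel
lt --port 80 --subdomain my-demo-app
# prints an https URL for the requested subdomain, tunneled to your local port 80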
Tool 4: https://serveo.net/
It turns out that Amazon does not provide SSL certificates for their EC2 instances out of the box. I had skipped over the fact that they are just a virtual server provider.
To install an SSL certificate, even a basic one, you need to obtain it from a certificate authority and install it manually on your server.
I used startssl.com. They provide free basic SSL certificates.
Create a self-signed SSL certificate using openssl. Check this link for more information. (A minimal sketch of this and the next step follows below.)
Install that certificate on your web server. As you have mentioned LAMP, I guess it is Apache, so check this link for installing SSL on Apache.
If you stop and start your instance, you will get a different public DNS name, so be aware of this, or attach an Elastic IP address to your instance.
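A minimal sketch of those two steps on an Ubuntu/Apache setup (file paths and site names are assumptions, and a self-signed cert will still trigger a browser warning):

# generate a self-signed certificate and key, valid for one year
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/selfsigned.key \
    -out /etc/ssl/certs/selfsigned.crt

# point the SSL vhost at the files, e.g. in /etc/apache2/sites-available/default-ssl.conf:
#   SSLEngine on
#   SSLCertificateFile    /etc/ssl/certs/selfsigned.crt
#   SSLCertificateKeyFile /etc/ssl/private/selfsigned.key

sudo a2enmod ssl
sudo a2ensite default-ssl
sudo service apache2 restart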
But when I add https to the URL, nothing happens.
Correct, your web server needs to have an SSL certificate and private key installed to serve traffic over HTTPS. Once that is done, you should be good to go. Also, if you use a self-signed cert, your web browser will complain about an untrusted certificate. You can ignore that warning and proceed to access the web page.
You can enable SSL on an EC2 instance without a custom domain using a combination of Caddy and nip.io.
nip.io allows you to map any IP address to a hostname without the need to edit a hosts file or create rules in DNS management.
Caddy is a powerful open source web server with automatic HTTPS.
Install Caddy on your server
Create a Caddyfile and add your config (this config will forward all requests to port 8000)
<EC2 Public IP>.nip.io {
reverse_proxy localhost:8000
}
Start Caddy using the command caddy start
You should now be able to access your server at https://<EC2 Public IP>.nip.io
I wrote an in-depth article on the setup here: Configure HTTPS on AWS EC2 without a Custom Domain