What I want to achieve is:
Run the XMPP service on a separate subdomain, like xmpp.domain.com
At the same time, use usernames like john@domain.com (rather than john@xmpp.domain.com)
Have Jabber clients auto-detect the XMPP service URL xmpp.domain.com while using a username like john@domain.com
Use SSL correctly
Of course, my approach may not be the right one. If you have a better suggestion, please comment.
What I've done is:
Created Debian 8 server
Executed
apt-get update && apt-get dist-upgrade
apt-get -y install ejabberd
dpkg-reconfigure ejabberd
A hostname - xmpp.domain.com.
An administrative user - admin, plus the password (entered twice)
Placed the SSL PEM file for domain.com at /etc/ejabberd/ejabberd.pem
Added all the DNS records, along the lines of the following (illustrative placeholder values, not the originals):
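; illustrative XMPP SRV/A records for domain.com -- TTL, priority, weight, and IP are placeholders
_xmpp-client._tcp.domain.com. 3600 IN SRV 5 0 5222 xmpp.domain.com.
_xmpp-server._tcp.domain.com. 3600 IN SRV 5 0 5269 xmpp.domain.com.
xmpp.domain.com.              3600 IN A   203.0.113.10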
Then service ejabberd restart
Now when I try to register a new user like
ejabberdctl register admin domain.com 12345
It gives me an error message saying it's not allowed to register such a username. But it works when I enter
ejabberdctl register admin xmpp.domain.com 12345
The problem is...
So basically I can't use the username admin@domain.com while the server runs at xmpp.domain.com. What am I missing? Any suggestions?
Also, I'm a bit confused about the SSL config and the pem file. My SSL certificate currently covers www.domain.com and domain.com. Do I have to buy an SSL certificate for xmpp.domain.com as well?
In the dpkg-reconfigure step, you should have used domain.com instead of xmpp.domain.com. ejabberd only needs to know the domain it should use for JIDs; it doesn't need to know the domain it is actually running on.
Your SRV records and SSL certificate are correct: if you want to use admin@domain.com, you need a certificate for domain.com.
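Concretely, the served domain comes from the hosts option in the ejabberd config. A minimal sketch, assuming a YAML-based /etc/ejabberd/ejabberd.yml (older releases use Erlang terms in /etc/ejabberd/ejabberd.cfg instead):

# /etc/ejabberd/ejabberd.yml -- serve JIDs under domain.com
hosts:
  - "domain.com"
# older Erlang-style equivalent in /etc/ejabberd/ejabberd.cfg:
#   {hosts, ["domain.com"]}.

After a service ejabberd restart, ejabberdctl register admin domain.com 12345 should succeed.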
Related
I am a bit new to Google Compute Engine. I managed to get a web server with nginx working on my Google domain and installed WordPress; HTTP access was working. Now I want to get HTTPS working as well.
Since I didn't have SSL running, I ended up using Cloudflare: I made the necessary changes to my nginx server and changed the nameservers for my web server's IP address on Google Compute Engine. That works fine, although there are still some errors when accessing the IP address instead of the domain name ("400 Bad Request: No required SSL certificate was sent - nginx/1.18.0 (Ubuntu)").
I heard Google can do SSL on my Google domain, but I am really stuck with the documentation: https://cloud.google.com/appengine/docs/standard/python/securing-custom-domains-with-ssl?authuser=2#upgrading_to_managed_ssl_certificates. It talks about Google App Engine, and I haven't found documentation on applying SSL certificates to my Google Compute Engine instance. I did add a custom domain there, but it points to a different IP address than my web server on Google Compute Engine. That surely can't be the right way?
So, does anyone know how I can get SSL from Google working on my web server, which runs on a VM instance on Google Compute Engine?
(Note to myself: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04)
It is very easy to set up SSL on Compute Engine.
STEP 1: Domain names
Determine which domain names you want SSL certificates for. Typically you want two: the naked domain (example.com) and the www subdomain (www.example.com). Replace example.com with your actual domain name.
Note: Let's Encrypt will not issue SSL certificates for an IP address. This also means you cannot access your web server over SSL by specifying an IP address instead of a domain name; trying something like https://<your-ip-address> will generate an error.
STEP 2: Setup DNS
Point your DNS directly at your Compute Engine instance's reserved static IP address. At this point, do not use Cloudflare: Let's Encrypt needs to talk directly to your Nginx web server. Validate that each domain name is configured correctly and that you can access your site via HTTP (http://example.com and http://www.example.com).
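A quick way to validate both names, assuming dig and curl are available on your workstation:

# Each name should return the instance's static IP
dig +short example.com
dig +short www.example.com
# Each should answer over plain HTTP before you request certificates
curl -I http://example.com
curl -I http://www.example.com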
The following instructions are OS-dependent and are for Debian-based systems such as Debian and Ubuntu. There are similar steps for CentOS, Red Hat, etc.
STEP 3: Install Certbot
Certbot is the software agent for Let's Encrypt. It requires Python 3, which most Google Cloud instances already have installed.
Run the following commands on your VM instance:
sudo apt update
sudo apt upgrade -y
sudo apt install certbot python3-certbot-nginx
STEP 4: VPC Firewall
Make sure that ports 80 and 443 are allowed in the Google Cloud VPC firewall (see the Google Cloud documentation on using firewall rules).
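If they are not already open, a sketch using the gcloud CLI (the rule name is an assumption; restrict the source ranges to match your own policy):

# Allow inbound HTTP and HTTPS from anywhere
gcloud compute firewall-rules create allow-http-https \
    --allow=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0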
STEP 5: Issue the SSL Certificate
Run the following command on your VM instance. Replace example.com with your domain names.
sudo certbot --nginx -d example.com -d www.example.com
Summary
Your server now has SSL configured, and the certificate will auto-renew. Provided that you do not change the domain names or DNS settings, SSL will keep working.
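To confirm that auto-renewal is wired up correctly, Certbot provides a dry-run mode:

# Simulates a renewal against the staging endpoint; changes nothing on success
sudo certbot renew --dry-run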
In the future, you may decide to offload SSL certificates to another service such as Cloudflare or a Google HTTP(S) Load Balancer. I recommend understanding how to set up SSL directly on your instance so that encryption is end-to-end. Then you can decide on SSL-offloading, caching, load balancing, auto-scaling, and more options.
I developed an application for a client, which I host on a subdomain. The problem is that I don't own the main domain/website; they've added a DNS record pointing the subdomain at the IP where I host the app. Now I want to request a free & automatic certificate from Let's Encrypt, but when I try the handshake it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com: subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense, because I don't host that domain on my server. But if I try to generate it without the main domain I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious how I can set up a certificate without owning the main domain. I tried googling the issue but couldn't find any relevant results. Any help would be much appreciated.
First
You don't need to own the domain; you just need to be able to copy a file to the location serving that domain. (It sounds like you're all set there.)
Second
What tool are you using? The error message you gave makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try, swap --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug; otherwise your server could be blocked by Let's Encrypt for making too many bad requests. Just know that you'll have to delete the certificates issued by the staging environment and retry with the production URL, since the tool cannot tell which certificates are "production" and which are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
That will work as a cron job as well. You could run it daily; it will only renew the certificate after about 75 days. You could also add a cron job that sends the "update configuration" signal to your webserver (normally HUP or USR1) every few days so it starts using the new certificates without restarting (...or just have it restart).
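A sketch of such a crontab entry, reusing the webroot invocation from above with an Nginx reload (the schedule times and the reload choice are assumptions):

# /etc/cron.d/greenlock -- attempt renewal nightly, then reload Nginx
0 3 * * * root greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme
30 3 * * * root systemctl reload nginx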
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That runs expecting that you DO NOT have a webserver running on port 80, as it will start one temporarily just for the purpose of obtaining the certificate.
sudo is required for binding port 80 and for writing to root- and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead, if that user has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use Greenlock as your webserver, which would probably work great for the kind of simple vhosting you seem to be doing. Let me know if that's interesting to you and I'll make sure to keep you updated.
Fourth
Let's Encrypt also has an official client called certbot, which will likely work just as well, perhaps better; back in the early days it was easier for me to build my own than to use theirs, due to issues they have long since fixed.
What's important is the subdomain's A record: it should point to the IP address from which you are requesting the subdomain's certificate.
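A quick way to compare the two, assuming a public IP echo service such as ifconfig.me is reachable (the hostname is a placeholder):

# The A record...
dig +short subdomain.example.com
# ...should match the public IP of the machine requesting the certificate
curl -s ifconfig.me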
I have registered a free domain name from freenom.com and added nameservers from AWS Route 53. Now my domain <blabla>.ga successfully resolves to an EC2 Python Flask server, but I really can't figure out how to add SSL using Let's Encrypt. I am following https://ivopetkov.com/b/let-s-encrypt-on-ec2/ to SSL-ify my EC2. After running letsencrypt-auto, I enter the domain names and press Enter, then I get:
[ec2-user@ip-172-31-40-218 letsencrypt]$ cd /opt/letsencrypt/
[ec2-user@ip-172-31-40-218 letsencrypt]$ ./letsencrypt-auto
Requesting to rerun ./letsencrypt-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated) (Enter 'c' to cancel): iotserver.ga www.iotserver.ga
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for iotserver.ga
http-01 challenge for www.iotserver.ga
Cleaning up challenges
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
A similar question is asked here, but I've already done most of what is explained in both of its answers. Can anyone tell me what I am missing?
Try the following tutorials:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps
Make sure that you are able to access the web app without HTTPS first, then try to install SSL. As I can see, you are getting the following error:
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
There must be some configuration issue; the sketch below shows the usual fix. Please debug it and let me know.
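The error means Certbot's Apache plugin cannot find a port-80 virtual host whose ServerName matches the domain. A minimal sketch, assuming Amazon Linux paths (httpd, /etc/httpd/conf.d) and the domain names from the question:

# Create a bare port-80 vhost so Certbot can match and verify the domain
sudo tee /etc/httpd/conf.d/iotserver.conf > /dev/null <<'EOF'
<VirtualHost *:80>
    ServerName iotserver.ga
    ServerAlias www.iotserver.ga
    DocumentRoot /var/www/html
</VirtualHost>
EOF
sudo apachectl graceful   # reload Apache, then re-run letsencrypt-auto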
I'm having issues enabling TLS in Mattermost. On my server I configured a lot of virtual hosts alongside the Mattermost files. Over plain HTTP everything was working fine.
Today I tried to set up TLS and HTTPS. I followed the instructions at https://docs.mattermost.com/install/config-tls-mattermost.html. Now I get this:
Please notice the error: I'm trying to access domain1.mywebsite.com and the error is "its security certificate is signed by domain2.mywebsite.com". domain2.mywebsite.com is one of the websites configured as a virtual host in Apache.
I did not configure any virtual host for Mattermost, since I don't think any is needed (and it worked flawlessly without one, and without TLS). But how can I tell Mattermost (or the browser?) that the server for domain2.mywebsite.com is the same as for domain1.mywebsite.com?
I generated the certificates using Let's Encrypt with the standalone option (sudo certbot certonly --standalone -d domain1.mywebsite.com) and didn't move any files; I just enabled "UseLetsEncrypt": true in the config.json file.
Do you happen to have any idea about how I could fix this?
Thank you
Marco
You'll need to configure TLS on Apache, and you'll need a separate certificate for each virtual host.
Here is information that might help you: https://httpd.apache.org/docs/2.4/ssl/ssl_howto.html
Don't configure TLS on Mattermost if TLS is being handled by the proxy.
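A sketch of the missing vhost, assuming Mattermost listens on its default port 8065 and the certbot --standalone certificate paths from the question (mod_ssl, mod_proxy, and mod_proxy_http enabled; full Mattermost functionality also needs WebSocket proxying via mod_proxy_wstunnel, see the Mattermost docs):

<VirtualHost *:443>
    ServerName domain1.mywebsite.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/domain1.mywebsite.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/domain1.mywebsite.com/privkey.pem
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8065/
    ProxyPassReverse / http://127.0.0.1:8065/
</VirtualHost>

With Apache terminating TLS this way, revert "UseLetsEncrypt" to false in config.json so Mattermost keeps serving plain HTTP internally.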
I am trying to use knife from my laptop to connect to a newly configured Chef server hosted on AWS. I know what is quoted below points in the right direction, but I'm not sure how exactly to go about it.
If you are not able to connect to the server using the hostname ip-xx-x-x-xx.ec2.internal
you will have to update the certificate on the server to use the correct hostname.
I had this same problem. EC2 instances place their private IP into their hostname file, which causes Chef to self-sign a certificate for the internal IP. When you run knife ssl check you'll probably get an error message that looks like this:
ERROR: The SSL cert is signed by a trusted authority but is not valid for the given hostname
ERROR: You are attempting to connect to: 'ec2-x-x-x-x.us-west-2.compute.amazonaws.com'
ERROR: The server's certificate belongs to 'ip-y-y-y-y.us-west-2.compute.internal'
Connecting to the public IP is correct; however, you'll continue to get this error if you don't configure your Chef server to use your public DNS name when signing the cert.
EDIT: Chef's documentation used to have steps to correct this issue, but since I initially answered this question they have removed those steps from their tutorial. The following steps worked for me with Chef 12 and Ubuntu 16 on an EC2 instance (a condensed command sketch follows the list):
ssh onto your chef server
open your hostname file with the following command: sudo vim /etc/hostname
remove the line containing your internal IP, replace it with your public one, and save the file.
reboot the server with sudo reboot
run sudo chef-server-ctl reconfigure (this signs a new certificate, among other things)
Go back to your workstation and use knife ssl fetch followed by knife ssl check and you should be good to go.
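A condensed version of those steps, assuming the same Chef 12 / Ubuntu 16 EC2 setup (the public DNS name is a placeholder):

# On the Chef server: replace the internal hostname with the public DNS name
echo 'ec2-x-x-x-x.us-west-2.compute.amazonaws.com' | sudo tee /etc/hostname
sudo reboot
# after the reboot:
sudo chef-server-ctl reconfigure   # signs a new certificate, among other things

# On the workstation:
knife ssl fetch
knife ssl check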
What you could ALSO do is complete steps 1 to 4 before you even install Chef on the server:
Update the public IP on the Chef server
Run chef-server-ctl reconfigure on the server (no reboot needed)
Update the knife.rb on the workstation with the new IP address (see the sketch after this list)
Run knife ssl fetch on the Chef workstation
This should resolve the issue; to confirm, run knife client list
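For reference, the knife.rb line in question would look something like this (the URL and organization name are assumptions):

# ~/.chef/knife.rb on the workstation
chef_server_url 'https://ec2-x-x-x-x.us-west-2.compute.amazonaws.com/organizations/myorg'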
You can't connect to an internal IP (or a DNS name that points to an internal IP) from outside AWS; those are non-routable addresses.
Instead, connect to the public IP of the instance, if you have one.