Is it possible to get an SSL certificate for a port-forwarding VPN service?

In my example I am using the https://portmap.io VPN service, which is not exactly a pure VPN service but still uses VPN technology to bypass my ISP's restrictions, allowing port forwarding to my own home server running on my Android device.
So if I browse 193.161.193.99:1200, my website loads. Port 1200 is mapped to my local Python server running on port 1000, and port 1200 is assigned by the VPN provider.
However, if I try 193.161.193.99 without port 1200, the portmap VPN provider's official website loads, because that is the website's IP. So basically each user of this VPN service has their own port to work with.
Question: I don't have a public IP entirely under my own control, which I would need for an SSL certificate that requires file-upload verification by the CA (CSR). So, is it at all possible to get an SSL certificate using 193.161.193.99:1200?
Note: Services like zerossl.com will issue certificates for public IPv4 addresses, so it is not always essential to use an FQDN to get a cert.

Yes, this is possible. You will need a domain pointing to the VPN/portmap IP, and can then obtain an SSL certificate from Let's Encrypt for that domain. This can be your own domain or one provided by a dynamic DNS service such as Duck DNS.
I'll describe how I have done it with Docker and Duck DNS in detail:
Sign in to Duck DNS, create a subdomain and point it to the VPN/portmap IP, note the token at the top of the page.
Deploy a Docker container from LinuxServer.io's SWAG image.
Make sure to provide the required environment variables in your docker-compose.yml (or with the docker run command):
- VALIDATION=duckdns
- DUCKDNSTOKEN={your token}
- URL={yourdomain}.duckdns.org
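For reference, a minimal docker-compose.yml sketch along the lines of the LinuxServer.io docs (the subdomain, token and PUID/PGID values are placeholders you need to adjust):

    version: "3"
    services:
      swag:
        image: lscr.io/linuxserver/swag
        cap_add:
          - NET_ADMIN                      # required by the SWAG image
        environment:
          - PUID=1000                      # adjust to your host user/group
          - PGID=1000
          - TZ=Etc/UTC
          - URL=yourdomain.duckdns.org     # placeholder: your Duck DNS subdomain
          - VALIDATION=duckdns
          - DUCKDNSTOKEN=your-token        # placeholder: token from the Duck DNS page
        volumes:
          - ./swag-config:/config          # certificates land below this path
        ports:
          - 443:443
          - 80:80
        restart: unless-stopped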
Note: If you want everything behind your VPN, there is a great Docker container called gluetun that allows you to run the SWAG container behind your VPN.
You will find your SSL certificates in the /config/etc/letsencrypt/live/{yourdomain}.duckdns.org folder of the SWAG container. Use those for the website/service that is running behind your forwarded port.
The certificates will be renewed automatically 30 days before they expire. There is also a PKCS#12 file, privkey.pfx, which is needed for services like Emby. For more information on SWAG, see the LinuxServer.io docs. You may or may not need another container that updates the Duck DNS IP periodically; I'm not sure whether the SWAG container already does that.
All of this can of course be done without Docker and with your own domain. In that case you will need to map your domain or subdomain to the VPN IP in the DNS records section of your domain provider, and then use certbot to create certificates for that domain. Docker just automates the renewal part.
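A hedged sketch of that manual route, assuming certbot is installed and the domain already resolves to the VPN IP (the domain is a placeholder; the HTTP-01 challenge needs port 80 to be reachable from the internet, otherwise use a DNS-01 challenge instead):

    # obtain a certificate using certbot's built-in web server
    sudo certbot certonly --standalone -d yourdomain.example.com

    # optionally bundle key and cert as PKCS#12 for services that want a .pfx
    sudo openssl pkcs12 -export -out privkey.pfx \
        -inkey /etc/letsencrypt/live/yourdomain.example.com/privkey.pem \
        -in /etc/letsencrypt/live/yourdomain.example.com/cert.pem \
        -certfile /etc/letsencrypt/live/yourdomain.example.com/chain.pem

Renewal can then be verified with "sudo certbot renew --dry-run".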

Related

Can I get a trusted CA certificate for a local virtual host, e.g. https://myapp?

We have a web application (WAMP stack) on a local Windows server. Several dependencies and an Oracle DB run on this server, and the server is closed to outside internet traffic.
Clients access the app on the LAN using the server's IP, but we plan to create a virtual hostname in Active Directory to enable access by hostname across the entire local network.
We would like to secure the traffic and switch to HTTPS, so we need an SSL certificate, preferably from a trusted CA, to avoid any confusing warnings for the users.
Is there a way to get a trusted CA SSL certificate for a local host? I was thinking of getting a certificate for a public domain, say myapp.net, then mapping this domain to the actual IP of the local server running the app and installing the certificate in Apache... would that work?
Thank you for any ideas.
Alexander
You have several options:
The public CAs that your browser trusts by default will only sign keys for DNS names, and you can totally have a DNS name that is not accessible from the public internet (e.g. one that resolves to a local IP). If you use a CA like Let's Encrypt (domain validated), you will need to make the server publicly available during the key exchange/certification time, but you can change the IP immediately afterwards (*). Or simply use one of the other available validation techniques; those are typically paid.
As you're on an intranet, you might be in a situation where you can install your own internal trusted CA on your users' computers. In that case, you can mint such a certificate yourself. This is a common scenario when a proxy runs internally that also inspects HTTPS traffic.
And, of course, in case you can install trusted root CAs on computers, you can also install individual trusted keys/certificates for a single machine. But that seems not to be what you'd like to do.
So, for https://myapp you'll have to do some minting/installation yourself. For https://myapp.example.com, you have options with (already) trusted root CAs.
(*) that is for the commonly used and documented mode-of-operation. See Patrick's comment below
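For the second option (minting certificates yourself), here is a minimal openssl sketch, assuming a bash shell and OpenSSL 1.1.1 or newer; all file and host names are placeholders. The resulting ca.crt is what you would distribute to the users' trust stores:

    # 1. create the internal CA (valid ~10 years)
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
        -keyout ca.key -out ca.crt -subj "/CN=Internal Test CA"

    # 2. create a key and signing request for the internal host name
    openssl req -newkey rsa:2048 -nodes \
        -keyout myapp.key -out myapp.csr -subj "/CN=myapp"

    # 3. sign it with the CA; the SAN is required by modern browsers
    openssl x509 -req -in myapp.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out myapp.crt -days 825 \
        -extfile <(printf "subjectAltName=DNS:myapp")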

Why are my domain and SSL not working correctly from every place?

I have a domain purchased at 1and1 and set up on AWS EC2 with SSL and an Apache server.
Even though the domain points to the correct IP (I can see it using nslookup), it works from some places and not from others.
For example, from my workplace the domain does not reach the EC2 server at all and I just see an error page.
I launched a Windows EC2 instance at AWS to run a test, and from there everything is correct (the page loads and the SSL is valid).
From my client's computer the behavior is different again: it reaches the EC2 server, but the SSL is reported as invalid.
Has anyone faced the same problem?
The first thing you need to do is get an Elastic IP. The instance's public IP can change during a reboot etc., but Elastic IPs are static, so make sure you create one and assign it to your running instance.
Then:
1) Create a hosted zone and record sets. Documentation is here: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingHostedZone.html
2) Create a record set and add the values.
3) Add the Amazon name servers in the control panel of your domain provider.
4) Import the SSL certificate into AWS Certificate Manager (optional); a self-signed certificate will not work. Documentation is here: https://docs.aws.amazon.com/acm/latest/userguide/import-certificate-api-cli.html#import-certificate-api
5) Deploy the SSL certificate into the Apache server and configure the traffic for HTTPS (see the sketch after this list).
6) Open the AWS inbound traffic port. Documentation is here: https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/
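Step 5 could look roughly like this in an Apache virtual host; a sketch only, with the certificate paths as placeholders and mod_ssl assumed to be enabled:

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on
        SSLCertificateFile      /etc/ssl/certs/example.com.crt
        SSLCertificateKeyFile   /etc/ssl/private/example.com.key
        # on Apache >= 2.4.8 the chain can instead be appended to SSLCertificateFile
        SSLCertificateChainFile /etc/ssl/certs/example.com.chain.pem
        DocumentRoot /var/www/html
    </VirtualHost>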

TFS 2017, HTTPS Binding loses console permissions?

I've been trying all day to set up my instance of TFS2017 to work with HTTPS.
I've read the official setup guide, but it didn't help much.
My instance is joined to a domain, and the configuration was done with a user from the Administrators group. The domain account is properly referenced as an administration console user.
The setup was made with the default port 8080, and the domain account user can access the website as expected (hosted at http://machine-name:8080/tfs).
Now, when I change the IIS website binding to use HTTPS on port 443 with a valid wildcard certificate, set the hostname to tfs.mydomain.com, and require SSL, my user can no longer authenticate.
I point the TFS Public URL to https://tfs.mydomain.com/tfs.
I get prompted with the authentication box, but after many attempts the site just fails with a 401.
The tests are made from within the server environment to avoid firewall confusion.
My instance has two network cards on two separate networks; the first resolves to a public IP, the second to a private IP. I noticed the configuration works with the machine name, while it fails with the DNS name resolving to the public IP. Could this be a reason?
Thanks for your help
To perform these procedures, you must first meet some prerequisites, such as the required permissions. Please double-check this first. Also make sure you have set up the corresponding ports, as prompted below.
Important: The default port number for SSL connections is 443, but you must assign a unique port number for each of the following sites: Default Website, Team Foundation Server, Microsoft Team Foundation Server Proxy (if your deployment uses it), and SharePoint Central Administration (if your deployment uses SharePoint). You should record the SSL port number for each website that you configure. You will need to specify these numbers in the administration console for Team Foundation.
There is a very detailed tutorial about configuring HTTPS with SSL; please refer to Setting up HTTPS with Secure Sockets Layer (SSL) for Team Foundation Server.
To narrow down the issue with the IP, you could disable one of your two network cards and test with only one card at a time.
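If you prefer to script the binding change instead of using the IIS manager, here is a hedged PowerShell sketch (run elevated; the site name, host header and certificate lookup are placeholders for your environment):

    Import-Module WebAdministration

    # add the HTTPS binding for the TFS site
    New-WebBinding -Name "Team Foundation Server" -Protocol https `
        -Port 443 -HostHeader "tfs.mydomain.com"

    # attach the wildcard certificate from the machine store to the binding
    $cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -like "CN=*.mydomain.com*" } |
        Select-Object -First 1
    $binding = Get-WebBinding -Name "Team Foundation Server" -Protocol https
    $binding.AddSslCertificate($cert.Thumbprint, "my")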

HTTPS on Amazon EC2 for OwnCloud

I have a question which I hope somebody can answer for me.
My situation: I have an Ubuntu server running Apache2 on an Amazon EC2 instance, which is serving an OwnCloud installation.
My goal: I want to deploy HTTPS on this instance. I have already configured the security group to allow HTTPS traffic from anywhere (as the server should be accessible from anywhere on the internet). We already have a domain name, bar.com, registered at another domain hosting company, and we want to point foo.bar.com to this OwnCloud installation.
My questions:
1) Which IP address do I use to configure the DNS at this domain hosting company? The public IP address and public DNS name of the EC2 instance are renewed every time the instance restarts.
2) How do I generate the SSL certificate for the HTTPS configuration of Apache2? More specifically, which common name (CN) do I need to put in the certificate, given that the public DNS name of the EC2 instance changes on every restart? I think if I put the foo.bar.com CN in the certificate, the browser will throw a certificate error once the user gets redirected from foo.bar.com -> .compute.amazonaws.com, am I right?
In short: how do I deploy HTTPS on an EC2 instance at Amazon AWS with DNS at a third-party domain name service?
To deal with the changing public IP address you've got two options. The first and, for simple situations, best: go to the Elastic IP page, get an EIP and associate it with your instance; this association, and hence the public IP, will survive even a stop/start. You can even move the EIP over to a different machine if you need to. This option is very cheap (you are only charged for an EIP while it is not attached to a running server). You're then safe to point your DNS at the EIP. The alternative is much more powerful: use ELB (load balancing), but it also involves a fair amount more work to set up.
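With the AWS CLI, the EIP route might look like this (a sketch; the instance id is a placeholder, and the allocation id comes from the output of the first command):

    # allocate a new Elastic IP in the account
    aws ec2 allocate-address --domain vpc

    # associate it with the running instance
    aws ec2 associate-address --instance-id i-0123456789abcdef0 \
        --allocation-id eipalloc-0123456789abcdef0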
I assume that if you're asking about CNs you don't really want a "how to" on creating an SSL cert (please correct me if I'm wrong). For the CN you just use the domain name; it doesn't matter what IP address the name resolves to, since the cert is for the domain. If you have your own domain pointed at your EIP, you don't need to care about the machine's public hostname. A user will never see it.
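For illustration, a hedged openssl sketch that puts the domain from the question into the CN, and also into a SAN, which browsers require nowadays (needs OpenSSL 1.1.1+ for -addext):

    openssl req -newkey rsa:2048 -nodes \
        -keyout foo.bar.com.key -out foo.bar.com.csr \
        -subj "/CN=foo.bar.com" \
        -addext "subjectAltName=DNS:foo.bar.com"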

HTTPS with WCF for machine names instead of IP addresses

I'm trying to set up my self-hosted WCF service to function over HTTPS. I roughly did as described in this CodeProject article:
I created a certificate for me as CA
I created a certificate for SSL using the CA certificate from the first step.
I used netsh to register my machine's IP address and the service's port for HTTPS
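For reference, my step 3 registration looked roughly like this (IP, port, certificate hash and appid redacted to placeholders):

    netsh http add sslcert ipport=192.168.0.10:8443 ^
        certhash=0123456789abcdef0123456789abcdef01234567 ^
        appid={00112233-4455-6677-8899-aabbccddeeff}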
In step 3 I used my machine's public IP address, as I'm testing SSL communication from a mobile device within the same WLAN. Things work fine; the client can talk to the server.
Now I also want to test from my local machine using another client. So I did steps 1 to 3 again, but with CN=localhost and 127.0.0.1 in step 3. Things also work fine if I access the service through https://127.0.0.1. However, when I try to use https://localhost, I get an error that the registration with HTTP.SYS may be missing, and it is, as I had to use an IP address in step 3 above.
My question is: while it is not much of a problem to use the IP address instead of the machine name, what would I have to do to allow access to my WCF service by machine name?