We have a multi-tenant website where we use a wildcard SSL cert to give people a subdomain to our site. Some of our customers would like to use their own domain, but I'm concerned about how we would manage each customer's certificate as our business grows. Currently the certificate resides on the web server, which means loading all of the certs to each web server as we add them.
I'm aware we could introduce a dedicated SSL device in front of the web servers, but are there other options to improve the management of these certificates?
I'm a Microsoft Technical Evangelist and one of my partners had exactly the same challenge.
I have created sample source code that automates the management of SSL certificates for multiple domain bindings using an IIS 8 (Windows Server 2012) feature called SNI (Server Name Indication), which is essentially SSL host headers.
All you need to do is reuse my code (it's quite simple) and upload your customers' SSL certificates to blob storage, or write your own provider to fetch custom domains and certificates from your database.
I have posted a detailed explanation and a sample "plug & play" source-code at:
http://www.vic.ms/microsoft/windows-azure/multiples-ssl-certificates-on-windows-azure-cloud-services/
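The linked sample is C# targeting IIS/Azure, but the idea behind SNI is easy to show in any language: during the TLS handshake the server inspects the hostname the client asked for and picks the matching certificate. Below is a minimal, hedged sketch using Python's standard ssl module; the hostnames and certificate paths are made up, and a real deployment would fetch them from blob storage or a database as described above.

```python
import ssl

def make_ctx(certfile, keyfile):
    """Build a server-side TLS context for one certificate/key pair."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx

# Hypothetical per-customer certificates, keyed by the hostname the browser sends.
CONTEXTS = {
    "shop.customer-one.com": make_ctx("/certs/customer-one.pem", "/certs/customer-one.key"),
    "app.customer-two.net": make_ctx("/certs/customer-two.pem", "/certs/customer-two.key"),
}

def sni_callback(ssl_socket, server_name, default_context):
    """Runs during the TLS handshake with the hostname the client requested."""
    ctx = CONTEXTS.get(server_name)
    if ctx is not None:
        ssl_socket.context = ctx   # swap in that customer's certificate
    # otherwise fall through to the default (wildcard) certificate

# Default context with the existing wildcard certificate for your own subdomains.
default_context = make_ctx("/certs/wildcard.pem", "/certs/wildcard.key")
default_context.sni_callback = sni_callback
# default_context is then used by whatever TLS server wraps the listening socket.
```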
You could make your clients deal with their own certificates and run their own HTTPS site. They can serve a page containing a single frame with your content (over HTTPS). Users will see their domain and their certificate, and the browser will load the frame without complaining as long as the frame contents are also loaded over a valid HTTPS connection. I created a quick and dirty test page so you can see it in action.
This solution will 'break' the address bar, as it will keep showing the URL of the page containing the frame. Depending on the type of site you're running, this might be a showstopper.
I'm using Surge.sh to deploy a simple React app to a custom domain I bought from GoDaddy.com.
I've followed the instructions regarding custom domains on their site and got confirmation that my site was deployed successfully:
https://surge.sh/help/adding-a-custom-domain
On GoDaddy I've configured the CNAME and A records to point to Surge.
However, when I open the domain at https://codatheory.dev/ I receive an error with the code SSL_ERROR_BAD_CERT_DOMAIN.
I'm quite new to hosting sites on custom domains, so I'm sure I've misunderstood something. The certificate registered on the site is provided by surge.sh.
What configuration steps can I take to resolve this issue? Do I need to create a new certificate to be signed by a CA in order to use this domain, or have I missed something in my deployment?
Thanks!
SSL with Surge comes out of the box for *.surge.sh domains, and for those you can force a redirect from HTTP to HTTPS. However, for custom domains Surge does not offer SSL on the free tier, as they state explicitly; it is a feature of Surge Plus. To answer your question: yes, you could generate a certificate with a provider (e.g. https://letsencrypt.org/) and add it to Surge, but that falls under Surge Plus (not the free tier anymore).
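As a quick way to confirm the mismatch behind SSL_ERROR_BAD_CERT_DOMAIN, you can inspect which names the served certificate actually covers. This is a hedged sketch using Python's standard library (the hostname is the one from the question); if the names printed do not include your custom domain, the server is still presenting Surge's own *.surge.sh certificate.

```python
import socket
import ssl

host = "codatheory.dev"   # hostname from the question; any HTTPS host works

ctx = ssl.create_default_context()
ctx.check_hostname = False   # skip the name check so the handshake succeeds
# Chain validation stays on (CERT_REQUIRED), so getpeercert() returns parsed fields.

with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

sans = [value for key, value in cert.get("subjectAltName", ()) if key == "DNS"]
print("certificate was issued for:", sans)
# If your custom domain is not in this list, the browser's
# SSL_ERROR_BAD_CERT_DOMAIN is the expected result.
```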
If I were you, I would maybe try S3 with CloudFront; it does not cost that much if the traffic is not that high.
I'm building a serverless web application. My HTML, CSS and JavaScript are in a public storage location which my domain example.com points towards.
When my users navigate to my domain using their browser, their browser will GET these files from that location and then there is no further communication with example.com. The JavaScript application runs in the browser and communicates with a separate backend via HTTPS (in my case AWS, but could be e.g. Azure, Kinvey, BlueMix or others).
It therefore seems to me that there is no reason to encrypt the communication between my users' web browsers and example.com, i.e. I don't need to provide https://example.com, and doing so would provide no security benefit.
Am I correct?
The reason I ask is that I found at least two static hosting services which offer SSL support:
https://www.netlify.com/features#security
https://surge.sh/help/using-https-by-default
I am aware of the reasons for wanting HTTPS (described in the second link above and also at https://levels.io/default-to-https/ ...) but none of this seems to apply to my situation.
I believe this is a serious question because more applications will be built in this manner (the folks at http://serverlessconf.io/ certainly think so), and as long as the channel to the actual backend is secured there is no reason to secure the channel to what is essentially a read-only hard disk.
If you don't secure communication with example.com, then a man-in-the-middle attacker (e.g. a rogue Wi-Fi hotspot) could modify the HTML and JavaScript loaded by users.
One way to exploit this would be to change the JavaScript so that subsequent API requests are sent to attacker-controlled servers instead of yours, compromising any credentials or information transferred.
I am currently developing a web application to allow customers to place orders.
The way I have chosen to structure the application is to split it into two sub-applications:
1 backend application (the API) that serves only JSON content
1 front end application (AngularJS in my case) that takes an API URL as configuration and serves user content
Now, on the server, what I have done for testing is create two virtual hosts:
app.com
api.app.com
and linked the API to the frontend app.
The problem is that everything will be served over https and, in the current setup, I will need to buy either 2 SSL certificates, or 1 wildcard certificate.
The second solution would be to create a subdirectory in the frontend app (let's say /api) and copy the backend app into it. The advantage would be needing only a single SSL certificate and having everything under the same directory; /api would be an .htaccess redirect to the backend API.
I think the "cleanest" solution would be to split the two apps completely and get a wildcard SSL certificate covering both, but I'd like to hear from anyone with experience of whether one solution is better than the other.
The advantage of combining is that you will get to avoid CORS. CORS isn't that bad, but it's another complication. That being said, if you want to expose this to the outside world (allow other web pages to use it), you might want to go through that process anyway.
If you aren't looking to actually expose your API to third parties, but just want to keep your layers separate, then I would look at either combining or proxying. I've used this architecture to put my services completely behind the firewall and used mod_proxy or the like to serve my API through my web server. This is useful as it limits the exposure of your API and solves CORS issues in one go.
If you really want to use SSL between your web server and your API server, you can use a self-generated client certificate between the two.
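If you go that mutual-TLS route, the call from the web tier just has to present the client certificate and trust your own CA instead of the system bundle. Here is a minimal sketch with Python's requests library; the URL and file paths are hypothetical, and the same idea applies to whatever HTTP client your server-side code actually uses.

```python
import requests

# Hypothetical URL and file paths: the client cert/key pair lives on the web
# server, and internal-ca.crt is the self-generated CA that signed the API
# server's certificate.
API_URL = "https://api.app.com/orders"

response = requests.get(
    API_URL,
    cert=("/etc/ssl/private/client.crt", "/etc/ssl/private/client.key"),  # present the client certificate
    verify="/etc/ssl/certs/internal-ca.crt",                              # trust only our own CA
    timeout=10,
)
response.raise_for_status()
print(response.json())
```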
Background:
Imagine a website, visible to the world, https://www.example.com, with a static IP address, 1.1.1.1. This site is hosted on an Apache server and already possesses an SSL server certificate.
On the other hand, inside a protected internal network, not visible to the world, a server (https://www.myinternalserver.com), with a static IP address (2.2.2.2), also running Apache, runs some internal web-based applications.
A static IP address (3.3.3.3) that maps to a subdomain (myapps) of the external site (https://myapps.example.com) serves as the entry point to the server where the internal web-based applications reside.
A firewall that protects the internal network does the redirect/proxying so all external traffic going to 3.3.3.3 is redirected internally to 2.2.2.2.
The firewall also limits all external traffic so that any calls going to 3.3.3.3 must originate at 1.1.1.1, in essence making the external website (https://www.example.com) the only authorized caller of the internal server (https://www.myinternalserver.com).
Scenario:
With this infrastructure in place, I can make REST calls from the external website into the internal network and send back data to use in the pages. So, in this scenario, the external site is the client and the internal application, the server.
Question:
But beyond that, I want the server in the internal network to issue an "SSL client certificate" that would be "installed" (I don't know if that is the correct term) on the external website, so that all calls from the external site would have to be authenticated against this certificate.
How do I accomplish this?
Breaking the question:
I know that the question above is very broad, so let me try to break it into three (not so) "smaller" questions:
1 - How do I create the key/certificate? Using OpenSSL and some online recipes (this is one of them: http://www.impetus.us/~rjmooney/projects/misc/clientcertauth.html), I was able to generate the certificate file and learned (or so I believe) what I have to do with it and what to change in the httpd.conf file. In any case, I would like to feel more confident about what I have done, so any suggestions/guidance here would be highly appreciated. For example, is the recipe I used any good?
2 - How to "install/transfer" this certificate to the external site? Do I simply copy/send one of the files created when generating the certificate? If so, which one? Where specifically does it go in the client server (external site)? Do they have to do anything at their end? If not, what is the process? I tried to contact the hosting company but I don't know if Icouldn't explain it to them or if they don't have experience with "SSL Client Certificate". All they told me is that there's already an SSL Certificate installed (SSL Server Certificate). They don't even seem to know what a "SSL Client Certificate" is.
3 - Once the certificate is in place, what can I do to guarantee that ALL calls to the internal server, by default, carry the certificate, without the need to code it into each API I create? I know very little about certificates, so it might be that this happens "by default", but I have read online about certificates being "embedded" in the header of the API call, so I just want to be sure.
Thank you.
After some more research, this is what I found...
1 - How do I create the key/certificate?
I had to try other recipes and use a combination of them to get what I wanted. What I learned is that I have to create a CA certificate first, and then generate the server and client certificates based on it. So look for recipes that cover all three certificates: CA, server, and client.
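For reference, the same CA-then-client flow that the OpenSSL recipes walk through can also be expressed programmatically. This is only an illustrative sketch using Python's cryptography package, not the exact recipe from the link; the names, validity periods, and file names are assumptions.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID, ExtendedKeyUsageOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def make_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

now = datetime.datetime.utcnow()

# 1. The CA certificate (self-signed); everything else chains to this.
ca_key = make_key()
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2. The client certificate, signed by the CA. A server certificate is built the
#    same way, with SERVER_AUTH instead of CLIENT_AUTH in its extended key usage.
client_key = make_key()
client_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .issuer_name(ca_cert.subject)
    .public_key(client_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH]), critical=False)
    .sign(ca_key, hashes.SHA256())
)

# 3. Write out the PEM files to hand to the external site.
with open("ca.crt", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
with open("client.crt", "wb") as f:
    f.write(client_cert.public_bytes(serialization.Encoding.PEM))
with open("client.key", "wb") as f:
    f.write(client_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```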
2 - How do I install/transfer this certificate to the external site?
You simply copy the necessary files (client/CA) to a safe place in your share of the external site, somewhere outside the "www" tree.
3 - Once the certificate is in place, what can I do to guarantee that ALL calls...
Well, here is what I did.
I "objectified my API call using php/libcURL and place it too, outside the "www" tree. For any developer in my site to use it, all they have to do is create an instance of the object and make the call by passing the URL as a parameter. In other words, you don't install the certificate. Instead, you make a call to the certificates each time you make a call to the internal server.
I hope it helps someone out there.
Could someone explain the steps one must take to expose an Azure application (example.cloudapp.net) on a custom domain (service.example.com) when we want to use a secured connection? So that users browse to https://service.example.com, see it as a certified, trusted domain, and can safely access the application.
Right now, I think that
1) we need a domain (and subdomain) with a static IP from a service provider
2) we need a certificate from a CA for our domain
But I'm not quite sure how the connection between our domain and cloudapp.net should be made. I have found many examples and blog posts, but they explain either how to install a certificate on an Azure application or how to expose the application on a custom domain (without the certificate).
This sounds like a basic requirement, so I'd expect a rather simple solution to exist.
Thanks!
Look at this blog entry
Custom Domain Names in Windows Azure
Basically, you need to buy a domain name and add a CNAME record to its DNS zone. The remaining part would be to buy an appropriate SSL certificate for your site.
Here is a stop-gap for custom domains: http://www.bradygaster.com/running-ssl-with-windows-azure-web-sites
I do not believe that Azure currently supports using a certificate with a custom domain (see request for feature). In the meantime, you can use CORS.