I'm interested in hearing what others do when, in a given application, some pages need to be secure and others don't. Take any solution off the table that requires a separate domain/subdomain. In this case, all calls, secure or insecure, will link to the same domain. I see a few options:
The ham-fisted, just secure it all approach.
A URI rewrite solution that ensures the pages that need to be secure are accessed via the https protocol and either ignores other pages or, alternatively, forces those back to standard http (see the sketch after this list).
An application-centric approach where each link is responsible for knowing whether the page it points to needs to be secure and applying the correct protocol. In this solution, all links would have to be fully qualified.
A laissez-faire version of the application-centric approach where links to secure pages are fully qualified and links to other pages are not. In this case, the protocol would be inherited for pages not handled explicitly, and inconsequential pages may be accessed via https.
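For illustration, option 2 might look something like this in .htaccess (mod_rewrite assumed; the checkout/account paths are made up, and it presumes the web server itself terminates SSL):

    # Force the sensitive areas onto https
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(checkout|account)(/|$) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    # Optionally push everything else back to plain http
    RewriteCond %{HTTPS} on
    RewriteCond %{REQUEST_URI} !^/(checkout|account)
    RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]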
I've used several of these from time to time, but they all have drawbacks. What's everyone else doing in these situations? Is there another path I haven't considered?
UPDATE:
vartec's answer below made me realize that I'd left out one critical piece of information. In my network config, all SSL-handling is taken care of at the load balancer level. The LB, then, communicates with the web server cluster via port 80. As a result, the applications themselves have no idea whether traffic arrived securely. All they see is a port 80 connection.
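If the LB happens to inject a header such as X-Forwarded-Proto (an assumption on my part; I'd have to confirm what ours actually sends), the web servers could still key off the original scheme with something like:

    # Assumes the load balancer sets X-Forwarded-Proto; paths are the same made-up examples as above
    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} !https
    RewriteRule ^(checkout|account)(/|$) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]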
Thanks.
I use a mixture of #4 and #2: try to specify absolute URLs where possible when I need to switch protocols, and implement server-side redirection to catch any links I haven't used absolute URLs on (or if someone accesses the URL directly, not by following a link).
In my view, the one essential thing is that the pages which need to be accessed securely (form submissions etc.) are accessed securely, and for that I use Apache's SSLRequireSSL directive. It makes it easy to verify to myself that certain pages will never be accessed except over SSL.
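Roughly, the relevant bits of configuration look like this (the /checkout path is only an example):

    # Refuse any non-SSL access to the sensitive area (mod_ssl)
    <Location "/checkout">
        SSLRequireSSL
    </Location>

    # And a convenience redirect so stray http links land on https instead of a 403 (mod_rewrite)
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^/?checkout(/.*)?$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]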
I'm for the ham-fisted secure it all approach, but then you took my real solution off the table by (strangely to my mind) excluding domain/subdomain solutions. Errors in securing the site are far more dangerous than a bit of processing overhead.
We have our main site, which is insecure (but mainly marketing) and then we have the application site which is a different subdomain. Simple, easy and effective. Why take that option off the table?
Application-centric approach, where the controller for each page knows whether it has to be secure. If it needs to be secure but is accessed via insecure http, redirect to https, passing along all of the parameters.
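A rough sketch of what that controller check could look like in PHP (the class and property names here are hypothetical):

    <?php
    // Hypothetical base controller: each page controller declares whether it must be secure.
    abstract class Controller
    {
        protected $requiresSsl = false;   // a secure controller overrides this with true

        public function enforceProtocol()
        {
            $isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';

            if ($this->requiresSsl && !$isHttps) {
                // Redirect to https, keeping the path and all query parameters intact
                header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
                exit;
            }
        }
    }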
I'm building a serverless web application. My HTML, CSS and JavaScript are in a public storage location which my domain example.com points towards.
When my users navigate to my domain using their browser, their browser will GET these files from that location and then there is no further communication with example.com. The JavaScript application runs in the browser and communicates with a separate backend via HTTPS (in my case AWS, but could be e.g. Azure, Kinvey, BlueMix or others).
It therefore seems to me that there is no reason to encrypt the communication between my users' web browsers and example.com, i.e. I don't need to provide https://example.com, and doing so would provide no security benefit.
Am I correct?
The reason I ask is that I found at least two static hosting services which offer SSL support:
https://www.netlify.com/features#security
https://surge.sh/help/using-https-by-default
I am aware of the reasons for wanting HTTPS (described in the second link above and also at https://levels.io/default-to-https/ ...) but none of this seems to apply to my situation.
I believe this is a serious question because more applications will be built in this manner (the folks at http://serverlessconf.io/ certainly think so), and as long as the channel to the actual backend is secured there is no reason to secure the channel to what is essentially a read-only hard disk.
If you don't secure communication with example.com then a man-in-the-middle attacker (e.g. a rogue wifi hotspot) could modify the HTML and JavaScript loaded by users.
One way to exploit this would be to change the JavaScript so that subsequent API requests are sent to attacker-controlled servers instead of yours, compromising any credentials or information transferred.
I am currently developing a web application to allow customers to place orders.
The way I have chosen to handle the application structure is to split the app into two sub-applications:
1 backend application (the API) that serves only JSON content
1 front-end application (AngularJS in my case) that takes an API URL as configuration and serves user content
Now on the server, what I have done for testing, is creating 2 virtual hosts:
app.com
api.app.com
and linked the API to the frontend app.
The problem is that everything will be served over https and, in the current setup, I will need to buy either 2 SSL certificates, or 1 wildcard certificate.
The second solution would be to create a subdirectory on the frontend app (let's say /api) and copy the backend app into it. The advantage would be to get only one single SSL certificate and have everything on the same directory; the /api would be an .htaccess redirect to the backend api.
I think that the "cleanest" solution would be to split the two apps completely and get a wildcard SSL certificate for both, but I'd like to hear if anyone has experience with whether one solution is better than the other.
The advantage of combining is that you will get to avoid CORS. CORS isn't that bad, but it's another complication. That being said, if you want to expose this to the outside world (allow other web pages to use it), you might want to go through that process anyway.
If you aren't looking to actually expose your API to third parties, but just want to keep your layers separate, then I would look at either combining or even proxying. I've used this architecture to put my services completely behind the firewall and use mod_proxy or the like to serve my API through my web server. This is useful as it limits the exposure of your API and solves CORS issues in one go.
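With Apache's mod_proxy, for instance, the mapping can be as small as this (the internal hostname and port are placeholders):

    # Requires mod_proxy and mod_proxy_http; serves the API from the same origin as the front end
    ProxyRequests Off
    ProxyPass        /api/ http://internal-api.local:8080/
    ProxyPassReverse /api/ http://internal-api.local:8080/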
If you really want to use SSL between your web server and your API server, you can use a self-generated client certificate between your web server and your API server.
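On the Apache side that's only a couple of extra proxy directives (the certificate path is a placeholder; the file holds the concatenated PEM client certificate and key):

    # Speak TLS to the backend and present a client certificate to it
    SSLProxyEngine On
    SSLProxyMachineCertificateFile /etc/apache2/ssl/api-client.pem
    # ...and point ProxyPass at https://internal-api.local:8443/ instead of plain http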
I have a question regarding CloudFlare's new Flexible SSL. I am on a free account there, so I figured I'd ask the community here before submitting a support ticket (since they don't appear to have a forum).
How do I properly handle a forced SSL redirect? I want all traffic to my site to use SSL, but right now it's bypassed. CloudFlare is enabled, and manually going to https:// works perfectly, but what is the "proper" way for forcing SSL? Do I need to use my domain registrar to redirect all requests to https? Not a problem if that's the case, I know how to do that, but I don't know if that's the "proper" way.
You could actually use PageRules to force http:// to https://
You should also make sure you don't have any mixed content issues.
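If you'd rather enforce the redirect at your origin instead of (or in addition to) a Page Rule, keep in mind that with Flexible SSL the origin sees plain HTTP, so a rewrite keyed on %{HTTPS} will loop. Keying on the forwarded-protocol header avoids that; a sketch, assuming an Apache origin with mod_rewrite:

    # CloudFlare forwards the visitor's scheme in X-Forwarded-Proto
    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} =http
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]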
Note: We don't provide a forum because we don't want people sharing sensitive information (server IPs, etc.) in a public arena. Everything is handled via support for that reason.
Is it usually advisable to install a single-domain SSL certificate on the main domain (domain.com) and use .htaccess to go in and out of SSL, or on a subdomain such as secure.domain.com? I know there are different needs for different sites, but I'm asking about the average website's needs. E.g., if a website owner wants a secure shopping cart for their customers, should they use domain.com/secure and force SSL, or have SSL on secure.domain.com?
I would install it on the main domain name. It sounds better to put it on a subdomain, but then you technically have a whole separate website to maintain, and that could count against you with search engines. Also, you'd need another SSL certificate if you want anything secure on your main site.
Depending on your back-end technology (e.g. .NET, ASP, PHP), it only takes a couple lines of code to check the page request and redirect the user to the proper page. For example, if a user goes to http://www.domain.com/secure you can redirect the request to the proper secure page (https://www.domain.com/secure) and vice versa.
.htaccess is an older technology and can be very cumbersome to use.
We're considering setting up a subdomain gateway.domain.com where that subdomain will process all of our payments to authorize.net from possibly multiple sections of our site, our internal and external systems alike. I know it would need SSL, and I'm guessing I should accept $_POST only from a restricted list of URLs and apply strict data validation.
I'm wondering what your thoughts are on this. Are there any security risks that I'm not thinking of?
Putting it on a subdomain doesn't have any security issues associated with it in concept, since where the payments are handled on your website really doesn't mean anything as far as payment processing goes. All the usual security issues still apply regardless of where you put it on your website.
There are also no real benefits to this, other than, perhaps, that you only need to get an SSL certificate for that subdomain, assuming you don't need it anywhere else on your website. But that's barely a benefit, if one at all.