I am currently developing a web application to allow customers to place orders.
The way I have chosen to structure the application is to split it into two sub-applications:
1 backend application (the API) that serves only JSON content
1 front-end application (AngularJS in my case) that takes an API URL as configuration and serves user content
Now, on the server, what I have done for testing is create two virtual hosts:
app.com
api.app.com
and linked the API to the frontend app.
The problem is that everything will be served over HTTPS and, with the current setup, I would need to buy either two SSL certificates or one wildcard certificate.
The second solution would be to create a subdirectory on the front-end app (let's say /api) and copy the backend app into it. The advantage would be needing only a single SSL certificate and having everything in the same directory; /api would be an .htaccess redirect to the backend API.
I think that the "cleanest" solution would be to split the two apps completely and get a wildcard SSL certificate covering both, but I'd like to hear from anyone with experience whether one solution is better than the other.
The advantage of combining is that you avoid CORS. CORS isn't that bad, but it is another complication. That said, if you want to expose the API to the outside world (allow other web pages to use it), you might want to go through that process anyway.
If you aren't looking to actually expose your API to third parties, but just want to keep your layers separate, then I would look at either combining or proxying. I've used this architecture to put my services completely behind the firewall and use mod_proxy or the like to serve my API through my web server. This is useful because it limits the exposure of your API and solves the CORS issue in one go.
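To make the proxy option concrete, here is a minimal Apache sketch, assuming (hypothetically) that the API listens internally on 127.0.0.1:8080; the certificate paths are placeholders, and mod_proxy plus mod_proxy_http must be enabled:

    <VirtualHost *:443>
        ServerName app.com
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/app.com.crt
        SSLCertificateKeyFile /etc/ssl/private/app.com.key

        # Forward /api/ to the internal API server. Browsers only ever
        # talk to app.com, so one certificate suffices and no CORS
        # headers are needed.
        ProxyPass        /api/ http://127.0.0.1:8080/
        ProxyPassReverse /api/ http://127.0.0.1:8080/
    </VirtualHost>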
If you really want to use SSL between your web server and your API server, you can use a self-generated client certificate between the two.
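As a rough sketch of that last option (file names are made up), you could generate the certificate with OpenSSL and configure the API server to require it:

    # Self-signed client certificate plus private key, valid for one year:
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=webserver-client" \
        -keyout webserver-client.key -out webserver-client.crt
    # The API server keeps a copy of webserver-client.crt and verifies it
    # on every connection (e.g. with SSLVerifyClient in Apache).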
Related
I'm building a serverless web application. My HTML, CSS and JavaScript are in a public storage location which my domain example.com points towards.
When my users navigate to my domain using their browser, their browser will GET these files from that location and then there is no further communication with example.com. The JavaScript application runs in the browser and communicates with a separate backend via HTTPS (in my case AWS, but could be e.g. Azure, Kinvey, BlueMix or others).
It therefore seems to me that there is no reason to encrypt the communication between my users' web browsers and example.com, i.e. I don't need to provide https://example.com, and doing so would provide no security benefit.
Am I correct?
The reason I ask is that I found at least two static hosting services which offer SSL support:
https://www.netlify.com/features#security
https://surge.sh/help/using-https-by-default
I am aware of the reasons for wanting HTTPS (described in the second link above and also at https://levels.io/default-to-https/ ...) but none of this seems to apply to my situation.
I believe this is a serious question because more applications will be built in this manner (the folks at http://serverlessconf.io/ certainly think so), and as long as the channel to the actual backend is secured there is no reason to secure the channel to what is essentially a read-only hard disk.
If you don't secure communication with example.com, then a man-in-the-middle attacker (e.g. a rogue Wi-Fi hotspot) could modify the HTML and JavaScript loaded by users.
One way to exploit this would be to change the JavaScript so that subsequent API requests are sent to attacker-controlled servers instead of yours, compromising any credentials or information transferred.
Background:
Imagine a website, visible to the world, https://www.example.com, with a static IP address, 1.1.1.1. This site is hosted on an Apache server and already possesses an SSL server certificate.
On the other hand, inside a protected internal network, not visible to the world, a server (https://www.myinternalserver.com), with a static IP address (2.2.2.2), also running Apache, runs some internal web-based applications.
A static IP address (3.3.3.3), that maps to a subdomain (myapps) of the external site (https://myapps.example.com) serves as an entry point to the server where the internal web-based applications reside.
A firewall that protects the internal network does the redirect/proxying so all external traffic going to 3.3.3.3 is redirected internally to 2.2.2.2.
The firewall also limits all external traffic, so any calls going to 3.3.3.3 must originate from 1.1.1.1, in essence making the external website (https://www.example.com) the only authorized caller of the internal server (https://www.myinternalserver.com).
Scenario
With this infrastructure in place, I can make REST calls from the external website into the internal network and send back data to use in the pages. So, in this scenario, the external site is the client and the internal application, the server.
Question:
But beyond that, I want the server in the internal network to issue an "SSL client certificate" that would be "installed" (I don't know if this is the correct term) in the external website, so that all calls from the external site would have to be authenticated against this certificate.
How do I accomplish this?
Breaking the question:
I know that the question above is very broad, so let me try to break it into three (not so) "smaller" questions:
1 - How do I create the key/certificate? Using OpenSSL and some online recipes (this is one of them: http://www.impetus.us/~rjmooney/projects/misc/clientcertauth.html), I was able to generate the certificate file and learned (or so I believe) what I have to do with it and what to change in the httpd.conf file. In any case, I would like to feel more secure about what I have done, so any suggestions/guidance here would be highly appreciated. For example, is the recipe I used any good?
2 - How do I "install/transfer" this certificate to the external site? Do I simply copy/send one of the files created when generating the certificate? If so, which one? Where specifically does it go on the client server (external site)? Do they have to do anything at their end? If not, what is the process? I tried to contact the hosting company, but I don't know whether I couldn't explain it to them or they simply don't have experience with SSL client certificates. All they told me is that there's already an SSL certificate installed (an SSL server certificate). They don't even seem to know what an "SSL client certificate" is.
3 - Once the certificate is in place, what can I do to guarantee that ALL calls to the internal server, by default, come with the certificate, without the need to code it into each API I create? I know very little about certificates, so it may be that this always happens "by default", but I read online about certificates that are "embedded" in the header of the API call, so I just want to be sure.
Thank you.
After some more research, this is what I found...
1 - How do I create the key/certificate?
I had to try other recipes and use a combination of them to get what I wanted. What I learned is that I have to create a CA certificate first and then generate the server and client certificates based on it. So look for recipes that cover all three certificates: CA, server, and client.
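A condensed sketch of such a recipe (file names are placeholders; real recipes also set subject details and extensions):

    # 1. CA key and self-signed CA certificate:
    openssl genrsa -out ca.key 4096
    openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

    # 2. Server key, signing request, and CA-signed certificate:
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -out server.csr
    openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out server.crt

    # 3. Client key and certificate, signed by the same CA:
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -out client.csr
    openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out client.crt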
2 - How do I install/transfer this certificate to the external site?
You simply copy the necessary files (the client and CA certificates) to a safe place in your share of the external site, outside the "www" tree.
3 - Once the certificate is in place, what can I do to guarantee that ALL calls...
Well, here is what I did.
I "objectified my API call using php/libcURL and place it too, outside the "www" tree. For any developer in my site to use it, all they have to do is create an instance of the object and make the call by passing the URL as a parameter. In other words, you don't install the certificate. Instead, you make a call to the certificates each time you make a call to the internal server.
I hope it helps someone out there.
Let's say I'm running a dedicated server with ownCloud and Roundcube on it. My first idea was to protect those URLs with some kind of reverse proxy. However, I would like to make it more secure and implement two-factor authentication.
The idea is to redirect clients to a login page (implemented with Play Framework); once the user is authenticated, they are free to use ownCloud or Roundcube.
I have been thinking about this problem for a while now, here are my thoughts:
Use the Play router to filter protected pages
redirect to a login page built with Play
[possible solution: once authenticated, redirect requests to an internal web server running on a different port that cannot be accessed from outside; see the sketch below]
The main challenge is that ownCloud is a PHP app running on Apache, so I need some magic to talk to the Apache server (running Play with Apache as a front end is not excluded). This solution needs to be somewhat generic so that it can be reused for other apps in the future.
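To illustrate the "different port" idea, a rough firewall sketch (interface and port numbers are purely hypothetical: Play on 9000, the internal Apache on 8080):

    # Only the public entry point is reachable from outside; the internal
    # ports are dropped on the external interface.
    iptables -A INPUT -i eth0 -p tcp --dport 443  -j ACCEPT
    iptables -A INPUT -i eth0 -p tcp --dport 9000 -j DROP
    iptables -A INPUT -i eth0 -p tcp --dport 8080 -j DROP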
I hope my idea is all clear, we can see this configuration as a private backend (with applications running in different environments) for a blog.
The question is: do you think this is the best way to go, considering how Play works and the configuration I want to implement?
Thanks!
I have two web roles in a cloud service: my API and my web client. I'm trying to set up SSL for both. My question is: do I need two SSL certificates? Do I need two domain names?
The endpoint for my API is my.ip.add.ress. The endpoint for my web client is my.ip.add.ress:8080.
I'm not sure how to add the DNS entries for this, as there is nowhere for me to input the port number (which, I have learned, is because ports are out of the scope of the DNS system).
What am I not understanding? This seems to be a pretty standard scenario with Azure Cloud Services (it is set up this way in the example project in this tutorial, for instance: http://msdn.microsoft.com/en-us/library/dn735914.aspx), but I can't find anywhere that explains explicitly how to handle it.
First, you are right that DNS does not handle port numbers. For your case, you can simply use one SSL certificate for both endpoints and give the two endpoints the same domain name. Based on which port the user request arrives on, it will be routed to the correct endpoint (API vs. web client). Like you said, this is a relatively common scenario. There is no need to complicate things.
Let's assume you have one domain, www.dm.com, pointing to the IP address. To access your web API, your users need to hit https://www.dm.com, without a port number, which defaults to 443. To access your web client, they need to hit https://www.dm.com:8080. If you want users to use the default port 443 for both the web API and the web client, you need to create two cloud services instead of one, with the web API on one and the web client on the other. Billing-wise, you will be charged the same as for one cloud service.
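For reference, a sketch of how the two endpoints might share one certificate in ServiceDefinition.csdef (an excerpt only; role and certificate names are illustrative):

    <!-- Both endpoints reference the same certificate. -->
    <WebRole name="Api">
      <Endpoints>
        <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SharedSsl" />
      </Endpoints>
      <Certificates>
        <Certificate name="SharedSsl" storeLocation="LocalMachine" storeName="My" />
      </Certificates>
    </WebRole>
    <WebRole name="WebClient">
      <Endpoints>
        <InputEndpoint name="HttpsIn" protocol="https" port="8080" certificate="SharedSsl" />
      </Endpoints>
      <Certificates>
        <Certificate name="SharedSsl" storeLocation="LocalMachine" storeName="My" />
      </Certificates>
    </WebRole>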
Are there any reasons you want two different domains and, in turn, two SSL certificates? If so, it is still possible. Depending on your requirements, you may have to add extra logic to block requests coming from the other domain.
We have a multi-tenant website where we use a wildcard SSL certificate to give people a subdomain on our site. Some of our customers would like to use their own domain, but I'm concerned about how we would manage each customer's certificate as our business grows. Currently the certificates reside on the web server, which means loading all of the certificates onto each web server as we add them.
I'm aware we could introduce a dedicated SSL device in front of the web servers, but are there other options to improve the management of these certificates?
I'm a Microsoft Technical Evangelist and one of my partners had exactly the same challenge.
I have created sample source code that automates and manages SSL certificates for multiple domain bindings using a new IIS 8 (Windows Server 2012) feature called SNI, which works much like SSL host headers.
All you need to do is reuse my code (it's quite simple) and upload your custom SSL certificates to blob storage, or you can write your own provider to fetch custom domains and certificates from your database.
I have posted a detailed explanation and sample "plug & play" source code at:
http://www.vic.ms/microsoft/windows-azure/multiples-ssl-certificates-on-windows-azure-cloud-services/
You could make your clients deal with their own certificates and have them run their own HTTPS site. They can serve a page containing a single frame with your content (over HTTPS). The users will see their domain and their certificate, and the browser will load the frame without complaining as long as the frame contents are also loaded over a valid HTTPS connection. I created a quick and dirty test page so you can see it in action.
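A minimal version of such a page, served from the customer's own domain (all URLs are placeholders):

    <!DOCTYPE html>
    <html>
      <head>
        <title>Customer Site</title>
        <style>html, body, iframe { margin: 0; width: 100%; height: 100%; border: 0; }</style>
      </head>
      <body>
        <!-- The frame content is loaded from your multi-tenant site over
             its own valid HTTPS connection. -->
        <iframe src="https://customer.yourapp.example/"></iframe>
      </body>
    </html>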
This solution will 'break' the address bar, as it will keep the URL of the page containing the frame. Depending on the type of site you're running, this might be a showstopper.