I'm reading an article about reverse proxies. Among the benefits listed are:
Enable HTTPS support
Gzip responses
I am wondering if I should concern myself with these if I am leveraging Firebase Hosting. I wasn't able to find any information on these topics within their documentation. In short, do I need a reverse proxy with Firebase Hosting?
Firebase Hosting already uses HTTPS and gzips most responses.
Even if it didn't, there is no requirement to have HTTPS and/or Gzip. If you don't know whether you need them, you probably shouldn't spend time on adding them.
I am new to Nginx, so please bear with me if my question is obvious.
I am looking for ways to authenticate users to the Nginx server. From my research I've understood there are two primary options:
The end user sends a request that contains the key (in a header, for example) to Nginx; Nginx forwards the credentials to an auth server, and Nginx gets back an answer saying whether the user is authenticated or not.
The second option is Nginx Plus (a service that costs money), where Nginx itself handles the authentication process. If someone knows an open-source version of this option, that would be best.
I would really appreciate the help, thank you all!
The good old Basic authentication still exists, along with the ngx_http_auth_basic_module. Unfortunately, the only algorithm implemented by nginx itself is the old and weak Apache MD5; however, on glibc-based host systems you have some other options. You can find more details here.
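For illustration, a minimal Basic auth sketch (the realm text, location, and credentials file path are placeholders; create the file with a tool such as htpasswd):

```nginx
# Protect a location with HTTP Basic authentication.
# Create the credentials file with, e.g.:
#   htpasswd -c /etc/nginx/.htpasswd alice
server {
    listen 80;   # terminate TLS in front of or inside this server as appropriate

    location /protected/ {
        auth_basic           "Restricted area";     # realm shown in the browser prompt
        auth_basic_user_file /etc/nginx/.htpasswd;  # user:hash pairs
    }
}
```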
You can authenticate your users using client-side certificates. There are many articles all over the internet; here is Client-Side Certificate Authentication with Nginx, from the first page of Google search results.
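A rough sketch of that setup, assuming you run your own CA for issuing client certificates (all paths and addresses are placeholders):

```nginx
# Require a client certificate signed by your CA.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate         /etc/nginx/certs/server.crt;
    ssl_certificate_key     /etc/nginx/certs/server.key;

    ssl_client_certificate  /etc/nginx/certs/ca.crt;  # CA that signed the client certs
    ssl_verify_client       on;                       # reject requests without a valid cert

    location / {
        # Pass the verified certificate's subject DN to the backend.
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_pass       http://127.0.0.1:8080;
    }
}
```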
You can use the js_content directive from the njs module as the auth location content handler (instead of proxying the auth request to some backend app). Or you can do both; you may find the Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus article very interesting.
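Note that the subrequest flow described in the question (option 1) is available in open-source nginx via the ngx_http_auth_request_module. A minimal sketch, assuming a hypothetical auth service on 127.0.0.1:9000 that answers 2xx for valid credentials and 401 otherwise:

```nginx
# Gate every request behind an authentication subrequest.
server {
    listen 80;   # add TLS as appropriate

    location / {
        auth_request /_auth;                 # checked before the request is served
        proxy_pass   http://127.0.0.1:8080;  # your actual application
    }

    location = /_auth {
        internal;
        proxy_pass              http://127.0.0.1:9000/verify;
        proxy_pass_request_body off;         # the auth service only needs headers
        proxy_set_header        Content-Length "";
        proxy_set_header        X-Original-URI $request_uri;
    }
}
```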
You can implement almost every authentication mechanism you can imagine using the famous lua-nginx-module. Some useful links (again, from the very first page of Google search results) are listed below, followed by a rough sketch of the pattern:
Method of using Lua to write authentication module of nginx server
NGINX Lua OAuth Proxy Plugin
Nginx Lua script redis based for Basic user authentication
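The reduced shape of the Lua approach (requires OpenResty or nginx built with lua-nginx-module; the hard-coded token check is a placeholder for a real lookup against Redis, a JWT validator, or whatever you choose):

```nginx
# Run arbitrary Lua code in the access phase to accept or reject the request.
location /api/ {
    access_by_lua_block {
        local token = ngx.req.get_headers()["Authorization"]
        -- Placeholder check; replace with your real authentication logic.
        if token ~= "Bearer demo-secret-token" then
            ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
    }
    proxy_pass http://127.0.0.1:8080;
}
```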
Although this one is relevant only to NGINX Plus, to make the answer complete I have to mention the ngx_http_auth_jwt_module and a few official articles from F5:
Setting up JWT Authentication
Authenticating API Clients with JWT and NGINX Plus
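The JWT setup those articles describe boils down to roughly this (NGINX Plus only; the realm and key file path are placeholders):

```nginx
# Validate JWTs against a JSON Web Key set before proxying.
location /api/ {
    auth_jwt          "api";               # realm
    auth_jwt_key_file /etc/nginx/api.jwk;  # keys used to verify token signatures
    proxy_pass        http://127.0.0.1:8080;
}
```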
I was looking at https://material-ui-next.com, which seems to be running on Firebase Hosting with CloudFlare on top of it.
This raised a question: do Firebase Hosting websites need additional layers for things like DDoS protection? As far as I am aware, Firebase provides SSL, CDN, DDoS protection, and caching out of the box. When would one want to add CloudFlare on top of that?
UPDATE: I've moved from Firebase hosting to Netlify
While deploying our website (https://mfy.im) we ran into a similar debate. However, we decided to go with Firebase Hosting without CloudFlare.
The main reason is performance:
Firebase hosting without CloudFlare: 732ms
Firebase hosting with CloudFlare: 1.2s
Using the Firebase config JSON (firebase.json), I was able to configure most of the things I had previously done in CloudFlare.
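For reference, the kind of firebase.json settings meant here (values are illustrative only; see the Firebase Hosting docs for the full schema):

```json
{
  "hosting": {
    "public": "dist",
    "redirects": [
      { "source": "/old-page", "destination": "/new-page", "type": 301 }
    ],
    "headers": [
      {
        "source": "**/*.@(js|css)",
        "headers": [
          { "key": "Cache-Control", "value": "max-age=31536000" }
        ]
      }
    ]
  }
}
```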
However, if you're not too concerned about performance, I recommend using Firebase with CloudFlare for the following reasons:
Firebase provides some basic DDoS prevention, but no rate limiting. See: Rate Limiting on Firebase Hosting
Brotli compression - Firebase only provides gzip
Pricing - only 10 GB of bandwidth is free; after that it's $0.15 per GB. If you enable CloudFlare on top of Firebase, its cache will absorb most of your bandwidth
To anyone looking to put Cloudflare or another CDN in front of Firebase: bear in mind that Firebase sees only one IP making a massive number of requests and may decide to block that IP. I'm not sure whether this started happening recently, but here's the (arrogant) response from Google Support on the matter:
The specialist we involved in the issue recommended us to escalate this with one of the Firebase Engineers which we did.
The engineers mentioned us that CloudFlare integration is limited as Firebase hosting already provides content through the Firebase CDN[1] and adding a second CDN on top is discouraged as it can actually bring down the site performance.
This causing a limitation preventing us to allow the cloudflare IPs.
Edit: If you're interested in doing this, Google has opened a "Feature request" here to whitelist / stop blocking CDN IPs:
https://issuetracker.google.com/issues/185590945?pli=1
Please star it if you would like it resolved faster.
We put Fastly in front of Firebase - in front of both Functions AND Hosting.
We did this using rewrites to point to the Functions, and then we asked Fastly to do a forced override to pull the Hosting domain properly (we were getting "Site Not Found").
Using Fastly to pull data from Firebase is working very well. We get additional logging, control of WAF, etc.
We did not have to set up a custom domain in Firebase to achieve this, but we did have to allow Fastly's calls through our CORS settings.
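For context, the Firebase-side rewrites mentioned above look roughly like this (a sketch; the function name and the allowed origin are placeholders, and the Fastly-side override is configured separately in Fastly itself):

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "/api/**", "function": "api" }
    ],
    "headers": [
      {
        "source": "/api/**",
        "headers": [
          { "key": "Access-Control-Allow-Origin", "value": "https://www.example.com" }
        ]
      }
    ]
  }
}
```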
I'm building a serverless web application. My HTML, CSS and JavaScript are in a public storage location which my domain example.com points towards.
When my users navigate to my domain using their browser, their browser will GET these files from that location and then there is no further communication with example.com. The JavaScript application runs in the browser and communicates with a separate backend via HTTPS (in my case AWS, but could be e.g. Azure, Kinvey, BlueMix or others).
It therefore seems to me that there is no reason to encrypt the communication between my users' web browsers and example.com, i.e. I don't need to provide https://example.com, and doing so would provide no security benefit.
Am I correct?
The reason I ask is that I found at least two static hosting services which offer SSL support:
https://www.netlify.com/features#security
https://surge.sh/help/using-https-by-default
I am aware of the reasons for wanting HTTPS (described in the second link above and also at https://levels.io/default-to-https/ ...) but none of this seems to apply to my situation.
I believe this is a serious question because more applications will be built in this manner (the folks at http://serverlessconf.io/ certainly think so), and as long as the channel to the actual backend is secured there is no reason to secure the channel to what is essentially a read-only hard disk.
If you don't secure communication with example.com, then a man-in-the-middle attacker (e.g. a rogue Wi-Fi hotspot) could modify the HTML and JavaScript loaded by users.
One way to exploit this would be to change the JavaScript so that subsequent API requests are sent to attacker-controlled servers instead of yours, compromising any credentials or information transferred.
I am currently developing a web application to allow customers to place orders.
The way I have chosen to structure the application is to split it into two sub-applications:
1 backend application (the API) that serves only JSON content
1 front-end application (AngularJS in my case) that takes an API URL as configuration and serves user content
Now on the server, what I have done for testing is create two virtual hosts:
app.com
api.app.com
and linked the API to the frontend app.
The problem is that everything will be served over HTTPS and, with the current setup, I would need to buy either two SSL certificates or one wildcard certificate.
The second solution would be to create a subdirectory in the front-end app (say /api) and copy the backend app into it. The advantage would be needing only one SSL certificate and having everything under the same domain; /api would be an .htaccess redirect to the backend API.
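In .htaccess terms I imagine something like this (assuming mod_rewrite and mod_proxy are enabled; the backend address is a placeholder):

```apache
# Proxy /api/* to the backend app instead of serving it from disk.
RewriteEngine On
RewriteRule ^api/(.*)$ http://127.0.0.1:8081/$1 [P,L]
```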
I think the "cleanest" solution would be to split the two apps completely and get a wildcard SSL certificate for both, but I'd like to hear whether anyone has experience with one solution being better than the other.
The advantage of combining is that you avoid CORS. CORS isn't that bad, but it's another complication. That said, if you want to expose the API to the outside world (allow other web pages to use it), you might have to go through that process anyway.
If you aren't looking to actually expose your API to third parties, but just want to keep your layers separate, then I would look at either combining or proxying. I've used this architecture to put my services completely behind the firewall and used mod_proxy or the like to serve my API through my web server. This is useful because it limits the exposure of your API and solves CORS issues in one go.
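As a sketch, that proxying setup in Apache looks something like this (addresses and paths are placeholders; requires mod_proxy and mod_proxy_http):

```apache
<VirtualHost *:443>
    ServerName app.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/app.com.crt
    SSLCertificateKeyFile /etc/ssl/private/app.com.key

    # The front-end app is served as static files.
    DocumentRoot /var/www/frontend

    # The API server sits behind the firewall, reachable only through this
    # proxy, and appears to the browser as the same origin (no CORS needed).
    ProxyPass        /api/ http://10.0.0.5:8080/
    ProxyPassReverse /api/ http://10.0.0.5:8080/
</VirtualHost>
```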
If you really want to use SSL between your web server and your API server, you can use a self-generated client certificate between the two.
I have a bucket on S3 that needs to use the "static website" functionality in order to take advantage of the routing rules capability. Enabling this broke the built-in SSL certificate. Is there a built-in mechanism for supporting SSL requests on the S3 bucket while using static website hosting? It seems like a major miss in functionality if this isn't present.
Also note that I need this to work outside of CloudFront, as the hosted CSS suffers from CORS issues, which only the S3 bucket can resolve with its CORS configuration options.
Thanks.
Static website hosting and SSL do not work together. You could, as you know, use the SSL wildcard cert on the REST endpoint, but then you lose the routing rules. And, as you have apparently found, CloudFront's support for CORS is somewhat limited from what I can tell, unless you have a very generous CORS policy.
From an AWS product manager on 2013-05-10:
Thanks for all your feedback. S3 static website hosting currently does not support SSL certificates. We may consider adding this support in the future. Please keep your feedback coming!
https://forums.aws.amazon.com/thread.jspa?threadID=60821#450167
The only noteworthy alternative that comes to mind -- which I have implemented successfully in the past -- is to use a reverse proxy (HAProxy? Nginx? Apache? Maybe even stunnel4? Others?) on EC2 in the same region to terminate the SSL and proxy the requests over to S3. Within the same region there are no bandwidth charges between EC2 and S3, so the only cost is that of the instance... which could still end up being less than the cost of using CloudFront, and it should perform comparably (without the caching aspect, of course).
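A sketch of the nginx variant (bucket name and region are placeholders; note that the S3 website endpoint routes on the Host header, so it must match the bucket's website hostname):

```nginx
# Terminate SSL on EC2 and proxy to the S3 static website endpoint.
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        # Host must be the bucket's website hostname for routing rules to apply.
        proxy_set_header Host www.example.com.s3-website-us-east-1.amazonaws.com;
        proxy_pass       http://www.example.com.s3-website-us-east-1.amazonaws.com;
    }
}
```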