Does CloudFlare protect the underlying OS and web server from vulnerabilities?

I understand that CloudFlare protects your application against attacks like SQL injection and XSS. But what about the operating system and web server?
For example, if my website is hosted on IIS/Windows, will CloudFlare also protect against operating system vulnerabilities and/or IIS vulnerabilities on my server?

CloudFlare's protection comes in the form of a Web Application Firewall (WAF), currently available to all paid-plan users. The WAF scans all web requests according to rules defined by CloudFlare and by the website owner. These rules protect against certain coding and application vulnerabilities in case they are not already handled in the website's own code or software.
CloudFlare has blocked exploits for certain vulnerabilities, such as CVE-2015-1635 and Shellshock, but these were all vulnerabilities that could be exploited through an HTTP request.
If your server, operating system, or software has a vulnerability that cannot be exploited through web requests, then CloudFlare never sees the malicious traffic and has no way to block it.
It should also be noted that CloudFlare does not actually fix the vulnerability on your server; rather, it blocks any requests that come through CloudFlare and are clearly attempting to exploit it. You should still patch the vulnerability as soon as possible.
In Short:
If a vulnerability can be exploited through a common HTTP request, CloudFlare is likely to block it. All other vulnerabilities are likely to be ignored by CloudFlare, as they have no way to block them in the first place.
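To make the "rules defined by the website owner" part concrete, here is a sketch of what a custom rule might look like in Cloudflare's firewall-rules expression language. The patterns are illustrative only; real WAF rulesets are far more thorough, and the exact fields and actions available depend on your plan (check Cloudflare's documentation before relying on this):

```
; Hypothetical custom rule: block requests whose query string
; contains crude SQL-injection or XSS markers.
(http.request.uri.query contains "union select") or
(http.request.uri.query contains "<script")
; Action: Block
```

Rules like this can stop a request before it ever reaches your origin server, but they are pattern matching on HTTP traffic, which is exactly why non-HTTP vulnerabilities are out of scope.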

Related

SSL for statically served web application

I'm building a serverless web application. My HTML, CSS and JavaScript are in a public storage location which my domain example.com points towards.
When my users navigate to my domain using their browser, their browser will GET these files from that location and then there is no further communication with example.com. The JavaScript application runs in the browser and communicates with a separate backend via HTTPS (in my case AWS, but could be e.g. Azure, Kinvey, BlueMix or others).
It therefore seems to me that there is no reason to encrypt the communication between my users' web browsers and example.com, i.e. I don't need to provide https://example.com, and doing so would provide no security benefit.
Am I correct?
The reason I ask is that I found at least two static hosting services which offer SSL support:
https://www.netlify.com/features#security
https://surge.sh/help/using-https-by-default
I am aware of the reasons for wanting HTTPS (described in the second link above and also at https://levels.io/default-to-https/ ...) but none of this seems to apply to my situation.
I believe this is a serious question because more applications will be built in this manner (the folks at http://serverlessconf.io/ certainly think so), and as long as the channel to the actual backend is secured there is no reason to secure the channel to what is essentially a read-only hard disk.
If you don't secure communication with example.com then a man in the middle attacker (eg a rogue wifi hotspot) could modify the html and JavaScript loaded by users.
One way to use this would be to change the JavaScript so that subsequent API requests are sent to attacker controller servers instead of yours, compromising any credentials or information transferred.
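To make the risk concrete, here is a toy Python sketch of the rewrite a man-in-the-middle could apply to JavaScript served over plain HTTP (the URLs are made up for illustration):

```python
# Over plain HTTP, nothing about the response is authenticated,
# so a MITM can rewrite the JavaScript in transit and the browser
# cannot tell the difference.

original_js = 'fetch("https://api.example.com/v1/login", {method: "POST"})'

# The attacker rewrites the API endpoint before forwarding the response:
tampered_js = original_js.replace(
    "https://api.example.com",   # legitimate backend (hypothetical name)
    "https://evil.example.net",  # attacker-controlled server
)

print(tampered_js)
# Every subsequent "API call" now delivers the user's credentials
# straight to the attacker.
```

Serving the static files over HTTPS prevents exactly this: the attacker can no longer alter the code that decides where the secure backend requests go.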

Should I set https on every page?

I am building a marketplace which stores user sessions etc. I just added SSL encryption for login and for payment (I am using Stripe as a payment gateway). I have seen sites like Facebook forcing HTTPS on every page, which got me wondering: should I force HTTPS on every page or just on login and payment?
Side note: apparently SSL-encrypted pages load faster.
Yes. But not just because it loads faster, or even ranks better on Google than non-HTTPS sites; mainly because of security. Having HTTPS makes it harder to carry out a man-in-the-middle attack, whereby an attacker intercepts the connection between your website and the user to steal or modify data. The trouble with HTTP is that it is possible for someone to do exactly that, and then modify the links to point to a fake login page to steal data (this sounds paranoid, but it happens).
Many sites use a script to check whether the user is accessing the site over HTTP and redirect them to the HTTPS version, but that still leaves a gap: an attacker can 'strip' out the HTTPS links (the SSLStrip attack) so the victim stays on plain HTTP while the attacker views the data. To avoid that, look at enabling HSTS (HTTP Strict Transport Security). It forces browsers to interact with the website only over HTTPS connections, preventing this sort of downgrade attack.

Is it safe to proxy a request from https to http?

I have 2 servers, Web and Api. Web serves up webpages, and Api serves up json.
I want to be able to make ajax calls from Web to Api, but I want to avoid CORS pre-flight requests. So instead, I thought to proxy all requests for https://web.com/api/path to https://api.com/path.
The only way I've been able to get this to work is to drop the https when making the request to the api server. In other words, it goes https://web.com/some/page -> https://web.com/api/path -> http://api.com/path.
Am I leaving myself vulnerable to an attack by dropping the https in my proxy request?
(I would make a comment but I don't have enough rep)
I think this would depend largely on what you mean by proxying.
If you actually use a proxy (that is, your first server relays the request to the second, and the response comes back through the first), then you're only as vulnerable as the connection between those two servers. If they're in physical proximity on a private network, I wouldn't worry about it too much, as an attacker would have to compromise your physical network. If they communicate over the open internet, other attacks become possible (DNS spoofing comes to mind if you don't supply an actual IP address), and I would not recommend it.
If by 'proxy' you mean the webpage makes an Ajax call to your API server, this would open things up to the same attacks that proxying across the internet could.
Of course, this all depends on what you're serving up in JSON. If any of it involves authentication or session-related information, I wouldn't leave it unencrypted. If it's just basic info that's the same for all users, you might not care. However, a skilled attacker could potentially manipulate the data with a man-in-the-middle attack, so I would still encrypt it.
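If the two servers do have to talk across the open internet, the safer option is to keep HTTPS on the upstream hop as well. A minimal nginx sketch of such a proxy (hostnames are the hypothetical ones from the question; adjust paths and certificate bundle for your system):

```nginx
location /api/ {
    # Forward https://web.com/api/... to the API server over TLS,
    # so the server-to-server hop is encrypted too.
    proxy_pass https://api.com/;

    # Verify the upstream certificate instead of trusting blindly,
    # which defeats the DNS-spoofing scenario mentioned above.
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
}
```

With this in place there is no need to drop to http:// for the proxied leg at all, and the CORS pre-flight avoidance still works because the browser only ever talks to web.com.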

Why does Twitter serve every page over HTTPS (SSL)?

Is there a reason why a website such as Twitter serves all pages over HTTPS? I was under the impression that the only pages that need to be served over an encrypted channel are pages where sensitive information is being submitted or received.
I do that when developing web apps. It makes securing user data much simpler, because I don't have to think about whether or not confidential information could be passed through a particular request. If there is a performance penalty, it hasn't been bad enough to make it worth my while to start profiling. My projects have been fairly small, in terms of usage, so far.
Every page on Twitter either:
Is accessed when you are logged in and sending credentials in the request (and potentially receiving data that is private) or
Contains a login form (that shouldn't be interfered with via a man-in-the-middle attack).
Consequently every page on the site has the potential to be a page where sensitive information is being submitted or received.
Switching between HTTP and HTTPS can be tricky to do correctly.
If any resource that is served over HTTP requires authentication, some form of authentication token (typically a session cookie) will be leaked from HTTPS to HTTP (assuming the user authentication itself is done over HTTPS).
Getting the flow of pages right so that, once that token has been used over plain HTTP, it can no longer be relied upon for anything more sensitive (which would require HTTPS) can require a lot of planning in the design of the application. (There are certainly a number of websites that don't do it properly.)
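One standard way to avoid leaking the session token to plain HTTP in the first place is to mark the cookie so the browser will never send it over an unencrypted connection at all, e.g. (the token value here is a placeholder):

```http
Set-Cookie: session=<opaque-token>; Secure; HttpOnly; Path=/
```

With the Secure flag set, a request to an http:// resource simply arrives without the cookie, which is far easier to reason about than carefully partitioning the site into HTTP and HTTPS zones.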
Since Twitter is a website where you're always logged on (or always have the opportunity to log on securely in the corner), it seems to make sense to use HTTPS for everything.
The main overhead in HTTPS is the SSL/TLS handshake: checking the certificates, asymmetric cryptography, ... Once the connection is established, it's all symmetric cryptography, with a much lower overhead.
You'll see a number of questions here (and elsewhere) where people insist on redirection rules that force plain HTTP for resources that don't need to be served securely, while forcing HTTPS for other pages. This seems misguided to me: by the time the redirection from HTTPS to HTTP happens, the handshake has already taken place. A good browser will keep the connection alive (and reuse sessions) to fetch multiple resources and pages, keeping the overhead to a minimum, almost negligible at that point.

how to properly secure access to a site

I am a developer and have a site with some sensitive information that needs to be online for people in different cities. The data is not of much use to most people (no CC numbers or marketing plans).
The server has a LAMP stack.
I have used .htaccess on the server and the site has a web based sign in screen as well.
Two questions:
What is the best way to guarantee the data is secure (within reason)?
How can I check that there has not been a security breach?
Thanks
You can install an SSL certificate and redirect all HTTP requests to HTTPS (using .htaccess or httpd.conf).
There are many possible kinds of security breach, such as code injection, SQL injection, etc. You can use online vulnerability-scanning services to check your site's security.
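For the redirect, a typical .htaccess sketch looks like this (assuming mod_rewrite is enabled on your Apache server):

```apache
RewriteEngine On
# Redirect every plain-HTTP request to its HTTPS equivalent
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The 301 makes the redirect permanent so browsers and search engines remember it; combine it with the sign-in screen you already have, and consider HSTS once you're confident HTTPS works everywhere.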