Is there a difference between Cloudflare headers CF-Connecting-IP and True-Client-IP? - http-headers

I'm using Cloudflare's Web Application Firewall (WAF). I don't want clients connecting directly to my application server, but once the Cloudflare WAF forwards the traffic to the server I would like to know the IP address of the original client, for logging and tracking purposes. The documentation describes two headers: CF-Connecting-IP and True-Client-IP.
https://developers.cloudflare.com/fundamentals/get-started/reference/http-request-headers
Strangely, although these two headers ostensibly provide the same information, True-Client-IP requires an Enterprise plan (thousands of dollars per month), while CF-Connecting-IP states no such requirement. The difference in pricing between these two features is radical enough to make me ask:
Are these actually the same thing?
And second, can I actually use CF-Connecting-IP on my Pro plan?
(And if so, why would Cloudflare be restricting the equivalent header - True-Client-IP - to the enterprise plan? Granted that is not a technical question, but when things don’t make sense, I wonder what it is that I am missing…)
Thanks for your advice!

I have found two Request Headers which are useful to my purpose:
CF-Connecting-IP
X-Forwarded-For
In every case I have seen so far, these two headers contain identical values; sometimes the IP is in IPv4 format, sometimes in IPv6.
(Given that this information is already available to us for free, I'm puzzled why Cloudflare would insist on Enterprise pricing to provide the same information in a header with a different name. But in any case, either of the headers listed above will give you the IP address of the end-user client.)
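For anyone wiring this up on the application side, here is a minimal sketch of reading those headers, assuming a Flask application sitting behind Cloudflare; the helper name client_ip is made up for the example. Note that you should only trust these headers when the request actually arrived from Cloudflare's published IP ranges, since anything else could spoof them.

```python
# Minimal sketch (Flask assumed): prefer CF-Connecting-IP, fall back to the
# left-most X-Forwarded-For entry, then to the socket peer address.
from flask import Flask, request

app = Flask(__name__)

def client_ip() -> str:
    cf_ip = request.headers.get("CF-Connecting-IP")
    if cf_ip:
        return cf_ip
    xff = request.headers.get("X-Forwarded-For", "")
    if xff:
        # X-Forwarded-For may hold a chain: "client, proxy1, proxy2"
        return xff.split(",")[0].strip()
    return request.remote_addr or "unknown"

@app.route("/")
def index():
    return f"Your IP appears to be {client_ip()}\n"
```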

Related

What are alternatives to secure a web-server other than firewall

I'm doing a network security course and trying to wrap my head around all the concepts. One of which is:
What technology other than a firewall can be used to allow only specific customers while blocking others? Why is a firewall not suitable?
During the course, I've been learning about security tools such as firewalls (static, dynamic, DPI), proxies, VPNs, tunnels, all sorts of IDS (signature, anomaly, darknet/greynet and honeypot), and then mod_security to secure Apache, but I am still puzzled by this question.
Any insights here will be greatly appreciated.
A firewall implies that you block based on the customer's IP address. This may work if the customer has his own range of addresses and all requests from him are legitimate.
It gets complicated when he is with a large cloud provider who provides a wide range of possible IPs, including IPs used by other people.
For an application, one good solution would be to use client-side certificates. In that case, during the TLS handshake (the process of establishing a TLS (formerly SSL) tunnel), the server will ask the client to provide a certificate that it (the server) trusts. Failure to provide one breaks the connection.
This way, you can distribute the certificate to the clients you want to be able to reach your service, and everyone else will be rejected. This solution is better because it uses technologies that were developed exactly to solve this problem. The drawback is that you have to maintain and distribute the certificates (and usually run a PKI).
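For illustration, here is a minimal sketch of a TLS server that enforces client certificates, using Python's standard ssl module; the certificate and key file names are placeholders, and a real deployment would do this in the web server or load balancer rather than on a bare socket.

```python
# Sketch of a TLS server that rejects clients without a trusted certificate.
# File names (server.crt, server.key, client-ca.crt) are placeholders.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")
ctx.load_verify_locations("client-ca.crt")   # CA that issued the client certificates
ctx.verify_mode = ssl.CERT_REQUIRED          # handshake fails without a valid client cert

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()        # raises ssl.SSLError if the client presents no cert
        print("authenticated client:", conn.getpeercert().get("subject"))
        conn.close()
```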

Medium sized website: Transition to HTTPS, Apache and reverse proxy

I have a medium sized website called algebra.com. As of today, it is ranked 900th website in US in Quantcast ratings.
At the peak of its usage, during weekday evenings, it serves 120-150 requests for objects per second. Almost all objects, INCLUDING IMAGES, are dynamically generated.
It has 7.5 million page views per month.
It is served by Apache2 on Ubuntu and is supplemented by a Perlbal reverse proxy, which helps reduce the number of Apache slots/child processes in use.
I spent an inordinate amount of time working on performance for HTTP and the result is a fairly well functioning website.
Now that the times call for transition to HTTPS (fully justified here, as I have logons and registered users), I want to make sure that I do not end up with a disaster.
I am afraid, however, that I may end up with a performance nightmare, as HTTPS sessions last longer and I am not sure whether a reverse proxy can help as much as it did with HTTP.
Secondly, I want to make sure that I will have enough CPU capacity to handle HTTPS traffic.
Again, this is not a small website with a few hits per second, we are talking 100+ hits per second.
Additionally, I run multiple sites on one server.
For example, can I have a reverse proxy that supports several virtual domains on one IP (SNI) and translates HTTPS traffic into HTTP, so that I do not have to encrypt twice (once by Apache for the proxy, and once by the proxy for the client browser)?
What is the "best practices approach" to have multiple websites, some large, served by a mix of HTTP and HTTPS?
Maybe I can continue running perlbal on port 80, and run nginx on port 443? Can nginx be configured as a reverse proxy for multiple HTTPS sites?
You really need to load test this, and no one can give a definitive answer other than that.
I would offer the following pieces of advice though:
First up, Stack Overflow is really for programming questions; this question probably belongs on the sister site www.serverfault.com.
HTTPS processing is, IMHO, not an issue for modern hardware unless you are encrypting large volumes of traffic (e.g. video streaming), especially with the proper caching and other performance tuning that I presume you've already done, judging from your question. However, I have not dealt with a site with your level of traffic, so it could become an issue at that scale.
There will be a small hit to clients as they negotiate the HTTPS session on the initial connection. It is on the order of a few hundred milliseconds, only happens on the initial connection of each session, and is unlikely to be noticed by most people, but it is there.
There are several things you can do to optimise HTTPS, including choosing fast ciphers and implementing session resumption (there are two methods for this, and it can get complicated on load-balanced sites). SSL Labs runs an excellent HTTPS tester to check your setup, Mozilla has some great documentation and advice, or you could check out my own blog post on this.
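If you want to check session resumption yourself, here is a rough sketch using Python's ssl module: connect once, save the session, and reconnect with it (example.com is a placeholder host). With TLS 1.3 the session ticket may only arrive after some application data has been exchanged, hence the small request in the middle.

```python
# Sketch: connect twice and reuse the first TLS session to see whether the
# server resumes it.
import socket
import ssl

host = "example.com"   # placeholder
ctx = ssl.create_default_context()

with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        tls.recv(4096)             # with TLS 1.3, tickets may arrive after the handshake
        session = tls.session      # save the session for reuse

with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host, session=session) as tls:
        print("session reused:", tls.session_reused)
```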
As to whether you terminate HTTPS at your endpoint (proxy/load balancer), that's very much up to you. Yes, there will be a performance hit if you re-encrypt to HTTPS again to connect to your actual server. Most proxy servers also allow you to just pass the HTTPS traffic through to your main server so you only decrypt once, but then you lose the original IP address from your web server logs, which can be useful. It also depends on whether you access your web server directly at all. For example, at my company we don't go through the load balancer for internal traffic, so we enable HTTPS on the web server as well and make the load balancer re-encrypt when connecting to it, so we can view the site over HTTPS either way.
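On the several-domains-on-one-IP part of the question, the piece a terminating proxy relies on is SNI: the client sends the hostname during the handshake and the proxy picks the matching certificate. As a rough illustration of that mechanism (not of nginx or Perlbal specifically), here is a sketch using Python's ssl module; the domain names and file paths are placeholders.

```python
# Sketch: select a certificate per hostname on a single listening IP via SNI.
import ssl

def make_ctx(cert: str, key: str) -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert, key)
    return ctx

contexts = {
    "algebra.com": make_ctx("algebra.crt", "algebra.key"),
    "other-site.example": make_ctx("other.crt", "other.key"),
}
default_ctx = make_ctx("default.crt", "default.key")

def pick_context(tls_sock, server_name, initial_ctx):
    # Called during the handshake with the SNI hostname the client asked for.
    tls_sock.context = contexts.get(server_name, default_ctx)

default_ctx.sni_callback = pick_context
# default_ctx would then wrap the listening socket, exactly as in any TLS server.
```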
Other things to be aware of:
You could see an SEO hit during migration. Make sure you redirect all traffic, tell Google Search Console your preferred site (http or https), update your sitemap and all links (or make them relative).
You need to be aware of insecure content issues. All resources (e.g. CSS, JavaScript and images) need to be served over HTTPS, or browsers will warn about them and refuse to use those resources. HSTS can help with links on your own domain for those browsers that support HSTS, and CSP can also help (either to report on them or to automatically upgrade them, for browsers that support upgrade-insecure-requests). A minimal redirect-plus-HSTS sketch follows below.
Moving to HTTPS-only does take a bit of effort, but it's a one-off, and after that your site is much easier to manage than trying to maintain two versions of the same site. The web is moving to HTTPS more and more, and if you have (or are planning to have) logged-in areas then you have no choice, as you should absolutely not use HTTP for those. Google gives a slight ranking boost to HTTPS sites (though it's apparently quite small, so it shouldn't be your main reason to move), and has even talked about actively marking HTTP sites as insecure. Better to be ahead of the curve, IMHO, and make the move now.
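Tying the redirect and HSTS points above together, here is a minimal sketch of the application-level version, assuming a Flask app; the max-age value is only an example, and if you terminate TLS at a proxy you will also need to pass X-Forwarded-Proto through (e.g. via Werkzeug's ProxyFix middleware) so that request.is_secure reflects the client connection.

```python
# Sketch: force HTTPS with a permanent redirect and add an HSTS header.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    if not request.is_secure:
        # A permanent redirect preserves most of the SEO value of old links.
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    if request.is_secure:
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```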
Hope that's useful.

Using SSL Across Entire Site

Instead of just having a few select pages for HTTPS access, I was thinking about just using SSL for my entire site.
What would be the drawbacks to this?
Edit Aug 7, 2014
Google now factors in HTTPS for rankings, so you absolutely should use SSL across your entire site:
http://googleonlinesecurity.blogspot.com/2014/08/https-as-ranking-signal_6.html
It is highly recommended these days to run the entire site on TLS (https that is) if possible.
The overhead concern is a thing of the past; it is no longer an issue with newer TLS versions, because sessions are now maintained and even cached for reuse if the client drops the connection. In the old days this was not the case, which means that today the only time you have to do public-key crypto (the CPU-heavy kind) is when establishing the connection. So there aren't really any drawbacks once you have a certificate anyway. It also means you won't have to send people back and forth between HTTP and HTTPS, and customers will always see the lock icon in their browser.
Extra attention has been drawn to this subject since the release of Firesheep. As you might have heard, Firesheep is a Firefox add-on that lets you easily hijack other people's sessions on sites like Facebook, Twitter, etc. (if you are both using the same open Wi-Fi network). This works because those sites only use TLS selectively, and it would not be a problem for them if TLS were enabled site-wide.
So, in conclusion, the cons (such as added CPU use) are negligible with the state of current technology, and the pros are clear, so serve all content via SSL/TLS! It's the way to go these days.
Edit: As mentioned in other answers, another problem with serving some of a site's content (like images) without SSL/TLS is that customers/users will get a very annoying "insecure content on secure page" message.
Also, as stated by thirtydot, you should redirect people to the https site. And you can even enable the flag that makes your server deny non-ssl connections.
Another edit: As pointed out in a comment below, remember that SSL/TLS isn't the only solution to all your site's security needs; there are still a lot of other considerations. But it does solve a few security issues for the users, and solves them well (even though there are still ways to mount a man-in-the-middle attack, even with SSL/TLS).
It is a good idea to do this if possible, however you should:
Serve static resources (images, CSS, etc.) from plain HTTP to avoid the HTTPS overhead.
(Don't actually do this, or you will get warnings about "insecure resources".)
You should also redirect the HTTP homepage to the HTTPS version so that users do not have to type HTTPS to access your site.
Drawbacks include:
Less responsive browsing experience, because there is more back and forth between the server and client with HTTPS vs. HTTP; how noticeable this is depends on the latency between the server and the client.
More CPU usage on your server - because every page has to be encrypted instead of just the select few.
Server-side algorithms for establishing an SSL connection are expensive, so serving all content via SSL requires more CPU power on the back end.
As far as I know that is the only drawback.
SSL was not designed for virtual hosting, especially of the elastic cloud type. You may face some difficulties if you cannot control the host names of the web servers, and how they resolve to IP addresses.
But in general, that it is excellent idea, and if you allow users to login to your site, almost a necessity (as shown by Firesheep).
I should also add what I am trying to do. I would like to allow social service logins (like Facebook), but we will also be storing credit card information.
For the pages where the user can review his credit card information or make financial transactions, it is better to shift into a more secure authentication mode. Facebook is a big target and attracts hackers. If someone's Facebook account gets hacked and they can then spend money or gather credit card info from your site, that would not be good. Accepting social service logins for non-critical stuff is fine, but for the more serious parts of your site, better to require additional passwords.
"It is highly recommended these days to run the entire site on TLS"
It's highly recommended by some people.
The total number of users your system can support is gated by either the CPU demands or the IO load; if you are up against the CPU, TLS makes it that much worse.
Encrypting the traffic makes it impossible to use certain kinds of diagnostic techniques.
Most browsers will give your user a warning if you load any non-encrypted files, which can be a huge problem if you are trying to access third-party resources.
In some circumstances (e.g. a lot of money at stake), it makes sense to just bite the bullet and encrypt everything; in others, the odds of an attacker intercepting a packet in flight and deciding to hijack the session are so low, and the amount of damage that could be done is so small, that you can just go bare-back, as it were. (For example, this session, the one I'm using to post this answer, is unencrypted and I really, really don't care.)
For still other cases, you may want to offer your user a choice. Someone using a hard-wired connection in his own basement is in a different situation than someone using Wi-Fi at the Starbucks across from a Black Hat convention.
I'm working on a protocol and a library to let you sign XHR requests. The idea is that the entire site would be set up as static files of HTML, CSS, and JavaScript, which would be loaded from a CDN. The actual application would be conducted entirely by JavaScript making AJAX and COMET requests. Any request that has to be authenticated is, but as a practical matter, most requests do not need to be. I've done several sites this way -- they're very, very scalable.
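The protocol itself isn't spelled out here, but as a generic illustration of the idea (not the author's actual library), a request can be signed with an HMAC over the method, path, timestamp and body using a shared secret; the header names and secret below are made up for the example.

```python
# Sketch: sign a request with HMAC-SHA256 and verify it on the server.
import hashlib
import hmac
import time

SECRET = b"per-user-shared-secret"   # placeholder

def sign_request(method: str, path: str, body: bytes) -> dict:
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Signature": signature, "X-Timestamp": timestamp}

def verify_request(method: str, path: str, body: bytes, headers: dict) -> bool:
    message = "\n".join([method, path, headers.get("X-Timestamp", "")]).encode() + b"\n" + body
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the signature through timing differences
    return hmac.compare_digest(expected, headers.get("X-Signature", ""))
```

A server would additionally reject timestamps that are too old, to limit replay of captured requests.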
We run a fully forced, secured website and shop. I've done this on the advice of a friend that knows a thing or two about website security.
The positive is that our website doesn't seem noticeably slower. Google Analytics also runs, although I can't get e-commerce tracking to work. Whether it has protected us against attacks I can't say, of course, but so far no trouble.
The bad thing, however, is that you will have a very hard time running YouTube and social ("Like") boxes on a secured website.
Tips for good security:
Good webhost (they will cost you but it's worth it!)
No login for visitors. It kills usability, but with a fast and easy checkout it works, and the obvious pro is that you simply don't store sensitive info.
Use a good Payment Service Provider and let them handle payment.
(Re point 2: I know this won't work for a lot of websites, but "what you don't have can't be stolen".)
We have been selling on our webshop without login for two years now and it works fine, as long as the checkout is mega-simple and lightning fast.

speeding up website load using multiple servers/domains

When Yahoo! developer guide says "Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective".
And as an explanation I read somewhere, that browsers will load up to 5 things simultaneously from the same domain.
Would a subdomain, for example cdn.example.com be considered a new domain, in the previous statement?
Yahoo: The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel.
Google also says you only need different host names.
This may depend on the browser, but I believe they may need to have different IP addresses. All the HTTP spec really says is: "Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server."
So the safest choice is to have different host name AND address.
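As a small illustration of the hostname-splitting idea (often called domain sharding), the sketch below maps each asset path deterministically to one of a few hostnames, so the same asset always comes from the same host and stays cacheable; the hostnames are placeholders. Note this mainly helps HTTP/1.x clients, since HTTP/2 multiplexes many requests over a single connection.

```python
# Sketch: deterministic domain sharding for static asset URLs.
import zlib

SHARD_HOSTS = ["cdn1.example.com", "cdn2.example.com", "cdn3.example.com"]

def shard_url(path: str) -> str:
    # Hash the path so a given asset always maps to the same hostname.
    host = SHARD_HOSTS[zlib.crc32(path.encode()) % len(SHARD_HOSTS)]
    return f"https://{host}{path}"

print(shard_url("/images/logo.png"))   # always the same host for this path
```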

Unique identification of a certain computer

I have the following scenario and can't seem to find anything on the net, or maybe I am looking for the wrong thing:
I am working on a web-based data storage system. There are different users and different places, and only certain users are allowed to access certain parts of the system. Now, we do not want them to connect to these parts from home or with a different computer than the one they use at their workplace (there are different reasons for that).
Now my question is: if there is a way to have the workplace PC identify itself to the server in some way over the browser, how can I do that?
Oh, and yes, it is supposed to be web-based.
I hope I explained it so everyone understands.
Thanks for your replies in advance.
... dg
I agree with Lenni... the IP address is a possible solution if the addresses are static or the DHCP server consistently assigns the same IP address to the same machine.
Alternatively, you might also consider authentication via "personal certificates" ... that's what they are called in Firefox; I don't know if that's the standard name or not. (Obviously I haven't worked with these before.)
Basically they are SSL or PKI certificates that are installed on the client (user's) machine that identify that machine as being the machine it says it is -- that is, if the user tries to connect from a machine that doesn't have a certificate or doesn't have a certificate that you allow, you would deny them.
I don't know the issues around this ... it might be relatively easy for the same user to take the certificate off one computer and install it on another one with the correct password (i.e. it authenticates the user), or it might be keyed specifically to that machine somehow (i.e. it authenticates the machine). A quick Google search didn't turn up any obvious "how to" instructions on how it all works, but it might be worth looking into.
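As a rough sketch of the server side of that idea: once a client certificate has been presented during the TLS handshake, you can pin it to a known machine by comparing its fingerprint against an allow-list. The fingerprint value below is a placeholder, and as noted above this really authenticates whoever holds the certificate rather than the hardware itself.

```python
# Sketch: allow only client certificates whose SHA-256 fingerprint is known.
import hashlib
import ssl

ALLOWED_FINGERPRINTS = {
    "3f5a0c...placeholder...",   # fingerprint of an approved workplace machine
}

def machine_is_known(tls_conn: ssl.SSLSocket) -> bool:
    der_cert = tls_conn.getpeercert(binary_form=True)   # raw DER bytes of the client cert
    if der_cert is None:
        return False
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint in ALLOWED_FINGERPRINTS
```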
---Lawrence
Since you're going web based you can:
Examine the remote host's IP address (compare it against known internal subnets, etc.; see the sketch after this list).
During the authentication process, you can ping the remote IP and look at the TTL on the returned packets; if it's too low, the computer can't be on the local network. (Of course this can be defeated, but it's just one more check.)
If you're doing it over IIS, then you can integrate with SSO (probably the best option if you can do it).
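As a small sketch of the first item above (checking the remote address against known internal subnets), Python's ipaddress module makes the comparison straightforward; the network ranges shown are placeholders.

```python
# Sketch: treat a request as "internal" only if the peer address falls inside
# known office subnets.
import ipaddress

INTERNAL_NETS = [
    ipaddress.ip_network("10.20.0.0/16"),
    ipaddress.ip_network("192.168.5.0/24"),
]

def is_internal(remote_addr: str) -> bool:
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in INTERNAL_NETS)

print(is_internal("10.20.3.7"))     # True
print(is_internal("203.0.113.9"))   # False
```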
If it's supposed to be web-based (and by that I mean that the web server should be able to uniquely identify the user's machine), then your choices are limited: per se, there's nothing you can obtain from the browser's headers or request body that allows you to identify the machine. I suppose this is by design, due to the obvious privacy implications.
There are choices, though, none of which is pain-free: you could use an ActiveX control, which however only runs on Windows (and not in all browsers, I think) and requires elevated privileges. You could consider a Firefox plug-in (obviously Firefox only). At any rate, a plain-vanilla browser will otherwise escape identification.
There are only a few REAL solutions to this. Here are a couple:
Use domain authentication, and disallow users who are connecting over a VPN.
Use known IP ranges to allow or disallow access.
IP address. Not bombproof security but a start.