Avoiding a bandwidth cap by the ISP on an Apache HTTP web server: can encryption methods do it? - apache

I have a working Apache HTTP web server on my computer so that a friend (and only him, no one else), who has no computer at home, can fetch my files directly, as from a website, from an Internet café.
I did some speed tests on my computer at home and on my computer at work and found out that, in both cases, I get almost full bandwidth (~7 MB/s) when using the protocol encryption methods in some P2P applications (BitTorrent, eMule). This leads me to believe that this happens because the data is hidden from the ISPs.
Yet at the very same moment, downloading from my web server at home to my work machine is sluggish as hell (~90 KB/s)...
Is there a protocol encryption method, like the one in P2P, to prevent my Apache web server from being slowed down by the ISP? Or at least some alternative solution to achieve better speed in this situation? I tried HTTPS, but it didn't seem to help.

Download != upload. Your upload at home will most likely be 1 Mbit/s (do you have an ADSL connection?), which comes down to ~90 KB/s: 1 Mbit/s is 125 KB/s before protocol overhead, so what you're seeing is your uplink being saturated, not throttling.
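If you still want to rule out traffic shaping by the ISP, one quick experiment is to serve the same file over TLS and compare the transfer rates. A minimal sketch using Python's standard library rather than Apache - the port 8443 and the self-signed cert/key file names are placeholders:

```python
# Throwaway HTTPS file server for a speed comparison (Python 3.7+).
# Assumes a self-signed certificate generated beforehand, e.g.:
#   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365
import http.server
import ssl

handler = http.server.SimpleHTTPRequestHandler      # serves the current directory
httpd = http.server.HTTPServer(("0.0.0.0", 8443), handler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)   # server-side TLS
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print("Serving the current directory over HTTPS on port 8443")
httpd.serve_forever()
```

If the encrypted transfer is no faster than plain HTTP, the bottleneck is almost certainly your upload bandwidth rather than the ISP inspecting your traffic.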
But this doesn't belong on SO. :-)

Related

Medium sized website: Transition to HTTPS, Apache and reverse proxy

I have a medium-sized website called algebra.com. As of today, it is ranked the 900th website in the US in Quantcast ratings.
At the peak of its usage, during weekday evenings, it serves 120-150 queries for objects per second. Almost all objects, INCLUDING IMAGES, are dynamically generated.
It has 7.5 million page views per month.
It is served by Apache 2 on Ubuntu and is supplemented by a Perlbal reverse proxy, which helps reduce the number of Apache slots/child processes in use.
I spent an inordinate amount of time working on performance for HTTP and the result is a fairly well functioning website.
Now that the times call for a transition to HTTPS (fully justified here, as I have logins and registered users), I want to make sure that I do not end up with a disaster.
I am afraid, however, that I may end up with a performance nightmare, as HTTPS sessions last longer and I am not sure whether a reverse proxy can help as much as it did with HTTP.
Secondly, I want to make sure that I will have enough CPU capacity to handle HTTPS traffic.
Again, this is not a small website with a few hits per second; we are talking 100+ hits per second.
Additionally, I run multiple sites on one server.
For example, can I have a reverse proxy, that supports several virtual domains on one IP (SNI), and translates HTTPS traffic into HTTP, so that I do not have to encrypt twice (once by apache for the proxy, and once by the proxy for the client browser)?
What is the "best practices approach" to have multiple websites, some large, served by a mix of HTTP and HTTPS?
Maybe I can continue running perlbal on port 80, and run nginx on port 443? Can nginx be configured as a reverse proxy for multiple HTTPS sites?
You really need to load test this, and no one can give a definitive answer other than that.
I would offer the following pieces of advice though:
First up, Stack Overflow is really for programming questions. This question probably belongs on the sister site www.serverfault.com.
HTTPS processing is, IMHO, not an issue for modern hardware unless you are encrypting large volumes of traffic (e.g. video streaming), especially with proper caching and other performance tuning that I presume you've already done, judging from your question. However, I haven't dealt with a site with your level of traffic, so it could become an issue there.
There will be a small hit to clients as they negotiate the HTTPS session on initial connection. This is on the order of a few hundred milliseconds, will only happen on the initial connection for each session, and is unlikely to be noticed by most people, but it is there.
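If you want to put a rough number on that initial-connection cost for your own site, a quick sketch like the following (plain Python, standard library only; the hostname is a placeholder) times the TCP connect and the TLS handshake separately:

```python
# Rough measurement of TCP connect time vs. the extra TLS handshake time.
# The hostname below is a placeholder; point it at your own site.
import socket
import ssl
import time

HOST = "www.example.com"

start = time.perf_counter()
sock = socket.create_connection((HOST, 443), timeout=10)
tcp_ms = (time.perf_counter() - start) * 1000

context = ssl.create_default_context()
start = time.perf_counter()
tls_sock = context.wrap_socket(sock, server_hostname=HOST)   # full handshake
tls_ms = (time.perf_counter() - start) * 1000
tls_sock.close()

print(f"TCP connect: {tcp_ms:.0f} ms, TLS handshake on top: {tls_ms:.0f} ms")
```

Run from a machine with latency comparable to your real users' to get a realistic figure; session resumption (next paragraph) avoids repeating most of that cost.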
There are several things you can do to optimise HTTPS, including choosing fast ciphers and implementing session resumption (there are two methods for this, and it can get complicated on load-balanced sites). SSL Labs runs an excellent HTTPS tester to check your setup, Mozilla has some great documentation and advice, or you could check out my own blog post on this.
As to whether you terminate HTTPS at your end point (proxy/load balancer), that's very much up to you. Yes, there will be a performance hit if you re-encrypt to HTTPS again to connect to your actual server. Most proxy servers also allow you to just pass the HTTPS traffic through to your main server so you only decrypt once, but then you lose the original IP address in your web server logs, which can be useful. It also depends on whether you access your web server directly at all. For example, at my company we don't go through the load balancer for internal traffic, so we enable HTTPS on the web server as well and make the load balancer re-encrypt when connecting to it, so we can view the site over HTTPS either way.
Other things to be aware of:
You could see an SEO hit during migration. Make sure you redirect all traffic, tell Google Search Console your preferred site (http or https), update your sitemap and all links (or make them relative).
You need to be aware of insecure-content issues. All resources (e.g. CSS, JavaScript and images) need to be served over HTTPS, or you will get browser warnings and browsers will refuse to use those resources. HSTS can help with links on your own domain for browsers that support HSTS, and CSP can also help (either to report them or to automatically upgrade them, for browsers that support upgrade-insecure-requests).
Moving to HTTPS-only does take a bit of effort, but it's a one-off, and after that it makes your site much easier to manage than trying to maintain two versions of the same site. The web is moving to HTTPS more and more, and if you have (or are planning to have) logged-in areas then you have no choice, as you should absolutely not use HTTP for those. Google gives a slight ranking boost to HTTPS sites (though it's apparently quite small, so it shouldn't be your main reason to move), and has even talked about actively marking HTTP sites as insecure. Better to be ahead of the curve, IMHO, and make the move now.
Hope that's useful.

Local HTTPS proxy possible?

TL;DR
I want to set up a local HTTPS proxy that can (LOCALLY) modify the content of HTML pages on my machine. Is this possible?
Motivation
I have used an HTTP Proxy called GlimmerBlocker for years. It started in 2008 as a proxy-based approach to blocking ads (as opposed to browser extensions or other OS X-specific hacks like InputManagers). But besides blocking ads, it also allows the user to inject their own CSS or JavaScript into the page. Development has seriously slowed, but it remains incredibly useful.
The only problem is that it doesn’t do HTTPS (from its FAQ):
Ads on https pages are not blocked
When Safari fetches an https page using a proxy, it doesn't really use the http protocol, but makes a tunneled tcp connection so Safari receives the encrypted bytes. The advantage is that any intermediate proxies can't modify or read the contents of the page, nor the URL. The disadvantage is, that GlimmerBlocker can't modify the content. Even if GlimmerBlocker tried to work as a middleman and decoded/encoded the content, it would have no means of telling Safari to trust it, nor to tell Safari if the websites certificate is valid, so Safari would think you have visited a dubious website.
Fortunately, most ad-providers are not going to switch to https as serving pages using https are much slower and would have a huge processing overhead on the ad-providers servers.
Back in 2008, maybe that last part was true…but not any more.
To be clear, I think the increasing use of SSL is a good thing. I just want to get back the control I had over the content after it arrives on my end.
Points of Confusion
While searching for a solution, I’ve become confused by some apparently contradictory points.
(Also, although I’m quite experienced with the languages of web pages, I’ve always had a difficult time grokking networks and protocols. On that note, sorry if I’m missing something that is way obvious!)
I found this StackOverflow question asking whether HTTPS proxies were possible. The best answer says that “TLS/SSL (The S in HTTPS) guarantees that there are no eavesdroppers between you and the server you are contacting, i.e. no proxies.” (The same answer then described a hack to pull it off, but I don’t understand the instructions. It was very theoretical, anyway.)
In OS X under Network Preferences ▶︎ Advanced… ▶︎ Proxies, there is clearly a setting for an HTTPS proxy. This seems to contradict the previous statement that TLS/SSL’s guarantee against eavesdropping implies the impossibility of proxies.
Other things of note
I can’t remember where, but I read that it is possible to set up an HTTPS proxy, but that it makes HTTPS pointless (by breaking the secure communication in the process). I don’t want this! Encryption is good. I don’t want to filter anyone else’s traffic; I just want something to customize the content after I’ve already received it.
GlimmerBlocker has a nice GUI interface, but I’m fine with non-GUI solutions, too. I may have a poor understanding of networking and protocols, but I’m perfectly comfortable on the command line, tweaking settings in text editors, and so on.
Is what I’m asking possible? Or is my question a case of “either you get security, or you can break it with hacks and get to customize your content—but not both”?
The common idea of an HTTP proxy is a server which accepts a CONNECT request that includes the target hostname and port and then just builds a tunnel to the target server. All the HTTPS is done inside the tunnel, so there is no way for the proxy to modify it (end-to-end security from browser to web server).
To modify the data you need a proxy which plays man-in-the-middle. In this case you have one HTTPS connection between the proxy and the web server and another HTTPS connection between the browser and the proxy. Between proxy and web server the original server certificate is used, while between browser and proxy a newly created certificate is used, signed by a CA specific to the proxy. Of course, this CA must be imported as trusted into the browser, otherwise it would complain all the time about possible attacks.
Of course, all the verification of the original server certificate has to be done in the proxy now, and not all solutions do this correctly. See also http://www.secureworks.com/cyber-threat-intelligence/threats/transitive-trust/
There are several proxy solutions which can do this SSL interception, like Squid, mitmproxy (Python) or App::HTTP_Proxy_IMP (Perl). The last two are specifically designed to let you modify the content with your own code, so they might be good places to start.
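To give an idea of the scale involved, a mitmproxy addon that rewrites HTML on the fly can be only a few lines. This is a sketch, not a finished ad blocker - the ".ad-banner" selector is purely illustrative - and it assumes you run it with mitmproxy -s rewrite.py and import mitmproxy's generated CA into the browser, as described above:

```python
# rewrite.py - minimal mitmproxy addon sketch that injects CSS into HTML pages.
# The ".ad-banner" selector is purely illustrative.
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    content_type = flow.response.headers.get("content-type", "")
    if "text/html" in content_type and flow.response.text:
        # Inject a stylesheet just before </head>; replace only the first match.
        flow.response.text = flow.response.text.replace(
            "</head>",
            "<style>.ad-banner { display: none !important; }</style></head>",
            1,
        )
```

The same hook could inject JavaScript or strip elements entirely, which is roughly what GlimmerBlocker does for plain HTTP.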

Cocoa server with user friendly automatic port forwarding or external ip lookup

I am writing a Mac app which will be a server that serves files to each user's mobile device.
The issue with this, of course, is getting the actual IP/port of the server host, as it will usually be inside a home network. If the IP/port changes, it's no big deal, as I plan to send that info to a middle-man server first and have my mobile app get the info from there.
I have tried UPnP with https://code.google.com/p/tcmportmapper/ but, even though I know my router supports UPnP, the library does not work as intended.
I even tried running a TURN server on my Amazon EC2 instance, but I had a very hard time figuring out what messages to send it to get the info I need.
Since last night I've been experimenting with Google's libjingle, but I'm having a hard time even getting the provided iOS example to run.
Any advice on getting this seemingly difficult task accomplished?
The port of your app will not change. The IP change can be handled by posting your server's IP to a web service every hour, or whatever time period you want.
Your server should request a URL like http://your-web-service.com/serverip.php?ip=your-updated-ip and then have serverip.php handle the rest (put it into a MySQL DB or something).
When your client starts, it should ask your site for the IP and then connect to your server with that.
This is a pretty common way of handling this type of thing; a rough sketch of the update loop follows below.
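This is only an illustration of the flow - in the actual Mac app it would be Objective-C or Swift. The serverip.php endpoint comes from the answer above; the api.ipify.org lookup service and the hourly interval are assumptions:

```python
# Sketch of the "phone home" loop: look up the public IP, report it to the
# middle-man server, repeat every hour. Lookup service and interval are assumptions.
import time
import urllib.request

UPDATE_URL = "http://your-web-service.com/serverip.php"
IP_LOOKUP = "https://api.ipify.org"

while True:
    try:
        ip = urllib.request.urlopen(IP_LOOKUP, timeout=10).read().decode().strip()
        urllib.request.urlopen(f"{UPDATE_URL}?ip={ip}", timeout=10)
    except OSError:
        pass  # transient network error; try again next round
    time.sleep(3600)
```

The mobile client then simply asks the middle-man server for the last reported IP before connecting.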

WebLogic server - large file transfers

I may be giving entirely the wrong information here, but at the moment we're a bit unsure where to look for the issue. We have a server running on WebLogic; I'm not sure which version.
Our site hosts an installer that clients need, which runs around 15 MB. Normally, this downloads perfectly fine, but we've recently been finding cases where the browser reports the download as complete but the installer can't be opened - it appears that the file size isn't what it's expected to be either, as if the download was just cut off.
The issues are limited to cases where the user is on a spotty connection, such as a 3G card in their laptop.
It seems to happen mostly on Macs, but that seems to be because the Mac .dmg file is much larger than the Windows executable. Still, from my knowledge of network protocols, a spotty network shouldn't cause the specific issue we're seeing.
At the moment, we're debugging several layers of the transfer, like our firewalls, but with my meager knowledge of WebLogic, I'm kind of curious whether there is something we could be missing in the server's configuration itself.
Unfortunately, I'm not sure if I am able to post the configuration files here - I'm pretty sure that at the moment there are no servlet rules created specifically for the installer's directory - but I was hoping someone here might at least recognize this type of issue and be able to point me in the right direction.
Check whether you have any MaxPostSize limit set.
For the responses that failed, check whether any socket timeout errors appear in the log file.
If you are using a proxy, check for errors there, mainly related to sockets.
Such issues can occur when a TCP socket times out at the firewall end, at the WLS end, or at a front-end proxy such as Apache.
There are a few other settings in WLS, like the HTTP connection timeout, I think.
Check from the admin console under the server's Protocols > General tab or HTTP tab.
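One quick client-side check, independent of WebLogic, is to compare the Content-Length the server advertises with the number of bytes actually received. A small sketch (the installer URL is a placeholder):

```python
# Compare the advertised Content-Length with the bytes actually received,
# to confirm whether downloads are being cut short. The URL is a placeholder.
import urllib.request

URL = "https://example.com/downloads/installer.dmg"

with urllib.request.urlopen(URL) as resp:
    expected = int(resp.headers.get("Content-Length", "0"))
    received = len(resp.read())

print(f"expected {expected} bytes, received {received} bytes")
if expected and received < expected:
    print("the response body was truncated before completion")
```

Running this repeatedly over the flaky connection (or through the same firewall/proxy path) can help pin down which layer is cutting the transfer off.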

Using SSL Across Entire Site

Instead of just having a few select pages for HTTPS access, I was thinking about just using SSL for my entire site.
What would be the drawbacks to this?
Edit Aug 7, 2014
Google now factors in HTTPS for rankings, so you absolutely should use SSL across your entire site:
http://googleonlinesecurity.blogspot.com/2014/08/https-as-ranking-signal_6.html
It is highly recommended these days to run the entire site on TLS (https that is) if possible.
The overhead concern is a thing of the past; it is no longer an issue with the newer TLS protocols, because sessions are now maintained and even cached for reuse if the client drops the connection. In the old days this was not the case. This means that today, the only time you have to do public-key crypto (the kind that is CPU-heavy) is when establishing the connection. So there aren't really any drawbacks when you have a cert anyway. It also means that you won't have to send people back and forth between HTTP and HTTPS, and customers will always see the lock icon in their browser.
Extra attention has been drawn to this subject since the release of Firesheep. As you might have heard, Firesheep is a Firefox add-on that lets you easily (if you are both using the same open Wi-Fi network) hijack other people's sessions on sites like Facebook, Twitter, etc. This works because those sites only use TLS selectively, and it would not be a problem for them if TLS were enabled site-wide.
So, in conclusion, the cons (such as added CPU use) are negligible with current technology, and the pros are clear, so serve all content via SSL/TLS! It's the way to go these days.
Edit: As mentioned in other answers, another problem with serving some of a site's content (like images) without SSL/TLS is that customers/users will get a very annoying "insecure content on secure page" message.
Also, as stated by thirtydot, you should redirect people to the HTTPS site, and you can even enable the flag that makes your server deny non-SSL connections.
Another edit: As pointed out in a comment below, remember that SSL/TLS isn't the only solution to all your site's security needs; there are still a lot of other considerations, but it does solve a few security issues for the users, and solves them well (even though there are ways to mount a man-in-the-middle attack, even with SSL/TLS).
It is a good idea to do this if possible, however you should:
Serve static resources (images, CSS, etc) from plain HTTP to avoid the HTTPS overhead.
(Don't do this or you will get warnings about "insecure resources").
You should also redirect the HTTP homepage to the HTTPS version so that users do not have to type HTTPS to access your site.
Drawbacks include:
Less responsive browsing experience - because there is more back and forth between the server and client with HTTPS vs HTTP; how noticeable this is will depend on the latency between the server and client.
More CPU usage on your server - because every page has to be encrypted instead of just the select few.
Server side algorithms for establishing SSL connection are expensive, so serving all content via SSL requires more CPU power on the back end.
As far as I know that is the only drawback.
SSL was not designed for virtual hosting, especially of the elastic cloud type. You may face some difficulties if you cannot control the host names of the web servers, and how they resolve to IP addresses.
But in general, it is an excellent idea, and if you allow users to log in to your site, almost a necessity (as shown by Firesheep).
I should also add what I am trying to do. I would like to allow social service logins (like Facebook), but we will also be storing credit card information.
For the pages where the user can review their credit card information or make financial transactions, better to shift into a more secure authentication mode. Facebook is a big target and attracts hackers. If someone's Facebook account gets hacked and they can then spend money or gather credit card info from your site, that would not be good. Accepting social service logins for non-critical stuff is fine, but for the more serious parts of your site, better to require additional passwords.
It is highly recommended these days to run the entire site on TLS
It's highly recommended by some people.
The total number of users your system can support is gated by either the CPU demands or the IO load; if you are up against the CPU, TLS makes it that much worse.
Encrypting the traffic makes it impossible to use certain kinds of diagnostic techniques.
Most browsers will give your user a warning if you load any non-encrypted files, which can be a huge problem if you are trying to access third-party resources.
In some circumstances (e.g. a lot of money at stake), it makes sense to just bite the bullet and encrypt everything; in others, the odds of an attacker intercepting a packet in flight and deciding to hijack the session are so low, and the amount of damage that could be done is so small, that you can just go bare-back, as it were. (For example, this session, the one I'm using to post this answer, is unencrypted and I really, really don't care.)
For still other cases, you may want to offer your user a choice. Someone using a hard-wired connection in his own basement is in a different situation than someone using Wi-Fi at the Starbucks across from a Black Hat convention.
I'm working on a protocol and a library to let you sign XHR requests. The idea is that the entire site would be set up as static files of HTML, CSS, and JavaScript, which would be loaded from a CDN. The actual application would be conducted entirely by JavaScript making AJAX and COMET requests. Any request that has to be authenticated is, but as a practical matter, most requests do not. I've done several sites this way -- they're very, very scalable.
We run a fully forced, secured website and shop. I've done this on the advice of a friend who knows a thing or two about website security.
The positive is that our website doesn't seem noticeably slower. Google Analytics also runs, although I can't get e-commerce tracking to work. Whether it has protected us against attacks I can't say, of course, but so far no trouble.
The bad thing, however, is that you will have a very hard time running YouTube and social ("Like") boxes on a secured website.
Tips for good security:
Good webhost (they will cost you but it's worth it!)
No login for visitors. It kills usability, but with a fast and easy checkout it works, and the obvious pro is that you simply don't store sensitive info.
Use a good Payment Service Provider and let them handle payment.
Regarding point 2: I know this won't work for a lot of websites, but "what you don't have can't be stolen".
We have been selling on our webshop without logins for two years now, and it works fine as long as the checkout is mega simple and lightning fast.