Apache Server Timing Out / Taking a Long Time to Respond

I'm in trouble and need help figuring out the problem. I've run my website on my Apache server for quite some time now and recently ran into an issue that has me stumped.
My server has been DDoS attacked in the past, requiring me to move it behind a proxy/WAF. For a while I was behind Sucuri, as it provided the best affordable defense at the time. The attacks tapered off, and I moved to Cloudflare's free plan to protect my IP address while lightening my monthly server costs. The switch was smooth and everything has been working fine for several months.
I was recently hit again with what seemed to be a layer 7 attack. I could see several IP addresses making 10-20 requests every couple of seconds in my domain's access.log. Running netstat returned thousands of connections in TIME_WAIT and SYN_RECV, all from Cloudflare IP addresses. This led me to believe the attack was aimed at my domain, proxied through Cloudflare, and reaching my server regardless of my security settings. I confirmed this by viewing the statistics provided by Cloudflare and seeing millions of requests made in a short time period. Unfortunately, this makes it even more difficult to pinpoint the attack. What should I do?
I've enabled SYN cookies, added mod_cloudflare to Apache, activated Cloudflare's WAF / rate limiting rules, blocked offending IP addresses, and used mod_evasive to automatically blacklist future offenders. This has reduced (and almost stopped) the malicious requests seen in the Apache access log, but it has not resolved the timeouts.
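For reference, the SYN cookie and mod_evasive parts of that setup look roughly like this (the paths and thresholds below are illustrative placeholders, not my exact values):

# /etc/sysctl.conf (applied with sysctl -p): enable SYN cookies
net.ipv4.tcp_syncookies = 1

# Apache mod_evasive settings, e.g. in /etc/apache2/mods-enabled/evasive.conf
<IfModule mod_evasive20.c>
    # Block a client that requests the same page more than DOSPageCount times
    # per DOSPageInterval seconds, or more than DOSSiteCount objects per
    # DOSSiteInterval seconds, for DOSBlockingPeriod seconds.
    DOSHashTableSize    3097
    DOSPageCount        5
    DOSPageInterval     1
    DOSSiteCount        50
    DOSSiteInterval     1
    DOSBlockingPeriod   60
</IfModule>

Because the traffic arrives via Cloudflare, mod_evasive only sees the real client addresses thanks to mod_cloudflare restoring them; without that it would just be counting Cloudflare's own proxy IPs.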
According to Cloudflare analytics, I've received only 16,000 requests in the previous 6 hours (as opposed to the tens of millions while I was being actively attacked), but I still get timeouts on every other request (even when connecting directly, without Cloudflare).
Thanks

Boost proxy server security and defend against DoS attacks by blocking unsolicited packets or by using load balancers; these measures help reduce the impact the attack has on the server.
There are also attacks that use a proxy server on the Internet as a transit device to hide the originating source of the attack on your network. Blocking open or malicious proxy servers from accessing the network or servers is one way to prevent this type of attack from being successful.
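As a rough sketch of the "blocking unsolicited packets" idea, assuming iptables is available (the thresholds are placeholders to tune; note that behind Cloudflare these per-source limits would count Cloudflare's proxy IPs, so they are mainly useful for traffic that hits the origin directly):

# Drop sources holding too many simultaneous connections to the web ports
iptables -A INPUT -p tcp --syn -m multiport --dports 80,443 -m connlimit --connlimit-above 50 -j DROP
# Rate-limit new connections per source IP
iptables -A INPUT -p tcp --syn -m multiport --dports 80,443 -m hashlimit --hashlimit-name web-syn --hashlimit-mode srcip --hashlimit-above 30/second -j DROP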
I hope this helps.

I think you should ask your web host or Cloudflare support.
You could also raise a ticket with Sucuri. Their team works closely with the respective developers to fix security issues; once fixed, Sucuri patches those vulnerabilities at the firewall level.
During the attacks, a website with heavy traffic like yours would slow down significantly due to the high server load. Sometimes it can even cause the server to restart, causing downtime.
When you enable Sucuri, all your site traffic goes through their CloudProxy firewall before reaching your hosting server. This allows them to block the attacks and only send you legitimate visitors.
Sucuri's firewall blocks attacks before they even touch your server. As one of the leading security companies, Sucuri also proactively researches and reports potential security issues to the WordPress core team as well as to third-party plugin developers.
If this still doesn't resolve the problem, it may be a different type of attack:
TCP Connection Attacks
These attempt to use up all the available connections to infrastructure devices such as load-balancers, firewalls and application servers. Even devices capable of maintaining state on millions of connections can be taken down by these attacks.
Volumetric Attacks
These attempt to consume the bandwidth either within the target network/service, or between the target network/service and the rest of the Internet. These attacks are simply about causing congestion.
Fragmentation Attacks
These send a flood of TCP or UDP fragments to a victim, overwhelming the victim's ability to re-assemble the streams and severely reducing performance.
Application Attacks
These attempt to overwhelm a specific aspect of an application or service and can be effective even with very few attacking machines generating a low traffic rate (making them difficult to detect and mitigate).

How to Harden Apache against security vulnerabilities

We have Apache 2.4.6 installed on RHEL 7.5 in production.
The security audit team recently found a few vulnerabilities which need to be fixed.
1.) During the audit, it was observed that the web server is vulnerable to a Slowloris attack.
Mitigation suggested for this:
Rate limit incoming requests - Restricting access based on certain usage factors will help mitigate a Slowloris attack. Techniques such as limiting the maximum number of connections a single IP address is allowed to make, restricting slow transfer speeds, and limiting the maximum time a client is allowed to stay connected are all approaches for limiting the effectiveness of low and slow attacks.
2.)The lack of HSTS allows downgrade attacks, SSL-stripping man-in-the-middle attacks, and weakens cookie-hijacking protections.
Mitigation: Configure the remote web server to use HSTS response header.
3.) During the audit, it was observed that mod_security, an application security firewall for Apache, is not implemented.
Mitigation:
Implement mod_security to detect and prevent application security attacks in a timely manner.
I don't have much of an idea how to configure these.
Please help me with the steps for getting the above issues fixed.
I can point you in the right direction, perhaps, but the full configuration/setup for two of these is not short-checklist-friendly.
1) mod_qos is a way to limit your exposure to Slowloris. It's designed to be used in a reverse proxy server. Not sure if it fits your situation, but it's a place for you to start looking; see the sketch below. I'm not sure total immunity to Slowloris can be achieved, at least not without potentially spending a lot of money on it.
http://mod-qos.sourceforge.net
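If mod_qos turns out to be a poor fit, Apache 2.4 also ships mod_reqtimeout, which targets the same slow-header/slow-body pattern. A minimal sketch with illustrative values (only enable the modules you actually have installed):

# mod_reqtimeout: drop connections that send headers or body too slowly
<IfModule reqtimeout_module>
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>

# mod_qos: cap concurrent connections per client IP and enforce a minimum data rate
<IfModule qos_module>
    QS_SrvMaxConnPerIP 30
    QS_SrvMinDataRate 120 1200
</IfModule>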
2) This one is easy. For Apache, put this in the site configuration file:
Header always set Strict-Transport-Security "max-age=15638400"
That essentially tells the user-agent to never even think about using http, only https, on this site for the next 6 months (roughly).
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
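If (and only if) every subdomain is also served over HTTPS, the header can be extended with includeSubDomains, and it should only be sent from the HTTPS virtual host (this assumes mod_headers is enabled):

<VirtualHost *:443>
    # ... existing SSL directives ...
    Header always set Strict-Transport-Security "max-age=15638400; includeSubDomains"
</VirtualHost>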
3) mod_security should be available in one of the RHEL repos (probably EPEL). Setup and configuration can get complex, so start here:
https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual-(v2.x)
(It's the manual for version 2.x; there's a 3.x, but I suspect it hasn't made it into RHEL yet, so I'm posting the 2.x version.)
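A rough starting point on RHEL 7, assuming the package names match the stock repos (check with yum search mod_security) and the default config path used by the RPM:

# Install mod_security plus the OWASP core rule set, then restart Apache
yum install mod_security mod_security_crs
systemctl restart httpd

# In /etc/httpd/conf.d/mod_security.conf, begin in detection-only mode
SecRuleEngine DetectionOnly
SecAuditLog /var/log/httpd/modsec_audit.log

Once you've reviewed the audit log and tuned out false positives, switch SecRuleEngine to On to actually block requests.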

Does Cloudflare accelerate WebSockets?

Will Cloudflare accelerate my websocket data transfer speed by default (without any additional configurations)?
What paid and free configurations can I use to improve my websocket connection? Will Argo help here?
What level of performance increase should I expect from these different configurations?
P.S. I know that a CDN mostly concentrates on optimizing the serving of static content, but I'm still curious whether it will help at least a bit with dynamic content.
CDNs accelerate static content that can be cached and distributed to servers in different geolocations. But WebSockets are used to serve dynamic content, so the limiting factors there are the power of the server and its geolocation.
So Cloudflare, or any other CDN, is not able to accelerate WebSockets in the same way it can with static content, though Argo might help in certain cases. The real limiting/problematic factor with WebSockets is your application/setup handling the requests.
There are, however, certain conditions under which Cloudflare can accelerate the connection. Some ISPs want extra money for better routing ("double paid traffic"), and some data center owners refuse to pay that additional money.
So it may be that the non-paid route is slower than a route using Cloudflare as a proxy, provided Cloudflare pays for the better routing. But then it's not the technical part of Cloudflare that accelerates the connection, but the contract. You might need to ask your hosting provider about that case.
Note that Cloudflare will reset the websocket connections now and then:
“Logs from tcpdump show that Cloudflare sends a TCP reset after 1-5 minutes, despite both client and server being in sync on packets sent in each direction” - https://community.cloudflare.com/t/websockets-disconnected-in-aws-tokyo/44680
“If you’re intending to use CF websockets, be prepared for random (and potentially massive) connection drops, and be sure you’re architected to handle these disconnects gracefully. Cloudflare rolling restarts have caused hundreds of thousands of websocket connections to be disconnected in a matter of minutes for us”; “when terminating a WebSocket connection due to releases CloudFlare now signals this action to both client and origin server by sending the 1001 status code” - https://news.ycombinator.com/item?id=11638081
“When Cloudflare releases new code to its global network, we may restart servers, which terminates WebSockets connections” - https://support.cloudflare.com/hc/en-us/articles/200169466-Using-Cloudflare-with-WebSockets#12345687
So to answer your question: Argo Smart Routing inside Cloudflare can, in theory, accelerate WebSocket connections and make the routing more reliable. But we also know for a fact that proxying through Cloudflare will lead to regular WebSocket disconnects.
You might use Cloudflare for backup connections instead, to improve overall resilience in the face of routing anomalies.

Medium sized website: Transition to HTTPS, Apache and reverse proxy

I have a medium sized website called algebra.com. As of today, it is ranked 900th website in US in Quantcast ratings.
At the peak of its usage, during weekday evenings, it serves 120-150 requests for objects per second. Almost all objects, INCLUDING IMAGES, are dynamically generated.
It has 7.5 million page views per month.
It is served by Apache 2 on Ubuntu and is supplemented by a Perlbal reverse proxy, which helps reduce the number of Apache slots/child processes in use.
I spent an inordinate amount of time working on performance for HTTP and the result is a fairly well functioning website.
Now that the times call for transition to HTTPS (fully justified here, as I have logons and registered users), I want to make sure that I do not end up with a disaster.
I am afraid, however, that I may end up with a performance nightmare, as HTTPS sessions last longer and I am not sure whether a reverse proxy can help as much as it did with HTTP.
Secondly, I want to make sure that I will have enough CPU capacity to handle HTTPS traffic.
Again, this is not a small website with a few hits per second; we are talking 100+ hits per second.
Additionally, I run multiple sites on one server.
For example, can I have a reverse proxy, that supports several virtual domains on one IP (SNI), and translates HTTPS traffic into HTTP, so that I do not have to encrypt twice (once by apache for the proxy, and once by the proxy for the client browser)?
What is the "best practices approach" to have multiple websites, some large, served by a mix of HTTP and HTTPS?
Maybe I can continue running perlbal on port 80, and run nginx on port 443? Can nginx be configured as a reverse proxy for multiple HTTPS sites?
You really need to load test this, and no one can give a definitive answer other than that.
I would offer the following pieces of advice though:
First up, Stack Overflow is really for programming questions; this question probably belongs on the sister site www.serverfault.com.
HTTPS processing is, IMHO, not an issue for modern hardware unless you are encrypting large volumes of traffic (e.g. video streaming), especially with the proper caching and other performance tuning that I presume you've already done, based on what you say in your question. However, I haven't dealt with a site with your traffic levels, so it could become an issue there.
There will be a small hit to clients as they negotiate the HTTPS session on initial connection. This is on the order of a few hundred milliseconds, will only happen on the initial connection of each session, and is unlikely to be noticed by most people, but it is there.
There are several things you can do to optimise HTTPS, including choosing fast ciphers and implementing session resumption (there are two methods for this, and it can get complicated on load-balanced sites). SSL Labs runs an excellent HTTPS tester to check your setup, Mozilla has some great documentation and advice, or you could check out my own blog post on this.
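On the Apache side, the session-resumption and cipher pieces of that tuning look roughly like this (assuming mod_ssl with the shmcb socache provider; the cache paths/sizes and the cipher string are illustrative, so generate a current cipher list from Mozilla's configurator rather than copying this one):

# Global: TLS session cache and OCSP stapling cache
SSLSessionCache shmcb:/var/run/apache2/ssl_scache(512000)
SSLSessionCacheTimeout 300
SSLStaplingCache shmcb:/var/run/apache2/ssl_stapling(65536)

# Global or per virtual host
SSLProtocol all -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
SSLUseStapling on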
As to whether you terminate HTTPS at your endpoint (proxy/load balancer), that's very much up to you. Yes, there will be a performance hit if you re-encrypt to connect to your actual server. Most proxy servers also allow you to just pass the HTTPS traffic through to your main server so you only decrypt once, but then you lose the original IP address from your web server logs, which can be useful. It also depends on whether you access your web server directly at all. For example, at my company we don't go through the load balancer for internal traffic, so we enable HTTPS on the web server as well and have the load balancer re-encrypt when connecting to it, so we can view the site over HTTPS either way.
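On the SNI question: yes, nginx can terminate TLS for several names on one IP and hand plain HTTP to the backends, which fits the perlbal-on-80 / nginx-on-443 idea. A minimal sketch (certificate paths and the backend port are placeholders):

server {
    listen 443 ssl;
    server_name algebra.com;                 # one server block per site; SNI selects the cert

    ssl_certificate     /etc/ssl/algebra.com.crt;
    ssl_certificate_key /etc/ssl/algebra.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;    # plain HTTP to Perlbal/Apache, so no double encryption
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

The X-Forwarded-For header preserves the original client IP that would otherwise be lost from the backend's logs.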
Other things to be aware of:
You could see an SEO hit during migration. Make sure you redirect all traffic, tell Google Search Console your preferred site (http or https), update your sitemap and all links (or make them relative).
You need to be aware of insecure content issues. All resources (e.g. CSS, JavaScript and images) need to be served over HTTPS, or browsers will show warnings and refuse to use those resources. HSTS can help with links on your own domain for browsers that support HSTS, and CSP can also help (either to report on them or to automatically upgrade them, for browsers that support upgrade-insecure-requests).
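For the CSP option, the upgrade variant is a single response header; a minimal Apache sketch, assuming mod_headers is enabled:

# Ask supporting browsers to fetch http:// subresources over https instead
Header always set Content-Security-Policy "upgrade-insecure-requests"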
Moving to HTTPS-only does take a bit of effort, but it's a one-off job, and after that it makes your site much easier to manage than trying to maintain two versions of the same site. The web is moving to HTTPS more and more, and if you have (or are planning to have) logged-in areas then you have no choice, as you should absolutely not use HTTP for those. Google gives a slight ranking boost to HTTPS sites (though it's apparently quite small, so it shouldn't be your main reason to move), and has even talked about actively marking HTTP sites as insecure. Better to be ahead of the curve, IMHO, and make the move now.
Hope that's useful.

How to set up a tunnel

To bypass filtering in my country, I've rented a server abroad (CentOS 5) with 256 MB of RAM. The client is Ubuntu 12.04. I run this command on the client to set up the tunnel:
ssh -CNfD 1080 <user>@<server-ip>
In Firefox settings, I defined a socks proxy server:
localhost:1080
Using this method, everything works properly and I can bypass the restrictions. But the speed degrades noticeably, and I don't know why. I have some guesses that I want to share with you to get your opinions:
If I use a direct connection, most sites use HTTP, but when I use the proxy, all sites have to go over the secure connection provided by SSH. My provider may have throttled the speed of secure connections. (I think this may be the issue, but HTTPS sites that don't use the proxy still seem to open faster.)
Such tunnelling inherently causes internet speed to decrease, maybe because of the overhead applied to encrypted packets or for some other reason. If so, what can I use instead? I have a working dedicated server.
PS. The server internet connection speed is much higher than the speed (bandwidth) between client and server.
PPS. Could I set up an HTTP tunnel instead? Or use some software other than SSH that is faster, has less overhead, or doesn't use encryption?
Please help me figure out what is really happening, since I'm not so familiar with these concepts.
I am afraid there is not much you can do...
Indeed, it is to be expected that speed and throughput decrease (and latency increases) when you tunnel your payload data through an encrypted tunnel. The main reasons are the overhead of encryption and, depending on the connection at hand, the modified (longer) routing. You have to take into account that most of the encryption has to be done by your tunnel endpoint, so your server in this case. If that system lacks computational power, the result will obviously be reduced throughput. Things like CDNs also won't work the same way any more.
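One knob you do control is the SSH cipher: choosing a cheaper one and disabling compression reduces the CPU work on the 256 MB server. A sketch of the same tunnel with those options (which ciphers are available depends on the OpenSSH versions on both ends, so adjust accordingly):

ssh -c aes128-ctr -o Compression=no -NfD 1080 <user>@<server-ip>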
It might very well be that your service provider throttles different types of connections. Especially in areas with heavy control and censorship of communication content, it clearly makes sense for the authorities to prefer unencrypted payloads, i.e. payloads that can be inspected and filtered. Everything that keeps people from using encryption is in their interest, so throttling encrypted communication makes sense from their point of view. Sad, but true nevertheless.
The only thing you can really influence is your tunnel endpoint, i.e. your server in this case. Increased computational power could remove a bottleneck if that system shows high load caused by the encryption.
Its network connection is also of interest, just like your local connection: the encrypted tunnel requires more control data on the upload side compared to unencrypted traffic. Since upload bandwidth is typically much lower than download bandwidth, this could also be an issue.

Using SSL Across Entire Site

Instead of just having a few select pages for HTTPS access, I was thinking about just using SSL for my entire site.
What would be the drawbacks to this?
Edit Aug 7, 2014
Google now factors in HTTPS for rankings, so you absolutely should use SSL across your entire site:
http://googleonlinesecurity.blogspot.com/2014/08/https-as-ranking-signal_6.html
It is highly recommended these days to run the entire site on TLS (https that is) if possible.
The overhead concern is a thing of the past; it is no longer an issue with newer TLS protocols, because sessions are now maintained and even cached for reuse if the client drops the connection. In the old days this was not the case, which means that today the only time you have to do public-key crypto (the CPU-heavy kind) is when establishing the connection. So there aren't really any drawbacks when you have a cert anyway. This also means that you won't have to send people back and forth between http and https, and customers will always see the lock icon in their browser.
Extra attention has been drawn to this subject after the release of Firesheep. As you might have heard, Firesheep is a Firefox add-on that lets you easily hijack other people's sessions on sites like Facebook, Twitter, etc. (if you are both using the same open Wi-Fi network). This works because those sites only use TLS selectively; it would not be a problem for them if TLS were enabled site-wide.
So, in conclusion, the cons (such as added CPU use) are negligible with the state of current technology, and the pros are clear, so serve all content via SSL/TLS! It's the way to go these days.
Edit: As mentioned in other answers, another problem with serving some of a site's content (like images) without SSL/TLS, is that customers/users will get a very annoying "unsecure content on secure page" message.
Also, as stated by thirtydot, you should redirect people to the https site. And you can even enable the flag that makes your server deny non-ssl connections.
Another edit: As pointed out in a comment below, remember that SSL/TLS isn't the only solution to all your site's security needs; there are still a lot of other considerations. But it does solve a few security issues for the users, and solves them well (even though there are still ways to perform a man-in-the-middle attack, even with SSL/TLS).
It is a good idea to do this if possible; however, you should:
Serve static resources (images, CSS, etc) from plain HTTP to avoid the HTTPS overhead.
(Don't do this or you will get warnings about "insecure resources").
You should also redirect the HTTP homepage to the HTTPS version so that users do not have to type HTTPS to access your site.
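A minimal Apache sketch of that redirect, with a placeholder hostname (mod_alias provides Redirect):

<VirtualHost *:80>
    ServerName www.example.com
    # Send every HTTP request to the HTTPS version of the same URL
    Redirect permanent / https://www.example.com/
</VirtualHost>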
Drawbacks include:
Less responsive browsing experience, because there is more back and forth between the server and client with HTTPS vs. HTTP; how noticeable this is will depend on the latency between the server and client.
More CPU usage on your server - because every page has to be encrypted instead of just the select few.
Server side algorithms for establishing SSL connection are expensive, so serving all content via SSL requires more CPU power on the back end.
As far as I know that is the only drawback.
SSL was not designed for virtual hosting, especially of the elastic cloud type. You may face some difficulties if you cannot control the host names of the web servers, and how they resolve to IP addresses.
But in general, it is an excellent idea, and if you allow users to log in to your site, almost a necessity (as shown by Firesheep).
I should also add what I am trying to do. I would like to allow social service logins (like Facebook), but we will also be storing credit card information.
For the pages where the user can review their credit card information or make financial transactions, it is better to shift into a more secure authentication mode. Facebook is a big target and attracts hackers. If someone's Facebook account gets hacked and they can then spend money or gather credit card info from your site, that would not be good. Accepting social service logins for non-critical features is fine, but for the more sensitive parts of your site, it is better to require an additional password.
"It is highly recommended these days to run the entire site on TLS"
It's highly recommended by some people.
The total number of users your system can support is gated by either CPU demands or I/O load; if you are up against the CPU, TLS makes it that much worse.
Encrypting the traffic makes it impossible to use certain kinds of diagnostic techniques.
Most browsers will give your user a warning if you load any non-encrypted files, which can be a huge problem if you are trying to access third-party resources.
In some circumstances (e.g. a lot of money at stake), it makes sense to just bite the bullet and encrypt everything; in others, the odds of an attacker intercepting a packet in flight and deciding to hijack the session are so low, and the amount of damage that could be done is so small, that you can just go bare, as it were. (For example, this session, the one I'm using to post this answer, is unencrypted and I really, really don't care.)
For still other cases, you may want to offer your user a choice. Someone using a hard-wired connection in their own basement is in a different situation than someone using Wi-Fi at the Starbucks across from a Black Hat convention.
I'm working on a protocol and a library to let you sign XHR requests. The idea is that the entire site would be set up as static files of HTML, CSS, and JavaScript, which would be loaded from a CDN. The actual application would be conducted entirely by JavaScript making AJAX and COMET requests. Any request that has to be authenticated is, but as a practical matter, most requests do not. I've done several sites this way -- they're very, very scalable.
We run a fully forced, secured website and shop. I've done this on the advice of a friend that knows a thing or two about website security.
The positive is that our website doesn't seem noticeably slower. Google Analytics also runs, although I can't get e-commerce tracking to work. Whether it has protected us against attacks I can't say, of course, but so far no trouble.
The bad thing, however, is that you will have a very hard time running YouTube and social ("Like") boxes on a secured website.
Tips for good security:
Good webhost (they will cost you but it's worth it!)
No login for visitors. It kills some usability, but with a fast and easy checkout it works, and the obvious advantage is that you simply don't store sensitive info.
Use a good Payment Service Provider and let them handle payment.
Regarding point 2: I know this won't work for a lot of websites, but "what you don't have can't be stolen".
We have been selling on our webshop without logins for 2 years now, and it works fine as long as the checkout is mega simple and lightning fast.