How to harden Apache against security vulnerabilities

We have Apache 2.4.6 installed on RHEL 7.5 in production.
The security audit team recently found a few vulnerabilities that need to be fixed.
1.) During the audit it was observed that the web server is vulnerable to the Slowloris attack.
Mitigation suggested for this:
Rate limit incoming requests - Restricting access based on certain usage factors will help mitigate a Slowloris attack. Techniques such as limiting the maximum number of connections a single IP address is allowed to make, restricting slow transfer speeds, and limiting the maximum time a client is allowed to stay connected are all approaches for limiting the effectiveness of low and slow attacks.
2.)The lack of HSTS allows downgrade attacks, SSL-stripping man-in-the-middle attacks, and weakens cookie-hijacking protections.
Mitigation: Configure the remote web server to send the HSTS response header.
3.) During the audit, it was observed that mod_security, an application-level firewall for Apache, is not implemented.
Mitigation:
Implement mod_security to detect and prevent application-level attacks in a timely manner.
I don't have much experience configuring these.
Please help me with the steps to get the above issues fixed.

I can point you in the right direction, perhaps, but the full configuration/setup for two of these is not short-checklist-friendly.
1) mod_qos is one way to limit your exposure to Slowloris. It's designed to be used in a reverse proxy server. Not sure if it fits your situation, but it's a place for you to start looking. I'm not sure total immunity to Slowloris can be achieved, at least not without the potential for spending lots of money on it.
http://mod-qos.sourceforge.net
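To give a concrete starting point, a mod_qos setup aimed at slow requests might look something like the following (the numbers are illustrative examples, not tuned recommendations - you'd need to adjust them for your own traffic):

    # require a minimum transfer rate per connection (bytes/sec), scaling up
    # towards the second value as the server gets busier; slower clients get dropped
    QS_SrvMinDataRate 120 1200
    # cap the number of simultaneous connections a single client IP may open
    QS_SrvMaxConnPerIP 50
    # stop handing out keep-alive once 85% of the available connections are busy
    QS_SrvMaxConnClose 85%

You could also look at mod_reqtimeout, which ships with Apache 2.4 and targets the same class of attack, but mod_qos gives you the per-IP limits mentioned above.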
2) This one is easy. For Apache, put this in the site configuration file:
Header always set Strict-Transport-Security "max-age=15638400"
That essentially tells the user-agent to never even think about using http, only https, on this site for the next 6 months (roughly).
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
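If it helps to see it in context, a minimal sketch (hostname and certificate directives are placeholders; mod_headers needs to be loaded, which it is by default on RHEL) would be:

    <VirtualHost *:443>
        ServerName www.example.com
        SSLEngine on
        # ... your existing certificate/key directives ...
        # only add includeSubDomains if every subdomain is served over HTTPS
        Header always set Strict-Transport-Security "max-age=15638400; includeSubDomains"
    </VirtualHost>

Remember the header only takes effect when the response is actually served over HTTPS, so the plain-HTTP vhost should simply redirect to the HTTPS one.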
3) mod_security should be available in one of the RHEL repos (probably EPEL). Setup and configuration of that can get complex, so start here:
https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual-(v2.x)
(It's the manual for version 2.x; there's a 3.x, but I suspect it hasn't made it into RHEL yet, so I'm posting the 2.x version.)
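On RHEL 7 the packages are usually installed with "yum install mod_security mod_security_crs" (check which repo actually provides them on your system). After that, a minimal sketch of /etc/httpd/conf.d/mod_security.conf looks roughly like this - start in detection-only mode and switch to blocking once you've reviewed the audit log:

    <IfModule mod_security2.c>
        # DetectionOnly logs matches without blocking; change to On to enforce
        SecRuleEngine DetectionOnly
        SecAuditEngine RelevantOnly
        SecAuditLog /var/log/httpd/modsec_audit.log
    </IfModule>

The core rule set package normally wires its rules into the Apache config for you, but expect to spend time tuning out false positives for your applications.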

What is better with HTTP/2: Apache vs Nginx?

I am choosing a better web server for a big SPA application with dozens of JS and CSS files. With HTTP/2 we no longer have to merge them into two big files (3 MB for the JS), which take a long time to load on a slow connection. But which server is better for the new HTTP/2 paradigm?
Nginx was designed to solve HTTP/1 problems, and its advantage was serving numerous connections well; with HTTP/2 there is only one connection for all the files, so that feature seems redundant now. What do you think, and what can you advise me?
That's a very subjective question, and probably not a good fit for Stack Overflow, so I imagine this will get closed. But here's my two cents...
Full disclosure: I primarily use Apache.
For a start, let's address one of your incorrect points: Nginx wasn't designed to solve HTTP/1 problems. Nginx was designed to solve some of the scalability problems of previous web servers by being based on an asynchronous, event-driven model. Under HTTP/2 there should be fewer connections per client, which you could argue makes scalability less of an issue as each client uses only 1/6th of the resources they did previously - but that's probably a little simplistic. Apache has had an event-driven MPM for years now too (though it is often not turned on by default in case of any thread-unsafe PHP applications - but this would also be a problem with Nginx!). This brings them more back in line, though there's still a lot of debate about this and many say Nginx is still faster. In my experience, unless you are dealing with truly huge volumes (in which case you should be looking at CDNs, load balancers and cache accelerators), few will ever notice the difference between Nginx and Apache. This is especially true when downstream applications and systems come into play - a slow PHP application will quickly negate any performance or scalability gains at the web server level.
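As an example, on a RHEL-style layout switching Apache to the event MPM is just a question of which MPM module gets loaded (the file path below is the usual RHEL one and will differ on other distributions):

    # /etc/httpd/conf.modules.d/00-mpm.conf
    # comment out the prefork MPM...
    #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
    # ...and load the event MPM instead (only one MPM may be loaded at a time)
    LoadModule mpm_event_module modules/mod_mpm_event.so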
Anyway, back to your main question:
For HTTP/2 support, my choice would be Apache over Nginx. It has had better HTTP/2 support for some time. Nginx only added HTTP/2 push support in early 2018, for example, whereas Apache has had that for a number of years now. Apache also supports a PushDiary (based on the now-abandoned Cache-Digests proposal) to prevent pushing resources that have already been sent, supports 103 Early Hints for pushing early, and has push prioritisation options. Moving on from HTTP/2 push, Apache also supports using HTTP/2 in proxy mode (though it's still marked as experimental and the usefulness of this is questionable at the moment), and HTTP/2 over plain HTTP (h2c - though again the usefulness is questionable since browsers do not support this). I also find the main developer of the Apache HTTP/2 implementation very responsive on the GitHub page for the mod_http2 module (included as part of core Apache since 2.4.18 and no longer marked as "experimental" since 2.4.26).
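To give a feel for what that looks like in Apache configuration, here is a minimal mod_http2 sketch (the pushed stylesheet path is just an example):

    # advertise HTTP/2 over TLS, falling back to HTTP/1.1 (needs mod_http2)
    Protocols h2 http/1.1
    # enable server push; resources referenced in Link: preload headers get pushed
    H2Push on
    # send 103 Early Hints for preload links (2.4.30 or later)
    H2EarlyHints on
    <Location /index.html>
        Header add Link "</css/site.css>;rel=preload;as=style"
    </Location>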
On the flip side, I understand that Cloudflare uses a customised Nginx-based web server, and they have had HTTP/2 push for over a year now (it was they who backported this implementation to Nginx). Given Cloudflare's scale, that speaks volumes about that implementation, though I'm not sure how customised it is compared to the core Nginx code.
There is also an HTTP/2 conformance testing tool available, and when I ran it against some common HTTP/2 servers (for a book I wrote on the subject, btw), the results clearly showed Apache as the most compliant with the spec.
Now, to be fair, most of the errors come from not responding correctly to bad requests, which in a perfect world should never be sent anyway, so they aren't that important. But still, we don't live in a perfect world, and error checking is an important part of technology, so I for one would certainly prefer the more compliant server. Similarly, as pointed out in the comments below, the tool and the web servers themselves can be subject to race conditions and other problems which may incorrectly report errors.
Ultimately you are best off choosing the implementation you are most comfortable with. The general feeling has always been that Nginx is lighter and easier to configure, but on the flip side perhaps isn't as fully featured as Apache because of that. HTTP/2 support seems to continue that theme. If you want to play with upcoming HTTP/2 features then, to me, Apache definitely has the edge at the moment (though there's nothing to say that won't change in the future). However, for the basic use cases of HTTP/2 they can probably be considered similar. Even HTTP/2 push is not used much yet, and there are serious concerns it could degrade performance if not used appropriately, or due to implementation issues, which is probably why it has not been a priority for Nginx and why they only implemented it later.

Apache Server Timing Out / Taking a Long Time

I'm in trouble, please help me figure out the problem. I've run my website on my Apache server for quite some time now and recently ran into an issue that has me stumped.
My server has been DDOS attacked in the past requiring me to move my server behind a proxy/WAF. For some time I was behind Sucuri as it provided the best affordable defense at the time. The attacks tapered off and I moved to Cloudflare free to protect my IP address while lightening up on my monthly server costs. The switch was smooth and everything has been working fine for several months.
I was recently hit again with what seemed to be a layer 7 attack. I could see several IP addresses making 10-20 requests every couple of seconds in my domain's access.log. Running netstat returned thousands of TIME_WAIT and SYN_RECV entries, all with Cloudflare IP addresses. This led me to believe the attack was against my domain, being proxied by Cloudflare, and reaching my server regardless of my security settings. I confirmed this by viewing the statistics provided by Cloudflare and seeing millions of requests being made in a short time period. Unfortunately this is making it even more difficult to pinpoint the attack. What should I do?
I've enabled SYN cookies, added mod_cloudflare to Apache, activated Cloudflare's WAF / rate-limiting rules, blocked offending IP addresses, and used mod_evasive to automatically blacklist future offenders. This has reduced (and almost stopped) the amount of malicious requests seen in the Apache access log but has not resolved the timeouts.
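For context, the mod_evasive setup I mentioned looks roughly like the following (the thresholds are the module's stock example values rather than anything carefully tuned):

    <IfModule mod_evasive24.c>
        DOSHashTableSize    3097
        # block an IP that requests the same page more than 2 times per second
        DOSPageCount        2
        DOSPageInterval     1
        # or that makes more than 50 requests to the whole site per second
        DOSSiteCount        50
        DOSSiteInterval     1
        # keep the block in place for 10 seconds (renewed while the abuse continues)
        DOSBlockingPeriod   10
        DOSLogDir           /var/log/mod_evasive
    </IfModule>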
According to Cloudflare analytics, I've only received 16,000 requests in the previous 6 hours (as opposed to the tens of millions when I was being actively attacked) but I get timeouts on every other request (even directly connecting, without Cloudflare).
Thanks
Boost proxy server security and defend against DoS attacks by blocking unsolicited packets or by using load balancers, as these measures can help reduce the impact the attack has on the server.
There are also attacks that use a proxy server on the Internet as a transit device to hide the originating source of the attack on your network. Blocking open or malicious proxy servers from accessing the network or servers is one way to prevent this type of attack from succeeding.
I hope this helps.
I think you will have to ask your web host or Cloudflare support,
and also raise a ticket with Sucuri. Their team works closely with the respective developers to fix security issues. Once fixed, Sucuri patches those vulnerabilities at the firewall level.
During attacks, a website with heavy traffic like yours will slow down significantly due to the high server load. Sometimes it can even cause the server to restart, causing downtime.
When you enable Sucuri, all your site traffic goes through their cloudproxy firewall before coming to your hosting server. This allows them to block all the attacks and only send you legitimate visitors.
Sucuri's firewall blocks attacks before they even touch your server. Since they're one of the leading security companies, Sucuri proactively researches and reports potential security issues to the WordPress core team as well as to third-party plugin developers.
If this still does not resolve the problem, then it may be a different type of attack:
TCP Connection Attacks
These attempt to use up all the available connections to infrastructure devices such as load-balancers, firewalls and application servers. Even devices capable of maintaining state on millions of connections can be taken down by these attacks.
Volumetric Attacks
These attempt to consume the bandwidth either within the target network/service, or between the target network/service and the rest of the Internet. These attacks are simply about causing congestion.
Fragmentation Attacks
These send a flood of TCP or UDP fragments to a victim, overwhelming the victim's ability to re-assemble the streams and severely reducing performance.
Application Attacks
These attempt to overwhelm a specific aspect of an application or service and can be effective even with very few attacking machines generating a low traffic rate (making them difficult to detect and mitigate).

Medium sized website: Transition to HTTPS, Apache and reverse proxy

I have a medium-sized website called algebra.com. As of today, it is ranked the 900th website in the US in Quantcast ratings.
At the peak of its usage, during weekday evenings, it serves 120-150 object requests per second. Almost all objects, INCLUDING IMAGES, are dynamically generated.
It has 7.5 million page views per month.
It is served by Apache 2 on Ubuntu and is supplemented by a Perlbal reverse proxy, which helps reduce the number of Apache slots/child processes in use.
I spent an inordinate amount of time working on performance for HTTP and the result is a fairly well functioning website.
Now that the times call for transition to HTTPS (fully justified here, as I have logons and registered users), I want to make sure that I do not end up with a disaster.
I am afraid, however, that I may end up with a performance nightmare, as HTTPS sessions last longer and I am not sure whether a reverse proxy can help as much as it did with HTTP.
Secondly, I want to make sure that I will have enough CPU capacity to handle HTTPS traffic.
Again, this is not a small website with a few hits per second, we are talking 100+ hits per second.
Additionally, I run multiple sites on one server.
For example, can I have a reverse proxy that supports several virtual domains on one IP (SNI) and translates HTTPS traffic into HTTP, so that I do not have to encrypt twice (once by Apache for the proxy, and once by the proxy for the client browser)?
What is the "best practices approach" to have multiple websites, some large, served by a mix of HTTP and HTTPS?
Maybe I can continue running perlbal on port 80, and run nginx on port 443? Can nginx be configured as a reverse proxy for multiple HTTPS sites?
You really need to load test this; no one can give a definitive answer beyond that.
I would offer the following pieces of advice though:
First up, Stack Overflow is really for programming questions. This question probably belongs on the sister site www.serverfault.com.
HTTPS processing is, IMHO, not an issue for modern hardware unless you are encrypting large volumes of traffic (e.g. video streaming), especially with the proper caching and other performance tuning that I presume you've already done from what you say in your question. However, I've not dealt with a site with your level of traffic, so it could become an issue there.
There will be a small hit to clients as they negotiate the HTTPS session on the initial connection. This is on the order of a few hundred milliseconds, only happens on the initial connection for each session, and is unlikely to be noticed by most people, but it is there.
There are several things you can do to optimise HTTPS, including choosing fast ciphers and implementing session resumption (there are two methods for this, and it can get complicated on load-balanced sites). SSL Labs runs an excellent HTTPS tester to check your setup, Mozilla has some great documentation and advice, or you could check out my own blog post on this.
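For instance, an Apache starting point along those lines might look roughly like this (the cipher list is abbreviated and the cache paths are placeholders, so treat it as a sketch rather than a recommended policy):

    # session resumption via a shared session cache (server-wide, outside any vhost)
    SSLSessionCache        shmcb:/var/run/apache2/ssl_scache(512000)
    SSLSessionCacheTimeout 300
    # drop legacy protocols and prefer fast ECDHE/AEAD ciphers
    SSLProtocol            all -SSLv3
    SSLHonorCipherOrder    on
    SSLCipherSuite         ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:!aNULL:!MD5
    # OCSP stapling saves the client a round trip to the CA
    SSLUseStapling         on
    SSLStaplingCache       shmcb:/var/run/apache2/ssl_stapling(32768)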
As to whether you terminate HTTPS at your endpoint (proxy/load balancer), that's very much up to you. Yes, there will be a performance hit if you re-encrypt to HTTPS again to connect to your actual server. Most proxy servers also allow you to just pass the HTTPS traffic through to your main server so you only decrypt once, but then you lose the original IP address from your web server logs, which can be useful to have. It also depends on whether you access your web server directly at all. For example, at my company we don't go through the load balancer for internal traffic, so we enable HTTPS on the web server as well and make the load balancer re-encrypt when connecting to it, so we can still view the site over HTTPS.
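As a rough illustration, assuming an Apache-based proxy in front of an Apache backend (the hostname and trusted network below are placeholders), the re-encrypting variant might look like:

    # on the proxy: decrypt from the client, re-encrypt towards the backend
    SSLProxyEngine   on
    ProxyPass        / https://backend.internal/
    ProxyPassReverse / https://backend.internal/

    # on the backend: recover the real client IP from X-Forwarded-For (mod_remoteip)
    RemoteIPHeader        X-Forwarded-For
    RemoteIPInternalProxy 10.0.0.0/8

mod_proxy adds the X-Forwarded-For header for you, so this gets the original client address back into the backend's logs (you may also need to log %a in your LogFormat), which addresses the logging problem mentioned above.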
Other things to be aware of:
You could see an SEO hit during migration. Make sure you redirect all traffic, tell Google Search Console your preferred site (http or https), update your sitemap and all links (or make them relative).
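For the redirect part, a catch-all in the port-80 virtual host is usually enough; something like this (hostname is a placeholder):

    <VirtualHost *:80>
        ServerName www.example.com
        # send everything to the HTTPS version of the same URL
        Redirect permanent / https://www.example.com/
    </VirtualHost>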
You need to be aware of insecure content issues. All resources (e.g. CSS, JavaScript and images) need to be served over HTTPS, or you will get browser warnings and browsers will refuse to use those resources. HSTS can help with links to your own domain for those browsers that support HSTS, and CSP can also help (either to report on them or to automatically upgrade them, for browsers that support upgrade-insecure-requests).
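For example, the reporting and auto-upgrade variants of that CSP approach look roughly like this (the report endpoint is a placeholder, and you'd normally pick one or the other):

    # report mixed content to a collection endpoint without blocking anything
    Header always set Content-Security-Policy-Report-Only "default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri /csp-reports"
    # or ask supporting browsers to rewrite http:// subresource URLs to https:// automatically
    Header always set Content-Security-Policy "upgrade-insecure-requests"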
Moving to HTTPS-only does take a bit of effort, but it's a one-off, and after that it makes your site much easier to manage than trying to maintain two versions of the same site. The web is moving to HTTPS more and more, and if you have (or are planning to have) logged-in areas then you have no choice, as you should absolutely not use plain HTTP for those. Google gives a slight ranking boost to HTTPS sites (though it's apparently quite small, so it shouldn't be your main reason to move), and has even talked about actively showing HTTP sites as insecure. Better to be ahead of the curve, IMHO, and make the move now.
Hope that's useful.

For a SaaS running on Node.JS, is a web-server (nginx) or varnish necessary as a reverse proxy?

For a SaaS running on Node.JS, is a web-server necessary?
If yes, which one and why?
What would be the disadvantages of using just Node? Its role is just to handle CRUD requests and serve JSON back for the client to parse the data (like Gmail).
"is a web-server necessary"?
Technically, no. Practically, yes a separate web server is typically used and for good reason.
In this talk by Ryan Dahl in May 2010, at 37'30" he states that he recommends running node.js behind a reverse proxy or web server for "security reasons". To elaborate on that, hardened web servers like nginx or apache have had their TCP stacks evolve for a long time in terms of stability and security. Node.js is not at that same level yet. Thus, since putting node.js behind nginx is easy, doesn't have many negative consequences, and in theory increases the security of your deployment somewhat, it is a good choice. At some point in time, node.js may be deemed officially "ready for live direct Internet connections" but wait for Ryan/Joyent to make some announcement to that effect.
Secondly, binding to sub-1024 ports (like 80 and 443) requires the process to be root. nginx and others automatically handle binding as root and then dropping privileges to a safer user account (www-data or nobody typically). Although node.js has system call wrappers in the process module to drop root privileges with setgid and setuid, AFAIK other than coding this yourself the node community hasn't yet seen a convention emerge for doing this. More on this topic in this discussion.
Thirdly, web servers are good at virtual hosting and in general there are convenient things you can do (URL rewriting and such) that require custom coding in node.js to achieve otherwise.
Fourthly, nginx is great at serving static files - better than node.js (at least by a little, as of right now). Again, as time goes forward this point may become less and less relevant, but in my mind a traditional static file web server and a web application server still have distinct roles and purposes.
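As a sketch of that "web server in front" setup, here is roughly what it could look like with Apache in front of a Node app listening on port 3000 (hostname, paths and port are placeholders; the same idea applies with nginx):

    # needs mod_proxy and mod_proxy_http
    <VirtualHost *:80>
        ServerName app.example.com
        # serve static assets straight from disk
        DocumentRoot /var/www/app/public
        ProxyPreserveHost On
        # hand the API routes to the Node process
        ProxyPass        /api http://127.0.0.1:3000/api
        ProxyPassReverse /api http://127.0.0.1:3000/api
    </VirtualHost>

The Node process itself then binds only to 127.0.0.1:3000 as an unprivileged user, which also sidesteps the sub-1024 port issue mentioned above.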
"If yes, which one and why"?
nginx. Because it has great performance and is simpler to configure than apache.

Does Apache basic authentication defend brute force attacks?

Will it shut down and lock a client out after repeated false password attempts, and/or will it add delays between retries? Or does this depend on which modules you or your provider install? Thanks!
A default Apache installation does not do that.
Usually this is better handled by your web application (e.g. PHP/JSP) for account-level attacks.
For network-level attacks, it's better not to do this in the web server, because it's hard to identify the source due to all the anonymous/transparent proxy, VPN and NAT setups out there. Once you've implemented that, you'd usually get lots of "why can't I connect?" complaints...