Background
I have a scenario where I run two websites, Website A is hosted on Server 2016 running IIS 10 and Website B is hosted on CentOS 6 running Apache 2.2. Both websites are served using HTTPS and work just fine on the local network. Publicly, I use SNI and URL Rewrite Rules on the IIS server to gain access to the Apache 2.2 server.
Most user agents can access Website B without issue; however, iOS will report back the error "failed to load resource: the operation couldn't be completed. protocol error" and present a blank screen. I have determined the cause to be related to IIS serving back an HTTP/2 response even though Apache 2.2 can't support those requests.
Question
Is there any way to disable HTTP/2 responses on just a specific site on IIS 10? I found many instructions to disable it entirely, but the performance improvements are too great to ignore on Website A.
I'm not aware of a way to do this. HTTP/2 allows connection coalescing, so it will attempt to reuse HTTP/2 connections for as many sites as it can.
However, your setup should work fine. I'd suggest the problem is probably a bad HTTP header in your Apache 2.2 setup. HTTP/2 is stricter about these, whereas HTTP/1 would try its best to carry on despite bad headers, so a multi-line header which wasn't closed properly, or a header with spaces in the name or double colons, can cause issues. It's weird that it's only iOS, though. You don't see this in Chrome? Chrome has a nice way of debugging these, but that might be more difficult if it only happens on iOS. Unless you are sending a specific bad header back to iOS clients only?
Oh, and by the way: upgrade Apache 2.2. It's been end of life for over a year now and is no longer being patched! Do yourself a favour and upgrade to 2.4; the upgrade is fairly straightforward, though some of the config options have changed between 2.2 and 2.4. You could always assign a separate IP address to Website B and host it directly rather than through IIS if you can find no other way around the issue, though for that you certainly shouldn't be using old, unsupported software.
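If you want to hunt for a bad header, something like the following curl commands will dump the raw response headers so you can compare what Apache 2.2 sends directly with what IIS forwards over HTTP/1.1 and HTTP/2 (the hostnames are placeholders for your internal and public names):

# Headers straight from the Apache 2.2 backend on the local network:
curl -sk -D - -o /dev/null https://websiteb.internal.example/
# The same page through IIS, forced over HTTP/1.1 and then HTTP/2:
curl -sk --http1.1 -D - -o /dev/null https://websiteb.example.com/
curl -sk --http2 -D - -o /dev/null https://websiteb.example.com/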
Related
I scanned my site with Burp Suite Professional.
It said a vulnerability called "HTTP Request Smuggling" had been detected.
The scan was run with Burp Suite Professional v2.1.03 (August 7, 2019 release).
My server environment is as follows.
CentOS 7
Apache 2.4
PHP 7.3
PortSwigger explains how to resolve this problem: change the protocol the web server uses from "HTTP/1.1" to "HTTP/2".
https://portswigger.net/web-security/request-smuggling#how-to-prevent-http-request-smuggling-vulnerabilities
So I added SSL support to my site, and then HTTP/2 support as well.
I scanned again, and the "HTTP Request Smuggling" vulnerability was detected AGAIN.
How can I fix this?
I am not interested in the details of this problem or how it works.
What I want to know is how to stop the problem from being detected.
If you have encountered a similar issue, please tell me the solution.
If possible, I would like to know exactly what you changed, for example in httpd.conf or php.ini.
I also found a suggestion that I need to upgrade my version of Tomcat, but I haven't tried that yet.
Article about solution
If you are using end-to-end HTTP/2 communication, that should eliminate the vulnerability. By this I mean that HTTP/2 is the only HTTP version used for all HTTP traffic.
Many web architectures have a load balancer or proxy in front of the web server which accepts HTTP/2 traffic. However, many front-end servers rewrite the incoming HTTP/2 traffic to HTTP/1 when they forward it to the backend/web server. When the traffic gets rewritten to HTTP/1, HTTP request smuggling becomes possible. More info here: https://www.youtube.com/watch?v=rHxVVeM9R-M
I'm posting this quote from James Kettle, a researcher at PortSwigger: "you can resolve all variants of this vulnerability by configuring the front-end server to exclusively use HTTP/2 to communicate to back-end systems, or by disabling back-end connection reuse entirely."
source: https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn
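For what it's worth, if your Apache 2.4 instance is itself the front end and also proxies to a backend over HTTP/1.1 (an assumption about your topology), a rough httpd.conf sketch of the two mitigations Kettle describes might look like this; the backend URL is purely illustrative and mod_http2/mod_proxy_http are assumed to be loaded:

# Speak HTTP/2 to clients on the TLS virtual host (Apache 2.4.17+ with mod_http2):
Protocols h2 http/1.1
# If this server forwards requests to a backend over HTTP/1.1, don't reuse those connections:
ProxyPass "/app/" "http://127.0.0.1:8080/app/" disablereuse=On
ProxyPassReverse "/app/" "http://127.0.0.1:8080/app/"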
I would like to set up mod_security as a standalone instance protecting Tomcat instances against web application attacks. Does anyone know the pros and cons of installing mod_security as an Apache module versus installing mod_security on a reverse proxy? Has anyone implemented mod_security in either of these ways, and if so, is one preferred over the other?
There's really no difference between your two options. What non-reverse-proxy would you install the module on to protect Tomcat?
The question doesn't really make sense, as the two options are the same in your case.
If you already have an Apache server, then you can install ModSecurity in one of two ways:
In embedded mode, by installing ModSecurity as a module in the existing Apache instance you already have. The advantages are that you won't have to set up a separate Apache instance, and that ModSecurity will have access to the environment that Apache runs under (so it can see environment variables, for example, or log to the same log files).
In reverse proxy mode. This involves setting up a separate Apache instance with only ModSecurity on it and funnelling all requests through it before passing them on to your normal Apache. The advantage here is a dedicated web server just for ModSecurity, so it will not share resources with your existing Apache if that is already resource hungry. The disadvantages are that it doubles your infrastructure and brings the complications that come with that.
Personally I prefer option 1.
However, as you want to set up a dedicated web server in front of Tomcat, the two options are identical for you. The new instance of Apache (or Nginx) that you set up will run ModSecurity in embedded mode and will act as a reverse proxy to your Tomcat server.
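In case a concrete starting point helps, a minimal sketch of that front-end vhost could look like the following; the module paths, rule file location, hostname and Tomcat port are assumptions you would adjust for your distribution:

LoadModule unique_id_module modules/mod_unique_id.so
LoadModule security2_module modules/mod_security2.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    ServerName app.example.com                  # placeholder hostname
    SecRuleEngine On
    Include /etc/httpd/modsecurity.d/*.conf     # assumed location of your rule set (e.g. OWASP CRS)

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/   # Tomcat's default HTTP connector
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>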
Personally I always think it's best to run a dedicated web server like Apache in front of any app server like Tomcat, especially on a public-facing website. Granted, Tomcat does include a pretty good web server (called Coyote), which may serve most of your web server needs, but a dedicated web server like Apache is more geared towards serving static content and contains other features for performance and security which make it a better end-point server (including the ability to run ModSecurity, for example!).
And just in case there is any confusion: Apache is actually short for Apache HTTP Server, and is sometimes called Apache httpd after the process it runs. It is the Apache Software Foundation's most popular piece of software, which is why the name gets shortened, but the foundation actually maintains lots of other software (including Apache Tomcat, usually shortened to just Tomcat).
I have an old Apache web server, with some security issues, running an old application exposed to the internet. Before upgrading the Apache version, I must perform a lot of tests in a dev environment. In the meantime, I would like to put a reverse proxy (running the latest Apache version) in front of this application.
Can this temporary workaround resolve some of the old Apache security issues, or is it totally useless?
Thanks in advance, and I'm sorry for my bad English.
Can this temporary workaround resolve some of the old Apache security issues ... ?
Yes, with "can" and "some" emphasized. A reverse proxy handles incoming requests and rewrites them in a canonical form, which is safer for the application server to parse. A reverse proxy can also reject malformed requests so that they never reach the application.
It might not resolve every security issue, but you can imagine ones that it would resolve. For example, if a specially crafted request causes the older version to execute arbitrary code, while the newer version would drop that request, then having the reverse proxy in front would help prevent the bug from affecting the application server.
It's not as much of a defense as using a web application firewall, but it's kind of related.
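As a rough illustration (the hostname and backend address are placeholders, and TLS configuration is omitted), a front-end vhost for the old application could be as small as this, with Apache 2.4's strict request parsing turned on so malformed requests are rejected before they reach the old server:

<VirtualHost *:80>
    ServerName legacy-app.example.com           # placeholder
    HttpProtocolOptions Strict                  # reject malformed request lines and headers (2.4.24+)
    ProxyPass        / http://10.0.0.5/         # the old Apache and the application behind it
    ProxyPassReverse / http://10.0.0.5/
</VirtualHost>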
TL;DR
I want to set up a local HTTPS proxy that can (LOCALLY) modify the content of HTML pages on my machine. Is this possible?
Motivation
I have used an HTTP Proxy called GlimmerBlocker for years. It started in 2008 as a proxy-based approach to blocking ads (as opposed to browser extensions or other OS X-specific hacks like InputManagers). But besides blocking ads, it also allows the user to inject their own CSS or JavaScript into the page. Development has seriously slowed, but it remains incredibly useful.
The only problem is that it doesn’t do HTTPS (from its FAQ):
Ads on https pages are not blocked
When Safari fetches an https page using a proxy, it doesn't really use the http protocol, but makes a tunneled tcp connection so Safari receives the encrypted bytes. The advantage is that any intermediate proxies can't modify or read the contents of the page, nor the URL. The disadvantage is, that GlimmerBlocker can't modify the content. Even if GlimmerBlocker tried to work as a middleman and decoded/encoded the content, it would have no means of telling Safari to trust it, nor to tell Safari if the websites certificate is valid, so Safari would think you have visited a dubious website.
Fortunately, most ad-providers are not going to switch to https as serving pages using https are much slower and would have a huge processing overhead on the ad-providers servers.
Back in 2008, maybe that last part was true…but not any more.
To be clear, I think the increasing use of SSL is a good thing. I just want to get back the control I had over the content after it arrives on my end.
Points of Confusion
While searching for a solution, I’ve become confused by some apparently contradictory points.
(Also, although I’m quite experienced with the languages of web pages, I’ve always had a difficult time grokking networks and protocols. On that note, sorry if I’m missing something that is way obvious!)
I found this StackOverflow question asking whether HTTPS proxies were possible. The best answer says that “TLS/SSL (The S in HTTPS) guarantees that there are no eavesdroppers between you and the server you are contacting, i.e. no proxies.” (The same answer then described a hack to pull it off, but I don’t understand the instructions. It was very theoretical, anyway.)
In OS X under Network Preferences ▶︎ Advanced… ▶︎ Proxies, there is clearly a setting for an HTTPS proxy. This seems to contradict the previous statement that TLS/SSL’s guarantee against eavesdropping implies the impossibility of proxies.
Other things of note
I can’t remember where, but I read that it is possible to set up an HTTPS proxy, but that it makes HTTPS pointless (by breaking the secure communication in the process). I don’t want this! Encryption is good. I don’t want to filter anyone else’s traffic; I just want something to customize the content after I’ve already received it.
GlimmerBlocker has a nice GUI interface, but I’m fine with non-GUI solutions, too. I may have a poor understanding of networking and protocols, but I’m perfectly comfortable on the command line, tweaking settings in text editors, and so on.
Is what I’m asking possible? Or is my question a case of “either you get security, or you can break it with hacks and get to customize your content—but not both”?
The common idea of an HTTP proxy is a server which accepts a CONNECT request that includes the target hostname and port and then just builds a tunnel to the target server. All the HTTPS is done inside the tunnel, so there is no way for the proxy to modify it (end-to-end security from browser to web server).
To modify the data you need a proxy which plays man-in-the-middle. In this case you have one HTTPS connection between the proxy and the web server and another HTTPS connection between the browser and the proxy. Between proxy and web server the original server certificate is used, while between browser and proxy a newly created certificate is used, signed by a CA specific to the proxy. Of course, this CA must be imported as trusted into the browser, otherwise it would complain all the time about possible attacks.
Of course, all the verification of the original server certificate has to be done by the proxy now, and not all solutions do this correctly. See also http://www.secureworks.com/cyber-threat-intelligence/threats/transitive-trust/
There are several proxy solutions which can do this SSL interception, such as squid, mitmproxy (Python) or App::HTTP_Proxy_IMP (Perl). The last two are specifically designed to let you modify the content with your own code, so they might be good places to start.
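To give an idea of the mitmproxy route, here is a tiny addon sketch (the file name and the CSS tweak are just examples) that rewrites HTML after it has been received; you would run it with something like mitmdump -s rewrite.py and import mitmproxy's generated CA into the browser as described above:

# rewrite.py - minimal mitmproxy addon: inject a stylesheet into HTML responses
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # Only touch HTML; leave images, scripts, etc. alone
    if "text/html" in flow.response.headers.get("content-type", ""):
        flow.response.text = flow.response.text.replace(
            "</head>",
            "<style>.ad-banner { display: none !important; }</style></head>",
        )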
It doesn't have to be Apache; that's just the only HTTP server I know of. (Actually, could you guys recommend alternatives that I could look into as well?)
Anyway, I have been messing around with Amazon Web Services and created an EC2 server instance with an Amazon Linux image. On that, following guides and examples, I installed Apache, and now when I make a GET request to my public IP it returns the HTML files I created on my server.
My question is: what if I never installed Apache and then made an HTTP request to my public IP? There's no real reason behind this; the question just came up in my head and I'm curious. I'd rather not figure out how to uninstall Apache or create a new instance just to find out, so I was wondering if somebody could weigh in, and also tell me a little more about what exactly Apache does on a server. My understanding is that it is a layer you install on your server OS that creates a socket listener on port 80 (HTTP), and when a request is made on that port, Apache returns web pages? Also I think I read somewhere you could configure Apache to forward a port to something like a Python server script?
Thanks in advance for your time!
could you guys recommend alternatives that I could look into as well?)
nginx is a popular alternative to Apache. It's generally more efficient, particularly at serving static content and handling large numbers of concurrent connections.
what if I never installed Apache, and then made an HTTP request to my public IP?
Your browser would get a "connection refused" error, because nothing is listening on port 80. The browser would display a message (Chrome says "This webpage is not available"). You would NOT get a 404, because that requires an HTTP server to send HTTP status codes.
If your server were firewalled instead (packets silently dropped), you'd get a long wait, then a message about the server not responding.
Also I think I read somewhere you could configure Apache to forward a port to something like a python server script?
Yes, that is called "reverse proxy" mode. It's essential for any application website if you want to scale: the web server(s) can distribute traffic to one or more backends running the application. The web server is also useful for filtering bad requests, since a backend written in Ruby/Python will typically be far slower at handling raw requests than the reverse proxy.
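As a concrete illustration of that reverse proxy mode (the port and module paths are assumptions), an httpd.conf snippet like this forwards everything to a Python application listening locally, for example one run by a WSGI server such as gunicorn on port 8000:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8000/   # the local Python application server
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>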
Well, if you want to test what will happen if Apache isn't installed, you can always just stop the Apache service by typing:
sudo service apache2 stop
or
sudo service httpd stop
depending on your distribution. Then if you visit your site's webpage you'll get a connection error rather than a 404, since nothing is left listening to return an HTTP status code.
There are ways to use Python scripts to run simple servers, but in general it's easier to let Apache handle that and use a framework like Ruby on Rails or Django to control the display and creation of content for your server.