We have an application that posts XML data to another application, and the traffic flows through an Apache web server (version 2.2.26) acting as a reverse proxy. We are observing sporadic Proxy Errors (HTTP response code 500) caused by an SSL handshake error with the remote server. The error message shows up in the Apache web server's error logs.
Earlier we were using the SunONE web server and we didn't notice this error.
I suspect that some missing configuration on the Apache web server is causing this issue, but this is just my guess.
Please advise if anyone has any experience with this issue.
I had a similar issue with Microsoft's IIS; maybe it helps you.
There was a problem with compression: the proxy did not support gzip, so I forced the proxy to set the compression to none.
Afterwards, in an outbound rule, I changed the compression back to the original header sent by the client.
This was a very strange issue where Apache complained about SSL handshaking, but it turned out to be something else. The issue was finally resolved by placing SetEnv proxy-sendchunked in the rproxy.conf file.
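For context, a minimal sketch of what such an rproxy.conf fragment might look like (the path and backend URL here are placeholders):

<Location "/app">
    # proxy-sendchunked is a mod_proxy_http environment variable that makes
    # Apache forward the request body with chunked Transfer-Encoding instead
    # of buffering it to compute a Content-Length
    SetEnv proxy-sendchunked 1
    ProxyPass https://backend.example.com/app
    ProxyPassReverse https://backend.example.com/app
</Location>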
We are running a legacy application on an Apache Tomcat v7.0.47 server behind an Apache HTTPD proxy (v2.4.23).
I am trying to upgrade the Java version on the server (used by both the proxy and Tomcat) from v1.8.0_181 to v1.8.0_303.
After that upgrade Tomcat no longer responds to the requests passed on by Apache (the application itself comes up and runs fine).
Both Apache on its network-facing side and Apache and Tomcat between each other have been configured to "talk" TLS 1.2 for a while already, so I don't think the disabling of TLSv1.0 and TLSv1.1 in the later Java version is the cause of the issue here. And there is no error message in the logs giving any clue. The only indication is that Tomcat seems to close and tear down the connection without any response after receiving the request. That seems to happen already in the SSL layer, since there is no entry in Tomcat's access log.
Switching back to the "old" Java gets things going again, so firewall, network, etc. are definitely NOT the issue here. With the newer Java version the connection setup fails again, causing the HTTPD to emit a "502 Bad Gateway" error.
Does anyone have an idea what could cause Tomcat to reject the HTTPD's requests based solely on the Java version? Additional SSL verifications enabled by default in the newer stack? I searched extensively but haven't surfaced any suspect yet.
Later addition: while trying to pinpoint the issue I found out that things still work with Java v1.8.0_231, but fail with v1.8.0_241 and higher.
Inspecting the release notes now to find a hint...
Any ideas or experiences with that upgrade anyone?
Just for the record - in case someone else stumbles over this question:
The issue here was that from Java v1.8.0_241 onwards, Java's security layer verifies that a certificate chain read from a certificate store is rooted in a CA certificate that carries a proper CA flag. Since we were using an old certificate and trust store that had been generated with an old release of the Java keytool back then, this flag was missing, and the new Java version thus rejected all the entries in that certificate file. It therefore aborted the SSL connection setup and simply closed the connection without any response or indication.
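For anyone checking whether they are affected: one way (assuming a JKS store and keytool on the PATH) is to dump the store and inspect the BasicConstraints extension of the root entry:

# List the store verbosely; a proper CA certificate shows a
# BasicConstraints extension with "CA:true"
keytool -list -v -keystore /path/to/truststore.jks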
There is a VM option -Djdk.security.allowNonCaAnchor=true that one can add to Tomcat's JAVA_OPTS variable (typically in a file setenv.sh) to disable this verification. After adding it, our Tomcat was again responding to SSL requests and worked OK again.
BTW: when trying to analyze SSL issues like the above, the option -Djavax.net.debug=all:handshake:verbose proved to be a real life-saver! With it one gets very detailed log output and can follow SSL handshakes and connection setups in detail. Once I had finally gotten a first useful error message pointing to this CA-flag issue, finding a solution (or rather a workaround in this case) was a snap compared to the initial search for the cause.
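A sketch of what the corresponding setenv.sh could look like (paths and any pre-existing options are assumptions):

#!/bin/sh
# $CATALINA_HOME/bin/setenv.sh - picked up automatically by catalina.sh
# Workaround: accept trust anchors without the CA flag (pre-8u241 behaviour)
JAVA_OPTS="$JAVA_OPTS -Djdk.security.allowNonCaAnchor=true"
# For debugging only - extremely verbose SSL/handshake logging:
# JAVA_OPTS="$JAVA_OPTS -Djavax.net.debug=all:handshake:verbose"
export JAVA_OPTS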
I scanned my site with Burp Suite Professional.
It reported a vulnerability called "HTTP Request Smuggling".
The vulnerability was detected on August 7, 2019 with Burp Suite Professional v2.1.03.
My server environment is as follows.
CentOS 7
Apache 2.4
PHP 7.3
PortSwigger explains how to resolve this problem:
by changing the network protocol of the web server from "HTTP/1.1" to "HTTP/2".
https://portswigger.net/web-security/request-smuggling#how-to-prevent-http-request-smuggling-vulnerabilities
So I enabled SSL on my site and then HTTP/2 support as well (see the sketch below).
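(For reference, enabling HTTP/2 on Apache 2.4.17+ roughly comes down to the following; the exact module path depends on the distribution:)

LoadModule http2_module modules/mod_http2.so
# Prefer h2 over HTTP/1.1 on TLS connections
Protocols h2 http/1.1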
Then I scanned again, and the "HTTP Request Smuggling" vulnerability was detected AGAIN.
How can I fix this?
I am NOT interested in the details of this problem or how it works.
What I want to know is how to stop this problem from being detected.
If you have encountered a similar issue, could you please tell me the solution?
Ideally, describe what you changed, e.g. in httpd.conf or php.ini.
I found that the Tomcat version needs to be upgraded, but I haven't tried it yet.
Article about solution
If you are using end-to-end HTTP/2 communication, that should eliminate the vulnerability. By this I mean that HTTP/2 is the only HTTP version used in all HTTP traffic.
Many web architectures have a load balancer or proxy in front of the web server which accepts HTTP/2 traffic. However, many front-end servers rewrite the incoming HTTP/2 traffic into HTTP/1 when forwarding it to the backend/web server. When the traffic gets rewritten to HTTP/1, HTTP request smuggling becomes possible. More info here: https://www.youtube.com/watch?v=rHxVVeM9R-M
I'm posting this quote from James Kettle, a researcher at PortSwigger: "you can resolve all variants of this vulnerability by configuring the front-end server to exclusively use HTTP/2 to communicate to back-end systems, or by disabling back-end connection reuse entirely."
source: https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn
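In Apache terms, a sketch of those two mitigations could look like this (the backend host name is a placeholder; mod_proxy_http2 has been available, as experimental, since 2.4.17):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http2_module modules/mod_proxy_http2.so
# Option 1: talk HTTP/2 to the backend as well (h2 = over TLS, h2c = cleartext)
ProxyPass "/" "h2://backend.internal.example/"
# Option 2: stay on HTTP/1.1 but never reuse backend connections, so a
# poisoned connection cannot affect other users' requests
# ProxyPass "/" "http://backend.internal.example/" disablereuse=On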
I'm using Apache for my system. Recently I received complaints from clients saying they were unable to access the system. I remoted into my server and checked Apache: it was working fine. I tried to open localhost and it worked fine as well. I could not find any issue and had no other option than to restart Apache. After the restart the system was back to normal. After some digging in the log files, I found this:
The specified network name is no longer available. : AH00341: winnt_accept: Asynchronous AcceptEx failed.
I tried to Google it but was unable to find a solution. Has anyone faced this issue before and solved it? Below are my server details:
Windows Server 2008 R2 Standard
Apache 2.4
Using SSL Connection
Any advice or reference links for this issue are highly appreciated.
This usually occurs when some software or driver inserts itself into the Windows network stack. You can skip over these layers by adding this to your 2.4 configuration:
AcceptFilter https none
AcceptFilter http none
In older releases, this was turned off with the Win32DisableAcceptEx directive.
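For completeness, the pre-2.4 equivalent was a single argument-less directive in httpd.conf:

Win32DisableAcceptEx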
I use mod_spdy to enable SPDY on Apache, but I've run into some problems.
I followed every step of Google's document for installing mod_spdy on Ubuntu, and I enabled HTTPS on Apache. When I checked whether mod_spdy was working, I sent an HTTPS request to the server, but I do not see the server's domain listed in the "SPDY sessions" table, which means mod_spdy is not working. I also checked the Apache server logs and didn't find any error message from mod_spdy.
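For anyone trying to reproduce this, a quick sanity check that the module is loaded at all (assuming a standard Ubuntu apache2 layout) would be:

# List loaded modules; mod_spdy should show up here
apache2ctl -M | grep -i spdy
# Check whether the server advertises spdy/* during the TLS handshake via NPN
# (works with OpenSSL 1.0.x; NPN support was removed in OpenSSL 1.1.0)
openssl s_client -connect localhost:443 -nextprotoneg ''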
I hope somebody can help me deal with this problem.
I am getting the error "503 Service Temporarily Unavailable" many times in my application,
and I want to find out why this error occurs. How? Is there a log file or something like that? I am not familiar with Apache.
The second thing: is it possible to handle this error, so that when it occurs Apache is restarted?
There are of course some Apache log files. Search your Apache configuration files for the 'Log' keyword; you'll certainly find plenty of them. Locations vary depending on your OS and installation (on a typical Linux server they would be /var/log/apache2/[access|error].log).
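For example, on a Debian-style install the relevant directives usually look like this (the exact values come from your own configuration):

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# Raise this temporarily (e.g. to debug) while hunting an issue
LogLevel warn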
Getting a 503 error from Apache usually means the proxied page/service is not available. I assume you're using Tomcat, which means Tomcat is either not responding to Apache (timeout?) or not even available (down? crashed?). So chances are it's a configuration error in the way Apache and Tomcat are connected, or an application inside Tomcat that is not even sending a response to Apache.
Sometimes, on production servers, it can also be that you get too much traffic: Apache accepts more requests than the proxied service (Tomcat) can handle, so the backend becomes unavailable.
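If the backend only stumbles briefly, mod_proxy's retry parameter is worth knowing about: by default Apache puts a failed worker in an error state for 60 seconds, which can turn a short Tomcat hiccup into a minute of 503s. A sketch, with placeholder URL and values:

# retry=0: try the backend again immediately instead of waiting 60s
# connectiontimeout/timeout: fail fast instead of hanging on a dead backend
ProxyPass /app http://localhost:8080/app connectiontimeout=5 timeout=30 retry=0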