A similar problem is described here: GWT IllegalArgumentException: encodedRequest cannot be empty
My GWT application is deployed on Tomcat 6, which is connected to Apache via the Coyote/JK2 connectors. For SSO I use mod_auth_sspi 1.0.4.
When I use IE8 the pages are not displayed, while with Firefox everything is OK. In the Tomcat logs I see the following:
SEVERE: Exception while dispatching incoming RPC call
java.lang.IllegalArgumentException: encodedRequest cannot be empty
at com.google.gwt.user.server.rpc.RPC.decodeRequest(RPC.java:232)
at org.spring4gwt.server.SpringGwtRemoteServiceServlet.processCall(SpringGwtRemoteServiceServlet.java:32)
at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:248)
at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:643)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at gov.department.it.server.RequestInterceptorFilter.doFilter(RequestInterceptorFilter.java:90)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:311)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:776)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:705)
at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:898)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
at java.lang.Thread.run(Thread.java:619)
What I have tried so far:
1) I can't find the registry key DisableNTLMPreAuth (IMHO a client-side registry fix is not the solution anyway, because in my case IE 8 is actively used).
2) I have installed and configured the Native Windows Authentication Framework WAFFLE
web.xml:
...
<filter>
    <filter-name>NegotiateSecurityFilter</filter-name>
    <filter-class>waffle.servlet.NegotiateSecurityFilter</filter-class>
    <init-param>
        <param-name>waffle.servlet.spi.NegotiateSecurityFilterProvider/protocols</param-name>
        <param-value>NTLM</param-value>
    </init-param>
</filter>
...
<filter-mapping>
    <filter-name>NegotiateSecurityFilter</filter-name>
    <url-pattern>/my-app/*</url-pattern>
</filter-mapping>
...
But it did not help.
3) In worker.properties I set socket_keepalive=0, but that did not help either:
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.ajp13.lbfactor=50
worker.ajp13.cachesize=10
worker.ajp13.cache_timeout=600
worker.ajp13.socket_keepalive=0
worker.ajp13.socket_timeout=300
What else can I try?
You have rediscovered the 7-year-old bug #1 in mod_auth_sspi, which has affected numerous projects, frustrated numerous developers, and wasted countless man-hours over the years. Yet it still stands unresolved, because the maintainer doesn't consider it a bug. Nor has it been addressed by Microsoft for older browsers, since indications are that IE9 doesn't have this problem.
Cause
It is caused by IE trying to be 'smart' and sending a zero content-length POST (I call it a 0POST, to make it an indexable term for those who rediscover this in the next 7 years) with an NTLM auth header, in anticipation of being challenged by the server. IE does this when it has been authenticated before in that protection space, so it knows it will be challenged again. Sadly, mod_auth_sspi is not as smart as IE, so bad things happen on the server side when a 0POST arrives and is let through to the application without being challenged. This can sometimes happen even for unprotected areas, if they sit under an area that requires authentication.
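Concretely, the anticipatory request IE sends looks roughly like this (an illustration only; the path and the token here are made up):
POST /my-app/someRpcEndpoint HTTP/1.1
Host: ...
Authorization: NTLM TlRMTVNTUAABAAAA...
Content-Length: 0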
Other browsers don't pretend to be as smart as IE and don't try to save a few bytes on the first round trip for "performance", so they don't run into this problem. Here is Microsoft's explanation of this behavior.
Horrible Workaround
In Apache httpd.conf set
SSPIPerRequestAuth On
This is equivalent to the DisableNTLMPreAuth client-side IE fix you mentioned, which is impractical for a large user group. It also amounts to crippling the browser's behavior towards all non-Apache apps too, which may be perfectly capable of handling a 0POST. There are literally NO examples of this setting being discussed or its side effects explained on the web, so I am including the only link I found that sheds some light on it. Anyway, making one server-side change seems to be the lesser of the two evils, although now, by changing the server config, you have crippled all the other innocent browsers visiting this site as well.
The problem with this workaround is that it forces EVERY request to perform an SSPI handshake, which results in a lot of extra 401 traffic and can affect performance. Normally, for performance, NTLM authentication is treated as 'session-based' rather than 'request-based', which means the handshake occurs only at the start of the session. When using this setting you should also set up log filters, to prevent your log filling up with 401s. Also note that it requires KeepAlive to be turned on.
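For reference, the relevant part of httpd.conf would look roughly like this (a sketch only; adapt the location and directive set to your existing mod_auth_sspi configuration, and note that filtering the extra 401 entries out of the access log is not shown):
KeepAlive On

<Location /my-app>
    AuthType SSPI
    SSPIAuth On
    SSPIAuthoritative On
    SSPIOfferBasic Off
    SSPIPerRequestAuth On
    Require valid-user
</Location>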
I am not sure your setup is the same as the one described in the WAFFLE fix; were they using Apache like you? I think WAFFLE applies to Tomcat, whereas you have Apache in front, so Apache is handling the authentication. You might consider letting WAFFLE handle the authentication instead of Apache; if you can use that setup, it may be a better option than this workaround, because WAFFLE has explicitly accounted for the 0POST and can handle it. Its author had also discovered this gem while working with GWT, like you.
Interestingly, for jcifs, a fix for this very issue was posted 9 years ago. The author also provided an excellent explanation later:
The code in the filter examines all HTTP POST requests and determines
if they contain an NTLM type 1 message. If the request contains an
NTLM type 1 message, the filter responds with a dummy type 2 message
to entertain IE's desire to re-negotiate NTLM prior to submitting any
POST data. The browser should then respond with an NTLM type 3
message along with the post data which the filter should then allow to
chain to the rest of the web application.
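In servlet terms the idea boils down to something like the sketch below. This is NOT the actual jcifs code, just an illustration of the approach; the class name and the hard-coded dummy Type 2 message are mine:
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.xml.bind.DatatypeConverter; // available on Java 6-8; on newer JDKs use java.util.Base64

// Illustrative sketch: intercept IE's zero-length "0POST" carrying an NTLM Type 1
// token and answer it with a dummy Type 2 challenge, so IE re-sends the POST with
// the Type 3 response AND the real body, which is then allowed through the chain.
public class NtlmPreAuthFilter implements Filter {

    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;

        String auth = req.getHeader("Authorization");
        // "TlRMTVNTUAAB" is the base64 of "NTLMSSP\0" followed by message type 1,
        // i.e. the first message of the NTLM handshake.
        boolean ntlmType1 = auth != null && auth.startsWith("NTLM TlRMTVNTUAAB");

        if ("POST".equalsIgnoreCase(req.getMethod()) && req.getContentLength() == 0 && ntlmType1) {
            res.setHeader("WWW-Authenticate", "NTLM " + dummyType2());
            res.setContentLength(0);
            res.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
            res.flushBuffer();
            return; // don't let the empty POST reach the RPC servlet
        }
        chain.doFilter(request, response);
    }

    // Minimal NTLM Type 2 message with a fixed challenge; it only needs to be
    // well-formed enough for IE to answer with Type 3 plus the real POST body.
    private static String dummyType2() {
        byte[] m = new byte[40];
        byte[] sig = {'N', 'T', 'L', 'M', 'S', 'S', 'P', 0};
        System.arraycopy(sig, 0, m, 0, 8);
        m[8]  = 2;    // message type 2 (little-endian)
        m[16] = 40;   // target name offset -> end of message (empty target name)
        m[20] = 0x01; // flags: negotiate Unicode...
        m[21] = 0x02; // ...and NTLM
        for (int i = 24; i < 32; i++) m[i] = 0x11; // fixed 8-byte server challenge
        return DatatypeConverter.printBase64Binary(m);
    }
}
Checking the base64 prefix is just a cheap way to recognize a Type 1 token without decoding the whole header, and as with any NTLM exchange this relies on the connection being kept alive.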
A simple patch was also created for mod_auth_sspi 5 years ago, if you are interested; see the diff in the author's own repo. I am not sure I agree with that approach, though. It tries to detect IE and the 0POST, whereas I think the right fix is to detect whether the client is requesting auth with an NTLM Type 1 header, as in the jcifs filter. (Type 1 simply means that it is the first message of the handshake.)
I wonder if anyone has used alternatives to mod_auth_sspi, such as mod_auth_ntlm_winbind, and whether they exhibit this behavior. If you have, please leave a comment. We already know WAFFLE works, but it is not a mod_auth_sspi replacement.
One alternative is to forget NTLM and use Kerberos (mod_auth_kerb), but many people find that too complicated to set up. IE will behave this way with any challenge-response scheme, so odds are that Kerberos auth could run into the same problem, since a similar 401 sequence happens in both cases. But being a different module, it's possible it is capable of handling this.
Lastly, I should mention that there is yet another issue that this per-request auth workaround doesn't seem to fix. I haven't seen it discussed anywhere, but I have found that sometimes, after the 0POST, the server waits for a very long time before it sends the final 200 response with the results of the (proper) POST. This long delay happens only at the end, NOT immediately in response to the 0POST; that goes fine and the handshake completes, but the server doesn't respond until after a long wait that I have noticed is suspiciously close to 90 seconds, like some sort of timeout. The practical result is that when users log in, IE8 will sometimes hang for 90 seconds waiting for the server's response. I thought KeepAlive might be causing it, but it is not even explicitly defined in my config, so I assume it is at the 15-second Apache default. But I am sure this is related to the 0POST, because it happens only right after a successful 0POST auth handshake. Our server is in a separate (two-way) trusted domain across a firewall, so maybe that has something to do with it.
Diverse Examples of This Issue
https://confluence.atlassian.com/display/JIRAKB/NullPointerException+when+Authenticating+from+IE
http://trac.edgewall.org/ticket/2696
http://trac.edgewall.org/ticket/4560
https://drupal.org/node/82530
http://www.webmasterworld.com/apache/3087425.htm
Why "Content-Length: 0" in POST requests?
https://jira.springsource.org/browse/SEC-1087
The most hilarious example is how IE's smartness affected Microsoft's own products! They themselves couldn't understand how to deal with IE's behavior, causing a bug in ISA Server 2006.
http://support.microsoft.com/kb/942638
Related
I’m a long-time reader here but a newbie at posting a question. Hopefully I’ll cover everything you need in order to help.
Background information:
We are running ColdFusion 10 on two servers that are load balanced (I’m not sure how they are load balanced – they are not clustered and are not using sticky sessions, this much I know). Unfortunately, I do not have access to our CF server admin at all; I have to rely on others.
I’ve implemented a punch-out system that allows our users to connect to a vendor’s site to shop, then return their items to the cart on our site. This has been working on our development servers without any issues, and everything worked well when we tested it in production too. However, when we moved it into production last week, we started getting an error, but only when the code was running on ONE of the load-balanced servers. The error we received back from the vendor site stated that the error detail was: “I/O Exception: Server Key”. All of the research I did led me to believe that our CF servers needed the vendor's cert (it is an https connection), so I told this to our server guy. He reinstalled the certs (he had said they were already there) and that did seem to solve the problem: I was able to punch out to the vendor site successfully from both of our load-balanced servers.
We did a bit more testing (which all seemed fine) and then put it back into production this morning only to have the same issue occur. On one of the servers, this is working and on the other one it is not. My server guy tells me that the vendor certs are currently in place in the ColdFusion keystore.
Here is the cfhttp call I’m using:
<cfhttp url="#vendorURL#" method="POST" throwOnError="no" result="returnedObj">
<cfhttpparam type="XML" name="xmlPunchoutData" value="#trim(RequestPunchoutXML)#" />
</cfhttp>
Where ‘RequestPunchoutXML’ holds an XML structure requesting a punch-out from the vendor.
This looks possibly related: ColdFusion 10 - CFHTTP - Random peer not authenticated on SSL calls (cacerts file updated). The error I'm getting isn't that one, but I think they are probably related.
Questions: Any idea what is going on here? Could a badly set up load balancer be the issue? Is it possible that the cfhttp call is starting from one of the servers and getting the response returned to the other? Could there be some reason the certs are failing? Is this some other issue altogether that I have not yet identified? Any thoughts/ideas/suggestions would be greatly appreciated.
Thanks in advance,
Janice
I’m trying to get a mod_perl2 application ported to AWS. As part of the port I thought I’d move from Debian Squeeze to Wheezy with the latest stable mod_perl & Apache2 combination.
The application works right up to the point where I try to write JSON responses to the client. At that point each request is canceled on the client, and on the server I get the error
Apache2::RequestIO::print: (103) Software caused connection abort
whenever I write to the client, i.e.:
$self->req->print($output);
I’ve tried tcpdumping the response to the client, and I can see it being written out, but no response is received on the client end and it just barfs chips. I can’t find any information on how to get around this.
I found quite a few people asking about this on the net without many answers. The solution to my problem was very specific, but I thought I’d post what I did anyway; it may help someone.
The client was canceling the request before the response was fully written, which was crapping out Apache2::RequestIO (for reasons I still don’t know).
I couldn’t work out why I was seeing this behavior.
By using tcpdump I could see that data was being written out to the client – and it looked fine.
By inspecting the page in Chrome and looking at the network stack, I could see that my request for data was being canceled after no response was received (which was odd, because the code worked fine on other servers and I could see the response being written). Debugging was made harder because, with Apache crashing out with an error in the print IO, I couldn’t check whether the bytes written equaled the bytes of data; I wasn’t sure if something was getting stuck on the server side.
So, I changed the Content-Type of the response from application/json to text/html, so that I could query the page and just look at the actual response as text. Once I did that, I could see that the response was fine.
I started to look for other causes, and I found that in the migration to the new server, I’d missed altering some URLs in the DB to point to the new server, which meant my application was trying to get some data from the old DB.
This in turn was causing a load of timing issues, which was causing my problems. Once I fixed the config, the problems went away.
I am trying to monitor https traffic with Fiddler, using the current newest version: 2.4.4.5.
I've successfully set up https and certificates, and I can see the full https-encrypted traffic, for example when browsing my bank's web site.
...however...
When I try to monitor another server, I get this error message in the response window:
"Failed to secure existing connection for 77.87.178.160. A call to SSPI failed, see inner exception. InnerException: System.ComponentModel.Win32Exception: The client and server cannot communicate, because they do not possess a common algorithm"
For full Fiddler window see:
The client in this case is not a browser, but a custom client program, which communicates with its own server.
My question: Is this exception misleading, and in reality some other error prevents the secure channel from being set up?
...or...
Do we still have a chance to monitor this https communication?
Thx in advance
What is the client program?
This error typically indicates that the client application is only offering certain HTTPS ciphers, and those ciphers are not supported by Fiddler.
However, in this case, the specific problem here is almost certainly this: http://blogs.msdn.com/b/ieinternals/archive/2009/12/08/aes-is-not-a-valid-cipher-for-sslv3.aspx
The client is trying to use AES with SSLv3, but that isn't one of the valid ciphers for SSL3. As a consequence, the connection fails.
You might be able to work around this by clicking Rules > Customize Rules. Scroll down to the Main() function and add the following line within the function:
CONFIG.oAcceptedServerHTTPSProtocols = System.Security.Authentication.SslProtocols.Ssl3;
Please let me know if this works.
NOTE: Current versions of Fiddler offer a UI option for this: look at the list of enabled protocols on the HTTPS tab.
Unbelievably, this issue is still present some 6 years later.
I just installed the latest version of Fiddler (v5.0.20194.41348), and sure enough, on Win7 using Chrome or IE it keeps failing with the dreaded error:
"fiddler.network.https> HTTPS handshake to google.com (for #1) failed. System.ComponentModel.Win32Exception The client and server cannot communicate, because they do not possess a common algorithm"
After some hours of testing, I found a middle-ground solution which seems to work with virtually all websites. The aim was to get the highest possible security with no errors in the log. Without needing to add any code, simply changing this line under Tools > Options > HTTPS > Protocols is what worked for me (just copy and paste it):
<client>;ssl3;tls1.1;tls1.2
This basically removes the ssl2 and tls1.0 protocols, which leaves us with some pretty decent security and no errors so far. Having spent hours of frustration with this error, I hope someone out there might find this useful, and a big thanks to EricLaw who discovered the root of the problem.
Yes, I too have seen this error when working outside of Fiddler; it was connected with AuthenticateAsServer, but it only went wrong when using IE10, not Chrome, as the browser.
The odd thing is that it did not break all the time for IE10 when using SslProtocols.Tls as the protocol, so I will add a bit of code to switch the protocol if one fails.
The protocol that can be used also seems to change depending on whether you are using a proxy server like Fiddler, or an invisible server that hijacks DNS via the hosts file to divert traffic to the server.
I am using a raw Java HTTP client to connect to the Shopify API (specifically, using the Play Framework with the non-default sync driver, which is actually the JDK's default driver).
My application usually manages to connect successfully and convert the temporary access token into a permanent one by calling the /admin/oauth/access_token endpoint.
However, sometimes I get this error result from the API:
Generic Error(400)
{"error":"invalid_request"}
I haven't been able to reproduce the issue with my test stores - I've tried installing on a fresh store and reinstalling on existing stores after uninstalling. I'm not sure why this call sometimes fails or how to debug it. The API call continues to succeed for some stores using our application.
Some things that I am doing:
Even if the store's URL is on a custom domain, I always use the https://foo.myshopify.com/admin/oauth/access_token URL and not the URL of the custom domain, to prevent a redirect.
I always use an https URL and never an http one, again to prevent a redirect (we noticed a few issues with redirects with the Java HTTP client, so we aim for zero redirects); a simplified sketch of the call is shown below.
A thread I found about this error suggests possible problems with our SSL certificates; however, I don't think that is my problem, because some requests do work for us, and running openssl on our machine doesn't show any issues.
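For reference, the call boils down to something like this (a simplified sketch with placeholder shop name and credentials, written against plain HttpURLConnection rather than Play's WS wrapper):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ShopifyTokenExchange {
    public static void main(String[] args) throws Exception {
        // Placeholder values; the endpoint expects the app's API key/secret and the
        // temporary code received in the OAuth callback.
        URL url = new URL("https://foo.myshopify.com/admin/oauth/access_token");
        String body = "client_id=API_KEY&client_secret=SHARED_SECRET&code=TEMP_CODE";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setInstanceFollowRedirects(false); // we want to see redirects, not follow them
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);

        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();

        // This is where we intermittently see HTTP 400 with {"error":"invalid_request"}.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}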
How should I proceed? Open a support ticket with Shopify?
FYI, I see that this specific problem only started yesterday on Feb 19 2013, so it might be a temporary issue.
FYI, the problem was caused by reusing a temporary access code.
Our fault - Shopify could have been clearer in their error message, though.
I have a client-server application, in which the client communicates with the server using WCF (WCF is used both in the client and the server).
My problem is that instantiating the auto-generated proxy in the client in the following way consistently takes 15.xxx seconds:
new Service1Client()
I tried to solve this problem, and came to the following results:
1) Compiling and running the same code on other computers, ends up in the same way (always 15.xxx seconds).
2) Instantiating the proxy using ChannelFactory.CreateChannel<IService1>() doesn't help (it gives the same result).
My guess is that whenever the channel factory creates a channel, it tries to do something with a 15-second timeout, and when that fails, it continues with the creation.
By the way, I use .Net 3.5 without SP1, and cannot upgrade to SP1 :(
Thanks ahead
Even though this is already outdated, it may be useful for somebody else searching for the same thing. The problem could be a DNS resolution issue, which might be solved in SP1. So check whether it happens only when you use a host name, or also when you specify an IP address.
I've seen this before, where the time was being taken in looking for a proxy server. Check your WinINET (Internet Explorer) proxy settings.
My specific reason for thinking "proxy server" is that it takes 15s. 15s sounds like a nice round number for a network timeout.
Even though this is very old information, I just hit this issue too, although I was experiencing a 7-second delay on the first call to a method on the service client. I tracked it (in my environment) to the Internet Explorer settings as stated above, but in my circumstances it wasn't a proxy being enabled, it was the 'Automatically detect settings' option.
Under Connections -> LAN Settings, 'Automatically detect settings' was enabled.
I played with the machine.config and app.config and set
<runtime>
    <generatePublisherEvidence enabled="false"/>
</runtime>
Which also made no difference.
I found this answer here and thought I'd contribute a little more information in case someone else in the future experiences something like this.
(This was with a .NET 4 WCF service.)
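If changing the IE settings on every machine isn't practical, a per-application alternative that might be worth trying (I haven't tested it here; the element names come from the standard system.net configuration schema, so verify them for your framework version) is to turn off the default proxy in app.config:
<system.net>
    <defaultProxy enabled="false"/>
</system.net>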