I'm using Selenium 2.20. Why does WebDriver's InternetExplorerDriver throw this warning when launching the browser? This happens during a parameterized JUnit test. The warning is thrown each time I invoke "new InternetExplorerDriver()". After it retries, it succeeds on the second attempt of whatever it is doing. In other words, the tryExecute call has to run twice before my IE instance works in WebDriver.
org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: I/O exception (java.net.SocketException) caught when processing request:
Software caused connection abort: recv failed
org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: Retrying request
This is a warning message. The native code (C++) component of the IE driver includes an HTTP server, since the driver uses the JSON Wire Protocol for its communications. That HTTP server takes a small amount of time to start and be ready to receive HTTP requests. However, the RemoteWebDriver's HTTP client (remember that InternetExplorerDriver is a subclass of RemoteWebDriver) cannot know exactly when that server is available, so this causes a race condition. The HTTP client must poll the server until it receives a valid response. When you're seeing this warning, it's only telling you that the internal HTTP server hasn't completed its initialization, and the HTTP client has lost the race. It should be harmless, and you should be able to safely ignore it.
Since this message reflects a known race condition and is not important in most cases, you can configure java.util.logging to suppress it by loading a custom log configuration with this Java code:
LogManager.getLogManager().readConfiguration(
        getClass().getResourceAsStream(
                "/META-INF/logger.properties"));
And a file META-INF/logger.properties on the classpath:
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
org.apache.http.impl.client.DefaultHttpClient.level=WARNING
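For example, in a parameterized JUnit test like the one in the question, the configuration could be loaded once before the first driver instance is created. A sketch (the class name is illustrative):

import java.util.logging.LogManager;
import org.junit.BeforeClass;

public class IeDriverTest {

    @BeforeClass
    public static void quietHttpClientLogging() throws Exception {
        // Apply the custom logging configuration before the first
        // InternetExplorerDriver (and its HTTP client) is created.
        LogManager.getLogManager().readConfiguration(
                IeDriverTest.class.getResourceAsStream(
                        "/META-INF/logger.properties"));
    }
}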
Related
I use ApiKit to receive queries. Occasionally I get the following line in a log file:
WARN org.mule.module.http.internal.listener.grizzly.ResponseCompletionHandler - HTTP response sending task failed with error: Locally closed
It seems that in this case the integration has not sent a response to the party that called it. I thought there might be some timeout before ApiKit closes the connection to the caller, but based on the timestamps that does not seem to be the case, as everything happens within a second.
The payload is sent to the Artemis queue before this warning appears, and despite the warning the message is read from Artemis normally; apart from the warning and the missing response, the whole flow works fine.
So, am I correct in thinking that this warning indicates why the response is not sent? And what can be done to prevent this situation?
I made a simple Spring Boot application that returns a static JSON response for all requests.
When the app gets a request with a large payload (~5 MB JSON, 1 TP), the client receives the following error:
java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
I have tried increasing every limit I could; here are my Tomcat settings:
spring.http.multipart.max-file-size=524288000
spring.http.multipart.max-request-size=524288000
spring.http.multipart.enabled=true
server.max-http-post-size=10000000
server.connection-timeout=30000
server.tomcat.max-connections=15000
server.tomcat.max-http-post-size=524288000
server.tomcat.accept-count=10000
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=200
What can I do to make this simple Spring Boot app, with just one controller, handle such payloads successfully?
The Spring Boot application and the client sending the large payload both run on an 8-core machine with 16 GB of RAM, so resources shouldn't be a problem.
This was because the controller was returning a response without consuming the request body.
So the server closed the connection as soon as it received the request, without consuming the full request body; the client had not yet finished sending the request when the connection was closed.
Solution:
1. Read the full request body in your code (see the sketch below)
2. Set Tomcat's maxSwallowSize to a higher value (default: 2 MB):
server.tomcat.max-swallow-size=10MB
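For point 1, a minimal sketch of a controller that binds the request body, so Spring reads the full payload before the response is written (the class and mapping names are illustrative, not from the original app):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StaticResponseController {

    @PostMapping("/echo")
    public ResponseEntity<String> handle(@RequestBody String body) {
        // Binding @RequestBody forces the whole request body to be read
        // before the static response is returned, avoiding the broken pipe.
        return ResponseEntity.ok("{\"status\":\"ok\"}");
    }
}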
While making a network request with low connectivity, I very rarely see that I get
<-- HTTP FAILED: java.net.UnknownHostException: Unable to resolve host ....
while my server seems to have received the request correctly. I found one instance of it with device logs, which shows that actually an SSLException happened:
D/NativeCrypto: jniThrowException: javax/net/ssl/SSLException: Read error: ssl=0x7dc365f080: I/O error during system call, Software caused connection abort
D/NativeCrypto: jniThrowException: javax/net/ssl/SSLException: SSL shutdown failed: ssl=0x7dc365f080: I/O error during system call, Broken pipe
My question is: why do OkHttp and Retrofit throw UnknownHostException and not SSLException, and is there a way to actually get the SSLException? Currently my app thinks the request did not go through while the server processes that request.
I am using
okhttp:3.10.0
retrofit:2.2.0
adapter-rxjava2:2.2.0
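One way to at least observe the exception OkHttp actually propagates is an application interceptor that logs failures before rethrowing them. A sketch (this reports whatever exception OkHttp surfaces to the caller, which may already be the mapped UnknownHostException rather than the underlying SSLException):

import java.io.IOException;
import android.util.Log;
import okhttp3.OkHttpClient;

OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(chain -> {
            try {
                return chain.proceed(chain.request());
            } catch (IOException e) {
                // Log the concrete exception type before Retrofit maps/handles it.
                Log.w("HTTP", "Call failed: " + e.getClass().getName(), e);
                throw e;
            }
        })
        .build();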
I am using the Chrome websocket client extension to attach to a running container, calling the Docker remote API like this:
ws://localhost:2375/containers/34968f0c952b/attach/ws?stream=1&stdout=1
The container is started locally from my machine, executing a jar in the image that waits for user input. Basically, I want to supply this input from an input field in the web browser.
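For reference, the same attach can be reproduced outside the browser. A minimal sketch using Java 11's built-in WebSocket client (the container ID and port are taken from the URL above; note Docker may also deliver output as binary frames, for which onBinary is the analogous callback):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CountDownLatch;

public class DockerAttach {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch closed = new CountDownLatch(1);
        HttpClient.newHttpClient().newWebSocketBuilder()
                .buildAsync(
                        URI.create("ws://localhost:2375/containers/34968f0c952b/attach/ws?stream=1&stdout=1"),
                        new WebSocket.Listener() {
                            @Override
                            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                                System.out.print(data);   // container stdout
                                ws.request(1);            // ask for the next frame
                                return null;
                            }

                            @Override
                            public CompletionStage<?> onClose(WebSocket ws, int statusCode, String reason) {
                                closed.countDown();
                                return null;
                            }
                        })
                .join();
        closed.await();   // keep reading until the stream closes
    }
}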
Although I am able to attach using the API endpoint, I am encountering a few issues that I would like to resolve, probably due to my limited understanding of the ws endpoint as well as the sparse documentation:
1) When sending data using the Chrome websocket client extension, the frame appears to be transmitted over the websocket according to the network inspection tool. However, the process running in the container waiting for input only receives the sent data when the websocket connection is closed, all at once. Is this standard behaviour? Intuitively, you would expect the input to be sent to the process immediately.
2) If I attach to stdin and stdout at the same time, the Docker daemon gets stuck waiting for stdin to attach, resulting in not being able to see any output:
[debug] attach.go:22 attach: stdin: begin
[debug] attach.go:59 attach: stdout: begin
[debug] attach.go:143 attach: waiting for job 1/2
[debug] server.go:2312 Closing buffered stdin pipe
[error] server.go:844 Error attaching websocket: use of closed network connection
I have solved this by opening two separate connections for stdin and stdout, which works but is really annoying. Any ideas on this one?
Thanks in advance!
I'm getting a lot of errors about "Read timed out".
Caused by: org.apache.cxf.binding.soap.SoapFault: Read timed out
Is this error on Yodlee's server side?
How can this be fixed?
This is because the API call is taking longer than the client's read timeout. You can override it by setting the Java system property -Dcom.yodlee.soap.client.read.timeout=<timeout in milliseconds>. Try configuring it to 60 seconds and see if that resolves the issue.
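Equivalently, the property can be set in code before the Yodlee client is initialized, for example:

// Must run before the Yodlee SOAP client is created; the value is in milliseconds.
System.setProperty("com.yodlee.soap.client.read.timeout", "60000");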
If you are using CXF, you can control the client timeout by modifying the client's http-conduit configuration file.
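Alternatively, the same timeouts can be raised programmatically on the CXF conduit. A sketch, assuming port is your JAX-WS client proxy:

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

Client client = ClientProxy.getClient(port);   // 'port' is your service proxy
HTTPConduit conduit = (HTTPConduit) client.getConduit();

HTTPClientPolicy policy = new HTTPClientPolicy();
policy.setConnectionTimeout(30000);            // 30 s to establish the connection
policy.setReceiveTimeout(60000);               // 60 s read timeout
conduit.setClient(policy);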