Mule 3.9 logs "HTTP response sending task failed with error: Locally closed"

I use ApiKit to receive queries. Occasionally I get the following line in a log file:
WARN org.mule.module.http.internal.listener.grizzly.ResponseCompletionHandler - HTTP response sending task failed with error: Locally closed
It seems that in this case the integration has not sent a response to the party that called it. I thought there might be some sort of timeout before ApiKit closes the connection to the caller, but based on the timestamps that doesn't seem to be the case, as everything happens within a second.
In this case the payload is sent to an Artemis queue before this warning appears, and despite the warning the message is read from Artemis normally; otherwise the whole flow works just fine apart from this warning and the missing response.
So, am I correct in thinking that this warning indicates why the response is not sent? And what can be done to prevent this situation?

Related

SO_KEEPALIVE issue in Mulesoft

We had a MuleSoft app that basically picks a message from a queue (ActiveMQ), then posts it to a target app via an HTTP request to the target's API.
Runtime: 4.3.0
HTTP Connector version: v1.3.2
Server: Windows, On-premise standalone
However, sometimes the message doesn't get sent successfully after being picked from the queue, and the message below can be found in the log:
WARN 2021-07-10 01:24:46,080 [[masked-app].http.requester.requestConfig.02 SelectorRunner] [event: ] org.glassfish.grizzly.nio.transport.TCPNIOTransport: GRIZZLY0005: Can not set SO_KEEPALIVE to false
java.net.SocketException: Invalid argument: no further information
at sun.nio.ch.Net.setIntOption0(Native Method) ~[?:1.8.0_281]
The flow completed silently without any error after the above message, hence no error handling happens.
I found this mentioning it is a known bug on Windows Server that won’t affect the correct behavior of the application, but that document is about failing to set SO_KEEPALIVE to true rather than false.
It looks like the message didn't get posted successfully, as the target system team can't find a corresponding incoming request in their logs.
This is not acceptable, as the message is critical and no one knows unless the target system realizes something is wrong... I'm not sure whether the failure to set SO_KEEPALIVE to false is the root cause; could you please share some thoughts? Thanks a lot in advance.
This is probably unrelated to the warning you mentioned, but there doesn't seem to be enough information to identify the actual root cause.
Having said that, the version of the HTTP connector is old and is missing almost 3 years of fixes. Updating to the latest version should improve the reliability of the application.

POST with bodies larger than 64k to Express.js failing to process

I'm attempting to post some JSON to an Express.js endpoint. If the size of the JSON is less than 64k, it succeeds just fine. If it exceeds 64k, the request is never completely received by the server. The problem only occurs when running Express directly locally; when running on Heroku, the request proceeds without issue.
The problem is seen across macOS, Linux (Ubuntu 19), and Windows, and is present when using Chrome, Firefox, or Safari.
When I make the request using Postman, it fails.
If I make the request using curl, it succeeds.
If I make the request after artificially throttling Chrome to "Slow 3G" levels in the network settings, it succeeds.
I've traced through Express and discovered that the problem appears when attempting to parse the body. The request gets passed to body-parser.json(), which in turn calls getRawBody to get the Buffer from the request.
getRawBody is processing the incoming request stream and converting it into a buffer. It receives the first chunk of the request just fine, but never receives the second chunk. Eventually the request continues parsing with an empty buffer.
The size limit on body-parser is set to 100mb, so that is not the problem. getRawBody never returns, so body-parser never gets a crack at it.
If I log the events from getRawBody, I can see the first chunk come in, but no other events are fired.
Watching Wireshark logs, all the data is getting sent over the wire, but it looks like Express is not receiving all the chunks for some reason. I think it has to be due to how Express is processing the packets, but I have no idea how to proceed.
On the off chance anyone in the future runs into the same thing: the root problem in this case was that we were overwriting req.socket with our socket.io client. req.socket is used by Node internally to transfer data. We were overwriting it in such a way that the first packets would get through, but not subsequent ones, so if the request was processed quickly enough, all was well.
tl;dr: Don't overwrite req.socket.

Tomcat server causing broken pipe for big payloads

I made a simple Spring Boot application that returns a static JSON response for all requests.
When the app gets a request with a large payload (~5 MB JSON, 1 TP), the client receives the following error:
java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
I have tried increasing every limit I could; here are my Tomcat settings:
spring.http.multipart.max-file-size=524288000
spring.http.multipart.max-request-size=524288000
spring.http.multipart.enabled=true
server.max-http-post-size=10000000
server.connection-timeout=30000
server.tomcat.max-connections=15000
server.tomcat.max-http-post-size=524288000
server.tomcat.accept-count=10000
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=200
What can I do to make this simple Spring Boot app, with just one controller, handle such payloads successfully?
This Spring Boot application and the client sending the large payload run on an 8-core machine with 16 GB of RAM, so resources shouldn't be a problem.
This was because the controller was returning a response without consuming the request body.
So the server closed the connection as soon as it received the request, without consuming the full request body: the client still hadn't finished sending the request, and the server closed the connection before it could.
Solution:
1. Read the full request body in your code (see the sketch below)
2. Set Tomcat's maxSwallowSize to a higher value (default: 2 MB)
server.tomcat.max-swallow-size=10MB
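For point 1, here is a minimal sketch of a controller that binds the body explicitly so Spring reads the whole request before replying; the path, payload type and response are illustrative, not the original controller:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Illustrative controller: binding the body with @RequestBody forces the whole
// request to be read before the static response is written, so the connection
// is not closed while the client is still sending data.
@RestController
public class StaticResponseController {

    @PostMapping("/ingest")
    public String ingest(@RequestBody String body) {
        // The body is consumed here even though the response ignores it.
        return "{\"status\":\"ok\"}";
    }
}

Point 2 still matters for bodies larger than whatever the handler actually reads: maxSwallowSize controls how many bytes of an unread body Tomcat is willing to swallow before it aborts the connection.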

NServiceBus exceptions logged as INFO messages

I'm running an NServiceBus endpoint in an Azure Worker Role. I send all diagnostics to Table Storage at the moment. I was getting messages in my DLQ, and I couldn't figure out why I wasn't getting any exceptions logged in my table storage.
It turns out that NSB logs the exceptions as INFO, which is why I couldn't easily spot them in between all the actual verbose logging.
In my case, a command handler's dependencies couldn't be resolved, so Autofac threw an exception. I totally get why the exception is thrown; I just don't understand why it's logged as INFO. The message ends up in my DLQ, and I only have an INFO trace to understand why.
Is there a reason why exceptions are handled this way in NSB?
NServiceBus is not logging the container issue as an error because it happens during an attempt to process a message. First Level Retries and Second Level Retries will be attempted. When SLR is executed, it will log a WARN about the retry. Ultimately, the message will fail processing and an ERROR message will be logged. The NSB and Autofac sample can be used to reproduce this.
When the endpoint is running as a scaled-out role and MaxDeliveryCount is not big enough to accommodate all the role instances and the retry count each instance would hold, DeliveryCount reaches its maximum while the NServiceBus endpoint instance still thinks it has attempts left before sending the message to an error queue and logging an error. Similar to the question here, I'd recommend increasing MaxDeliveryCount.
There is an open NServiceBus issue to add native support for an SLR counter; you can add your voice to the issue. The next version of NServiceBus (V6) will log the message ID along with the exception so that you can at least correlate the message in the DLQ with the log file.

How to be sure that channel.basic_publish has succeeded (internet connection error, ...)?

I'm doing this:
channel.basicPublish("myexchange", "routing", MessageProperties.PERSISTENT_TEXT_PLAIN,
        "message".getBytes());
I would like to retry sending the message later if the publish didn't succeed (connection loss, ...), but basicPublish is a void method and there is no callback in the arguments.
Any idea?
You are looking for an HA client.
By default you have to implement the feature yourself.
If you use Java there is:
https://github.com/joshdevins/rabbitmq-ha-client (it's a bit old, but I think it still works).
Anyway, if you want to implement the functionality yourself, you have to catch the exception and retry later.
If the client loses the connection, you should reconnect the client before resending the message.
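As an illustration only, here is a minimal sketch of that catch-and-retry idea with the plain Java client; the exchange and routing key come from your snippet, everything else (single reconnect attempt, no backoff) is an assumption rather than a full HA implementation:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

// Illustrative catch-and-retry publisher: if the publish fails, reconnect and resend once.
public class RetryingPublisher {

    private final ConnectionFactory factory = new ConnectionFactory();
    private Connection connection;
    private Channel channel;

    void connect() throws Exception {
        connection = factory.newConnection();
        channel = connection.createChannel();
    }

    void publish(byte[] body) throws Exception {
        try {
            channel.basicPublish("myexchange", "routing",
                    MessageProperties.PERSISTENT_TEXT_PLAIN, body);
        } catch (Exception e) {
            // The connection was probably lost: reconnect, then resend once.
            connect();
            channel.basicPublish("myexchange", "routing",
                    MessageProperties.PERSISTENT_TEXT_PLAIN, body);
        }
    }
}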
Since version 3.3.0 this last feature is available in the Java client:
java client
enhancements
14587 support automatically reconnecting to server(s) if connection is
interrupted
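For reference, here is a minimal sketch of turning that automatic recovery on explicitly in the Java client; the host name and recovery interval are placeholder values:

import java.io.IOException;
import java.util.concurrent.TimeoutException;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// Illustrative setup: the client will reconnect by itself if the connection is interrupted.
public class RecoveringConnection {

    public static Channel openChannel() throws IOException, TimeoutException {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setAutomaticRecoveryEnabled(true); // reconnect after a dropped connection
        factory.setNetworkRecoveryInterval(5000);  // wait 5 s between recovery attempts
        Connection connection = factory.newConnection();
        return connection.createChannel();
    }
}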
This point is very important if you want to send the messages sequentially.
A simple solution is to put the messages in a client-side list and then remove each message from the list only once it has been sent correctly.
I think you might also find Publisher Acknowledgements interesting.
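A minimal sketch of publisher acknowledgements with the plain Java client, reusing the exchange and routing key from your snippet; the 5-second timeout is an arbitrary choice:

import java.io.IOException;
import java.util.concurrent.TimeoutException;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.MessageProperties;

// Illustrative publisher confirms: the broker acknowledges each message, so the
// publisher knows when it is safe to remove it from its client-side list.
public class ConfirmedPublisher {

    public static void publishConfirmed(Channel channel, byte[] body)
            throws IOException, InterruptedException, TimeoutException {
        channel.confirmSelect(); // put the channel in confirm mode

        channel.basicPublish("myexchange", "routing",
                MessageProperties.PERSISTENT_TEXT_PLAIN, body);

        // Blocks until the broker confirms the message; throws if the message is
        // nack'ed or the timeout expires, in which case it should be kept and retried.
        channel.waitForConfirmsOrDie(5000);
    }
}

If the call throws, the message is still in the client-side list and can go through the retry/reconnect path sketched above.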