EasyNetQ delayed respond/request resulting in timeout - RabbitMQ

I've run into a problem using the request/respond pattern of EasyNetQ on our server (Windows Server 2008). I'm not able to reproduce it locally at the moment.
The setup is two Windows services (run as console applications for testing) which are connected through the request/respond pattern in EasyNetQ. This has been working as expected until recently, when on the server the request side stopped "consuming" the responses until after the request times out.
I have included two links to pastebin containing the EasyNetQ console logging, which will hopefully make my problem a bit clearer.
RequestSide
RespondSide
Besides that, my request code looks like this:
var request = new foobar();
var response = _bus.Request<foobar, foobar2>(request);
and on the respond side:
var response = new response();
_bus.Respond<foobar, foobar2>(request =>
{
    try
    {
        ....
        return response;
    }
    catch (Exception e)
    {
        ....
        return response;
    }
});
As I've said, the request side sends the request as expected and the respond side consumes/catches it. This works as it should, but when the respond side is done processing and responds (which it does; the messages can be seen in the RabbitMQ management UI), the request side doesn't consume/catch the response until after the request has timed out (the default timeout is 10s; I tried setting it to 60s as well, which makes no difference). This is also evident in the logs linked above: on the RequestSide you'll see the 5 or so messages received from the response queue which had previously timed out.
I've tried using RespondAsync in case the processing was taking too long and messing something up; it didn't help. Tried using both RespondAsync and RequestAsync, which just messed everything up even more (I was probably doing something wrong with the request :)).
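For reference, a minimal sketch of the async variant, assuming the older IBus-based EasyNetQ API used above (exact signatures vary between EasyNetQ versions, so treat this as illustrative):
// Request side: RequestAsync returns a Task<foobar2> instead of blocking.
var request = new foobar();
var response = await _bus.RequestAsync<foobar, foobar2>(request);

// Respond side: the handler returns a Task<foobar2> instead of a foobar2.
_bus.RespondAsync<foobar, foobar2>(async req =>
{
    // ... do the processing asynchronously ...
    return new foobar2();
});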
I might be missing something, but I'm not sure what to try from here.
EDIT: I noticed I messed something up, and I've added more context below:
The IBus used for the request/response is created and injected with Ninject:
class FooModule : NinjectModule
{
    public override void Load()
    {
        Bind<IBus>().ToMethod(ctx => RabbitHutch.CreateBus("host=localhost", x => x.Register<IEasyNetQLogger>(_ => logger))).InSingletonScope();
    }
}
And it's all tied together by the service being constructed using Topshelf with Ninject like so:
static void Main(string[] args)
{
    HostFactory.Run(x =>
    {
        x.UseNinject(new FooModule());
        x.Service<FooService>(s =>
        {
            s.ConstructUsingNinject();
            s.WhenStarted((service, control) => service.Start(control));
            s.WhenStopped((service, control) => service.Stop(control));
        });
        x.RunAsLocalSystem();
    });
}
The Topshelf setup has all been tested pretty thoroughly and works as intended. It shouldn't really be relevant to the request/respond problem, but I thought I would provide a bit more context.

I had this same issue. My problem was that I set the timeout only on the respond side but not on the request side; after I set the timeout on both sides it worked fine.
My connection string, for example:
host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword
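Applied to the Ninject binding from the question, that would mean passing the timeout in the connection string on both services. Roughly (illustrative; the timeout value is in seconds):
Bind<IBus>().ToMethod(ctx =>
        RabbitHutch.CreateBus(
            "host=localhost;timeout=120",
            x => x.Register<IEasyNetQLogger>(_ => logger)))
    .InSingletonScope();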

Related

Ratchet PHP server establishes connection, but Kotlin never receives acknowledgement

I have a Ratchet server that I try to access via WebSocket. It is similar to the tutorial: it logs when there is a new client or when it receives a message. The Ratchet server reports having successfully established a connection while the Kotlin client does not (the connection event in Kotlin is never fired). I am using the socket-io-java module v2.0.1. The client shows a timeout after the specified timeout time, gets detached at the server and attaches again after a short while, as if it thinks the connection was not properly established (because of a missing connection response?).
The successful connection confirmation gets reported to the client if the client is a WebSocket client in the JS console of Chrome, but not to my Kotlin app. Even an Android emulator running on the same computer doesn't get a response (so I think the problem is not Wi-Fi related).
The connection works fine with JS, completing the full handshake, but with the Android app the handshake only reaches the server and the acknowledgement never makes it back to the client.
That's my server code:
<?php
namespace agroSMS\Websockets;

use Ratchet\ConnectionInterface;
use Ratchet\MessageComponentInterface;

class SocketConnection implements MessageComponentInterface
{
    protected \SplObjectStorage $clients;

    public function __construct() {
        $this->clients = new \SplObjectStorage;
    }

    function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);
        error_log("New client attached");
    }

    function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
        error_log("Client detached");
    }

    function onError(ConnectionInterface $conn, \Exception $e)
    {
        echo "An error has occurred: {$e->getMessage()}\n";
        $conn->close();
    }

    function onMessage(ConnectionInterface $from, $msg)
    {
        error_log("Received message: $msg");
        // TODO: Implement onMessage() method.
    }
}
And the script that I run in the terminal:
<?php
use Ratchet\Server\IoServer;
use agroSMS\Websockets\SocketConnection;
use Ratchet\WebSocket\WsServer;
use Ratchet\Http\HttpServer;

require dirname(__DIR__) . '/vendor/autoload.php';

$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new SocketConnection()
        )
    )
);

$server->run();
What I run in the browser for tests (returns "Connection established" in Chrome, but for some reason not in the Brave browser):
var conn = new WebSocket('ws://<my-ip>:80');
conn.onopen = function(e) {
    console.log("Connection established!");
};
conn.onmessage = function(e) {
    console.log(e.data);
};
What my Kotlin-code looks like:
try {
    val uri = URI.create("ws://<my-ip>:80")
    val options = IO.Options.builder()
        .setTimeout(60000)
        .setTransports(arrayOf(WebSocket.NAME))
        .build()
    socket = IO.socket(uri, options)
    socket.connect()
        .on(Socket.EVENT_CONNECT) {
            Log.d(TAG, "[INFO] Connection established")
            socket.send(jsonObject)
        }
        .once(Socket.EVENT_CONNECT_ERROR) {
            val itString = gson.toJson(it)
            Log.d(TAG, itString)
        }
} catch (e: Exception) {
    Log.e(TAG, e.toString())
}
After a minute the Kotlin code logs a "timeout" error, detaches from the server, and attaches again.
When I stop the script on the server, it then gives an error: "connection reset, websocket error" (which makes sense, but why doesn't it get the connection in the first place?)
I also tried to "just" change the protocol to "wss" in the URL in case that might be the problem, even though my server doesn't even work with SSL, but this just gave me another error:
[{"cause":{"bytesTransferred":0,"detailMessage":"Read timed out","stackTrace":[],"suppressedExceptions":[]},"detailMessage":"websocket error","stackTrace":[],"suppressedExceptions":[]}]
And the connection isn't even established at the server. So this attempt has been more like a downgrade.
I went to the GitHub page of socket.io-java-client to find a solution to my problem there, and it turned out the whole problem was that I had misunderstood a very important concept:
The fact that socket.io uses WebSockets doesn't mean it is compatible with plain WebSockets.
So speaking in clear words:
If you use socket.io on the client side, you also need to use it on the server side, and vice versa. Since socket.io sends a lot of metadata with its packets, a pure WebSocket server will accept the connection establishment, but its acknowledgement coming back will not be accepted by the socket.io client.
You have to go for either full socket.io or full pure WebSockets.

HTTPClient intermittently locking up server

I have a .NET Core 2.2 app which has a controller acting as a proxy to my APIs.
JS makes a fetch to the proxy; the proxy forwards the call on to the APIs and returns the response.
I am experiencing intermittent lock-ups on the proxy app when it's awaiting the response from the HttpClient. When this happens it locks up the entire server. No more requests will be processed.
According to the logs of the API that is being proxied to, it is returning fine.
To reproduce this I have to make 100+ requests in a loop on the client through the proxy, then reload the page multiple times whilst the 100 requests are in flight. It usually takes around 5 hits before things start slowing down.
The proxy will lock up waiting for an awaited request to resolve. Sometimes it comes back after a 4-5 second delay, other times after a minute. Most of the time I haven't waited longer than 10 minutes before giving up and killing the proxy.
I've distilled the code down to the following block that will reproduce the issue.
I believe I'm following best practices: it's async all the way down, I'm using IHttpClientFactory to enable sharing of HttpClient instances, and I'm using using blocks where I believe they are required.
The implementation was based on this: https://github.com/aspnet/AspLabs/tree/master/src/Proxy
I'm hoping I'm making a rather obvious mistake that others with more experience can pinpoint!
Any help would be greatly appreciated.
namespace Controllers
{
    [Route("/proxy")]
    public class ProxyController : Controller
    {
        private readonly IHttpClientFactory _factory;

        public ProxyController(IHttpClientFactory factory)
        {
            _factory = factory ?? throw new ArgumentNullException(nameof(factory));
        }

        [HttpGet]
        [Route("api")]
        public async Task ProxyApi(CancellationToken requestAborted)
        {
            // Build API specific URI
            var uri = new Uri("");

            // Get headers from request
            var headers = Request.Headers.ToDictionary(x => x.Key, y => y.Value);
            headers.Add(HeaderNames.Authorization, $"Bearer {await HttpContext.GetTokenAsync("access_token")}");

            // Build proxy request message. This is within a service
            var message = new HttpRequestMessage();
            foreach (var header in headers)
            {
                message.Headers.Add(header.Key, header.Value.ToArray());
            }
            message.RequestUri = uri;
            message.Headers.Host = uri.Authority;
            message.Method = new HttpMethod(Request.Method);

            requestAborted.ThrowIfCancellationRequested();

            // Generate client and issue request
            using (message)
            using (var client = _factory.CreateClient())
            // **Always hangs here when it does hang**
            using (var result = await client.SendAsync(message, requestAborted).ConfigureAwait(false))
            {
                // Apply data from request onto response - Again this is within a service
                Response.StatusCode = (int)result.StatusCode;
                foreach (var header in result.Headers)
                {
                    Response.Headers[header.Key] = header.Value.ToArray();
                }

                // SendAsync removes chunking from the response. This removes the header so it doesn't expect a chunked response.
                Response.Headers.Remove("transfer-encoding");

                requestAborted.ThrowIfCancellationRequested();

                using (var responseStream = await result.Content.ReadAsStreamAsync())
                {
                    // Copy the upstream response body to this response's body
                    await responseStream.CopyToAsync(Response.Body, 81920);
                }
            }
        }
    }
}
EDIT
So I modified the code to remove the usings and return the proxied response directly as a string instead of streaming it, and I'm still getting the same issues.
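Roughly, the simplified variant looks like this (a sketch reconstructed from the description above, not the exact code):
// No usings around the message/response; the body is read into a string
// rather than streamed back to the caller.
var client = _factory.CreateClient();
var result = await client.SendAsync(message, requestAborted).ConfigureAwait(false);
Response.StatusCode = (int)result.StatusCode;
var body = await result.Content.ReadAsStringAsync();
await Response.WriteAsync(body, requestAborted);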
When running netstat I do see a lot of entries for the URL of the proxied API.
4 rows mention the IP of the API being proxied to, and probably about another 20 rows mention the IP of the proxy site. Those numbers don't seem odd to me, but I don't have much experience using netstat (first time I've ever fired it up).
Also, I have left the proxy running for about 20 minutes and it is technically still alive. Responses are coming back, just taking a very long time between the API being proxied to returning data and the HttpClient resolving. However, it won't service any new requests; they just sit there hanging.

Web API 2 return OK response but continue processing in the background

I have created an MVC Web API 2 webhook for Shopify:
public class ShopifyController : ApiController
{
    // PUT: api/Afilliate/SaveOrder
    [ResponseType(typeof(string))]
    public IHttpActionResult WebHook(ShopifyOrder order)
    {
        // need to return 202 response otherwise webhook is deleted
        return Ok(ProcessOrder(order));
    }
}
Where ProcessOrder loops through the order and saves the details to our internal database.
However, if the process takes too long then Shopify calls the webhook again because it thinks it has failed. Is there any way to return the OK response first but then do the processing after?
Kind of like when you return a redirect in an MVC controller and have the option of continuing with processing the rest of the action after the redirect.
Please note that I will always need to return the OK response, as Shopify in all its wisdom has decided to delete the webhook if it fails 19 times (and processing for too long is counted as a failure).
I have managed to solve my problem by running the processing asynchronously by using Task:
// PUT: api/Afilliate/SaveOrder
public IHttpActionResult WebHook(ShopifyOrder order)
{
    // this should process the order asynchronously
    var tasks = new[]
    {
        Task.Run(() => ProcessOrder(order))
    };
    // without the await here, this should be hit before the order processing is complete
    return Ok("ok");
}
There are a few options to accomplish this:
Let a task runner like Hangfire or Quartz run the actual processing, where your web request just kicks off the task (see the sketch after this list).
Use queues, like RabbitMQ, to run the actual process, and have the web request just add a message to the queue... be careful, this one is probably the best but can require some significant know-how to set up.
Though maybe not exactly applicable to your specific situation, as you have another process waiting for the request to return... but if you did not, you could use JavaScript AJAX to kick off the process in the background, and maybe you can turn retry off on that request... still, that keeps the request going in the background, so maybe not exactly your cup of tea.
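For the first option, a minimal sketch of the webhook with Hangfire (illustrative only; IOrderProcessor is a hypothetical service you would register yourself, and Hangfire needs its own storage and server setup):
public class ShopifyController : ApiController
{
    // PUT: api/Afilliate/SaveOrder
    public IHttpActionResult WebHook(ShopifyOrder order)
    {
        // Enqueue the work and return immediately; Hangfire persists the job
        // and runs it on a background worker outside the request.
        BackgroundJob.Enqueue<IOrderProcessor>(p => p.ProcessOrder(order));
        return Ok("queued");
    }
}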
I used Response.CompleteAsync(); like below. I also added a neat middleware and attribute to indicate no post-request processing.
[SkipMiddlewareAfterwards]
[HttpPost]
[Route("/test")]
public async Task Test()
{
    /*
    let them know you've 202 (Accepted) the request
    instead of 200 (Ok), because you don't know that yet.
    */
    HttpContext.Response.StatusCode = 202;
    await HttpContext.Response.CompleteAsync();
    await SomeExpensiveMethod();
    // Don't return, because default middleware will kick in. (e.g. error page middleware)
}

public class SkipMiddlewareAfterwards : ActionFilterAttribute
{
    //ILB
}

public class SomeMiddleware
{
    private readonly RequestDelegate next;

    public SomeMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        await next(context);
        if (context.Features.Get<IEndpointFeature>().Endpoint.Metadata
            .Any(m => m is SkipMiddlewareAfterwards)) return;
        // post-request actions here
    }
}
Task.Run(() => ImportantThing()) is not an appropriate solution, as it exposes you to a number of potential problems, some of which have already been explained above. Imo, the most nefarious of these is that an unhandled exception on the worker thread can straight up kill your worker process with no trace of the error outside of event logs or whatever is captured at the OS level, if that's even available. Not good.
There are many more appropriate ways to handle this scenario, like handing off to a service bus or implementing a HostedService.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-6.0&tabs=visual-studio
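A minimal sketch of the hosted-service approach (names such as OrderQueue and OrderWorker are illustrative, not from the original post; it relies on System.Threading.Channels and Microsoft.Extensions.Hosting):
// The controller writes the order to the channel and returns immediately;
// the BackgroundService drains the channel and does the slow work.
public class OrderQueue
{
    private readonly Channel<ShopifyOrder> _channel = Channel.CreateUnbounded<ShopifyOrder>();
    public ValueTask EnqueueAsync(ShopifyOrder order) => _channel.Writer.WriteAsync(order);
    public IAsyncEnumerable<ShopifyOrder> ReadAllAsync(CancellationToken ct) => _channel.Reader.ReadAllAsync(ct);
}

public class OrderWorker : BackgroundService
{
    private readonly OrderQueue _queue;
    public OrderWorker(OrderQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var order in _queue.ReadAllAsync(stoppingToken))
        {
            try
            {
                await ProcessOrderAsync(order);
            }
            catch (Exception)
            {
                // log here; don't let one bad order kill the worker
            }
        }
    }

    // Placeholder for the real order processing.
    private Task ProcessOrderAsync(ShopifyOrder order) => Task.CompletedTask;
}
Both OrderQueue and OrderWorker would be registered as singletons (AddSingleton/AddHostedService), so the controller can return 202 as soon as the order is enqueued.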

Apache Http Client Put Request Error

I'm trying to upload a file using the Apache HttpClient's PUT method. The code is as below:
def putFile(resource: String, file: File): (Int, String) = {
  val httpClient = new DefaultHttpClient(connManager)
  httpClient.getCredentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(un, pw))
  val url = address + "/" + resource
  val put = new HttpPut(url)
  put.setEntity(new FileEntity(file, "application/xml"))
  executeHttp(httpClient, put) match {
    case Success(answer) => (answer.getStatusLine.getStatusCode, "Successfully uploaded file")
    case Failure(e) => {
      e.printStackTrace()
      (-1, e.getMessage)
    }
  }
}
When I try running the method, I see the following error:
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:101)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:252)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:281)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:247)
at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:219)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:298)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:633)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:454)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
I do not know what has gone wrong. I'm able to do GET requests, but PUT seems not to work! Any clues as to where I should look?
Look on the server. If GET works but PUT does not, then you have to figure out the receiving end.
Also, you may want to write a simple HTML file that has a form with the PUT method in it to rule out your Java part.
As a side note: it's technically possible that something in between stops the request from going through or the response from reaching you. Best to set up a dummy HTTP server to do the testing against.
Maybe it's also a timeout issue, so that the server takes too long to process your PUT.
The connection you are trying to use is a stale connection and therefore the request is failing.
But why are you only seeing an error for the PUT request and not for the GET request?
If you check the DefaultHttpRequestRetryHandler class you will see that by default HttpClient attempts to automatically recover from I/O exceptions. The default auto-recovery mechanism is limited to just a few exceptions that are known to be safe.
HttpClient will make no attempt to recover from any logical or HTTP protocol errors (those derived from HttpException class).
HttpClient will automatically retry those methods that are assumed to be idempotent: your GET request, but not your PUT request!
HttpClient will automatically retry those methods that fail with a transport exception while the HTTP request is still being transmitted to the target server (i.e. the request has not been fully transmitted to the server).
This is why you don't notice any error with your GET request, because the retry mechanism handles it.
You should define a CustomHttpRequestRetryHandler extending the DefaultHttpRequestRetryHandler. Something like this:
public class CustomHttpRequestRetryHandler extends DefaultHttpRequestRetryHandler {

    @Override
    public boolean retryRequest(IOException exception, int executionCount, HttpContext context) {
        if (exception instanceof NoHttpResponseException) {
            return true;
        }
        return super.retryRequest(exception, executionCount, context);
    }
}
Then just assign your CustomHttpRequestRetryHandler
final HttpClientBuilder httpClientBuilder = HttpClients.custom();
httpClientBuilder.setRetryHandler(new CustomHttpRequestRetryHandler());
And that's it; now your PUT request is handled by your new RetryHandler (like the GET was by the default one).

Redis Timeout Expired message on GetClient call

I hate the questions that have "Not Enough Info", so I will try to give detailed information, and in this case it is code.
Server:
64 bit of https://github.com/MSOpenTech/redis/tree/2.6/bin/release
There are three classes:
DbOperationContext.cs: https://gist.github.com/glikoz/7119628
PerRequestLifeTimeManager.cs: https://gist.github.com/glikoz/7119699
RedisRepository.cs https://gist.github.com/glikoz/7119769
We are using Redis with Unity.
In this case we are getting this strange message:
"Redis Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use.";
We checked these:
Is the problem a configuration issue?
Are we using the wrong RedisServer.exe?
Is there an architectural problem?
Any idea? Any similar story?
Thanks.
Extra Info 1
There is no rejected connection issue in the server stats (I've checked via the redis-cli.exe info command)
I have continued to debug this problem, and have fixed numerous things on my platform to avoid this exception. Here is what I have done to solve the issue:
Executive summary:
People encountering this exception should check:
That the PooledRedisClientsManager (IRedisClientsManager) is registered in a singleton scope
That the RedisMqServer (IMessageService) is registered in a singleton scope
That any utilized RedisClient returned from either of the above is properly disposed of, to ensure that the pooled clients are not left stale (see the sketch after this list)
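For the third point, the pattern on the calling side is simply (a sketch; clientsManager stands for whatever IRedisClientsManager instance your container resolves):
// Wrap every client taken from the pooled manager in a using block so it is
// returned to the pool instead of being left checked out and going stale.
using (var redis = clientsManager.GetClient())
{
    redis.Set("some:key", "some value");
}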
The solution to my problem:
First of all, this exception is thrown by the PooledRedisClient because it has no more pooled connections available.
I'm registering all the required Redis stuff in the StructureMap IoC container (not Unity as in the author's case). Thanks to this post I was reminded that the PooledRedisClientManager should be a singleton; I also decided to register the RedisMqServer as a singleton:
ObjectFactory.Configure(x =>
{
    // register the message queue stuff as Singletons in this AppDomain
    x.For<IRedisClientsManager>()
        .Singleton()
        .Use(BuildRedisClientsManager);

    x.For<IMessageService>()
        .Singleton()
        .Use<RedisMqServer>()
        .Ctor<IRedisClientsManager>().Is(i => i.GetInstance<IRedisClientsManager>())
        .Ctor<int>("retryCount").Is(2)
        .Ctor<TimeSpan?>().Is(TimeSpan.FromSeconds(5));

    // Retrieve a new message factory from the singleton IMessageService
    x.For<IMessageFactory>()
        .Use(i => i.GetInstance<IMessageService>().MessageFactory);
});
My "BuildRedisClientManager" function looks like this:
private static IRedisClientsManager BuildRedisClientsManager()
{
    var appSettings = new AppSettings();
    var redisClients = appSettings.Get("redis-servers", "redis.local:6379").Split(',');
    var redisFactory = new PooledRedisClientManager(redisClients);
    redisFactory.ConnectTimeout = 5;
    redisFactory.IdleTimeOutSecs = 30;
    redisFactory.PoolTimeout = 3;
    return redisFactory;
}
Then, when it comes to producing messages it's very important that the utilized RedisClient is properly disposed of, otherwise we run into the dreaded "Timeout Expired" (thanks to this post). I have the following helper code to send a message to the queue:
public static void PublishMessage<T>(T msg)
{
    try
    {
        using (var producer = GetMessageProducer())
        {
            producer.Publish<T>(msg);
        }
    }
    catch (Exception ex)
    {
        // TODO: Log or whatever... I'm not throwing to avoid showing users that we have a broken MQ
    }
}

private static IMessageQueueClient GetMessageProducer()
{
    var producer = ObjectFactory.GetInstance<IMessageService>() as RedisMqServer;
    var client = producer.CreateMessageQueueClient();
    return client;
}
I hope this helps solve your issue too.
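Since the question uses Unity rather than StructureMap, the equivalent singleton registration would look roughly like this (a sketch assuming Unity's ContainerControlledLifetimeManager and InjectionFactory; adjust to your container version and connection string):
// Register the pooled client manager as a singleton so every resolve shares
// the same pool instead of creating (and leaking) new connections.
container.RegisterType<IRedisClientsManager>(
    new ContainerControlledLifetimeManager(),
    new InjectionFactory(c => new PooledRedisClientManager("redis.local:6379")));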