I have been writing a small sample application to consume messages from a queue in RabbitMQ.
The code should read each message and call a REST API (here replaced with a Task.Delay):
static void Main(string[] args)
{
    var factory = new ConnectionFactory
    {
        Uri = new Uri("..."),
        DispatchConsumersAsync = true
    };
    var connection = factory.CreateConnection();
    var channel = connection.CreateModel();

    var consumer = new AsyncEventingBasicConsumer(channel);
    consumer.Received += async (model, eventArgs) =>
    {
        Console.WriteLine("Doing a fake API call...");
        await Task.Delay(2000);
        Console.WriteLine("Done with fake API call!");
        channel.BasicAck(eventArgs.DeliveryTag, false);
    };
    channel.BasicConsume("myQueue", false, consumer);

    Console.ReadLine(); // keep the process alive while consuming (assumed; omitted from the post)
}
When I run this application with five messages on the queue I get the following result:
The messages are processed one by one and with the 2 second delay it takes ~10 seconds.
I would have expected to see five lines with Doing a fake API call... followed by five lines of Done with fake API call! with a total time of ~2 seconds.
When running the synchronous version I see the exact same result, which was expected:
static void Main(string[] args)
{
    var factory = new ConnectionFactory
    {
        Uri = new Uri("...")
    };
    var connection = factory.CreateConnection();
    var channel = connection.CreateModel();

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (model, eventArgs) =>
    {
        Console.WriteLine("Doing a fake API call...");
        Thread.Sleep(2000);
        Console.WriteLine("Done with fake API call!");
        channel.BasicAck(eventArgs.DeliveryTag, false);
    };
    channel.BasicConsume("myQueue", false, consumer);

    Console.ReadLine(); // keep the process alive while consuming (assumed; omitted from the post)
}
My question is: What is the difference in using the AsyncEventingBasicConsumer compared to the EventingBasicConsumer?
And: Is there a way of making the consumer process other messages while awaiting work for previous messages?
This is an old question and I'm sure you're not still waiting for an answer, but I've found that it can be challenging to really nail down the details of how RabbitMQ behaves. As such, I'm hoping at least some of the information here will help someone in the future.
RabbitMQ's EventingBasicConsumer and AsyncEventingBasicConsumer differ mostly in their signatures, letting us attach async handlers to events like Received and ConsumerCancelled. If you switch back and forth between them (setting DispatchConsumersAsync appropriately), you might notice that there's no change in behavior. This is because the dispatcher underneath invokes the events "asynchronously", but with no degree of concurrency.
To enable concurrent message handling on your RabbitMQ connection, set ConsumerDispatchConcurrency to a value greater than 1 on the IConnectionFactory object before establishing the connection. It defaults to 1, which is effectively serial processing.
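A minimal sketch of that change, reusing the factory from the question (the Uri is still a placeholder, and the property is available in RabbitMQ.Client 6.x):
var factory = new ConnectionFactory
{
    Uri = new Uri("..."),
    DispatchConsumersAsync = true,
    // Allow up to 5 Received handlers to run concurrently on this connection.
    ConsumerDispatchConcurrency = 5
};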
Using your scenario, for example, if you set ConsumerDispatchConcurrency = 5 then I would suspect you'd see what you originally expected. Something like:
Doing a fake API call...
Doing a fake API call...
Doing a fake API call...
Doing a fake API call...
Doing a fake API call...
Done with fake API call!
Done with fake API call!
Done with fake API call!
Done with fake API call!
Done with fake API call!
While ConsumerDispatchConcurrency = 2 might look something more like this:
Doing a fake API call...
Doing a fake API call...
Done with fake API call!
Done with fake API call!
Doing a fake API call...
Doing a fake API call...
Done with fake API call!
Done with fake API call!
Doing a fake API call...
Done with fake API call!
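One related knob worth mentioning (my addition, not part of the original question): with manual acks, the channel's prefetch count also caps how many unacknowledged deliveries the broker will push at once, so it needs to be at least as large as the concurrency you want. A one-line sketch using the channel from the question:
channel.BasicQos(prefetchSize: 0, prefetchCount: 5, global: false);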
Related
So I have the following scenario: I have a method in my WCF service where the client sends a request, the service performs some background processing and calls an external web service method, and the method responds with an acknowledgement immediately (before the background processing has completed).
The way I have thought of doing this is to have my WCF method return a response after spawning a thread that does the background processing and calls the external web service. The flow is something like this:
Caller sends request to INITIAL_CALL
WCF starts a thread that calls PROCESS
WCF returns true
PROCESS makes call to EXTERNALWS and gets response in postResponse
postResponse gets logged to the database
See example code below:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class Service : IService
{
    public bool INITIAL_CALL()
    {
        new Thread(() =>
        {
            PROCESS();
        }).Start();
        return true;
    }

    private void PROCESS()
    {
        // Do some background processing and create request for call below
        var processRequest = "Request goes here";
        using (var client = new EXTERNALWS.ResponseTypeClient())
        {
            var postResponse = client.POST(processRequest);
            // Log postResponse to database
        }
    }
}
Keeping in mind that PROCESS() may run for a long time, I just wanted to see if there is a better way of doing this with WCF and IIS. Or are there any pitfalls I have to consider, e.g. IIS app pool recycling destroying the thread?
I have found a solution for this. I ended up using Hangfire (https://www.hangfire.io/) to do the background processing; it seems to be made specifically for this. I implemented it following the documentation on their homepage, in a separate ASP.NET MVC application, and configured it as always running on IIS. All instructions and sample code to set this up are found at https://docs.hangfire.io/en/latest/index.html.
I had to change the flow (since I am no longer spawning new threads manually), and also create a new table in the database so that INITIAL_CALL in the WCF application can queue the long-running jobs, later to be picked up and executed by Hangfire. Keep in mind this table is separate from Hangfire's own queue: Hangfire checks it at a predefined interval, and it stores which function to call, its parameters, and an indicator of whether the job has already been picked up by Hangfire (to avoid the re-entrant scenario described at https://docs.hangfire.io/en/latest/best-practices.html). A little extra work, but it works like a charm.
The way the flow works now is as follows:
Caller sends request to INITIAL_CALL
In INITIAL_CALL, an entry is made in a new database table (this is the job queue that will be checked by Hangfire at a predefined interval).
INITIAL_CALL returns true
Hangfire checks this database table in a predefined interval using PROCESS_JOBS (this interval can be defined in Hangfire itself).
If there is a queued item, PROCESS_JOBS proceeds and makes the call to EXTERNALWS and gets response in postResponse. If not, it just exits and does nothing further.
postResponse gets logged to the database.
See updated example code below:
WCF Application
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class Service : IService
{
    public bool INITIAL_CALL()
    {
        // Add job queue entry in database table to be picked up by Hangfire
        return true;
    }
}
Hangfire Application
public void PROCESS_JOBS()
{
    // Check at a predefined interval if there is a pending job in the queue.
    // If there is, continue with the below; otherwise exit the function.

    // Do some background processing and create request for call below
    var processRequest = "Request goes here";
    using (var client = new EXTERNALWS.ResponseTypeClient())
    {
        var postResponse = client.POST(processRequest);
        // Log postResponse to database
    }
}
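For completeness, a minimal sketch (my addition, not from the original setup) of how the polling method could be scheduled with Hangfire's recurring jobs; the "process-jobs" id and the JobProcessor class are hypothetical names:
// Run PROCESS_JOBS every minute; Hangfire persists and triggers the schedule.
RecurringJob.AddOrUpdate<JobProcessor>(
    "process-jobs",
    p => p.PROCESS_JOBS(),
    Cron.Minutely());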
I have a .NET Core 2.2 app which has a controller acting as a proxy to my APIs.
JS makes a fetch to the proxy; the proxy forwards the call on to the APIs and returns the response.
I am experiencing intermittent lock-ups on the proxy app while it's awaiting the response from the HttpClient. When this happens it locks up the entire server; no more requests will be processed.
According to the logs of the API that is being proxied to, it is returning fine.
To reproduce this, I have to make 100+ requests in a loop from the client through the proxy, then reload the page multiple times while the 100 requests are in flight. It usually takes around 5 reloads before things start slowing down.
The proxy will lock up waiting for an awaited request to resolve. Sometimes it comes back after a 4-5 second delay, other times after a minute. Most of the time I haven't waited longer than 10 minutes before giving up and killing the proxy.
I've distilled the code down to the following block that will reproduce the issue.
I believe I'm following best practices: it's async all the way down, I'm using IHttpClientFactory to enable sharing of HttpClient instances, and I'm using using blocks where I believe they are required.
The implementation was based on this: https://github.com/aspnet/AspLabs/tree/master/src/Proxy
I'm hoping I'm making a rather obvious mistake that others with more experience can pinpoint!
Any help would be greatly appreciated.
namespace Controllers
{
    [Route("/proxy")]
    public class ProxyController : Controller
    {
        private readonly IHttpClientFactory _factory;

        public ProxyController(IHttpClientFactory factory)
        {
            _factory = factory ?? throw new ArgumentNullException(nameof(factory));
        }

        [HttpGet]
        [Route("api")]
        public async Task ProxyApi(CancellationToken requestAborted)
        {
            // Build API specific URI
            var uri = new Uri("");

            // Get headers from request
            var headers = Request.Headers.ToDictionary(x => x.Key, y => y.Value);
            headers.Add(HeaderNames.Authorization, $"Bearer {await HttpContext.GetTokenAsync("access_token")}");

            // Build proxy request message. This is within a service
            var message = new HttpRequestMessage();
            foreach (var header in headers)
            {
                message.Headers.Add(header.Key, header.Value.ToArray());
            }
            message.RequestUri = uri;
            message.Headers.Host = uri.Authority;
            message.Method = new HttpMethod(Request.Method);

            requestAborted.ThrowIfCancellationRequested();

            // Generate client and issue request
            using (message)
            using (var client = _factory.CreateClient())
            // **Always hangs here when it does hang**
            using (var result = await client.SendAsync(message, requestAborted).ConfigureAwait(false))
            {
                // Apply data from the result onto the response - Again this is within a service
                Response.StatusCode = (int)result.StatusCode;
                foreach (var header in result.Headers)
                {
                    Response.Headers[header.Key] = header.Value.ToArray();
                }

                // SendAsync removes chunking from the response. This removes the header so it doesn't expect a chunked response.
                Response.Headers.Remove("transfer-encoding");

                requestAborted.ThrowIfCancellationRequested();

                using (var responseStream = await result.Content.ReadAsStreamAsync())
                {
                    // Copy the upstream body to the outgoing response
                    // (the original copied responseStream onto itself, which was a bug).
                    await responseStream.CopyToAsync(Response.Body, 81920);
                }
            }
        }
    }
}
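One variation worth trying (my suggestion, not part of the original code): pass HttpCompletionOption.ResponseHeadersRead so SendAsync completes as soon as the headers arrive instead of buffering the whole upstream body. A sketch of just the send-and-copy portion, assuming the same client, message, and requestAborted as above:
using (var result = await client.SendAsync(
    message, HttpCompletionOption.ResponseHeadersRead, requestAborted))
using (var responseStream = await result.Content.ReadAsStreamAsync())
{
    Response.StatusCode = (int)result.StatusCode;
    // Stream the upstream body straight through to the caller.
    await responseStream.CopyToAsync(Response.Body, 81920, requestAborted);
}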
EDIT
So I modified the code to remove the usings and return the proxied response directly as a string instead of streaming, and I'm still getting the same issues.
When running netstat I do see a lot of entries for the URL of the proxied API.
4 rows mention the IP of the API being proxied to, and probably another 20 rows mention the IP of the proxy site. Those numbers don't seem odd to me, but I don't have much experience using netstat (first time I've ever fired it up).
Also, I have left the proxy running for about 20 minutes and it is technically still alive. Responses are coming back, just taking a very long time between the proxied API returning data and the HttpClient resolving. However, it won't service any new requests; they just sit there hanging.
I'm writing a Telegram bot and I'm using the official bot API. I've got a webhook server that handles requests and sends a 200 OK response for every request.
Before the server stops, the webhook is detached so Telegram does not send updates anymore. However, whenever I turn the bot on and set the webhook URL again, Telegram starts flooding the webhook server with old updates.
Is there any way I can prevent this without requesting /getUpdates repeatedly until I reach the last update?
Here's a heavily simplified version of what my code looks like:
var http = require('http'),
    unirest = require('unirest'),
    token = '***';

// Attach the webhook
unirest.post('https://api.telegram.org/bot' + token + '/setWebhook')
    .field('url', 'https://example.com/api/update')
    .end();

process.on('exit', function() {
    // Detach the webhook
    unirest.post('https://api.telegram.org/bot' + token + '/setWebhook')
        .field('url', '')
        .end();
});

// Handle requests
var server = http.createServer(function(req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Thanks!');
});

server.listen(80);
Thanks in advance.
The best way is to use update_id, a specific number that increases with every new request (i.e. update). How do you implement it?
First off, let's start with the following anonymous class (using PHP7):
$lastUpdateId = new class()
{
    const FILE_PATH = "last-update-id.txt";

    private $value = 1;

    public function __construct()
    {
        $this->ensureFileExists();
        $this->value = filesize(self::FILE_PATH) == 0
            ? 0 : (int)(file_get_contents(self::FILE_PATH));
    }

    public function set(int $lastUpdateId)
    {
        $this->ensureFileExists();
        file_put_contents(self::FILE_PATH, $lastUpdateId);
        $this->value = $lastUpdateId;
    }

    public function get(): int
    {
        return $this->value;
    }

    public function isNewRequest(int $updateId): bool
    {
        return $updateId > $this->value;
    }

    private function ensureFileExists()
    {
        if (!file_exists(self::FILE_PATH)) {
            touch(self::FILE_PATH);
        }
    }
};
What the class does is clear: it tracks the last update_id via a plain file.
Note: The class is kept as short as possible, so it does not provide error checking. Use your own custom implementation (e.g. SplFileObject instead of the file_{get|put}_contents() functions) in production.
Now, there are two methods of getting updates: Long Polling xor WebHooks (check the Telegram bot API for more details on each method and all JSON properties). The above code (or similar) should be used in both cases.
Note: Currently, it is impossible to use both methods at the same time.
Long Polling Method (default)
This way, you send HTTPS requests to the Telegram bot API and get updates back as a JSON-formatted object in the response. So, the following can be done to fetch new updates (API, why using offset):
$botToken = "<token>";
$updates = json_decode(file_get_contents("https://api.telegram.org/bot{$botToken}/getUpdates?offset={$lastUpdateId->get()}"), true);
// Split updates from each other in $updates
// It is considered that one sample update is stored in $update
// See the section below
parseUpdate($update);
WebHook Method (preferred)
This requires support for the HTTPS POST method on your server, and is the best way of getting updates at the moment.
Initially, you must enable WebHooks for your bot, using the following request (more details):
https://api.telegram.org/bot<token>/setWebhook?url=<file>
Replace <token> with your bot token, and <file> with the address of the file that is going to accept new requests. Again, it must be HTTPS.
OK, the last step is creating your file at the specified URL:
// The update is sent
$update = $_POST;
// See the section below
parseUpdate($update);
From now on, all requests and updates for your bot will be sent directly to that file.
Implementation of parseUpdate()
Its implementation is totally up to you. However, to show how the class above can be used, here is a sample and intentionally short implementation:
function parseUpdate($update)
{
    // Bring the $lastUpdateId instance defined above into scope
    global $lastUpdateId;

    // Validate $update, first
    // Actually, you should have a validation class for it
    // Here, we suppose that: $update["update_id"] !== null
    if ($lastUpdateId->isNewRequest($update["update_id"])) {
        $lastUpdateId->set($update["update_id"]);
        // New request, go on
    } else {
        // Old request (or possible file error)
        // You may throw exceptions here
    }
}
Enjoy!
Edit: Thanks to @Amir for suggesting edits that made this answer more complete and useful.
When your server starts up you can record the timestamp and then compare incoming message date values against it. If the date is >= the timestamp when you started, the message is OK to be processed.
I am not sure if there is a way to tell Telegram you are only interested in new updates; their retry mechanism is a feature so that messages aren't missed, even if your bot is offline.
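A minimal sketch of that idea (illustrative only; the original bot is in Node.js, and this C# version assumes the message dates arrive as Unix timestamps, which is how Telegram sends them):
// Record startup time once; treat any update dated before it as stale.
static readonly long StartedAtUnix = DateTimeOffset.UtcNow.ToUnixTimeSeconds();

static bool IsFresh(long messageUnixDate) => messageUnixDate >= StartedAtUnix;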
In webhook mode, the Telegram servers resend an update every minute until they receive an OK response from the webhook program.
So I recommend these steps:
Check the webhook program whose address you specified as the url parameter of the setWebhook method: call its address in a browser. It will not produce any output to view, but it shows that there is probably no error in your program.
Include a command in your program that produces a '200 OK' status header, to ensure the program sends this header to the Telegram server.
I had the same issue, so I tried to reset the default webhook with
https://api.telegram.org/bot[mybotuniqueID]/setWebhook?url=
After that, I verified that the current getUpdates query still returned the same old updates, even after I sent new requests through the Telegram bot chat:
https://api.telegram.org/bot[mybotuniqueID]/getUpdates
When I set up my webhook again, the webhook read the same old updates. Maybe the getUpdates method is not refreshing the JSON content.
NOTE:
In my case, it was working fine until I decided to change the /setprivacy bot settings from BotFather.
I've run into a problem with the request/respond pattern of EasyNetQ while using it on our server (Windows Server 2008). I'm not able to reproduce it locally at the moment.
The setup is 2 Windows services (running as console applications for testing) which are connected through the request/respond pattern in EasyNetQ. This had been working as expected until recently, when on the server the request side stopped "consuming" the responses until after the request times out.
I have included 2 links to pastebin which contain the console logging of EasyNetQ and will hopefully make my problem a bit clearer.
RequestSide
RespondSide
Besides that, my request code looks like this:
var request = new foobar();
var response = _bus.Request<foobar, foobar2>(request);
and on the respond side:
var response = new response();
_bus.Respond<foobar, foobar2>(request =>
{
    try
    {
        ....
        return response;
    }
    catch (Exception e)
    {
        ....
        return response;
    }
});
As I've said, the request side sends the request as expected and the respond side consumes/catches it. This works as it should, but when the respond side is done processing and responds (which it does; the messages can be seen in the RabbitMQ management UI), the request side doesn't consume/catch the response until after the request has timed out (the default timeout is 10s; I tried setting it to 60s as well, which makes no difference). This is also evident in the logs linked above, as you'll see on the RequestSide the 5 or so messages received from the response queue which had previously timed out.
I've tried using RespondAsync in case the processing was taking too long and messing something up; that didn't help. I tried using both RespondAsync and RequestAsync, which just messed everything up even more (I was probably doing something wrong with the request :)).
I might be missing something, but I'm not sure what to try from here.
EDIT: I noticed I messed something up, and have added more context below:
The IBus used for the request/response is created and injected with Ninject:
class FooModule : NinjectModule
{
    public override void Load()
    {
        Bind<IBus>().ToMethod(ctx => RabbitHutch.CreateBus("host=localhost", x => x.Register<IEasyNetQLogger>(_ => logger))).InSingletonScope();
    }
}
And it's all tied together by the service being constructed using Topshelf with Ninject like so:
static void Main(string[] args)
{
    HostFactory.Run(x =>
    {
        x.UseNinject(new FooModule());

        x.Service<FooService>(s =>
        {
            s.ConstructUsingNinject();
            s.WhenStarted((service, control) => service.Start(control));
            s.WhenStopped((service, control) => service.Stop(control));
        });

        x.RunAsLocalSystem();
    });
}
The Topshelf setup has all been tested pretty thoroughly and it works as intended, and should not really be relevant for the request/respond problem, but I thought I would provide a bit more context.
I had this same issue. My problem was that I set the timeout only on the respond side but not on the request side; after I set the timeout on both sides it worked fine.
My connection string, for example:
host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword
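For illustration, a minimal sketch (my addition; the host and credentials are placeholders) of creating the bus with that connection string, which both the request side and the respond side would use:
// Both services must include the timeout setting for it to take effect end to end.
var bus = RabbitHutch.CreateBus(
    "host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword");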
I'm working with .Net 4.5 and my services are written using Async and returning Task<>.
When I call the service from the client I use await.
When the service returns after the await, it seems that the channel is blocked if the client code then calls another operation which is not async.
Sample code:
value = await Service.InitAsync();
Service.SyncOperation(); // Further callbacks/return values from Server are blocked and this statement never returns
(The whole point of async/await as I see it is to allow me to write the full flow in one method and let the framework deal with threads and callbacks)
I was able to solve this by wrapping the service call in a helper that awaits it and then awaits Task.Yield:
async Task<T> CallAsync<T>(Func<Task<T>> func)
{
    // Invoke the service call and await its result
    T result = await func();
    // Yield so the rest of the caller continues on the client context
    await Task.Yield();
    return result;
}
and in the client:
value = await CallAsync(Service.InitAsync);
Service.SyncOperation(); // Now the server is not blocked, since Yield switched back to the client context
My question is, is there any other way to do this without wrapping the Service call in the Client?
Maybe some attribute or property in the Service?
Thanks
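One commonly suggested alternative, sketched here under the assumption that the block comes from resuming on the captured context (not a confirmed fix for this exact WCF scenario): await the service call with ConfigureAwait(false), so the continuation runs on a thread-pool thread instead.
value = await Service.InitAsync().ConfigureAwait(false);
// The continuation now resumes on a thread-pool thread rather than the
// captured context, so the synchronous call below is not blocked by it.
Service.SyncOperation();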