Is it possible for workers to consume based on topic?

I understand that you can send messages directly to a queue using channel.sendToQueue, and this creates a tasks-and-workers situation, where only one consumer will handle each task.
I also understand that you can use channel.publish with a topic-based exchange, and messages will be routed to queues based on the routing key. To my understanding, though, this will always broadcast to all subscribers on any matching queues.
I would essentially like to use the topic-based exchange, but only have one consumer handle each task. I've been through the documentation, and I don't see a way to do this.
My use-case:
I have instances of a microservice set up in multiple locations. There might be two in California, three in London, one in Singapore, etc. When a task is created, the only thing that matters is that it's handled by one of the instances in a given location.
Of course, I can create hundreds of queues named "usa-90210", "uk-ec1a", etc. It just seems like using topics would be much cleaner. It would also be more flexible, given the ability to use wildcards.
If this isn't a feature of RabbitMQ, I'm open to other thoughts or ideas as well.
UPDATE
As per istepaniuk's suggestion, I've tried creating two workers, each binding their own queue to the exchange:
const connectionA = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
const channelA = await connectionA.createChannel();
await channelA.assertExchange('test_exchange', 'topic', { durable: false });
await channelA.assertQueue('test_queue_a', { exclusive: true });
await channelA.bindQueue('test_queue_a', 'test_exchange', 'usa.*');
await channelA.consume('test_queue_a', () => { console.log('worker a'); }, { noAck: true });

const connectionB = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
const channelB = await connectionB.createChannel();
await channelB.assertExchange('test_exchange', 'topic', { durable: false });
await channelB.assertQueue('test_queue_b', { exclusive: true });
await channelB.bindQueue('test_queue_b', 'test_exchange', 'usa.*');
await channelB.consume('test_queue_b', () => { console.log('worker b'); }, { noAck: true });

const pubConnection = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
const pubChannel = await pubConnection.createChannel();
await pubChannel.assertExchange('test_exchange', 'topic', { durable: false });
pubChannel.publish('test_exchange', 'usa.90210', Buffer.from(''));
Unfortunately, both consumers still receive the message:
worker a
worker b

Yes.
I would essentially like to use the topic-based exchange, but only have one consumer handle each task. I've been through the documentation, and I don't see a way to do this.
Use a topic exchange and have your consumers declare and bind their own queues. What you describe is a very common scenario; it is outlined in tutorial 5, "Topics".
Additionally, you can have multiple consumers share a queue (just don't declare it exclusive). This is described in tutorial 2, "Workers".
Multiple consumer instances can declare the same queue and bindings; the operation is idempotent. Using durable queues (as opposed to exclusive ones) also means that messages will queue up if all your consumers disappear or the network fails.
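For illustration, here is a minimal sketch of that shared-queue approach, adapted from the code in the question (the queue name usa_tasks is an assumption for this sketch; per the durability point above, you would likely declare both the exchange and the queue durable in production). Because both workers consume from the same queue, each message matching usa.* is delivered to only one of them:
const amqplib = require('amqplib');

async function startWorker(name) {
  const connection = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
  const channel = await connection.createChannel();
  await channel.assertExchange('test_exchange', 'topic', { durable: false });
  // Every worker declares the same queue; the declaration is idempotent.
  await channel.assertQueue('usa_tasks', { durable: false });
  await channel.bindQueue('usa_tasks', 'test_exchange', 'usa.*');
  await channel.consume('usa_tasks', (msg) => {
    console.log(name, msg.fields.routingKey);
    channel.ack(msg);
  });
}

startWorker('worker a');
startWorker('worker b');
Publishing to test_exchange with routing key usa.90210, as in the update above, then logs from only one of the two workers: RabbitMQ round-robins deliveries among consumers sharing a queue.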

Related

Sending message to queue in SingleActiveConsumer mode in MassTransit

I have registered a receive endpoint in SingleActiveConsumer mode. However, I can't find a way to send a message directly to the queue using a sendEndpoint. I receive the following error:
The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text='PRECONDITION_FAILED - inequivalent arg 'x-single-active-consumer' for queue 'test' in vhost '/': received none but current is the value 'true' of type 'bool'',
I tried setting the header "x-single-active-consumer" = true using the bus configurator:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host("localhost", "/", h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.ConfigureSend(a => a.UseSendExecute(c => c.Headers.Set("x-single-active-consumer", true)));
});
and directly on sendEndpoint:
await sendEndpoint.Send(msg, context => {
    context.Headers.Set("x-single-active-consumer", true);
});
The error occurs because x-single-active-consumer is a queue argument that must be set when the queue is declared, not a header on individual messages; the send endpoint tries to declare the queue without that argument, which conflicts with the existing queue's declaration.
If you want to send directly to a receive endpoint in MassTransit, you can use the short address exchange:test instead, which sends to the exchange without trying to create the queue and bind it to the exchange with the same name. That way, you decouple the queue configuration from the message producer.
Or you could just use Publish, and let the exchange bindings route the message to the receive endpoint's queue.

Calculate packet send/receive time

I am using Vue.js for a WebRTC client. My system has multiple servers, so I need to pick one and connect. I get the server list from my web service. Everything is OK up to this point; I want the user to connect to the best server, so I decided to implement a custom ping (I think this is called RTT).
To sum up, my aim is to send a UDP packet to a server (sendTime as a long); the server receives this packet and sends me a packet back (receiveTime as a long). That server's ping is receiveTime - sendTime.
I will do this for every server and choose the best one. Is it possible to send and receive UDP packets, or does anyone have a better idea?
In my opinion, the simplest way would be to make a GET request to every URL, then run a Promise.race over the requests; whichever returns fastest is the server you would use.
const urls = ['https://www.google.com/', 'https://www.facebook.com/', 'https://www.twitter.com/', 'https://www.instagram.com/'];
const serverPromises = [];

urls.forEach((url) => {
  serverPromises.push(
    new Promise((resolve, reject) => {
      fetch(url, {
        mode: 'no-cors'
      }).then((response) => {
        // Note: with mode 'no-cors' the response is opaque (status 0),
        // so this status check only matters for same-origin/CORS requests.
        if ([400, 500, 404].includes(response.status)) {
          return reject(null);
        }
        return resolve(url);
      }).catch(() => reject(null)); // network failures should also reject
    })
  );
});

Promise.race(serverPromises).then((url) => {
  if (url) {
    console.log('URL to use: ', url);
  }
});
Check it out, see if it suits your needs!
To me this seems the easiest and quickest way, without any calculations. Plus, you get the result as soon as the first request resolves, so you don't have to wait for every request to finish.
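If you need an actual per-server latency number rather than just the fastest responder, a small variation of the same idea times each request with performance.now(). This is a sketch, not part of the original answer; it reuses the placeholder URL list above, and measuring on a single clock also avoids comparing timestamps from two different machines:
const urls = ['https://www.google.com/', 'https://www.facebook.com/', 'https://www.twitter.com/', 'https://www.instagram.com/'];

async function measureLatency(url) {
  const start = performance.now();
  try {
    // no-cors gives an opaque response, but the round-trip timing is still usable
    await fetch(url, { mode: 'no-cors', cache: 'no-store' });
    return { url, ms: performance.now() - start };
  } catch (e) {
    return { url, ms: Infinity }; // unreachable servers sort last
  }
}

Promise.all(urls.map(measureLatency)).then((results) => {
  results.sort((a, b) => a.ms - b.ms);
  console.log('Best server:', results[0].url, '(' + Math.round(results[0].ms) + ' ms)');
});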

RabbitMQ queue gets blank randomly with around 5K messages remaining

I have a queue to which a lot of messages are published (~10K). Connected to this queue are multiple consumers running the following code in CodeIgniter, using the php-amqplib library:
public function processQueue()
{
    // Make connection
    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    // Make channel
    $channel = $connection->channel();
    // Declare queue
    $channel->queue_declare(QUEUE_NAME, false, false, false, false);
    // PHP callable
    $callback = function ($msg) {
        // DO MESSAGE PROCESSING HERE
        $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
    };
    // NB: the fourth argument here is $no_ack = true, so the broker
    // does not wait for the manual basic_ack() in the callback
    $channel->basic_consume(AGENTS_QUEUE_PROCESSING, '', false, true, false, false, $callback);
    // Wait for messages while the channel has registered consumers
    while (count($channel->callbacks)) {
        $channel->wait();
    }
    // Close channel and connection
    $channel->close();
    $connection->close();
}
The queue fills up, and the messages are simultaneously consumed by multiple such consumers. I've observed that with some 5-6K messages remaining (i.e., after around 4-5K messages have been consumed), the queue suddenly becomes empty, with the consumers idling and waiting for more messages. At the same point there is a sudden drop in the total number of messages shown on the RabbitMQ management web panel.
I've tried declaring the queue with the durable parameter, but the problem remains the same. What could be the issue, and how can it be resolved?

How to read only one device's messages from Azure IoT

I have an Azure IoT solution where data from two devices goes to the same IoT hub. From my computer, I need to read the messages from only one of the devices. I implemented ReadDeviceToCloudMessages.js from https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-node-node-getstarted:
var client = EventHubClient.fromConnectionString(connectionString);
client.open()
  .then(client.getPartitionIds.bind(client))
  .then(function (partitionIds) {
    return partitionIds.map(function (partitionId) {
      return client.createReceiver('todevice', partitionId, { 'startAfterTime': Date.now() }).then(function (receiver) {
        console.log('Created partition receiver: ' + partitionId);
        receiver.on('errorReceived', printError);
        receiver.on('message', printMessage);
      });
    });
  })
  .catch(printError);
But I am getting all the messages in the IoT hub. How do I get messages from only one device?
You can route the expected device's messages to the built-in endpoint, events, with a query that matches only that device. Then the code above will receive only the selected device's messages.
Create the route:
Turn off "Device messages which do not match any rules will be written to the 'Events (messages/events)' endpoint." and make sure the route is enabled.

How is everyone going about implementing scheduled jobs / cloud jobs on parse-server?

According to the parse-server migration guide, we could use something like Kue and Kue-UI to emulate the parse.com scheduled jobs functionality.
I haven't implemented Kue or Kue-UI, but looking at the guides, it doesn't look like they provide anywhere near the same level of functionality as the existing parse.com scheduled jobs. Is this observation correct? Has someone implemented this? Is it true that jobs have to be scheduled through Kue in JavaScript, and that Kue-UI only provides a summary of current job status, with no way to add new schedules through it?
Has anyone tried to achieve the same outcome with something like Jenkins? So this is what I had in mind:
each job would still be defined in cloud code: Parse.Cloud.job("job01", function(request, response) {});
modify parse-server slightly to expose jobs at a URL similar to the existing cloud functions, e.g. /parse/jobs/job01 (this might exist in parse-server soon: github.com/ParsePlatform/parse-server/pull/2560)
create a new Jenkins job that curls that URL
define a cron-like schedule for that Jenkins job from within the Jenkins web UI
I can see the benefits being:
little to no coding
setting up Jenkins sounds like much less work than setting up Kue, Redis and Kue-UI
existing cloud job definitions stay exactly the same
schedule and manually trigger jobs through the Jenkins web UI
The only thing that the current parse.com scheduled jobs / cloud jobs can do that a Jenkins-based solution can't is let you pick the job name for a new schedule from a drop-down list.
Am I missing something? How is everyone else going about this? Thanks.
I ended up doing this with 'node-schedule'. Not sure if it is the best option, but it is working fine for me.
index.js
var schedule = require('node-schedule');
var request = require('request');

// Run every 15 minutes
schedule.scheduleJob('*/15 * * * *', function () {
  var options = {
    url: serverUrl + '/functions/{function name}',
    headers: {
      'X-Parse-Application-Id': appID,
      'X-Parse-Master-Key': masterKey,
      'Content-Type': 'application/json'
    }
  };
  request.post(options, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      console.log(body);
    }
  });
});
main.js
Parse.Cloud.define('{function name}', function (req, res) {
  // Check for the master key to prevent ordinary users from calling this
  if (req.master === true) {
    // Do operations here
  } else {
    res.error('Need Master Key');
  }
});
I use kue for this purpose. I've described the approach in my article. In short, this function:
Parse.Cloud.job("sendReport", function(request, response) {
Parse.Cloud.httpRequest({
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
url: "https://example.com/url/", // Webhook url
body: "body goes here",
success: function(httpResponse) {
console.log("Successfully POSTed to the webhook");
},
error: function(httpResponse) {
console.error("Couldn't POST to webhook: " + httpResponse);
}
});
});
becomes this:
// Create a kue instance and a queue.
var kue = require('kue-scheduler');
var Queue = kue.createQueue();
var jobName = "sendReport";

// Create a job instance in the queue.
var job = Queue
  .createJob(jobName)
  // Priority can be 'low', 'normal', 'medium', 'high' and 'critical'
  .priority('normal')
  // We don't want to keep the job in memory after it's completed.
  .removeOnComplete(true);

// Schedule it to run every 60 minutes. Function every(interval, job) accepts
// the interval in either a human-interval String format or a cron String format.
Queue.every('60 minutes', job);

// Processing a scheduled job.
Queue.process(jobName, sendReport);

// The body of the job goes here.
function sendReport(job, done) {
  Parse.Cloud.httpRequest({
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    url: "https://example.com/url/", // Webhook url
    body: "body goes here"
  }).then(function (httpResponse) {
    console.log("Successfully POSTed to the webhook");
    // Don't forget to run done() when the job is done.
    done();
  }, function (httpResponse) {
    var errorMessage = "Couldn't POST to webhook: " + httpResponse;
    console.error(errorMessage);
    // Pass an Error object to done() to mark this job as failed.
    done(new Error(errorMessage));
  });
}
It works fine, though I noticed that sometimes kue-scheduler fires the event more often than needed. See this issue for more info: https://github.com/lykmapipo/kue-scheduler/issues/45
If you are on AWS, this may be an option:
Create an AWS CloudWatch Event Rule that is triggered at specific intervals and calls a Lambda function. The Event Rule can pass parameters to the Lambda function.
Create a simple Lambda function that calls a Cloud Code function / job. If you receive the Cloud Code function name and other parameters from the Event Rule, you only need one generic Lambda function for any Cloud Code call (see the sketch below).
This has several advantages, as Event Rules are part of the AWS infrastructure and can be easily integrated with other AWS services. For example, you can set up intelligent queueing of Event Rule calls so that if the previous call has not completed yet, you discard the next call in the queue, overflow to another queue, notify an operator, etc.
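Here is a minimal sketch of such a generic Lambda handler, under some assumptions not stated in the original answer: a Node.js runtime, PARSE_SERVER_URL / APP_ID / MASTER_KEY environment variables, and an Event Rule that passes { "functionName": "...", "params": { ... } } as the event payload:
// Hypothetical generic Lambda: calls the Parse Cloud function named in the event.
const https = require('https');

exports.handler = (event) => {
  const body = JSON.stringify(event.params || {});
  const options = {
    method: 'POST',
    headers: {
      'X-Parse-Application-Id': process.env.APP_ID,
      'X-Parse-Master-Key': process.env.MASTER_KEY,
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body)
    }
  };
  // Returning a Promise lets Lambda wait for the HTTP call to finish.
  return new Promise((resolve, reject) => {
    const req = https.request(
      process.env.PARSE_SERVER_URL + '/functions/' + event.functionName,
      options,
      (res) => {
        let data = '';
        res.on('data', (chunk) => { data += chunk; });
        res.on('end', () => resolve(data));
      }
    );
    req.on('error', reject);
    req.write(body);
    req.end();
  });
};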
You can use the parse-server-scheduler npm module.
It does not require any external server and simply allows you to set up scheduling in parse-dashboard.