How to create a RabbitMQ consumer dynamically? - rabbitmq

Is it possible to launch a RabbitMQ consumer dynamically? I mean, connect a consumer to an existing queue after a specific time?
Or must all consumers be created in advance?
In my case queues may be filling with messages while no consumers exist. Can I connect consumers some time later?

Yes, you can: a consumer can be attached whenever you like, just as a channel can be created at any point after the connection is established.
Example for Node.js (amqplib):
const amqplib = require('amqplib'); // assumes the amqplib package is installed

const conn = await amqplib.connect(`${rabbitmq.url}?heartbeat=300`);
conn.on('error', function (err) {
  api.log.error('AMQP:Error:', err);
});
conn.on('close', () => {
  api.log.info('AMQP:Closed');
});
const ch = await conn.createChannel();
await ch.assertQueue(queue_name, queue_options); // creates queue_name if it does not already exist
await ch.consume(queue_name, callback); // each message from the queue is passed to callback
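To make the "attach later" part explicit, here is a sketch that wraps the steps above in a function you can call at any point after the connection exists, for example from a timer. The function name, parameters, and the `durable` option are my own illustration, not part of amqplib; the only assumption is an object exposing amqplib's `createChannel`/`assertQueue`/`consume` surface:

```javascript
// Sketch: attach a consumer to an existing queue at an arbitrary later time.
// `conn` is assumed to expose amqplib's createChannel/assertQueue/consume API;
// attachConsumerLater and its parameters are illustrative names, not amqplib's.
async function attachConsumerLater(conn, queueName, handler, delayMs) {
  await new Promise(resolve => setTimeout(resolve, delayMs)); // wait before attaching
  const ch = await conn.createChannel();              // channels can be opened at any time
  await ch.assertQueue(queueName, { durable: true }); // idempotent: creates only if missing
  await ch.consume(queueName, msg => {
    if (msg !== null) {
      handler(msg.content.toString());
      ch.ack(msg); // acknowledge only after successful handling
    }
  });
  return ch;
}
```

Messages published while no consumer is attached simply wait in the queue (subject to any TTL or length limits) and are delivered once `consume` is called.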

Related

Is it possible to speed up DataChannel processing using web workers?

I have this code from a well-known AI chatbot.
// main thread:
const worker = new Worker('worker.js');
worker.onmessage = event => {
  console.log(`Received message from worker: ${event.data}`);
};
const peerConnection = new RTCPeerConnection();
const dataChannel = peerConnection.createDataChannel('myDataChannel');
dataChannel.onmessage = event => {
  console.log(`Received message from data channel: ${event.data}`);
};
worker.postMessage({ type: 'dataChannel', dataChannel });

// web worker:
self.onmessage = event => {
  if (event.data.type === 'dataChannel') {
    const dataChannel = event.data.dataChannel;
    dataChannel.onopen = () => {
      dataChannel.send('Hello, main thread!');
    };
  }
};
Apart from signaling and other connection-related concerns, I was wondering whether this code might work, in the sense that sending or receiving messages would not be blocked by the main thread. At first glance I can't really see what the real improvement would be, since the endpoint is the main thread and messages will be handled there synchronously, but maybe there is one?!

MassTransit consumed on single server

I am struggling to configure MassTransit with RabbitMQ to publish messages and subscribe to the queue. I think it's a simple configuration change that needs to be made. I have multiple services connected, but messages get consumed alternately and are never delivered to or consumed on the other server.
I would like each message to be delivered to every connection/subscriber.
I am running this on ASP.net core 6 on the latest version of MassTransit.
services.TryAddSingleton(KebabCaseEndpointNameFormatter.Instance);
services.AddMassTransit(cfg =>
{
    cfg.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
    {
        c.Host(connectionString, h =>
        {
            h.Heartbeat(10);
        });
        c.ConfigureEndpoints(
            context,
            KebabCaseEndpointNameFormatter.Instance);
        c.Publish<VideoManagerResultEvent>(x =>
        {
            x.BindQueue("result", "video-msgs");
            x.ExchangeType = ExchangeType.Fanout;
        });
        c.ReceiveEndpoint("result:video-msgs", e =>
        {
            e.Consumer<VideoManagerResultConsumer>();
        });
    }));
    // Request clients / DTO
    RegisterRequestClients(cfg);
});
services.AddMassTransitHostedService();
}
private static void RegisterRequestClients(IServiceCollectionBusConfigurator cfg)
{
    cfg.AddRequestClient<VideoManagerResultEvent>();
}

// consumer
public class VideoManagerResultConsumer : BaseConsumer<VideoManagerResultEvent>
{
    public override async Task Consume(ConsumeContext<VideoManagerResultEvent> context)
    {
        Logger.Debug("Consumed video event");
        await context.RespondAsync(new GenericResponse());
    }
}
I call "SendMessage()" to publish a message to RabbitMQ.
// constructor and DI
public EventBusV2(IPublishEndpoint publishEndpoint)
{
    _publishEndpoint = publishEndpoint;
}

public async Task SendMessage()
{
    await _publishEndpoint.Publish<VideoManagerResultEvent>(msg);
}
To add to the original question, the diagrams show what is currently happening.
Diagram 1 - the request/message gets published to the event bus but is delivered to only one instance of the consumer.
Diagram 2 - the required result: the message gets published to both instances of the consumer.
The RabbitMQ queue
The only configuration you should need is the following:
services.AddMassTransit(cfg =>
{
    cfg.AddConsumer<VideoManagerResultConsumer>()
        .Endpoint(e => e.InstanceId = "Web1"); // or 2, etc.
    cfg.SetKebabCaseEndpointNameFormatter();
    cfg.UsingRabbitMq((context, c) =>
    {
        c.Host(connectionString, h =>
        {
            h.Heartbeat(10);
        });
        c.ConfigureEndpoints(context);
    });
    // Request clients / DTO
    RegisterRequestClients(cfg);
});
services.AddMassTransitHostedService();
Any published messages of type VideoManagerResultEvent will end up on the queue video-manager-result based upon the consumer name.

Unable to configure MassTransit send topology correlationId

I have two APIs: one that sends a message and another with a consumer saga that consumes it. On the send side, my message model does not implement the CorrelatedBy<> interface, but it has a field I would like to use as a correlationId. As I understand from the documentation, that should be configured as follows.
GlobalTopology.Send.UseCorrelationId<SubmitOrder>(x => x.OrderId);
However, after setting that, I do not see a correlationId on my message once it is consumed by my consumer saga; it is an empty GUID.
I have also tried another approach outlined in the documentation, but that did not produce a correlationId in my saga either.
Bus.Factory.CreateUsingRabbitMq(..., cfg =>
{
    cfg.Send<OrderSubmitted>(x =>
    {
        x.UseCorrelationId(context => context.Message.OrderId);
    });
});
Message Interface
public interface MyEvent {
    Guid MyId { get; }
    DateTime Timestamp { get; }
}
Registration
builder.AddMassTransit(x => {
    x.Builder.RegisterBuildCallback(bc => {
        var bus = bc.Resolve<IBusControl>();
        bus.Start();
    });
    x.AddConsumers(Assembly.GetExecutingAssembly());
    x.UsingRabbitMq((context, cfg) => {
        cfg.Send<MyEvent>(s => {
            s.UseCorrelationId(ctx => ctx.MyId);
        });
        cfg.Host(new Uri(Configuration["RabbitMqHost"]), host => {
            host.Username(Configuration["RabbitMqUsername"]);
            host.Password(Configuration["RabbitMqPassword"]);
        });
        cfg.ReceiveEndpoint(Configuration["RabbitMqQueueName"], ec => {
            ec.ConfigureConsumers(context);
        });
        cfg.ConfigureEndpoints(context);
    });
});
Where am I going wrong here? I would expect the value of MyId to be assigned to CorrelationId on send and show up in the consume context of my saga on the other end.
The workaround here is simple, follow convention and name my field CorrelationId, and everything works as expected.
Consumer sagas only correlate using the CorrelatedBy<Guid> interface, which must be on the message contract. They do not use the CorrelationId property on the ConsumeContext.

NestJS RabbitMQ RPC nack doesn't reject the promise

With NestJS RabbitMQ I use RPC from one server to another. What I basically want is for await this.amqpConnection.request to reject the promise when the consumer throws an error. As I understand from this, this and this documentation, when you NACK (negative ack) the message, it should put the RPC response into a corresponding queue with error information that would be passed back to the client. Here's a code example:
Producer
import { AmqpConnection } from '@golevelup/nestjs-rabbitmq';

let amqpConnection: AmqpConnection;

async sendMessage(payload: any) {
  try {
    const result = await this.amqpConnection.request({
      exchange: 'user',
      routingKey: 'test',
      queue: 'test',
      payload,
    });
    console.log(result);
  } catch (e) {
    console.error(e); // I expect this flow !!!
  }
}
Consumer:
@RabbitRPC({
  exchange: 'user',
  routingKey: 'test',
  queue: 'test',
  errorBehavior: MessageHandlerErrorBehavior.NACK,
})
public async rpcHandler(payload: any): Promise<any> {
  throw new Error('Error text that should be proxied to console.error');
}
But the message doesn't even return to the client; the client gets rejected after a timeout rather than because of an error during consumer processing. If my understanding is wrong, I would like to know whether there is a built-in mechanism in NestJS to reject the message without manually creating response types for errors and handling them on the client.
You should throw exactly an RpcException from @nestjs/microservices, and only with a certain argument:
throw new RpcException({
  code: 42, // some numeric code
  message: 'wait. oh sh...', // some string value
});
Also, if you are using a custom exception filter, it should extend BaseRpcExceptionFilter from @nestjs/microservices. Call the getType() method on the ArgumentsHost, and if its result equals 'rpc', use the super.catch method.
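To see why the catch branch only fires when the error actually travels back in the reply, here is a broker-free sketch of the RPC round trip. It is my own simplification, not the @golevelup wire format, and all names in it are illustrative: the point is that the handler's error must be serialized into the reply message, and the requesting side must inspect that reply and reject its promise.

```javascript
// Broker-free model of an RPC round trip with error propagation.
// In the real setup the reply travels over a reply queue; here it is
// returned directly. All names are illustrative.
async function rpcRoundTrip(handler, payload) {
  // Consumer side: run the handler and serialize success or failure.
  let reply;
  try {
    reply = { ok: true, data: await handler(payload) };
  } catch (err) {
    reply = { ok: false, error: { message: err.message } };
  }
  // Producer side: an error reply must reject the awaited promise;
  // otherwise the caller can only ever fail by timeout.
  if (!reply.ok) throw new Error(reply.error.message);
  return reply.data;
}
```

With a plain NACK no reply is produced at all, which is why the client times out instead of hitting its catch block.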

Node.js + socket.io + node-amqp and queue bindings when "re"connecting through socket.io

I have one scenario which is very close to this sample:
One main screen:
this screen (client side) will connect to the socket.io server through server:9090/scope (io.connect("http://server:9090/scope")) and will send one event "userBindOk" (socket.emit("userBindOk", message)) to the socket.io server;
the server receives the connection and the "userBindOk" event. At this moment, the server should take the active connection to the RabbitMQ server and bind the queue to the user that has just connected to the application through socket.io. Sample:
socket.on("connection", function (client) {
  // client.id is 1234
  // bind rabbitmq exchange and queue, then:
  queue.subscribe(/* receive callback */);
});
So far, no problem - I can send/receive messages thru socket.io without problems.
BUT, if I refresh the page, all those steps are done again. As a consequence, the queue binding happens again, this time tied to another socket.io session. This means that if I send a message related to the first socket.io session (before the page refresh), that binding should (I think) receive the message and deliver it to an invalid socket.io client (page refresh = new client.id in the socket.io context). I can prove this behaviour: every time I refresh the page I need to send x times more messages. For instance: when I connect for the first time, 1 message means one screen update; after refreshing the page I need to send 2 messages to the queue, and only the second one is received by the "actual" socket.io client session. This scales with the number of refreshes (20 page refreshes = 20 messages must be sent to the queue before the "last" server-side socket.io client delivers one to the browser to render).
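The "x refreshes = x messages" symptom matches round-robin delivery: RabbitMQ hands each message on a queue to exactly one of its subscribed consumers, so stale subscriptions left over from earlier page loads still take their turns. A broker-free sketch of that dispatch rule (all names here are illustrative, not a RabbitMQ API):

```javascript
// Toy model of round-robin delivery on a single queue: each published
// message goes to exactly ONE subscriber, cycling through them in order.
function roundRobinQueue() {
  const consumers = [];
  let next = 0;
  return {
    subscribe(cb) { consumers.push(cb); },
    publish(msg) {
      if (consumers.length === 0) return; // nothing to deliver to yet
      consumers[next % consumers.length](msg); // one consumer per message
      next += 1;
    },
  };
}
```

If a stale callback from a previous page load is still subscribed, the first publish goes to it and only the second reaches the live client, which is exactly the behaviour described above.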
The solutions I believe are:
Find a way to "unbind" the queue when disconnecting from the socket.io server - I didn't see this option in the node-amqp API yet (waiting for it :D)
Find a way to reconnect the socket.io client using the same client.id. This way I can identify the returning client and apply some logic to cache the socket.
Any ideas? I tried to be very clear... but, as you know, it's not so easy to expose your problem when trying to clarify something that is very specific to some context...
Thanks
I solved it like this:
I used to declare the RabbitMQ queue as durable=true, autoDelete=false, exclusive=false, and in my app there was 1 queue per user and 1 exchange (type=direct) with a routing_key equal to the queue name. My app also used the queue for clients other than browsers, such as the Android and iPhone apps as a push fallback, so I used to create 1 queue for each user.
The solution to this problem was to change my RabbitMQ queue and exchange declarations. Now I declare the per-user exchange as fanout with autoDelete=true, and the user has N queues with durable=true, autoDelete=true, exclusive=true (number of queues = number of clients), all bound to the user exchange (multicast).
NOTE: my app is written in Django, and I use node + socket.io + amqp to communicate with the browser over WebSockets, so I use node-restler to query my app's API for the user-queue info.
That's the RabbitMQ side. For the node + amqp + socket.io side I did this:
Server side: on connect, declare the user exchange as fanout, autoDelete, durable; then declare the queue as durable, autoDelete and exclusive; then queue.bind the queue to the user exchange, and finally queue.subscribe. On socket disconnect the queue is destroyed, so there are as many queues as clients connected to the app. This solves the refresh problem and also lets a user have more than one window/tab of the app open:
/*
 * Uncaught exception handler
 */
process.on('uncaughtException', function (err) {
  sys.p('Caught exception: ' + err);
  global.connection.end();
});

/*
 * Require libraries
 */
global.sys = require('sys');
global.amqp = require('amqp');
var rest = require('restler');
var io = require('socket.io').listen(8080);

/*
 * Module global variables
 */
global.amqpReady = 0;

/*
 * RabbitMQ connection
 */
global.connection = global.amqp.createConnection({
  host: host,
  login: adminuser,
  password: adminpassword,
  vhost: vhost
});

global.connection.addListener('ready',
  function () {
    sys.p("RabbitMQ connection established");
    global.amqpReady = 1;
  }
);
/*
 * Web-Socket declaration
 */
io.sockets.on('connection', function (socket) {
  socket.on('message', function (data) {
    sys.p(data);
    var message;
    try {
      message = JSON.parse(data);
    } catch (error) {
      socket.emit("message", JSON.stringify({"error": "invalid_params", "code": 400}));
      message = {};
    }
    if (message.token != undefined) {
      rest.get("http://dev.kinkajougames.com/api/push",
        {headers:
          {
            "x-geochat-auth-token": message.token
          }
        }).on('complete',
        function (data) {
          a = data;
        }).on('success',
        function (data) {
          sys.p(data);
          try {
            sys.p("---- creating exchange");
            socket.exchange = global.connection.exchange(data.data.bind, {type: 'fanout', durable: true, autoDelete: true});
            sys.p("---- declaring queue");
            socket.q = global.connection.queue(data.data.queue, {durable: true, autoDelete: true, exclusive: false},
              function () {
                sys.p("---- binding queue to exchange");
                socket.q.bind(socket.exchange, "*");
                sys.p("---- subscribing to queue");
                socket.q.subscribe(function (message) {
                  socket.emit("message", message.data.toString());
                });
              }
            );
          } catch (err) {
            sys.p("Impossible to connect to the RabbitMQ server");
          }
        }).on('error', function (data) {
          a = {
            data: data,
          };
        }).on('400', function () {
          socket.emit("message", JSON.stringify({"error": "connection_error", "code": 400}));
        }).on('401', function () {
          socket.emit("message", JSON.stringify({"error": "invalid_token", "code": 401}));
        });
    }
    else {
      socket.emit("message", JSON.stringify({"error": "invalid_token", "code": 401}));
    }
  });
  socket.on('disconnect', function () {
    socket.q.destroy();
    sys.p("closing socket");
  });
});
Client side:
The socket instance uses the options 'force new connection' = true and 'sync disconnect on unload' = false.
The client side uses the window object's onbeforeunload and onunload events to call socket.disconnect.
On the socket connect event, the client sends the user token to node.
Messages from the socket are processed like this:
var socket;

function webSocket() {
  //var socket = new io.Socket();
  socket = io.connect("ws.dev.kinkajougames.com", {'force new connection': true, 'sync disconnect on unload': false});
  //socket.connect();
  onSocketConnect = function () {
    alert('Connected');
    socket.send(JSON.stringify({
      token: Get_Cookie('liveScoopToken')
    }));
  };
  socket.on('connect', onSocketConnect);
  socket.on('message', function (data) {
    message = JSON.parse(data);
    if (message.action == "chat") {
      if (idList[message.data.sender] != undefined) {
        chatboxManager.dispatch(message.data.sender, {
          first_name: message.data.sender
        }, message.data.message);
      }
      else {
        var username = message.data.sender;
        Data.Collections.Chats.add({
          id: username,
          title: username,
          user: username,
          desc: "Chat",
          first_name: username,
          last_name: ""
        });
        idList[message.data.sender] = message.data.sender;
        chatboxManager.addBox(message.data.sender, {
          title: username,
          user: username,
          desc: "Chat",
          first_name: username,
          last_name: "",
          boxClosed: function (id) {
            alert("closing");
          }
        });
        chatboxManager.dispatch(message.data.sender, {
          first_name: message.data.sender
        }, message.data.message);
      }
    }
  });
}

webSocket();

window.onbeforeunload = function () {
  return "You have made unsaved changes. Would you still like to leave this page?";
};

window.onunload = function () {
  socket.disconnect();
};
webSocket();
window.onbeforeunload = function() {
return "You have made unsaved changes. Would you still like to leave this page?";
}
window.onunload = function (){
socket.disconnect();
}
And that's it: no more round-robin of the messages.