I have two APIs: one that sends a message, and another with a consumer saga that consumes it. On the send side, my message model does not implement the CorrelatedBy<Guid> interface, but it has a field I would like to use as the correlation id. As I understand from the documentation, that should be configured as follows.
GlobalTopology.Send.UseCorrelationId<SubmitOrder>(x => x.OrderId);
However, after setting that, I do not see a correlation id on the message once it is consumed by my consumer saga; it is an empty GUID.
I have also tried another approach outlined in the documentation, but that did not produce a correlation id in my saga either.
Bus.Factory.CreateUsingRabbitMQ(..., cfg =>
{
    cfg.Send<OrderSubmitted>(x =>
    {
        x.UseCorrelationId(context => context.Message.OrderId);
    });
});
Message Interface
public interface MyEvent {
    Guid MyId { get; }
    DateTime Timestamp { get; }
}
Registration
builder.AddMassTransit(x => {
    x.Builder.RegisterBuildCallback(bc => {
        var bus = bc.Resolve<IBusControl>();
        bus.Start();
    });
    x.AddConsumers(Assembly.GetExecutingAssembly());
    x.UsingRabbitMq((context, cfg) => {
        cfg.Send<MyEvent>(x => {
            x.UseCorrelationId(ctx => ctx.MyId);
        });
        cfg.Host(new Uri(Configuration["RabbitMqHost"]), host => {
            host.Username(Configuration["RabbitMqUsername"]);
            host.Password(Configuration["RabbitMqPassword"]);
        });
        cfg.ReceiveEndpoint(Configuration["RabbitMqQueueName"], ec => {
            ec.ConfigureConsumers(context);
        });
        cfg.ConfigureEndpoints(context);
    });
});
Where am I going wrong here? I would expect the value of MyId to be assigned to CorrelationId on send and show up in the consume context of my saga on the other end.
The workaround here is simple: follow convention and name the field CorrelationId, and everything works as expected.
Consumer sagas only correlate using the CorrelatedBy<Guid> interface, which must be implemented on the message contract itself. They do not use the CorrelationId property on the ConsumeContext, which is why setting it via the send topology has no effect on saga correlation.
I want to have a hybrid NestJS app, HTTP + RabbitMQ, but I don't understand whether I should create a separate microservice for each queue.
I have followed NestJS' RabbitMQ guide (https://docs.nestjs.com/microservices/rabbitmq) and GitHub example (https://github.com/nestjs/nest/tree/master/sample/03-microservices).
main.ts
app.connectMicroservice({
transport: Transport.RMQ,
options: {
urls: [`amqp://user:user@hostname:5672`],
queue: "cats_queue",
queueOptions: { durable: false },
prefetchCount: 1,
}
});
...
await app.startAllMicroservicesAsync();
app.module.ts (module import)
ClientsModule.register([{
    name: "CATS_QUEUE",
    transport: Transport.RMQ,
    options: {
        urls: [`amqp://user:user@hostname:5672`],
        queue: "cats_queue",
        queueOptions: { durable: false },
        prefetchCount: 1
    }
}])
app.controller.ts
constructor(
    @Inject("CATS_QUEUE") private readonly client: ClientProxy
) {}
@Get("mq")
mq(): Observable<number> {
const pattern = { cmd: "sum" };
const data = [1, 2, 3, 4, 5];
return this.client.send<number>(pattern, data);
}
@MessagePattern({ cmd: "sum" })
sum(data: number[]): number {
    console.log("MESSAGE RECEIVED: " + data.toString());
    return (data || []).reduce((a, b) => a + b, 0);
}
As I understood it, I need to call ClientsModule.register() for each queue in app.module.ts. But why do I also need to define the RabbitMQ queue name when creating the microservice? Do I need to create a different microservice for each queue?
I'm not a NestJS user, but it seems logical that when you use queues to send messages between your microservices, you need a queue for each microservice (not the other way around).
Queues are used to send messages to, and receive (process) messages from, independent components asynchronously.
Does that make sense?
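To make the idea concrete, here is a toy in-memory sketch (plain JavaScript, not real NestJS or RabbitMQ code; all names are illustrative): each responder "microservice" listens on its own queue, and the client addresses it by queue name, which is why both sides must agree on the same name.

```javascript
// Toy model of the pattern: one queue per responder microservice.
const queues = new Map(); // queue name -> handler

// Roughly what app.connectMicroservice({ queue, ... }) does conceptually:
// register a handler that owns one queue.
function connectMicroservice(queue, handler) {
  queues.set(queue, handler);
}

// Roughly what client.send(pattern, data) does conceptually: route the
// request to whichever microservice owns the configured queue.
function send(queue, pattern, data) {
  const handler = queues.get(queue);
  return handler(pattern, data);
}

// Two queues -> two "microservices", each with its own message patterns.
connectMicroservice("cats_queue", (pattern, data) =>
  pattern.cmd === "sum" ? data.reduce((a, b) => a + b, 0) : undefined);
connectMicroservice("dogs_queue", (pattern, data) =>
  pattern.cmd === "max" ? Math.max(...data) : undefined);

console.log(send("cats_queue", { cmd: "sum" }, [1, 2, 3, 4, 5])); // 15
console.log(send("dogs_queue", { cmd: "max" }, [1, 2, 3, 4, 5])); // 5
```

This is only a sketch of the routing idea; in Nest itself, the ClientsModule entry and the connectMicroservice options both name the queue because the client and the listener are configured independently.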
Is it possible to launch a RabbitMQ consumer dynamically? I mean, can I connect a consumer to an existing queue after a specific time?
Or should all consumers be created in advance?
In my case there may be no consumers while the queues are being filled with messages. Could I connect consumers after some time?
Yes, you can attach a consumer at any time, just as if the channel had not been created yet; the queue keeps its messages until a consumer subscribes.
Example for Node.js (using amqplib):
const conn = await amqplib.connect(`${rabbitmq.url}?heartbeat=300`);
conn.on('error', function (err) {
api.log.error('AMQP:Error:', err);
});
conn.on('close', () => {
api.log.info("AMQP:Closed");
});
const ch = await conn.createChannel();
await ch.assertQueue(queue_name, queue_options); // checks whether queue_name exists; if not, creates it
await ch.consume(queue_name, callback); // each message from the queue is passed to callback
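The reason this works is that a RabbitMQ queue buffers messages until a consumer attaches. Here is a toy in-memory model of that behaviour (plain JavaScript, not amqplib):

```javascript
// Toy model (no real broker): a queue holds published messages while no
// consumer is attached, then drains the backlog to the first consumer.
class ToyQueue {
  constructor() {
    this.buffer = [];
    this.consumer = null;
  }
  publish(msg) {
    if (this.consumer) this.consumer(msg);
    else this.buffer.push(msg); // no consumer yet: the broker holds the message
  }
  consume(callback) {
    this.consumer = callback;
    while (this.buffer.length) callback(this.buffer.shift()); // drain backlog
  }
}

const q = new ToyQueue();
q.publish("a");
q.publish("b"); // queue fills up with no consumer attached
const received = [];
q.consume(msg => received.push(msg)); // consumer attached "after some time"
q.publish("c");
console.log(received); // ["a", "b", "c"]
```

So in the amqplib code above, nothing is lost if `ch.consume` runs long after publishing started (subject to the queue surviving, i.e. its durability/expiry settings).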
I have an application flow where:
Client-side JavaScript triggers a SignalR hub
An asynchronous call is made for a long-running operation
When the operation is complete, client-side JavaScript is notified by SignalR
I had assumed this would be as simple as:
Server Side:
public async void SendMessage()
{
await Task.Delay(1000);
Clients.All.SendAsync("ReceiveMessage");
}
Client Side:
var hub = new signalR.HubConnectionBuilder().withUrl('/disposeBugHub').build();
hub.on("ReceiveMessage", function () {
    alert('Message Received');
});
hub.start()
    .then(function () {
        hub.invoke("SendMessage");
    })
    .catch(function (err) {
        return console.error(err.toString());
    });
However, the Clients.All.SendAsync("ReceiveMessage"); call always throws a System.ObjectDisposedException: 'Cannot access a disposed object.'
This appears to be expected behavior and not a bug, so my question is: how do I programmatically achieve the desired workflow? I assume there must be a well-known pattern for this, but I can't find it online.
First of all, don't use an async void method for the long-running process; use a method that returns Task, and await the send, so the framework waits for the method to finish before disposing the hub.
Try this:
public async Task SendMessage()
{
    await Task.Delay(10000);
    await Clients.All.SendAsync("ReceiveMessage");
}
I have one scenario which is very close to this sample:
One main screen:
this screen (client side) connects to the socket.io server through server:9090/scope (io.connect("http://server:9090/scope")) and sends one event, "userBindOk" (socket.emit("userBindOk", message)), to the socket.io server;
the server receives the connection and the "userBindOk" event. At this moment, the server should take the active connection to the RabbitMQ server and bind the queue to the user who has just connected to the application through socket.io. Sample:
socket.on("connection", function(client){
    // client id is 1234
    // bind rabbitmq exchange and queue, then:
    queue.subscribe(/* receive callback */);
})
So far, no problem - I can send/receive messages through socket.io without problems.
BUT, if I refresh the page, all those steps happen again. As a consequence, the queue is bound again, this time to a new socket.io session. The old binding (from before the refresh) is still subscribed, so a message sent to the queue may be delivered to a now-invalid socket.io client (a page refresh means a new client.id in the socket.io context). I can prove this behaviour: every page refresh forces me to send one extra message. For instance: on my first connection, one message produces one screen update; after refreshing the page I need to send two messages to the queue, and only the second one is received by the "current" socket.io session; after 20 page refreshes I have to send 20 messages to the queue before the "last" server-side subscription forwards one to the live client for rendering.
The solutions I believe are:
Find a way to "unbind" the queue when disconnecting from the socket.io server - I didn't see this option in the node-amqp API yet (waiting for it :D)
Find a way to reconnect the socket.io client using the same client.id. That way I can identify the returning client and apply some logic to cache the socket.
Any ideas? I tried to be very clear... but, as you know, it's not so easy to expose a problem that is very specific to some context...
Thanks
I solved it like this:
I used to declare the RabbitMQ queue with durable=true, autoDelete=false, exclusive=false. In my app there was one queue per user and one exchange (type=direct) with the routing_key equal to the queue name; the app also used that queue for clients other than browsers (an Android app and an iPhone app, as a push fallback), so I used to create one queue for each user.
The solution to this problem was to change my RabbitMQ queue and exchange declarations. Now I declare one exchange per user as fanout with autoDelete=true, and the user has N queues (one per connected client) with durable=true, autoDelete=true, exclusive=true, all bound to the user exchange (multicast).
NOTE: my app is written in Django, and I use node + socket.io + AMQP to communicate with the browser over WebSockets, so I use node-restler to query my app's API for the user-queue info.
That's the RabbitMQ side. For node + AMQP + socket.io I did this:
server-side:
On connect: declare the user exchange (fanout, autoDelete, durable), then declare the queue (durable, autoDelete, exclusive), bind the queue to the user exchange, and finally subscribe. On socket disconnect the queue is destroyed, so there are exactly as many queues as clients connected to the app. This solves the refresh problem and allows the user to have more than one window/tab with the app open:
Server-side:
/*
* unCaught exception handler
*/
process.on('uncaughtException', function (err) {
sys.p('Caught exception: ' + err);
global.connection.end();
});
/*
* Requiere libraries
*/
global.sys = require('sys');
global.amqp = require('amqp');
var rest = require('restler');
var io = require('socket.io').listen(8080);
/*
* Module global variables
*/
global.amqpReady = 0;
/*
* RabbitMQ connection
*/
global.connection = global.amqp.createConnection({
host: host,
login: adminuser,
password: adminpassword,
vhost: vhost
});
global.connection.addListener('ready',
function () {
sys.p("RabbitMQ connection stablished");
global.amqpReady = 1;
}
);
/*
* Web-Socket declaration
*/
io.sockets.on('connection', function (socket) {
socket.on('message', function (data) {
sys.p(data);
try{
    var message = JSON.parse(data);
}catch(error){
    socket.emit("message", JSON.stringify({"error": "invalid_params", "code": 400}));
    var message = {};
}
if(message.token != undefined) {
rest.get("http://dev.kinkajougames.com/api/push",
{headers:
{
"x-geochat-auth-token": message.token
}
}).on('complete',
function(data) {
a = data;
}).on('success',
function (data){
sys.p(data);
try{
sys.p("---- creating exchange");
socket.exchange = global.connection.exchange(data.data.bind, {type: 'fanout', durable: true, autoDelete: true});
sys.p("---- declaring queue");
socket.q = global.connection.queue(data.data.queue, {durable: true, autoDelete: true, exclusive: false},
function (){
sys.p("---- bind queue to exchange");
socket.q.bind(socket.exchange, "*");
sys.p("---- subscribing queue exchange");
socket.q.subscribe(function (message) {
socket.emit("message", message.data.toString());
});
}
);
}catch(err){
sys.p("Impossible to connect to the rabbitMQ-server");
}
}).on('error', function (data){
a = {
data: data,
};
}).on('400', function() {
socket.emit("message", JSON.stringify({"error": "connection_error", "code": 400}));
}).on('401', function() {
socket.emit("message", JSON.stringify({"error": "invalid_token", "code": 401}));
});
}
else {
socket.emit("message", JSON.stringify({"error": "invalid_token", "code": 401}));
}
});
socket.on('disconnect', function () {
socket.q.destroy();
sys.p("closing socket");
});
});
client-side:
The socket instance is created with the options 'force new connection' = true and 'sync disconnect on unload' = false.
The client side uses the window onbeforeunload and onunload events to send socket.disconnect.
On the socket connect event, the client sends the user token to node.
Messages from the socket are then processed:
var socket;
function webSocket(){
//var socket = new io.Socket();
socket = io.connect("ws.dev.kinkajougames.com", {'force new connection':true, 'sync disconnect on unload': false});
//socket.connect();
onSocketConnect = function(){
alert('Connected');
socket.send(JSON.stringify({
token: Get_Cookie('liveScoopToken')
}));
};
socket.on('connect', onSocketConnect);
socket.on('message', function(data){
message = JSON.parse(data);
if (message.action == "chat") {
if (idList[message.data.sender] != undefined) {
chatboxManager.dispatch(message.data.sender, {
first_name: message.data.sender
}, message.data.message);
}
else {
var username = message.data.sender;
Data.Collections.Chats.add({
id: username,
title: username,
user: username,
desc: "Chat",
first_name: username,
last_name: ""
});
idList[message.data.sender] = message.data.sender;
chatboxManager.addBox(message.data.sender, {
title: username,
user: username,
desc: "Chat",
first_name: username,
last_name: "",
boxClosed: function(id){
alert("closing");
}
});
chatboxManager.dispatch(message.data.sender, {
first_name: message.data.sender
}, message.data.message);
}
}
});
}
webSocket();
window.onbeforeunload = function() {
return "You have made unsaved changes. Would you still like to leave this page?";
}
window.onunload = function (){
socket.disconnect();
}
And that's it: no more round-robin delivery of messages to stale clients.
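The essence of the fix can be sketched in a toy in-memory model (plain JavaScript, no real broker; all names are illustrative): each connected socket gets its own queue bound to the user's fanout exchange, and the queue dies with the socket, so a refresh can no longer leave a stale consumer competing for messages.

```javascript
// Toy model of "fanout exchange per user, one auto-delete queue per socket".
class FanoutExchange {
  constructor() { this.queues = new Set(); }
  publish(msg) {
    for (const q of this.queues) q.deliver(msg); // multicast to all bound queues
  }
}

class ClientQueue {
  constructor(exchange, onMessage) {
    this.exchange = exchange;
    this.onMessage = onMessage;
    exchange.queues.add(this); // bind on socket connect
  }
  deliver(msg) { this.onMessage(msg); }
  destroy() { this.exchange.queues.delete(this); } // like socket.q.destroy() on disconnect
}

const userExchange = new FanoutExchange();
const tab1 = [], tab2 = [];
const q1 = new ClientQueue(userExchange, m => tab1.push(m));
new ClientQueue(userExchange, m => tab2.push(m)); // second browser tab
userExchange.publish("hello"); // both tabs receive it
q1.destroy(); // tab 1 refreshes/closes: its queue disappears with the socket
userExchange.publish("world"); // only the live tab receives it
console.log(tab1); // ["hello"]
console.log(tab2); // ["hello", "world"]
```

With the original single shared queue, the two consumers would have competed round-robin for each message; with one exclusive auto-delete queue per socket, every live client gets every message and stale consumers cannot exist.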