RabbitMQ: Strange Behavior in Publisher Confirm

I'm new to RabbitMQ, and I'm using Publisher Confirms to ensure that messages are delivered successfully.
But I'm seeing strange behavior: when I publish a message and take a sequence number, the channel.BasicAcks handler fires n times, where n is the DeliveryTag. So if the DeliveryTag for a message is 5, channel.BasicAcks fires 5 times?!
And here is my code:
IModel channel = _customRabbitMQ.GetChannel();
IBasicProperties properties = _customRabbitMQ.GetBasicProperties();
ulong sequenceNumber = channel.NextPublishSeqNo;
message.SequenceNumber = sequenceNumber.ToString();
_context.Messages.Update(message);
await _context.SaveChangesAsync();

try
{
    _customRabbitMQ.AddOutstandingConfirm(sequenceNumber, message.Id);
    channel.BasicPublish(message.ExchangeName, message.RoutingKey, properties,
        Encoding.UTF8.GetBytes(JsonSerializer.Serialize(message)));
    Console.WriteLine("Message Published");
}
catch (Exception ex)
{
    // TODO: log to custom logger here
    Console.WriteLine($"Error => {ex.Message}");
}

// Note: the confirm handlers are registered here, after the publish above
channel.BasicAcks += async (sender, ea) =>
{
    try
    {
        Console.WriteLine("Message Confirmed");
        using var scope = _provider.CreateScope();
        var _context = scope.ServiceProvider.GetService<Data.DataContext>();
        var _customRabbitMQ = scope.ServiceProvider.GetService<CustomRabbitMQ>();
        Guid messageId = _customRabbitMQ.GetOutstandingConfirm(ea.DeliveryTag);
        Message message = await _context.Messages.Where(m => m.Id == messageId).FirstOrDefaultAsync();
        message.Status = MessageStatuses.INQUEUE;
        _context.Messages.Update(message);
        await _context.SaveChangesAsync();
        _customRabbitMQ.RemoveOutstandingConfirm(ea.DeliveryTag);
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Error => {ex.Message}");
    }
};

channel.BasicNacks += (sender, ea) =>
{
    Console.WriteLine("Message Not Confirmed");
    _customRabbitMQ.RemoveOutstandingConfirm(ea.DeliveryTag);
};
}
So, why is this happening, and how do I stop it so the confirm fires only once per message?
Thanks in advance.
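
For reference, here is a minimal sketch of attaching the confirm handlers a single time, right after the channel is put into confirm mode, rather than inside the publish path. It assumes the RabbitMQ.Client IModel API; the WireConfirmHandlers name is illustrative only, not taken from the code above.

// Illustrative sketch only: handlers are registered once per channel.
void WireConfirmHandlers(IModel channel)
{
    channel.ConfirmSelect(); // put the channel into confirm mode once

    channel.BasicAcks += (sender, ea) =>
    {
        // ea.Multiple == true means every outstanding tag up to ea.DeliveryTag is confirmed at once
        Console.WriteLine($"Confirmed up to {ea.DeliveryTag} (multiple: {ea.Multiple})");
    };

    channel.BasicNacks += (sender, ea) =>
    {
        Console.WriteLine($"Rejected up to {ea.DeliveryTag} (multiple: {ea.Multiple})");
    };
}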

Related

How to use the MQTT ManagedClient with an ASP.NET API?

I'm currently working on a project that has to rely heavily on MQTT. One of the parts that needs to use MQTT is an ASP.NET API, but I'm having difficulties receiving messages.
Here is my MQTTHandler:
public MQTTHandler()
{
    _mqttUrl = Properties.Resources.mqttURL ?? "";
    _mqttPort = Properties.Resources.mqttPort ?? "";
    _mqttUsername = Properties.Resources.mqttUsername ?? "";
    _mqttPassword = Properties.Resources.mqttPassword ?? "";
    _mqttFactory = new MqttFactory();
    _tls = false;
}
public async Task<IManagedMqttClient> ConnectClientAsync()
{
    var clientID = Guid.NewGuid().ToString();
    var messageBuilder = new MqttClientOptionsBuilder()
        .WithClientId(clientID)
        .WithCredentials(_mqttUsername, _mqttPassword)
        .WithTcpServer(_mqttUrl, Convert.ToInt32(_mqttPort));
    var options = _tls ? messageBuilder.WithTls().Build() : messageBuilder.Build();
    var managedOptions = new ManagedMqttClientOptionsBuilder()
        .WithAutoReconnectDelay(TimeSpan.FromSeconds(5))
        .WithClientOptions(options)
        .Build();
    _mqttClient = new MqttFactory().CreateManagedMqttClient();
    await _mqttClient.StartAsync(managedOptions);
    Console.WriteLine("Client started");
    return _mqttClient;
}
public async Task PublishAsync(string topic, string payload, bool retainFlag = true, int qos = 1)
{
    await _mqttClient.EnqueueAsync(new MqttApplicationMessageBuilder()
        .WithTopic(topic)
        .WithPayload(payload)
        .WithQualityOfServiceLevel((MQTTnet.Protocol.MqttQualityOfServiceLevel)qos)
        .WithRetainFlag(retainFlag)
        .Build());
    Console.WriteLine("Message published");
}
public async Task SubscribeAsync(string topic, int qos = 1)
{
    var topicFilters = new List<MQTTnet.Packets.MqttTopicFilter>
    {
        new MqttTopicFilterBuilder()
            .WithTopic(topic)
            .WithQualityOfServiceLevel((MQTTnet.Protocol.MqttQualityOfServiceLevel)qos)
            .Build()
    };
    await _mqttClient.SubscribeAsync(topicFilters);
}
public Status GetSystemStatus(MqttApplicationMessageReceivedEventArgs e)
{
    try
    {
        var json = Encoding.UTF8.GetString(e.ApplicationMessage.Payload);
        var status = JsonSerializer.Deserialize<Status>(json);
        if (status != null)
        {
            return status;
        }
        else
        {
            return null;
        }
    }
    catch (Exception)
    {
        throw;
    }
}
The above has been tested with a console app and works as it should.
The reason I need MQTT in the API is that a POST method has to act on the value of a topic; in particular, I need to check a system's status before allowing the post:
[HttpPost]
public async Task<ActionResult<Order>> PostOrder(Order order)
{
    if (_lastStatus != null)
    {
        if (_lastStatus.OpStatus)
        {
            return StatusCode(400, "System is busy!");
        }
        else
        {
            var response = await _orderManager.AddOrder(order);
            return StatusCode(response.StatusCode, response.Message);
        }
    }
    return StatusCode(400, "Something went wrong");
}
So I will need to set up a subscriber for this controller, and set the value of _lastStatus on received messages:
private readonly MQTTHandler _mqttHandler;
private IManagedMqttClient _mqttClient;
private Status _lastStatus;

public OrdersController(OrderManager orderManager)
{
    _orderManager = orderManager;
    _mqttHandler = new MQTTHandler();
    _mqttClient = _mqttHandler.ConnectClientAsync().Result;
    _mqttHandler.SubscribeAsync("JSON/Status");
    _mqttClient.ApplicationMessageReceivedAsync += e =>
    {
        _lastStatus = _mqttHandler.GetSystemStatus(e);
        return Task.CompletedTask;
    };
}
However, it's behaving a little oddly and I'm not experienced enough to know why.
The first time I make a POST request, _lastStatus is null; every following POST request seems to have the last retained message.
I'm guessing I'm struggling because of the asynchrony, but I'm not sure, and every attempt I've made to make it synchronous has failed.
Does anyone have a clue about what I'm doing wrong?
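
For reference, a minimal sketch of a different wiring order, reusing the poster's MQTTHandler, Status, and _lastStatus names from above (the InitMqttAsync method name is illustrative only): the receive handler is attached before the subscription is made, so the retained status message that the broker delivers on subscribe cannot arrive before a handler exists.

// Illustrative sketch, assuming MQTTnet's managed client as used above.
private async Task InitMqttAsync()
{
    var mqttClient = await _mqttHandler.ConnectClientAsync();

    // Attach the receive handler BEFORE subscribing, so the retained status
    // message delivered on subscribe cannot be missed.
    mqttClient.ApplicationMessageReceivedAsync += e =>
    {
        _lastStatus = _mqttHandler.GetSystemStatus(e);
        return Task.CompletedTask;
    };

    await _mqttHandler.SubscribeAsync("JSON/Status");
}

Running something like this once from a singleton (for example a hosted service or a DI-registered singleton) rather than in the controller constructor would also avoid blocking on .Result and avoid racing the first request, since controllers are constructed per request.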

Best practice for interrupting streaming async action

I am working on an ASP.NET Core controller action that exposes a stream for X seconds or until canceled, whichever happens first.
My first approach was to copy the stream asynchronously into the response; this way I get to use the cancellation token rather easily by passing it into CopyToAsync.
[HttpGet]
[Route("getstream1")]
public async Task GetStream1(CancellationToken cancellationToken)
{
    var stream = await new HttpClient().GetStreamAsync("http://localhost:4747/video");
    var cst = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
    var ct = cst.Token;
    Response.OnStarting(() =>
    {
        cst.CancelAfter(5000);
        return Task.CompletedTask;
    });
    Response.StatusCode = 200;
    Response.ContentType = "video/stream";
    await Response.StartAsync(ct);
    try
    {
        await stream.CopyToAsync(Response.Body, 1024, ct);
    }
    catch (Exception ex)
    {
        if (ex is TaskCanceledException || ex is OperationCanceledException)
        {
            return;
        }
        throw;
    }
}
My second idea was to return it as a file stream response and fire-and-forget a task which waits the given amount of time and then aborts the HttpContext, or is cancelled earlier by the user via the cancellation token.
[HttpGet]
[Route("getstream2")]
public async Task<IActionResult> GetStream2(CancellationToken cancellationToken)
{
    var stream = await new HttpClient().GetStreamAsync("http://localhost:4747/video");
    Task.Factory.StartNew(async () =>
    {
        await Task.Delay(5000, cancellationToken);
        try
        {
            HttpContext.Abort();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }, cancellationToken).ContinueWith(task =>
    {
        // handle exception
    });
    return new FileStreamResult(stream, "video/stream");
}
My question is: is one of those approaches better than the other in some way? Is there a better way to implement this? Is it OK to call Abort on HttpContext when I want to end my response?
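
For reference, the "stream for X seconds or until the client cancels, whichever comes first" part can also be expressed with the linked token alone, without Response.OnStarting and without aborting the context. A minimal sketch follows; the getstream3 route, the 5-second budget, and the upstream URL are placeholders.

[HttpGet]
[Route("getstream3")]
public async Task GetStream3(CancellationToken cancellationToken)
{
    var stream = await new HttpClient().GetStreamAsync("http://localhost:4747/video");

    // Linked token: trips when the client disconnects OR after 5 seconds, whichever is first.
    using var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
    cts.CancelAfter(TimeSpan.FromSeconds(5));

    Response.StatusCode = 200;
    Response.ContentType = "video/stream";
    try
    {
        await stream.CopyToAsync(Response.Body, 81920, cts.Token);
    }
    catch (OperationCanceledException)
    {
        // Expected on timeout or client disconnect (TaskCanceledException derives from this);
        // the response body simply ends here.
    }
}

One tradeoff to weigh: returning from the action ends the chunked body normally, while HttpContext.Abort() resets the connection, so the choice depends on whether the client should be able to tell the stream was cut off.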

RabbitMQ's priority queue

I have tested RabbitMQ's priority queue mechanism, and priorities only take effect if the producer is started and publishes its messages before the consumer is started. How can I solve this problem?
Code snippet
consumer:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-priority", 10);
channel.queueDeclare(TEST_PRIORITY_QUEUE, true, false, false, args);
// omit ...
DeliverCallback deliverCallback = (consumerTag, delivery) -> {
    try {
        String message = new String(delivery.getBody(), "UTF-8");
        System.out.println("message=" + message);
        Thread.sleep(20 * 1000);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
    }
};
// ...
producer:
for (int i = 0; i < 20; i++) {
    String messagelow = "lowLevelMsg";
    channel.basicPublish(TEST_EXCHANGE_direct,
            "prikey",
            new BasicProperties.Builder().priority(1).build(),
            messagelow.getBytes());
}
String messagehigh = "HigherLevelMsg";
channel.basicPublish(TEST_EXCHANGE_direct,
        "prikey",
        new BasicProperties.Builder().priority(9).build(),
        messagehigh.getBytes());
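
One point worth noting: priorities only reorder messages that are actually waiting in the queue, so a consumer with an unlimited prefetch drains messages in arrival order before the high-priority one is even published. Below is a hedged sketch of limiting the consumer prefetch, shown with the RabbitMQ .NET client for consistency with the other examples on this page (in the Java client above the equivalent would be channel.basicQos(1) before basicConsume); the queue name and connection details are placeholders.

// Sketch: keep at most one unacked message in flight so the rest stay queued
// long enough for the broker to reorder them by priority.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

var args = new Dictionary<string, object> { ["x-max-priority"] = 10 };
channel.QueueDeclare("test.priority.queue", durable: true, exclusive: false, autoDelete: false, arguments: args);

// Limit prefetch: one unacknowledged delivery per consumer.
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    Console.WriteLine("message=" + Encoding.UTF8.GetString(ea.Body.ToArray()));
    channel.BasicAck(ea.DeliveryTag, multiple: false);
};
channel.BasicConsume("test.priority.queue", autoAck: false, consumer);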

Can I subscribe to multiple hosts (and clusters) with one subscriber pattern in Redis?

I am implementing a feature that pushes Redis published messages into MongoDB. I have built the project and it works perfectly in the test environment.
But I'm concerned about the production environment: I have 3 master servers and 12 slave/cluster nodes. If I publish messages from them to a channel pattern, can I subscribe to all of those messages in one place?
Yes, it is possible with StackExchange.Redis settings; my general structure is shown below (a configuration sketch follows the class).
public class RedisSubscriber : IRedisSubscriber
{
    private readonly RedisConfigurationManager _config;
    private readonly IMongoDbRepository _mongoDbRepository;
    private readonly ILogger<RedisSubscriber> _logger;
    private readonly IConnectionMultiplexer _connectionMultiplexer;

    public RedisSubscriber(IServiceProvider serviceLocator, ILogger<RedisSubscriber> logger, IConnectionMultiplexer conn)
    {
        _config = (RedisConfigurationManager)serviceLocator.GetService(typeof(RedisConfigurationManager));
        _mongoDbRepository = (IMongoDbRepository)serviceLocator.GetService(typeof(IMongoDbRepository));
        _connectionMultiplexer = conn;
        _logger = logger;
    }

    public void SubScribeChannel()
    {
        _logger.LogInformation("!SubScribeChannel started!!");
        string channelName = _config.ActiveChannelName;
        var pubSub = _connectionMultiplexer.GetSubscriber();
        try
        {
            pubSub.Subscribe(channelName, async (channel, message) => await MessageActionAsync(message, channel));
        }
        catch (Exception ex)
        {
            _logger.LogInformation(String.Format("!error: {0}", ex.Message));
        }
        Debug.WriteLine("EOF");
    }

    private async Task MessageActionAsync(RedisValue message, string channel)
    {
        try
        {
            Transformer t = new Transformer(_logger);
            _logger.LogInformation(String.Format("!SubScribeChannel message received on message!! channel: {0}, message: {1}", channel, message));
            string transformedMessage = Transformer.TransformJsonStringData2Message(message);
            List<Document> documents = Transformer.Deserialize<List<Document>>(transformedMessage);
            await MergeToMongoDb(documents, channel);
            _logger.LogInformation("!Merged");
        }
        catch (Exception ex)
        {
            _logger.LogInformation(String.Format("!error: {0}", ex.Message));
        }
    }

    private async Task MergeToMongoDb(IList<Document> documents, string channelName)
    {
        try
        {
            foreach (Document doc in documents)
            {
                TurSysPartitionedDocument td = JsonConvert.DeserializeObject<TurSysPartitionedDocument>(JsonConvert.SerializeObject(doc));
                td.DepartureDate = td.DepartureDate.ToLocalTime();
                td.PartitionKey = channelName;
                TurSysPartitionedDocument isExist = await _mongoDbRepository.GetOneAsync<TurSysPartitionedDocument>(k =>
                    k.ProductCode == td.ProductCode &&
                    k.ProviderCode == td.ProviderCode &&
                    k.CabinClassName == td.CabinClassName &&
                    k.OriginAirport == td.OriginAirport &&
                    k.DestinationAirport == td.DestinationAirport &&
                    k.Adult >= td.Adult &&
                    k.DepartureDate == td.DepartureDate,
                    td.PartitionKey);
                if (isExist != null)
                {
                    //_logger.LogInformation(String.Format("!isExist departure date: {0}", isExist.DepartureDate));
                    isExist.SearchCount++;
                    await _mongoDbRepository.UpdateOneAsync(isExist, k => k.Adult, td.Adult);
                    await _mongoDbRepository.UpdateOneAsync(isExist, k => k.SearchCount, isExist.SearchCount);
                }
                else
                {
                    //_logger.LogInformation(String.Format("!last ToLocalTime td departure date: {0}", td.DepartureDate));
                    td.SearchCount = 1;
                    await _mongoDbRepository.AddOneAsync(td);
                    //_logger.LogInformation(String.Format("!last ToLocalTime result td departure date: {0}", td.DepartureDate));
                }
            }
        }
        catch (Exception ex)
        {
            _logger.LogInformation(String.Format("!error: {0}", ex.Message));
        }
    }
}
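
As referenced above, here is a minimal sketch of the connection side this class depends on, assuming StackExchange.Redis; the endpoint names and the channel pattern are placeholders for the real production values.

using System;
using StackExchange.Redis;

var options = new ConfigurationOptions
{
    AbortOnConnectFail = false
};
options.EndPoints.Add("redis-master-1", 6379);
options.EndPoints.Add("redis-master-2", 6379);
options.EndPoints.Add("redis-master-3", 6379);

// One multiplexer for the whole process; typically registered as a singleton
// and injected as IConnectionMultiplexer, as in the RedisSubscriber above.
IConnectionMultiplexer mux = ConnectionMultiplexer.Connect(options);

// A single pattern subscription; the handler fires for every matching channel.
mux.GetSubscriber().Subscribe(
    new RedisChannel("mychannel.*", RedisChannel.PatternMode.Pattern),
    (channel, message) => Console.WriteLine($"{channel}: {message}"));

Whether every published message actually reaches this one subscriber still depends on the topology: in a Redis Cluster, a classic PUBLISH is broadcast to all nodes, while fully independent masters would each need their own connection and subscription, so it's worth verifying against the real production setup.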

JMS message consumption isn't happening outside of a bean

I'm running through a GlassFish web process and I need a non-container-managed class (EJBUserManager) to be able to receive messages from a MessageDrivenBean. The class has the javax.jms.Queues and connection factories, and I can write to the queues. The queue delivers to a MessageDrivenBean (AccountValidatorBean) that receives the message correctly and then writes back a reply message. But when the EJBUserManager attempts to read from the queue, it never receives the message.
@Override
public boolean doesExist(String username) throws FtpException {
    LOGGER.finer(String.format("Query if username %s exists", username));
    QueueConnection queueConnection = null;
    boolean doesExist = false;
    try {
        queueConnection = connectionFactory.createQueueConnection();
        final UserManagerMessage userManagerMessage =
                new UserManagerMessage(UserManagerQueryCommands.VALIDATE_USER, username);
        final Session session = queueConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        final ObjectMessage objectMessage = session.createObjectMessage(userManagerMessage);
        session.createProducer(accountValidatorQueue).send(objectMessage);
        session.close();
        queueConnection.close();
        queueConnection = connectionFactory.createQueueConnection();
        final QueueSession queueSession =
                queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        LOGGER.finest(String.format("Right before doesExist receive for username %s", username));
        final Message firstAttemptMessage = queueSession.createConsumer(userManagerQueue).receive(3000);
        final Message message = firstAttemptMessage != null ?
                firstAttemptMessage : queueSession.createConsumer(userManagerQueue).receiveNoWait();
        LOGGER.finest(String.format("Right after doesExist receive for username %s", username));
        LOGGER.finest(String.format("Is the message null: %b", message != null));
        if (message != null && message instanceof StreamMessage) {
            final StreamMessage streamMessage = (StreamMessage) message;
            doesExist = streamMessage.readBoolean();
        }
    } catch (JMSException e) {
        e.printStackTrace();
    } finally {
        if (queueConnection != null) {
            try {
                queueConnection.close();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    }
    return doesExist;
}
The above is the code from the EJBUserManager. It can send to the accountValidatorQueue; it just never receives from the userManagerQueue.
Here's the code for the AccountValidatorBean:
private void validateUser(final String username) {
    QueueConnection queueConnection = null;
    final String doctype = doctypeLookupDAO.getDocumentTypeForUsername(username);
    LOGGER.finest(String.format("Doctype %s for username %s", doctype, username));
    try {
        queueConnection = queueConnectionFactory.createQueueConnection();
        final Session session = queueConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        //final StreamMessage message = session.createStreamMessage();
        //message.clearBody();
        //message.writeBoolean(doctype != null);
        //message.reset();
        final ObjectMessage message = session.createObjectMessage(Boolean.valueOf(doctype != null));
        final MessageProducer messageProducer =
                session.createProducer(userManagerQueue);
        LOGGER.finest(String.format("Queue name %s of producing queue", userManagerQueue.getQueueName()));
        messageProducer.send(message);
        LOGGER.finest(String.format("Sending user validate message for user %s", username));
        messageProducer.close();
        session.close();
    } catch (JMSException e) {
        e.printStackTrace();
    } finally {
        if (queueConnection != null) {
            try {
                queueConnection.close();
            } catch (JMSException e1) {
                e1.printStackTrace();
            }
        }
    }
}
Fixed. I needed to call QueueConnection.start() to consume messages from the queue.