I searched across Google for an answer to my question, but I didn't manage to find a clear one. Is it possible to batch sent messages in RabbitMQ?
This has been discussed before, and RabbitMQ does not support batch publishing at the moment. You have to bundle the messages up into one message yourself.
Please see: http://rabbitmq.1065348.n5.nabble.com/Batching-messages-td22852.html
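To illustrate that bundling workaround, here is a minimal sketch using the RabbitMQ Java client. The queue name, the localhost broker, and the newline delimiter are my own assumptions for the example; the consumer has to split the body using the same convention:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class BundledPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.queueDeclare("work", true, false, false, null);
            List<String> logicalMessages = List.of("msg-1", "msg-2", "msg-3");
            // A single basicPublish carries the whole bundle; the consumer
            // must split on the same delimiter to recover the messages.
            byte[] bundle = String.join("\n", logicalMessages).getBytes(StandardCharsets.UTF_8);
            channel.basicPublish("", "work", null, bundle);
        }
    }
}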
AFAIK, and in accordance with the official documentation (the tutorial most closely related to your question, with examples in Java: https://www.rabbitmq.com/tutorials/tutorial-seven-java.html), there is such a thing as "batching publisher confirms".
I mean you basically cannot publish multiple messages at one time (I also dug into the Java API, and even in the most recent versions, such as 5.12.0, there is simply no API for publishing multiple messages in one batch). Even the documentation's example of publishing multiple messages basically uses something like this:
int batchSize = 100;
int outstandingMessageCount = 0;
while (thereAreMessagesToPublish()) {
    byte[] body = ...;
    BasicProperties properties = ...;
    channel.basicPublish(exchange, queue, properties, body);
    outstandingMessageCount++;
    if (outstandingMessageCount == batchSize) {
        // wait until all outstanding publishes are confirmed, or fail fast
        channel.waitForConfirmsOrDie(5_000);
        outstandingMessageCount = 0;
    }
}
if (outstandingMessageCount > 0) {
    channel.waitForConfirmsOrDie(5_000);
}
And I wager it is definitely not what you want, because here they publish 100 messages separately but confirm them as one batch. That is what I meant by "batching publisher confirms": simply confirming that multiple messages have reached the broker successfully, while the messages themselves are still published separately, one by one.
I really hope this answer helps someone. Any additions are completely appreciated.
I am using ReactiveRedisOperations to save data objects in Redis, and this call returns a Mono as per the API.
I notice that if I don't do anything with this returned Mono, the code does not do anything.
I'm just trying to understand how this works.
I would like the code below to save every object to Redis in this loop, but it does not; please share what is missing here:
for (SomeObject obj : list) {
    reactiveRedisOperations.opsForHash().put(key, hashKey, obj).map(b -> obj);
}
On the other hand, if I return the Mono result from similar code via a REST service response, then it seems to save to Redis correctly; I'm not sure why this is. Thanks.
This is a quirk of reactive streams, not Lettuce.
Unlike a CompletableFuture, which begins execution when it's created, a stream won't begin executing (the command isn't sent) until a consumer has subscribed to it.
I believe this is to facilitate backpressure, so a slow consumer isn't flooded with data by the producer.
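To make that laziness concrete, here is a minimal, self-contained sketch (assuming Project Reactor is on the classpath; the printed strings are purely illustrative) showing that assembling a Mono does nothing until something subscribes:
import reactor.core.publisher.Mono;

public class LazyDemo {
    public static void main(String[] args) {
        // Assembling the Mono does NOT run the callable yet.
        Mono<String> mono = Mono.fromCallable(() -> {
            System.out.println("side effect runs");
            return "done";
        });
        System.out.println("nothing has executed yet");
        // Only subscribing triggers execution.
        mono.subscribe(System.out::println);
    }
}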
Some nice reading -> https://blog.knoldus.com/working-with-project-reactor-reactive-streams/
If you return a Mono to the underlying web framework, which will generally handle subscribing to it, the respective operation is triggered, resulting in side effects such as data being created in your Redis datastore.
Should you wish to have your operations executed, you should do the same: subscribe to the publisher (Mono or Flux), or return these wrappers to a calling function that you know will handle this for you, as in the aforementioned example:
Flux.fromIterable(list)
    .flatMap(obj -> reactiveRedisOperations.opsForHash().put(key, hashKey, obj))
    .subscribe();
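One caveat on the snippet above: subscribe() is fire-and-forget. If the surrounding code needs to wait for all writes to complete (in a batch job, for instance), one option, still assuming Project Reactor, is to block once at the very edge of the application:
Flux.fromIterable(list)
    .flatMap(obj -> reactiveRedisOperations.opsForHash().put(key, hashKey, obj))
    .then()   // Mono<Void> that completes when every put has completed
    .block(); // blocking is acceptable only at the outermost edge, never inside a reactive pipeline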
We're using NServiceBus as our messaging infrastructure with RabbitMQ as the transport.
I'm trying to upgrade from NServiceBus 5.* to 6.0. In 5.0, we could defer events using Bus.Defer(). But it seems that in 6.0 we can defer only messages, not events?
If I use the code below with the message being an event, I get an error saying that events should be published.
var sendOptions = new SendOptions();
sendOptions.DoNotDeliverBefore(DateTimeOffset.Now.AddMinutes(30));
sendOptions.RouteToThisEndpoint();
return context.Send(message, sendOptions);
but the context.Publish(message, new PublishOptions()) method takes "PublishOptions", which does not have an option to defer.
Am I missing something here? I'd appreciate it if someone could help.
Some changes are not immediately effective, so we will have to defer some of those events.
The publisher should not be constrained by any of the subscribers.
Is it correct to assume that the Product Authoring system publishes ProductDataUpdate events regardless of when the actual effective date takes place? In that case, you are already notified about a decision that was made. What you, as a subscriber, are going to do with it is a different matter, and entirely internal.
You could send a command, for the sake of this discussion call it UpdateProductCost, that would be a delayed message if EffectiveDate is in the future. Otherwise, it's an immediate command.
I got an answer in another forum which I think is the most relevant, so I'm posting it here so that it can help someone in the future. Thanks to Daniel Marbach.
https://groups.google.com/forum/#!topic/particularsoftware/ivy1wdsycT8
Bus.Defer in v5 internally always did a send operation. The difference in v6 seems to be that v5 automatically disabled the messaging best-practice checks. You can achieve the same by calling:
var sendOptions = new SendOptions();
sendOptions.DoNotDeliverBefore(DateTimeOffset.Now.AddMinutes(30));
sendOptions.RouteToThisEndpoint();
sendOptions.DoNotEnforceBestPractices();
return context.Send(message, sendOptions);
https://docs.particular.net/nservicebus/messaging/best-practice-enforcement
I have failed to find an enterprise integration pattern or recipe that promotes a solution for this problem:
After the re-delivery attempts have been exhausted, I need to send a web service request back to the originating source, to notify the sender of a failed delivery.
Upon exhaustion of all re-delivery attempts, should I move the message to a dead letter queue? Then create a new consumer listening on that DL queue? Do I need a unique dead letter queue for each of my source message queues? Should I add a message header, noting the source queue, before I move it to the dead letter queue? If all messages go to a single dead letter queue, how should my consumer know where to send the web service request?
Can you point me to a book, blog post, or article? What is the prescribed approach?
I'm working with a really old version of Fuse ESB, but I expect solutions for ServiceMix to be equally applicable.
Or maybe, what I'm asking for is an anti-pattern or code-smell. Please advise.
If you are new to Camel and really want to get in-depth knowledge of it, I would recommend Camel in Action, a book by Claus Ibsen. There's a second edition in the works, with 14 out of 19 chapters already done, so you may also give that a shot.
If that's a bit too much, the online documentation is pretty okay; you can find out the basics just fine from it. For error handling, I recommend starting with the general error handling page, then moving on to the error handler docs and the exception policy documentation.
Generally, a dead letter channel is the way to go - Camel will automatically send to the DLC after retries have been exhausted; you just have to define the DLC yourself. And as its name implies, it's a channel and doesn't really need to be a queue - you can write to a file, invoke a web service, submit a message to a message queue, or just write to logs; it's completely up to you.
// error-handler DLC, will send to an HTTP endpoint when retries are exhausted
errorHandler(deadLetterChannel("http4://my.webservice.host/path")
    .useOriginalMessage()
    .maximumRedeliveries(3)
    .redeliveryDelay(5000));

// exception-clause DLC, will send to an HTTP endpoint when retries are exhausted
onException(NetworkException.class)
    .handled(true)
    .maximumRedeliveries(5)
    .backOffMultiplier(3)
    .redeliveryDelay(15000)
    .to("http4://my.webservice.host/otherpath");
I myself have always preferred having a message queue and then consuming from there for any further recovery or reporting. I generally include failure details like the exchange ID and route ID, the message headers, the error message, and sometimes even the stacktrace. The resulting message, as you can imagine, grows quite a bit, but it tremendously simplifies troubleshooting and debugging, especially in environments where you have quite a number of components and services. Here's a sample DLC message from one of my projects:
public class DeadLetterChannelMessage {

    private String timestamp = Times.nowInUtc().toString();
    private String exchangeId;
    private String originalMessageBody;
    private Map<String, Object> headers;
    private String fromRouteId;
    private String errorMessage;
    private String stackTrace;

    @RequiredByThirdPartyFramework("jackson")
    private DeadLetterChannelMessage() {
    }

    @SuppressWarnings("ThrowableResultOfMethodCallIgnored")
    public DeadLetterChannelMessage(Exchange e) {
        exchangeId = e.getExchangeId();
        originalMessageBody = e.getIn().getBody(String.class);
        headers = Collections.unmodifiableMap(e.getIn().getHeaders());
        fromRouteId = e.getFromRouteId();
        Optional.ofNullable(e.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class))
                .ifPresent(throwable -> {
                    errorMessage = throwable.getMessage();
                    stackTrace = ExceptionUtils.getStackTrace(throwable);
                });
    }

    // getters
}
When consuming from the dead letter queue, the route ID can tell you where the failure originated, so you can then implement routes that are specific to handling errors coming from there:
// general DLC handling route
from("{{your.dlc.uri}}")
    .routeId(ID_REPROCESSABLE_DLC_ROUTE)
    .removeHeaders(Headers.ALL)
    .unmarshal().json(JsonLibrary.Jackson, DeadLetterChannelMessage.class)
    .toD("direct:reprocess_${body.fromRouteId}"); // dispatch to the matching error-handling route

// handle errors from `myRouteId`
from("direct:reprocess_myRouteId")
    .log("Error: ${body.errorMessage} for ${body.originalMessageBody}");
    // you'll probably do something better here, e.g.
    // .convertBodyTo(WebServiceErrorReport.class) // requires a converter
    // .process(e -> { /* do some pre-processing, like setting headers/properties */ })
    // .toD("http4://web-service-uri/path"); // send to web-service

// for routes that have no DLC handling supplied
onException(DirectConsumerNotAvailableException.class)
    .handled(true)
    .useOriginalMessage()
    .removeHeaders(Headers.ALL)
    .to("{{my.unreprocessable.dlc}}"); // errors that cannot be recovered from
We need to send large messages on ServiceBus Topics. Current size is around 10MB. Our initial take is to save a temporary file in BlobStorage and then send a message with reference to the blob. The file is compressed to save upload time. It works fine.
Today I read this article: http://geekswithblogs.net/asmith/archive/2012/04/10/149275.aspx
The suggestion there is to split the message into smaller chunks and aggregate them again on the receiving side.
I can admit that it is a "cleaner approach", avoiding the round trip to blob storage. On the other hand, I prefer to keep things simple. The splitting mechanism introduces increased complexity; I mean, there must have been a reason why they didn't include that in the Service Bus from the beginning ...
Has anyone tried the splitting approach in real life situation?
Are there better patterns?
I wrote that blog article a while ago; the intention was to implement the splitter and aggregator patterns using the Service Bus. I found this question by chance when searching for a better alternative.
I agree that the simplest approach may be to use Blob storage to store the message body, and send a reference to that in the message. This is the scenario we are considering for a customer project right now.
I remember that a couple of years ago, some sample code was published that would abstract Service Bus and storage queues from the client application and handle the use of blob storage for large message bodies when required. (I think it was the CAT team at Microsoft, but I'm not sure.)
I can't find the sample with a quick Google search, but as it's probably a couple of years old, it will be out of date, since the Service Bus client library has been improved a lot since then.
I have used the splitting of messages when the message size was too large, but as this was for batched telemetry data there was no need to aggregate the messages, and I could just process a number of smaller batches on the receiving end instead of one large message.
Another disadvantage of the splitter-aggregator approach is that it requires sessions, and therefore a session-enabled queue or subscription. This means that all messages will require sessions, even smaller ones, and also that the session ID cannot be used for another purpose in the implementation.
If I were you I would not trust the code on the blog post, it was written a long time ago, and I have learned a lot since then :-).
The Blob Storage approach is probably the way to go.
Regards,
Alan
In case someone stumbles on the same scenario, the Claim Check approach will help.
Details:
Implement the Claim Check pattern
Use ServiceBus.AttachmentPlugin (assuming you use C#; optionally, you can create your own)
Use external storage, e.g. an Azure Storage account (optionally, you can use other storage)
C# Code Snippet
using System;
using System.Text;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;
using Newtonsoft.Json;
using ServiceBus.AttachmentPlugin;
...
// Getting connection information
var serviceBusConnectionString = Environment.GetEnvironmentVariable("SERVICE_BUS_CONNECTION_STRING");
var queueName = Environment.GetEnvironmentVariable("QUEUE_NAME");
var storageConnectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
// Creating config for sending message
var config = new AzureStorageAttachmentConfiguration(storageConnectionString);
// Creating and registering the sender using Service Bus Connection String and Queue Name
var sender = new MessageSender(serviceBusConnectionString, queueName);
sender.RegisterAzureStorageAttachmentPlugin(config);
// Create payload
var payload = new { data = "random data string for testing" };
var serialized = JsonConvert.SerializeObject(payload);
var payloadAsBytes = Encoding.UTF8.GetBytes(serialized);
var message = new Message(payloadAsBytes);
// Send the message
await sender.SendAsync(message);
References:
https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check
https://learn.microsoft.com/en-us/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/
https://www.enterpriseintegrationpatterns.com/patterns/messaging/StoreInLibrary.html
https://github.com/SeanFeldman/ServiceBus.AttachmentPlugin
https://github.com/mspnp/cloud-design-patterns/tree/master/claim-check/code-samples/sample-3
I tried posting on the boards of the authors of this library; however, it literally takes months for them to reply when it comes to free software (can't blame them).
But anyway,
I have found that this library behaves weirdly. For instance, a major problem with my application: when someone tries to sign in (through FTP) with a correct login but a mistyped password, no reply is received from the FTP server.
I tried doing the same from a command window just to verify that it's not the FTP server's fault, and the FTP commands were received instantaneously.
It almost looks as though this library eats the commands. The same actions will often yield different results.
Can anyone recommend a stable, reliable library to use with Compact framework? Or shed some light on this issue...?
I modified the source code inside ConnectThread() as follows:
// if a password is required, send it
if (response.ID == 331)
{
    response = SendCommand("PASS " + m_pwd, false);
    // ADDED THIS - try again.
    if (response.ID == 0)
    {
        response = SendCommand("PASS " + m_pwd, false);
    }
    // end of my addition
    if (!((response.ID == 202) || (response.ID == 230)))
    {
        m_cmdsocket.Close();
        m_cmdsocket = null;
        Disconnect();
        m_connected = false;
        return;
    }
}
This solved the issue for a while, but now it has started doing it again. The culprit seems to be that when 0 comes back as a response ID from the FTP server, the connection just stalls. I am not sure whether it is a socket issue or some other obscure problem, but I think I am going to give up at this point.
Which FTP set are you using, the stream-based classes in the SDF, or the separate one from the forums? If you're using the one from the forums (which is the one I actually recommend), then you've got the source. I wrote that one from the ground up looking at nothing but the RFC. It's really, really simple, and if it's "eating" responses, it's likely a timeout issue, though it should be easy to put in a breakpoint and see where it's coming apart.