I have a question about Publisher Confirms, specifically about correlating a published message with its confirmation.
In the tutorial I read that it is possible to correlate a published message with its confirmation by delivery tag.
Before publishing a message it is possible to get NextPublishSeqNo, and by this number it is possible to correlate the confirmation.
But is there a guarantee that a message published with NextPublishSeqNo X (the publishing sequence number) will always match the confirmation with delivery tag X? Or is there some risk of a mismatch, for example if the server never got the message because it was lost for some reason and assigned confirmation delivery tag X to another message, or a mismatch for some other reason?
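To make the question concrete, this is roughly the pattern I mean, along the lines of tutorial 7 with the Java client (queue name and message bodies are just examples): read the next sequence number immediately before each basicPublish, remember the outstanding message under that number, and clear entries when the confirm arrives (a confirm with multiple=true covers every sequence number up to and including the delivery tag).

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class ConfirmCorrelation {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.confirmSelect();                        // enable publisher confirms
            channel.queueDeclare("task_queue", true, false, false, null);

            // Outstanding messages keyed by their publish sequence number.
            ConcurrentNavigableMap<Long, String> outstanding = new ConcurrentSkipListMap<>();

            channel.addConfirmListener(
                (deliveryTag, multiple) -> {                // ack from the broker
                    if (multiple) {
                        outstanding.headMap(deliveryTag, true).clear();
                    } else {
                        outstanding.remove(deliveryTag);
                    }
                },
                (deliveryTag, multiple) -> {                // nack: these would need republishing
                    System.err.println("nacked, delivery tag " + deliveryTag + " multiple=" + multiple);
                });

            for (int i = 0; i < 10; i++) {
                String body = "message-" + i;
                long seqNo = channel.getNextPublishSeqNo(); // read *before* publishing
                outstanding.put(seqNo, body);
                channel.basicPublish("", "task_queue", null, body.getBytes("UTF-8"));
            }
            channel.waitForConfirmsOrDie(5_000);            // wait so the listener can drain the map
        }
    }
}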
I am trying to simply Get a message off an Azure Storage Queue and then delete it using the REST API.
I can retrieve the message and get a pop receipt, but when I try to use this to delete the message, I keep getting "The specified message does not exist."
In the documentation (https://learn.microsoft.com/en-us/rest/api/storageservices/delete-message2) it looks like you only have to supply the popreceipt; however, further down the page it says:
After a client retrieves a message with the Get Messages operation, the client is expected to process and delete the message. To delete the message, you must have two items of data returned in the response body of the Get Messages operation:
The message ID, an opaque GUID value that identifies the message in the queue.
A valid pop receipt, an opaque value that indicates that the message has been retrieved.
So this implies you do need to send the MessageId as well, but there is nothing in the docs that specifies where to place it.
The URI in the docs says to pass DELETE to
https://myaccount.queue.core.windows.net/myqueue/messages/messageid?popreceipt=string-value and I have tried replacing messageid with the actual messageid from the GET but this does not seem to be correct.
Has anyone used this and can explain why I always get "The specified message does not exist" when trying to DELETE the message off the queue or am I missing something?
When you GET/dequeue a message, the message becomes invisible for a certain amount of time. During that time, if it is not deleted by the process that dequeued it, it becomes visible again (as said by Gaurav Mantri) and has a chance to be picked up by another process; that is, when a Get Messages operation runs after the visibility timeout has elapsed, another message object is returned with a new pop receipt.
So please check the time at which you dequeue the message, and either wait until the timeout has expired or delete the message in time, since processing can sometimes take longer than the message's invisibility timeout.
Also please note that the maximum timeout interval for Queue service operations is 30 seconds. If the server timeout interval elapses before the service has finished processing the request, the service returns an error.
Once the message is dequeued again, the pop receipt is updated, so using the old one gives error code 404 (Not Found) because no message with a matching pop receipt will be found.
So please check by gradually increasing the invisibility timeout.
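As a sanity check, the delete call itself looks roughly like the sketch below: take the MessageId and PopReceipt from the most recent Get Messages response, put the message ID in the path, and URL-encode the pop receipt in the query string (it can contain characters such as '+'). Account, queue, SAS token and the two values are placeholders here; a 204 response means the delete succeeded.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DeleteQueueMessage {
    public static void main(String[] args) throws Exception {
        // Placeholders; messageId and popReceipt must come from the *latest*
        // Get Messages response for this message.
        String account = "myaccount";
        String queue = "myqueue";
        String sasToken = "sv=...";                    // SAS token with delete permission on the queue
        String messageId = "message-id-from-get";      // <MessageId> element from Get Messages
        String popReceipt = "pop-receipt-from-get";    // <PopReceipt> element from Get Messages

        String url = "https://" + account + ".queue.core.windows.net/" + queue
                + "/messages/" + messageId
                + "?popreceipt=" + URLEncoder.encode(popReceipt, StandardCharsets.UTF_8)
                + "&" + sasToken;

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("x-ms-version", "2019-12-12")
                .DELETE()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 204 No Content means the message was deleted; 404 usually means the
        // pop receipt is stale because the visibility timeout already expired.
        System.out.println(response.statusCode() + " " + response.body());
    }
}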
References:
Queue getmessage fails - some messages only (microsoft.com)
Setting timeouts for Queue service operations (REST API) - Azure Storage | Microsoft Docs
The Telegram documentation states:
Receipt of virtually all messages (with the exception of some purely service ones as well as the plain-text messages used in the protocol for creating an authorization key) must be acknowledged. This requires the use of the following service message (not requiring an acknowledgment):
msgs_ack#62d6b459 msg_ids:Vector&lt;long&gt; = MsgsAck;
This thread alludes to sending acks back to the server but not the mechanism by which those acks are sent. I attempted sending a MsgsAck and a msgs_ack to the server but they failed because those are data types, not constructors (methods). This leads me to two questions:
How does a Telegram client send acks back to the server? (Both individually and as part of a method call.)
How does a Telegram client differentiate between server responses that require an ack and those that don't? (It appears responses that include a req_msg_id require an ack, but I'd like confirmation.)
The simple way to go about this is:
1) Accumulate the msg_ids that you receive from the server, those that need to be acknowledged as indicated in the documentation: these are all content-related messages, not service messages.
2) Every time you want to send new messages to the server, you could include your accumulated acknowledgment messages in a message container along with the messages you intend to send.
3) If you have accumulated msg_ids to be acknowledged over a period of, say, X minutes without an opportunity to clear them via step 2) above, then you can simply send an acknowledgment message back to Telegram with the list of msg_ids to be acknowledged.
To send an acknowledgement use this:
msgs_ack#62d6b459 msg_ids:Vector<long> = MsgsAck;
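So msgs_ack is itself a constructor that you serialize and send like any other object; it just never expects an ack in return. As a rough sketch of the bare TL serialization (little-endian; 0x62d6b459 is the msgs_ack constructor number and 0x1cb5c415 the generic vector constructor), with made-up msg_ids, before the usual MTProto encryption and wrapping:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.List;

public class MsgsAckSketch {
    // Serialize msgs_ack#62d6b459 msg_ids:Vector<long> = MsgsAck; as bare TL bytes.
    static byte[] serialize(List<Long> msgIds) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + 4 + 8 * msgIds.size())
                                   .order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0x62d6b459);          // msgs_ack constructor number
        buf.putInt(0x1cb5c415);          // generic vector constructor number
        buf.putInt(msgIds.size());       // number of msg_ids that follow
        for (long id : msgIds) {
            buf.putLong(id);             // each msg_id is a 64-bit little-endian value
        }
        return buf.array();
    }

    public static void main(String[] args) {
        // msg_ids here are made up; in practice they are the msg_id values of the
        // content-related server messages you have received and not yet acknowledged.
        byte[] payload = serialize(List.of(0x5F3A1B2C00000001L, 0x5F3A1B2C00000002L));
        // This payload is then sent inside an encrypted MTProto message (or bundled
        // into a message container) exactly like any other object you send.
        System.out.println(payload.length + " bytes");
    }
}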
I would like to give feedback to users (message issuers) about when their messages will be processed (not when they are done).
Theoretically I could receive all the messages, count them, and requeue them. But that would be a bad idea, because newly arriving messages would then be processed immediately: all the messages that were ahead of them would end up requeued behind the new (originally last) messages.
Is there a way to determine the number of messages before a specific message with RabbitMQ?
I am afraid there is currently no way on the server side to find the number of messages published before a particular message, though RabbitMQ provides an HTTP API (the management API) which returns the number of messages currently in the queue.
You will have to maintain this statistic on the client side (like a counter or something), preferably at the publisher end. But make sure to publish each message only once in order to avoid duplicate counts, or implement your counter in such a way (keep a unique identifier per message) as to avoid counting the same message more than once (if republishing is allowed).
A similar question has been asked before:
RabbitMQ - Get total count of messages enqueued
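For the queue-level count mentioned above, the management plugin's HTTP API exposes it per queue. A minimal sketch (host, default vhost "/", queue name and guest/guest credentials are just the defaults here; a real client would use a JSON library rather than string slicing):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QueueDepth {
    public static void main(String[] args) throws Exception {
        // GET /api/queues/{vhost}/{queue}; "%2F" is the URL-encoded default vhost "/".
        String url = "http://localhost:15672/api/queues/%2F/my_queue";
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes("UTF-8"));

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body contains "messages", "messages_ready" and "messages_unacknowledged".
        // Crude extraction of the total, just to keep the sketch dependency-free:
        String body = response.body();
        int i = body.indexOf("\"messages\":");
        String count = body.substring(i + "\"messages\":".length()).split("[,}]")[0].trim();
        System.out.println("messages currently in queue: " + count);
    }
}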
In our scenario I'm thinking of using the pub/sub technique. However, I don't know which is the better option.
1 ########
When it is called externally, a web service of ours will publish a message saying that something has happened: ExternalPersonCreatedMessage!
This message will contain a field that represents the destinations to process the message into (multiple allowed).
Various subscribers will subscribe. These subscribers will filter the message to see if any action is required by checking the destination field.
2 ########
A web service of ours will parse the incoming call and publish specific types of messages depending on the destinations supplied in the field. i.e. many Destination[n]PersonCreatedMessage messages would be created.
Subscribers will subscribe to only the specific message they care about, i.e. without having to filter any messages.
QUESTIONS
Which of the above is the better option, and why? And how do I stop myself from making RequestMessages? From what I've read/seen, I should be trying to structure this in the form of PersonCreated, PersonDeleted, i.e. SOMETHING HAS HAPPENED, and NOT in the REQUEST SOMETHING TO HAPPEN form, such as CreatePerson or DeletePerson.
Are my thoughts correct? I've been looking for guidance on how to structure messages and make sure I don't go down a wrong path, but have found no guidance out there on do's and don'ts. Can anyone help and guide me? I want to try to get this correct from the off :)
Based on the integration scenario in the referenced article, it appears to me that you may need a Saga to complete the workflow of accept message -> operate on message -> send confirmation. In the case that the confirmation is sent immediately after the operation, you could use NSB's message handler pipeline feature, which allows you to chain handlers in a specified sequence such as...
First<FilterHandler>.Then<DoWorkHandler>().AndThen<SendConfirmationHandler>();
In terms of the content filtering, you can do this, although you incur some transport overhead: the queue will have to accept the message, and the process will always call the first handler on every message (you can short-circuit the above pipeline at any point). It may be the case that what you really want is a Distributor/Worker setup where all Workers are the same and you can handle some load.
If you truly have different endpoints with completely different logic, then I would have the Publisher process (which only accepts and Publishes messages) do the work of translating the inbound message into something else a Subscriber can then be interested in. If you then find that a given Published message only ever has one Subscriber, then you don't need to Publish at all; you just need to Bus.Send() to the correct endpoint.
The way NServiceBus handles pub-sub is more like your option two.
A publisher service has an input queue and a subscription store.
A subscriber service has an input queue.
The subscriber, on start-up, will send a subscription message to the input queue of the publisher.
The subscription message contains the type of message the subscriber is interested in and the subscriber's queue address.
The publisher records the subscription in the subscription store.
The publisher receives a message.
The publisher evaluates the message type against the list of subscriptions.
For each match found, the publisher sends the message to the subscriber's queue address.
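This is not NServiceBus code, but as a rough illustration of the mechanism described in the list above (all names made up): the publisher only keeps a map from message type to subscriber queue addresses, and at publish time it forwards the message to every matching address.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class SubscriptionStoreSketch {
    // message type name -> queue addresses of interested subscribers
    private final Map<String, Set<String>> store = new ConcurrentHashMap<>();

    // Called when a subscription message arrives on the publisher's input queue.
    public void handleSubscriptionMessage(String messageType, String subscriberQueueAddress) {
        store.computeIfAbsent(messageType, t -> new CopyOnWriteArraySet<>())
             .add(subscriberQueueAddress);
    }

    // Called when an event is published: look up subscribers for the type and forward.
    public void publish(String messageType, byte[] body) {
        for (String queueAddress : store.getOrDefault(messageType, Set.of())) {
            sendToQueue(queueAddress, body);   // transport-specific send, stubbed out here
        }
    }

    private void sendToQueue(String queueAddress, byte[] body) {
        System.out.println("sending " + body.length + " bytes to " + queueAddress);
    }

    public static void main(String[] args) {
        SubscriptionStoreSketch publisher = new SubscriptionStoreSketch();
        publisher.handleSubscriptionMessage("ExternalPersonCreatedMessage", "subscriberA@host1");
        publisher.handleSubscriptionMessage("ExternalPersonCreatedMessage", "subscriberB@host2");
        publisher.publish("ExternalPersonCreatedMessage", "{\"personId\":42}".getBytes());
    }
}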
In my opinion, you should stop thinking about destinations. Messages are messages. They should not have any inherent destination information in them. The subscription mechanism defines the addressing/routing requirements for the solution.
Hi,
I cannot see any explanation of the implementation of the collapse_key.
I think I understand what it does, but not how it does it!
Android Cloud to Device Messaging Framework
I have a C2DM framework set up and sending 4 types of messages to many phones.
The string messages are very basic and look kind of like this:
type:name:uuid
type:name:uuid:number
type:uuid:id
If the phone is off, many of these can pile up waiting for the phone to come online.
As far as I can tell my system works, but what will the collapse_key do for me here?
addEncodedParameter(sb, "collapse_key", "no_ide_what_to_put_here");
You mentioned retrying the same message 3 times and using the same key value. It doesn't really have to be the same message. If you've got a message that indicates the current price of a stock, for instance, and you really only care about the latest price, then you could send different messages with the same key. When the device comes back online, it only gets the latest price quote message.
This may have been what you were saying already, but I wanted to make it clear it's not only for retrying the same message.
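So for the snippet in the question, a reasonable choice is to use the message type as the collapse key, so that at most one pending message per type is kept while the phone is offline. A sketch of the server-side POST (the registration ID, auth token and payload values are placeholders, and the helper mirrors the one in the question):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class C2dmSendSketch {
    static void addEncodedParameter(StringBuilder sb, String name, String value) throws Exception {
        if (sb.length() > 0) sb.append('&');
        sb.append(name).append('=').append(URLEncoder.encode(value, "UTF-8"));
    }

    public static void main(String[] args) throws Exception {
        String registrationId = "REGISTRATION_ID_FROM_DEVICE";   // placeholder
        String authToken = "GOOGLE_LOGIN_AUTH_TOKEN";            // placeholder
        String type = "price_update";                            // your message "type" field

        StringBuilder sb = new StringBuilder();
        addEncodedParameter(sb, "registration_id", registrationId);
        // One pending message per type: while the phone is offline, a newer
        // "price_update" replaces an older one instead of piling up.
        addEncodedParameter(sb, "collapse_key", type);
        addEncodedParameter(sb, "data.payload", type + ":name:uuid");

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://android.apis.google.com/c2dm/send").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Authorization", "GoogleLogin auth=" + authToken);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(sb.toString().getBytes("UTF-8"));
        }
        System.out.println("C2DM response code: " + conn.getResponseCode());
    }
}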
I found this text: "'collapse key' is used for overriding old messages with the same key on the Google C2DM servers." I think that if I'm retrying sending the same message 3 times, I must use the same key value, right? The Google cloud server will send the latest message with the same key value.
...but be aware of the following (from http://code.google.com/intl/sv-SE/android/c2dm/):
"Note that since there is no guarantee of the order in which messages get sent, the "last" message may not actually be the last message sent by the application server."
But maybe this is not an issue if you don't generate a lot of messages.