I'm writing a RabbitMQ adapter for our in-house messaging framework, which accepts a DateTime ExpiryTimestampUtc for each message.
The maximum message TTL value isn't documented; the docs only say:
The value of the TTL argument or policy must be a non-negative integer (0 <= n), describing the TTL period in milliseconds. Thus a value of 1000 means that a message added to the queue will live in the queue for 1 second or until it is delivered to a consumer. The argument can be of AMQP 0-9-1 type short-short-int, short-int, long-int, or long-long-int.
https://www.rabbitmq.com/ttl.html
In our code, we send an expiry timestamp of 9999-12-31 23:59:59.999 if the message is essentially never meant to expire. We handle this in the adapter with:
properties.Expiration = ((long)(message.ExpiryTimestampUtc - _clock.UtcNow).TotalMilliseconds).ToString();
This crashes when publishing the message to RabbitMQ:
Already closed: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text='PRECONDITION_FAILED - invalid expiration '251733866845480': {value_too_large,251733866845480}', classId=60, methodId=40
By trial and error, it seems that the maximum TTL is around 288094644805 milliseconds, which puts it at around 3334 days (~ end of 2031 at the time of writing). This appears to be a rolling value, so the maximum expiry date today is earlier than the maximum expiry date a week from now.
This comment from 2015:
The limit used to be 4294967295 (2^32-1) milliseconds, so about 49 days.
We have eliminated it in recent releases. The limit now is likely high enough to
not matter in most cases :)
https://groups.google.com/g/rabbitmq-users/c/cvvkB0rOAdU
still didn't answer the question.
Does anyone know the answer for sure?
I'm thinking that (although it introduces inconsistent behaviour in our framework) if the computed TTL is greater than 288094644805 ms, we simply don't set an expiration. Technically it's wrong, but tbh by then the message is going to be stale anyway ;)
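As an illustration of that guard, here is a minimal sketch using the RabbitMQ Java client (the MAX_OBSERVED_TTL_MS constant is just the trial-and-error value from above, not an official limit, and the helper name is made up):

import com.rabbitmq.client.AMQP;

// Trial-and-error ceiling observed above; not an official RabbitMQ limit.
static final long MAX_OBSERVED_TTL_MS = 288_094_644_805L;

static AMQP.BasicProperties propsFor(long ttlMillis) {
    AMQP.BasicProperties.Builder builder = new AMQP.BasicProperties.Builder();
    if (ttlMillis >= 0 && ttlMillis <= MAX_OBSERVED_TTL_MS) {
        builder.expiration(Long.toString(ttlMillis)); // per-message TTL in ms, as a string
    }
    // Otherwise leave expiration unset: the message never expires.
    return builder.build();
}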
I tried setting the timeouts very small, to force failures and see what happens:
ClientBuilder.newBuilder()
        .readTimeout(1, TimeUnit.NANOSECONDS)
        .connectTimeout(1, TimeUnit.NANOSECONDS)
        .build();
But the code still seems to hang for what feels like the default timeout values.
readTimeout and connectTimeout both accept a TimeUnit parameter, so it makes sense that NANOSECONDS would be OK, right?
The Javadoc for both of these reads:
Value 0 represents infinity. Negative values are not allowed.
And these are internally converted to MILLISECONDS via TimeUnit.convert, which states:
Conversions from finer to coarser granularities truncate, so lose precision.
That is what is happening here. TimeUnit.convert even has an example:
For example, converting {@code 999} milliseconds to seconds results in {@code 0}.
The same thing happens when converting 1 nanosecond to milliseconds: the result is 0.
And 0 is infinity... that is, the operating system default timeouts.
In hindsight this is obvious, but none of the Javadocs indicate that the specified times are internally converted to MILLISECONDS, or warn about the loss of precision.
And I've wasted days wondering why this wasn't working, when I should have remembered from years of network programming that milliseconds are the default units.
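The truncation is easy to demonstrate in isolation:

import java.util.concurrent.TimeUnit;

public class TruncationDemo {
    public static void main(String[] args) {
        // 1 nanosecond truncates to 0 milliseconds, which the client treats as infinite
        System.out.println(TimeUnit.MILLISECONDS.convert(1, TimeUnit.NANOSECONDS)); // 0
        // the Javadoc example: 999 ms converted to seconds also truncates to 0
        System.out.println(TimeUnit.SECONDS.convert(999, TimeUnit.MILLISECONDS));   // 0
    }
}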
I'm writing a very specific application protocol to enable communication between 2 nodes. Node 1 is an embedded platform (a microcontroller), while node 2 is an ordinary computer.
The protocol defines messages of variable length. This means that node 1 sometimes sends a 100-byte message to node 2, and at other times a 452-byte message.
The protocol must be independent of how the messages are transmitted. For instance, the same message could be sent over USB, Bluetooth, etc.
Let's assume that a protocol message is defined as:
| Length (4 bytes) | ...Payload (variable length)... |
I'm struggling with how the receiver can determine the length of an incoming message. So far, I have thought of 2 approaches.
1st approach
The sender sends the length first (4 bytes, always fixed size), and the message afterwards.
For instance, the sender does something like this:
// assuming that the parameters of send() are: data, length of data,
// and that msg_length counts the whole frame, including the 4-byte length field
send(msg_length, 4)
send(msg, msg_length - 4)
While the receiver side does:
msg_length = receive(4)
msg = receive(msg_length - 4) // subtract the 4-byte length field itself
This may be OK with some "physical protocols" (e.g. UART), but with more complex ones (e.g. USB), transmitting the length in a separate packet may introduce some overhead, because an additional USB packet (plus control data and ACK packets) has to be transmitted for only 4 bytes.
However, with this approach the receiver side is pretty simple.
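For illustration, a minimal Java version of that receiver, assuming a blocking stream and a big-endian length field that counts the whole frame (as in the pseudocode above):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads one length-prefixed frame and returns its payload.
static byte[] readFrame(InputStream in) throws IOException {
    DataInputStream din = new DataInputStream(in);
    int frameLength = din.readInt();            // the 4-byte length field
    byte[] payload = new byte[frameLength - 4]; // the field counts itself, too
    din.readFully(payload);                     // blocks until the full payload arrives
    return payload;
}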
2nd approach
The alternative would be that the receiver keeps receiving data into a buffer, and at some point tries to find a valid message. Valid means: finding the length of the message first, and then its payload.
Most likely this approach requires adding some "start message" byte(s) at the beginning of the message, such that the receiver can use them to identify where a message is starting.
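A sketch of such a resynchronising receiver in Java; the single-byte 0xA5 marker is an arbitrary choice for illustration:

import java.nio.ByteBuffer;

// Scans a buffer (left in write mode by the caller) for: marker byte,
// 4-byte length, payload. Returns one payload if a complete frame is
// present, otherwise null.
static byte[] tryExtractFrame(ByteBuffer buf) {
    buf.flip(); // switch to read mode
    try {
        while (buf.remaining() >= 5) { // marker + length field
            if ((buf.get(buf.position()) & 0xFF) != 0xA5) {
                buf.get(); // not a marker: discard one byte and resynchronise
                continue;
            }
            int length = buf.getInt(buf.position() + 1);
            if (buf.remaining() < 1 + 4 + length) {
                return null; // frame incomplete: wait for more bytes
            }
            buf.get();    // consume the marker
            buf.getInt(); // consume the length field
            byte[] payload = new byte[length];
            buf.get(payload);
            return payload;
        }
        return null;
    } finally {
        buf.compact(); // back to write mode, keeping any unread bytes
    }
}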
I'm running a real-time RabbitMQ queue. I'd like to consume only the most recent entry, ignoring all others.
Better yet, is it possible to have a fanout exchange with a singleton queue size?
Yes, this can be done by specifying the maximum queue length limit when declaring the queue.
As the documentation states,
The maximum length of a queue can be limited to a set number of messages, or a set number of bytes (the total of all message body lengths, ignoring message properties and any overheads), or both.
The default behaviour for RabbitMQ when a maximum queue length or size is set and the maximum is reached is to drop or dead-letter messages from the front of the queue (i.e. the oldest messages in the queue). To modify this behaviour, use the overflow setting described below.
If you're using Java, you would do the following:
import java.util.HashMap;
import java.util.Map;
Map<String, Object> args = new HashMap<>();
args.put("x-max-length", 1); // keep at most one message in the queue
channel.queueDeclare("myqueue", false, false, false, args);
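The default overflow behaviour is already the drop-from-the-front one quoted above, but you can make it explicit with the x-overflow argument:

args.put("x-overflow", "drop-head"); // drop the oldest message when the limit is hit

With x-max-length set to 1, the queue then always holds only the newest message, which is the singleton-queue behaviour asked about.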
I set ordered to true; however, when many messages (1000 or more) are sent in a short period of time (< 1 second), they are not all received in the same order.
rtcPeerConnection.createDataChannel("app", {
ordered: true,
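  // note: setting maxPacketLifeTime makes the channel partially reliable,
  // so messages whose retransmit time expires can be dropped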
maxPacketLifeTime: 3000
});
I could provide a minimal example to reproduce this strange behavior if necessary.
I also use bufferedAmountLowThreshold and the associated event to pause sending when the buffered amount gets too big. I chose 2000 but I don't know what the optimal value is. The reason I have so many messages in a short period of time is that I don't want to exceed the maximum size of a single message, so I split the data into 800-byte chunks and send those. Again, I don't know what the maximum size of one message is.
const SEND_BUFFERED_AMOUNT_LOW_THRESHOLD = 2000; //Bytes
rtcSendDataChannel.bufferedAmountLowThreshold = SEND_BUFFERED_AMOUNT_LOW_THRESHOLD;
const MAX_MESSAGE_SIZE = 800;
Everything works fine for small data that is not split into too many messages. The error occurs randomly, and only for big files.
As of 2016-11-01, there is a bug that lets the dataChannel.bufferedAmount value change during the execution of an event-loop task. Relying on this value can therefore cause unexpected results. It is possible to cache dataChannel.bufferedAmount manually and use that value to avoid the issue.
See https://bugs.chromium.org/p/webrtc/issues/detail?id=6628
I'm trying to use ActiveMQPrefetchPolicy but cannot quite understand how to use it.
I'm using a queue, and there are 3 parameters that I can define for the PrefetchPolicy:
queuePrefetch, queueBrowserPrefetch, inputStreamPrefetch
I don't understand the meaning of queueBrowserPrefetch and inputStreamPrefetch, so I do not know how to use them.
I assume that you have seen the ActiveMQ page on prefetch limits.
queueBrowserPrefetch sets the maximum number of messages sent to an ActiveMQQueueBrowser until acks are received.
inputStreamPrefetch sets the maximum number of messages sent through a jms-stream until acks are received.
Both queue-browser and jms-stream are specialized consumers. You can read more about each of them, but if you are not using them, it won't matter what you assign to their prefetch limits.
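For completeness, a minimal sketch of configuring the policy in code; the broker URL and the limit values are placeholders:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");
ActiveMQPrefetchPolicy policy = new ActiveMQPrefetchPolicy();
policy.setQueuePrefetch(1000);       // plain queue consumers
policy.setQueueBrowserPrefetch(500); // QueueBrowser consumers
factory.setPrefetchPolicy(policy);

The same values can also be set on the broker URL, e.g. tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1000.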