How can I change the default behaviour that makes my queues durable? I want them to be non-durable. The queues are created at runtime as a backend for websockets.
There is a default exchange defined which has the durable flag set to TRUE. I played around with exchanges but could not make it work the way I expect.
When you declare a queue using the Channel class, you can see those parameters:
Queue.DeclareOk queueDeclare(String queue, boolean durable, boolean exclusive, boolean autoDelete,
                             Map<String, Object> arguments) throws IOException;
I am using:
'amqp-client:3.5.4'
Typically, you just set durable=false in whatever library is declaring the queue.
For example,
Python: channel.queue_declare(queue='hello', durable=False)
Java:
boolean durable = false;
channel.queueDeclare("hello", durable, false, false, null);
You can find examples in other languages in the work queues tutorial on RabbitMQ.com.
You should consult the documentation for the library you are using, though.
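For completeness, here is a minimal Kotlin sketch using the Java amqp-client (assuming a 4.x+ client where Connection and Channel are AutoCloseable; the host and queue name are made up). One caveat worth knowing: re-declaring an existing durable queue with durable=false fails with a channel-level PRECONDITION_FAILED error, so delete the old queue first.

import com.rabbitmq.client.ConnectionFactory

fun main() {
    val factory = ConnectionFactory().apply { host = "localhost" } // assumed local broker
    factory.newConnection().use { conn ->
        conn.createChannel().use { channel ->
            // durable = false, exclusive = false, autoDelete = false, no extra arguments
            channel.queueDeclare("ws-backend", false, false, false, null)
        }
    }
}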
The following code prints only 10000, i.e. only the last element:
val channel = BroadcastChannel<Int>(Channel.CONFLATED)
val flowJob = channel.asFlow().buffer(Channel.UNLIMITED).onEach {
    println(it)
}.launchIn(GlobalScope)

for (i in 0..100) {
    channel.offer(i * i)
}
flowJob.join()
The code can be run in the playground.
Since the flow is launched on a separate dispatcher, the values are sent to the channel, and the flow has an unlimited buffer, I would expect onEach to be invoked for every element. Why is only the last element received?
Is this the expected behaviour or a bug? If it is expected, how could one keep pushing only the newest elements to the channel while every flow with a sufficient buffer still receives each element?
Actually, this is about the "conflate" way of buffering. A flow can be buffered in several ways, for example with buffer(), collectLatest(), or conflate(), and each has its own behaviour. With conflate(), when the collector is too slow, intermediate values are skipped so that the emitter never has to wait, even though every value is emitted in its own coroutine. A conflated channel does essentially the same thing.
Here is the official doc explanation:
When a flow represents partial results of the operation or operation
status updates, it may not be necessary to process each value, but
instead, only most recent ones. In this case, the conflate operator
can be used to skip intermediate values when a collector is too slow
to process them.
Check out this link.
The explanation is written for flows, but the feature is the same one you are using: conflation behaves identically for channels and flows.
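To see the skipping in isolation, here is a small sketch (not from the original post) of the same behaviour with conflate() on a plain flow; the delays are made up to force a slow collector:

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    (1..5).asFlow()
        .onEach { delay(10) }  // fast emitter
        .conflate()            // skip intermediate values while the collector is busy
        .collect {
            delay(100)         // slow collector
            println(it)        // typically prints 1 and 5; 2..4 are conflated away
        }
}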
The problem here is the Channel.CONFLATED. Taken from the docs:
Channel that buffers at most one element and conflates all subsequent `send` and `offer` invocations,
so that the receiver always gets the most recently sent element.
Back-to-back sent elements are _conflated_ -- only the most recently sent element is received,
while previously sent elements **are lost**.
Sender to this channel never suspends and [offer] always returns `true`.
This channel is created by `Channel(Channel.CONFLATED)` factory function invocation.
This implementation is fully lock-free.
So this is why you only get the most recent (last) element. I'd use an UNLIMITED channel instead:
val channel = Channel<Int>(Channel.UNLIMITED)
val flowJob = channel.consumeAsFlow().onEach {
    println(it)
}.launchIn(GlobalScope)

for (i in 0..100) {
    channel.offer(i * i)
}
flowJob.join()
As some of the comments stated, Channel.CONFLATED stores only the last value, and you are offering to the channel itself, so the buffer on your flow does not help.
Also, join() suspends until the Job completes, which in your case is never; that's why you needed the timeout.
val channel = Channel<Int>(Channel.RENDEZVOUS)
val flowJob = channel.consumeAsFlow().onEach {
    println(it)
}.launchIn(GlobalScope)

GlobalScope.launch {
    for (i in 0..100) {
        channel.send(i * i)
    }
    channel.close()
}
flowJob.join()
Check out this solution (playground link): with Channel.RENDEZVOUS, the channel accepts a new element only once the previous one has been consumed.
This is why we have to use send instead of offer: send suspends until it can deliver the element, while offer just returns a boolean indicating whether the element was accepted.
Finally, we have to close the channel so that join() does not suspend for eternity.
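To see the send/offer difference in isolation, a small sketch using the pre-1.5 offer API from the question (newer coroutines versions replace offer with trySend):

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun main() = runBlocking {
    val channel = Channel<Int>(Channel.RENDEZVOUS)
    println(channel.offer(1)) // false: no consumer is ready, so the element is dropped
    launch { println(channel.receive()) }
    channel.send(2)           // suspends until the consumer above takes the element
    channel.close()
}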
SharedFlow has just been introduced in coroutines 1.4.0-M1, and it is meant to replace all BroadcastChannel implementations (as stated in the design issue description).
I have a use case where I use a BroadcastChannel to represent incoming web socket frames, so that multiple listeners can "subscribe" to the frames.
The problem I have when I move to a SharedFlow is that I can't "end" the flow when I receive a close frame, or an upstream error (which I would like to do to inform all subscribers that the flow is over).
How can I make all subscriptions terminate when I want to effectively "close" the SharedFlow?
Is there a way to tell the difference between normal closure and closure with exception? (like channels)
If MutableSharedFlow doesn't allow conveying the end of the flow to subscribers, what is the alternative if BroadcastChannel gets deprecated/removed?
The SharedFlow documentation describes what you need:
Note that most terminal operators like Flow.toList would also not complete, when applied to a shared flow, but flow-truncating operators like Flow.take and Flow.takeWhile can be used on a shared flow to turn it into a completing one.
SharedFlow cannot be closed like BroadcastChannel and can never represent a failure. All errors and completion signals should be explicitly materialized if needed.
Basically, you need to introduce a special object that you emit through the shared flow to indicate that it has ended; applying takeWhile at the consumer end makes each subscription complete once that special object is received.
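A minimal sketch of that materialization, with a made-up FrameEvent type standing in for the web socket frames: Closed is the explicit end-of-stream marker, and takeWhile completes every subscription when it arrives.

import kotlinx.coroutines.flow.*

sealed class FrameEvent {
    data class Payload(val text: String) : FrameEvent()
    object Closed : FrameEvent() // explicit, materialized end-of-stream marker
}

class FrameBus {
    private val _events = MutableSharedFlow<FrameEvent>()

    // Each subscription completes as soon as Closed is emitted.
    val frames: Flow<FrameEvent.Payload> = _events
        .takeWhile { it !is FrameEvent.Closed }
        .filterIsInstance<FrameEvent.Payload>()

    suspend fun send(text: String) = _events.emit(FrameEvent.Payload(text))
    suspend fun close() = _events.emit(FrameEvent.Closed)
}

An exceptional closure can be materialized the same way, e.g. with an extra Failed(cause: Throwable) variant that subscribers rethrow, which also distinguishes normal closure from closure with an exception.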
I think a possible solution is to create a boolean flag isValid and publicly expose only flows with .takeWhile { isValid }. Then just call isValid = false and _sharedFlow.emit(Unit) when you want to close all subscribers.
Possible implementation:
private var isValid = true // In a real scenario use an atomic boolean
private val _sharedFlow = MutableSharedFlow<Unit>()
val sharedFlow: Flow<Unit> get() = _sharedFlow.takeWhile { isValid }

suspend fun cancelSharedFlow() {
    isValid = false
    _sharedFlow.emit(Unit)
}
EDIT: In my case .emit() was always suspending, so I had to use BufferOverflow.DROP_LATEST (which is not suitable for many use cases). I'm not sure whether the problem is in this example or elsewhere in my app. If you see a problem, please comment :)
I send the following message with content type application/json:
However, when I get the message from the same RabbitMQ web console, it shows the payload as a String.
What am I doing wrong? Or am I fundamentally misunderstanding, and the payload is always of type String?
From the official docs:
AMQP messages also have a payload (the data that they carry), which AMQP brokers treat as an opaque byte array. The broker will not inspect or modify the payload. It is possible for messages to contain only attributes and no payload. It is common to use serialisation formats like JSON, Thrift, Protocol Buffers and MessagePack to serialize structured data in order to publish it as the message payload. AMQP peers typically use the "content-type" and "content-encoding" fields to communicate this information, but this is by convention only.
So basically, RabbitMQ has no knowledge of JSON; all messages are just byte arrays to it.
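To make that concrete, here is a minimal Kotlin sketch with the Java amqp-client (queue name and payload are made up): the application itself turns the JSON into UTF-8 bytes, and content_type is only a conventional hint for consumers.

import com.rabbitmq.client.AMQP
import com.rabbitmq.client.ConnectionFactory

fun main() {
    val factory = ConnectionFactory().apply { host = "localhost" }
    factory.newConnection().use { conn ->
        conn.createChannel().use { channel ->
            channel.queueDeclare("events", false, false, false, null)
            val json = """{"id":42,"name":"example"}""" // serialized by the application
            val props = AMQP.BasicProperties.Builder()
                .contentType("application/json") // ignored by the broker; a hint for consumers
                .build()
            // The broker only ever sees these opaque bytes.
            channel.basicPublish("", "events", props, json.toByteArray(Charsets.UTF_8))
        }
    }
}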
From a NodeJS context:
If we try to send a JSON object as the message, we may get the following error:
The first argument must be of type string or an instance of Buffer,
ArrayBuffer, or Array or an Array-like Object. Received an instance of
Object
So, we can convert the JSON payload to a string and parse it back in the worker. We stringify the JSON object before sending the data to the queue:
let payloadAsString = JSON.stringify(payload);
And on the worker's end, we can then JSON.parse it:
let payload = JSON.parse(msg.content.toString());
// then access the object as we normally do, e.g.:
let id = payload.id;
For anyone using .NET to send objects via RabbitMQ:
You have to serialise your JSON object to a byte array, send it via RabbitMQ, then de-serialise it after receiving. You can do this like this:
Install the Newtonsoft JSON library
using Newtonsoft.Json;
Create a model for your JSON object message (in this case AccountMessage)
Serialise your object into a byte array like this (use UTF-8 consistently on both ends):
byte[] messageBuffer = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(accountMessage));
After receiving the message data, you can de-serialise like this:
AccountMessage receivedMessage = JsonConvert.DeserializeObject<AccountMessage>(Encoding.UTF8.GetString(body));
From the RabbitMQ documentation, under "Content Type and Encoding":
The content (MIME media) type and content encoding fields allow publishers to communicate how the message payload should be deserialised and decoded by consumers.
RabbitMQ does not validate or use these fields; they exist for applications and plugins to use and interpret.
By the way, in the RabbitMQ web GUI the field is called content_type, whereas in code (JavaScript confirmed) the key name is contentType. It's a subtle difference, but enough to drive you crazy.
I'm trying to force CAN signals to given values using the COM interface of CANalyzer. Since there is no COM method to send CAN messages, I'm implementing a workaround using CAPL:
void SendMySignal(int value)
{
  message MyMessage msg;
  msg.MySignal = value;
  output(msg);
}
This works fine. However, since MyMessage and MySignal are referenced statically (by name) here, I would have to implement N functions to be able to send N signals (or an N-way switch statement, etc.). Is there a way to avoid the hassle and access signals inside a message by string? Something like this:
void SendSignal(int MessageID, char SignalName[], int value)
I'm also open to alternative solutions in case I have missed something in the COM interface. If there is a solution that only works with CANoe, I can ask my boss for a license, but of course I'd prefer to do without.
There is such a function, but it is restricted to test nodes:
long setSignal(char signalName[], double aValue);
You can find details in:
CAPL Function Overview » Test Feature Set / Signal Access » SetSignal
Special Use Case: Signal is not known before Measurement Start
Also, take care not to send a new message for every signal update, to avoid flooding the bus. In my opinion it is better style to set all signals of a whole message and, for non-cyclic messages, send it on change only. Signal updates in cyclic messages mostly have to be sent with the next cycle.
Following the guidelines here, I'm able to set the "consumer_cancel_notify" property for my client connection, but when the queue is deleted the client still isn't notified. I'm guessing that I probably have to override some method or set a callback somewhere, but after digging through the source code I'm lost as to where I'd do this. Does anybody know offhand where I'd listen for this notification?
OK, here's how I got it to work:
When creating the queue (i.e. "declaring" the queue), add a callback for the AMQP_CANCEL event.
Inside AMQPQueue::sendConsumeCommand(), inside the while(1) loop where the code checks the different frame.payload.method.id values, add a check for AMQP_BASIC_CANCEL_METHOD, e.g.:
if (frame.payload.method.id == AMQP_BASIC_CANCEL_METHOD) {
    cout << "AMQP_BASIC_CANCEL_METHOD received" << endl;
    if (events.find(AMQP_CANCEL) != events.end()) {
        (*events[AMQP_CANCEL])(pmessage);
    }
    continue;
}
That's it.
For my purposes I wanted to redeclare the queue if it got deleted so that I could keep consuming messages, so inside my callback I just redeclared the queue, set up the bindings, added the events, set the consumer tag, and consumed again.
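For comparison only (the answer above is for the C++ library): the Java amqp-client, also usable from Kotlin, surfaces the same broker-initiated cancel through Consumer.handleCancel, so no library patching is needed there. A rough sketch with a made-up queue name:

import com.rabbitmq.client.*

fun main() {
    val channel = ConnectionFactory().apply { host = "localhost" }
        .newConnection()
        .createChannel()
    channel.queueDeclare("frames", false, false, false, null)
    channel.basicConsume("frames", true, object : DefaultConsumer(channel) {
        override fun handleDelivery(consumerTag: String, envelope: Envelope,
                                    properties: AMQP.BasicProperties, body: ByteArray) {
            println(String(body))
        }
        // Called when the broker cancels the consumer, e.g. because the queue was deleted.
        override fun handleCancel(consumerTag: String) {
            println("Consumer $consumerTag cancelled; redeclare the queue and consume again here")
        }
    })
}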