Message broker with dynamic queues - RabbitMQ

I have an application that accepts data for updating product prices, and I'm wondering how I can optimize it.
Data is received via a queue (RabbitMQ).
A few key notes:
I can't change the incoming data format (the data is received from a third party).
Updates must be performed in order from a product's perspective (due to attributes).
Each product CAN have additional attributes that make the system behave differently when updating prices internally.
I was thinking about using a messaging system to distribute the processing, something like this:
where:
Q1 is a queue handling only p1 product updates,
Q2 is a queue handling only p2 product updates,
and so on.
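Roughly, a sketch of what I had in mind (hypothetical pika code; the connection details and queue-naming scheme are made up for illustration): each product gets its own queue, declared on demand, so updates for a single product stay in order.

```python
import pika  # assumed AMQP client; any client library would do

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

def publish_price_update(product_id: str, payload: bytes) -> None:
    # One queue per product, created dynamically the first time the product is seen.
    queue_name = f"product-updates.{product_id}"  # hypothetical naming scheme
    channel.queue_declare(queue=queue_name, durable=True)
    channel.basic_publish(
        exchange="",                # default exchange routes by queue name
        routing_key=queue_name,
        body=payload,
        properties=pika.BasicProperties(delivery_mode=2),  # persistent
    )
```

The concern below is exactly about this dynamic declaration: consumers would somehow have to discover and subscribe to queues they don't know about in advance.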
However, I have found that this is likely to be an anti-pattern: dynamic queue creation with RabbitMQ.
For example, it seems this would be quite hard to achieve with RabbitMQ, since we need predefined queues in order to listen to them.
The questions are:
1) If this pattern is not valid, which pattern should I use instead?
2) If this pattern is valid, is there a different messaging system that would allow distributing data this way?

Building Rest API response Object based on consumers requests

I am building a REST API and below are my endpoints.
EndPoint 1:
/products/{code} --> gives product information
Endpoint 2:
/products/{code}/packages --> provides packages for a given product code
Endpoint 3:
/products/{code}/suppliers --> provides suppliers for a given product code
Endpoint 4:
/products/{code}/shelfTags --> provides shelfTags for a given product code
We have multiple downstream systems (more than 20) which require products and their related information.
Note: not all consumers require the nested collection information; some clients need only product information. Below are the combinations, which vary by consumer:
1. product info only --> **consumer 1**
2. product , packages --> **consumer 2**
3. product, suppliers, packages--> **consumer 3**
4. product, supplier, packages, shelfTags--> **consumer 4**
5. product, supplier, shelfTags --> **consumer 5**
6. product, shelfTags --> **consumer 6**
7. etc...
From the above example, consumer 4 makes an HTTP call to get the product and then has to make multiple HTTP calls to get packages (Endpoint 2), suppliers (Endpoint 3), shelfTags (Endpoint 4), etc. Is this a good design?
Is there a way consumers can get only what they want in the response of one request? (Is it good design to serve all the data needs in one request, or is it better to ask consumers to make multiple HTTP calls to get the nested collections?)
Note: I cannot include all nested collections in the Products Endpoint 1 itself, as that requires heavy data querying, so I am planning to provide only what a consumer may need. That will reduce unnecessary querying and avoid returning irrelevant information to consumers who don't need it.
Current Design:
I have below now:
Approach 1:
/products/{code}?options=packages,suppliers
The above would give the product details, plus an options query parameter based on which I can decide whether to include packages, suppliers, shelfTags, etc. However, here we are not using the query parameter to filter the resource. I believe this is not a good approach, as query params are conventionally used to filter resources.
Approach 2:
Form a different endpoint, since a query parameter on a resource is only for filtering (if I am not wrong), so I am looking at the option below:
/products/{code}/extendedProductDetails?options=packages,suppliers
In Approach 2, extendedProductDetails is an operation rather than a resource itself, and I am filtering on the operation.
Can anyone suggest how to solve this requirement?
Approach 1 vs. Approach 2
Assuming that you want to use REST, from my point of view, between the options you gave I would go with something like Approach 2, since it is a proper collection for extended information. However, I think I'd prefer to model it as /products-extended/{code}?options=packages,suppliers, since it defines a different collection.
Besides enhancing the readability of the API, this way you have the products collection and the products-extended collection: each of them can be consumed independently and with different query string filters (of course, less filtering tends to increase complexity and latency, but in my opinion query string parameters should be optional). If they really must not be optional and there is always the need to provide a product id and at least one nested collection, then you can also consider designing something like products-extended/{code}/{packages,suppliers,etc}. Either way, this would "protect" your products collection.
Moreover, this would allow you to perform different operations (POST, PUT, ...) on both collections, if your use case requires that.
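For illustration only, a minimal sketch of that shape (a Flask 2.x sketch; the route names, fields, and loaders below are placeholders rather than your actual API):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder loaders for the nested collections; swap in real data access.
LOADERS = {
    "packages": lambda code: [{"packageId": "pkg-1"}],
    "suppliers": lambda code: [{"supplierId": "sup-1"}],
    "shelfTags": lambda code: [{"tagId": "tag-1"}],
}

@app.get("/products/<code>")
def product(code):
    # The plain collection stays lightweight: product info only.
    return jsonify({"code": code, "name": "example product"})

@app.get("/products-extended/<code>")
def product_extended(code):
    # Optional ?options=packages,suppliers decides which nested collections
    # get loaded and embedded in the single response.
    wanted = [o for o in request.args.get("options", "").split(",") if o]
    body = {"code": code, "name": "example product"}
    for option in wanted:
        if option in LOADERS:
            body[option] = LOADERS[option](code)
    return jsonify(body)
```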
Other approaches
Besides the other suggestions (GraphQL - would be great, yes :) -, OData, or the custom media types), couldn't you stick with only the individual collections? Depending on your use case, maybe you could perform parallel calls to /products/{code}/packages, /products/{code}/suppliers and so on, since you already know the product id. Perhaps the major drawback of this design would be, for example, creating new products. However, the GET requests become super easy :)
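If that fits, the parallel-call idea could look roughly like this on the consumer side (the base URL and response shapes are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor
import requests

BASE_URL = "https://api.example.com"  # placeholder

def fetch(path: str) -> dict:
    resp = requests.get(f"{BASE_URL}{path}")
    resp.raise_for_status()
    return resp.json()

def product_with_details(code: str) -> dict:
    # The product id is already known, so the nested collections can be
    # fetched concurrently instead of one HTTP call after another.
    paths = [
        f"/products/{code}",
        f"/products/{code}/packages",
        f"/products/{code}/suppliers",
    ]
    with ThreadPoolExecutor() as pool:
        product, packages, suppliers = pool.map(fetch, paths)
    return {**product, "packages": packages, "suppliers": suppliers}
```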
Maybe a solution would be to use custom media types in the request header:
application/json+info-only
application/json+supplier
application/json+supplier+packages
etc.
In your controller action you would check for the selected media type and respond to the request based on it. Simply return an IActionResult and your consumer will get the data within one request.
It's very similar to your approaches, but with custom extended media types you would still have one endpoint without additional parameters.
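Sketched in the same hypothetical Flask service as above (the media type names simply mirror the list; they are not a registered standard):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder loaders; replace with real data access.
def load_suppliers(code): return [{"supplierId": "sup-1"}]
def load_packages(code): return [{"packageId": "pkg-1"}]

@app.get("/products/<code>")
def product(code):
    # One endpoint; the Accept header decides how much gets embedded.
    accept = request.headers.get("Accept", "application/json+info-only")
    body = {"code": code, "name": "example product"}
    if "supplier" in accept:
        body["suppliers"] = load_suppliers(code)
    if "packages" in accept:
        body["packages"] = load_packages(code)
    return jsonify(body)
```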

RabbitMQ: expires-policy does not delete queues

I'm using RabbitMQ. The problem is that the queues are not getting deleted, despite my having a policy set up for this, and I cannot figure out why it is not working.
This is the policy definition:
And this is a typical queue; it is idle for a long time and has 0 consumers.
I know the rules for expiring, however I cannot see that any of this would be the case. Any hints on what could be wrong here?
The pattern you provide, restuser*, doesn't match the name of the queue, restresult-user001n5gb2. You can also confirm that from the policy applied to the queue, which here is ha.
Two additional points to pay attention to:
the pattern is a regular expression, and unless you "anchor" the beginning or the end of the match, it is enough for the pattern to appear somewhere in the name. restuse as a pattern should yield the same result as your pattern. If you want to match any queue whose name starts with restuser, the pattern should be ^restuser
policies are not cumulative: if you have configured high availability through policies and you want to keep it for your restuser queues, you'll need to add the ha parameters to your clearrestuser policy too.
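Putting both points together, the policy could be recreated like this via the management HTTP API (a sketch only; the host, credentials, vhost, and the expires/ha values are assumptions):

```python
import requests

policy = {
    "pattern": "^restuser",      # anchored, so it only matches queues whose names start with restuser
    "definition": {
        "expires": 3600000,      # idle time in ms before the queue is deleted
        "ha-mode": "all",        # keep the HA settings, since policies don't combine
    },
    "apply-to": "queues",
}

resp = requests.put(
    "http://localhost:15672/api/policies/%2F/clearrestuser",  # %2F is the default vhost "/"
    json=policy,
    auth=("guest", "guest"),
)
resp.raise_for_status()
```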

Sigfox or Lora devices with Azure-Digital-Twins

I have a couple of questions about setting up Azure Digital Twins with LoRa and Sigfox devices whose data is encoded:
How do we get the iothubowner string to create the callback to the LoRa or Sigfox backend?
How do we deal with mandatory properties, especially HardwareId?
What is the best practice to decode the message and then process it, knowing that we have to cascade the processing: decoding, then normalization, then telemetry analytics (monitoring room conditions, for example)?
Here are the answers:
1. The IoT Hub connection string (iothubowner) will be exposed in the API in a couple of months.
2. For a device, the unique identifier from the client side is HardwareId. We recommend using the MacAddress of the device. For SensorId.HardwareId, you have multiple options that we recommend: either Device.HardwareId + SensorName, or just SensorName if it is unique per device, or just a GUID. It is important to set SensorId.HardwareId because this value must match the telemetry message header property DigitalTwins-SensorHardwareId in order for the UDF to kick off. See https://learn.microsoft.com/en-us/azure/digital-twins/concepts-device-ingress#device-to-cloud-message
3. You'd have to create a matcher that associates the right UDF with the code to decode the byte array for a certain type of sensor. For example, if you have sensors of Type: LoRa and then various DataTypes, you'd create a matcher against the Type to match "LoRa" and then the various DataTypes. For now, you would have to handle all of that in one UDF. In the future, we might support chaining so that you could have a UDF for each step separately, but until then, it's all in one.

Preserving order of execution in case of an exception on ActiveMQ level

Is there an option at the ActiveMQ level to preserve the order of execution of messages in case of an exception? In other words, assume message ID=1 contains info about a student object with ID=Student_1000, and this message failed and entered the DLQ for some reason, but the main queue still holds message ID=2 and message ID=3 referring to the same student ID (ID=Student_1000). We should not allow those messages to be processed, because they contain info about the same object ID as message ID=1; ideally, they should be redirected straight to the DLQ to preserve the order of execution, because if we allow that processing we will lose the order of execution in case we are performing an update.
Please note that I'm using message groups of Active MQ.
How to do that on Active MQ level?
Many thanks,
Rosy
Well, not really. But since the DLQ is by default shared, you would not have ordered messages there unless you configure individual DLQs.
Trying to rely on strict, 100% message order on queues to keep business logic simple is a bad idea, in my experience. That is, unless you have a single broker, a single producer, a single consumer, and no DLQ handling (infinite redeliveries on the RedeliveryPolicy).
What you should do is read the entire group in a single transaction, and roll it back or commit it as a group. This requires you to set the prefetch size accordingly. DLQ handling and reading is actually a client concern, not a broker-level thing.
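A rough sketch of that with stomp.py (assuming stomp.py 8.x and the broker's STOMP connector; the queue name, credentials, and the end-of-group marker header are made up for illustration):

```python
import stomp

def process(body: str) -> None:
    print("processing", body)  # placeholder for the real business logic

class GroupListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn
        self.tx = None

    def on_message(self, frame):
        if self.tx is None:
            self.tx = self.conn.begin()              # open a transaction for the group
        try:
            process(frame.body)
            self.conn.ack(frame.headers["message-id"],
                          frame.headers["subscription"],
                          transaction=self.tx)
            if frame.headers.get("last-in-group") == "true":  # hypothetical marker set by the producer
                self.conn.commit(transaction=self.tx)          # commit the whole group at once
                self.tx = None
        except Exception:
            self.conn.abort(transaction=self.tx)     # roll the whole group back together
            self.tx = None

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("", GroupListener(conn))
conn.connect("admin", "admin", wait=True)
conn.subscribe(destination="/queue/student.updates", id="1",
               ack="client-individual",
               headers={"activemq.prefetchSize": "100"})  # prefetch sized to cover a full group
```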

Query random but unread keys the Redis way

I have thousands of messages, each stored as a list of properties (text, subject, date, etc.) in a separate key: msg:1001, msg:1002, etc.
There is also a list keyed as messages with ids of all existing messages: 1001,1002,1003...
Now I need to get 10 random messages.
But, I only need those messages that are not flagged by the user (sort of unread).
There is a hash for each user keyed as flags:USERID = 1001=red,1005=blue,1010=red,...
Currently I have to keep in my application's memory a full list of messages plus all flags for all users currently logged in, and do all the math by hand (in JavaScript).
Is there a way to do such a query the Redis way, without duplicating all the data on the application side?
Your question is an example of a space–time tradeoff. On the one hand, you say that you don't want to keep a list of the unflagged messages in your system, but I would guess that you also want to keep your application relatively fast. Therefore, I suggest giving up some space and keeping a set of unflagged messages.
As messages are created in your system, add them both to messages (SADD messages <messageid>) and messages_unflagged (SADD messages_unflagged <messageid>). After a user adds a flag to a message, remove the message from the unflagged set (SREM messages_unflagged <messageid>). When you need 10 random, unflagged messages, you can get their IDs in constant time (SRANDMEMBER messages_unflagged 10).
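A minimal redis-py sketch of that scheme (key names mirror the answer; the hash-based message storage and flag values are placeholders, and you'd keep one unflagged set per user if "unread" must be tracked per user):

```python
import redis

r = redis.Redis()

def create_message(msg_id: int, props: dict) -> None:
    r.hset(f"msg:{msg_id}", mapping=props)    # message properties (text, subject, date, ...)
    r.sadd("messages", msg_id)                # set of all message ids
    r.sadd("messages_unflagged", msg_id)      # not flagged yet

def flag_message(user_id: str, msg_id: int, flag: str) -> None:
    r.hset(f"flags:{user_id}", msg_id, flag)  # per-user flag, as in the question
    r.srem("messages_unflagged", msg_id)

def random_unflagged(count: int = 10) -> list:
    # Constant-time random sample of unflagged message ids.
    return r.srandmember("messages_unflagged", count)
```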