ActiveMQ MessageGroupHashBucket - what is the cache property needed for?

I'm trying to find the best strategy to deal with ActiveMQ Message Groups support.
ActiveMQ has several strategies (MessageGroupMap implementations).
The one that is confusing me a little is MessageGroupHashBucket.
Specifically, after looking at the sources, I don't understand why the cache property is needed there. When assigning a consumer id to a message group, or retrieving the consumer id for a message group, the array of buckets is used.
It would be great if someone could explain why.
Thanks in advance,

MessageGroupHashBucket implements the MessageGroupMap interface method getGroups() by returning the cache property as a map of all group names and their associated consumer ids.

Adding to piola's answer, it looks like the cache property is used to configure the number of group names that can be held per bucket. This is a very efficient way to handle a large number of groups. Going by this logic, a configuration of 1024 buckets with a cache size of 64 can handle 65,536 groups.
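For intuition only, here is a rough sketch of how such a hash-bucket structure can work; the class and field names below are mine, not the actual MessageGroupHashBucket source. Routing uses only the bucket array, while a bounded cache of group names is kept on the side so that getGroups() has actual names (and their consumer ids) to return:

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only - not the ActiveMQ implementation. Group ids are hashed onto a fixed
// array of buckets for assignment/lookup, and a bounded (LRU-style) map of group name ->
// consumer id is maintained on the side so that group names can be reported back.
public class HashBucketSketch {

    private final String[] bucketOwners;          // consumer id per bucket, used for routing
    private final Map<String, String> cache;      // bounded group name -> consumer id map

    public HashBucketSketch(int bucketCount, final int cacheSize) {
        this.bucketOwners = new String[bucketCount];
        this.cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > cacheSize;        // evict the least recently used group name
            }
        };
    }

    public void put(String groupId, String consumerId) {
        bucketOwners[bucketFor(groupId)] = consumerId;
        cache.put(groupId, consumerId);
    }

    public String get(String groupId) {
        return bucketOwners[bucketFor(groupId)];
    }

    public Map<String, String> getGroups() {
        return cache;                             // the buckets alone cannot answer this
    }

    private int bucketFor(String groupId) {
        return (groupId.hashCode() & Integer.MAX_VALUE) % bucketOwners.length;
    }
}

In a sketch like this, the bucket array stays the same size no matter how many groups exist; only the cache limits how many group names can be reported at once.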

Is there any way to customize Ignite's built-in TTL cleaning functions?

I see that Ignite currently supports a TTL feature to remove expired keys. Is there any way to customize this TTL feature?
In my case, I have BinaryObjects in an IgniteCache (key -> BinaryObject), and those BinaryObjects contain several values, one of them a timestamp. Could I customize Ignite's built-in TTL cleaning somehow so that Ignite checks the timestamp value and decides whether to remove or keep a key?
Thank you
Yes and no. You can implement your own expiry policy if you like. You just need to create a class that implements ExpiryPolicy. And each row can have a different policy.
However, you'll note that the API does not give access to the record, so you can't have it automatically set the policy based on a column.
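A minimal sketch of that approach, assuming you compute the TTL yourself from the timestamp stored in the BinaryObject at write time (the class name, the "timestamp" field name, and the retention window below are assumptions):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;

// A per-entry policy: the entry expires a fixed time after creation;
// returning null from the other methods leaves the expiry unchanged on reads/updates.
public class TimestampExpiryPolicy implements ExpiryPolicy {
    private final long ttlMillis;

    public TimestampExpiryPolicy(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    @Override
    public Duration getExpiryForCreation() {
        return new Duration(TimeUnit.MILLISECONDS, ttlMillis);
    }

    @Override
    public Duration getExpiryForAccess() {
        return null;
    }

    @Override
    public Duration getExpiryForUpdate() {
        return null;
    }
}

// Hypothetical usage when writing an entry: derive the TTL from the object's own timestamp.
long eventTime = binaryObject.<Long>field("timestamp");
long ttl = Math.max(0, eventTime + retentionMillis - System.currentTimeMillis());
cache.withExpiryPolicy(new TimestampExpiryPolicy(ttl)).put(key, binaryObject);

Since withExpiryPolicy() returns a decorated view of the cache, each put() can carry its own TTL, which is what "each row can have a different policy" means in practice.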

How to save JMeterVariable values to InfluxDB with the sample results

I'd like to store some JMeterVariables together with the sample results in InfluxDB, using a BackendListenerClient for InfluxDB (I am using the rocks.nt.apm.jmeter package to get the raw results).
My current test logs in as a random customer, requests some random entities, and logs out. Most of the results are within a normal range; I'd like to zoom in on certain extreme sample results and find out for which customer / requested entity they occurred. We have seen in the past that we can find performance issues with specific configurations this way.
I store the customer and entity IDs in variables. My issue is that the JMeterVariables are not accessible from the BackendListenerClient. I looked at the sample_variables property, but that property stores the variables in the sampleEvent, which is not accessible in the BackendListener.
I could use the thread name or sample label to store the vars, but I saw that the CSV writer can actually write the variable values from the event, which is a much nicer solution.
Looking forward to your thoughts,
Best regards, Spud
You got it right - the Backend Listener is not customizable in terms of fine-shaping the data you're sending to Influx.
Alas.
However, there's a Swiss Army Knife always available in JMeter: the JSR223 components.
The JSR223 listener, in your case.
The InfluxDB line protocol is as simple as it could be, and HTTP/REST libraries are in abundance (the Apache HTTP client must already be included with standard JMeter, to my recollection, so no additional jars are needed) - just pick it all up, form your time series as you like, send it to your InfluxDB REST endpoint, and the job's done.
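As a sketch of that idea, a JSR223 Listener (Groovy engine; plain Java syntax works there too) can read the variables via vars, build one line-protocol point, and POST it with the JDK's own HttpURLConnection. The InfluxDB URL, database name, measurement name, and variable names below are placeholders, not something taken from your setup:

// JSR223 Listener sketch. vars and sampleResult are bound by JMeter in this component.
String customer = vars.get("customerId");      // variable names are assumptions
String entity = vars.get("entityId");
String label = sampleResult.getSampleLabel().replace(" ", "\\ ");
long elapsedMs = sampleResult.getTime();
long timestampNs = sampleResult.getTimeStamp() * 1000000L;   // line protocol expects nanoseconds

String line = "extra_samples,label=" + label + ",customer=" + customer + ",entity=" + entity
        + " elapsed=" + elapsedMs + "i " + timestampNs;

java.net.HttpURLConnection conn =
        (java.net.HttpURLConnection) new java.net.URL("http://localhost:8086/write?db=jmeter").openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.getOutputStream().write(line.getBytes("UTF-8"));
if (conn.getResponseCode() != 204) {            // InfluxDB answers 204 No Content on success
    log.warn("InfluxDB write failed: " + conn.getResponseCode());
}

Because vars is available in a JSR223 Listener, the customer and entity IDs stored earlier in the thread can be attached to each point without abusing the thread name or sample label.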

Default spring.cloud.stream.rabbit.* properties to apply to multiple channels?

With spring cloud stream, you can avoid redundant properties for each individual channel, by specifying "default" properties.
For example, if I have 2 channels bound to the same destination/exchange, I can do:
spring.cloud.stream.default.destination=myExchange
spring.cloud.stream.bindings.myChannel1.group=queue1
spring.cloud.stream.bindings.myChannel2.group=queue2
And queue1 and queue2 will both be bound to myExchange.
That works as documented, and I do it for some properties.
But....I'd like to do the same for RabbitMQ binding properties.
For example, if I want a DLQ for all of my consumers/queues, I'd like to do something like:
spring.cloud.stream.rabbit.default.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.default.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.default.consumer.dlq-dead-letter-exchange=
Otherwise, I have to specify those same 3 lines for every channel.
Is there any way to do this? I've tried several different permutations to no avail.
BTW, I'm on version 1.2.1.RELEASE of spring-cloud-starter-stream-rabbit.
Thanks.
It is supported. Please see the https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#binding-properties section of the user guide:
To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of spring.cloud.stream.default.<property>=<value>
According to the Spring Cloud Stream documentation, it has been possible since version 2.1.0.RELEASE.
See 9.2 Binding Properties.
When it comes to avoiding repetitions for extended binding properties, this format should be used:
spring.cloud.stream.<binder-type>.default.<producer|consumer>.<property>=<value>
Unfortunately, so far I couldn't make it work. Did anyone get it working?
It is not supported yet.
See 3.2. RabbitMQ Consumer Properties
The following properties are available for Rabbit consumers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer.
This includes the ttl/dlq* properties.
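In other words, until the extended defaults work for the Rabbit binder, the settings from the question have to be repeated per binding, e.g. for the two channels above:
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.dlq-dead-letter-exchange=
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.dlq-dead-letter-exchange=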

REST: how to name a queue resource

I'm implementing an asynchronous REST API where a file is posted and then added to a queue for further processing. My question is: what's the best practice for naming the resources in this case?
Resource #1: POST files, where {type} is dynamic.
POST /files/{type}
Posting data to this resource queues it, and the user will receive a unique queue ID. How should the files-queue resource be named?
Resource #2: GET the files queue
OPTION 1. GET /files/queues/{QueueID}
OR
OPTION 2. GET /files/{type}/queues/{QueueID}
Which one makes more sense? A user can upload files with different {type} values.
Or should I just use a completely separate resource for getting queue items, like:
GET /queues
AND
/queues/{QueueID}
Thanks for tips.
It depends on your needs (or the client's needs).
I would go with the "/queues/{QueueID}" option, since the QueueID itself (without the type) identifies the file, so there is no need to include the type.
Additionally, I would omit the {type} variable even from the POST method, because you can simply send that information in the Content-Type HTTP header.
The "files/{type}" approach is more useful when you have to display the files grouped by type. Without that need, there is no reason to further complicate the resource identifier.
(Note: if the "queue" and "file" items are the same, then you could use GET /files/{QueueId}.)
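As an illustrative sketch of that flow (the status codes, paths, and IDs are just examples): the upload returns 202 Accepted with a Location header pointing at the queue item, which the client then polls:

POST /files HTTP/1.1
Content-Type: application/pdf

(file bytes)

HTTP/1.1 202 Accepted
Location: /queues/42

GET /queues/42 HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{"queueId": 42, "status": "processing"}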

In Symfony2, should I use an Entity or a custom Repository

I am creating a new web app and would like some help on design plans.
I have "store" objects, and each one has a number of "message" objects. I want to show a store page that shows this store's messages. Using Doctrine, I have mapped OneToMany using http://symfony.com/doc/current/book/doctrine.html
However, I want to show the messages in reverse chronological order, so I added:
* @ORM\OrderBy({"whenCreated" = "DESC"})
Still, I am fetching the "store" object and then calling
$store->getMessages();
Now I want to show only messages that have been "verified". At this point, I am unsure how to do this using the @ORM annotations, so I was thinking I need a custom repository layer.
My question is twofold:
First, can I do this using the Entity @ORM mappings?
And second, which is the correct way to wrap this database query?
I know I eventually want the SQL SELECT * FROM message WHERE verified=1 AND store_id=? ORDER BY myTime DESC, but how do I do this the "Symfony2 way"?
For part 1 of your question... technically I think you could do this, but I don't think you'd be able to do it in an efficient way, or a way that doesn't go against good practices (i.e. injecting the entity manager into your entity).
Your question is an interesting one, because at first glance, I would also think of using $store->getMessages(). But because of your custom criteria, I think you're better off using a custom repository class for Messages. You might then have methods like
$messageRepo->getForStoreOrderedBy($storeId, $orderBy)
and
$messageRepo->getForStoreWhereVerified($storeId).
Now, you could do this from the Store entity with methods like $store->getMessagesWhereVerified(), but I think you would be polluting the Store entity, especially if you need more and more of these custom methods. I think that by keeping them in a Message repository, you're separating your concerns in a cleaner fashion. Also, with the Message repository, you might save yourself a query by not needing to first fetch your Store object, since you would only need to query the Message table and use its store_id in your WHERE clause.
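For illustration, the DQL behind a repository method like getForStoreWhereVerified() might look roughly like this (the entity alias and field names are assumptions based on your mapping, not something prescribed):
SELECT m FROM Message m WHERE m.verified = 1 AND m.store = :store ORDER BY m.whenCreated DESC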
Hope this helps.