Is there a way to programmatically get all IgniteQueue & IgniteCache proxies for the caches & queues created on the whole Ignite cluster?

I am currently running Ignite 2.5 and wondering if there is a way to programmatically get all IgniteQueue and IgniteCache proxies for the caches and queues created across the whole Ignite cluster, or at least their configurations. For caches I think I can get that from IgniteConfiguration if the cache is statically configured, or from the IgniteCache proxy itself. Can queues be configured as well, and how do I get their configuration?
I see, for example, Ignite#cacheNames(), which I think returns all cache names, including the ones created internally for queues. I am going to try it, but I want to make sure I don't rely on behavior that is undocumented or not intended for this purpose.
The intention is to recreate the queues/caches programmatically if they are no longer present in the cluster.
Thanks
UPDATE 1:
Thanks @alex-k for confirming that, unlike caches, there is no public API to get queue configurations. It would be nice to have this support.

You can use Ignite.cacheNames() to get cache names, and Ignite.configuration().getCacheConfiguration() to get the configurations.
There are no public APIs to get all queue names.
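Putting the two answers together, the cache side can be sketched as follows. This is a minimal sketch assuming a running Ignite 2.5 node with ignite-core on the classpath; note that Ignite#cacheNames() also returns the caches Ignite creates internally to back queues/sets (in 2.x their names typically start with "datastructures_"):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ListCaches {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // All cache names known to the cluster, including internal
            // data-structure backing caches:
            for (String name : ignite.cacheNames()) {
                IgniteCache<?, ?> cache = ignite.cache(name);
                // The live configuration of each cache, whether it was
                // defined statically or created at runtime:
                CacheConfiguration cfg =
                        cache.getConfiguration(CacheConfiguration.class);
                System.out.println(name + " -> mode=" + cfg.getCacheMode());
            }
        }
    }
}
```

Unlike Ignite.configuration().getCacheConfiguration(), which only reflects statically configured caches, IgniteCache#getConfiguration() works for dynamically created caches too.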

Related

Apache Ignite's Continuous Queries event handler group & sequencing

We are trying to use the Continuous Query feature of Ignite, but we are facing an issue handling the resulting events. Below is our problem statement:
We have defined a continuous query with a remote filter for a cache and shared the filter definition with the thick client.
We are running multiple replicas of the "thin client" in a Kubernetes cluster.
Now the problem is that each instance of the "thin client" running in the k8s cluster has registered the remote filter, so every instance receives the event and tries to process the data in parallel. This results in duplicate processing, or even overwriting of the data in my store.
Is there any way to form a consumer group and ensure that only one instance of the "thin client" receives the notification and processes the data?
My thick client and thin clients are in .NET.
I couldn't find any details in the Ignite documentation:
https://ignite.apache.org/docs/latest/key-value-api/continuous-queries
Here each thin client is starting its own continuous query and therefore, by design, each thin client gets its own copy of the event to consume. If you want to route an event to a specific client, you would need to start only one continuous query and distribute its events to your app instances as you see fit.
Take a look at Ignite messaging to see whether it fits your use case.
Also check out the distributed Queue/Set, which guarantee that each element is delivered to only one consumer.
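The "one query plus a distributed queue" idea can be sketched as below. This is Java (the question's clients are .NET, but the same types exist in the Ignite.NET API); the cache and queue names are illustrative, and `process()` stands in for your handler. IgniteQueue#take() removes an element exclusively, so each event is handled by exactly one consumer, which gives consumer-group semantics:

```java
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.configuration.CollectionConfiguration;

// On ONE node only (e.g. the thick client): start the single continuous
// query and funnel every update into a distributed queue.
IgniteQueue<String> queue =
        ignite.queue("events-queue", 0 /* unbounded */, new CollectionConfiguration());

ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
        queue.put(e.getValue());   // enqueue each update once
});
ignite.<Integer, String>cache("my-cache").query(qry);

// In every consumer replica (any number of instances):
while (true) {
    String item = queue.take();    // blocks; each element goes to ONE taker
    process(item);                 // your handler
}
```

With this layout, scaling the consumer replicas up or down changes throughput but never duplicates an event.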

Detect and set up alerts on clusters being dropped from Envoy

We are using Envoy as a reverse proxy and have a few static/dynamic clusters. I need a way to monitor all the static clusters (all are critical) and create alerts whenever any of them is not reachable. The alerts will help the team take timely action.
I am new to Envoy and exploring its features. It would be helpful if someone could answer or point me to the right resource.
Thanks
As far as I know, this is not possible out-of-the-box with Envoy. But you can use something like Prometheus and Alertmanager to monitor and create alerts for your clusters.
If you have the admin interface set up (https://www.envoyproxy.io/docs/envoy/v1.21.1/operations/admin), you can query /stats/prometheus to get some metrics.
The following metrics may be of interest in your case:
envoy_cluster_update_failure{envoy_cluster_name="my-cluster"}: increases when the cluster is not reachable
envoy_cluster_update_success{envoy_cluster_name="my-cluster"}: increases when the cluster is reachable
I am not an expert in Prometheus/Alertmanager, but something like:
increase(envoy_cluster_update_failure{envoy_cluster_name="my-cluster"}[1m]) > 0
should trigger an alert when the cluster my-cluster becomes unreachable.
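If you manage alerts through Prometheus alerting rules, the expression above can be wrapped in a rule file. A hedged sketch (the group name, alert name, and severity label are illustrative, not anything Envoy or Prometheus mandates):

```yaml
# prometheus alerting rule: fire when cluster updates start failing
groups:
  - name: envoy-clusters
    rules:
      - alert: EnvoyClusterUnreachable
        expr: increase(envoy_cluster_update_failure{envoy_cluster_name="my-cluster"}[1m]) > 0
        for: 1m          # require the condition to persist before firing
        labels:
          severity: critical
        annotations:
          summary: "Envoy cluster {{ $labels.envoy_cluster_name }} is not reachable"
```

Alertmanager then routes the fired alert to your notification channel of choice.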

API or other queryable source for getting total NiFi queued data

Is there an API endpoint or other queryable source from which I can get the total queued data?
Setting up a little dataflow in NiFi to monitor NiFi itself sounds sketchy, but if it's common practice, so be it. Either way, I cannot find the API endpoint to get that total.
Note: I have a single NiFi instance; I don't have, nor will I implement, S2S reporting since I am on a single-instance, single-node NiFi setup.
The Site-to-Site Reporting tasks were developed because they work for clustered, standalone, and multiple instances thereof. You'd just need to put an Input Port on your canvas and have the reporting task send to that.
An alternative as of NiFi 1.10.0 (via NIFI-6780) is to get the nifi-sql-reporting-nar and use QueryNiFiReportingTask; it lets you run a SQL query to get exactly the metrics you want. It uses a RecordSinkService controller service to determine how to send the results, with various implementations such as Site-to-Site, Kafka, and Database. The NAR is not included in the standard NiFi distribution due to size constraints, but you can get the latest version (1.11.4) here, or change the URL to match your NiFi version.
@jonayreyes You can find information about how to get queue data from the NiFi API here:
NiFi Rest API - FlowFile Count Monitoring
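For a single unsecured node, polling the REST API directly is the simplest route. A hedged sketch (it assumes NiFi at http://localhost:8080 with no authentication; the /nifi-api/flow/status endpoint returns a controllerStatus object carrying flowFilesQueued and bytesQueued):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QueuedStatus {
    // Pull "flowFilesQueued" out of the JSON body with a minimal regex,
    // to keep the sketch free of JSON-library dependencies.
    static long parseFlowFilesQueued(String json) {
        Matcher m = Pattern.compile("\"flowFilesQueued\"\\s*:\\s*(\\d+)").matcher(json);
        if (!m.find())
            throw new IllegalArgumentException("flowFilesQueued not found");
        return Long.parseLong(m.group(1));
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0) {
            // Pass the base URL (e.g. http://localhost:8080) to hit a live node.
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create(args[0] + "/nifi-api/flow/status")).GET().build();
            String body = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("FlowFiles queued: " + parseFlowFilesQueued(body));
        } else {
            // No argument: demonstrate the parser on a sample response.
            String sample = "{\"controllerStatus\":{\"flowFilesQueued\":15,\"bytesQueued\":2048}}";
            System.out.println("FlowFiles queued: " + parseFlowFilesQueued(sample));
        }
    }
}
```

A secured instance would additionally need a token or client certificate on the request.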

JMS message received at only one server

I'm having a problem with a JEE6 application running in a clustered environment on WebSphere Application Server 8.
A search index (Lucene) is used for quick search in the UI; it must be re-indexed after new data arrives in the corresponding DB layer. To achieve this we send a JMS message to the application, which then refreshes the search index.
The problem is that the message arrives at only one of the cluster members, so only there is the search index up to date; on the other servers it remains outdated.
How can I achieve that the search index gets updated at all cluster members?
Can I receive the message somehow on all servers?
Or is there a better way to do this?
I found a possible solution:
Generally, a JMS message delivered via a queue goes to only one of the cluster members. I found a possible way to get the information to all of the cluster members using an EJB timer: creating a non-persistent timer should invoke the callback method on all of the cluster members. This might be a convenient way to recreate the local search index on each member.
It is important that it is a non-persistent EJB timer, because persistent timers are synchronized across the cluster and executed on only one of the members.
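The timer approach can be sketched like this (a sketch against the JEE6 Timer Service API, not WebSphere-specific; the 60-second interval is illustrative). A @Singleton @Startup bean is instantiated in every cluster member's JVM, and a non-persistent timer is local to its JVM, so the @Timeout callback fires on every member:

```java
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
@Startup
public class IndexRefreshTimer {

    @Resource
    private TimerService timerService;

    @PostConstruct
    void init() {
        // second argument false = NON-persistent: the timer is not replicated
        // across the cluster and fires only in this member's JVM
        timerService.createIntervalTimer(60000L, 60000L, new TimerConfig(null, false));
    }

    @Timeout
    void refresh(Timer timer) {
        // rebuild the local Lucene search index on this member
    }
}
```

A persistent timer (TimerConfig with persistent = true, the default) would instead be shared cluster-wide and fire on a single member, which is exactly the behavior to avoid here.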

Using ServiceStack.Redis with RedisCloud

We are using RedisCloud as a datastore for a ServiceStack-based, AppHarbor-hosted app.
The RedisCloud .NET client documentation states that the ServiceStack.Redis connection managers should not be used:
Note: the ServiceStack.Redis client connection managers (BasicRedisClientManager and PooledRedisClientManager) should be disabled when working with the Garantia Data Redis Cloud. Use the single DNS provided upon DB creation to access your Redis DB. The Garantia Data Redis Cloud distributes your dataset across multiple shards and efficiently balances the load between these shards.
Why would they suggest that? Is it because they are doing fancy load balancing in their 'Garantia Data' layer and don't want to handle unnecessary connections? The RedisClient class is not thread-safe, so this makes things much more difficult from an application-programming perspective.
Should I just ignore their instructions and use a PooledRedisClientManager? How would I configure it with the single URI that RedisCloud provides?
Or will I need to write a basic RedisClient pool wrapper that just creates new RedisClient connections as needed to handle concurrent access (i.e., one that ignores all read/write pooling specifics, hopefully delegating all of that upstream to the RedisCloud layer)?
Why would they suggest that? Because they are doing fancy load balancing stuff in their 'Garantia Data' layer and don't want to handle unnecessary connections?
I think you could be right. To my knowledge these classes simply wrap creating/retrieving instances of RedisClient (though I think the basic manager always creates a new RedisClient). While I looked over their site, I didn't see anything about a max number of connections to the Redis server(s). The previous Redis vendor on AppHarbor (MyRedis) had plans that listed the max connections allowed per plan; however, I also didn't see anything on the RedisCloud site about connection limits/handling.
Should I just ignore their instructions and use a PooledRedisClientManager? How would I configure it with the single uri that RedisCloud provides?
Well, if you do ignore their instructions, my guess is you could eventually run into a 'max number of connections exceeded' error, which would make it difficult to reach your Redis server(s). I think you could still use the BasicRedisClientManager, because when you call GetClient() it always news up a RedisClient in the same way shown in their example.