Authentication can be enabled on a Cassandra cluster using database roles. Recently, I landed in a situation where multiple roles had been created for a Cassandra cluster running version 3.11.x, and I didn't have any easy way to figure out which roles were actively used. Is there a way to get usage statistics for database roles in Cassandra 3.11.x?
Thanks.
No, that information is not stored locally.
The closest you can get to something like it is audit logging, but that feature was added in Apache Cassandra 4.0 (CASSANDRA-12151), so it's not available in 3.11. Cheers!
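If you do upgrade to 4.0 and enable audit logging, you could count activity per role from the log output. A minimal sketch, assuming a pipe-delimited `key:value` line format similar to what the file-based audit logger writes (the sample lines, role names, and exact field order here are made up for illustration):

```python
import re
from collections import Counter

# Hypothetical sample audit-log lines; real field order/content varies by version.
SAMPLE_LOG = """\
user:app_reader|host:10.0.0.1:7000|source:/10.0.0.9|port:53122|timestamp:1650000000000|type:SELECT|category:QUERY|operation:SELECT * FROM ks.t
user:app_writer|host:10.0.0.1:7000|source:/10.0.0.9|port:53123|timestamp:1650000001000|type:UPDATE|category:DML|operation:INSERT INTO ks.t ...
user:app_reader|host:10.0.0.1:7000|source:/10.0.0.9|port:53124|timestamp:1650000002000|type:SELECT|category:QUERY|operation:SELECT * FROM ks.t
"""

def count_role_activity(log_text):
    """Count audit-log entries per authenticated role (the 'user:' field)."""
    counts = Counter()
    for line in log_text.splitlines():
        m = re.search(r'(?:^|\|)user:([^|]+)', line)
        if m:
            counts[m.group(1)] += 1
    return counts

print(count_role_activity(SAMPLE_LOG))
```

Roles that never appear in the log over a long enough window are good candidates for being unused.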
I'm using Hortonworks Schema Registry with NiFi and things are working fine. I have installed the Schema Registry on a single node, and I'm worried about what will happen to my NiFi flows if that machine goes down. I have seen in the Hortonworks Schema Registry architecture that we can use MySQL, PostgreSQL, or in-memory storage for storing schemas. AFAIK none of them are distributed systems. Is there any way to achieve a cluster mode for high availability?
Sure, you can do active-active or active-passive replication for MySQL and Postgres, but implementing that is left up to you; Hortonworks will likely forward you to the respective documentation for each tool. That is also why the Schema Registry documentation doesn't guide you towards these design decisions itself, though you should be aware of the drawbacks of having a SPoF.
The Schema Registry itself is just a web app, so you could put it behind your favorite reverse proxy, or run it within a container orchestrator, such as the Docker support in HDP 3.x.
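One simple pattern that follows from this is client-side active-passive failover across two registry instances. A conceptual sketch (the URLs are hypothetical, and `fetch_schema` is a placeholder for a real HTTP call; real HA still needs a shared or replicated MySQL/Postgres backend behind both instances):

```python
# Two Schema Registry instances; the first is preferred, the second is the backup.
REGISTRY_URLS = ["http://registry-a:9090", "http://registry-b:9090"]

def fetch_schema(base_url, name):
    # Placeholder for a real HTTP GET against the registry's REST API.
    if base_url == "http://registry-a:9090":
        raise ConnectionError("primary down")  # simulate an outage of the primary
    return {"name": name, "version": 1}

def fetch_with_failover(name):
    last_err = None
    for url in REGISTRY_URLS:
        try:
            return fetch_schema(url, name)
        except ConnectionError as e:
            last_err = e  # remember the failure and try the next instance
    raise last_err

print(fetch_with_failover("truck_events"))
```

A reverse proxy with health checks achieves the same effect without changing clients.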
I was wondering how deepstream decides whether to store information in the cache vs. the database if both are configured. Can this be decided by the clients?
Also, when using Redis, will it provide both cache and database functionality? I would be using Amazon ElastiCache with a Redis backend for this.
It stores it in both: first in the cache, in a blocking way, and then in the database, outside the critical path, in a non-blocking way.
Here's an animation illustrating this.
You can also find more information here: https://deepstream.io/tutorials/core/storing-data/
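The write path described above can be illustrated with a small sketch (the dicts stand in for the real cache and database layers; this is a conceptual illustration, not deepstream's actual code):

```python
import threading
import time

cache = {}      # stands in for the fast cache layer (e.g. Redis)
database = {}   # stands in for the slower durable store

def _persist(key, value):
    time.sleep(0.05)        # simulate higher database latency
    database[key] = value

def write_record(key, value):
    cache[key] = value      # blocking: done on the critical path before returning
    t = threading.Thread(target=_persist, args=(key, value))
    t.start()               # non-blocking: durable write happens in the background
    return t

t = write_record("user/123", {"name": "Ada"})
assert "user/123" in cache  # visible immediately after the blocking cache write
t.join()                    # in this demo, wait for the background database write
print(database["user/123"])
```

Reads can then be served from the cache, falling back to the database on a miss. And yes, Redis can play both parts, but treating it as the sole durable store means accepting Redis's persistence trade-offs.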
Good day. I need to use Hive the same way we could use MySQL. I want to find a way to host it online so that people in different places can communicate with one Hive service. Thanks in advance.
This is the functionality that Hive Server 2 provides
https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2
It exposes Hive as a Thrift service, and there are JDBC and ODBC drivers available.
You can also put Apache Knox in front of it, in order to have more options for authentication and authorization https://knox.apache.org/
If you are using a common distribution such as Hortonworks or Cloudera, Hive Server 2 will probably be installed automatically when you install Hive.
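Clients reach HiveServer2 through a JDBC URL of the form `jdbc:hive2://host:port/db`, optionally followed by `;key=value` parameters. A small sketch of composing such a URL (the hostname and parameters here are made-up examples):

```python
def hive2_jdbc_url(host, port=10000, database="default", **params):
    """Build a HiveServer2 JDBC URL; extra kwargs become ;key=value suffixes."""
    url = f"jdbc:hive2://{host}:{port}/{database}"
    if params:
        url += ";" + ";".join(f"{k}={v}" for k, v in sorted(params.items()))
    return url

print(hive2_jdbc_url("hive.example.com"))
print(hive2_jdbc_url("hive.example.com", ssl="true"))
```

You would pass the resulting URL to beeline (`beeline -u "<url>"`) or to any JDBC client.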
I have developed an application and want to deploy it in a cluster environment, but I am not sure how to replicate sessions when one server goes down.
What are the things I need to do for session replication?
Any suggestion would be greatly appreciated!
Try using this
It is generally better to handle session replication off of WebLogic, on the web-tier end, when WebLogic is the target.
You can also look into session persistence; one way to achieve it is with Hazelcast.
I'm sure a proper cluster with caching, such as Coherence, can maintain sessions across multiple machines and provide high availability in WebLogic. From an infrastructure standpoint, I would go with Coherence or "Coherence-like" products to achieve session replication and persistence.
WebLogic has built-in clustering, so you don't need anything else to do this. Also, WebLogic Suite includes Coherence, and with WebLogic Suite you can turn on Coherence session clustering just by ticking a checkbox in the console.
Please read this
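Whichever product you pick, the underlying idea is the same: every session write is copied to at least one other node, so a backup can serve the session when the primary dies. A toy illustration of that primary/secondary replication (this is a conceptual sketch, not WebLogic or Hazelcast code):

```python
class SessionStore:
    """In-memory session store standing in for one server in the cluster."""
    def __init__(self):
        self.sessions = {}
        self.alive = True

primary, secondary = SessionStore(), SessionStore()

def put_session(sid, data):
    primary.sessions[sid] = dict(data)
    secondary.sessions[sid] = dict(data)  # synchronous copy to the backup node

def get_session(sid):
    for node in (primary, secondary):     # prefer the primary, fail over to backup
        if node.alive and sid in node.sessions:
            return node.sessions[sid]
    raise KeyError(sid)

put_session("abc", {"user": "alice", "cart": ["book"]})
primary.alive = False                     # simulate the primary server going down
print(get_session("abc"))                 # still served, from the secondary
```

Real products add failure detection, rebalancing, and serialization of session attributes on top of this basic scheme.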
I know LDAP is a Protocol but is there a way to monitor it?
I am using WhatsUp Gold monitoring and have been asked to look into LDAP monitors.
How can I set up monitoring for LDAP?
There is no standard for monitoring LDAP directory services, but most of the products support getting monitoring information via LDAP itself, under the "cn=monitor" suffix.
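In practice you would query that suffix with something like `ldapsearch -b cn=monitor` and feed the counters into your monitoring tool. A small sketch of extracting attributes from such an LDIF dump (the sample entry and attribute names below are illustrative; the real ones vary by server):

```python
# Hypothetical LDIF output of a search against the cn=monitor suffix.
SAMPLE_LDIF = """\
dn: cn=monitor
cn: monitor
currentconnections: 12
totalconnections: 5094
opscompleted: 120433
"""

def parse_monitor_entry(ldif_text):
    """Turn simple 'attr: value' LDIF lines into a dict (no continuations/base64)."""
    attrs = {}
    for line in ldif_text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            attrs[key.lower()] = value
    return attrs

m = parse_monitor_entry(SAMPLE_LDIF)
print(int(m["currentconnections"]))
```

A monitoring tool like WhatsUp Gold can poll these values on a schedule and alert on thresholds (e.g. connection counts flatlining at zero).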
Servers such as OpenDJ (continuation of the OpenDS project, replacement of Sun DSEE) also have support for monitoring through SNMP and JMX.
Regards,
Ludovic.
I have been using cnmonitor (http://cnmonitor.sourceforge.net/) for some years with excellent results, although it's not perfect and there are some errors. You can see lots of statistics with almost no effort: number of requests, searches, adds, modifications, deletes, index status, replication, schema, etc. It is also compatible with many different LDAP servers, although I have only used it with 389 Directory Server.