I know LDAP is a Protocol but is there a way to monitor it?
I am using WhatsUps Gold monitoring and have been asked to look into LDAP monitors.
How can I set up monitoring for LDAP?
There is no standard for monitoring LDAP directory services, but most of the products support getting monitoring information via LDAP itself, under the "cn=monitor" suffix.
Servers such as OpenDJ (continuation of the OpenDS project, replacement of Sun DSEE) also have support for monitoring through SNMP and JMX.
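To make this concrete, here is a minimal sketch of querying the "cn=monitor" suffix with plain JNDI (JDK only). The host, port, and anonymous bind are assumptions; the entries and attributes you get back under cn=monitor vary from server to server, so check your product's documentation for what it actually exposes there.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class MonitorQuery {

    // Build the JNDI environment; URL and anonymous bind are assumptions.
    static Hashtable<String, String> buildEnv(String url) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        // Fail fast (in ms) if the server is unreachable.
        env.put("com.sun.jndi.ldap.connect.timeout", "2000");
        return env;
    }

    public static void main(String[] args) {
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        try {
            DirContext ctx = new InitialDirContext(buildEnv("ldap://localhost:389"));
            // Each entry under cn=monitor describes some server statistic.
            NamingEnumeration<SearchResult> results =
                    ctx.search("cn=monitor", "(objectClass=*)", controls);
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
            ctx.close();
        } catch (NamingException e) {
            System.out.println("Could not query cn=monitor: " + e.getMessage());
        }
    }
}
```

A monitoring tool like WhatsUp Gold can run an equivalent LDAP query on a schedule and alert when it fails or when a returned counter crosses a threshold.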
Regards,
Ludovic.
I have been using cnmonitor (http://cnmonitor.sourceforge.net/) for some years with excellent results, although it's not perfect and there are a few bugs. You can see lots of statistics with almost no effort: number of requests, searches, adds, modifications, deletes, index status, replication, schema, etc. It is also compatible with many different LDAP servers (although I have only used it with 389 Directory Server).
We are using Kong API Gateway as a single gateway for all our APIs. We are facing a latency issue with a few of our APIs (1500-2000ms). When we checked, the latency was being caused by the "rate limiting" plugin; when we disable the plugin, latency improves and the response time is the same as what we get directly from the IP (approximately 300ms).
I'm trying to set up a Redis node to cache database queries, but I'm not sure how to configure Kong to read from Redis itself, or how to cache the database queries to the Redis node.
We are using PostgreSQL for Kong.
I think maybe you are trying to do a couple of different things at once.
First, rate limiting: what is the value of your config.policy parameter? The Kong documentation lists three values: "local (counters will be stored locally in-memory on the node), cluster (counters are stored in the datastore and shared across the nodes) and redis (counters are stored on a Redis server and will be shared across the nodes)."
If you are seeing high latency, and your config.policy is set to cluster or redis, it might be due to latency between Kong and postgres/redis (depending on what policy you're using). If you are using rate-limiting just to prevent abuse, using the 'local' policy is faster. (There's more about this at the Kong documentation.)
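As an illustration, switching the plugin to the local policy in Kong's declarative config looks roughly like this. The service/route names and URLs are placeholders; check the rate-limiting plugin reference for the exact fields your Kong version supports:

```yaml
# kong.yml (declarative config) -- names and upstream URL are placeholders
services:
  - name: example-service
    url: http://upstream.example.com
    routes:
      - name: example-route
        paths:
          - /example
    plugins:
      - name: rate-limiting
        config:
          minute: 100
          policy: local   # counters kept in-memory per node; no datastore round-trip
```

The trade-off: with local, each node counts independently, so the effective cluster-wide limit is approximately the per-node limit times the number of nodes.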
The other question is about caching: Kong Enterprise has a built-in caching plugin, but for Kong Community, since it's built on top of Nginx, you can do caching with Nginx. This link might help you.
There is a community custom plugin out there that enables caching with Redis without the need for Kong Enterprise -> https://github.com/globocom/kong-plugin-proxy-cache
Maybe you could combine that with rate limiting to achieve the desired latency performance or use this plugin as inspiration.
As a recovery mechanism, I need to write software so that if my Tomcat fails, it sends an email notification. Are there any APIs supported by Tomcat through which I can receive critical events?
Any help in this regard would be very useful to me.
thanks
Lokesh
It depends: what do you consider a critical event?
Response time above 2 sec/page?
Out of memory?
A crash?
Database not available?
...
You should look at generic monitoring tools; Nagios is a good starting point, and there are lots of equally good alternatives, open source as well as commercial.
Then monitor your Tomcat installation, e.g. via standard HTTP, via JMX, or at the process/OS level. Include your infrastructure: database, Apache, others.
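For the JMX angle, here is a minimal sketch of the kind of check a monitor could run, using only the JDK's platform MBeans (the same beans are reachable remotely once Tomcat is started with JMX remoting enabled). The 90% threshold is an illustrative assumption, not a Tomcat recommendation:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {

    // Returns heap utilisation as a fraction of the maximum (0.0 - 1.0).
    static double heapUtilisation() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return (double) heap.getUsed() / heap.getMax();
    }

    public static void main(String[] args) {
        double used = heapUtilisation();
        System.out.printf("Heap utilisation: %.1f%%%n", used * 100);
        if (used > 0.9) {
            // In a real monitor this is where the email/alert would go.
            System.out.println("ALERT: heap above 90%");
        }
    }
}
```

A tool like Nagios would typically wrap such a check in a plugin and handle the email notification itself, so you don't have to write the alerting logic.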
I was just curious why an LDAP (Lightweight Directory Access Protocol) directory would or would not be considered persistent data?
You are mixing up a "protocol" with "data". There is no "LDAP data".
Apart from that, an LDAP directory can be seen as a classical example of persistent storage.
LDAP directory entries have an average lifetime in the range of weeks, or even months.
LDAP servers are optimized for an "occasional writes, many reads" usage pattern.
Modern LDAP servers allow mechanisms for ensuring data consistency in the directory.
Is there any good way to integrate OpenLDAP or ApacheDS servers (or maybe another open-source LDAP server) with JMS to propagate LDAP database modification to another service?
Basically I need to have LDAP server cluster (several instances with master to master replication) and another standalone Java application, connected via a JMS server (e.g. ActiveMQ), so that:
All changes to the LDAP data structure are sent to the Java app.
The Java app can send messages to the LDAP database via the JMS server to update LDAP data.
I found out that there is a way to set up JMS replication for ApacheDS (https://cwiki.apache.org/DIRxSRVx11/replication-requirements.html#ReplicationRequirements-GeneralRequirements), but I am in doubt whether it will work when we have a cluster of several ApacheDS masters plus one JMS replication node to send all modifications to the cluster.
UPDATE: The page describing JMS replication for ApacheDS turned out to be five years old, so currently the only replication mechanism in ApacheDS I know of is LDAP-protocol-based replication.
There are IDM products that will do what you are asking about.
I know NetIQ's IDM product works well with JMS.
OpenLDAP and ApacheDS have a changeLog that you could use to determine the changes made.
You could then write some code to send the changes to a JMS queue.
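A minimal sketch of that forwarding step might look like the following. The `MessageSender` interface here is a hypothetical stand-in for a real JMS producer (e.g. wrapping ActiveMQ's `MessageProducer` and sending a `TextMessage`); the attribute names `targetDN` and `changeType` follow the widely used draft changelog schema, but verify them against your server's actual changelog entries:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ChangeLogForwarder {

    // Hypothetical stand-in for a JMS producer; a real implementation
    // would wrap javax.jms.MessageProducer and send to a queue.
    interface MessageSender {
        void send(String text);
    }

    // Turn one changelog entry (read as an attribute map) into a flat message.
    // Attribute names follow the draft LDAP changelog schema (targetDN, changeType).
    static String format(Map<String, String> entry) {
        return entry.get("changeType") + " " + entry.get("targetDN");
    }

    static void forward(Map<String, String> entry, MessageSender sender) {
        sender.send(format(entry));
    }

    public static void main(String[] args) {
        Map<String, String> entry = new LinkedHashMap<>();
        entry.put("changeNumber", "42");
        entry.put("targetDN", "uid=jdoe,ou=people,dc=example,dc=com");
        entry.put("changeType", "modify");
        // Print instead of actually sending to a JMS broker.
        forward(entry, System.out::println);
    }
}
```

The polling code that reads new changelog entries (tracking the last-seen changeNumber) would sit in front of this and feed each entry to forward().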
I can't speak for ApacheDS, but OpenLDAP already contains a full-blown replication system, with about six different ways to configure it; in other words, you can do it perfectly well, and much more efficiently, without Java and JMS.
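For reference, a consumer-side syncrepl stanza is one of those configurations; this sketch uses placeholder DNs, host, and credentials, so adapt it to your directory (the OpenLDAP Administrator's Guide covers the full set of options):

```
# slapd.conf consumer-side sketch; host, DNs, and credentials are placeholders
syncrepl rid=001
         provider=ldap://provider.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
```

With refreshAndPersist, the consumer holds a persistent search open against the provider and receives changes as they happen, which is the closest native equivalent to the push-style propagation the question describes.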
I'm currently developing a project supported on a WebLogic clustered environment. I've successfully set up the cluster, but now I want a load-balancing solution (currently, only for testing purposes, I'm using WebLogic's HttpClusterServlet with round-robin load-balancing).
Is there any documentation that gives a clear comparison (with pros and cons) of the various ways of providing load-balancing for WebLogic?
These are the main topics I want to cover:
Performance (normal and on failover);
What failures can be detected and how fast is the failover recovery;
Transparency to failure (e.g., ability to automatically retry an idempotent request);
How well each load-balancing solution adapts to various topologies (N-tier, clustering).
Thanks in advance for your help.
Is there any documentation that gives a clear comparison (with pros and cons) of the various ways of providing load-balancing for WebLogic?
It's not clear what kind of application you are building and what kind of technologies are involved. But...
You will find useful information in Failover and Replication in a Cluster and Load Balancing in a Cluster (also look at Cluster Implementation Procedures), but no real comparison between the different options, at least not to my knowledge. Still, the choice isn't that complex: 1. hardware load balancers will perform better than software load balancers, and 2. if you go for software load balancers, then the WebLogic plugin for Apache is the recommended (by BEA) choice for production. Actually, for web apps, it's pretty common to put the static files on a web server and thus to use the Apache mod_wl plugin. See the Installing and Configuring the Apache HTTP Server Plug-In chapter.
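To give a feel for what the Apache plugin setup involves, here is a rough httpd.conf sketch; the module path, hostnames, ports, and location are placeholders, and the exact directives depend on your WebLogic and Apache versions, so follow the plug-in chapter for the authoritative configuration:

```
# httpd.conf sketch; module path, hosts, ports, and paths are placeholders
LoadModule weblogic_module modules/mod_wl.so

<IfModule mod_weblogic.c>
  WebLogicCluster node1.example.com:7001,node2.example.com:7001
</IfModule>

<Location /myapp>
  SetHandler weblogic-handler
</Location>
```

The plugin round-robins across the listed cluster nodes and, for replicated sessions, routes each request to the server holding the session's primary copy.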
These are the main topics I want to cover:
Performance (normal and on failover): If this question is about persistent sessions, WebLogic uses in-memory replication by default, and this works pretty well with relatively low overhead.
What failures can be detected and how fast is the failover recovery: It is unclear which protocols you're using. But see Connection Errors and Clustering Failover.
Transparency to failure (e.g., ability to automatically retry an idempotent request): Clarifying the protocols you are using would make answering easier. If this question is about HTTP requests, then see Figure 3-1 Connection Failover.
How well is each load-balancing solution adapted to various topologies (N-tier, clustering): The question is unclear and too vague (for me). But maybe have a look at Cluster Architectures.
Oh, by the way, another nice chapter that you should read: Clustering Best Practices.