LDAP and Persistent Data?

I was just curious why LDAP (Lightweight Directory Access Protocol) data would or would not be considered persistent data?

You are mixing up a "protocol" with "data". There is no "LDAP data".
Apart from that, an LDAP directory can be seen as a classical example of persistent storage.
LDAP directory entries have an average lifetime in the range of weeks, or even months.
LDAP servers are optimized for an "occasional writes, many reads" usage pattern.
Modern LDAP servers allow mechanisms for ensuring data consistency in the directory.
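As an illustration of that usage pattern, a typical directory entry (all names here are hypothetical) is written once and then read many times over a lifetime of months or years:

```ldif
# Hypothetical person entry: written once when the user is provisioned,
# then read on every login or address-book lookup afterwards.
dn: uid=jdoe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
uid: jdoe
cn: Jane Doe
sn: Doe
mail: jdoe@example.com
```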

Related

Is Redis security weak, and does it not matter?

Considering the Redis security document, are my thoughts right?
Redis does not provide strong security features by itself.
Redis assumes that only trusted clients connect to it from within a secured network.
Simple security settings, for example IP restrictions in the OS firewall, are one way to enforce this.
I don't think that Redis security is wrong. Basically, Redis is a backend program in a private network, just like database servers are.
Redis security is weak, but security does matter.
It can be observed from the document itself that different methods are mentioned to address the weak points, such as implementing authentication.
It is also mentioned that "Redis is not optimized for maximum security but for maximum performance and simplicity". Hence, it is up to the developer to implement security.
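A minimal sketch of the kind of settings being discussed, as `redis.conf` directives; the interface address and password are placeholders, not recommendations:

```conf
# redis.conf - illustrative hardening only, values are examples

# Listen on a private interface instead of all interfaces
bind 10.0.0.5

# Require clients to AUTH before issuing commands
requirepass a-long-random-secret

# Optionally disable commands that are dangerous if exposed
rename-command FLUSHALL ""
rename-command CONFIG ""
```

Combined with an OS firewall rule restricting which hosts can reach the Redis port, this matches the "trusted clients in a secured network" model the document describes.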

Where to store configuration information in large Distributed Enterprise System

This is more of a high-level question. Say you have a large number of applications, many of them distributed (servers/clusters), and they share configuration parameters.
What is a good way to store this application-specific configuration (preferably in a central place) without relying on a single point of failure?
By configuration I mean things like database server addresses, web service endpoints, logging file names, and, why not, even some business-related constants and parameters.
Some of these parameters could eventually be changed at runtime, so the applications need to be able to query them dynamically.
I can think of an application storing the configuration in a local file (forget about the format) or in a central database.
But I would like to ask the community if there are standards for handling configuration of multiple distributed systems.
Thanks.
Apache ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
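As a quick sketch of how that looks in practice (assuming a running ZooKeeper ensemble; the hostnames, paths, and values below are purely illustrative), configuration is stored in znodes that any client in the cluster can read and watch:

```shell
# Connect with the CLI shipped with ZooKeeper (hosts are hypothetical)
bin/zkCli.sh -server zk1:2181,zk2:2181,zk3:2181

# Inside the CLI: store and read a shared configuration value
create /config ""
create /config/db.host "db1.internal:5432"
get /config/db.host

# Change it at runtime; clients that set a watch on the znode
# are notified and can re-read the new value
set /config/db.host "db2.internal:5432"
```

Because the ensemble itself is replicated across several nodes, the configuration store has no single point of failure, which is exactly the property asked for above.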

LDAP Fault-tolerance configuration (e.g SunOne)

Does anybody know how to configure fault tolerance for LDAP, e.g. SunOne LDAP?
I searched via Google without any useful result.
Thanks
Assuming that by "fault tolerance" you mean high availability (HA), I would say it can be achieved through redundancy. And it is not peculiar to SunOne or to directory server software from any other vendor.
There are different ways to solve this, depending on the business requirements and the budget. One method that comes to mind is to have the LDAP software installed on an HA pair. This requires hardware and OS capabilities for fail-over, and it requires two servers (in a world of virtualization, "server" can mean different things [physical box, frame, LPAR, etc.], so I'll leave the interpretation to the reader). When one server fails, the other takes over and assumes the primary role in the pair; this is the fault-tolerance part. In this approach, the machine with the secondary role is passive (i.e., it's not serving clients) until the primary goes down. You will need to implement LDAP data replication between the two servers; they can be two LDAP masters in a P2P replication topology.
Another method is to have multiple LDAP servers (i.e., masters and replicas) and cluster them using a network dispatcher (ND) software/appliance, which distributes the incoming traffic to the individual servers (usually replicas) in the cluster. If you lose one replica in the cluster, the ND will not send any traffic to it until it comes back, while the other replicas still receive load and serve the incoming traffic. This is the fault-tolerance part in this method.

The degree of availability you want will also dictate what can be done in a clustered environment. You can have a single LDAP master (to which the organization's applications make updates) and keep it out of the cluster, but pair it with another server for fail-over, so you don't lose availability for updates from the applications. This also gives you the freedom to do maintenance on the master without interrupting your applications (although they need to be written so they can write to more than one LDAP master if the primary one is not available). The secondary server would have to receive replication from the primary in any case.

If the budget doesn't allow more servers/replicas, you can put the master server in the cluster along with the replicas to help with the read traffic. Instead of an HA pair in which one of the servers is passive, you can have two masters configured in a P2P replication topology and put them both in the cluster to help with the traffic too. There are different ways to approach this method, depending on the level of redundancy wanted or affordable.
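SunOne/DSEE configures replication through its own replication agreements, but as a concrete illustration of a P2P (multi-master) topology, here is roughly what one of the two peers looks like in OpenLDAP (`slapd.conf`) terms; hostnames and credentials are placeholders:

```conf
# Illustrative OpenLDAP multi-master sketch - one of the two peers.
# The other peer mirrors this with serverID 2 and provider=ldap://ldap1...
serverID 1
syncrepl rid=001
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  searchbase="dc=example,dc=com"
  bindmethod=simple
  binddn="cn=replicator,dc=example,dc=com"
  credentials=secret
  retry="30 +"
mirrormode on
```

With both peers accepting writes and replicating to each other, either one can take over the master role if the other fails, which is the replication piece of the HA-pair approach described above.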

Wcf Domain Service vs Silverlight Enabled Wcf services

I am working on a Silverlight project that consumes domain services. Actually, I find that quite messy: one domain service class plus metadata. I have already worked with WCF services and found them very easy to update and handle, but modifying a domain service (as new fields or tables are added) is really a pain.
I want to know why people prefer domain services over Silverlight-enabled WCF services. I mean the advantages and disadvantages of both, and the performance implications.
After googling, I found these are the things you should consider:

- Faster authentication of users in the domain
- Faster authentication of resources (GPS etc.) for the users
- Utilization of resources
- Utilization of the network and decreasing the overall traffic in the network

The main benefit is the management of users and passwords, which could grow into a massive amount of work if you had to manage them individually on each independent server. The proposed change of migrating the whole platform to an Active Directory environment will assist in propagating changes (such as new users, password changes, new security requirements via GPO, etc.) to the servers (which will run as domain clients; only 1 or 2 will run as primary and secondary ADC. Not all of these servers are going to host AD or be an ADC; a server OS is used due to its robustness and reliability).

Disadvantages:

- Cost of infrastructure
- Good planning is a must
- Complex structure for users

LDAP Monitoring

I know LDAP is a Protocol but is there a way to monitor it?
I am using WhatsUp Gold monitoring and have been asked to look into LDAP monitors.
How can I set up monitoring for LDAP?
There is no standard for monitoring LDAP directory services, but most of the products support getting monitoring information via LDAP itself, under the "cn=monitor" suffix.
Servers such as OpenDJ (continuation of the OpenDS project, replacement of Sun DSEE) also have support for monitoring through SNMP and JMX.
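Since `cn=monitor` is exposed over LDAP itself, a plain `ldapsearch` is enough to pull the monitoring entries; the host and bind credentials below are placeholders for your own environment:

```shell
# Read the server's monitoring entries under cn=monitor
# (hypothetical host and credentials - substitute your own).
ldapsearch -H ldap://ldap.example.com:389 \
  -D "cn=Directory Manager" -w password \
  -b "cn=monitor" -s sub "(objectClass=*)"
```

A monitoring tool can run a query like this periodically and alert when the server stops answering or when a monitored counter crosses a threshold.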
Regards,
Ludovic.
I have been using cnmonitor (http://cnmonitor.sourceforge.net/) for some years with excellent results, although it's not perfect and there are some errors. You can see lots of statistics almost without doing anything: number of requests, searches, adds, modifications, deletes, index status, replication, schema, etc. It is also compatible with many different LDAP servers (although I have only used it with 389 Directory Server).