Is there an LDAP C/C++ library that provides failover?

I'm looking for an LDAP library in C or C++ that allows me to specify a list of LDAP hostnames instead of a single hostname. The library should then use the first one it can connect to, in case one or more of the servers are down. I'm sure it'd be easy to wrap an existing library to do this, but why reinvent the wheel?

Use multiple DNS A records, each with a different IP:
ldapserver.example.com. IN A 1.2.3.4
ldapserver.example.com. IN A 2.3.4.5
The OpenLDAP client libs will try each host in turn. Failover is (unfortunately) as slow as your TCP connection timeout, though you can bound that per attempt, as sketched below.
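For illustration, a minimal sketch with the OpenLDAP C API that caps the connect delay; the hostname is the example above and the two-second budget is an arbitrary assumption:

#include <ldap.h>

/* Sketch: cap the per-host connect delay so DNS-based failover does not
 * hang for the full OS TCP timeout. Hostname as in the zone file above. */
int open_with_timeout(LDAP **ld)
{
    struct timeval tv = { 2, 0 };  /* hypothetical 2-second budget per attempt */
    int version = LDAP_VERSION3;
    int rc = ldap_initialize(ld, "ldap://ldapserver.example.com");
    if (rc != LDAP_SUCCESS)
        return rc;
    ldap_set_option(*ld, LDAP_OPT_PROTOCOL_VERSION, &version);
    ldap_set_option(*ld, LDAP_OPT_NETWORK_TIMEOUT, &tv);
    return LDAP_SUCCESS;
}

With LDAP_OPT_NETWORK_TIMEOUT set, each connect attempt fails fast and the library moves on to the next address rather than waiting out the OS default.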

The Novell CLDAP libraries (and Java libraries) support a space-separated list of hosts when connecting; the library tries each one in turn, as noted in the ldap_init() documentation.
OpenLDAP's libldap also supports a space-separated list of hosts passed to ldap_open(), or a comma-separated list of URIs passed to ldap_initialize().
The only catch is to make sure you handle the LDAP_SERVER_DOWN error that gets returned after a connection goes away. I usually write a wrapper function that tries an operation (e.g. a search), reconnects if LDAP_SERVER_DOWN occurs, and then performs the operation again, something like the sketch below.
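A minimal sketch of that wrapper using the OpenLDAP C API; the URIs, base DN, and filter are placeholders, and real code would bind with credentials and cap the number of retries:

#include <ldap.h>
#include <stddef.h>

/* Hypothetical pair of servers; ldap_initialize() accepts a
 * comma-separated URI list and tries each in turn. */
#define LDAP_URIS "ldap://ldap1.example.com,ldap://ldap2.example.com"

static int reconnect(LDAP **ld)
{
    int version = LDAP_VERSION3;
    int rc = ldap_initialize(ld, LDAP_URIS);
    if (rc == LDAP_SUCCESS)
        ldap_set_option(*ld, LDAP_OPT_PROTOCOL_VERSION, &version);
    return rc;
}

/* Run a search; if the connection went away, reconnect once and retry. */
int search_with_retry(LDAP **ld, const char *base, const char *filter,
                      LDAPMessage **res)
{
    int rc = ldap_search_ext_s(*ld, base, LDAP_SCOPE_SUBTREE, filter,
                               NULL, 0, NULL, NULL, NULL, 0, res);
    if (rc == LDAP_SERVER_DOWN) {
        ldap_unbind_ext_s(*ld, NULL, NULL);
        if (reconnect(ld) != LDAP_SUCCESS)
            return LDAP_SERVER_DOWN;
        rc = ldap_search_ext_s(*ld, base, LDAP_SCOPE_SUBTREE, filter,
                               NULL, 0, NULL, NULL, NULL, 0, res);
    }
    return rc;
}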

I can't say I've ever heard of one. Furthermore, most LDAP-capable software I've used supported failover poorly or not at all. You might be better off implementing the failover on the server side, by putting the directory behind a load balancer or similar.

vb.net - passing a parameter to an application which is already running

Both Pipes and ASP.NET Core gRPC support local and remote IPC/RPC (with some platform limitations for gRPC).
When would I use one technology (Pipes) or the other (gRPC)?
Observations, thoughts and considerations I'm keeping in mind:
gRPC seems to be geared towards replacing WCF in some future iteration.
Local deployments, with machine restrictions (running as non-admin/user, machine firewalls, different platforms/OS)
Network traversal, and compatibility from same-machine to multi-machine (frontend/backend arrays) for load and expansion
Spanning secure zones (where a proxy is used, or other TLS cipher/order/registry settings) affects the ability of HTTP/2 to work
Pipes (named pipes?) have a different surface area and port (do they also use port 135, or NetBIOS over TCP?)... how are they scanned and secured?
"Memory-mapped files" seem to be a challenge to get working, yet this does seem to work in ASP.NET Core with gRPC in the UDS configuration. Is this a correct inference?
Right now my scenario is to have two console apps communicate with each other, on the same machine or remotely. Adding an ASP.NET Core web front end is an optional alternative for my scenario.
Simple IPC
It depends on how much communication is going to happen. If your communication is limited to simple collaborative signal passing or sharing some data between two processes, you can safely use NamedPipeClientStream and NamedPipeServerStream on the local system or local network. But if you plan to do the same between different systems, then I would suggest using TcpClient and TcpListener.
Comprehensive IPC
WCF, or now its replacement gRPC, is for scenarios where a complete API/framework needs to be executed remotely. For example, I have an entire library of classes that I need to call from a different process (which usually runs on a different system); in that case, gRPC-style solutions make more sense.
Only you can decide.
This is a design decision that is highly specific to your application, your future plans, and your system environment; any third person can only give you clues, but ultimately you are the only one who can make the right decision.

Configure F5 load balancer for LDAP

We are currently running LDAP in a master-master configuration with one primary. We are supplying the Spring LdapContextSource with two LDAP nodes to use as primary/failover.
We went to this configuration because our LDAP had previously been behind an F5 load balancer, but we would run into replication issues when a user was created on node A and the F5 sent the subsequent updates to node B before the two could sync.
However, we are now running into a situation where we over-utilize one node and ignore the second.
What I would like to do is configure the F5 so that all create, update, and delete operations go to a primary node, while reads are distributed between the two LDAP nodes.
Any thoughts on how to configure the F5 to achieve this?
For reference, we are using the 389-ds implementation of LDAP.
Recommendation: split the work into two separate VIPs if possible. At least that's what we've done with MySQL here: a write VIP and a read-only VIP. I know this question is about LDAP, but LDAP is a type of database, and your needs are very similar to the MySQL read/write dilemma.
Write VIP: Set up your F5 pool with Priority Group Activation set to "Less than 1" on the Members tab. This is a failover configuration and does not split load, since the LDAP sync isn't fast enough to support that. The node with the higher priority number takes the traffic first; if it goes down, traffic flows to the lower-priority node. You assign the priority as you add each node.
Read VIP: Load-balance traffic with a typical configuration as you had it before.
For both VIPs, you of course need a valid LDAP query in your health monitor that proves the service is working correctly. If your directory allows it, you don't even have to log in: you can just read the directory, searching for a particular base and a filter defining an object within that base. This makes the health monitor faster and less troublesome while remaining effective. LDAP login in F5 monitors can be a major pain, so it's nice to skip it when feasible. A sketch of such a probe follows.
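For illustration, here is roughly what such an anonymous probe amounts to, expressed with the OpenLDAP C API; the node address, base DN, and filter are placeholders for whatever your monitor is configured with:

#include <ldap.h>

/* Sketch of a monitor-style probe: anonymous, base scope, short timeout.
 * LDAP_SUCCESS means the node is answering reads. */
int ldap_health_probe(void)
{
    LDAP *ld = NULL;
    LDAPMessage *res = NULL;
    struct timeval tv = { 3, 0 };   /* fail fast so the F5 reacts quickly */
    int rc = ldap_initialize(&ld, "ldap://ldap-node-a.example.com");
    if (rc != LDAP_SUCCESS)
        return rc;
    rc = ldap_search_ext_s(ld, "dc=example,dc=com", LDAP_SCOPE_BASE,
                           "(objectClass=*)", NULL, 0, NULL, NULL,
                           &tv, 1, &res);
    if (res != NULL)
        ldap_msgfree(res);
    ldap_unbind_ext_s(ld, NULL, NULL);
    return rc;
}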

Can Cloudbees instances within an app communicate directly?

I am looking to build an Akka-based application in the cloud, for a garage startup that I'm bootstrapping; by the nature of the app, it's semi-stateful, with as much as possible cached in RAM for performance. (It'll be tolerant of being shut down and restarted periodically, but we want to mostly operate via cached information inside the Actors.)
The architecture is designed for a cluster of servers, communicating between them as necessary so that a user session on node A can query a middleware Actor on node B when appropriate. So my question is, how hard is that in CloudBees?
My understanding from this page is that there is no automatic directory service to manage this sort of intra-cluster communication yet, but I can probably live with that; if worse comes to worst, I should be able to manage discovery via the DB, with each node registering itself when it comes up and opening up many-to-many communications with the others.
What I want to check, though, is that this communication is straightforward. Does each node have a reliable local IP that it can advertise for others to contact it on, that is at least stable during this run of the application? Or is there another/better way for a node to advertise its address to the rest of the nodes running this app?
(I assume that the nodes of an app all share the same DB instance.)
Any guidance here would be greatly appreciated. I'd like to choose a hosting provider soon, and keep returning to CloudBees as the most promising-looking of the options...
There are currently no limitations on instances communicating with each other; the trick is in discovering membership. There is an API that will shortly be released that will allow you to track membership, but for now the following may work:
To get the port, look at the file names in $PWD/.genapp/ports (applications can have multiple ports). For example, take System.getenv("PWD") + "/.genapp/ports" and list the files in that directory; generally there will be just one, and the file name is the port. There are other ways too, for example the "sun.java.command" system property on JVM apps.
The hostname can be obtained via the usual means (e.g. InetAddress.getLocalHost().getHostName()): this hostname will be the private name, i.e. it will resolve to a private IP, which is good for node-to-node communication.
Public IP/hostname: perform an HTTP GET (from the server) to the following URL: http://instance-data/latest/meta-data/public-hostname (it will only return the public hostname when queried from the server side, of course).
(see http://developer-blog.cloudbees.com/2012/11/finding-port-or-address-of-your.html)
You can then, as you say, register the appropriate port/private hostname with a DB on startup, and then read that on each node to "seed" the cluster (Akka doesn't have to know about all members, just enough seeds). I would suggest a two-phase startup: first register the host/port, then look for other members and add them as seed members to the local Akka configuration (you may need to repeat this periodically for a while as other nodes start up, to ensure the cluster is seeded enough).
From my reading of the Akka setup here: http://doc.akka.io/docs/akka/snapshot/scala/remoting.html, it looks like you can specify the port, so if possible I would set that to the app_port environment variable; that way each node can communicate via the private hostname on that port. However, HTTP traffic will also be routed to it. Can Akka handle this as well, or does it need a discrete port for Akka and another for any HTTP interface?

Apache Camel equivalent in Rails

Is there an equivalent to Apache Camel in Rails?
I'm creating an application that needs to "listen" for messages from one source (for example, email via POP3) and send them to another (for example, a logfile, or email via SMTP).
Any ideas?
I am not sure about a complete equivalent to Apache Camel, but to just listen for mail from a POP3 server and send it to another source, try the mailman gem.
EDIT: You should also look at the mailcatcher gem.
I am pretty sure there are no ports of Apache Camel to other languages, including Ruby (there was a question about .NET recently as well). However, you can use Apache Camel alongside your application: treat Camel as an independent daemon that you configure, which you can do conveniently via XML. If you need some of your Ruby code to be invoked during processing, you can use Camel's JRuby support (the org.jruby:jruby dependency). It may be less than ideal, but it works well. To interact with external systems, Camel already supports a large number of protocols (including the ones you mentioned), and you can plug in your own.
Given Camel's support for many languages, protocols and data formats, I doubt anybody will go through the significant effort of porting it to other languages, but you never know.
You should definitely look at Llama.
It is at an early stage, but it seems they are going to build "an integration-framework on top of EventMachine that helps with tying together various backend services", which is what Camel is.

Can the SVN and HTTP protocols be used safely on the same repository simultaneously?

We would like to evaluate whether the SVN protocol works better for our team than HTTP, but we don't want to commit to a full switch just yet.
Right now we have an Apache server serving up our main repository. Can we safely use svnserve.exe with the same repository so that a few of our developers can test it? My initial guess is that we can, but we don't want to risk corrupting our repository.
Yes, it's possible. The official SVN book has a chapter devoted to this situation: http://svnbook.red-bean.com/en/1.5/svn.serverconfig.multimethod.html. There are some pitfalls, but they have more to do with permission settings.
Exactly: Subversion is designed to support concurrent access via multiple protocols, something that causes major problems with CVS. Not only can you use http:// and svn://, but also file:// (if you happen to be working locally on the machine, for example with a continuous integration tool or other post-commit hook), https://, svn+ssh://, etc.
In my experience, one method hasn't proven to be objectively "better" than the other, but there are certain benefits to each. For example, Apache is extremely adept at handling lots of accesses at once. On the other hand, if you're not already using Apache, or don't want to make it handle SVN traffic, the svnserve daemon is lightweight and quite performant. On my Macs, I set up svnserve using launchd to start up only when a request comes in, so it doesn't use any resources when there is no repository activity. What works best will largely be a factor of the access patterns you see in practice.