Right now I just have one slave DB that is receiving streaming binary replication from the master.
In my master's pg_hba.conf file, I have this entry:
host replication all 98.10.144.135/24 trust
Is it possible to add another entry with another IP? Will that enable streaming replication from master to the 2 slave servers?
Yes, and yes. Of course, you still have to set up the slave with a pg_basebackup and add an appropriate recovery.conf.
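Roughly, the setup of the second slave looks like this (the master address, replication role and data directory below are placeholders, not values from your setup, and recovery.conf only applies to PostgreSQL versions that still use it):

    # run on the new slave; point -h at the master
    pg_basebackup -h 198.51.100.20 -U replicator -D /var/lib/postgresql/data -X stream -P

    # recovery.conf in the new slave's data directory
    standby_mode = 'on'
    primary_conninfo = 'host=198.51.100.20 port=5432 user=replicator password=secret'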
Your current entry already allows all 254 servers from 98.10.144.1 to 98.10.144.254 to receive streaming replication data from your server, by the way. The last octet of the IP address, .135, is masked out (effectively ignored) by the netmask length of /24. If you don't understand why, see CIDR and subnetwork.
That potentially means that anybody on the netblock owner's network can access your server for replication. whois 98.10.144.135 says that is VoIP Residential, 1000 Picture Parkway, Webster, NY with control of 98.10.144.0/22. That might include their customers. So unless you are that company and control that network, change your settings to specify the exact server IP with a /32 now.
It's generally an incredibly bad idea to use trust with anything more than a single /32 IP, and even then only on a network you control. You should really be getting your replicas to authenticate, and if replication runs over a remote network you should be using SSL.
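For example, with one entry per standby the relevant pg_hba.conf lines might look like this (the second address and the replicator role are made-up examples, and md5 is just one possible auth method); reload the master afterwards, e.g. with SELECT pg_reload_conf():

    # TYPE  DATABASE     USER        ADDRESS            METHOD
    host    replication  replicator  98.10.144.135/32   md5
    host    replication  replicator  203.0.113.27/32    md5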
In this post, if I read it correctly, it was suggested that replication with ActiveMQ Artemis could be achieved with only two nodes as follows:
However, it's still possible for the virtual IP address to direct traffic to one of the two VMs based on the broker's availability since the backup broker will not be able to receive connections until the primary broker fails at which point the backup broker will become active and begin accepting connections.
I'm assuming from the answer that this can be accomplished simply by using the classic HA policy to configure one broker as the master and the other as the slave, and configuring them to be part of the same cluster, as per the documentation.
Is this a correct assumption?
The point from what you quoted from my answer was really just about the use of a virtual IP address in front of a primary/backup pair of brokers. I didn't mean to imply anything about the configuration of the primary/backup pair itself.
In short, even with a virtual IP address in front of the pair of brokers you still need a way to mitigate split brain, and the minimum viable way to do that is with a ZooKeeper node. If you use the "classic" configuration approach then there will be no mitigation for split brain.
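For reference, the "classic" primary/backup replication configuration you're assuming boils down to an ha-policy like this in each broker.xml (a sketch based on the documentation, not a recommendation), and it is exactly this form that has no split-brain mitigation:

    <!-- broker.xml on the master -->
    <ha-policy>
       <replication>
          <master/>
       </replication>
    </ha-policy>

    <!-- broker.xml on the slave -->
    <ha-policy>
       <replication>
          <slave/>
       </replication>
    </ha-policy>

Both brokers also need a cluster connection to each other, as you noted.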
I have a Redis Cluster that clients are connecting to via HAProxy with a virtual IP. The Redis cluster has three nodes (each node shares its server with a running Sentinel instance).
My question is: when a client gets a "MOVED" error/message from a cluster node upon sending a request, does it bypass HAProxy the second time it connects, since it has been provided with an IP:port when the MOVED message was issued? If not, how does HAProxy know the second time to send it to the correct node?
I just need to understand how this works under the hood.
If you want to use HAProxy in front of Redis Cluster nodes, you will need to either:
Set up an HAProxy instance for each master/slave pair, wire up something to update HAProxy when a failover happens, and probably also intercept the topology-related commands so the virtual IPs are inserted in place of the IPs the nodes themselves have and report in their topology responses.
Customize HAProxy to teach it how to be a cluster-aware Redis client so the actual client doesn't know about the cluster at all. This means teaching it the Redis protocol, storing the cluster's topology information, and selecting the node to query based on the key(s) being accessed by the consumer code.
With Redis Cluster the client must be able to access every node in the cluster. Of the two options above, Option 2 is the "easier" one, but at this point I wouldn't recommend either.
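To see why, try redis-cli against any cluster node (the addresses and slot number below are just illustrative): without -c the node hands back the MOVED redirection, and with -c the client itself opens a new connection straight to the node named in the reply, which is the step a VIP in the middle can't transparently do for you:

    $ redis-cli -h 10.0.0.1 -p 6379 GET foo
    (error) MOVED 12182 10.0.0.2:6379

    $ redis-cli -c -h 10.0.0.1 -p 6379 GET foo
    -> Redirected to slot [12182] located at 10.0.0.2:6379
    "bar"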
Conceivably you could use the VIP as a "first place to get the topology info" address, but I suspect serious issues would develop because that original IP would not be one of the ones properly reported as a node handling data. For that you could simply use round-robin DNS and avoid the problem, or use the built-in "here is a list of cluster IPs (or names)" option in the initial connection configuration.
Your simplest, and least likely to be problematic, route is to go "full native": give your clients full, direct access to every node in the cluster and don't use HAProxy at all.
LDAP fault-tolerance configuration (e.g. SunOne):
Does anybody know how to configure fault tolerance for LDAP, e.g. SunOne LDAP?
I searched via Google without any useful result.
Thanks
Assuming, by "fault tolerance," "high availability (HA)" is being asked, I would say it can be achieved by redundancy. And, it would not be peculiar to SunOne or any directory server software from other vendors.
There are different ways to solve this. It depends on the business requirements and the affordability. One method that comes to mind is to have the LDAP software installed on an HA pair. This requires hardware and OS capabilities for fail-over and it requires two servers (in a world of virtualization, "server" can mean different things [physical box, frame, LPAR, etc.]; so, I'll just leave the interpretation to the reader). When one server fails, the other server takes over and assumes the primary role in the pair. This is the fault-tolerance part. In this approach, the machine/server with the secondary role is passive (i.e., it's not serving clients) until the primary goes down. You will need to implement LDAP data replication between two servers. They can be two LDAP masters in a P2P replication topology.
Another method is to have multiple LDAP servers (i.e., masters, replicas) and cluster them using a network dispatcher (ND) software/appliance/etc., which distributes the incoming traffic to the individual servers (usually replicas) in the cluster. If you lose one replica in the cluster, the ND will not send any traffic to that replica until it comes back. However, the other replicas will still be receiving load and therefore serving the incoming traffic. This is the fault-tolerance part in this method.

The degree of availability you want will also dictate what can be done in a clustered environment. You can have a single LDAP master (to which the organization's applications make updates) and keep it out of the cluster, but pair it with another server for fail-over, so you don't lose availability for updates from the applications. This also gives you the freedom to do maintenance on the master without interrupting your applications (well, they need to be written to be able to write to more than one LDAP master if the primary one is not available). You would have to have the secondary server receive replication from the primary in any case.

If the budget doesn't let you have more servers/replicas, then you can put the master server in the cluster along with the replicas to help with the read traffic. Instead of an HA pair in which one of the servers is passive, you can also have two masters configured in a P2P replication topology and put them both in the cluster to help with the traffic. There are different ways to approach this method depending on the level of redundancy wanted or that can be afforded.
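As a concrete illustration of the network-dispatcher method, something like HAProxy can spread LDAP traffic across the replicas and stop sending to one that goes down (a sketch only; addresses, ports and the health-check choice are assumptions, not SunOne-specific):

    frontend ldap_in
        bind *:389
        mode tcp
        default_backend ldap_replicas

    backend ldap_replicas
        mode tcp
        balance roundrobin
        option ldap-check          # simple LDAPv3 health probe; adjust to your environment
        server replica1 10.0.0.11:389 check
        server replica2 10.0.0.12:389 check
        server replica3 10.0.0.13:389 check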
Does Redis have a built-in mechanism that will use the slave when the master is down?
Can I use a virtual IP to direct traffic to the master, and when the master is down, is it possible to direct it to the slave?
As per the documentation:
elect the slave to master using the SLAVEOF NO ONE command, and shut down your master.
But how will the application know about the changed IP?
MySQL has a third-party utility called MMM (master-master replication with a monitor). Is there such a utility for Redis?
You can use a virtual IP in a load balancer, though this is not built in to Redis. Any quality hardware or software load balancer should be able to do it. For example, you can use "balance" or HAProxy to front the VIP and use a script or rules that check the status of the Redis instances to see which one is the master and set that as the destination in the load balancer (LB).
Going this route would require one or more additional servers (or VMs depending on your setup) but it would provide you with a configuration that has clients talking to a single IP and being clueless about which server they need to talk to on the back end. How you update the LB with which server to talk to is dependent on what LB you use. Fortunately, none of them would need to know or handle the Redis protocol; they are just balancing a port.
When I go this route I go with a Slave-VIP and a Master-VIP. The Slave-VIP load balances across all Redis instances, whereas the Master-VIP only has the current master enabled. If your write load is very heavy you can leave the current master out of the Slave-VIP pool. Otherwise I would leave it in; that eliminates the need for failover updating of the Slave-VIP pool.
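As an example of the Master-VIP part, an HAProxy backend can probe each Redis instance and only enable the one currently reporting itself as master (a sketch; the IPs and names are placeholders, and whichever failover mechanism you use, manual SLAVEOF NO ONE or Sentinel, the check simply starts passing on the newly promoted node):

    backend redis_master_vip
        mode tcp
        option tcp-check
        tcp-check send PING\r\n
        tcp-check expect string +PONG
        tcp-check send info\ replication\r\n
        tcp-check expect string role:master
        tcp-check send QUIT\r\n
        tcp-check expect string +OK
        server redis1 10.0.0.1:6379 check inter 1s
        server redis2 10.0.0.2:6379 check inter 1s
        server redis3 10.0.0.3:6379 check inter 1s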
Can someone explain to me how high availability ("HA") works for a web application ... because I assume HA means that there is no single point of failure.
However, even if a load balancer is used, isn't that a single point of failure?
I have found this article on the subject:
http://www.tenereillo.com/GSLBPageOfShame.htm
Basically, if you do not require long-lasting sticky sessions, you can configure your DNS servers to return multiple A records (IP addresses) for your website.
Web browsers are smart enough to try all the addresses until they find one that works.
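In zone-file terms that just means publishing more than one A record for the same name, e.g. (example name and documentation addresses, not real ones):

    www.example.com.    300    IN    A    198.51.100.10
    www.example.com.    300    IN    A    203.0.113.20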
In simple words, high availability can be defined as running a system 24x7 without downtime even if there are hardware or software failures; in other words, a fault-tolerant application. This helps ensure uninterrupted use of the application for its intended users.
Read more on High Availability Deployment Architecture
It works the following way: you set up two HAProxy servers with heartbeat, so when one fails (stops responding to queries), it is removed from the cluster.
Requests from HAProxy can be forwarded to the web servers in round-robin fashion, and if one web server fails, the HAProxy servers do not try to contact it until it is alive again.
The web servers store all dynamic information in a database, which is replicated across two MySQL instances.
As you can see, HAProxy and clustered MySQL (or simply MySQL replication), as well as IP clustering, are the key here.
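The heartbeat piece can be as small as a VRRP configuration that floats a single virtual IP between the two HAProxy boxes; here is a minimal sketch using keepalived as one common way to do it (the interface name, priorities and the VIP are placeholders):

    vrrp_instance HAPROXY_VIP {
        state MASTER              # the peer is configured with state BACKUP
        interface eth0
        virtual_router_id 51
        priority 101              # the peer gets a lower priority, e.g. 100
        advert_int 1
        virtual_ipaddress {
            192.0.2.100
        }
    }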
Sure it is, when operated alone. The usual highly available setup includes two or more load balancers running in a cluster in either active/active or active/passive configuration. To further increase availability you can have two different Internet Service Providers (or geo-distributed datacenters), each running a pair of clustered load balancers. Then you configure DNS A records resolving to two distinct public IP addresses, which guarantees round-robin processing that splits DNS requests evenly (CloudFlare is very fast and reliable at this). There's also the possibility of returning the IP address of the datacenter closest to your originating geo location by using something like PowerDNS dnsdist.
This is what big players do to make their services highly available.
Please read https://docs.oracle.com/cd/E23824_01/html/821-1453/gkkky.html for more clarity. Actually, both load balancers use the same VIP (Virtual IP Address, https://techterms.com/definition/vip).
HA architecture is an entire field, and multiple books have been written on it, so it is hard to answer in a short paragraph.
To sum up the ideal situation, you would be using multiple servers, interconnected to a layer of multiple load balancers. The nodes and LBs would be located in a few different data centers and connected to different network backbones. Ideally the data centers would be located all over the world.
In short, every component would have redundancy, including the load balancers.
For a starting point, see Wikipedia for High Availability Cluster