CloudSQL Replicas Load Balancing

When a read replica is created, two IP addresses are assigned: one to the master and one to the read replica.
So when an application connects to Cloud SQL using the master IP, does it use only the master instance, or is it connected to both instances?
Does Cloud SQL load balance the traffic among the replicas, or does the application have to connect to the replicas manually?
Is there a way to achieve this without manually connecting to each instance?

So when an application connects to Cloud SQL using the master IP, does it use only the master instance, or is it connected to both instances?
When the client is connected to the IP address of the master, it is only connected to the master.
Does Cloud SQL load balance the traffic among the replicas, or does the application have to connect to the replicas manually?
Google Cloud SQL does not load balance. If you wish to distribute read-only traffic, the client must perform that function.
Is there a way to achieve this without manually connecting to each instance?
No. The client must connect to the master and the replicas itself to distribute read-only traffic, and it must include logic to send write traffic to the master only.
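To make this concrete, here is a minimal sketch of client-side read/write splitting in Python, assuming the PyMySQL driver and placeholder IPs, credentials, and database names (none of these come from the question):

    import random
    import pymysql  # pip install pymysql; the driver choice is an assumption

    MASTER = "203.0.113.10"                      # master IP (placeholder)
    REPLICAS = ["203.0.113.11", "203.0.113.12"]  # replica IPs (placeholders)

    def connect(host):
        return pymysql.connect(host=host, user="app", password="secret", database="appdb")

    def query_read(sql, args=None):
        # The client, not Cloud SQL, picks a replica -- this is the load balancing.
        conn = connect(random.choice(REPLICAS))
        try:
            with conn.cursor() as cur:
                cur.execute(sql, args)
                return cur.fetchall()
        finally:
            conn.close()

    def query_write(sql, args=None):
        # Writes must always go to the master.
        conn = connect(MASTER)
        try:
            with conn.cursor() as cur:
                cur.execute(sql, args)
            conn.commit()
        finally:
            conn.close()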
I wrote an in-depth article on this topic:
Google Cloud SQL for MySQL – Connection Security, High Availability and Failover

Related

Redirecting redis client to slave if master is under a large transaction (redis cluster) and vice versa

I am trying to implement a 3-master, 3-slave architecture with Redis Cluster. I want to redirect my client to a slave if the master is blocked (for example, while running a MULTI/EXEC transaction), or to the master if the slave is busy synchronising that MULTI/EXEC. Is there any way to achieve this through Redis configuration, or do I need to implement this logic manually with the client library (redis-rb) I am using?
Thanks in advance.
As far as I know, there isn't any proxy or balancing in Redis Cluster that you can control. Redis Cluster nodes don't proxy commands to the node in charge of a given key; instead, they redirect clients to the right node serving a given portion of the keyspace. So you can't control this from the configuration.
Your MULTI/EXEC case may be handled by the client library, because it knows all about the Redis master node configuration.
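For illustration, here is a rough Python sketch (using redis-py rather than redis-rb, with placeholder node addresses) of the redirection behaviour described above: the client asks any node, and if that node does not own the key's hash slot it answers with a MOVED error naming the right node, which the client then retries against:

    import redis  # pip install redis

    def cluster_get(key, host="127.0.0.1", port=7000):
        try:
            return redis.Redis(host=host, port=port).get(key)
        except redis.exceptions.ResponseError as err:
            msg = str(err)
            if msg.startswith("MOVED"):
                # e.g. "MOVED 3999 127.0.0.1:7002" -> retry on the node that owns the slot
                new_host, new_port = msg.split()[2].rsplit(":", 1)
                return redis.Redis(host=new_host, port=int(new_port)).get(key)
            raise

Cluster-aware clients (redis-rb-cluster, or redis.cluster.RedisCluster in redis-py) keep a slot map and do this redirection handling for you.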

State replication of memory variables in a game/application server

How does a cluster of servers behind a load balancer achieve the replication of variables that are initialized during a client/server session? For example, when a client (say, a game client) starts a session with a load balancer that forwards the request to an available server, how does that server replicate that session and its in-memory state to the other servers in the cluster?
             LB
              |
     App     App     App
      |
(memory variable)
      |------------ replicated?
Or are the memory variables for the established session NOT replicated, with only session files replicated in a database tier? That wouldn't account for all the variables a server must keep in memory.
It seems to me that to achieve synchronization in a multiplayer game, a cluster of servers must replicate all their state to the other servers, but does that mean replicating every memory variable?
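One common pattern, consistent with the database-tier idea above, is not to replicate process memory at all but to keep session state in a shared store that every app server behind the LB can reach; a minimal Python sketch, where the Redis host and key scheme are assumptions:

    import json
    import redis  # a shared store reachable from every app server

    store = redis.Redis(host="sessions.internal", port=6379)  # placeholder host

    def save_session(session_id, state):
        # Serialize the would-be memory variables instead of replicating process memory.
        store.set("session:" + session_id, json.dumps(state), ex=3600)

    def load_session(session_id):
        raw = store.get("session:" + session_id)
        return json.loads(raw) if raw else {}

Fast-changing state that can't afford a round trip per update (e.g. live game ticks) is usually pinned to one server with sticky sessions rather than replicated.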

Possible to have multiple replication entries in pg_hba.conf? (multiple slaves)

Right now I have just one slave DB that is receiving streaming binary replication from the master.
In my master's pg_hba.conf file, I have this entry.
host replication all 98.10.144.135/24 trust
Is it possible to add another entry with another IP? Will that enable streaming replication from the master to the two slave servers?
Is it possible to add another entry with another IP? Will that enable streaming replication from the master to the two slave servers?
Yes, and yes. Of course, you still have to set up the slave with a pg_basebackup and add an appropriate recovery.conf.
Your current entry already allows all 254 servers from 98.10.144.1 to 98.10.144.254 to receive streaming replication data from your server, by the way. The last octet of the IP address, .135, is masked out (effectively ignored) by the netmask length of /24. If you don't understand why, see CIDR and subnetwork.
That potentially means that anybody in the netblock owner's network can access your server for replication. whois 98.10.144.135 says that it is VoIP Residential, 1000 Picture Parkway, Webster, NY, with control of 98.10.144.0/22, which might include their customers. So unless you are that company and control that network, change your settings to specify the exact server IP with /32 now.
It's generally an incredibly bad idea to use trust for anything more than a single /32 IP, and even then only on a network you control. You should really require your replicas to authenticate, and if replication crosses a remote network, you should be using SSL.
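Putting that advice together, the master's pg_hba.conf would carry one tightly scoped line per standby; the second IP and the md5 method here are illustrative assumptions:

    # pg_hba.conf on the master: one /32 entry per standby, with authentication
    host    replication    all    98.10.144.135/32    md5
    host    replication    all    98.10.144.136/32    md5

With md5 in place, each standby's recovery.conf connection string must supply the matching password.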

ActiveMQ Master/Slave Pair with Network of Brokers

I was able to set up a network of brokers with the store-and-forward strategy, and it is working fine. I have been given bigger machines now and would like to set up a master/slave pair within the network of brokers. I understand masters don't need any config changes, but slaves should point to their corresponding master with a URI. However, I'm not very clear on what URI to specify in the client. I'm using the 5.6 release.
For example: two machines, with MasterA and SlaveB on one machine, and MasterB and SlaveA on the other. No network connectors between masters and slaves, but a network connector between MasterA and MasterB. I hope I'm right up to this point. What about the client URI? I'm currently using the nio protocol at the clients, like failover:(nio://localhost1:61616,nio://localhost2:61616)?randomize=true. I specify randomize=true to balance the load between the brokers.
What client URI should I use? Should I include all the brokers' URIs or just the masters'? Can I still use the nio protocol? I'd prefer randomize=true so that the load stays balanced.
In the simplest case, the client URI should contain all four brokers, i.e. both master/slave pairs of URIs.
The network connectors need to be prepared to bridge master to master or master to slave, whichever is available.
There is a new masterslave: discovery agent in 5.6 that simplifies the configuration of a network connector.
http://activemq.apache.org/networks-of-brokers.html#NetworksofBrokers-MasterSlaveDiscovery
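Sketching it out with placeholder hostnames and ports (MasterA on machine1:61616, SlaveB on machine1:61617, MasterB on machine2:61616, SlaveA on machine2:61617), the client lists all four brokers:

    failover:(nio://machine1:61616,nio://machine2:61617,nio://machine2:61616,nio://machine1:61617)?randomize=true

and the A pair's activemq.xml bridges to whichever broker of the B pair is currently alive:

    <networkConnectors>
      <networkConnector uri="masterslave:(tcp://machine2:61616,tcp://machine1:61617)"/>
    </networkConnectors>

randomize=true can stay on: a slave does not start its transport connectors until it takes over, so the failover transport simply skips the inactive entries.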

Redirect to slave

Does Redis have a built-in mechanism that will use the slave when the master is down?
Can I use a virtual IP that directs to the master and, when the master is down, directs to the slave?
As per the documentation:
elect the slave to master using the SLAVEOF NO ONE command, and shut down your master.
But how will the application know about the changed IP?
MySQL has a third-party utility called MMM (master-master replication with a monitor). Is there such a utility for Redis?
You can use a virtual IP in a load balancer, though this is not built into Redis. Any quality hardware or software load balancer should be able to do it. For example, you can use "balance" or HAProxy to front the VIP, with a script or rules that check the status of the Redis instances to see which one is master and set that as the destination in the load balancer (LB).
Going this route would require one or more additional servers (or VMs depending on your setup) but it would provide you with a configuration that has clients talking to a single IP and being clueless about which server they need to talk to on the back end. How you update the LB with which server to talk to is dependent on what LB you use. Fortunately, none of them would need to know or handle the Redis protocol; they are just balancing a port.
When I go this route I go with a Slave-VIP and a Master-VIP. The slave-VIP load balances across all Redis instances, whereas the Master-VIP only has the current master enabled. If your write load is very heavy you can leave the current master out of the Slave-VIP pool. Otherwise I would leave it in; that eliminates the need for failover updating of the Slave-VIP pool.
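As an illustration of that health-check idea, an HAProxy backend for the Master-VIP can find the current master by asking each Redis instance for its role; the server addresses are placeholders:

    backend redis_master_vip
        mode tcp
        option tcp-check
        tcp-check connect
        tcp-check send PING\r\n
        tcp-check expect string +PONG
        tcp-check send info\ replication\r\n
        tcp-check expect string role:master
        tcp-check send QUIT\r\n
        tcp-check expect string +OK
        server redis1 192.0.2.11:6379 check inter 1s
        server redis2 192.0.2.12:6379 check inter 1s

Only the instance that answers role:master passes the check, so the LB marks everything else down and writes land on the current master without the clients knowing which box that is.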