Equivalent to phpmemcacheadmin for Redis

Could someone please suggest a tool like phpmemcacheadmin that allows me to add multiple Redis servers and monitor stats such as del/s, set/s, get/s, etc.?
I currently have more than 10 Redis servers for different projects and want a centralized tool to monitor all of them. I was looking at the phpredmin and php-rb projects, but the problem is that I can only add one server, or at most a master and its slave.
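For context, these counters come from Redis's INFO command: the ops/s figure is in the stats section and the per-command get/set/del counts are in commandstats. A rough sketch of pulling them by hand (hostnames are placeholders):

# Poll each Redis server for throughput and per-command counters (hostnames are placeholders)
for host in redis01.example.com redis02.example.com; do
    echo "== $host =="
    redis-cli -h "$host" -p 6379 info stats | grep instantaneous_ops_per_sec
    # cmdstat_* counters are cumulative since startup (or the last CONFIG RESETSTAT)
    redis-cli -h "$host" -p 6379 info commandstats | grep -E 'cmdstat_(get|set|del):'
done

Doing this by hand across 10+ servers is exactly what I would like a dashboard to replace.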

Related

SQL Database Blue / Green deployments

I am dealing with the infrastructure for a new project. It is a standard Laravel stack: PHP, an SQL server, and Nginx. For the PHP + Nginx part we are using a Kubernetes cluster, so scaling and blue/green deployments are taken care of.
When it comes to the database I am a bit unsure. We don't want to run SQL on Kubernetes, so the current idea is to go with the Google Cloud SQL managed service (are its competitors better for blue/green deployment of SQL?). The question is: can it sync the data between the old and new versions of the database nodes?
Let's say that we have 3 active Pods and at least 2 active database nodes (plus a load balancer).
The standard deployment should look like this:
A Pod with the new code is created.
A new database node is created with the current data.
The new Pod gets new environment variables to connect to the new database.
Database migrations are run on the new database node.
A health check for the new Pod is run; if it passes, the Pod starts to receive traffic.
One of the old Pods is taken offline.
This iteration repeats until all of the Pods and database nodes have been replaced.
The question is: can this work with the database? Imagine a user on the website who writes some data to the last OLD database node; when traffic is switched to the NEW database node, that data simply isn't there until the last database node is upgraded. Can the nodes be synced behind the scenes? Does the Google Cloud SQL managed service provide that?
Or is there a completely different and better solution to this problem?
Thank you!
I'm not 100% sure this is what you are looking for, but to my understanding Cloud SQL replicas would be a better solution. You can have read replicas [1], which are copies of the master instance and come with different options [2]:
A read replica is a copy of the master that reflects changes to the master instance in almost real time. You create a replica to offload read requests or analytics traffic from the master. You can create multiple read replicas for a single master instance.
or a failover replica [3], so that if the master goes down, the data continues to be available there:
If an instance configured for high availability experiences an outage or becomes unresponsive, Cloud SQL automatically fails over to the failover replica, and your data continues to be available to clients. This is called a failover.
You can combine those if you need to.
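If you go the replica route, creating a read replica is a single gcloud command; this is just a sketch, and the instance names are placeholders:

# Create a read replica of an existing Cloud SQL instance (names are placeholders)
gcloud sql instances create laravel-db-replica \
    --master-instance-name=laravel-db-master

Keep in mind that read replicas are read-only, so for the schema-migration side of blue/green you would still run your Laravel migrations against the single master and let the changes replicate.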

Prometheus target management

We recently started using Prometheus in our production environment. Previously we only had 30-40 nodes for each service, and those servers did not change very often, so we simply listed them in prometheus.yml. Now the file has become too long to keep in one piece and changes much more frequently than before. My question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I fully advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster at all. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you stick with Consul, the answer is yes: the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
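Either option is only a few lines in prometheus.yml; a rough sketch with placeholder job names, file paths, and Consul address:

scrape_configs:
  - job_name: 'node'                          # placeholder job name
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets/*.json'  # edit or regenerate these files; Prometheus picks up changes without a restart
  - job_name: 'consul-services'
    consul_sd_configs:
      - server: 'consul.example.com:8500'     # placeholder Consul server address

With file_sd_configs you can generate the target files from whatever inventory you already maintain, which is often a good middle step before a full Consul rollout.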

Unison sync across more than 2 computers

I am currently using Unison across 2 computers (server and laptop). I need to create another connection so that I can regularly back up my data from the server.
laptop <--> server -> backup
Here the connection from the server to the backup machine can be unidirectional. Is there any way to accomplish this?
This is a very common thing to do. When setting up Unison across multiple machines, you should prefer a star topology. So if you can, run another instance of Unison, along with any backup-related scripts, on the machine where you are storing your backups. It should look about the same as your setup on your laptop (depending on where your backups are being stored).
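For the one-directional server -> backup leg, one option is to run Unison from the backup machine and tell it that the server's replica always wins; a sketch with placeholder paths and hostname:

# Run on the backup machine: changes from the server always win (one-way in practice)
unison /srv/backup/data ssh://server.example.com//home/user/data \
    -force ssh://server.example.com//home/user/data -batch

The -force flag makes the given root's version win every conflict, so the backup side never pushes anything back to the server.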

Do I need to run the WebLogic node manager on a single machine that has multiple WebLogic instances?

Foreword: I'm using Java 6u45, WebLogic 10.3.6, and Ubuntu Desktop 14.04 64-bit.
I just started as a student assistant at one of my state's IT offices. On my first day I was tasked with testing WebLogic on Ubuntu (Windows isn't case sensitive, which caused issues later because WebLogic is...). I started messing around with clustering, and now my setup is as follows:
1 Ubuntu machine
1 domain
6 servers: Admin server, wls1-4, and wlsmaster (wlsmaster was supposed to be what wls1 and wls2 reported to within the cluster because I set the cluster to be unicast, but that's a secondary question for now).
2 clusters: cluster1 and cluster2. wls1, wls2, and wlsmaster are on cluster1. wls3 and 4 are on cluster2.
Given my setup, do I even need to use the node manager, since I'm only using one physical machine? Secondary question: if I want to use unicast, how do I set the master? $state uses unicast for the few WebLogic servers we have, so I was told to check that out.
A few things:
No, you don't necessarily have to use a node manager, but it will make your life easier. When you log into the WebLogic admin console and attempt to start one of your servers (e.g. wls1-4), the Admin server will attempt to talk to the node manager to start the servers. Without the node manager you will have to start each server individually using the startManagedWebLogic.sh script, and if you need to bring servers up and down often it will be very annoying.
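Without the node manager, bringing a managed server up by hand looks roughly like this (the admin URL is an assumption based on everything running on one machine with the default port 7001):

# From the domain's bin directory, start one managed server against the admin server
cd $DOMAIN_HOME/bin
./startManagedWebLogic.sh wls1 http://localhost:7001

You would repeat (or script) that for wls2-4, which is exactly the tedium the node manager saves you from.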
With regards to Unicast it is pretty easy to set up (we just leave all the default values alone). Here is the pertinent info from the Oracle Docs:
"Each of the Managed Servers in a WebLogic Server cluster has a name. For unicast clusters, WebLogic Server reads these Managed Server names and then sorts them into an ordered list by alphanumeric name. The first 10 Managed Servers in the list (up to 10 Managed Servers) become the first unicast clustering group. The second set of 10 Managed Servers (if applicable) becomes the second group, and so on until all Managed Servers in the cluster are organized into groups of 10 Managed Servers or less. The first Managed Server for each group becomes the group leader for the other (up to) nine Managed Servers in the group."
So you will want to name your master servers in such a way that they come first alphanumerically in the cluster. That said, for your use case I doubt you need those master servers at all. Just have 2 clusters, one with wls1-2 and one with wls3-4.

2 ActiveMQ Servers different versions same machine

I want to know if it is possible to run 2 versions (5.5 and 5.10) of ActiveMQ on the same machine. I assume that all I need to do is reconfigure the ports on one of them to be different from the other's.
The reason for this is that we are using Informatica B2B, which uses ActiveMQ 5.5 with a third-party (Fuse) addition for its internal messaging. We would also like to run a separate JMS server on the same machine, for various reasons, using 5.10 or 5.11.
I have found lots of examples of creating multiple instances, but they apply to using the same installation.
If it is that simple (just changing the ports), can they also share the same JVM or not?
You can run multiple instances on the same machine by changing the ActiveMQ configuration. You should assign each broker a unique name and configure the transport connectors to listen on different ports. You also want to ensure that they are configured with different data directories, and so on.
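Concretely, for the second installation that means editing conf/activemq.xml so the broker name, data directory, and connector ports don't collide; a sketch with placeholder name, path, and port:

<!-- Second broker: unique name, its own data directory, and a non-default OpenWire port -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="jms-broker-2"
        dataDirectory="/opt/activemq-5.10/data">
    <transportConnectors>
        <!-- The default OpenWire port is 61616; move this broker to 61617 -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61617"/>
    </transportConnectors>
</broker>

Remember that the web console (Jetty, port 8161 by default) also needs a different port on one of the two installations.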
You cannot run two in the same JVM, however.