Initialise svnsync from a slave SVN server (Apache)

I have a setup of master-slave SVN servers.
The master and two slave servers are set up in three different locations, each in a different timezone. We cannot keep the slave servers running 24 hours a day, so they are shut down at the end of each day. Meanwhile, developers keep committing changes through another slave to the master server, which is always up.
So my situation is that at the start of each day, every slave server needs to be synchronized with the master, which can only be done with the svnsync command.
Is there any way to automatically synchronize slave server when it starts up?
We are using the Apache HTTP Server to host Subversion, and the OS is Windows Server 2008 R2.
Thanks

If running svnsync on your slave server in a boot-time script is not possible, you can do something like the solution described in this blog post.
To sum it up: run a dedicated listener (written in Python) on the master server that starts svnsync upon receiving a special TCP packet.
On the slave, you can then use the Windows version of netcat, as described in the blog post, to trigger the sync.
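As a rough sketch of both approaches (the host name, port, repository URL, credentials and the "SYNC" token are all placeholders for illustration, not values from the blog post):

```shell
# Variant 1: boot-time sync run on the slave itself.
# The mirror repository URL and credentials are assumptions; adjust to your setup.
svnsync synchronize http://localhost/svn/mirror \
  --sync-username syncuser --sync-password "$SYNC_PASSWORD"

# Variant 2: ask the master's Python listener to run svnsync for us.
# Port 9999 and the "SYNC" token are made up for illustration.
echo "SYNC" | nc master.example.com 9999
```

On Windows Server 2008 R2 this would typically live in a startup batch file or an "At startup" scheduled task; the commands themselves stay the same.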

Related

Adding new redis node to the existing cluster

I have installed the latest version (6.0.8) of Redis on new CentOS servers D, E and F. Now I want to add these new servers to the existing cluster A, B, C, which runs an old Redis version. My plan is to decommission the old servers after the new ones are added. Can anyone please guide me through the steps?
1. Set up your new Redis instance as a slave of your current Redis instance. To do so you need a different server, or a server with enough RAM to keep two Redis instances running at the same time.
2. If you use a single server, make sure the slave is started on a different port than the master instance, otherwise the slave will not be able to start at all.
3. Wait for the initial replication synchronization to complete (check the slave's log file).
4. Using INFO, make sure the master and the slave hold the same number of keys. Check with redis-cli that the slave is working as you wish and is replying to your commands.
5. Allow writes to the slave using CONFIG SET slave-read-only no.
6. Configure all your clients to use the new instance (that is, the slave). Note that you may want to use the CLIENT PAUSE command to make sure that no client can write to the old master during the switch.
7. Once you are sure the master is no longer receiving any queries (you can check this with the MONITOR command), promote the slave to master using the SLAVEOF NO ONE command, and shut down your master.
You can follow the guide upgrading-or-restarting-a-redis-instance-without-downtime.
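The steps above can be sketched with redis-cli; the host names (old-a for the old master, new-d for one of the new instances) and the default port are placeholders:

```shell
# 1. Make the new instance a replica of the old master.
redis-cli -h new-d -p 6379 SLAVEOF old-a 6379
# 3. Wait until replication is up (look for master_link_status:up).
redis-cli -h new-d -p 6379 INFO replication
# 4. Compare key counts on both sides.
redis-cli -h old-a -p 6379 DBSIZE
redis-cli -h new-d -p 6379 DBSIZE
# 5. Allow writes on the replica.
redis-cli -h new-d -p 6379 CONFIG SET slave-read-only no
# 7. Promote the replica, then shut down the old master.
redis-cli -h new-d -p 6379 SLAVEOF NO ONE
```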

Failover in SQL Server 2017 with Availability Group

I'm configuring failover in the above-mentioned setup, with two nodes and synchronous commit. It's a test setup: the primary is server1 and the secondary is server2. I configured the availability group and added the database.
When I manually execute a failover it seems to work and server2 becomes the primary, but when I connect to the cluster resource IP, the database remains in read-only mode. It only works when I shut down server1. So far I haven't found a solution.

Upgrade RabbitMQ Software

We are currently running RabbitMQ on our Windows server machine.
We want to switch to Linux server machine.
Our setup is on AWS.
We have already created a Linux machine and installed the latest version of RabbitMQ on it.
Our client applications use an IP address to connect to the RabbitMQ server; the Linux server has its own IP.
We would like to change the RabbitMQ server without any downtime. We have messages in Windows based RabbitMQ server and would like to move those messages as well.
What would be possible options in this scenario?
Is there a way to upgrade RabbitMQ software later without any downtime?
It's far easier if you don't need to move messages from one server to another.
I'd suggest this:
Run both servers in parallel.
Create a set of new consumers (a copy of all current consumers) and have them consume from the Linux server. At this point the Linux server has no load yet.
Gradually switch producers from the Windows to the Linux server, monitoring the system.
Once all producers are switched, wait for the queues on the Windows server to be drained by the existing consumers.
Once all queues on the Windows server are drained, switch off the Windows server's consumers.
You are done; all your load is on the Linux server now.
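To watch the old broker drain before switching off its consumers, a command like the following can be polled on the Windows machine (the vhost "/" is an assumption; adjust to your setup):

```shell
# List every queue on the old broker with its message count; repeat until
# all counts reach 0, then it is safe to stop the old consumers.
rabbitmqctl list_queues -p / name messages
```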
It is also possible to proceed in this order:
Have the backend read from and handle both servers: the old (Windows) and the new (Linux) RabbitMQ server.
Switch all the clients to write to the new server only.
Once the queues on the old server are empty, the old server is no longer required.

redis sentinel out of sync with servers in a cluster

We have a setup with a number of Redis (2.8) servers (let's say 4) and as many Redis sentinels. On startup of each machine, we set a pre-selected machine as master through the command line and all the rest as its slaves, and the sentinels all monitor these machines. The clients first connect to the local sentinel, retrieve the master's IP address and then connect there.
This setup is trouble-free most of the time, but sometimes the sentinels go out of sync with the servers. If I name the machines A, B, C and D, the sentinels may think B is the master while the Redis servers are all connected to A as the master. Bringing down the Redis server on B doesn't help either; I had to bring it down and manually run SENTINEL FAILOVER on A to fix the issue. My questions are:
1. What causes this to happen, and what's the easiest and quickest way to fix it?
2. What is the best configuration? Is there something better than this?
The only time you should set a master is the first time. Once sentinel has taken over management of replication, you should let it do so, including on restarts. Don't use the command line to set up replication; let sentinel and Redis manage it. This is why you're getting issues: you've told sentinel it is authoritative, but you are telling the Redis servers to ignore sentinel.
Sentinel stores its status in its config file, so when it restarts it can resume the last configuration. So even on restart, let sentinel do its job.
Also, if you have 4 servers (be specific, not "let's say"), you should be running a quorum of three on your monitor statement in sentinel. With a quorum of two you can wind up with two masters.
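A sketch of the corresponding monitor line in sentinel.conf (the master name, IP, port and timeouts are placeholders); with four sentinels, a quorum of 3 prevents two disjoint pairs from each electing their own master:

```
# sentinel.conf on each of the four sentinel machines
sentinel monitor mymaster 192.168.1.10 6379 3
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```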

How can I launch a slave agent via SSH on Jenkins programmatically?

How can I launch a slave agent via SSH on Jenkins programmatically?
Or enable auto refresh such that Jenkins checks automatically to see if a slave is online.
Basically I have a job which reboots one of the slaves. I need some jobs to run on the same slave after it boots up (by chaining another job using the Startup Trigger plugin) without any manual intervention in between these steps.
Jenkins will automatically reconnect to the slave after it's rebooted; the master checks the slave connection every minute or so (I'm not sure of the exact interval without digging into the source code).
As long as the slave configuration is still defined in the Jenkins master, you shouldn't need to do anything on the slave machine.
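If you still want to force the reconnect from the chained job rather than wait for the periodic check, two possible options, assuming a node named my-node and standard Jenkins endpoints/CLI (the URL, node name and credentials are placeholders):

```shell
# Option 1: POST to the node's launch endpoint over HTTP.
curl -u admin:API_TOKEN -X POST \
  "http://jenkins.example.com/computer/my-node/launchSlaveAgent"

# Option 2: use the Jenkins CLI's connect-node command.
java -jar jenkins-cli.jar -s http://jenkins.example.com connect-node my-node
```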