How can I launch a slave agent via SSH on Jenkins programmatically? - ssh

How can I launch a slave agent via SSH on Jenkins programmatically?
Or enable auto refresh such that Jenkins checks automatically to see if a slave is online.
Basically I have a job which reboots one of the slaves. I need some jobs to run on the same slave after it boots up (by chaining another job using the Startup Trigger plugin) without any manual intervention in between these steps.

Jenkins will automatically reconnect to the slave after it's rebooted; the master checks the slave connection every minute or so (I'm not sure of the exact interval without digging into the source code).
As long as the slave configuration is still defined in the Jenkins master, you shouldn't need to do anything on the slave machine.
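If waiting for the automatic reconnect is too slow, the master can also be asked to (re)launch the SSH agent explicitly via the Jenkins CLI's connect-node command. This is only a sketch: the URL, the credentials, and the node name are all placeholders, and the script just prints the command as a dry run.

```shell
# Build the Jenkins CLI call as a dry run first; uncomment the last line
# to actually execute it. jenkins-cli.jar can be downloaded from
# $JENKINS_URL/jnlpJars/jenkins-cli.jar on the master.
JENKINS_URL="http://jenkins.example.com"   # placeholder
NODE_NAME="build-slave-1"                  # placeholder node name
CMD="java -jar jenkins-cli.jar -s $JENKINS_URL -auth admin:API_TOKEN connect-node $NODE_NAME"
echo "$CMD"
# eval "$CMD"
```

A job chained after the reboot job could run this as a build step before the downstream jobs that need the slave.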

Related

Ansible playbook stops after losing connection (even for a few seconds) with ssh window of VM on which it is running?

My Ansible playbook consists of several tasks, and I run it on a virtual machine, logging in to the VM over SSH to start it. If my SSH window gets closed during the execution of a task (when the internet connection is not stable or reliable), the playbook stops because the SSH session is gone.
The playbook takes around 1 hour to run, and sometimes even if I lose internet connectivity for a few seconds, the SSH terminal loses its connection and the entire playbook stops. Any idea how to make the Ansible run more resilient to this problem?
Thanks in advance!
If you need to run a job on an external system that takes a long time, and it matters that the task completes, it is an extremely bad idea to run that job in the foreground.
It is not important that the task is Ansible or that the connection is SSH. In every such case you would just "push" the command to the remote host and send it to the background with something like "nohup", if available. The problem is, of course, the tree of processes: your connection creates a process on the remote system, and that process creates the job you want to run. If the connection gets lost, all subprocesses are killed automatically by the OS.
So, under Windows, you could use RDP to open a session that stays available even after the connection is lost, or use something like Cygwin and nohup via SSH to detach your process from the SSH session.
Or, when you need to run playbooks on that system regularly, install for example an AWX container and use that. There are many options, depending on your requirements, resources and administrative constraints.
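The nohup approach above can be sketched as follows. The host, user, and playbook path in the comment are placeholders; the executable part of the block just demonstrates the pattern locally with a trivial command in place of ansible-playbook.

```shell
# Real usage would look something like (all names are placeholders):
#   ssh user@vm "nohup ansible-playbook /home/user/site.yml > play.log 2>&1 &"
#   ssh user@vm "tail -f play.log"     # re-attach to watch progress later
# Local demonstration of the same pattern: the job's output lands in a log
# file and the process is detached from the launching terminal, so it
# survives the terminal going away.
nohup sh -c 'echo started; sleep 1; echo finished' > job.log 2>&1 &
wait $!      # in real use you would simply disconnect instead of waiting
cat job.log
```

Because stdout and stderr are redirected to job.log, losing the SSH session no longer kills the job, and you can inspect the log on your next login.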

Adding new redis node to the existing cluster

I have installed the latest version (6.0.8) of Redis on new CentOS servers D, E and F. Now I want to add these new servers to the existing cluster A, B, C, which runs an old Redis version. My plan is to decommission the old servers after the new Redis servers are added. Can anyone please guide me through the steps?
1. Setup your new Redis instance as a slave for your current Redis instance. In order to do so you need a different server, or a server that has enough RAM to keep two instances of Redis running at the same time.
2. If you use a single server, make sure that the slave is started in a different port than the master instance, otherwise the slave will not be able to start at all.
3. Wait for the replication initial synchronization to complete (check the slave log file).
4. Make sure using INFO that there are the same number of keys in the master and in the slave. Check with redis-cli that the slave is working as you wish and is replying to your commands.
5. Allow writes to the slave using CONFIG SET slave-read-only no
6. Configure all your clients in order to use the new instance (that is, the slave). Note that you may want to use the CLIENT PAUSE command in order to make sure that no client can write to the old master during the switch.
7. Once you are sure that the master is no longer receiving any query (you can check this with the MONITOR command), elect the slave to master using the SLAVEOF NO ONE command, and shut down your master.
You can follow this guide upgrading-or-restarting-a-redis-instance-without-downtime.
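The steps above can be sketched as a sequence of redis-cli calls. The host:port values for the old master and its new replica are placeholders, and run() only prints each command (a dry run); replace the echo with the real invocation when you are ready.

```shell
# Placeholders for one old server (A/B/C) and its 6.0.8 replacement (D/E/F)
OLD="203.0.113.10:6379"
NEW="203.0.113.20:6379"

# Dry run: print the redis-cli command instead of executing it
run() { hp=$1; shift; echo "redis-cli -h ${hp%:*} -p ${hp#*:} $*"; }

run "$NEW" SLAVEOF "${OLD%:*}" "${OLD#*:}"   # step 1: start replicating
run "$NEW" INFO replication                  # steps 3-4: wait for link up, compare key counts
run "$NEW" CONFIG SET slave-read-only no     # step 5: allow writes on the replica
run "$NEW" SLAVEOF NO ONE                    # step 7: promote the replica to master
```

Repeat the sequence for each old/new pair, then shut the old servers down once clients point at the new ones.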

jmeter distribution load testing for no result return from slave to master gui mode

I am not getting any results from the slave machine, and there are no entries in the DB.
1. Connectivity between master and slave is established, since when I run remotely from the master, the slave states 'Start the test' and 'Finish the test'.
2. The script also executes successfully with a single master and slave.
3. Since the server has a dynamic IP, I am not able to provide an IP and port.
I am not able to figure out what exactly the problem is. If you can check through TeamViewer, please guide me further when you get some time.
(screenshots: slave machine, master machine)
Make sure both master and slave are running the same Java version
Make sure both master and slave are running the same JMeter version (also consider upgrading to latest JMeter version - JMeter 3.3 as of now)
If you use any JMeter Plugins or there are any libraries in JMeter Classpath make sure to copy them over to slave machines as well.
Check jmeter-server.log on the slave side
If you want to see request and response details in GUI add the next line to user.properties file for all slaves:
mode=Standard
Check out How to Load Test Opening a URL on a Remote Machine with JMeter article for comprehensive explanation and step-by-step instructions.
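A quick way to act on the first two points is a pre-flight check run on the master and on every slave; the first printed line for each tool must match across all machines. The check degrades gracefully if a tool is not on the PATH.

```shell
# Print the first version line of a tool, or a note if it is absent
check() {
  if command -v "$1" >/dev/null 2>&1; then
    "$1" "$2" 2>&1 | head -n 1
  else
    echo "$1: not found on PATH"
  fi
}
check java -version
check jmeter --version
```

If the Java or JMeter versions differ between master and slaves, align them before digging into the missing-results problem.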

Jmeter distributed testing

I'm using JMeter distributed testing for load testing. The problem is that the client stops immediately after starting it; I don't know why.
Can anybody help me with this problem?
You can follow these steps:
Start only jmeter-server.bat from the slave machines. (no need to run both jmeter.bat and jmeter-server.bat)
Configure jmeter.properties file of the master machine as follows:
Remote Hosts - comma delimited
remote_hosts=xxx.xxx.xxx.xxx (IP of your slave machines)
Start jmeter.bat from client(master) machine.
Now you can run your test from GUI mode to check everything is okay or not.
To do this: Run->Remote Start-> check the IPs of the slaves. (If they are listed there, you are ready to run your test remotely.)
Pre-requisites:
All the machines (both master and slaves) must be in the same subnet.
Firewall must be turned off for all machines.
Java and JMeter versions must be same for all machines.
For more details, you should read JMeter Distributed Testing Step-by-step.
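Once the GUI Remote Start works, the same distributed test can be run headless from the master, which is also the recommended mode for real load. The IPs and file names below are placeholders, and the leading echo makes this a dry run; drop it to actually start the test.

```shell
# Must match the remote_hosts line in jmeter.properties on the master
REMOTE_HOSTS="192.168.1.11,192.168.1.12"   # placeholder slave IPs

# -n non-GUI, -t test plan, -R remote hosts, -l results file
echo jmeter -n -t plan.jmx -R "$REMOTE_HOSTS" -l results.jtl
```

Using -R on the command line also lets you override remote_hosts per run without editing jmeter.properties.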

initialise svnsync command from slave svn server

I have a setup of master-slave SVN servers.
The master and two slave servers are set up in three different locations in three different time zones. We cannot keep the slave servers up for 24 hours, so they need to be shut down at the end of each day. But at the same time, developers are committing changes from another slave to the master server. The master server is up forever.
So my situation is that at the start of each day, every slave server needs to be synchronized with the master, which can only be done from the master by the svnsync command.
Is there any way to automatically synchronize a slave server when it starts up?
We are using an Apache server to host Subversion. The OS is Windows Server 2008 R2.
Thanks
If using svnsync on your slave server in a boot-time script is not possible, you can do something like the solution described in this blog post.
To sum it up: a dedicated service (written in Python) listens on the master server and starts svnsync upon reception of a special TCP packet.
On your slave, you can then use the Windows version of netcat like described in the blog post to trigger the sync.
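For the simpler boot-time variant, the sync command itself looks roughly like this. The mirror URL and credentials are placeholders, and the leading echo makes it a dry run; on Windows Server 2008 R2 the real command can be wired into a scheduled task that fires at system start (schtasks /create /sc onstart ...).

```shell
MIRROR_URL="http://localhost/svn-mirror/repo"   # placeholder mirror URL

# Pull all new revisions from the source repository into the local mirror
echo svnsync synchronize "$MIRROR_URL" --sync-username syncuser --sync-password secret
```

svnsync is incremental, so running it at every boot only transfers the revisions committed while the slave was down.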