JMeter remote testing: master machine starts but freezes

I have followed the tutorial below to set up a distributed testing environment for JMeter:
https://www.perfmatrix.com/configuration-process-for-distributed-testing-in-jmeter-5-3/
I have managed to start the remote server (slave machine) and then trigger the test from the master machine in non-GUI mode.
But the test never finishes executing. What could be the reasons for this?
(I am using JMeter 5.4 on both machines, and they are on the same network. The master machine runs Windows and the slave machine runs macOS.)
Details about the test
As for the Test Plan, I have a simple HTTP Request sampler that makes a request to https://www.google.com (port 443) and no customized listener plugins in the Thread Group, just a simple listener. I have no externalized data such as a CSV file either.
In the master's jmeter.properties file I have only added one entry:
remote_hosts=[internal IP-address]
I have also copied the .jks file generated on the master over to the bin folder of the slave machine.
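Roughly, the keystore step was the equivalent of the following sketch (it assumes JMeter's bundled create-rmi-keystore.sh script; the slave user and path are placeholders):
# on the master, in JMeter's bin directory; generates rmi_keystore.jks
./create-rmi-keystore.sh
# copy the keystore into the slave's bin directory
scp rmi_keystore.jks user@slave:/path/to/jmeter/bin/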
I first started jmeter-server on the slave machine with the following command:
sh ./jmeter-server -Djava.rmi.server.hostname=[slave machine internal IP-address]
Afterwards I started JMeter on the master in non-GUI mode as follows:
jmeter -n -t [UNC-path to jmx file] -r
If you need additional details, just let me know!

The referenced article contains several steps which are not required and some statements which are not true at all.
We cannot help you without seeing:
Your test plan, at least Thread Group configuration
jmeter.log file from master
jmeter-server.log file from slave
The most common problems are:
RMI ports are not open in the firewall, so the master cannot communicate with the slave or vice versa (see the properties sketch after this list)
Test plan uses a JMeter Plugin which is not installed on the slave
Test plan uses an external data file, e.g. a CSV file used in the CSV Data Set Config, and the file isn't present on the slave
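As a minimal sketch of pinning those RMI ports so they can be whitelisted in the firewall (1099 is JMeter's default server port; 4000 and 4001 are arbitrary placeholder choices), the relevant jmeter.properties entries are:
# slave's jmeter.properties: fix the registry and server RMI ports
server_port=1099
server.rmi.localport=4000
# master's jmeter.properties: fix the port used for RMI callbacks
client.rmi.localport=4001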
More information:
Remote Testing
Apache JMeter Distributed Testing Step-by-step
How to Perform Distributed Testing in JMeter
Remote hosts and RMI configuration

Redis Config Files - CONFIG REWRITE

I have 5 Redis servers:
2 of them run Redis in both master and slave roles (it looks like redis.conf was not set up manually but via some sort of process, because it has the following line at the bottom: Generated by CONFIG REWRITE)
From time to time I can see the master and slave switch roles automatically - no human intervention
3 of them run Redis Sentinel
Question 1: I need to replicate this setup on 5 different systems, but I don't know how that "Generated by CONFIG REWRITE" portion is set up. Where and how is this automation configured?
Question 2: Why does /etc/redis/ have a 6329.conf file? I thought the Redis config is redis.conf...
Thanks
The config rewrites are all caused by Redis Sentinel. The 3 sentinels you have monitor the master, and in the event that enough sentinels think the master is down, they will force a failover by promoting an existing slave to the new master, then reconfigure all other hosts to be slaves of the new master. You can read more about Redis Sentinel, including how to set it up for common scenarios, in the Redis Sentinel documentation (see its examples section).
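As a minimal sentinel.conf sketch of that monitoring setup (the master name mymaster, the address 192.168.1.10:6379 and the timeout values are all placeholders):
# monitor the master; 2 sentinels must agree before it is considered objectively down
sentinel monitor mymaster 192.168.1.10 6379 2
# how long the master must be unreachable before a sentinel flags it as down
sentinel down-after-milliseconds mymaster 5000
# how many slaves may resync with the new master at the same time after a failover
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 60000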
For the 6329.conf file: you can name the config files however you want, but whatever command you use to start your Redis server has to reference the non-default file name. Here's the usage line from redis-server --help:
Usage: ./redis-server [/path/to/redis.conf] [options]
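So in your case the instance was presumably started with something like the following (6329 is most likely the port that instance listens on, a common naming convention):
redis-server /etc/redis/6329.conf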

JMeter distributed load testing: no results returned from slave to master in GUI mode

I am not getting any results from the slave machine, and there are no entries in the DB either.
1. Connectivity between master and slave is established, since when I run the test remotely from the master, the slave prints 'Start the test' and 'Finish the test'.
2. The script also executes successfully when run on the master and the slave individually.
3. Since the server has a dynamic IP, I am not able to provide a fixed IP and port.
I am not able to figure out what exactly the problem is. If you can check through TeamViewer, please guide me further when you get some time.
[screenshots: slave machine and master machine consoles]
Make sure both master and slave are running the same Java version
Make sure both master and slave are running the same JMeter version (also consider upgrading to the latest JMeter version - JMeter 3.3 as of now); a quick version check sketch follows below
If you use any JMeter Plugins or there are any libraries in the JMeter classpath, make sure to copy them over to the slave machines as well
Check jmeter-server.log on the slave side
If you want to see request and response details in the GUI, add the following line to the user.properties file on all slaves:
mode=Standard
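To verify the first two points, a quick sketch (assuming the jmeter launcher is on the PATH of each machine):
# run on both master and slave; the outputs must match
java -version
jmeter -v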
Check out the How to Load Test Opening a URL on a Remote Machine with JMeter article for a comprehensive explanation and step-by-step instructions.

Flink JobManager not able to see TaskManagers

So I've installed an Apache Flink cluster on our network. I've done the configuration as illustrated below. The master (JobManager) starts and sends the start command to all the slaves via SSH. I can see that the task managers are running after they were started by the master node.
Config file on all nodes:
jobmanager.rpc.address: flmaster
jobmanager.rpc.port: 6123
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 2048
taskmanager.numberOfTaskSlots: 1
taskmanager.memory.preallocate: false
parallelism.default: 1
jobmanager.web.port: 8081
taskmanager.tmp.dirs: /apps/storage/runtime/flink/workspace
recovery.mode: zookeeper
recovery.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
recovery.zookeeper.storageDir: /apps/runtime/flink/recovery
env.java.home: /apps/java/
Then I have a file called slaves in the config folder with a list of the slave nodes:
flSlave1
flSlave2
flSlave3
I then start it:
../bin/start-cluster.sh
This opens an SSH session to all the slave nodes and starts the task managers. I can see this with ps ax | grep java.
I can open the Web UI on flMaster:8081.
On the Web UI I can see the slave node count is 0. I have no task managers.
As a test, I started the WordCount job, and it tells me it cannot run the job since there are no free slots.
/apps/flink/bin/flink run /apps/flink/examples/batch/WordCount.jar
the response:
07/20/2016 13:19:01 Job execution switched to status FAILING.
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not enough free slots available to run the job.
Well, I guess if there are no task managers/slave nodes, there will be no slots.
Any one ever seen this issue?
Use the fully qualified hostname instead of the short name, e.g. hostname.xyz.com instead of just hostname. Alternatively, you could try using the IP address.
Try doing a telnet to the JobManager machine's RPC port. The TaskManagers talk with the JobManager through RPC, so check the network settings to confirm that the JobManager's and TaskManagers' RPC ports are reachable.
Also check the blob server port. Check the TaskManager logs to see whether they are able to connect to the JobManager's blob server.
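As a quick connectivity sketch, assuming the JobManager's fully qualified name is flmaster.example.com (a placeholder) and the RPC port 6123 from the config above (the log path is a guess based on the install location above):
# from each TaskManager host: can we reach the JobManager's RPC port?
telnet flmaster.example.com 6123
# the blob server port is typically printed in the JobManager log at startup
grep -i blob /apps/flink/log/*jobmanager*.log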

JMeter distributed testing

I'm using JMeter distributed testing for load testing. The problem is that the client stops immediately after starting. I don't know why.
Can anybody help me in this problem?
You can follow these steps:
Start only jmeter-server.bat on the slave machines (no need to run both jmeter.bat and jmeter-server.bat).
Configure the jmeter.properties file of the master machine as follows (a concrete sketch follows these steps):
Remote Hosts - comma delimited
remote_hosts=xxx.xxx.xxx.xxx (IP of your slave machines)
Start jmeter.bat on the client (master) machine.
Now you can run your test from GUI mode to check whether everything is okay.
To do this: Run -> Remote Start -> check the IPs of the slaves. (If they are listed there, you are ready to run your test remotely.)
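As a concrete sketch with two slaves (both IP addresses are placeholders), the master's jmeter.properties entry and the later non-GUI equivalent would look like this:
# jmeter.properties on the master
remote_hosts=192.168.0.10,192.168.0.11
# non-GUI run against all configured slaves once the GUI check passes (test.jmx is a placeholder)
jmeter -n -t test.jmx -r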
Pre-requisites:
All the machines (both master and slaves) must be in the same subnet.
The firewall must be turned off on all machines (see the sketch after this list).
The Java and JMeter versions must be the same on all machines.
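For example, on Windows the firewall can be switched off for a test run as below; this is a blunt instrument, so for anything beyond a quick test prefer opening only the JMeter/RMI ports:
netsh advfirewall set allprofiles state off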
For more details, you should read JMeter Distributed Testing Step-by-step.

How to fix the ZooKeeper error for HBase

My main OS is Windows 7 64-bit. I am using VMware Player to create two CentOS 5.6 VMs; the network connection is bridged. I installed HBase on both CentOS systems, one as the master and the other as the slave. I then enter the shell and run status 'details'.
The error from master is
zookeeper.ZKConfig: no valid quorum servers found in zoo.cfg
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: An error is preventing HBase from connecting to ZooKeeper
And the error from slave is
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is
able to connect to ZooKeeper but the connection closes immediately.
This could be a sign that the server has too many connections (30 is
the default). Consider inspecting your ZK server logs for that error
and then make sure you are reusing HBaseConfiguration as often as you
can. See HTable's javadoc for more information.
Please give me some suggestions.
Thanks a lot
Check if these lines are in your .bashrc; if not, add them and restart all HBase services (do not forget to run the exports manually in your current shell as well). That did it for me with a pseudo-distributed installation. My problem (and maybe yours as well) was that HBase wasn't detecting its configuration.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HBASE_CONF_DIR=/etc/hbase/conf
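If the quorum error persists, also check that HBase actually knows where ZooKeeper runs. A minimal hbase-site.xml sketch, assuming your two VMs are reachable by the hostnames master and slave (placeholders for your actual hostnames):
<!-- hbase-site.xml: point HBase at the ZooKeeper quorum -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master,slave</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>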
I see this very often on my machine. I don't have a failsafe cure, but I end up running stop-all.sh and deleting every place that Hadoop and DFS (it's a DFS failure) store their temp files. It seems to happen after my computer goes to sleep while DFS is running.
I am going to experiment with single-user mode to avoid this. I don't need distribution while developing.