I'm using JMeter distributed testing for load testing. The problem is that the client gets stopped immediately after I start it, and I don't know why.
Can anybody help me with this problem?
You can follow these steps:
Start only jmeter-server.bat on the slave machines (there is no need to run both jmeter.bat and jmeter-server.bat).
Configure the jmeter.properties file of the master machine as follows:
Remote Hosts - comma delimited
remote_hosts=xxx.xxx.xxx.xxx (IP of your slave machines)
Start jmeter.bat from client(master) machine.
Now you can run your test from GUI mode to check whether everything is okay.
To do this: Run -> Remote Start -> check the IPs of the slaves (if they are listed there, you are ready to run your test remotely).
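Once the GUI check passes, the same remote run can also be triggered from the command line; a minimal sketch (the IP addresses and test plan name below are only examples):
remote_hosts=192.168.0.101,192.168.0.102
jmeter -n -t test.jmx -r
jmeter -n -t test.jmx -R 192.168.0.101,192.168.0.102
Here -r starts the test on every host listed in remote_hosts, while -R takes an explicit comma-separated list of slaves and overrides the property.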
Pre-requisites:
All the machines (both master and slaves) must be in the same subnet.
The firewall must be turned off on all machines.
The Java and JMeter versions must be the same on all machines.
For more details, you should read JMeter Distributed Testing Step-by-step.
I have followed the tutorial below to set up a distributed testing environment for JMeter:
https://www.perfmatrix.com/configuration-process-for-distributed-testing-in-jmeter-5-3/
I have managed to start the remote (slave machine) server and then to trigger the test from the master machine in non-GUI mode.
But the execution never finishes. What could be the reasons for this?
(I am using JMeter version 5.4 on both machines, and they are in the same network. The master machine runs Windows and the slave machine runs macOS.)
Details about the test
As for the test plan, I have a simple HTTP Request sampler that makes a request to https://www.google.com (port 443) and no customized listener plugins in the Thread Group, just a simple listener. I have no externalized data such as a CSV file either.
In the master's jmeter.properties file I have only added one entry:
remote_hosts=[internal IP-address]
I have also copied over the .jks file generated from the master to the bin folder of the slave machine.
I first started jmeter-server on the slave machine with the following command:
sh ./jmeter-server -Djava.rmi.server.hostname=[slave machine internal IP-address]
Afterwards I started JMeter on the master in non-GUI mode as follows:
jmeter -n -t [UNC-path to jmx file] -r
If you need additional details, just let me know!
The referenced article contains several steps which are not required and some statements which are not true at all.
We cannot help you without seeing:
Your test plan, at least Thread Group configuration
jmeter.log file from master
jmeter-server.log file from slave
The most common problems are:
RMI ports are not open in the firewall, so the master cannot communicate with the slave or vice versa (see the port-pinning sketch after this list)
Test plan uses a JMeter Plugin which is not installed on the slave
Test plan uses an external data file, i.e. CSV file used in the CSV Data Set Config and the file isn't present in the slave
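If the firewall turns out to be the culprit, pinning the RMI ports makes them much easier to open; a minimal sketch using standard JMeter properties (the port numbers are only examples):
# on the slave, in jmeter.properties or passed with -J on the command line
server_port=1099
server.rmi.localport=4000
# on the master, the local port used for the reverse connection from the slave
client.rmi.localport=4001
With those set, only ports 1099, 4000 and 4001 (in this example) need to be allowed through the firewalls in the relevant directions.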
More information:
Remote Testing
Apache JMeter Distributed Testing Step-by-step
How to Perform Distributed Testing in JMeter
Remote hosts and RMI configuration
I am not getting any results from the slave machine, and there is also no entry in the DB.
1. Connectivity between the master and slave is established, since when I run the test remotely from the master, the slave prints 'Start the test' and 'Finish the test'.
2. Also, with a single master and slave, the script is executed successfully.
3. Also, since the server has a dynamic IP, I am not able to provide a fixed IP and port.
I am not able to figure out what exactly the problem is. If you can check through TeamViewer, please guide me further when you get some time.
(screenshots: slave machine and master machine)
Make sure both master and slave are running the same Java version
Make sure both master and slave are running the same JMeter version (also consider upgrading to latest JMeter version - JMeter 3.3 as of now)
If you use any JMeter Plugins or there are any libraries in JMeter Classpath make sure to copy them over to slave machines as well.
Check the jmeter-server.log file on the slave side
If you want to see request and response details in the GUI, add the following line to the user.properties file on all slaves:
mode=Standard
Check out How to Load Test Opening a URL on a Remote Machine with JMeter article for comprehensive explanation and step-by-step instructions.
I have just been investigating DC/OS and found that it has three roles: master, slave, and slave_public. I want to deploy a cluster that can host the master, slave, or slave_public roles on one host, but currently I can't do that.
I want to know why they can't be put on one host by design. If I do want to do that, could I get some suggestions?
If I can't do this, I'll quit using DC/OS and use Mesos and Marathon instead.
Does anyone else have the same idea? I look forward to your reply.
This is by design, and work is actually being done to enforce that a machine is installed with only one role, because things break with more than one.
If you're trying to demo or experiment with DC/OS and you only have one machine, you can use virtual machines or Docker to partition that one machine into multiple machines/parts on which you can install DC/OS. dcos-vagrant and dcos-docker can help you there.
As far as installing goes, though, the configuration for each of the three roles is incompatible with the others. The "master" role causes a whole bunch of pieces of software to be started/installed on a host (Mesos-DNS, Mesos master, Marathon, Exhibitor, ZooKeeper, 3dt, adminrouter, rexray, spartan, and navstar, among others), which listen on various ports. The "slave" role causes a machine to have a mesos-agent (Mesos renamed mesos-slave to mesos-agent, hence the disconnect) configured and started on it. The mesos-agent is configured to hand control of most ports greater than 1024 to tasks which are launched by Mesos frameworks on the agent. Several of those ports are used by services which run on masters, resulting in odd conflicts and hard-to-fix bad behavior.
In the case of running the "slave" and "slave_public" roles on the same host, the two conflict more directly, because both of them cause a mesos-agent to be run on the host, with slightly different configuration. Both mesos-agents (the one configured with the "slave" role and the one with the "slave_public" role) are configured to listen on port 5051. Only one of them can use it, though, so you end up with one of the agents being non-functional.
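A quick way to see that kind of collision on a node is to check which process actually holds the agent port (5051 is the default Mesos agent port; ss is just one tool that shows this):
sudo ss -tlnp | grep 5051    # only one mesos-agent can bind the port; the other fails with "Address already in use"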
DC/OS only supports running a node as either a master or an agent (slave). You are correct that Mesos does not have this limitation. But DC/OS is more than just Mesos and Marathon. To enable all the additional features of DC/OS, there are various components built around Mesos and Marathon. At times these components behave differently depending on whether they are running on a master or an agent, and at other times the components that exist on a master may or may not exist on an agent, or vice versa. So running a master and an agent on the same node would lead to conflicts/issues.
If you are looking to run a small development setup before scaling the solution out to a bigger distributed system DC/OS Vagrant might be a good starting point.
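For a quick local try-out, the dcos-vagrant flow is roughly the following (a sketch only; the exact file names and steps may have changed, so check the repository's README):
git clone https://github.com/dcos/dcos-vagrant
cd dcos-vagrant
cp VagrantConfig.yaml.example VagrantConfig.yaml   # describes one VM per role (boot, masters, private/public agents)
vagrant up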
I've been trying to set up Enterprise Jenkins with the high-availability setup. The current setup consists of two Jenkins masters sharing the same Jenkins home, say master1 and master2, and an installation of the jenkins-ha-monitor-1.1-1.1 RPM on both of these masters, say monitor1 and monitor2. With this setup, according to the documentation at least, the HA plugin should work as expected. The promotion and demotion scripts are similar to the ones in the documentation (only the IP and interface are different, same approach), i.e.
For demotion
ifconfig eth0:2 down
For promotion
ifconfig eth0:2 the.floating.ip
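For context, the two scripts essentially amount to the following, and the last line is a quick check of who currently holds the address (eth0:2 and 192.0.2.100 are placeholders for the real interface alias and floating IP):
ifconfig eth0:2 192.0.2.100 netmask 255.255.255.0 up   # promotion: this node starts answering on the floating IP
ifconfig eth0:2 down                                    # demotion: this node releases the floating IP
ip addr show eth0 | grep 192.0.2.100                    # should match on exactly one master at any time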
Now, for the nodes to get registered correctly, I have to start master1, master2, monitor1 and monitor2 in that order. Tailing the logs on both, I see that when the services are started in that order they are registered correctly by both monitor services as nodes in a cluster, and in the HA status GUI in the Jenkins console.
Now, when master1 is killed by sending it a KILL signal, monitor2 recognizes this and runs the promotion script. But monitor1 keeps throwing:
Oct 24, 2012 3:47:36 PM com.cloudbees.jenkins.ha.singleton.HASingleton$3 suspect
INFO: Suspecting a node failure in a cluster: jenkins-master-1-285
Oct 24, 2012 3:47:39 PM com.cloudbees.jenkins.ha.singleton.HASingleton$3 suspect
INFO: Suspecting a node failure in a cluster: jenkins-master-1-285
continuously, without ever running the demotion script. Now, since master2 has taken up the floating IP via its promotion script, and master1 still has that IP because the demotion script was not run, the setup ends up with two boxes claiming the same IP. Moreover, restarting master1 does not do anything: master1 does not get added to the cluster as a secondary node, monitor1 still keeps writing the above messages to the log, the floating IP keeps returning "Unable to connect", and master2 and monitor2 show the cluster as master2, monitor2 and monitor1. So my question/problem is twofold: why isn't master1 accepted back into the cluster? And why isn't the demotion script run as it should be?
Also, FYI, I have tried doing a
service jenkins stop
and in that case the demotion script runs, but again there are similar issues when
service jenkins start
is run on the master that was stopped earlier, since the promotion script is run regardless of whether a primary Jenkins exists. And in this case the two monitors register different clusters, like so: monitor1: master1, monitor1 and monitor2: master2, monitor2.
Running ifconfig shows that both masters have taken up the floating IP at this point.
Any help is appreciated! Thanks!
Still under investigation with support. The originally reported problem (here) suggests that the two nodes are communicating fine, but promotions/demotions are not run correctly—either a bug in JGroups or in its usage in Jenkins high availability.
But further tests turned up problems with UDP multicast communication, which has been reported for RedHat/CentOS hosts. Work is underway to offer an alternate JGroups stack which does not rely on multicast (or UDP) at all, using the shared $JENKINS_HOME directory to register Jenkins and monitor instances (as TCP address:port records).
The main OS is Windows 7 64-bit. Using VMware Player, I created two CentOS 5.6 VMs; the network connection is bridged. I installed HBase on both CentOS systems, one as the master and the other as the slave. I then enter the HBase shell and run status 'details'.
The error from the master is:
zookeeper.ZKConfig: no valid quorum servers found in zoo.cfg
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: An error is preventing HBase from connecting to ZooKeeper
And the error from the slave is:
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
Please give me some suggestions.
Thanks a lot
Check whether these lines are in your .bashrc; if not, add them and restart all HBase services (do not forget to run the exports manually in your current shell as well). That did it for me with a pseudo-distributed installation. My problem (and maybe yours as well) was that HBase wasn't detecting its configuration.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HBASE_CONF_DIR=/etc/hbase/conf
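After adding them, reload the shell environment and restart HBase so the new configuration directories are picked up; a sketch, assuming HBASE_HOME points at your installation:
source ~/.bashrc
$HBASE_HOME/bin/stop-hbase.sh    # stop HBase
$HBASE_HOME/bin/start-hbase.sh   # start it again with the exported config directories visible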
I see this very often on my machine. I don't have a failsafe cure, but I end up running stop-all.sh and deleting every place where Hadoop and DFS (it's a DFS failure) store their temp files. It seems to happen after my computer goes to sleep while DFS is running.
I am going to experiment with single-user mode to avoid this. I don't need distribution while developing.
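For reference, that cleanup looks roughly like this on a Hadoop 1.x style install (a sketch only; /tmp/hadoop-$USER is the default hadoop.tmp.dir and your paths may differ, and reformatting the NameNode erases HDFS contents):
stop-all.sh                    # stop the HDFS and MapReduce daemons
rm -rf /tmp/hadoop-$USER/*     # wipe the temp/data directories DFS was using
hadoop namenode -format        # only if the NameNode metadata directory was wiped; this destroys HDFS data
start-all.sh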