JMeter remote testing exits too early

I have 3 instances in AWS with JMeter installed - one master and two slaves.
I want to test 1M requests against my application. I have a test plan that runs 100 threads concurrently for 10,000 iterations each.
When I run the test on localhost or on a single instance only, it runs fine.
My issue is that when I run the test using remote servers it exits immediately on both machines. The only logs I get are these:
Starting the test on host 10.229.48.10 # Mon Dec 02 15:21:49 UTC 2019 (1575300109383)
Warning: Nashorn engine is planned to be removed from a future JDK release
Finished the test on host 10.229.48.10 # Mon Dec 02 15:22:00 UTC 2019 (1575300120030)
I get nothing else even with verbose logging enabled.
This is the command I use to run the test:
JVM_ARGS="-Xms2048m -Xmx2048m" ./bin/jmeter -n -t test.jmx -R 10.229.48.10,10.229.48.23
Both machines are fully open to the master instance.
Why does the test run fine on a single instance but exit immediately when using remote hosts?

The general checklist for troubleshooting JMeter master-slave configuration is:
Check jmeter.log file on the master and jmeter-server.log on the slaves
Ensure that the Java version is the same on the master and the slaves; if it is not, install the same (preferably the latest) version of a 64-bit JDK or Server JRE everywhere
Ensure that the JMeter version is the same on the master and the slaves; if it is not, install the same (preferably the latest) version of JMeter everywhere
If your test uses any JMeter Plugins, ensure that the same set of plugins is installed on all the machines. The plugins can be installed using the JMeter Plugins Manager
If you're using any external data files, e.g. CSV files consumed by the CSV Data Set Config, the file(s) need to be copied over to all the slaves
If your test relies on any JMeter properties, make sure to supply them via the -J or -D command-line arguments on all the machines, via the -G command-line argument on the master, or by putting them into the user.properties file (a short example follows this list)
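For example, a distributed run that ships a couple of properties from the master to the slaves could look like the sketch below (the property names are placeholders, not taken from the original test plan; the IPs are the ones from the question):
# on each slave, start the JMeter server component first
./bin/jmeter-server
# on the master: -G sends the listed properties to the remote servers, -R lists the slave IPs
JVM_ARGS="-Xms2048m -Xmx2048m" ./bin/jmeter -n -t test.jmx \
  -Gthreads=100 -Gloops=10000 \
  -R 10.229.48.10,10.229.48.23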

Which version of JDK are you using?
Is it JDK 8 or something else?
Make sure of the following:
a. Internal networking is enabled between all three instances.
b. JDK 8 is installed from official sources.
c. You are able to communicate with the instances individually (a quick check is sketched after this list).
d. JMeter is installed from the official source rather than via "apt install jmeter".
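A quick way to check (a) and (c) from the master might be something like this (1099 is JMeter's default server_port, and the install path in the ssh command is just a placeholder):
for h in 10.229.48.10 10.229.48.23; do
  nc -zv "$h" 1099                                          # default JMeter RMI registry port
  ssh "$h" 'java -version; ~/apache-jmeter/bin/jmeter -v'   # placeholder path; prints Java and JMeter versions
done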

Related

bluemix java application cannot be deployed anymore

We have a Java application that has been running on Bluemix for more than a year and that we update periodically (a few times a week). In the last few days, however, even though the build is successful, we cannot launch it. The error is the following (we have never seen this before):
App/0 Error occurred during initialization of VMJul 10, 2017 12:13:14.002 PM
App/0 Could not find agent library /home/vcap/app/.java-J-buildpack/open_jdk_jre/bin/jvmkill-J-1.9.0_RELEASE in absolute path, with error: /home/vcap/app/.java-J-buildpack/open_jdk_jre/bin/jvmkill-J-1.9.0_RELEASE: cannot open shared object file: No such file or directory
The deploy cmd is
cf push "${CF_APP}" -p target/universal/myapp-SNAPSHOT.zip -b https://github.com/cloudfoundry/java-buildpack.git -k 2G
Since you are able to deploy using a previous version of the buildpack, this suggests that a recent change in the buildpack is the likely cause of the latest buildpack breaking.
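As a stop-gap you could pin the buildpack to a known-good release in the push command; the tag below is only an example, use whichever version last worked for you:
cf push "${CF_APP}" -p target/universal/myapp-SNAPSHOT.zip -b https://github.com/cloudfoundry/java-buildpack.git#v3.19 -k 2G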
I was going to suggest opening a ticket on the github repo, but I see you have already done that :)

AttachNotSupportedException when trying to start a JFR recording

I'm receiving AttachNotSupportedException when trying to start a JFR recording.
It was working normally until now.
jcmd 3658 JFR.start maxsize=100M filename=jfr_1.jfr dumponexit=true settings=profile
Output:
3658:
com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
What might be happening?
OS: Oracle Linux Server release 6.7
$ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
One of the probable reasons is that the /tmp/.java_pid1234 file has been deleted (where 1234 is the PID of the Java process).
Tools that depend on the Dynamic Attach Mechanism (jstack, jmap, jcmd, jinfo) communicate with the JVM through a UNIX domain socket created in /tmp.
This socket is created by the JVM lazily on the first attach attempt, or eagerly at JVM initialization if the -XX:+StartAttachListener flag is specified.
Once the file corresponding to the socket is deleted, tools cannot connect to the target process, and unfortunately there is no way to re-create the communication socket without restarting the JVM.
For the description of Dynamic Attach Mechanism see this answer.
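A quick way to check whether the socket for the process from the question (PID 3658) is still there:
ls -l /tmp/.java_pid3658
If the file was there before and is now gone, then per the above the only options are restarting the target JVM, or starting it with -XX:+StartAttachListener so the socket is created eagerly next time.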
From personal experience... this problem also occurs when the development environment lives on a partition different from the operating system's, for example when the operating system partition is EXT4 and the development environment partition (where the JVM is) is NTFS. The problem occurs because the JVM cannot create the file "/tmp/.java_pid6024" (where 6024 is the PID of the java process).
To work around this, add -XX:+StartAttachListener to the startup options of the JVM or application server.
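For example (the jar name is just a placeholder):
java -XX:+StartAttachListener -jar yourapp.jar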
Another possibility: your app is running under systemd with 'PrivateTmp=yes'. This prevents the /tmp/.java_pid1234 file from being found.
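If that's the case, a drop-in override can turn off the private /tmp; the unit name below is hypothetical, and note that you lose the /tmp isolation systemd was providing:
# /etc/systemd/system/yourapp.service.d/override.conf  (hypothetical unit name)
[Service]
PrivateTmp=no
Then reload and restart: systemctl daemon-reload && systemctl restart yourapp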

Cannot connect Impala-Kudu to Apache Kudu (without Cloudera Manager): Get TTransportException Error

I have successfully installed kudu on Ubuntu (Trusty) as per the official kudu documentations (see http://kudu.apache.org/docs/installation.html ). The setup has one node running master and tablet server and another node running the tablet server only. I am having issues installing impala-kudu without Cloudera Manager on the node running kudu master. I have followed CDH installation instructions on this (see http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh5_install.html ) page until Step 3. I have avoided installing CDH with YARN and MRv1 as I don’t need to run any mapreduce jobs and will not be using hadoop. Impala-kudu and impala-kudu-shell installed without errors. When I launch the impala-shell it returns:
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, Could not connect to kudu_test:21000
***********************************************************************************
Welcome to the Impala shell. Copyright (c) 2015 Cloudera, Inc. All rights reserved.
(Impala Shell v2.7.0-cdh5-IMPALA_KUDU-cdh5 (48f1ad3) built on Thu Aug 18 12:15:44 PDT 2016)Want to know what version of Impala you're connected to? Run the VERSION command to
find out!
***********************************************************************************
[Not connected] >
I have tried to use the CONNECT option to connect to the kudu-master node without success. Both impala-kudu and kudu are running on the same machine. Are there additional configuration settings which need to be changed, or are Hadoop and YARN a strict requirement to make impala-kudu work?
After running ps -ef | grep -i impalad I can confirm the impala daemon is not running. After navigating to the impala logs at ~/var/log/impala I find a few errors and warning files. Here is the output of impalad.ERROR:
Log file created at: 2016/09/13 13:26:24
Running on machine: kudu_test
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0913 13:26:24.084389 3021 logging.cc:118] stderr will be logged to this file.
E0913 13:26:25.406966 3021 impala-server.cc:249] Currently configured default filesystem: LocalFileSystem. fs.defaultFS (file:///) is not supported.ERROR: block location tracking is not properly enabled because
- dfs.datanode.hdfs-blocks-metadata.enabled is not enabled.
- dfs.client.file-block-storage-locations.timeout.millis is too low. It should be at least 10 seconds.
E0913 13:26:25.406990 3021 impala-server.cc:252] Aborting Impala Server startup due to improper configuration. Impalad exiting.
Maybe I need to revisit HDFS and the Hive Metastore to ensure I have these services configured properly?
According to the log, impalad quits because the default filesystem is configured to be LocalFileSystem, which is not supported. You have to set a distributed filesystem, such as HDFS as the default.
Although Kudu is a separate storage system and does not rely on HDFS, Impala still seems to require a non-local default FS even when used with Kudu. The Impala_Kudu documentation explicitly lists the following requirement:
Before installing Impala_Kudu, you must have already installed and configured services for HDFS (though it is not used by Kudu), the Hive Metastore (where Impala stores its metadata), and Kudu.
I can even imagine that HDFS may not really be needed for any other reason than to make Impala happy, but this is just speculation from my side. Update: Found IMPALA-1850 which confirms my suspicion that HDFS should not be needed for Impala any more, but it's not just a single check that has to be removed.
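In practice that means pointing fs.defaultFS at an HDFS NameNode in core-site.xml, roughly like this (the host and port are placeholders for your cluster), and then restarting impalad:
<!-- core-site.xml: replace the local default FS with HDFS (host/port are placeholders) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host:8020</value>
</property>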

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

So I have spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server. I have found lots of tutorials on how to do this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from desktop PC but that does not help me resolve my CrashPlan Pro app connecting to the Crashplan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour has changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file at
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your Crashplan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart crashplan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change on the server necessary for the easiest method without ssh tunnel is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml.
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect (because the token will change after every CrashPlan restart) or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start CrashPlan on your remote machine and it will connect properly; there are no other changes necessary. The latest CrashPlan (>4.4.1) will actually use the IP address from .ui_info.
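If you'd rather not expose the service on 0.0.0.0, an ssh tunnel to the port from .ui_info should also work (a sketch; 4339 is the port from the example above, and the user and address are placeholders):
ssh -L 4339:localhost:4339 user@192.168.1.20
and then put 127.0.0.1 as the address in the copied .ui_info instead of the server's IP.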
Install JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file

Xvfb Jenkins plugin: Unrecognized option: -displayfd

Jenkins version: 1.573
Jenkins Xvfb Plugin version: 1.0.15 (latest)
Linux OS: Red Hat Enterprise Linux Server release 5.9 (Tikanga)
Xorg -version
X Window System Version 7.1.1
Release Date: 12 May 2006
X Protocol Version 11, Revision 0, Release 7.1.1
Build Operating System: Linux 2.6.18-308.13.1.el5 x86_64 Red Hat, Inc.
Current Operating System: Linux kobaloki2 2.6.18-348.16.1.el5 #1 SMP Sat Jul 27 01:05:23 EDT 2013 x86_64
Build Date: 06 November 2012
Build ID: xorg-x11-server 1.1.1-48.100.el5
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Module Loader present
which Xvfb
/usr/bin/Xvfb
I have some Selenium GUI-based tests that I run against a given environment/server's web site; these tests check whether everything on that site is working, i.e. logging in/out and a few other clicks here and there complete successfully.
As these are Selenium GUI tests and I want to run them on a (Linux) machine in HEADLESS mode, I need an X display server (Xvfb).
I exported DISPLAY variable and started /etc/init.d/xvfb successfully.
root 5996 1 0 2014 ? 00:00:00 /usr/bin/Xvfb :99 -ac -screen 0 1024x768x8
I'm using the Xvfb plugin, which is installed successfully on my Jenkins instance; the configuration at both the Jenkins global and job level is set up correctly, and it works fine if I run the job on the master/slave instances (NOTE: currently I have created 2 slaves on the same master server, but I have other separate servers where I'm planning to install more slaves).
When I run only 2 simultaneous runs of the job, I see the following additional processes (one per run) and the job finishes successfully. NOTE: My offset value in the Xvfb plugin is 1. If I used 100, then the following would show :101 and :102 respectively.
u10003 16264 6921 1 12:56 ? 00:00:01 Xvfb :1 -screen 0 1024x768x8 -fbdir /production/JSlaves/kobaloki2_2/xvfb-2015-02-03_12-56-41-60597.fbdir
u10003 16289 6691 0 12:56 ? 00:00:00 Xvfb :2 -screen 0 1024x768x8 -fbdir /production/JSlaves/kobaloki2_1/xvfb-2015-02-03_12-56-46-7546741396559175462.fbdir
I'm trying to run concurrent runs of a Jenkins job (which successfully runs Selenium GUI integration/acceptance tests on the master/slave servers).
Now, What I'm trying to achieve is to run multiple concurrent builds/runs of this Jenkins job (so that I can have multiple tests running at the same time i.e. to perform some kind of Volume based testing). At this moment, I don't want to run these tests on a Selenium Grid server (out of scope of this post).
My questions:
1. If the check box for "Let Xvfb choose display name" is checked, then I'm getting the following error (here the job ran on the master Jenkins instance instead of a slave, hence the /production/jenkinsAKS/... base folder). How can I make Xvfb use the -displayfd option successfully?
13:33:01 Xvfb starting$ Xvfb -displayfd 2 -screen 0 1024x768x8 -fbdir /production/jenkinsAKS/xvfb-2015-02-03_13-33-00-6577455998897275731.fbdir
13:33:01 Unrecognized option: -displayfd
...
....bunch of options for Xvfb command
...
..
13:33:01 Fatal server error:
13:33:01 Unrecognized option: -displayfd
13:33:01
13:33:11
13:33:11 ERROR: Xvfb failed to start, consult the lines above for errors
Per this link: https://wiki.jenkins-ci.org/display/JENKINS/Xvfb+Plugin
Let Xvfb choose display name Uses the -displayfd option of Xvfb by which it chooses its own display name by scanning for an available one. This option requires a recent version of xserver, check your installation for support. Useful if you do not want to manage display number ranges but have the first free display number be used.
2. In the above snapshot (Xvfb plugin), I see Xvfb additional options box, is there any option that I can try which will tell Xvfb to use a display# which is not currently in use?
3. It seems like I need to update X server version (Xorg -version). How can I do that, what commands should I run?
4. If I un-check the above-mentioned checkbox and run multiple builds (more than 2) of this Jenkins job, then I get the following error if the DISPLAY number is already in use. Using that checkbox in the Xvfb plugin, I was trying to tell Xvfb to use a display number from the free list if the assigned one is not available.
This error comes either for display #1 or #2 depending upon how Xvfb plugin assigns the number in Jenkins environment (using node/slave# etc).
13:04:27 Fatal server error:
13:04:27 Server is already active for display 1
13:04:27 If this server is no longer running, remove /tmp/.X1-lock
13:04:27 and start again.
13:04:27
13:04:27 unlink: No such file or directory
13:04:42 unlink failed, errno 2
13:04:42 ERROR: Xvfb failed to start, consult the lines above for errors
How can I get rid of the above error (it seems this will go away once I can resolve bullet 2 above)?
NOTE: If I use a single slave (either on the same master instance or on any other server) and increase the # of executors from 1/2 to 20 or greater, then, Xvfb is successfully running N number of builds/runs/tests at the same time without any failures. I can also use naginator plugin if required for retrying a failed build if any due to DISPLAY not available. BUT, this is not what I'm looking at this time.
Answer time.
It depends on your machine, i.e. the Xvfb installed on your machine may not have the -displayfd option (but may have a different, similar one), while the Xvfb plugin in Jenkins passes it for you when you check that checkbox. Try a different option if available (see the Xvfb help or man page on your OS). For now, I'm NOT using / checking this checkbox.
Actually not required, as the Xvfb plugin will spawn a new instance and automatically assign a DISPLAY (:NN, where NN is a number) per individual run.
I can use the yum command.
This error doesn't occur every time. If it happens and the error appears in all Jenkins job runs, you can run the following command to fix it:
/etc/init.d/xvfb stop; sleep 2; /etc/init.d/xvfb start
To get a copy of the xvfb init script, you can find one online (some xvfb scripts that sit under the /etc/init.d folder have more options than just stop/start).
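For reference, on an older X server without -displayfd you could pick a free display number yourself before launching Xvfb; a rough manual sketch (outside the plugin, range chosen arbitrarily):
# find the first display number that has no lock file, then start Xvfb on it
for n in $(seq 1 99); do
  [ ! -e "/tmp/.X${n}-lock" ] && break
done
Xvfb ":${n}" -screen 0 1024x768x8 &
export DISPLAY=":${n}"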
Now, the solution to my ACTUAL problem (for which I was trying everything) is mentioned in other post here: Xvfb, Jenkins, Selenium tests - Capture Screenshots of all pages