JMeter - property values are not substituted when running from the CLI

I am using JMeter 5.1.1 on a Mac and have built a simple project to test property value substitution from the command line. The project contains a single Dummy Sampler that tries to print the following:
${__P(resources.folder)}, ${__P(propertiesfile)} and ${__property(propertiesfile)}
I am trying to run this project from the CLI using the command:
jmeter -n -t TestProj.jmx -l jmeter/TestProjResults.jtl -j jmeter/TestProj.log -Dresources.folder=/Users/h244955/Coding/bga/spogdashboard/tests/perf -Dpropertiesfile=baforgeperfproperties
The values are not getting substituted and I am seeing the following in the log:
2019-10-22 20:48:09,531 DEBUG o.a.j.e.u.ValueReplacer: About to replace in property of type: class org.apache.jmeter.testelement.property.StringProperty: ${__P(resources.folder)}
${__P(propertiesfile)}
${__property(propertiesfile)}
2019-10-22 20:48:09,533 DEBUG o.a.j.t.p.AbstractProperty: Not running version, return raw function string
2019-10-22 20:48:09,533 DEBUG o.a.j.e.u.ValueReplacer: Replacement result: ${__P(resources.folder)}
${__P(propertiesfile)}
${__property(propertiesfile)}
2019-10-22 20:48:09,534 DEBUG o.a.j.e.u.ValueReplacer: About to replace in property of type: class org.apache.jmeter.testelement.property.StringProperty: Dummy Sampler used to simulate requests and responses
without actual network activity. This helps debugging tests.
2019-10-22 20:48:09,534 DEBUG o.a.j.e.u.ValueReplacer: Replacement result: Dummy Sampler used to simulate requests and responses
without actual network activity. This helps debugging tests.
2019-10-22 20:48:09,534 DEBUG o.a.j.e.u.ValueReplacer: About to replace in property of type: class org.apache.jmeter.testelement.property.StringProperty: ${__Random(50,500)}
2019-10-22 20:48:09,534 DEBUG o.a.j.t.p.AbstractProperty: Not running version, return raw function string
2019-10-22 20:48:09,534 DEBUG o.a.j.e.u.ValueReplacer: Replacement result: ${__Random(50,500)}
However, when I run this project from the GUI, declaring the same properties beforehand in a JSR223 Sampler, the values are substituted in the Dummy Sampler as expected. I searched for an explanation of the highlighted log lines above, but in vain.

You need to set the JMeter properties using -J rather than -D; the __P() and __property() functions read JMeter properties, while -D sets Java system properties:
jmeter -n -t TestProj.jmx -Jresources.folder=/Users/h244955/Coding/bga/spogdashboard/tests/perf -Jpropertiesfile=baforgeperfproperties -l jmeter/TestProjResults.jtl -j jmeter/TestProj.log
-D[prop_name]=[value]
defines a Java system property value.
-J[prop_name]=[value]
defines a local JMeter property.
To add an additional JMeter properties file, use -q (example below):
-q, --addprop <argument>
additional JMeter property file(s)
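For example, a run that loads extra properties from a file might look like this (/path/to/extra.properties is a placeholder file name):
jmeter -n -t TestProj.jmx -q /path/to/extra.properties -l jmeter/TestProjResults.jtl -j jmeter/TestProj.log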

I cannot reproduce the issue, so my expectation is that your JMeter installation is somehow broken. In particular, make sure the ApacheJMeter_functions.jar file is present in the "lib/ext" folder of your JMeter installation.
Make sure to get JMeter from the official downloads page and verify the downloaded archive's integrity; check the How to Get Started With JMeter: Part 1 - Installation & Test Plans article for details.
Make sure to launch JMeter from its "bin" folder, to wit:
cd /path/where/jmeter/lives/bin
./jmeter -Dpropertiesfile=baforgeperfproperties -n -t test.jmx ....
The ./jmeter part is important: it ensures that you launch JMeter from the current folder rather than from another folder on your macOS PATH.

Related

TestCafe Studio - How to debug test failure when .testcafe file is executed?

I'm using TestCafe Studio to create my tests and executing the tests, written in the .testcafe format, using the TestCafe Docker container. I'm also using Drone as the CI environment.
Below is the command I use to execute my tests:
/opt/testcafe/docker/testcafe-docker.sh -c 3 chromium -q --skip-js-errors --assertion-timeout 60000 --selector-timeout 60000 CommonScenarios/*.testcafe
When a test fails, I do not get enough information about the failure. For example, below is a printed error log:
1) AssertionError: expected false to be truthy
Browser: Chrome 91.0.4472.124 / Linux 0.0
7
Is there any way to get enough detail about which step is actually failing when the tests are executed in the .testcafe format?
(When I run the .js version of the test, it reports which line is failing.)
This looks like a bug in the TestCafe framework. I opened an issue in the GitHub repository: https://github.com/DevExpress/testcafe/issues/6424. Subscribe to it to be notified of updates.
As a simple solution, you can convert your *.testcafe fixture file to *.js. Also, there is a more complex workaround - it allows you to determine which step is failing:
Open the *.testcafe file in VS Code or some other editor. You will see that it looks like a JSON file.
Find an object with the "name" property whose value corresponds to your test name: "name": "Your-failed-test-name"
Look at the "commands" array. Find an object with the "callsite" property whose value is equal to the number that you can see in the error console output. This object specifies the failed step.
Note that the format of this file is intended for internal use, and it is not recommended to modify it manually. A sketch of the relevant structure is shown below.
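For illustration only, the object you are looking for might resemble the following; the "name", "commands", and "callsite" keys come from the steps above, the real file contains many more fields, and the callsite value 7 matches the number in the error output:
{
  "name": "Your-failed-test-name",
  "commands": [
    { "callsite": 7 }
  ]
}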

How to use command line parameters with JMeter?

I'm using JMeter for testing APIs, and I want to parametrize the project's URL from the terminal and then use this parameter in JMeter. I set testurl = test.com in the terminal and want to read this URL via testurl. The command I run: ./jmeter -n -t your_script.jmx -l -Jurl=$testurl. In the HTTP Request's "Server Name or IP" field I use ${__P(url)}. But when I run my automation in the terminal, my test scripts do not go to the URL that was defined. Please help me! Thanks.
At first impression it seems that what you are trying should work, but the devil is in the detail. I would suggest you try the following:
Verify that the environment variable is set correctly: use export testurl=test.com (no spaces around =). Confirm it with echo $testurl (see the sketch below).
Try a Debug Sampler, which should help you verify whether JMeter is picking up the property correctly: https://www.blazemeter.com/blog/how-debug-your-apache-jmeter-script/
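For reference, a complete invocation might look like this; note that -l requires a results-file argument (results.jtl below is a placeholder name), so in the command above -Jurl=$testurl is likely being consumed as the results file:
export testurl=test.com
./jmeter -n -t your_script.jmx -l results.jtl -Jurl=$testurl
In the test plan, ${__P(url)} then resolves to the passed value; ${__P(url,test.com)} would additionally fall back to a default when the property is not set.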
Hope this helps.

JMeter Test Results Monitoring/Analysis

I want to start load testing by running JMeter from the command line for more accurate results, but how can I monitor the run and then analyze the results after the test finishes?
You can generate a JTL (JMeter results) file while executing the JMX (JMeter script) file from the command line. A sample command for generating a JTL file looks like this:
jmeter -n -t path-to-jmeterScript.jmx -l path-to-jtlFile.jtl
After the script execution completes, you can open the JMeter GUI and simply load the JTL file into any listener, as per your requirement.
Most of the listeners in JMeter have an option to save the results into a file. This file usually contains not the report itself, but the samples generated by the test. If you define this filename, you can generate reports from these saved files. For example, see http://jmeter.apache.org/usermanual/component_reference.html#Summary_Report.
If you run JMeter in command-line non-GUI mode, passing the results file name via the -l parameter, it will write results there. After the test finishes, you will be able to open the file with the listener of your choice and perform the analysis.
By default JMeter writes results in chunks; if you need to monitor them in real time, add the following line to the user.properties file (which lives under the /bin folder of your JMeter installation):
jmeter.save.saveservice.autoflush=true
You can use other properties whose names start with jmeter.save.saveservice.* to control which metrics you need to store, as sketched below. The list with default values can be seen in the jmeter.properties file. See the Apache JMeter Properties Customization Guide for more information on the various JMeter property types and ways of working with them.
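For illustration, a few commonly adjusted entries in user.properties might look like this (the values are examples, not recommendations):
# flush results to disk as they arrive, for real-time monitoring
jmeter.save.saveservice.autoflush=true
# write results as CSV instead of XML
jmeter.save.saveservice.output_format=csv
# omit response data to keep the results file small
jmeter.save.saveservice.response_data=false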
You can also consider running your JMeter test via the Taurus tool - it provides live statistics as the test runs, either in console mode or via a web interface.
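For example, assuming Taurus is installed (it provides the bzt command), an existing JMX file can be run directly:
bzt path-to-jmeterScript.jmx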

Go, Golang: Travis error for main program, go get -v

In my repo's subdirectory, I have some scripts with package main to show some example usage of my package. But this gives me the following errors when being tested on Travis:
repo
  example-dir
    sub-dir
      main.go   // this file gives me the error below
github.com/~/directory-for-main-program
The command "go get -v ./..." failed. Retrying, 2 of 3.
I see this error only on Travis, not on my local machine with go test.
Is there any way to separate the main program and still pass the Travis testing?
Either use the correct import path in your main.go, which is the proper way, or use build constraints to disable that file:
// +build local

package main

// other code
(The blank line after the build constraint is required for Go to recognize it.) Then, to build it locally, use go build -tags local or go run -tags local.

Send build status from Travis to Sauce Labs

I have my testing up and running with Travis/SauceLabs. Now I would like to add a SauceLabs test badge to my repo.
I added the badge markdown to my Readme file, but how can I send the build pass/fail status to Sauce Labs? I found these instructions for Selenium:
Key: passed
Value type: bool
Example: "passed": true
but how/where do I add this info in my files for my Grunt-Karma/Travis/Sauce Labs testing?
The karma-sauce-launcher was using the wrong id; this was fixed on GitHub by a PR and released on npm as version 0.2.5.
Additionally, the saucelabs reporter has to be added to the Gruntfile options, alongside the existing ones or the default progress reporter.
So for the bugfix I added this to my package.json:
"karma-sauce-launcher": "~0.2.5"
and added this:
reporters: ['progress', 'saucelabs'],
in the Karma options. A sketch of the combined configuration is shown below.
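Putting the two changes together, the relevant excerpt of the Karma configuration might look roughly like this; the testName and build values are illustrative placeholders, not part of the original answer:
// karma.conf.js excerpt (sketch)
sauceLabs: {
  testName: 'My project tests',            // placeholder label shown in the Sauce Labs dashboard
  build: process.env.TRAVIS_BUILD_NUMBER   // associate the job with the Travis build
},
reporters: ['progress', 'saucelabs'],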
You have to use the REST API. What you would do is add code in a function that is executed at the very end of your test suite, when it knows the result of your test run. This code has to perform a request equivalent to this curl command:
$ curl -H "Content-Type:text/json" -s -X PUT -d '{"passed": <status>}' http://<username>:<key>@saucelabs.com/rest/v1/<username>/jobs/<job-id>
(The identifiers in angle brackets have to be replaced with appropriate values.)
I've done it with Python, but I don't have JavaScript code to share. And by the way, you have to do this for Selenium too because, as the documentation states, when Selenium sends the job data to Sauce Labs, it cannot yet know what the test result is going to be.
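For illustration, a minimal Python version of that call might look like this (not the answerer's original code; it uses the requests library, and the username, access key, and job id values are placeholders):
import requests

SAUCE_USERNAME = "your-username"      # placeholder
SAUCE_ACCESS_KEY = "your-access-key"  # placeholder
job_id = "your-job-id"                # placeholder

# Mark the Sauce Labs job as passed (or failed) once the suite knows its result.
resp = requests.put(
    "https://saucelabs.com/rest/v1/%s/jobs/%s" % (SAUCE_USERNAME, job_id),
    json={"passed": True},
    auth=(SAUCE_USERNAME, SAUCE_ACCESS_KEY),
)
resp.raise_for_status()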
If you're using Grunt already, you should use https://github.com/axemclion/grunt-saucelabs, as it is the official plug-in worked on by devs from Sauce Labs.