Given a single (.jmx) file, what is the equivalent, in the following Taurus (.yml) configuration, of the JMeter UI functionality shown in the image below?
Taurus (.yml) Configuration
included-configs:
  - Variables-local.yml  # the file containing your properties definitions

execution:
  iterations: 1
  concurrency: 1
  scenario:
    script: AutomatedTests.jmx

modules:
  jmeter:
    class: bzt.modules.jmeter.JMeterExecutor
    path: C:/JMeter
    properties:
      jmeter.save.saveservice.autoflush: 'true'

reporting:
  - module: junit-xml
    filename: TEST-Taurus.xml
Image Reference to JMeter UI Functionality
Amend your "execution" block and add all of the modifications you want to make to the original JMeter script there, like:
included-configs:
  - Variables-local.yml  # the file containing your properties definitions

execution:
  iterations: 1
  concurrency: 1
  scenario:
    script: AutomatedTests.jmx
    modifications:
      set-prop:
        "Test Plan>TestPlan.serialize_threadgroups": true

modules:
  jmeter:
    class: bzt.modules.jmeter.JMeterExecutor
    path: C:/JMeter
    properties:
      jmeter.save.saveservice.autoflush: 'true'

reporting:
  - module: junit-xml
    filename: TEST-Taurus.xml
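To apply the change, run Taurus against the amended config as usual; a minimal invocation, assuming the YAML above is saved as automated-tests.yml (the filename is an assumption):

# assumes the config above was saved as automated-tests.yml;
# Taurus applies the scenario modifications to AutomatedTests.jmx at run time
bzt automated-tests.yml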
More information:
Modifications for Existing Scripts
Navigating your First Steps Using Taurus
I want to prevent the scenario where my model runs even though one or more of its source tables are (erroneously) empty. The phrase that comes to mind is a "pre-hook," although I'm not sure that's the right terminology.
Ideally I'd run dbt run --select MY_MODEL and, as part of that, these tests for non-emptiness of the source tables would run. However, I'm not sure dbt works like that. Currently I'm thinking I'll have to apply these tests to the sources and run those tests (according to this document) prior to executing dbt run.
Is there a more direct way of having dbt run fail if any of these sources are empty?
Personally, the way I'd go about this would be to define your my_source.yml to have not_null tests on every column, using something like this docs example:
version: 2

sources:
  - name: jaffle_shop
    database: raw
    schema: public
    loader: emr  # informational only (free text)
    loaded_at_field: _loaded_at  # configure for all sources

    tables:
      - name: orders
        identifier: Orders_
        loaded_at_field: updated_at  # override source defaults
        columns:
          - name: id
            tests:
              - not_null
          - name: price_in_usd
            tests:
              - not_null
And then in your run / build, use the following order of operations:
dbt test --select source:*
dbt build
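If you drive this from a shell (for example a CI step), chaining the two commands stops the run as soon as a source test fails; a minimal sketch, assuming the model from the question is called MY_MODEL:

# the run only proceeds if every source test passes (MY_MODEL is the model name from the question)
dbt test --select "source:*" && dbt run --select MY_MODEL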
In this circumstance, I'd highly recommend making your own variation on the generate_source macro from dbt-codegen which automatically defines your sources with columns & not_null tests included.
Trying to configure my first script
My goal is to automate an alert if my heater is in error… there are many error types… The only state that is good is "E-00: OK".
I would like to trigger the script only if the value is different from the state "E-00: OK".
Is there a way to do that?
Script Yaml
alias: >-
  Heater E10
sequence:
  - condition: state
    entity_id: sensor.heater_error_string
    state: "E-00: OK"
mode: single
icon: mdi:radiator
Yes, you can create a server-side automation which is triggered only when your text sensor's value changes from OK to any other state.
For example, you may try:
automation:
  trigger:
    - platform: state
      entity_id: sensor.heater_error_string
      from:
        - "E-00: OK"
  action:
    - service: notify.mobile_phone_app
      data:
        message: heater is not ok
        title: Heater Notification
  mode: single
I am investigating an exponential increase in Java heap size when executing complex scenarios, especially ones with multiple reusable scenarios. This is my attempt to troubleshoot the issue with a simple example and a possible explanation of the JVM heap usage.
Environment: Karate 1.1.0.RC4 | JDK 14 | Maven 3.6.3
Example: download the project, extract it and execute the Maven command as per the README.
Observation: as per the following example, if we call the same scenario multiple times, the response object grows exponentially, since it includes the response from the previously called scenario along with copies of the global variables.
#unexpected
Scenario: Not over-writing nested variable
* def response = call read('classpath:examples/library.feature#getLibraryData')
* string out = response
* def resp1 = response.randomTag
* karate.log('FIRST RESPONSE SIZE = ', out.length)
* def response = call read('classpath:examples/library.feature#getLibraryData')
* string out = response
* def resp2 = response.randomTag
* karate.log('SECOND RESPONSE SIZE = ', out.length)
Output:
10:26:23.863 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.875 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.885 [main] INFO com.intuit.karate - FIRST RESPONSE SIZE = 331
10:26:23.885 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.894 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.974 [main] INFO com.intuit.karate - SECOND RESPONSE SIZE = 1783
10:26:23.974 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.974 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.988 [main] INFO com.intuit.karate - THIRD RESPONSE SIZE = 8009
Do we really need to include the response and global variables in the response of a called feature file (non-shared scope)?
When we read a large JSON file and call multiple reusable scenario files, a copy of the read JSON data gets added to the response object each time. Is there a way to avoid this behavior?
Is there a better way to script complex tests using reusable scenarios without having multiple copies of the same variables?
Okay, can you look at this issue:
https://github.com/intuit/karate/issues/1675
I agree we can optimize the response and global variables. It would be great if you could contribute code.
When running the following scenario, the tests finish running but execution hangs immediately afterwards and the Gradle test command never finishes. The Cucumber report isn't built, so it hangs before that point.
It seems to be caused by having two call read() invocations to different scenarios that both call a third scenario. That third scenario references the parent context to inspect the current request.
When that parent request is stored in a variable the tests hang. When that variable is cleared before leaving that third scenario, the test finishes as normal. So something about having a reference to that context hangs the tests at the end.
Is there a reason this doesn't complete? Am I missing some important code that lets the tests finish?
I've added * def currentRequest = {} at the end of the special-request scenario and that allows the tests to complete, but that seems like a hack.
This is the top-level test scenario:
Scenario: Updates user id
* def user = call read('utils.feature#endpoint=create-user')
* set user.clientAccountId = user.accountNumber + '-test-client-account-id'
* call read('utils.feature#endpoint=update-user') user
* print 'the test is done!'
The test scenario calls two different scenarios in the same utils.feature file.
utils.feature:
#ignore
Feature: /users
Background:
* url baseUrl
#endpoint=create-user
Scenario: create a standard user for a test
Given path '/create'
* def restMethod = 'post'
* call read('special-request.feature')
When method restMethod
Then status 201
#endpoint=update-user
Scenario: set a user's client account ID
Given path '/update'
* def restMethod = 'put'
* call read('special-request.feature')
When method restMethod
Then status 201
And match response == {"status":"Success", "message":"Update complete"}
Both of the util scenarios call the special-request feature with different parameters/requests.
special-request.feature:
#ignore
Feature: Builds a special
Scenario: special-request
# The next line causes the test to sit for a long time
* def currentRequest = karate.context.parentContext.getRequest()
# Without the below clear of currentRequest, the test never finishes
# Clearing the reference to the parent context's request allows the test to finish
* def currentRequest = {}
Without currentRequest = {}, these are the last lines of output I get before the tests seem to stop.
12:21:38.816 [ForkJoinPool-1-worker-1] DEBUG com.intuit.karate - response time in milliseconds: 8.48
1 < 201
1 < Content-Type: application/json
{
"status": "Success",
"message": "Update complete"
}
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.818 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] the test is done!
12:21:38.818 [pool-1-thread-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
<==========---> 81% EXECUTING [39s]
With currentRequest = {}, the test completes and the cucumber report generates successfully which is what I would expect to happen even without that line.
Two comments:
* karate.context.parentContext.getRequest()
Wow, these are internal APIs not intended for users; I would strongly advise passing values around as variables instead. So all bets are off if you have trouble with that.
It does sound like you have a null-pointer in the above (no surprises here).
There is a bug in 0.9.4 where test failures in some edge cases (such as the things you are doing), pre-test life-cycle failures, or failures in karate-config.js can hang the parallel runner. You should see something in the logs that indicates a failure; if not, do try to help us replicate this problem.
This should be fixed in the develop branch, so you could help if you can build from source and test locally. Instructions are here: https://github.com/intuit/karate/wiki/Developer-Guide
And if you still see a problem, please do this: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
I would like to know why parallel RSpec shows a different coverage percentage and different missed resources compared to when I run without parallelisation.
Here is the output:
Sysctl[net.ipv6.conf.all.accept_redirects]
Sysctl[net.ipv6.conf.all.disable_ipv6]
Sysctl[net.ipv6.conf.default.accept_ra]
Sysctl[net.ipv6.conf.default.accept_redirects]
Sysctl[net.ipv6.conf.default.disable_ipv6]
Sysctl[net.ipv6.conf.lo.disable_ipv6]
Sysctl[vm.min_free_kbytes]
Sysctl[vm.swappiness]
Systemd::Unit_file[puppet_runner.service]
Users[application]
Users[global]
F
Failures:
1) Code coverage. Must be at least 95% of code coverage
Failure/Error: RSpec::Puppet::Coverage.report!(95)
expected: >= 95.0
got: 79.01
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:104:in `block in coverage_test'
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:106:in `coverage_test'
# /usr/local/bundle/gems/rspec-puppet-2.6.11/lib/rspec-puppet/coverage.rb:95:in `report!'
# ./spec/spec_helper.rb:22:in `block (2 levels) in <top (required)>'
Finished in 42.12 seconds (files took 2.11 seconds to load)
995 examples, 1 failure
Failed examples:
rspec # Code coverage. Must be at least 95% of code coverage
2292 examples, 2 failures
....................................................................
Total resources: 1512
Touched resources: 1479
Resource coverage: 97.82%
Untouched resources:
Apt::Source[archive.ubuntu.com-lsbdistcodename-backports]
Apt::Source[archive.ubuntu.com-lsbdistcodename-security]
Apt::Source[archive.ubuntu.com-lsbdistcodename-updates]
Apt::Source[archive.ubuntu.com-lsbdistcodename]
Apt::Source[postgresql]
Finished in 1 minute 25.3 seconds (files took 1.43 seconds to load)
2292 examples, 0 failures
Because it is not entirely clear from the question, I assume here that you have set up code coverage by adding a line to your spec/spec_helper.rb like:
at_exit { RSpec::Puppet::Coverage.report!(95) }
The coverage report is a feature provided by rspec-puppet.
Also, I have assumed that you have more than one spec file that contain your tests and that these are being run in parallel by calling the parallel_spec task that is provided by puppetlabs_spec_helper.
The problem is this:
For code coverage to work properly, all of the Rspec tasks need to run within the same process (see the code here).
Meanwhile, for parallelisation to occur, there must be multiple spec files, which are run in parallel in separate processes. That limitation arises from the parallel_tests library that is used by the parallel_spec task. See its README.
The code coverage report, therefore, only counts resources that were seen inside each process.
Example:
class test {
  file { '/tmp/foo':
    ensure => file,
  }

  file { '/tmp/bar':
    ensure => file,
  }
}
Spec file 1:
require 'spec_helper'

describe 'test' do
  it 'is expected to contain file /tmp/foo' do
    is_expected.to contain_file('/tmp/foo').with({
      'ensure' => 'file',
    })
  end
end
Spec file 2:
require 'spec_helper'

describe 'test' do
  it 'is expected to contain file /tmp/bar' do
    is_expected.to contain_file('/tmp/bar').with({
      'ensure' => 'file',
    })
  end
end
spec_helper.rb:
require 'puppetlabs_spec_helper/module_spec_helper'
at_exit { RSpec::Puppet::Coverage.report!(95) }
Run in parallel:
Total resources: 2
Touched resources: 1
Resource coverage: 50.00%
Untouched resources:
File[/tmp/bar]
Finished in 0.10445 seconds (files took 1.03 seconds to load)
1 example, 0 failures
Total resources: 2
Touched resources: 1
Resource coverage: 50.00%
Untouched resources:
File[/tmp/foo]
Must be at least 95% of code coverage (FAILED - 1)
4 examples, 0 failures
Took 1 seconds
Run without parallelisation:
Finished in 0.12772 seconds (files took 1.01 seconds to load)
2 examples, 0 failures
Total resources: 2
Touched resources: 2
Resource coverage: 100.00%
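So if you want both the parallel speed-up and a trustworthy coverage number, one option is to keep the coverage check out of the parallel run and do it in a separate single-process pass; a minimal sketch, assuming the standard rake tasks provided by puppetlabs_spec_helper:

# parallel run for fast feedback; per-process coverage figures are unreliable
bundle exec rake parallel_spec

# single-process run so RSpec::Puppet::Coverage sees every touched resource
bundle exec rake spec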