Is there a better way to reference sub-features so that this test finishes? - karate

When running the following scenario, the tests finish but execution hangs immediately afterwards and the Gradle test command never completes. The Cucumber report isn't built, so the hang happens before that point.
It seems to be caused by having two call read() invocations to different scenarios that both call a third scenario. That third scenario references the parent context to inspect the current request.
When that parent request is stored in a variable, the tests hang. When that variable is cleared before leaving that third scenario, the test finishes as normal. So something about holding a reference to that context hangs the tests at the end.
Is there a reason this doesn't complete? Am I missing some important code that lets the tests finish?
I've added * def currentRequest = {} at the end of the special-request scenario and that allows the tests to complete, but that seems like a hack.
This is the top-level test scenario:
Scenario: Updates user id
  * def user = call read('utils.feature#endpoint=create-user')
  * set user.clientAccountId = user.accountNumber + '-test-client-account-id'
  * call read('utils.feature#endpoint=update-user') user
  * print 'the test is done!'
The test scenario calls two different scenarios in the same utils.feature file.
utils.feature:
#ignore
Feature: /users

Background:
  * url baseUrl

#endpoint=create-user
Scenario: create a standard user for a test
  Given path '/create'
  * def restMethod = 'post'
  * call read('special-request.feature')
  When method restMethod
  Then status 201

#endpoint=update-user
Scenario: set a user's client account ID
  Given path '/update'
  * def restMethod = 'put'
  * call read('special-request.feature')
  When method restMethod
  Then status 201
  And match response == {"status":"Success", "message":"Update complete"}
Both of the util scenarios call the special-request feature with different parameters/requests.
special-request.feature:
#ignore
Feature: Builds a special request

Scenario: special-request
  # The next line causes the test to sit for a long time
  * def currentRequest = karate.context.parentContext.getRequest()
  # Without the clear of currentRequest below, the test never finishes
  # De-referencing the parent context's request allows the test to finish
  * def currentRequest = {}
Without currentRequest = {}, these are the last lines of output I get before the tests seem to stop:
12:21:38.816 [ForkJoinPool-1-worker-1] DEBUG com.intuit.karate - response time in milliseconds: 8.48
1 < 201
1 < Content-Type: application/json
{
"status": "Success",
"message": "Update complete"
}
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.818 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] the test is done!
12:21:38.818 [pool-1-thread-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
<==========---> 81% EXECUTING [39s]
With currentRequest = {}, the test completes and the Cucumber report generates successfully, which is what I would expect to happen even without that line.

Two comments:
* karate.context.parentContext.getRequest()
Wow, these are internal APIs not intended for users. I would strongly advise passing values around as variables instead (see the sketch just below). So all bets are off if you have trouble with that.
It does sound like you have a null-pointer in the above (no surprises here).
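For illustration (the payload variable below is made up, not taken from your feature files), the calling scenario can pass whatever the called feature needs as a call argument, so special-request.feature never has to reach into its parent context:
# in utils.feature, build the data you want to inspect and pass it explicitly
* def payload = { name: 'test-user' }
* call read('special-request.feature') { currentRequest: '#(payload)' }
# in special-request.feature, the argument arrives as a normal variable
* print 'received request payload:', currentRequest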
There is a bug in 0.9.4 where failures in some edge cases (such as what you are doing, the pre-test life-cycle, or failures in karate-config.js) cause the parallel runner to hang. You should see something in the logs that indicates a failure; if not, do try to help us replicate this problem.
This should be fixed in the develop branch, so you could help if you can build from source and test locally. Instructions are here: https://github.com/intuit/karate/wiki/Developer-Guide
And if you still see a problem, please do this: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Related

Elrond mandos test: elrond_wasm_debug::mandos_rs passes however erdpy contract test fails

I'm writing test cases for my NFT smart contract (SC). When I check the state of the SC after creating my NFT, I expect to see a variable (next_index_to_mint: u64, which I increase by 1 for every new NFT) being updated.
So I'm running the test using the command:
$ erdpy contract test
INFO:projects.core:run_tests.project: /Users/<user>/sc_nft
INFO:myprocess:run_process: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos'], in folder: None
CRITICAL:cli:External process error:
Command line: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos']
Output: Scenario: buy_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: create_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: init.scen.json ... ok
Done. Passed: 1. Failed: 2. Skipped: 0.
ERROR: some tests failed
However, when I run the test using the elrond_wasm_debug::mandos_rs function with the create_nft.scen.json file, it passes.
use elrond_wasm_debug::*;

fn world() -> BlockchainMock {
    let mut blockchain = BlockchainMock::new();
    blockchain.set_current_dir_from_workspace("");
    blockchain.register_contract_builder("file:output/test.wasm", nft_auth_card::ContractBuilder);
    blockchain
}

#[test]
fn create_nft() {
    elrond_wasm_debug::mandos_rs("mandos/create_nft.scen.json", world());
}
BTW, if you want to add this to the NFT SC example in the tests/ folder, that would be great.
I tried to put an incorrect value and it failed as expected. So my question is: how is it possible that it works using mandos via elrond_wasm_debug but not via erdpy?
running 1 test
thread 'create_nft' panicked at 'bad storage value. Address: sc:nft-minter. Key: str:nextIndexToMint. Want: "0x04". Have: 0x02', /Users/<user>/elrondsdk/vendor-rust/registry/src/github.com-1ecc6299db9ec823/elrond-wasm-debug-0.28.0/src/mandos_step/check_state.rs:56:21
Here is the code (I use the default NFT SC example):
const NFT_INDEX: u64 = 0;

fn create_nft_with_attributes<T: TopEncode>(...) -> u64 {
    ...
    self.next_index_to_mint().set_if_empty(&NFT_INDEX);
    let next_index_to_mint = self.next_index_to_mint().get();
    self.next_index_to_mint().set(next_index_to_mint + 1);
    ...
}

#[storage_mapper("nextIndexToMint")]
fn next_index_to_mint(&self) -> SingleValueMapper<u64>;
Short answer: most likely you haven't re-built your contract before testing it with erdpy.
Long answer: currently there are two ways mandos tests are executed, as you've exemplified in your case:
Run tests directly from rust through mandos_rs
Run tests through erdpy (which in turn uses mandos_go)
These two frameworks (mandos_rs and mandos_go) work in different ways:
mandos_rs: this framework runs your Rust code directly and tests it against a mocked VM and a mocked blockchain in the background. Therefore, it's not necessary to build your contract when using mandos_rs.
mandos_go: this framework tests your compiled contract against a real VM with a mocked blockchain in the background, so it's necessary to build your latest changes into .wasm bytecode (e.g. erdpy contract build) before running the tests via mandos_go, as this compiled file is loaded by the VM just like in a real use scenario.
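In practice that means re-building before re-running the erdpy tests, for example:
$ erdpy contract build
$ erdpy contract test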

Calling Karate feature file returns response object including multiple copies of previous response object of parent scenario

I am investigating an exponential increase in Java heap size when executing complex scenarios, especially with multiple reusable scenarios. This is my attempt to troubleshoot the issue with a simple example and a possible explanation of the JVM heap usage.
Environment: Karate 1.1.0.RC4 | JDK 14 | Maven 3.6.3
Example: download the project, extract it, and execute the Maven command as per the README.
Observation: as per the following example, if we call the same scenario multiple times, the response object grows exponentially since it includes the response from the previously called scenario along with copies of the global variables.
#unexpected
Scenario: Not over-writing nested variable
  * def response = call read('classpath:examples/library.feature#getLibraryData')
  * string out = response
  * def resp1 = response.randomTag
  * karate.log('FIRST RESPONSE SIZE = ', out.length)
  * def response = call read('classpath:examples/library.feature#getLibraryData')
  * string out = response
  * def resp2 = response.randomTag
  * karate.log('SECOND RESPONSE SIZE = ', out.length)
Output:
10:26:23.863 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.875 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.885 [main] INFO com.intuit.karate - FIRST RESPONSE SIZE = 331
10:26:23.885 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.894 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.974 [main] INFO com.intuit.karate - SECOND RESPONSE SIZE = 1783
10:26:23.974 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.974 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.988 [main] INFO com.intuit.karate - THIRD RESPONSE SIZE = 8009
Do we really need to include the response and global variables in the result of a called feature file (non-shared scope)?
When we read a large JSON file and call multiple reusable scenario files, a copy of the read JSON data gets added to the response object each time. Is there a way to avoid this behavior?
Is there a better way to script complex tests using reusable scenarios without having multiple copies of the same variables?
Okay, can you look at this issue:
https://github.com/intuit/karate/issues/1675
I agree we can optimize the response and global variables. It would be great if you can contribute code.
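In the meantime, a possible workaround (a sketch only, similar in spirit to the currentRequest = {} trick in the first question above) is to copy out just the field you need and then drop the reference to the full call result:
* def result = call read('classpath:examples/library.feature#getLibraryData')
* def randomTag = result.randomTag
* def result = null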

Robotframework - get failing keyword / stack trace of failure

I have a keyword called "Debug Teardown" which prints the test status and then runs the DebugLibrary Debug keyword if the test has failed.
I would like to be able to log to console which keyword has caused the failure, so I can more effectively debug my test.
Is it possible to get the stack trace or most recent test keyword, and log it to the console?
Here is my Debug Teardown keyword:
Debug Teardown
    Run Keyword If Test Failed    Log    ${TEST STATUS}: ${TEST MESSAGE}    ERROR
    Run Keyword If Test Failed    Debug
You can get a bit more information if you create a listener and also set the log level to DEBUG. Inside the listener you can save the results of log commands, and then when a keyword fails you can print it out or do whatever else you want.
For example, here's a listener that will print to stdout the last log message when a keyword fails:
from __future__ import print_function

class my_listener():
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        # register this library instance as its own listener
        self.ROBOT_LIBRARY_LISTENER = self
        self.last_log = None

    def _log_message(self, message):
        # remember the most recent log message (a dict with 'message', 'level', etc.)
        self.last_log = message

    def _end_keyword(self, name, attrs):
        # when a keyword fails, print the last captured log message
        if attrs['status'] == 'FAIL':
            print("\n******\n", self.last_log['message'])
You would use it by importing the listener like a normal library, and also setting the log level to DEBUG (otherwise you'll get the error but no stack trace).
Example:
*** Settings ***
Library        my_listener
Suite Setup    Set Log Level    DEBUG

*** Test Cases ***
Example
    some keyword
You might be able to utilize Set Suite Variable to update a "global" variable as you go; the last value set would tell you where the failure happened.
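For example (the keyword names below are placeholders, not from your suite):
*** Settings ***
Test Teardown    Debug Teardown

*** Variables ***
${LAST STEP}    (not set yet)

*** Test Cases ***
Example
    Set Suite Variable    ${LAST STEP}    logging in
    Log In To Application
    Set Suite Variable    ${LAST STEP}    checking the balance
    Check Account Balance

*** Keywords ***
Debug Teardown
    Run Keyword If Test Failed    Log    Failed while: ${LAST STEP}    ERROR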

Karate - TestNG stops execution when any one of the steps fails

Karate step execution stops when any one of the steps fails.
Example:
Scenario: verify user details
  Given url "this is my webservice"
  When method post
  Then status 200
  * assert 1 == 2
  Then response
  Then match XXXXXXX
  Then match XXXX
The step fails at the assert, and the remaining steps do not execute. Is there any way the remaining steps can continue even if my assert fails?
This is the expected behavior.
But you can use the karate.match() function to perform the assert manually. Then you can use conditional logic to decide whether you want to continue with the next steps or not. But I totally don't recommend this.
For example:
* def temp = karate.match(actual, expected)
* print 'some step'
* assert temp.pass
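For instance, to log the details but keep going (response and the expected JSON below are placeholders):
* def result = karate.match(response, { status: 'Success' })
* if (!result.pass) karate.log('match failed:', result.message)
* print 'this step still runs'
* assert result.pass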

JMeter Pass, fail, Warning?

So I am running a few tests on JMeter and I have assertions set up for pass/fail. The issue is, I need to set up a "Warning" or "Caution" result.
For example -
Latency < 500ms = Pass
Latency > 1000ms = Fail
Latency > 501ms AND Latency < 999ms = Caution
The above is just an example; the gap between the thresholds would be much smaller.
Does anyone know how to set something like this up in JMeter?
For the moment JMeter does not support a "caution" result; a sampler can either be successful or not. You can set a custom response status code or message, print something to jmeter.log, send an email, etc., but you cannot get anything other than Success: true|false without core JMeter changes.
You could try using a JSR223 Assertion to implement your pass/fail criteria logic. The relevant code, which fails the sampler above 1000ms and sets the response code to 599 with a CAUTION message for the in-between range, would be something like:
def latency = prev.getLatency() as int
def range = new IntRange(501, 999)

if (latency >= 1000) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Latency exceeds 1000 (was ' + latency + ')')
}

if (range.contains(latency)) {
    prev.setResponseCode('599')
    prev.setResponseMessage('CAUTION! High latency: ' + latency)
}
If the latency is between 501 and 999 inclusive, the sampler keeps its pass status but carries the 599 response code and the CAUTION message; above 1000 the assertion fails as normal.
More information:
prev is an instance of the SampleResult class; see the JavaDoc for available methods and fields
the same applies to AssertionResult
also check out Scripting JMeter Assertions in Groovy - A Tutorial for comprehensive information on using Groovy to set custom JMeter sampler failure conditions