Calling a Karate feature file returns a response object that includes multiple copies of the previous response object of the parent scenario - karate

I am investigating an exponential increase in Java heap size when executing complex scenarios, especially ones with multiple reusable scenarios. This is my attempt to troubleshoot the issue with a simple example and a possible explanation for the JVM heap usage.
Environment: Karate 1.1.0.RC4 | JDK 14 | Maven 3.6.3
Example: Download the project, extract it and execute the Maven command as per the README
Observation: As per the following example, if we call the same scenario multiple times, the response object grows exponentially since it includes the response from the previously called scenario along with copies of the global variables.
#unexpected
Scenario: Not over-writing nested variable
* def response = call read('classpath:examples/library.feature#getLibraryData')
* string out = response
* def resp1 = response.randomTag
* karate.log('FIRST RESPONSE SIZE = ', out.length)
* def response = call read('classpath:examples/library.feature#getLibraryData')
* string out = response
* def resp2 = response.randomTag
* karate.log('SECOND RESPONSE SIZE = ', out.length)
Output:
10:26:23.863 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.875 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.885 [main] INFO com.intuit.karate - FIRST RESPONSE SIZE = 331
10:26:23.885 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.894 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.974 [main] INFO com.intuit.karate - SECOND RESPONSE SIZE = 1783
10:26:23.974 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 9 by tag: getLibraryData
10:26:23.974 [main] INFO c.intuit.karate.core.FeatureRuntime - scenario called at line: 14 by tag: libraryData
10:26:23.988 [main] INFO com.intuit.karate - THIRD RESPONSE SIZE = 8009
Do we really need to include the response and global variables in the result of a called feature file (non-shared scope)?
When we read a large JSON file and call multiple reusable scenario files, a copy of the read JSON data gets added to the response object each time. Is there a way to avoid this behavior?
Is there a better way to script complex tests using reusable scenarios without having multiple copies of the same variables?

Okay, can you look at this issue:
https://github.com/intuit/karate/issues/1675
I agree we can optimize the response and global variables. It would be great if you can contribute code.
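In the meantime, one workaround that keeps the accumulated data small is to extract only the field you need from the call result and then drop the full result, rather than keeping it around in a variable. A minimal sketch, reusing getLibraryData and randomTag from the example above (whether this fully prevents the growth may depend on the Karate version):
Scenario: keep only the needed field from a called scenario
* def result = call read('classpath:examples/library.feature#getLibraryData')
* def randomTag1 = result.randomTag
# drop the full call result so only the extracted field is retained
* def result = null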

Related

Splunk HEC - Disable multiline event splitting due to timestamp

I have a multi-line event that has timestamps on different lines as shown in the below example
[2022-02-08 08:30:23:776] [INFO] [com.example.monitoring.ServiceMonitor] Status report for services
Service 1 - Available
Service 2 - Unavailable since 2022-02-08T07:00:00 UTC
Service 3 - Available
When the log is sent to an HEC, the lines are split into multiple events during the parsing phase of the Splunk data pipeline. Due to the presence of a timestamp on line 3, it creates two different events.
When searching in Splunk, I see the two events shown below, although they are supposed to be part of a single event.
Event 1
[2022-02-08 08:30:23:776] [INFO] [com.example.monitoring.ServiceMonitor] Status report for services
Service 1 - Available
Event 2
Service 2 - Unavailable since 2022-02-08T07:00:00 UTC
Service 3 - Available
I can solve the issue by setting DATETIME_CONFIG to NONE in props.conf, but that creates another issue: Splunk will stop recognizing timestamps in the event.
Is it possible to achieve the same result but without disabling the above property?
The trick is to set TIME_PREFIX correctly:
https://ibb.co/PCG5TqY
This will only look for timestamps in lines starting with a "[".
Here is the entry for props.conf:
[changeme]
disabled = false
pulldown_type = true
category = Custom
DATETIME_CONFIG =
NO_BINARY_CHECK = true
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S:%3N
SHOULD_LINEMERGE = true

Karate UI taking screenshots on errors in the afterScenario hook [duplicate]

This question already has an answer here:
Attaching screenshots to json report
(1 answer)
Closed 1 year ago.
I have a UI test where I want to capture a screenshot when the test fails. I have explored the driver.screenshot() functionality and it works well whenever I want to take a screenshot during the test. However, I only want to capture the screenshot at the point when the test fails.
I had a look at the solution in the afterScenario hook mentioned in:
Karate UI standalone - can there be screenshots attached to reports on failure?
However, this expects INFO.errorMessage to be present in the log files.
Here is a sample of what I see in my karate.log file; I cannot find the error being logged under INFO.errorMessage, and the structure is slightly different:
09:49:22.192 [ForkJoinPool-1-worker-3] DEBUG c.intuit.karate.driver.DriverOptions - >> {"method":"Runtime.evaluate","params":{"expression":"document.evaluate(\"\/\/div[#role='option']\", document, null, 9, null).singleNodeValue.click()","returnByValue":true},"id":361}
09:49:22.200 [nioEventLoopGroup-2-1] DEBUG c.intuit.karate.driver.DriverOptions - << {"id":361,"result":{"result":{"type":"object","subtype":"error","className":"TypeError","description":"TypeError: Cannot read property 'click' of null\n at <anonymous>:1:84","objectId":"{\"injectedScriptId\":2,\"id\":3}"},"exceptionDetails":{"exceptionId":2,"text":"Uncaught","lineNumber":0,"columnNumber":83,"scriptId":"19","stackTrace":{"callFrames":[{"functionName":"","scriptId":"19","url":"","lineNumber":0,"columnNumber":83}]},"exception":{"type":"object","subtype":"error","className":"TypeError","description":"TypeError: Cannot read property 'click' of null\n at <anonymous>:1:84","objectId":"{\"injectedScriptId\":2,\"id\":4}"}}}}
09:49:22.201 [nioEventLoopGroup-2-1] WARN c.intuit.karate.driver.DriverOptions - devtools error: [id: 361, result: [type: MAP, value: {type=object, subtype=error, className=TypeError, description=TypeError: Cannot read property 'click' of null
at <anonymous>:1:84, objectId={"injectedScriptId":2,"id":3}}]]
09:49:22.202 [ForkJoinPool-1-worker-3] ERROR c.intuit.karate.driver.DriverOptions - js eval failed twice:document.evaluate("//div[#role='option']", document, null, 9, null).singleNodeValue.click(), error: {"type":"object","subtype":"error","className":"TypeError","description":"TypeError: Cannot read property 'click' of null\n at <anonymous>:1:84","objectId":"{\"injectedScriptId\":2,\"id\":3}"}
09:49:22.207 [ForkJoinPool-1-worker-3] ERROR com.intuit.karate - feature call failed: classpath:AMUI/EndToEndTests/CreateAgreement.feature
arg: {"shared_agrname":"TEST_57059","local_agrname":"TEST_57059","agr_type":"CSA"}
CreateAgreement.feature:76 - evaluation (js) failed: click("//div[#role='option']"), java.lang.RuntimeException: js eval failed twice:document.evaluate("//div[#role='option']", document, null, 9, null).singleNodeValue.click(), error: {"type":"object","subtype":"error","className":"TypeError","description":"TypeError: Cannot read property 'click' of null\n at <anonymous>:1:84","objectId":"{\"injectedScriptId\":2,\"id\":3}"}
stack trace: com.intuit.karate.driver.DevToolsDriver.eval(DevToolsDriver.java:300)
09:49:22.596 [pool-1-thread-1] INFO com.intuit.karate.Runner - <<fail>> feature 41 of 42: classpath:AMUI/EndToEndTests/CSA.feature
Looking at the above, I modified the afterScenario hook to:
* configure afterScenario = function(){ if (karate.ERROR.arg) driver.screenshot() }
but that does not help. It does not take the screenshot.
Any suggestions here would be very helpful.
Also, is it expected that UI errors are not captured under the info.errorMessage structure in the log file? I have not seen any errors captured in this structure so far in my tests. Does the framework capture it this way, or is the application under test responsible for it?
karate.info.errorMessage has nothing to do with log levels; please read the example: hooks.info
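For reference, a minimal sketch of the hook shape that the linked hooks example describes, assuming the driver variable is in scope when the scenario fails (exact behavior may vary across Karate versions):
* configure afterScenario =
"""
function() {
  var info = karate.info;
  // info.errorMessage is populated only when the scenario has failed
  if (info.errorMessage) {
    driver.screenshot();
  }
}
"""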

Karate - How to call multiple external features from a single main feature

Feature: Principal feature
Background:
* url 'http://example.com'
Scenario: Feature calling
* def inputTable = call read('input_table.feature')
* call read('my_call.feature') inputTable.inputTestData
where the data table file is:
//input_table.feature
Feature:TABLE_TEST
Scenario:TABLE_TEST
* table inputTestData
|inputName|outputName|
|requestA|responseA|
this throws me an error:
ERROR com.intuit.karate - feature call failed: .../input_table.feature
arg: null
input_table.feature:3 - evaluation (js) failed: requestA, javax.script.ScriptException: ReferenceError: "requestA" is not defined in <eval> at line number 1
stack trace: jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
But if I instead call the my_call feature with the data table defined inside it in an Examples section, it works correctly.
Any help?
There's a subtle difference between Examples and table. Maybe you meant to do this:
* table inputTestData
|inputName|outputName|
|'requestA'|'responseA'|
Read the docs: https://github.com/intuit/karate#table
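For completeness, a minimal sketch of what the receiving my_call.feature might look like (hypothetical, since its content is not shown in the question): when a feature is called with a list of rows, Karate calls it once per row and each column becomes a variable inside the called scenario.
# my_call.feature (hypothetical sketch)
Feature: consume one row of the table
Scenario:
# inputName and outputName come from the columns of inputTestData
* print 'calling with input:', inputName, 'and expected output:', outputName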

Is there a better way to reference sub-features so that this test finishes?

When running the following scenario, the tests finish running but execution hangs immediately afterwards, and the gradle test command never finishes. The Cucumber report isn't built, so it hangs before that point.
It seems to be caused by having two call read() invocations of different scenarios that both call a third scenario. That third scenario references the parent context to inspect the current request.
When that parent request is stored in a variable, the tests hang. When that variable is cleared before leaving that third scenario, the test finishes as normal. So something about holding a reference to that context hangs the tests at the end.
Is there a reason this doesn't complete? Am I missing some important code that lets the tests finish?
I've added * def currentRequest = {} at the end of the special-request scenario and that allows the tests to complete, but that seems like a hack.
This is the top-level test scenario:
Scenario: Updates user id
* def user = call read('utils.feature#endpoint=create-user')
* set user.clientAccountId = user.accountNumber + '-test-client-account-id'
* call read('utils.feature#endpoint=update-user') user
* print 'the test is done!'
The test scenario calls two different scenarios in the same utils.feature file
utils.feature:
#ignore
Feature: /users
Background:
* url baseUrl
#endpoint=create-user
Scenario: create a standard user for a test
Given path '/create'
* def restMethod = 'post'
* call read('special-request.feature')
When method restMethod
Then status 201
#endpoint=update-user
Scenario: set a user's client account ID
Given path '/update'
* def restMethod = 'put'
* call read('special-request.feature')
When method restMethod
Then status 201
And match response == {"status":"Success", "message":"Update complete"}
Both of the util scenarios call the special-request feature with different parameters/requests.
special-request.feature:
#ignore
Feature: Builds a special
Scenario: special-request
# The next line causes the test to sit for a long time
* def currentRequest = karate.context.parentContext.getRequest()
# Without the below clear of currentRequest, the test never finishes
# De-referencing the parent context's request allows the test to finish
* def currentRequest = {}
Without currentRequest = {}, these are the last lines of output I get before the tests seem to stop.
12:21:38.816 [ForkJoinPool-1-worker-1] DEBUG com.intuit.karate - response time in milliseconds: 8.48
1 < 201
1 < Content-Type: application/json
{
"status": "Success",
"message": "Update complete"
}
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.817 [ForkJoinPool-1-worker-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
12:21:38.818 [ForkJoinPool-1-worker-1] INFO com.intuit.karate - [print] the test is done!
12:21:38.818 [pool-1-thread-1] DEBUG com.jayway.jsonpath.internal.path.CompiledPath - Evaluating path: $
<==========---> 81% EXECUTING [39s]
With currentRequest = {}, the test completes and the cucumber report generates successfully which is what I would expect to happen even without that line.
Two comments:
* karate.context.parentContext.getRequest()
Wow, these are internal APIs not intended for users; I would strongly advise passing values around as variables instead. So all bets are off if you have trouble with that.
It does sound like you have a null-pointer in the above (no surprises here).
There is a bug in 0.9.4 where failures in some edge cases, such as the things you are doing, pre-test life-cycle failures, or failures in karate-config.js, cause the parallel runner to hang. You should see something in the logs that indicates a failure; if not, do try to help us replicate this problem.
This should be fixed in the develop branch, so you could help if you can build from source and test locally. Instructions are here: https://github.com/intuit/karate/wiki/Developer-Guide
And if you still see a problem, please do this: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
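As an aside, a minimal sketch of the variable-passing approach suggested above (the requestPayload name is hypothetical): the caller builds the data and passes it as the call argument, so the called feature never needs to reach into the parent context.
# caller (sketch)
* def requestPayload = { name: 'test-user' }
* call read('special-request.feature') { currentRequest: '#(requestPayload)' }
# special-request.feature (sketch): currentRequest arrives as an ordinary variable
* print 'received request:', currentRequest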

Spark execution occasionally gets stuck at mapPartitions at Exchange.scala:44

I am running a Spark job on a two-node standalone cluster (v 1.0.1).
Spark execution often gets stuck at the task mapPartitions at Exchange.scala:44.
This happens at the final stage of my job in a call to saveAsTextFile (as I expect from Spark's lazy execution).
It is hard to diagnose the problem because I never experience it in local mode with local IO paths, and occasionally the job on the cluster does complete as expected with the correct output (same output as with local mode).
This seems possibly related to reading from s3 (of a ~170MB file) immediately prior, as I see the following logging in the console:
DEBUG NativeS3FileSystem - getFileStatus returning 'file' for key '[PATH_REMOVED].avro'
INFO FileInputFormat - Total input paths to process : 1
DEBUG FileInputFormat - Total # of splits: 3
...
INFO DAGScheduler - Submitting 3 missing tasks from Stage 32 (MapPartitionsRDD[96] at mapPartitions at Exchange.scala:44)
DEBUG DAGScheduler - New pending tasks: Set(ShuffleMapTask(32, 0), ShuffleMapTask(32, 1), ShuffleMapTask(32, 2))
The last logging I see before the task apparently hangs/gets stuck is:
INFO NativeS3FileSystem: INFO NativeS3FileSystem: Opening key '[PATH_REMOVED].avro' for reading at position '67108864'
Has anyone else experienced non-deterministic problems related to reading from S3 in Spark?