Gatling feeder/parameter issue - Exception in thread "main" java.lang.UnsupportedOperationException

I just started a new project to write API tests for our service using Gatling. At this point, I want to run a search query; below is the code:
def chnSendToRender(testData: FeederBuilderBase[String]): ChainBuilder = {
  feed(testData)
  exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
      .check(status.is(200).saveAs("searchStatus"))
      .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
  )
  .doIf(session => session("searchStatus").as[Int] == 200) {
    exec { session =>
      printConsoleLog("Rendered Asset ID List: " + session("renderedAssetList").as[String], "INFO")
      session
    }
  }
}
I have already declared the feeder in the simulation Scala file:
class GVRERenderEditor_new extends Simulation {
  private val edlToRender = csv("data/render/edl_asset_ids.csv").queue
  private val chnPostRender = components.notifications.notice.JobsPolling_new.chnSendToRender(edlToRender)
  private val scnSendEDLForRender = scenario("Search Post Render")
    .exitBlockOnFail(exec(preSimAuth))
    .exec(chnPostRender)

  setUp(
    scnSendEDLForRender.inject(atOnceUsers(1)).protocols(httpProtocol)
  )
    .maxDuration(sessionDuration.seconds)
    .assertions(global.successfulRequests.percent.is(100))
}
But the Gatling test failed to run, showing this error: Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
If I hardcode #{edlAssetId} (put the real edlAssetId in that query), I get a result, so I think I am passing the parameter incorrectly. I've tried to print the output to the console log, but no luck. What's wrong with this code? I would appreciate your help. Thanks!

feed(testData)
exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
    .check(status.is(200).saveAs("searchStatus"))
    .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
)
You're missing a . (dot) before the exec to attach it to the feed.
As a result, your method returns only the last instruction, i.e. the exec.
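Applied to the chain from the question, the fix would look roughly like this (the same code as above, with the exec and doIf attached to the feed):
def chnSendToRender(testData: FeederBuilderBase[String]): ChainBuilder = {
  // Chaining with the leading dot makes the whole feed -> exec -> doIf sequence the return value;
  // without it, only the final exec(...) expression is returned and the feeder is never attached.
  feed(testData)
    .exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
      .check(status.is(200).saveAs("searchStatus"))
      .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
    )
    .doIf(session => session("searchStatus").as[Int] == 200) {
      exec { session =>
        printConsoleLog("Rendered Asset ID List: " + session("renderedAssetList").as[String], "INFO")
        session
      }
    }
}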

NestedClass test failing because Spring-Data-Mongo-Reactive is closing the MongoDB connection (state should be: server session pool is open)

1) Problem Description:
I am using Spring-Data-Mongo-Reactive, Testcontainers and JUnit 5.
I have a test class with a simple test and a nested test class (which has a simple test as well).
When JUnit runs the nested class, the MongoDB connection is closed and the nested class's test fails.
My goal is:
Keep the reactive MongoDB connection open, in order to test the nested test class as well.
Below is the code for the above situation:
1.1) Code:
Current working status: not working.
Current behaviour:
The reactive MongoDB connection is closed while the nested test class is being tested, even though the first test runs normally.
@Import(ServiceCrudRepoCfg.class)
@TestcontainerAnn
@TestsMongoConfigAnn
@TestsGlobalAnn
public class Lab {

  final private String enabledTest = "true";

  @Autowired
  IService serviceCrudRepo;

  @Test
  @DisplayName("Save")
  @EnabledIf(expression = enabledTest, loadContext = true)
  public void save() {
    Person localPerson = personWithIdAndName().create();

    StepVerifier
        .create(serviceCrudRepo.save(localPerson))
        .expectSubscription()
        .expectNext(localPerson)
        .verifyComplete();
  }

  @Nested
  @DisplayName("Nested Class")
  @Execution(SAME_THREAD)
  class NestedClass {

    @Test
    @DisplayName("findAll")
    @EnabledIf(expression = enabledTest, loadContext = true)
    public void findAll() {
      StepVerifier
          .create(serviceCrudRepo.findAll()
              .log())
          .expectSubscription()
          .expectNextCount(1L)
          .verifyComplete();
    }
  }
}
2) Current Problematic Log / Error:
java.lang.AssertionError: expectation "expectNextCount(1)" failed (expected: count = 1; actual: counted = 0; signal: onError(org.springframework.data.mongodb.ClientSessionException: state should be: server session pool is open; nested exception is java.lang.IllegalStateException: state should be: server session pool is open))
Caused by: java.lang.IllegalStateException: state should be: server session pool is open
3) Question:
How can I keep the reactive MongoDB connection open, so that the nested test class can be tested as well?
Thanks a lot for any help

In Karate - calling a feature file from another feature file along with variable values

My apologies, this may seem like a repetitive question, but it is really troubling me.
I am trying to call one feature file from another feature file and pass variable values along with the call, and it is not working at all.
Below is the structure I am using.
My request JSON contains a variable name. Filename: InputRequest.json
{
  "transaction" : "123",
  "transactionDateTime" : "#(sTransDateTime)"
}
My feature file 1: ABC.feature
Background:
  * def envValue = env
  * def config = { username: '#(dbUserName)', password: '#(dbPassword)', url: '#(dbJDBCUrl)', driverClassName: "oracle.jdbc.driver.OracleDriver"};
  * def dbUtils = Java.type('Common.DbUtils')
  * def request1 = read(karate.properties['user.dir'] + 'InputRequest.json')
  * def endpoint = '/v1/ABC'
  * def appDb = new dbUtils(config);

Scenario: ABC call
  * configure cookies = null
  Given url endpoint
  And request request1
  When method Post
  Then status 200
The feature file from which I am calling ABC.feature:
My feature file 2: XYZ.feature
@tag1
Background:
  * def envValue = env

Scenario: XYZ call
  * def sTransDateTime = function() { var SimpleDateFormat = Java.type('java.text.SimpleDateFormat'); var sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'+00:00'"); return sdf.format(new java.util.Date()); }
  * def result = call read(karate.properties['user.dir'] + 'ABC.feature') { sTransDateTime: sTransDateTime }
The problem is:
While executing this, runnerTest has tag1 configured to execute.
Currently, it ignores the entire ABC.feature and also does not generate a cucumber report.
If I put the same tag on ABC.feature (which is not what I want, since it is just a reusable component for me), then it is executed, but the sTransDateTime value is not passed from XYZ.feature to ABC.feature. Eventually, InputRequest.json should contain that value when communicating with the server as part of the request.
I am using Karate version 0.9.4. Any help please?
Change to this:
{ sTransDateTime: '#(sTransDateTime)' }
And read this explanation: https://github.com/intuit/karate#call-vs-read
I'm sorry, the other part doesn't make sense and shouldn't happen; please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
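Applied to XYZ.feature, the corrected call would look something like the sketch below (the same scenario as above, with only the call argument changed):
Scenario: XYZ call
  * def sTransDateTime = function() { var SimpleDateFormat = Java.type('java.text.SimpleDateFormat'); var sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'+00:00'"); return sdf.format(new java.util.Date()); }
  # '#(sTransDateTime)' is an embedded expression, so Karate evaluates the variable and passes its value to ABC.feature instead of treating the argument as plain JSON
  * def result = call read(karate.properties['user.dir'] + 'ABC.feature') { sTransDateTime: '#(sTransDateTime)' }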

PLC4X: Exception during scraping of job

I'm currently developing a project that reads data from 19 Siemens S7-1500 PLCs and 1 Modicon. I have used the scraper tool following this tutorial:
PLC4x scraper tutorial
But when the scraper has been running for a short amount of time, I get the following exception:
I have changed the scheduled time between 1 and 100, and I always get the same exception when the scraper reaches the same number of received messages.
I have tested whether using PlcDriverManager instead of PooledPlcDriverManager could be a solution, but the same problem persists.
In my pom.xml I use the following dependency:
<dependency>
  <groupId>org.apache.plc4x</groupId>
  <artifactId>plc4j-scraper</artifactId>
  <version>0.7.0</version>
</dependency>
I have tried changing the version to an older one like 0.6.0 or 0.5.0, but the problem still persists.
If I use the Modicon (Modbus TCP), I also get this exception after a short amount of time.
Does anyone know why this error is happening? Thanks in advance.
Edit: With scraper version 0.8.0-SNAPSHOT I still have this problem.
Edit 2: This is my code. I think the problem may be that my scraper is opening a lot of connections and fails when it reaches 65526 messages. But since all the processing happens inside the lambda function and I'm using a PooledPlcDriverManager, I think the scraper is using only one connection, so I don't know where the mistake is.
try {
  // Create a new PooledPlcDriverManager
  PlcDriverManager S7_plcDriverManager = new PooledPlcDriverManager();

  // Trigger Collector
  TriggerCollector S7_triggerCollector = new TriggerCollectorImpl(S7_plcDriverManager);

  // Messages counter
  AtomicInteger messagesCounter = new AtomicInteger();

  // Configure the scraper, by binding a Scraper Configuration, a ResultHandler and a TriggerCollector together
  TriggeredScraperImpl S7_scraper = new TriggeredScraperImpl(S7_scraperConfig, (jobName, sourceName, results) -> {
    LinkedList<Object> S7_results = new LinkedList<>();
    messagesCounter.getAndIncrement();

    S7_results.add(jobName);
    S7_results.add(sourceName);
    S7_results.add(results);

    logger.info("Array: " + String.valueOf(S7_results));
    logger.info("MESSAGE number: " + messagesCounter);

    // Producer topics routing
    String topic = "s7" + S7_results.get(1).toString().substring(S7_results.get(1).toString().indexOf("S7_SourcePLC") + 9, S7_results.get(1).toString().length());
    String key = parseKey_S7("s7");
    String value = parseValue_S7(S7_results.getLast().toString(), S7_results.get(1).toString());
    logger.info("------- PARSED VALUE -------------------------------- " + value);

    // Create my own Kafka Producer
    ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);

    // Send Data to Kafka - asynchronous
    producer.send(record, new Callback() {
      public void onCompletion(RecordMetadata recordMetadata, Exception e) {
        // executes every time a record is successfully sent or an exception is thrown
        if (e == null) {
          // the record was successfully sent
          logger.info("Received new metadata. \n" +
              "Topic:" + recordMetadata.topic() + "\n" +
              "Partition: " + recordMetadata.partition() + "\n" +
              "Offset: " + recordMetadata.offset() + "\n" +
              "Timestamp: " + recordMetadata.timestamp());
        } else {
          logger.error("Error while producing", e);
        }
      }
    });
  }, S7_triggerCollector);

  S7_scraper.start();
  S7_triggerCollector.start();
} catch (ScraperException e) {
  logger.error("Error starting the scraper (S7_scrapper)", e);
}
So in the end it was indeed the PLC that was simply hanging up the connection randomly. However, the NiFi integration should have handled this situation more gracefully. I implemented a fix for this particular error ... could you please give version 0.8.0-SNAPSHOT a try (or use 0.8.0 if we happen to have released it already)?
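If you want to try the snapshot build, a minimal pom.xml sketch would look like this (assuming the artifact is published to the standard Apache snapshots repository; adjust if your build already has a snapshot repository configured):
<!-- Assumption: 0.8.0-SNAPSHOT is available from the Apache snapshots repository -->
<repositories>
  <repository>
    <id>apache-snapshots</id>
    <url>https://repository.apache.org/content/repositories/snapshots/</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>

<dependency>
  <groupId>org.apache.plc4x</groupId>
  <artifactId>plc4j-scraper</artifactId>
  <version>0.8.0-SNAPSHOT</version>
</dependency>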

How do I programmatically track error messages?

I have a Lookup activity with a Failure output which executes a Stored Procedure activity. The Stored Procedure activity logs the failure. How do I programmatically get the name of the erroring Lookup activity and the error message as input parameters to the Stored Procedure activity? Thanks.
You could follow the example SDK code here to get the error messages from the activity that runs in the pipeline.
1. Run the Lookup activity pipeline.
CreateRunResponse runResponse = client.Pipelines.CreateRunWithHttpMessagesAsync(resourceGroup, dataFactoryName, pipelineName).Result.Body;
Console.WriteLine("Pipeline run ID: " + runResponse.RunId);
2. Get the error message if the run crashed into an issue.
List<ActivityRun> activityRuns = client.ActivityRuns.ListByPipelineRun(
    resourceGroup, dataFactoryName, runResponse.RunId, DateTime.UtcNow.AddMinutes(-10), DateTime.UtcNow.AddMinutes(10)).ToList();

// pipelineRun is the run retrieved for the same run ID, e.g.:
// PipelineRun pipelineRun = client.PipelineRuns.Get(resourceGroup, dataFactoryName, runResponse.RunId);
if (pipelineRun.Status == "Succeeded")
    Console.WriteLine(activityRuns.First().Output);
else
    Console.WriteLine(activityRuns.First().Error);
3. Then run another stored procedure activity pipeline with the above messages as parameters.
Dictionary<string, object> parameters = new Dictionary<string, object>
{
    { "errorMessage", activityRuns.First().Error }
};

CreateRunResponse runResponse = client.Pipelines.CreateRunWithHttpMessagesAsync(resourceGroup, dataFactoryName, pipelineName, parameters: parameters).Result.Body;
Console.WriteLine("Pipeline run ID: " + runResponse.RunId);

Why would I get a lot of None.get exceptions while draining a streaming pipeline?

I am running into issues with a streaming Scio pipeline running on Dataflow that deduplicates messages and performs some counting by key. When I try to drain the pipeline, I get a large number of None.get exceptions apparently thrown in my deduplicate step (I am basing this assumption on the label I am observing in the Stackdriver log).
We are currently running on Scio version 0.7.0-beta1 and Beam version 2.8.0. I have tried protecting as much as I can in my code from any potential Nones, but this appears to occur further down, inside the deduplicate step.
The error I am getting is the following:
"java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at com.spotify.scio.util.Functions$$anon$2.mergeAccumulators(Functions.scala:227)
at com.spotify.scio.util.Functions$$anon$2.mergeAccumulators(Functions.scala:220)
at org.apache.beam.runners.dataflow.worker.WindmillStateInternals$WindmillCombiningState.getAccum(WindmillStateInternals.java:958)
at org.apache.beam.runners.dataflow.worker.WindmillStateInternals$WindmillCombiningState.read(WindmillStateInternals.java:920)
at org.apache.beam.runners.core.SystemReduceFn.onTrigger(SystemReduceFn.java:125)
at org.apache.beam.runners.core.ReduceFnRunner.onTrigger(ReduceFnRunner.java:1060)
at org.apache.beam.runners.core.ReduceFnRunner.onTimers(ReduceFnRunner.java:768)
at org.apache.beam.runners.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:95)
at org.apache.beam.runners.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:42)
at org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
at org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
at org.apache.beam.runners.core.LateDataDroppingDoFnRunner.processElement(LateDataDroppingDoFnRunner.java:80)
at org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:135)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:45)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:50)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:202)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:160)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1226)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:141)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:965)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
As you can see, this never really enters my code, and I am unsure how I should go about finding this issue. Perhaps it has something to do with the LateDataDroppingDoFnRunner? Our allowed lateness is relatively large (3 days, with windows being an hour long).
val input = PubsubIO.readStrings()
  .fromSubscription(subscription)
  .withTimestampAttribute("ts")
  .withName("Window messages")
  .withFixedWindows(
    duration = windowSize,
    options = WindowOptions(
      trigger = AfterWatermark.pastEndOfWindow()
        .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
          .plusDelayOf(earlyFiring))
        .withLateFirings(AfterProcessingTime.pastFirstElementInPane()
          .plusDelayOf(lateFiring)),
      accumulationMode = ACCUMULATING_FIRED_PANES,
      allowedLateness = allowedLateness
    )
  )
  .withName(s"Deduplicate messages")
  .distinctBy[String](f = getId)
...
// I am being overly cautious here because I have been having
// so much trouble debugging this
def getId(message: Map[String, Any]): String = {
  message match {
    case null => {
      logger.warn("message is null when getting id")
      ""
    }
    case message => {
      message.get("id") match {
        case None => {
          logger.warn("id is null in message")
          ""
        }
        case id => id.get.toString
      }
    }
  }
}
I am confused about how I could possibly be getting a None.get here and why it would only occur when I am draining.
Can I get some advice on how I should go about debugging this error, or where I should be looking?