Groovy dynamic method invocation with nested function

I need to evaluate a string with nested function calls. Is there an easy way to do this with Groovy?
Edit: code made more realistic. The context is non-academic; my function needs to combine and evaluate a bunch of arbitrary strings and values from JSON files.
JSON file 1 will have strings like:
"biggerThan(isList,0)"
"smallerThan(isList,3)"
"biggerThan(isList,1)"
JSON file 2 will have values like:
[4,1]
[1,2,1]
[1,5,6,2,98]
[]
def biggerThan = { func, val ->
    { v -> func(v) && (v.size() > val) }  // returns a composed predicate closure
}
def isList = { n ->
    n instanceof List
}
def a = biggerThan(isList, 1)
a([4, 1])
// -> returns true in the Groovy console because [4, 1] is a List with size > 1
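For the evaluation step itself, here is a minimal sketch using GroovyShell (my own illustration, not from the original post): put the closures into a Binding so the shell can resolve them by name, then evaluate each string from JSON file 1 to get a predicate and apply it to each value from JSON file 2. The smallerThan closure is an assumption here, defined by analogy with biggerThan.
import groovy.json.JsonSlurper

def smallerThan = { func, val ->            // assumed analogue of biggerThan
    { v -> func(v) && (v.size() < val) }
}
def binding = new Binding(biggerThan: biggerThan, smallerThan: smallerThan, isList: isList)
def shell = new GroovyShell(binding)

// strings from JSON file 1 and values from JSON file 2
def checks = new JsonSlurper().parseText('["biggerThan(isList,0)", "smallerThan(isList,3)"]')
def values = new JsonSlurper().parseText('[[4,1], [1,2,1], [1,5,6,2,98], []]')

checks.each { expr ->
    def predicate = shell.evaluate(expr)    // resolves biggerThan/isList from the binding
    values.each { v -> println "$expr on $v -> ${predicate(v)}" }
}
Note that GroovyShell evaluates arbitrary code, so this is only reasonable when the strings in the JSON files are trusted.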

Related

How to read a key/value pair in Spark SQL?

How do I get this output using Spark SQL or Scala? I have a table with columns storing such values, and I need to split them into separate columns.
Input:
Output:
It pretty much depends on which libraries you want to use (as you mentioned, Scala or Spark).
Using Spark:
val rawJson = """
{"Name":"ABC.txt","UploaddedById":"xxxxx1123","UploadedByName":"James"}
"""
import spark.implicits._ // needed for .toDS
spark.read.json(Seq(rawJson).toDS)
Using common JSON libraries:
// Play JSON
import play.api.libs.json._
Json.parse(rawJson) match {
  case obj: JsObject =>
    val values = obj.values
    val keys = obj.keys
    // construct a DataFrame from the keys and values
  case other => // handle other types (JsArray, etc.)
}
// circe
import io.circe._, io.circe.parser._
parse(rawJson) match {
  case Right(json) => // fetch keys and values, construct a DataFrame much like above
  case Left(parseError) => ...
}
You can use almost any JSON library to parse your JSON object and then convert it to a Spark DataFrame quite easily.

How can we pass multiple arguments to a Background function in a Karate feature file?

I am passing two arguments to my custom function, but when I pass the arguments in the Background, it skips the first one and takes only the second argument.
Here is the sample code:
* def LoadToTigerGraph =
"""
function(args1, args2) {
  var CustomFunctions = Java.type('com.optum.graphplatform.util.CareGiverTest');
  var cf = new CustomFunctions();
  return cf.testSuiteTrigger(args1, args2);
}
"""
#*eval if (karate.testType == "component") karate.call(LoadToTigerGraph '/EndTestSample.json')
* def result = call LoadToTigerGraph "functional","/EndTestSample.json"
Output:
test type is ************/EndTestSample.json
path is *************undefined
When you want to pass two arguments, you need to send them as one JSON object with two key/value pairs:
* def result = call LoadToTigerGraph { var1: "functional", var2: "/EndTestSample.json" }
Then declare the function as function(args) and use args.var1 and args.var2 inside it.

Karate - Cannot cast java.util.LinkedHashMap to java.util.List

I need to compare an XML file with a JSON response. Below is just a short version of the original XML file. The JSON and the XML have different attribute names, which is why I cannot compare them directly; I need to call a function to reformat the XML so that it can be compared with the JSON. During this process I get the error Cannot cast java.util.LinkedHashMap to java.util.List after running the code below.
* xml list = $Test1/Body
* print list
* def xpath = function(x, p){ try { return karate.xmlPath(x, p) } catch (e) { return '#notpresent' } }
* def fun = function(x){ return { code: xpath(x, '/Body/code') } }
* def temp = karate.map(list, fun)
* print temp
Test1 is an XML file containing the sample data below:
<ns9:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns5:plan">
<ns4:code>XPBSMWAT</ns4:code>
</ns9:Body>
This is a simplified version of the XML data; my actual XML file is bigger.
karate.map takes two arguments: a list and a function.
From your question:
* xml list = $Test1/Body
the variable named list is actually of XML type, as you declared it, not a List.
Karate holds XML/JSON as a java.util.LinkedHashMap internally; since you passed XML into karate.map, it tried to cast it to a List and failed.
So it should be something like:
* def list = <something>
but make sure that it returns a list rather than XML. Your XPath $Test1/Body returns XML; when you print a real list you will see it surrounded with [ and ].
I am not quite sure what you are trying to do with these XML and JS functions, but this could be the root cause of the error you are getting.

Passing a list to a Groovy SQL eachRow call

I am currently rendering a list of SQL rows from a database using:
Sql sql = new Sql(dataSource)
def list = []
def index = 0
params.mrnaIds.each { mrnaId ->
    sql.eachRow("select value from patient_mrna where mrna_id=$mrnaId") { row ->
        list[index] = row.value
        index++
    }
}
render list
However, I would like to avoid assigning the values to a list before rendering them.
The variable params.mrnaIds comes from a multi-select input, so it could be either a single string or a string array containing ids. Is there a way to iterate through these ids inside the eachRow method?
I would like to be able to execute something like:
render sql.eachRow("select value from patient_mrna where mrna_id=?", params.mrnaIds) { row ->
    list[index] = row.value
    index++
}
But I'm not completely sure that there is a way to call eachRow with this functionality. If there is not, is there some other way to render the results without storing them in a list?
I think you can render each row:
sql.eachRow(someQuery, someParams) { row ->
    render row as JSON
}
There is also rows(), which returns a list instead of working through the rows one at a time (which is what eachRow() is for). It accepts the same kinds of arguments. E.g.:
render sql.rows("select value from patient_mrna where mrna_id=?", params).collect { it.value }
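Since params.mrnaIds can arrive as either a single string or a string array, here is a hedged sketch (assuming a Grails controller, where the params.list() helper always returns a list) that normalizes the ids first and issues one query with an IN clause:
def ids = params.list('mrnaIds')                 // one list, whether one id or many were selected
def placeholders = ids.collect { '?' }.join(',')
// toString() keeps Sql from treating the interpolated placeholders as a bind parameter
def query = "select value from patient_mrna where mrna_id in (${placeholders})".toString()
render sql.rows(query, ids).collect { it.value }
This avoids the per-id loop entirely and lets the database return all matching values in one round trip.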

Scrapy: single spider passing multiple item classes to the pipeline

I am new to Scrapy. In items.py I declare two item classes, ItemClass1 and ItemClass2. A spider method parseUrl gets the HTML, scrapes the data, and puts it into lists for the respective item classes.
e.g.:
C1Items = []
C1Item = ItemClass1()
#scrape data
C1Items.append(C1Item)
...
C2Items = []
C2Item = ItemClass2()
#scrape data
C2Items.append(C2Item)
...
Finally, C1Items and C2Items contain the required data.
return C1Items # would pass the ItemClass1 data to the pipeline
return C2Items # would pass the ItemClass2 data to the pipeline
Could you please advise on the best way to pass both C1Items and C2Items to the pipeline?
Either combine the items of the different classes into one list and return that list, or use the yield statement:
C1Item = ItemClass1()
#scrape data
yield C1Item
...
C2Item = ItemClass2()
#scrape data
yield C2Item
Just combine the lists into one big list and return that:
return C1Items + C2Items
or alternatively you could turn parseUrl into a generator that yields the items one by one (a Scrapy callback must yield individual items or requests, not lists):
for item in C1Items + C2Items:
    yield item