Issue with a def variable while performing a sort in the Karate Framework

I have noticed that when passing a def list to a Java method for a simple sort, not only the returned value but also the original variable ends up sorted.
* def original = ['a','b','c']
* def javaInstance = new (Java.type('package.subpackage.StringSort'))
* def sortedContent = javaInstance.m1(original,'desc');
* print sortedContent
* print original
Both "sortedContent" & "original" def variables are sorted.
Below is the java fn:
import java.util.Collections;
import java.util.List;

public class StringSort {
    public List<String> m1(List<String> s, String order) {
        Collections.sort(s);
        if (order.equals("asc")) {
            return s;
        } else {
            Collections.reverse(s);
            return s;
        }
    }
}
Output:
sortedContent = ['c','b','a']
original = ['c','b','a']
I don't understand why the original def variable is also sorted.

That's how Java works: the list is passed by reference, and Collections.sort() mutates it in place. Create a clone first:
* def original = ['a','b','c']
* copy temp = original
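An alternative (not part of the original answer) is to make the Java method itself non-destructive by sorting a copy; a minimal sketch:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class StringSort {
    public List<String> m1(List<String> s, String order) {
        // sort a copy so the caller's list stays untouched
        List<String> sorted = new ArrayList<>(s);
        Collections.sort(sorted);
        if (!order.equals("asc")) {
            Collections.reverse(sorted);
        }
        return sorted;
    }
}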


Is there something wrong with my code? The same logic in the function works fine in plain JS.

* def a = ["a","b"]
* def b = ["1","2"]
* def fun =
"""
function(a, b) {
  var result = {};
  a.forEach(function(x, i){ result[x] = b[i]; });
  return result;
}
"""
* def final = fun(a,b)
* print final
What I am expecting is:
{
"a": "1",
"b": "2"
}
but what I got is:
{
"a": null,
"b": null
}
There are limitations to the JS blocks in Karate; that's just how it is.
Do this instead:
karate.forEach(a, function(x,i){result[x]=b[i]});
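Putting that together, the complete fixed function would look like this (same a and b as above):

* def fun =
"""
function(a, b) {
  var result = {};
  // karate.forEach works where the native Array.forEach hits the JS-block limitations
  karate.forEach(a, function(x, i){ result[x] = b[i]; });
  return result;
}
"""
* def final = fun(a, b)
* print final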

Groovy testing groovy.sql.Sql - MockSql

I need some help mocking sql.eachRow().
I have a service I'd like to test, which is as follows:
class MyService {
    def dataSource

    def method1(id) {
        def map = [:]
        def sql = new Sql(dataSource)
        def queryString = 'select col1, col2 from tbl_1 where id = ?'
        sql.eachRow(queryString, [id]) { row ->
            map.put(row.col1, row.col2)
        }
        return map
    }
}
I'm trying to test this with MockFor:
class MyServiceTest extends Specification {
    def "test method1"() {
        setup:
        def row = [:]
        row["col1"] = "1"
        row["col2"] = "val1"
        def mockResult = [row]
        Sql.metaClass.constructor = { dataSource -> return new MockSql("") }
        def mockSql = new MockFor(Sql.class)
        mockSql.demand.newInstance { def datasource ->
            return mockSql
        }
        mockSql.demand.eachRow { def arg1, def arg2, closure ->
            // run the closure over the mock rows
            mockResult.each(closure)
        }
        def service = new MyService() // assumed; the question does not show where service comes from

        when:
        def result = service.method1(1)

        then:
        result == ["1": "val1"]
    }
}
I'm getting the error below; the eachRow argument types don't match:
groovy.lang.MissingMethodException: No signature of method: com.kenexa.assess.MockSql.eachRow() is applicable for argument types: (java.lang.String, java.util.ArrayList, xyz_closure103) values: [select col1, col2 from tbl_1 where id = ?]
Possible solutions: eachRow(java.lang.Object, groovy.lang.Closure)
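The error message itself points at the mismatch: the service calls the three-argument eachRow(String, List, Closure), but MockSql only declares eachRow(Object, Closure). Since the MockSql source isn't shown in the question, here is a hypothetical sketch of the overload it would need:

// Hypothetical stub - the real com.kenexa.assess.MockSql source is not shown.
// The service calls sql.eachRow(queryString, [id]) { row -> ... }, so the
// mock needs an eachRow overload with that three-argument arity.
class MockSql {
    List rows = []

    // the two-argument form the error message lists as a "possible solution"
    def eachRow(query, Closure c) {
        rows.each(c)
    }

    // the three-argument form (query string, parameter list, closure)
    def eachRow(query, List params, Closure c) {
        rows.each(c)
    }
}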

Programmatically building a multi-value param

I need to make a request with a multi-value param whose value is a very big array, in order to test an API's param validation. One of my implementations:
Scenario: filter_too_long_multivalue
* def localPath = endPointBase + 'filter'
* def generateLongParam =
"""
function() {
  var paramValue = [];
  for (var idx = 0; idx < 1002; idx++) {
    paramValue.push('r');
  }
  var params = {};
  params.pl = paramValue;
  return params;
}
"""
* def tooLongParam = generateLongParam()
* print tooLongParam
Given path localPath
And params tooLongParam
When method get
Then match response == authorizationSchema.empty
Then the request is:
GET http://XXXXXXXXXXXXX?pl=%5Bobject+Array%5D
I have tried different ways, but always with the same result...
How can I obtain a well-formed request?
Thank you.
Yes, you have hit one of those edge cases. Can you let me know if this works? I'll also see if this can be improved.
Sometimes what a JS function returns is not exactly the format that Karate likes. The workaround is to use json instead of def to type-convert - refer to the docs: https://github.com/intuit/karate#type-conversion
* def fun =
"""
function() {
  var temp = [];
  for (var i = 0; i < 5; i++) {
    temp.push('r');
  }
  return temp;
}
"""
* json array = fun()
Given url demoBaseUrl
And path 'echo'
And params { pl: '#(array)' }
When method get
Then status 200
Which resulted in:
1 > GET http://127.0.0.1:60146/echo?pl=r&pl=r&pl=r&pl=r&pl=r
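Applied back to the original scenario, presumably the only change needed is to convert with json instead of def (untested sketch; per the type-conversion docs linked above, json forces the JS object into a proper JSON map, so the nested array is no longer serialized as [object Array]):

* json tooLongParam = generateLongParam()
Given path localPath
And params tooLongParam
When method get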

Find the indexes of all values of an array in another array and collect the values at those indexes from a third array

I would like to find the indexes of matches from the descendentList in the parentIdList, then add the values which exist at those indexes in the idList to the descendentList, and then once again check the parentIdList for the indexes of all the matching values.
I am essentially trying to create a looping structure that repeats this until no more matches are added.
This seems to work, but only if you can allow descendentList to be a Set. If not, then I am not sure what the terminating condition would be; it would just keep adding the values at the same indexes over and over. I think a Set is appropriate considering what you said in your comment above: "I would like to loop through this until no more matches are added to descendentList".
Set descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]

/**
 * First: find the indexes of matches from the descendentList in the
 * parentIdList
 */
def findIndexMatches(Set descendentList, List parentIdList, List idList) {
    List indexes = []
    def size = descendentList.size()
    descendentList.each { descendent ->
        indexes.addAll(parentIdList.findIndexValues { it == descendent })
    }
    addExistingValuesFromIdListToDescendentList(descendentList, idList, indexes)
    // then once again check the parentIdList for the indexes of all the matching values
    if (size != descendentList.size()) { // new values were added, so keep looking
        findIndexMatches(descendentList, parentIdList, idList)
    }
}

/**
 * ...then add the value which exists at that index in the
 * idList to the descendentList
 */
def addExistingValuesFromIdListToDescendentList(Set descendentList, List idList, List indexes) {
    indexes.each {
        descendentList << idList[it as int]
    }
}

findIndexMatches(descendentList, parentIdList, idList)
println descendentList // outputs [2, 3, 4, 5]
Something like the following seems to work - I've not written any tests though, so it may fail with different use cases - just a simple, idiomatic recursive solution.
def descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]

def solve(List descendentList, List parentIdList, List idList) {
    // collect the ids found at the indexes where each descendent matches a parent id
    List matchedIds = descendentList.inject([]) { result, desc ->
        result + idList[parentIdList.findIndexValues { it == desc }]
    }
    if (matchedIds) {
        descendentList + solve(matchedIds, parentIdList, idList)
    } else {
        descendentList
    }
}

println solve(descendentList, parentIdList, idList) // prints [2, 3, 5, 4]
You can also do this without recursion, using an iterator:
class DescendantIterator<T> implements Iterator<T> {
    private final List<T> parents
    private List<T> output
    private final List<T> lookup

    DescendantIterator(List<T> output, List<T> parents, List<T> lookup) {
        this.output = output
        this.parents = parents
        this.lookup = lookup
    }

    boolean hasNext() { output }

    Integer next() {
        def ret = output.head()
        // queue up the children of the current id before moving on
        parents.findIndexValues { it == ret }.with { v ->
            if (v) { output += lookup[v] }
        }
        output = output.drop(1)
        ret
    }

    void remove() {}
}
def descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
def values = new DescendantIterator<Integer>(descendentList, parentIdList, idList).collect()
After this, values == [2, 3, 5, 4]
First build a map from each parent id to all of its child ids. Then find the results for the input, and keep iterating over the newly found results until there are no more.
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]

// no 'def' on purpose: tree goes into the script binding so it is visible
// inside the methods below; here tree == [0:[1], 1:[2], 2:[3,5], 3:[4]]
tree = [parentIdList, idList].transpose().groupBy { it[0] }.collectEntries { [it.key, it.value*.get(1)] }

def childs(l) {
    l.collect { tree.get(it) }.findAll().flatten().toSet()
}

def descendants(descendentList) {
    def newresults = childs(descendentList)
    def results = [].toSet() + descendentList
    while (newresults.size()) {
        results.addAll(newresults)
        newresults = childs(newresults) - results
    }
    return results
}

assert descendants([2]) == [2,3,4,5].toSet()
assert descendants([2,1]) == [1,2,3,4,5].toSet()
assert descendants([3]) == [3,4].toSet()

How to use ScalaQuery to insert a BLOB field?

I am using ScalaQuery with Scala.
If I have an Array[Byte] object, how do I insert it into the table?
object TestTable extends BasicTable[Test]("test") {
  def id = column[Long]("mid", O.NotNull)
  def extInfo = column[Blob]("mbody", O.Nullable)
  def * = id ~ extInfo <> (Test, Test.unapply _)
}
case class Test(id: Long, extInfo: Blob)
Can I define the column as def extInfo = column[Array[Byte]]("mbody", O.Nullable)? How do I perform UPDATE, INSERT and SELECT operations on the BLOB-typed field?
BTW: there is no ScalaQuery tag.
Since the BLOB field is nullable, I suggest changing its Scala type to Option[Blob], for the following definition:
object TestTable extends Table[Test]("test") {
  def id = column[Long]("mid")
  def extInfo = column[Option[Blob]]("mbody")
  def * = id ~ extInfo <> (Test, Test.unapply _)
}
case class Test(id: Long, extInfo: Option[Blob])
You can use a raw, nullable Blob value if you prefer, but then you need to use orElse(null) on the column to actually get a null value out of it (instead of throwing an Exception):
def * = id ~ extInfo.orElse(null) <> (Test, Test.unapply _)
Now for the actual BLOB handling. Reading is straightforward: you just get a Blob object in the result, implemented by the JDBC driver, e.g.:
Query(TestTable) foreach { t =>
  println("mid=" + t.id + ", mbody = " +
    Option(t.extInfo).map { b => b.getBytes(1, b.length.toInt).mkString })
}
If you want to insert or update data, you need to create your own BLOBs. A suitable implementation for a stand-alone Blob object is provided by JDBC's RowSet feature:
import javax.sql.rowset.serial.SerialBlob
TestTable insert Test(1, null)
TestTable insert Test(2, new SerialBlob(Array[Byte](1,2,3)))
Edit: And here's a TypeMapper[Array[Byte]] for Postgres (whose BLOBs are not yet supported by ScalaQuery):
implicit object PostgresByteArrayTypeMapper extends
    BaseTypeMapper[Array[Byte]] with TypeMapperDelegate[Array[Byte]] {
  def apply(p: BasicProfile) = this
  val zero = new Array[Byte](0)
  val sqlType = java.sql.Types.BLOB
  override val sqlTypeName = "BYTEA"
  def setValue(v: Array[Byte], p: PositionedParameters) {
    p.pos += 1
    p.ps.setBytes(p.pos, v)
  }
  def setOption(v: Option[Array[Byte]], p: PositionedParameters) {
    p.pos += 1
    if (v eq None) p.ps.setBytes(p.pos, null) else p.ps.setBytes(p.pos, v.get)
  }
  def nextValue(r: PositionedResult) = {
    r.pos += 1
    r.rs.getBytes(r.pos)
  }
  def updateValue(v: Array[Byte], r: PositionedResult) {
    r.pos += 1
    r.rs.updateBytes(r.pos, v)
  }
  override def valueToSQLLiteral(value: Array[Byte]) =
    throw new SQueryException("Cannot convert BYTEA to literal")
}
I'll just post updated code for Scala and ScalaQuery; maybe it will save somebody some time:
object PostgresByteArrayTypeMapper extends
    BaseTypeMapper[Array[Byte]] with TypeMapperDelegate[Array[Byte]] {
  def apply(p: org.scalaquery.ql.basic.BasicProfile) = this
  val zero = new Array[Byte](0)
  val sqlType = java.sql.Types.BLOB
  override val sqlTypeName = "BYTEA"
  def setValue(v: Array[Byte], p: PositionedParameters) {
    p.pos += 1
    p.ps.setBytes(p.pos, v)
  }
  def setOption(v: Option[Array[Byte]], p: PositionedParameters) {
    p.pos += 1
    if (v eq None) p.ps.setBytes(p.pos, null) else p.ps.setBytes(p.pos, v.get)
  }
  def nextValue(r: PositionedResult) = {
    r.nextBytes()
  }
  def updateValue(v: Array[Byte], r: PositionedResult) {
    r.updateBytes(v)
  }
  override def valueToSQLLiteral(value: Array[Byte]) =
    throw new org.scalaquery.SQueryException("Cannot convert BYTEA to literal")
}
and then usage, for example:
...
// defining a column
def content = column[Array[Byte]]("page_Content")(PostgresByteArrayTypeMapper)
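For completeness, a round-trip with such a column might look like the following sketch (the Pages table and its values are made up for illustration, following the query/insert style from the answer above):

// hypothetical table using the byte-array mapper defined above
object Pages extends Table[(Long, Array[Byte])]("pages") {
  def id = column[Long]("id")
  def content = column[Array[Byte]]("page_content")(PostgresByteArrayTypeMapper)
  def * = id ~ content
}

// INSERT raw bytes, then SELECT them back
Pages insert (1L, Array[Byte](1, 2, 3))
Query(Pages) foreach { case (id, bytes) =>
  println("id=" + id + ", content=" + bytes.mkString(","))
}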