How to get row data from a web table in the Karate framework

The web table has a combination of textbox, span and checkbox cells. I need to get the first row of data in a single def and verify it against the DB in column order.
Ex: the first row of the table has columns like below.
OrderID (span), EmpName (input), IsHeEligible (checkbox), Address (span)
Using the below,
* def tableFirstData = scriptAll("table/tbody/tr/td", '_.textContent')
I am able to get only the span text data, not the input tag text data.
I tried the below,
* def data = attribute("table/tbody/tr/td[5]/input", 'value')
but that gets only a single input tag's attribute value.
How can I get all the data, i.e. span data and input data, in a single def?

Below is the solution to get all the row data from the table:
* def UiFirstRowElements = locateAll("Row xpath")
* print UiFirstRowElements
* def tableData = []
* def RowHtml = UiFirstRowElements[0].html
* print RowHtml
* eval
"""
for (var i = 0; i < UiFirstRowElements.length; i++) {
  if (UiFirstRowElements[i].html.contains("input") && UiFirstRowElements[i].html.contains("date")) {
    tableData.push(locate("//table/tbody/tr/td[" + (i + 1) + "]/div/input").property('value'))
  } else if (UiFirstRowElements[i].html.contains("input") && UiFirstRowElements[i].html.contains("checkbox")) {
    tableData.push(locate("//table/tbody/tr/td[" + (i + 1) + "]/div/input").property('checked'))
  } else {
    tableData.push(locate("//table/tbody/tr/td[" + (i + 1) + "]").property('textContent'))
  }
}
"""
* print tableData

Related

Iterate through a column in Dataset which have array of key value pairs and find out a pair with max value

I have data in a dataframe which was obtained from an Azure Event Hub.
I then convert this data to a JSON object and store the required data in a dataset as shown below.
Code for obtaining data from the Event Hub and storing it in a dataframe:
val connectionString = ConnectionStringBuilder(<ENDPOINT URL>)
.setEventHubName(<EVENTHUB NAME>).build
val currTime = Instant.now
val ehConf = EventHubsConf(connectionString)
.setConsumerGroup("<CONSUMER GRP>")
.setStartingPosition(EventPosition
.fromEnqueuedTime(currTime.minus(Duration.ofMinutes(30))))
.setEndingPosition(EventPosition.fromEnqueuedTime(currTime))
val reader = spark.read.format("eventhubs").options(ehConf.toMap).load()
var SIGNALS = reader
.select(get_json_object(($"body").cast("string"),"$.NUM").alias("NUM"),
get_json_object(($"body").cast("string"),"$.SIG1").alias("SIG1"),
get_json_object(($"body").cast("string"),"$.SIG2").alias("SIG2"),
get_json_object(($"body").cast("string"),"$.SIG3").alias("SIG3"),
get_json_object(($"body").cast("string"),"$.SIG4").alias("SIG4")
)
val SIGNALSFiltered = SIGNALS.filter(col("SIG1").isNotNull &&
col("SIG2").isNotNull && col("SIG3").isNotNull && col("SIG4").isNotNull)
The data obtained at SIGNALSFiltered is shown below.
+-----------------+--------------------+--------------------+--------------------+--------------------+
| NUM| SIG1| SIG2| SIG3| SIG4|
+-----------------+--------------------+--------------------+--------------------+--------------------+
|XXXXX01|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX02|[{"TIME":15695604780...|[{"TIME":15695604780...|[{"TIME":15695604780...|[{"TIME":15695604780...|
|XXXXX03|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX04|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX05|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX06|[{"TIME":15695605340...|[{"TIME":15695605340...|[{"TIME":15695605340...|[{"TIME":15695605340...|
|XXXXX07|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
|XXXXX08|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|[{"TIME":15695605310...|
If we check the entire data for a single row, it will be as below.
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825},{"TIME":1569560475000,"VALUE":3.7812},{"TIME":1569560483000,"VALUE":3.7812},{"TIME":1569560491000,"VALUE":34.7875}]|
[{"TIME":1569560537000,"VALUE":3.7825},{"TIME":1569560481000,"VALUE":34.7825},{"TIME":1569560489000,"VALUE":34.7825},{"TIME":1569560497000,"VALUE":34.7825}]|
[{"TIME":1569560505000,"VALUE":34.7825},{"TIME":1569560513000,"VALUE":34.7825},{"TIME":1569560521000,"VALUE":34.7825},{"TIME":1569560527000,"VALUE":34.7825}]|
[{"TIME":1569560535000,"VALUE":34.7825},{"TIME":1569560479000,"VALUE":34.7825},{"TIME":1569560487000,"VALUE":34.7825}]
I want only the highest-TIME pair from each column, not all of the TIME-VALUE pairs. The output should be as shown below.
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
| NUM| SIG1| SIG2| SIG3| SIG4|
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":4.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":5.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":6.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":7.7825}]|
|XXXXX03|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":9.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":8.7825}]|
How do I iterate through each column in each row and get the highest TIME-VALUE pair?
After getting the highest in each column (SIG1, ..., SIG4), I have to update only the TIME value in all columns with the highest among them.
Is there any way to convert the base dataset as below? Each element in a column should be converted to a new row.
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
| NUM| SIG1| SIG2| SIG3| SIG4|
+-----------------+-----------------------------+---------------------------------------+---------------------------------------+----------------------------------------+
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]| null |[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX01|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
|XXXXX02|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|[{"TIME":1569560531000,"VALUE":3.7825}]|
Any leads or help is appreciated! Thanks in advance.
You have to write a user-defined function like the one below, which will loop over your data and get the max TIME value.
Note: the UDF is just for reference; you can change it as per your requirement.
How do I iterate through each column in each row and get the highest TIME-VALUE pair?
scala> import scala.util.parsing.json.JSON
scala> import org.apache.spark.sql.expressions.UserDefinedFunction
scala> import org.apache.spark.sql.functions.{udf, col}
scala> def MaxTime: UserDefinedFunction = udf((json: String) => {
         val pars = JSON.parseFull(json)
         var output = ""
         pars.foreach { x =>
           val y = x.asInstanceOf[List[Any]]
           var i = 1
           // collect the TIME and VALUE of every pair under the same index key
           val TimeMap = scala.collection.mutable.Map[String, Long]()
           val ValueMap = scala.collection.mutable.Map[String, Double]()
           y.foreach { zz =>
             val z = zz.asInstanceOf[Map[String, Double]]
             TimeMap(i.toString) = z("TIME").toLong
             ValueMap(i.toString) = z("VALUE")
             i = i + 1
           }
           // emit only the pair whose TIME is the maximum
           output = """[{"TIME" : """ + TimeMap.maxBy(_._2)._2.toString + """ ,"VALUE": """ + ValueMap(TimeMap.maxBy(_._2)._1) + """}]"""
         }
         output
       })
scala> SIGNALSFiltered
         .withColumn("SIG1", MaxTime(col("SIG1")))
         .withColumn("SIG2", MaxTime(col("SIG2")))
         .withColumn("SIG3", MaxTime(col("SIG3")))
         .withColumn("SIG4", MaxTime(col("SIG4")))
         .show(false)
After getting the highest in each column (SIG1, ..., SIG4), I have to update only the TIME value in all columns with the highest among them.
Write the same kind of UDF as above, but pass the complete row as a parameter. Then parse each column value into a Map and get the maximum among all columns, as sketched below.
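A minimal sketch of that idea, assuming the same JSON shape as above (maxPair and maxTimeAllCols are made-up names; the column names are taken from the question). One UDF receives all four signal columns, finds each column's max pair, and stamps every column with the overall maximum TIME:
import scala.util.parsing.json.JSON
import org.apache.spark.sql.functions.{udf, col}

// Extract (maxTime, valueAtMaxTime) from one column's JSON array string.
def maxPair(json: String): (Long, Double) = {
  val pairs = JSON.parseFull(json).get.asInstanceOf[List[Map[String, Double]]]
  val best = pairs.maxBy(_("TIME"))
  (best("TIME").toLong, best("VALUE"))
}

// Take the whole row's signal columns, compute the overall max TIME,
// and re-emit each column's best VALUE stamped with that TIME.
val maxTimeAllCols = udf((s1: String, s2: String, s3: String, s4: String) => {
  val maxes = Seq(s1, s2, s3, s4).map(maxPair)
  val overallTime = maxes.map(_._1).max
  maxes.map { case (_, v) => s"""[{"TIME":$overallTime,"VALUE":$v}]""" }
})

val updated = SIGNALSFiltered
  .withColumn("ALL", maxTimeAllCols(col("SIG1"), col("SIG2"), col("SIG3"), col("SIG4")))
  .select(col("NUM"),
    col("ALL")(0).alias("SIG1"), col("ALL")(1).alias("SIG2"),
    col("ALL")(2).alias("SIG3"), col("ALL")(3).alias("SIG4"))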

TYPO3: Get the values in Fluid from inline elements

I managed to create my own inline content element (on the tt_content table), but when I try to get the values on the frontend via Fluid, I get nothing.
I debugged the {data} variable, and in the column where my data is saved there is an integer. I suppose it holds the number of content elements that were created in the foreign table (accordion). How can I get those values?
At this point the {data} variable reads the tt_content table, and the column with the integer holds the number of content elements in the accordion table.
I suppose no code is necessary. If it is, feel free to comment on the part of the code you would like to review.
Best regards
You need to add a DataProcessor to the TypoScript that creates your content element, which fetches your accordion records. Example:
tt_content {
  yourContentElementName < lib.contentElement
  yourContentElementName.templateName = YourContentElementName
  yourContentElementName.dataProcessing {
    10 = TYPO3\CMS\Frontend\DataProcessing\DatabaseQueryProcessor
    10 {
      if.isTrue.field = fieldInTtContentWithInteger
      table = your_accordion_table
      pidInList = this
      where.field = uid
      where.intval = 1
      where.dataWrap = field_pointing_to_ttcontent_record = |
      as = accordions
    }
  }
}
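The fetched records are then available in your Fluid template under the name set by "as". A minimal Fluid sketch (the header field is an assumption; use whatever columns your accordion table actually has):
<f:for each="{accordions}" as="accordion">
    {accordion.data.header}
</f:for>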

How do I loop through each DB field to see if the range is correct

I have this response in soapUI:
<pointsCriteria>
<calculatorLabel>Have you registered for inContact, signed up for marketing news from FNB/RMB Private Bank, updated your contact details and chosen to receive your statements</calculatorLabel>
<description>Be registered for inContact, allow us to communicate with you (i.e. update your marketing consent to 'Yes'), receive your statements via email and keep your contact information up to date</description>
<grades>
<points>0</points>
<value>No</value>
</grades>
<grades>
<points>1000</points>
<value>Yes</value>
</grades>
<label>Marketing consent given and Online Contact details updated in last 12 months</label>
<name>c21_mrktng_cnsnt_cntct_cmb_point</name>
</pointsCriteria>
There are many pointsCriteria elements, and I use the XQuery below to give me the DB field and the range of values that field is meant to have:
<return>
{
for $x in //pointsCriteria
return <DBRange>
<db>{data($x/name/text())}</db>
<points>{data($x//points/text())}</points>
</DBRange>
}
</return>
And I get the below response:
<return><DBRange><db>c21_mrktng_cnsnt_cntct_cmb_point</db><points>0 1000</points></DBRange>
That last bit sits in a property transfer. I need SQL to bring back all rows where that DB field is not in that points range (the field can only be 0 or 1000 in this case). My problem is I don't know how to loop through each DBRange in this manner. Please help.
I'm not sure that I really understand your question; however, I think that you want to query your DB against a specific table, with the column name defined in the <db> field of your XML and the values defined in the <points> field of the same XML.
So you can try using a Groovy TestStep. First parse your XML to get back your column name and your points. To iterate over the points, since the values are separated with a blank space, you can use split(" ") to get a list and then use each() to iterate over the points in this list. Then, using groovy.sql.Sql, you can perform the queries against your DB.
One more thing: you need to put the JDBC drivers for your DB vendor in $SOAPUI_HOME/bin/ext and then restart SoapUI so that it can load the necessary driver classes.
The following code approach can achieve your goal:
import groovy.sql.Sql
import groovy.util.XmlSlurper
// soapui groovy testStep requires that first register your
// db vendor drivers, as example I use oracle drivers...
com.eviware.soapui.support.GroovyUtils.registerJdbcDriver( "oracle.jdbc.driver.OracleDriver")
// connection properties db (example for oracle data base)
def db = [
    url      : 'jdbc:oracle:thin:@db_host:db_port/db_name',
    username : 'yourUser',
    password : '********',
    driver   : 'oracle.jdbc.driver.OracleDriver'
]
// create the db instance
def sql = Sql.newInstance("${db.url}", "${db.username}", "${db.password}","${db.driver}")
def result = '''<return>
<DBRange>
<db>c21_mrktng_cnsnt_cntct_cmb_point</db>
<points>0 1000</points>
</DBRange>
</return>'''
def resXml = new XmlSlurper().parseText(result)
// get the field
def field = resXml.DBRange.db.text()
// get the points
def points = resXml.DBRange.points.text()
// points are separated by blank space,
// so split to get an array with the points
def pointList = points.split(" ")
// for each point make your query
pointList.each {
    def sqlResult = sql.rows("select * from your_table where ${field} = ?", [it])
    log.info sqlResult
}
sql.close()
Hope this helps,
Thanks again for your help @albciff. I had to add this into a multidimensional array (I renamed field to column, and result is a large return from the XQuery above):
def resXml = new XmlSlurper().parseText(result)
// get the columns and points ranges
def Column = resXml.DBRange.db*.text()
def Points = resXml.DBRange.points*.text()
// sorting it all out into a multidimensional array (index per index)
count = 0
bigList = Column.collect {
    [it, Points[count++]]
}
// iterating through the array
bigList.each {
    // creating two smaller lists and making it readable for the sql part later
    def column = it[0]
    def points = it[1]
    // further splitting the points to test each
    pointList = points.split(" ")
    pointList.each {
        // test each point of the range per column
        def sqlResult = sql.rows("select * from my_table where ${column} <> ?", [it])
        log.info sqlResult
    }
}
sql.close()
return

Grails query rows to arrays

I'm new to Groovy and Grails. I think this problem probably has an easy answer; I just don't know it.
I have a database table:
id | category | caption | image | link
I have a query that lets me retrieve one row for each distinct item in 'category.'
I'm trying to return a map where each row is an array named by its category.
e.g., if I select the rows:
[{category='foo', caption='stuff', ...} {category='bar', caption='things', ...}]
I want to be able to:
return [foo:foo, bar:bar]
where:
foo = [caption='stuff', ...]
bar = [caption='things', ...]
Thanks for any help.
You can transform your list using collect in Groovy; however, the result depends on the source data. I could not infer from your post whether you are returning one item per category or multiple.
def ls = Category.list()
def newList = ls.collect {
    [(it.category): ['caption': it.caption, 'image': it.image, 'link': it.link]]
}
will result in something like:
[
[bar:[caption:BarCaption-2, image:BarImage-2, link:BarLink-2]],
[bar:[caption:BarCaption-1, image:BarImage-1, link:BarLink-1]],
[bar:[caption:BarCaption-0, image:BarImage-0, link:BarLink-0]],
[foo:[caption:FooCaption-2, image:FooImage-2, link:FooLink-2]],
[foo:[caption:FooCaption-1, image:FooImage-1, link:FooLink-1]],
[foo:[caption:FooCaption-0, image:FooImage-0, link:FooLink-0]]
]
If you have multiple items per category, you probably want to return the list of each:
def bars = newList.findAll { it.containsKey 'bar' }
def foos = newList.findAll { it.containsKey 'foo' }
[foos:foos,bars:bars]
If I understand your question correctly (I think you mean to put ':' instead of '=' for your maps), then I think the following will work. If you're new to Groovy, you might find the Groovy web console at http://groovyconsole.appspot.com/ useful. You can try snippets of code out easily, like the following:
def listOfMaps = [[category:'foo', caption:'stuff', something:'else'], [category:'bar', caption:'things', another:'thing']]
def mapOfMaps = [:]
listOfMaps.each { mapOfMaps += [(it.remove('category')) : it] }
assert mapOfMaps == [foo:[caption:'stuff', something:'else'], bar:[caption:'things', another:'thing']]

How can I generate a schema from a text file? (Hadoop-Pig)

Somehow I got a filename.log which looks like this, for example (tab separated):
Name:Peter Age:18
Name:Tom Age:25
Name:Jason Age:35
Because the value of the key column may differ, I cannot define a schema when I load the text like
a = load 'filename.log' as (Name:chararray, Age:int);
Neither do I want to call columns by position like
b = foreach a generate $0, $1;
What I want to do is, from only that filename.log, make it possible to call each value by key, for example:
a = load 'filename.log' using PigStorage('\t');
b = group a by Name;
c = foreach b generate group, COUNT(b);
dump c;
For that purpose, I wrote a Java UDF which separates the key:value pairs and gets the value for every field in the tuple, as below:
public class SPLITALLGETCOL2 extends EvalFunc<Tuple> {
    @Override
    public Tuple exec(Tuple input) {
        TupleFactory mTupleFactory = TupleFactory.getInstance();
        ArrayList<String> mProtoTuple = new ArrayList<String>();
        Tuple output;
        // strip the surrounding parentheses of the tuple's string form
        String target = input.toString().substring(1, input.toString().length() - 1);
        String[] tokenized = target.split(",");
        try {
            for (int i = 0; i < tokenized.length; i++) {
                // keep only the value part of each key:value field
                mProtoTuple.add(tokenized[i].split(":")[1]);
            }
            output = mTupleFactory.newTupleNoCopy(mProtoTuple);
            return output;
        } catch (Exception e) {
            output = mTupleFactory.newTupleNoCopy(mProtoTuple);
            return output;
        }
    }
}
How should I alter this method to get what I want? Or how should I write another UDF to get there?
Whatever you do, don't use a tuple to store the output. Tuples are intended to store a fixed number of fields, where you know what every field contains. Since you don't know that the keys will be in Name,Age form (or even exist, or that there won't be more) you should use a bag. Bags are unordered sets of tuples. They can have any number of tuples in them as long as the tuples have the same schema. These are all valid bags for the schema B: {T:(key:chararray, value:chararray)}:
{(Name,Foo),(Age,Bar)}
{(Age,25),(Name,Jim)}
{(Name,Bob)}
{(Age,30),(Name,Roger),(Hair Color,Brown)}
{(Hair Color,),(Name,Victor)} -- Note the Null value for Hair Color
However, it sounds like you really want a map:
myudf.py
@outputSchema('M:map[]')
def mapize(the_input):
    out = {}
    for kv in the_input.split(' '):
        k, v = kv.split(':')
        out[k] = v
    return out
myscript.pig
register '../myudf.py' using jython as myudf ;
A = LOAD 'filename.log' AS (total:chararray) ;
B = FOREACH A GENERATE myudf.mapize(total) ;
-- Sample usage, grouping by the name key.
C = GROUP B BY M#'Name' ;
Using the # operator you can pull the value for a given key out of the map, as in the grouping above. You can read more about maps in the Pig documentation.
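For example, to project map values back into plain fields (a small sketch; it assumes every record carries both the Name and Age keys):
D = FOREACH B GENERATE (chararray)M#'Name' AS name, (chararray)M#'Age' AS age;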