I'd like to know if it's possible, in Selenium IDE, to check the first letter of the value inside a variable. For example, I have a variable called cartId that stores IDs. Sometimes the ID starts with "A", "E" or "P", for example:
A324FR
E289FS
P23U87
So I'd like to do something like this:
If starts with A, stores "A" in an auxiliary variable.
If starts with E, stores "E" in an auxiliary variable.
If starts with P, stores "P" in an auxiliary variable.
This is needed because, depending on the first letter of the value, a different method will run, so I can use something like (command | target | value):
gotoIf | ${auxiliaryVariable} == 'A' | METHOD1
gotoIf | ${auxiliaryVariable} == 'E' | METHOD2
gotoIf | ${auxiliaryVariable} == 'P' | METHOD3
Thanks!
You could probably use storeEval to evaluate JavaScript to get your auxiliaryVariable (this assumes the cart ID is displayed in a page element with id="cartId"):
storeEval | document.getElementById("cartId").textContent[0] | auxiliaryVariable
This gets the element regardless of its text, then stores the first letter (A, E, or P) in auxiliaryVariable for use later.
Thanks for all the help. I was able to do it using:
store | javascript{storedVars['cartId'].substring(0,1);} | auxiliary
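One gotcha for the gotoIf lines that follow: ${...} is substituted as a bare token into the JavaScript expression, so read the variable through storedVars instead (a sketch, assuming the flow-control gotoIf/label commands are installed and METHOD1..METHOD3 are labels):
gotoIf | storedVars['auxiliary'] == 'A' | METHOD1
gotoIf | storedVars['auxiliary'] == 'E' | METHOD2
gotoIf | storedVars['auxiliary'] == 'P' | METHOD3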
I have to call a function func_test(spark, a, b) which accepts two string values and creates a df out of them. spark is a SparkSession variable.
These two string values are two columns of another dataframe and differ from row to row of that dataframe.
I am unable to achieve this.
Things tried so far:
1.
ctry_df = func_test(spark, df.select("CTRY").first()["CTRY"],df.select("CITY").first()["CITY"])
Gives CTRY and CITY of only the first record of the df.
2.
ctry_df = func_test(spark, df['CTRY'],df['CITY'])
Gives Column<b'CTRY'> and Column<b'CITY'> as values.
Example:
df is:
+----------+----------+-----------+
| CTRY | CITY | XYZ |
+----------+----------+-----------+
| US | LA | HELLO|
| UK | LN | WORLD|
| SN | SN | SPARK|
+----------+----------+-----------+
So, I want the first call to be func_test(spark,US,LA); the second call to be func_test(spark,UK,LN); the third call to be func_test(spark,SN,SN); and so on.
Pyspark - 3.7
Spark - 2.2
Edit 1:
Issue in detail:
func_test(spark, string1, string2) is a function that accepts two string values; inside it a series of dataframe operations is performed. For example, the first Spark SQL statement in func_test is a plain select, and the two variables string1 and string2 are used in its where clause. The df generated by that Spark SQL statement becomes a temp table for the next Spark SQL statement, and so on. Finally, the function creates a df, which func_test(spark, string1, string2) returns.
Now, in the main class, I have to call this func_test, and the two parameters string1 and string2 will be fetched from the records of a dataframe. So the first func_test call generates the query select * from dummy where CTRY='US' and CITY='LA', and the subsequent operations run and result in a df. The second call to func_test becomes select * from dummy where CTRY='UK' and CITY='LN'. The third call becomes select * from dummy where CTRY='SN' and CITY='SN', and so on.
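For reference, here is a minimal sketch of what func_test might look like based on this description (the table name dummy comes from the queries above; the real function body is not shown in the question):
def func_test(spark, string1, string2):
    # first query: a plain select with the two strings in the where clause
    df1 = spark.sql(
        "select * from dummy where CTRY = '{0}' and CITY = '{1}'".format(string1, string2)
    )
    # its result becomes the temp table for the next query, and so on
    df1.createOrReplaceTempView("step1")
    result_df = spark.sql("select * from step1")
    return result_df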
Instead of first(), use collect() and iterate over the rows in a loop:
collect_vals = df.select('CTRY', 'CITY').distinct().collect()  # one Row per distinct (CTRY, CITY) pair
for row_col in collect_vals:
    func_test(spark, row_col['CTRY'], row_col['CITY'])  # one call per pair
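If you also need to keep the dataframe each call returns, you can collect them as you go (a sketch; results is just an illustrative name):
results = {}
for row_col in collect_vals:
    # key each returned dataframe by the (CTRY, CITY) pair that produced it
    results[(row_col['CTRY'], row_col['CITY'])] = func_test(spark, row_col['CTRY'], row_col['CITY'])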
Hope this helps!
Description:
I have a 2D array:
$array = InvApplication::model()->findAll(array('order' => 'app_name'));
The array contains the element "app5". How can I exclude it?
Actual Output:
app_name|field1|field2|fieldN|..|..
appn |
appn1 |
appn2 |
app5 |
Already Tested
I have been testing with unset, in_array and strpos functions.
In addition to:
php - finding keys in an array that match a pattern
Delete element from multidimensional-array based on value
My actual piece of code:
This is what I have now, but it is not working as I want.
$deleteapp = "app5";
unset($list[$deleteapp]); // tested with unset and array_diff

foreach ($list as $k => $v) {
    if (in_array($v, array('app5'))) {
        unset($list[$k]);
    }
}
I expect this:
app_name|field1|field2|fieldN|..|..
appn |
appn1 |
appn2 |
Thank you.
It seems you want to exclude an app_name from the select result.
In this case, you could use a condition:
$array = InvApplication::model()->findAll(
    array("condition" => "app_name != 'app5'", "order" => "app_name")
);
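If the excluded name comes from a variable, binding it as a query parameter is the safer variant (standard Yii 1.x criteria keys; a sketch):
$excluded = "app5";
$array = InvApplication::model()->findAll(array(
    "condition" => "app_name != :excluded",
    "params" => array(":excluded" => $excluded),
    "order" => "app_name",
));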
Below is the query that returns the date and distance where the distance is <= 10 km:
var s = spark.sql("select date, distance from table_new where distance <= 10")
s.show()
This will give output like:
date       | distance
---------- | --------
12/05/2018 | 5
13/05/2018 | 8
14/05/2018 | 18
15/05/2018 | 15
16/05/2018 | 23
I want to use the first row of the dataframe s and store the date value in a variable v in the first iteration. In the next iteration it should pick the second row, and the corresponding date value should replace the old value of v, and so on.
I think you should look at Spark "Window Functions". You may find here what you need.
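For example, a row_number() window gives every row a sequential index that you can then process in order (a minimal sketch, assuming PySpark and ordering by date):
from pyspark.sql import Window
from pyspark.sql import functions as F

# number the rows of s in date order: rn = 1, 2, 3, ...
w = Window.orderBy("date")
s_indexed = s.withColumn("rn", F.row_number().over(w))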
The "bad" way to do this would be to collect the dataframe using df.collect() which would return a list of Rows which you can manually iterate over each using a loop.This is bad cause it brings all the data in your driver.
The better way would be to use foreach() :
df.foreach(lambda x: <<your code here>>)
foreach() takes a lambda function as argument which iterates over each row of the dataframe without bringing all the data in the driver.But you cant use a simple local variable v inside a lambda fuction when there is overwriting involved.you can use spark accumulators for such a case.
E.g., if I want to sum all the values in the 2nd column:
counter = sc.accumulator(0)  # PySpark's accumulator API (longAccumulator is the Scala/Java API)
df.foreach(lambda row: counter.add(row[1]))  # Row fields are accessed by index in PySpark
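After the foreach action finishes, the accumulated total can be read back on the driver:
print(counter.value)  # sum of the 2nd column across all rows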
Consider the following data:
Item | Overall | Individual | newColumn
A | Fail | Pass | blank
A | Fail | Fail | blank
B | Fail | Pass | issue
B | Fail | Pass | issue
C | Pass | Pass | blank
I have the logic built out for the first 3 columns already. There are two levels of fails in this data:
overall, and
individual.
If any of the individual fail, the overall fails. Sometimes the overall can fail even though all the individuals are fine. This logic is already built out.
I am trying to find a formula for newColumn. If all the individuals are a pass for a given item (example: item B) but the overall is still a fail, the cell should return the text "issue". It is OK if it returns "issue" twice; I'm not sure if you can de-dupe that part. I've tried various forms of COUNTIFS/AND/OR and creating columns that count distinct values, but I always find a scenario that breaks the logic.
Try this:
=IF(COUNTIFS($A$2:$A$6,A2,$C$2:$C$6,"Fail"),"blank",IF(B2="Fail","Issue","blank"))
COUNTIFS counts the "Fail" rows among the individuals for the current item; if there are any, the row gets "blank", and otherwise a failing overall is flagged with "Issue".
If you add a new column with the formula:
=IF(B2="Fail",IF(COUNTIFS(A:A,A2,C:C,"fail")=0,"issue",""),"")
Then this should work on these assumptions:
For each item, if one of the overalls is "Fail", they all are
The only two possible values are "Pass" and "Fail" for columns B & C
If you require the word blank instead of a blank cell then use:
=IF(B2="Fail",IF(COUNTIFS(A:A,A2,C:C,"fail")=0,"issue","blank"),"blank")
I want to read the values of a JSON (message) object which has an array in it.
The query below works for the immediate properties in d:
traces | extend d = parsejson(message) | project d.Timestamp, d.Name;
How do I read a property that is part of an array within d (message)? For example, if I want to read all the street values in the message below, how do I do that? This seems to need a loop.
message
{
  "Timestamp": "12-12-2008",
  "Name": "Alex",
  "address": [
    {"street": "", "zip": ""},
    {"street": "", "zip": ""}
  ]
}
One way to do this would be using the mvexpand operator (see documentation).
It will output a single row for each element in your array which you could iterate over.
So in your example, running:
traces | extend d = parsejson(message) | mvexpand d.address
will output a row for each address.
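From there you can project the individual fields; naming the expanded column makes that easier (a sketch, assuming the address array from the question):
traces | extend d = parsejson(message) | mvexpand addr = d.address | project street = addr.street, zip = addr.zip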