I have data that looks like the following:
assetnum | assetdesc
123 | sampledesc
432 | sample desc2
I want to insert another row with four fields so it looks like the following:
SYSNAME | OBJSTRUC | AddChange | En
assetnum | assetdesc
123 | sampledesc
432 | sample desc2
However I am unsure how to do this. Does anyone know how?
I have tried generating rows, but I am unsure how to merge them so that the output looks like this. I have also thought of adding headers, but I am unsure how to specify the header text (without it being generated automatically). I am quite new to Pentaho.
Thanks.
Here is a hack. Assume step A writes the actual data into a file, fileA. Before writing anything into fileA, add a Text file output step and, in its Content tab, enable "Add Ending line of file" and enter the custom row you need to insert. Since the file is empty at that point, the ending line becomes the first line. Once that is done, write the rest of the data from your original source with the Append flag set. To enforce the ordering, use a Block until steps finish step to hold back the actual write in step A.
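If a small script outside PDI is acceptable, the same effect (prepending a fixed header row to the finished file) takes a few lines; a minimal Python sketch, where fileA.txt and the header text are illustrative assumptions:

# Prepend the custom header row to the file step A has written.
# "fileA.txt" and the header text are illustrative assumptions.
header = "SYSNAME | OBJSTRUC | AddChange | En\n"
with open("fileA.txt", "r", encoding="utf-8") as f:
    body = f.read()
with open("fileA.txt", "w", encoding="utf-8") as f:
    f.write(header + body)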
I get the request body data from an Excel file.
I have already converted the Excel file to CSV format.
I have been able to find a partial solution, but it is not working 100%: the jsonBody format is not fetching the data correctly, and forward slashes show up when the CSV data is imported by the collection runner.
Request Body
{{jsonBody}}
I set a global variable jsonBody.
When I run the collection and select the CSV data file, the request body shows the values with forward slashes.
The correct version of the CSV data has no forward slashes, so I need to remove them from the imported data.
I had a similar issue with Postman and realized my problem was more of a syntax issue.
Let's say your CSV file has the following columns:
userId | mid | platform | type | ...etc
row1 94J4J | 209444894 | NORTH | PT | ...
row2 324JE | 934421903 | SOUTH | MB | ...
row3 966RT | 158739394 | EAST | PT | ...
This is how you want your JSON request body to look:
{
"userId" : "{{userId}}",
"mids":[{
"mid":"{{mid}}",
"platform":"{{platform}}"
}],
"type":["{{type}}"],
.. etc
}
Make sure your column names match the variables {{variableName}}.
The data coming from the CSV is already in a stringified format, so you don't need to do anything in the pre-request script.
Example: let the CSV be

| jsonBody        |
| {"name":"user"} |

Now in the Postman request body just use:

{{jsonBody}}

since {{column_name}} is treated as a data variable; so in your case, {{jsonBody}}.
Make sure you save the data file as a .csv file. If you want to add the JSON body as the value of another key, just reference {{jsonBody}} as that key's value.
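If you are generating the CSV yourself (e.g. from the Excel source), serializing each body with json.dumps before writing avoids hand-rolled escaping; a minimal Python sketch, where data.csv and the column name are assumptions:

import csv
import json

# write one JSON document per row into a "jsonBody" column; the csv
# module applies the standard quote escaping the collection runner expects
with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["jsonBody"])
    writer.writeheader()
    writer.writerow({"jsonBody": json.dumps({"name": "user"})})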
My homework is giving me a hard time with PySpark. I have this view of my "df2" after a groupBy:
df2.groupBy('years').count().show()
+-----+-----+
|years|count|
+-----+-----+
| 2003|11904|
| 2006| 3476|
| 1997| 3979|
| 2004|13362|
| 1996| 3180|
| 1998| 4969|
| 1995| 1995|
| 2001|11532|
| 2005|11389|
| 2000| 7462|
| 1999| 6593|
| 2002|11799|
+-----+-----+
Every attempt to save this to a file (and then load it with pandas) gives back the original source text file I read with PySpark, with its original columns and attributes; the only difference is that it is now a .csv, but that's not the point.
What can I do to overcome this?
For what it's worth, I do not use any SparkContext function at the beginning of the code, just a plain "read" and "groupBy".
df2.groupBy('years').count().write.csv("sample.csv")
or
df3=df2.groupBy('years').count()
df3.write.csv("sample.csv")
both of them will create a folder named sample.csv (containing part files) in your working directory
You can assign the results to a new dataframe results, and then write the results to a CSV file. Note that there are two ways to output the CSV. If you write with Spark, you need .coalesce(1) to make sure only one part file is output. The other way is to convert with .toPandas() and use the to_csv() function of the pandas DataFrame.
results = df2.groupBy('years').count()
# writes a csv file "part-xxx.csv" inside a folder "results"
results.coalesce(1).write.csv("results", header=True)
# or if you want a csv file, not a csv file inside a folder (default behaviour of spark)
results.toPandas().to_csv("results.csv")
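If you take the Spark route, the CSV ends up as a part file inside the results folder; here is a minimal sketch of loading it back with pandas (the glob pattern is the only assumption):

import glob
import pandas as pd

# Spark writes "part-xxx.csv" files inside the "results" folder;
# pick up the single part file produced by coalesce(1)
part_file = glob.glob("results/part-*.csv")[0]
df = pd.read_csv(part_file)
print(df)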
I am trying to create a random number generator:
Command | Target | Value
store | tom | tester
store | dominic | envr
execute script | Math.floor(Math.random()*11111); | number
type | id=XXX | ${tester}.${dominic}.${number}
Expected result:
tom.dominic.0 <-- random number
Instead I get this:
tom.dominic.${number}
I looked through all the resources, and it seems the recent Selenium update/version has changed the approach; I cannot find a solution.
I realize this question is 2 years old, but it's a fairly common one, so I'll answer it and see if there are other answers that address it.
If you want to assign the result of a script run by the "execute script" in Selenium IDE to a Selenium variable, you have to return the value from JavaScript. So instead of
execute script | Math.floor(Math.random()*11111); | number
you need
execute script | return Math.floor(Math.random()*11111); | number
Also, in your final assignment that puts the 3 pieces together, you needed ${envr} instead of ${dominic}.
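Putting both fixes together, the last two rows of your table become:

execute script | return Math.floor(Math.random()*11111); | number
type | id=XXX | ${tester}.${envr}.${number}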
In my sample feature file, rather than taking data from the Examples table, I want to pass it in from a CSV. How can I achieve that? Can anyone help me out?
Feature file:
Feature: Rocky Search Status
Scenario Outline: Rocky Search Status with Filters
Given Open firefox and start application for Rocky Search Status
When User enters "<price_right>" and "<Carat_left>" and "<Color_right_param>" and "<Cut_right_param>" and "<Clarity_right_param>"
Then Message displayed Rocky Search Status Successful
Then Application should be closed after Rocky Search Status
Examples:
| price_right | Carat_left | Color_right_param | Cut_right_param | Clarity_right_param |
| 10000 | 1.5 | 80 | 180 | 84 |
I want the data values to be defined in CSV outside the Project.
Not directly. However, you can put a record ID (or test case number) of sorts in the Examples table. You can then retrieve the matching record from the CSV in the step code based on that ID.
Scenario Outline: Rocky Search Status with Filters
Given Open firefox and start application for Rocky Search Status
When User enters data specified in test case <tcn>
Then Message displayed Rocky Search Status Successful
Then Application should be closed after Rocky Search Status
Examples:
|tcn|
|1 |
|2 |
The "When" step will use the tcn to retrieve the corresponding record from the CSV.
You can't with Gherkin. What you can do is to give your CSV file an appropriate name, refer to the name inside your Gherkin step, and then load and read the file inside your step definition.
abc.feature
Feature: A
Scenario: 1
Given data at abc.csv
...
step-definitions.js
const fs = require('fs');
const { Given } = require('@cucumber/cucumber');
Given(/^data at (.*)$/, function (fileName) {
  // read the CSV and split each line into its fields
  const rows = fs.readFileSync(`${__dirname}/${fileName}`, 'utf8')
    .trim().split('\n').map((line) => line.split(','));
  // iterate over rows
});
I'm using GNU Indent to format some code. I have some lines like this one:
port->N[0].BTR.U = (DIV8(0U) |
TSEG2(0x3U) |
TSEG1(0xEU) |
SJW(0x3U) |
BRP(0x9U));
They are being formatted into code like this:
port->N[0].BTR.U = (DIV8(0U) | TSEG2(0x3U) | TSEG1(0xEU) | SJW(0x3U) | BRP(0x9U));
I am using the -l80 option which, according to the documentation, should break lines at 80 characters. Here the code was originally shorter than that, but after formatting the resulting line goes beyond 80 characters! So how is indent violating its own rule? Also, as far as I understand, I didn't specify any option that would do this, i.e., take code from several lines and place it on one single line.
And this is really annoying because I don't want this code to be modified. So, does anybody know what option or combination of options I can use to avoid this?
These are the options I'm already using:
-ndj -nbad -bap -nbc -nbbo -hnl -bl -bli0 -bls -blf -ncdb -nce -cp1 -ncs -di2 -nfc1 -nfca -hnl -i4 -ip0 -lp -npcs -nprs -psl -saf -sai -saw -nsc -nsob -cli4 -cbi0 -nut -nbs -npsl -l80 -c90 -cd90
Regards!