How to write specific data from JMeter execution output to CSV / Notepad using beanshell scripting - jmeter-4.0

We are working on a web services automation project using JMeter 4.0. JMeter returns the response data in JSON format, but we would like to store only specific data (Account ID, Customer ID, or Account inquiry fields) from that JSON in a CSV file; at the moment the data is written to the CSV file unformatted.
Looking for a workaround for this.
We are using the following code:
import java.io.File;
import org.apache.jmeter.services.FileServer;
Result = "FAIL";
Responce = prev.getResponseDataAsString():
if(Responce.contains("data"))
Result = "PASS";
f = new FileOutputStream("C:/Users/Amar.pawar/Desktop/testoup.csv",true);
p = new PrintStream(f);
p.println(vars.get("/ds1odmc") + "," + Result):
p.close();
f.close():
Following error is getting encountered:
Error invoking bsh method: eval In file: inline evaluation of: ``import java.io.File; import org.apache.jmeter.services.FileServer; Result = "FA . . . '' Encountered ":" at line 5, column 42.
We are looking to save specific data to a CSV (or txt) file instead of the complete output in an unformatted form. Please look into this and suggest a solution.

Looks like a typo. You use : instead of ; in three lines:
Responce = prev.getResponseDataAsString():
...
p.println(vars.get("/ds1odmc") + "," + Result):
...
f.close():
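For reference, here is a sketch of the full script with the semicolons fixed (the file path and the /ds1odmc variable are taken from your code as-is):
import java.io.FileOutputStream;
import java.io.PrintStream;

String Result = "FAIL";
String Responce = prev.getResponseDataAsString();
if (Responce.contains("data")) {
    Result = "PASS";
}

// append one line per sample: <value of /ds1odmc>,<PASS|FAIL>
FileOutputStream f = new FileOutputStream("C:/Users/Amar.pawar/Desktop/testoup.csv", true);
PrintStream p = new PrintStream(f);
p.println(vars.get("/ds1odmc") + "," + Result);
p.close();
f.close();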
And if the problem is still not solved, it may be useful to check the article on testing complex logic with JMeter Beanshell.

Related

In jmeter: getting Error invoking bsh method: eval

BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``import org.apache.jmeter.services.FileServer; String path=FileServer.getFileSer . . . '' : Attempt to access property on undefined variable or class name
2022-01-27 19:50:51,923 WARN o.a.j.e.BeanShellPostProcessor: Problem in BeanShell script: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: ``import org.apache.jmeter.services.FileServer; String path=FileServer.getFileSer . . . '' : Attempt to access property on undefined variable or class name
JMETER BEANSHELL CODE: (Beanshell post processor)
import org.apache.jmeter.services.FileServer;
String path=FileServer.getFileServer().getBaseDir();
var1= vars.get("userid");
var2= vars.get("username");
var3= vars.get("userfullname");
var4= ${exam_id};
f = new FileOutputStream("C://apache-jmeter-5.4.1/apache-jmeter-5.4.1/bin/Script4.csv",true);
p = new PrintStream(f);
this.interpreter.setOut(p);
p.println(var1+ "," +var2 + "," +var3 + "," +var4);
f.close();
Change this line:
var4= ${exam_id};
to this one:
var4= vars.get("exam_id");
Since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting, so it is worth considering migrating to Groovy.
If you run this code with more than one thread (virtual user) you will face a race condition, resulting in data corruption when multiple threads write to the same file. JMeter provides the sample_variables property for writing variable values to the .jtl results file; if you need to store the data in a separate file, go for the Flexible File Writer.
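For example, the same logic as a JSR223 PostProcessor with Groovy could look roughly like this (a sketch only; the variable names and file path are taken from the script above, and it has the same multi-thread limitation):
// JSR223 PostProcessor, language: Groovy
def var1 = vars.get('userid')
def var2 = vars.get('username')
def var3 = vars.get('userfullname')
def var4 = vars.get('exam_id')

// append one CSV line per sample; still not safe when several threads write to the same file
new File('C:/apache-jmeter-5.4.1/apache-jmeter-5.4.1/bin/Script4.csv')
        .append([var1, var2, var3, var4].join(',') + System.lineSeparator())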

How to prevent errors in format (string-variable) when downloading data from Google Big Query to R?

I have some data (Tweets streamed from Twitter's REST API) stored in Google Big Query, which, in the preview, looks like this:
'I’m up by myself.'
However, when I download it into R, it looks like this:
'I’m up by myself.'
Is there any way to prevent it?
I am using this code to download the data in R:
library(bigrquery)
project_id <- "my_project"
sql_string <-
"SELECT
  text
FROM my_under_project.my_table
LIMIT 500
;"
test <- query_exec(sql_string, project = project_id, useLegacySql = FALSE, allowLargeResults=TRUE, max_pages = Inf)
str(test)
# 'data.frame': 500 obs. of 1 variable:
#$ text: chr "tweets" ...
The data from 'text' is stored as a string in Big Query.
Any help is appreciated! Thanks in advance!
I downloaded the data with bq_table_download (instead of query_exec) from the same package, and that solved the problem!
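For reference, the bq_table_download route might look roughly like this (a sketch; bq_project_query is used here to run the query first, and argument names such as n_max may differ between bigrquery versions):
library(bigrquery)

project_id <- "my_project"
sql_string <- "SELECT text FROM my_under_project.my_table LIMIT 500;"

# run the query, then download the result table instead of using query_exec
tb   <- bq_project_query(project_id, sql_string)
test <- bq_table_download(tb, n_max = 500)

str(test)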

Python - Use %s in value of config file

I use a config file (type .ini) to save my SQL queries, then I get a query by its key. All works fine until I create a query with parameters, for example:
;the ini file
product_by_cat = select * from products where cat =%s
I use :
config = configparser.ConfigParser()
args= ('cat1')
config.read(path_to_ini_file)
query= config.get(section_where_are_stored_thequeries,key_of_the_query)
complete_query= query%args
I get the error :
TypeError: not all arguments converted during string formatting
So it tries to format the string when retrieving the value from the ini file.
Any suggestions for my problem?
You can use the format function like this:
ini file
product_by_cat = select * from products where cat ={}
python:
complete_query= query.format(args)
Depending on the version of ConfigParser (Python 2 or Python 3), you may need to double the % like this, or it throws an error:
product_by_cat = select * from products where cat =%%s
Although a better way would be to use the raw version of the config parser, so the % char isn't interpreted:
config = configparser.RawConfigParser()
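Putting the pieces together, a small sketch (the ini path, section name, and key are placeholders standing in for the ones in the question):
import configparser

config = configparser.RawConfigParser()   # raw parser: '%' is not interpolated
config.read("queries.ini")                # placeholder path

# e.g. "select * from products where cat =%s"
query = config.get("queries", "product_by_cat")

args = ("cat1",)                          # note the trailing comma: a one-element tuple
complete_query = query % args
print(complete_query)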

Issue automating CSV import to an RSQLite DB

I'm trying to automate writing CSV files to an RSQLite DB.
I am doing so by indexing csvFiles, which is a list of data.frame variables stored in the environment.
I can't seem to figure out why my dbWriteTable() code works perfectly fine when I enter it manually but not when I try to index the name and value fields.
### CREATE DB ###
mydb <- dbConnect(RSQLite::SQLite(),"")
# FOR LOOP TO BATCH IMPORT DATA INTO DATABASE
for (i in 1:length(csvFiles)) {
dbWriteTable(mydb,name = csvFiles[i], value = csvFiles[i], overwrite=T)
i=i+1
}
# EXAMPLE CODE THAT SUCCESSFULLY MANUAL IMPORTS INTO mydb
dbWriteTable(mydb,"DEPARTMENT",DEPARTMENT)
When I run the for loop above, I'm given this error:
"Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'DEPARTMENT': No such file or directory
# note that 'DEPARTMENT' is the value of csvFiles[1]
Here's the dput output of csvFiles:
c("DEPARTMENT", "EMPLOYEE_PHONE", "PRODUCT", "EMPLOYEE", "SALES_ORDER_LINE",
"SALES_ORDER", "CUSTOMER", "INVOICES", "STOCK_TOTAL")
I've researched this error and it seems to be related to my working directory; however, I don't really understand what to change, as I'm not even trying to manipulate files from my computer, simply data.frames already in my environment.
Please help!
Simply use get() for the value argument as you are passing a string value when a dataframe object is expected. Notice your manual version does not have DEPARTMENT quoted for value.
# FOR LOOP TO BATCH IMPORT DATA INTO DATABASE
for (i in seq_along(csvFiles)) {
dbWriteTable(mydb,name = csvFiles[i], value = get(csvFiles[i]), overwrite=T)
}
Alternatively, consider building a list of named data frames with mget and looping element-wise over the list's names and data frame elements with Map:
dfs <- mget(csvFiles)
output <- Map(function(n, d) dbWriteTable(mydb, name = n, value = d, overwrite=T), names(dfs), dfs)
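To confirm the import worked, a quick check along these lines could help (assuming the mydb connection from above):
# list the tables now present in the SQLite database
dbListTables(mydb)

# peek at one of the imported tables
head(dbReadTable(mydb, "DEPARTMENT"))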

How can I incorporate the current input filename into my Pig Latin script?

I am processing data from a set of files which contain a date stamp as part of the filename. The data within the file does not contain the date stamp. I would like to process the filename and add it to one of the data structures within the script. Is there a way to do that within Pig Latin (an extension to PigStorage maybe?) or do I need to preprocess all of the files using Perl or the like beforehand?
I envision something like the following:
-- Load two fields from file, then generate a third from the filename
rawdata = LOAD '/directory/of/files/' USING PigStorage AS (field1:chararray, field2:int, field3:filename);
-- Reformat the filename into a datestamp
annotated = FOREACH rawdata GENERATE
REGEX_EXTRACT(field3,'*-(20\d{6})-*',1) AS datestamp,
field1, field2;
Note the special "filename" datatype in the LOAD statement. Seems like it would have to happen there as once the data has been loaded it's too late to get back to the source filename.
You can use PigStorage by specifying -tagsource as follows:
A = LOAD 'input' using PigStorage(',','-tagsource');
B = foreach A generate INPUT_FILE_NAME;
The first field in each tuple will contain the input path (INPUT_FILE_NAME).
According to the API doc: http://pig.apache.org/docs/r0.10.0/api/org/apache/pig/builtin/PigStorage.html
Dan
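Since the original goal was a datestamp rather than the raw file name, a follow-up FOREACH along the lines of the question's own sketch could derive it (a sketch only; the field names and the regular expression are assumptions):
-- -tagsource prepends the source file name as the first field
rawdata = LOAD '/directory/of/files/' USING PigStorage(',', '-tagsource')
          AS (filename:chararray, field1:chararray, field2:int);

-- pull an 8-digit datestamp such as 20130215 out of the file name
annotated = FOREACH rawdata GENERATE
    REGEX_EXTRACT(filename, '.*-(20\\d{6})-.*', 1) AS datestamp,
    field1, field2;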
The Pig wiki has an example of PigStorageWithInputPath, which has the filename in an additional chararray field:
Example
A = load '/directory/of/files/*' using PigStorageWithInputPath()
as (field1:chararray, field2:int, field3:chararray);
UDF
// Note that there are several versions of Path and FileSplit. These are intended:
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.builtin.PigStorage;
import org.apache.pig.data.Tuple;

public class PigStorageWithInputPath extends PigStorage {
    Path path = null;

    @Override
    public void prepareToRead(RecordReader reader, PigSplit split) {
        super.prepareToRead(reader, split);
        path = ((FileSplit) split.getWrappedSplit()).getPath();
    }

    @Override
    public Tuple getNext() throws IOException {
        Tuple myTuple = super.getNext();
        if (myTuple != null)
            myTuple.append(path.toString());
        return myTuple;
    }
}
-tagSource is deprecated in Pig 0.12.0.
Instead use
-tagFile - Appends input source file name to beginning of each tuple.
-tagPath - Appends input source file path to beginning of each tuple.
A = LOAD '/user/myFile.TXT' using PigStorage(',','-tagPath');
DUMP A ;
will give you the full file path as the first column:
( hdfs://myserver/user/blo/input/2015.TXT,439,43,05,4,NAVI,PO,P&C,P&CR,UC,40)
Reference: http://pig.apache.org/docs/r0.12.0/api/org/apache/pig/builtin/PigStorage.html
A way to do this in Bash and PigLatin can be found at: How Can I Load Every File In a Folder Using PIG?.
What I've been doing lately, though, and find to be much cleaner, is embedding Pig in Python. That lets you throw all sorts of variables and such between the two. A simple example is:
#!/path/to/jython.jar
# explicitly import Pig class
from org.apache.pig.scripting import Pig
# COMPILE: compile method returns a Pig object that represents the pipeline
P = Pig.compile(
"a = load '$in'; store a into '$out';")
input = '/path/to/some/file.txt'
output = '/path/to/some/output/on/hdfs'
# BIND and RUN
results = P.bind({'in':input, 'out':output}).runSingle()
if results.isSuccessful():
    print 'Pig job succeeded'
else:
    # raising a plain string is not valid; raise an exception object instead
    raise Exception('Pig job failed')
Have a look at Julien Le Dem's great slides as an introduction to this, if you're interested. There's also a ton of documentation at http://pig.apache.org/docs/r0.9.2/cont.pdf.