Save JMeter summary table data without plugin?

Is there any way whatsoever that I can save the summary table data from my JMeter tests each time they are run, without using the command line or a plugin of any sort? You can output the default values to a CSV, which I am already doing, so surely there must be a way to do the same with the average, min, max and so on.

You can configure the filename where the results will be saved right in the Summary Report listener. Look for the field: Write results to file / Read from file.
If you are going to run JMeter in non-GUI mode, you can also configure the format in which the test results are saved. You can choose XML or CSV via the jmeter.save.saveservice.output_format property in jmeter.properties.
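As a minimal sketch of the relevant jmeter.properties entries (the property names are standard; which result columns you enable is up to you):
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.latency=true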

Yes. Add a Simple Data Writer (Test Plan > Add > Listener > Simple Data Writer) and then give it the path that you want to save to. You can configure which result data you want to keep. You can then load this file back into any of the reports, results trees, etc. and see the full test results again.

Related

Need to automate a form with 1000 users at the same time

This is the link to the form: https://testapp-app.kloudsoft.co/survey/38e288e2-7957-4a05-b024-fb337df2f0f6. I already used JMeter to do this, but in the backend only one user was reflected.
I am expecting 1000 different users in the backend.
I need a tool with which I can do this form automation.
If you recorded your test using the HTTP(S) Test Script Recorder, it's absolutely expected that you're able to see only one user (the name and/or email which was used during recording).
You need to parameterize the credentials using one of the following approaches:
Generate 1000 users/emails, put them into a CSV file and use CSV Data Set Config to read them
Use JMeter Functions like __RandomString() to generate random users
Use Counter configuration element and/or __counter() function to generate an incremented number on each iteration or each time the function is called
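For illustration, the function-based approaches look roughly like this (the length, character set, and domain are arbitrary placeholders):
${__RandomString(10,abcdefghijklmnopqrstuvwxyz)}@example.com
${__counter(FALSE)}
With the CSV Data Set Config approach you would instead reference the variable names configured in the element, e.g. ${username} and ${email}.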

Trying to upload a CSV into Snowflake and the date column isn't recognized

The column contains dates in the following format:
dd/mm/yyyy
I've tried DATE, TIMESTAMP etc but whatever I do, I can't seem to upload the file.
In the classic UI you can click on the Load Table button and follow the dialogs to upload your file. It is a bit hard to find: click on Databases to the right of the big Snowflake icon, then select a database and a table, and you should see the button. In the wizard there will be a step for defining the 'File Format'. There, you have to scroll down to define the date format.
Without the classic UI you have to install SnowSQL on your device first (https://docs.snowflake.com/en/user-guide/snowsql-install-config.html).
Start SnowSQL and apply the following steps:
Switch to the database you want to upload the file to. You need various privileges for creating a stage, a file format, and a table. E.g. USE TEST_DB;
Create the file format you want to use for uploading your CSV file. E.g.
CREATE FILE FORMAT "TEST_DB"."PUBLIC".MY_FILE_FORMAT TYPE = 'CSV' DATE_FORMAT = 'dd/mm/yyyy';
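If your file has a header row you will probably also want to skip it; SKIP_HEADER is a standard CSV file format option (whether you need it depends on your file):
CREATE FILE FORMAT "TEST_DB"."PUBLIC".MY_FILE_FORMAT TYPE = 'CSV' DATE_FORMAT = 'dd/mm/yyyy' SKIP_HEADER = 1;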
Create a stage using this file format:
CREATE STAGE MY_STAGE file_format = "TEST_DB"."PUBLIC".MY_FILE_FORMAT;
Now you can put your file onto this stage:
PUT file://<file_path>/file.csv @MY_STAGE;
You can check the upload with
SELECT d.$1, ..., d.$N FROM @MY_STAGE/file.csv d;
Then, create your table.
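For example (the column names and types are only an assumption about what your CSV contains):
CREATE TABLE mycsvtable (id NUMBER, my_date DATE, description VARCHAR);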
Copy the content from your stage to your table. If you want to transform your data at this point, you have to use an inner select; if not, the following command is enough.
COPY INTO mycsvtable FROM @MY_STAGE/file.csv;
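If you do want to transform during the copy, the inner select looks roughly like this (the column positions $1..$3 are assumed to match the sketch table above):
COPY INTO mycsvtable FROM (SELECT t.$1, TO_DATE(t.$2, 'dd/mm/yyyy'), t.$3 FROM @MY_STAGE/file.csv t);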
You can find documentation for configuring the file format at https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html
You can find documentation for configuring the stage at https://docs.snowflake.com/en/sql-reference/sql/create-stage.html
You can find documentation for copying the staged file into a table at https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html
I recommend that you upload your file with automatic date detection disabled, or that your initial table has a string column instead of a date column. IMHO, it is easier to transform your upload afterwards using the try_to_date function, which makes it much easier to handle possible parsing errors.
e.g. SELECT TRY_TO_DATE(date_column, 'dd/mm/yyyy') AS MY_DATE, IFNULL(MY_DATE, date_column) AS MY_DATE_NOT_PARSABLE FROM upload;
You can see that there is quite a lot to do to load a simple CSV file into Snowflake. It becomes even more complicated when you take into account that every step can cause specific failures and that your file might contain erroneous lines. This is why my team and I are working at Datameer to make these types of tasks easier. We aim for a simple drag-and-drop solution that does most of the work for you. We would be happy if you would try it out here: https://www.datameer.com/upload-csv-to-snowflake/

Pentaho writing to log but not text file

I have a transformation that is successfully writing the first row to the log file.
However the same transformation is not writing the first row to a text file.
The text file remains blank.
Does anyone know why this may be?
Edited: only focusing on the 'applications to run' and 'set pm variable' transformations, as the other transformations are replications of 'set pm variable' but for different fields.
It looks like your Set Variables step is distributing its rows over the two follow-up steps in a round-robin way, which is the default setting in PDI.
Right-click the Set Variables step and under Data Movement, select Copy. That will send all rows to BOTH steps. You should then see a documents icon on the hops.

Pentaho Data Integration: How to select the output of a SQL query as a filename for Microsoft Excel Input

I have files abc.xlsx, 1234.xlsx, and xyz.xlsx in some folder. My requirement is to develop a transformation where the Microsoft Excel Input step in PDI (Pentaho Data Integration) should only pick a file based on the output of a SQL query. If the query output is abc.xlsx, Microsoft Excel Input should pick up abc.xlsx for further processing. How do I achieve this? Would really appreciate your help. Thanks.
Transformations in Kettle run asynchronously, so you're probably going to need a job for this.
Files to create
Create a transformation that performs the SQL query you're looking for and populates a variable based on the result
Create a transformation that pulls data from the Excel file, using the variable populated as the filename
Create a job that executes the first transformation, then steps into the second transformation
Jobs run sequentially, so it will execute the first transformation, perform the query, get the result, and set a variable. Variables need to be set and retrieved in different transformations because of their asynchronous nature. This is the reason for the second transformation; the job won't step into the second transformation until the first one is done running (therefore, not until the variable is populated).
This is all assuming you only want to run the transformation once, expecting a single result from the query. If you want to loop it, pulling data from a set, then the setup is a little bit different.
The Excel input step has an "Accept filenames from previous step" option. You can have a Table input step build the full path of the file you want to read (or you can build it later, knowing the base directory and the short filename), pass the filename to the Excel input, tick that box, and specify the step and the field you want to use for the filename.
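As a sketch, the Table input query could return something like this (the table and column names are made up for the example):
SELECT '/data/input/' || file_name AS filename FROM file_config;
The Excel input step would then read the filename field produced by that step.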

How to automate the retrieval of files based on a datestamp

I'm new to the Pentaho suite and its automation functionality. I have files that come in on a daily basis, and two columns need to be put in place. I have figured out how to add the columns, but now I am stuck on the automation side of things. The filename is constant but it has a datestamp at the end, e.g. LEAVER_REPORT_NEW_20110623.csv. The file will always be in the same directory. How do I go about using Pentaho Data Integration to solve this? I've tried Get Files but that doesn't seem to work.
Create a variable in a previous transformation which contains 20110623 (easy with a Get System Info step to get the date, then a Select Values step to format it to a string, then a Set Variables step).
Then change the filename of the Text file input step to use:
LEAVER_REPORT_NEW_${variablename}.csv
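For the Select Values step, a sketch of the key setting: change the date field's type to String with format mask yyyyMMdd, which renders the system date as 20110623 (PDI uses Java date format masks here).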