CSV Data Set Config only working for 1 line - scripting

I have recorded a script in which only a search id changes after logging in and searching a field.
After recording, I modified the script and provided a CSV Data Set Config to read a list of search ids.
The issue is:
If I provide only one search id in the CSV, it runs correctly. But if I add multiple search ids to the file and run the script that number of times, only the first line is run for all X iterations, even though JMeter shows that the URL for the search id was built correctly.
Can someone guide me?
I have also used a Loop Controller to run X times over the search ids, but the result is the same.
How can I run this? (I recorded the script once and just replaced the search id with the CSV value.)
The script works fine for one search id, but not for multiple (after providing the search ids through a CSV file via CSV Data Set Config).
What am I doing wrong? Please guide me.

It looks like you have the CSV Data Set Config and the Loop Controller at the same level.
You should put the CSV Data Set Config as a child of the Loop Controller:
*-- Thread Group
*---- Loop Controller
*------ CSV Data Set Config
*------ HTTP Sampler
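With that layout, each loop iteration reads the next line of the file. A minimal sketch, assuming the CSV Data Set Config's Variable Names field is set to searchId and the file holds one id per line (the request path is illustrative):

search_ids.csv:
1001
1002
1003

HTTP Request path: /search?id=${searchId}

Iteration 1 then uses 1001, iteration 2 uses 1002, and so on.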

Related

Need to automate a form with 1000 users at one time

This is the link to the form; I already used JMeter to do this, but in the backend only one user was reflected: https://testapp-app.kloudsoft.co/survey/38e288e2-7957-4a05-b024-fb337df2f0f6
I am expecting 1000 different users in the backend.
I need a tool with which I can do this form automation.
If you recorded your test using the HTTP(S) Test Script Recorder, it's absolutely expected that you see only one user (the name and/or email which was used during recording).
You need to parameterize the credentials using one of the following approaches:
Generate 1000 users/emails, put them into a CSV file and use CSV Data Set Config to read them
Use JMeter Functions like __RandomString() to generate random users
Use the Counter configuration element and/or the __counter() function to generate an incremented number on each iteration or each time the function is called
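For example, hedged sketches of the last two approaches (the length, character set, and email domain are arbitrary):

user${__counter(FALSE,)}@example.com
${__RandomString(8,abcdefghijklmnopqrstuvwxyz,)}@example.com

The first yields user1@example.com, user2@example.com, and so on (FALSE makes the counter shared across all threads); the second yields 8 random lowercase letters as the local part.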

VB.Net Query CSV File

I have a CSV file I need to process to find just a few line items. I followed the tutorial here for reading a CSV file into a DataTable. It worked very well when my query was:
SELECT * FROM <NameOfCsvFile>
This is great but it returns the whole CSV file as a DataTable. I only want a portion of the file to check for known errors. So I modified my query to be like this:
SELECT * FROM <NameOfCsvFile> WHERE Column1 IN (Int1,Int2,Int3,Int4,Int5)
When I checked the DataTable's output in the debugger, it had the right row count (being 5), but the list of rows was equal to Nothing.
How can I get this to work with MS's query interface instead of manually looping through the whole CSV file to return the 5 records I seek out of hundreds?
I figured it out. The query worked perfectly, but the debugger doesn't let you see which rows are returned, as they haven't been decoded yet. Calling Dim rows() As DataRow = dt.Select("") allowed me to see the actual results of the CSV file query.
All is good in the world now.
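For reference, a minimal sketch of the working pattern (the loading helper is a placeholder for however the tutorial builds the DataTable; the filter values are from the question):

Dim dt As DataTable = LoadCsvIntoDataTable() ' hypothetical helper: load the CSV per the tutorial
Dim rows() As DataRow = dt.Select("Column1 IN (1, 2, 3, 4, 5)")
For Each row As DataRow In rows
    Console.WriteLine(row("Column1"))
Next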

Write results of SQL query to multiple files based on field value

My team uses a query that generates a text file over 500MB in size.
The query is executed from a Korn Shell script on an AIX server connecting to DB2.
The results are ordered and grouped by a specific field.
My question: is it possible, using SQL, to write all rows sharing a given value of this field to their own text file?
For example: all rows with VENDORID = 1 would go to 1.txt, VENDORID = 2 to 2.txt, etc.
The field in question currently has 1,000+ distinct values, so I would expect the same number of text files.
Here is an alternative approach that gets each file directly from the database.
You can use the DB2 EXPORT command to generate each file. Something like this should be able to create one file:
db2 "export to 1.txt of DEL select * from table where vendorid = 1"
I would use a shell script or something like Perl to automate the execution of such a command for each value.
Depending on how fancy you want to get, you could just hardcode the range of vendorids, or you could first get the list of distinct vendorids from the table and use that.
This method might scale a bit better than extracting one huge text file first.
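A minimal ksh sketch of that approach (the database and table names are placeholders):

#!/bin/ksh
db2 connect to MYDB
# -x suppresses column headings, giving one vendorid per line
db2 -x "select distinct vendorid from mytable" | while read vendorid; do
    db2 "export to ${vendorid}.txt of DEL select * from mytable where vendorid = ${vendorid}"
done
db2 connect reset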

How to truncate then append to a RAW file in SSIS

I'm attempting to pull data from several spreadsheets that reside in a single folder, then put all the data into a single CSV file along with column headings.
I have a Foreach Loop Container set up to iterate through each of the filenames in the folder, which then appends the data to a RAW file. However, as many seem to have run into, there does not appear to be a built-in option that will allow one to simply truncate the RAW file before entering the loop container.
Jamie Thomson described a similar situation in his blog here, but the links to the examples do not seem to work. Does anyone have an easy way to truncate the RAW file in a standalone step before entering the Foreach loop?
The approach I always use is to create a data flow with the appropriate metadata format but no actual rows and route that to a RAW file set to Create new.
In my existing data flow, I look at the metadata that populates the RAW file and then craft a select statement that mimics it.
e.g.
SELECT
CAST(NULL AS varchar(70)) AS AddressLine1
, CAST(NULL AS bigint) AS SomeBigInt
, CAST(NULL AS nvarchar(max)) AS PerformanceLOL
WHERE 1 = 0 -- returns zero rows; only the column metadata matters
Here's what I did:
Make your initial raw file
Make a copy of that raw file
Use a File System Task to replace the staging file at the beginning of your package/job every time.
In my use case I have 20 Foreach threads writing to their own files all at the same time. No thread can create and then append, so you just "recreate" by copying over an 'empty' raw file that already has the metadata assigned, before calling the threads.

How to automate the retrieval of files based on datestamp

I'm new to the Pentaho suite and its automation functionality. I have files that come in on a daily basis, and two columns need to be put in place. I have figured out how to add the columns, but now I am stuck on the automation side of things. The filename is constant but it has a datestamp at the end, e.g. LEAVER_REPORT_NEW_20110623.csv. The file will always be in the same directory. How do I go about using Pentaho Data Integration to solve this issue? I've tried Get Files, but that doesn't seem to work.
Create a variable in a previous transformation which contains 20110623 (easy with a Get System Info step to get the date, then a Select Values step to format it to a string, then a Set Variables step).
Then change the filename of the Text File Input to use:
LEAVER_REPORT_NEW_${variablename}.csv
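A minimal sketch of that upstream transformation (the yyyyMMdd format mask and the variable name are assumptions chosen to match the example filename):

Get System Info -> add a field of type "system date (fixed)"
Select Values -> Meta-data tab: change the date field's type to String with format yyyyMMdd
Set Variables -> define variablename from that string field

The job then runs this transformation first, followed by the transformation whose Text File Input reads LEAVER_REPORT_NEW_${variablename}.csv.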