If I am using a CSV file in JMeter, how should I configure the thread count?

I set up a CSV file in JMeter via CSV Data Set Config; the file contains 6 usernames and passwords.
What should the Number of Threads be on the Thread Group page?
Also, what should I do if I want to test with 100 users?
Should I increase the number of users in my CSV file, or the number of threads?

The answer really depends on what your test does and which load you want to inject.
But to make a realistic test, your CSV file should have as many logins as you have virtual users (threads in JMeter).
You should also ensure that two threads never use the same user (if your application does not allow it).
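The "one credential row per thread" rule above can be sketched as a small data-prep script. This is an illustrative sketch, not part of JMeter itself; the file name users.csv and the "user&lt;N&gt;/pass&lt;N&gt;" scheme are assumptions.

```python
# Sketch: generate a users.csv with one credential row per JMeter thread,
# so no two threads ever share a login. The file name and the
# "user<N>" / "pass<N>" scheme are illustrative assumptions.
import csv

def write_user_csv(path, thread_count):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(1, thread_count + 1):
            writer.writerow([f"user{i}", f"pass{i}"])

# e.g. for a Thread Group with 100 threads:
write_user_csv("users.csv", 100)
```

Point the CSV Data Set Config at the generated file; with Recycle on EOF set to False and Stop thread on EOF set to True, running out of rows fails fast instead of silently reusing logins.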

Related

How to do a load test for registration in JMeter

I am very new to JMeter. How do I test registration and login for multiple users? Is it possible using a CSV file or any other method?
Can anyone please guide me?
Depending on what you're trying to achieve:
If you have a file with username/password combinations - the most commonly used approach is reading the file with CSV Data Set Config
Alternatives are:
JDBC Test Elements - if the credentials are in the database
Redis Data Set - the credentials are in Redis
etc.
If you're testing registration and a "run and forget" approach is good for you - you can generate random usernames and passwords using suitable JMeter Functions like __RandomString(), __Random(), __time(), etc.
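As a hedged sketch of that last point, randomly generated credentials in a sampler parameter might look like this (the parameter layout is illustrative; the functions themselves are standard JMeter Functions):

```
username: user_${__RandomString(8,abcdefghijklmnopqrstuvwxyz,)}
password: ${__time(,)}_${__Random(1000,9999,)}
```

Here __RandomString(8,...) produces an 8-character lowercase string, __time(,) the current timestamp in milliseconds, and __Random(1000,9999,) a number in that range, which together make collisions unlikely for a "run and forget" registration test.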
You can use the Random CSV Data Set Config plugin in addition to the solutions given by Dmitri.
I have demonstrated the use of the CSV Data Set Config element and the Random CSV Data Set Config plugin in this video.

Real-time load testing with the JMeter WebDriver Sampler

I need to load test a Mendix application.
The requirement is that 10 users should be able to perform a file upload simultaneously.
But the test should mimic a real-life situation where 10 users use 10 different machines, accounts, and browsers - basically no shared resources.
I have written a JMeter WebDriver Sampler and am trying to set it up on multiple VMs in a distributed way.
Is there any other better option or tool?
You may use commercial solutions like Redline13 to distribute the users across the agents (i.e. 10 different machines)
You can include the file names associated with the users in a CSV file.
Users can be uniquely distributed across the machines using the split feature. The filename should be picked from the CSV and included in the HTTP Request as ${FILE_NAME}. You can then upload all the files associated with each user.
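The split step above can be sketched as follows. This is an illustrative sketch, assuming each row pairs a user with a file name; the row format and machine count are assumptions.

```python
# Sketch: split one master list of "user,filename" rows into disjoint
# slices, one per load generator, so no two machines share a user.
def split_csv(rows, machines):
    """Distribute rows round-robin across the given number of machines."""
    slices = [[] for _ in range(machines)]
    for i, row in enumerate(rows):
        slices[i % machines].append(row)
    return slices

rows = [f"user{i},file{i}.dat" for i in range(10)]
for n, chunk in enumerate(split_csv(rows, 2)):
    print(f"machine{n}: {chunk}")
```

Each slice would then be written out as that machine's own CSV for its local CSV Data Set Config.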

Upload the same file with a different name each time in LoadRunner

I need to upload an Excel file in a LoadRunner HTTP/HTML script with a unique filename each time. The file must be present in the directory. Copying files and renaming them manually would be tedious. Can anyone tell me whether there is an automated way to do this, or whether LoadRunner itself can perform such tasks? Thank you.
On each of your load generators, make sure that you have a RAM drive for the file I/O for the new files. You are going to have tens, perhaps hundreds or thousands of virtual users on your load generator. You do not want contention for the read/write heads of a physical hard drive acting as a drag anchor on the performance of your entire load generator. It is for this same reason that logging is minimized during test execution.
Include the base file as part of your virtual user
Use the appropriate language API to make a copy of the file from the virtual user directory to the RAM drive on your load generator under a new name. I would recommend a name which includes the virtual user number and iteration number at the end, to ensure uniqueness across your virtual user population.
Upload your file using the RAM drive copy as the source.
Delete your newly created file to return to the same initial condition as at the beginning of the iteration.
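The copy-and-rename step above can be sketched as follows (here in Python for illustration; inside an actual LoadRunner script you would do the equivalent with C file I/O). The paths and the naming scheme are assumptions.

```python
# Sketch: stage a uniquely named copy of the base file on the RAM drive,
# suffixed with vuser id and iteration number so names never collide
# across the virtual user population.
import os
import shutil

def stage_unique_copy(src, ramdrive_dir, vuser_id, iteration):
    """Copy src into ramdrive_dir under a per-vuser, per-iteration
    name and return the new path."""
    base, ext = os.path.splitext(os.path.basename(src))
    dst = os.path.join(ramdrive_dir, f"{base}_{vuser_id}_{iteration}{ext}")
    shutil.copyfile(src, dst)
    return dst
```

After the upload request completes, remove the staged copy (os.remove on the returned path) to restore the iteration's initial state, mirroring the delete step above.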
As you will be engaging a large amount of file i/o for the virtual users it is highly recommended that you monitor the load generators just as you would monitor your application under test. If you are new to LoadRunner and performance testing then this is an excellent opportunity for your mentor/trainer to guide you on a monitoring strategy.
Assuming the upload is done using an HTML form:
Use web_submit_data() with the FilePath argument.
But first, let's create some parameters to get a truly unique filename (very important):
Create a parameter VUSERID which outputs the current vuser id.
Get/save the current timestamp:
web_save_timestamp_param("TIMESTAMP", LAST);
And here is the request:
web_submit_data("i1",
    "Action=https://{LR_SERVER}/{LR_JUNCTION}/upload",
    "Method=POST",
    "EncType=multipart/form-data",
    "Snapshot=t1.inf",
    "Mode=HTML",
    ITEMDATA,
    "Name=FIELDNAME",
    "Value={VUSERID}{TIMESTAMP}_LOADTEST.xlsx",
    "File=yes",
    "FilePath=REALFILEPATH.xlsx",
    "ContentType=WHATEVERCONTENTTYPE",
    ENDITEM,
    LAST);
The Value={VUSERID}{TIMESTAMP}_LOADTEST.xlsx will be the new (unique) filename. (It is unique for each user and iteration; very important!)
The FilePath points to the real file and its content will be uploaded.

Mule SFTP component

Hi, I have the below queries about the SFTP component; if you guys can help me out, that would be a great help:
1) Can we get the file size of the file picked up by SFTP component? I need to restrict the transfer based on the size of file.
2) Can I get the number of files and the file names picked up by the SFTP component?
3) Is this understanding correct: the SFTP component picks up all the files from the server, keeps them in memory, and processes them one by one until it finishes all files?
4) If the server has 5 files, can the SFTP component process all 5 files in parallel rather than one by one?
1 - Mule does not populate the file-size field for SFTP as it does with FILE. There are Jira tickets open on this matter, but MuleSoft has called it an enhancement and not given it a priority. I disagree. Perhaps ping MuleSoft; if enough users do, maybe they will raise the priority and address it. They use the size internally; they simply do not expose it outside as is done with the FILE connector.
2 - No, not really. It gives them back one at a time, not as a list.
3 & 4 - It only loads the entire file into memory if you tell it not to stream, or if you do something else, like an object-to-string transformer, which forces a memory load. The files (or file streams) are passed back one by one, but unless you restrict threading and make your flow synchronous, it will go asynchronous and multi-threaded and process multiple files in parallel. Flows default to asynchronous; subflows are synchronous.
You can use the SFTP endpoint to retrieve files, and then use a Java or script call to get each file's attributes and filter so you only process the files you are actually interested in, such as ones larger than your minimum size. This would seem more in line with what you are looking for in point 1. There are other options, but this is more straightforward than others I can quickly think of.
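Outside Mule, the filtering logic itself is simple. A minimal sketch in Python, assuming you can obtain (name, size) pairs for the remote directory listing; the listing values here are made up:

```python
# Sketch: keep only files whose size is within a limit, given
# (filename, size_in_bytes) pairs from a directory listing.
def files_within_limit(entries, max_bytes):
    """Return the names of files no larger than max_bytes."""
    return [name for name, size in entries if size <= max_bytes]

listing = [("small.txt", 1_024), ("big.bin", 50_000_000)]
print(files_within_limit(listing, 10_000_000))  # ['small.txt']
```

The same comparison could live in a small Java helper invoked from the flow, with only the qualifying file names passed on for transfer.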
I found one way to get the file size: provide transformer-refs="Object_to_Byte_Array" and then do #[payload.size()] to get the size of the file in bytes. Will this cause any issue?

Storing uploaded content on a website

For the past 5 years, my typical solution for storing uploaded files (images, videos, documents, etc.) was to throw everything into an "upload" folder and give each file a unique name.
I'm looking to refine my methods for storing uploaded content and I'm just wondering what other methods are used / preferred.
I've considered storing each item in its own folder (the folder name being the Id in the db) so I can preserve the uploaded file name. I've also considered uploading all media to a locked folder, then using a file handler: you pass the Id of the file you want to download in the querystring, and it reads the file and sends the bytes to the user. This is handy for checking access and restricting bandwidth per user.
I think the file handler method is a good way to handle files, as long as you know how to make good use of resources on your platform of choice. It is possible to do stupid things like read a 1 GB file into memory if you don't know what you are doing.
In terms of storing the files on disk it is a question of how many, what are the access patterns, and what OS/platform you are using. For some people it can even be advantageous to store files in a database.
Creating a separate directory per upload seems like overkill unless you are doing some type of versioning. My personal preference is to rename files that are uploaded and store the original name. When a user downloads I attach the original name again.
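A minimal sketch of the "don't read a 1 GB file into memory" point in a file handler, assuming files are stored on disk under an internal id while the original name lives in the database (all names here are hypothetical):

```python
# Sketch: stream a stored file back to the client in fixed-size chunks,
# so memory use stays constant regardless of file size.
CHUNK = 64 * 1024  # 64 KiB per read; a 1 GB file never sits in RAM at once

def stream_file(path):
    """Yield the file's bytes chunk by chunk; a web framework would
    forward each chunk to the client as it is produced."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            yield chunk
```

A handler would look up the original filename by Id, set it in the Content-Disposition header, and pass this generator as the response body, which also gives a natural hook for access checks and bandwidth throttling.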
Consider a virtual file system such as SolFS. Here's how it can solve your task:
If you have returning visitors, you can have a separate container for each visitor (named by visitor login, for example). One of the benefits of this approach is that you can encrypt the container using the visitor's password.
If you have many probably one-time visitors, you can have one or several containers with files grouped by date of upload.
A virtual file system lets you keep original filenames either as the actual filenames or as metadata for the files being stored.
Next, you can compress the data being stored in the container.