I'm using JMeter for testing file uploads. This works great when I upload just one file, but I want to be able to loop through a list of files. I see JMeter has a CSV-based config capability, but I can't figure out how to include a file as one of the params.
How can I specify a list of different files for JMeter to loop through, uploading one per request?
You need to pass three values per request:
the relative or full path to the file being uploaded
the upload input (parameter) name
the file's MIME type
So your CSV file should look like:
c:/testfiles/test.txt,upload,text/plain
c:/testfiles/test.jpg,upload,image/jpeg
etc.
Configure a CSV Data Set Config to read that file into JMeter variables, and reference those variables in the 'Files Upload' section of your HTTP Request sampler.
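For reference, assuming the CSV Data Set Config reads the three columns into variables named filePath, paramName, and mimeType (the CSV file name and variable names here are just examples), the fields would be wired up like this:

```
CSV Data Set Config:
  Filename:       c:/testfiles/uploads.csv
  Variable Names: filePath,paramName,mimeType

HTTP Request sampler, Files Upload section:
  File Path:      ${filePath}
  Parameter Name: ${paramName}
  MIME Type:      ${mimeType}
```

Each loop iteration then reads the next CSV line, so every request uploads the next file in the list.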
References:
CSV Data Set Config
Performance Testing: Upload and Download Scenarios with Apache JMeter
JMeter v5.1.1 r1855137
I'm trying to upload an .xlsx file using a multipart/form-data request type;
however, I'm getting different errors such as:
multipart body length limit 16384 exceeded;
unexpected end of stream, the content may have already been read by another component;
These are parameters for POST request that should upload our .xlsx file:
screenshot_1
Method: POST;
Use multipart/form-data checkbox: unchecked;
File Path: C:\temp\5000Lanes.xlsx;
Parameter Name: file;
MIME Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
The MIME type value is the one specified for .xlsx files: https://www.freeformatter.com/mime-types-list.html;
These are Header Manager parameters
screenshot_2
Content-Type: multipart/form-data; boundary=--AaB03x
Note that Body Data and Parameters are left empty.
screenshot_3
So, please help me figure out what is wrong in this configuration so the upload succeeds.
After a day of struggle I've found the solution, and now the upload succeeds.
Using the same configs that were specified in my question, the only thing you need to add: go to the 'Advanced' tab of your HTTP Request sampler and set the 'Implementation' parameter to Java.
https://i.stack.imgur.com/GtEDz.png
Remove the Content-Type header from the HTTP Header Manager.
Tick the 'Use multipart/form-data' box for the HTTP Request sampler:
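Combining this with the accepted answer, a working configuration would roughly be (values taken from the question; field names as they appear in JMeter):

```
HTTP Request sampler:
  Use multipart/form-data: checked
  Implementation (Advanced tab): Java
  Files Upload:
    File Path:      C:\temp\5000Lanes.xlsx
    Parameter Name: file
    MIME Type:      application/vnd.openxmlformats-officedocument.spreadsheetml.sheet

HTTP Header Manager:
  (no Content-Type header; JMeter generates it, boundary included)
```

The key point is that a hard-coded boundary in a manual Content-Type header will not match the boundary JMeter actually uses in the request body, so letting JMeter build the header avoids the mismatch.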
Going forward, be aware that you don't have to build file upload requests (or any other requests) manually: you can record the file upload event using the HTTP(S) Test Script Recorder. Just make sure to copy the file into the "bin" folder of your JMeter installation, otherwise JMeter won't be able to properly catch the request and generate a correct HTTP Request sampler; see Recording File Uploads with JMeter for more details.
According to JMeter Best Practices you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.3 (or whatever the latest stable version available at the JMeter Downloads page is) at the next available opportunity.
I am using the JMeter tool. While uploading a text file, it shows the message "Actual file content, it's not shown here". Could you tell me the configuration to upload the text file?
You can upload any file via the HTTP Request sampler. Please refer to the snapshot below.
Also try recording this call from the application; the recorder will fill this in for you automatically.
Completely new to JMeter.
Just wondering if it can be configured to generate a file (e.g. a .csv) and then post that file with an HTTP POST?
I've searched a bit on the internet but can't work out whether it is possible...
The reason I want to do this is that I want to test a server that receives data in the form of a .csv file and uploads it to a database. I'm hoping to test this server by sending a whole bunch of randomised data.
Would be good to have different threads sending different amounts of data at different rates.
Any tips would be amazing, I am a complete newbie.
Cheers
It is possible to upload a file using the JMeter HTTP Request sampler.
JMeter supports Beanshell, a Java-based scripting language. It can be used to create a CSV file and store it at some path.
Then set that CSV file's path in the 'Send Files With the Request' section of the HTTP Request sampler.
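JMeter itself would do this in a Beanshell/JSR223 element in Java; purely to illustrate the idea, here is a standalone Python sketch that writes randomised rows to a CSV file (the file name and column layout are made up for the example):

```python
import csv
import random
import string

def write_random_csv(path, rows=100):
    """Write a header plus `rows` randomised id,name,value records to `path`."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "value"])  # hypothetical column layout
        for i in range(rows):
            name = "".join(random.choices(string.ascii_lowercase, k=8))
            writer.writerow([i, name, random.randint(0, 1000)])
    return path

# Generate the file that the HTTP Request sampler would then upload
write_random_csv("random_data.csv", rows=10)
```

The equivalent Java code would run in a setUp element before the upload sampler, with the sampler's 'Send Files With the Request' field pointing at the generated path.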
When I try to upload an uncompressed JSON file, it works fine; but when I try a gzipped version of the same JSON file, the job fails with a lexical error resulting from a failure to parse the JSON content.
I gzipped the JSON file with the gzip command on Mac OS X 10.8, and I have set the sourceFormat to "NEWLINE_DELIMITED_JSON".
Did I do something incorrectly, or should gzipped JSON files be processed differently?
I believe that using the multipart/related request it is not possible to submit binary data (such as the compressed file). However, if you don't want to use uncompressed data, you may be able to use resumable upload.
What language are you coding in? The python jobs.insert() api takes a media upload parameter, which you should be able to give a filename to in order to do resumable upload (which sends your job metadata and new table data as separate streams). I was able to use this to upload a compressed file.
This is what bq.py uses, so you could look at the source code here.
If you aren't using python, the googleapis client libraries for other languages should have similar functionality.
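As a rough sketch of what that looks like with the classic google-api-python-client: the job metadata is a plain dict, and the gzipped file travels as a separate media stream. Project, dataset, and table names below are placeholders, and the network call itself is only shown in a comment:

```python
def make_load_job_body(project_id, dataset_id, table_id):
    """Build the metadata body for a BigQuery load job; the compressed file
    itself is sent as a separate media stream, not inside this body."""
    return {
        "configuration": {
            "load": {
                "sourceFormat": "NEWLINE_DELIMITED_JSON",
                "destinationTable": {
                    "projectId": project_id,
                    "datasetId": dataset_id,
                    "tableId": table_id,
                },
            }
        }
    }

body = make_load_job_body("my-project", "my_dataset", "my_table")

# With an authorized v2 API service object, the resumable upload would look
# roughly like this (not executed here):
#   from googleapiclient.http import MediaFileUpload
#   media = MediaFileUpload("data.json.gz",
#                           mimetype="application/octet-stream",
#                           resumable=True)
#   service.jobs().insert(projectId="my-project", body=body,
#                         media_body=media).execute()
```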
You can upload gzipped files to Google Cloud Storage, and BigQuery will be able to ingest them with a load job:
https://developers.google.com/bigquery/loading-data-into-bigquery#loaddatagcs
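Whichever route you take, it can help to sanity-check the compressed file locally first, since a lexical error usually means some line is not valid JSON on its own. A small sketch, assuming a gzipped newline-delimited JSON file:

```python
import gzip
import json

def check_ndjson_gz(path):
    """Count the JSON records in a gzipped newline-delimited JSON file,
    raising ValueError at the first line that fails to parse."""
    count = 0
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # tolerate blank lines
            try:
                json.loads(line)
            except json.JSONDecodeError as exc:
                raise ValueError(f"line {lineno}: {exc}")
            count += 1
    return count
```

If this runs cleanly, the file is well-formed NEWLINE_DELIMITED_JSON; if it raises, the reported line number shows where the content diverges from what BigQuery expects.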
Is there a way to load a Gzipped file from Amazon S3 into Pentaho Data Integration (Spoon)?
There is a "Text File Input" that has a Compression attribute that supports Gzip, but this module can't connect to S3 as a source.
There is an "S3 CSV Input" module, but no Compression attribute, so it can't decompress the Gzipped content into tabular form.
Also, there is no way to save the data from S3 to a local file. The downloaded content can only be "hopped" to another Step, but no Step can read gzipped data from a previous Step; the Gzip-compatible steps all read only from files.
So, I can get gzipped data from S3, but I can't send that data anywhere that can consume it.
Am I missing something? Is there a way to unzip zipped data from a non-file source?
Kettle uses VFS (Virtual File System) when working with files. Therefore, you can fetch a file through http, ssh, ftp, zip, etc. and use it as a regular local file in all the steps that read files. Just use the right "url". You will find more here and here, and a very nice tutorial here. Also, check out the VFS transformation examples that come with Kettle.
This is the url template for S3: s3://<Access Key>:<Secret Access Key>@s3<file path>
In your case, you would use "Text file input" with the compression settings you mentioned, and the selected file would be:
s3://aCcEsSkEy:SecrEttAccceESSKeeey@s3/your-s3-bucket/your_file.gzip
I really don't know how, but if you really need this you can look into using S3 through the VFS capabilities that Pentaho Data Integration provides. I can see a vfs-providers.xml with the following content in my PDI CE distribution:
../data-integration/libext/pentaho/pentaho-s3-vfs-1.0.1.jar
<providers>
<provider class-name="org.pentaho.s3.vfs.S3FileProvider">
<scheme name="s3"/>
<if-available class-name="org.jets3t.service.S3Service"/>
</provider>
</providers>
You can also try the GZIP input step that Pentaho Kettle provides.