How to performance test an Update endpoint in JMeter while automatically updating a row value in the payload

I have an Update entity endpoint in my .NET Core microservice API that needs to be tested for performance. For all other endpoints I am able to store the ID in a CSV file and load it before processing. However, I want to reuse the values in the CSV for the update, which requires updating and keeping track of the Row Version attribute for each ID.
I will be testing with 100 Users and 100 Orders, so I will need to match each user to one order so that they don't try to update the same entity.
Steps:
Read CSV with ID and current row version
Call Update endpoint on the ID and row version, read in new Row Version from response body
Store the new row version and the ID within JMeter to reuse in the test
Call Update endpoint on the ID and new row version
The problem with storing the values back in the CSV is that JMeter would be reading from and writing to the same file. I am looking for a way to use a Java-like collection inside my script so that I don't have to read from and write to a file.
The dictionary would map each ID to its row version, e.g. {'q28937-3423572903485-324875': 42}.

Add a Post Processor as a child of your HTTP request (the first update) to extract the new ID and rowVersion.
Then, in the next update, use the JMeter variables ${ID} and ${rowVersion}, which hold the new values that you extracted with the Post Processor.
Note that variables are not shared between threads; from the JMeter User Manual, Best Practices (16.13):
"Variables are local to a thread; a variable set in one thread cannot be read in another. This is by design."
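If you prefer to do the extraction in script code rather than with a JSON/Regex extractor, a JSR223 PostProcessor gives you access to the vars and props objects. A minimal sketch, assuming the response body is JSON containing id and rowVersion fields (the field names are an assumption; adjust them to whatever your API returns):

```java
// JSR223 PostProcessor (Groovy engine; plain Java syntax also works), added as a
// child of the first Update request.
import java.util.Map;
import groovy.json.JsonSlurper;

Map response = (Map) new JsonSlurper().parseText(prev.getResponseDataAsString());

String id = String.valueOf(response.get("id"));
String rowVersion = String.valueOf(response.get("rowVersion"));

// Thread-local JMeter variables: each virtual user tracks its own entity,
// so the next Update sampler can simply reference ${ID} and ${rowVersion}.
vars.put("ID", id);
vars.put("rowVersion", rowVersion);

// Optional: props is a java.util.Properties instance shared by all threads,
// so it can serve as the in-memory "dictionary" (ID -> rowVersion) if you
// ever need to read the value from a different thread or thread group.
props.put("rowVersion_" + id, rowVersion);
```

Since each user works on its own order, the thread-local vars are usually enough; the props entry is only needed if a value has to cross thread boundaries.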
Also check:
the "Using RegEx (Regular Expression Extractor) with JMeter" guide
the "Using CSV DATA SET CONFIG" guide
the CSV Data Set Config section of the JMeter User Manual

Related

Need to automate a form with 1000 users at the same time

This is the link to the form: https://testapp-app.kloudsoft.co/survey/38e288e2-7957-4a05-b024-fb337df2f0f6. I have already used JMeter to do this, but in the backend only one user was reflected.
I am expecting 1000 different users in the backend.
I need a tool with which I can do this form automation.
If you recorded your test using the HTTP(S) Test Script Recorder, it's absolutely expected that you see only one user (the name and/or email that was used during recording).
You need to parameterize the credentials using one of the following approaches:
Generate 1000 users/emails, put them into a CSV file, and use the CSV Data Set Config to read them (see the sketch after this list)
Use JMeter Functions like __RandomString() to generate random users
Use Counter configuration element and/or __counter() function to generate an incremented number on each iteration or each time the function is called
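For the first option, a quick one-off sketch that generates the CSV file. The file name (users.csv) and the column layout (username,email) are assumptions; align them with the Filename and Variable Names fields of your CSV Data Set Config.

```java
// One-off helper that writes 1000 user/email pairs for a CSV Data Set Config.
import java.io.PrintWriter;

public class GenerateUsersCsv {
    public static void main(String[] args) throws Exception {
        try (PrintWriter out = new PrintWriter("users.csv")) {
            for (int i = 1; i <= 1000; i++) {
                // one row per virtual user: username,email
                out.printf("user%04d,user%04d@example.com%n", i, i);
            }
        }
    }
}
```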

Azure Data Factory - delete data from a MongoDb (Atlas) Collection

I'm trying to use Azure Data Factory (V2) to copy data to a MongoDb database on Atlas, using the MongoDB Atlas connector but I have an issue.
I want to do an Upsert but the data I want to copy has no primary key, and as the documentation says:
Note: Data Factory automatically generates an _id for a document if an _id isn't specified either in the original document or by column mapping. This means that you must ensure that, for upsert to work as expected, your document has an ID.
This means the first load works fine, but then subsequent loads just insert more data rather than replacing current records.
I also can't find anything native to Data Factory that would allow me to do a delete on the target collection before running the Copy step.
My fallback will be to create a small Function to delete the data in the target collection before inserting fresh, as below. A full wipe and replace. But before doing that I wondered if anyone had tried something similar before and could suggest something within Data Factory that I have missed that would meet my needs.
As per the documentation, you cannot delete multiple documents at once from the MongoDB Atlas UI. As an alternative, you can use the db.collection.deleteMany() method in the embedded MongoDB Shell to delete multiple documents in a single operation.
It has been recommended to use Mongo Shell to delete via query. To delete all documents from a collection, pass an empty filter document {} to the db.collection.deleteMany() method.
Eg: db.movies.deleteMany({})
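If you do go down the "small Function" fallback mentioned in the question, the wipe itself is a one-liner with the MongoDB Java driver. A minimal sketch, assuming mongodb-driver-sync is on the classpath; the connection string, database and collection names are placeholders:

```java
// Wipe the target collection before the Copy activity runs (full wipe and replace).
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class WipeCollection {
    public static void main(String[] args) {
        String uri = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net";
        try (MongoClient client = MongoClients.create(uri)) {
            long deleted = client.getDatabase("mydb")
                                 .getCollection("mycollection")
                                 .deleteMany(new Document()) // empty filter {} = all documents
                                 .getDeletedCount();
            System.out.println("Deleted " + deleted + " documents");
        }
    }
}
```

You could wrap this in an Azure Function and call it from the pipeline (e.g. with an Azure Function or Web activity) right before the Copy step.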

Azure Data Factory 2 : How to split a file into multiple output files

I'm using Azure Data Factory and am looking for the complement to the "Lookup" activity. Basically I want to be able to write a single line to a file.
Here's the setup:
Read from a CSV file in blob store using a Lookup activity
Connect the output of that to a For Each
within the For Each, take each record (a line from the file read by the Lookup activity) and write it to a distinct file, named dynamically.
Any clues on how to accomplish that?
Use a Data Flow: use the Derived Column transformation to create a filename column, then use that filename column in the sink. Details on how to implement dynamic file names in ADF with Mapping Data Flows are described here: https://kromerbigdata.com/2019/04/05/dynamic-file-names-in-adf-with-mapping-data-flows/
Data Flow would probably be better for this, but as a quick hack, you can do the following to read the text file line by line in a pipeline:
Define your source dataset to output a line as a single column. Normally I would use "NoDelimiter" for this, but that isn't supported by Lookup. As a workaround, define it with an incorrect Column Delimiter (like | or \t for a CSV file). You should also go to the Schema tab, and CLEAR the schema. This will generate a column in the output named "Prop_0".
In the foreach activity, set the Items to the Lookup's "output.value" and check "Sequential".
Inside the foreach, you can use item().Prop_0 to grab the text of the line.
To the best of my understanding, creating a blob isn't directly supported by pipelines [hence my suggestion above to look into Data Flow]. It is, however, very simple to do in Logic Apps. If I was tackling this problem, I would create a logic app with an HTTP Request Received trigger, then call it from ADF with a Web activity and send the text line and dynamic file name in the payload.
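For illustration, this is roughly what such an endpoint (a Logic App or a small Azure Function) would do with the payload: write one line of text to its own blob. A sketch using the azure-storage-blob 12.x SDK, where the container name, environment variable and method signature are assumptions:

```java
// Write a single text line to a dynamically named blob.
import com.azure.core.util.BinaryData;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;

public class WriteLineBlob {
    public static void writeLine(String fileName, String textLine) {
        BlobClient blob = new BlobClientBuilder()
                .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
                .containerName("output")
                .blobName(fileName)        // the dynamic file name from the pipeline
                .buildClient();
        blob.upload(BinaryData.fromString(textLine), true); // true = overwrite if it exists
    }
}
```

The ADF Web activity would then pass the dynamic file name and item().Prop_0 as the two values in its request body.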

How to merge sqlite3 session extension sessions?

I'm using the c API of the sqlite3 session extension and wondering if the session extension can be used to merge sqlite3 sessions that already have been written to file.
Following the tutorial referenced above I was able to register sqlite3 sessions by writing them to file one by one, e.g. for an UPDATE call I end up with a session file, and with another INSERT call I get another file, and so on. These transactions are triggered by UI button callbacks. I wonder if the session files could be somehow merged afterwards into one single session file, so that calling sqlite3changeset_apply() with this merged session file as its parameter I could end up with the same result as if I called sqlite3changeset_apply() on a list of session files. The reason I would like to do this is that I'd like to transfer only one session file instead of a folder of session files.
I tried iterating over a session list, calling sqlite3changeset_apply() successively on a copy of the original database while registering the session, but in that case I eventually get a session file with zero size (although the copy of the database contains all the expected changes).
I could not find anything on this in the official documentation nor on the web.
🧟‍♀️🧟🧟‍♂️ necromancy alert 🧟‍♂️🧟🧟‍♀️
You're probably looking for the sqlite3changeset_concat function:
This function is used to concatenate two changesets, A and B, into a single changeset. The result is a changeset equivalent to applying changeset A followed by changeset B.
There is also a streaming version, sqlite3changeset_concat_strm.
If you need to combine many changesets, you can use the type sqlite3_changegroup and its associated functions.

Variable values stored outside of SSIS

This is an SSIS question for advanced programmers. I have a SQL table that holds clientid, clientname, Filename, Ftplocationfolderpath, filelocationfolderpath.
This table holds a unique record for each of my clients. As my client list grows I add a new row in my sql table for that client.
My question is this: Can I use the values in my sql table and somehow reference each of them in my SSIS package variables based on client id?
The reason for the SQL table is that sometimes we get requests to change the delivery or the file name of a file we send externally. We would like to be able to change those things dynamically on the fly in the SQL table instead of having to export the package, manually change it, and re-import it each time. Each client has its own SSIS package.
Let me know if this is feasible... I'd appreciate any insight.
Yes, it is possible. There are two ways to approach this, and it depends on how the job runs: whether you are running for a single client per job run or for multiple clients per job run.
Either way, you will use the Execute SQL Task to retrieve data from the database and assign it to your variables.
You are running for a single client. This is fairly straightforward. In the Result Set, select the option for Single Row and map the single row's result to the package variables and go about your processing.
You are running for multiple clients. In the Result Set, select Full Result Set and assign the result to a single package variable that is of type Object - give it a meaningful name like ObjectRs. You will then add a ForEachLoop Enumerator:
Type: Foreach ADO Enumerator
ADO object source variable: Select the ObjectRs.
Enumerator Mode: Rows in all the tables (ADO.NET dataset only)
In Variable mappings, map all of the columns in their sequential order to the package variables. This effectively transforms the package into a series of single transactions that are looped.
Yes.
I assume that you run your package once per client or use some loop.
At the beginning of the "per client" code, read all required values from the database into SSIS variables and then use these variables to define what you need. You should not hardcode client-specific information in the package.