Handling Multiple Correlation IDs in LoadRunner - scripting

I am a beginner with LoadRunner 11.50. I was scripting a login page and then a logout under Action, and I applied correlation from Design Studio. The problem is that a single ID value is substituted into every request that takes the ID as a parameter. In reality, the ID generated by one request is passed into the next, and that later request generates another ID that is passed into all the requests after it. So the replay status is Failed. I guess multiple correlations are required in this case. Can anyone suggest anything?
Thanks in advance.

What you are likely referring to is a state variable, which is generated on page A and passed back on page A+1. You will need to handle this with manual correlation.
Automation in correlation is not meant to replace manual correlation, only to improve the developer's efficiency when the pattern is well known. In this case, because the point at which each value is populated and used keeps changing, you will need to address it manually.
As you are a beginner, this is an opportunity for your mentor to reground the material from your classroom training and work with you to reinforce your manual correlation processes and skills. If you are being asked to perform without a mentor then your management is setting you up to fail as a new person in this field.
Here is a podcast which should help you on the identification front for manual correlation.
http://www.perfbytes.com/dynamic-data-correlation

Place a web_reg_save_param before each request that generates a dynamic value, as in this case.
Example (the boundaries and URLs are placeholders; take the real values from your recording):
web_reg_save_param("param1")
A
web_reg_save_param("param2")
B --> Pass {param1} in B
web_reg_save_param("param3")
C --> Pass {param2} in C
and so on...
Also learn about capturing with different left/right boundaries (LB/RB), escape sequences, the other correlation-function arguments, and so on.

You can try LoadRunner's auto-correlation feature for now, but it will not work in all cases. Either way, you need to learn manual correlation methods.
This article will be helpful: http://www.guru99.com/correlation-in-loadrunner-ultimate-guide.html
First practice manual correlation from the generation log; later, with experience, you can correlate from the Tree view.

Related

How to invoke a custom ResultFilter before ClientErrorResultFilter is executed in ASP.NET 6

I spent almost a full day debugging why my client can't post any forms, until I found out the anti-forgery mechanism got borked on the client side and the server just responded with a 400 error, with zero logs or information (it turns out anti-forgery validation failures are logged internally at Info level).
So I decided the server needs to special-case this scenario; however, according to this answer, I don't really know how to do that (aside from hacking).
Normally I would set up an IAlwaysRunResultFilter and check for IAntiforgeryValidationFailedResult. Easy.
Except that I use API controllers, so by default all results get transformed into ProblemDetails. So context.Result as mentioned here is always of type ObjectResult. The solution accepted there is to use options.SuppressMapClientErrors = true;, but I want to retain this mapping at the end of the pipeline. And if this option isn't set to true, I have no idea how to intercept the Result in the pipeline before this transformation.
So in my case, I want to do something with the result of the anti-forgery validation as mentioned in the linked post, but after that I want to retain the ProblemDetails transformation. But my question is titled generally, as it is about executing filters before the aforementioned client mapping filter.
Through hacking I am able to achieve what I want. If we take a look at the source code, we can see that the filter I want to precede has an order of -2000. So if I register my global filter like this: o.Filters.Add(typeof(MyResultFilter), -2001);, then the filter shown here correctly executes before ClientErrorResultFilter, and thus I can handle the result and retain the transformation afterwards. However, I feel like this is just exploiting the open-source-ness of .NET 6, and of course, as you can see, it's an internal constant, so I have no guarantee the next patch doesn't change it and break my code. Surely there must be a proper way to order my filter to run before the API transform.

Bulk POST request without enumerating objects

I'm trying to let my API clients make a POST request that bulk-modifies objects whose IDs the client doesn't have.
I'm thinking of implementing the design below, but I don't feel good about it. Are there any better solutions?
POST url/objects/modify?name=foo
This request will modify all objects with the name foo
This can be a tricky thing to do with an API because it doesn't age very well.
By that I mean that over time, you might introduce more criteria for the data stored on resources (e.g., you can only set this field to "archived" if the create_time field is older than 6 months). When that happens, your bulk updates will start to only work on some resources and now you have to communicate that back to the person calling the API.
For example, for any failures you need to explain that the update worked for some resources (and list them out) but failed on others (and list them out) and the reason why for each failure (and remember you might have different failure conditions for different resources).
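To make that concrete, here is a minimal sketch in Java (the Obj type and the "archived" criterion are invented for illustration) of the per-resource bookkeeping a bulk modify ends up needing:

import java.util.*;

// Outcome of a bulk modify: callers need to know which objects changed,
// which failed, and why each failure happened.
class BulkResult {
    final List<String> modified = new ArrayList<>();
    final Map<String, String> failed = new LinkedHashMap<>(); // id -> failure reason
}

class Obj {
    String id, name, value;
    boolean archived;
}

class BulkModifier {
    // Modify every object matching 'name'; report (rather than silently skip)
    // the ones that violate some per-resource criterion.
    BulkResult modifyByName(String name, Collection<Obj> objects) {
        BulkResult result = new BulkResult();
        for (Obj o : objects) {
            if (!name.equals(o.name))
                continue;
            if (o.archived)
                result.failed.put(o.id, "cannot modify an archived object");
            else {
                o.value = "modified";
                result.modified.add(o.id);
            }
        }
        return result;
    }
}

The response then has to carry both lists back to the caller, and each new criterion you add grows the set of possible per-resource failure reasons.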
If you're set on going down this path, the closest thing I can think of is the "criteria-based delete" method shown here: https://google.aip.dev/165.

Listing current Ignite jobs and cancelling them

I got a partial answer here but not exactly what I wanted.
The link describes how to get a list of task futures, but what I'd really like to be able to do is list out and cancel individual jobs (that might be hung, long-running, etc.). I've seen another post implying that this is not possible, but I'd like to confirm (see the second link).
Thanks
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-obtain-a-list-of-executing-jobs-on-an-ignite-node-td8841.html
http://apache-ignite-users.70518.x6.nabble.com/Cancel-tasks-on-Ignite-compute-grid-worker-nodes-td5027.html
Yes, this is not possible, and actually I'm not sure how it could be done in the general case. Imagine there are 5 jobs running and you want to cancel one of them: how would you identify it? It seems very use-case specific to me.
However, you can always implement your own mechanism to do this. One possible way is to use the ComputeTaskSession API and task attributes: e.g., set a special attribute that acts as a signal for job cancellation, and create an attribute listener that stops job execution accordingly.
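A rough sketch of that idea (the attribute key and the unit of work are made up; note that the task class must be annotated with @ComputeTaskSessionFullSupport for session attributes to be propagated):

import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeTaskSession;
import org.apache.ignite.resources.TaskSessionResource;

public class CancellableJob implements ComputeJob {
    @TaskSessionResource
    private ComputeTaskSession ses;

    private volatile boolean cancelled;

    @Override public Object execute() {
        // Stop when someone sets the (hypothetical) "cancel" attribute on the shared session.
        ses.addAttributeListener((key, val) -> {
            if ("cancel".equals(key) && Boolean.TRUE.equals(val))
                cancelled = true;
        }, true); // 'true' replays attributes that were set before the listener was added

        while (!cancelled)
            doUnitOfWork(); // placeholder for one increment of the job's real work

        return null;
    }

    @Override public void cancel() {
        cancelled = true; // also honors Ignite's own cancellation path
    }

    private void doUnitOfWork() { /* ... */ }
}

Cancellation is then requested by calling ses.setAttribute("cancel", true) from the task or from another job sharing the same session.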

RESTful way to preallocate an ID

For various reasons, my application needs an API that flows like this:
The client calls the server to get an ID for a new resource.
Then the user spends a while filling out the forms for the resource.
Then the user clicks save (or not...), and when they do, the client saves by writing to /myresource/{id}.
What is the RESTful way to design this?
What should the first call look like? On the server side, it's a matter of generating an ID and returning it. It has side effects (it increments the sequence and thus "reserves space"), but it doesn't explicitly create a resource.
If I understand correctly, the 3rd call should be a PUT because it creates something with a known URI.
One way you could do it is:
client POSTs empty body to /myresource/
server answers with status code 302 (Found) with a Location response header set to /myresource/newresourceid (to indicate the ID / URI that should be used to create the resource)
client PUTs the new resource to /myresource/newresourceid once the user is done filling the form.
Seems RESTful enough. ;)
I'm interested to see the other answers to this question as I imagine there's a lot of ways to do this.
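For concreteness, that flow might look like this from the client using java.net.http (Java 11+); the host and payload are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReserveThenCreate {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER) // keep the 302 visible
                .build();

        // 1. POST an empty body; the server answers 302 with a Location header.
        HttpResponse<Void> reserve = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/myresource/"))
                        .POST(HttpRequest.BodyPublishers.noBody())
                        .build(),
                HttpResponse.BodyHandlers.discarding());

        URI resource = URI.create("https://example.com")
                .resolve(reserve.headers().firstValue("Location").orElseThrow());

        // 2. Later, once the form is filled in, PUT the resource to the reserved URI.
        client.send(
                HttpRequest.newBuilder(resource)
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"foo\"}"))
                        .build(),
                HttpResponse.BodyHandlers.discarding());
    }
}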
If possible I would let your auto-incrementing ID in the database serve as your surrogate key and assign another field to be your business identifier. It could be something like a product code or a GUID.
With this in mind, the client can now create the ID themselves, which removes the need for step 1 entirely. They would do a PUT to a URL such as /myresource/MLN5001 or /myresource/3F2504E0-4F89-11D3-9A0C-0305E82C3301 to create the resource. If the ID is already in use, return a 409 Conflict with the conflict described in the response body. Otherwise, return a 201 Created and include the URI of the resource in the response body and Location header.
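A quick sketch of that client-minted-ID variant (again java.net.http; the host and body are hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public class CreateWithClientId {
    public static void main(String[] args) throws Exception {
        String id = UUID.randomUUID().toString(); // client mints the business identifier itself

        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://example.com/myresource/" + id))
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"foo\"}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        if (resp.statusCode() == 409)
            System.out.println("ID already in use: " + resp.body());
        else if (resp.statusCode() == 201)
            System.out.println("Created at: " + resp.headers().firstValue("Location").orElse(""));
    }
}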
I would go with
GET /myresource/new-id
POST /myresource/{id}
Your walkthrough is pretty clear on the verb:
"to GET [an] ID for a new resource"
You could rename new-id to whatever you think makes it clearest. If you have multiple resources you need to do this for, it would probably be better to split the generator out into its own resource, such as
GET /id-generator/myresource
GET /id-generator/my-other-resource
If there are multiple cases, the user will quickly learn they need to hit id-generator to get their IDs. If it's only one case, it's annoying for them to have to use it only infrequently.
I guess you could also do
GET /myresource-id-generator/next
which looks a little clearer, but then if you ever need another ID to be generated you have to make another resource to do it.
ID allocation is non-idempotent (two invocations of the allocation operation will return different IDs), so that should always be a POST. From that point on, the resource should conceptually exist. However, what I'd do at that point is fill it out with reasonable default values (whether that involves POSTs or PUTs is rather immaterial to the RESTfulness of the overall design), so the user can then take their time to shape the resource the way they want it.
The question then becomes one of whether there should be some kind of “release this; I'm done with altering it” operation at the end. Strict RESTfulness says there shouldn't, as if you know the resource identifier (the URL) then you can talk about it. On the other hand, that doesn't mean the hosting server has to tell anyone else about the resource until the creating user is happy with it; general HATEOAS principles say nothing about when others can discover that a resource exists or whether knowing the name lets you read the thing, but it's entirely reasonable to deny to third parties that a resource exists until the owner of the resource turns on the “make this public” flag.

JMeter Tests and Non-Static GET/POST Parameters

What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables change on each run?
A quick, common example:
1. You go to a web page
2. Enter some information into a form
3. Click Save
4. Behind the scenes, a new record is entered in the database
5. You want to edit the record you just entered, so you go to another web page. Behind the scenes it's passing the page a parameter with the database ID of the row you just created
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is:
1. Record a test using the above actions
2. Make a note of each place where a query-string variable may change from run to run
3. Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable (see the sketch after this list)
4. Replace all appropriate instances of the hard-coded parameter with the above variable
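For illustration, the extraction in step 3 could be a JSR223 PostProcessor (available in current JMeter versions) along these lines; the regex and variable name are made up for the example:

// JSR223 PostProcessor body (Java-style syntax). 'prev' is the previous
// SampleResult and 'vars' the thread's JMeterVariables; both are standard bindings.
String body = prev.getResponseDataAsString();
java.util.regex.Matcher m = java.util.regex.Pattern.compile("id=(\\d+)").matcher(body);
if (m.find())
    vars.put("recordId", m.group(1)); // later samplers reference ${recordId}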
This works and can be automated to an extent. However, it can get tedious, error-prone, and fragile. Is there a better or commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
Sounds to me like you're on the right track. The best that can be achieved with JMeter is to extract page variables with a regular expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution and it becomes increasingly tricky to maintain or grow.
If you've reached this point, you may want to consider a tool which is more specialised for this sort of problem. Have a look at a web testing tool such as Watir; it will automatically handle changing post parameters. You would still need to extract parameters if you need to do a database update, but using Watir allows for better code reuse, making the problem less painful.
We have had great success testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion and JavaScript to do complex validation of the response. Hope it helps.
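In current JMeter versions the same validation idea fits a JSR223 Assertion; here is a small sketch (Java-style syntax, with a hypothetical expectedId variable populated earlier, e.g. by a JDBC step):

// JSR223 Assertion body. 'prev', 'vars', and 'AssertionResult' are standard bindings.
String body = prev.getResponseDataAsString();
String expectedId = vars.get("expectedId");
if (expectedId == null || !body.contains("id=" + expectedId)) {
    AssertionResult.setFailure(true);
    AssertionResult.setFailureMessage("response does not contain id=" + expectedId);
}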