I'm running tests using the Karate framework.
As a setup step, I need to create some entities using a REST API. I create them using callSingle in karate-config.js:
const result = karate.callSingle('classpath:path/to/createEntities.feature', config)
The feature has a Scenario Outline, defining various entities that need to be created. The REST API returns an ID for each entity that is created.
How can I save these IDs? I tried several solutions. For example, defining a variable in the Background section of the Scenario Outline doesn't work: it is overwritten by each example row, so only its last value is returned.
Background:
* def ids = {}
.....
Scenario Outline:
....
* set ids.<index> = response.id
In this example, the result will only have one value inside the ids map, for the last scenario.
Yes, a Scenario Outline is not designed to accumulate results. You might be able to append to a JSON array, but I leave that to you to experiment with.
One thing that may work, if you are into Java, is appending data to some singleton; refer: https://stackoverflow.com/a/54571844/143475
Otherwise I recommend you use a table, the example here is probably the best simple reference: https://github.com/karatelabs/karate#data-driven-features
So you can have a tabular set of data, drive a loop and get the results as an array, ready to return or do whatever.
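That table-plus-loop pattern could look roughly like this (the feature path, table columns and the response.id field are assumptions based on the question, not a definitive implementation):

```gherkin
Feature: create entities and collect their ids

Scenario:
    # tabular data driving the loop (columns are illustrative)
    * table entities
        | name  | type  |
        | 'one' | 'foo' |
        | 'two' | 'bar' |
    # calling a feature with a list of rows runs it once per row
    # and returns an array of result objects
    * def results = karate.call('classpath:path/to/createEntity.feature', entities)
    # collect the id returned by each call
    * def ids = karate.map(results, function(x){ return x.response.id })
```

Since karate.call returns one result object per row, ids ends up as a plain array that you can return from karate-config.js via callSingle.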
I want to understand what would be the best way to represent this in a RESTful way, taking into consideration that the codebase is a very large, inherited legacy project and I have to add a lot of new functionality on top of it.
The API definition is built with OpenAPI 3.
Let's consider the following example:
/v1/{customer}/types/{id}
But the Types collection also has a database constraint of Unique(customer, code) - customer and code being columns from the Types table.
What I need to implement now is a new endpoint that will retrieve a single entity, based on the customer path param and code path param, without having to use the ID path param.
It's a matter of reducing the number of calls, that's why I don't want to make use of the ID path param also.
One solution would be to use query params:
/v1/{customer}/types?code=123
But this will basically return a singleton list, so it's not that trivial and definitely not a best practice.
What would be your take on this? I know I should have the ID in the place where I want that entity to be returned, but in some cases I want to resolve this without having to do another call just to get the ID of the entity so I can call the initial endpoint.
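For what it's worth, the query-parameter variant could be sketched in OpenAPI 3 along these lines (the Type schema name is a placeholder):

```yaml
paths:
  /v1/{customer}/types:
    get:
      parameters:
        - name: customer
          in: path
          required: true
          schema:
            type: string
        - name: code
          in: query
          required: false
          schema:
            type: string
      responses:
        '200':
          description: Types matching the filter; at most one element when code is given
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Type'
```

Since (customer, code) is unique, the array is guaranteed to hold zero or one element when code is supplied, which is what makes the singleton-list shape feel awkward.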
There is a requirement that every time I run my automation, I first need to get a list of data from the DB (as it is very dynamic), put it in the Examples section of a scenario outline, and use it when running the feature file.
I have used a scenario outline as the same scenario needs to be run for multiple data sets.
So can you suggest if there is a way to do this?
Yes, you can try the dynamic scenario outline introduced in 0.9.X: https://github.com/intuit/karate#dynamic-scenario-outline
Examples:
| getDataFromDb() |
Note that there is an open bug for logs and a large number of rows: https://github.com/intuit/karate/issues/660
Else the normal looping over a second feature which you already know will work: https://github.com/intuit/karate#data-driven-tests
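A dynamic outline along those lines might look like this (assuming getDataFromDb() is defined, e.g. in karate-config.js, and returns a JSON array of rows each containing an id; baseUrl and the 'records' path are placeholders):

```gherkin
Feature: dynamic scenario outline driven by DB data

Background:
    # assumed to return an array of JSON rows, e.g. [{ "id": 1 }, { "id": 2 }]
    * def data = getDataFromDb()

Scenario Outline: validate record <id>
    Given url baseUrl
    And path 'records', <id>
    When method get
    Then status 200

    Examples:
        | data |
```

The single-cell Examples table refers to the variable defined in the Background, and one scenario is run per row of that array.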
I'm new to Backbone.js. I'm intrigued by the idea that you can just supply a URL to a collection and then proceed to create, update, delete, and get models from that collection and it handle all the interaction with the API.
In the small task-management sample applications and numerous demos I've seen on the web, it seems that collection.fetch() is used to pull down all models from the server and then do something with them. However, more often than not, in a real application you don't want to pull down hundreds of thousands or even millions of records by issuing a GET to the API.
Using the baked-in collection.sync method, how can I specify parameters to GET specific record sets? For example, I may want to GET records with a date of 2/1/2014, or GET records owned by a specific user id.
In this question, collection.find is used to do this, but does this still pull down all records to the client first then "finds" them or does the collection.sync method know to specify arguments when doing a GET to the server?
You do use fetch, but you provide options as seen in collection.fetch([options]).
So for example to obtain the one model where id is myIDvar:
collection.fetch({
    data: { id: myIDvar },
    success: function (collection, response, options) {
        // do a little dance;
    }
});
My offhand recollection is that find, findWhere and where all involve downloading all the models first, with the filtering then taking place on the client. I believe with fetch the filtering takes place on the server side.
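Under the hood, the data option is serialized into the query string of the GET request (Backbone delegates this to jQuery.ajax). A minimal sketch of that translation, just to illustrate the resulting URL, not Backbone's actual code:

```javascript
// Sketch: how a fetch `data` option maps onto the GET request URL.
function buildFetchUrl(baseUrl, data) {
  const qs = new URLSearchParams(data).toString();
  return qs ? baseUrl + '?' + qs : baseUrl;
}

// e.g. collection.fetch({ data: { id: 42 } }) against /api/records
console.log(buildFetchUrl('/api/records', { id: 42 }));
// -> /api/records?id=42
```

So the server sees an ordinary filtered GET, and it is the server's job to honor those parameters and return only the matching records.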
You can implement some kind of pagination on the server side and update your collection with a limited number of records. In this case all your data will stay up to date with the backend.
You can do this by overriding the fetch method with your own implementation, or by specifying params.
For example:
collection.fetch({ data: { page: 3 } });
You can also use the findWhere method here:
collection.findWhere(attributes)
I'm currently dealing with a problem while automating an integration test.
I would like to:
input data via Selenium RC
check that values are correct in the DB after import.
I've got a problem with the logic of those tests. Currently I do it as follows: in one test I generate random data (first name, last name, etc. of the person). Then, with a simple select from the DB, I obtain the unique person_id key (I assumed that if the first and last names are both 8-character-long randomly generated strings, I can treat them as unique) and then use this id in subsequent queries.
Is this method correct? If not, how can I deal with it?
What exactly is the purpose of this integration test?
If you're testing your DB adaptor layer, then is there any need to use the Web UI? You can exercise the adaptor directly.
If you're testing the Web UI is there any need to actually store data in a database? You can check the values using a mock (or some other sort of test double).
If you're doing an end-to-end smoke test, is it necessary to check the actual data values (over and above the success of the actual interaction)? If the answer is yes, maybe the test should be along the lines of:
Given I have registered as "Random Person"
When I retrieve my details
Then my name is displayed correctly.