I'm currently dealing with a problem while creating automated integration tests.
I would like to:
input data via Selenium RC
check whether the values in the DB are correct after the import.
I'm having trouble with the logic of these tests. At the moment I do it as follows: in one test I generate random data (first name, last name, etc. of the person). Then, with a simple SELECT from the DB, I obtain the unique person_id key (I assumed that if the first and last names are both randomly generated 8-character strings, I can treat them as unique) and then use this id in the subsequent queries.
Is this method correct? If not, how should I deal with it?
What exactly is the purpose of this integration test?
If you're testing your DB adaptor layer, then is there any need to use the Web UI? You can exercise the adaptor directly.
If you're testing the Web UI is there any need to actually store data in a database? You can check the values using a mock (or some other sort of test double).
If you're doing an end-to-end smoke test, is it necessary to check the actual data values (over and above the success of the interaction itself)? And if the answer is yes, maybe the test should be along the lines of:
Given I have registered as "Random Person"
When I retrieve my details
Then my name is displayed correctly.
I'm running tests using the Karate framework.
As a setup step, I need to create some entities using a REST API. I create them using callSingle in karate-config.js:
const result = karate.callSingle('classpath:path/to/createEntities.feature', config)
The feature has a Scenario Outline, defining various entities that need to be created. The REST API returns an ID for each entity that is created.
How can I save these IDs? I tried several solutions, for example defining a variable in the Background section of the Scenario Outline, but that doesn't work, as it is overwritten by each example and only the last value is returned.
Background:
* def ids = {}
.....
Scenario Outline:
....
* set ids.<index> = response.id
In this example, the result will only have one value inside the ids map, for the last scenario.
Yes, a Scenario Outline is not designed to accumulate results across examples. You might be able to append to a JSON array, but I'll leave that to you to experiment with.
One thing that may work, if you are comfortable with Java, is appending the data to some singleton; refer to: https://stackoverflow.com/a/54571844/143475
Otherwise, I recommend you use a table; the example here is probably the best simple reference: https://github.com/karatelabs/karate#data-driven-features
So you can have a tabular set of data, drive a loop over a second feature, and get the results back as an array, ready to return or do whatever you need; a sketch of this is shown below.
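As a rough sketch of that table-driven approach (the called feature name createEntity.feature, its columns and the id field of the response are assumptions, not from the original question):

* table entities
  | name  | type |
  | 'foo' | 'A'  |
  | 'bar' | 'B'  |
# call the (hypothetical) entity-creation feature once for each row of the table
* def result = call read('createEntity.feature') entities
# each element of 'result' holds the variables from one call, including 'response'
* def ids = $result[*].response.id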
There is a requirement that every time I run my automation, I first need to get a list of data from the DB (as it is very dynamic), put it in the Examples section of a Scenario Outline, and use it when running the feature file.
I have used a Scenario Outline as the same scenario needs to be run for multiple data sets.
So can you suggest whether there is a way to do this?
Yes, you can try the dynamic scenario outline introduced in 0.9.X: https://github.com/intuit/karate#dynamic-scenario-outline
Examples:
| getDataFromDb() |
Note that there is an open bug for logs and a large number of rows: https://github.com/intuit/karate/issues/660
Otherwise, the normal approach of looping over a second feature, which you already know, will work: https://github.com/intuit/karate#data-driven-tests
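For reference, a minimal sketch of what the dynamic outline could look like; the getDataFromDb function body, the DbUtils Java helper class and the column names are assumptions for illustration only:

Background:
# hypothetical JS function backed by a Java helper that returns a List of Maps (one per DB row)
* def getDataFromDb =
"""
function() {
  var DbUtils = Java.type('com.mycompany.DbUtils');
  return DbUtils.getRows('select id, name from some_table');
}
"""

Scenario Outline: process row <id>
* print 'processing', '<id>', '<name>'

Examples:
| getDataFromDb() |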
I have a table that represents a request sent through the frontend:
coupon_fetching_request
---------------------------------------------------------------
request_id | request_time | requested_by | request_status
Above, I created a simplified table to illustrate the issue.
Here request_status is an integer; it can have the following values:
1 : request successful
2 : request failed due to incorrect input data
3 : request failed in otp verification
4 : request failed due to internal server error
That table is very simple, and the status is used to let the frontend know what happened to the submitted request. I had a discussion with my team, and the other developers proposed that we should have a status representation table. On the database side we are not going to need this status description, but the team argued that in the future we may need to produce simple output from the database showing the status of all requests. According to the YAGNI principle, I don't think it is a good idea.
Currently I have written code to convert the returned request_status value to a descriptive value in the frontend. I tried to convince the team that I could create an enumeration in the business layer to represent the meaning of the status, or add documentation in the frontend and in the Java code, but I failed to convince them.
The proposed table is as follows:
coupon_fetching_request_status
---------------------------------------------------
status_id | status_code | status_description
My question is: is it necessary to create a table for such a simple status in similar cases?
I created this simple example to illustrate the problem. In reality, the table represents a Discount Coupon Code Request, with the status indicating whether the code was successfully fetched.
It really depends on your use case.
To start with: in your main table, you are already storing request_status as an integer, which is a good thing (if you were storing the whole description, like 'request successful', that would not be optimal).
The main question is: will you eventually need to display that data in a human-readable format?
If no, then it is probably useless to create a representation table.
If yes, then having a representation table would be a good thing, instead of adding code in the presentation layer to do the conversion; let the data live in the database, and let the frontend take care of presentation only.
Since this table can be easily created when needed, a pragmatic approach would be to hold on until you have a real need for the representation table.
You should create the reference table in the database. You currently have business logic on the application side, interpreting data stored in the database. This seems dangerous.
What does "dangerous" mean? It means that ad-hoc queries on the database might need to re-implement the logic. That is prone to error.
It means that if you add a reporting front end, then the reports have to re-implement the logic. That is prone to error and a maintenance nightmare.
It means that if you have another developer come along, or another module implemented, then the logic might need to be re-implemented. Red flag.
The simplest solution is to have a reference table that defines the official meanings of the codes. The application should use this table (via a join) to return the strings. The application should not be defining the meaning of codes stored in the database. YAGNI doesn't apply here, because the application needs this information so much that it already implements the logic itself. A sketch of what the reference table and join could look like follows.
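A minimal SQL sketch, using the names from the question; the symbolic status_code values are made up for illustration:

CREATE TABLE coupon_fetching_request_status (
    status_id          INT PRIMARY KEY,
    status_code        VARCHAR(30) NOT NULL,   -- hypothetical symbolic codes
    status_description VARCHAR(100) NOT NULL
);

INSERT INTO coupon_fetching_request_status (status_id, status_code, status_description) VALUES
    (1, 'SUCCESS',      'request successful'),
    (2, 'BAD_INPUT',    'request failed due to incorrect input data'),
    (3, 'OTP_FAILED',   'request failed in otp verification'),
    (4, 'SERVER_ERROR', 'request failed due to internal server error');

-- Any query (application code or ad-hoc report) resolves the code via a join
-- instead of re-implementing the mapping:
SELECT r.request_id, r.request_time, r.requested_by, s.status_description
FROM coupon_fetching_request r
JOIN coupon_fetching_request_status s ON s.status_id = r.request_status;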
I'm optimizing the memory load (~2GB, offline accounting and analysis routine) of this line:
l2 = Photograph.objects.filter(**(movie.get_selectors())).values()
Is there a way to convince Django to skip certain columns when fetching values()?
Specifically, the routine obtains all rows of the table matching certain criteria (the DB is optimized and performs this very quickly), but it is a bit too much for Python to handle: there is a long string referenced in each row, storing the URLs for thumbnails.
I only really need three fields from each row, but if all the fields are included, it suddenly consumes about 5 kB/row, which sadly pushes the RAM to the limit.
The values(*fields) function allows you to specify which fields you want.
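For example, reusing the line from the question (the field names 'id', 'title' and 'taken_at' are placeholders for the three fields actually needed):

l2 = (Photograph.objects
      .filter(**(movie.get_selectors()))
      .values('id', 'title', 'taken_at'))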
Check out the QuerySet method only(). When you declare that you only want certain fields to be loaded immediately, the QuerySet manager will not pull in the other fields of your objects until you try to access them.
If you also have to deal with ForeignKeys that must be pre-fetched, then check out select_related() as well.
The two links above to the Django documentation have good examples that should clarify their use.
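A rough sketch combining the two, assuming a hypothetical movie ForeignKey and placeholder field names (note that a field passed to select_related() must also be listed in only()):

l2 = (Photograph.objects
      .filter(**(movie.get_selectors()))
      .only('id', 'title', 'movie')   # defers everything else, including the heavy thumbnail URL field
      .select_related('movie'))       # fetches the related row in the same query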
Take a look at Django Debug Toolbar; it comes with a debugsqlshell management command that lets you see the SQL queries being generated, along with the time taken, as you play around with your models in a Django/Python shell.