Running Karate tests on different environments and per specific url - karate

I'm doing research for my QA project and I'm wondering whether Karate can handle certain use cases. Basically I need to run tests against different environments (local, staging, production). From what I understood of the documentation, that part is not a problem thanks to karate-config.js and karate-config-<env>.js.
The problem starts with the execution itself. Each environment has a different URL for each of 3 countries, so there are 9 URLs in total. Moreover, because of the development process, certain features are not deployed at the same time for all countries. So I want to be able to run tests against:
1 - staging for one country (one URL)
2 - staging for all countries (the same request against 3 URLs; I guess I can use parallel execution)
The JSON structure is the same for all environments and countries, and I want to execute one request with different configurations. I was thinking about data-driven testing, but I'm not sure if I can skip some rows of a Scenario Outline table when I'm executing tests for only one country. Is that possible? Or is there any other way? Any advice appreciated.

You can "tagify" Scenario Outline rows. See the docs: https://github.com/intuit/karate#tags-and-examples
Scenario Outline: examples partitioned by tag
* def vals = karate.tagValues
* match vals.region[0] == expected

  @region=US
  Examples:
    | expected |
    | US       |

  @region=GB
  Examples:
    | expected |
    | GB       |
Karate can handle pretty much any data-driven challenge you have, once you understand JSON, how to manipulate JSON, and how data-driven testing works. Here are some answers that will give you further ideas to consider:
https://stackoverflow.com/a/61685169/143475
https://stackoverflow.com/a/59162760/143475
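To tie this back to the original question: once the Examples rows are tagged per country and karate-config.js switches the base URL on karate.env, both "staging for one country" and "staging for all countries" can be driven from a plain Java runner. The following is only a sketch, assuming Karate 1.x, a hypothetical classpath:features folder and made-up @region tags:

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class StagingRunnerTest {

    @Test
    void stagingSingleCountry() {
        // only the Examples rows / scenarios tagged for one country, against staging
        Results results = Runner.path("classpath:features")
                .karateEnv("staging")   // picked up by karate-config.js
                .tags("@region=GB")     // hypothetical country tag
                .parallel(1);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }

    @Test
    void stagingAllCountries() {
        // no tag filter: all country rows run, spread over parallel threads
        Results results = Runner.path("classpath:features")
                .karateEnv("staging")
                .parallel(3);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}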

Related

How to loop get method multiple times as per values fetching from database in Karate Framework [duplicate]

There is a requirement that every time I run my automation I first need to get a list of data from the DB (as it is very dynamic), put it in the Examples section of a Scenario Outline, and use it when running the feature file.
I have used a Scenario Outline as the same scenario needs to be run for multiple data sets.
So can you suggest if there is a way to do this?
Yes, you can try the dynamic scenario outline introduced in 0.9.X: https://github.com/intuit/karate#dynamic-scenario-outline
Examples:
| getDataFromDb() |
Note that there is an open bug for logs and a large number of rows: https://github.com/intuit/karate/issues/660
Otherwise the normal approach of looping over a second feature, which you already know, will work: https://github.com/intuit/karate#data-driven-tests
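As a rough illustration of where a getDataFromDb() function could come from: one option is a small JDBC helper exposed to the feature via Karate's Java interop. Everything below (package, class name, connection string, query) is a placeholder, not part of the original answer:

package examples;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In the feature's Background this could be wired up as:
//   * def getDataFromDb = function(){ return Java.type('examples.DbUtils').getRows() }
// and the returned list of maps then drives the dynamic Scenario Outline.
public class DbUtils {

    public static List<Map<String, Object>> getRows() {
        List<Map<String, Object>> rows = new ArrayList<>();
        String url = "jdbc:h2:mem:testdb";          // placeholder connection string
        String sql = "select id, name from users";  // placeholder query
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                Map<String, Object> row = new HashMap<>();
                row.put("id", rs.getInt("id"));
                row.put("name", rs.getString("name"));
                rows.add(row);
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
        return rows;
    }
}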


How to execute a Cucumber Scenario / Feature multiple times?

I would like to know if it is possible, using cucumber-jvm, to execute a scenario / feature multiple times. Thanks.
You can use several approaches:
On the operating system level: run the command several times or write an appropriate shell script.
On the Cucumber level. E.g. you have the following feature file:
Feature: My great feature
Scenario: My scenario
Given My given step one
When My when step two
Then My then step three
You can force cucumber-jvm to run it several times, e.g. 3 times, by transforming the "Scenario" into a "Scenario Outline" and using "Examples":
Feature: My great feature
Scenario Outline: My scenario
Given My given step <number>
When My when step two
Then My then step three
Examples: to run this scenario several times
|number|
|one|
|one|
|one|
It looks somewhat artificial, but it works.
I use this approach to gather statistics for complex tests that depend on a lot of conditions.

how to list job ids from all users?

I'm using the Java API to query for all job ids using the code below
Bigquery.Jobs.List list = bigquery.jobs().list(projectId);
list.setAllUsers(true);
but it doesn't list the job ids that were run by the Client ID for web applications (i.e. Metric Insights). I'm using private key authentication.
Using the command line tool 'bq ls -j', in turn, gives me only the Metric Insights job ids but not the ones run with the private key auth. Is there a "get all" method?
The reason I'm doing this is trying to get better visibility into what queries are eating up our data usage. We have multiple sources of queries: metric insights, in house automation, some done manually, etc.
As of version 2.0.10, the bq client has support for API authorization using service account credentials. You can specify using a specific service account with the following flags:
bq --service_account your_service_account_here@developer.gserviceaccount.com \
--service_account_credential_store my_credential_file \
--service_account_private_key_file mykey.p12 <your_commands, etc>
Type bq --help for more information.
My hunch is that listing jobs for all users is broken, and nobody has mentioned it since there is usually a workaround. I'm currently investigating.
Jordan -- It sounds like you're homing in on what we want to do. For all access that we've allowed into our project/dataset we want to produce an aggregate/report of the "totalBytesProcessed" for all queries executed.
The problem we're struggling with is that we have a handful of distinct Java programs accessing our data, a 3rd party service (Metric Insights) and 7-8 individual users who have query access via the web interface. Fortunately the incoming data only has one source, so explaining the cost for that is simple. For queries, though, I am kinda blind at the moment (and it appears queries will be the bulk of the monthly bill).
It would be ideal if I could get the underlying data for this report with just one listing made with a single top-level auth. With that, I think that from the timestamps and the actual SQL text I can attribute each query to a source.
One thing that might make this problem far easier is if there were more information in the job record (or some text adornment in the job_id for queries). I don't see that I can assign my own jobIDs on queries (perhaps I missed it?) and perhaps recording some source information in the job record would be possible? Just thinking out loud now...
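For what it's worth, the per-job statistics come back on the same jobs.list response, so an aggregate like this could be built with the same Java client used in the question. This is only a sketch; in particular, the assumption that the "full" projection is needed for statistics to be populated should be verified:

import java.io.IOException;

import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.JobList;

class JobBytesReport {

    // Sums totalBytesProcessed over every job visible via jobs.list(allUsers=true).
    // Assumes an already-authenticated Bigquery client, as in the question.
    static long totalQueryBytes(Bigquery bigquery, String projectId) throws IOException {
        Bigquery.Jobs.List list = bigquery.jobs().list(projectId);
        list.setAllUsers(true);
        list.setProjection("full"); // assumption: include statistics in the listing
        long totalBytes = 0L;
        String pageToken = null;
        do {
            list.setPageToken(pageToken);
            JobList response = list.execute();
            if (response.getJobs() != null) {
                for (JobList.Jobs job : response.getJobs()) {
                    if (job.getStatistics() != null && job.getStatistics().getQuery() != null) {
                        Long bytes = job.getStatistics().getQuery().getTotalBytesProcessed();
                        if (bytes != null) {
                            totalBytes += bytes;
                        }
                    }
                }
            }
            pageToken = response.getNextPageToken();
        } while (pageToken != null);
        return totalBytes;
    }
}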
There are three tables you can query for this.
region-**.INFORMATION_SCHEMA.JOBS_BY_{USER, PROJECT, ORGANIZATION}
Where ** should be replaced by your region.
Example query for JOBS_BY_USER in the eu region:
select
  count(*) as num_queries,
  date(creation_time) as date,
  sum(total_bytes_processed) as total_bytes_processed,
  sum(total_slot_ms) as total_slot_ms_cost
from
  `region-eu.INFORMATION_SCHEMA.JOBS_BY_USER`
group by
  2
order by
  2 desc, total_bytes_processed desc;
Documentation is available at:
https://cloud.google.com/bigquery/docs/information-schema-jobs
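If you would rather pull those aggregates from Java than from the console, a sketch with the newer google-cloud-bigquery client (same query and region assumptions as above) might look like this:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class JobsByUserReport {

    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        String sql = "select count(*) as num_queries, date(creation_time) as date, "
                + "sum(total_bytes_processed) as total_bytes_processed "
                + "from `region-eu.INFORMATION_SCHEMA.JOBS_BY_USER` "
                + "group by date order by date desc";
        TableResult result = bigquery.query(QueryJobConfiguration.newBuilder(sql).build());
        for (FieldValueList row : result.iterateAll()) {
            // one line per day: number of queries and bytes billed against them
            System.out.printf("%s: %d queries, %d bytes%n",
                    row.get("date").getStringValue(),
                    row.get("num_queries").getLongValue(),
                    row.get("total_bytes_processed").getLongValue());
        }
    }
}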

Rough estimate of test cases

I'm curious how many test cases others have for a site similar to mine. It's your basic CRUD with business workflow website. 3 user roles, a couple input pages, a couple search pages, a business rule engine, etc. Maybe 50k lines of .NET code (workflow and persistence altogether). DB with about 10 main tables plus about 100 supporting tables (lookups, logs, etc.). The main UI for entering data is quite big, around 100 data fields, multiple grids, about 5 action/submit type buttons.
I know this is vague and I'm only hoping for order-of-magnitude figures. I'm also thinking of basic test cases, not code-coverage-type cases. But if I told you we had 25 test cases, I'm sure you'd say that's way, WAY not enough. So I'm just looking for ballpark figures.
TIA
I would have as many test cases as it takes to ensure a high level of confidence in the system.
The number of tables, rules, lines of code, etc is actually immaterial.
You should have the appropriate unit tests to ensure your domain objects and business rules are firing correctly. You should have tests to ensure your queries execute appropriately (this is a harder one).
You might even want to have test cases for paths through the software. In other words, click here, get this page, click there, edit a field, save the page, go back... This type is the most difficult as the tests are usually recorded and have to be rerecorded when the pages change (ie: a field is added or removed).
Generally speaking it's more about coverage than the number of tests. You want your tests to cover as much of the application's functionality as is feasible. Note that I didn't say possible. You can cover an entire application (100%) with test cases, but then for every little change, bug fix, etc. you'll have to recode those tests. This is more desirable for a mature app. For newer apps you don't want to hamstring your developers and QA team that way, as they'll spend inordinate amounts of time fixing/changing unit tests...
For any system, you could easily spend as much time developing your automated tests as you do the system itself. In some cases, even more.
As for our group, we tend to have lots of unit tests. However, for testing paths through the system we only record those once a particular area has moved into a "maintenance" type of mode. Meaning we expect little change for quite a while in that area and the path test is simply to ensure no one jacked it up.
UPDATE: the comments here led me to the following:
Going a little further: Let's examine 1 small piece of code:
Int32 AddNumbers(Int32 a, Int32 b) {
    return a + b;
}
On the face of it you could get away with a single test:
Int32 result = AddNumbers(1,2);
Assert.Equals(result, 3);
However, that probably isn't enough. What happens if you do this:
Int32 result = AddNumbers(Int32.MaxValue, 1);
Assert.Equals(result, (Int32.MaxValue+1));
Now we have a failure. Here's another one:
Int32 result = AddNumbers(Int32.MinValue, -1);
Assert.Equals(result, (Int32.MinValue-1));
So, we have an extremely simple method that requires at least 3 tests: the initial one to see if it can give any result at all, then 2 for bounds checking. That's 3 tests for essentially 2 lines of code (the method definition and the one-line computation).
As your code becomes more complex, things get really dicey:
Decimal DivideThis(Decimal a, Decimal b) {
    return Decimal.Divide(a, b);
}
This slight change introduces yet another exception condition beyond bounds: DivideByZero. So now we are up to 4 tests required for 2 lines of code.
Now, let's simplify it a bit:
String AppendData(String data, String toAppend) {
    return String.Format("{0}{1}", data, toAppend);
}
Our test case here is:
String result = AppendData("Hello", "World");
Assert.Equals(result, "HelloWorld");
That's just one test case for the code block, with no others really needed.
What does this tell us? For starters, 2 lines of code might require between 1 and 4 test cases. You mentioned 50k lines... Using that logic, you would need between 25,000 and 100,000 test cases...
Of course, life is rarely so simple. In those 50k lines of code you have, there are going to be large blocks of code with very limited inputs. For example, a mortgage interest calculator might take 3 parameters and return 1 value (the APR). The code itself might run 100 lines or so (it's been a while, just work with me). The number of test cases for this is going to be determined by edge cases, along the lines of making sure you properly handle rounding.
So, let's say it's 5 cases: which brings us to 20 lines of code = 1 case. Calculating that out your 50k lines might result in 2,500 test cases. Obviously much smaller than what we expected above.
Finally, I'm going to throw another wrinkle into the mix. Some test systems can take inputs and assertions from a data file. Considering our first example, we could have a data file with a line for each parameter combination we want to test. In this scenario, we only need 1 test case to cover 3 (or more) possible conditions.
The test case might look like (pseudo code):
read input file.
parse expected result, parameter 1, parameter 2
run method
assert method result = parsed result
repeat for each line of the file
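To make that concrete: here is a minimal sketch of the same idea using Java and JUnit 5 (rather than the C# used above), with a made-up CSV resource; each line of the file supplies the parameters and the expected result for one case:

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class AddNumbersDataDrivenTest {

    // Minimal stand-in for the method under test.
    static int addNumbers(int a, int b) {
        return a + b;
    }

    // One test method, many cases: each CSV line provides a, b and the expected sum.
    @ParameterizedTest
    @CsvFileSource(resources = "/add-numbers-cases.csv", numLinesToSkip = 1)
    void addsAccordingToDataFile(int a, int b, int expected) {
        assertEquals(expected, addNumbers(a, b));
    }
}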
With that capability, we are down to 1 test case per scenario. I would say 1 per method, but the reality is that most methods are rarely standalone and it's entirely possible that numerous methods are implicitly tested through explicit testing of others; therefore not requiring their own individual tests.
This leads me to this: It is impossible to determine the right number of test cases without a full understanding of your code base. 5 cases that are at the UI level might be enough for complete coverage depending on the complexity of the tests; or it might take thousands. Therefore it's much better to base it on code coverage. What percentage of the code, and branching logic, are you testing?
If I asked a car salesman for a rough price of a car and he just gave me a number, I wouldn't buy my car there, because he forgot to ask me some important questions: What kind of car do you want? Which extras do you want on the car? Etc.
The same goes for the number of test cases... If a hiring manager asked me that question, I would probably give the following answer:
#test cases = between #Requirements*2 and #Requirements*infinite (some requirements can lead to billions of possibilities)
I would also say that, based on my experience, the number would realistically be #Requirements*5 (that is the number I use in the initial phase, for projects with new, changed and omitted functionality),
where the following error margin has to be applied depending on the phase in which I am making the estimate:
Initiation phase: error margin = 400%
...
Testing phase: error margin = 10%
By the time you start the testing phase, detailed requirements/specs are available, the volatility of requirements has stabilized, requirements creep is almost zero, etc.
At that time I will also be able to give better estimates...