Difference between Scenario and Scenario Outline - Karate

I have created the feature below and I am calling it from another feature file:
#ignore
Feature: re-usable feature to create a single order
Scenario Outline: Create multiple users and verify their id, name and age
Given url 'https://arid-stage.****/sun-api//user/****'
And request { locale:'',offerId:'',operationType:'',paidTermDuration:'',paidTermDurationUnit:'',paymentCategory:'',storeOrderId:'',userId:'' }
When method post
Then status 200
Examples:
| locale | offerId | operationType | paidTermDuration | paidTermDurationUnit | paymentCategory | storeOrderId | userId |
| en_us | 7777777 | CREATE | 30 | MONTH | VENDOR_PAYMENT | localDate | 42DC198E5ABCE1430A494128 |
In the other feature, I'm calling it like this: * def result = call read('redeem-create.feature')
Questions:
This gets executed only when it is written as Scenario Outline; if I remove that and change it to a plain Scenario, it does not get executed.
When should I use Scenario Outline versus Scenario?
Any suggestions or ideas?

Why don't you read the documentation: https://github.com/intuit/karate#data-driven-tests
And also look at this example for a comparison: examples.feature
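In short: a plain Scenario runs exactly once, using whatever variables are already in scope (for example variables passed in via call), while a Scenario Outline runs once per Examples row, with each column name becoming a variable. A minimal sketch of the two side by side (baseUrl, the request body and the file name are placeholders, not your real API):
Feature: scenario vs scenario outline (sketch)

# runs exactly once; offerId must already be in scope,
# e.g. passed by the caller: call read('this.feature') { offerId: '7777777' }
Scenario: create a single order
Given url baseUrl
And request { offerId: '#(offerId)' }
When method post
Then status 200

# runs once per Examples row; the column offerId becomes a variable automatically
Scenario Outline: create one order per row
Given url baseUrl
And request { offerId: '<offerId>' }
When method post
Then status 200
Examples:
| offerId |
| 7777777 |
| 8888888 |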

Related

How can I better optimize this Kusto query to get my logs

I have the query below, which I am running to get logs for Azure Kubernetes, but it takes an hour to generate the logs and I am hoping there is a better way to write what I have already written. Can some Kusto experts advise here on how I can improve the performance?
AzureDiagnostics
| where Category == 'kube-audit'
| where TimeGenerated between (startofday(datetime("2022-03-26")) .. endofday(datetime("2022-03-27")))
| where (strlen(log_s) >= 32000
and not(log_s has "aksService")
and not(log_s has "system:serviceaccount:crossplane-system:crossplane")
or strlen(log_s) < 32000)
| extend op = parse_json(log_s)
| where not(tostring(op.verb) in ("list", "get", "watch"))
| where substring(tostring(op.responseStatus.code), 0, 1) == "2"
| where not(tostring(op.requestURI) in ("/apis/authorization.k8s.io/v1/selfsubjectaccessreviews"))
| extend user = op.user.username
| extend decision = tostring(parse_json(tostring(op.annotations)).["authorization.k8s.io/decision"])
| extend requestURI = tostring(op.requestURI)
| extend name = tostring(parse_json(tostring(op.objectRef)).name)
| extend namespace = tostring(parse_json(tostring(op.objectRef)).namespace)
| extend verb = tostring(op.verb)
| project TimeGenerated, SubscriptionId, ResourceId, namespace, name, requestURI, verb, decision, ['user']
| order by TimeGenerated asc
You could try starting your query as follows.
Please note the additional condition at the end.
AzureDiagnostics
| where TimeGenerated between (startofday(datetime("2022-03-26")) .. endofday(datetime("2022-03-27")))
| where Category == 'kube-audit'
| where log_s hasprefix '"code":2'
I assumed that code is an integer; in case it is a string, use the following (note the added quote before the 2):
| where log_s hasprefix '"code":"2'
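For context, here is a sketch of how that extra pre-filter might slot into the original query: the point is to throw rows away on the raw log_s text (cheap) before parse_json (expensive) runs. This assumes the 2xx response-code check is your most selective condition; whether the hasprefix term actually matches depends on how the JSON is laid out inside log_s, so verify the row counts first.
AzureDiagnostics
| where TimeGenerated between (startofday(datetime("2022-03-26")) .. endofday(datetime("2022-03-27")))
| where Category == 'kube-audit'
// cheap text filter first, so far fewer rows reach the JSON parsing below
| where log_s hasprefix '"code":2'
| where (strlen(log_s) >= 32000
    and not(log_s has "aksService")
    and not(log_s has "system:serviceaccount:crossplane-system:crossplane")
    or strlen(log_s) < 32000)
| extend op = parse_json(log_s)
| where not(tostring(op.verb) in ("list", "get", "watch"))
// ... the rest of the original query stays unchanged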

How to use dynamic values for Karate Features

I have a need to use dynamic values in the features of my Karate tests.
I've come across some questions and answers like this one: How to read input data from an excel spreadsheet and pass it JSON payload in karate framework?
But no matter how hard I try, I couldn't make it happen. I believe I should share the code I am trying to use, so that a discussion can start.
I have a SOAP request for creating new users as below:
<?xml version="1.0" encoding="utf-8"?>
<soapenv:Envelope xxxxxx>
<soapenv:Header/>
<soapenv:Body>
<int:createSubscriber soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<custBean xxxxx>
<accountNumber xsi:type="xsd:string">#(accountNo)</accountNumber>
<custName xsi:type="xsd:string" xs:type="type:string">Xbox</custName>
</custBean>
<addSubscriberBean xxxxx>
<subscriberID xsi:type="xsd:string">#(subsID)</subscriberID>
<password xsi:type="xsd:string" xs:type="type:string">0</password>
<areaID xsi:type="xsd:string" xs:type="type:string">1</areaID>
<lineOfCredit xsi:type="xsd:int" xs:type="type:int"></lineOfCredit>
<creditCycle xsi:type="xsd:int" xs:type="type:int"></creditCycle>
<points xsi:type="xsd:int" xs:type="type:int"></points>
<bandwidth xsi:type="xsd:int" xs:type="type:int"></bandwidth>
<physicalPorts xsi:type="xsd:string" xs:type="type:string">8080</physicalPorts>
<mobilePhoneNo xsi:type="xsd:string" xs:type="type:string">#(mobile)</mobilePhoneNo>
<stbCount xsi:type="xsd:int" xs:type="type:int">5</stbCount>
<oTTCount xsi:type="xsd:int" xs:type="type:int">10</oTTCount>
<subscriptionType xsi:type="xsd:string" xs:type="type:string">#(subsType)</subscriptionType>
</addSubscriberBean>
<sequenceID xxxxx>1234567840123422700</sequenceID>
</int:createSubscriber>
</soapenv:Body>
</soapenv:Envelope>
As you can see, I have some variables which are going to be supplied from outside: accountNo, subsID, subsType and mobile.
Now, I have a feature file where I call a SOAP service using the above file. I am assigning new values to all the variables of the request, so that I can create new users each time.
Here is the example:
Feature: Create Subscriber Feature End-To-End Scenario
Background:
* url SOAP_CREATE_SUBSCRIBER_HOST
* def accountNumber = '789'
* def subscriberID = '456'
* def userMsisdn = '123'
* def subscriptionType = 'ASD'
* def createUser = read('create-user-soap.xml') # This is the above one
* replace createUser
| token | value |
| #(accountNo) | accountNumber |
| #(subsID) | subscriberID |
| #(mobile) | userMsisdn |
| #(subsType) | subscriptionType |
Scenario: Create Subscriber
Given request createUser
When soap action SOAP_CREATE_SUBSCRIBER_HOST
Then status 200
And match //returnCode == 0
And match //returnMessage == 'The operation succeeded.'
However, I need to create a bunch of users, so I need to use dynamic variables and call my .xml file many times.
I checked the docs and answer here: How to read input data from an excel spreadsheet and pass it JSON payload in karate framework?
But I couldn't apply it to my situation.
Thanks in advance.
EDIT:
I am aware that I need to use a table, JSON, CSV or Excel kind of data holder and use it later, so below is my users table. I just don't know how to wire it into my feature file so that it can create many users.
* table userstable
| accountNo | subsID | mobile | subsType |
| '113888572' | '113985218890' | '1135288836' | 'asd' |
| '113888573' | '113985218891' | '1135288837' | 'qwe' |
| '113888582' | '113985218810' | '1135288846' | 'asd' |
| '883889572' | '883985219890' | '8835298836' | 'qwe' |
| '773888572' | '773985218890' | '7735288836' | 'asd' |
| '663888572' | '663985218890' | '6635288836' | 'qwe' |
| '553888572' | '553985218890' | '5535288836' | 'asd' |
| '443888572' | '443985218890' | '4435288836' | 'qwe' |
| '333888572' | '333985218890' | '3335288836' | 'asd' |
| '223888572' | '223985218890' | '2235288836' | 'qwe' |
| '165488572' | '175585218890' | '1114788836' | 'asd' |
EDIT 2:
After a deep dive into some answers and reading lots of docs, I've come up with the solution below. There should be a .feature file where you place your create method to drive the single-user creation mechanism. It is going to look like this:
#ignore
Feature: re-usable feature to create a single user
Background:
* url SOAP_CREATE_SUBSCRIBER_HOST
Scenario: Create single user
* match __arg == bulkusers[__loop]
* def createUser = read('xxxx')
Given request createUser
When soap action SOAP_CREATE_SUBSCRIBER_HOST
And request { accountNo: '#(accountNo)', subsID: '#(subsID)', mobile: '#(mobile)', subsType: '#(subsType)' }
Then status 200
So the above code can be thought of as a template. We then need another .feature file to call that template, and it is going to look like this:
Feature: call template feature.
Background:
* url SOAP_CREATE_SUBSCRIBER_HOST
Scenario: Use bulkusers table to create default users
* table bulkusers
| accountNo | subsID | mobile | subsType |
| '131451715' | '133451789134' | '5335167897' | 'asd' |
| '122452715' | '123452789124' | '5334287897' | 'qwe' |
| '124453715' | '123453789114' | '5334367817' | 'asd' |
* def result = call read('user-create.feature') bulkusers
* def created = $result[*].response
* match result[*].__loop == [0, 1, 2]
* match created[*].name == $bulkusers[*].name
* def createUser = read('xxx')
What this code achieves is that it feeds the bulkusers table to user-create.feature, so the user-create.feature template is called once per row, with that row's variables, until the table runs out.
I am providing a simplified example below, but I am sure you will find the answers to your questions here. It is easy to loop over data and build XML in Karate using the karate.set(varName, xPath, value) API:
* table users
| accountNo | subsID | mobile | subsType |
| '113888572' | '113985218890' | '1135288836' | 'asd' |
| '113888573' | '113985218891' | '1135288837' | 'qwe' |
| '113888582' | '113985218810' | '1135288846' | 'asd' |
* def xml = <users></users>
* def fun =
"""
function(u, i) {
var base = '/users/user[' + (i + 1) + ']/';
karate.set('xml', base + 'account', u.accountNo);
karate.set('xml', base + 'mobile', u.mobile);
karate.set('xml', base + 'type', u.subsType);
}
"""
* eval karate.forEach(users, fun)
* match xml ==
"""
<users>
<user>
<account>113888572</account>
<mobile>1135288836</mobile>
<type>asd</type>
</user>
<user>
<account>113888573</account>
<mobile>1135288837</mobile>
<type>qwe</type>
</user>
<user>
<account>113888582</account>
<mobile>1135288846</mobile>
<type>asd</type>
</user>
</users>
"""

Combining two rowsets in ADLA without join on clause

I've got two types of input files I'm loading into an ADLA job. In one, I've got a bunch of data (left) and in another, I've got a list of values that are important to me (right).
As an example here, let's say I'm using the following in my "left" rowset:
| ID | URL |
|----|-------------------------|
| 1 | https://www.google.com/ |
| 2 | https://www.yahoo.com/ |
| 3 | https://www.hotmail.com/|
I'll have something like the following in my right rowset:
| ID | Name | Regex | Exceptions | Other Lookup Val |
|----|-------|-------------|------------|------------------|
| 1 | ThisA | /[a-z]{3,}/ | abc | 091238 |
| 2 | ThatA | /[a-z]{3,}/ | xyz | lksdf9 |
| 3 | OtherA| /[a-z]{3,}/ | def | 098143 |
As each is loaded via its own EXTRACT statement, the two are in separate rowsets. Ideally, I'd like to be able to load all the values for both rowsets and loop through the right one to run a series of calculations against the left one to find a match per various business rules. Notably, there's no value to simply join on, nor is it a simple regex evaluation, but rather something a bit more involved. Thus, the output might just look something like the "left" rowset:
| ID | URL |
|----|-------------------------|
| 1 | https://www.google.com/ |
| 3 | https://www.hotmail.com/|
Now, a COMBINER is the only UDO I see that accepts two rowsets, but the U-SQL syntax requires that I do some sort of join statement here. There's no common identifier between the rowsets though, so there's nothing to join on, which suddenly makes this seem less ideal. Of the attribute options defined at https://learn.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide#use-user-defined-combiners, I'd like to specify this as Full because I'd need each of the left values available to evaluate against each of the right ones, but again, there's no shared identifier to do this on.
I then tried to use a REDUCER that accepted an IRowset in the IReducer constructor as a parameter, then tried to just pass the rowset in from the U-SQL, but it didn't like that syntax.
Is there any way to perform this custom combining in a manner that doesn't require a JOIN ON clause?
It sounds like you may be able to use an IProcessor. This would allow you to analyze each row in the RIGHT set and add a column (with a value based on your business rules) that you can subsequently use to join to the LEFT set.
[Adding a bit more detail]: You could also do this twice, once for the left and once for the right to create an artificial join column, like row_number or some such.
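To illustrate the artificial join column idea: if both sides get the same constant-valued column, an INNER JOIN on that column behaves like a cross join, so every right-hand rule row can be evaluated against every left-hand row (the actual business-rule evaluation would still live in your C# code-behind). A rough U-SQL sketch, with @left and @right standing in for your two EXTRACTed rowsets and the column names taken loosely from the tables above:
// give both rowsets the same constant key so they can be joined without a natural key
@leftKeyed =
    SELECT 1 AS JoinKey, ID, URL
    FROM @left;

@rightKeyed =
    SELECT 1 AS JoinKey, Name, Regex, Exceptions, OtherLookupVal
    FROM @right;

// every left row is now paired with every right row; apply the business rules afterwards
@pairs =
    SELECT l.ID, l.URL, r.Name, r.Regex, r.Exceptions
    FROM @leftKeyed AS l
         INNER JOIN @rightKeyed AS r
         ON l.JoinKey == r.JoinKey;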

How to do an SQL query dependent on username and url in Apache with authz_dbd for require dbd_group, and set group as http header

I'm having trouble using mod_authz_dbd in Apache to do access control with:
require dbd_group
I have an Apache reverse proxy that must do authentication and authorization to let users access their projects. The projects need an HTTP header containing the group, for role-based access control. Currently, I would like to authorize users if they have a valid group for a project. I use the following MySQL database with my Apache server:
______________ _________________ __________________
| User | | User's project | | Project |
|--------------| |-----------------| |------------------|
|PK | id |_ |PK | id | _|PK | id |
| | username | \_|FK | user_id | / | | url |
| | password | |FK | project_id |_/ | | project name |
|______________| | | group | |__________________|
|_________________|
The project URL is a relative URL like "/foo".
Each user can work on several projects and can have a different group for each;
groups are per project and can take one of three values:
admin
pm for the project manager
dev for the developers
To implement this, I have written the following in my configuration file:
# Get relative url in environment variable.
RewriteRule (.*) - [E=TARGET_URL:$1]
Require dbd-group "admin"
Require dbd-group "pm"
Require dbd-group "dev"
AuthzDBDQuery "SELECT group FROM user, project, users_project WHERE username = %s AND url='%{TARGET_URL}e' AND project.id = users_project.project_id AND user.id = users_project.user_id"
# Here set the group but how ?
RequestHeader set "X-Forwarded-Groups" "group"
But I have two problems:
First, using environment variables in the SQL query does not work. So how can I make a query with the username and URL as parameters in AuthzDBDQuery?
Second, how do I get the group from the AuthzDBDQuery so I can set it in an HTTP header?
I also tried a RewriteMap with dbd and then with prg using a Python script, but I have the same problem: I did not find how to pass two parameters.

Microsoft Access - Create a numerical sequence based on field value changes?

For query data like this:
+-------+---------+
| Name | Details |
| JEFF | TEST1 |
| JEFF | TEST2 |
| JEFF | TEST3 |
| BOB | TEST1 |
| BOB | TEST2 |
+-------+---------+
How do I query so that a numerical sequence (1, 2, 3...) can be added that resets back to 1 each time the name changes (i.e. from JEFF to BOB)?
Is it possible to use the DCOUNT function?
What I have so far is (it doesn't sequence correctly):
Number: (SELECT COUNT(*) FROM [dQuery]
WHERE [dQuery].[Name] = [dQuery].[Name]
AND [dQuery].[sequence] >= [dQuery].[sequence])
UPDATE1:
The correct query is:
SELECT [dQuery].Name, [dQuery].[sequence], (select count([dQuery].Name) + 1
from [dQuery] as dupe where
dupe.[sequence]< [dQuery].[sequence] and dupe.name = [dQuery].name
) AS [Corrected Sequence]
FROM [dQuery]
WHERE ((([dQuery].Name)="jeff"))
ORDER BY [dQuery].Name, [dQuery].[sequence];
Take a look here. I think the author has solved some very similar issues.
If you would like to add a serial number to your report dynamically, then create a report for the specific table and open the report in Design View. Then add a text box on the left side of the data row and set its Control Source property (on the "Data" tab) to "=1" (without a colon). Then change its Running Sum property (also on the "Data" tab) from "No" to "Over Group". At run time that text box will show the data in sequence, 1, 2, 3, in every row.
Thanks
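On the DCOUNT question: the corrected correlated subquery in UPDATE1 can also be written with the domain function, which is sometimes handier inside the Access query grid. A sketch, assuming [sequence] is numeric and Name is text (adjust the quoting otherwise):
SELECT [dQuery].Name, [dQuery].[sequence],
       DCount("*", "dQuery",
              "[Name]='" & [Name] & "' AND [sequence]<" & [sequence]) + 1 AS [Corrected Sequence]
FROM [dQuery]
ORDER BY [dQuery].Name, [dQuery].[sequence];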