Is it possible to generate scenario outline definition dynamically [duplicate] - karate

I currently use JUnit 5, WireMock and REST Assured for my integration tests. Karate looks very promising, yet I am struggling a bit with the setup of data-driven tests, as I need to prepare a nested data structure which, in the current setup, looks like the following:
abstract class StationRequests(val stations: Collection<String>) : ArgumentsProvider {
    override fun provideArguments(context: ExtensionContext): java.util.stream.Stream<out Arguments> {
        val now = LocalDateTime.now()
        val samples = mutableListOf<Arguments>()
        stations.forEach { station ->
            Subscription.values().forEach { subscription ->
                listOf(
                    *Device.values(),
                    null
                ).forEach { device ->
                    Stream.Protocol.values().forEach { protocol ->
                        listOf(
                            null,
                            now.minusMinutes(5),
                            now.minusHours(2),
                            now.minusDays(1)
                        ).forEach { startTime ->
                            samples.add(
                                Arguments.of(
                                    subscription, device, station, protocol, startTime
                                )
                            )
                        }
                    }
                }
            }
        }
        return java.util.stream.Stream.of(*samples.toTypedArray())
    }
}
Is there a preferred way to set up such nested data structures with Karate? I initially thought about defining 5 different arrays with sample values for subscription, device, station, protocol and startTime, and combining them into a single array to be used in the Examples: section.
I have not succeeded so far, though, and I am wondering if there is a better way to prepare such nested data-driven tests.

I don't recommend nesting unless absolutely necessary. You may be able to "flatten" your permutations into a single table, something like this: https://github.com/intuit/karate/issues/661#issue-402624580
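Since your generator is already Kotlin, one way to produce that flat table is to build the permutations as a list of maps on the JVM side and pull the result into Karate via its Java interop, e.g. * def data = Java.type('demo.Permutations').generate(stations). A minimal sketch, where demo.Permutations is a hypothetical helper and Subscription, Device and Protocol stand in for your own enums:
import java.time.LocalDateTime

object Permutations {
    // Flatten the nested loops from the question into one list of maps;
    // each map becomes one row of test data on the Karate side.
    @JvmStatic
    fun generate(stations: Collection<String>): List<Map<String, Any?>> {
        val now = LocalDateTime.now()
        val startTimes = listOf(null, now.minusMinutes(5), now.minusHours(2), now.minusDays(1))
        val rows = mutableListOf<Map<String, Any?>>()
        for (station in stations)
            for (subscription in Subscription.values())
                for (device in Device.values().toList() + null)
                    for (protocol in Protocol.values())
                        for (startTime in startTimes)
                            rows.add(mapOf(
                                "subscription" to subscription,
                                "device" to device,
                                "station" to station,
                                "protocol" to protocol,
                                "startTime" to startTime
                            ))
        return rows
    }
}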
That said, look out for the alternate option to Examples: which just might work for your case: https://github.com/intuit/karate#data-driven-features
EDIT: In version 1.3.0, a new @setup life cycle was introduced that changes the example below a bit.
Here's a simple example:
Feature:
Scenario:
* def data = [{ rows: [{a: 1},{a: 2}] }, { rows: [{a: 3},{a: 4}] }]
* call read('called.feature@one') data
and this is called.feature:
@ignore
Feature:

@one
Scenario:
* print 'one:', __loop
* call read('called.feature@two') rows

@two
Scenario:
* print 'two:', __loop
* print 'value of a:', a
This is how it looks in the new HTML report (which is in 0.9.6.RC2 and may need more fine-tuning), and it shows off how Karate can support "nesting" even in the report, which Cucumber cannot do. Maybe you can provide feedback and let us know if it is ready for release :)

Related

Using a JSON file in another JSON file (for code reusability)? Is it possible? (Karate/APITesting/SchemaValidation)

I'm working with a lot of API data and my plan is to do schema validation using Karate. Because I have many items which share some properties, I would like to create JSON files and "call" them in the principal JSON file where I have the whole schema.
I understand I could call each JSON in the feature file, but I would like to know if there is a way I can put all the schemas together, like a puzzle, from multiple JSON files into a single JSON file and call just that one in the feature file.
Thanks! P.S. Please save my ass!
Take a look at this example: https://github.com/ptrthomas/karate-test/tree/main/src/test/java/examples/reuse
So you can "compose" multiple JSON files in a re-usable feature file like this:
@ignore
Feature:
Scenario:
* def first = read('first.json')
* def second = read('second.json')
* def schema = { first: '#(first)', second: '#[] second' }
And then when you want to use this to match, note how the call is done in "shared" scope to keep things simple:
* call read('common.feature')
* def response = { first: { a: 1 }, second: [{ b: 'x' }] }
* match response == schema
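For context, hypothetical contents for the two JSON files that would make the match above pass (the linked example ships its own versions):
first.json: { a: '#number' }
second.json: { b: '#string' }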

Controller returning reactive paged result from total item count as Mono, items as Flux

I've got two endpoints returning the same data in two different JSON formats.
The first endpoint returns a JSON array, and starts the response right away.
@Get("/snapshots-all")
fun allSnapshots(
    @Format("yyyy-MM-dd") cutoffDate: LocalDate
): Flux<PersonVm> = snapshotDao.getAllSnapshots(cutoffDate)
The next endpoint, which returns a paged result, is more sluggish. It starts the response only when both streams have completed. It also requires a whole lot more memory than the previous endpoint, even though the previous endpoint returns all records from BigQuery.
@Get("/snapshots")
fun snapshots(
    @Format("yyyy-MM-dd") cutoffDate: LocalDate,
    pageable: Pageable
): Mono<Page<PersonVm>> = Mono.zip(
    snapshotDao.getSnapshotCount(cutoffDate),
    snapshotDao.getSnapshots(
        cutoffDate,
        pageable.size,
        pageable.offset
    ).collectList()
).map {
    CustomPage(
        items = it.t2,
        totalNumberOfItems = it.t1,
        pageable = pageable
    )
}
(Question update) BigQuery sits at the bottom of this endpoint. The strength of BigQuery compared to e.g. Postgres is querying huge tables; the weakness is relatively high latency for simple queries. Hence I'm running the queries in parallel in order to keep the latency of the endpoint at a minimum. Running the queries in sequence would add at least a second to the total processing time.
The question is: Is there a possible rewrite of the chain that will speed up the /snapshots endpoint?
Solution requirements (question update after suggested approaches)
The consumer of this endpoint is external to the project, and every endpoint in this project is documented at a detailed level. Hence, pagination may only occur once in the returned JSON. Otherwise, feel free to suggest new types for returning pagination along with the PersonVm collection.
If it turns out that another solution is impossible, that's an answer as well.
SnapshotDao#getSnapshotCount returns a Mono<Long>
SnapshotDao#getSnapshots returns a Flux<PersonVm>
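For reference, hypothetical DAO signatures consistent with the snippets in this question (parameter names and types inferred from the calls above):
interface SnapshotDao {
    fun getSnapshotCount(cutoffDate: LocalDate): Mono<Long>
    fun getSnapshots(cutoffDate: LocalDate, pageSize: Int, offset: Long): Flux<PersonVm>
    fun getAllSnapshots(cutoffDate: LocalDate): Flux<PersonVm>
}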
PersonVm is defined like this:
@Introspected
data class PersonVm(
    val volatilePersonId: UUID,
    val cases: List<PublicCaseSnapshot>
)
CustomPage is defined like this:
@Introspected
data class CustomPage<T>(
    private val items: List<T> = listOf(),
    private val totalNumberOfItems: Long,
    private val pageable: Pageable
) : Page<T> {
    override fun getContent(): MutableList<T> = items.toMutableList()
    override fun getTotalSize(): Long = totalNumberOfItems
    override fun getPageable(): Pageable = pageable
}
PublicCaseSnapshot is a complex structure, and left out for brevity. It should not be required for solving this issue.
Code used during testing of the approach suggested by @Denis
In this approach, the chain starts with SnapshotDao#getSnapshotCount, and is mapped into an HttpResponse instance with the response body containing the Flux<PersonVm> and the total item count in a header.
The queries now run in sequence, and numerous comparison tests between the code below and the existing code showed that the original code performs better (by approx. one second). Different page sizes were used during the tests, and BigQuery was warmed up by running the same query multiple times. The best results were recorded.
Please note that in cases where the time spent on the total item count query is negligible (or the total item count is cacheable) and pagination is not required to be part of the JSON, this should be considered a viable approach.
@Get("/snapshots-with-total-count-in-header")
fun snapshotsWithTotalCountInHeader(
    @Format("yyyy-MM-dd") cutoffDate: LocalDate,
    pageable: Pageable
): Mono<HttpResponse<Flux<PersonVm>>> = snapshotDao.getSnapshotCount(cutoffDate)
    .map { totalItemCount ->
        HttpResponse.ok(
            snapshotDao.getSnapshots(
                cutoffDate,
                pageable.size,
                pageable.offset
            )
        ).apply {
            headers.add("total-item-count", totalItemCount.toString())
        }
    }
You need to rewrite the method to return a publisher of the items. I can see a few options here:
Return the pagination information in a header. Your method will have the return type Mono<HttpResponse<Flux<PersonVm>>>.
Return the pagination information on every item: Flux<Tuple<PageInfo, PersonVm>> (a sketch of this option follows below).
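A minimal sketch of the second option, assuming hypothetical PageInfo and PagedItem wrapper types (neither is a Micronaut type). Note the trade-off: this form chains the count query before the item query, so it gives up the parallelism of the Mono.zip version above:
// Hypothetical wrapper types, not part of Micronaut:
data class PageInfo(val totalItemCount: Long, val pageSize: Int, val offset: Long)
data class PagedItem<T>(val page: PageInfo, val item: T)

@Get("/snapshots-with-inline-page-info")
fun snapshotsWithInlinePageInfo(
    @Format("yyyy-MM-dd") cutoffDate: LocalDate,
    pageable: Pageable
): Flux<PagedItem<PersonVm>> =
    snapshotDao.getSnapshotCount(cutoffDate)
        .flatMapMany { total ->
            // Repeat the same page metadata on every emitted item.
            val info = PageInfo(total, pageable.size, pageable.offset)
            snapshotDao.getSnapshots(cutoffDate, pageable.size, pageable.offset)
                .map { person -> PagedItem(info, person) }
        }
Pagination then appears once per item rather than once per response, so check that this still satisfies the documentation constraint mentioned in the question.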

Kotlin - Inject Android Room SQL language on multiple line queries

How can I get multi-line queries to be injected? It works in Room with Java classes, but does Kotlin support this as well?
E.g. I have 2 queries here, and only the top SQL query (1 line) gets injected.
I tried to follow the steps in this guide but could not find the required settings.
There is an issue at https://youtrack.jetbrains.com/issue/KT-13636 which suggests this is fixed, but I'm not sure how to implement the fix.
You can use a raw string which is more readable anyway:
@Dao
interface ItemDao {
    @Query("""
        SELECT * FROM Item
        WHERE Item.id = :id
    """)
    fun loadItemById(id: Long): LiveData<Item>
}
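If injection still does not kick in, one workaround is IntelliJ's @Language annotation (from the org.jetbrains:annotations artifact) applied to a constant that is then passed to @Query. A minimal sketch; whether your Android Studio version prefers the plain "SQL" language ID or a Room-specific one is an assumption worth verifying:
import org.intellij.lang.annotations.Language

// @Language forces IDE language injection on the string;
// Room only requires that the query is a compile-time constant.
@Language("SQL")
const val LOAD_ITEM_BY_ID = """
    SELECT * FROM Item
    WHERE Item.id = :id
"""

@Dao
interface ItemDaoWithConstant {
    @Query(LOAD_ITEM_BY_ID)
    fun loadItemById(id: Long): LiveData<Item>
}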

Sorted table with a map in Apache Ignite

I initially want to accomplish something simple with Ignite. I have a type like this (simplified):
case class Product(version: Long, attributes: Map[String, String])
I have a key for each one to store it by (it's one of the attributes).
I'd like to store them such that I can retrieve a subset of them between two version numbers or, at the very least, WHERE version > n. The problem is that the cache API only seems to support either retrieval by key or table scan. On the other hand, SQL99 doesn't seem to have any kind of map type.
I was thinking I'd need to use a binary marshaler, but the docs say:
There is a set of 'platform' types that includes primitive types, String, UUID, Date, Timestamp, BigDecimal, Collections, Maps and arrays of thereof that will never be represented as a BinaryObject.
So... maps are supported?
Here's my test code. It fails with java.lang.IllegalArgumentException: Cache is not configured: ignite-sys-cache, though. Any help getting a simple test working would really aid my understanding of how this is supposed to work.
Oh, and also, do I need to configure the schema in the Ignite config file? Or are the field attributes a sufficient alternative to that?
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.SqlQuery
import org.apache.ignite.cache.query.annotations.QuerySqlField
import org.apache.ignite.configuration.CacheConfiguration

import scala.annotation.meta.field
import scala.collection.JavaConverters._

case class Product(
  @(QuerySqlField @field)(index = true) version: Long,
  attributes: java.util.Map[String, String]
)

object Main {
  val TestProduct = Product(2L, Map("pid" -> "123", "foo" -> "bar", "baz" -> "quux").asJava)

  def main(args: Array[String]): Unit = {
    Ignition.setClientMode(true)
    val ignite = Ignition.start()
    val group = ignite.cluster.forServers
    val cacheConfig = new CacheConfiguration[String, Product]
    cacheConfig.setName("inventory1")
    cacheConfig.setIndexedTypes(classOf[String], classOf[Product])
    val cache = ignite.getOrCreateCache(cacheConfig)
    cache.put("P123", TestProduct)
    val query = new SqlQuery[String, Product](classOf[Product], "select * from Product where version > 1")
    val resultSet = cache.query(query)
    println(resultSet)
  }
}
Ignite supports querying by indexed fields. Since version is a regular indexed field, it should be feasible to run the queries you describe.
I've checked your code and it works on my side.
Please check that the Ignite version is consistent across all the nodes.
If you provide the full logs, I could take a look at it.
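For the range-retrieval part of the question (everything between two version numbers), here is a minimal sketch of a parameterized query against the same cache, shown in Kotlin to match the other snippets in this thread (SqlQuery and setArgs are the same Java API the Scala code already uses):
// Hypothetical range query: fetch products with 1 < version <= 5.
// The '?' placeholders are bound positionally by setArgs.
val rangeQuery = SqlQuery<String, Product>(Product::class.java, "version > ? and version <= ?")
    .setArgs(1L, 5L)

cache.query(rangeQuery).all.forEach { entry ->
    println("${entry.key} -> ${entry.value}")
}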