How can I write crate-wide documentation?

In order to ensure that all public artifacts of my crate are documented (if only minimally to start with), I specified #![deny(missing_docs)] in my lib.rs. This backfired.
I expected to write a documentation comment at the top and the code afterwards:
/// Hello world example for Rust.
#![deny(missing_docs)]
fn main() {
    println!("Hello world!");
}
This fails with:
error: an inner attribute is not permitted following an outer doc comment
--> src/main.rs:3:3
|
3 | #![deny(missing_docs)]
| ^
|
= note: inner attributes, like `#![no_std]`, annotate the item enclosing them, and are usually found at the beginning of source files. Outer attributes, like `#[test]`, annotate the item following them.
Reversing the order so that the attribute is first and the comment second:
#![deny(missing_docs)]
/// Hello world example for Rust.
fn main() {
    println!("Hello world!");
}
Also fails:
error: missing documentation for crate
--> src/main.rs:1:1
|
1 | / #![deny(missing_docs)]
2 | |
3 | | /// Hello world example for Rust.
4 | |
5 | | fn main() {
6 | | println!("Hello world!");
7 | | }
| |_^
|
note: lint level defined here
--> src/main.rs:1:9
|
1 | #![deny(missing_docs)]
| ^^^^^^^^^^^^
I could not find how to actually write documentation for the crate itself. How should I be writing the crate's documentation to satisfy #![deny(missing_docs)]?

I found the hidden nugget in the book's Publishing a Crate to Crates.io section.
Regular documentation comments (starting with ///) document the next item; a crate, however, is nobody's "next item".
The solution is to switch to using another kind of comment, this time starting with //!, which documents the enclosing item.
And suddenly it works:
#![deny(missing_docs)]
//! Hello world example for Rust.
fn main() {
    println!("Hello world!");
}
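For a library crate the same pattern applies in lib.rs: crate-level //! docs at the top, plus /// docs on every public item. A minimal sketch (the add function is purely illustrative):

#![deny(missing_docs)]
//! A tiny library demonstrating crate-level documentation.

/// Adds two numbers together.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}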

Related

How to compare each element in two different lists in Karate? [duplicate]

Is it possible to continue executing test steps even if one of the assert/match steps fails?
Ex:
Scenario: Testing
* def detail = {"a":{"data":[{"message":["push","dash"]},{"message":["data","Test"]}]}}
* match detail contains {"a":{"data":[{"message":["push","dash"]}]}}
* print detail
Here the match will fail, but execution stops at that point.
Is there a way to do a soft assertion so that the next step gets executed?
EDIT in 2021: a PR introducing a continueOnStepFailure flag was contributed by Joel Pramos and is available in Karate 1.0 onwards. You can find more details here: https://stackoverflow.com/a/66733353/143475
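A minimal sketch of how that flag is typically switched on (assuming Karate 1.0+, and reusing the detail example from the question; see the linked answer for authoritative details):

* configure continueOnStepFailure = true
* def detail = {"a":{"data":[{"message":["push","dash"]},{"message":["data","Test"]}]}}
* match detail contains {"a":{"data":[{"message":["push","dash"]}]}}
* print detail

With the flag enabled, the print step still executes even though the match before it fails.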
If you use a Scenario Outline, each "row" is executed even if one fails.
Scenario Outline: Testing
* def detail = { a: 1, b: 2, c: 3 }
* match detail contains <expected>
Examples:
| expected |
| { a: 1 } |
| { b: 2 } |
| { c: 3 } |
Note that the concept of "soft assertions" is controversial and some consider it a bad practice:
a) https://softwareengineering.stackexchange.com/q/7823
b) https://martinfowler.com/articles/nonDeterminism.html
For those looking for a way to show all mis-matches between 2 JSON objects, see this: https://stackoverflow.com/a/61349887/143475
And finally, since some people want to do "conditional" match logic in JS, see this answer also: https://stackoverflow.com/a/50350442/143475

Karate - How to construct two tables, using lines from each to validate against the other [duplicate]

I want to use a single row under Examples in Cucumber, like below:
Examples:
| data1 | data2 | paymentOp  |
| MySql | uk1   | ?????????? |
Where paymentOp is a number which I am getting from a Java method that takes a List as an argument. The method returns each of the numbers, which I want to pass under paymentOp.
There is a crude way to iterate by copying the row and pasting it again in the table, but I don't want that because the method has a dynamic result which may return 2 or 5 sets of numbers.
Is it possible to achieve this using Karate?
How do I proceed? Any lead here would be much appreciated!
You can combine Examples: with dynamic behavior. Please read this example (especially the second one): https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/outline/examples.feature
Since you have difficulties reading the docs and examples (:P), here is a simple example. Take some time to understand it carefully.
Background:
* def data = { one: 1, two: 2, three: 3 }

Scenario Outline:
* match data.<key> == <value>

Examples:
| key   | value |
| one   | 1     |
| two   | 2     |
| three | 3     |
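The dynamic flavor from the linked example works the same way, except the Examples table is fed by a variable defined in the Background: each JSON object in the list becomes one row. A minimal sketch (the paymentOps name and values are made up for illustration):

Background:
* def paymentOps = [{ op: 101 }, { op: 202 }, { op: 303 }]

Scenario Outline: payment op <op>
* print 'processing payment op:', op

Examples:
| paymentOps |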

Cypher: How to create a recursive cost query with alternates?

I have the following structure:
(:pattern)-[:contains]->(:pattern)
...basically a hierarchy of patterns that use other patterns as content. These constitute trees.
Certain patterns are generated by certain generators:
(:generator)-[:canProduce]->(:pattern)
The canProduce relationship has a cost value associated with it as a property. Multiple generators can create the same pattern.
I would like to figure out, with a query, what patterns I need to generate to produce a particular output - and which generators to choose to have the lowest cost. I started like this:
MATCH (p:pattern {name: 'preciousPattern'})-[:contains*]->(ps:pattern) RETURN ps
So far, so good. The results don't contain the starting pattern, so I made this:
MATCH (p:pattern {name: 'preciousPattern'})-[:contains*]->(ps:pattern)
WITH p + collect(ps) AS list
UNWIND list AS patterns
RETURN patterns
That does not feel elegant, and it also does not provide the hierarchy.
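As an aside, a variable-length pattern with a zero lower bound includes the starting node itself, which may be a more idiomatic way to get the same list. A sketch, assuming standard Cypher variable-length syntax:

MATCH (p:pattern {name: 'preciousPattern'})-[:contains*0..]->(ps:pattern)
RETURN ps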
I can of course do a path query (MATCH path = ...), but the results don't seem very useful.
Also, now I need to connect the cost from the generator relationship.
I tried this:
MATCH (p:pattern {name: 'awesome'})-[:contains*]->(ps:pattern)
WITH p + collect(ps) AS list
UNWIND list AS rec
CALL {
    WITH rec
    MATCH (rec)-[r:canGenerate]-(g:generator)
    RETURN r.GenCost AS GenCost, g.name AS GenName
}
RETURN rec.name, GenCost, GenName
The problem I have now is that if any of the patterns that are part of another pattern can be generated by multiple generators, I just get duplicate entries in the list, but what I want is separate lists for each alternative possibility, so that I can calculate the cost.
This is my pattern tree:
Awesome
    input1
    input2
    input3
Input 3 can be generated by 2 different generators. I now get:
Awesome | 2 | MainGen
input1 | 3 | TestGen1
input2 | 2.5 | TestGen2
input3 | 1.25 | TestGen3
input4 | 1.4 | TestGen4
What I want is this: two lists (or n, in the general case, where I might have n possible paths), one:
Awesome | 2 | MainGen
input1 | 3 | TestGen1
input2 | 2.5 | TestGen2
input3 | 1.25 | TestGen3
and one:
Awesome | 2 | MainGen
input1 | 3 | TestGen1
input2 | 2.5 | TestGen2
input4 | 1.4 | TestGen4
each set representing one alternative set, so that I can calculate the costs and compare.
I have no idea how to do something like that. Any suggestions?

How to apply a custom filtering function on a Spark DataFrame

I have a DataFrame of the form:
A_DF = |id_A: Int|concatCSV: String|
and another one:
B_DF = |id_B: Int|triplet: List[String]|
Examples of concatCSV could look like:
"StringD, StringB, StringF, StringE, StringZ"
"StringA, StringB, StringX, StringY, StringZ"
...
while a triplet is something like:
("StringA", "StringF", "StringZ")
("StringB", "StringU", "StringR")
...
I want to produce the Cartesian product of A_DF and B_DF, e.g.:
| id_A: Int | concatCSV: String | id_B: Int | triplet: List[String] |
| 14 | "StringD, StringB, StringF, StringE, StringZ" | 21 | ("StringA", "StringF", "StringZ")|
| 14 | "StringD, StringB, StringF, StringE, StringZ" | 45 | ("StringB", "StringU", "StringR")|
| 18 | "StringA, StringB, StringX, StringY, StringG" | 21 | ("StringA", "StringF", "StringZ")|
| 18 | "StringA, StringB, StringX, StringY, StringG" | 45 | ("StringB", "StringU", "StringR")|
| ... | | | |
Then keep just the records that have at least two substrings (e.g. StringA, StringB) from A_DF("concatCSV") that appear in B_DF("triplet"), i.e. use filter to exclude those that don't satisfy this condition.
First question is: can I do this without converting the DFs into RDDs?
Second question is: can I ideally do the whole thing in the join step, as a where condition?
I have tried experimenting with something like:
val cartesianRDD = A_DF
.join(B_DF,"right")
.where($"triplet".exists($"concatCSV".contains(_)))
but where cannot be resolved. I tried it with filter instead of where but still no luck. Also, for some strange reason, the type annotation for cartesianRDD is SchemaRDD and not DataFrame. How did I end up with that? Finally, what I am trying above (the short code I wrote) is incomplete, as it would keep records with just one substring from concatCSV found in triplet.
So, third question is: Should I just change to RDDs and solve it with a custom filtering function?
Finally, last question: Can I use a custom filtering function with DataFrames?
Thanks for the help.
CROSS JOIN is implemented in Hive, so you could first do the cross join using Hive SQL:
A_DF.registerTempTable("a")
B_DF.registerTempTable("b")
// sqlContext should be really a HiveContext
val result = sqlContext.sql("SELECT * FROM a CROSS JOIN b")
Then you can filter down to your expected output using two UDFs: one that converts your string into an array of words, and a second one that gives the length of the intersection of the resulting array column and the existing "triplet" column:
import scala.collection.mutable.WrappedArray
import org.apache.spark.sql.functions.{col, udf}

val splitArr = udf { (s: String) => s.split(",").map(_.trim) }
val commonLen = udf { (a: WrappedArray[String], b: WrappedArray[String]) =>
  a.intersect(b).length
}

val temp = result
  .withColumn("concatArr", splitArr(col("concatCSV")))
  .select(col("*"), commonLen(col("triplet"), col("concatArr")).alias("comm"))
  .filter(col("comm") >= 2)
  .drop("comm")
  .drop("concatArr")

temp.show
+----+--------------------+----+--------------------+
|id_A| concatCSV|id_B| triplet|
+----+--------------------+----+--------------------+
| 14|StringD, StringB,...| 21|[StringA, StringF...|
| 18|StringA, StringB,...| 21|[StringA, StringF...|
+----+--------------------+----+--------------------+
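As a side note, on Spark 2.1 and later the Hive round-trip is not strictly necessary: the DataFrame API exposes crossJoin directly. A sketch under that assumption, reusing the splitArr and commonLen UDFs defined above:

// cross join via the DataFrame API, then apply the same two-UDF filter
val filtered = A_DF
  .crossJoin(B_DF)
  .withColumn("concatArr", splitArr(col("concatCSV")))
  .filter(commonLen(col("triplet"), col("concatArr")) >= 2)
  .drop("concatArr")

filtered.show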

Executing all assertions in the same Spock test, even if one of them fails

I am trying to verify two different outputs in the context of a single Spock method that runs multiple test cases of the form when-then-where. For this reason I use two assertions in the then block, as can be seen in the following example:
import spock.lang.*

@Unroll
class ExampleSpec extends Specification {

    def "Authentication test with empty credentials"() {
        when:
        def reportedErrorMessage, reportedErrorCode
        (reportedErrorMessage, reportedErrorCode) = userAuthentication(name, password)

        then:
        reportedErrorMessage == expectedErrorMessage
        reportedErrorCode == expectedErrorCode

        where:
        name | password || expectedErrorMessage | expectedErrorCode
        ' '  | null     || 'Empty credentials!' | 10003
        ' '  | ' '      || 'Empty credentials!' | 10003
    }
}
The code is an example where the design requirement is that if name and password are ' ' or null, then I should always expect exactly the same expectedErrorMessage = 'Empty credentials!' and expectedErrorCode = 10003. If for some reason (presumably because of bugs in the source code) I get expectedErrorMessage = 'Empty!' (or anything else other than 'Empty credentials!') and expectedErrorCode = 10001 (or anything else other than 10003), this would not satisfy the above requirement.
The problem is that if both assertions fail in the same test, I get a failure message only for the first assertion (here for reportedErrorMessage). Is it possible to be informed about all failed assertions in the same test?
Here is a piece of code that demonstrates the same problem without other external code dependencies. I understand that in this particular case it is not a good practice to bundle two very different tests together, but I think it still demonstrates the problem.
import spock.lang.*

@Unroll
class ExampleSpec extends Specification {

    def "minimum of #a and #b is #c and maximum of #a and #b is #d"() {
        expect:
        Math.min(a, b) == c
        Math.max(a, b) == d

        where:
        a | b || c | d
        3 | 7 || 3 | 7
        5 | 4 || 5 | 4 // <--- both c and d fail here
        9 | 9 || 9 | 9
    }
}
Based on the latest comment by the OP, it looks like a solution different from my previous answer would be helpful. I'm leaving the previous answer in place, as I feel it still provides useful information related to the question (specifically, separating positive and negative tests).
Given that you want to see all failures, and not just have the test stop at the first assert that fails, I would suggest combining everything into a single boolean AND operation. Not with the short-circuiting && operator, because it stops evaluating at the first failing check; use & instead, so that all checks are made regardless of any previously failing checks.
Given the max and min example above, I would change the expect block to this:
Math.min(a, b) == c & Math.max(a, b) == d
When the failure occurs, it gives you the following information:
Math.min(a, b) == c & Math.max(a, b) == d
|        |  |  |  | | |        |  |  |  |
4        5  4  |  5 | 5        5  4  |  4
               |    |                false
               |    false
               false
This shows you every portion of the failing assert. By contrast, if you used the &&, it would only show you the first failure, which would look like this:
Math.min(a, b) == c && Math.max(a, b) == d
|        |  |  |  | |
4        5  4  |  5 false
               false
This could obviously get messy pretty fast if you have more than two checks on a single line - but that is a tradeoff you can make between all failing information on one line, versus having to re-run the test after fixing each individual component.
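If you are on Spock 1.1 or later, another option is verifyAll, which collects every failing assertion inside its closure and reports them all together. A minimal sketch under that assumption:

def "minimum and maximum of 5 and 4"() {
    expect:
    verifyAll {
        Math.min(5, 4) == 5   // fails, and is still reported together with...
        Math.max(5, 4) == 4   // ...this second failure
    }
}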
Hope this helps!
I think there are two different things at play here.
1. Having a failing assert in your code will throw an error, which halts execution of the test. This is why you can't see two failing assertions in a single test. Any line of code in an expect or then block in Spock has an implicit assert in front of it.
2. You are mixing positive and negative unit tests in the same test. I ran into this before myself, and I read/watched something about this in Spock (I believe from the creator, Peter Niederwieser), and learned that these should be separated into different tests. Unfortunately I couldn't find that reference. So basically, you'll need one test for failing use cases, and one test for passing/successful use cases.
Given that information, here is your second example, with the tests separated out and the failing scenario in the second test.
@Unroll
class ExampleSpec extends Specification {

    def "minimum of #a and #b is #c and maximum of #a and #b is #d - successes"() {
        expect:
        Math.min(a, b) == c
        Math.max(a, b) == d

        where:
        a | b || c | d
        3 | 7 || 3 | 7
        9 | 9 || 9 | 9
    }

    def "minimum of #a and #b is #c and maximum of #a and #b is #d - failures"() {
        expect:
        Math.min(a, b) != c
        Math.max(a, b) != d

        where:
        a | b || c | d
        5 | 4 || 5 | 4
    }
}
As far as your comment about the MongoDB test case - I'm not sure what the intent is there, but I'm guessing they are making several assertions that are all passing, rather than validating that something is failing.