"Undeclared requirements" error when running problem file with planner - error-handling

I keep getting an "undeclared requirement :typing" error when I run my problem file with a planner, even though I've already declared it in my domain file
The requirements in my domain file:
(:requirements :typing :types :durative-actions :fluents :numeric-fluents)
(:types patient surgeon rooms speciality injury)
In my problem file:
(define (problem surgery)
  (:domain emergency_room)
  (:objects
    patient1 patient2 patient3 - patient
    surgeon1 surgeon2 surgeon3 - surgeon
    trauma dental cardio - speciality
    heart tooth accident - injury)

Which planner do you use?
:typing means that the domain uses types that you define below. But to the best of my knowledge, a :types requirement does not exist; :types should only be used as the keyword for declaring the actual types. Therefore, you probably need to remove :types from the requirements section.
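For illustration, the corrected declarations might then look like this (keeping the rest of your domain exactly as posted; whether your planner supports every remaining requirement is a separate question):
(:requirements :typing :durative-actions :fluents :numeric-fluents)
(:types patient surgeon rooms speciality injury)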
Also note that most planners are quite robust against missing requirements tags. So sometimes it's a good idea to omit some of them.

Here is the list of requirements supported by PDDL 1.2
https://planning.wiki/ref/pddl/requirements#typing
Indeed, :typing is the planner requirement, :types is the keyword for declaring object types inside the PDDL domain.
For requirements specified by later PDDL versions, search this page for List of PDDL x.y Requirements.
https://planning.wiki/ref/pddl/requirements


Is this conversion from BNF to EBNF correct?

As context, my textbook uses this style for EBNF:
Sebesta, Robert W. Concepts of Programming Languages 11th ed., Pearson, 2016, 150.
The problem:
Convert the following BNF rule with three RHSs to an EBNF rule with a single RHS.
Note: Conversion to EBNF should remove all explicit recursion and yield a single RHS EBNF rule.
A ⟶ B + A | B – A | B
My solution:
A ⟶ B [ (+ | –) A ]
My professor tells me:
"First, you should use { } instead of [ ],
Second, according to the BNF rule, <"term"> is B." (He is referring to the style guide posted above.)
Is he correct? I assume so but have read other EBNF styles and wonder if I am entitled to credit.
You were clearly asked to remove explicit recursion and your proposed solution doesn't do that; A is still defined in terms of itself. So independent of naming issues, you failed to do the requested conversion and your prof is correct to mark you down for it. The correct solution for the problem as presented, ignoring the names of non-terminals, is A ⟶ B { (+ | –) B }, using indefinite repetition ({…}) instead of optionality ([…]). With this solution, the right-hand side of the production for A only references B, so there is no recursion (at least, in this particular production).
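For concreteness, expanding the repetition in A ⟶ B { (+ | –) B } zero, one, and two times yields, for example:
B
B + B
B + B – B
These are the same strings the original recursive rule generates; only the implied grouping differs.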
Now, for naming: clearly, your textbook's EBNF style is to use angle brackets around the non-terminal names. That's a common style, and many would say that it is more readable than using single capital letters which mean nothing to a human reader. Now, I suppose your prof thinks you should have changed the name of B to <term> on the basis that that is the "textbook" name for the non-terminal representing the operand of an additive operator. The original BNF you were asked to convert does show the two additive operators. However, it makes them right-associative, which is definitely non-standard. So you might be able to construct an argument that there's no reason to assume that these operators are additive and that their operands should be called "terms" [Note 1]. But even on that basis, you should have used some name written in lower-case letters and surrounded in angle brackets. To me, that's minor compared with the first issue, but your prof may have their own criteria.
In summary, I'm afraid I have to say that I don't believe you are entitled to credit for that solution.
Notes
1. If you had actually come up with that explanation, your prof might have been justified in suggesting a change of major to Law.

ABNF rule `zero = ["0"] "0"` matches `00` but not `0`

I have the following ABNF grammar:
zero = ["0"] "0"
I would expect this to match the strings 0 and 00, but it only seems to match 00? Why?
repl-it demo: https://repl.it/#DanStevens/abnf-rule-zero-0-0-matches-00-but-not-0
Good question.
ABNF ("Augmented Backus Naur Form"9 is defined by RFC 5234, which is the current version of a document intended to clarify a notation used (with variations) by many RFCs.
Unfortunately, while RFC 5234 exhaustively describes the syntax of ABNF, it does not provide much in the way of a clear statement of semantics. In particular, it does not specify whether ABNF alternation is unordered (as it is in the formal language definitions of BNF) or ordered (as it is in "PEG" -- Parsing Expression Grammar -- notation). Note that optionality/repetition are just types of alternation, so if you choose one convention for alternation, you'll most likely choose it for optionality and repetition as well.
The difference is important in cases like this. If alternation is ordered, then the parser will not back up to try a different alternative after some alternative succeeds. In terms of optionality, this means that if an optional element is present in the stream, the parser will never reconsider the decision to accept the optional element, even if some subsequent element cannot be matched. If you take that view, then alternation does not distribute over concatenation. ["0"]"0" is precisely ("0"/"")"0", which is different from "00"/"0". The latter expression would match a single 0 because the second alternative would be tried after the first one failed. The former expression, which you use, will not.
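To make the difference concrete, here is a rough Python sketch of the two readings of ["0"] "0" (not tied to any ABNF library; the helper names are invented for illustration):
def match_ordered(s):
    # PEG-style (ordered) reading: if the optional "0" can be consumed, it is,
    # and that decision is never revisited.
    i = 1 if s.startswith("0") else 0   # ["0"] greedily takes a "0" if present
    if s[i:i + 1] == "0":               # then the mandatory "0"
        return i + 1                    # number of characters matched
    return None                         # no backtracking, so "0" alone fails

def match_unordered(s):
    # BNF/CFG-style (unordered) reading: try both expansions of ["0"], i.e. "0" and "".
    for prefix in ("0", ""):
        if s.startswith(prefix) and s[len(prefix):len(prefix) + 1] == "0":
            return len(prefix) + 1
    return None

print(match_ordered("0"), match_unordered("0"))    # None 1 -> only the unordered reading accepts "0"
print(match_ordered("00"), match_unordered("00"))  # 2 2    -> both accept "00"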
I do not believe that the authors of RFC 5234 took this view, although it would have been a lot more helpful had they made that decision explicit in the document. My only real evidence to support my belief is that the ABNF included in RFC 5234 to describe ABNF itself would fail if repetition was considered ordered. In particular, the rule for repetitions:
repetition = [repeat] element
repeat = 1*DIGIT / (*DIGIT "*" *DIGIT)
cannot match 7*"0", since the 7 will be matched by the first alternative of repeat, which will be accepted as satisfying the optional [repeat] in repetition, and element will subsequently fail.
In fact, this example (or one similar to it) was reported to the IETF as an erratum in RFC 5234, and the erratum was rejected as unnecessary, because the verifier believed that the correct parse should be produced, thus providing evidence that the official view is that ABNF is not a variant of PEG. Apparently, this view is not shared by the author of the APG parser generator (who also does not appear to document their interpretation). The suggested erratum chose roughly the same solution as you came up with:
repeat = *DIGIT ["*" *DIGIT]
although that's not strictly speaking the same; the original repeat cannot match the empty string, but the replacement one can. (Since the only use of repeat in the grammar is optional, this doesn't make any practical difference.)
(Disclosure note: I am not a fan of PEG. So it's possible the above answer is not free of bias.)

How to find an example instance of a cyclic dependency in jQAssistant?

I found the output of the dependency:packageCycles constraint shipped with jQAssistant hard to interpret. Specifically I'm keen on finding an example instance of classes that make up the cyclic dependency.
Given that I found a cycle of packages, for each pair of adjacent packages I would need to find two classes that connect them.
This is my first attempt at the Cypher query, but there are still some relevant parts missing:
MATCH nodes = (p1:Package)-[:DEPENDS_ON]->(p2: Package)-[:DEPENDS_ON*]->(p1)
WHERE p1 <> p2
WITH extract(x IN relationships(nodes) |
(:Type)<--(:Package)-[x]->(:Package)-->(:Type)) AS cs
RETURN cs
Specifically, in order to really connect the two packages, the two types should be related to each other with DEPENDS_ON as shown below:
(:Type)<--(:Package)-[x]->(:Package)-->(:Type)
   |                                      ^
   |              DEPENDS_ON              |
   +--------------------------------------+
For the above pattern I would have to return the two types (and not the packages, for instance). Preferably, the output for a single cyclic dependency consists of a list of qualified class names (otherwise one cannot possibly distinguish the class chains of more than one cyclic dependency).
For this specific purpose I find Cypher to be very limited: support for identifying and collecting new graph patterns during path traversal does not seem to be the easiest thing to do. Also, the attempt to give names to the (:Type) nodes resulted in syntax errors.
I also messed around a lot with UNWIND, but to no avail. It lets you introduce new MATCH clauses on a per-element basis (say, for the elements of relationships(nodes)), but I do not know of a method to undo the damaging effect of UNWIND: the surrounding list structure is removed, so the traces of multiple cyclic dependencies merge into each other. Additionally, the results appear permuted to me. That being said, the query below is conceptually very close to what I am trying to achieve, but it does not work:
MATCH nodes = (p1:Package)-[:DEPENDS_ON]->(p2: Package)-[:DEPENDS_ON*]->(p1)
WHERE p1 <> p2
WITH relationships(nodes) as rel
UNWIND rel AS x
MATCH (t0:Type)<-[:CONTAINS]-(:Package)-[x]->(:Package)-[:CONTAINS]->(t1:Type),
(t0)-[:DEPENDS_ON]->(t1)
RETURN t0.fqn, t1.fqn
I do appreciate that there seems to be some scripting support within jQAssistant. However, this would really be my last resort, since it is surely more difficult to maintain than a Cypher query.
To rephrase it: given a path, I'm looking for a method to identify a sub-pattern for each element, project a node out of that match, and collect the results.
Do you have any ideas on how could one accomplish that with Cypher?
Edit #1: Within one package, one also has to consider that the class that is the target of the inbound DEPENDS_ON edge may not be the same class that is the source of the outgoing edge. In other words, as a result:
two classes of the same package may be part of the trace;
if one wants to express the cyclic dependency trace as a path, one must take into account detours that navigate between classes of the same package. For instance (edges in bold mark package entry / exit; a DEPENDS_ON edge between the two types is absent):
-[:DEPENDS_ON]->(:Type)<-[:CONTAINS]-(:Package)-[:CONTAINS]->(:Type)-[DEPENDS_ON]->
Maybe it gets a little clearer with a picture (not reproduced here):
Clearly "a, b, c" is a package cycle and "TestA, TestB1, TestB2, TestC" is a type-level trace for justifying the package-level dependency.
The following query extends the package cycle constraint by drilling down to the type level:
MATCH
(p1:Package)-[:DEPENDS_ON]->(p2:Package),
path=shortestPath((p2)-[:DEPENDS_ON*]->(p1))
WHERE
p1 <> p2
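// p1 and p2 are two distinct packages lying on a package-level dependency cycle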
WITH
p1, p2
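// drill down to the type level: t1 (contained in p1) depends on t2 (contained in p2),
// and a DEPENDS_ON path leads from t2 back to some type t3 that is also contained in p1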
MATCH
(p1)-[:CONTAINS]->(t1:Type),
(p2)-[:CONTAINS]->(t2:Type),
(p1)-[:CONTAINS]->(t3:Type),
(t1)-[:DEPENDS_ON]->(t2),
path=shortestPath((t2)-[:DEPENDS_ON*]->(t3))
RETURN
p1.fqn, t1.fqn, EXTRACT(t IN nodes(path) | t.fqn) AS Cycle
I'm not sure how well the query will work on large projects; we'll need to give it a try.
Edit 1: Updated the query to match on any type t3 which is located in the same package as t1.
Edit 2: The answer is not correct, see discussion below.

CHM/HHP: maximum length of variable names in [ALIAS] section

What is the maximum length of variable names in the [ALIAS] section of HHP files?
I_AM_WONDERING_ABOUT_THE_MAXIMUM_LENGTH_OF_THIS_STRING_RIGHT_HERE=this-is-some-really-helpful-html-file.html
I have found a CHM/HHP specification right here:
https://www-user.tu-chemnitz.de/~heha/viewchm.php/hs/chmspec.chm/hhp.html
That page only talks about the length of the overall line, though (and not about the length of the variable name). Very specific question, I know. Still, someone may be able to point me somewhere.
As far as I know, this has never been asked before, and I have never heard of any limitation. But I think that is because nobody has used long variable names in this place so far.
The purpose of the two files, e.g. alias.h and map.h, is to ease the coordination between developer and help author. The mapping file links an ID to the map number; typically this can easily be created by the developer and passed to the help author. Then the help author creates an alias file linking the IDs to the topic names. That was the idea behind it years (decades) ago, conceived by Ralph Walden (ex-Microsoft).
Please note that HTMLHelp is about 20 years old and that these context ID strings inside an alias.h file were derived from WinHelp, the predecessor of HTMLHelp.
You'll find some further information at Creating Context-Sensitive Help for Applications.
In general I'd recommend using IDs with a fixed format for better legibility, as shown below:
;-------------------------------------------------------------
; alias.h file example for HTMLHelp (CHM)
; www.help-info.de
;
; All IDH's > 10000 for better format
; last edited: 2006-07-09
;---------------------------------------------------
IDH_90001=index.htm
IDH_10000=Context-sensitive_example\contextID-10000.htm
IDH_10010=Context-sensitive_example\contextID-10010.htm
IDH_20000=Context-sensitive_example\contextID-20000.htm
IDH_20010=Context-sensitive_example\contextID-20010.htm
I'd recommend using less than 1024 bytes per line.

Does histogram_summary respect name_scope

I am getting a Duplicate tag error when I try to write out histogram summaries for a multi-layer network that I generate procedurally. I think that the problem might be related to naming. Imagine code like the following:
with tf.name_scope(some_unique_name):
  ...
  _ = tf.histogram_summary('weights', kernel_weights)
I'd naively assumed that 'weights' would be scoped to some_unique_name but I'm suspecting that it is not. Are summary names independent of name_scope?
As Dave points out, the tag argument to tf.histogram_summary(tag, ...) is indeed independent of the current name scope. Part of the reason for this is that the tag may be a string Tensor (i.e. computed by part of your graph), whereas name scopes are a purely client-side construct (i.e. Python-only), so there's no good way to make the scoping work consistently across the two modes of use.
However, if you're using a TensorFlow build from source (this should also be available in the next release, 0.8.0), you can use the following recipe to scope your tags (using Graph.unique_name(..., mark_as_used=False)):
with tf.name_scope(some_unique_name):
  # ...
  tf.histogram_summary(
      tf.get_default_graph().unique_name('weights', mark_as_used=False),
      kernel_weights)
Alternatively, you can do the following in the current version:
with tf.name_scope(some_unique_name) as scope:
  # ...
  tf.histogram_summary(scope + 'weights', kernel_weights)
They are.
I'm with you in thinking this is a bug, but I haven't run it past the designers of the op yet. Go ahead and open an issue for it on GitHub!
(I've run into this also and found it terribly annoying -- it prevents reuse of the model without deliberately parameterizing the summary op invocations.)