I find the output of the dependency:packageCycles constraint shipped with jQAssistant hard to interpret. Specifically, I'm keen on finding an example instance of the classes that make up a cyclic dependency.
Given a cycle of packages, for each pair of adjacent packages I would need to find two classes that connect them.
This is my first attempt at the Cypher query, but there are still some relevant parts missing:
MATCH nodes = (p1:Package)-[:DEPENDS_ON]->(p2:Package)-[:DEPENDS_ON*]->(p1)
WHERE p1 <> p2
WITH extract(x IN relationships(nodes) |
     (:Type)<--(:Package)-[x]->(:Package)-->(:Type)) AS cs
RETURN cs
Specifically, in order to really connect the two packages, the two types should be related to each other with DEPENDS_ON as shown below:
(:Type)<--(:Package)-[x]->(:Package)-->(:Type)
   |                                       ^
   |              DEPENDS_ON               |
   +---------------------------------------+
For the above pattern I would have to return the two types (and not the packages, for instance). Preferably, the output for a single cyclic dependency would consist of a list of qualified class names; otherwise one could not possibly distinguish the class chains of more than one cyclic dependency.
For this specific purpose I find Cypher to be very limited: support for identifying and collecting new graph patterns during path traversal does not seem to be the easiest thing to do. Also, the attempt to give names to the (:Type) nodes resulted in syntax errors.
I also messed around a lot with UNWIND, but to no avail. It lets you introduce new MATCH clauses on a per-element basis (say, for the elements of relationships(nodes)), but I know of no method to undo the damaging effects of UNWIND: the surrounding list structure is removed, so the traces of multiple cyclic dependencies merge into each other. Additionally, the results appear permuted to me. That being said, the query below is conceptually very close to what I am trying to achieve, but it does not work:
MATCH nodes = (p1:Package)-[:DEPENDS_ON]->(p2:Package)-[:DEPENDS_ON*]->(p1)
WHERE p1 <> p2
WITH relationships(nodes) AS rel
UNWIND rel AS x
MATCH (t0:Type)<-[:CONTAINS]-(:Package)-[x]->(:Package)-[:CONTAINS]->(t1:Type),
      (t0)-[:DEPENDS_ON]->(t1)
RETURN t0.fqn, t1.fqn
I do appreciate that there seems to be some scripting support within jQAssistant. However, this would really be my last resort, since it is surely more difficult to maintain than a Cypher query.
To rephrase it: given a path, I'm looking for a way to match a sub-pattern for each element, project a node out of that match, and collect the results.
Do you have any ideas on how could one accomplish that with Cypher?
Edit #1: Within one package, one also has to consider that the class that is the target of the inbound DEPENDS_ON edge may not be the same class that is the source of the outgoing edge. In other words, as a result,
two classes of the same package may be part of the trace
So if one wanted to express the cyclic dependency trace as a path, one would have to take into account detours that navigate to classes within the same package. For instance (the outer edges mark package entry/exit; a DEPENDS_ON edge between the two types is absent):
-[:DEPENDS_ON]->(:Type)<-[:CONTAINS]-(:Package)-[:CONTAINS]->(:Type)-[:DEPENDS_ON]->
Maybe it gets a little clearer using the following picture (not reproduced here: packages a, b and c with types TestA, TestB1, TestB2 and TestC).
Clearly "a, b, c" is a package cycle and "TestA, TestB1, TestB2, TestC" is a type-level trace for justifying the package-level dependency.
The following query extends the package cycle constraint by drilling down on type level:
MATCH
  (p1:Package)-[:DEPENDS_ON]->(p2:Package),
  path = shortestPath((p2)-[:DEPENDS_ON*]->(p1))
WHERE
  p1 <> p2
WITH
  p1, p2
MATCH
  (p1)-[:CONTAINS]->(t1:Type),
  (p2)-[:CONTAINS]->(t2:Type),
  (p1)-[:CONTAINS]->(t3:Type),
  (t1)-[:DEPENDS_ON]->(t2),
  path = shortestPath((t2)-[:DEPENDS_ON*]->(t3))
RETURN
  p1.fqn, t1.fqn, EXTRACT(t IN nodes(path) | t.fqn) AS Cycle
I'm not sure how well the query will work on large projects; we'll need to give it a try.
Edit 1: Updated the query to match on any type t3 which is located in the same package as t1.
Edit 2: The answer is not correct, see discussion below.
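For completeness, here is an untested sketch of how the UNWIND approach from the question might be rescued: keep the path as a grouping key, unwind over relationship indices rather than the relationships themselves, and re-collect in index order so that each cycle keeps its own class chain. It assumes the same DEPENDS_ON/CONTAINS schema as above; startNode()/endNode() recover the packages of each hop.
MATCH cycle = (p1:Package)-[:DEPENDS_ON]->(:Package)-[:DEPENDS_ON*]->(p1)
WITH cycle, relationships(cycle) AS rels
UNWIND range(0, size(rels) - 1) AS i
WITH cycle, i, startNode(rels[i]) AS src, endNode(rels[i]) AS dst
MATCH (src)-[:CONTAINS]->(t0:Type)-[:DEPENDS_ON]->(t1:Type)<-[:CONTAINS]-(dst)
WITH cycle, i, collect([t0.fqn, t1.fqn])[0] AS examplePair
ORDER BY i
RETURN cycle, collect(examplePair) AS classChain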
Related
I want to search all programs within a package that use the statement:
modify itab_xyz from wa_itab_xyz
Preferably, the string should be searched with wildcards like itab* for a range of itab_ values (itab_abc, itab_def, itab_ghi, etc.).
How do I do this in SAP ABAP?
(A screenshot of all programs within a package one can search from was attached to the original question.)
One possibility would be to use program RS_ABAP_SOURCE_SCAN.
You can restrict the selection by package and you can also enter a specific string to search for in the code.
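If you need something quick and hand-rolled instead, the sketch below (7.40+ syntax; ZMY_PACKAGE and the regex are placeholders to adapt) selects the package's programs from TADIR and scans each source with READ REPORT. It is only a sketch of the idea, not a replacement for the delivered scanners:
REPORT z_source_scan_sketch.

" Collect all executable programs of the package.
SELECT obj_name FROM tadir
  WHERE pgmid = 'R3TR' AND object = 'PROG' AND devclass = 'ZMY_PACKAGE'
  INTO TABLE @DATA(lt_progs).

LOOP AT lt_progs INTO DATA(ls_prog).
  " Load the program's source code, one line per table row.
  DATA lt_source TYPE TABLE OF string.
  READ REPORT ls_prog-obj_name INTO lt_source.
  CHECK sy-subrc = 0.
  LOOP AT lt_source INTO DATA(lv_line).
    " \w+ stands in for the itab_* / wa_itab_* wildcard part.
    FIND REGEX 'MODIFY\s+itab_\w+\s+FROM\s+wa_itab_\w+'
         IN lv_line IGNORING CASE.
    IF sy-subrc = 0.
      WRITE: / ls_prog-obj_name, sy-tabix, lv_line.
    ENDIF.
  ENDLOOP.
ENDLOOP.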
I use the transaction code_scanner (program is afx_code_scanner).
The biggest problem with this program and the RS_ABAP_SOURCE_SCAN provided above is that they won’t find everything. IMO the most important missing component to them is implicit enhancements. They can be very impactful to system functions, and if you are searching for an error message or hard coded value skipping them could mean not finding something critical.
At the time I looked (about 7 years ago), I was unable to find a delivered tool that would actually scan all the code in the system. I ended up enhancing the code_scanner to look for enhancements, WDA components, BSP code, and forms code.
I don’t know if the open source component above includes those. At first glance it doesn’t seem to, but I don’t have time to really dig into it.
You could use a tool from the Galileo-Open Source library. This program searches ABAP Source, OTR-Texts, Message and Textpools for static Text, wildcard patterns or regex patterns.
ABAP-Coding:
https://github.com/galileo-group/galileo-abap-lib/blob/master/%23gal%23devtools_find_text.prog.abap
Textpool:
https://github.com/galileo-group/galileo-abap-lib/blob/master/%23gal%23devtools_find_text.prog.xml
It refers to some additional classes from the library, so you either need to copy these as well or just use ABAPgit to get the whole library. You can also contact me, so I can send you a transport containing the library.
Additional information (October 1, 2020):
I created a version of the report that you can copy/paste to the ABAP editor. It is too long to include it in the response, but you can find it here.
Do not forget to copy the text elements / selection texts.
Required Text Elements:
-----------------------
B00 Scope
B01 Search pattern
H01 Type
H02 Name
H03 Key
H04 Match
Required Selection Texts:
-------------------------
P_CASE Case-sensitive
P_DEVC Package
P_LANGU Language
P_MESS Messages
P_OTR OTR Texts
P_PATT Pattern
P_REGEX Regular expression
P_SOURCE ABAP sources
P_TPOOL Textpools
P_WILDC Wildcard pattern
This code works:
(3,6...66).contains( 9|21 ).say # OUTPUT: «any(True, True)»
And returns a Junction. It's also tested, but not documented.
The problem is I can't find its implementation anywhere. The Str code, which is also called from Cool, never returns a Junction (it does not take a Junction, either). There are no other methods named contains in the source.
Since it's autothreaded, it's probably specially defined somewhere. I have no idea where, though. Any help?
TL;DR Junction autothreading is handled by a single central mechanism. I have a go at explaining it below.
(The body of your question starts with you falling into a trap, one I think you documented a year or two back. It seems pretty irrelevant to what you're really asking but I cover that too.)
How junctions get handled
Where is contains(Junction) defined? ... The problem is I can't find [the Junctional] implementation anywhere. ... Since it's autothreaded, it's probably specially defined somewhere.
Yes. There's a generic mechanism that automatically applies autothreading to all P6 routines (methods, operators etc.) that don't have signatures that explicitly control what happens with Junction arguments.
Only a tiny handful of built-in routines have these explicit Junction-handling signatures -- print is perhaps the most notable. The same is true of user-defined routines.
.contains does not have any special handling. So it is handled automatically by the generic mechanism.
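A tiny illustration of the difference (the routine names are made up for this example):
sub auto($x) { "got $x" }                          # Any parameter: autothreaded
multi sub expl(Junction $j) { "whole: $j.gist()" } # explicit Junction parameter

say auto(1|2);  # OUTPUT: «any(got 1, got 2)» -- the call fans out per eigenstate
say expl(1|2);  # OUTPUT: «whole: any(1, 2)» -- the Junction arrives intact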
Perhaps the section "The magic of Junctions" of my answer to an earlier SO question, "Filtering elements matching two regexes", will be helpful as a high-level description of the low-level details that follow below. Just substitute your 9|21 for the foo & bar in that answer, and your .contains for the grep, and it hopefully makes sense.
Spelunking the code
I'll focus on methods. Other routines are handled in a similar fashion.
method AUTOTHREAD does the work for full P6 methods.
This is set up in this code that sets up handling for both nqp and full P6 code.
The above linked P6 setup code in turn calls setup_junction_fallback.
When a method call occurs in a user's program, it involves calling find_method (modulo cache hits as explained in the comment above that code; note that the use of the word "fallback" in that comment is about a cache miss -- which is technically unrelated to the other fallback mechanisms evident in this code we're spelunking thru).
The bit of code near the end of this find_method handles (non-cache-miss) fallbacks.
Which arrives at find_method_fallback which starts off with the actual junction handling stuff.
A trap
This code works:
(3,6...66).contains( 9|21 ).say # OUTPUT: «any(True, True)»
It "works" to the degree this does too:
(3,6...66).contains( 2 | '9 1' ).say # OUTPUT: «any(True, True)»
See Lists become strings, so beware .contains() and/or discussion of the underlying issues such as pmichaud's comment.
Routines like print, put, infix ~, and .contains are string routines. That means they coerce their arguments to Str. By default the .Str coercion of a listy value is its elements separated by spaces:
put 3,6...18; # 3 6 9 12 15 18
put (3,6...18).contains: '9 1'; # True
It's also tested
Presumably you mean the two tests with a *.contains argument passed to classify:
my $m := @l.classify: *.contains: any 'a'..'f';
my $s := classify *.contains( any 'a'..'f'), @l;
Routines like classify are list routines. While some list routines do a single operation on their list argument/invocant, eg push, most of them, including classify, iterate over their list doing something with/to each element within the list.
Given a sequence invocant/argument, classify will iterate it and pass each element to the test, in this case a *.contains.
The latter will then coerce individual elements to Str. This is a fundamental difference compared to your example which coerces a sequence to Str in one go.
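A compact way to see that difference with a made-up list:
my @l = <ab cd ef>;
say @l.contains('b c');             # OUTPUT: «True» -- searches "ab cd ef" as one Str
say @l.classify(*.contains('b c')); # OUTPUT: «{False => [ab cd ef]}» -- element-wise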
In my ODL code, I have recently noticed that when uninstalling flows, I get unexpected behavior. The scenario goes something like this:
1. A bunch of flows are installed across multiple tables.
2. I delete a flow by using the same NodeId, TableId and FlowId that I used when creating it. For reference, I use SalFlowService's addFlow and removeFlow methods.
3. I execute ovs-ofctl dump-flows and notice that ALL flows on the given node and table are deleted. For reference, the flowId I use is something like "routing-rename-src-0.0.0.0-to-123.123.123.0".
It appears to me that ODL somehow completely fails at recognizing the FlowId, and defaults to deleting all flows on the given table. No error messages are sent from OpenFlow, and no errors are logged in ODL.
The thing is, I am definitely using the same FlowId object.
Now, I am a bit confused about what could go wrong, but I have an idea. It's just that there's conflicting evidence online, and since I haven't worked on OpenFlowPlugin, I can't quite tell myself:
- Flows are, or tend to be, posted using integers for flowIds in the REST request paths.
- In ODL code such as l2switch, flowIds can be strings, which makes certain debugging output easier to parse through.
Now, this is pretty strange. Are we using integers or strings, or can ODL convert between integers and strings via some mapping mechanism? Either way, I get unexpected behavior. Interestingly, the code I linked to does not do deletion... so maybe it's more of a hack in this case?
EDIT: I have now tried renaming my IDs to mere numbers, or to "PluginName" + "-" + number, and uninstallation still fails. The problem remains that I just can't uninstall a flow rule without uninstalling the entire table with it...
EDIT 2: This issue allowed me to understand that the flow id is not necessarily what is used to remove the flow. I came up with the following procedure to delete flows in a way that doesn't cause all flows on the table to be deleted:
// Initialize the remove input from the same Flow object used when adding the flow.
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
flowIdentity.context.salFlowService.removeFlow(builder.build());
The key difference from my previous code is that I had not been using a Flow object to initialize the input builder. In this form, my methods for adding and removing are identical. As long as I preserve the Flow object after adding the flow, I can delete the flow and the tables will not be wiped.
But there is an exception. On table 0, I have installed two different table-change rules with identical actions but different priorities. The matches are slightly different (one defines an in-port, the other doesn't). When I delete the more generic (and lower-priority) rule, the other one gets deleted as well.
I don't understand why this happens; even if I try setting the priority in the input builder, it still happens. Hrm.
As I wrote in my second edit, this post suggests that flow deletion does not work explicitly based on the id, but rather on the fields that are defined in the input builder of the method. I haven't tested this, but I suspect that if the flow reference is omitted from the builder, the defined fields will be used to delete all matching rules, which could imply deleting all flows by accident if the wrong fields are set.
Given the following code to add flows:
final AddFlowInputBuilder builder = new AddFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
builder.setPriority(flow.getPriority());
flowIdentity.context.salFlowService.addFlow(builder.build());
The following code to remove flows works as expected (using the SAME Flow object):
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow);
builder.setNode(new NodeRef(flowLocation.nodeIdentifier));
builder.setFlowRef(new FlowRef(flowLocation.flowPath));
builder.setFlowTable(new FlowTableRef(flowLocation.tableIdentifier));
builder.setPriority(flow.getPriority());
// Strict removal: only the flow with the identical match and priority is removed.
builder.setStrict(Boolean.TRUE);
flowIdentity.context.salFlowService.removeFlow(builder.build());
Without "strict" set to true, this can cause unexpected deletion of similar rules on the same table. I am unsure of the way flows are matched on deletion, with or without strict, but this much I can confirm.
I have a data set with a number of nodes, all labeled claim, which can have various properties (named P1, P2, etc., through P2000). Currently each claim node has only one of these properties, and each property has a value, which can be of a different type (i.e. P1 may be a string, P2 a float, P3 an integer, etc.). I also need to be able to look up the nodes by any property (e.g. "find all nodes with P3 equal to 42").
I have modeled this as nodes having a property value and a label according to the P property. I then define a schema index on the label claim and the property value. The lookup would then look something like:
MATCH (n:P569:claim) WHERE n.value = 42 RETURN n
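(For reference, that schema index would have been created with something like the following, in the 2.x syntax matching the shell output below.)
CREATE INDEX ON :claim(value);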
My first question is: is it OK to have such an index? Are mixed-type indexes allowed?
The second question is that the lookup above works (though I'm not sure whether it uses the index or not), but the following does not - note the label order is switched:
neo4j-sh (?)$ MATCH (n:claim:P569) WHERE n.value>0 RETURN n;
IncomparableValuesException: Don't know how to compare that. Left: "113" (String); Right: 0 (Long)
The P569 properties are all numeric, but there are string properties from other P-values, one of which is "113". Somehow, even though I said the label should be both claim and P569, the "113" value is still included in the comparison, even though it has no P569 label:
neo4j-sh (?)$ MATCH (n:claim) WHERE n.value ="113" RETURN LABELS(n);
+-------------------+
| LABELS(n) |
+-------------------+
| ["claim","P1036"] |
| ["claim","P902"] |
+-------------------+
What is wrong here - why does it work with one label order but not the other? Can this data model be improved?
Let me at least try to side-step your question: there's another way you could model this that would resolve at least some of your problems.
You're encoding the property name as a label. Perhaps you want to do that to speed up looking up a subset of nodes where that property applies; still it seems like you're causing a lot of difficulty by shoe-horning incomparable data values all into the same property named "value".
What if, in addition to using these labels, each property was named the same as the label? I.e.:
CREATE (n:P569:claim { P569: 42});
You still get your label lookups, but by segregating the property names, you can guarantee that the query planner will never accidentally compare incomparable values in the way it builds an execution plan. Your query for this node would then be:
MATCH (n:P569:claim) WHERE n.P569 > 5 AND n.P569 < 40 RETURN n;
Note that if you know the right label to use, then you're guaranteed to know the right property name to use. By using properties of different names, if you load your data in such a way that P569 values are always integers, you can't end up with the incomparable situation you have. (I think that's happening because of the particular way Cypher is executing that query.)
A possible downside here is that if you have to index all of those properties, it could be a lot of indexes, but still might be something to consider.
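If you do go down that road, the indexes would be created per property name, e.g. (again 2.x syntax; P1036 is taken from the label output in the question):
CREATE INDEX ON :P569(P569);
CREATE INDEX ON :P1036(P1036);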
I think it makes sense to take a step back and think about what you actually want to achieve: why do you have those 2000 properties in the first place, and how could you model them differently in a graph?
Also make sure to just leave off properties you don't need, and use coalesce() to provide a default.
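For example, a minimal sketch of the coalesce() idea, with 0 as an assumed default:
MATCH (n:claim)
RETURN id(n), coalesce(n.P569, 0) AS p569;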
I am attempting to write a configspec which will only branch on certain file types (e.g. docs can be painful to merge, so we wish to avoid those).
Right now I have the following extensions:
*.txt and *.pl (for example)
I have tried:
element * CHECKEDOUT
element -directory * \main\LATEST
element '{*.txt||*.pl}' \main\BLARG\LATEST
element '{*.txt||*.pl}' \main\LATEST -mkbranch BLARG
And some variations using parentheses, and whatnot.
I am just perplexed. I did find that in certain contexts you can use comparison operators similar to C++, but I can't get this to work.
(Looking at the query language section from here:
http://publib.boulder.ibm.com/infocenter/cchelp/v7r0m0/index.jsp?topic=/com.ibm.rational.clearcase.cc_ref.doc/topics/config_spec.htm
I should be able to use: query && query.)
Is it possible to only allow branching on specific filetypes by use of a configspec, and if so, any hints/tips/something to get me headed in the right direction?
EDIT:
Reading from the link I sent (one of the pages on that site anyways), you can set it up using something to the effect of
element * CHECKEDOUT
element -directory * \main\LATEST
element *.[hc] \main\BLARG\LATEST
element *.[hc] \main\LATEST -mkbranch BLARG
This should match any .h and .c files that you are looking at and allow branching based on those.
element * CHECKEDOUT
element -directory * \main\LATEST
element *.txt \main\BLARG\LATEST
element *.txt \main\LATEST -mkbranch BLARG
That works and matches only .txt files, which is great; I was just hoping it could also match additional sets. Maybe I could add an additional line or two to accomplish what I'm attempting:
element * CHECKEDOUT
element -directory * \main\LATEST
element *.txt \main\BLARG\LATEST
element *.pl \main\BLARG\LATEST
element *.txt \main\LATEST -mkbranch BLARG
element *.pl \main\LATEST -mkbranch BLARG
Our team only branches certain sets of files, for a variety of reasons, one being that some files are difficult to merge (note: .doc files). I was going to write up a configspec that would automatically branch what our team designates as "branchable", but otherwise just check out main.
I hope my issue is clearer. I think it's not quite what you're talking about in your initial answer, VonC; please let me know if your answer still holds.
No, it doesn't seem to be easily possible (unless you list each type you want to branch), and for a reason.
The idea behind branching is to isolate the history for a group of files (not for some specific parts of said group).
That idea has been reinforced with UCM and its notion of a UCM component (a coherent set of files which branches as a whole and which is labelled as a whole unit).
See more on "component" in the article "Best practices for using composite baselines in UCM".
So trying very hard to bend the tool in order to achieve that selective versioning organization might not be the right thing to do.
Isolating those files in their own "component", and then using them through symlinks back into your original tree structure, is one possible solution (there might be others) which is at least better (and which is reminiscent of the notion of submodules or sub-forests used by other (D)VCSs).
Plus, if you branch for the reason it is "difficult" to merge:
- it won't remove the merge issue if you make modifications in your own branch (a merge will have to be done sooner or later);
- .doc Word documents could, in theory, be merged through the ms_word type manager: see "About merging Microsoft Office files in ClearCase";
- for automatic merges not to be blocked by binary artifacts like a .ppt, activate a copy-merge policy for that type, as illustrated in "Clearcase UCM is trying to merge pdf files" (see the sketch below).
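A sketch of what activating such a policy could look like (the type and file names are made up; -mergetype copy tells ClearCase to copy one contributor instead of attempting a content merge):
cleartool mkeltype -nc -supertype binary_delta_file -mergetype copy ppt_copy_merge
cleartool chtype -force ppt_copy_merge slides.ppt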
I realize you are branching for other reasons that might be valid, but again, I like my branching policy simple, manageable and scalable.