How to use matching delimiters in Raku - grammar

I'm trying to write a token that allows nested content with matching delimiters, where (AB) should result in a match of at least "AB" if not "(AB)", and (A(c)B) would return two matches, "(A(c)B)" and the inner "(c)", and so on.
Code boiled down from its source:
#!/home/hsmyers/rakudo741/bin/perl6
use v6d;

my @tie;

class add-in {
    method tie($/) { @tie.push: $/; }
}

grammar tied {
    rule TOP { <line>* }
    token line {
        <.ws>?
        [
            | <tie>
            | <simpleNotes>
        ]+
        <.ws>?
    }
    token tie {
        [
            || <.ws>? <simpleNotes>+ <tie>* <simpleNotes>* <.ws>?
            || <openParen> ~ <closeParen> <tie>
        ]+
    }
    token openParen { '(' }
    token closeParen { ')' }
    token simpleNotes {
        [
            | <[A..Ga..g,'>0..9]>
            | <[|\]]>
            | <blank>
        ]
    }
}

my $text = "(c2D) | (aA) (A2 | B)>G A>F G>E (A,2 |\nD)>F A>c d>f |]";
tied.parse($text, actions => add-in.new).say;
$text.say;
for (@tie) {
    s:g/\v/\\n/;
    say "«$_»";
}
This gives a partially correct result of:
«c2D»
«aA»
«(aA)»
«A2 | B»
«\nD»
«A,2 |\nD»
«(A,2 |\nD)>F A>c d>f |]»
«(c2D) | (aA) (A2 | B)>G A>F G>E (A,2 |\nD)>F A>c d>f |]»
BTW, I'm not concerned about the newline; it is there only to check whether the approach can span text over two lines. So, stirring the ashes, I see captures with and without parentheses, and a very greedy capture or two.
Clearly I have a problem within my code. My knowledge of Perl 6 can best be described as "beginner", so I ask for your help. I'm looking for a general solution, or at least an example that can be generalized; as always, suggestions and corrections are welcome.

There are a few added complexities here. For instance, you define a tie as being either (...) or just the .... But that inner content is identical to a line.
Here's a rewritten grammar that greatly simplifies what you want. When writing grammars, it's helpful to start from the small and go up.
grammar Tied {
    rule TOP { <notes>+ %% \v+ }
    token notes {
        [
            | <tie>
            | <simple-note>
        ]+
        %%
        <.ws>?
    }
    token open-tie  { '(' }
    token close-tie { ')' }
    token tie { <.open-tie> ~ <.close-tie> <notes> }
    token simple-note { <[A..Ga..g,'>0..9|\]]> }
}
A few stylistic notes here. Grammars are classes, and it's customary to capitalize them. Tokens are methods, and tend to be lower case with kebab casing (you can of course use any style you want, though). In the tie token, you'll notice that I used <.open-tie>. The . means that we don't need to capture it (that is, we're just using it for matching and nothing else). In the notes token I was able to simplify things a lot by using %% and by making TOP a rule, which automatically handles some whitespace.
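As a quick aside, the practical difference between % and %% is that %% also permits a trailing separator. A tiny hedged illustration (the input string is made up):

my $input = "a\nb\n";              # note the trailing newline
say $input ~~ / ^ \w+ %  \v+ $ /;  # Nil; % forbids a trailing \v
say $input ~~ / ^ \w+ %% \v+ $ /;  # matches; %% permits it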
Now, the order in which I would create the tokens is this:
<simple-note>, because it's the most basic item. A group of them would be…
<notes>, so I make that next. While doing that, I realize that a run of notes can also include a…
<tie>, so that's the next one. Inside of a tie, though, I'm just going to have another run of notes, so I can use <notes> inside it.
<TOP> last, because if a line just has a run of notes, we can omit line and use %% \v+.
Actions (often given the same name as your grammar plus -Actions, so here I use class Tied-Actions { … }) are normally used to create an abstract syntax tree. Really, though, the best way to think of this is to ask each level of the grammar what we want from it. I find that whereas grammars are easiest to build from the smallest element up, actions are easiest to write from TOP down. This will also help you build more complex actions down the road:
What do we want from TOP?
In our case, we just want all the ties that we found in each <notes> token. That can be done with a simple loop (because we used a quantifier on <notes>, it will be Positional):
method TOP ($/) {
    my @ties;
    @ties.append: .made for $<notes>;
    make @ties;
}
The above code creates our temp variable, loops through each <notes>, and appends everything that <notes> made for us — which is nothing at the moment, but that's okay. Then, because we want the ties from TOP, we make them, which allows us to access them after parsing.
What do you want from <notes>?
Again, we just want the ties (though maybe some other time you'll want ties and glisses, or some other information). So we can grab the ties by doing basically the exact same thing:
method notes ($/) {
    my @ties;
    @ties.append: .made for $<tie>.grep(*.defined);
    make @ties;
}
The only difference is that rather than doing just for $<tie>, we have to grab only the defined ones — this is a consequence of doing [<foo>|<bar>]+: $<foo> will have a slot for each quantified match, whether or not <foo> did the matching. (This is when you would often want to break things out into, say, a proto token note with a tie and a simple-note variant, but that's a bit advanced for this.) Again, we grab whatever $<tie> made for us — we'll define that later — and we "make" it. Whatever we make is what other actions will find made by <notes> (like in TOP).
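If you're curious, here is a rough sketch of that proto-token approach (untested; the grammar name Tied2 is just for illustration). Each alternative becomes its own candidate, so every quantified match fills $<note> and no .grep(*.defined) is needed:

grammar Tied2 {
    rule TOP { <notes>+ %% \v+ }
    token notes { <note>+ %% <.ws>? }

    proto token note { * }
    # candidates are tried like multis; whichever matches fills $<note>
    token note:sym<tie>    { '(' ~ ')' <notes> }
    token note:sym<simple> { <[A..Ga..g,'>0..9|\]]> }
}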
What do you want from <tie>?
Here I'm going to just go for the content of the tie — it's easy enough to grab the parentheses too if you want. You'd think we could just use make ~$<notes>, but that leaves off something important: $<notes> also has some ties of its own. Those are easy enough to grab, though:
method tie ($/) {
    my @ties = ~$<notes>;
    @ties.append: $<notes>.made;
    make @ties;
}
This ensures that we pass along not only the current outer tie, but also each individual inner tie (which in turn may have another inner one, and so on).
When you parse, all you need to do is grab the .made of the Match:
say Tied.parse("a(b(c))d");
# 「a(b(c))d」
#  notes => 「a(b(c))d」
#   simple-note => 「a」
#   tie => 「(b(c))」   <-- there's a tie!
#    notes => 「b(c)」
#     simple-note => 「b」
#     tie => 「(c)」    <-- there's another!
#      notes => 「c」
#       simple-note => 「c」
#   simple-note => 「d」
say Tied.parse("a(b(c))d", actions => TiedActions).made;
# [b(c) c]
Now, if you really will only ever need the ties — and nothing else — (which I don't think is the case), you can do things much more simply. Using the same grammar, use instead the following actions:
class Tied-Actions {
    has @!ties;
    method TOP ($/) { make @!ties }
    method tie ($/) { @!ties.push: ~$<notes> }
}
This has several disadvantages over the previous version: while it works, it's not very scalable. You'll get every tie, but you won't know anything about its context. Also, you have to instantiate Tied-Actions (that is, actions => Tied-Actions.new), whereas if you can avoid using any attributes, you can pass the type object.
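For completeness, here's a minimal usage sketch of this attribute-based variant, using the same made-up input as before:

# Tied-Actions now has state (@!ties), so we must pass an instance:
say Tied.parse("a(b(c))d", actions => Tied-Actions.new).made;
# [c b(c)]   (inner tokens finish matching first, so inner ties land first)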


KeystoneJS `filter` vs `Item` list access control

I am trying to understand more in depth the difference between filter and item access control.
Basically I understand that item access control is, sort of, a higher-order check and will run before the GraphQL filter.
My question is, if I am doing a filter on a specific field while updating, for instance a groupID or something like this, do I need to do the same check in Item Access Control?
This will cause an extra database query that will be part of the filter.
Any thoughts on that?
The TL;DR answer...
if I am doing a filter on a specific field [..] do I need to do the same check in Item Access Control?
No, you only need to apply the restriction in one place or the other.
Generally speaking, if you can describe the restriction using filter access control (i.e. as a GraphQL-style filter, with the args provided) then that's the best place to do it. But if your access control needs to behave differently based on values in the current item or the specific changes being made, item access control may be required.
Background
Access control in Keystone can be a little hard to get your head around but it's actually very powerful and the design has good reasons behind it. Let me attempt to clarify:
Filter access control is applied by adding conditions to the queries run against the database.
Imagine a content system with lists for users and posts. Users can author a post but some posts are also editable by everyone. The Post list config might have something like this:
// ..
access: {
  filter: {
    update: () => ({ isEditable: { equals: true } }),
  }
},
// ..
What that's effectively doing is adding a condition to all update queries run for this list. So if you update a post like this:
mutation {
  updatePost(where: { id: "123" }, data: { title: "Best Pizza" }) {
    id name
  }
}
The SQL that runs might look like this:
update "Post"
set title = 'Best Pizza'
where id = 123 and "isEditable" = true;
Note the isEditable condition that's automatically added by the update filter. This is pretty powerful in some ways but also has its limits – filter access control functions can only return GraphQL-style filters which prevents them from operating on things like virtual fields, which can't be filtered on (as they don't exist in the database). They also can't apply different filters depending on the item's current values or the specific updates being performed.
Filter access control functions can access the current session, so can do things like this:
filter: {
  // If the current user is an admin, don't apply the usual filter for editability
  update: (session) => {
    return session.isAdmin ? {} : { isEditable: { equals: true } };
  },
}
But you couldn't do something like this, referencing the current item data:
filter: {
  // ⚠️ this is broken; filter access control functions don't receive the current item ⚠️
  // The current user can update any post they authored, regardless of the isEditable flag
  update: (session, item) => {
    return item.author === session.itemId ? {} : { isEditable: { equals: true } };
  },
}
The benefit of filter access control is that it doesn't force Keystone to read an item before an operation occurs; the filter is effectively added to the operation itself. This can make them more efficient for the DB but does limit them somewhat. Note that things like hooks may also cause an item to be read before an operation is performed, so this performance difference isn't always evident.
Item access control is applied in the application layer, by evaluating the JS function supplied against the existing item and/or the new data supplied.
This makes them a lot more powerful in some respects. You can, for example, implement the previous use case, where authors are allowed to update their own posts, like this:
item: {
  // The current user can update any post they authored, regardless of the isEditable flag
  update: (session, item) => {
    return item.author === session.itemId || item.isEditable;
  },
}
Or add further restrictions based on the specific updates being made, by referencing the inputData argument.
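For example, something like this — a hedged sketch that keeps the simplified signatures used above (the exact argument shape depends on your Keystone version):

item: {
  // Authors can update their own posts, but nobody may blank out a title;
  // inputData carries the raw changes passed to the mutation
  update: (session, item, inputData) => {
    if (inputData.title === '') return false;
    return item.author === session.itemId || item.isEditable;
  },
}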
So item access control is arguably more powerful, but it can have significant performance implications – not so much for mutations, which are likely to be performed in small quantities, but definitely for read operations. In fact, Keystone won't let you define item access control for read operations. If you stop and think about this, you might see why – doing so would require reading all items in the list out of the DB and running the access control function against each one, every time the list was read. As such, the items accessible can only be restricted using filter access control.
Tip: If you think you need item access control for reads, consider putting the relevant business logic in a resolveInput hook that stores the relevant values as regular fields, then referencing those fields using filter access control.
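A rough sketch of that pattern; isPublic and computeIsPublic are hypothetical stand-ins for whatever virtual-field logic you'd otherwise want to filter on:

hooks: {
  // Denormalise the computed value into a real, filterable field at write time
  resolveInput: ({ resolvedData, item }) => ({
    ...resolvedData,
    isPublic: computeIsPublic(resolvedData, item),
  }),
},
access: {
  filter: {
    // Reads can now be restricted with plain filter access control
    query: () => ({ isPublic: { equals: true } }),
  },
},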
Hope that helps

Filtering Content in Sanity Studio

I am wondering if it is possible to filter content in Sanity Studio according to set criteria. For example, return all published posts or all posts within a particular category, etc.
Here is a short video showing what I mean: https://www.loom.com/share/5af3a9dd79f045458de00e8f5365cf00
Is this possible? If so, is there any documentation on how to do it?
Thanks.
The easiest way I've found to make all kinds of filters is using the Structure Builder. With it you add as many sections as you like, name them, and give each one your own filter in the form of GROQ and params.
See the documentation: https://www.sanity.io/docs/structure-builder-introduction
As an example, I've added an S.listItem to the deskStructure.js file that gets all articles that are missing the module field.
// deskStructure.js (assumes Sanity v2's part-based structure builder)
import S from '@sanity/desk-tool/structure-builder'
import { FaRegCopyright, FaCogs } from 'react-icons/fa'

export default async () =>
  S.list()
    .title('Content')
    .items([
      // ...
      S.listItem() // <-- New root item for my filters
        .title('My article filters')
        .icon(FaRegCopyright)
        .child(
          S.list() // <-- List of filters
            .title('My article filters')
            .items([
              S.listItem() // <-- Item with filter description
                .title('Articles without module')
                .icon(FaCogs)
                .child(
                  S.documentList() // <-- Filtered list of articles
                    .title('Articles without module')
                    .menuItems(S.documentTypeList('article').getMenuItems())
                    .filter('_type == $type && !defined(module)')
                    .params({ type: 'article' })
                ),
              S.listItem(), // more filters
              S.listItem(), // more filters
            ])
        ),
      // ...
    ])
It doesn't make different filters on one list of elements; it's more like making different lists that are already filtered as you need. And you can give each one whatever icon and text you want. Potato/potàto ,'-)
As for sorting, I don't think you can do much other than adding more sort orders. And it doesn't work well once the list of elements gets larger anyway, so I wouldn't bother. But it's in the Sort Orders section: https://www.sanity.io/docs/sort-orders

MAIN subroutine

Scenario
Imagine that you have a module X whose functionalities are available to the user through a Command Line Interface. Such a module doesn't do much in and of itself, but it allows other people to create plugin-like modules they can hook up to module X. Ideally, the plugins could be used through X's CLI.
Thus my question:
What do you need to do in order to hook up whatever functionality a plugin might provide to X's CLI?
This means the plugin would need to provide some structure describing the command, what needs to be called, and hopefully a usage message for the plugin. Then, when you run X's CLI, the plugin's command and help message show up in the regular help message of X's CLI.
Example
main.p6:
use Hello;
use Print;

multi MAIN('goodbye') {
    put 'goodbye'
}
lib/Hello.pm6:
unit module Hello;

our %command = %(
    command => 'hello',
    routine => sub { return "Hello" },
    help    => 'print hello.'
);
lib/Print.pm6:
unit module Print;

our %command = %(
    command => 'print',
    routine => sub { .print for 1..10 },
    help    => 'print numbers 1 through 10.'
);
The program main.p6 is a simplified version of this scenario. I'd like to integrate the information provided by both Hello.pm6 and Print.pm6 through
their respective %command variables into two new MAIN subs in main.p6.
Is this possible? If so, how would I go about achieving it?
This looks kinda specific for a StackOverflow question, but I will give it a try anyway. There are a couple of issues here. The first is to register the commands as such, so that MAIN can issue a message saying "this does that"; the second is to actually perform the command. If both can be known at compile time, this can probably be done. But let's see how the actual code would go. I'll just do the first part, and leave the rest as an exercise.
The first thing is that %command needs to be exported somehow. You can't do it the way you are doing it now: first, because it's not explicitly exported, and second, because if it were, you would end up with several symbols with the same name in the outer scope. So we need to change that to a class, so that the actual symbols are lexical to the class and don't pollute the outer scope.
unit class Hello;

has %.command = %(
    command => 'hello',
    routine => sub { return "Hello" },
    help    => 'print hello.'
);
(Same would go for Print)
As long as we have that, the rest is not so difficult; we only have to use introspection to find out what's actually there, as a small hack:
use Hello;
use Print;

my @packages = MY::.keys.grep( /^^<upper> <lower>/ );
my @commands = do for @packages -> $p {
    my $foo = ::($p).new();
    $foo.command()<command>
};

multi MAIN( $command where * eq any(@commands) ) {
    say "We're doing $command";
}
We check the symbol table looking for packages that start with a capital letter followed by a lower-case letter. It so happens that the only such packages are the ones we're interested in, but of course, if you wanted to use this as a plugin mechanism, you should use whatever pattern is unique to them.
We then create instances of these packages and call the auto-generated command method to get the name of each command. That's precisely what we use to check that we're running a known command in the MAIN subroutine, using a where clause in the signature to check that the string we received is actually in the list of known commands.
Since the routines and the rest of the information are also available via @packages, actually calling them (or printing an additional help message or whatever) is left as an exercise.
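As a starting point, here's a hedged sketch of that dispatch step, assuming the class-based %.command layout shown above:

multi MAIN( $command where * eq any(@commands) ) {
    for @packages -> $p {
        my %cmd = ::($p).new.command;
        # run the plugin's routine when its declared command matches
        say %cmd<routine>.() if %cmd<command> eq $command;
    }
}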
Update: you might want to check out this other Stack Overflow answer as an alternative mechanism for signatures in modules.

Actual property name on REQUIRED_CHILDREN connection

In Relay, when using REQUIRED_CHILDREN like so:
return [{
  type: 'REQUIRED_CHILDREN',
  children: [
    Relay.QL`
      fragment on Payload {
        myConnection (first: 50) {
          edges {
            node {
              ${fragment}
            }
          }
        }
      }
    `
  ]
}]
and reading off the response through the onSuccess callback:
Relay.Store.commitUpdate(
  new AboveMutation({ }), { onFailure, onSuccess }
)
the response turns the property myConnection into a hashed name (e.g. __myConnection652K), which presumably is used to prevent connection/list conflicts inside the Relay store.
However, since this is a REQUIRED_CHILDREN query and I'm reading myConnection manually, the hashing just prevents access to it.
Is there a way to get the actual property names when using the onSuccess callback?
Just as Ahmad wrote: using REQUIRED_CHILDREN means you're not going to store the results. The consequence is that the data supplied to the callback is in raw shape (nearly as it came from the server) and data masking does not apply.
Despite the data not being stored, there seems to be no reason (though a core team member's opinion would certainly be more appropriate here) not to convert it to the client-style shape. This is the newest type of mutation, so there is a chance such a feature was accidentally omitted. Queries are routinely transformed to the server-style shape, so the opposite transformation could take place as well; until now, though, it has not been needed, since the transformation happened implicitly while the data was saved to the store and component props were updated. Currently most of the Relay team is highly focused on rewriting much of the implementation, so I would not expect this to improve very soon.
So again, the solution proposed by Ahmad, converting the type to a GraphQLList, seems to be the easiest and most reliable. If for any reason you want to stick with the connection, there is the option of taking the GraphQL fragment supplied as children (actually its parsed form, stored in the __cachedFragment__ attribute of the original fragment) and traversing it to obtain the serializationKey for the desired field (e.g. __myConnection652K).

How to insert an item into a sequence using Sequelize, or How to manage an ordering attribute

I have an entity with a sequence attribute, which is an integer from 1-N for N members of the list. They are polyline points.
I want to be able to insert into the list at a given sequence point, incrementing all the items at or beyond that point in the sequence to make room for the new item; likewise, if I delete an item, everything above it should be decremented so we still have a nice sequence ordering with no missing numbers.
There is a REST interface to this, of course, but I don't want to hack about with that; I just want Sequelize to magically manage this sequence number.
I am assuming I need to get hold of some "before insert" and "after delete" hooks in Sequelize and issue some SQL to make this happen. Is that assumption correct, or is there some cooler way of doing it?
I haven't tested this, but it appears to be the solution, and is barely worth comment. Here modelName is the model's name, and name is the sequence attribute's name:
options.hooks = {
    // Note: Sequelize's standard hook names are beforeCreate and
    // afterDestroy (there is no beforeInsert hook)
    beforeCreate: function (record, options) {
        return self.models[modelName].incrementAfter(name, record[name]);
    },
    afterDestroy: function (record, options) {
        return self.models[modelName].decrementAfter(name, record[name]);
    }
};
and then, added to my extended model prototype, I have:
incrementAfter: function (field, position) {
    // Shift every row at or above `position` up by one to make room
    return this.sequelize.query("UPDATE " + this.tableName + " SET " + field + " = " + field + " + 1 WHERE " + field + " >= " + position);
},
decrementAfter: function (field, position) {
    // Close the gap left by a removed row
    return this.sequelize.query("UPDATE " + this.tableName + " SET " + field + " = " + field + " - 1 WHERE " + field + " >= " + position);
},
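A hedged usage sketch, assuming a hypothetical Point model whose seq attribute is managed by the hooks above:

// Creating at position 3 first bumps every existing row with seq >= 3,
// so the new point slots in with no duplicate sequence numbers...
const point = await Point.create({ seq: 3, x: 10, y: 20 });

// ...and destroying it closes the gap again via the afterDestroy hook.
await point.destroy();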