Tree-sitter: Problem with optional tokens that have special meaning but whose characters can also appear elsewhere - grammar

I'm working on a tree-sitter grammar for https://xit.jotaen.net/ but I'm stuck on a strange problem. I was able to create a stripped-down version of it:
module.exports = grammar({
  name: 'xit',
  extras: (_$) => [],
  conflicts: (_$) => [],
  rules: {
    document: ($) => repeat(choice($.line, '\n')),
    line: ($) => choice(
      seq($.at, ' ', $.priority, ' ', $.text),
      seq($.at, ' ', $.text),
    ),
    at: (_$) => '#',
    priority: (_$) => '!',
    text: (_$) => seq(/[^\n]+/),
  },
});
Here's a test for it, with what I would like to achieve:
==========
Experiment
==========
# ! !test!
# !test!
---
(document
(line
(at)
(priority)
(text)
)
(line
(at)
(text)
)
)
I'm not really sure what I'm doing wrong; I'm probably missing some important concept. I've read tree-sitter's documentation and noticed the sections about conflicts, tokens, and precedence, but even when I tried different combinations of these, I wasn't able to make it work.
Note: The real (not stripped down) problem can be found here: https://github.com/synaptiko/tree-sitter-xit/blob/priority/grammar.js and https://github.com/synaptiko/tree-sitter-xit/tree/priority/test/corpus
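For concreteness, the kind of combination I experimented with looked like this: giving the priority marker higher lexical precedence via token(prec(...)), hoping the lexer would prefer it over the text regex when both could match. This sketch did not produce the expected tree either:

```javascript
// Variant of the stripped-down grammar with explicit lexical precedence
// on the '!' token (one of the attempted combinations; did not work either).
module.exports = grammar({
  name: 'xit',
  extras: (_$) => [],
  rules: {
    document: ($) => repeat(choice($.line, '\n')),
    line: ($) => choice(
      seq($.at, ' ', $.priority, ' ', $.text),
      seq($.at, ' ', $.text),
    ),
    at: (_$) => '#',
    priority: (_$) => token(prec(1, '!')),
    text: (_$) => /[^\n]+/,
  },
});
```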

Related

Flutter Printing Package: Create pdf table with loop

I want to create a PDF document with the package 'pdf'. The example on the Dart package page works fine: https://pub.dev/packages/pdf#-example-tab-
You can see that the table there is static, but I want to create a dynamic table in the PDF document. The columns will be constant; only the rows have to be dynamic.
I tried to insert a for loop, but couldn't get the syntax right:
pdfWidget.Table.fromTextArray(context: context, data: <List<String>>[
  <String>['Date', 'PDF Version', 'Acrobat Version'],
  //.....
  //more Strings here.....
]),
I ran into the same issue.
This seemed to work for me.
pdf.addPage(
  MultiPage(
    build: (context) => [
      Table.fromTextArray(context: context, data: <List<String>>[
        <String>['Msg ID', 'DateTime', 'Type', 'Body'],
        ...msgList.map(
            (msg) => [msg.counter, msg.dateTimeStamp, msg.type, msg.body])
      ]),
    ],
  ),
);
where my msgList object was a custom list, i.e. List<SingleMessage>.
There are several ways to do it, I prefer to fill the List separately, like:
List<List<String>> salidas = new List();
salidas.add(<String>['Title1', 'Title2', ..., 'Title n']);
for (var indice = 0; indice < records.length; indice++) {
  List<String> recind = <String>[
    records[indice].Stringfield1,
    records[indice].Stringfield2,
    ...,
    records[indice].Stringfieldn
  ];
  salidas.add(recind);
}
...
fpdf.Table.fromTextArray(context: context, data: salidas),
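Both answers build the same data shape: a header row followed by one row per record. The loop version, sketched in plain JavaScript (the records array here is a made-up stand-in for the Dart model objects):

```javascript
// Made-up records standing in for the Dart model objects.
const records = [
  { field1: 'r1c1', field2: 'r1c2' },
  { field1: 'r2c1', field2: 'r2c2' },
];

// Header row first, then one row per record; this mirrors the
// List<List<String>> shape that Table.fromTextArray expects.
const salidas = [['Title1', 'Title2']];
for (const rec of records) {
  salidas.push([rec.field1, rec.field2]);
}

console.log(salidas);
// [ ['Title1','Title2'], ['r1c1','r1c2'], ['r2c1','r2c2'] ]
```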

Field not required, but if the user enters a value it must meet some rules (Vuetify rules)

I am trying to add validation for a field that accepts a URL. The field is not required, but if the user enters a value it must match a URL regex.
So I tried the following code:
<v-text-field height="34px" v-model="website" single-line outline placeholder="http://" :rules="urlRules"></v-text-field>
The :rules="urlRules" attribute refers to a set of rules that I define in my Vue object:
urlRules: [
  v => !!v || 'required',
  v => /[-a-zA-Z0-9@:%_\+.~#?&//=]{2,256}\.[a-z]{2,4}\b(\/[-a-zA-Z0-9@:%_\+.~#?&//=]*)?/gi.test(v) || 'Please enter a correct URL to your website'
],
I want to change the validation so that the field is not required, but if the user does enter a URL it must be valid.
This is how I manage to validate non-required fields with Vuetify:
urlRules: [
  (v) => (!v || YOUR_REGEX.test(v)) || 'Please enter a correct URL to your website'
],
Note the !v: if nothing is entered, the rule short-circuits to true, which means the field is not required. If something is entered, the regex (or whatever expression you place there) is checked.
There are other options, like an if/else inside the rule, but I prefer this because it reads better.
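The rule above is a plain function, so its behavior is easy to check outside Vuetify. A minimal sketch (URL_RE here is a deliberately simplified placeholder; use your real URL regex in the app):

```javascript
// Deliberately simplified URL pattern, purely for illustration.
const URL_RE = /^https?:\/\/\S+\.\S+$/;

// Empty value short-circuits to true (field not required);
// a non-empty value must match the regex or the error string is returned.
const rule = (v) => (!v || URL_RE.test(v)) || 'Please enter a correct URL to your website';

console.log(rule(''));                   // true
console.log(rule('http://example.com')); // true
console.log(rule('not a url'));          // 'Please enter a correct URL to your website'
```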
You can do it like below
urlRules: [
  v => (v && v.length > 0 && !YOUR_REGEXP.test(v)) ? 'Please enter a correct URL to your website' : true
],
I solved this by maintaining separate validation functions as follows:
export const emailFormat = v =>
  /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/.test(v) ||
  "E-mail must be valid";

export const emptyEmailFormat = v => {
  if (v && v.length > 0) {
    return /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/.test(v) ||
      "E-mail must be valid";
  } else {
    return true;
  }
};

emailRules = {
  isRequired: this.question.mandatory ? required : true,
  emailFormat: this.question.mandatory ? emailFormat : emptyEmailFormat
};
I got this idea after posting the question: use a regex that accepts an empty value alongside the pattern I want.
Here is what I used:
urlRules: [
  v => /^$|[-a-zA-Z0-9@:%_\+.~#?&//=]{2,256}\.[a-z]{2,4}\b(\/[-a-zA-Z0-9@:%_\+.~#?&//=]*)?/gi.test(v) || 'Please enter a correct URL to your website'
],
If there is a better answer, please share it with me.

Pagination with Sequelize using limit takes 4 times longer to load. How can I optimize?

I am currently using Sequelize with Node.js and wanted to incorporate pagination. The code works, but Sequelize now takes 4 times longer to load than it used to; 13 seconds is not an acceptable loading time for a page. I tried both findAll and findAndCountAll, but as soon as I add the limit option it becomes really slow. Here is the query used in Node.js:
return req.models.BookingGroup.findAndCountAll({
  attributes: group_attrs,
  where: group_where,
  include: [
    {
      model: req.models.Booking,
      attributes: book_attrs,
      where: booking_where,
      include: [
        {
          model: req.models.Guest,
          attributes: ['id', 'firstname', 'lastname', 'email'],
        },
      ],
    },
  ],
  offset,
  limit,
})
  .then(({ count, rows }) => {
    res.send(200, { count, groups: rows });
    return { count, groups: rows };
  })
  .catch(err => console.log("##error ", err));
Am I doing something wrong? It only returns 70 entries; I don't think it should take this long. I haven't found anything online, and I don't know whether it is a problem with Sequelize, but any help is appreciated!
I came across the same performance issue when using findAndCountAll.
In my case, Sequelize generated a lengthy JOIN statement for findAndCountAll (you can check this by passing the option logging: console.log).
Instead of findAndCountAll, I used .count() to get the number of rows and .findAll() to get the results. This turned out to be much faster than findAndCountAll():
const count = await req.models.BookingGroup.count({
  where, include, distinct: true, ...
});
const bookingGroup = await req.models.BookingGroup.findAll({
  where, include, attributes, limit, offset, ...
});
res.send({ bookingGroup, count });
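Either way, limit and offset are plain page arithmetic. A small sketch (assuming 1-based page numbers; these helper names are my own, not Sequelize API):

```javascript
// Translate a 1-based page number into Sequelize's limit/offset options.
function pageToOptions(page, perPage) {
  return { limit: perPage, offset: (page - 1) * perPage };
}

// Number of pages, given the total row count from the separate count() call.
function totalPages(count, perPage) {
  return Math.ceil(count / perPage);
}

console.log(pageToOptions(3, 20)); // { limit: 20, offset: 40 }
console.log(totalPages(70, 20));   // 4
```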

ElasticSearch analyzed fields

I'm building my search but need to analyze one field with two different analyzers: one for stemming (snowball) and one that keeps the full value as a single token (keyword). I tried to get this to work with the following index settings:
curl -X PUT "http://localhost:9200/$IndexName/" -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "analyzer1": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "standard", "lowercase", "stop", "snowball", "my_synonyms" ]
        }
      },
      "filter": {
        "my_synonyms": {
          "type": "synonym",
          "synonyms_path": "synonyms.txt"
        }
      }
    }
  },
  "mappings": {
    "product": {
      "properties": {
        "title": {
          "type": "string",
          "search_analyzer": "analyzer1",
          "index_analyzer": "analyzer1"
        }
      }
    }
  }
}';
The problem comes when searching on a single word in the title field. If it's populated with "The Cat in the Hat", it is stored as the single token "The Cat in the Hat", and if I search for cats I get nothing back.
Is this even possible to accomplish, or do I need two separate fields, one analyzed with keyword and the other with snowball?
I'm using NEST in VB code to index the data, if that matters.
Thanks
Robert
You can apply two different analyzers to the same field using the fields property (previously known as multi fields).
My VB.NET is a bit rusty, so I hope you don't mind the C# examples. If you're using the latest code from the dev branch, Fields was just added to each core mapping descriptor so you can now do this:
client.Map<Foo>(m => m
  .Properties(props => props
    .String(s => s
      .Name(o => o.Bar)
      .Analyzer("keyword")
      .Fields(fs => fs
        .String(f => f
          .Name(o => o.Bar.Suffix("stemmed"))
          .Analyzer("snowball")
        )
      )
    )
  )
);
Otherwise, if you're using NEST 1.0.2 or earlier (which you likely are), you have to accomplish this via the older multi field type way:
client.Map<Foo>(m => m
  .Properties(props => props
    .MultiField(mf => mf
      .Name(o => o.Bar)
      .Fields(fs => fs
        .String(s => s
          .Name(o => o.Bar)
          .Analyzer("keyword"))
        .String(s => s
          .Name(o => o.Bar.Suffix("stemmed"))
          .Analyzer("snowball"))
      )
    )
  )
);
Both ways are supported by Elasticsearch and do exactly the same thing: apply the keyword analyzer to the primary bar field and the snowball analyzer to the bar.stemmed field. stemmed is just the suffix I chose in these examples; you can use whatever suffix you like. In fact, you don't need a suffix at all: you can name the multi field something completely different from the primary field.
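For reference, the raw mapping both snippets generate looks roughly like this (sketched against the 1.x-era string type used in the question; the index name myindex and type name foo are placeholders):

```shell
curl -X PUT "http://localhost:9200/myindex/foo/_mapping" -d '{
  "foo": {
    "properties": {
      "bar": {
        "type": "string",
        "analyzer": "keyword",
        "fields": {
          "stemmed": {
            "type": "string",
            "analyzer": "snowball"
          }
        }
      }
    }
  }
}'
```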

EVAL inside grok logstash

I am trying to add a new field in the grok filter that is supposed to be an arithmetic expression over the fields extracted by the grok match command.
Unfortunately, I was not able to figure out the correct syntax for that... Anybody?
I found somewhere that {(8*6)} is supposed to return 48, but what about variables instead of constants?
====
if [type] == "f5" {
  grok {
    match => [ message, "...%{WORD:was_status}...%{NUMBER:hour}hr:%{NUMBER:min}min:%{NUMBER:sec}sec" ]
    add_field => [ "duration_%{was_status}", "\{((%{hour} * 3600) + (%{min} * 60) + %{sec})}" ]
  }
}
====
I got the field, but the EVAL is obviously not evaluated:
message: .... [ was down for 0hr:0min:4sec ]
duration_down: {((0 * 3600) + (0 * 60) + 4)}
Thanks a lot,
Yuri
There is an outstanding feature request for a math filter, but I'm not aware of any such feature at this time.
In the meantime, you can use the ruby filter to run arbitrary Ruby code on your event.
Here's a simple example:
input {
  generator {
    count => 1
    message => "1 2 3"
  }
}
filter {
  grok {
    match => ["message", "%{NUMBER:a:int} %{NUMBER:b:int} %{NUMBER:c:int}"]
  }
  ruby {
    code => "event['sum'] = event['a'] + event['b'] + event['c']"
  }
}
output {
  stdout {
    codec => rubydebug {}
  }
}
Note that grok will usually parse values into strings. If I hadn't converted them to integers with :int, Ruby would have handled the + operator as string concatenation (and sum would have ended up equaling "123").
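The pitfall is the same in any dynamically typed language; in JavaScript terms:

```javascript
// Unconverted grok captures behave like strings, so + concatenates ...
const asStrings = '1' + '2' + '3';
// ... while values converted to numbers are actually summed.
const asNumbers = 1 + 2 + 3;

console.log(asStrings); // "123"
console.log(asNumbers); // 6
```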
Your ruby filter might look more like this (converting the captured strings to integers, per the note above, since your grok pattern doesn't use :int):
ruby {
  code => "event['duration_down'] = event['hour'].to_i*3600 + event['min'].to_i*60 + event['sec'].to_i"
}