Hi there, I have a flex rule inside my lexer definition:
operators "[]"|"[]="|"[]<"|".."|"."|".="|"+"|"+="|"-"|"-="|"/"|"/="|"*"|"*="|"%"|"%="|"++"|"--"|"^"|"^="|"~"|"&"|"&="|"|"|"|="|"<<"|"<<="|">>"|"!"|"<"|">"|">="|"<="|"=="|"!="|"&&"|"||"|"~="
Is there any way to split this rule across multiple lines to keep it clearer?
I tried using \ just as with macros, but flex does not seem to accept it :(
PS: I don't want to split the rule into multiple sub-rules, only to split its regex across multiple lines to keep the code clearer.
No, that's not possible with flex (I've looked through the flex sources once to confirm this).
Strictly speaking, the question is a bit misleading, since you're talking about a name definition, not a rule.
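To illustrate the distinction, here is a minimal flex skeleton (operator list shortened; link with -lfl or add your own main to run it). The `operators` line in the definitions section is a name definition; the `{operators}` line after the first `%%` is a rule.

```
%option noyywrap
%{
#include <stdio.h>
%}
operators   "+"|"+="|"-"|"-="|"*"|"/"
%%
{operators}   { printf("operator: %s\n", yytext); }
.|\n          { /* ignore everything else */ }
%%
```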
Background: I am making an extension for a little-known programming language.
While creating a VS Code extension, I have run into an issue with block comments. When starting a new line inside a comment, I was hoping to automatically insert an asterisk (*) and a space on the new line. The only way I have found to do this is with the onEnterRules.
The issue I have with this is that onEnterRules can only see the line before and the line after the cursor. As far as I can tell, this could lead to inaccurate comments. One case could be a multiplication expression stretching across multiple lines.
Am I thinking about this correctly, and if I am, are there any solutions to add this asterisk?
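For what it's worth, here is a minimal sketch of the onEnterRules approach, registered programmatically with a hypothetical language id 'mylang'; the regular expressions are adapted from the patterns VS Code's built-in JavaScript/TypeScript support uses for /** ... */ comments:

```
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.languages.setLanguageConfiguration('mylang', {
      onEnterRules: [
        {
          // Enter pressed between "/**" and "*/": indent and insert " * "
          beforeText: /^\s*\/\*\*(?!\/)([^\*]|\*(?!\/))*$/,
          afterText: /^\s*\*\/$/,
          action: { indentAction: vscode.IndentAction.IndentOutdent, appendText: ' * ' },
        },
        {
          // Enter pressed after an unclosed "/** ...": keep adding " * "
          beforeText: /^\s*\/\*\*(?!\/)([^\*]|\*(?!\/))*$/,
          action: { indentAction: vscode.IndentAction.None, appendText: ' * ' },
        },
        {
          // Enter pressed on a " * ..." continuation line inside the comment
          beforeText: /^(\t|[ ])*[ ]\*([ ]([^\*]|\*(?!\/))*)?$/,
          action: { indentAction: vscode.IndentAction.None, appendText: '* ' },
        },
      ],
    })
  );
}
```

Note that these rules still only look at the text immediately around the cursor, so the multi-line ambiguity described above (for example a multiplication spanning several lines) is not something onEnterRules alone can resolve.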
The VS Code documentation has a clear, if short, explanation of how to do custom indentation, but doing this has no effect. Whatever I put in the "indentationRules", it fails to match even the simplest patterns, and it doesn't stop the built-in indentation from working either: VS Code goes right on using the default indentation described in the link above. All the other parts of the language extension are working, so it's not a general problem; it's specific to getting these indentation rules to work. I've tried to find examples to copy on the internet, but with no success. (I found an example of a grammar for Python, but the only mention of indentation in it was as a possible kind of error, which is puzzling.)
Thanks for your help.
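For reference, this is the shape the documentation describes for indentationRules in language-configuration.json; the patterns below are only placeholders for an assumed begin/end-style language:

```
{
  "indentationRules": {
    "increaseIndentPattern": "^.*\\b(begin|then|do)\\b\\s*$",
    "decreaseIndentPattern": "^\\s*(end)\\b.*$"
  }
}
```

Remember that backslashes in these patterns have to be escaped for JSON (e.g. \\s rather than \s).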
I would like something like the Markdown options (of SO?), where
    four leading spaces makes it look like code
    blocks have been written
Google (Hangouts) Chat only supports basic formatting (not rich text or full Markdown). For your specific inquiry, wrap the text in a pair of triple backticks, e.g.,
```
Hello
World
```
For this and other formatting directives, see either the consumer help page (for end-users), or the simple messages page (for developers) in Google's documentation.
If it's a single line, you can wrap it in single backticks.
`Hello World`
I'm working with Huge's new Styleguide templates and am starting to wrap my head around Jade syntax. That said, I can't seem to find any documentation related to how the author created image paths. The syntax used is:
img.huge-sidebar__logo.clearfix(src='styleguide/assets/images/#{public.styleguide._data.logoImage}')
The part I'm not getting is the section of the path that appears to be an include:
#{public.styleguide._data.logoImage}
Can anyone shed some light on what this is called and what it's doing?
What you are seeing is an interesting application of Jade's interpolation functionality, which can be used in plaintext strings, as is the case with src='...'.
It looks different (with the dots) because it's accessing a nested JavaScript object rather than a simple variable.
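A small sketch of the same idea with made-up data (the shape of the locals object is assumed):

```
//- assume the template is rendered with locals like:
//- { public: { styleguide: { _data: { logoImage: 'logo.png' } } } }
img.huge-sidebar__logo(src='styleguide/assets/images/#{public.styleguide._data.logoImage}')
//- which renders roughly as:
//- <img class="huge-sidebar__logo" src="styleguide/assets/images/logo.png"/>
```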
I am using ANTLR4 to parse code in my NetBeans Platform application. I have successfully implemented syntax highlighting using ANTLR4 and NetBeans mechanisms.
I have also implemented simple code completion for two of my tokens. At the moment I am using a simple implementation from a tutorial, which searches for a whitespace and starts the completion process from there. This works, but it requires the user to type a whitespace before starting code completion.
My question: is it possible or even contemplated using ANTLR's lexer to determine which tokens are currently read from the input to determine the correct completion item?
I would appreciate every pointer in the right direction to improve this behaviour.
Not really an answer, but I do not have enough reputation points to post comments.
is it possible or even contemplated using ANTLR's lexer to determine which tokens are currently read from the input to determine the correct completion item?
Have a look here: http://www.antlr3.org/pipermail/antlr-interest/2008-November/031576.html
and here: https://groups.google.com/forum/#!topic/antlr-discussion/DbJ-2qBmNk0
Bear in mind that the first post was written in 2008 and the current ANTLR v4 is very different from the version available at the time, which is why Sam's opinion on this topic appears to have evolved.
My personal experience: most of what you are asking is probably doable with ANTLR, but you would have to know ANTLR very well. A more straightforward option is to use ANTLR to gather information about the context and then apply your own heuristics to decide what needs to be shown in that context.
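As a rough sketch of that "gather information about the context" idea (the generated lexer name MyLangLexer is assumed), you can lex just the text up to the caret and inspect the last real token:

```
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.Token;

public class CompletionContext {

    /** Returns the token the caret is sitting on/after, or null if there is none. */
    public static Token tokenAtCaret(String documentText, int caretOffset) {
        // MyLangLexer is the lexer ANTLR generates from your grammar.
        // Lex only the text before the caret; the last non-EOF token is
        // what the user is currently typing (if whitespace tokens are kept
        // in your grammar, you may need to skip over them here).
        MyLangLexer lexer = new MyLangLexer(
                CharStreams.fromString(documentText.substring(0, caretOffset)));
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        tokens.fill();
        // tokens.size() always includes the EOF token, so look one before it.
        int size = tokens.size();
        return size >= 2 ? tokens.get(size - 2) : null;
    }
}
```

From the token type (and perhaps a lightweight look at the tokens preceding it) you can then apply your own heuristics to decide which completion items make sense at that position.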
The ANTLRv3 grammar https://sourceware.org/git/?p=frysk.git;a=blob_plain;f=frysk-core/frysk/expr/CExpr.g;hb=HEAD implements context-sensitive completion of C expressions (no macros).
For instance, if fed the string:
a_struct->a<tab>
it would list just the fields of "a_struct" starting with "a" (the tab could, technically, be any character or marker).
The technique it used was to:
modify a C grammar to recognize both IDENT and IDENT_TAB tokens
for IDENT_TAB, capture the partial expression's AST and the "TOKEN_TAB", and throw them back to 'main' (there are hacks to help capture the AST)
'main' then performs a type-eval on the partial expression (computing the expression's type, not its value) and uses that to expand TOKEN_TAB
The same technique, while not exactly ideal, can certainly be used in ANTLRv4.
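As an illustration of the first step in ANTLR4 lexer syntax (token names and character classes assumed):

```
// an identifier immediately followed by the completion marker (a literal tab here)
IDENT_TAB : [a-zA-Z_] [a-zA-Z0-9_]* '\t' ;
// an ordinary identifier
IDENT     : [a-zA-Z_] [a-zA-Z0-9_]* ;
```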