In a Spring Boot test written in Kotlin, I have a test method annotated with @Sql like this:
@Sql(statements = [
"""insert into employee (created, name)
values (current_time, 'An, Ny')""",
"""insert into shift (created, progress)
values (current_time, 80)"""
])
These are not recognized by IntelliJ as SQL and get no syntax highlighting (beyond that of an ordinary string). I can add a comment such as // language=sql in front of every such string. This, however, is a nuisance (in reality it is more like 6 statements).
I'd much prefer a single comment or annotation spanning the whole contents of the @Sql annotation, but that doesn't seem to work: IntelliJ offers to add @Language("SQL") in front of the whole annotation, but then it does not recognize the contents as SQL.
There are settings to configure language injections, even down to the parameter level, but that is overwhelming. Is that even the right road, or is there something simpler?
As of IntelliJ IDEA 2019.3, the issue is partially fixed: adding a single //language=SQL comment suffices to highlight all the SQL (IDEA-158709). The annotation alone is not yet enough (IDEA-228131).
I'm removing the builder pattern in multiple places. The following example would help me with the task, but mainly I'd like to learn more about using live templates.
Preexisting code:
Something s = Something.builder()
.a(...)
.b(bbb)
.build();
and I'd like to rewrite it to:
Something s = new Something();
s.setA(...);
s.setB(bbb);
Part of this can be done trivially using IntelliJ's regex search and replace, with pattern \.(.*)$ and replacement .set\u$1. It could be improved, but let's keep it simple.
I can create a surround live template using this variable:
regularExpression(SELECTION, "\\.(.*)", "\\u$1")
but \\u will just be evaluated as a literal u.
Question 1: is it possible to somehow get the \u functionality in here?
But I might get around this differently, so why not try this live template variable instead:
regularExpression(SELECTION, "\\.(.)(.*)", concat(capitalize($1), "$2"))
But this does not seem to work either: .abc is replaced with bc.
Question 2: why? What would the correct template look like? And, even if it worked, it would probably behave incorrectly for multiline input. How can I make it work for multiline input as well?
Sorry for the questions; I didn't find any examples of live templates beyond trivial replacements.
No, there is no \u functionality in the regularExpression() Live Template macro. It is just a way to call String.replaceAll(), which doesn't support \u.
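You can see that behaviour with plain String.replaceAll(), which is roughly what the macro does under the hood (a small Java sketch):

// A backslash in a replaceAll() replacement string only escapes the next
// character, so "\u" produces a literal 'u' instead of upper-casing anything.
String result = ".abc".replaceAll("\\.(.*)", "set\\u$1");
System.out.println(result);   // prints "setuabc", not "setAbc"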
You can create a Live Template like this:
set$VAR$
And set the following expression for the $VAR$ variable:
capitalize(regularExpression(SELECTION, "\\.(.*)", "$1"))
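For example, selecting .b(bbb) (including the leading dot) and applying this as a surround template should expand to something like:

setB(bbb)

The regularExpression() call strips the dot, capitalize() upper-cases the first character, and the literal set in the template body supplies the prefix.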
I was trying to understand the following code:
def() ->commands
if(deferred_passive_abilities != [],
let [{ability: class passive_ability, creature: class creature}] items = [];
let found = false;
map(deferred_passive_abilities,
if(cmd = null, add(items, [value]), [cmd, set(found, true)])
where cmd = value.ability.static_effect(me, value.creature));
if(found,
set(deferred_passive_abilities, items);
evaluate_deferred_passive_abilities(),
set(deferred_passive_abilities, []))
)
Haskell appears to have both let and where, but I didn't learn much from a superficial reading of the Haskell docs. Haskell also has let...in, which I didn't understand, but it would be good to know whether FFL has that too.
So, what is the significance of using let versus where? Was it necessary to use let here? (Also, possibly another question: why does it need those semicolons?)
Using let introduces a variable that can be modified. Note how found and items are modified. By contrast, where always introduces immutable symbols.
Semi-colons are used in FFL to create a command pipeline. Normally in FFL, an entire formula is evaluated, resulting in a command or list of commands, and then the commands are executed.
When a semi-colon is present, everything before the semi-colon is treated as an entirely separate formula from everything after it. The first formula is evaluated and executed, and then the second formula is evaluated and executed.
Semi-colons effectively allow a much more procedural programming style in FFL; without semi-colons it is a purely functional language.
I never knew of let in FFL before this; it must be very rare.
Regardless of those insights, the semicolon is absolutely necessary in order to force execution before the bound variable is used. In other words, until the semicolon is reached, the variable does not exist and has no bound value.
This is a big difference from where, which doesn't need semicolons.
Given that the semicolon is not a construct for complete beginners, I would somewhat recommend that beginners stick with where for variables until they understand the trickery of semicolons.
Hello ANTLR creators/users,
Some context: I am using the PlSql ANTLR4 parser to do some lightweight transpiling of queries from Oracle SQL to, let's say, Spark SQL. I have my listener class set up, which extends the base listener.
Example of an issue:
Let's say the input is something like:
SELECT to_char(to_number(substr(ATTRIBUTE_VALUE,1,4))-3)||'0101') from xyz;
Now, I'd like to replace || with CONCAT and to_char(...) with CAST(... as STRING), so that the final query looks like:
SELECT CONCAT(CAST(to_number(substr(ATTRIBUTE_VALUE,1,4))-3) as STRING),'0101') from xyz;
In my listener class, I am overriding two functions from the base listener to do this: concatenation and string_function. In those, I am using a TokenStreamRewriter's replace to make the necessary transformations. Since the TokenStreamRewriter is evaluated lazily, I am running into this issue:
java.lang.IllegalArgumentException: replace op boundaries of
<ReplaceOp#[#38,228:234='to_char',<2193>,3:15]..[#53,276:276=')',
<2214>,3:63]:"CAST (to_number(substr(ATTRIBUTE_VALUE,1,4))-3 as STRING)">
overlap with previous <ReplaceOp#[#38,228:234='to_char',<2193>,3:15]..
[#56,279:284=''0101'',<2209>,3:66]:"CONCAT
(to_char(to_number(substr(ATTRIBUTE_VALUE,1,4))-3),'0101')">
Clearly, the issue is my two listener functions attempting to replace/transform text on overlapping boundaries.
Is there any workaround for this kind of overlapping-boundary issue in ANTLR4? I'm sure folks probably run into this sort of thing all the time.
I'd appreciate any workarounds, even dirty ones, at this point :)
I do realize that ANTLR4 does not allow us to modify the original AST, otherwise this would have been a little easier to solve.
Thanks!
A look at how TokenStreamRewriter works leads to the following understanding:
First, a list of all modification operations is built.
Then, you invoke getText().
At this point the modification operations are reduced. The idea, for example, is to merge multiple inserts together into one operation. Its role is also to avoid multiple replaces on the same data (but I will expand on this point later).
Every token is then read; if there is a modification listed for that token's index, TokenStreamRewriter performs the operation, otherwise it just emits the token as it is.
Let's have a look at how the modification operations are implemented:
For insert, TokenStreamRewriter basically just emits the string to be added at the current token index, and then does index+1, effectively moving on to the next token.
For replace, TokenStreamRewriter replaces a range of tokens with the new string, and sets the new index to the end of this range.
So, for TokenStreamRewriter, overlapping replaces are not possible: when you replace, you jump to the end of the replaced token range. In particular, if you removed the overlap checks, only the first replace would be applied, because afterwards the token index is already past the other replaces.
Basically, this is done because there is no easy way to tell which tokens should be replaced when replaces overlap. You would need symbol recognition and matching for that.
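In code, that laziness looks like this (a minimal sketch; tokens is your CommonTokenStream, and concatCtx / toCharCtx stand for the two overlapping contexts handled by your listener methods):

// Both replaces are merely recorded here; nothing is validated yet.
TokenStreamRewriter rewriter = new TokenStreamRewriter(tokens);
rewriter.replace(concatCtx.getStart(), concatCtx.getStop(), "CONCAT(...)");          // outer token range
rewriter.replace(toCharCtx.getStart(), toCharCtx.getStop(), "CAST(... as STRING)");  // inner, overlapping range

// Only when the operation list is reduced does the overlap surface:
String rewritten = rewriter.getText();
// -> IllegalArgumentException: replace op boundaries of ... overlap with previous ...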
So, what you are trying to do is the following (for each step, the part between '*' is what is modified):
*SELECT to_char(to_number(substr(ATTRIBUTE_VALUE,1,4))-3)||'0101')* from xyz;
|
V
SELECT CONCAT(*to_char(to_number(substr(ATTRIBUTE_VALUE,1,4))-3)*,'0101') from xyz;
|
V
SELECT CONCAT(CAST(to_number(substr(ATTRIBUTE_VALUE,1,4))-3) as STRING),'0101') from xyz;
To achieve your transformation, you could instead do a replace of:
'to_char' -> 'CONCAT(CAST'
'||' -> ' as STRING),'
And by using a bit of intelligence while inspecting your tokens, such as checking whether there is a '||' among them to know whether it is a string concatenation, you would know what to replace.
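A sketch of what that could look like in a listener (the listener and context class names follow your description of the PlSql grammar; treat the details as assumptions to adapt, e.g. whether '||' is a single token or two '|' tokens):

import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.TokenStreamRewriter;
import org.antlr.v4.runtime.tree.TerminalNode;

public class OracleToSparkListener extends PlSqlParserBaseListener {
    private final TokenStreamRewriter rewriter;

    public OracleToSparkListener(TokenStreamRewriter rewriter) {
        this.rewriter = rewriter;
    }

    @Override
    public void enterString_function(PlSqlParser.String_functionContext ctx) {
        // Rewrite only the single 'to_char' token; the rest of the call stays untouched.
        Token first = ctx.getStart();
        if ("to_char".equalsIgnoreCase(first.getText())) {
            rewriter.replace(first, "CONCAT(CAST");
        }
    }

    @Override
    public void enterConcatenation(PlSqlParser.ConcatenationContext ctx) {
        // Rewrite only the '||' operator token between the two operands.
        for (int i = 0; i < ctx.getChildCount(); i++) {
            if (ctx.getChild(i) instanceof TerminalNode) {
                Token t = ((TerminalNode) ctx.getChild(i)).getSymbol();
                if ("||".equals(t.getText())) {
                    rewriter.replace(t, " as STRING),");
                }
            }
        }
    }
}

Because each override now replaces exactly one token, the recorded operations can no longer overlap, and rewriter.getText() yields the transformed query.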
regards
The way I solve this in multiple projects based on ANTLR is: I translate the ANTLR parse tree into an AST written using Kolasu, an open-source library we developed at Strumenta.
Kolasu has all sorts of utilities to process and mutate ASTs. For all non-trivial projects I end up doing transformations on the AST.
I just wanted to know how you put the cursor in a specific place in a live template for IntelliJ.
For example:
# $var$ is an instance of the $objectType$ class
assert isinstance($var$, $objectType$)$END$
What happens here is that your cursor gets dragged to $var$ in the comment string first and then to the other values inside assert. What I wanted to know is how you choose where the cursor goes first.
I've read the documentation, but this is not mentioned, although a lot of other things are.
You can arrange the order in which your variables are visited. You'll find the information under bullet number five in this IntelliJ help document: http://www.jetbrains.com/idea/webhelp/creating-and-editing-template-variables.html
To arrange variables in the order you want IntelliJ IDEA to switch between associated input fields, use the Move Up and Move Down buttons.
Edit
You have to update the template definition to something like this:
# $varComment$ is an instance of the $objectTypeComment$ class
assert isinstance($var$, $objectType$)$END$
And then you define the order and expressions, something like this (I didn't have any particularly good expressions for var and objectType for you):
Since you fill in Skip if defined for the two comment variables, they will just take the values from var and objectType and fill them in. This will do exactly what you are looking for :-)
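For this template, the Edit Template Variables dialog could be filled in roughly like this (the expressions shown are just one reasonable choice):

var                  no expression              visited first
objectType           no expression              visited second
varComment           expression: var            Skip if defined
objectTypeComment    expression: objectType     Skip if defined

With var and objectType moved to the top of the list using Move Up, the cursor visits the arguments of assert first, and the comment variables pick up the same values automatically.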
The following conversion
SELECT to_tsvector('english', 'Google.com');
returns this:
'google.com':1
Why doesn't the TSearch2 engine return something like this?
'google':2, 'com':1
Or how can I make the engine return the exploded string as I wrote above?
I just need "Google.com" to be findable by "google".
Unfortunately, there is no quick and easy solution.
Denis is correct in that the parser is recognizing it as a hostname, which is why it doesn't break it up.
There are 3 other things you can do, off the top of my head.
You can disable the host parsing in the database. See the Postgres documentation for details, e.g. something like:
ALTER TEXT SEARCH CONFIGURATION your_parser_config
    DROP MAPPING FOR host, url, url_path;
You can write your own custom dictionary.
You can pre-parse your data in some manner before it's inserted into the database (for example, splitting all domains up front).
I had a similar issue to you last year and opted for solution (2), above.
My solution was to write a custom dictionary that splits words up on non-word characters. A custom dictionary is a lot easier and quicker to write than a new parser. You still have to write C though :)
The dictionary I wrote would return something like 'www.facebook.com':4, 'com':3, 'facebook':2, 'www':1 for the 'www.facebook.com' domain (we had a unique-ish scenario, hence the 4 results instead of 3).
The trouble with a custom dictionary is that you will no longer get stemming (i.e. www.books.com will come out as www, books and com). I believe there is some work (which may have been completed) to allow chaining of dictionaries, which would solve this problem.
First off, in case you're not aware: tsearch2 is deprecated in favor of the built-in functionality:
http://www.postgresql.org/docs/9/static/textsearch.html
As for your actual question, google.com gets recognized as a host by the parser:
http://www.postgresql.org/docs/9.0/static/textsearch-parsers.html
If you don't want this to occur, you'll need to pre-process your text accordingly (or use a custom parser).
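If you go the pre-processing route, the idea is simply to break host-like values apart before they ever reach to_tsvector. A minimal sketch in Java, assuming you prepare the searchable text in application code before the insert:

// Split host-like values on dots (and other non-word characters) so that
// 'Google.com' is indexed as two lexemes and a search for 'google' matches.
String raw = "Google.com";
String searchable = raw.replaceAll("[^\\p{L}\\p{Nd}]+", " ");   // "Google com"
// store `searchable` (or pass it to to_tsvector) alongside the original value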