How to generate Python files using the ANTLR v4 plugin in IntelliJ?

When I click "Generate ANTLR Recognizer", it only generates the Java files, and I can't find how to generate Python files instead of Java files.

Add a language=...; entry to the options { ... } block of your grammar:
grammar Test;

options {
  language=Python3;
}

parse
 : ANY*? EOF
 ;

ANY
 : .
 ;
Or right-click the grammar, choose the Configure ANTLR... option, and then set the language property there.
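Once the Python files are generated, a minimal driver would look something like this (a sketch assuming the generated TestLexer.py / TestParser.py sit next to your script and the antlr4-python3-runtime package is installed):
from antlr4 import InputStream, CommonTokenStream
from TestLexer import TestLexer
from TestParser import TestParser

# Feed some text through the lexer/parser generated from the grammar above
input_stream = InputStream("any input at all")
lexer = TestLexer(input_stream)
tokens = CommonTokenStream(lexer)
parser = TestParser(tokens)
tree = parser.parse()  # 'parse' is the start rule declared in the grammar
print(tree.toStringTree(recog=parser))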

Related

Error when running latexpdf for Cookiecutter template

This is part of the Sphinx document I'm trying to generate a PDF for:
Overview
--------
Use this `Cookiecutter <https://cookiecutter.readthedocs.io>`_ template to generate
an empty Python package. Features include:
* Boilerplate tests and documentation
* `Python setup configuration`_
* Open source software license
* Code style checking via `pre-commit`_
* `GitLab CI/CD integration`_
* `Editorconfig`_
* Miscellaneous files, such as `Changelog`_
* A `README`_
.. _Python setup configuration: {{cookiecutter.project_slug}}/setup.py
.. _pre-commit: {{cookiecutter.project_slug}}/.pre-commit-config.yaml
.. _GitLab CI/CD integration: {{cookiecutter.project_slug}}/.gitlab-ci.yml
.. _Editorconfig: {{cookiecutter.project_slug}}/.editorconfig
.. _Changelog: {{cookiecutter.project_slug}}/CHANGELOG.rst
.. _README: {{cookiecutter.project_slug}}/README.rst
I'm using make latexpdf. This generally works correctly, except for this bit: I can't get LaTeX to cooperate with these Cookiecutter links, though they look fine in HTML. I've tried formatting the links in various ways in Sphinx, but it always ends up generating the same LaTeX, which produces the following error:
! Extra }, or forgotten \endgroup.
<recently read> }
l.129 ...}\}/setup.py}{Python setup configuration}
^^M
The generated LaTeX itself contains:
126:
127: \item {}
128: \sphinxAtStartPar
129: \sphinxhref{\{\{cookiecutter.project\_slug\}\}/setup.py}{Python setup configuration}
130:
I admittedly don't know much about LaTeX, but this line looks fine to me. It seems to escape all the { and } that it should, and open and close the rest properly?
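One possible workaround (an untested sketch, not something from the original post): emit the hyperlinked entries only for the HTML builder and a plain-text variant for LaTeX via the only directive, so the Jinja braces never end up inside \sphinxhref:
.. only:: html

   * `Python setup configuration`_

   .. _Python setup configuration: {{cookiecutter.project_slug}}/setup.py

.. only:: latex

   * Python setup configuration ({{cookiecutter.project_slug}}/setup.py)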

Generate a single file with Telosys

I am learning how to use Telosys to build a code generator for Arduino, and I was wondering whether there is a way to create a single file for all the entities in my DSL. For example, I have the .entity files "Cars" and "Drivers", which generate two .txt files when I generate code. Is there any way to generate the code for both .entity files into a single text file?
Yes, it's possible and it's quite easy. In your ".vm" file you just have to iterate over each entity defined in the model by using one of these entity lists: "$model.allEntites" or "$selectedEntities"
(see the templates doc: https://www.telosys.org/templates-doc/objects/model.html)
Here's an example :
All entities :
#foreach( $entity in $model.allEntites )
. $entity.name : $entity.attributesCount attributes
#end
The "database doc" is a bundle of templates using this kind of generation.
See "database_tables_list.vm" in bundle https://github.com/telosys-templates-v3/database-doc-T300
Don't forget to set the "number of generations" to "1" for this ".vm" file in the "templates.cfg" file in order to generate it only once
Example from "database doc" bundle ( "1" at the end of line ) :
Database tables list (HTML) ; database.html ; dbdoc ; database_tables_list.vm ; 1
In your case, for a text file:
My global text file ; global.txt ; myfolder ; mytemplate.vm ; 1
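And a minimal "mytemplate.vm" matching that line (the template and file names here are just placeholders) could look like this, reusing the same entity properties as the loop above:
## mytemplate.vm : everything generated in a single pass, so a single output file
#foreach( $entity in $model.allEntites )
===== $entity.name ($entity.attributesCount attributes) =====
#end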

ANTLR4 Unicode Parse Not Recognized by grun

Given the following:
grammar Lang;
start: CHAR;
CHAR: [\uE001];
WS: [ \t\r\n]+ -> skip;
When this batch file runs:
@echo off
setlocal
call antlr4 -o .\javatarget LangFile.g4 -encoding UTF-8
cd .\javatarget
call javac LangFile*.java
call grun LangFile Lang -gui -diagnostics -trace -encoding UTF-8
endlocal
@echo on
This error happens when I paste in the Unicode character:

^Z
line 1:0 token recognition error at: '?'
enter Lang, LT(1)=<EOF>
consume [#0,3:2='<EOF>',<-1>,2:0] rule Lang
exit Lang, LT(1)=<EOF>
Despite searching through the other answers (such as the ones about the -encoding option), I cannot seem to get this kind of Unicode (characters from the Private Use Areas) to parse.
Edit: I have version 4.8.
The problem seems to be with the grun tool. Running it manually with Python runs fine, and so does specifying an input file. But directly pasting the content into the console fails. It's good enough for me to fall back to using an input file, but this question will only really be answered once grun's direct input mode works.
Could be an issue with how your grun script handles the input, because when I generate a lexer and parser and run this:
LangLexer lexer = new LangLexer(CharStreams.fromString("\uE001"));
LangParser parser = new LangParser(new CommonTokenStream(lexer));
parser.start();
it parses without any warnings or errors.
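If falling back to an input file is acceptable (as the edit above suggests), a small sketch of that route, assuming a UTF-8 encoded file named input.txt next to the generated classes, would be:
import java.nio.charset.StandardCharsets;
import org.antlr.v4.runtime.*;

public class Main {
    public static void main(String[] args) throws Exception {
        // Read the input explicitly as UTF-8 instead of pasting it into the console
        CharStream input = CharStreams.fromFileName("input.txt", StandardCharsets.UTF_8);
        LangLexer lexer = new LangLexer(input);
        LangParser parser = new LangParser(new CommonTokenStream(lexer));
        parser.start();
    }
}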

Kotlin prints non-English characters as question marks

I am trying to print Hebrew characters from a Kotlin program (running on the console).
All the Hebrew characters are being output as question marks.
I created the following simple test.kts script file for testing:
println("שלום מקוטלין")
// Try to print a simple non-Hebrew character too
println("\u0394") // Greek Delta
The file is properly saved in UTF-8 format.
It prints:
???? ???????
?
I tried running it in Command Prompt, PowerShell (both in its native window and in Windows Terminal), and Git Bash, all of which give the same result. I also tried redirecting the output to a file to rule out display issues in the shells.
To make sure the problem isn't the console itself, I also made simple test.bat, test.ps1, and test.sh files with the following content:
echo "שלום מקוטלין"
All three shells correctly displayed the Hebrew text here, indicating that the problem is in Kotlin's output, not in the shell display. (Though PowerShell requires the file to be saved "UTF-8 with BOM" to display properly, this can't be the issue with Kotlin since Kotlin won't even run a script that is saved with a BOM.)
As far as I can tell, Kotlin should support UTF-8 output by default with no configuration needed.
How can I get the proper output?
Updates:
If I write the output to a file using java.io.File("out.txt").writeText("שלום מקוטלין"), it works properly.
Also, if I open a new PrintStream using val out = java.io.PrintStream(System.out, true, "UTF-8") and then write to it using out.println("שלום מקוטלין"), that works properly too.
Only writing to the console with println is broken.
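For reference, a self-contained sketch of that PrintStream workaround:
import java.io.PrintStream

fun main() {
    // Wrap stdout in a UTF-8 PrintStream instead of calling println directly
    val out = PrintStream(System.out, true, "UTF-8")
    out.println("שלום מקוטלין")
    out.println("\u0394") // Greek Delta
}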
System info:
Windows 10 2004 (Build 19041.450)
Kotlin 1.4.0 (downloaded from GitHub Releases)
Tested with JAVA_HOME pointing to both JRE 1.8.0_261 (Oracle) and 11.0.2 (Oracle OpenJDK).
(Update at bottom)
Partial answer, but I was able to get some Hebrew characters into the console in both Kotlin and Java. It was very painful. I've included some commented-out code to show other things I tried, in case you run into further hurdles.
I saved Tester.kt as UTF-8 with Notepad:
fun main(args: Array<String>) {
    System.setProperty("file.encoding", "UTF8")
    //val charset = Charsets.UTF_8
    //val byteArray = "שלום מקוטלין".toByteArray(charset)
    //System.out.printf("%c", byteArray.toString(charset))
    //System.out.println(Charset.defaultCharset())
    System.out.println("ל")
}
kotlinc.bat .\Tester.kt -include-runtime -d Tester.jar
Now, this led to another mess, which I discovered by trying to copy and paste Hebrew characters into PowerShell/Cmd. When copying and pasting, the ? marks showed up right off the bat. After digging around a bit, it seems PowerShell ISE is better suited for this (reference below); there, copy and paste worked without any plugins. Then I had to run this:
PS> [Console]::OutputEncoding = [System.Text.Encoding]::UTF8
Because on my system, running the following showed:
PS> [Console]::OutputEncoding
IsSingleByte : True
BodyName : iso-8859-1
EncodingName : Western European (Windows)
HeaderName : Windows-1252
WebName : Windows-1252
WindowsCodePage : 1252
IsBrowserDisplay : True
IsBrowserSave : True
IsMailNewsDisplay : True
IsMailNewsSave : True
EncoderFallback : System.Text.InternalEncoderBestFitFallback
DecoderFallback : System.Text.InternalDecoderBestFitFallback
IsReadOnly : True
CodePage : 1252
Then,
java -jar -D"file.encoding=UTF-8" tester.jar
and voila, a single Lamedh
ל
Also, the Java route, which may or may not bring more insight. Tester.java, saved as UTF-8 with Notepad; the imports are partly redundant, yes, but they highlight the relevant classes:
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import static java.nio.charset.StandardCharsets.*;
import java.nio.*;

public class Tester {
    public static void main(String[] args) {
        String str1 = "שלום מקוטלין";
        byte[] ptext = str1.getBytes(UTF_8);
        String value = new String(ptext, UTF_8);
        ByteBuffer byteBuffer = StandardCharsets.UTF_8.encode("ש");
        System.out.println(Charset.defaultCharset());
        System.out.println("שלום מקוטלין");
        System.out.println(value);
        System.out.print(byteBuffer.getChar());
        System.out.printf("Value: %s", value);
    }
}
javac would give:
javac .\Tester.java
.\Tester.java:8: error: unmappable character (0x9D) for encoding windows-1252
System.out.println("╫⌐╫£╫ò╫? ╫₧╫º╫ò╫ÿ╫£╫Ö╫ƒ");
So
javac -encoding UTF-8 .\Tester.java
and voila again, PS ISE only:
PS> java -D"file.encoding=UTF-8" Tester
UTF-8
שלום מקוטלין
שלום מקוטלין
힩Value: שלום מקוטלין
I think this shows there are several hurdles, but it can work with Kotlin, and with println, after making sure the file encoding is correct, the program is compiled and run the right way, and the console output encoding is correct. Hebrew may be particularly difficult due to its right-to-left nature; other characters, like Greek, were easier, I think.
No matter what, I feel your pain, good luck. From what I read, there may be other bottlenecks, like sending Hebrew over a network. This opened my eyes to several things, and I will continue to learn about this myself.
(Update)
Using the second link in the references listed below, you can make two small changes and get Hebrew in PowerShell (not just ISE)!
PS> $OutputEncoding = [console]::InputEncoding = [console]::OutputEncoding = New-Object System.Text.UTF8Encoding
Then set the console font to Courier New.
References:
https://markw.dev/unicode_powershell/
Displaying Unicode in Powershell
https://community.idera.com/database-tools/powershell/ask_the_experts/f/learn_powershell_from_don_jones-24/11793/add-hebrew-to-powershell
https://docs.oracle.com/javase/7/docs/api/java/nio/charset/Charset.html
I want to display Greek unicode characters but I get "?" instead on output
Encode String to UTF-8

How to create a shortcut for user's build system in Sublime Text?

I've just created a build system named XeLaTeX by creating a file named XeLaTeX.sublime-build in the User directory of Sublime Text, whose content is:
{
    "cmd": ["xelatex.exe", "-synctex=1", "-interaction=nonstopmode", "$file_base_name"]
}
What should I do, if I want to bind my F1 key to this specific build system?
Note: Ctrl + B, the default build key, should not be affected. That is to say, I should still be able to use Ctrl + B for the default build system, while F1 triggers the new one at the same time.
Maybe there is another way to achieve this. Adding the following text to Default (Windows).sublime-keymap will execute the command:
{"keys": ["f1"], "command": "exec", "args": {"cmd": ["xelatex.exe","-synctex=1","-interaction=nonstopmode","$file_base_name"]}},
However, $file_base_name is not defined here. Is there any way to pass the current file (base) name to exec?
I figured it out myself.
AFAIK, there is no way to pass the current file name through a key binding, and it's not possible to use a key binding to select a specific build system. Thus, writing a Python plugin is necessary.
There are only three steps.
1. Save the following content to /Data/Packages/User/compile_with_xelatex.py:
import sublime, sublime_plugin

class CompileWithXelatexCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        # Strip the ".tex" extension and hand the path to xelatex via the built-in exec command
        self.view.window().run_command('exec', {'cmd': ["xelatex.exe", "-synctex=1", "-interaction=nonstopmode", self.view.file_name()[:-4]]})
2. Add a line to /Data/Packages/User/Default(<your-plat>).sublime-keymap
{"keys": ["f1"], "command": "compile_with_xelatex"},
3. Open your LaTeX source file with Sublime Text, and then press F1 to compile it with XeLaTeX.
Indeed, it's a little tricky, but it works like a charm for me.
Build systems work by either selecting them specifically in the Tools -> Build System menu, or by using a selector to match a specific syntax. If you want to use a selector, add the following line to your XeLaTeX.sublime-build file (make sure to add a comma after the first line, as the file needs to be valid JSON):
"selector": "text.tex.latex"
The build command is already bound to Ctrl+B and F7, but if you also want it bound to F1, open Preferences -> Key Bindings - User and add the following line if you already have custom key bindings:
{ "keys": ["f1"], "command": "build" }
If the file is empty, just add opening and closing square brackets [ ] at the beginning and end of the file, respectively, as it also needs to be valid JSON.
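So a previously empty key bindings file would end up containing just:
[
    { "keys": ["f1"], "command": "build" }
]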
Now, whenever you open a LaTeX file, you should be able to hit F1 to build it. If for some reason it doesn't work (if, for example, you have other build systems for LaTeX installed by plugins like LaTeXTools), then just select Tools -> Build Systems -> XeLaTeX, and everything should work properly.
Your example will work if you pass the parameter "$file_basename" instead of "$file_base_name".