Get real position in lexer, with example - netbeans-7

I am writing an editor with NetBeans 7 and ANTLR 4.
I have this line in my .g4 file:
Label : {(getCharPositionInLine()==0)}? ID;
That works well for static files, but while editing, getCharPositionInLine() often returns 0 at positions that are not actually at the start of a line.
How do I get the real position in the lexer?
I noticed that while editing, the text editor sends the lexer not the whole text but only the changed fragment, and the lexer works on that fragment alone. I don't know how to change this.
I created an example that reproduces the problem:
https://github.com/daimor/SimpleANTLR

If your input stream does not represent a stream starting at the beginning of the file, then you need to initialize the lexer with the line/column where the stream actually starts:
lexer.getInterpreter().setLine(actualLine);
lexer.getInterpreter().setCharPositionInLine(actualCharPositionInLine);
If you do not do this, the lexer will always assume that the input stream starts at the beginning of the file.
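A minimal sketch of that initialization, assuming a Java setup where fragmentText holds just the edited fragment and startLine/startColumn describe where that fragment begins in the full document (all three names are hypothetical, and MyLexer stands for your generated lexer class):

import org.antlr.v4.runtime.ANTLRInputStream;

// Lex only the changed fragment, but report document-wide positions.
ANTLRInputStream input = new ANTLRInputStream(fragmentText);
MyLexer lexer = new MyLexer(input);
// Tell the lexer's interpreter where the fragment really starts, so that
// getLine() and getCharPositionInLine() (and therefore the {...}? predicate)
// see positions in the full document rather than in the fragment.
lexer.getInterpreter().setLine(startLine);
lexer.getInterpreter().setCharPositionInLine(startColumn);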

Related

Removing/handling newlines in a simple text import class

I have an input file that I want to process line by line with the string Split function, keying off the Type field. However, the Description field sometimes contains data with embedded newlines, which breaks my file reader since it relies on StreamReader's ReadLine() function.
Handled:
Type|Name|User|Description
Type|Name|User|Description
Unhandled:
Type|Name|User|Description line 1
Description Line 2
Type|Name|User|Description
Aside from validating on 'Type' for each line and reading ahead until the next Type field appears, are there any other ways folks can come up with to properly read this file?
My solution was to have the file's creator replace newline characters in the Description field with another unique character that I can later convert back. I'm still interested in solutions from the file reader's perspective, though.
I know I'm talking to myself a lot here, but I found another solution, which is to strip the line feeds only, since the output file's creator wrote out carriage returns for each line.
You could easily add a conditional check on whether the Split array contains more than one element, which would indicate that it's a line you want to parse.
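A minimal sketch of that check, assuming pipe-delimited records where a continuation line contains no '|' at all (the file name and the process step are placeholders):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class RecordReader {
    public static void main(String[] args) throws IOException {
        StringBuilder record = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new FileReader("input.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.split("\\|").length > 1) {
                    // The line contains delimiters, so it starts a new record:
                    // flush the previous one and start accumulating again.
                    if (record.length() > 0) {
                        process(record.toString());
                    }
                    record = new StringBuilder(line);
                } else if (record.length() > 0) {
                    // No delimiter: a continuation of the previous Description.
                    record.append(' ').append(line);
                }
            }
            if (record.length() > 0) {
                process(record.toString());
            }
        }
    }

    // Placeholder for whatever handling each complete record needs.
    private static void process(String record) {
        String[] fields = record.split("\\|");
        System.out.println("Type=" + fields[0]);
    }
}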

Delete the first characters before parsing an XML file (SAX)

I have two XML files, apparently identical, named wrong.xml and good.xml.
Their content is the following:
<?xml version="1.0" encoding="utf-16"?>
<tag>
</tag>
The problem is that the XMLReader class (org.xml.sax.XMLReader) reports the following error when parsing wrong.xml:
Content is not allowed in prolog
The reason is that hidden characters exist before the prolog.
I only managed to see these characters using a basic Java file reader, which shows that the first and second characters are -1 and -2 (read as signed bytes; that is 0xFF 0xFE, the UTF-16 little-endian byte order mark):
'-1''-2'<?xml version>......
Notepad, UltraEdit32, WordPad, Notepad++, etc. cannot see them.
My real problem is that I need to read the XML from an FTP server automatically, so I need some way to delete these characters before parsing with XMLReader, without reading and rewriting the whole document, because some documents are very big.
How do I delete the first characters of a file?
You'll have to remove those characters before the parser sees them, but you don't need to read the whole file and write it back out again with those characters removed.
A SAX parser can read from an InputSource based on a Reader. There are many implementations of the Reader interface for reading from a file, URL, or other data source, but you can also wrap whatever your primary Reader is in a FilterReader extension that you write to perform whatever changes the data needs before it goes on.
It isn't difficult to code an extension of FilterReader that drops the first two characters but passes on everything else, and that will do just what you need. If the need to drop those characters isn't known until runtime, but can be detected at that point in a sensible way, this approach lets you do it only when needed. It might make sense to drop all characters before the first '<'.
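A minimal sketch of such a FilterReader, taking the "drop everything before the first <" approach (the class name is made up; constructing the underlying Reader from your FTP stream, with the right charset, is up to you):

import java.io.FilterReader;
import java.io.IOException;
import java.io.Reader;

// Discards every character before the first '<' and passes on the rest.
public class SkipToMarkupReader extends FilterReader {
    private boolean started = false;

    public SkipToMarkupReader(Reader in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int c = in.read();
        // Until the first '<' is seen, silently drop everything.
        while (!started && c != -1 && c != '<') {
            c = in.read();
        }
        started = true;
        return c;
    }

    @Override
    public int read(char[] buf, int off, int len) throws IOException {
        // Funnel the array variant through read() so the skipping applies
        // however the SAX parser chooses to pull characters.
        int count = 0;
        while (count < len) {
            int c = read();
            if (c == -1) {
                return count == 0 ? -1 : count;
            }
            buf[off + count++] = (char) c;
        }
        return count;
    }
}

You would then hand it to the parser with something like new InputSource(new SkipToMarkupReader(yourReader)).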

Removing blank line at end of string before writing to text file?

I've been searching around for this for a couple of hours and can't find anything that does it correctly. When writing a string to a text file, a blank line is output at the end.
Dim writeString As New StreamWriter(path, False)
writeString.WriteLine("Hello World")
writeString.Flush()
writeString.Close()
This will write the following to file:
Hello World
(Blank Line)
I've tried removing the last character of the string (both as a regular string with varString.Substring(0, varString.Length - 1) and as a list of strings with varList.RemoveAt(varList.Count - 1)), but that just removes the literal last character.
I've also tried Replace(vbCrLf, "") and many variations of it, but again they only remove literal newlines created in the string, not the newline at the end that is magically created.
Preferably, I'm looking for a method that removes that magical newline before the string is ever written to the file. I found methods that read from the file and then write back to it, which would require Write > Read > Write, but in all cases the magical newline still appeared. :(
If it's important to note: the file will contain a string which may contain actual newlines (it's 'Song Artist - Song Title', though it can contain other information, and newlines can be added if the user wishes). That text file is then read by other applications (such as mIRC etc.) which output the contents by various means depending on the application.
E.g. if an application were to read it and output it into a textbox, the newline would additionally be output to that textbox, which is a problem! I have no control over the applications that read the file, since it's the client who decides the application, so the removal of the newline needs to happen on the writing side.
Help is appreciated~!
Use the Write method instead of WriteLine. The WriteLine method is the one adding the blank zero-length line to the file, because it terminates the "Hello World" string with a newline.
writeString.Write("Hello World")

How to discard the rest of a line after a syntax error

I'm implementing a small shell, using lex & yacc to parse commands. Lex reads a command from stdin and yacc executes it after yyparse.
The problem is that when there is a syntax error, yacc reports the error and starts parsing again from that point. As a result, cmd1 >>> cmd2 leads to running cmd2, because >>> is a syntax error.
My question is: how do I discard the rest of the current command after encountering a syntax error?
If you want to write an interactive language with a prompt that lets users enter expressions, it's a bad idea to simply run yacc over the entire input stream. Yacc might get confused about something on one line and then misinterpret subsequent lines. For instance, the user might have an unbalanced parenthesis on the first line, or a string literal which is not closed, and then yacc will just keep consuming subsequent lines of input, looking to close the construct.
It's better to gather a line of input from the user and then parse that as one unit. The end of the line is then simply the end of the input as far as yacc is concerned.
If you're using lex, there are ways to redirect lex to read characters from a buffer in memory instead of from a FILE * stream. Look for documentation on the YY_INPUT macro, which you can define in a lex file to specify the code lex uses for obtaining input characters.
Analogy time: using a scanner developed with lex/yacc to handle interactive user input directly is a little like using scanf to handle user input, whereas capturing a line into a buffer and then parsing it is more like using sscanf. Quote:
It's perfectly appropriate to parse strings with sscanf (as long as the return value is checked), because it's so easy to regain control, restart the scan, discard the input if it didn't match, etc. [comp.lang.c FAQ, 12.20].

Encoding issue in I/O with Jena

I'm generating some RDF files with Jena. The whole application works with UTF-8 text, and the source code is stored in UTF-8 as well.
When I print a string containing non-English characters on the console, I get the right output, e.g. Est un lieu généralement officielle assis....
Then, I use the RDF writer to output the file:
Model m = loadMyModelWithMultipleLanguages();
log.info(getSomeStringFromModel(m));          // log4j, correct output
RDFWriter w = m.getWriter("RDF/XML");         // default encoding: utf-8
w.setProperty("showXmlDeclaration", "true");  // optional
OutputStream out = new FileOutputStream(pathToFile);
w.write(m, out, "http://someurl.org/base/");
// file contains garbled text
The RDF file starts with <?xml version="1.0"?>. If I add encoding="utf-8" to the declaration, nothing changes.
By default the text should be encoded as UTF-8.
The resulting RDF file validates OK, but when I open it with any editor/visualiser (vim, Firefox, etc.), non-English text is all messed up: Est un lieu gÃ©nÃ©ralement officielle assis ... or Est un lieu g\u221A\u00A9n\u221A\u00A9ralement officielle assis....
(Either way, this is obviously not acceptable from the user's viewpoint).
The same issue happens with any output format supported by Jena (RDF, NT, etc.).
I can't really find a logical explanation for this.
The official documentation doesn't seem to address this issue.
Any hint or tests I can run to figure it out?
My guess would be that your strings are messed up, and your printStringFromModel() method just happens to output them in a way that accidentally makes them display correctly, but it's rather hard to say without more information.
You're instructing Jena to include an XML declaration in the RDF/XML file, but you don't say what encoding (if any) Jena declares there. That would be helpful to know.
You're also not showing how you're printing the strings in the printStringFromModel() method.
Also, in Firefox, go to the View menu and then to Character Encoding. What encoding is selected? If it's not UTF-8, then what happens when you select UTF-8? Do you get it to show things correctly when selecting some other encoding?
Edit: The snippet you show in your post looks fine and should work. My best guess is that the code that reads your source strings into a Jena model is broken and reads the UTF-8 source as ISO-8859-1 or something similar. You should be able to confirm or rule that out by checking the length() of one of the offending strings: if each troublesome character like é is counted as two, the error is on the reading side; if it's correctly counted as one, it's on the writing side.
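A minimal, self-contained sketch of that length check, simulating the suspected misread with plain strings (no Jena API involved):

public class EncodingCheck {
    public static void main(String[] args) throws Exception {
        String correct = "généralement";
        // Simulate the suspected bug: UTF-8 bytes decoded as ISO-8859-1.
        String misread = new String(correct.getBytes("UTF-8"), "ISO-8859-1");

        System.out.println(correct.length()); // 12: each é is one char
        System.out.println(misread.length()); // 14: each é became two chars
    }
}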
My hint/answer would be to inspect the byte sequence in three places:
The data source. Using a hex editor, confirm that the é character in your source data is represented by the expected UTF-8 sequence 0xc3 0xa9.
In memory. Right after your call to printStringFromModel, set a breakpoint and inspect the bytes in the string (or convert them to hex and print them out).
The output file. Again, use a hex editor to check that the byte sequence is 0xc3 0xa9.
This will tell you exactly what is happening to the bytes as they travel along the path of your program, and where they deviate from the expected 0xc3 0xa9.
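For the in-memory and output-file checks, a small hypothetical helper along these lines would print the hex (the helper name and charset choice are my own):

import java.nio.charset.StandardCharsets;

public class HexDump {
    // Print each byte of the string's UTF-8 encoding as two hex digits.
    static void dumpUtf8(String s) {
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            System.out.printf("%02x ", b & 0xff); // mask to print the byte unsigned
        }
        System.out.println();
    }

    public static void main(String[] args) {
        dumpUtf8("é"); // prints "c3 a9" if the string was read correctly
    }
}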
The best way to address this would be to package up the smallest unit of code that demonstrates the issue and submit it as a complete, runnable test case in a ticket on the Jena Jira.