SWI-Prolog predicate for reading in lines from input file

I'm trying to write a predicate to accept a line from an input file. Every time it's used, it should give the next line, until it reaches the end of the file, at which point it should return false. Something like this:
database :-
    see('blah.txt'),
    loop,
    seen.

loop :-
    accept_line(Line),
    write('I found a line.\n'),
    loop.

accept_line([C | Rest]) :-
    get0(C),
    C =\= "\n",
    !,
    accept_line(Rest).
accept_line([]).
Obviously this doesn't work. It works for the first line of the input file and then loops endlessly. I can see that I need to have some line like "C =\= -1" in there somewhere to check for the end of the file, but I can't see where it'd go.
So an example input and output could be...
INPUT
this is
an example
OUTPUT
I found a line.
I found a line.
Or am I doing this completely wrong? Maybe there's a built in rule that does this simply?

In SWI-Prolog, the most elegant way to do this is to first use a DCG to describe what a "line" means, and then use library(pio) to apply the DCG to a file.
An important advantage of this is that you can then also easily apply the same DCG to queries at the toplevel with phrase/2, and do not need to create a file to test the predicate.
There is a DCG tutorial that explains this approach, and you can easily adapt it to your use case.
For example:
:- use_module(library(pio)).
:- set_prolog_flag(double_quotes, codes).
lines --> call(eos), !.
lines --> line, { writeln('I found a line.') }, lines.
line --> ( "\n" ; call(eos) ), !.
line --> [_], line.
eos([], []).
Example usage:
?- phrase_from_file(lines, 'blah.txt').
I found a line.
I found a line.
true.
Example usage, using the same DCG to parse directly from character codes without using a file:
?- phrase(lines, "test1\ntest2").
I found a line.
I found a line.
true.
This approach can be very easily extended to parse more complex file contents as well.
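For instance, here is a small sketch of such an extension (it reuses the eos//0 helper above; lines//1 and line//1 are just illustrative names): instead of only printing a message, it collects each line as a list of character codes.
% lines(Ls): Ls is the list of lines, each a list of character codes.
lines([])     --> call(eos), !.
lines([L|Ls]) --> line(L), lines(Ls).

% line(Cs): Cs is one line, ended by a newline or the end of the input.
line([])     --> ( "\n" ; call(eos) ), !.
line([C|Cs]) --> [C], line(Cs).
A query such as phrase_from_file(lines(Ls), 'blah.txt') then yields the lines of the file in Ls, and phrase(lines(Ls), "test1\ntest2") works the same way on a list of codes.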

If you want to read into code lists, see library(readutil), in particular read_line_to_codes/2 which does exactly what you need.
You can of course use the character I/O primitives, but at least use the ISO predicates. "Edinburgh-style" I/O is deprecated, at least for SWI-Prolog. Then:
get_line(L) :-
    get_code(C),
    get_line_1(C, L).

get_line_1(-1, [])    :- !.   % EOF
get_line_1(0'\n, [])  :- !.   % EOL
get_line_1(C, [C|Cs]) :-
    get_code(C1),
    get_line_1(C1, Cs).
This is of course a lot of unnecessary code; just use read_line_to_codes/2 and the other predicates in library(readutil).
Since strings were introduced to SWI-Prolog, there are some nifty new ways of reading. For example, to read all input and split it into lines, you can do:
read_string(user_input, _, S),
split_string(S, "\n", "", Lines)
See the examples in read_string/5 for reading linewise.
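For instance, a sketch along those lines (assuming SWI-Prolog's read_string/5; read_lines/1 is just an illustrative name) that reads one line per iteration, splitting on "\n" and stripping an optional "\r":
% Print a message for every line on Stream until end of file.
read_lines(Stream) :-
    read_string(Stream, "\n", "\r", End, Line),
    (   End == -1, Line == ""      % nothing left to read
    ->  true
    ;   writeln('I found a line.'),
        read_lines(Stream)
    ).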
PS. Drop the see and seen etc. Instead:
setup_call_cleanup(open(Filename, read, In),
                   read_string(In, N, S),   % or whatever reading you need to do
                   close(In))
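Putting these pieces together for the original question, a minimal sketch (assuming SWI-Prolog and library(readutil); database/1 and print_lines/1 are just illustrative names):
:- use_module(library(readutil)).   % for read_line_to_string/2

database(File) :-
    setup_call_cleanup(
        open(File, read, In),
        print_lines(In),
        close(In)).

% Read line by line; read_line_to_string/2 returns end_of_file when done.
print_lines(In) :-
    read_line_to_string(In, Line),
    (   Line == end_of_file
    ->  true
    ;   writeln('I found a line.'),
        print_lines(In)
    ).
Calling database('blah.txt') prints one message per line and closes the stream even if reading throws an exception.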

How do I replace part of a string with a lua filter in Pandoc, to convert from .md to .pdf?

I am writing markdown files in Obsidian.md and trying to convert them via Pandoc and LaTeX to PDF. Plain text works fine this way; however, in Obsidian I use ==equal signs== to highlight something, and this doesn't work in LaTeX.
So I'd like to create a filter that either removes the equal signs entirely, or replaces them with something LaTeX can render, e.g. \hl{something}. I think this would be the same process.
I have a filter that looks like this:
return {
  {
    Str = function (elem)
      if elem.text == "hello" then
        return pandoc.Emph {pandoc.Str "hello"}
      else
        return elem
      end
    end,
  }
}
This works: it replaces any instance of "hello" with an italicized version of the word. HOWEVER, it only works on whole words; if "hello" were part of a longer word, it wouldn't touch it. Since the equal signs are read as part of one word, it won't touch those either.
How do I modify this (or, please, suggest another filter) so that it CAN replace and change parts of a word?
Thank you!
A string like Hello, World! becomes a list of inlines in pandoc: [ Str "Hello,", Space, Str "World!" ]. Lua filters don't make matching on that particularly convenient: the best method is currently to write a filter for Inlines and then iterate over the list to find matching items.
For a complete example, see https://gist.github.com/tarleb/a0646da1834318d4f71a780edaf9f870.
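As an illustration of the first step, here is a simplified sketch (hypothetical and untested; it only handles a highlight that sits inside a single word, while the gist above covers the general multi-word case). It turns a Str such as "==word==" into a Span with class mark:
-- Replace a Str of the form "==...==" with a Span of class "mark".
function Str (elem)
  local inner = elem.text:match("^==(.+)==$")
  if inner then
    return pandoc.Span({pandoc.Str(inner)}, pandoc.Attr("", {"mark"}))
  end
end
If this lives in the same file as the Span function below, returning the two functions as two separate filter tables (extending the return { {...}, {...} } structure from your filter) makes the newly created Span visible to the second pass.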
Assuming we have already found the highlighted text and converted it to a Span with class mark, we can convert that to LaTeX with
function Span (span)
  if span.classes:includes 'mark' then
    return {pandoc.RawInline('latex', '\\hl{')} ..
           span.content ..
           {pandoc.RawInline('latex', '}')}
  end
end
Note that the current development version of pandoc, which will become pandoc 3 at some point, supports highlighted text out of the box when called with
pandoc --from=markdown+mark ...
E.g.,
echo '==Hi Mom!==' | pandoc -f markdown+mark -t latex
⇒ \hl{Hi Mom!}

Raku zip operator & space

I found this one-liner, which joins corresponding lines from multiple files.
How do I add a space between the two lines?
If line 1 from file A is blue and line 1 from file B is sky, I get bluesky,
but I need blue sky.
say $_ for [Z~] @*ARGS.map: *.IO.lines;
This is using the side-effect of .Str on a List to add spaces between the elements:
say .Str for [Z] @*ARGS.map: *.IO.lines
The Z will create 2 element List objects, which the .Str will then stringify.
Or even shorter:
.put for [Z] @*ARGS.map: *.IO.lines
where the .put will call the .Str for you and output that.
If you want anything else in between, then you could probably use .join:
say .join(",") for [Z] #*ARGS.map: *.IO.lines
would put commas between the words.
Note: definitely don't do this in anything approaching real code. Use (one of) the readable ways in Liz's answer.
If you really want to use the same structure as [Z~] – that is, an operator modified by the Zip meta-operator, all inside the Reduce meta-operator – you can. But it's not pretty:
say $_ for [Z[&(*~"\x20"~*)]] @*ARGS.map: *.IO.lines
Here's how that works: Z can take an operator, so we need to give it an operator that concatenates two strings with a space in between. But there's no operator like that built in. No problem – we can turn any function into an infix operator by surrounding it with [ ] (the infix form).
So all we need is a function that joins two strings with a space between them. That also doesn't exist, but we can create one: * ~ ' ' ~ *. So, we should be able to shove that into our infix form and pass the whole thing to the Zip operator Z[* ~ ' ' ~ *].
Except that doesn't work. Because Zip isn't really expecting an infix form, we need to give it a hint that we're passing in a function … that is, we need to put our function into a callable context with &( ), which gets us to Z[&(* ~ ' ' ~ *)].
That Zip expression does what we want when used in infix position – but it still doesn't work once we put it back into the Reduce/[ ] operator that we want to use. This time, the problem is due to something that may or may not be a bug – even after discussing it with jnthn on github, I'm still not sure whether this behavior is intended/correct.
Specifically, the issue is that the Reduction meta-operator doesn't allow whitespace – even in strings. Thus, we need to replace * ~ ' ' ~ * with *~"\c[space]"~* or *~"\x20"~* (where \x20 is the hex value of a space in Unicode/ASCII). Since we've come this far into obfuscated code, I figure we might as well go all the way. And that gets us back to
say $_ for [Z[&(*~"\x20"~*)]] @*ARGS.map: *.IO.lines
Again, I'm not recommending that you do this. (And, if you do, you could at least make it slightly more readable by saving the * ~ ' ' ~ * function as a named variable in the previous line, which at least gets you whitespace. But, really, just use one of Liz's suggestions).
I just thought this gives a useful window into some of the darker and more interesting corners of Raku's strangely consistent behavior.

How to add a small bit of context in a grammar?

I am tasked with parsing (and transforming) code in a computer language that has a slight quirk in its rules, at least as I see it. To be exact, the compiler treats new lines (as well as semicolons) as statement separators, but other than that (e.g. inside a statement) it treats them as spacers (whitespace).
As an example, this code:
try
    local x = 5 / 0
catch (i)
    print(i + "\n")
proves to be equivalent to this:
try local x = 5 / 0 catch (i) print(i + "\n")
I don't see how I can express such a rule in EBNF, or specifically in Lark's EBNF dialect, in a sensible way. I could probably define all possible newline positions inside all statements, but that would be cumbersome and error-prone.
I wish to find a way to treat newlines contextually. Is there a proven method for this, preferably within Python/Lark domain? If I have to modify the parser for that purpose, then where should I start?
Or if I misunderstood something in this language in particular or in machine language parsing in general, or my statement of the problem is wrong, I'd also be happy to get educated.
(As you may guess, the language in question has a well-proven implementation, but no officially defined grammar. Also, it is Squirrel, for what it's worth.)
The relevant quote from the "specification" is this:
A squirrel program is a simple sequence of statements:
stats := stat [';'|'\n'] stats
[...] Statements can be separated with a new line or ‘;’ (or with the keywords case or default if inside a switch/case statement), both symbols are not required if the statement is followed by ‘}’.
These are relatively complex rules, and in their totality not context-free if newlines can also be ignored everywhere else. Note however that, in my understanding, the text implies that ; or \n is required when none of the other cases apply. That would make your example illegal. Since your example is apparently accepted, that probably means the BNF as written is correct, i.e. both ; and \n are optional everywhere. In that case you can (for lark) just put an %ignore "\n" statement and it should work fine.
Also, lark should not complain if you both ignore the \n and use it in a rule: where useful it will match it in a rule, otherwise it will just ignore it. Note however that this breaks if you use a terminal whose pattern includes the \n (e.g. WS or /\s/); just have \n as an extra case.
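A minimal sketch of that suggestion (a toy grammar, not real Squirrel; the rule and terminal names are made up for illustration): ";" is an optional statement separator, and "\n" is ignored as its own terminal, separate from the rest of the whitespace.
from lark import Lark

parser = Lark(r"""
    start: (stat ";"?)*
    stat: NAME "=" NUMBER

    NAME: /[a-z_]\w*/
    NUMBER: /\d+/

    %ignore /[ \t]+/
    %ignore "\n"
""")

# Statements may be separated by newlines, semicolons, or nothing at all.
print(parser.parse("x = 1\ny = 2; z = 3").pretty())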
(For the future: You will probably get faster response for lark questions if you ask over on gitter or at least put a link to SO there.)

jEdit in hard word-wrap mode: insert comment character automatically?

Probably quite a niche question, but I believe in the power of a big community: Is it possible to set up jEdit in such a way that it automatically inserts a comment character (//, #, ..., depending on the edit mode) at the beginning of a new line, if the line before the wrap was a comment?
Sample:
# This is a comment spanning multiple lines. If I continue to type here, it
# wraps around automatically, but I have to manually add a `#` to each line.
If I continue to type after the ., the third line should start with the # automatically. I searched the plugin repository but could not find anything related.
Background: jEdit has the concept of soft and hard wrap. Soft wrap only breaks lines visually at a character limit and does not insert line breaks into the file. Hard wrap, on the other hand, inserts \n into the file at the desired character count.
This is not exactly what you want: I use the macro Enter_with_Prefix.bsh to automatically insert the prefix (e.g., #, //) at the beginning of the new line.
Description copied from Enter_with_Prefix.bsh:
Enter_with_Prefix.bsh - a Beanshell macro for jEdit
that starts a new line continuing any recognized
sequence that started the previous. For example,
if the previous line begins with "1." the next will
be prefixed with "2.". It supports alpha lists (a., b., etc...),
bullet lists (+, =, *, etc..), comments, Javadocs,
Java import statements, e-mail replies (>, |, :),
and is easy to extend with new sequence types. Suggested
shortcut for this macro is S+ENTER (SHIFT+ENTER).

Prolog - unexpected end of file

Hi, I'm working on a Prolog project and I need to read in a whole file. I have a file named 'meno.txt' and I need to read it. I found some code here on Stack Overflow. The code is the following:
main :-
    open('meno.txt', read, Str),
    read_file(Str, Lines),
    close(Str),
    write(Lines), nl.

read_file(Stream, []) :-
    at_end_of_stream(Stream).
read_file(Stream, [X|L]) :-
    \+ at_end_of_stream(Stream),
    read(Stream, X),
    read_file(Stream, L).
When I call the main/0 predicate, I get an error saying unexpected end of file. My file looks like this:
line1
line2
line3
line4
I also found a similar problem here, where the solution had to do with ASCII and UTF encoding, but I tried that and it doesn't seem to be my problem. Can anyone help?
The predicate read/2 reads Prolog terms from a file. Those must end with a period. Try updating the contents of your file to:
line1.
line2.
line3.
line4.
The unexpected end of file you're getting likely results from the Prolog parser trying to find an ending period.
If you are using SWI-Prolog, you can use something like this for the read:
% The first clause (the at_end_of_stream/1 base case) stays as in the question.
read_file(Stream, [X|L]) :-
    \+ at_end_of_stream(Stream),
    read_line_to_codes(Stream, Codes),
    atom_codes(X, Codes),
    read_file(Stream, L), !.
read_line_to_codes/2 reads each line directly into a list of character codes. Then atom_codes/2 will convert the codes to an atom.
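For reference, a minimal sketch of how the whole program might look with these changes (assuming SWI-Prolog; library(readutil) provides read_line_to_codes/2):
:- use_module(library(readutil)).   % for read_line_to_codes/2

main :-
    open('meno.txt', read, Str),
    read_file(Str, Lines),
    close(Str),
    write(Lines), nl.

% Succeed with [] once the stream is exhausted; otherwise read one line,
% turn its codes into an atom, and recurse.
read_file(Stream, []) :-
    at_end_of_stream(Stream), !.
read_file(Stream, [X|L]) :-
    read_line_to_codes(Stream, Codes),
    atom_codes(X, Codes),
    read_file(Stream, L).
With the four-line meno.txt above, calling main prints [line1,line2,line3,line4], and no trailing periods are needed because nothing is parsed as a Prolog term.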