How to use { } curly braces in a JavaScript function generated by an RPG-CGI pgm - cgi

How do I write an RPG-CGI program that generates an HTML page containing a JavaScript program with a function like function xxx() { aaaaaaaaaaaa; ssssssssss; }? When the braces are written using hex code constants, they are changed to some other symbol in the actual HTML code in the browser.
Does the EBCDIC character set contain the { } [ ] ! symbols? If not, how can they be used in an AS/400 RPG-CGI program?

You are most likely running into a codepage conversion issue, which in brief means that the AS/400 does not produce the characters the recipient expects. Try to run in code page 819, which is ISO Latin-1.
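A rough free-form RPG IV sketch (my illustration, not part of the original answer): it assumes the job/source CCSID is 037, keeps the braces in named hex constants, and builds the script text that would then be written to stdout. These code points differ between EBCDIC code pages, which is exactly why a hard-coded hex constant can turn into a different symbol once the output is converted.

    // Sketch, assuming job/source CCSID 037: there '{' is x'C0' and '}' is x'D0'.
    dcl-c LBRACE const(x'C0');
    dcl-c RBRACE const(x'D0');

    dcl-s script varchar(256);

    script = 'function xxx() ' + LBRACE
           + ' aaaaaaaaaaaa; ssssssssss; ' + RBRACE;
    // ...then write 'script' to standard output (e.g. with QtmhWrStout),
    // making sure the HTTP server converts the output to code page 819.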

Another option may be to look into using CGIDEV2, though I would try Thorbjørn's option first.

Related

How do I replace part of a string with a lua filter in Pandoc, to convert from .md to .pdf?

I am writing markdown files in Obsidian.md and trying to convert them via Pandoc and LaTeX to PDF. Text itself works fine this way; however, in Obsidian I use ==equal signs== to highlight something, and this doesn't work in LaTeX.
So I'd like to create a filter that either removes the equal signs entirely or replaces them with something LaTeX can render, e.g. \hl{something}. I think this would be the same process.
I have a filter that looks like this:
return {
  {
    Str = function (elem)
      if elem.text == "hello" then
        return pandoc.Emph {pandoc.Str "hello"}
      else
        return elem
      end
    end,
  }
}
This works: it replaces any instance of "hello" with an italicized version of the word. However, it only works with whole words, e.g. if "hello" were part of a word, it wouldn't touch it. Since the equal signs are read as part of one word, it won't touch those.
How do I modify this (or, please, suggest another filter) so that it CAN replace and change parts of a word?
Thank you!
A string like Hello, World! becomes a list of inlines in pandoc: [ Str "Hello,", Space, Str "World!" ]. Lua filters don't make matching on that particularly convenient: the best method is currently to write a filter for Inlines and then iterate over the list to find matching items.
For a complete example, see https://gist.github.com/tarleb/a0646da1834318d4f71a780edaf9f870.
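For this question's ==...== markers, a minimal sketch of that Inlines approach could look as follows (this is my illustration, not the code from the gist; it assumes the markers sit at the start and end of Str elements and collects everything in between into a Span with class mark):

    function Inlines (inlines)
      local result = pandoc.Inlines{}
      local buffer = nil                 -- holds the inlines between the markers
      for _, item in ipairs(inlines) do
        local txt = item.t == 'Str' and item.text or nil
        if not buffer and txt and txt:sub(1, 2) == '==' then
          local rest = txt:sub(3)
          if #rest > 2 and rest:sub(-2) == '==' then
            -- opening and closing marker in the same word: ==word==
            result:insert(pandoc.Span({pandoc.Str(rest:sub(1, -3))},
                                      pandoc.Attr('', {'mark'})))
          else
            buffer = pandoc.Inlines{pandoc.Str(rest)}
          end
        elseif buffer and txt and txt:sub(-2) == '==' then
          -- closing marker ends the highlighted stretch
          buffer:insert(pandoc.Str(txt:sub(1, -3)))
          result:insert(pandoc.Span(buffer, pandoc.Attr('', {'mark'})))
          buffer = nil
        elseif buffer then
          buffer:insert(item)
        else
          result:insert(item)
        end
      end
      return result
    end

If this goes into the same file as the Span function below, put the two functions into separate filter tables (return { { Inlines = ... }, { Span = ... } }) so that the Spans created in the first pass are seen by the second.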
Assume we have already found the highlighted text and converted it to a Span with class mark. Then we can convert that to LaTeX with
function Span (span)
  if span.classes:includes 'mark' then
    return {pandoc.RawInline('latex', '\\hl{')} ..
      span.content ..
      {pandoc.RawInline('latex', '}')}
  end
end
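With the filter saved to a file, a hedged invocation could look like this (file names are placeholders; \hl is not a built-in LaTeX command, so something like the soul package has to be made available, e.g. via header-includes):

    pandoc notes.md --lua-filter=mark.lua -V header-includes='\usepackage{soul}' -o notes.pdf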
Note that the current development version of pandoc, which will become pandoc 3 at some point, supports highlighted text out of the box when called with
pandoc --from=markdown+mark ...
E.g.,
echo '==Hi Mom!==' | pandoc -f markdown+mark -t latex
⇒ \hl{Hi Mom!}

How to parse a run-length encoded binary subformat with ANTLR

Given the following input:
AA:4:2:#5#xxAAx:2:a:
The part #5# marks the start of a binary subformat with a length of 5. The subformat can contain any kind of character and is likely to contain tokens from the main format (e.g. AA is a keyword/token inside the main format).
I want to build a lexer that is able to extract one token for the whole binary part.
I already tried several approaches (e.g. partials, semantic predicates) but I did not get them working together the right way.
I finally found the solution myself.
Below are the relevant parts of the lexer definition:
@members {
    public int _binLength;
}

BINARYHEAD: '#' [0-9]+ '#' { _binLength = Integer.parseInt(getText().substring(1, getText().length() - 1)); } -> pushMode(RAW) ;

mode RAW;

BINARY: .+ {getText().length() <= _binLength}? -> popMode;
The solution is based on an extra field that is set while lexing the length definition of the binary field. Afterwards, a semantic predicate is used to restrict the length of the binary content to the size given in that field.
Any suggestion to simplify the parseInt call is welcome.
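As a usage example (hypothetical names: it assumes the lexer grammar is called SubFmt, also contains rules for the rest of the main format, and was generated with ANTLR 4), a small driver can print the token stream for the sample input and confirm that mode RAW yields a single BINARY token for the binary part:

    import org.antlr.v4.runtime.*;

    public class Tokenize {
        public static void main(String[] args) {
            // Sample input from the question; the 5-character binary part is "xxAAx".
            CharStream input = CharStreams.fromString("AA:4:2:#5#xxAAx:2:a:");
            SubFmtLexer lexer = new SubFmtLexer(input);
            for (Token t = lexer.nextToken();
                 t.getType() != Token.EOF;
                 t = lexer.nextToken()) {
                String name = SubFmtLexer.VOCABULARY.getSymbolicName(t.getType());
                System.out.println(name + " -> '" + t.getText() + "'");
            }
        }
    }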

Unexpected token \n in JSON when parsing with Elm.Json

I'm working with Elm, but I have a few issues with JSON parsing in this language. The error the compiler gives me is:
Err "Given an invalid JSON: Unexpected token \n in JSON at position 388"
What I need to do is this: at char_meta I want something like this:
[("Biographical Information", [("Japanese Name", "緑谷出久"), ...]), ...]
Here is the code:
Ellie link
P.S.: The only constant keys are character_name, lang, summary and char_meta; the keys inside char_meta are dynamic (that's why I use keyValuePairs), and the length of this array is always different (sometimes it's empty).
Thanks, I hope you can help me.
EDIT:
The Ellie link now redirects to the fixed code.
The issue is that Elm (or JS, once transpiled) interprets the \n and \" sequences when parsing the string literal, and they are replaced with an actual newline and double quote respectively, which results in invalid JSON.
If you want to have the JSON inline in the code, you need to escape those backslashes by doubling them (\\n and \\").
This only applies to literals; you won't have the issue if you load the JSON from the network, for instance.
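A minimal, self-contained sketch of the difference (the summary field is one of the constant keys from the question; this is not the full decoder from the Ellie):

    module Main exposing (badJson, goodJson, parsed)

    import Json.Decode as Decode

    -- Invalid: the \n in the Elm literal becomes a real newline inside the JSON
    -- string value, and JSON does not allow unescaped control characters there.
    badJson : String
    badJson =
        "{ \"summary\": \"line one\nline two\" }"

    -- Valid: \\n keeps the two characters '\' and 'n', i.e. a proper JSON escape.
    goodJson : String
    goodJson =
        "{ \"summary\": \"line one\\nline two\" }"

    parsed : Result Decode.Error String
    parsed =
        Decode.decodeString (Decode.field "summary" Decode.string) goodJson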

How do I parse a string which contains the # character?

parse copy text [ to "<##" to "#>"]
This causes my Rebol script to generate a syntax error.
Do this:
parse copy text [ to {<##} to {#>}]
I didn't test the code, but I think it works.
--DJ

How to program Lex and Yacc to parse a partial file

Let me explain with an example.
Suppose the contents of a text file are as follows:
function fun1 {
    int a, b, c;
    function fun2 {
        int d, e;
        char f g;
        function fun3 {
            int h, i;
        }
    }
In the above text file, the number of opening braces does not match the number of closing braces. The file as a whole doesn't follow the syntax; however, the partial functions fun2 and fun3 do. Typically the text file is very large.
If the user wants to parse the entire file, i.e. function fun1, then the program should output an error, as the braces do not match. However, if the user wants to parse only a partial file, i.e. function fun2 or fun3, then the program shouldn't throw an error, as the braces do match.
I have a question now:
1. Is there a way to make Lex and Yacc load only a partial file? If so, how does it need to be done?
Are you using bison/flex or plain old yacc/lex?
It's been a long time since I played with yacc, and the technical answer is different for each pair of tools.
With flex you'll have to deal with the buffer mechanism (see the sketch after the link below); the final code will be cleaner.
With lex you'll have to do it all by hand: at least you have to redefine the input and unput macros.
You can also try to play with yyin and fseek.
On the parser side you'll have to deal with error management (the yyerrok macro) and the error token:
http://dinosaur.compilertools.net/bison/bison_9.html#SEC81
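For the flex route, here is a minimal sketch of the buffer mechanism (my illustration, with hypothetical names; it is meant to sit in the user-code section of the .l file, where YY_BUFFER_STATE, yy_scan_bytes and yy_delete_buffer are already declared). It reads only the byte range holding fun2/fun3 and hands that slice to the parser instead of the whole file:

    #include <stdio.h>
    #include <stdlib.h>

    extern int yyparse(void);   /* generated by yacc/bison */

    /* Parse only the bytes in [start, start + len) of the file at 'path'. */
    int parse_slice(const char *path, long start, long len)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;

        char *buf = malloc(len);
        if (!buf) {
            fclose(f);
            return -1;
        }
        fseek(f, start, SEEK_SET);
        size_t got = fread(buf, 1, len, f);
        fclose(f);

        /* Make the slice the scanner's current input buffer (flex copies it). */
        YY_BUFFER_STATE state = yy_scan_bytes(buf, got);
        int rc = yyparse();     /* the parser now sees only this slice */
        yy_delete_buffer(state);
        free(buf);
        return rc;
    }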