Formula delimiter differences across locales - google-sheets-api

I'm trying to append cells with a hyperlink to a spreadsheet by following the instructions here: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#celldata
A hyperlink this cell points to, if any. This field is read-only. (To set it, use a =HYPERLINK formula in the userEnteredValue.formulaValue field.)
The problem is that some formulas have multiple parameters delimited by a comma, but the delimiter differs between spreadsheets with different locales, such as Turkey. In the Turkish locale the delimiter is a semicolon, not a comma. I haven't checked which other locales also use a different delimiter.
After I tried to add the link as a formulaValue, the result looks like this on a spreadsheet with the Turkey locale:
https://user-images.githubusercontent.com/5789670/77210180-61581500-6b11-11ea-9302-81dcf84256f8.png
and this is from a spreadsheet with the United States locale:
https://user-images.githubusercontent.com/5789670/77210238-8e0c2c80-6b11-11ea-9eb8-ea82fdc869d2.png
Both spreadsheets have the same formula, and the only difference is this (compared to a blank spreadsheet):
https://user-images.githubusercontent.com/5789670/77210339-cc095080-6b11-11ea-8805-92b3f6c59b0b.png
It isn't really possible for me to track and identify the delimiter configuration for every locale. I'm simply looking for a way to generate the hyperlink formula without delimiter issues.
Something like a function
.getDelimiter("Europe/Istanbul")
or a field in the spreadsheet properties that indicates which delimiter the target spreadsheet uses:
// SpreadsheetProperties
"properties": {
  "title": string,
  "locale": string,
  "timeZone": string,
  "formulaDelimiter": string, // read-only
  ...
}
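Until something like that exists, the best I can see is reading the existing locale field and mapping it to a delimiter on the client. A rough sketch (the mapping table here is hypothetical and necessarily incomplete, which is exactly the problem):

const {google} = require('googleapis');

// Hypothetical, hand-maintained map from spreadsheet locale to formula delimiter.
const DELIMITER_BY_LOCALE = {
  en_US: ',',
  tr_TR: ';',
};

async function getDelimiter(auth, spreadsheetId) {
  const sheets = google.sheets({version: 'v4', auth});
  const res = await sheets.spreadsheets.get({
    spreadsheetId,
    fields: 'properties.locale', // only fetch the locale
  });
  return DELIMITER_BY_LOCALE[res.data.properties.locale] || ',';
}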
Environment details
OS: Ubuntu 18.04
Node.js version: v12.13.0
npm version: 6.13.7
googleapis version: ^48.0.0
Steps to reproduce
Have two spreadsheets, one with the United States locale and one with the Turkey locale.
Use the following data to append a cell with the batchUpdate API:
{
  "requests": [
    {
      "appendCells": {
        "fields": "*",
        "rows": [
          {
            "values": [
              {
                "userEnteredFormat": {},
                "userEnteredValue": {
                  "formulaValue": "=HYPERLINK('https://google.com','20006922')"
                }
              }
            ]
          }
        ],
        "sheetId": 111111
      }
    }
  ]
}
Thanks!
The original issue is on GitHub and can be found here: https://github.com/googleapis/google-api-nodejs-client/issues/1994

In your case, how about this modification?
Issue and workaround:
When the comma , is used, as in "formulaValue": "=HYPERLINK('https://google.com','20006922')", on a spreadsheet whose locale uses the semicolon ;, the comma is not replaced when the formula is put with the batchUpdate method of the Sheets API. This is what causes the error.
On the other hand, when the semicolon is used as the delimiter on a spreadsheet whose locale uses the comma, the semicolon is automatically replaced with a comma when the formula is put with the Sheets API, so no error occurs.
Given this situation, how about the following modification? I also replaced the single quotes ' with double quotes ", since formula string arguments need to be double-quoted.
From:
"formulaValue": "=HYPERLINK('https://google.com','20006922')"
To:
"formulaValue": "=HYPERLINK(\"https://google.com\";\"20006922\")"

Related

How do I replace part of a string with a lua filter in Pandoc, to convert from .md to .pdf?

I am writing markdown files in Obsidian.md and trying to convert them via Pandoc and LaTeX to PDF. Text itself works fine this way; however, in Obsidian I use ==equal signs== to highlight something, and this doesn't work in LaTeX.
So I'd like to create a filter that either removes the equal signs entirely or replaces them with something LaTeX can render, e.g. \hl{something}. I think this would be the same process.
I have a filter that looks like this:
return {
  {
    Str = function (elem)
      if elem.text == "hello" then
        return pandoc.Emph {pandoc.Str "hello"}
      else
        return elem
      end
    end,
  }
}
This works: it replaces any instance of "hello" with an italicized version of the word. HOWEVER, it only works on whole words; e.g. if "hello" were part of a word, it wouldn't be touched. Since the equal signs are read as part of one word, the filter won't touch them.
How do I modify this (or, please, suggest another filter) so that it CAN replace and change parts of a word?
Thank you!
A string like Hello, World! becomes a list of inlines in pandoc: [ Str "Hello,", Space, Str "World!" ]. Lua filters don't make matching on that particularly convenient: the best method is currently to write a filter for Inlines and then iterate over the list to find matching items.
For a complete example, see https://gist.github.com/tarleb/a0646da1834318d4f71a780edaf9f870.
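As a minimal sketch of just the single-word case (a highlight like ==hello== that sits entirely inside one Str element; multi-word highlights need the list scanning from the gist above):

function Str (elem)
  -- Wrap ==word== in a Span with class 'mark'.
  local inner = elem.text:match('^==(.+)==$')
  if inner then
    return pandoc.Span({pandoc.Str(inner)}, pandoc.Attr('', {'mark'}))
  end
end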
Assuming we have already found the highlighted text and converted it to a Span with class mark, we can then convert that to LaTeX with
function Span (span)
  if span.classes:includes 'mark' then
    return {pandoc.RawInline('latex', '\\hl{')} ..
      span.content ..
      {pandoc.RawInline('latex', '}')}
  end
end
Note that the current development version of pandoc, which will become pandoc 3 at some point, supports highlighted text out of the box when called with
pandoc --from=markdown+mark ...
E.g.,
echo '==Hi Mom!==' | pandoc -f markdown+mark -t latex
⇒ \hl{Hi Mom!}

Cross references to headings with leading numbers in PDF

I am using Pandoc to convert a markdown file to a PDF and I have some issues with creating references to headings with leading numbers.
Here is the code:
Take me to the [first paragraph](#1-paragraph)
## 1 Paragraph
In the converted PDF the link does not work.
When I remove the leading number everything works fine.
So what's the correct way to link to this kind of heading?
A good way to go about this is to look at pandoc's “native” output, i.e., the internal representation of the document after parsing:
$ echo '## 1 Paragraph' | pandoc -t native
[ Header
    2
    ( "paragraph" , [] , [] )
    [ Str "1" , Space , Str "Paragraph" ]
]
The auto-generated ID for the heading is paragraph. The reason for that is that HTML4 doesn't allow identifiers that start with numbers, so pandoc skips those. Hence, [first paragraph](#paragraph) will work.
However, GitHub Flavored Markdown is written with HTML5 in mind, and numbers are allowed as the first id character in that case. Pandoc supports GitHub's scheme as well, and those auto-identifiers are enabled with --from=markdown+gfm_auto_identifiers.
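For example, with that extension the leading number should be kept:
$ echo '## 1 Paragraph' | pandoc -f markdown+gfm_auto_identifiers -t native
[ Header
    2
    ( "1-paragraph" , [] , [] )
    [ Str "1" , Space , Str "Paragraph" ]
]
which matches the #1-paragraph target from the question.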
Probably better than manual numbering of headings is to call pandoc with --number-sections (or -N); the numbering will be performed automatically.

How to detect Latin words in a file in MuleSoft

I want to detect Latin / non-English words in a file in a Mule application running in Anypoint Studio (MuleSoft products). Can anyone help me?
Basically my code fetches a file from a legacy system, reads it, and posts the data to Salesforce. While reading the file, I need to detect whether there are any Latin / non-English words in the name column.
There is no built-in function in Mule that I'm aware of to detect characters outside the English alphabet.
One alternative is to create a custom DataWeave function that uses the charCode() or charCodeAt() functions to compare the Unicode value of each character in the file with the allowed English characters, iterating over the characters of the file. This assumes that the file is a text file that can be read as a string.
Another alternative is to implement the same algorithm in a Java class and call it using the Java Module.
This is a solution with DataWeave using a recursive function to iterate over the characters. It would be more efficient if there were a way to avoid the recursion:
%dw 2.0
output application/json
import * from dw::core::Strings

fun isEnglishChar(c) =
    (c >= 65 and c <= 90) or (c >= 97 and c <= 122) or (c == 32)

fun isEnglishWord(s) =
    if (sizeOf(s) > 1) isEnglishChar(charCode(s)) and isEnglishWord(s[1 to -1])
    else if (sizeOf(s) == 1) isEnglishChar(charCode(s))
    else true
---
payload map isEnglishWord($.name)
Input:
[
  {
    "name": "has space"
  },
  {
    "name": "JustEnglish"
  },
  {
    "name": "ñó"
  }
]
Output:
[
  true,
  true,
  false
]
Using functions makes it easy to reuse and to modify the logic if needed.
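If avoiding the recursion matters, a regex over the whole string is a possible sketch, assuming the same definition of English characters (letters and spaces):

%dw 2.0
output application/json
---
// true when the name contains no character outside A-Z, a-z and space
payload map not ($.name contains /[^A-Za-z ]/)

This returns the same true/true/false output for the input above.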
Use the regex below to find non-English words. I created a simple example:
Input:
{
  "message": "你好"
}
Code:
%dw 2.0
output application/json
---
payload.message contains (/[^\x00-\x7F]+/)
Output: true
Assuming you are able to parse the input and get the string value of the name field, you can iterate over the string and apply the logic below to each word.
Logic: assuming each word is either an English or a non-English word, pick the first letter of the word and check whether it is one of the 26 English letters. For an English word, the value of No_Latin_Word should be true; otherwise false.
%dw 2.0
output application/json
//var name = "ĥć"
var name = "hc"
---
No_Latin_Word : upper(name[0]) contains /[A-Z]/

Why doesn't this perl 6 grammar work?

I don't know Perl 5, but I thought I'd have a play with Perl 6. I am trying out its grammar capabilities, but so far I'm having no luck. Here's my code so far:
grammar CopybookGrammar {
    token TOP { {say "at TOP" } <aword><num>}
    token aword { {say "at word" } [a..z]+ }
    token num { {say "at NUM" } [0..9]+ }
}

sub scanit($contents) {
    my $match1 = CopybookGrammar.parse($contents);
    say $match1;
}

scanit "outline1";
The output is as follows:
at TOP
at word
(Any)
For some reason, it does not appear to be matching the <num> rule. Any ideas?
You forgot the angled brackets in the character classes syntax:
[a..z]+ should be <[a..z]>+
[0..9]+ should be <[0..9]>+
By themselves, square brackets [ ] simply act as a non-capturing group in Perl 6 regexes. So [a..z]+ would match the letter "a", followed by any two characters, followed by the letter "z", and then the whole thing again any number of times. Since this does not match the word "outline", the <aword> token failed to match for you, and parsing did not continue to the <num> token.
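For reference, the grammar with the corrected character classes (say blocks removed) parses the input:

grammar CopybookGrammar {
    token TOP   { <aword> <num> }
    token aword { <[a..z]>+ }
    token num   { <[0..9]>+ }
}

say CopybookGrammar.parse("outline1");

This should print a match with aword => 「outline」 and num => 「1」.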
PS: When debugging grammars, a more convenient alternative to adding {say ...} blocks everywhere, is to use Grammar::Debugger. After installing that module, you can temporarily add the line use Grammar::Debugger; to your code, and run your program - then it'll go through your grammar step by step (using the ENTER key to continue), and tell you which tokens/rules match along the way.

How to use { } curly braces in a JavaScript function generated by an RPG-CGI program

How do I write an RPG-CGI program that generates an HTML page containing a JavaScript program with a function like xxx() { aaaaaaaaaaaa; ssssssssss; }? When the braces are written using a hex code constant, they are changed to some other symbol in the actual HTML code in the browser.
Does the EBCDIC character set contain the { } [ ] ! symbols? If not, how can I use them in an AS/400 RPG-CGI program?
You are most likely running into a code page conversion issue, which in brief means that the AS/400 does not produce the characters the recipient expects. Try to run in code page 819, which is ISO Latin-1.
Another option may be to look into using CGIDEV2, though I would try Thorbjørn's option first.