How to get the current time with GHCJS?

How to get the current time with GHCJS? Should I try to access Date or use the Haskell base libraries? Is there a utility function somewhere in the GHCJS base libraries?

The Data.Time.Clock module seems to work well:
import Data.Time.Clock (getCurrentTime)
import Data.Time.Format -- Show instance
main = do
  now <- getCurrentTime
  print now

The solution I currently have is quite ugly, but it works for me, so maybe it can save somebody some time:
{-# LANGUAGE JavaScriptFFI #-}
import GHCJS.Types (JSVal)
import GHCJS.Prim (fromJSString)

foreign import javascript unsafe "Date.now()+''" dateNow :: IO JSVal

asInteger :: IO Integer
asInteger = do
  now <- dateNow                     -- this happens in IO
  return (read (fromJSString now))
The ugliness comes from not finding a JSInteger type in GHCJS, which would be needed to hold the result of Date.now(), since that is a long integer. So I have to produce a string on the JavaScript side by concatenating an empty string to the result of Date.now(). At that point I could take a JSString as the result, but JSString is not an instance of Read, so read would not work on it. Instead I take a JSVal and convert it to a String using fromJSString.
Eventually there might be a JSInteger in GHCJS, or JSString might become an instance of Read, so if you are reading this from the future, try something more elegant!

Related

BigQuery : Returning timestamp from JS udf throwing "Failed to coerce output value to type TIMESTAMP"

I have the following BigQuery code:
CREATE TEMP FUNCTION to_struct_attributes(input STRING)
RETURNS STRUCT<status_code STRING, created_time TIMESTAMP>
LANGUAGE js AS """
  let res = JSON.parse(input);
  res['created_time'] = Date(res['created_time']);
  return res;
""";
SELECT
  5 AS ID,
  to_struct_attributes(
    TO_JSON_STRING(
      STRUCT(
        TIMESTAMP(PARSE_TIMESTAMP('%Y%m%d%H%M%S', '20220215175959', 'America/Los_Angeles')) AS created_time
      )
    )
  ) AS ATTRIBUTES;
When I execute this, I'm getting the following error:
Failed to coerce output value "2022-02-16 01:59:59+00" to type TIMESTAMP
I find this quite strange, since BigQuery should be able to interpret it correctly, and I haven't had this issue with any other data types. Also, if I do:
SELECT TIMESTAMP("2022-02-16 01:59:59+00")
It returns:
2022-02-16 01:59:59 UTC
So BigQuery can indeed parse it correctly. I'm not sure why it doesn't happen for the UDF. On searching the internet, I found this question and as the answer suggests, if I change the return statement to:
return Date(res.created_time);
It resolves the issue. But for a project of mine, doing it for every timestamp is not feasible due to the high number of struct columns.
So, I wanted to know if someone has a better alternative to it?
PS: I have removed a lot of non-essential parts from the example above, so it might look a bit abstract. Also, the actual use case is a bit different and more complex, which is why I need the JS UDF.
The best way to do what you want is to use the following return statement:
return Date(res.created_time);
This is because, when you pass a TIMESTAMP to a JavaScript UDF, it is represented as a Date object, as stated in the documentation. The same applies on the way out: to return a TIMESTAMP from a JavaScript UDF, you need to construct and return a Date object.

VS Code - Completion is terrible, is it my setup?

Code completion and IntelliSense in VS Code are absolutely god-awful for me. In every language. I have extensions installed and updated, but it's always absolute trash.
import pandas as pd
data_all = pd.read_csv(DATA_FILE, header=None)
data_all. (press tab)
No suggestions.
Do you really not know it's a pandas DataFrame object? It's literally the line above.
I have this issue in Python, in Ruby/Rails, pretty much every language I try to use; the completion is absolute garbage. Do I have an extension that is breaking other extensions? Is Code just this bad? Why is it so inexplicably useless?
Installed Currently:
abusaidm.html-snippets#0.2.1
alefragnani.numbered-bookmarks#8.0.2
bmewburn.vscode-intelephense-client#1.6.3
bung87.rails#0.16.11
bung87.vscode-gemfile#0.4.0
castwide.solargraph#0.21.1
CoenraadS.bracket-pair-colorizer#1.0.61
donjayamanne.python-extension-pack#1.6.0
ecmel.vscode-html-css#1.10.2
felixfbecker.php-debug#1.14.9
felixfbecker.php-intellisense#2.3.14
felixfbecker.php-pack#1.0.2
formulahendry.auto-close-tag#0.5.10
golang.go#0.23.2
groksrc.ruby#0.1.0
k--kato.intellij-idea-keybindings#1.4.0
KevinRose.vsc-python-indent#1.12.0
Leopotam.csharpfixformat#0.0.84
magicstack.MagicPython#1.1.0
miguel-savignano.ruby-symbols#0.1.8
ms-dotnettools.csharp#1.23.9
ms-mssql.mssql#1.10.1
ms-python.python#2021.2.636928669
ms-python.vscode-pylance#2021.3.1
ms-toolsai.jupyter#2021.3.619093157
ms-vscode.cpptools#1.2.2
rebornix.ruby#0.28.1
sianglim.slim#0.1.2
VisualStudioExptTeam.vscodeintellicode#1.2.11
wingrunr21.vscode-ruby#0.28.0
Zignd.html-css-class-completion#1.20.0
If you check the IntelliSense of the read_csv() method (by hovering your mouse over it), you will see that it returns a DataFrame object:
(function)
read_csv(reader: IO, sep: str = ...,
#Okay... very long definition but scroll to the end...
float_precision: str | None = ...) -> DataFrame
But if you use IntelliSense to check the variable data_all:
import pandas as pd
data_all = pd.read_csv(DATA_FILE, header=None)
It is listed as Python's default data type, Any. That's why your editor isn't generating the autocomplete suggestions.
So, you simply need to explicitly tell the type checker that it is, in fact, a DataFrame object, as shown.
import pandas as pd
from pandas.core.frame import DataFrame

DATA_FILE = "myfile"
data_all: DataFrame = pd.read_csv(DATA_FILE, header=None)
# Now all autocomplete options on data_all are available!
It might seem strange that the type checker cannot infer the data type in this example, until you realize that the read_csv() method is overloaded with many definitions, and some of them return Any. So the type checker assumes the worst-case scenario and treats the result as Any unless told otherwise.
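As an alternative to annotating the variable, a sketch using typing.cast should work too, assuming Pylance (or whichever language server you use) honors it; cast() is a no-op at runtime and only informs the type checker:
import pandas as pd
from pandas import DataFrame
from typing import cast

DATA_FILE = "myfile"  # placeholder path, as in the snippet above
# cast() changes nothing at runtime; it only tells the type checker what data_all is.
data_all = cast(DataFrame, pd.read_csv(DATA_FILE, header=None))
print(data_all.head())  # DataFrame members now autocomplete on data_all
This keeps the call site unchanged, which can be handy when the annotation syntax feels intrusive.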

Unable to use pickAFile in TigerJython

In JES, I am able to use:
file=pickAFile()
In TigerJython, however, I get the following error:
NameError: name 'pickAFile' is not defined
What am I doing wrong here?
You are not doing anything wrong at all. The thing is that pickAFile() is not a standard Python function. It is a function that JES has added for convenience, but you will probably not find it in any other environment.
Since TigerJython and JES are both based on Jython, you can easily write a pickAFile() function of your own that uses Java's Swing. Here is a possible simple implementation (the pickAFile() found in JES might be a bit more complex, but this should get you started):
def pickAFile():
    from javax.swing import JFileChooser
    fc = JFileChooser()
    retVal = fc.showOpenDialog(None)
    if retVal == JFileChooser.APPROVE_OPTION:
        return fc.getSelectedFile()
    else:
        return None
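One difference worth noting: JES's pickAFile() returns the chosen path as a string, whereas JFileChooser.getSelectedFile() returns a java.io.File. A rough usage sketch, assuming you want the path string back (the getAbsolutePath() call is my addition, not part of the question):
# Pick a file and print its absolute path.
chosen = pickAFile()
if chosen is not None:
    print(chosen.getAbsolutePath())  # convert the java.io.File to a path string
else:
    print("No file selected")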
Given that it is certainly a useful function, we might have to consider including it in our next update of TigerJython.
P.S. I would like to apologise for answering so late; I only joined SO recently and was not aware of your question (I am one of the original authors of TigerJython).

Generating Random String of Numbers and Letters Using Go's "testing/quick" Package

I've been breaking my head over this for a few days now and can't seem to be able to figure it out. Perhaps it's glaringly obvious, but I don't seem to be able to spot it. I've read up on all the basics of unicode, UTF-8, UTF-16, normalisation, etc, but to no avail. Hopefully somebody's able to help me out here...
I'm using Go's Value function from the testing/quick package to generate random values for the fields in my data structs, in order to implement the Generator interface for the structs in question. Specifically, given a Metadata struct, I've defined the implementation as follows:
func (m *Metadata) Generate(r *rand.Rand, size int) (value reflect.Value) {
    value = reflect.ValueOf(m).Elem()
    for i := 0; i < value.NumField(); i++ {
        if t, ok := quick.Value(value.Field(i).Type(), r); ok {
            value.Field(i).Set(t)
        }
    }
    return
}
Now, in doing so, I'll end up with both the receiver and the return value being set with random generated values of the appropriate type (strings, ints, etc. in the receiver and reflect.Value in the returned reflect.Value).
Now, the implementation for the Value function states that it will return something of type []rune converted to type string. As far as I know, this should allow me to then use the functions in the runes, unicode and norm packages to define a filter which filters out everything which is not part of 'Latin', 'Letter' or 'Number'. I defined the following filter which uses a transform to filter out letters which are not in those character rangetables (as defined in the unicode package):
func runefilter(in reflect.Value) (out reflect.Value) {
    out = in // Make sure you return something
    if in.Kind() == reflect.String {
        instr := in.String()
        t := transform.Chain(norm.NFD, runes.Remove(runes.NotIn(rangetable.Merge(unicode.Letter, unicode.Latin, unicode.Number))), norm.NFC)
        outstr, _, _ := transform.String(t, instr)
        out = reflect.ValueOf(outstr)
    }
    return
}
Now, I think I've tried just about anything, but I keep ending up with a series of strings which are far from the Latin range, e.g.:
𥗉똿穊
𢷽嚶
秓䝏小𪖹䮋
𪿝ท솲
𡉪䂾
ʋ𥅮ᦸ
堮𡹯憨𥗼𧵕ꥆ
𢝌𐑮𧍛併怃𥊇
鯮
𣏲𝐒
⓿ꐠ槹𬠂黟
𢼭踁퓺𪇖
俇𣄃𔘧
𢝶
𝖸쩈𤫐𢬿詢𬄙
𫱘𨆟𑊙
欓
So, can anybody explain what I'm overlooking here and how I could instead define a transformer which removes/replaces non-letter/number/latin characters so that I can use the Value function as intended (but with a smaller subset of 'random' characters)?
Thanks!
Confusingly, the Generate interface needs a method defined on the value type, not on a pointer to the type. You want your type signature to look like:
func (m Metadata) Generate(r *rand.Rand, size int) (value reflect.Value)
You can play with this here. Note: the most important thing to do in that playground is to switch the type of the generate function from m Metadata to m *Metadata and see that Hi Mom! never prints.
In addition, I think you would be better served using your own type and writing a generate method for that type using a list of all of the characters you want to use. For example:
type LatinString string
const latin = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
and then use the generator
func (l LatinString) Generate(rand *rand.Rand, size int) reflect.Value {
    var buffer bytes.Buffer
    for i := 0; i < size; i++ {
        buffer.WriteString(string(latin[rand.Intn(len(latin))]))
    }
    s := LatinString(buffer.String())
    return reflect.ValueOf(s)
}
playground
Edit: also this library is pretty cool, thanks for showing it to me
The answer to my own question is, it seems, a combination of the answers provided in the comments by @nj_ and @jimb and the answer provided by @benjaminkadish.
In short, the answer boils down to:
"Not such a great idea as you thought it was", or "Bit of an ill-posed question"
"You were using the union of 'Letter', 'Latin' and 'Number' (Letter || Number || Latin), instead of the intersection of 'Latin' with the union of 'Letter' and 'Number' ((Letter || Number) && Latin))
Now for the longer version...
The idea behind using the testing/quick package is that I wanted random data for (fuzzy) testing of my code. In the past, I've always written the code for doing things like that myself, again and again, which meant a lot of the same code across different projects. Now, I could of course have written my own package for it, but it turns out that, even better, there's actually a standard package which does just about exactly what I want.
It turns out the package does what I want very well. The codepoints in the strings it generates are actually random and not just restricted to what we're accustomed to using in everyday life. This is, of course, exactly what you want when doing fuzzy testing, in order to test the code with values outside the usual assumptions.
In practice, that means I'm running into two problems:
There are some limits on what I would consider reasonable input for a string. Meaning that, in testing the processing of a Name field or a URL field, I can reasonably assume there's not going to be a value like 'James Mc⌢' (let alone 'James Mc🙁') or 'www.🕸site.com', but just 'James McFrown' and 'www.website.com'. Hence, I can't expect a reasonable system to be able to support it. Of course, things shouldn't completely break down, but the system also can't be expected to handle the former examples without any problems.
When I filter the generated strings for values one might consider reasonable, the chance of ending up with a valid string is very small. The set of possible characters used by testing/quick is just so large (0x10FFFF) and the set of reasonable characters so small that you end up with empty strings most of the time.
So, what do we need to take away from this?
So, whilst I hoped to use the standard testing/quick package to replace my often-repeated code for generating random data for fuzzy testing, it does this so well that it provides data outside the range of what I would consider reasonable for the code to be able to handle. It seems that the choice, in the end, is to:
Either be able to actually handle all fuzzy options, meaning that if somebody's name is 'Arnold 💰💰' ('Arnold Moneybags'), it shouldn't go arse over end. Or...
Use custom/derived types with their own Generator. This means you're going to have to use the derived type instead of the basic type throughout the code. (Comparable to defining a string as wchar_t instead of char in C++ and working with those by default.). Or...
Don't use testing/quick for fuzzy testing, because as soon as it generates a string value, you can (and should) expect a very random string.
As always, further comments are of course welcome, as it's quite possible I overlooked something.

Convert lxml _Element to HtmlElement

For various reasons I'm trying to switch from lxml.html.fromstring() to lxml.html.html5parser.document_fromstring(). The big difference between the two is that the first returns an lxml.html.HtmlElement, and the second returns an lxml.etree._Element.
Mostly this is OK, but when I try to run my code with the _Element object, it crashes, saying:
AttributeError: 'lxml.etree._Element' object has no attribute 'rewrite_links'
Which makes sense. My question is: what's the best way to deal with this problem? I have a lot of code that expects HtmlElements, so I think the best solution would be to convert to those. I'm not sure that's possible, though.
Update
One terrible solution looks like this:
from lxml.html import fromstring, tostring
from lxml.html import html5parser
e = html5parser.fromstring(text)
html_element = fromstring(tostring(e))
Obviously, that's pretty brute force, but it does work. I'm able to get an HtmlElement that's been parsed by the html5parser, which is what I'm after.
The other option would be to work out how to do the rewrite_links and xpath queries that I rely on, but _Elements don't seem to have that function (which, again, makes sense!)
A solution less CPU-intensive than brute force is to create an almost empty HtmlElement based on the roottree and append the _Element children:
from lxml.html import fromstring, tostring
from lxml.html import html5parser

text = "<html lang='en'><body><a href='http://localhost'>hello</body></html>"
e = html5parser.fromstring(text)
html_element = fromstring(tostring(e.getroottree()))
for child in e.getchildren():
    html_element.append(child)
print(tostring(html_element))

def rewriter(link):
    return "http://newlink.com"

html_element.rewrite_links(rewriter)
print(tostring(html_element.body))
This will output:
b'<html><body><html xmlns:html="http://www.w3.org/1999/xhtml" lang="en"><head></head><body>hello</body></html></body><html:head xmlns:html="http://www.w3.org/1999/xhtml"></html:head><html:body xmlns:html="http://www.w3.org/1999/xhtml"><html:a href="http://localhost">hello</html:a></html:body></html>'
b'<body><html xmlns:html="http://www.w3.org/1999/xhtml" lang="en"><head></head><body>hello</body></html></body>'
So both attributes like 'body' and methods like 'rewrite_links' work in this situation.
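As a side note on the xpath part of the question: _Element does support xpath() directly, so conversion is only strictly needed for HtmlElement-specific helpers like rewrite_links(). The catch, assuming the default html5parser settings, is that the parsed tree lives in the XHTML namespace, so plain queries like //a match nothing; a rough sketch:
from lxml.html import html5parser

text = "<html lang='en'><body><a href='http://localhost'>hello</a></body></html>"
e = html5parser.fromstring(text)

# Declare the XHTML namespace and use a prefixed query.
XHTML = "http://www.w3.org/1999/xhtml"
print(e.xpath("//h:a/@href", namespaces={"h": XHTML}))  # ['http://localhost']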