Advice for simpleTraverse usage from "@typescript-eslint/typescript-estree" - typescript-eslint

👋
I'm parsing an AST with typescript-estree's parse function and traversing the tree with simpleTraverse to identify and collect metadata about different nodes. I read in this issue discussion that simpleTraverse is intentionally not documented, as it's only intended for use by other ESLint packages, which makes sense.
My question is whether it's fine to use as is, or whether there's another tool I should be using. I have tried other common traversal tools, but the types are incomplete and/or conflict with what's provided by @typescript-eslint/typescript-estree. Another option is to copy the simpleTraverse code.
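For concreteness, here's roughly what my usage looks like (a minimal sketch, assuming simpleTraverse keeps its current export and visitor-based signature):

```typescript
import { parse, simpleTraverse, TSESTree } from '@typescript-eslint/typescript-estree';

const ast = parse('const answer: number = 42;', { loc: true, range: true });

// Collect metadata about every Identifier node in the tree;
// the visitor keys are AST node type names.
const names: string[] = [];
simpleTraverse(ast, {
  visitors: {
    Identifier(node) {
      names.push((node as TSESTree.Identifier).name);
    },
  },
});

console.log(names); // ['answer']
```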
Thanks in advance! I really appreciate this project and the community supporting it. ❤️

Related

How to list the exposed members of a package/dir-like method in Elm?

I have been searching the official docs and existing questions and could not find any information on this: in Elm, how would it be possible to see the members/methods/variables that belong to or are exposed by a package (like the dir function in Python), without having to dive into the source code each time?
What I want to do is get a simple list of what methods are exposed by an imported package. (So for a package like List, it should output reverse, all, any, map, etc.) I have attempted tab completion in elm repl and the Elm extension available in the VS Code editor, and elm repl does not offer any methods such as help, doc, ?, dir, man, etc., so I have no idea where to even start. I'm wondering how everyone else does this, other than pulling up the source code for each and every package they use.
I apologize for the newbie question and if I misread or have been missing anything, but I couldn't even find anything in the https://elmprogramming.com tutorial. Thanks in advance.
Nothing like this exists in Elm to do reflection over modules, unfortunately (as of 0.19.1, at least).
However, if you aren't looking to actually do this kind of thing at runtime, but rather as a convenient way of finding things out during development: the Elm packaging system enforces the requirement that all public functions are documented, so if you visit the package page, every public function and type will be documented there (obviously it can't enforce the content of the documentation, but at the very least everything will be listed).

Creating Cytoscape.js extensions

Max, I want to update my extension to the new format, but I am running into issues with the placement of custom code. It seems that the extension framework has been updated a lot since I added an extension 4 years ago. Is there a way to get better documentation on getting started with adding an extension? I am happy to help write up the documentation if you can help answer some questions that I think would help get people started. Let me know.
The only thing that really changed is that the scaffolder creates a webpack project for you. The extension registering procedure is the same: http://js.cytoscape.org/#extensions/api
For example, cytoscape( 'collection', 'fooBar', function(){ return 'baz'; } ) registers eles.fooBar().
I guess the main thing is that there are a lot more files than what the previous scaffolder generated, so it might be harder to find things. The layout output has lots of files because it creates a skeleton implementation for each of the continuous and discrete cases.
The scaffolder isn't strictly necessary. You could use another build system (or none at all) as long as you call cytoscape(). For example, if you only care about publishing to npm for people who use webpack/browserify/rollup, then you could just use a CJS require('cytoscape') to pull in the peer dependency. Exporting a register function is nice if you want to allow the client to decide the order of extension registrations with cytoscape.use(extension) (or extension(cytoscape)).
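Here's a minimal sketch of that register-function pattern (the cytoscape-foobar package name in the comments is hypothetical, and the typings are only approximate):

```typescript
import cytoscape from 'cytoscape';

// Registering a 'collection' extension named 'fooBar' makes it
// available as eles.fooBar() once the extension is registered.
export default function register(cy: typeof cytoscape): void {
  cy('collection', 'fooBar', function () {
    return 'baz';
  });
}

// The client decides the registration order:
//   import cytoscape from 'cytoscape';
//   import fooBar from 'cytoscape-foobar'; // hypothetical package name
//   cytoscape.use(fooBar);
```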
You're right that there should be some more docs on the output of the scaffolder. Maybe a summary of the files would suffice. We could add a tutorial in the blog later if need be. Both the docs and the blog just use markdown, so the content could go in either place.

Unmarshaling SOS DescribeSensor response via JSONIX yields incomplete object

I am attempting to use Jsonix to unmarshal the XML response from an SOS DescribeSensor request. In the broader scope, I am going to be using Jsonix to unmarshal all responses from SOS, particularly 2.0. I noticed that the response uses the SML or SensorML namespace, so I added the extra module dependencies and sub-dependencies (namely GML_3_1_1, SWE_1_0_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language, and of course SensorML_1_0_1). Before I added these, I noticed the return was generic JSON (see first screenshot, particularly near sml:physicalsystem). After I added the dependencies, I got an error in my console during part of the unmarshaling process which I do not understand (see second screenshot). Here is a link to the XML response from the server for reference: https://drive.google.com/file/d/0B8LdnPVJpHz7M3VGb0FZc2lQcjQ/view?usp=sharing. I would really like to understand whether this has anything to do with the ordering of the modules when I create the context, though I believe it is fine. Once the solution to this is discovered, I have two follow-up questions.
Is it reasonable to expect (in general) that using the modules built from the ogc-schemas on the highsource GitHub page should allow me to handle all responses via Jsonix, i.e. that every element will always be mapped to a defined type? I know these schemas/mappings are very complicated.
Are there any other tools I can use to verify the modules or validate them against the schemas to make life easier, rather than tracking down elements on an individual basis or tracing through various module files when Jsonix seems to parse incorrectly?
Thanks in advance - Richard3d
var context = new Jsonix.Context([XLink_1_0, GML_3_2_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language, GML_3_1_1, SWE_1_0_1, SensorML_1_0_1, OWS_1_1_0, SWE_2_0, SWES_2_0, WSN_T_1, WS_Addr_1_0_Core, OM_2_0, ISO19139_GMD_20070417, ISO19139_GCO_20070417, ISO19139_GSS_20070417, ISO19139_GTS_20070417, ISO19139_GSR_20070417, Filter_2_0, SOS_2_0]);
Disclaimer: I am the author of jsonix and main dev of ogc-schemas.
First of all, you're on the right track, stay on it.
Yes, if you have all the required mappings then you should get a "nice" JSON with all the properties with specific types, cardinalities etc.
The goal of Jsonix is to provide bi-directional XML<->JSON conversion with deterministic structure, types and cardinalities.
The goal of OGC Schemas is to provide JAXB and Jsonix mappings for all of the OGC Schemas.
So together these two should allow you to transform any OGC XML from/to JSON.
"Generic JSON" was actually just DOM. If a property allows DOM and Jsonix does not have a mapping for a certain element, it is just taken as DOM. You were just missing the SensorML mappings.
You're right that the structure of schema dependencies is very complex. But this is something we should take to OGC. :) It's a bit crazy that you need, like, a dozen schemas to read sensor data. I was actually intending to build automatic loading of dependencies but have not yet implemented this feature.
The next GML_3_1_1.AbstractFeatureType problem is probably this issue. Try changing the order of mappings (move GML_3_1_1 to an earlier position). Actually, the order of mappings should not be significant, but, well, there's a bug.
Tools to cross-check - no, probably not. My approach is to do round-trip tests (unmarshal-marshal-unmarshal-check equality). From experience, there are normally a couple of caveats at the start, but then it works by design. Of course there are bugs in Jsonix and there may be problems with the mappings, but these get sorted out.
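A round-trip check can be as small as this sketch (mappings and xml stand in for your own module list and test document; Jsonix ships no TypeScript typings, hence the require and the loose types):

```typescript
// Round trip: unmarshal -> marshal -> unmarshal again, then compare.
const { Jsonix } = require('jsonix');

declare const mappings: unknown[]; // the mapping modules from the context above
declare const xml: string;         // the DescribeSensor response under test

const context = new Jsonix.Context(mappings);
const unmarshaller = context.createUnmarshaller();
const marshaller = context.createMarshaller();

const first = unmarshaller.unmarshalString(xml);
const second = unmarshaller.unmarshalString(marshaller.marshalString(first));

// Any deep-equality check works here; JSON.stringify is the crudest option.
console.log(JSON.stringify(first) === JSON.stringify(second));
```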
Also feel free to create a support project here:
https://github.com/highsource/jsonix-support
For instance https://github.com/highsource/jsonix-support/s/sos.
Here's an example of such a support project:
https://github.com/highsource/jsonix-support/tree/master/l/lightstalker89
I need this because just downloading XML from Google Drive (a) takes me effort to set up the support project and (b) is legally dangerous, as I have no idea where this XML comes from and whether I have the rights/license to add these files to my test suites.

Is it feasible to use Antlr for source code completion?

I don't know if this question is valid, since I'm not very familiar with source code parsing. My goal is to write a source code completion function for an existing programming language (language "X") for learning purposes.
Is ANTLR (v4) suitable for such a task, or should the necessary AST/parse tree creation and parsing be done by hand, assuming no existing solution exists?
I haven't found much information about that specific topic, apart from a list of compiler books, but a compiler is not what I'm after.
The code completion in GoWorks is completely implemented using ANTLR 4. The following video shows the level of completion of this code completion engine. The code completion example runs from 5 minutes through the end of the video.
Intro to Tunnel Vision Labs' GoWorks IDE (Preview Release)
I have been working on code completion algorithms for many years, and strongly believe that there is no better solution (automated or manual) for producing a code completion engine for a new language that meets the requirements for what I would call highly responsive code completion. If you are not interested in that level of performance or accuracy, other solutions may be easier for you to get started with (I don't work with those personally, because I am too easily disappointed in the results).
Xtext uses ANTLR3 and has good autocomplete facilities. The problem is, it generates a separate parser (again using ANTLR3) for autocomplete processing, which is derived from AbstractInternalContentAssistParser. This multi-thousand-line piece of code shows that the error recovery of ANTLR3 alone was found to be insufficient by the Xtext team.
Meanwhile, ANTLR4 has a function parser.getExpectedTokensWithinCurrentRule() which lists the possible token types for a given position. It works when used in a ParseTreeListener. What remains is semantics, scoping, etc., which is outside ANTLR's scope.
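As a sketch of how that can be wired up (this assumes the antlr4ts TypeScript target, and that it mirrors the Java Parser API, including addParseListener and getExpectedTokensWithinCurrentRule; XLexer, XParser, and the program start rule are hypothetical generated names):

```typescript
import { CharStreams, CommonTokenStream, ParserRuleContext } from 'antlr4ts';
import { XLexer } from './generated/XLexer';   // hypothetical generated lexer
import { XParser } from './generated/XParser'; // hypothetical generated parser

const lexer = new XLexer(CharStreams.fromString('source text up to the caret'));
const parser = new XParser(new CommonTokenStream(lexer));

// A parse listener fires while the parse is in flight, so the parser's
// notion of the "current rule" matches the position being consumed.
parser.addParseListener({
  enterEveryRule(ctx: ParserRuleContext): void {
    const expected = parser.getExpectedTokensWithinCurrentRule();
    console.log(`rule ${ctx.ruleIndex}: expecting ${expected.toString()}`);
  },
});

parser.program(); // hypothetical start rule
```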

Difference between (plain) Classworlds and Plexus Classworlds?

Can anyone please explain the difference between plexus-classworlds and (plain) classworlds?
These two are confusing, and I can't see the difference. The Plexus Classworlds site contains almost no description. Apparently, a Maven-based Java project uses both, and I don't understand why.
Is it possible to replace classworlds with plexus-classworlds without much hassle?
I'm gonna answer that, even though the question is so old...
classworlds was migrated to plexus-classworlds, but the documentation on the site doesn't seem to have kept up with that... The best docs I've seen were for classworlds 1.1-SNAPSHOT, although the current version is plexus-classworlds 2.4.1-SNAPSHOT, and there is hardly any documentation there.
If you look at plexus-classworlds, you can also see the original org.codehaus.classworlds package, with class comments like this:
A compatibility wrapper for org.codehaus.plexus.classworlds.launcher.Launcher provided for legacy code
which means that they thought about migration, but of course nothing replaces a thorough test.