Why are suffixes like “.vue” and “.jsx” recognized and processed by VS Code? If I want to register a custom suffix such as “.xxx”, give it an icon, and process it, how should I do that?
I'd like a practical tutorial or guide document; a detailed explanation of the underlying principle would be very much appreciated.
I am creating an IntelliJ plugin and I am using JavaParser for one of my features. My plugin will allow users to click a gutter icon next to a method and automatically navigate to the tests associated with that method.
To achieve this, I have temporarily used the line:
typeSolver.add(new JavaParserTypeSolver(new File("/home/webby/IdeaProjects/project00/src/")));
My problem is that I need to pass the source folder of the given module into this type solver. Is there any way I can find the source folder programmatically? Perhaps from an actionEvent?
I have tried things along the lines of the following:
actionEvent.getData(PlatformDataKeys.PROJECT).getBasePath()
This gives me '/home/webby/IdeaProjects/project00/', but I'm struggling to see how I can get the source folder. I feel there should be a fairly straightforward way of doing this using IntelliJ's SDK, but I have not found anything in the documentation or anywhere else online.
Any and all solutions welcome!
Many Thanks,
James
You can use
ModuleRootManager.getInstance(module).getSourceRoots()
to access the source roots of a module. Refer to the IntelliJ SDK Docs for details.
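For example (a minimal sketch, not taken from a real plugin): assuming the action is invoked in a context where a module is available, you can obtain it from the AnActionEvent via LangDataKeys.MODULE and feed each source root into the JavaParser type solver instead of the hard-coded path. TypeSolverFactory is just an illustrative name here:

import com.github.javaparser.symbolsolver.resolution.typesolvers.CombinedTypeSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.JavaParserTypeSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.ReflectionTypeSolver;
import com.intellij.openapi.actionSystem.AnActionEvent;
import com.intellij.openapi.actionSystem.LangDataKeys;
import com.intellij.openapi.module.Module;
import com.intellij.openapi.roots.ModuleRootManager;
import com.intellij.openapi.vfs.VirtualFile;
import java.io.File;

public class TypeSolverFactory {

    // Builds a type solver from the source roots of the module the action was invoked in.
    public static CombinedTypeSolver fromActionEvent(AnActionEvent actionEvent) {
        CombinedTypeSolver typeSolver = new CombinedTypeSolver(new ReflectionTypeSolver());

        // The module under the caret; may be null, e.g. for files outside the project.
        Module module = actionEvent.getData(LangDataKeys.MODULE);
        if (module == null) {
            return typeSolver;
        }

        // Every source root of that module, e.g. .../project00/src or .../src/main/java.
        for (VirtualFile sourceRoot : ModuleRootManager.getInstance(module).getSourceRoots()) {
            typeSolver.add(new JavaParserTypeSolver(new File(sourceRoot.getPath())));
        }
        return typeSolver;
    }
}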
By the way, IntelliJ IDEA provides a dedicated API for the syntax trees of Java files (PSI); it works more efficiently and integrates better with other IDE features than an external parser such as JavaParser.
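As a rough illustration of that route (findUsages below is my own helper name, not part of the SDK), the class and method can be resolved entirely through PSI, and the IDE's index can then find the referencing tests:

import com.intellij.openapi.project.Project;
import com.intellij.psi.JavaPsiFacade;
import com.intellij.psi.PsiClass;
import com.intellij.psi.PsiMethod;
import com.intellij.psi.PsiReference;
import com.intellij.psi.search.GlobalSearchScope;
import com.intellij.psi.search.searches.ReferencesSearch;
import java.util.Collection;
import java.util.Collections;

public class PsiLookupExample {

    // Finds all usages of a method through the IDE's own index, no external parser needed.
    public static Collection<PsiReference> findUsages(Project project, String classFqn, String methodName) {
        PsiClass psiClass = JavaPsiFacade.getInstance(project)
                .findClass(classFqn, GlobalSearchScope.projectScope(project));
        if (psiClass == null) {
            return Collections.emptyList();
        }
        // false = only methods declared in this class, not inherited ones.
        PsiMethod[] methods = psiClass.findMethodsByName(methodName, false);
        if (methods.length == 0) {
            return Collections.emptyList();
        }
        // The search also covers test sources, which is what the gutter action needs.
        return ReferencesSearch.search(methods[0]).findAll();
    }
}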
It's also better to ask questions about the IntelliJ IDEA API on the dedicated plugin development forum.
I am using a command line tool (ng-xi18n) to extract the i18n strings from an Angular 2 app I wrote. The output of this command is a messages.xlf file. Coming from a .po background, and not being familiar with .xlf, I assumed that this file is the equivalent of the .pot file (correct me if I am wrong).
I then assumed that if I want to translate my app, I had to cp messages.xlf messages.de.xlf to have a copy (messages.de.xlf) of the template file (messages.xlf) where I can translate each message into German (hence the .de.xlf).
After translating some dummy texts and running the app, I saw that it worked as expected, so I quit translating and continued developing the app. After some time, I added more i18n strings and eventually figured that I had to update my template. And this is where things became hard to maintain. I updated the template messages.xlf file and quickly wondered how I could merge the new strings into my already translated messages.de.xlf file without losing my progress.
When I was developing with .po files, this was no problem thanks to good tools like poEdit, but I didn't find anything comparable for .xlf. After trying some tools, I thought the best choice would be Lokalize, but I couldn't find a way to merge the template file into already translated (but outdated) files there either.
Up to now, this was rather an essay than a question, so here's a quick summary:
Is the workflow of dealing with .xlf files really comparable to .po as I initially thought (described above), or is it completely different?
How am I supposed to update my already translated files?
What are the best practices for dealing with .xlf files?
Which tools have proven to work well with .xlf files?
Sidenotes:
The Lokalize handbook was not helpful at all. I see a lot of functions that sound promising, like:
"File" > "Update file from template". I did not find anything in the handbook to explain this function. If I click on this, nothing happens.
"Sync" > "Open file for sync/merge". This seems to be a function to merge two similar files (by multiple translators) rather than a tool to update the translation file from a template. Even though there is a tooltip in Lokalize's primary sync tab, notifying me about "x unmatched entries", I just couldn't find anything to append those unmatched entries to my .de.xlf file.
[Update] Turns out, I had similar issues as in this question. After downgrading my version of Lokalize to the suggested one, many issues (including the ones mentioned in the question) disappeared. However, now the "Update file from template" option is greyed out, and I don't know why.
I also tried OmegaT, which does not work at all on my platform (Ubuntu 16.04).
[Update] Virtaal works great for merging new strings from a template, but the UI in general is very poorly designed...
Googling did not help, as every hit seems to be related to XCode or something.
Thanks in advance for any help; I really appreciate it.
I wrote a small npm command line tool called xliffmerge.
In principle it does the same thing that Roland Oldengarm does with the gulp tasks described in his blog article.
It is free and you can have a look at it at https://github.com/martinroob/ngx-i18nsupport#readme
The best workflow automation solution I have seen described so far is from Roland Oldengarm's blog entry "Angular 2: Automated i18n workflow using gulp". To summarize, in a few dozen lines of Gulp code he created the tooling to handle some of the challenges you faced. Specifically it runs ng-xi18n to extract the messages; creates an English translation with sources copied to targets; updates existing translations by adding new trans-units, keeping existing ones, and removing missing ones; and then exposes all xlf files as TypeScript string constants. These last strings can then be imported to supply the bootstrapModule with its translation provider options.
Caveat: I have not used this exact solution (and code) myself, but I was able to expose generated xlf as TypeScript strings and use them in an app in a manner similar to what he described. As for maintaining translations, I have leveraged IntelliJ IDEA (WebStorm) file comparison features and Counterparts Lite (for Mac) for that. My own efforts are still in early stages but are working end to end for an application that is in active development.
Official Angular docs are now updated for Internationalization (i18n) at https://angular.io/docs/ts/latest/cookbook/i18n.html including a section specifically for creating a translation source file with the ng-xi18n tool.
I'm trying to implement content assist and some custom highlighting as a plugin for Eclipse. After a lot of research I found this Eclipse document.
I got content assist working for XML documents. The problem is the part about semantic highlighting: I didn't find any information about how to implement this extension point, and I'm a bit lost. The only info that I found is the XSD for the extension point.
I'm trying to make certain custom expressions in XML get a different color, e.g.:
<span>%%colored_text%%</span>
Where can I get more information about this org.eclipse.wst.sse.ui.semanticHighlighting and how to implement it?
I don't think there's a lot of documentation on semantic highlighting for SSE. The document that you found is a little light on details. For an example, the XSL project implemented semantic highlighting using this extension point.
The basic idea behind the semantic highlighting extension point is that when a change occurs, each implementor is asked whether it can 'consume' a region of the document. If it can, it returns an array of Positions that can be highlighted by that particular highlighter. A highlighter can apply only one style, so it ends up being very specific. For example, you wouldn't be able to say 'color this part of the region blue and this other part red'; you would need two separate highlighters to accomplish that.
The highlighter obtains style information for the highlight by using a preference store that you return from getPreferenceStore(). You'll then need to set up keys that the highlighter will use to look up styles from that preference store. If you use the styleStringKey on the extension point, the only key of importance from the semantic highlighting implementation is the one returned from getEnabledPreferenceKey(). This is the condensed way to declare a style, as it only takes two preferences to get going. The semantic highlighting framework knows how to parse the string value returned by the preference store for the styleStringKey into the appropriate style components. Just follow the format as defined in the New Help for Old Friends document that you linked to.
Now, if you want to keep all the style components separate, the other get*PreferenceKey() methods become important. You'll have to define keys for each of them, and then add default values for each of those keys in your preference initializer.
org.eclipse.wst.xsl.ui.internal.preferences.XSLUIPreferenceInitializer has examples of both ways to define these style defaults.
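To give an idea of the shape of such a highlighter, here is a sketch for the %%...%% case from the question. It only shows the methods discussed above; the real class would additionally implement the remaining get*PreferenceKey() accessors of org.eclipse.wst.sse.ui.ISemanticHighlighting and be registered in plugin.xml under the org.eclipse.wst.sse.ui.semanticHighlighting extension point, so treat it as a starting point rather than a complete implementation:

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.eclipse.jface.preference.IPreferenceStore;
import org.eclipse.jface.text.Position;
import org.eclipse.wst.sse.core.internal.provisional.text.IStructuredDocumentRegion;

public class PercentSpanHighlighter {

    private static final Pattern SPAN = Pattern.compile("%%[^%]+%%");

    // Called for every region that changed; return the positions this highlighter styles.
    public Position[] consumes(IStructuredDocumentRegion region) {
        List<Position> positions = new ArrayList<Position>();
        Matcher matcher = SPAN.matcher(region.getText());
        while (matcher.find()) {
            // Positions are document-relative, so add the region's start offset.
            positions.add(new Position(region.getStartOffset() + matcher.start(),
                    matcher.end() - matcher.start()));
        }
        return positions.toArray(new Position[positions.size()]);
    }

    // The store the framework reads the style (styleStringKey) and enablement from.
    public IPreferenceStore getPreferenceStore() {
        // MyEditorPlugin is a hypothetical Activator of your own plug-in.
        return MyEditorPlugin.getDefault().getPreferenceStore();
    }

    public String getEnabledPreferenceKey() {
        return "percentSpanHighlighting.enabled"; // needs a default value in your preference initializer
    }
}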
I have a custom XML format that links to Java resources. For the sake of simplicity let's assume my XML file would look like this:
<root>
<java-class>my.fully.qualified.class.name</java-class>
</root>
Eventually my references will be somewhat more complicated. It will not contain the fully qualified class name directly and I will need some logic to resolve the correct class, but I want to keep the example as simple as possible here.
Now I want it to be possible to Ctrl+Click on the element's text and have IntelliJ carry me to the .java file, just like it is possible in Spring XML files. In the IDEA Plugin Development FAQ there is a link called "How do I add custom references to Java elements in XML files?", which sounds like exactly what I need. Unfortunately it links to a discussion where someone is more or less done implementing something like this and only has some minor problems left. Nevertheless, I understood that I probably need to write an implementation of the interface com.intellij.psi.PsiReference.

Googling for "PsiReference" and "IntelliJ" or "IDEA" unfortunately did not bring up any tutorials on how to use it, but I found the class XmlValueReference, which sounds useful. Yet again, googling for "XmlValueReference" did not turn up anything useful on how to use the class. At least the PSI Cookbook tells me that I can find the Java class by using JavaPsiFacade.findClass(). I'd be thankful for any tutorials, hints and the like that show the correct usage.
The discussion linked above mentions that I need to call registry.registerReferenceProvider(XmlTag.class, provider) in order to register my provider once I eventually manage to implement it, but of which type is "registry", and where do I get it from?
First of all, here's a nice tutorial that came up a few days ago, which explains the basics of IntelliJ plugin development (you should take a look at the section Reference Contributor).
You will likely have to define your own PsiReferenceContributor, which will be referenced in your plugin.xml like this:
<psi.referenceContributor implementation="com.yourplugin.YourReferenceContributor"/>
In your reference contributor, there's a method registerReferenceProviders(PsiReferenceRegistrar) where you will be able to call registry.registerReferenceProvider(XmlTag.class, provider).
Finally, in your instance of PsiReferenceProvider, you will have to test the tag name to filter out tags which don't contain class references, then find the right Java class using JavaPsiFacade.findClass().
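Put together, a contributor for the <java-class> example might look roughly like the sketch below. Note that recent SDK versions register providers against an ElementPattern such as XmlPatterns.xmlTag() rather than XmlTag.class, and JavaClassReference is just an illustrative name:

import com.intellij.openapi.util.TextRange;
import com.intellij.patterns.XmlPatterns;
import com.intellij.psi.JavaPsiFacade;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiReference;
import com.intellij.psi.PsiReferenceBase;
import com.intellij.psi.PsiReferenceContributor;
import com.intellij.psi.PsiReferenceProvider;
import com.intellij.psi.PsiReferenceRegistrar;
import com.intellij.psi.search.GlobalSearchScope;
import com.intellij.psi.xml.XmlTag;
import com.intellij.util.ProcessingContext;
import org.jetbrains.annotations.NotNull;

public class JavaClassReferenceContributor extends PsiReferenceContributor {

    @Override
    public void registerReferenceProviders(@NotNull PsiReferenceRegistrar registrar) {
        registrar.registerReferenceProvider(XmlPatterns.xmlTag(), new PsiReferenceProvider() {
            @NotNull
            @Override
            public PsiReference[] getReferencesByElement(@NotNull PsiElement element,
                                                         @NotNull ProcessingContext context) {
                XmlTag tag = (XmlTag) element;
                // Only contribute references for our <java-class> elements.
                if (!"java-class".equals(tag.getName()) || tag.getValue().getText().isEmpty()) {
                    return PsiReference.EMPTY_ARRAY;
                }
                // Range of the tag value, relative to the tag element itself.
                TextRange range = tag.getValue().getTextRange()
                        .shiftRight(-tag.getTextRange().getStartOffset());
                return new PsiReference[]{new JavaClassReference(tag, range)};
            }
        });
    }

    private static class JavaClassReference extends PsiReferenceBase<XmlTag> {
        JavaClassReference(XmlTag tag, TextRange rangeInElement) {
            super(tag, rangeInElement);
        }

        @Override
        public PsiElement resolve() {
            // Ctrl+Click navigates to whatever this returns.
            String fqn = getElement().getValue().getTrimmedText();
            return JavaPsiFacade.getInstance(getElement().getProject())
                    .findClass(fqn, GlobalSearchScope.allScope(getElement().getProject()));
        }

        @Override
        public Object[] getVariants() {
            return new Object[0]; // no completion variants in this sketch
        }
    }
}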
From my experience, the best place to get help regarding IntelliJ plugin development is JetBrains' forums.
I have a project in Eclipse where I have an editor for a custom language. I am using ANTLR to generate the compiler for it. What I need is to add content assist to the editor.
The input is the source code in the custom language and the position of the character where the user requested content assist. The source code is incomplete most of the time, as the user can ask for content assist at any moment. What I need is to calculate the list of tokens that are valid at the given position.
It is possible to write custom code to do the calculation, but that code would have to be kept in sync with the grammar manually. I figured the parser is doing something similar: it has to be able to determine, in a given context, which tokens are acceptable. Is it possible to "reuse" that? What is the best practice for creating content assist anyway?
Thanks,
Balint
Have a look at Xtext. Xtext uses ANTLR 3 under the hood and provides content assist for the ANTLR-based languages it generates. Look especially into the package org.eclipse.xtext.ui.editor.contentassist.
You may consider redefining your grammar with Xtext, which would give you content assist out of the box. It is not possible to reuse the ANTLR grammar of a custom language directly.
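To illustrate what a customization looks like once the grammar is defined in Xtext: the names below are placeholders ("MyDsl" and the Greeting rule stand for whatever your grammar declares, and AbstractMyDslProposalProvider is the class Xtext generates for it), so this is a sketch rather than drop-in code:

import org.eclipse.emf.ecore.EObject;
import org.eclipse.xtext.Assignment;
import org.eclipse.xtext.ui.editor.contentassist.ContentAssistContext;
import org.eclipse.xtext.ui.editor.contentassist.ICompletionProposalAcceptor;

// Xtext generates one complete<Rule>_<Feature>(...) hook per assignment in the grammar;
// you only override the ones you want to customize.
public class MyDslProposalProvider extends AbstractMyDslProposalProvider {

    @Override
    public void completeGreeting_Name(EObject model, Assignment assignment,
                                      ContentAssistContext context,
                                      ICompletionProposalAcceptor acceptor) {
        // Keep the proposals Xtext derives from the grammar, then add our own.
        super.completeGreeting_Name(model, assignment, context, acceptor);
        acceptor.accept(createCompletionProposal("world", context));
    }
}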