I've finally found a way to export single classes to a jar file. My problem now is that it doesn't export the directory structure, only the individual classes.
How can I export the chosen classes together with the current directory structure (so that the class ends up at e.g. /src/org/smth/cmp/asd/MyClass.class inside the jar)?
Some things in IDEA are really counterintuitive compared to how easy this was in Eclipse, where selecting the classes and simply using the Export... function in the context menu did exactly what I'm describing. I still like a lot of its features, though.
I am already using the minify option when building with dart2js.
I looked at the output and see that importing 'dart:html' causes problems with the output file size (a 2 KB .dart file becomes a 182 KB .js file). For example, it pulls in the SVG package even though my code never touches any <svg> DOM elements.
I understand that the compiler doesn't know whether I'm going to use SVG DOM elements or not, and I understand that the use of var is one of the reasons for that behavior.
But even if I don't use the var keyword at all, the compiler still doesn't have enough 'power' to strip all unused packages and functions.
Is there any directive I can use to forbid the import of certain packages? I mean built-in packages right now. I'm using IntelliJ IDEA, and it doesn't allow me to change anything in the default Dart setup.
UPDATE: I tried using
    import 'dart:html' show querySelector, Element;
to import only that function and class, but the file size is still 182 KB.
The only solution I can see for now is to make a few stripped-down versions of the default 'dart:html' package, e.g. one without WebGL, SVG and some other features.
Maybe the Dart compiler actually works well, and there are simply some methods and classes that I never use but that the library code itself uses, like initial package methods that check whether some elements are SVG, or something like that.
There is a tool for analyzing the output of a dart2js build, especially for references and dependencies. I just tested it and it gave me a much better overview in my case.
https://github.com/dart-lang/dump-info-visualizer
Hosted at:
https://dart-lang.github.io/dump-info-visualizer/
Build with the --dump-info option (see the example command below):
https://webdev.dartlang.org/tools/dart2js#options
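For example, when invoking dart2js directly, the command might look like this (the entry-point and output paths are just placeholders); alongside the JavaScript output, --dump-info writes an info file that the visualizer can load:

    dart2js --minify --dump-info -o web/main.dart.js web/main.dart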
Even when you don't import anything you will get some minimal output size. Dart provides a lot of features like classes with inheritance and mixins (and a lot more) and dart2js output contains code that implements these features.
This is similar to pulling in a JS library like jQuery.
Therefore even main() {} will already result in an output size of several dozen KB, while adding another line of code will probably only add a few additional bytes.
pub build does tree-shaking and minification by default, so no additional options are required.
Moving to IntelliJ, I'm trying to properly understand the logic behind its project structure. I come from Eclipse. After reading for a while I understood the relation between workspace and project, and then between project and modules. However, something that is puzzling me is the logic of the default project configuration in IntelliJ. When you create a project, there is an initial module which, to a certain extent, is equivalent to the project itself. To be more precise, the initial module folder is the project folder. This is kind of confusing to me. Then, when you add more modules, they are sub-modules of that module.
My first question is: what is the rationale for making this first module equivalent to the project folder?
Following on from this, what is the point of having modules as sub-modules of others?
In Eclipse I used to simply have different projects (i.e. modules), independent of each other, adding dependencies as necessary. So how does the IDEA approach make this better, and if it doesn't, what is the rationale here?
I saw that one can start with an empty project and then add modules to it. However, in that case the added modules are subfolders of the project, and there is no initial module equivalent to the project folder. So why this difference, and what is the rationale behind it?
Which would be the better approach, the first or the second?
Would it be OK to have this first initial module with no src or test folder, but just with the proper facets, so that they propagate to the sub-modules?
I would appreciate it if someone could explain the rationale behind all of this.
I will move to SBT soon (i.e. the Maven structure, which I suppose inspired all modern IDE project structures); if someone wants to explain within that context, fine, but I want to understand the rationale in IntelliJ first.
Many thanks,
-M-
PS: What I'm looking for is some advice on multi-module project structure in IntelliJ, as I'm moving my Eclipse workspaces to it.
I think it's not uncommon for projects to be relatively small, so they don't need fancy modules with dependency management, etc. In that case, I find that the default project created by IntelliJ fits my needs perfectly: no need to add submodules, everything is directly in the parent project, and the structure is reduced to its bare minimum.
On the other hand, big projects with submodules will likely resemble the structure of a Maven multi-module project (perhaps SBT too, but I don't know that tool at all). You have a parent root which acts as a container for submodules. The parent project may also store configuration (a default SDK, a language level, etc.) that is inherited by the submodules. The actual code is contained in the submodules.
Regarding your questions, it all depends on the kind of project you are developing. For a small codebase, you can keep a simple project with no submodules. For bigger codebases, you can either create modules manually or import an existing Maven/SBT/whatever project, which will automatically create modules reflecting the imported structure.
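As a rough sketch (module names are made up), such a multi-module layout typically looks like this, with the root acting mostly as a container and the code living in the submodules:

    my-app/                  project root, holds shared settings (SDK, language level)
        .idea/               IntelliJ project configuration
        core/                submodule with its own sources
            src/
            core.iml
        web/                 submodule depending on core
            src/
            web.iml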
I just created a project, and I noticed immediately that I can't seem to right-click on my src folder and create a logical file group/folder within the project view.
In many IDEs, I can arbitrarily organize source files into groups/folders. These groupings help me organize my business process and data access layers appropriately.
How do I do the same in IntelliJ?
I'm not sure what the exact equivalent of what you're describing is, or whether there even is one.
IntelliJ has a notion of "scopes". Under the Project/navigation view, click the configure dropdown and choose "Edit Scopes...". From this window, you can define a pattern to include certain files from your project.
For instance, all of my DAOs are in packages called my.company.<feature>.persistence. I create a scope called "DAOs" with the pattern src[myProject]:my.company.*.persistence.*. Now when I choose "DAOs" from the Project view dropdown, I see a filtered view of the project. I haven't found a way to show that filtered view alongside other scopes at the same time, however.
These scopes can also be shared, and they can be used to narrow down searches. They are similar in many ways to Eclipse's working sets.
Many scopes are defined implicitly, like Test and Production, Changed Files, VCS changesets, etc.
As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file, and separately I also have an org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is twofold:
Task one above is performed by the builder and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that they both can reach, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening Eclipse, I don't want to have to rebuild the whole project just to re-populate my type store. My best solution so far is to persist this data somewhere and then repopulate the store from it (perhaps when the project is opened). Is this how other incremental compilers typically do it? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code. And this information is really only used for editing assistance (content assist, navigation, etc).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change is one that can affect outside files, e.g. adding or removing a non-private field or method), then B.java will be recompiled. A.java will not be recompiled, since there was no structural change in B.java.
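To make the example concrete, here is an illustrative set of sources (the bodies are made up) in which A depends on B and B depends on C:

    // C.java
    public class C {
        // Adding or removing a non-private field or method here is a structural
        // change, so B.java is recompiled. A.java is left alone as long as B's
        // own non-private members stay the same.
        public int value() { return 1; }
    }

    // B.java
    public class B {
        public int twice() { return new C().value() * 2; }
    }

    // A.java
    public class A {
        public int result() { return new B().twice(); }
    }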
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, then you can make your type store available through your plugin's Bundle activator instance. The e4 project does allow dependency injection, which is probably even better (but it is not really a part of the core Eclipse APIs).
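A minimal sketch of the activator approach (MyDslPlugin and TypeStore are hypothetical names; TypeStore stands for your in-memory type store): both the builder and the editor can then reach the store via MyDslPlugin.getDefault().getTypeStore().

    import org.eclipse.core.runtime.Plugin;
    import org.osgi.framework.BundleContext;

    public class MyDslPlugin extends Plugin {

        private static MyDslPlugin instance; // set when the framework starts the bundle
        private TypeStore typeStore;         // shared in-memory type store

        @Override
        public void start(BundleContext context) throws Exception {
            super.start(context);
            instance = this;
            typeStore = new TypeStore();
        }

        @Override
        public void stop(BundleContext context) throws Exception {
            instance = null;
            super.stop(context);
        }

        public static MyDslPlugin getDefault() {
            return instance;
        }

        public TypeStore getTypeStore() {
            return typeStore;
        }
    }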
I think persisting the information on the file system is your best bet. The only real way to determine incremental compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it. The information is stored in your workspace's .metadata directory, somewhere under the org.eclipse.core.resources plugin. You can have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
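A rough sketch of that persistence (again with the hypothetical TypeStore, assumed to implement java.io.Serializable): save the store to the plugin's state location, which you can obtain from Plugin.getStateLocation(), and reload it on startup so a full rebuild is not needed.

    import java.io.*;
    import org.eclipse.core.runtime.IPath;

    public class TypeStorePersistence {

        private static final String FILE_NAME = "typestore.dat";

        public static void save(TypeStore store, IPath stateLocation) throws IOException {
            File file = stateLocation.append(FILE_NAME).toFile();
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
                out.writeObject(store);
            }
        }

        public static TypeStore load(IPath stateLocation) throws IOException, ClassNotFoundException {
            File file = stateLocation.append(FILE_NAME).toFile();
            if (!file.exists()) {
                return new TypeStore(); // no saved state yet: start empty and do a full build
            }
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
                return (TypeStore) in.readObject();
            }
        }
    }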
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.
I'm in the middle of building an application but found myself creating new packages too easily, without keeping the project's structure in mind.
Now I'm trying to redo the whole project structure on paper first. I am using a Settings class with public properties, accessed as settings by several other classes around the project.
Since this Settings class applies to the whole project, I am unsure whether it should be packaged at all and, if so, in what kind of package it should live. Or should it be in the root (the default package) with the main application class?
I've been thinking about putting it in my utils package, but then again I don't think it really is a utility. Are there any strategies for deciding on such a package structure, for example for a Settings class?
Use of the default package is discouraged anyway (in Java it is actually flagged with a warning, as far as I know), even for the class containing main.
Other than that, I prefer having a config package, even if Settings is the only class in there. I don't think it would fit in the utils package.
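A minimal sketch of that layout (package and property names are only illustrative): the Settings class lives in its own config package and is referenced from the rest of the application.

    package org.example.myapp.config;

    public class Settings {

        // public properties read as settings by other classes in the project
        public String databaseUrl = "jdbc:h2:mem:dev";
        public int maxConnections = 10;
        public boolean verboseLogging = false;
    }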
IMHO you should put it into a separate, low-level package, since many other classes depend on it but it (presumably) doesn't depend on anything. So it should definitely not be put in the same package as the main application class. It could be in the utils package, though, or in a separate package on the same level (e.g. config).
By "low level" I simply mean "low in the package dependency hierarchy", where a package A that depends on a package B is higher than B. So it does not directly relate to the actual package hierarchy. The point is to avoid dependency cycles between your packages, so that such an ordering between your packages exists.
By the way, you should not use the root (default) package in a real application.
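To illustrate the dependency direction (class and package names are again made up, reusing the Settings sketch above): a higher-level package depends on the low-level config package, while config itself depends on nothing.

    package org.example.myapp.service;

    import org.example.myapp.config.Settings;

    public class DatabaseService {

        private final Settings settings;

        public DatabaseService(Settings settings) {
            this.settings = settings;
        }

        public String describe() {
            return "connecting to " + settings.databaseUrl
                    + " with up to " + settings.maxConnections + " connections";
        }
    }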