In a Maven project, what are reasons for either a nested or a flat directory layout?

As my Maven project grows, I'm trying to stay on top of the project structure. So far, I have a nested directory layout with 2-3 levels, where there's a POM at each level with module entries corresponding to the directories at that level. POM inheritance (the parent element) does not necessarily follow this structure, and is not relevant for the purpose of this question.
Now, while the nested structure seems pretty natural to Maven, and it's nice and clean as long as you are on one particular level, I'm starting to get confused by what I look at in my IDE (Eclipse and IntelliJ IDEA).
I had a look at the Apache Felix sources, and they have a pretty complex project in what seems to be a flat directory structure, so I'm wondering if this would be a better way to go.
What are some pros and cons for either approach that you have experienced in practice?
Note that this question (which I found in the meantime) seems to be very similar. I'll leave it to the community to decide whether this should be closed as a duplicate.

I use a kind of mixed approach. Things with distinct lifecycles (from a release, and thus VCS, point of view) are flat; things with the same lifecycle are nested. And I use svn:externals for the checkout. I wrote about this approach in this previous answer.
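For illustration, such a checkout can be wired up with an svn:externals property on the root directory, roughly like this (module names and URLs are hypothetical):

    # svn propget svn:externals .
    module-a http://svn.example.com/repo/module-a/trunk
    module-b http://svn.example.com/repo/module-b/tags/1.2.0

Each flat sibling then comes from its own location in the repository, so it can be tagged and released independently of the others.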

I vote for nesting. I'm using IDEA 9 which shows the nesting in the project pane, so the presentation mirrors your logical project structure. (This wasn't the case in 8.1 - it was flattened out.)
I prefer keeping things nested, especially if the names are very similar - it makes navigation much easier when using a command prompt. I have a project with names like myapp-layer-component, so they all start with the same prefix, and many share the same -layer-, so using autocomplete on the command line is next to useless. Separating these out into a nested structure makes things much easier, because each part of the name (appname, layer or component) is repeated just once at each level of the directory structure.
If building from the command line, it's much easier to build a subset of the project. For example, if I'm working on the db model, I often need to build all projects in that area. This is tricky to do when the files are flattened out - the only way I know is to use the -pl argument to Maven and specify the projects to build. With the nested directories, I just cd to the db directory and run mvn.
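For the record, the two invocations being compared look roughly like this (module names follow the nested example below; -pl takes a comma-separated list of modules):

    # flat layout: enumerate the modules by hand from the root
    mvn -pl myapp-db-model,myapp-db-hibernate install

    # nested layout: just build the subtree
    cd db && mvn install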
For example, instead of
myapp-web-gui1
myapp-web-gui2
myapp-web-base
myapp-svc-clustered
myapp-svc-clustered-integrationtest
myapp-svc-simple
myapp-db-model
myapp-db-hibernate
We have the structure
\myapp
    \web
        \gui1
            pom.xml
        \gui2
            pom.xml (other poms omitted to keep it short)
        \base
    \svc
        \clustered
        \clustered-it
        \simple
    \db
        \model
        \hibernate
You could also add nesting for the integration tests, but this seems like driving the point too far.
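To make this concrete, the aggregator POM at the db level is just a pom-packaged project listing its child directories as modules - a sketch, with a hypothetical groupId and version:

    <!-- db/pom.xml -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example.myapp</groupId>
      <artifactId>myapp-db</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>pom</packaging>
      <modules>
        <module>model</module>
        <module>hibernate</module>
      </modules>
    </project>

Running mvn in db then builds exactly model and hibernate, which is the subset-build convenience mentioned above.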
With nesting, you also get all the benefits of inheritance (and some of its pains...)
The only issue I've had with this is that the directory name doesn't match the artifactId. (I'm still using full artifactIds.) So each project must explicitly define its SCM paths, since these can no longer be inferred from the parent POM. Of course, each directory could be named the same as its artifactId, and then the SCM details could be inferred from the parent, but I found the long directory names a bit unwieldy.
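What that per-module override looks like, roughly (a sketch; the repository URL is hypothetical):

    <!-- in db/model/pom.xml, whose artifactId is myapp-db-model -->
    <scm>
      <connection>scm:svn:http://svn.example.com/myapp/trunk/db/model</connection>
      <developerConnection>scm:svn:https://svn.example.com/myapp/trunk/db/model</developerConnection>
    </scm>

Without this, Maven derives the child URL by appending the child's artifactId to the parent's SCM URL, which here would point at .../db/myapp-db-model instead of the actual .../db/model.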

Related

How to simply show *.jar in IntelliJ IDEA?

IntelliJ IDEA shows the External Libraries with group, version and jar name, which seems too long. How can it simply show aopalliance-1.0.jar at the top level, like Eclipse does?
That's currently not possible in IDEA.
Bear in mind that some libraries can contain multiple jars (for example expand the "<1.8>" library).
Also the dependency is defined by the group/artifact/version strings, so it makes sense to show them on the top level node.
However I agree that having exactly one node under the top level node for almost every library in the project is unnecessary and not good for usability.
So maybe the two nodes could be collapsed into a single node that shows both the jar name and the group/artifact/version string - with one of them probably grayed out a little.
I suggest you create a feature request at JetBrains issue tracker: https://youtrack.jetbrains.com/issues/IDEA

Is it possible to have two project views in IntelliJ IDEA?

I am working on a big project in IDEA with many modules. I would like to compare files and directories in the project view. Scrolling each time just to select the files for comparison is tedious. I am not using "scroll from source", so what I want is two views of the project, each scrolled to different files.
Is there a way to achieve that? Or any other alternative?
The closest you can get is the Favorites view, where you can drop individual files into a list and then compare them without having to scroll around the project view. Unfortunately, that is only of any use if you are comparing the same sets of files each time.
Unfortunately, there is no out-of-the-box way to do that. IDEA can run multiple instances, each with a different project, but switching is kind of painful.
One workaround is to import multiple Maven projects as modules of one project, as described in this question.
There is an issue for that feature in the JetBrains issue tracker; the conversation history there is interesting to read. Currently it seems JetBrains do not plan to implement this feature at any point in the future.
...it makes no sense in IDEA. Unlike other platform-based IDEs, IDEA supports multi-module projects, and all the contents displayed in a single frame are modules of a single project. Introducing an extra level of hierarchy above that would be unnecessary and extremely confusing.
...
We don't have any plans to provide any other solution for this. The 1:1 correspondence between projects and frames is essential to the internal design of IntelliJ IDEA: by definition, a project is the set of code opened in a single frame. There is no way to change this without rewriting the whole IDE, which we don't plan to do.
There was such a view in IntelliJ, called Commander. As of recent versions it is no longer shipped with IntelliJ, but you can install it as a plugin.
I think it will be helpful in your case.

IntelliJ multi-project

Moving to IntelliJ, I'm trying to properly understand the logic behind its project structure. I come from Eclipse. After reading for a while, I understood the relation between workspace and project, and then between project and modules. However, something that puzzles me is the logic of the default project configuration in IntelliJ. Indeed, when you create a project, there is an initial module which, to a certain extent, is equivalent to the project itself. To be more precise, the initial module folder is the project folder. This is kind of confusing to me. Then, when you add more modules, they are sub-modules of that module.
My first question is: what is the rationale for making this first module equivalent to the project folder?
Following on from this, what is the point of having modules as sub-modules of other modules?
In Eclipse I simply used to have different projects (i.e. modules), independent from each other, adding dependencies as necessary. So how does the IDEA solution make this better - or if it doesn't, what is the rationale here?
I saw that one can start with an empty project and then add modules to it. In that case, however, the added modules are subfolders of the project, and there is no initial module equivalent to the project folder. So why this difference, and what is the rationale behind it?
Which would be the better approach, the first or the second?
Would it be OK to have this first initial module with no src or test folder, but just with the proper facets, so as to propagate them to the sub-modules?
I would appreciate it if someone could explain the rationale behind all of this.
I will move to SBT soon (i.e. the Maven structure, which I suppose inspired the project structure of all modern IDEs); if someone wants to explain within that context, fine, but I want to understand the rationale in IntelliJ first.
PS: What I'm looking for is advice on multi-module project structure in IntelliJ, as I'm moving my Eclipse workspaces to it.
I think it's not uncommon for projects to be relatively small, so they don't need fancy modules with dependency management etc. In that case, I find the default project created by IntelliJ fits my needs perfectly: no need to add submodules, everything is directly in the parent project, and the structure is reduced to its bare minimum.
On the other hand, big projects with submodules will likely resemble the structure of a Maven multimodule project (perhaps SBT too, but I don't know this tool at all). You have a parent root which acts as a container for submodules. The parent project may also store configuration (a default SDK, a language level etc. that will be inherited by the submodules). The actual code will be contained in the submodules.
Regarding your questions, it all depends on the kind of project you are developing. For a small codebase, you could keep a simple project with no submodule. For bigger codebases, you can either create modules manually, or import an existing Maven/SBT/whatever project, which will automatically create modules reflecting the imported structure.
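In Maven terms, the "parent as container plus inherited configuration" shape described above looks roughly like this (a sketch; names and the language level are hypothetical):

    <!-- parent pom.xml: no code of its own, just modules and shared settings -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>parent</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>pom</packaging>
      <modules>
        <module>core</module>
        <module>web</module>
      </modules>
      <properties>
        <!-- inherited by every submodule, e.g. a project-wide language level -->
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
      </properties>
    </project>

On import, IDEA maps each module entry to an IDEA module and the shared settings to project-level configuration.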

CMake: Best method for "subprojecting" files

I'm learning/vetting CMake at the moment as I'm thinking of migrating our code to it. One thing we do a lot of with our current make system is to "subproject" common code files. For example, we have a lot of shared, generic headers (plus some c/cpp files) which get included in every project we create. I want to replicate this in CMake but I don't see an easy way of doing it. To be precise, I want to do something like:
Parent CMakeLists.txt:

    add_subdirectory(shared_folder shared_build_folder)
    # Next line should somehow add in the files referenced in the shared_folder
    add_executable([specific files for this project] build_folder)

Child CMakeLists.txt (shared_folder):

    # Somehow have a list of files here that get added to the parent project
So far I've found various "ways" of doing this, but all seem a little hacky. I'm coming to the conclusion that this is in fact the way I have to do things, and that CMake isn't really geared towards this style of development. For clarity, most of my solutions involve creating a variable at the parent level which holds a list of files. This variable (via some shenanigans) can get "passed" to/from any children and filled in, and then when I call add_executable I use that variable to add the files.
All my solutions involve quite a few macros/functions and seemingly quite a bit of overhead. Is this something other people have tried? Any clues on the best approach for doing this?
We were facing the exact same problem, and after some time of crying we accepted the CMake way; it resulted in a better-structured project, even if it meant changing some parts of our structure.
When using sub-directories, targets are automatically visible throughout the whole project (even in other, subsequently processed add_subdirectory calls) once their add_subdirectory statement has been processed: sub-projects which contain common code simply create libraries.
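A minimal sketch of that pattern, assuming the shared folder compiles to a static library (target and file names are hypothetical; target_include_directories needs CMake 2.8.11 or newer):

    # shared_folder/CMakeLists.txt
    add_library(shared_code STATIC generic.cpp helpers.cpp)
    # anything linking against shared_code also sees its headers
    target_include_directories(shared_code PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)

    # parent CMakeLists.txt
    add_subdirectory(shared_folder shared_build_folder)
    add_executable(myapp main.cpp)
    target_link_libraries(myapp shared_code)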
There is also PARENT_SCOPE, which you can use to export variables to the parent CMakeLists.txt.
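For example, a child can push a source list up to its parent like this (the variable name is made up):

    # child CMakeLists.txt: make SHARED_SOURCES visible one level up
    set(SHARED_SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/generic.cpp ${CMAKE_CURRENT_SOURCE_DIR}/helpers.cpp PARENT_SCOPE)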
For "other" things we simulated the FindPackage-mechanism by including .cmake-files into the main CMakeLists.txt with include. In doing so we can provide variables easily, change the include_directories and do other fancy things global to the project.
As there are no dependency relations between CMake variables, we don't use CMake to configure the source (features of the project), but only the build (compiler, includes, libraries...). This split was the key element of our build-system refactoring.

What's the best approach to incremental compilation when building a DSL using Eclipse?

As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file, and separately I also have an org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is twofold:
Task one above is performed by the builder, and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that they can both access, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening Eclipse, I don't want to have to rebuild the whole project just to re-populate my type store. My best solution so far is to persist this data somewhere and then repopulate my store from it (perhaps upon project open). Is this how other incremental compilers typically do it? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code. And this information is really only used for editing assistance (content assist, navigation, etc).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change being one that can affect other files, e.g. adding/removing a non-private field or method), then B.java will be recompiled. A.java will not be recompiled, since there was no structural change in B.java.
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, you can make your type store available through your plugin's bundle activator instance (see the sketch at the end of this answer). The e4 project does allow dependency injection, which is probably even better (but is not really a part of the core Eclipse APIs).
I think persisting the information on the file system is your best bet. The only real way to determine incremental compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it. The information is stored in your workspace's .metadata directory, somewhere in the org.eclipse.core.resources plugin. You can have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.
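To make the first point concrete, here is a sketch of exposing the type store through the bundle activator; everything except the org.eclipse.core.runtime.Plugin base class is hypothetical (in particular the TypeStore class and its load/save methods), and the persistence part only hints at one possible approach:

    import java.io.File;
    import org.eclipse.core.runtime.Plugin;
    import org.osgi.framework.BundleContext;

    public class MyDslPlugin extends Plugin {

        private static MyDslPlugin instance;

        // Hypothetical class holding the compiled type information.
        private TypeStore typeStore;

        @Override
        public void start(BundleContext context) throws Exception {
            super.start(context);
            instance = this;
            // getStateLocation() is this plug-in's private workspace area
            // (inside .metadata), a natural home for the persisted store.
            File cache = getStateLocation().append("typestore.dat").toFile();
            typeStore = TypeStore.loadFrom(cache); // or start empty if absent
        }

        @Override
        public void stop(BundleContext context) throws Exception {
            typeStore.saveTo(getStateLocation().append("typestore.dat").toFile());
            instance = null;
            super.stop(context);
        }

        public static MyDslPlugin getDefault() {
            return instance;
        }

        // Both the builder and the editor call
        // MyDslPlugin.getDefault().getTypeStore() to reach the same instance.
        public TypeStore getTypeStore() {
            return typeStore;
        }
    }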