IntelliJ Module Dependency Export Option - intellij-idea

In IntelliJ, you can add a module dependency under Project Structure.
There is an Export checkbox on the Dependencies tab, as shown below.
I tried selecting the checkbox for the log4j dependency and recompiling, but nothing was added to the output path, as shown below.
Can anyone tell me what the Export checkbox is for? What is the expected behavior when it is selected?
Remark:
In the official documentation, it says:
The Export option lets you control the compilation classpath for the modules that depend on this one: the marked items will be included in the compilation classpath of the dependent module.
But I don't understand what that means. Thanks a lot.

Sometimes you need a dependency to "leak" into dependent modules. For example, you have module C, which is a dependency of module B. If B is a library whose API methods expose some structures from module C, then without Export checked, anyone using that API from, say, module A will run into access problems with those structures, because C's classes will not be added to module A's compile classpath.
A --- using this API requires C in compile classpath
|
B --- API uses these structures
|
C --- data structures (should be exported when enumerated in B)
And sometimes you don't want dependencies leaking into the compile classpath, in which case you leave this option unchecked.
If you don't know what the compile classpath is, read this: https://dzone.com/articles/runtime-classpath-vs-compile
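For illustration, here is a minimal Java sketch of the A/B/C scenario above (module, package, and class names are all made up). The three snippets live in three separate modules, and A only declares a dependency on B in the IDE:

// Module C: a plain data structure.
package org.example.c;

public class Widget {
    public final String name;
    public Widget(String name) { this.name = name; }
}

// Module B: library whose public API returns a type from C.
package org.example.b;

import org.example.c.Widget;

public class WidgetService {
    // C leaks into B's API through the return type.
    public Widget load(String name) {
        return new Widget(name);
    }
}

// Module A: depends only on B.
package org.example.a;

import org.example.b.WidgetService;
import org.example.c.Widget; // resolves only if B exports C (or A declares C itself)

public class App {
    public static void main(String[] args) {
        Widget w = new WidgetService().load("demo");
        System.out.println(w.name);
    }
}

With Export checked on C in B's dependencies, A's compile classpath gets C automatically; with it unchecked, the import of Widget in A fails to compile unless A adds C as its own dependency.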

Related

CMake: How to tell where transitive dependency is coming from?

I'm in the process of rewriting a legacy CMake setup to use modern features like automatic dependency propagation. (i.e. using things like target_include_directories(<target> PUBLIC <dir>) instead of include_directories(<dir>).) Currently, we manually handle all project dependency information by setting a bunch of global directory properties.
In my testing, I've found a few examples where a target in the new build will link to a library that the old build would not. I'm not linking to it explicitly, so I know this is coming from the target's dependencies, but in order to find which one(s) I have to recursively look through all of the project's CMakeLists.txts, following up the dependency hierarchy until I find one that pulls in the library in question. We have dozens of libraries so this is not a trivial process.
Does CMake provide any way to see, for each target, which of its dependencies were added explicitly, and which ones were propagated through transitive dependencies?
It looks like the --graphviz output does show this distinction, so clearly CMake knows the context internally. However, I'd like to write a tree-like script to show dependency information on the command line, and parsing Graphviz files sounds like both a nightmare and a hack.
As far as I can tell, cmake-file-api does not include this information. I thought the codemodel/target/dependencies field might work, but it lists both local and transitive dependencies mixed together. And the backtrace field of each dependency only ties back to the add_executable/add_library call for the current target.
You can parse the dot file generated by the --graphviz option and extract the details you want. Below is a sample Python script to do that (it uses the pydot package).
import sys

import pydot

# pydot.graph_from_dot_file returns a list of graphs.
graphs = pydot.graph_from_dot_file(sys.argv[1])

result = {}
for g in graphs:
    # Every labelled node is a target in the dependency graph.
    for node in g.get_node_list():
        if node.get("label") is not None:
            result[node.get("label")] = []
    # Every edge is a dependency from the source target to the destination target.
    for edge in g.get_edges():
        src = g.get_node(edge.get_source())[0].get("label")
        dst = g.get_node(edge.get_destination())[0].get("label")
        result[src].append(dst)

for target in result:
    print(target + ":" + ",".join(result[target]))
You can also add this script as a custom target in CMake, so you can call it from your build system; a sketch of such a target is shown below. You can find a sample CMake project here
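A minimal sketch of that custom target, assuming the script is saved as scripts/print_deps.py in the source tree (the target name, paths, and the Python3 requirement are assumptions):

find_package(Python3 COMPONENTS Interpreter REQUIRED)

add_custom_target(print-deps
    # Re-run CMake with --graphviz to produce deps.dot in the build directory,
    # then feed it to the script above.
    COMMAND ${CMAKE_COMMAND} --graphviz=deps.dot ${CMAKE_BINARY_DIR}
    COMMAND ${Python3_EXECUTABLE} ${CMAKE_SOURCE_DIR}/scripts/print_deps.py deps.dot
    WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
    COMMENT "Printing per-target dependencies"
)

After configuring, run it with: cmake --build . --target print-deps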

Duplicate symbols (two projects in a workspace use the same code)

A is a module project. There are some test targets and the relevant reusable code is compiled in a separate (static library) target. A uses the third party Lumberjack logging library. The Lumberjack code was simply dropped into the project.
B is a different module project, but otherwise it has the same properties as A.
C is the main project. It depends on A and B. It links the libraries of A and B.
Compiling C will result in duplicate Lumberjack symbols.
How can I have multiple separate module projects so that...
they don't know of each other,
use the same third party code,
can be compiled and tested on their own,
can be included in a main project without duplicate symbol issues?
So, to elaborate on sergio's answer, I was able to successfully build a test setup as follows.
I included the Lumberjack code in a separate project that builds Lumberjack as a static library.
I created a new project ProjectA with a static library target ModuleA and a test app target DemoA. I copied the Lumberjack project folder into the project folder of ProjectA and then added it as a subproject. I didn't make ModuleA dependent on Lumberjack or link Lumberjack in ModuleA. Instead, I made DemoA dependent on both and link both libraries. This way, I am able to compile the test target, but the library target doesn't include Lumberjack.
I created a second project ProjectB with the analogue setup as ProjectA.
In the main project, I included ProjectA, ProjectB and Lumberjack as subprojects. Unfortunately, this means Lumberjack is included three times in the main project, which is a little inconvenient and ugly (for instance, when selecting dependent targets, you can't really tell which one is which).
Finally, I made the main project's target dependent on Lumberjack, ModuleA and ModuleB and link all three libraries. Now, the main project can compile without duplicate symbol error and the submodules can also be compiled and tested on their own.
Since you are targeting OSX, the solution to your issue is building Lumberjack as a framework (as opposed to linking the source code into your A and B modules) and then using that framework wherever it is required (i.e., in any project using the A or B modules).
Indeed, Lumberjack already includes a project that will build a Lumberjack.framework, check this: CocoaLumberjack/Xcode/LumberjackFramework/Desktop/Lumberjack.xcodeproj.
Elaborating more on this, you would define your A and B modules as you are doing now, but without dropping Lumberjack source code in it.
What you do instead is, whenever you want to use the A static library in an executable (say, your test target), you add the library to the target and also the Lumberjack framework (exactly as you do with OSX SDK frameworks).
Adding the dynamic framework is just a different way to "drop the sources", if you want, but done properly.
When you want to use both A and B in a C project, you add both static libraries and your Lumberjack framework to C.
As you can see, this approach will comply with all four of your requirements, at the expense of introducing one dependency: you need to make clear in your static libraries' documentation that they depend on the Lumberjack framework. This is actually not a big issue, since the latter is available in its own project and anyone will be able to build it on their own.
If you want to improve the handling of this dependency, CocoaPods are the way to go (a podspec is a file associated with your library that describes its dependencies, so when someone installs your library, the CocoaPods system will automatically install the dependencies too). But this is highly optional. One single dependency is not a big issue to document or comply with.
Hope this answers your question.
I hate to reference an existing answer but here's one solution that's cumbersome but works: What is the best way to solve an Objective-C namespace collision?
I have this same problem and I'm working on a better solution though. Another idea that might work but I'm not yet sure how to implement it I asked here: Selectively loading classes in Objective-C
A third idea I had because of something someone said on my question was to wrap one of the libraries in a framework and create functions that reference the functions you need. Then load using something like #import <myFramework/MFMyAliases.h>
Have you tried looking at the libraries with ar? If you are very lucky, running for example
ar -t libA.a
gives you a list of files like
__.SYMDEF SORTED
Afile1.o
Afile2.o
Lumberjack1.o
Lumberjack2.o
Afile3.o
SomeOtherLibrary.o
where the Lumberjack files are clearly separable from the rest. Then you can kick them out with
ar -d libA.a Lumberjack1.o Lumberjack2.o
and link C against this trimmed library while using the full library when testing A alone.
I was trying to achieve the same thing a few months ago, and the article "Easy, Modular Code Sharing Across iPhone Apps: Static Libraries and Cross-Project References" had everything I needed. Please check it out and see if it's useful.
Are A and B binaries?
If not you could simply uncheck the compile checkbox for all *.m files of one of the projects, so as to avoid building duplicate objects.
Also, if you could use A and B through CocoaPods, that would be best.
Try this.
It shows how to share libraries/modules between different projects.

How does modular code work in Go?

Not having come from a C/compiled languages background, I'm finding it hard to get to grips with using Go's packages mechanism to create modular code.
In Python, to import a module and get access to its functions and whatnot, it's a simple case of
import foo
where foo.py is the name of the module you want to import in the same directory. Otherwise you can add an empty __init__.py into a subfolder and access the modules via
from subfolder import foo
You can then access functions by simply referencing them through the module name, e.g. y = foo.bar(y). This makes it easy to separate logical pieces of code from one another.
In Go however, you specify the package name in the source file itself, e.g.
package foo
at the top of the 'foo' module, which you can then supposedly import through
import (
"foo"
)
and then refer to it through that, i.e. y := foo.Bar(x) . But what I can't wrap my head around is how this works in practice. The relevant docs on golang.org seem terse, and directed to people with more (any) experience using makefiles and compilers.
Can someone please clearly explain how you are meant to modularise your code in Go, the right project structure to do so, and how the compilation process works?
Wiki answer, please feel free to add/edit.
Modularization
Multiple files in the same package
This is just what it sounds like. A bunch of files in the same directory that all start with the same package <name> directive means that they are treated as one big set of code by Go. You can transparently call functions in a.go from b.go. This is mostly for the benefit of code organization.
A fictional example: a "blog" package might be laid out with blog.go (the main file), entry.go, and server.go. It's up to you. While you could write a blog package in one big file, that tends to hurt readability.
Multiple packages
The standard library is done this way. Basically you create modules and optionally install them into $GOROOT. Any program you write can import "<name>" and then call <name>.someFunction()
In practice, any standalone or shared components should be compiled into packages. Back to the blog package above: if you wanted to add a news feed, you could refactor server.go into a package. Then both blog.go and news.go would import "server". A minimal sketch of this kind of layout is shown below.
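For illustration, a sketch of the multiple-files-in-one-package idea (file names, contents, and the import path are all made up; the exact import path depends on how your workspace or module is set up):

// blog/entry.go -- one file of the "blog" package
package blog

type Entry struct {
    Title, Body string
}

// blog/blog.go -- another file of the same package; it can use Entry directly
package blog

func Titles(entries []Entry) []string {
    titles := make([]string, 0, len(entries))
    for _, e := range entries {
        titles = append(titles, e.Title)
    }
    return titles
}

// main.go -- a separate "main" package that imports blog
package main

import (
    "fmt"

    "example.com/myapp/blog" // hypothetical import path for the blog package
)

func main() {
    entries := []blog.Entry{{Title: "Hello", Body: "First post"}}
    fmt.Println(blog.Titles(entries)) // only exported (capitalised) names are visible
}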
Compilation
I currently use gomake with Makefiles. The Go installation comes with some great include files for make that simplify the creation of a package or a command. It's not hard and the best way to get up to speed with these is to just look at sample makefiles from open source projects and read "How to Write Go Code".
In addition to the package organisation: like pip in Python, you can use dep (https://github.com/golang/dep) for Go package management. If you use it on an existing Go package, it will automatically build the dependency tree with versions for all the packages being used. When shifting to a production server, dep ensure will use Gopkg.toml to install all the required packages.
Just use dep ensure -add. Other dep commands are:
Commands:
init Set up a new Go project, or migrate an existing one
status Report the status of the project's dependencies
ensure Ensure a dependency is safely vendored in the project
version Show the dep version information
check Check if imports, Gopkg.toml, and Gopkg.lock are in sync
Examples:
dep init set up a new project
dep ensure install the project's dependencies
dep ensure -update update the locked versions of all dependencies
dep ensure -add github.com/pkg/errors add a dependency to the project

Apache Ivy Configurations

I'm slowly beginning to understand the importance of module configurations within the Ivy universe. However it is still difficult for me to clearly see how the same chunk of code could have different configurations that have different dependency requirements (the one exception is in the case of test configs that require JUnit on top of the normal dependencies -- I actually understand that 100%!)
For instance, take the following code:
package org.myorg.myprogram.core;

// Import an object from a dependency
import org.someElse.theirJAR.Widget;

public class MyCode
{
    public MyCode()
    {
        if (Widget.SOME_STATIC == 3)
            System.out.println("Fizz");
        else
            System.out.println("Buzz");
    }
}
Now aside from the fact that this is terrible code, I just don't see how my program (which, let's pretend is JARred up into MyProgram.jar) could be set to have multiple "configurations"; some of which may require theirJAR and its Widget class, and others that don't. To me, if we fail to provide MyCode with a Widget it will die at runtime, always.
Again, I understand the necessity for test configurations; just not anything else (I have also asked questions about compile- vs run-time dependencies, and I guess I also see the necessity for those as well). But beyond test configs, compile-time configs, and runtime configs, what other module configurations could you possibly need? How would MyCode need a Widget in some cases, and not in other cases, yet still run perfectly fine without a Widget?
I greatly appreciate any help wrapping my brain around this!
Hibernate is a good example. Hibernate supports multiple cache implementations to act as its level-2 cache. You don't want to transitively depend on all the possible caches, only the one you use.
In general, we use the typical compile, test, runtime set of configurations.
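As an illustration, a minimal ivy.xml sketch of the Hibernate example (organisation, module names, revisions, and configuration names are all made up): the ehcache configuration pulls in the cache implementation only for consumers that ask for it.

<ivy-module version="2.0">
    <info organisation="org.myorg" module="myprogram"/>

    <configurations>
        <conf name="default"/>
        <conf name="ehcache" extends="default"/>
        <conf name="test"    extends="default"/>
    </configurations>

    <dependencies>
        <dependency org="org.hibernate" name="hibernate-core" rev="3.6.10.Final"
                    conf="default->default"/>
        <!-- Only resolved by consumers that ask for the ehcache configuration. -->
        <dependency org="net.sf.ehcache" name="ehcache-core" rev="2.6.11"
                    conf="ehcache->default"/>
        <dependency org="junit" name="junit" rev="4.12"
                    conf="test->default"/>
    </dependencies>
</ivy-module>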
To add to SteveD's answer, remember that dependencies can be more than just .jar files. Some dependencies come with source and javadoc files, release notes, license files, etc. Multiple configurations of the dependency might let you select the subset of files you wish to resolve.
You might also want to use configurations to control the contents of different distributions. For example, you might want to release the jar on its own (the "master" configuration in Maven parlance) and additionally build a tar package containing all runtime dependencies, with (or without) source code.
Another use for configurations is when you target multiple platforms. I often release Groovy scripts packaged to run as standalone jars or as Tomcat web applications.

How to find unnecessary dependencies in a Maven multi-project?

If you are developing a large, evolving multi-module Maven project, it seems inevitable that some dependencies given in the poms are unnecessary, since they are transitively included by other dependencies. For example, this happens if you have a module A that originally includes C. Later you refactor and have A depend on a module B which in turn depends on C. If you are not careful enough, you'll wind up with both B and C in A's dependency list. But of course you do not need to put C into A's pom, since it is included transitively anyway. Is there a tool to find such unnecessary dependencies?
(These dependencies do not actually hurt, but they might obscure your actual module structure and having less stuff in the pom is usually better. :-)
To some extent you can use dependency:analyze, but it's not too helpful. Also check JBoss Tattletale.
Some time ago I started a maven-storyteller-plugin to be able to analyze the poms more deeply, but the project is very far from production/public use. You can use the storyteller:recount goal to analyze the unused/redundant dependencies.
The problem with the whole story is how to determine "unused" things. What is quite possible to analyze is, for instance, class references. But that won't work if you're using reflection, directly or indirectly.
Update November 2014.
I've just moved my old code of the Storyteller plugin to GitHub. I'll refresh it and release to the central so that it's usable for others.
I personally use the pom editor of M2Eclipse to visually view the dependency tree (2D tree). Then I take a look in my deliverable (war, ear) lib directories. Then, still in the M2Eclipse pom dependencies viewer, I go to every 3rd-party dependency and right-click on the one I want to exclude (an exclusion is added automatically in the right dependency).
There are no golden rules, just some basic tips:
a lot of poms are not correct: many 3rd-party libs out there require far too many dependencies in the default compile scope; if everybody carefully crafted their pom, you would not have so many unwanted dependencies.
you need to guess from the names of dependencies what you will have to exclude; the best examples are parsers, transformers, document builders: xalan, xerces and co. Try to remove them and use the internal JDK 1.6 parser; common Apache stuff and log4j are also worth looking at.
also look regularly in the delivered lib directory to check that you do not have duplicate libraries with different versions (the Maven dependency resolver should avoid that)
go bottom up: start with your common modules, then work up to the service layer, trimming dependencies in every module; don't start with the ear/war modules, it will be too difficult
check often that your deliverables still work, either by testing or by comparing an old deliverable with the new one (especially what has disappeared from the WEB-INF/lib directory, using WinMerge/Beyond Compare)
Say you have A -> B and B -> C, and then refactor such that A -> (B, C). If A still compiles directly against classes from C, you very much don't want to simply drop that dependency just because you already receive it transitively.
Think of the case where A -> (B-1.0, C-1.0) and B-1.0 -> C-1.0. Everything is in sync, so to avoid "duplication" you remove C from A's dependencies. Then you upgrade A to use B-2.0 -> C-2.0. You begin to see errors because A wants C-1.0 classes but finds C-2.0 classes. While quickly reconcilable in this scenario, it is far less so when you have lots of dependencies.
You very much want the information in A's pom that says it explicitly expects to find C-1.0 on the classpath, so that you can understand when you have transitive dependency conflicts. Again, Maven will do the job of ensuring that the "closest" version of any particular jar ends up on your classpath. But when things go wrong, you want all the dependency metadata you can get.
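For example, a minimal pom fragment sketching this (groupIds, artifactIds, and versions are made up): A keeps an explicit declaration of C even though B already brings it in transitively, so the version A expects is recorded in A's own pom.

<dependencies>
    <dependency>
        <groupId>org.example</groupId>
        <artifactId>B</artifactId>
        <version>2.0</version>
    </dependency>
    <!-- Declared explicitly, even though B pulls it in transitively,
         so the version A expects is visible in its own pom. -->
    <dependency>
        <groupId>org.example</groupId>
        <artifactId>C</artifactId>
        <version>1.0</version>
    </dependency>
</dependencies>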
On a slightly more practical note, a dependency is unused when you can remove it from your pom and all of your unit/integration/acceptance tests still pass. ;-)