I am trying to do the following:
I have a standard POM defined for all my Maven2 projects.
That POM configures the tools to use and, for PMD, the rulesets to apply.
I have defined a property that names each of these rulesets.
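The relevant part of the standard POM looks roughly like this (simplified; the property name pmd.ruleset is only an illustration, not my actual name):

<properties>
  <pmd.ruleset>pmd-rule-set-3.x-v1-5.xml</pmd.ruleset>
</properties>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <configuration>
        <rulesets>
          <!-- resolved per project, so a child POM can override the property -->
          <ruleset>${pmd.ruleset}</ruleset>
        </rulesets>
      </configuration>
    </plugin>
  </plugins>
</build>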
Up to now this is working. I can define a new project POM, use my standard POM as its parent, and use the rulesets defined there. I am even able to override the property that defines the ruleset with another name.
I have defined it as auth-pmd-rule-set-3.x-v1-5.xml instead of pmd-rule-set-3.x-v1-5.xml (which Maven2 would otherwise choose) and have included the file auth-pmd-rule-set-3.x-v1-5.xml locally in my new project (under src/main/resources). But Maven does not find it. The error messages look like this:
[DEBUG] Preparing ruleset: auth-pmd-rule-set-3.x-v1-5.xml
[DEBUG] Before: auth-pmd-rule-set-3.x-v1-5.xml After: auth-pmd-rule-set-3.x-v1-5.xml
[DEBUG] The resource 'auth-pmd-rule-set-3.x-v1-5.xml' was not found with resourceLoader org.codehaus.plexus.resource.loader.FileResourceLoader.
[DEBUG] The resource 'auth-pmd-rule-set-3.x-v1-5.xml' was not found with resourceLoader org.codehaus.plexus.resource.loader.JarResourceLoader.
[DEBUG] The resource 'auth-pmd-rule-set-3.x-v1-5.xml' was not found with resourceLoader org.codehaus.plexus.resource.loader.ThreadContextClasspathResourceLoader.
[DEBUG] URLResourceLoader: Exception when looking for 'auth-pmd-rule-set-3.x-v1-5.xml' at ''
java.net.MalformedURLException: no protocol: auth-pmd-rule-set-3.x-v1-5.xml
Is there any technique available to achieve what I want? I want to redefine the ruleset PMD should use without repeating the entire plugin configuration.
Based on the error message, it looks like you may not have specified the full path to your custom ruleset auth-pmd-rule-set-3.x-v1-5.xml in your POM. As per the docs:
The rule sets may reside in the classpath, filesystem or at a URL. For rule sets that are bundled with the PMD tool, you do not need to specify the absolute path of the file. It will be resolved by the plugin. But if the rule set is a custom rule set, you need to specify its absolute path.
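For example, something along these lines should make the location explicit (assuming the file really lives under src/main/resources of the consuming project):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <configuration>
    <rulesets>
      <!-- full path to the custom ruleset, resolved per project -->
      <ruleset>${basedir}/src/main/resources/auth-pmd-rule-set-3.x-v1-5.xml</ruleset>
    </rulesets>
  </configuration>
</plugin>

If the parent POM drives the ruleset through a property, setting that property to the full path in the child POM should have the same effect.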
I'd like to include a resource file (e.g. an XML config file) in my bundle and make it visible to all other bundles in the container. Is it possible without using the Fragment-Host manifest header? I'd like this resource file to always be visible in the classpath of all bundles running alongside my bundle, even those that do not exist yet but may be added in the future.
EDIT:
To clarify - that resource must be available passively, i.e. the other bundles should be able to find it in their classpath, not by referring to any special API or service of my bundle.
Some more background - my environment is a bit messy, but I have no control over it and cannot change its existing bundles. The only way I can modify it is by adding my own bundles. That environment includes several copies of the ch.qos.logback.classic bundle. When logback starts up, it looks for specific XML config files in the classpath. If it doesn't find any of them, its default behaviour is to print everything to stdout at debug level. This environment was previously used to host a GUI application, so it didn't matter that much before, but now I am trying to adapt it so I can use some of its functionality in headless mode. So it becomes important to be able to configure it in such a way that only warnings and errors are printed to the console.
In general, no, you cannot do this. Class-space isolation is at the heart of OSGi, but you want to put a resource in the class loader of one bundle and make it visible to all other bundles. That's not OSGi, that's the global application classpath.
The only thing you can do to add to the internal classpath of a specific bundle is to write a fragment which attaches to that bundle. A fragment can attach to multiple host bundles, but only if those hosts have the same symbolic name, i.e. because they are different versions of the same bundle. See OSGi R6 Core Specification, section 3.14.
You did, however, state that the bundles you want to attach to are all copies of ch.qos.logback.classic. If that means they all have that exact symbolic name, then perhaps a fragment will work after all.
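If so, a rough sketch of such a fragment built with the maven-bundle-plugin (untested against your environment) could look like this:

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- attach this fragment to the logback bundle by its symbolic name -->
      <Fragment-Host>ch.qos.logback.classic</Fragment-Host>
      <!-- ship the config file so it ends up on the host's internal classpath -->
      <Include-Resource>logback.xml=src/main/resources/logback.xml</Include-Resource>
    </instructions>
  </configuration>
</plugin>

The project's packaging would be bundle; because a fragment shares its host's class loader, logback should then be able to find the logback.xml on its classpath.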
You cannot change the classpath of other bundles this way.
What you can do is retrieve the classloader of your bundle from your BundleContext. You can give this classloader to another bundle so it can load your resource.
// adapt the bundle to BundleWiring (org.osgi.framework.wiring) to obtain its class loader
ClassLoader cl = context.getBundle().adapt(BundleWiring.class).getClassLoader();
Another option is to give the other bundle the URL of the resource.
As long as the resource is on the classpath, any bundle can access the resource if it can get hold of the class loader of the bundle that contains the resource.
For example:
ClassLoader classLoaderOfBundleWithResource = ...
classLoaderOfBundleWithResource.getResourceAsStream("org/example/resource.xml");
From a maintenance and API point of view, I would not recommend exposing a resource that way; Java types are much better suited for this. Instead, let the bundle containing the resource export a class that gives clients access to the contents of the resource.
For example:
import java.io.InputStream;

public class XmlDocumentProvider {
    public InputStream openDocument() {
        // resource.xml is resolved relative to this class's package
        return getClass().getResourceAsStream("resource.xml");
    }
}
Assuming that both the resource.xml and the XmlDocumentProvider reside in the same package, openDocument will return the resource content just like in the first example.
I'm trying out Dagger 2 in IntelliJ 2016.1 (but not with Gradle) on Ubuntu.
IntelliJ creates Dagger's generated sources in either
./out/production/<ProjectModule>/generated/ or
./out/test/<ProjectModule>/generated_tests/ depending on whether they were generated from a source or test directory, respectively.
But from what I can tell, I can only mark those directories as either sources root, test sources root, or generated sources root; there is no option for a generated test sources root, say.
Why is this important? Because the generated test sources depend on my test sources. If they are marked as a generated sources root, then IntelliJ cannot find the dependencies.
Note: I don't think they should be marked as test sources root because then IntelliJ tries to compile them again; unless there is some way of preventing this of which I am unaware.
So is there a way to mark this directory as a generated test sources root or something equivalent?
To mark a directory as a "generated test sources root", open the "Project Structure" dialog at Project Settings > Modules, click the little "P" next to your folder of choice, and select the "For generated resources" button.
Dagger uses annotation processing to generate sources during compilation. IntelliJ has a specific configuration for this feature in Settings -> Build, Execution, Deployment -> Compiler -> Annotation Processors
When it is enabled, IntelliJ automatically adds generated sources to the project.
With annotation processing enabled I can see that generated test sources are marked both as Test Sources Root and Generated Sources Root. But when I try to manually set both flags it does not work - I get flags Sources Root and Generated Sources Root.
For me it looks like a bug.
Here's what worked for me. Create a directory in the module root called generated and under it add two symlinks to <ProjectRoot>/out/production/<ProjectModule>/generated/ and <ProjectRoot>/out/test/<ProjectModule>/generated_tests/. Mark the first as Resource Root and the second as Test Resource Root.
I created the new directory and symlinks because it appears IntelliJ auto-marks <ProjectRoot>/out as Excluded.
I marked the directory as Test Resource Root so that IntelliJ doesn't try to compile the source twice to the same class. (Hint: big complaints from the compiler.)
In the end, no red squiggles and auto-complete works.
Note: I didn't change IntelliJ's generated sources directory for the module. (Well, I did to try another answer, but changed it back.)
I've been going through all the scenarios, digging around the web, and have yet to find an answer to this. Is it possible for Artifactory to map from one repository layout to another? This is my attempt so far...
In our business we currently have an IVY repository to which we deploy built artifacts. One such artifact is stored at the following path, with the following IVY file:
http://someserver:8080/com.abc.common_library/common_library_to/4.0.0.4-1/jar/common_library_to.jar
http://someserver:8080/com.abc.common_library/common_library_to/4.0.0.4-1/ivy/ivy.xml
For the IVY layouts I've configured the following:
[orgPath]/[module]/baseRev/[type]/([orgPath].)module(-[classifier]).[ext]
[orgPath]/[module]/baseRev/[type]/ivy(-[fileItegRev])(-[classifier]).xml
Now we want to expose this within Artifactory for our Maven2 projects to consume. So I configure a new repository, setting the URL, etc., and under advanced settings I set the 'Repository Layout' to maven-2-default and the 'Remote Layout Mapping' to the modified ivy-default. On making these changes I see the following message appear:
Not all tokens can be mapped between the source and the target layout, which may cause path translation not to work as expected.
I test and save the new repository and all appears happy. I can browse the newly configured repository and view its contents, including the above mentioned artifact. I then generate the maven settings from the home screen, ensure that the correct repositories are selected that include the newly configured one, and apply this to Eclipse.
Having done all of this, I now open the pom file within my Eclipse project and create a new dependency. I specify the following configuration:
Group Id: com.abc.common_library
Artifact Id: common_library_to
Version: 4.0.0.4-1
Type: jar
Scope: compile
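which ends up in the POM as the following dependency:

<dependency>
  <groupId>com.abc.common_library</groupId>
  <artifactId>common_library_to</artifactId>
  <version>4.0.0.4-1</version>
  <type>jar</type>
  <scope>compile</scope>
</dependency>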
Eclipse now attempts to resolve the dependency but gives the following error:
Missing artifact com.abc.common_library:common_library_to:jar:4.0.0.4-1:compile
Am I missing something here? This is quite an important step for us to be able to do. Any feedback will be most appreciated.
See Yoav's response here:
http://forums.jfrog.org/Mapping-from-one-repository-type-to-another-does-it-work-td6807726.html
I'm working on a multi-module project. I want to run my plugin after the build for one of the modules. This mojo should only be run directly from the CLI and cannot be attached to a phase, because on some environments we don't want to run this goal.
What is the best approach to configure my plugin? Should it be configured within the parent, or should I configure it within the module?
If I configure it within both the parent and the module, will the module configuration override the parent configuration?
If I configure it only within the parent, will I be able to run it from within the module folder?
At the moment I have it configured only in my-module, and I run it like this from the parent folder:
mvn -pl my-module groupId:artifactId:myGoal
It looks like I have to use the fully qualified name. I guess this is because the parent doesn't know anything about this plugin.
If you want the plugin to be executed once per build, then use the @aggregator annotation on your Mojo. This signals Maven to execute the mojo only once in the build, unless it is explicitly bound to a lifecycle phase. You can find out more on the Mojo API Specification page.
If you want to avoid having to declare the fully-qualified name of the mojo, you can configure the groupId in the pluginGroups section of the settings.xml. You may also be able to specify it in the pluginManagement section of the pom.xml, though I'm not certain that this works for your use case.
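For example, in ~/.m2/settings.xml (the group id below is just a placeholder for your plugin's group id):

<settings>
  <pluginGroups>
    <!-- lets Maven resolve the plugin's goal prefix without the full groupId:artifactId -->
    <pluginGroup>com.yourcompany.plugins</pluginGroup>
  </pluginGroups>
</settings>

With that in place you should be able to run mvn -pl my-module yourprefix:myGoal, where yourprefix is the goal prefix your plugin declares.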
I'm happily using the Maven bundle-plugin to create OSGi manifest headers for my modules. However, when there are configuration files that pull in classes which aren't referenced directly in the code, the plugin can't tell which packages it's going to need.
One example is a bundle with domain models that constitute a Persistence Unit for JPA. The driver class is part of the PU configuration and is either set in an XML file or at runtime when the EntityManager is instantiated. I have to manually add an Import-Package header for the driver class that I want to load, or I get ClassNotFound errors.
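That manual addition looks roughly like this (the driver package below is only an example):

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- add the driver package by hand and keep everything bnd detects via "*" -->
      <Import-Package>org.postgresql, *</Import-Package>
    </instructions>
  </configuration>
</plugin>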
Another example is a Struts war, where the web.xml pulls in the Struts dispatcher that's otherwise not found anywhere in the code and has to be manually added to the headers.
How can I avoid this?
I tried adding the required packages as dependencies with a provided scope, but that didn't help.
In the plug-in section of the bnd configuration you can specify plug-ins that analyze these files and contribute to the Import-Package header. For Spring it looks like this:
<_plugin>aQute.lib.spring.SpringComponent</_plugin>
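In a Maven build that instruction goes into the maven-bundle-plugin's instructions section, roughly like this:

<configuration>
  <instructions>
    <!-- let bnd analyze Spring context files and add the classes they reference to Import-Package -->
    <_plugin>aQute.lib.spring.SpringComponent</_plugin>
  </instructions>
</configuration>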
I am not sure what descriptors are supported beyond Spring. Just take a look at the source (it's in the Apache Felix SVN) and see for yourself. In the worst case you have to write your own plug-in, but at least it is possible! Peter Kriens' site about bnd also explains the usage and some internals.
Other than that, I am not aware of any simple solution.