We have a custom build tool that depends on Ivy to resolve dependencies. The dependency configuration is not an ivy.xml file but a custom format that allows for... well, that part is irrelevant. The key is that we're using Ivy programmatically.
Given a dependency (group id, artifact id, version), we create a ModuleRevisionId:
ModuleRevisionId id = ModuleRevisionId.newInstance(orgName, moduleName, revisionName);
followed by a ModuleDescriptor. This, I'm guessing, is where I'm failing to tell Ivy that I want both the target library's jar and its sources. I'm just not sure what a DependencyConfiguration is vs. just a 'configuration' when creating a ModuleDescriptor.
DefaultModuleDescriptor md =
    new DefaultModuleDescriptor(
        ModuleRevisionId.parse("org#standalone;working"),
        "integration",
        new java.util.Date());
DefaultDependencyDescriptor mainDep =
    new DefaultDependencyDescriptor(id, /* force = */ true);
mainDep.addDependencyConfiguration("compile", "compile");
mainDep.addDependencyConfiguration("compile", "sources");
md.addDependency(mainDep);
md.addConfiguration(new Configuration("compile"));
md.addConfiguration(new Configuration("sources"));
Nor do I really understand the above vs. RetrieveOptions vs. ResolveOptions.
I need a drink.
Ok, so it took a while, but I finally wrapped my head around some of this.
// imports, for reference
import org.apache.ivy.core.module.descriptor.Configuration;
import org.apache.ivy.core.module.descriptor.DefaultDependencyDescriptor;
import org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor;
import org.apache.ivy.core.module.id.ModuleRevisionId;
import org.apache.ivy.core.report.ResolveReport;
import org.apache.ivy.core.resolve.ResolveOptions;

// define 'our' module
DefaultModuleDescriptor md =
    new DefaultModuleDescriptor(
        ModuleRevisionId.parse("org#standalone;working"),
        /* status = */ "integration",
        new java.util.Date());
// add a configuration to our module definition
md.addConfiguration(new Configuration("compile"));
// define a dependency our module has on the (third party, typically) dependee module
DefaultDependencyDescriptor mainDep =
    new DefaultDependencyDescriptor(md, dependeeModuleId, /* force = */ true, false, true);
mainDep.addDependencyConfiguration("compile", "default");
mainDep.addDependencyConfiguration("compile", "sources");
// define which configurations we want to resolve (only have 1 in this case anyway)
ResolveOptions resolveOptions = new ResolveOptions();
String[] confs = new String[] {"compile"};
resolveOptions.setConfs(confs);
resolveOptions.setTransitive(true); // default anyway
resolveOptions.setDownload(true);   // default anyway
ResolveReport report = ivy.resolve(md, resolveOptions);
This pulls down both the default jar and the sources artifact. Note that Ivy has an issue where it won't transitively pull sources, though it will transitively pull 'main' jars, so you only get the sources for the immediate dependency defined here, not for its sub-dependencies.
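(An aside on the ResolveOptions vs. RetrieveOptions confusion from the question: resolve() is the step that downloads artifacts into the Ivy cache and produces the report above, while retrieve() is a separate, optional step that copies the resolved artifacts out of the cache into a directory layout you choose. A rough, untested sketch; the three-argument retrieve() is the older Ivy 2.x signature, and newer releases move the pattern into RetrieveOptions:

// Untested sketch: copy everything resolved for the 'compile' conf out of
// the cache into ./lib; (-[classifier]) keeps source jars distinguishable.
ivy.retrieve(
    md.getModuleRevisionId(),
    "lib/[artifact]-[revision](-[classifier]).[ext]",
    new RetrieveOptions().setConfs(new String[] {"compile"}));
)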
One other weakness I'm still trying to figure out: this assumes the target dependency has a 'sources' configuration. I'd rather tell it to get any artifacts of type sources/source/src. I haven't figured that one out yet, though one possible direction is sketched below.
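A direction that might work, though I haven't verified it: declare the extra artifact you want directly on the dependency descriptor instead of mapping to a 'sources' configuration. A sketch against the Ivy 2.x API; the type and extension values are guesses that would need to match the repository's metadata:

// Unverified sketch: request a source artifact by type instead of via a
// 'sources' configuration. Args: owning descriptor, artifact name, type,
// ext, url (none), extra attributes (none).
DefaultDependencyArtifactDescriptor sourcesArtifact =
    new DefaultDependencyArtifactDescriptor(
        mainDep, dependeeModuleId.getName(), "source", "jar", null, null);
mainDep.addDependencyArtifact("compile", sourcesArtifact);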
Running
val myAvroObject = MyAvroObject.newBuilder()
results in a compilation error:
Cannot access class 'MyAvroObject.Builder'. Check your module classpath for missing or conflicting dependencies
I am able to access other MyAvroObject variables. More precisely, methods such as
val schema = MyAvroObject.getClassSchema()
val decoder = MyAvroObject.getDecoder()
compile fine. What makes it even stranger is that I can access newBuilder() in my test/ folder, but not in my src/ folder.
Why do I get a compile error when using newBuilder()? Is the namespace of the avro-schema used to generate MyAvroObject of importance?
"Check your module classpath" generally means that your dependencies (which you didn't provide) are messed up. One of them should probably read implementation instead of testImplementation, in order to have the method available in the main source set instead of only the test source set, but this may also have to do with the input classes, the output location of generated classes, or annotations like @VisibleForTesting (just see what it even generates). gradlew can also list the dependencies per configuration. The builder seems to be called org.apache.avro.SchemaBuilder; there's only avro-1.11.0.jar & avro-tools-1.11.0.jar. With the "builder" design pattern, .newBuilder() tries to return the inner class Builder.
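For example, to compare what actually ends up on the main and test compile classpaths (standard configuration names from the Java/Kotlin plugins):

./gradlew dependencies --configuration compileClasspath
./gradlew dependencies --configuration testCompileClasspath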
I had the same problem today and was able to solve it by adding the following additional source folder
<sourceDir>${project.basedir}/target/generated-sources/avro</sourceDir>
to the kotlin-maven-plugin.
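For context, here is roughly where that line goes in the POM (a sketch of a typical kotlin-maven-plugin compile execution; the plugin coordinates and surrounding layout are the standard ones, so adjust to your setup):

<plugin>
  <groupId>org.jetbrains.kotlin</groupId>
  <artifactId>kotlin-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>compile</id>
      <goals>
        <goal>compile</goal>
      </goals>
      <configuration>
        <sourceDirs>
          <sourceDir>${project.basedir}/src/main/kotlin</sourceDir>
          <sourceDir>${project.basedir}/target/generated-sources/avro</sourceDir>
        </sourceDirs>
      </configuration>
    </execution>
  </executions>
</plugin>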
I'm trying to include a single source file for the Main-Class of a jar -- actually I have a toplevel directory of such files, demo/, but I don't want them all in a jar. I want separate jars, each using only one of these.
This seems like sort of an anti-pattern in gradle, as the fundamental mechanism infers or prefers that I should instead place each in a distinct sourceSet. Ugh.
A casual reading of the docs implies Jar.from() might be useful this way: "Specifies the source files or directories..."
As it turns out, "source" is perhaps a bit of a misnomer. Here's an example, a typical kotlin fat jar with the added from("demo/LockingBufferDemo.kt"):
val jar by tasks.getting(Jar::class) {
    manifest { attributes["Main-Class"] = "LockingBufferDemoKt" }
    from(sourceSets.main.get().output)
    from("demo/LockingBufferDemo.kt")
    dependsOn(configurations.runtimeClasspath)
    from({
        configurations.runtimeClasspath.get()
            .filter { it.name.endsWith("jar") }
            .map { zipTree(it) }
    })
}
Forgive my naivety: guess what does not end up in the jar? LockingBufferDemo.class. Guess what does? LockingBufferDemo.kt. In other words, the file is treated more like a resource than a source, and what would have been the simplest answer is a dead end.
Another way to approach this would be to add the demo directory as an independent sourceSet and then use from(sourceSets["demo"].get()), except I can't find a way to complete that; according to IntelliJ, get() returns a rather opaque "Provider" which I can't find mentioned in the actual javadoc, and I really feel like I'm heading down the garden path at this point, with the woods rapidly growing darker around me.
This should not be this complicated.
How can I add a single file (or class derived from such) into a jar in gradle without having to put it alone in a directory and create a sourceSet for every such directory?
Regarding your explanations at the start of your post, you should consider creating multiple tasks of type Jar on your own, as every task of type Jar will only create a single JAR file, and you "want separate jars". I do not think you should use different source sets, as all of the files are Kotlin source files in the end and are processed in the same way (compilation, tests, docs, ...). Multiple source sets would complicate this common pipeline.
"Specifies the source files or directories..." As it turns out, "source" is perhaps a bit of a misnomer.
Well, the documentation does not stop there; it continues "for a copy and creates a child CopySpec". So it is not the source as in source code, but the source of a copy operation. In Gradle, tasks that create an archive (ZIP, JAR) share their API with tasks that copy files, as creating an archive can be seen as copying files from their source location to their target location (inside the archive).
So, the from method can be used to specify the files that are copied/archived. It takes not only a sourcePath parameter but also a closure or action for configuration. Using this second parameter, you can narrow your source files or directories down to the one file you need, for example using the method include:
val jar by tasks.getting(Jar::class) {
    manifest { attributes["Main-Class"] = "LockingBufferDemoKt" }
    from(sourceSets.main.get().output) {
        include("**/LockingBufferDemo.class")
    }
    dependsOn(configurations.runtimeClasspath)
    from({
        configurations.runtimeClasspath.get()
            .filter { it.name.endsWith("jar") }
            .map { zipTree(it) }
    })
}
I migrated a 3rd-party tool's build.gradle configs, so it now uses the Android Gradle Plugin 3.5.3 and Gradle 5.4.1.
The build goes smoothly, but when I try to make an .aab archive, things break because the toolchain expects the output .aab file to be named MyApplicationId.aab, whereas the new Gradle defaults to outputting MyApplicationId-release.aab, with a buildType suffix that wasn't there before.
I tried to search for a solution, but documentation about product flavors is mostly about adding a suffix. How do I prevent the default "-release" suffix from being added? There weren't any product flavor blocks in the toolchain's Gradle config files.
I realized that I have to create custom tasks after reading other questions and answers:
How to change the generated filename for App Bundles with Gradle?
Renaming the applicationVariants.outputs' outputFileName does not work, because those outputs are for .apk files only.
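For reference, this is that classic rename (a sketch using the AGP 3.x variant API); it simply never applies to bundle outputs:

android.applicationVariants.all { variant ->
    variant.outputs.all {
        outputFileName = "${android.defaultConfig.applicationId}.apk"
    }
}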
I'm using Gradle 5.4.1, so my Copy task syntax reference is here.
I don't quite understand where the "app.aab" name string came from, so I defined my own aabFile name string to match my toolchain's output.
I don't care about the leftover source file, so I don't delete it with another Delete task.
Also, my toolchain seems to strip unknown variables surrounded by "${}", so I had to work around ${buildDir} and ${flavor} by omitting the braces and using concatenation for proper delimiting.
tasks.whenTaskAdded { task ->
    if (task.name.startsWith("bundle")) { // e.g. bundleRelease
        def renameTaskName = "rename${task.name.capitalize()}Aab" // renameBundleReleaseAab
        def flavorSuffix = task.name.substring("bundle".length()).uncapitalize() // "release"
        tasks.create(renameTaskName, Copy) {
            def path = "$buildDir/outputs/bundle/" + "$flavorSuffix/"
            def aabFile = "${android.defaultConfig.applicationId}-" + "$flavorSuffix" + ".aab"
            from(path) {
                include aabFile
                rename aabFile, "${android.defaultConfig.applicationId}.aab"
            }
            into path
        }
        task.finalizedBy(renameTaskName)
    }
}
As the original answer said: This will add more tasks than necessary, but those tasks will be skipped since they don't match any folder.
e.g.
Task :app:renameBundleReleaseResourcesAab NO-SOURCE
I've been trying to inject an EJB into a JAX-RS resource via an InitialContext().lookup(), and I get the following exception:
<javax.naming.NameNotFoundException: While trying to look up
comp/env/AServiceLocal
in /app/webapp/wcc/1377099157.; remaining name 'comp/env/AServiceLocal'>
My lookup in the resource constructor:
try {
    initialContext = new InitialContext();
    String jndiSubcontext = "java:comp/env/";
    aService = (AServiceLocal) initialContext.lookup(jndiSubcontext + AServiceLocal.class.getSimpleName());
    eSService = (ESServiceLocal) initialContext.lookup(jndiSubcontext + ESServiceLocal.class.getSimpleName());
    eService = (EServiceLocal) initialContext.lookup(jndiSubcontext + EServiceLocal.class.getSimpleName());
} catch (NamingException e) {
    e.printStackTrace();
}
Here's the file structure, taking into account that these are all Maven projects:
global
|
--shared
|
|---src/main/java/com/x/y/z/AServiceLocal.java (ejb)
|
--war-project
|
|--src/main/java/com/x/y/z/TheResource.java (jax-rs)
There are more Maven projects involved, and they are all configured through the global project in a hierarchical way.
There is also a resource in the same project as war-project that also performs lookups on the shared project, and those lookups do work.
I don't understand what the problem is.
Edit:
After adding an ejb-local-ref to the deployment descriptor:
<ejb-local-ref>
    <ejb-ref-name>AServiceLocal</ejb-ref-name>
    <ejb-ref-type>Session</ejb-ref-type>
    <local>com.x.y.service.AServiceLocal</local>
    <ejb-link>shared.jar#AService</ejb-link>
</ejb-local-ref>
I get the following error:
[J2EE:160101]Error: The ejb-link "shared.jar#AService"
declared in the ejb-ref or ejb-local-ref "AServiceLocal"
in the application module "xyz-99.1.0-SNAPSHOT.war" could not be
resolved. The target EJB for the ejb-ref could not be found. Ensure
that the link is correct.
The jar shared.jar is a dependency of the war project, but it seems that location is not correct. Must I also add the package to the ejb-link?
Something like: <ejb-link>shared.jar#com.x.y.ServiceImpl</ejb-link>
I also need to point out that there is a mix of HK2, CDI, and lookups as part of the injection, because the project is quite old and was recently migrated to WebLogic 12c, so a normal @Inject or @EJB doesn't appear to work.
Taking into account all the variables I am seeing, like Maven, the project structure, the JNDI lookup, etc., the best way is to add an entry in the web.xml of the war project referencing your EJBs.
<ejb-local-ref>
    <ejb-ref-name>ejb/adminXProfileService</ejb-ref-name>
    <ejb-ref-type>Session</ejb-ref-type>
    <local>com.myapp.ejb.AdminXProfileServiceLocal</local>
    <ejb-link>YourEJBLibrary.jar#adminXProfileService</ejb-link>
</ejb-local-ref>
Then in your lookup you simply look for:
context.lookup("java:comp/env/ejb/adminXProfileService");
I have a Maven project that uses the jaxb2-maven-plugin to compile some xsd files. It uses the staleFile to determine whether or not any of the referenced schemaFiles have been changed. Unfortunately, the xsd files in question use <xs:include schemaLocation="../relative/path.xsd"/> tags to include other schema files that are not listed in the schemaFile argument so the staleFile calculation in the plugin doesn't accurately detect when things need to be actually recompiled. This winds up breaking incremental builds as the included schemas evolve.
Obviously, one solution would be to list all the recursively referenced files in the execution's schemaFile. However, there are going to be cases where developers don't do this and break the build. I'd like instead to automate the generation of this list in some way.
One approach that comes to mind would be to somehow parse the top-level XSD files and then either set a property or output a file that I can then pass into the schemaFile or schemaFiles parameter. The Groovy gmaven plugin seems like it might be a natural way to embed that functionality right into the POM, but I'm not familiar enough with Groovy to get started.
Can anyone provide some sample code? Or offer an alternative implementation/solution?
Thanks!
Not sure how you'd integrate it into your Maven build -- Maven isn't really my thing :-(
However, if you have the path to an xsd file, you should be able to get the files it references by doing something like:
def rootXsd = new File('path/to/xsd')
def refs = new XmlSlurper().parse(rootXsd).depthFirst().findAll { it.name() == 'include' }.@schemaLocation*.text()
println "$rootXsd references $refs"
So refs is a list of Strings, which should be the paths to the included XSDs.
Based on tim_yates's answer, the following is a workable solution, which you may have to customize based on how you are configuring the jaxb2 plugin.
Configure a gmaven-plugin execution early in the lifecycle (e.g., in the initialize phase) that runs with the following configuration...
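The wiring itself might look something like this (a sketch; the org.codehaus.gmaven:groovy-maven-plugin coordinates and the external script file are my assumptions, and the Groovy that follows is the script it would run):

<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>groovy-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>initialize</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source>${project.basedir}/src/build/checkStaleSchemas.groovy</source>
      </configuration>
    </execution>
  </executions>
</plugin>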
Start with a function to collect File objects of referenced schemas (this is a refinement of Tim's answer):
def findRefs = { f ->
    def relPaths = new XmlSlurper().parse(f).depthFirst().findAll {
        it.name() == 'include'
    }*.@schemaLocation*.text()
    relPaths.collect { new File(f.absoluteFile.parent + "/" + it).canonicalFile }
}
Wrap that in a function that iterates on the results until all children are found:
def recursiveFindRefs = { schemaFiles ->
    def outputs = [] as Set
    def inputs = schemaFiles as Queue
    def xsd
    // Breadth-first examine all refs in all schema files
    while ((xsd = inputs.poll()) != null) {
        outputs << xsd
        findRefs(xsd).each {
            if (!outputs.contains(it)) inputs.add(it)
        }
    }
    outputs
}
The real magic then comes in when you parse the Maven project to determine what to do.
First, find the JAXB plugin:
jaxb = project.build.plugins.find { it.artifactId == 'jaxb2-maven-plugin' }
Then, parse each execution of that plugin (if you have multiple). The code assumes that each execution sets schemaDirectory, schemaFiles and staleFile (i.e., does not use the defaults!) and that you are not using schemaListFileName:
jaxb.executions.each { ex ->
    log.info("Processing jaxb execution $ex")
    // Extract the schema locations; the configuration is an Xpp3Dom
    ex.configuration.children.each { conf ->
        switch (conf.name) {
            case "schemaDirectory":
                schemaDirectory = conf.value
                break
            case "schemaFiles":
                schemaFiles = conf.value.split(/,\s*/)
                break
            case "staleFile":
                staleFile = conf.value
                break
        }
    }
Finally, we can open the schemaFiles and parse them using the functions defined earlier:
    def schemaHandles = schemaFiles.collect { new File("${project.basedir}/${schemaDirectory}", it) }
    def allSchemaHandles = recursiveFindRefs(schemaHandles)
...and compare their last-modified times against the stale file's modification time, unlinking the stale file if necessary:
    def maxLastModified = allSchemaHandles.collect {
        it.lastModified()
    }.max()
    def staleHandle = new File(staleFile)
    if (staleHandle.lastModified() < maxLastModified) {
        log.info("  New schemas detected; unlinking $staleFile.")
        staleHandle.delete()
    }
}