How to differentiate between TomEE versions using jar files? - apache-tomee

I have downloaded TomEE plume 8.0.0-M2, TomEE plus 8.0.0-M2, TomEE webprofile 8.0.0-M2, TomEE microprofile 8.0.0-M2, and OpenEJB Standalone 8.0.0-M2 (from http://tomee.apache.org/download-ng.html).
I have installed all of those TomEE distributions and renamed the folders after extracting them, but now I'm not able to tell which version I'm using. I have tried inspecting the tomee-catalina-8.0.0-M2.jar file, but it looks the same in every distribution.
I just want to differentiate the versions as described in http://tomee.apache.org/comparison.html
Note: don't give me answers based on a random jar file being present or absent in the different TomEE distributions.

The differences between the distributions lie only in the set of bundled libraries. The TomEE source files themselves will not differ at all; the installations will merely have lib folders of different sizes (because of those differences in libraries).
If you have access to the directory structure, the simple solution is to check the list of jar files. You said you are not interested in this method, but it is the easiest. :)
If you do not have access to the directory structure, or would like to know at runtime which features are supported, you could use a neat little trick: detect features and correlate them with the table provided here.
The solution itself is to probe for the classes behind the specific features. To do so, you need a list of class names correlated with the features. After that, you can use a method like this to check whether a class is accessible (assuming you are using the classpath rather than the module path):
private boolean isClassPresent(String className) {
    try {
        Class.forName(className);
        return true;
    } catch (ClassNotFoundException e) {
        return false;
    }
}
With the resulting list you'll be able to infer which pre-configured distribution of TomEE is in use.
Be careful, though: if any additional library has been added, it could throw off your detection.
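For example, a rough sketch of how the probes could be driven; the feature-to-class pairs below are my own assumptions (EclipseLink ships with Plume, OpenJPA with Plus/WebProfile) and should be verified against the comparison table:
// Hypothetical feature probes; verify the class names against each distribution's lib folder.
// Requires java.util.Map and java.util.LinkedHashMap.
Map<String, String> featureClasses = new LinkedHashMap<>();
featureClasses.put("EclipseLink JPA (Plume)", "org.eclipse.persistence.jpa.PersistenceProvider");
featureClasses.put("OpenJPA (Plus/WebProfile)", "org.apache.openjpa.persistence.PersistenceProviderImpl");
featureClasses.forEach((feature, cls) ->
    System.out.println(feature + ": " + (isClassPresent(cls) ? "present" : "absent")));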

Related

Problem adding a JDBC driver when creating an Ontop virtual repository

I'm having problems adding a jdbc driver when creating an Ontop virtual SPARQL repository. I follow the instructions here.
The interface already warns that there is no JDBC driver found on the classpath. There is also a link to the download site where you can get the drivers. That all works. But adding the driver to the lib path (in the case of a Linux installation, /opt/graphdb-free/app/lib) and then restarting GraphDB does not work. GraphDB still reports that the driver is not found.
I did try a lot of things. Adding the correct .jar to the CLASSPATH did not work. Using several other potential lib directories (the instructions are not precise about which directory to choose) also changed nothing. Then I took a look at the files you can create under Help - System Information - New Report. I found that all the .jar files in /opt/graphdb-free/app/lib were 'registered' (I don't know if that is the correct term), but not the new one I placed there.
I tried adding other .jars (for MS SQL, next to the MySQL one that I needed). Same problem. Then I tried something weird that actually worked: I renamed a .jar that I thought I wouldn't need to .backup and then renamed the MySQL driver .jar to that original .jar name (I hope this is not too confusing). Restarted GraphDB and it worked!
What am I missing here? Is the list of .jars that are in the lib directory hardcoded somewhere? Very curious how to configure this the right way.
There is a config file named graphdb-free.cfg within the graphdb-free/app folder.
Open it and alter the app.classpath property by adding the additional jar(s) for the JDBC driver to the list. Save and restart.
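A hypothetical sketch of the resulting entry; the property name is taken from the answer above, while the existing value and the jar name are placeholders for whatever your installation actually contains:
app.classpath=<existing entries>:/opt/graphdb-free/app/lib/mysql-connector-java.jar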
For a Docker install, and for posterity, the right directory is /opt/graphdb/dist/lib, and you can add this line to your Dockerfile: COPY /driver-jdbc-postgresql/jdbc-driver.jar /opt/graphdb/dist/lib

Why are the source file names not human readable?

I installed Perl 6 with rakudobrew and wanted to browse the installed files, only to find a list of hex filenames in ~/.rakudobrew/moar-2018.08/install/share/perl6/site/sources as well as ~/.rakudobrew/moar-2018.08/install/share/perl6/sources/.
E.g.
> ls ~/.rakudobrew/moar-2018.08/install/share/perl6/sources/
09A0291155A88760B69483D7F27D1FBD8A131A35 AAC61C0EC6F88780427830443A057030CAA33846
24DD121B5B4774C04A7084827BFAD92199756E03 C57EBB9F7A3922A4DA48EE8FCF34A4DC55942942
2ACCA56EF5582D3ED623105F00BD76D7449263F7 C712FE6969F786C9380D643DF17E85D06868219E
51E302443A2C8FF185ABC10CA1E5520EFEE885A1 FBA542C3C62C08EB82C1F4D25BE7B4696F41B923
522BE83A1D821D8844E8579B32BA04966BAB7B87 FE7156F9200E802D3DB8FA628CF91AD6B020539B
5DD1D8B49C838828E13504545C427D3D157E56EC
The files contain the source of packages, but this does not feel very accessible. What is the rationale for that?
In Perl 6, the mechanism for loading modules and caching their compilations is pluggable. Rakudo Perl 6 comes with two main mechanisms for this.
One is a file-system based repository, and it's used with things like -Ilib. This resolves modules simply using paths on disk. Whenever a module is loaded, it first has to check whether the module's sources have changed, in order to re-compile them if so. This is ideal for development, but such checks take time. Furthermore, this doesn't allow for having multiple versions of the same module available and picking the one matching the specification in the use statement. Again, that's ideal for development, when you just want it to use your latest changes, but less so for installing modules from the ecosystem.
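For example, a typical development invocation against such a file-system repository (the module name is a placeholder):
perl6 -Ilib -e 'use Some::Module;'
Here lib/ is resolved via paths on disk, and the sources are re-checked (and re-compiled if changed) on every run.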
The other is an installation repository. Here, specific versions of modules are installed and precompiled. It is expected that all interactions with such a repository will be done through the API, or through tools using the API (for example, zef locate Some::Module). It's assumed that once a specific version of a module has been installed, it is immutable. Thus, no checks need to be done against the source, and loading can go straight to the compiled version of the module.
Thus, the installation repository is not intended for direct human consumption. The SHA-1s are primarily an implementation convenience; an alternative scheme could have been used in return for a bit more effort (and may well be used in the future). However, the SHA-1s also create the appearance of something that wasn't intended for direct manipulation - which is indeed the case: editing a source file in there will have no immediate effect, and probably confusing effects the next time the compiler is upgraded to a new version.

How to make a resource file visible to all bundles in OSGi?

I'd like to include a resource file (e.g. some xml config file) in my bundle and make it visible to all other bundles in the container. Is it possible without using the Fragment-Host manifest header? I'd like this resource file to always be visible in the classpath of all bundles running alongside my bundle, even those that do not exist yet, but will potentially be added in future.
EDIT:
To clarify - the resource must be available passively, i.e. the other bundles should be able to find it on their classpath, not by referring to any special API or service of my bundle.
Some more background - my environment is a bit messy, but I have no control over it and cannot change its existing bundles. The only way I can modify it is by adding my own bundles. That environment includes several copies of the ch.qos.logback.classic bundle. When logback starts up, it looks for specific XML config files on the classpath. If it doesn't find any of them, its default behaviour is to print everything to stdout at debug level. This environment was previously used to host a GUI application, so it didn't matter that much before, but now I am trying to adapt it so I can use some of its functionality in headless mode. So it becomes important to me to be able to configure it in such a way that only warnings and errors are printed to the console.
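For reference, this is the kind of logback.xml the bundles would need to find on their classpath; a standard minimal configuration that limits console output to warnings and errors:
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="WARN">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>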
In general, no you cannot do this. Class-space isolation is at the heart of OSGi, but you want to put a resource in the class loader of one bundle and make it visible to all other bundles. That's not OSGi, it's the global application classpath.
The only thing you can do to add to the internal classpath of a specific bundle is to write a fragment which attaches to that bundle. A fragment can attach to multiple host bundles, but only if those hosts have the same symbolic name, i.e. because they are different versions of the same bundle. See OSGi R6 Core Specification, section 3.14.
You did however state that the bundles you want to attach are all copies of ch.qos.logback.classic. If that means they all have that exact symbolic name then perhaps a fragment will work after all.
You cannot change the classpath of other bundles this way.
What you can do is retrieve the class loader of your bundle from your BundleContext. You can hand this class loader to another bundle so it can retrieve your resource.
ClassLoader cl = context.getBundle().adapt(BundleWiring.class).getClassLoader();
Another option is to give the other bundle the URL of the resource.
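A minimal sketch of that approach (the resource path is a placeholder); Bundle.getResource returns a URL that you can pass to the other bundle:
// Look up the resource inside this bundle and hand the URL around.
URL resourceUrl = context.getBundle().getResource("org/example/resource.xml");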
As long as the resource is on the classpath, any bundle can access the resource if it can get hold of the class loader of the bundle that contains the resource.
For example:
ClassLoader classLoaderOfBundleWithResource = ...
classLoaderOfBundleWithResource.getResourceAsStream("org/example/resource.xml");
From a maintenance and API point of view, I would not recommend exposing a resource that way; Java types are much better suited for this. Instead, let the resource bundle export a class that gives clients access to the contents of the resource.
For example:
public class XmlDocumentProvider {
    public InputStream openDocument() {
        return getClass().getResourceAsStream("resource.xml");
    }
}
Assuming that both the resource.xml and the XmlDocumentProvider reside in the same package, openDocument will return the resource content just like in the first example.
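A hypothetical client in another bundle (one that imports the exported package) could then read the resource without any class-loader tricks:
// Read the resource through the exported API rather than through class loaders.
try (InputStream in = new XmlDocumentProvider().openDocument()) {
    // parse the XML document ...
} catch (IOException e) {
    // handle or rethrow
}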

Maven: Combine web projects

I have following Maven projects set up:
PM-Core
PM-Web (with a dependency to PM-Core)
Now, this project is used for several clients but for each client there are some small differences: mostly differences in configuration files but some clients also require additional java files (which may not be installed for the other clients).
I've been considering several alternatives on how to support this with maven but am still looking for the perfect solution.
The best solution I can think of is to create a separate Maven project for each client (e.g. PM-Client1, ...) containing only the client-specific configuration files and any additional Java files or JSPs. The next step would be to treat the PM-Web project and the client project as one web project, meaning: have them combined (packaged) into one war file, with files from the client project taking precedence over files from the PM-Web project.
More concretely: running mvn package on PM-Client1 would take everything from PM-Web, add or replace the files from PM-Client1, and package the result into a single war.
So the question is: how to achieve this with maven?
Yes, this can be done using Overlays. The sample on the webpage is exactly what you are talking about.
For the project structure, you could have something like this:
.
|-- PM-Core
|-- PM-WebCommon (of type war, depends on core)
|-- PM-Client1 (of type war, depends on webcommon)
`-- PM-Client2 (of type war, depends on webcommon)
And use overlay in PM-Client1 and PM-Client2 to "merge" them with PM-WebCommon and package wars for each client.
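A minimal sketch of the dependency PM-Client1 would declare on the common war (groupId and version are placeholders):
<dependency>
  <groupId>com.example.pm</groupId>
  <artifactId>PM-WebCommon</artifactId>
  <version>1.0.0</version>
  <type>war</type>
  <scope>runtime</scope>
</dependency>
With the war dependency in place, the maven-war-plugin overlays PM-WebCommon into the client war, and files present in PM-Client1 win over the overlaid ones.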
UPDATE: I won't cover all the details, but I believe declaring the war dependency with runtime scope is required when using overlays; that is simply how overlays work (actually, the whole overlay mechanism is a kind of hack). Now, to solve your Eclipse issue, one solution would be to create a JAR containing the classes of the PM-WebCommon project. To do so, set the optional attachClasses parameter to true. This tells Maven to create a PM-WebCommon-<version>-classes.jar that you'll then be able to declare as a dependency in PM-Client1 (with provided scope). For the details, have a look at MWAR-73 and MWAR-131. This is also discussed in the FAQ of the war plugin. Note that this is not a recommended practice; the right way would be to move the classes to a separate module (and this is the other solution I wanted to mention).
UPDATE (201001018): I've tried the attachClasses parameter and it works with version 2.1-beta-1 of the plugin.
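For reference, a sketch of that attachClasses setting in PM-WebCommon's pom.xml (plugin version omitted):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <attachClasses>true</attachClasses>
  </configuration>
</plugin>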
You could use profiles (see http://maven.apache.org/guides/mini/guide-building-for-different-environments.html) and use classifiers to distinguish between the artifacts produced by the different builds of the same version.
In this setup, you could create additional optional modules for each of your clients' specific customisations under the parent project, i.e.:
+ PM
++ PM-Core
++ PM-Web
++ PM-Client1
++ PM-Client2
Or you could look at using the Maven Assembly plugin.
Compare also the answers to the question different WAR files, shared resources.

How do I avoid having to manually tweak Import-Package headers with Maven bundle-plugin?

I'm happily using the Maven bundle-plugin to create OSGi manifest headers for my modules. However, when there are configuration files that pull in classes which aren't referenced directly in the code, the plugin can't tell which packages it's going to need.
One example is a bundle with domain models that constitute a Persistence Unit for JPA. The driver class is part of the PU configuration and is either set in an XML file or at runtime when the EntityManager is instantiated. I have to manually add an Import-Package header for the driver class that I want to load, or I get ClassNotFoundException errors.
Another example is a Struts war, where the web.xml pulls in the Struts dispatcher that's otherwise not found anywhere in the code and has to be manually added to the headers.
How can I avoid this?
I tried adding the required packages as dependencies with a provided scope, but that didn't help.
In the plug-in section of the bnd configuration, you can specify plug-ins that analyze these files and contribute to the Import-Package header. For Spring it looks like this:
<_plugin>aQute.lib.spring.SpringComponent</_plugin>
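In a maven-bundle-plugin build, that directive goes inside the plugin's instructions element; a minimal sketch (plugin version omitted):
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <_plugin>aQute.lib.spring.SpringComponent</_plugin>
    </instructions>
  </configuration>
</plugin>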
I am not sure what descriptors are supported beyond Spring. Just take a look at the source (it's in the Apache Felix SVN) and see for yourself. In the worst case you have to write your own plug-in, but at least it is possible! Peter Kriens' site about bnd also explains the usage and some internals.
Other than that, I am not aware of any simple solution.