OSGi unit testing without a step that packages bundles

I have checked a few testing solutions for OSGi, including Pax, and had a quick look at the abstract TestCase within Spring DM, but they both appear to require you to jar up the associated bundles first. I was hoping to find something that works without this intermediate step.
Imagine being able to assemble packages straight from your classpath, so that packages x and y make up bundle XY and packages x and z make up bundle XZ. Bundle XZ would not "see" package y but could import a service from XY living in package x. Is this possible, or does an equivalent test case / library exist?

I think that using Tiny Bundles from OPS4J with Pax Exam is what you are looking for.
http://wiki.ops4j.org/display/paxexam/ExamAndTinybundles
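A rough sketch of how that could look, with bundles assembled on the fly from classes already on the test classpath. The class names x.SomeService, y.SomeHelper and z.SomeConsumer are made up, and the exact imports and method names differ between Pax Exam / TinyBundles versions, so treat this as an illustration rather than a drop-in configuration:

import static org.ops4j.pax.exam.CoreOptions.junitBundles;
import static org.ops4j.pax.exam.CoreOptions.options;
import static org.ops4j.pax.exam.CoreOptions.streamBundle;
import static org.ops4j.pax.tinybundles.core.TinyBundles.bundle;

import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.Option;
import org.osgi.framework.Constants;

public class VisibilityTest {

    @Configuration
    public Option[] config() {
        return options(
            // Bundle XY: built from (hypothetical) classes in packages x and y,
            // but it exports only package x
            streamBundle(bundle()
                .add(x.SomeService.class)
                .add(y.SomeHelper.class)
                .set(Constants.BUNDLE_SYMBOLICNAME, "XY")
                .set(Constants.EXPORT_PACKAGE, "x")
                .build()),
            // Bundle XZ: built from a (hypothetical) class in package z;
            // it imports x from XY but can never "see" package y
            streamBundle(bundle()
                .add(z.SomeConsumer.class)
                .set(Constants.BUNDLE_SYMBOLICNAME, "XZ")
                .set(Constants.IMPORT_PACKAGE, "x")
                .build()),
            junitBundles());
    }
}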

If you really want to enforce runtime visibility rules, then you probably have to run your tests inside an OSGi environment and pay some performance overhead.
However, it might be sufficient to enforce compile-time visibility by separating your classes into distinct compilation units (e.g. separate Maven modules X, Y, Z) with proper dependencies and then running a standard testing framework (e.g. JUnit) without OSGi.
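For example (group and module names are placeholders), module Z's POM would declare a dependency on X only, so its code simply cannot compile against anything in module Y:

<!-- z/pom.xml (sketch): Z sees X at compile time, but not Y -->
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>x</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
  <!-- no dependency on y, so package y stays invisible to this module -->
</dependencies>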

Are Paket dependency groups more than just a way to solve version conflicts?

The paket.dependencies sample file produced when running dotnet new fake currently looks like:
// [ FAKE GROUP ]
group Build
source https://api.nuget.org/v3/index.json
nuget Fake.DotNet.Cli
nuget Fake.IO.FileSystem
nuget Fake.Core.Target
I understand how dependency groups can be used to solve version conflicts; however, it seems unnecessary to introduce them until an actual version conflict arises.
What is the semantics of the Build group here, and why not just put the three dependencies under the default Main group? The same question applies to the Test group in the Paket documentation example.
Could someone elaborate on the reasons for segregating dependencies into groups when there are no version conflicts, and perhaps explain the rationale behind the Build and Test groups a bit more?
I basically introduced that split for FAKE 5, so:
The reasoning is that one set of dependencies is used at BUILD time (i.e. when running the build script) and the other at your project's RUN time. It is completely valid to have different sets of dependencies for those two.
Consider the following scenario: you use the FSharp.Formatting (FSF, a markdown parser) project in your build process to generate API documentation, and in your project to generate websites. Now you want to update the API documentation by updating FSF, but you cannot upgrade FSF in your project for compatibility reasons. With the separation between BUILD and RUN time this is not a problem, and you can treat them as "different" dependencies in different versions.
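As a sketch (the version constraints are made up), the paket.dependencies for that scenario could pin different FSharp.Formatting versions in the default group and in the Build group:

source https://api.nuget.org/v3/index.json
// older version the project itself is still compatible with
nuget FSharp.Formatting ~> 2.14

// [ FAKE GROUP ]
group Build
source https://api.nuget.org/v3/index.json
// newer version used only while running the build script
nuget FSharp.Formatting ~> 4.0
nuget Fake.Core.Target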
I see that approach as similar to how node separates dependencies and devDependencies.
Regarding the split between RUN and TEST: personally I'm not a huge fan. I can see why people want to separate their dependencies, but Paket currently doesn't "really" support that scenario, and you can indeed run into issues with that approach. My current suggestion is not to split between RUN and TEST but to manage them in a single group.
To properly split between RUN and TEST, Paket would need a new feature for referencing another group, for example:
group Run
source https://api.nuget.org/v3/index.json
nuget MyDep1
group Test
reference_group Run
source https://api.nuget.org/v3/index.json
nuget MyRunner1
Similar to the external lock-file feature: https://github.com/fsprojects/Paket/pull/3062#issuecomment-367658114

How to add the target jar as a test resource of the same project?

I'm developing a Solr plugin and using the Solr test framework. I place a test SOLR_HOME dir under test/resources with /conf and /lib, and the framework then instantiates a SolrCore and loads my plugin from /lib. Outputting the plugin jar to /lib is not an issue; the issue is that the plugin jar is not yet available at that point, since it still needs to pass the tests (chicken and egg).
How do you recommend solving this? I see those options:
Create another project for the tests, with a dependency on the plugin, and run the tests there. Simple enough, but how do I ensure that every time the plugin is built, the tests of this other project are run as well? The point of automated tests on every build is to catch a new plugin jar that breaks the tests.
In the dp4j pom.xml I build the project in 2 phases: in the 1st I <include> only the annotation processors, while in the other I compile the tests that rely on the annotation processors compiled in the earlier phase.
I'm in favor of 2, since copy-pasting the configuration doesn't seem like a bad option and makes it seem less complicated than it probably is. I don't remember whether I've already asked about it here - what do you recommend? Any other case studies / working code to look at?
There's a 3rd, and most probably best, solution: do nothing!
I was under the impression that the Solr test framework needed to load my plugin from /lib, but apparently it doesn't; it can load it from test-classes, all on its own!

How can I tell Hudson to build the modules instead of the jobs?

I have a lot of jobs on Hudson, most of which are really small and consist of just a few modules. But one is big and consists of several modules.
Whenever I make a commit to our Subversion repository for any of the modules in that big job, Hudson builds the entire job instead of just the module that has changed.
It doesn't matter whether I use SCM polling or a Subversion hook; the result is the same.
It seems to me that it would be better if the modules were built instead of the jobs, since the modules in other jobs have dependencies on the modules, not on the jobs.
Can this be configured, or do I have to create several jobs instead of the big one? And if so, can I configure the big job never to build when any of its modules are triggered, but still build when its own pom.xml is changed?
Thanks.
Hudson has an "Incremental Build" option in the Maven area of the job configuration.
It's hidden in the "Advanced" area.
You could make use of the reactor plugin. For example:
mvn reactor:make-scm-changes
This will only build those modules that have been changed in the SCM. Follow the link for other examples.
Doesn't your compiler offer an incremental compile option? The Java 1.6 compiler usually looks for both class and source files and uses their timestamps to decide whether to use the source or the class file. Just leave out the clean goal when building your code.
Another option would be to first run a batch/shell script to determine what files changed and delete the corresponding class files so that the compiler incrementally builds the class files that are missing.
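A very rough sketch of such a script, assuming a Subversion working copy and the standard Maven source layout (the svn output parsing is simplified and nested classes are ignored):

# delete the class files for locally modified sources so that the next
# non-clean build recompiles only those
svn status -q | awk '{print $NF}' | grep '\.java$' | while read src; do
  cls=$(echo "$src" | sed -e 's|src/main/java|target/classes|' -e 's|\.java$|.class|')
  rm -f "$cls"
done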

A layout for a Maven project with a patched dependency

Suppose I have an open-source project that depends on some library that must be patched in order to fix some issues. How do I do that? My ideas are:
Have the library sources set up as a module and keep them in my VCS. Pros: simple. Cons: third-party sources in my repo, might slow down the build process, hard to find the patched places (though that can be fixed in a README).
Have a module like in 1, but keep only the patched source files, compile them with the original library jar on the classpath, and somehow replace the *.class files in the library jar during the build. Pros: builds faster, easy to find the patched places. Cons: hard to configure, and that jar hackery is non-obvious (the library jar in the repository and in my project assembly would differ).
Keep patched *.class files in main/resources and replace them on packaging as in 2. Pros: almost none. Cons: binaries in VCS, hard to recompile a patched class since patch compilation is not automated.
One nice solution is to create a distinct project with the patched library sources and deploy it to a local/enterprise repository with a -patched qualifier. But that would not fit an open-source project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?
One nice solution is to create a distinct project with the patched library sources and deploy it to a local/enterprise repository with a -patched qualifier. But that would not fit an open-source project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?
This is what I would do (and actually what I do) for both corporate and open-source projects. Get the sources, put them under version control in a distinct project, patch them, rebuild the patched library (and include this information in the version, something like X.Y.Z-patched), deploy it to a repository (you could use SVN for this, a la Google Code [1]), declare the repository in your POM, and update the dependency to point at your patched version.
With this approach, you can tell your users to check out your code and run mvn install, and they will just get the patched version without any extra action. This is IMHO the cleanest way (not error-prone, no classpath order mess, no increase in build time, etc.).
[1] Lots of people are deploying their code to their hosted Subversion repository (how-to in this post).
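The POM side of that could look roughly like this (repository id, URL, coordinates and version are placeholders):

<!-- declare the repository that hosts the patched artifact -->
<repositories>
  <repository>
    <id>my-patched-libs</id>
    <url>https://example.org/svn/maven-repo/releases</url>
  </repository>
</repositories>

<!-- depend on the patched version instead of the upstream one -->
<dependencies>
  <dependency>
    <groupId>some.library</groupId>
    <artifactId>some-library</artifactId>
    <version>1.2.3-patched</version>
  </dependency>
</dependencies>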
One nice solution is to create a distinct project with the patched library sources and deploy it to a local/enterprise repository with a -patched qualifier. But that would not fit an open-source project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?
I'd agree with this and Pascal's answer. Some additional notes:
you may use dependency:unpack on the original artifact and then combine that with your compiled classes if you don't want to rebuild the whole dependent project (see the sketch after this list)
in either case, your pom.xml will need to correctly represent the dependencies of that library
you can still integrate this as part of your project's build to avoid the 'deploy to a repository' step
make sure you honour the constraints of the project's license when doing all this!
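A sketch of the dependency:unpack idea from the first bullet (coordinates, version and phase are assumptions): unpack the original library into target/classes early in the build, so that your own patched classes, compiled afterwards, overwrite the originals before packaging.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-original-library</id>
      <phase>process-resources</phase>
      <goals>
        <goal>unpack</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>some.library</groupId>
            <artifactId>some-library</artifactId>
            <version>1.2.3</version>
            <outputDirectory>${project.build.outputDirectory}</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>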

Find dependencies in target/classes instead of the local repository?

Summary: I'm looking for a way to instruct Maven to search for dependencies in target/classes instead of the jars in the local repository.
Say I have 2 modules, A and B, where A depends on B. Both are listed in a parent module S. Normally I need to run 'mvn install' in S. I'm looking for a way to run 'mvn compile' so that when A is compiled, its classpath contains ../B/target/classes instead of ~/.m2/repository/com/company/b/1.0/b-1.0.jar.
(My reason is that I want continuous compilation without having to go through packaging and installation, or, more exactly, to use 'mvn scala:cc' on multiple modules.)
I don't think this is possible without horrible hacking; it is just not how Maven works. Maven uses binary dependencies and needs a local repository to resolve them. So the Maven way to handle this is to launch a reactor build on all modules. Just in case, have a look at Maven Tips and Tricks: Advanced Reactor Options.
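For illustration (the module name a is a placeholder), an advanced-reactor invocation run from the parent S looks like this; when the sibling modules are built in the same reactor run, recent Maven versions resolve them from their build output rather than requiring installed jars:

# -pl selects the module(s) to build; -am ("also make") adds the reactor
# modules they depend on, so B is built before A in the same run
mvn compile -pl a -am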
But during development, can't you just import all your projects into your IDE and use "project references" (i.e. configure your projects to depend on source code instead of a JAR), like most Java developers do? This is the common approach to avoid having to install an artifact just to "see" the modifications.
If this is not possible and if you really don't want to install artifacts into your local repository, then you'll have to move your code into a single module.
I know this is annoying. What helped me here is definitely IDE support. Eclipse and IntelliJ are clever enough to collect all dependencies once a Maven project import is done; even cross-module dependencies are compiled live.