How can I run something when all extensions are available? - eclipse-plugin

I'm trying to understand what is safe vs. not safe with respect to the Eclipse plugin lifecycle.
Background
Something in the Eclipse/RCP/OSGi framework allows for circular dependencies between bundles by letting bundles provide extension points. If bundle X provides an extension point, bundle Y may depend on bundle X, provide an extension that implements an interface or extends a class known to X, and make that extension available to bundle X.
Then there's the promise of activators: as far as I understand, it is promised that your activator's start(BundleContext) method will be called before any class in your bundle is made available to any other bundle, and that your dependencies' start(...) methods will have been called before yours.
Limitations/Possible Contradictions
Now, I'm ready to describe my conundrum: I would like to retrieve all the providers of a specific extension point as soon as possible; the easy way to do this would appear to be in the activator of my bundle.
However, if what I've described about the promises that the Eclipse/RCP/OSGI framework makes is true, then I'm pretty sure it shouldn't be possible for me to do that during activation:
One of the following must happen:
(1) I'll have a reference to classes provided by one of my dependencies before their start(...) method has been called, or
(2) my dependency's start(...) method will have to be called before mine, or
(3) no promise will be violated, but I'll retrieve zero extensions, because the plugins that depend on me couldn't be started before me, so their implementations of my extension point are not yet available.
Why I Need Extensions at Startup
My challenge is that I need to load some data ASAP after the startup of my plugin, but I need to ensure that my extensions are loaded first, because the extensions in question are extensions to the data format of the data that I need to load; if I load the data first, it fails or becomes corrupted.
I'm also wondering whether my picture of the Eclipse plugin lifecycle is correct. Despite searching for discussions of the plugin lifecycle, I haven't come across any warnings about its limitations. I'm fairly certain it must be possible to do things wrong and create serious problems, and I'd like to understand under what circumstances things would go wrong so I can avoid them.

The extension point registry accessed by the IExtensionRegistry interface will tell you about extension points without starting any of the plugins involved.
IExtensionRegistry extReg = Platform.getExtensionRegistry();
In the registry for an extension point you will have a number of IConfigurationElement entries describing the individual extensions declared by plugins. It is only when you call the createExecutableExtension method of this interface that the contributing plugin is started.
Note: A plugin's activator start method is not normally run until Eclipse needs to run some other code in the plugin - it does not run at Eclipse startup unless you force it to.
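For illustration, here is a minimal sketch of reading such contributions; the extension point ID com.example.myplugin.dataFormats and the DataFormat interface are invented names for this example, not part of any real API:

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.Platform;

public class DataFormatLoader {

    // Hypothetical callback interface that extenders would implement.
    public interface DataFormat {
        void configure();
    }

    public void loadDataFormatExtensions() {
        IExtensionRegistry extReg = Platform.getExtensionRegistry();
        // Reading the registry only parses the contributors' plugin.xml files;
        // no contributing plugin is started by this call.
        IConfigurationElement[] elements =
                extReg.getConfigurationElementsFor("com.example.myplugin.dataFormats");
        for (IConfigurationElement element : elements) {
            String name = element.getAttribute("name"); // still no activation
            try {
                // Only this call starts the contributing plugin, by instantiating
                // the class named in its "class" attribute.
                Object ext = element.createExecutableExtension("class");
                if (ext instanceof DataFormat) {
                    ((DataFormat) ext).configure(); // e.g. register the format before loading any data
                }
            } catch (CoreException e) {
                // Log and skip a misbehaving contribution.
            }
        }
    }
}

Reading the configuration elements is safe even very early, since it only consults metadata; whether you also call createExecutableExtension during your own activation is exactly the judgement call the question is about.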

AspectJ & controlling calls in other jars

POST 1: theoretical question
We use some software that is actually a web module with its own Tomcat and shell scripts for controlling it. It also has a plugin system, which allows you to upload a .jar file with a certain structure to add new functionality to the application.
Question:
I would like to control and actually change the responses to different calls in the main system/application (not in my jar). Could I use AspectJ to do that? Why or why not? What would be the other general possibilities, apart from changing the code of the main application?
POST 2: the try
I tried to do it this way (in Eclipse):
1. In the AspectJ project I added the jar file containing the classes to be woven (actually I added it to the INPATH).
2. Exported the project as "Jar with AspectJ support".
3. Deployed the jar file exported in step 2: no result.
Questions:
In the exported aspect jar, there are only the .class files of the AspectJ project, no .class files from the INPATH jar.
Should there be other classes, from the imported INPATH jar?
In the exported aspect jar there is no jar with the AspectJ runtime (aspectjrt.jar). Should it be there, or how should the virtual machine be configured to have it?
Yes, why not? If you could extend your question and explain (maybe with an example) which actors and actions there are in the system, we might be able to help you in a more concrete fashion. But basically I see no problem. The JAR modules might be loaded dynamically, but if you know which calls in the Tomcat app you want to intercept, you can easily instrument them either statically by reweaving the existing classes or dynamically via LTW (load-time weaving) during JVM start-up. There is no need to touch your uploaded JAR modules, which is, as I understand you, what you want to avoid.
You probably want to weave your main application's target classes via an execution(<methodsToBeChecked>) pointcut in combination with around() advice.
The other details depend on your specific use case, the package, class and method names, parameters etc. The around advice can do one or several of the following things:
determine caller,
check call parameters,
manipulate call parameters,
call original target with original or changed parameters,
alternatively not perform the original call at all,
pass back the result of the original call to the caller,
pass back a manipulated version of the result to the caller,
pass any synthetic value with the correct return type to the caller,
catch exceptions raised by the original call,
throw your own exceptions
etc.
Your imagination (and AspectJ's few limitations) is the limit. :-)
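For illustration, a minimal annotation-style sketch of such around advice; the package, class and method names (com.vendor.app.RequestHandler.handle) are invented placeholders and would be replaced by the real signatures from the main application:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ResponseInterceptor {

    // execution() matches the target method inside the main application;
    // the signature below is a made-up placeholder.
    @Around("execution(String com.vendor.app.RequestHandler.handle(..))")
    public Object changeResponse(ProceedingJoinPoint joinPoint) throws Throwable {
        Object[] args = joinPoint.getArgs();      // inspect or manipulate call parameters here
        Object result = joinPoint.proceed(args);  // call the original method (or skip it entirely)
        if (result instanceof String) {
            // pass back a manipulated version of the result to the caller
            return ((String) result).replace("foo", "bar");
        }
        return result;
    }
}

For LTW, the aspect is listed in a META-INF/aop.xml on the class path and the JVM is started with -javaagent pointing at aspectjweaver.jar; for static weaving you would instead reweave the application's classes or JARs with the ajc compiler.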

How to do post-build modifications in an Eclipse builder

I'm currently working on an Eclipse plug-in to provide iPOJO manipulation support.
The principle of iPOJO is to modify the .class files generated by the Java compiler to inject some methods and to add/update an entry in the MANIFEST.MF file.
Currently, my plug-in provides a project Nature and adds a Builder, placed at the end of the project's builder list, that calls the iPOJO Manipulator.
I use it on PDE projects.
The complete process works, but I have a problem:
When my builder has finished its job (and the build process is done), the whole build process restarts, erasing the output folder and calling my builder again.
If I don't add a safety trick, this makes the build process loop over and over.
As I work on IResource, an IResourceDeltaEvent must be sent at the end of the building process, so I think the best way to avoid that kind of problem is to hide the fact that the resource has changed.
To be clear, I'm looking for a way to modify the class files after a PDE build, without inducing a new build, and without disabling the workspace auto-build property.
Thanks for your answers.
I am a little unclear as to what you are describing.
You mention that you want this to work for PDE builds, but PDE builds happen largely outside of the workspace using Ant scripts. They do not use IResource, Builder, or IResourceDeltaEvent.
I am guessing that you don't really mean PDE builds, but rather the building of plugin projects inside of the workspace.
In general, Eclipse (JDT in particular) expects that it has complete control over the output folders. However, there is an option in Preferences -> Java -> Building -> Output Folder called "Rebuild class files generated by others". Ensure that this is disabled; Eclipse should then not try to rebuild class files that you touch. If your builder only touches class files, it will not trigger other builds after it changes them. The only thing is that you need to be careful not to compile things twice (and I think this is the problem that you are describing).
Alternatively, it may be easier for you to implement a CompilationParticipant (via the org.eclipse.jdt.core.compilationParticipant extension point). This will let you know exactly when JDT performs a compilation and exactly what it compiles.
Additionally, you will be notified of reconcile operations (i.e., changes in working copies that have not been saved). This may be useful if you want to manipulate files as you type.
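A rough sketch of what such a participant might look like, assuming the helper methods (hasIPojoNature, manipulateClassFiles) are your own placeholder code and the class is registered against the org.eclipse.jdt.core.compilationParticipant extension point in plugin.xml:

import org.eclipse.jdt.core.IJavaProject;
import org.eclipse.jdt.core.compiler.CompilationParticipant;

public class IPojoCompilationParticipant extends CompilationParticipant {

    @Override
    public boolean isActive(IJavaProject project) {
        // Only participate for projects carrying your iPOJO nature.
        return hasIPojoNature(project);
    }

    @Override
    public void buildFinished(IJavaProject project) {
        // JDT has just finished compiling this project: post-process the
        // generated .class files in the output folder here, instead of
        // doing it from a separate builder that re-triggers the build.
        manipulateClassFiles(project);
    }

    private boolean hasIPojoNature(IJavaProject project) {
        return true; // placeholder for a real nature check
    }

    private void manipulateClassFiles(IJavaProject project) {
        // placeholder for the call into the iPOJO manipulator
    }
}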

Apache Ivy Configurations

I'm slowly beginning to understand the importance of module configurations within the Ivy universe. However, it is still difficult for me to clearly see how the same chunk of code could have different configurations with different dependency requirements (the one exception is the case of test configs that require JUnit on top of the normal dependencies -- I actually understand that 100%!).
For instance, take the following code:
package org.myorg.myprogram.core;
// Import an object from a dependency
import org.someElse.theirJAR.Widget;
public class MyCode
{
    public MyCode()
    {
        if (Widget.SOME_STATIC == 3)
            System.out.println("Fizz");
        else
            System.out.println("Buzz");
    }
}
Now, aside from the fact that this is terrible code, I just don't see how my program (which, let's pretend, is JARred up into MyProgram.jar) could be set up to have multiple "configurations", some of which may require theirJAR and its Widget class, and others that don't. To me, if we fail to provide MyCode with a Widget it will die at runtime, always.
Again, I understand the necessity for test configurations; just not anything else (I have also asked questions about compile- vs run-time dependencies, and I guess I also see the necessity for those as well). But beyond test configs, compile-time configs, and runtime configs, what other module configurations could you possibly need? How would MyCode need a Widget in some cases, and not in other cases, yet still run perfectly fine without a Widget?
I greatly appreciate any help wrapping my brain around this!
Hibernate is a good example. Hibernate supports multiple cache implementations to act as its level-2 cache. You don't want to transitively depend on all the possible caches, only the one you use.
In general, we use the typical compile, test, runtime set of configurations.
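For illustration, a hedged ivy.xml sketch of that Hibernate scenario; the organisation/module names and revisions are invented for the example rather than real coordinates:

<ivy-module version="2.0">
    <info organisation="org.myorg" module="myprogram"/>
    <configurations>
        <conf name="compile" description="build-time dependencies"/>
        <conf name="runtime" extends="compile" description="default runtime dependencies"/>
        <conf name="ehcache" extends="runtime" description="runtime plus an EhCache level-2 cache"/>
        <conf name="oscache" extends="runtime" description="runtime plus an OSCache level-2 cache"/>
    </configurations>
    <dependencies>
        <dependency org="org.hibernate" name="hibernate-core" rev="3.6.0" conf="compile->default"/>
        <!-- Only pulled in when a consumer asks for the matching configuration -->
        <dependency org="net.sf.ehcache" name="ehcache" rev="2.4.0" conf="ehcache->default"/>
        <dependency org="opensymphony" name="oscache" rev="2.4" conf="oscache->default"/>
    </dependencies>
</ivy-module>

A project that has chosen EhCache would then depend on myprogram with conf="runtime->ehcache" and never resolve the OSCache artifacts.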
To add to SteveD's answer, remember that dependencies can be more than just .jar files. Some dependencies come with source and javadoc files, release notes, license files, etc. Multiple configurations of the dependency might let you select the subset of files you wish to resolve.
You might also want to use configurations to control the contents of different distributions. For example, you might want to release the jar on its own (the "master" configuration in Maven parlance) and additionally build a tar package containing all runtime dependencies, with (or without) source code.
Another use for configurations is when you target multiple platforms. I often release Groovy scripts packaged to run as standalone jars or as Tomcat web applications.

Cocoa/Objective-C Plugins Collisions

My application has a plugin system that allows my users to write their own plugins that get loaded at runtime. Usually this is fine, but in some cases two plugins use the same libraries, which causes a collision between those two.
Example:
Plugin A wants to use TouchJSON for working with JSON, so its creator adds the TouchJSON code to the plugin source and it gets compiled and linked into the plugin binary. Later, Plugin B also wants to use that same library and does exactly the same. Now when my app loads these two different plugins it detects this and spits out a warning like this:
Class CJSONScanner is implemented in both [path_to_plugin_a] and [path_to_plugin_b]. One of the two will be used. Which one is undefined.
Since my app just loads plugins and makes sure they conform to a certain protocol I have no control over which plugins are loaded and if two or more use the same library.
As long as both plugins use the exact same version of the library this will probably work, but as soon as the API changes in one plugin, a bunch of problems will arise.
Is there anything I can do about this?
The bundle loading system provides no means to peacefully resolve name conflicts. In fact, we're told to ensure ourselves that the problem doesn't happen, rather than being told what to do if it does. (Obviously, in your case, that's not possible.)
You could file a bug report about this issue.
If this is absolutely critical to your application, you may want to have bundles live in separate processes and use some kind of IPC, possibly NSDistantObject, to pass the data from your program to the plugin hosts. However, I'm fairly sure this is a bag of hurt, so if you don't have very clearly-defined interfaces that allow for distribution into different processes, it might be quite an undertaking.
In a single-process model, the only way to deal with this is to ensure that the shared code (more precisely, the shared Objective-C classes) is loaded once. There are two ways to do this:
Put the shared code in a framework.
Put the shared code in a loadable bundle, and load the bundle when the plug-in is loaded if the relevant classes aren’t already available (check using NSClassFromString()). The client code would also have to use NSClassFromString() rather than referring to classes directly.
Of course, if you aren’t in control of the plug-ins you can’t enforce either of these schemes. The best you can do is provide appropriate guidelines and possibly infrastructure; for instance, in the second case the loading could be handled by the application, perhaps by specifying a class to check for and the name of an embedded bundle to load if it isn’t available in the plug-in’s Info.plist.

How do you organise your NInject modules?

NInject's module architecture seems useful, but I'm worried that it is going to get into a bit of a mess.
How do you organise your modules? Which assembly do you keep them in and how do you decide what wirings go in which module?
Each subsystem gets a module. Of course the definition of what warrants categorisation as a 'subsystem' depends...
In some cases, responsibility for certain bindings gets pushed up to a higher level, because a lower-level subsystem/component is not in a position to make a final authoritative decision; sometimes this can be achieved by passing parameters into the Module.
Replying to my own post after a couple of years of using NInject.
Here is how I organise my NInjectModules, using a Book Store as an example:
BookStoreSolution
  Domain.csproj
  Services.csproj
    CustomerServicesInjectionModule.cs
    PaymentProcessingInjectionModule.cs
  DataAccess.csproj
    CustomerDatabaseInjectionModule.cs
    BookDatabaseInjectionModule.cs
  CustomSecurityFramework.csproj
    CustomSecurityFrameworkInjectionModule.cs
  PublicWebsite.csproj
    PublicWebsiteInjectionModule.cs
  Intranet.csproj
    IntranetInjectionModule.cs
What this is saying is that each project in the system comes prepackaged with one or more NInject modules that know how to setup the bindings for that project's classes.
Most of the time an individual application is not going to want to make significant changes to the default injection modules provided by a project. For example, if I am creating a little WinForm app which needs to import the DataAccess project, normally I am also going to want to have all the project's Repository<> classes bound to their associated IRepository<> interfaces.
At the same time, there is nothing forcing an individual application to use a particular injection module. An application can create its own injection module and ignore the default modules provided by a project that it is importing. In this way the system still remains flexible and decoupled.