How to do post-build modifications in an Eclipse builder - eclipse-plugin

I'm currently working on an Eclipse plug-in to provide iPOJO manipulation support.
The principle of iPOJO is to modify the .class files generated by the Java compiler to inject some methods and to add/update an entry in the MANIFEST.MF file.
Currently, my plug-in provides a project nature and adds a builder, at the end of the project's builder list, that calls the iPOJO manipulator.
I use it on PDE projects.
The complete process works, but I have a problem:
When my builder has finished its job (and the build process has ended), the whole build process restarts, erasing the output folder and calling my builder again.
If I don't add a safety trick, this makes the build process loop over and over.
As I work on IResources, an IResourceDeltaEvent must be sent at the end of the build process, so I think the best way to avoid that kind of problem is to hide the fact that the resource has changed.
To be clear, I'm looking for a way to modify the class files after a PDE build, without inducing a new build, and without disabling the workspace auto-build property.
Thanks for your answers.

I am a little unclear as to what you are describing.
You mention that you want this to work for PDE builds, but PDE builds happen largely outside of the workspace, using Ant scripts. They do not use IResource, Builder, or IResourceDeltaEvent.
I am guessing that you don't really mean PDE builds, but rather the building of plugin projects inside of the workspace.
In general, Eclipse (JDT in particular) expects to have complete control over the output folders. However, there is an option under Preferences -> Java -> Compiler -> Building -> Output folder called "Rebuild class files modified by others"; ensure that it is disabled, and Eclipse will not try to rebuild class files that you touch. If your builder only touches class files, it will not trigger other builds after it changes them. The only thing is that you need to be careful not to compile things twice (and I think this is the problem that you are describing).
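For illustration, here is a rough sketch of a builder that only touches .class files in the output folder; the class name is illustrative and manipulateClassFile(...) is a hypothetical helper standing in for the call to the iPOJO manipulator:

import java.util.Map;

import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.IFolder;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.IncrementalProjectBuilder;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.jdt.core.IJavaProject;
import org.eclipse.jdt.core.JavaCore;

// Illustrative sketch only: a builder that reads and rewrites nothing but
// .class files in the Java output folder.
public class PojoizationBuilder extends IncrementalProjectBuilder {

    @Override
    protected IProject[] build(int kind, Map<String, String> args, IProgressMonitor monitor)
            throws CoreException {
        IJavaProject javaProject = JavaCore.create(getProject());
        IFolder output = getProject().getWorkspace().getRoot()
                .getFolder(javaProject.getOutputLocation());

        // Visit only .class files; source folders are never touched, so the
        // Java builder has no reason to schedule another compilation.
        output.accept(resource -> {
            if (resource.getType() == IResource.FILE
                    && "class".equals(resource.getFileExtension())) {
                manipulateClassFile((IFile) resource, monitor);
            }
            return true;
        });
        return null;
    }

    private void manipulateClassFile(IFile classFile, IProgressMonitor monitor) {
        // Hypothetical: run the iPOJO manipulator on the bytes and write them
        // back with IFile.setContents(...) so the workspace stays in sync.
    }
}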
Alternatively, it may be easier for you to implement a CompilationParticipant (via the org.eclipse.jdt.core.compilationParticipant extension point). This will let you know exactly when JDT performs a compilation and exactly what it compiles.
Additionally, you will be notified of reconcile operations (i.e., changes in working copies that have not been saved). This may be useful if you want to manipulate files as you type.
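A rough sketch of such a participant (the class name and nature id are illustrative); you would register the class in your plugin.xml through the org.eclipse.jdt.core.compilationParticipant extension point:

import org.eclipse.core.runtime.CoreException;
import org.eclipse.jdt.core.IJavaProject;
import org.eclipse.jdt.core.compiler.CompilationParticipant;

// Illustrative participant, registered through the
// org.eclipse.jdt.core.compilationParticipant extension point.
public class PojoizationParticipant extends CompilationParticipant {

    @Override
    public boolean isActive(IJavaProject project) {
        // Only participate in projects carrying the (illustrative) iPOJO nature.
        try {
            return project.getProject().hasNature("org.example.ipojo.nature");
        } catch (CoreException e) {
            return false;
        }
    }

    @Override
    public void buildFinished(IJavaProject project) {
        // JDT has just finished writing the .class files for this project,
        // so this is a safe point to run the manipulator over the output folder.
    }
}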

Related

Adding two new phases to an Xcode framework project

I am building a project on GitHub written in Objective-C. It resolves MAC addresses down to manufacturer details. The lookup table is currently stored as a text file, manuf.txt (from the Wireshark project), which is parsed at run-time; this is costly. I would prefer to compile this down to archived objects at build-time and load those instead.
I would like to amend the build phases such that I:
Build a simple compiler
Run the compiler, parsing manuf.txt and outputting archived objects
Build the framework
Copy the archived objects into the framework
I am looking for wisdom on how to achieve steps 1 and 2 using Xcode v7.3 as Xcode provides only a Copy Files phase or a Run Script phase. An example of other projects achieving similar goals would be inspiring.
I suspect that what you are asking is possible, but tricky. The reason is that you will need to write a bunch of class files and then dynamically add them to the project.
Firstly you will need to employ a run script phase to run various tools from the command line to parse your file and generate a number of class files from it. I would suggest looking into various templating engines. For example appledoc uses moustache templates to generate API documentation files. You could use the same technique to generate header and implementation files.
Next, rather than generating archived objects and trying to import them into a framework, I think you may be better off generating raw source code, adding it to the project, and compiling it into the framework. This is probably simpler in the long run.
To automatically include the generated code I would look into (which means I haven't actually tried this :-) adding a folder reference to the project rather than an Xcode group. Folder references are an option in the 'Add files to ...' dialog.
Folder references refer to a directory and automatically add the entire contents of that directory to a project. So you can use one to point to the directory where you have generated the source code. This is a much better option than trying to manipulate the project or injecting things into an established framework.
I would prefer to parse the file at runtime. After launch, you can check for already existing output; otherwise, parse it once.
However, I have to do something similar at Objective-Cloud. I simply added a run script build phase and put the compiler call into it.

How can I run something when all extensions are available?

I'm trying to understand what is safe vs. not safe with respect to the Eclipse plugin lifecycle.
Background
Something in the Eclipse/RCP/OSGi framework allows for circular dependencies between bundles by allowing bundles to provide extension points. If bundle X provides an extension point, bundle Y may depend on bundle X, provide an extension that implements an interface or extends a class known to X, and make that extension available to bundle X.
Then there's the promise of activators: as far as I understand, it is promised that your activator's start(BundleContext) method will be called before any class in your bundle is made available to any other bundle, and that your dependencies' start(...) methods will have been called before yours.
Limitations/Possible Contradictions
Now, I'm ready to describe my conundrum: I would like to retrieve all the providers of a specific extension point as soon as possible; the easy way to do this would appear to be in the activator of my bundle.
However, if what I've described about the promises that the Eclipse/RCP/OSGI framework makes is true, then I'm pretty sure it shouldn't be possible for me to do that during activation:
Either:
(1) I'll have a reference to classes provided by one of my dependencies before their start(...) method has been called, or
(2) my dependency's start(...) method will have to be called before mine, or
(3) no violations will occur, but I'll retrieve zero extensions, because the plugins that depend on me couldn't be started before me, so their implementations of my extension point are not yet available.
Why I Need Extensions at Startup
My challenge is that I need to load some data ASAP after the startup of my plugin, but I need to ensure that my extensions are loaded first, because the extensions in question are extensions to the data format of the data that I need to load; if I load the data first, it fails or becomes corrupted.
I'm also wondering whether my picture of the Eclipse plugin lifecycle is correct, because, despite searching for discussions of the plugin lifecycle, I haven't come across any warnings about its limitations; I'm fairly certain it must be possible to do things wrong and create serious problems, and I'd like to understand under what circumstances things would go wrong so I can avoid creating problems.
The extension point registry accessed by the IExtensionRegistry interface will tell you about extension points without starting any of the plugins involved.
IExtensionRegistry extReg = Platform.getExtensionRegistry();
In the registry for an extension point you will have a number of IConfigurationElement entries describing the individual extensions declared by plugins. It is only when you call the createExecutableExtension method of this interface that the contributing plugin is started.
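For illustration, a minimal sketch of reading such an extension point early on; the extension point id and the "class" attribute name are assumptions about your own extension point:

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.Platform;

public final class DataFormatExtensions {

    // Collects all contributions to a hypothetical extension point without
    // starting any contributing plugin; a plugin is only started when
    // createExecutableExtension is called on one of its elements.
    public static void loadDataFormatExtensions() throws CoreException {
        IExtensionRegistry extReg = Platform.getExtensionRegistry();
        IConfigurationElement[] elements =
                extReg.getConfigurationElementsFor("org.example.myplugin.dataFormat");
        for (IConfigurationElement element : elements) {
            // "class" must match the attribute name declared in your extension point schema.
            Object extension = element.createExecutableExtension("class");
            // ... cast to your own interface and register it with your data loader ...
        }
    }
}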
Note: A plugin's activator start method is not normally run until Eclipse needs to run some other code in the plugin - it does not run at Eclipse startup unless you force it to.

IntelliJ multi-project

Moving to IntelliJ, I'm trying to properly understand the logic behind its project structure. I come from Eclipse. After reading for a while, I understood the relation between workspace and project, and then between project and modules. However, something that is puzzling me is the logic of the default project configuration in IntelliJ. Indeed, when you create a project, there is an initial module which, to a certain extent, is equivalent to the project itself. To be more precise, the initial module folder is the project folder. This is kind of confusing to me. Then, when you add more modules, they are sub-modules of that module.
My first question is: what is the rationale for making this first module equivalent to the project folder?
Following on from this, I would further ask: what is the point of having modules as sub-modules of other modules?
In Eclipse I simply used to have different projects (i.e. modules), independent from each other, adding dependencies as necessary. So how does the IDEA solution make this better, and if it doesn't, what is the rationale here?
I saw that one can start with an empty project and then add modules to it. However, in that case, the added modules are subfolders of the project, and therefore there is no initial module equivalent to the project folder. So why this difference, and what is the rationale behind it?
What would be the better approach, the first or the second?
Would it be OK to have this first initial module with no src or test folder, but just with the proper facet, so as to propagate it to the sub-modules?
I would appreciate it if someone could explain the rationale behind all of this.
I will move to SBT soon (i.e. the Maven structure, which I suppose inspired all modern IDE project structures); if anyone wants to explain within that context, fine, but I want to understand the rationale in IntelliJ first.
Many thanks,
-M-
PS: What I'm looking for is some advice on multi-module project structure in IntelliJ, as I'm moving my Eclipse workspaces to it.
I think that it's not uncommon for projects to be relatively small, so they don't need fancy modules with dependency management etc. In that case, I find the default project created by IntelliJ to fit perfectly my needs: no need to add submodules, everything is directly in the parent project, it reduces the structure to its bare minimum.
On the other hand, big projects with submodules will likely resemble the structure of a Maven multi-module project (perhaps SBT too, but I don't know that tool at all). You have a parent root which acts as a container for submodules. The parent project may also store configuration (a default SDK, a language level, etc.) that will be inherited by the submodules. The actual code will be contained in the submodules.
Regarding your questions, it all depends on the kind of project you are developing. For a small codebase, you could keep a simple project with no submodule. For bigger codebases, you can either create modules manually, or import an existing Maven/SBT/whatever project, which will automatically create modules reflecting the imported structure.

What's the best approach to incremental compilation when building a DSL using Eclipse?

As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file and separately I also have a org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is two fold:
Task one above is performed by the builder and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that they can both access, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening Eclipse, I don't want to have to rebuild the whole project just to re-populate my type store. My best solution so far is to persist this data somewhere and then repopulate my store from it (perhaps upon project open). Is this how other incremental compilers typically do it? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code. And this information is really only used for editing assistance (content assist, navigation, etc.).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change is a change that can affect outside files (e.g., adding/removing a non-private field or method)), then B.java will be recompiled. A.java will not be recompiled since there was no structural change in B.java.
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, then you can make your type store available through your plugin's Bundle activator instance. The e4 project does allow dependency injection, which is probably even better (but it is not really a part of the core Eclipse APIs).
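As a rough illustration of the activator-based option (MyDslPlugin and TypeStore are hypothetical names for your own classes):

import org.eclipse.core.runtime.Plugin;
import org.osgi.framework.BundleContext;

// Hypothetical bundle activator that owns the shared type store; both the
// builder and the editor reach it via MyDslPlugin.getDefault().getTypeStore().
public class MyDslPlugin extends Plugin {

    private static MyDslPlugin plugin;
    private TypeStore typeStore; // your in-memory representation of compiled types

    @Override
    public void start(BundleContext context) throws Exception {
        super.start(context);
        plugin = this;
        typeStore = new TypeStore();
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        plugin = null;
        super.stop(context);
    }

    public static MyDslPlugin getDefault() {
        return plugin;
    }

    public TypeStore getTypeStore() {
        return typeStore;
    }
}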
I think persisting the information on the file system is your best bet. The only real way to determine incremental compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it. The information is stored in your workspace's .metadata directory, under the org.eclipse.core.resources plugin's area. You can have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
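A minimal sketch of persisting the store in the plug-in's state location, assuming the hypothetical TypeStore above is Serializable:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

import org.eclipse.core.runtime.IPath;

// Hypothetical save/load helpers; Plugin.getStateLocation() returns a
// plug-in private directory inside the workspace's .metadata area.
public final class TypeStorePersistence {

    private static File storeFile() {
        IPath path = MyDslPlugin.getDefault().getStateLocation().append("typestore.dat");
        return path.toFile();
    }

    public static void save(TypeStore store) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(storeFile()))) {
            out.writeObject(store);
        }
    }

    public static TypeStore load() throws IOException, ClassNotFoundException {
        File file = storeFile();
        if (!file.exists()) {
            return null; // nothing persisted yet: fall back to a full build
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (TypeStore) in.readObject();
        }
    }
}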
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.

How do I test the final built product of an iOS static library, as opposed to its constituent classes?

I have written a static library to re-use some code between iOS projects, let's call it MyLib.
While MyLib has good unit test coverage, it interacts heavily with external resources, and I'd like to make sure that it does the right thing when it's hooked up with other pieces of an actual application. In short, I would like to add an application test target to my static library in addition to the unit test target. No drama there.
In thinking about the general setup, I also wondered if it wouldn't be a good idea to ensure that my library's application test target linked against the actual libMyLib.a binary artifact, thereby putting that step of the process under test as well.
So two questions:
Is this even necessary? Is there any significant risk that the final libMyLib.a binary product could behave differently from the series of compiled .m files that my unit tests already exercise? (A possible answer would be that it guards against the accidental exclusion of one of those .m files from the eventual build target.)
How would one make certain that the application test target was linking against libMyLib.a? Is this configurable within the build settings? Is any custom build scripting necessary?
I apologize in advance for the basic question if this is common knowledge, I'm slowly coming up to speed on build process, linking, etc. in general, not just how it is implemented using Apple's SDKs.
This is actually accomplished fairly easily. In your static library's project:
(Assumes Xcode 3)
Create a new application target.
"Get Info" on the application target to edit its properties.
Click the "General" Tab at the top
Click the "+" button under the "Direct Dependencies" pane, add your static library build target. This will build the static library BEFORE you build the new application target.
Click the "+" button under the "Linked Libraries" pane, add libMyLib.a (or whatever your library is called)
Your project should now link against the actual built artifact.