AspectJ - compile time - Weave types into JDK classes - aop

I need to add fields to JDK classes, e.g. java.lang.String.
First I tried:
declare parents: ( * && !java.lang.Object ) implements VistaInt;
public String[] VistaInt.abc;
This, however, doesn't work. It throws a warning:
this affected type is not exposed to the weaver:
org.aspectj.lang.Signature [Xlint:typeNotExposedToWeaver]
So I researched it on the internet and found out that it is harder than it seems: AspectJ doesn't support instrumenting JDK classes directly, as explained here:
http://www.inf.usi.ch/faculty/binder/documents/pppj08.pdf
But that paper proposes something called the FERRARI framework, a tool for AspectJ that should allow instrumenting JDK classes.
So I kept searching for it and got here:
http://dev.eclipse.org/mhonarc/lists/aspectj-dev/msg02520.html
But none of these links work, and I was not able to find any other source, tool, or anything else that would help me.
Do you have any idea where to find this library, or how to weave fields into JDK types?
Thank you!

First you need to locate the rt.jar that your Eclipse project is using. This is most likely the default for your machine, but to check you can right-click on the JRE System Library icon in your project and click on Properties and Installed JREs. The rt.jar file is under lib.
Once you have it, you will need to weave it from the command line (you might need to download a separate ajc compiler). Say you want to weave MyAspect.aj; you would run:
ajc -inpath rt.jar MyAspect.aj -outjar newrt.jar
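For reference, here is a minimal sketch of what MyAspect.aj might contain, reusing the VistaInt interface and abc field from the question (the type pattern is just the one you already tried):
// VistaInt.java - the marker interface the field is declared on
public interface VistaInt {}
// MyAspect.aj
public aspect MyAspect {
    // make every woven type except java.lang.Object implement VistaInt
    declare parents: ( * && !java.lang.Object ) implements VistaInt;
    // inter-type declaration: add a String[] field to VistaInt implementers
    public String[] VistaInt.abc;
}
If VistaInt lives in its own source file, pass it to ajc as well (for example ajc -inpath rt.jar MyAspect.aj VistaInt.java -outjar newrt.jar) so that it ends up inside newrt.jar and is visible to the woven classes at runtime.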
You then need to make sure that your code uses this library by putting newrt.jar on the bootclasspath ahead of rt.jar. Running from the command line, you do this:
java -Xbootclasspath/p:<path to newrt.jar> MyApplication
In Eclipse, you add -Xbootclasspath/p:<path to newrt.jar> to the VM arguments of your run configuration.
However, I would not recommend modifying java.lang.String as JVMs often treat this class specially. But you can give it a go if you want :)
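If you do give it a go, a quick sanity check might look like this (assuming the VistaInt interface from the question, and that it is visible on the boot class path too, e.g. woven into newrt.jar along with aspectjrt.jar):
public class WeaveCheck {
    public static void main(String[] args) {
        // prints true only if 'declare parents' was actually woven into java.lang.String
        System.out.println(VistaInt.class.isAssignableFrom(String.class));
    }
}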
Note
I believe that the FERRARI framework you refer to is for LTW (Load Time Weaving), while this discussion has been about a CTW (Compile Time Weaving) solution. If you want to do LTW then you're going to have difficulties, as custom class loaders can't load java.* classes, so you can't weave these at load time. Your link suggests that people have attempted a workaround, but I don't know anything about this.

Related

How can I run something when all extensions are available?

I'm trying to understand what is safe vs. not safe with respect to the Eclipse plugin lifecycle.
Background
Something in the Eclipse/RCP/OSGI framework allows for circular dependencies between bundles by allowing bundles to provide extension points. If bundle X provides an extension point, Bundle Y may both depend on bundle X, and provide an extension that implements an interface or extends a class known to X, and make that extension available to bundle X.
Then there's the promise of activators: as far as I understand, it is promised that your activator's start(BundleContext) method will be called before any class in your bundle is made available to any other bundle, and that your dependencies' start(...) methods will have been called before yours.
Limitations/Possible Contradictions
Now, I'm ready to describe my conundrum: I would like to retrieve all the providers of a specific extension point as soon as possible; the easy way to do this would appear to be in the activator of my bundle.
However, if what I've described about the promises that the Eclipse/RCP/OSGI framework makes is true, then I'm pretty sure it shouldn't be possible for me to do that during activation:
Either:
(1) I'll have a reference to classes provided by one of my dependencies before their start(...) method has been called, or
(2) my dependency's start(...) method will have to be called before mine, or
(3) no violations will occur, but I'll retrieve zero extensions, because the plugins that depend on me couldn't be started before me, so their implementations of my extension point are not yet available.
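To make this concrete, here is roughly what I have in mind; the class and method names are made up for illustration:
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class MyPluginActivator implements BundleActivator {
    @Override
    public void start(BundleContext context) throws Exception {
        // Collect all providers of my extension point before loading any data.
        // Is this safe here, and will contributions from plugins that depend
        // on this bundle even be visible at this point?
        loadDataFormatExtensions();
        loadData();
    }
    @Override
    public void stop(BundleContext context) throws Exception {
    }
    private void loadDataFormatExtensions() {
        // query my extension point (details omitted)
    }
    private void loadData() {
        // load the data whose format the extensions extend (details omitted)
    }
}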
Why I Need Extensions at Startup
My challenge is that I need to load some data ASAP after the startup of my plugin, but I need to ensure that my extensions are loaded first, because the extensions in question are extensions to the data format of the data that I need to load; if I load the data first, it fails or becomes corrupted.
I'm also wondering whether my picture of the Eclipse plugin lifecycle is correct, because, despite searching for discussions of the plugin lifecycle, I haven't come across any warnings about its limitations; I'm fairly certain it must be possible to do things wrong and create serious problems, and I'd like to understand under what circumstances things would go wrong so I can avoid creating problems.
The extension point registry, accessed via the IExtensionRegistry interface, will tell you about extension points and their extensions without starting any of the plugins involved.
IExtensionRegistry extReg = Platform.getExtensionRegistry();
In the registry for an extension point you will have a number of IConfigurationElement entries describing the individual extensions declared by plugins. It is only when you call the createExecutableExtension method of this interface that the contributing plugin is started.
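For example, a sketch along these lines lists the declared extensions and only starts a contributing plugin when you actually instantiate its class ("com.example.myplugin.dataFormat" and the "class" attribute name are hypothetical; use your own extension point's ids):
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.Platform;
public class DataFormatExtensionLoader {
    // hypothetical extension point id; substitute your own
    private static final String EXTENSION_POINT_ID = "com.example.myplugin.dataFormat";
    public static void loadExtensions() throws CoreException {
        IExtensionRegistry extReg = Platform.getExtensionRegistry();
        for (IConfigurationElement element : extReg.getConfigurationElementsFor(EXTENSION_POINT_ID)) {
            // reading attributes only queries the registry; no plugin is started
            String contributor = element.getContributor().getName();
            // only this call activates the plugin that contributed this element
            Object contribution = element.createExecutableExtension("class");
            // ... cast 'contribution' to your data-format interface and register it
        }
    }
}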
Note: A plugin's activator start method is not normally run until Eclipse needs to run some other code in the plugin - it does not run at Eclipse startup unless you force it to.

How to make IntelliJ IDEA recognise code created by macros?

Background
I have an sbt-managed Scala project that uses the usual sbt project layout for Scala projects with macros, i.e., a subproject that contains the macros and a main project that is the actual application and depends on the macro subproject. The macros are macro annotations which, in essence, generate companion objects for regular classes. The generated companion objects declare, amongst other members, apply/unapply methods.
I used the sbt-idea plugin to generate a corresponding IntelliJ IDEA project, and I use the sbt console from IDEA's sbt-plugin to compile and run my Scala application.
Everything works more or less fine, except that the generated companion objects, and more importantly, their members such as apply/unapply, are not recognised by IDEA. Thus, I get a squiggly line everywhere I use, e.g., an apply method.
My setup is IntelliJ IDEA CE 133.471 with the plugins SBT 1.5.1 and Scala 0.28.363 on Windows 7 x64.
Questions
How do I get IntelliJ IDEA to recognise code (classes, objects, methods, ...) that has been generated by Scala macros (macro annotations, to be precise)?
Are other IDEs, e.g., Eclipse, known to work better in such a setting?
Related
This question (which is less detailed) essentially asks the same, but has not gotten a reply yet (2014-02-26).
According to a JetBrains developer the feature I requested is on their long-term to-do list, but won't be implemented any time soon (2014-03-05).
With the latest Scala plugin build, there is an API which can be used to write your own plugin to support your macros: http://blog.jetbrains.com/scala/2015/10/14/intellij-api-to-build-scala-macros-support/
Now, everyone can use this API to make their macros more friendly to their favorite IDE. To do that, you have to implement SyntheticMembersInjector, and register it in the plugin.xml file:
<extensions defaultExtensionNs="org.intellij.scala">
    <syntheticMemberInjector implementation="org.jetbrains.example.injector.Injector"/>
</extensions>
Seems like there's limited support, if any.
Quote from this link: http://blog.jetbrains.com/scala/2014/01/23/heading-to-the-perfect-scala-code-analysis/
Alexander Podkhalyuzin says:
January 30, 2014 at 10:13 am
We started support for Scala macros, but it’s not a simple task, so I can’t promise it will be done soon.
Best regards,
Alexander Podkhalyuzin.

Why are there no stubs for interfaces in Microsoft.Fakes

I'm about to use Microsoft.Fakes in my unit tests. I read a tutorial where Microsoft.Fakes creates a stub for an interface (implemented inside the solution), but in my solution stubs are available only for classes.
Can you tell me what I should do to get stubs for all the interfaces as well? Both interfaces and classes are defined as public.
Fakes generates stubs for both classes and interfaces by default. You may have bumped into one of the current limitations, which is causing Fakes to skip your interface. To troubleshoot,
open the .Fakes file and set Verbosity attribute of the Fakes element to "Verbose"
open TOOLS -> Options -> Projects and Solutions -> Build and Run and change MSBuild output verbosity to "Detailed"
build the project that contains the .Fakes file
open the Output window and search for the GenerateFakes task; review its output for information that explains why a particular interface was not stubbed.
In the upcoming Quarterly Update 1 of Visual Studio 2012, this information will be reported as warnings in the Error List window, regardless of the logging settings, which should make troubleshooting much easier.
You may also not have drilled down to the proper namespace. The Fakes are generated in the same namespace as the interfaces in your assembly under test. So, for example, if you're testing MyApp.Validators.IRequestValidator, in your unit test you'll have to use new MyApp.Validators.Fakes.StubIRequestValidator() as opposed to new MyApp.Fakes.StubIRequestValidator().

How to do post-build modifications in an Eclipse builder

I'm currently working on an Eclipse plug-in to provide iPOJO manipulation support.
The principle of iPOJO is to modify the .class files generated by the Java compiler to inject some methods and to add/update an entry to the Manifest.mf file.
Currently, my plug-in provides a project nature and adds a builder, placed at the end of the project's builder list, that calls the iPOJO manipulator.
I use it on PDE projects.
The complete process works, but I have a problem:
When my builder has finished its job (and the building process), the whole building process restarts, erasing the output folder and calling my builder again.
If I don't add a safety trick, it makes the building process loop over and over.
As I work on IResource, an IResourceDeltaEvent must be sent at the end of the building process, so I think the best way to avoid that kind of problem is to hide the fact that the resource has changed.
To be clear, I'm looking for a way to modify the class files after a PDE build, without inducing a new build, and without disabling the workspace auto-build property.
Thanks for answers.
I am a little unclear as to what you are describing.
You mention that you want this to work for PDE builds, but PDE builds happen largely outside of the workspace using ant scripts. They do not use IResource, Builder, or IResourceDeltaEvent.
I am guessing that you don't really mean PDE builds, but rather the building of plugin projects inside of the workspace.
In general, Eclipse (JDT in particular) expects that it has complete control over the output folders. However, there is an option in Preferences -> Java -> Building -> Output Folder called "Rebuild class files generated by others". Ensure that this is disabled. Eclipse should not try to rebuild class files that you touch. If your builder only touches class files then it will not trigger other builds after it changes the class files. The only thing is that you need to be careful not to compile things twice (and I think this is the problem that you are describing).
Alternatively, it may be easier for you to implement a CompilationParticipant (and the org.eclipse.jdt.core.compilationParticipant extension point). This will let you know exactly when JDT performs a compilation and exactly what it compiles.
Additionally, you will be notified of reconcile operations (i.e., changes in working copies that have not been saved). This may be useful if you want to manipulate files as you type.
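A rough sketch of that approach, assuming you register the class through the org.eclipse.jdt.core.compilationParticipant extension point (the class name and nature id below are made up):
import org.eclipse.core.runtime.CoreException;
import org.eclipse.jdt.core.IJavaProject;
import org.eclipse.jdt.core.compiler.CompilationParticipant;
public class IPojoCompilationParticipant extends CompilationParticipant {
    @Override
    public boolean isActive(IJavaProject project) {
        // participate only for projects carrying the iPOJO nature
        try {
            return project.getProject().hasNature("com.example.ipojo.nature");
        } catch (CoreException e) {
            return false;
        }
    }
    @Override
    public void buildFinished(IJavaProject project) {
        // JDT is done compiling this project: run the iPOJO manipulator over the
        // generated .class files and update MANIFEST.MF here, after the Java build.
    }
}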

Apache Ivy Configurations

I'm slowly beginning to understand the importance of module configurations within the Ivy universe. However, it is still difficult for me to see clearly how the same chunk of code could have different configurations with different dependency requirements (the one exception is the case of test configs that require JUnit on top of the normal dependencies -- I actually understand that 100%!).
For instance, take the following code:
package org.myorg.myprogram.core;
// Import an object from a dependency
import org.someElse.theirJAR.Widget;
public class MyCode
{
    public MyCode()
    {
        if (Widget.SOME_STATIC == 3)
            System.out.println("Fizz");
        else
            System.out.println("Buzz");
    }
}
Now, aside from the fact that this is terrible code, I just don't see how my program (which, let's pretend, is JARred up into MyProgram.jar) could be set up to have multiple "configurations", some of which may require theirJAR and its Widget class, and others that don't. To me, if we fail to provide MyCode with a Widget it will die at runtime, always.
Again, I understand the necessity for test configurations; just not anything else (I have also asked questions about compile- vs run-time dependencies, and I guess I also see the necessity for those as well). But beyond test configs, compile-time configs, and runtime configs, what other module configurations could you possibly need? How would MyCode need a Widget in some cases, and not in other cases, yet still run perfectly fine without a Widget?
I greatly appreciate any help wrapping my brain around this!
Hibernate is a good example. Hibernate supports multiple cache implementations to act as its level-2 cache. You don't want to transitively depend on all the possible caches, only the one you use.
In general, we use the typical compile, test, runtime set of configurations.
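For illustration, an ivy.xml sketch along these lines keeps the optional cache dependency out of the default runtime configuration (the module names and revisions are invented):
<ivy-module version="2.0">
    <info organisation="org.myorg" module="myprogram"/>
    <configurations>
        <conf name="compile" description="needed to build MyProgram.jar"/>
        <conf name="runtime" extends="compile" description="normal runtime dependencies"/>
        <conf name="ehcache" extends="runtime" description="runtime plus an optional level-2 cache"/>
        <conf name="test" extends="runtime" description="adds JUnit"/>
    </configurations>
    <dependencies>
        <dependency org="org.someElse" name="theirJAR" rev="1.0" conf="compile->default"/>
        <dependency org="net.sf.ehcache" name="ehcache" rev="2.6.0" conf="ehcache->default"/>
        <dependency org="junit" name="junit" rev="4.11" conf="test->default"/>
    </dependencies>
</ivy-module>
A consumer that resolves the runtime configuration never pulls in the cache; one that resolves ehcache gets everything.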
To add to SteveD's answer, remember that dependencies can be more than just .jar files. Some dependencies come with source and javadoc files, release notes, license files, etc. Multiple configurations of the dependency might let you select the subset of files you wish to resolve.
You might also want to use configurations to control the contents of different distributions. For example, you might want to release the jar on its own (the "master" configuration in Maven parlance) and additionally build a tar package containing all runtime dependencies, with (or without) source code.
Another use for configurations is when you target multiple platforms. I often release Groovy scripts packaged to run as standalone jars or as Tomcat web applications.