IDEA 10.5.2 AspectJ compiler - can't determine superclass of missing type org.springframework.transaction.interceptor.TransactionAspectSupport

Trying to build a module with Spring aspects gives me:
can't determine superclass of missing type org.springframework.transaction.interceptor.TransactionAspectSupport
It works in other modules, so what's up with this one? A missing dependency?
/S

This is unfortunately an error that occurs from time to time when developing with AspectJ.
Often, in the classpath of any Java application, there are some "dead" classes, that is, classes that sit inside some jar but are never used.
These classes often also miss their dependencies. For example, Velocity (to name one, but most libraries do this) ships with classes to bridge many logging facilities, like log4j, java.util.logging, etc. If you want to use one of those bridges, you also need to include its dependency (like log4j.jar); otherwise, if you don't use it, you can simply omit that dependency.
This is not a problem per se when merely using the library, because those classes will never be loaded. However, when you use AspectJ, things change a bit.
Suppose you have a pointcut like :
execution(int MyClass+.getSomething())
While this pointcut seems very specific, it says "a method named getSomething in MyClass or any of its subclasses". That means that to know whether a certain class matches the pointcut, AspectJ has to walk that class's entire supertype hierarchy while weaving.
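For illustration, a minimal sketch of such a hierarchy (both class names are hypothetical); the pointcut matches getSomething() in both classes, which is why AspectJ must resolve the supertype chain of every candidate class:
public class MyClass {
    public int getSomething() {   // matched: declared in MyClass itself
        return 1;
    }
}

class SubClass extends MyClass {
    @Override
    public int getSomething() {   // also matched: SubClass is a subtype of MyClass
        return 2;
    }
}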
But what happens if AspectJ tries to do that on a "dead class" like the one mentioned above? It will search for the superclass and fail, because the class is never used and its dependencies are not satisfied.
I usually instruct AspectJ to only warn me in this situation, instead of raising a blocking error, because nine times out of ten this happens on dead code and can be safely ignored.
Another way is to spot which pointcut is causing AspectJ to inspect that class, and to rewrite it so that its scope is stricter. However, this is not always possible.
A dirty, but quick, hack could be to write:
execution(... MyClass+ ....) && !this(org.springframework.....)
This is (usually) optimized by AspectJ so that the !this(....) check fails before the complete execution pointcut is evaluated... but it ties your pointcut to a specific situation, so it is useful only for testing or for last-second patching of a running system while you search for a better solution.
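In annotation-style AspectJ, such a guarded pointcut might look like the following minimal sketch (the aspect name, the empty advice body, and com.example.DeadType, standing in for the problematic class, are all hypothetical):
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class GuardedAspect {

    // The !this(...) test is cheap, and AspectJ (usually) evaluates it first,
    // so weaving no longer has to resolve the missing type's hierarchy.
    @Before("execution(int MyClass+.getSomething()) && !this(com.example.DeadType)")
    public void beforeGetSomething() {
        // advice body
    }
}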
The one to blame, in this case, is not AspectJ, but the libraries that include dead classes, which could (and should) be placed in separate modules. Many libraries don't do this to avoid "module proliferation" (as in, each library would have to release a separate module for each logging system and so on), which is a fair argument, but it can be solved more easily and cleanly with modern dependency management systems (like Maven, Ivy, etc.) than by packing single jar files with tons of classes with unmet dependencies and then stating in the documentation that you need a given dependency to load a given class.

You'll need to add the spring-tx dependency to clear this:
http://mvnrepository.com/artifact/org.springframework/spring-tx
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <version>${spring.version}</version>
</dependency>

I just solved a similar problem by running a Maven clean.
The error message was almost the same, but it was about my own classes. So I think the answer from Simone Gianni should be correct: there were some incorrect classes which had been generated by the IDE for some reason, so just remove them and it should be fine.

AbstractTransactionAspect from spring-aspects references TransactionAspectSupport from spring-tx - do you have it in your dependencies?

Add it as an optional dependency, if it is not actually needed at runtime:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <optional>true</optional>
</dependency>
Or change the Xlint option to warning (or ignore).

Related

In CMake, is there a way to set properties on all target dependencies?

In CMake, we can set target properties as either PRIVATE, PUBLIC, or INTERFACE. Both PUBLIC and INTERFACE properties are inherited by any targets that depend on the current target. However, unless I'm missing something, there doesn't seem to be an easy way to define a property that must propagate in the other direction (i.e., inherited by dependencies of the current target).
Most linkers/compilers require that all linked targets have the same value for certain properties (e.g., the exception handling model). If we want to change one of these properties for an executable it requires that it be set on all of its dependencies. Often these dependencies are submodules in our code where we can't modify their CMakeLists.txt files for our specific use-case. This leaves us with two options:
Set a global property (e.g., CMAKE_CXX_FLAGS or add_compile_options) that propagates to all targets in any subdirectories regardless of whether they are dependencies or not.
Explicitly set the properties on each dependent target using target_compile_options. This gets excessive and repetitive depending on the number of dependencies.
It would be nice if there was a functionality that would pass properties down only to dependency targets without having to specify them all individually. Does anyone know how to do this?
For the case of compiler flags that must be consistent for an entire program (including parts that are dynamically linked), such as MSVC's exception handling model, I think the set-something-global approach is suitable. To me, it seems pragmatic and slightly more robust than adding flags to each third-party target one by one (i.e., what if you forget to handle one? What if third-party targets are added or removed in a new version? It seems like a ripe opportunity for human error).
Setting the environment variable CMAKE_<LANG>_FLAGS is a good start. You may need to do more if you are building external projects via ExternalProject.
A word of caution for settings like the exception handling model: you might be tempted to hardcode this global setting in the CMake files for your project. If your project is used by people other than just you or your company, and especially if its main component is a library and not an executable, it's good practice not to do that. Don't take away your users' ability to choose something like this (unless for some reason your library requires a certain exception handling model, in which case I would still leave the global setting up to them, provide documentation stating the requirement, and look into emitting a CMake warning if a user doesn't comply). Instead, use a feature like CMake presets, or only set it if the project is the top-level project.
An interesting side note: CMake currently "hard-codes" /EHsc globally for MSVC builds by default. Here's the ticket discussing this.

Can not build thisJoinPoint lazily for this advice since it has no suitable guard

What is a "suitable guard" and what does it look like?
Linked this question because it refers to the same compiler message, and its answer mentions a guard but not how to create one. I looked through the AspectJ docs but did not find an answer there.
This Lint warning is usually switched off in AJDT (AspectJ Development Tools) within Eclipse, but you can activate it as a warning or even an error in the compiler's Lint settings (I had to do that in order to see it at all when trying to reproduce your issue).
You can just ignore the Lint warning, because basically it only says that for certain pointcuts there is no way to populate the thisJoinPoint object lazily at runtime, since the pointcut has no dynamic component like if(), cflow() or similar. This is actually good news, because it means that all your join points can be determined statically during compile/weave time and are thus faster than dynamic pointcuts. On the other hand, the warning says that the tjp object always has to be created, because for some reason it is also always needed at runtime and thus cannot be instantiated lazily.
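For comparison, a dynamic guard in annotation-style AspectJ might look like the following minimal sketch (the aspect, the enabled flag, and the com.example package are all illustrative); the if() component is the kind of guard behind which thisJoinPoint can be built lazily:
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class GuardedTracingAspect {

    public static volatile boolean enabled = false;

    // if() adds a dynamic component: the advice only runs when the guard
    // passes, so thisJoinPoint need not be created until then.
    @Pointcut("execution(* com.example..*(..)) && if()")
    public static boolean guarded() {
        return enabled;
    }

    @Before("guarded()")
    public void trace(JoinPoint thisJoinPoint) {
        System.out.println(thisJoinPoint.getSignature());
    }
}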

HdfsFileStatus and FileStatus difference

What is the main difference between the two classes?
Mainly, in what situation would I use one and not the other?
org.apache.hadoop.hdfs.protocol package
http://www.sching.com/javadoc/hadoop/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.html
org.apache.hadoop.fs package
https://hadoop.apache.org/docs/r2.6.1/api/org/apache/hadoop/fs/FileStatus.html
HdfsFileStatus is marked with the @InterfaceAudience.Private and @InterfaceStability.Evolving annotations (check the source code). The first annotation means it is intended for internal Hadoop implementations. The second annotation means the class might change (backwards-compatible support might not be available between releases). Basically, you should not use HdfsFileStatus in your code.
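For the public API, a minimal sketch of obtaining a FileStatus through the stable org.apache.hadoop.fs classes (the path is illustrative):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileStatusExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // FileStatus is the stable, public way to inspect file metadata.
        FileStatus status = fs.getFileStatus(new Path("/tmp/example.txt"));
        System.out.println("length: " + status.getLen());
        System.out.println("directory: " + status.isDirectory());
        System.out.println("modified: " + status.getModificationTime());
    }
}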

How to JMock a Singleton

My application has this structure: there's a RepositoryFacade (which is a Singleton) that uses many other ObjectRepository classes that are Singletons as well (UserRepository, etc.).
Now I'd like to test it, mocking the [Object]Repositories. To do that I made each [Object]Repository implement an interface, and then I tried:
final IUserRepository mockIUserRepository = context.mock(IUserRepository.class);
RepositoryFacade.getInstance().setUserRepository(mockIUserRepository);
final User testUser = new User("username");

// expectations
context.checking(new Expectations() {{
    oneOf (mockIUserRepository).save(testUser);
}});

// execute
RepositoryFacade.getInstance().save(testUser);
And in RepositoryFacade I added:
public IUserRepository userRepository = UserRepository.getInstance();
But if I try to run the test, I obtain:
java.lang.SecurityException: class "org.hamcrest.TypeSafeMatcher"'s signer
information does not match signer information of other classes in the same
package
P.S. Originally my RepositoryFacade had no IUserRepository field; I always called UserRepository.getInstance().what_i_want(). I introduced the field to try to use JMock, so if it's not needed I'll be glad to remove that bad use of the Singleton.
Thanks,
Andrea
The error you're getting suggests that you have a classloading issue with the org.hamcrest package, rather than any issue with your singletons. See this question for more on this exception, and this one for the particular problem with hamcrest and potential solutions.
Check your classpath to make sure you're not including conflicting hamcrest code from multiple jars. If you find hamcrest in multiple jars, this may be corrected by something as simple as changing their order in your classpath.
JUnit itself comes in two variants, and one of them may include an old version of hamcrest. Switching to the variant that does not include hamcrest may also fix your problem.
If you can find a way to do it, it would be better in the long run to get rid of the singletons altogether and instead do dependency injection using something like Spring or Guice.
But what you're doing should work, once you deal with the classloading, and it's a reasonable approach to dealing with singletons in a testing context.
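As a sketch of the dependency-injection route, constructor injection alone (no framework needed) already makes the facade testable; the class below reuses the names from the question and is only illustrative:
public class RepositoryFacade {

    private final IUserRepository userRepository;

    // The collaborator is handed in instead of being fetched through
    // UserRepository.getInstance(), so a test can pass in a JMock mock.
    public RepositoryFacade(IUserRepository userRepository) {
        this.userRepository = userRepository;
    }

    public void save(User user) {
        userRepository.save(user);
    }
}
The test setup then becomes simply:
RepositoryFacade facade = new RepositoryFacade(mockIUserRepository);
facade.save(testUser);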

Determining Maven execution phase within a plugin

I have a plugin which transforms the compiled classes. This transformation needs to be done for both the module's classes and the module's test classes. Thus, I bind the plugin to both the process-classes and process-test-classes phases. The problem I have is that I need to determine which phase the plugin is currently executing in, as I must not (cannot, actually) transform the same set of classes twice.
Thus, within the plugin, I would need to know if I'm executing process-classes - in which case I transform the module's classes - or if I'm executing process-test-classes - in which case I do not transform the module's classes and transform only the module's test classes.
I could, of course, create two plugins for this, but this kind of solution deeply offends my sensibilities and is probably against the law in several states.
It seems like something I could reach from my module should be able to tell me what the current phase is. I just can't for the life of me find out what that something is.
Thanks...
Thus, within the plugin, I would need to know if I'm executing process-classes (...) or if I'm executing process-test-classes
AFAIK, this is not really possible.
I could, of course, create two plugins for this, but this kind of solution deeply offends my sensibilities and is probably against the law in several states.
I don't see anything wrong with having two Mojos sharing code but bound to different phases. Something like what the Maven Compiler Plugin does (with its compiler:compile and compiler:testCompile goals).
You can't get the phase, but you can get the execution ID, which you have defined separately for each phase. In the plugin:
/**
 * @parameter expression="${mojoExecution}"
 */
private org.apache.maven.plugin.MojoExecution execution;

...

public void execute() throws MojoExecutionException
{
    ...
    System.out.println( "executionId is: " + execution.getExecutionId() );
}
I'm not sure if this is portable to Maven 3 yet.
Java plugin code snippets:
import org.apache.maven.plugin.MojoExecution;
import org.apache.maven.plugins.annotations.Component;

...

@Component
private MojoExecution execution;

...

execution.getLifecyclePhase()
Use these Maven dependencies (your versions may vary):
<dependency>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-plugin-api</artifactId>
    <version>3.3.1</version>
</dependency>
<dependency>
    <groupId>org.apache.maven.plugin-tools</groupId>
    <artifactId>maven-plugin-annotations</artifactId>
    <version>3.4</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-core</artifactId>
    <version>3.3.1</version>
</dependency>
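Putting the pieces together, a complete Mojo that branches on the bound phase might look like this minimal sketch (the goal name "transform" and the class itself are hypothetical):
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecution;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.Component;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;

@Mojo(name = "transform", defaultPhase = LifecyclePhase.PROCESS_CLASSES)
public class TransformMojo extends AbstractMojo {

    @Component
    private MojoExecution execution;

    @Override
    public void execute() throws MojoExecutionException {
        // getLifecyclePhase() reports the phase this execution is bound to,
        // e.g. "process-classes" or "process-test-classes".
        if ("process-test-classes".equals(execution.getLifecyclePhase())) {
            getLog().info("Transforming test classes only");
        } else {
            getLog().info("Transforming main classes");
        }
    }
}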