How to rerun failed features in karate?

Can anyone help me rerun failed features in Karate? Below are the Cucumber options and the runner test being used for parallel execution:
@CucumberOptions(features = "classpath:features/xxxxx/crud_api",
    format = {"pretty", "html:target/cucumber", "json:target/cucumber/report.json", "rerun:target/rerun/rerun.txt"})

@Test
public void test() throws IOException {
    Results results = KarateRunnerTest.parallel(getClass(), threadCount, karateOutputPath);
    assertTrue("there are scenario failures", results.getFailCount() == 0);
}

Here is my reusable implementation using Karate 1.0's experimental retry framework:
Results retryFailedTests(Results results) {
    System.out.println("======== Retrying failed tests ========");
    Results initialResults = results;
    List<ScenarioResult> retryResults = initialResults.getScenarioResults()
            .filter(ScenarioResult::isFailed)
            .parallel()
            .map(scenarioResult -> initialResults.getSuite().retryScenario(scenarioResult.getScenario()))
            .collect(Collectors.toList());
    for (ScenarioResult scenarioResult : retryResults) {
        results = results.getSuite().updateResults(scenarioResult);
    }
    return results;
}
This Java function takes care of retrying failed scenarios in parallel. You can check the karate-timeline.html report to verify that the failed scenarios were retried in parallel.
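The shape of that helper (re-run each failed entry in parallel, then merge the retry outcomes back over the originals) can be exercised without any Karate dependency. This is a plain-Java sketch under assumptions: the `Map<String, Boolean>` result model and the `rerun` function are illustrative stand-ins, not Karate's API.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Function;
import java.util.stream.Collectors;

public class RetrySketch {
    // Re-run every failed entry in parallel and overwrite the original
    // outcome with the retry outcome, mirroring the retryFailedTests shape.
    static Map<String, Boolean> retryFailed(Map<String, Boolean> results,
                                            Function<String, Boolean> rerun) {
        Map<String, Boolean> retried = results.entrySet().stream()
                .filter(e -> !e.getValue())   // keep only failed scenarios
                .parallel()
                .collect(Collectors.toMap(Map.Entry::getKey,
                                          e -> rerun.apply(e.getKey())));
        results.putAll(retried);              // merge retry outcomes back
        return results;
    }

    public static void main(String[] args) {
        Map<String, Boolean> results = new TreeMap<>();
        results.put("a", true);
        results.put("b", false);
        results.put("c", false);
        // pretend the rerun fixes "b" but "c" stays broken
        retryFailed(results, name -> name.equals("b"));
        System.out.println(results); // {a=true, b=true, c=false}
    }
}
```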

Sharing my solution for when using the standalone JAR, implemented as a RuntimeHook (requires Karate 1.0+):
package retries;

import com.intuit.karate.RuntimeHook;
import com.intuit.karate.core.FeatureRuntime;
import com.intuit.karate.core.Scenario;
import com.intuit.karate.core.ScenarioResult;
import com.intuit.karate.core.ScenarioRuntime;
import com.intuit.karate.core.Tag;
import java.util.HashSet;
import java.util.List;

/**
 * RuntimeHook that implements an @retries tag.
 *
 * Usage:
 * - compile this into a jar with:
 *     javac --release 8 -cp path/to/karate.jar *.java
 *     jar cvf retries-hook.jar *.class
 * - tag your tests like @retries=3 to retry e.g. 3 times (4 total attempts)
 * - invoke karate tests with:
 *     java -cp path/to/karate.jar:retries-hook.jar:path/to/java com.intuit.karate.Main path/to/tests/dir --hook retries.RetriesHook
 */
public class RetriesHook implements RuntimeHook {

    private static final HashSet<Scenario> RETRIES_ATTEMPTED = new HashSet<>();
    private static final HashSet<ScenarioResult> RETRIES_SUCCEEDED = new HashSet<>();

    @Override
    public void afterScenario(ScenarioRuntime sr) {
        if (!sr.isFailed()) {
            return;
        }
        int configuredRetries = 0;
        for (Tag tag : sr.tags) {
            if ("retries".equals(tag.getName())) {
                configuredRetries = Integer.parseInt(tag.getValues().get(0));
                break;
            }
        }
        if (configuredRetries <= 0) {
            return;
        }
        for (Scenario s : RETRIES_ATTEMPTED) {
            if (s.isEqualTo(sr.scenario)) {
                // we've already kicked off retries for this Scenario
                return;
            }
        }
        String scenarioName = sr.scenario.toString();
        RETRIES_ATTEMPTED.add(sr.scenario);
        int retryAttempt = 1;
        while (retryAttempt <= configuredRetries) {
            System.out.println("Scenario " + scenarioName + " failed, attempting retry #" + retryAttempt);
            ScenarioResult retrySr = sr.featureRuntime.suite.retryScenario(sr.scenario);
            if (!retrySr.isFailed()) {
                System.out.println("Scenario " + scenarioName + " passed after " + retryAttempt + " retries");
                // Mark the original ScenarioResult as passed on retry, so it can
                // get filtered out later in afterFeature.
                RETRIES_SUCCEEDED.add(sr.result);
                sr.featureRuntime.result.getScenarioResults().add(retrySr);
                return;
            }
            retryAttempt++;
        }
        System.out.println("Scenario " + scenarioName + " failed all " + configuredRetries + " retries");
    }

    @Override
    public void afterFeature(FeatureRuntime fr) {
        // afterScenario is called before the original ScenarioResult is saved,
        // so we can't use Suite.updateResults() :/
        // Instead, we add the passed ScenarioResult above and then filter out
        // the failed one here.
        if (fr.result.isFailed()) {
            List<ScenarioResult> scenarioResults = fr.result.getScenarioResults();
            scenarioResults.removeIf(sr -> RETRIES_SUCCEEDED.contains(sr));
            fr.result.sortScenarioResults();
        }
    }
}
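The tag lookup above is the only parsing the hook does; its core can be exercised standalone. In this sketch, `Tag` is a hypothetical minimal stand-in for `com.intuit.karate.core.Tag`, whose values hold what follows the `=` in a tag like `@retries=3`.

```java
import java.util.Arrays;
import java.util.List;

public class RetriesTagSketch {
    // Minimal stand-in for Karate's Tag: a name plus the values after "=".
    static final class Tag {
        final String name;
        final List<String> values;
        Tag(String name, List<String> values) { this.name = name; this.values = values; }
    }

    // Mirrors the lookup loop in afterScenario: find a "retries" tag and
    // parse its first value as the retry budget; 0 means "do not retry".
    static int configuredRetries(List<Tag> tags) {
        for (Tag tag : tags) {
            if ("retries".equals(tag.name)) {
                return Integer.parseInt(tag.values.get(0));
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        List<Tag> tagged = Arrays.asList(new Tag("retries", Arrays.asList("3")));
        List<Tag> untagged = Arrays.asList(new Tag("smoke", Arrays.asList()));
        System.out.println(configuredRetries(tagged));   // 3
        System.out.println(configuredRetries(untagged)); // 0
    }
}
```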

This is not something that Karate supports, but in dev mode (using the IDE for example) you can always re-run the failed tests manually.
You seem to be using annotation options not supported by Karate, e.g. format. Read the docs for what is supported; it is limited to features and tags.
EDIT - Karate 1.0 has experimental support for this: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#retry-framework-experimental
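For background, the rerun:target/rerun/rerun.txt option in the question is a Cucumber feature: it writes the locations of failed scenarios so a later run can target only those. Parsing such a file is trivial; here is a sketch assuming whitespace-separated file:line entries, which is the shape Cucumber's rerun formatter commonly produces (the paths below are made up for illustration).

```java
import java.util.ArrayList;
import java.util.List;

public class RerunFileSketch {
    // Split rerun-file content into individual "path/to.feature:line" locations.
    static List<String> parseRerun(String content) {
        List<String> locations = new ArrayList<>();
        for (String token : content.trim().split("\\s+")) {
            if (!token.isEmpty()) {
                locations.add(token);
            }
        }
        return locations;
    }

    public static void main(String[] args) {
        String content = "features/a.feature:12 features/b.feature:5\nfeatures/c.feature:7";
        System.out.println(parseRerun(content));
        // [features/a.feature:12, features/b.feature:5, features/c.feature:7]
    }
}
```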

Related


How to integrate Testrail with Karate Framework [duplicate]

I am new to Java and am using Karate for API automation. I need help integrating TestRail with Karate. I want to tag each scenario with its TestRail test case ID and push the result after the scenario.
Can someone guide me on this? Code snippets would be much appreciated. Thank you!
I spent a lot of effort on this.
Here is how I implemented it; maybe you can follow the same approach.
First of all, you should download the APIClient.java and APIException.java files from the link below.
TestrailApi in github
Then you need to add these files to the following path in your project.
For example: YourProjectFolder/src/main/java/testrails/
In your karate-config.js file, after each test, you can send your case tags, test results and error messages to the BaseTest.java file, which I will talk about shortly.
karate-config.js file
function fn() {
  var config = {
    baseUrl: 'http://111.111.1.111:11111',
  };
  karate.configure('afterScenario', () => {
    try {
      const BaseTestClass = Java.type('features.BaseTest');
      BaseTestClass.sendScenarioResults(karate.scenario.failed,
          karate.scenario.tags, karate.info.errorMessage);
    } catch (error) {
      console.log(error);
    }
  });
  return config;
}
Please don't forget to give a tag to the scenario in the feature file, for example @1111:
Feature: ExampleFeature

Background:
    * def conf = call read('../karate-config.js')
    * url conf.baseUrl

@1111
Scenario: Example
Next, create a runner file named BaseTest.java.
BaseTest.java file
package features;

import com.intuit.karate.junit5.Karate;
import net.minidev.json.JSONObject;
import org.junit.jupiter.api.BeforeAll;
import testrails.APIClient;
import testrails.APIException;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BaseTest {

    private static APIClient client = null;
    private static String runID = null;

    @BeforeAll
    public static void beforeClass() throws Exception {
        String fileName = System.getProperty("karate.options");
        // Login to the API; write your own host, for example https://yourcompanyname.testrail.io/
        client = new APIClient("https://yourcompanyname.testrail.io/");
        client.setUser("user.name@companyname.com");
        client.setPassword("password");
        // Create a test run; suite_id is your project's suite id (a number)
        Map<String, Object> data = new HashMap<>();
        data.put("suite_id", "YOUR_SUITE_ID");
        data.put("name", "Api Test Run");
        data.put("description", "Karate Architect Regression Running");
        JSONObject c = (JSONObject) client.sendPost("add_run/" + TESTRAIL_PROJECT_ID, data);
        runID = c.getAsString("id");
    }

    // Send scenario result to TestRail
    public static void sendScenarioResults(boolean failed, List<String> tags, String errorMessage) {
        try {
            Map<String, Object> data = new HashMap<>();
            data.put("status_id", failed ? 5 : 1);
            data.put("comment", errorMessage);
            client.sendPost("add_result_for_case/" + runID + "/" + tags.get(0), data);
        } catch (IOException | APIException e) {
            e.printStackTrace();
        }
    }

    @Karate.Test
    Karate ExampleFeatureRun() {
        return Karate.run("ExampleFeatureRun").relativeTo(getClass());
    }
}
Please look at 'hooks' documented here: https://github.com/intuit/karate#hooks
And there is an example with code over here: https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/hooks/hooks.feature
I'm sorry I can't help you with how to push data to testrail, but it may be as simple as an HTTP request. And guess what Karate is famous for :)
Note that values of tags can be accessed within a test, here is the doc for karate.tagValues (with link to example): https://github.com/intuit/karate#the-karate-object
Note that you need to be on the 0.7.0 version, right now 0.7.0.RC8 is available.
Edit - also see: https://stackoverflow.com/a/54527955/143475
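The essence of the TestRail call in the answer above is building the endpoint path and the result body, which can be sketched standalone. Assumptions: the run id "42" and tag "1111" are made-up values, and 1/5 are TestRail's standard passed/failed status_id codes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TestRailPayload {
    // Endpoint for TestRail's add_result_for_case: run id plus the case id,
    // which the Karate setup above stores as the scenario's first tag.
    static String endpoint(String runId, List<String> tags) {
        return "add_result_for_case/" + runId + "/" + tags.get(0);
    }

    // Body for the POST: 1 = passed, 5 = failed in a default TestRail setup.
    static Map<String, Object> body(boolean failed, String errorMessage) {
        Map<String, Object> data = new HashMap<>();
        data.put("status_id", failed ? 5 : 1);
        data.put("comment", errorMessage);
        return data;
    }

    public static void main(String[] args) {
        System.out.println(endpoint("42", List.of("1111")));     // add_result_for_case/42/1111
        System.out.println(body(true, "boom").get("status_id")); // 5
    }
}
```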

How to properly test flux from sink (processor)?

I have a processor-like class, which internally uses a sink. I have made an extremely simplified one to showcase my question:
import reactor.core.publisher.Sinks;
import reactor.test.StepVerifier;
import java.time.Duration;

public class TestBed {
    public static void main(String[] args) {
        class StringProcessor {
            public final Sinks.Many<String> sink = Sinks.many().multicast().directBestEffort();

            public void httpPostWebhookController(String inputData) {
                sink.emitNext(
                        inputData.toLowerCase() + " " + inputData.toUpperCase(),
                        (signalType, emitResult) -> {
                            System.out.println("error, signalType=" + signalType + "; emitResult=" + emitResult);
                            return false;
                        }
                );
            }
        }

        final StringProcessor stringProcessor = new StringProcessor();
        final StepVerifier stepVerifier = StepVerifier.create(stringProcessor.sink.asFlux())
                .expectSubscription()
                .expectNext("asdf ASDF")
                .expectNext("qw QW")
                .thenCancel();
        stringProcessor.httpPostWebhookController("asdf");
        stringProcessor.httpPostWebhookController("Qw");
        stepVerifier.verify(Duration.ofSeconds(2));
    }
}
My StepVerifier does not subscribe, and when it does subscribe (upon the verify(Duration) call) it misses the test signals. I cannot move the verify call before the httpPostWebhookController calls, because it blocks and would fail since no signals have arrived yet.
How to use StepVerifier in such scenario?
As answered on a Udemy course (instructor Vinoth Selvaraj), the solution is to use the verifyLater call. It triggers the subscription and does not block. Fixed test code:
final StringProcessor stringProcessor = new StringProcessor();
final StepVerifier stepVerifier = StepVerifier.create(stringProcessor.sink.asFlux().log())
        .expectSubscription()
        .expectNext("asdf ASDF")
        .expectNext("qw QW")
        .thenCancel()
        .verifyLater();
stringProcessor.httpPostWebhookController("asdf");
stringProcessor.httpPostWebhookController("Qw");
stepVerifier.verify(Duration.ofSeconds(2));

BinaryInvalidTypeException in Ignite Remote Filter

The following code is based on a combination of Ignite's CacheQueryExample and CacheContinuousQueryExample.
The code starts a fat Ignite client. Three organizations are created in the cache and we are listening to the updates to the cache. The remote filter is set to trigger the continuous query if the organization name is "Google". Peer class loading is enabled by the default examples xml config file (example-ignite.xml), so the expectation is that the remote node is aware of the Organization class.
However the following exceptions are shown in the Ignite server's console (one for each cache entry) and all three records are returned to the client in the continuous query's event handler instead of just the "Google" record. If the filter is changed to check on the key instead of the value, the correct behavior is observed and a single record is returned to the local listener.
[08:28:43,302][SEVERE][sys-stripe-1-#2][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:51,819][SEVERE][sys-stripe-2-#3][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:52,692][SEVERE][sys-stripe-3-#4][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
To run the code
Start an Ignite server using examples/config/example-ignite.xml as the configuration file.
Replace the content of ignite's CacheContinuousQueryExample.java with the following code. You may have to change the path to the configuration file to an absolute path.
package org.apache.ignite.examples.datagrid;

import javax.cache.Cache;
import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.AffinityKey;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.examples.ExampleNodeStartup;
import org.apache.ignite.examples.model.Organization;
import org.apache.ignite.examples.model.Person;
import org.apache.ignite.lang.IgniteBiPredicate;
import java.util.Collection;

/**
 * This example demonstrates the continuous query API.
 * <p>
 * Remote nodes should always be started with the special configuration file which
 * enables P2P class loading: {@code 'ignite.{sh|bat} examples/config/example-ignite.xml'}.
 * <p>
 * Alternatively you can run {@link ExampleNodeStartup} in another JVM, which will
 * start a node with the {@code examples/config/example-ignite.xml} configuration.
 */
public class CacheContinuousQueryExample {

    /** Organizations cache name. */
    private static final String ORG_CACHE = CacheQueryExample.class.getSimpleName() + "Organizations";

    /**
     * Executes example.
     *
     * @param args Command line arguments, none required.
     * @throws Exception If example execution failed.
     */
    public static void main(String[] args) throws Exception {
        Ignition.setClientMode(true);
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            System.out.println();
            System.out.println(">>> Cache continuous query example started.");
            CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
            orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
            orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
            // Auto-close cache at the end of the example.
            try {
                ignite.getOrCreateCache(orgCacheCfg);
                // Create new continuous query.
                ContinuousQuery<Long, Organization> qry = new ContinuousQuery<>();
                // Callback that is called locally when update notifications are received.
                qry.setLocalListener(new CacheEntryUpdatedListener<Long, Organization>() {
                    @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends Organization>> evts) {
                        for (CacheEntryEvent<? extends Long, ? extends Organization> e : evts)
                            System.out.println("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
                    }
                });
                // This filter will be evaluated remotely on all nodes.
                // Entries that pass this filter will be sent to the caller.
                qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, Organization>>() {
                    @Override public CacheEntryEventFilter<Long, Organization> create() {
                        return new CacheEntryEventFilter<Long, Organization>() {
                            @Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends Organization> e) {
                                //return e.getKey() == 3;
                                return e.getValue().name().equals("Google");
                            }
                        };
                    }
                });
                ignite.getOrCreateCache(ORG_CACHE).query(qry);
                // Populate caches.
                initialize();
                Thread.sleep(2000);
            }
            finally {
                // Distributed cache could be removed from cluster only by #destroyCache() call.
                ignite.destroyCache(ORG_CACHE);
            }
        }
    }

    /**
     * Populate cache with test data.
     */
    private static void initialize() {
        IgniteCache<Long, Organization> orgCache = Ignition.ignite().cache(ORG_CACHE);
        // Clear cache before running the example.
        orgCache.clear();
        // Organizations.
        Organization org1 = new Organization("ApacheIgnite");
        Organization org2 = new Organization("Apple");
        Organization org3 = new Organization("Google");
        orgCache.put(org1.id(), org1);
        orgCache.put(org2.id(), org2);
        orgCache.put(org3.id(), org3);
    }
}
Here is an interim workaround that involves using and deserializing binary objects. Hopefully, someone can post a proper solution.
Here is the main() function modified to work with BinaryObjects instead of the Organization object:
public static void main(String[] args) throws Exception {
    Ignition.setClientMode(true);
    try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
        System.out.println();
        System.out.println(">>> Cache continuous query example started.");
        CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
        orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
        orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
        // Auto-close cache at the end of the example.
        try {
            ignite.getOrCreateCache(orgCacheCfg);
            // Create new continuous query.
            ContinuousQuery<Long, BinaryObject> qry = new ContinuousQuery<>();
            // Callback that is called locally when update notifications are received.
            qry.setLocalListener(new CacheEntryUpdatedListener<Long, BinaryObject>() {
                @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends BinaryObject>> evts) {
                    for (CacheEntryEvent<? extends Long, ? extends BinaryObject> e : evts) {
                        Organization org = e.getValue().deserialize();
                        System.out.println("Updated entry [key=" + e.getKey() + ", val=" + org + ']');
                    }
                }
            });
            // This filter will be evaluated remotely on all nodes.
            // Entries that pass this filter will be sent to the caller.
            qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, BinaryObject>>() {
                @Override public CacheEntryEventFilter<Long, BinaryObject> create() {
                    return new CacheEntryEventFilter<Long, BinaryObject>() {
                        @Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends BinaryObject> e) {
                            //return e.getKey() == 3;
                            //return e.getValue().name().equals("Google");
                            return e.getValue().field("name").equals("Google");
                        }
                    };
                }
            });
            ignite.getOrCreateCache(ORG_CACHE).withKeepBinary().query(qry);
            // Populate caches.
            initialize();
            Thread.sleep(2000);
        }
        finally {
            // Distributed cache could be removed from cluster only by #destroyCache() call.
            ignite.destroyCache(ORG_CACHE);
        }
    }
}
Peer class loading is enabled ... so the expectation is that the remote node is aware of the Organization class.
This is the problem. You can't peer class load "model" objects, i.e., objects used to create the table.
Two solutions:
Deploy the model class(es) to the server ahead of time. The rest of the code -- the filters -- can be peer class loaded.
As @rgb1380 demonstrates, you can use BinaryObjects, which is the underlying data format.
Another small point: to use "auto-close" you need to structure your code like this:
// Auto-close cache at the end of the example.
try (var cache = ignite.getOrCreateCache(orgCacheCfg)) {
// do stuff
}

Spring AOP Pointcut if condition

I am facing an issue with pointcuts. I am trying to enable the @Around advice when log.isDebugEnabled() is true; for this I am trying the following code:
@Pointcut("within(org.apache.commons.logging.impl.Log4JLogger..*)")
public boolean isDebugEnabled() {
    return log.isDebugEnabled();
}
and for testing purposes I have two advices configured:
@AfterThrowing(value = "!isDebugEnabled()", throwing = "exception")
and
@Around(value = "isDebugEnabled()")
But every time I execute the code it always goes to @AfterThrowing, and it is not clear to me what I am doing wrong!
I am using aspectjweaver 1.8.9 with Spring MVC 4.3.
Here is a sample class emulating the issue:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;

@Component
@Aspect
public class SampleAspect {

    private static final Log log = LogFactory.getLog(SampleAspect.class);

    @Pointcut("within(org.apache.commons.logging.impl.Log4JLogger..*)")
    public boolean isDebugEnabled() {
        return log.isDebugEnabled();
    }

    @AfterThrowing(value = "!isDebugEnabled()", throwing = "exception")
    public void getCalledOnException(JoinPoint joinPoint, Exception exception) {
        log.error("Method " + joinPoint.getSignature() + " Throws the exception " + exception.getStackTrace());
    }

    // Never executes the around method, even when log.isDebugEnabled() == true
    @Around(value = "isDebugEnabled()")
    public Object aroundTest(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        final Object proceed;
        try {
            proceed = proceedingJoinPoint.proceed();
        } catch (Exception e) {
            throw e;
        }
        stopWatch.stop();
        log.debug("It took " + stopWatch.getTotalTimeSeconds() + " seconds to be proceed");
        return proceed;
    }
}
Edit: I tried to use if() from AspectJ, but it didn't work in my project either.
@Pointcut("call(* *.*(int)) && args(i) && if()")
public static boolean someCallWithIfTest(int i) {
    return i > 0;
}
Not sure if I need to add a different import or so, but I didn't manage to make it work.
A couple of points from the documentation:

== Spring AOP Capabilities and Goals
Spring AOP currently supports only method execution join points
(advising the execution of methods on Spring beans)

=== Declaring a Pointcut
In the @AspectJ annotation-style of AOP, a pointcut signature is provided
by a regular method definition, and the pointcut expression is
indicated by using the @Pointcut annotation (the method serving as the
pointcut signature must have a void return type).

Apache Commons classes are not managed by the Spring container, so the following will not be honoured:
@Pointcut("within(org.apache.commons.logging.impl.Log4JLogger..*)")
Also, the following pointcut method is not valid, because a pointcut signature method must return void:
public boolean isDebugEnabled() {
    return log.isDebugEnabled();
}
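The underlying point is that a pointcut selects join points statically, so a runtime condition like log.isDebugEnabled() belongs inside the advice body. The idea can be illustrated without Spring, using a plain JDK dynamic proxy as a stand-in for the Spring AOP proxy; the debugEnabled flag below is a hypothetical stand-in for the log-level check.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class RuntimeGuardSketch {
    interface Service {
        String work();
    }

    // Stand-in for log.isDebugEnabled(): a runtime condition the "advice" checks.
    static boolean debugEnabled = false;
    static int timedCalls = 0;

    // Wrap a target in "around advice" that only times calls when the
    // runtime condition holds; the condition is checked per invocation.
    static Service timed(Service target) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (!debugEnabled) {
                return method.invoke(target, args); // condition false: plain call
            }
            timedCalls++;                           // condition true: around logic runs
            long start = System.nanoTime();
            Object result = method.invoke(target, args);
            System.out.println(method.getName() + " took " + (System.nanoTime() - start) + " ns");
            return result;
        };
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(), new Class<?>[]{Service.class}, handler);
    }

    public static void main(String[] args) {
        Service s = timed(() -> "done");
        s.work();                       // debugEnabled == false: no timing
        debugEnabled = true;
        s.work();                       // now the timing branch executes
        System.out.println(timedCalls); // prints 1
    }
}
```

The same pattern in the original aspect would mean keeping the pointcut purely structural (matching your own beans, not Apache Commons classes) and starting the @Around body with an if (log.isDebugEnabled()) guard.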
}