Spring AOP - passing arguments between annotated methods

I've written a utility to monitor individual business transactions. For example, Alice calls a method which calls more methods, and I want info on just Alice's call, separate from Bob's call to the same method.
Right now the entry point creates a Transaction object and it's passed as an argument to each method:
class Example {
public Item getOrderEntryPoint(int orderId) {
Transaction transaction = transactionManager.create();
transaction.trace("getOrderEntryPoint");
Order order = getOrder(orderId, transaction);
transaction.stop();
logger.info(transaction);
return item;
}
private Order getOrder(int orderId, Transaction t) {
t.trace("getOrder");
Order order = getItems(itemId, t);
t.addStat("number of items", order.getItems().size());
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item, t);
if (null != offer) {
t.incrementStat("offers", 1);
}
}
t.stop();
return order;
}
private SpecialOffer getSpecialOffer(Item item, Transaction t) {
t.trace("getSpecialOffer(" + item.id + ")", TraceCategory.Database);
return offerRepository.getByItem(item);
t.stop();
}
}
This will print to the log something like:
Transaction started by Alice at 10:42
Statistics:
number of items : 3
offers : 1
Category Timings (longest first):
DB : 2s 903ms
code : 187ms
Timings (longest first):
getSpecialOffer(1013) : 626ms
getItems : 594ms
Trace:
getOrderEntryPoint (7ms)
getOrder (594ms)
getSpecialOffer(911) (90ms)
getSpecialOffer(1013) (626ms)
getSpecialOffer(2942) (113ms)
It works great, but passing the transaction object around is ugly. Someone suggested AOP, but I don't see how to pass the transaction created in the first method to all the other methods.
The Transaction object is pretty simple:
public class Transaction {
private String uuid = UUID.createRandom();
private List<TraceEvent> events = new ArrayList<>();
private Map<String,Int> stats = new HashMap<>();
}
class TraceEvent {
private String name;
private long durationInMs;
}
The app that uses it is a Web app, and thus multi-threaded, but each individual transaction runs on a single thread - no multi-threading, async code, competition for resources, etc.
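Since each transaction stays on one thread, I suppose something like a ThreadLocal holder could carry the current transaction (just a sketch of the idea, all names made up by me):
public class TransactionHolder {
private static final ThreadLocal<Transaction> CURRENT = new ThreadLocal<>();
public static Transaction get() { return CURRENT.get(); }
public static void set(Transaction transaction) { CURRENT.set(transaction); }
public static void clear() { CURRENT.remove(); } // important in thread pools, to avoid leaking state
}
but I don't know whether that is the idiomatic way to do it with AOP.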
My attempt at an annotation:
#Around("execution(* *(..)) && #annotation(Trace)")
public Object around(ProceedingJoinPoint point) {
String methodName = MethodSignature.class.cast(point.getSignature()).getMethod().getName();
//--- Where do i get this call's instance of TRANSACTION from?
if (null == transaction) {
transaction = TransactionManager.createTransaction();
}
transaction.trace(methodName);
Object result = point.proceed();
transaction.stop();
return result;

Introduction
Unfortunately, your pseudo code does not compile. It contains several syntactical and logical errors. Furthermore, some helper classes are missing. If I did not have spare time today and had not been looking for a puzzle to solve, I would not have bothered making my own MCVE out of it, because that would actually have been your job. Please do read the MCVE article and learn to create one next time, otherwise you will not get a lot of qualified help here. This was your free shot because you are new on SO.
Original situation: passing through transaction objects in method calls
Application helper classes:
package de.scrum_master.app;
public class Item {
private int id;
public Item(int id) {
this.id = id;
}
public int getId() {
return id;
}
@Override
public String toString() {
return "Item[id=" + id + "]";
}
}
package de.scrum_master.app;
public class SpecialOffer {}
package de.scrum_master.app;
public class OfferRepository {
public SpecialOffer getByItem(Item item) {
if (item.getId() < 30)
return new SpecialOffer();
return null;
}
}
package de.scrum_master.app;
import java.util.ArrayList;
import java.util.List;
public class Order {
private int id;
public Order(int id) {
this.id = id;
}
public List<Item> getItems() {
List<Item> items = new ArrayList<>();
int offset = id == 12345 ? 0 : 1;
items.add(new Item(11 + offset));
items.add(new Item(22 + offset));
items.add(new Item(33 + offset));
return items;
}
}
Trace classes:
package de.scrum_master.trace;
public enum TraceCategory {
Code, Database
}
package de.scrum_master.trace;
class TraceEvent {
private String name;
private TraceCategory category;
private long durationInMs;
private boolean finished = false;
public TraceEvent(String name, TraceCategory category, long startTime) {
this.name = name;
this.category = category;
this.durationInMs = startTime;
}
public long getDurationInMs() {
return durationInMs;
}
public void setDurationInMs(long durationInMs) {
this.durationInMs = durationInMs;
}
public boolean isFinished() {
return finished;
}
public void setFinished(boolean finished) {
this.finished = finished;
}
@Override
public String toString() {
return "TraceEvent[name=" + name + ", category=" + category +
", durationInMs=" + durationInMs + ", finished=" + finished + "]";
}
}
Transaction classes:
Here I tried to mimic your own Transaction class with as few changes as possible, but there was a lot I had to add and modify in order to emulate a simplified version of your trace output. This is not thread-safe, and the way I am locating the last unfinished TraceEvent is not nice and only works cleanly if there are no exceptions. But you get the idea, I hope. The point is to just make it basically work and subsequently get log output similar to your example. If this were originally my code, I would have solved it differently.
package de.scrum_master.trace;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.UUID;
public class Transaction {
private String uuid = UUID.randomUUID().toString();
private List<TraceEvent> events = new ArrayList<>();
private Map<String, Integer> stats = new HashMap<>();
public void trace(String message) {
trace(message, TraceCategory.Code);
}
public void trace(String message, TraceCategory category) {
events.add(new TraceEvent(message, category, System.currentTimeMillis()));
}
public void stop() {
TraceEvent event = getLastUnfinishedEvent();
event.setDurationInMs(System.currentTimeMillis() - event.getDurationInMs());
event.setFinished(true);
}
private TraceEvent getLastUnfinishedEvent() {
return events
.stream()
.filter(event -> !event.isFinished())
.reduce((first, second) -> second)
.orElse(null);
}
public void addStat(String text, int size) {
stats.put(text, size);
}
public void incrementStat(String text, int increment) {
Integer currentCount = stats.get(text);
if (currentCount == null)
currentCount = 0;
stats.put(text, currentCount + increment);
}
@Override
public String toString() {
return "Transaction {" +
toStringUUID() +
toStringStats() +
toStringEvents() +
"\n}\n";
}
private String toStringUUID() {
return "\n uuid = " + uuid;
}
private String toStringStats() {
String result = "\n stats = {";
for (Entry<String, Integer> statEntry : stats.entrySet())
result += "\n " + statEntry;
return result + "\n }";
}
private String toStringEvents() {
String result = "\n events = {";
for (TraceEvent event : events)
result += "\n " + event;
return result + "\n }";
}
}
package de.scrum_master.trace;
public class TransactionManager {
public Transaction create() {
return new Transaction();
}
}
Example driver application:
package de.scrum_master.app;
import de.scrum_master.trace.TraceCategory;
import de.scrum_master.trace.Transaction;
import de.scrum_master.trace.TransactionManager;
public class Example {
private TransactionManager transactionManager = new TransactionManager();
private OfferRepository offerRepository = new OfferRepository();
public Order getOrderEntryPoint(int orderId) {
Transaction transaction = transactionManager.create();
transaction.trace("getOrderEntryPoint");
sleep(100);
Order order = getOrder(orderId, transaction);
transaction.stop();
System.out.println(transaction);
return order;
}
private Order getOrder(int orderId, Transaction t) {
t.trace("getOrder");
sleep(200);
Order order = new Order(orderId);
t.addStat("number of items", order.getItems().size());
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item, t);
if (null != offer)
t.incrementStat("special offers", 1);
}
t.stop();
return order;
}
private SpecialOffer getSpecialOffer(Item item, Transaction t) {
t.trace("getSpecialOffer(" + item.getId() + ")", TraceCategory.Database);
sleep(50);
SpecialOffer specialOffer = offerRepository.getByItem(item);
t.stop();
return specialOffer;
}
private void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new Example().getOrderEntryPoint(12345);
new Example().getOrderEntryPoint(23456);
}
}
If you run this code, the output is as follows:
Transaction {
uuid = 62ec9739-bd32-4a56-b6b3-a8a13624961a
stats = {
special offers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint, category=Code, durationInMs=561, finished=true]
TraceEvent[name=getOrder, category=Code, durationInMs=451, finished=true]
TraceEvent[name=getSpecialOffer(11), category=Database, durationInMs=117, finished=true]
TraceEvent[name=getSpecialOffer(22), category=Database, durationInMs=69, finished=true]
TraceEvent[name=getSpecialOffer(33), category=Database, durationInMs=63, finished=true]
}
}
Transaction {
uuid = a420cd70-96e5-44c4-a0a4-87e421d05e87
stats = {
special offers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint, category=Code, durationInMs=469, finished=true]
TraceEvent[name=getOrder, category=Code, durationInMs=369, finished=true]
TraceEvent[name=getSpecialOffer(12), category=Database, durationInMs=53, finished=true]
TraceEvent[name=getSpecialOffer(23), category=Database, durationInMs=63, finished=true]
TraceEvent[name=getSpecialOffer(34), category=Database, durationInMs=53, finished=true]
}
}
AOP refactoring
Preface
Please note that I am using AspectJ here, because two things about your code would never work with Spring AOP, which uses a delegation pattern based on dynamic proxies:
self-invocation (internally calling a method of the same class or super-class)
intercepting private methods
Because of these Spring AOP limitations I advise you to either refactor your code so as to avoid the two issues above or to configure your Spring applications to use full AspectJ via LTW (load-time weaving) instead.
As you noticed, my sample code does not use Spring at all because AspectJ is completely independent of Spring and works with any Java application (or other JVM languages, too).
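In case you want to try the LTW route with your Spring application, the setup is roughly this (a minimal sketch; the aspect and package names are the ones used below, the weaver path is an assumption):
java -javaagent:/path/to/aspectjweaver.jar ...
<!-- META-INF/aop.xml on the class path -->
<aspectj>
<aspects>
<aspect name="de.scrum_master.trace.TransactionTraceAspect"/>
</aspects>
<weaver>
<include within="de.scrum_master..*"/>
</weaver>
</aspectj>
(The sample below can just as well be woven at compile time with the AspectJ compiler, in which case no agent is needed.)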
Refactoring idea
Now what should you do in order to get rid of passing around tracing information (Transaction objects), polluting your core application code and tangling it with trace calls?
You extract transaction tracing into an aspect taking care of all trace(..) and stop() calls.
Unfortunately your Transaction class contains different types of information and does different things, so you cannot completely get rid of context information about how to trace for each affected method. But at least you can extract that context information from the method bodies and transform it into a declarative form using annotations with parameters.
These annotations can be targeted by an aspect taking care of handling transaction tracing.
Added and updated code, iteration 1
Annotations related to transaction tracing:
package de.scrum_master.trace;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
@Retention(RUNTIME)
@Target(METHOD)
public @interface TransactionEntryPoint {}
package de.scrum_master.trace;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
@Retention(RUNTIME)
@Target(METHOD)
public @interface TransactionTrace {
String message() default "__METHOD_NAME__";
TraceCategory category() default TraceCategory.Code;
String addStat() default "";
String incrementStat() default "";
}
Refactored application classes with annotations:
package de.scrum_master.app;
import java.util.ArrayList;
import java.util.List;
import de.scrum_master.trace.TransactionTrace;
public class Order {
private int id;
public Order(int id) {
this.id = id;
}
@TransactionTrace(message = "", addStat = "number of items")
public List<Item> getItems() {
List<Item> items = new ArrayList<>();
int offset = id == 12345 ? 0 : 1;
items.add(new Item(11 + offset));
items.add(new Item(22 + offset));
items.add(new Item(33 + offset));
return items;
}
}
Nothing much here, only added an annotation to getItems(). But the sample application class changes massively, getting much cleaner and simpler:
package de.scrum_master.app;
import de.scrum_master.trace.TraceCategory;
import de.scrum_master.trace.TransactionEntryPoint;
import de.scrum_master.trace.TransactionTrace;
public class Example {
private OfferRepository offerRepository = new OfferRepository();
@TransactionEntryPoint
@TransactionTrace
public Order getOrderEntryPoint(int orderId) {
sleep(100);
Order order = getOrder(orderId);
return order;
}
@TransactionTrace
private Order getOrder(int orderId) {
sleep(200);
Order order = new Order(orderId);
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item);
// Do something with special offers
}
return order;
}
@TransactionTrace(category = TraceCategory.Database, incrementStat = "specialOffers")
private SpecialOffer getSpecialOffer(Item item) {
sleep(50);
SpecialOffer specialOffer = offerRepository.getByItem(item);
return specialOffer;
}
private void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new Example().getOrderEntryPoint(12345);
new Example().getOrderEntryPoint(23456);
}
}
See? Except for a few annotations, there is nothing left of the transaction tracing logic; the application code only takes care of its core concern. If you also remove the sleep() method, which only makes the application slower for demonstration purposes (because we want some nice statistics with measured times >0 ms), the class gets even more compact.
But of course we need to put the transaction tracing logic somewhere, more precisely modularise it into an AspectJ aspect:
Transaction tracing aspect:
package de.scrum_master.trace;
import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.Collection;
import java.util.stream.Collectors;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
#Aspect("percflow(entryPoint())")
public class TransactionTraceAspect {
private static TransactionManager transactionManager = new TransactionManager();
private Transaction transaction = transactionManager.create();
@Pointcut("execution(* *(..)) && @annotation(de.scrum_master.trace.TransactionEntryPoint)")
private static void entryPoint() {}
@Around("execution(* *(..)) && @annotation(transactionTrace)")
public Object doTrace(ProceedingJoinPoint joinPoint, TransactionTrace transactionTrace) throws Throwable {
preTrace(transactionTrace, joinPoint);
Object result = joinPoint.proceed();
postTrace(transactionTrace);
addStat(transactionTrace, result);
incrementStat(transactionTrace, result);
return result;
}
private void preTrace(TransactionTrace transactionTrace, ProceedingJoinPoint joinPoint) {
String traceMessage = transactionTrace.message();
if ("".equals(traceMessage))
return;
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
if ("__METHOD_NAME__".equals(traceMessage)) {
traceMessage = signature.getName() + "(";
traceMessage += Arrays.stream(joinPoint.getArgs()).map(arg -> arg.toString()).collect(Collectors.joining(", "));
traceMessage += ")";
}
transaction.trace(traceMessage, transactionTrace.category());
}
private void postTrace(TransactionTrace transactionTrace) {
if ("".equals(transactionTrace.message()))
return;
transaction.stop();
}
private void addStat(TransactionTrace transactionTrace, Object result) {
if ("".equals(transactionTrace.addStat()) || result == null)
return;
if (result instanceof Collection)
transaction.addStat(transactionTrace.addStat(), ((Collection<?>) result).size());
else if (result.getClass().isArray())
transaction.addStat(transactionTrace.addStat(), Array.getLength(result));
}
private void incrementStat(TransactionTrace transactionTrace, Object result) {
if ("".equals(transactionTrace.incrementStat()) || result == null)
return;
transaction.incrementStat(transactionTrace.incrementStat(), 1);
}
@After("entryPoint()")
public void logFinishedTransaction(JoinPoint joinPoint) {
System.out.println(transaction);
}
}
Let me explain what this aspect does:
@Pointcut(..) entryPoint() says: Find me all methods in the code annotated by @TransactionEntryPoint. This pointcut is used in two places:
@Aspect("percflow(entryPoint())") says: Create one aspect instance for each control flow beginning at a transaction entry point.
@After("entryPoint()") logFinishedTransaction(..) says: Execute this advice (AOP terminology for a method linked to a pointcut) after an entry point method has finished. The corresponding method just prints the transaction statistics, just like the original code does at the end of Example.getOrderEntryPoint(..).
@Around("execution(* *(..)) && @annotation(transactionTrace)") doTrace(..) says: Wrap methods annotated by @TransactionTrace and do the following (method body):
add new trace element and start measuring time
execute original (wrapped) method and store result
update trace element with measured time
add one type of statistics (optional)
increment another type of statistics (optional)
return wrapped method's result to its caller
The private methods are just helpers for the @Around advice.
The console log when running the updated Example class with AspectJ active is:
Transaction {
uuid = 4529d325-c604-441d-8997-45ca659abb14
stats = {
specialOffers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint(12345), category=Code, durationInMs=468, finished=true]
TraceEvent[name=getOrder(12345), category=Code, durationInMs=366, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=11]), category=Database, durationInMs=59, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=22]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=33]), category=Database, durationInMs=51, finished=true]
}
}
Transaction {
uuid = ef76a996-8621-478b-a376-e9f7a729a501
stats = {
specialOffers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint(23456), category=Code, durationInMs=452, finished=true]
TraceEvent[name=getOrder(23456), category=Code, durationInMs=351, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=12]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=23]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=34]), category=Database, durationInMs=50, finished=true]
}
}
You see, it looks almost identical to the output of the original application.
Idea for further simplification, iteration 2
When reading method Example.getOrder(int orderId) I was wondering why you are calling order.getItems(), looping over it and calling getSpecialOffer(item) inside the loop. In your sample code you do not use the results for anything other than updating the transaction trace object. I am assuming that in your real code you do something with the order and with the special offers in that method.
But just in case you really do not need those calls inside that method, I suggest
you factor the calls out right into the aspect, getting rid of the TransactionTrace annotation parameters String addStat() and String incrementStat().
The Example code would get even simpler and
the annotation @TransactionTrace(message = "", addStat = "number of items") in class Order would go away, too.
I am leaving this refactoring to you if you think it makes sense.
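Just to sketch the direction (untested, and only valid under the assumption that those two statistics always belong to those two methods): the statistics could be collected by dedicated advice methods inside TransactionTraceAspect, using org.aspectj.lang.annotation.AfterReturning, instead of annotation parameters:
// additional advice inside TransactionTraceAspect (sketch)
@AfterReturning(pointcut = "execution(java.util.List de.scrum_master.app.Order.getItems())", returning = "items")
public void countItems(java.util.List<?> items) {
transaction.addStat("number of items", items.size());
}
@AfterReturning(pointcut = "execution(* de.scrum_master.app.Example.getSpecialOffer(..))", returning = "offer")
public void countSpecialOffers(Object offer) {
if (offer != null)
transaction.incrementStat("specialOffers", 1);
}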

Related

Is there a way to get a LifecycleOwner in FirebaseMessagingService

I'm developing a chat app and I'm using Firebase Cloud Messaging for notifications.
I found that it was best to save my notifications (notification info) in a local database, i.e. Room, so it helps me handle the badge counts and the clearing of specific chat notifications.
Steps:
Set up my FirebaseMessagingService and tested it (getting my notifications successfully);
Set up the Room database and tested inserting and getting all data (LiveData) (working well);
I want to observe the LiveData inside MyFirebaseMessagingService, but to do so I need a LifecycleOwner and I don't have any idea where I will get it from.
I searched on Google, but the only solution was to use a LifecycleService; however, I need FirebaseMessagingService for my notification purposes.
this is my code:
//Room Database class
private static volatile LocalDatabase INSTANCE;
private static final int NUMBER_OF_THREADS = 4;
public static final ExecutorService taskExecutor =
Executors.newFixedThreadPool(NUMBER_OF_THREADS);
public static LocalDatabase getDatabase(final Context context) {
if (INSTANCE == null) {
synchronized (RoomDatabase.class) {
if (INSTANCE == null) {
INSTANCE = Room.databaseBuilder(context.getApplicationContext(),
LocalDatabase.class, "local_database")
.build();
}
}
}
return INSTANCE;
}
public abstract NotificationDao dao();
//DAO interface
@Insert
void insert(NotificationEntity notificationEntity);
@Query("DELETE FROM notificationentity WHERE trade_id = :tradeId")
int clearByTrade(String tradeId);
@Query("SELECT * FROM notificationentity")
LiveData<List<NotificationEntity>> getAll();
//Repository class{}
private LiveData<List<NotificationEntity>> listLiveData;
public Repository() {
firestore = FirebaseFirestore.getInstance();
storage = FirebaseStorage.getInstance();
}
public Repository(Application application) {
LocalDatabase localDb = LocalDatabase.getDatabase(application);
dao = localDb.dao();
listLiveData = dao.getAll();
}
...
public void saveNotificationInfo(@NonNull NotificationEntity entity){
LocalDatabase.taskExecutor.execute(() -> {
try {
dao.insert(entity);
H.debug("NotificationData saved in local db");
}catch (Exception e){
H.debug("Failed to save NotificationData in local db: "+e.getMessage());
}
});
}
public LiveData<List<NotificationEntity>> getNotifications(){return listLiveData;}
public void clearNotificationInf(@NonNull String tradeId){
LocalDatabase.taskExecutor.execute(() -> {
try {
H.debug("trying to delete rows for id :"+tradeId+"...");
int n = dao.clearByTrade(tradeId);
H.debug("Cleared: "+n+" notification info from localDatabase");
}catch (Exception e){
H.debug("Failed clear NotificationData in local db: "+e.getMessage());
}
});
}
//ViewModel class{}
private Repository rep;
private LiveData<List<NotificationEntity>> list;
public VModel(@NonNull Application application) {
super(application);
rep = new Repository(application);
list = rep.getNotifications();
}
public void saveNotificationInfo(Context context, @NonNull NotificationEntity entity){
rep.saveNotificationInfo(entity);
}
public LiveData<List<NotificationEntity>> getNotifications(){
return rep.getNotifications();
}
public void clearNotificationInf(Context context, @NonNull String tradeId){
rep.clearNotificationInf(tradeId);
}
and finally the FirebaseMessagingService class{}
private static final String TAG = "MyFireBaseService";
private static final int SUMMARY_ID = 999;
private SoundManager sm;
private Context context;
private final String GROUP_KEY = "com.opendev.xpresso.group_xpresso_group_key";
private Repository rep;
private NotificationDao dao;
@Override
public void onCreate() {
super.onCreate();
context = this;
rep = new Repository();
}
/**
* Called if InstanceID token is updated. This may occur if the security of
* the previous token had been compromised. Note that this is called when the InstanceID token
* is initially generated so this is where you would retrieve the token.
*/
@Override
public void onNewToken(@NonNull String s) {
super.onNewToken(s);
}
@Override
public void onMessageReceived(@NonNull RemoteMessage remoteMessage) {
super.onMessageReceived(remoteMessage);
H.debug("OnMessageReceived...");
try {
Map<String, String> data = remoteMessage.getData();
if (Objects.requireNonNull(data.get("purpose")).equals("notify_message")) {
String chatId;
if ((chatId = data.get("chatId")) == null){
H.debug("onMessageReceived: chatId null! Aborting...");
return;
}
FirebaseFirestore db = FirebaseFirestore.getInstance();
Task<DocumentSnapshot> chatTask = db.collection("activeTrades").document(chatId).get();
Task<DocumentSnapshot> userTask = db.collection("users")
.document(FirebaseAuth.getInstance().getCurrentUser().getUid()).get();
Tasks.whenAllSuccess(chatTask, userTask).addOnSuccessListener(objects -> {
if (!((DocumentSnapshot)objects.get(0)).exists() || !((DocumentSnapshot)objects.get(1)).exists()){
H.debug("OnMessageReceived: querying data failed: NOT EXISTS");
return;
}
Chat chat = ((DocumentSnapshot)objects.get(0)).toObject(Chat.class);
MainActivity.USER = ((DocumentSnapshot)objects.get(1)).toObject(User.class);
//Now we have all the needed info, we can process the notification
//Saving the notification locally and updating badge count
//then notify for all the notification in localDatabase
NotificationEntity entity = new NotificationEntity();
entity.setNotificationId(getNextNotificationId());
entity.setTradeId(chatId);
entity.setChanelId(context.getResources().getString(R.string.channel_id));
entity.setTitle(data.get("title"));
entity.setMessage(data.get("message"));
entity.setPriority(NotificationCompat.PRIORITY_HIGH);
entity.setCategory(NotificationCompat.CATEGORY_MESSAGE);
rep.saveNotificationInfo(entity);
rep.getNotifications().observe(HOW_TO_GET_THE_LIVECYCLE_OWNER, new Observer<List<NotificationEntity>>() {
@Override
public void onChanged(List<NotificationEntity> notificationEntities) {
//
}
});
}).addOnFailureListener(e -> H.debug("OnMessageReceived: querying data failed: "+e.getMessage()));
}
}catch (Exception e){H.debug(e.getMessage());}
}
Updated:
Because it is not recommended to use a LiveData object inside a FirebaseMessagingService - a FirebaseMessagingService is not part of the Android activity lifecycle and therefore does not have a lifecycle owner - you could consider a different approach to handle the badge count and the clearing of specific chat notifications.
So I used a broadcast receiver to receive the notifications. I set the broadcast receiver up in my FirebaseMessagingService, and it receives the notifications and updates the badge count in the local Room database.
I created a BroadcastReceiver for this, and in its onReceive method I send an Intent to a service and handle the badge logic in that service.
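Roughly like this (a sketch with made-up names - the action string, NotificationReceiver and BadgeService are not from my real code):
// in FirebaseMessagingService.onMessageReceived(), after saving the entity:
Intent intent = new Intent("com.opendev.xpresso.NOTIFICATION_RECEIVED");
intent.putExtra("chatId", chatId);
sendBroadcast(intent);
// the receiver only forwards to the service that owns the badge logic
public class NotificationReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, Intent intent) {
Intent serviceIntent = new Intent(context, BadgeService.class);
serviceIntent.putExtras(intent);
context.startService(serviceIntent);
}
}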
I'm answering my own question just to show my alternative workaround.
I believe the LiveData observer is still the best way for me, but until someone helps me by giving me a solution to get a LifecycleOwner in FirebaseMessagingService, I'm going to use custom listeners for my insert() and my getAll(),
like the following:
public interface RoomInsertListener{
void onInsert();
}
public interface RoomGetListener{
void onGet(List<NotificationEntity> list);
}
Then use it in FirebaseMessagingService as follows:
NotificationEntity entity = new NotificationEntity();
entity.setNotificationId(getNextNotificationId());
entity.setTradeId(tradeId);
entity.setChanelId(context.getResources().getString(R.string.channel_id));
entity.setTitle(data.get("title"));
entity.setMessage(data.get("message"));
entity.setPriority(NotificationCompat.PRIORITY_HIGH);
entity.setCategory(NotificationCompat.CATEGORY_MESSAGE);
rep.saveNotificationInfo(entity, () -> rep.getNotifications(list -> {
ShortcutBadger.applyCount(context, list.size());
H.debug(list.size()+" notifications in Database: applied badge count...");
for (NotificationEntity e:list){
H.debug("id:"+e.getNotificationId()+" trade: "+e.getTradeId());
}
}));
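The Repository overloads backing this are not shown above; they would look roughly like the following (sketch - getAllNow() is an assumed extra synchronous DAO query, e.g. @Query("SELECT * FROM notificationentity") List<NotificationEntity> getAllNow();, because the LiveData-returning getAll() cannot be read without an observer):
public void saveNotificationInfo(@NonNull NotificationEntity entity, RoomInsertListener listener) {
LocalDatabase.taskExecutor.execute(() -> {
dao.insert(entity);
listener.onInsert();
});
}
public void getNotifications(RoomGetListener listener) {
LocalDatabase.taskExecutor.execute(() -> listener.onGet(dao.getAllNow()));
}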

AbstractStringBuilder.ensureCapacityInternal gets NullPointerException in Storm bolt

In our online system, the Storm bolt gets a NullPointerException, though I think I check it before line 61; it gets the NullPointerException once in a while.
import ***.KeyUtils;
import ***.redis.PipelineHelper;
import ***.redis.PipelinedCacheClusterClient;
import **.redis.R2mClusterClient;
import org.apache.commons.lang3.StringUtils;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import java.util.Map;
/**
* RedisBolt batch operate
*/
public class RedisBolt implements IRichBolt {
static final long serialVersionUID = 737015318988609460L;
private static ApplicationContext applicationContext;
private static long logEmitNumber = 0;
private static StringBuffer totalCmds = new StringBuffer();
private Logger logger = LoggerFactory.getLogger(getClass());
private OutputCollector _collector;
private R2mClusterClient r2mClusterClient;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
_collector = outputCollector;
if (applicationContext == null) {
applicationContext = new ClassPathXmlApplicationContext("spring/spring-config-redisbolt.xml");
}
if (r2mClusterClient == null) {
r2mClusterClient = (R2mClusterClient) applicationContext.getBean("r2mClusterClient");
}
}
@Override
public void execute(Tuple tuple) {
String log = tuple.getString(0);
String lastCommands = tuple.getString(1);
try {
//log count
if (StringUtils.isNotEmpty(log)) {
logEmitNumber++;
}
if (StringUtils.isNotEmpty(lastCommands)) {
if(totalCmds==null){
totalCmds = new StringBuffer();
}
totalCmds.append(lastCommands);//line 61
}
//log volume control
int numberLimit = 1;
String flow_log_limit = r2mClusterClient.get(KeyUtils.KEY_PIPELINE_LIMIT);
if (StringUtils.isNotEmpty(flow_log_limit)) {
try {
numberLimit = Integer.parseInt(flow_log_limit);
} catch (Exception e) {
numberLimit = 1;
logger.error("error", e);
}
}
if (logEmitNumber >= numberLimit) {
StringBuffer _totalCmds = new StringBuffer(totalCmds);
try {
//pipeline submit
PipelinedCacheClusterClient pip = r2mClusterClient.pipelined();
String[] commandArray = _totalCmds.toString().split(KeyUtils.REDIS_CMD_SPILT);
PipelineHelper.cmd(pip, commandArray);
pip.sync();
pip.close();
totalCmds = new StringBuffer();
} catch (Exception e) {
logger.error("error", e);
}
logEmitNumber = 0;
}
} catch (Exception e) {
logger.error(new StringBuffer("====RedisBolt error for log=[ ").append(log).append("] \n commands=[").append(lastCommands).append("]").toString(), e);
_collector.reportError(e);
_collector.fail(tuple);
}
_collector.ack(tuple);
}
@Override
public void cleanup() {
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
}
@Override
public Map<String, Object> getComponentConfiguration() {
return null;
}
}
exception info:
java.lang.NullPointerException
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:113)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuffer.append(StringBuffer.java:237)
at com.jd.jr.dataeye.storm.bolt.RedisBolt.execute(RedisBolt.java:61)
at org.apache.storm.daemon.executor$fn__5044$tuple_action_fn__5046.invoke(executor.clj:727)
at org.apache.storm.daemon.executor$mk_task_receiver$fn__4965.invoke(executor.clj:459)
at org.apache.storm.disruptor$clojure_handler$reify__4480.onEvent(disruptor.clj:40)
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
at org.apache.storm.daemon.executor$fn__5044$fn__5057$fn__5110.invoke(executor.clj:846)
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
Can anyone give me some advice on how to find the reason?
That is a really odd thing to happen. Please read the code of these two classes:
https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/classes/java/lang/AbstractStringBuilder.java
https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/classes/java/lang/StringBuffer.java
AbstractStringBuilder has a no-args constructor which doesn't allocate the field 'value', which makes accessing the 'value' field throw an NPE. None of the constructors in StringBuffer use that constructor, though; it exists for serialization of subclasses. So maybe some odd thing happens in serialization/deserialization, and unfortunately the 'value' field in AbstractStringBuilder ends up being null.
Maybe initializing totalCmds in prepare() would be better, and you also need to consider synchronization (thread-safety) between bolts. prepare() is called per bolt instance, so instance fields are thread-safe, but static class fields are shared between instances and are not thread-safe.
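A sketch of that suggestion - making totalCmds a plain instance field and creating it in prepare(), so it exists on the worker before the first tuple arrives:
private StringBuffer totalCmds; // instance field instead of static
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
_collector = outputCollector;
totalCmds = new StringBuffer(); // created after deserialization, on the worker
// ... bean lookup as in the original prepare() ...
}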
I think I may have found the problem;
the key point is
"StringBuffer _totalCmds = new StringBuffer(totalCmds);" and "totalCmds.append(lastCommands);//line 61"
When new-ing an object, it takes several steps:
(1) allocate memory and return the reference
(2) initialize the fields
If the append happens after (1) and before (2), then in StringBuffer, which extends AbstractStringBuilder with its field
/**
* The value is used for character storage.
*/
char[] value;
value is not yet initialized, so this will see null:
@Override
public synchronized void ensureCapacity(int minimumCapacity) {
if (minimumCapacity > value.length) {
expandCapacity(minimumCapacity);
}
}
This bolt has another problem: some data may be lost in a multithreaded environment.
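If you want to rule that race out, guarding both line 61 and the copy-and-reset with the same lock would close the window (sketch; the lock must be static because the field is static):
private static final Object CMD_LOCK = new Object();
// in execute():
synchronized (CMD_LOCK) {
totalCmds.append(lastCommands); // line 61
}
// ... later, when flushing:
StringBuffer _totalCmds;
synchronized (CMD_LOCK) {
_totalCmds = new StringBuffer(totalCmds);
totalCmds = new StringBuffer();
}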

How do I configure spring-kafka to ignore messages in the wrong format?

We have an issue with one of our Kafka topics which is consumed by the DefaultKafkaConsumerFactory & ConcurrentMessageListenerContainer combination described here with a JsonDeserializer used by the Factory. Unfortunately someone got a little enthusiastic and published some invalid messages onto the topic. It appears that spring-kafka silently fails to process past the first of these messages. Is it possible to have spring-kafka log an error and continue? Looking at the error messages which are logged it seems that perhaps the Apache kafka-clients library should deal with the case that when iterating a batch of messages one or more of them may fail to parse?
The below code is an example test case illustrating this issue:
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;
import org.springframework.kafka.listener.config.ContainerProperties;
import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.ContainerTestUtils;
import org.springframework.util.concurrent.ListenableFuture;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.springframework.kafka.test.hamcrest.KafkaMatchers.hasKey;
import static org.springframework.kafka.test.hamcrest.KafkaMatchers.hasValue;
/**
* @author jfreedman
*/
public class TestSpringKafka {
private static final String TOPIC1 = "spring.kafka.1.t";
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 1, TOPIC1);
@Test
public void submitMessageThenGarbageThenAnotherMessage() throws Exception {
final BlockingQueue<ConsumerRecord<String, JsonObject>> records = createListener(TOPIC1);
final KafkaTemplate<String, JsonObject> objectTemplate = createPublisher("json", new JsonSerializer<JsonObject>());
sendAndVerifyMessage(records, objectTemplate, "foo", new JsonObject("foo"), 0L);
// push some garbage text to Kafka which cannot be marshalled, this should not interrupt processing
final KafkaTemplate<String, String> garbageTemplate = createPublisher("garbage", new StringSerializer());
final SendResult<String, String> garbageResult = garbageTemplate.send(TOPIC1, "bar","bar").get(5, TimeUnit.SECONDS);
assertEquals(1L, garbageResult.getRecordMetadata().offset());
sendAndVerifyMessage(records, objectTemplate, "baz", new JsonObject("baz"), 2L);
}
private <T> KafkaTemplate<String, T> createPublisher(final String label, final Serializer<T> serializer) {
final Map<String, Object> producerProps = new HashMap<>();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
producerProps.put(ProducerConfig.CLIENT_ID_CONFIG, "TestPublisher-" + label);
producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
producerProps.put(ProducerConfig.RETRIES_CONFIG, 2);
producerProps.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
producerProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
producerProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, serializer.getClass());
final DefaultKafkaProducerFactory<String, T> pf = new DefaultKafkaProducerFactory<>(producerProps);
pf.setValueSerializer(serializer);
return new KafkaTemplate<>(pf);
}
private BlockingQueue<ConsumerRecord<String, JsonObject>> createListener(final String topic) throws Exception {
final Map<String, Object> consumerProps = new HashMap<>();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "TestConsumer");
consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
consumerProps.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
consumerProps.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
final DefaultKafkaConsumerFactory<String, JsonObject> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
cf.setValueDeserializer(new JsonDeserializer<>(JsonObject.class));
final KafkaMessageListenerContainer<String, JsonObject> container = new KafkaMessageListenerContainer<>(cf, new ContainerProperties(topic));
final BlockingQueue<ConsumerRecord<String, JsonObject>> records = new LinkedBlockingQueue<>();
container.setupMessageListener((MessageListener<String, JsonObject>) records::add);
container.setBeanName("TestListener");
container.start();
ContainerTestUtils.waitForAssignment(container, embeddedKafka.getPartitionsPerTopic());
return records;
}
private void sendAndVerifyMessage(final BlockingQueue<ConsumerRecord<String, JsonObject>> records,
final KafkaTemplate<String, JsonObject> template,
final String key, final JsonObject value,
final long expectedOffset) throws InterruptedException, ExecutionException, TimeoutException {
final ListenableFuture<SendResult<String, JsonObject>> future = template.send(TOPIC1, key, value);
final ConsumerRecord<String, JsonObject> record = records.poll(5, TimeUnit.SECONDS);
assertThat(record, hasKey(key));
assertThat(record, hasValue(value));
assertEquals(expectedOffset, future.get(5, TimeUnit.SECONDS).getRecordMetadata().offset());
}
public static final class JsonObject {
private String value;
public JsonObject() {}
JsonObject(final String value) {
this.value = value;
}
public String getValue() {
return value;
}
public void setValue(final String value) {
this.value = value;
}
@Override
public boolean equals(final Object o) {
if (this == o) { return true; }
if (o == null || getClass() != o.getClass()) { return false; }
final JsonObject that = (JsonObject) o;
return Objects.equals(value, that.value);
}
@Override
public int hashCode() {
return Objects.hash(value);
}
@Override
public String toString() {
return "JsonObject{" +
"value='" + value + '\'' +
'}';
}
}
}
I have a solution, but I don't know if it's the best one. I extended JsonDeserializer as follows, which results in a null value being consumed by spring-kafka and requires the necessary downstream changes to handle that case.
class SafeJsonDeserializer[A >: Null](targetType: Class[A], objectMapper: ObjectMapper) extends JsonDeserializer[A](targetType, objectMapper) with Logging {
override def deserialize(topic: String, data: Array[Byte]): A = try {
super.deserialize(topic, data)
} catch {
case e: Exception =>
logger.error("Failed to deserialize data [%s] from topic [%s]".format(new String(data), topic), e)
null
}
}
Starting from spring-kafka 2.x.x, we now have the comfort of declaring beans in the config file for the interface KafkaListenerErrorHandler, with an implementation something like:
@Bean
public ConsumerAwareListenerErrorHandler listen3ErrorHandler() {
return (m, e, c) -> {
this.listen3Exception = e;
MessageHeaders headers = m.getHeaders();
c.seek(new org.apache.kafka.common.TopicPartition(
headers.get(KafkaHeaders.RECEIVED_TOPIC, String.class),
headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class)),
headers.get(KafkaHeaders.OFFSET, Long.class));
return null;
};
}
More resources can be found at https://docs.spring.io/spring-kafka/reference/htmlsingle/#annotation-error-handling. There are also other links with similar issues: Spring Kafka error handling - v1.1.x and How to handle SerializationException after deserialization.
Use ErrorHandlingDeserializer2. This is a delegating key/value deserializer that catches exceptions, returning them in the headers as serialized java objects.
Under consumer configuration, add/update the below lines:
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
classOf[ErrorHandlingDeserializer2[JsonDeserializer]].getName)
configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[ErrorHandlingDeserializer2[StringDeserializer]].getName)
configProps.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, classOf[StringDeserializer].getName)
configProps.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, classOf[JsonDeserializer].getName)
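With that in place, a record that fails deserialization reaches the listener with a null value (and the exception serialized into a record header), so the listener can simply skip it. A Java sketch of such a listener (the topic name is the one from the test above):
@KafkaListener(topics = "spring.kafka.1.t")
public void listen(ConsumerRecord<String, JsonObject> record) {
if (record.value() == null) {
// deserialization failed; details are in the record headers
return;
}
// normal processing of record.value() here
}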

Customized parameter logging when using aspect oriented programming

All the examples I've seen that use aspect-oriented programming for logging either log just the class, method name and duration, or, if they log parameters and return values, they simply use toString(). I need to have more control over what is logged. For example, I want to skip passwords, or in some cases log all properties of an object but in other cases just the id property.
Any suggestions? I looked at AspectJ in Java and Unity interception in C# and could not find a solution.
You could try introducing parameter annotations to augment your parameters with some attributes. One of those attributes could signal to skip logging the parameter, another one could be used to specify a converter class for the string representation.
With the following annotations:
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Log {
}
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
public @interface SkipLogging {
}
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.PARAMETER)
public @interface ToStringWith {
Class<? extends Function<?, String>> value();
}
the aspect could look like this:
import java.lang.reflect.Parameter;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public aspect LoggingAspect {
private final static Logger logger = LoggerFactory.getLogger(LoggingAspect.class);
pointcut loggableMethod(): execution(@Log * *..*.*(..));
before(): loggableMethod() {
MethodSignature signature = (MethodSignature) thisJoinPoint.getSignature();
Parameter[] parameters = signature.getMethod()
.getParameters();
String message = IntStream.range(0, parameters.length)
.filter(i -> this.isLoggable(parameters[i]))
.<String>mapToObj(i -> toString(parameters[i], thisJoinPoint.getArgs()[i]))
.collect(Collectors.joining(", ",
"method execution " + signature.getName() + "(", ")"));
Logger methodLogger = LoggerFactory.getLogger(
thisJoinPointStaticPart.getSignature().getDeclaringType());
methodLogger.debug(message);
}
private boolean isLoggable(Parameter parameter) {
return parameter.getAnnotation(SkipLogging.class) == null;
}
private String toString(Parameter parameter, Object value) {
ToStringWith toStringWith = parameter.getAnnotation(ToStringWith.class);
if (toStringWith != null) {
Class<? extends Function<?, String>> converterClass =
toStringWith.value();
try {
@SuppressWarnings("unchecked")
Function<Object, String> converter = (Function<Object, String>)
converterClass.newInstance();
String str = converter.apply(value);
return String.format("%s='%s'", parameter.getName(), str);
} catch (Exception e) {
logger.error("Couldn't instantiate toString converter for logging "
+ converterClass.getName(), e);
return String.format("%s=<error converting to string>",
parameter.getName());
}
} else {
return String.format("%s='%s'", parameter.getName(), String.valueOf(value));
}
}
}
Test code:
public static class SomethingToStringConverter implements Function<Something, String> {
@Override
public String apply(Something something) {
return "Something nice";
}
}
@Log
public void test(
@ToStringWith(SomethingToStringConverter.class) Something something,
String string,
@SkipLogging Class<?> cls,
Object object) {
}
public static void main(String[] args) {
// execution of this method should log the following message:
// method execution test(something='Something nice', string='some string', object='null')
test(new Something(), "some string", Object.class, null);
}
I used the Java 8 Streams API in my answer for its compactness; you could convert the code to plain Java code if you don't use Java 8 features or need better efficiency. It's just to give you an idea.

Why does Hibernate HQL distinct cause an SQL distinct on left join?

I've got this test HQL:
select distinct o from Order o left join fetch o.lineItems
and it does generate an SQL distinct without an obvious reason:
select distinct order0_.id as id61_0_, orderline1_.order_id as order1_62_1_...
The SQL resultset is always the same (with and without an SQL distinct):
order id | order name | orderline id | orderline name
---------+------------+--------------+---------------
1 | foo | 1 | foo item
1 | foo | 2 | bar item
1 | foo | 3 | test item
2 | empty | NULL | NULL
3 | bar | 4 | qwerty item
3 | bar | 5 | asdfgh item
Why does hibernate generate the SQL distinct? The SQL distinct doesn't make any sense and makes the query slower than needed.
This is contrary to the FAQ which mentions that hql distinct in this case is just a shortcut for the result transformer:
session.createQuery("select distinct o
from Order o left join fetch
o.lineItems").list();
It looks like you are using the SQL DISTINCT keyword here. Of course, this is not SQL, this is HQL. This distinct is just a shortcut for the result transformer, in this case. Yes, in other cases an HQL distinct will translate straight into a SQL DISTINCT. Not in this case: you can not filter out duplicates at the SQL level, the very nature of a product/join forbids this - you want the duplicates or you don't get all the data you need.
thanks
Have a closer look at the sql statement that hibernate generates - yes it does use the "distinct" keyword but not in the way I think you are expecting it to (or the way that the Hibernate FAQ is implying) i.e. to return a set of "distinct" or "unique" orders.
It doesn't use the distinct keyword to return distinct orders, as that wouldn't make sense in that SQL query, considering the join that you have also specified.
The resulting sql set still needs processing by the ResultTransformer, as clearly the sql set contains duplicate orders. That's why they say that the HQL distinct keyword doesn't directly map to the SQL distinct keyword.
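A practical consequence: if the SQL DISTINCT bothers you, you can drop distinct from the HQL and apply the de-duplication yourself via the result transformer, so it happens in memory only (sketch for the Hibernate 3/4 Query API; DistinctRootEntityResultTransformer lives in org.hibernate.transform):
@SuppressWarnings("unchecked")
List<Order> orders = session
.createQuery("select o from Order o left join fetch o.lineItems")
.setResultTransformer(DistinctRootEntityResultTransformer.INSTANCE)
.list();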
I had the exact same problem, and I think this is a Hibernate issue (not a bug, because the code doesn't fail). However, I have to dig deeper to make sure it is an issue.
Hibernate (at least in version 4, which is the version I'm working with on my project, specifically 4.3.11) uses the concept of an SPI; long story short, it's like an API to extend or modify the framework.
I took advantage of this feature to replace the classes org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory (this class is called by Hibernate and delegates the job of generating the SQL query) and org.hibernate.hql.internal.ast.QueryTranslatorImpl (this is sort of an internal class which is called by org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory and generates the actual SQL query). I did it as follows:
Replacement for org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory:
package org.hibernate.hql.internal.ast;
import java.util.Map;
import org.hibernate.engine.query.spi.EntityGraphQueryHint;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.hql.spi.QueryTranslator;
public class NoDistinctInSQLASTQueryTranslatorFactory extends ASTQueryTranslatorFactory {
@Override
public QueryTranslator createQueryTranslator(String queryIdentifier, String queryString, Map filters, SessionFactoryImplementor factory, EntityGraphQueryHint entityGraphQueryHint) {
return new NoDistinctInSQLQueryTranslatorImpl(queryIdentifier, queryString, filters, factory, entityGraphQueryHint);
}
}
Replacement for org.hibernate.hql.internal.ast.QueryTranslatorImpl:
package org.hibernate.hql.internal.ast;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.hibernate.HibernateException;
import org.hibernate.MappingException;
import org.hibernate.QueryException;
import org.hibernate.ScrollableResults;
import org.hibernate.engine.query.spi.EntityGraphQueryHint;
import org.hibernate.engine.spi.QueryParameters;
import org.hibernate.engine.spi.RowSelection;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.event.spi.EventSource;
import org.hibernate.hql.internal.QueryExecutionRequestException;
import org.hibernate.hql.internal.antlr.HqlSqlTokenTypes;
import org.hibernate.hql.internal.antlr.HqlTokenTypes;
import org.hibernate.hql.internal.antlr.SqlTokenTypes;
import org.hibernate.hql.internal.ast.exec.BasicExecutor;
import org.hibernate.hql.internal.ast.exec.DeleteExecutor;
import org.hibernate.hql.internal.ast.exec.MultiTableDeleteExecutor;
import org.hibernate.hql.internal.ast.exec.MultiTableUpdateExecutor;
import org.hibernate.hql.internal.ast.exec.StatementExecutor;
import org.hibernate.hql.internal.ast.tree.AggregatedSelectExpression;
import org.hibernate.hql.internal.ast.tree.FromElement;
import org.hibernate.hql.internal.ast.tree.InsertStatement;
import org.hibernate.hql.internal.ast.tree.QueryNode;
import org.hibernate.hql.internal.ast.tree.Statement;
import org.hibernate.hql.internal.ast.util.ASTPrinter;
import org.hibernate.hql.internal.ast.util.ASTUtil;
import org.hibernate.hql.internal.ast.util.NodeTraverser;
import org.hibernate.hql.spi.FilterTranslator;
import org.hibernate.hql.spi.ParameterTranslations;
import org.hibernate.internal.CoreMessageLogger;
import org.hibernate.internal.util.ReflectHelper;
import org.hibernate.internal.util.StringHelper;
import org.hibernate.internal.util.collections.IdentitySet;
import org.hibernate.loader.hql.QueryLoader;
import org.hibernate.param.ParameterSpecification;
import org.hibernate.persister.entity.Queryable;
import org.hibernate.type.Type;
import org.jboss.logging.Logger;
import antlr.ANTLRException;
import antlr.RecognitionException;
import antlr.TokenStreamException;
import antlr.collections.AST;
/**
* A QueryTranslator that uses an Antlr-based parser.
*
* @author Joshua Davis (pgmjsd@sourceforge.net)
*/
public class NoDistinctInSQLQueryTranslatorImpl extends QueryTranslatorImpl implements FilterTranslator {
private static final CoreMessageLogger LOG = Logger.getMessageLogger(
CoreMessageLogger.class,
QueryTranslatorImpl.class.getName()
);
private SessionFactoryImplementor factory;
private final String queryIdentifier;
private String hql;
private boolean shallowQuery;
private Map tokenReplacements;
//TODO:this is only needed during compilation .. can we eliminate the instvar?
private Map enabledFilters;
private boolean compiled;
private QueryLoader queryLoader;
private StatementExecutor statementExecutor;
private Statement sqlAst;
private String sql;
private ParameterTranslations paramTranslations;
private List<ParameterSpecification> collectedParameterSpecifications;
private EntityGraphQueryHint entityGraphQueryHint;
/**
* Creates a new AST-based query translator.
*
* @param queryIdentifier The query-identifier (used in stats collection)
* @param query The hql query to translate
* @param enabledFilters Currently enabled filters
* @param factory The session factory constructing this translator instance.
*/
public NoDistinctInSQLQueryTranslatorImpl(
String queryIdentifier,
String query,
Map enabledFilters,
SessionFactoryImplementor factory) {
super(queryIdentifier, query, enabledFilters, factory);
this.queryIdentifier = queryIdentifier;
this.hql = query;
this.compiled = false;
this.shallowQuery = false;
this.enabledFilters = enabledFilters;
this.factory = factory;
}
public NoDistinctInSQLQueryTranslatorImpl(
String queryIdentifier,
String query,
Map enabledFilters,
SessionFactoryImplementor factory,
EntityGraphQueryHint entityGraphQueryHint) {
this(queryIdentifier, query, enabledFilters, factory);
this.entityGraphQueryHint = entityGraphQueryHint;
}
/**
* Compile a "normal" query. This method may be called multiple times.
* Subsequent invocations are no-ops.
*
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @throws QueryException There was a problem parsing the query string.
* @throws MappingException There was a problem querying defined mappings.
*/
@Override
public void compile(
Map replacements,
boolean shallow) throws QueryException, MappingException {
doCompile(replacements, shallow, null);
}
/**
* Compile a filter. This method may be called multiple times. Subsequent
* invocations are no-ops.
*
* @param collectionRole the role name of the collection used as the basis
* for the filter.
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @throws QueryException There was a problem parsing the query string.
* @throws MappingException There was a problem querying defined mappings.
*/
@Override
public void compile(
String collectionRole,
Map replacements,
boolean shallow) throws QueryException, MappingException {
doCompile(replacements, shallow, collectionRole);
}
/**
* Performs both filter and non-filter compiling.
*
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @param collectionRole the role name of the collection used as the basis
* for the filter, NULL if this is not a filter.
*/
private synchronized void doCompile(Map replacements, boolean shallow, String collectionRole) {
// If the query is already compiled, skip the compilation.
if (compiled) {
LOG.debug("compile() : The query is already compiled, skipping...");
return;
}
// Remember the parameters for the compilation.
this.tokenReplacements = replacements;
if (tokenReplacements == null) {
tokenReplacements = new HashMap();
}
this.shallowQuery = shallow;
try {
// PHASE 1 : Parse the HQL into an AST.
final HqlParser parser = parse(true);
// PHASE 2 : Analyze the HQL AST, and produce an SQL AST.
final HqlSqlWalker w = analyze(parser, collectionRole);
sqlAst = (Statement) w.getAST();
// at some point the generate phase needs to be moved out of here,
// because a single object-level DML might spawn multiple SQL DML
// command executions.
//
// Possible to just move the sql generation for dml stuff, but for
// consistency-sake probably best to just move responsibility for
// the generation phase completely into the delegates
// (QueryLoader/StatementExecutor) themselves. Also, not sure why
// QueryLoader currently even has a dependency on this at all; does
// it need it? Ideally like to see the walker itself given to the delegates directly...
if (sqlAst.needsExecutor()) {
statementExecutor = buildAppropriateStatementExecutor(w);
} else {
// PHASE 3 : Generate the SQL.
generate((QueryNode) sqlAst);
queryLoader = new QueryLoader(this, factory, w.getSelectClause());
}
compiled = true;
} catch (QueryException qe) {
if (qe.getQueryString() == null) {
throw qe.wrapWithQueryString(hql);
} else {
throw qe;
}
} catch (RecognitionException e) {
// we do not actually propagate ANTLRExceptions as a cause, so
// log it here for diagnostic purposes
LOG.trace("Converted antlr.RecognitionException", e);
throw QuerySyntaxException.convert(e, hql);
} catch (ANTLRException e) {
// we do not actually propagate ANTLRExceptions as a cause, so
// log it here for diagnostic purposes
LOG.trace("Converted antlr.ANTLRException", e);
throw new QueryException(e.getMessage(), hql);
}
//only needed during compilation phase...
this.enabledFilters = null;
}
private void generate(AST sqlAst) throws QueryException, RecognitionException {
if (sql == null) {
final SqlGenerator gen = new SqlGenerator(factory);
gen.statement(sqlAst);
sql = gen.getSQL();
//Hack: The distinct keyword is removed from the SQL string to
//avoid executing a distinct query in the DB server when DISTINCT
//is used in the HQL.
sql = sql.replace("distinct", "");
if (LOG.isDebugEnabled()) {
LOG.debugf("HQL: %s", hql);
LOG.debugf("SQL: %s", sql);
}
gen.getParseErrorHandler().throwQueryException();
collectedParameterSpecifications = gen.getCollectedParameters();
}
}
private static final ASTPrinter SQL_TOKEN_PRINTER = new ASTPrinter(SqlTokenTypes.class);
private HqlSqlWalker analyze(HqlParser parser, String collectionRole) throws QueryException, RecognitionException {
final HqlSqlWalker w = new HqlSqlWalker(this, factory, parser, tokenReplacements, collectionRole);
final AST hqlAst = parser.getAST();
// Transform the tree.
w.statement(hqlAst);
if (LOG.isDebugEnabled()) {
LOG.debug(SQL_TOKEN_PRINTER.showAsString(w.getAST(), "--- SQL AST ---"));
}
w.getParseErrorHandler().throwQueryException();
return w;
}
private HqlParser parse(boolean filter) throws TokenStreamException, RecognitionException {
// Parse the query string into an HQL AST.
final HqlParser parser = HqlParser.getInstance(hql);
parser.setFilter(filter);
LOG.debugf("parse() - HQL: %s", hql);
parser.statement();
final AST hqlAst = parser.getAST();
final NodeTraverser walker = new NodeTraverser(new JavaConstantConverter());
walker.traverseDepthFirst(hqlAst);
showHqlAst(hqlAst);
parser.getParseErrorHandler().throwQueryException();
return parser;
}
private static final ASTPrinter HQL_TOKEN_PRINTER = new ASTPrinter(HqlTokenTypes.class);
@Override
void showHqlAst(AST hqlAst) {
if (LOG.isDebugEnabled()) {
LOG.debug(HQL_TOKEN_PRINTER.showAsString(hqlAst, "--- HQL AST ---"));
}
}
private void errorIfDML() throws HibernateException {
if (sqlAst.needsExecutor()) {
throw new QueryExecutionRequestException("Not supported for DML operations", hql);
}
}
private void errorIfSelect() throws HibernateException {
if (!sqlAst.needsExecutor()) {
throw new QueryExecutionRequestException("Not supported for select queries", hql);
}
}
@Override
public String getQueryIdentifier() {
return queryIdentifier;
}
@Override
public Statement getSqlAST() {
return sqlAst;
}
private HqlSqlWalker getWalker() {
return sqlAst.getWalker();
}
/**
 * Types of the return values of an <tt>iterate()</tt> style query.
 *
 * @return an array of <tt>Type</tt>s.
 */
@Override
public Type[] getReturnTypes() {
errorIfDML();
return getWalker().getReturnTypes();
}
@Override
public String[] getReturnAliases() {
errorIfDML();
return getWalker().getReturnAliases();
}
@Override
public String[][] getColumnNames() {
errorIfDML();
return getWalker().getSelectClause().getColumnNames();
}
@Override
public Set<Serializable> getQuerySpaces() {
return getWalker().getQuerySpaces();
}
@Override
public List list(SessionImplementor session, QueryParameters queryParameters)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
final QueryNode query = (QueryNode) sqlAst;
final boolean hasLimit = queryParameters.getRowSelection() != null && queryParameters.getRowSelection().definesLimits();
final boolean needsDistincting = (query.getSelectClause().isDistinct() || hasLimit) && containsCollectionFetches();
QueryParameters queryParametersToUse;
if (hasLimit && containsCollectionFetches()) {
LOG.firstOrMaxResultsSpecifiedWithCollectionFetch();
RowSelection selection = new RowSelection();
selection.setFetchSize(queryParameters.getRowSelection().getFetchSize());
selection.setTimeout(queryParameters.getRowSelection().getTimeout());
queryParametersToUse = queryParameters.createCopyUsing(selection);
} else {
queryParametersToUse = queryParameters;
}
List results = queryLoader.list(session, queryParametersToUse);
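// When the select clause is DISTINCT, or row limits are combined with
// collection fetches, the root entities are de-duplicated in memory
// (by reference identity) and the first/max row window is applied
// manually, since the database cannot do this reliably in those cases: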
if (needsDistincting) {
int includedCount = -1;
// NOTE : firstRow is zero-based
int first = !hasLimit || queryParameters.getRowSelection().getFirstRow() == null
? 0
: queryParameters.getRowSelection().getFirstRow();
int max = !hasLimit || queryParameters.getRowSelection().getMaxRows() == null
? -1
: queryParameters.getRowSelection().getMaxRows();
List tmp = new ArrayList();
IdentitySet distinction = new IdentitySet();
for (final Object result : results) {
if (!distinction.add(result)) {
continue;
}
includedCount++;
if (includedCount < first) {
continue;
}
tmp.add(result);
// NOTE : ( max - 1 ) because first is zero-based while max is not...
if (max >= 0 && (includedCount - first) >= (max - 1)) {
break;
}
}
results = tmp;
}
return results;
}
/**
* Return the query results as an iterator
*/
@Override
public Iterator iterate(QueryParameters queryParameters, EventSource session)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
return queryLoader.iterate(queryParameters, session);
}
/**
* Return the query results, as an instance of <tt>ScrollableResults</tt>
*/
@Override
public ScrollableResults scroll(QueryParameters queryParameters, SessionImplementor session)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
return queryLoader.scroll(queryParameters, session);
}
@Override
public int executeUpdate(QueryParameters queryParameters, SessionImplementor session)
throws HibernateException {
errorIfSelect();
return statementExecutor.execute(queryParameters, session);
}
/**
* The SQL query string to be called; implemented by all subclasses
*/
@Override
public String getSQLString() {
return sql;
}
@Override
public List<String> collectSqlStrings() {
ArrayList<String> list = new ArrayList<>();
if (isManipulationStatement()) {
String[] sqlStatements = statementExecutor.getSqlStatements();
Collections.addAll(list, sqlStatements);
} else {
list.add(sql);
}
return list;
}
// -- Package local methods for the QueryLoader delegate --
@Override
public boolean isShallowQuery() {
return shallowQuery;
}
@Override
public String getQueryString() {
return hql;
}
@Override
public Map getEnabledFilters() {
return enabledFilters;
}
@Override
public int[] getNamedParameterLocs(String name) {
return getWalker().getNamedParameterLocations(name);
}
@Override
public boolean containsCollectionFetches() {
errorIfDML();
List collectionFetches = ((QueryNode) sqlAst).getFromClause().getCollectionFetches();
return collectionFetches != null && collectionFetches.size() > 0;
}
@Override
public boolean isManipulationStatement() {
return sqlAst.needsExecutor();
}
@Override
public void validateScrollability() throws HibernateException {
// Impl Note: allows multiple collection fetches as long as the
// entire fetched graph still "points back" to a single
// root entity for return
errorIfDML();
final QueryNode query = (QueryNode) sqlAst;
// If there are no collection fetches, then no further checks are needed
List collectionFetches = query.getFromClause().getCollectionFetches();
if (collectionFetches.isEmpty()) {
return;
}
// A shallow query is ok (although technically there should be no fetching here...)
if (isShallowQuery()) {
return;
}
// Otherwise, we have a non-scalar select with defined collection fetch(es).
// Make sure that there is only a single root entity in the return (no tuples)
if (getReturnTypes().length > 1) {
throw new HibernateException("cannot scroll with collection fetches and returned tuples");
}
FromElement owner = null;
for (Object o : query.getSelectClause().getFromElementsForLoad()) {
// should be the first, but just to be safe...
final FromElement fromElement = (FromElement) o;
if (fromElement.getOrigin() == null) {
owner = fromElement;
break;
}
}
if (owner == null) {
throw new HibernateException("unable to locate collection fetch(es) owner for scrollability checks");
}
// This is not strictly true. We actually just need to make sure that
// it is ordered by root-entity PK and that that order-by comes before
// any non-root-entity ordering...
AST primaryOrdering = query.getOrderByClause().getFirstChild();
if (primaryOrdering != null) {
// TODO : this is a bit dodgy, come up with a better way to check this (plus see above comment)
String[] idColNames = owner.getQueryable().getIdentifierColumnNames();
String expectedPrimaryOrderSeq = StringHelper.join(
", ",
StringHelper.qualify(owner.getTableAlias(), idColNames)
);
if (!primaryOrdering.getText().startsWith(expectedPrimaryOrderSeq)) {
throw new HibernateException("cannot scroll results with collection fetches which are not ordered primarily by the root entity's PK");
}
}
}
private StatementExecutor buildAppropriateStatementExecutor(HqlSqlWalker walker) {
final Statement statement = (Statement) walker.getAST();
switch (walker.getStatementType()) {
case HqlSqlTokenTypes.DELETE: {
final FromElement fromElement = walker.getFinalFromClause().getFromElement();
final Queryable persister = fromElement.getQueryable();
if (persister.isMultiTable()) {
return new MultiTableDeleteExecutor(walker);
} else {
return new DeleteExecutor(walker, persister);
}
}
case HqlSqlTokenTypes.UPDATE: {
final FromElement fromElement = walker.getFinalFromClause().getFromElement();
final Queryable persister = fromElement.getQueryable();
if (persister.isMultiTable()) {
// even here, if only properties mapped to the "base table" are referenced
// in the set and where clauses, this could be handled by the BasicDelegate.
// TODO : decide if it is better performance-wise to doAfterTransactionCompletion that check, or to simply use the MultiTableUpdateDelegate
return new MultiTableUpdateExecutor(walker);
} else {
return new BasicExecutor(walker, persister);
}
}
case HqlSqlTokenTypes.INSERT:
return new BasicExecutor(walker, ((InsertStatement) statement).getIntoClause().getQueryable());
default:
throw new QueryException("Unexpected statement type");
}
}
@Override
public ParameterTranslations getParameterTranslations() {
if (paramTranslations == null) {
paramTranslations = new ParameterTranslationsImpl(getWalker().getParameters());
}
return paramTranslations;
}
@Override
public List<ParameterSpecification> getCollectedParameterSpecifications() {
return collectedParameterSpecifications;
}
@Override
public Class getDynamicInstantiationResultType() {
AggregatedSelectExpression aggregation = queryLoader.getAggregatedSelectExpression();
return aggregation == null ? null : aggregation.getAggregationResultType();
}
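/**
 * Visitor that detects dot-separated references to Java constants (e.g.
 * static finals or enum values) in the HQL AST and collapses them into
 * single JAVA_CONSTANT nodes, so they are rendered as literals rather
 * than interpreted as property paths.
 */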
public static class JavaConstantConverter implements NodeTraverser.VisitationStrategy {
private AST dotRoot;
@Override
public void visit(AST node) {
if (dotRoot != null) {
// we are already processing a dot-structure
if (ASTUtil.isSubtreeChild(dotRoot, node)) {
return;
}
// we are now at a new tree level
dotRoot = null;
}
if (node.getType() == HqlTokenTypes.DOT) {
dotRoot = node;
handleDotStructure(dotRoot);
}
}
private void handleDotStructure(AST dotStructureRoot) {
final String expression = ASTUtil.getPathText(dotStructureRoot);
final Object constant = ReflectHelper.getConstantValue(expression);
if (constant != null) {
dotStructureRoot.setFirstChild(null);
dotStructureRoot.setType(HqlTokenTypes.JAVA_CONSTANT);
dotStructureRoot.setText(expression);
}
}
}
@Override
public EntityGraphQueryHint getEntityGraphQueryHint() {
return entityGraphQueryHint;
}
@Override
public void setEntityGraphQueryHint(EntityGraphQueryHint entityGraphQueryHint) {
this.entityGraphQueryHint = entityGraphQueryHint;
}
}
If you follow the code flow, you will notice that I just modified the method private void generate(AST sqlAst) throws QueryException, RecognitionException and added the following lines:
//Hack: The distinct keyword is removed from the SQL string to
//avoid executing a distinct query in the DB server when DISTINCT
//is used in the HQL.
sql = sql.replace("distinct", "");
What I do with this code is remove the distinct keyword from the generated SQL string before it is handed to the database.
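Be aware that String.replace does a plain substring replacement, so this hack would also mangle identifiers that merely contain the word distinct (e.g. a column named distinct_id would become _id). A minimal, more defensive sketch (the helper name and regex are my own illustration, not part of Hibernate) would only strip the keyword where it directly follows select:
private static String stripDistinct(String sql) {
// Case-insensitive, word-boundary match: only remove DISTINCT when it
// immediately follows SELECT, leaving identifiers that contain the
// substring "distinct" untouched.
return sql.replaceAll("(?i)\\bselect\\s+distinct\\b", "select");
}
You could then call stripDistinct(sql) in generate() instead of the blanket replace.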
After creating the classes above, I added the following line to the Hibernate configuration file:
<property name="hibernate.query.factory_class">org.hibernate.hql.internal.ast.NoDistinctInSQLASTQueryTranslatorFactory</property>
This line tells Hibernate to use my custom classes to parse HQL queries and generate SQL queries without the distinct keyword. Notice that I created my custom classes in the same package where the original HQL parser resides; this is required because the subclass overrides the package-private method showHqlAst, which is only visible from within that package.
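If you bootstrap Hibernate through JPA rather than hibernate.cfg.xml, the same switch should be settable as a provider property in persistence.xml (this relies on the standard Hibernate property pass-through and is untested with this particular translator):
<property name="hibernate.query.factory_class" value="org.hibernate.hql.internal.ast.NoDistinctInSQLASTQueryTranslatorFactory"/>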