Delete query in gemfire cache? - gemfire

When I run a SELECT query in the gfsh console, it works as expected:
query --query="SELECT * FROM /channelProfiles_composite_asrd WHERE profile.channelCode='JJ'"
But a similar DELETE query fails:
query --query="DELETE * FROM /channelProfiles_composite_asrd WHERE profile.channelCode='JJ'"
Result : false
startCount : 0
endCount : 20
Message : Query is invalid due for error : <Syntax error in query: unexpected token: FROM>
NEXT_STEP_NAME : END
Does GemFire support DELETE?

Geode / GemFire OQL unfortunately doesn't support DELETE. You would have to iterate over your result set and delete 'manually'.
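A minimal client-side sketch of that manual approach (exception handling omitted), assuming an already-connected ClientCache named cache and the region and field names from the question; the package names below are the Apache Geode ones, older GemFire releases use com.gemstone.gemfire instead:
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.query.SelectResults;
// Select the keys of the matching entries, then remove them one by one.
Region<Object, Object> region = cache.getRegion("channelProfiles_composite_asrd");
SelectResults<Object> keys = (SelectResults<Object>) cache.getQueryService()
    .newQuery("SELECT e.key FROM /channelProfiles_composite_asrd.entries e"
        + " WHERE e.value.profile.channelCode = 'JJ'")
    .execute();
for (Object key : keys) {
    region.remove(key);
}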

From the gfsh prompt you can use remove commands as shown below.
Removing specific keys:
For data with String keys:
remove --region=RegionName --key=abc
For data with other object key types you must also specify the key class, as below:
remove --region=RegionName --key-class=java.lang.Integer --key=1
For a large number of records to be deleted, what I do is find the list of keys to be deleted with a SELECT query, then build a script of the above remove commands and run them together.
For removing all region data:
remove --region=RegionName --all
Otherwise you will need to write a Java program that deletes the region data using the GemFire API.
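If you do write that Java program, a minimal sketch (assuming a peer cache member named cache; Apache Geode package names, older GemFire uses com.gemstone.gemfire) could be:
import org.apache.geode.cache.Region;
Region<Object, Object> region = cache.getRegion("RegionName");
if (region.getAttributes().getDataPolicy().withPartitioning()) {
    // clear() is not supported on partitioned regions, so remove the keys explicitly.
    // On a client PROXY region you would use region.keySetOnServer() instead of keySet().
    region.removeAll(region.keySet());
} else {
    region.clear();
}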

Here is a GemFire function I wrote that will clear out a region. This code could be a lot shorter if you threw out the fluff and extra features we never ended up using. Also in hindsight I didn't need to "synchronize" the list of regions being cleared because the odds of two people calling the function in the same microsecond are virtually zero.
We use it in both GemFire 7 and GemFire 8 clusters. Once you put this in a jar and install it, you can call this function from gfsh to clear a region.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import com.gemstone.gemfire.LogWriter;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.Declarable;
import com.gemstone.gemfire.cache.EntryNotFoundException;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.execute.Function;
import com.gemstone.gemfire.cache.execute.FunctionContext;
import com.gemstone.gemfire.cache.execute.RegionFunctionContext;
import com.gemstone.gemfire.cache.partition.PartitionRegionHelper;
import com.gemstone.gemfire.distributed.DistributedSystem;
public class ClearRegionFunction implements Function, Declarable {
private static final long serialVersionUID = 1L;
private static LogWriter log;
private static List<String> clearingRegionList = new ArrayList<String>();
static {
DistributedSystem ds = CacheFactory.getAnyInstance().getDistributedSystem();
log = ds.getLogWriter();
}
@Override
public void execute(FunctionContext fc) {
RegionFunctionContext rfc = (RegionFunctionContext) fc;
Region region = rfc.getDataSet();
String regionName = region.getName();
//If passed a flag of "true", that says to simulate the clear, but don't actually clear.
//This is used to test if a clear is already in progress, in which case we'd return false.
Boolean simulate = (Boolean)rfc.getArguments();
log.fine("Argument passed = " + simulate);
if (simulate == null) simulate = false;
if (simulate) {
rfc.getResultSender().lastResult( ! clearingRegionList.contains(regionName));
return;
}
log.warning("Clearing region: " + regionName); // Used "warning" because clearing a region is serious.
try {
// Protect against the same region being cleared twice at the same time.
synchronized (clearingRegionList) {
if (clearingRegionList.contains(regionName)) {
log.error("Clear of region " + regionName + " is already in progress. Aborting.");
// Let the client know we ignored their "clear" request.
rfc.getResultSender().lastResult(false);
return;
}
clearingRegionList.add(regionName);
}
if (!PartitionRegionHelper.isPartitionedRegion(region)) {
region.clear();
rfc.getResultSender().lastResult(true);
} else {
// We are going to clear the region in a partitioned manner, each node only clearing
// the data in its own node. So we need to get the "local" region for the node.
Region localRegion = PartitionRegionHelper.getLocalDataForContext(rfc);
// Beware, this keySet() is a reference to the actual LIVE key set in memory. So
// we need to clone the set of keys we want to delete, otherwise we'll be looping
// through a live list and potentially deleting items that were added after the
// delete started.
List keyList = new ArrayList(localRegion.keySet());
// Once we have the keys, go ahead and set the lastResult to "true" to
// unblock the caller, because this could take a while. (The caller doesn't actually
// unblock until ALL nodes have returned "true".)
rfc.getResultSender().lastResult(true);
int regionSize = keyList.size();
log.info("Region " + regionName + " has " + regionSize + " entries to clear.");
int count = 0;
for (Object key : keyList) {
//The "remove" method returns the object removed. This is bad because it (sometimes?) causes
//GemFire to try and deserialize the object, and that fails because we don't have the class on
//our server classpath. But if we invalidate first, it destroys the entry object without
//deserializing it. Then "remove" cleans up the key.
try {
localRegion.invalidate(key);
localRegion.remove(key);
} catch (EntryNotFoundException enfe) { //If the entry has disappeared (or expired) by the time we try to remove it,
//then the GemFire API will throw an exception. But this is okay.
log.warning("Entry not found for key = " + key.toString(), enfe);
}
count++;
// Every 10000 is frequent enough to give you a quick pulse, but
// not so frequent as to spam your log.
if (count % 10000 == 0) {
log.info("Cleared " + count + "/" + regionSize + " entries for region " + regionName);
}
}
}
log.warning("Region cleared: " + regionName);
synchronized (clearingRegionList) {
clearingRegionList.remove(regionName);
}
} catch (RuntimeException rex) {
// Make sure we clean up our tracking list even in the unlikely event of a blowup.
clearingRegionList.remove(regionName);
log.error(rex.toString(), rex); // Log AND throw is bad, but from my experience, a RunTimeException
// CAN get sent all the way back to the client and never show
// up in gemfire.log. (If the exception happens before last result)
throw rex;
}
}
@Override
public String getId() {
return "clear-region-function";
}
@Override
public void init(Properties arg0) { }
@Override
public boolean hasResult() { return true; }
@Override
public boolean isHA() { return true; }
@Override
public boolean optimizeForWrite() {return true;}
}
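Deploying the jar and invoking the function from gfsh then looks roughly like this (the jar path and region name are placeholders):
deploy --jar=/path/to/clear-region-function.jar
execute function --id=clear-region-function --region=/RegionName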

We use GemFire, and I ended up writing a Function to wipe out an entire region. This executes much faster than a simple client-side loop deleting entries one at a time, because the Function is distributed, and each node only clears the entries local to that node.
And Functions can be executed from gfsh, so it's pretty easy to use.
I could share the source code for this Function if that would be of use?

You need to use a GemFire function to clear the region; it is a very fast and optimized way to delete records from GemFire regions. Find the detailed code in the following GitHub repo:
https://github.com/vaquarkhan/geode-functions/tree/master/src/main/java/org/apache/geode/functions
POM:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>GemFireRemoveAllDataFunction</groupId>
<artifactId>GemFireRemoveAllDataFunction</artifactId>
<version>0.0.1-SNAPSHOT</version>
<build>
<sourceDirectory>src</sourceDirectory>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.7.0</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>io.pivotal.gemfire</groupId>
<artifactId>geode-core</artifactId>
<version>9.6.0</version>
</dependency>
</dependencies>
</project>
Function :
package com.khan.viquar.gemfire;
import java.util.ArrayList;
import java.util.List;
import org.apache.geode.cache.Declarable;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;
#SuppressWarnings("rawtypes")
public class ClearRegionRemoveAllDataFunction implements Function, Declarable {
private static final long serialVersionUID = 11L;
private static final int batchSize = 30000;
#SuppressWarnings("unchecked")
public void execute(final FunctionContext ctx) {
if (ctx instanceof RegionFunctionContext) {
final RegionFunctionContext rfc = (RegionFunctionContext) ctx;
try {
final Region<Object, Object> region = rfc.getDataSet();
if (PartitionRegionHelper.isPartitionedRegion(region)) {
clear(PartitionRegionHelper.getLocalDataForContext(rfc));
} else {
clear(region);
}
ctx.getResultSender().lastResult("Success");
} catch (final Throwable t) {
rfc.getResultSender().sendException(t);
}
} else {
ctx.getResultSender().lastResult("ERROR: The function must be executed on region!");
}
}
private void clear(final Region<Object, Object> localRegion) {
int numLocalEntries = localRegion.keySet().size();
if (numLocalEntries <= batchSize) {
localRegion.removeAll(localRegion.keySet());
} else {
final List<Object> buffer = new ArrayList<Object>(batchSize);
int count = 0;
for (final Object k : localRegion.keySet()) {
buffer.add(k);
count++;
if (count == batchSize) {
localRegion.removeAll(buffer);
buffer.clear();
count = 0;
} else {
continue;
}
}
localRegion.removeAll(buffer);
}
}
public boolean hasResult() {
return true;
}
public String getId() {
return ClearRegionRemoveAllDataFunction.class.getSimpleName();
}
public boolean optimizeForWrite() {
return true;
}
public boolean isHA() {
return true;
}
}
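Once the jar is deployed, the function can also be invoked from a Java client; a minimal sketch (assuming region is a connected client region) might look like:
import org.apache.geode.cache.execute.FunctionService;
import org.apache.geode.cache.execute.ResultCollector;
// The id is what getId() returns, i.e. the class's simple name.
ResultCollector<?, ?> collector = FunctionService.onRegion(region)
    .execute("ClearRegionRemoveAllDataFunction");
System.out.println(collector.getResult());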

Related

BinaryInvalidTypeException in Ignite Remote Filter

The following code is based on a combination of Ignite's CacheQueryExample and CacheContinuousQueryExample.
The code starts a fat Ignite client. Three organizations are created in the cache and we are listening to the updates to the cache. The remote filter is set to trigger the continuous query if the organization name is "Google". Peer class loading is enabled by the default examples xml config file (example-ignite.xml), so the expectation is that the remote node is aware of the Organization class.
However the following exceptions are shown in the Ignite server's console (one for each cache entry) and all three records are returned to the client in the continuous query's event handler instead of just the "Google" record. If the filter is changed to check on the key instead of the value, the correct behavior is observed and a single record is returned to the local listener.
[08:28:43,302][SEVERE][sys-stripe-1-#2][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:51,819][SEVERE][sys-stripe-2-#3][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:52,692][SEVERE][sys-stripe-3-#4][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
To run the code
Start an ignite server using examples/config/example-ignite.xml as the configuration file.
Replace the content of Ignite's CacheContinuousQueryExample.java with the following code. You may have to change the path to the configuration file to an absolute path.
package org.apache.ignite.examples.datagrid;
import javax.cache.Cache;
import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.AffinityKey;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.examples.ExampleNodeStartup;
import org.apache.ignite.examples.model.Organization;
import org.apache.ignite.examples.model.Person;
import org.apache.ignite.lang.IgniteBiPredicate;
import java.util.Collection;
/**
* This examples demonstrates continuous query API.
* <p>
* Remote nodes should always be started with special configuration file which
* enables P2P class loading: {@code 'ignite.{sh|bat} examples/config/example-ignite.xml'}.
* <p>
* Alternatively you can run {@link ExampleNodeStartup} in another JVM which will
* start node with {@code examples/config/example-ignite.xml} configuration.
*/
public class CacheContinuousQueryExample {
/** Organizations cache name. */
private static final String ORG_CACHE = CacheQueryExample.class.getSimpleName() + "Organizations";
/**
* Executes example.
*
* @param args Command line arguments, none required.
* @throws Exception If example execution failed.
*/
public static void main(String[] args) throws Exception {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
System.out.println();
System.out.println(">>> Cache continuous query example started.");
CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
// Auto-close cache at the end of the example.
try {
ignite.getOrCreateCache(orgCacheCfg);
// Create new continuous query.
ContinuousQuery<Long, Organization> qry = new ContinuousQuery<>();
// Callback that is called locally when update notifications are received.
qry.setLocalListener(new CacheEntryUpdatedListener<Long, Organization>() {
@Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends Organization>> evts) {
for (CacheEntryEvent<? extends Long, ? extends Organization> e : evts)
System.out.println("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
}
});
// This filter will be evaluated remotely on all nodes.
// Entry that pass this filter will be sent to the caller.
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, Organization>>() {
@Override public CacheEntryEventFilter<Long, Organization> create() {
return new CacheEntryEventFilter<Long, Organization>() {
@Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends Organization> e) {
//return e.getKey() == 3;
return e.getValue().name().equals("Google");
}
};
}
});
ignite.getOrCreateCache(ORG_CACHE).query(qry);
// Populate caches.
initialize();
Thread.sleep(2000);
}
finally {
// Distributed cache could be removed from cluster only by #destroyCache() call.
ignite.destroyCache(ORG_CACHE);
}
}
}
/**
* Populate cache with test data.
*/
private static void initialize() {
IgniteCache<Long, Organization> orgCache = Ignition.ignite().cache(ORG_CACHE);
// Clear cache before running the example.
orgCache.clear();
// Organizations.
Organization org1 = new Organization("ApacheIgnite");
Organization org2 = new Organization("Apple");
Organization org3 = new Organization("Google");
orgCache.put(org1.id(), org1);
orgCache.put(org2.id(), org2);
orgCache.put(org3.id(), org3);
}
}
Here is an interim workaround that involves using and deserializing binary objects. Hopefully, someone can post a proper solution.
Here is the main() function modified to work with BinaryObjects instead of the Organization object:
public static void main(String[] args) throws Exception {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
System.out.println();
System.out.println(">>> Cache continuous query example started.");
CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
// Auto-close cache at the end of the example.
try {
ignite.getOrCreateCache(orgCacheCfg);
// Create new continuous query.
ContinuousQuery<Long, BinaryObject> qry = new ContinuousQuery<>();
// Callback that is called locally when update notifications are received.
qry.setLocalListener(new CacheEntryUpdatedListener<Long, BinaryObject>() {
@Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends BinaryObject>> evts) {
for (CacheEntryEvent<? extends Long, ? extends BinaryObject> e : evts) {
Organization org = e.getValue().deserialize();
System.out.println("Updated entry [key=" + e.getKey() + ", val=" + org + ']');
}
}
});
// This filter will be evaluated remotely on all nodes.
// Entry that pass this filter will be sent to the caller.
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, BinaryObject>>() {
@Override public CacheEntryEventFilter<Long, BinaryObject> create() {
return new CacheEntryEventFilter<Long, BinaryObject>() {
@Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends BinaryObject> e) {
//return e.getKey() == 3;
//return e.getValue().name().equals("Google");
return e.getValue().field("name").equals("Google");
}
};
}
});
ignite.getOrCreateCache(ORG_CACHE).withKeepBinary().query(qry);
// Populate caches.
initialize();
Thread.sleep(2000);
}
finally {
// Distributed cache could be removed from cluster only by #destroyCache() call.
ignite.destroyCache(ORG_CACHE);
}
}
}
Peer class loading is enabled ... so the expectation is that the remote node is aware of the Organization class.
This is the problem. You can't peer class load "model" objects, i.e., objects used to create the table.
Two solutions:
Deploy the model class(es) to the server ahead of time. The rest of the code -- the filters -- can be peer class loaded
As @rgb1380 demonstrates, you can use BinaryObjects, which is the underlying data format
Another small point: to use "autoclose" you need to structure your code like this:
// Auto-close cache at the end of the example.
try (var cache = ignite.getOrCreateCache(orgCacheCfg)) {
// do stuff
}

Spring AOP - passing arguments between annotated methods

I've written a utility to monitor individual business transactions. For example, Alice calls a method which calls more methods, and I want info on just Alice's call, separate from Bob's call to the same method.
Right now the entry point creates a Transaction object and it's passed as an argument to each method:
class Example {
public Item getOrderEntryPoint(int orderId) {
Transaction transaction = transactionManager.create();
transaction.trace("getOrderEntryPoint");
Order order = getOrder(orderId, transaction);
transaction.stop();
logger.info(transaction);
return item;
}
private Order getOrder(int orderId, Transaction t) {
t.trace("getOrder");
Order order = getItems(itemId, t);
t.addStat("number of items", order.getItems().size());
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item, t);
if (null != offer) {
t.incrementStat("offers", 1);
}
}
t.stop();
return order;
}
private SpecialOffer getSpecialOffer(Item item, Transaction t) {
t.trace("getSpecialOffer(" + item.id + ")", TraceCategory.Database);
return offerRepository.getByItem(item);
t.stop();
}
}
This will print to the log something like:
Transaction started by Alice at 10:42
Statistics:
number of items : 3
offers : 1
Category Timings (longest first):
DB : 2s 903ms
code : 187ms
Timings (longest first):
getSpecialOffer(1013) : 626ms
getItems : 594ms
Trace:
getOrderEntryPoint (7ms)
getOrder (594ms)
getSpecialOffer(911) (90ms)
getSpecialOffer(1013) (626ms)
getSpecialOffer(2942) (113ms)
It works great but passing the transaction object around is ugly. Someone suggested AOP but I don't see how to pass the transaction created in the first method to all the other methods.
The Transaction object is pretty simple:
public class Transaction {
private String uuid = UUID.createRandom();
private List<TraceEvent> events = new ArrayList<>();
private Map<String,Int> stats = new HashMap<>();
}
class TraceEvent {
private String name;
private long durationInMs;
}
The app that uses it is a web app, and thus multi-threaded, but the individual transactions are on a single thread - no multi-threading, async code, competition for resources, etc.
My attempt at an annotation:
#Around("execution(* *(..)) && #annotation(Trace)")
public Object around(ProceedingJoinPoint point) {
String methodName = MethodSignature.class.cast(point.getSignature()).getMethod().getName();
//--- Where do i get this call's instance of TRANSACTION from?
if (null == transaction) {
transaction = TransactionManager.createTransaction();
}
transaction.trace(methodName);
Object result = point.proceed();
transaction.stop();
return result;
}

Introduction
Unfortunately, your pseudo code does not compile. It contains several syntactical and logical errors. Furthermore, some helper classes are missing. If I had not had spare time today and been looking for a puzzle to solve, I would not have bothered making my own MCVE out of it, because that would actually have been your job. Please do read the MCVE article and learn to create one next time, otherwise you will not get a lot of qualified help here. This was your free shot because you are new on SO.
Original situation: passing through transaction objects in method calls
Application helper classes:
package de.scrum_master.app;
public class Item {
private int id;
public Item(int id) {
this.id = id;
}
public int getId() {
return id;
}
@Override
public String toString() {
return "Item[id=" + id + "]";
}
}
package de.scrum_master.app;
public class SpecialOffer {}
package de.scrum_master.app;
public class OfferRepository {
public SpecialOffer getByItem(Item item) {
if (item.getId() < 30)
return new SpecialOffer();
return null;
}
}
package de.scrum_master.app;
import java.util.ArrayList;
import java.util.List;
public class Order {
private int id;
public Order(int id) {
this.id = id;
}
public List<Item> getItems() {
List<Item> items = new ArrayList<>();
int offset = id == 12345 ? 0 : 1;
items.add(new Item(11 + offset));
items.add(new Item(22 + offset));
items.add(new Item(33 + offset));
return items;
}
}
Trace classes:
package de.scrum_master.trace;
public enum TraceCategory {
Code, Database
}
package de.scrum_master.trace;
class TraceEvent {
private String name;
private TraceCategory category;
private long durationInMs;
private boolean finished = false;
public TraceEvent(String name, TraceCategory category, long startTime) {
this.name = name;
this.category = category;
this.durationInMs = startTime;
}
public long getDurationInMs() {
return durationInMs;
}
public void setDurationInMs(long durationInMs) {
this.durationInMs = durationInMs;
}
public boolean isFinished() {
return finished;
}
public void setFinished(boolean finished) {
this.finished = finished;
}
@Override
public String toString() {
return "TraceEvent[name=" + name + ", category=" + category +
", durationInMs=" + durationInMs + ", finished=" + finished + "]";
}
}
Transaction classes:
Here I tried to mimic your own Transaction class with as few changes as possible, but there was a lot I had to add and modify in order to emulate a simplified version of your trace output. This is not thread-safe, and the way I am locating the last unfinished TraceEvent is not nice and only works cleanly if there are no exceptions. But you get the idea, I hope. The point is to just make it basically work and subsequently get log output similar to your example. If this was originally my code, I would have solved it differently.
package de.scrum_master.trace;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.UUID;
public class Transaction {
private String uuid = UUID.randomUUID().toString();
private List<TraceEvent> events = new ArrayList<>();
private Map<String, Integer> stats = new HashMap<>();
public void trace(String message) {
trace(message, TraceCategory.Code);
}
public void trace(String message, TraceCategory category) {
events.add(new TraceEvent(message, category, System.currentTimeMillis()));
}
public void stop() {
TraceEvent event = getLastUnfinishedEvent();
event.setDurationInMs(System.currentTimeMillis() - event.getDurationInMs());
event.setFinished(true);
}
private TraceEvent getLastUnfinishedEvent() {
return events
.stream()
.filter(event -> !event.isFinished())
.reduce((first, second) -> second)
.orElse(null);
}
public void addStat(String text, int size) {
stats.put(text, size);
}
public void incrementStat(String text, int increment) {
Integer currentCount = stats.get(text);
if (currentCount == null)
currentCount = 0;
stats.put(text, currentCount + increment);
}
@Override
public String toString() {
return "Transaction {" +
toStringUUID() +
toStringStats() +
toStringEvents() +
"\n}\n";
}
private String toStringUUID() {
return "\n uuid = " + uuid;
}
private String toStringStats() {
String result = "\n stats = {";
for (Entry<String, Integer> statEntry : stats.entrySet())
result += "\n " + statEntry;
return result + "\n }";
}
private String toStringEvents() {
String result = "\n events = {";
for (TraceEvent event : events)
result += "\n " + event;
return result + "\n }";
}
}
package de.scrum_master.trace;
public class TransactionManager {
public Transaction create() {
return new Transaction();
}
}
Example driver application:
package de.scrum_master.app;
import de.scrum_master.trace.TraceCategory;
import de.scrum_master.trace.Transaction;
import de.scrum_master.trace.TransactionManager;
public class Example {
private TransactionManager transactionManager = new TransactionManager();
private OfferRepository offerRepository = new OfferRepository();
public Order getOrderEntryPoint(int orderId) {
Transaction transaction = transactionManager.create();
transaction.trace("getOrderEntryPoint");
sleep(100);
Order order = getOrder(orderId, transaction);
transaction.stop();
System.out.println(transaction);
return order;
}
private Order getOrder(int orderId, Transaction t) {
t.trace("getOrder");
sleep(200);
Order order = new Order(orderId);
t.addStat("number of items", order.getItems().size());
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item, t);
if (null != offer)
t.incrementStat("special offers", 1);
}
t.stop();
return order;
}
private SpecialOffer getSpecialOffer(Item item, Transaction t) {
t.trace("getSpecialOffer(" + item.getId() + ")", TraceCategory.Database);
sleep(50);
SpecialOffer specialOffer = offerRepository.getByItem(item);
t.stop();
return specialOffer;
}
private void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new Example().getOrderEntryPoint(12345);
new Example().getOrderEntryPoint(23456);
}
}
If you run this code, the output is as follows:
Transaction {
uuid = 62ec9739-bd32-4a56-b6b3-a8a13624961a
stats = {
special offers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint, category=Code, durationInMs=561, finished=true]
TraceEvent[name=getOrder, category=Code, durationInMs=451, finished=true]
TraceEvent[name=getSpecialOffer(11), category=Database, durationInMs=117, finished=true]
TraceEvent[name=getSpecialOffer(22), category=Database, durationInMs=69, finished=true]
TraceEvent[name=getSpecialOffer(33), category=Database, durationInMs=63, finished=true]
}
}
Transaction {
uuid = a420cd70-96e5-44c4-a0a4-87e421d05e87
stats = {
special offers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint, category=Code, durationInMs=469, finished=true]
TraceEvent[name=getOrder, category=Code, durationInMs=369, finished=true]
TraceEvent[name=getSpecialOffer(12), category=Database, durationInMs=53, finished=true]
TraceEvent[name=getSpecialOffer(23), category=Database, durationInMs=63, finished=true]
TraceEvent[name=getSpecialOffer(34), category=Database, durationInMs=53, finished=true]
}
}
AOP refactoring
Preface
Please note that I am using AspectJ here because two things about your code would never work with Spring AOP because it works with a delegation pattern based on dynamic proxies:
self-invocation (internally calling a method of the same class or super-class)
intercepting private methods
Because of these Spring AOP limitations I advise you to either refactor your code so as to avoid the two issues above or to configure your Spring applications to use full AspectJ via LTW (load-time weaving) instead.
As you noticed, my sample code does not use Spring at all because AspectJ is completely independent of Spring and works with any Java application (or other JVM languages, too).
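For completeness, if you do stay on Spring and go the LTW route mentioned above, a minimal configuration sketch (the class name is illustrative; you also need to start the JVM with the Spring instrumentation agent or the AspectJ weaver agent, and declare the aspect in META-INF/aop.xml) would be:
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableLoadTimeWeaving;
@Configuration
@EnableLoadTimeWeaving // switches on AspectJ load-time weaving for this application context
public class AppConfig {
}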
Refactoring idea
Now what should you do in order to get rid of passing around tracing information (Transaction objects), polluting your core application code and tangling it with trace calls?
You extract transaction tracing into an aspect taking care of all trace(..) and stop() calls.
Unfortunately your Transaction class contains different types of information and does different things, so you cannot completely get rid of context information about how to trace for each affected method. But at least you can extract that context information from the method bodies and transform it into a declarative form using annotations with parameters.
These annotations can be targeted by an aspect taking care of handling transaction tracing.
Added and updated code, iteration 1
Annotations related to transaction tracing:
package de.scrum_master.trace;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
@Retention(RUNTIME)
@Target(METHOD)
public @interface TransactionEntryPoint {}
package de.scrum_master.trace;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
@Retention(RUNTIME)
@Target(METHOD)
public @interface TransactionTrace {
String message() default "__METHOD_NAME__";
TraceCategory category() default TraceCategory.Code;
String addStat() default "";
String incrementStat() default "";
}
Refactored application classes with annotations:
package de.scrum_master.app;
import java.util.ArrayList;
import java.util.List;
import de.scrum_master.trace.TransactionTrace;
public class Order {
private int id;
public Order(int id) {
this.id = id;
}
@TransactionTrace(message = "", addStat = "number of items")
public List<Item> getItems() {
List<Item> items = new ArrayList<>();
int offset = id == 12345 ? 0 : 1;
items.add(new Item(11 + offset));
items.add(new Item(22 + offset));
items.add(new Item(33 + offset));
return items;
}
}
Nothing much here, only added an annotation to getItems(). But the sample application class changes massively, getting much cleaner and simpler:
package de.scrum_master.app;
import de.scrum_master.trace.TraceCategory;
import de.scrum_master.trace.TransactionEntryPoint;
import de.scrum_master.trace.TransactionTrace;
public class Example {
private OfferRepository offerRepository = new OfferRepository();
@TransactionEntryPoint
@TransactionTrace
public Order getOrderEntryPoint(int orderId) {
sleep(100);
Order order = getOrder(orderId);
return order;
}
@TransactionTrace
private Order getOrder(int orderId) {
sleep(200);
Order order = new Order(orderId);
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item);
// Do something with special offers
}
return order;
}
@TransactionTrace(category = TraceCategory.Database, incrementStat = "specialOffers")
private SpecialOffer getSpecialOffer(Item item) {
sleep(50);
SpecialOffer specialOffer = offerRepository.getByItem(item);
return specialOffer;
}
private void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new Example().getOrderEntryPoint(12345);
new Example().getOrderEntryPoint(23456);
}
}
See? Except for a few annotations there is nothing left of the transaction tracing logic; the application code only takes care of its core concern. If you also remove the sleep() method, which only makes the application slower for demonstration purposes (because we want some nice statistics with measured times >0 ms), the class gets even more compact.
But of course we need to put the transaction tracing logic somewhere, more precisely modularise it into an AspectJ aspect:
Transaction tracing aspect:
package de.scrum_master.trace;
import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.Collection;
import java.util.stream.Collectors;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
#Aspect("percflow(entryPoint())")
public class TransactionTraceAspect {
private static TransactionManager transactionManager = new TransactionManager();
private Transaction transaction = transactionManager.create();
#Pointcut("execution(* *(..)) && #annotation(de.scrum_master.trace.TransactionEntryPoint)")
private static void entryPoint() {}
#Around("execution(* *(..)) && #annotation(transactionTrace)")
public Object doTrace(ProceedingJoinPoint joinPoint, TransactionTrace transactionTrace) throws Throwable {
preTrace(transactionTrace, joinPoint);
Object result = joinPoint.proceed();
postTrace(transactionTrace);
addStat(transactionTrace, result);
incrementStat(transactionTrace, result);
return result;
}
private void preTrace(TransactionTrace transactionTrace, ProceedingJoinPoint joinPoint) {
String traceMessage = transactionTrace.message();
if ("".equals(traceMessage))
return;
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
if ("__METHOD_NAME__".equals(traceMessage)) {
traceMessage = signature.getName() + "(";
traceMessage += Arrays.stream(joinPoint.getArgs()).map(arg -> arg.toString()).collect(Collectors.joining(", "));
traceMessage += ")";
}
transaction.trace(traceMessage, transactionTrace.category());
}
private void postTrace(TransactionTrace transactionTrace) {
if ("".equals(transactionTrace.message()))
return;
transaction.stop();
}
private void addStat(TransactionTrace transactionTrace, Object result) {
if ("".equals(transactionTrace.addStat()) || result == null)
return;
if (result instanceof Collection)
transaction.addStat(transactionTrace.addStat(), ((Collection<?>) result).size());
else if (result.getClass().isArray())
transaction.addStat(transactionTrace.addStat(), Array.getLength(result));
}
private void incrementStat(TransactionTrace transactionTrace, Object result) {
if ("".equals(transactionTrace.incrementStat()) || result == null)
return;
transaction.incrementStat(transactionTrace.incrementStat(), 1);
}
#After("entryPoint()")
public void logFinishedTransaction(JoinPoint joinPoint) {
System.out.println(transaction);
}
}
Let me explain what this aspect does:
@Pointcut(..) entryPoint() says: Find me all methods in the code annotated by @TransactionEntryPoint. This pointcut is used in two places:
@Aspect("percflow(entryPoint())") says: Create one aspect instance for each control flow beginning at a transaction entry point.
@After("entryPoint()") logFinishedTransaction(..) says: Execute this advice (AOP terminology for a method linked to a pointcut) after an entry point method has finished. The corresponding method just prints the transaction statistics, like in the original code at the end of Example.getOrderEntryPoint(..).
@Around("execution(* *(..)) && @annotation(transactionTrace)") doTrace(..) says: Wrap methods annotated by @TransactionTrace and do the following (method body):
add new trace element and start measuring time
execute original (wrapped) method and store result
update trace element with measured time
add one type of statistics (optional)
increment another type of statistics (optional)
return wrapped method's result to its caller
The private methods are just helpers for the @Around advice.
The console log when running the updated Example class and active AspectJ is:
Transaction {
uuid = 4529d325-c604-441d-8997-45ca659abb14
stats = {
specialOffers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint(12345), category=Code, durationInMs=468, finished=true]
TraceEvent[name=getOrder(12345), category=Code, durationInMs=366, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=11]), category=Database, durationInMs=59, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=22]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=33]), category=Database, durationInMs=51, finished=true]
}
}
Transaction {
uuid = ef76a996-8621-478b-a376-e9f7a729a501
stats = {
specialOffers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint(23456), category=Code, durationInMs=452, finished=true]
TraceEvent[name=getOrder(23456), category=Code, durationInMs=351, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=12]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=23]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=34]), category=Database, durationInMs=50, finished=true]
}
}
You see, it looks almost identical to the original application.
Idea for further simplification, iteration 2
When reading method Example.getOrder(int orderId) I was wondering why you are calling order.getItems(), looping over it and calling getSpecialOffer(item) inside the loop. In your sample code you do not use the results for anything other than updating the transaction trace object. I am assuming that in your real code you do something with the order and with the special offers in that method.
But just in case you really do not need those calls inside that method, I suggest
you factor the calls out right into the aspect, getting rid of the TransactionTrace annotation parameters String addStat() and String incrementStat().
The Example code would get even simpler and
the annotation @TransactionTrace(message = "", addStat = "number of items") in class Order would go away, too.
I am leaving this refactoring to you if you think it makes sense.

Incremental score calculation bug?

I've been dealing with a score corruption error for a few days with no apparent reason. The error appears only in FULL_ASSERT mode and it is not related to the constraints defined in the Drools file.
Following is the error:
2014-07-02 14:51:49,037 [SwingWorker-pool-1-thread-4] TRACE Move index (0), score (-4/-2450/-240/-170), accepted (false) for move (EMP4#START => EMP2).
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Score corruption: the workingScore (-3/-1890/-640/-170) is not the uncorruptedScore (-3/-1890/-640/-250) after completedAction (EMP3#EMP4 => EMP4):
The corrupted scoreDirector has 1 ConstraintMatch(s) which are in excess (and should not be there):
com.abcdl.be.solver/MinimizeTotalTime/level3/[org.drools.core.reteoo.InitialFactImpl#4dde85f0]=-170
The corrupted scoreDirector has 1 ConstraintMatch(s) which are missing:
com.abcdl.be.solver/MinimizeTotalTime/level3/[org.drools.core.reteoo.InitialFactImpl#4dde85f0]=-250
Check your score constraints.
The error appears every time after several steps are completed for no apparent reason.
I'm developing a software to schedule several tasks considering time and resources constraints.
The whole process is represented by a directed tree diagram such that the nodes of the graph represent the tasks and the edges, the dependencies between the tasks.
To do this, the planner changes the parent node of each node until it finds the best solution.
The node is the planning entity and its parent is the planning variable:
@PlanningEntity(difficultyComparatorClass = NodeDifficultyComparator.class)
public class Node extends ProcessChain {
private Node parent; // Planning variable: changes during planning, between score calculations.
private String delay; // Used to display the delay for nodes of type "And"
private int id; // Used as an identifier for each node. Different nodes cannot have the same id
public Node(String name, String type, int time, int resources, String md, int id)
{
super(name, "", time, resources, type, md);
this.id = id;
}
public Node()
{
super();
this.delay = "";
}
public String getDelay() {
return delay;
}
public void setDelay(String delay) {
this.delay = delay;
}
@PlanningVariable(valueRangeProviderRefs = {"parentRange"}, strengthComparatorClass = ParentStrengthComparator.class, nullable = false)
public Node getParent() {
return parent;
}
public void setParent(Node parent) {
this.parent = parent;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
/*public String toString()
{
if(this.type.equals("AND"))
return delay;
if(!this.md.isEmpty())
return Tools.excerpt(name+" : "+this.md);
return Tools.excerpt(name);
}*/
public String toString()
{
if(parent!= null)
return Tools.excerpt(name) +"#"+parent;
else
return Tools.excerpt(name);
}
public boolean equals( Object o ) {
if (this == o) {
return true;
} else if (o instanceof Node) {
Node other = (Node) o;
return new EqualsBuilder()
.append(name, other.name)
.append(id, other.id)
.isEquals();
} else {
return false;
}
}
public int hashCode() {
return new HashCodeBuilder()
.append(name)
.append(id)
.toHashCode();
}
// ************************************************************************
// Complex methods
// ************************************************************************
public int getStartTime()
{
try{
return Graph.getInstance().getNode2times().get(this).getFirst();
}
catch(NullPointerException e)
{
System.out.println("getStartTime() is null for " + this);
}
return 10;
}
public int getEndTime()
{ try{
return Graph.getInstance().getNode2times().get(this).getSecond();
}
catch(NullPointerException e)
{
System.out.println("getEndTime() is null for " + this);
}
return 10;
}
#ValueRangeProvider(id = "parentRange")
public Collection<Node> getPossibleParents()
{
Collection<Node> nodes = new ArrayList<Node>(Graph.getInstance().getNodes());
nodes.remove(this); // We remove this node from the list
if(Graph.getInstance().getParentsCount(this) > 0)
nodes.remove(Graph.getInstance().getParents(this)); // We remove its parents from the list
if(Graph.getInstance().getChildCount(this) > 0)
nodes.remove(Graph.getInstance().getChildren(this)); // We remove its children from the list
if(!nodes.contains(Graph.getInstance().getNt()))
nodes.add(Graph.getInstance().getNt());
return nodes;
}
/**
* The normal methods {@link #equals(Object)} and {@link #hashCode()} cannot be used because the rule engine already
* requires them (for performance in their original state).
* @see #solutionHashCode()
*/
public boolean solutionEquals(Object o) {
if (this == o) {
return true;
} else if (o instanceof Node) {
Node other = (Node) o;
return new EqualsBuilder()
.append(name, other.name)
.append(id, other.id)
.isEquals();
} else {
return false;
}
}
/**
* The normal methods {@link #equals(Object)} and {@link #hashCode()} cannot be used because the rule engine already
* requires them (for performance in their original state).
* @see #solutionEquals(Object)
*/
public int solutionHashCode() {
return new HashCodeBuilder()
.append(name)
.append(id)
.toHashCode();
}
}
Each move must update the graph by removing the previous edge and adding the new edge from the node to its parent, so I'm using a custom change move:
public class ParentChangeMove implements Move{
private Node node;
private Node parent;
private Graph g = Graph.getInstance();
public ParentChangeMove(Node node, Node parent) {
this.node = node;
this.parent = parent;
}
public boolean isMoveDoable(ScoreDirector scoreDirector) {
List<Dependency> dep = new ArrayList<Dependency>(g.getDependencies());
dep.add(new Dependency(parent.getName(), node.getName()));
return !ObjectUtils.equals(node.getParent(), parent) && !g.detectCycles(dep) && !g.getParents(node).contains(parent);
}
public Move createUndoMove(ScoreDirector scoreDirector) {
return new ParentChangeMove(node, node.getParent());
}
public void doMove(ScoreDirector scoreDirector) {
scoreDirector.beforeVariableChanged(node, "parent"); // before changes are made
//The previous edge is removed from the graph
if(node.getParent() != null)
{
Dependency d = new Dependency(node.getParent().getName(), node.getName());
g.removeEdge(g.getDep2link().get(d));
g.getDependencies().remove(d);
g.getDep2link().remove(d);
}
node.setParent(parent); // the move
//The new edge is added on the graph (parent ==> node)
Link link = new Link();
Dependency d = new Dependency(parent.getName(), node.getName());
g.addEdge(link, parent, node);
g.getDependencies().add(d);
g.getDep2link().put(d, link);
g.setStepTimes();
scoreDirector.afterVariableChanged(node, "parent"); // after changes are made
}
public Collection<? extends Object> getPlanningEntities() {
return Collections.singletonList(node);
}
public Collection<? extends Object> getPlanningValues() {
return Collections.singletonList(parent);
}
public boolean equals(Object o) {
if (this == o) {
return true;
} else if (o instanceof ParentChangeMove) {
ParentChangeMove other = (ParentChangeMove) o;
return new EqualsBuilder()
.append(node, other.node)
.append(parent, other.parent)
.isEquals();
} else {
return false;
}
}
public int hashCode() {
return new HashCodeBuilder()
.append(node)
.append(parent)
.toHashCode();
}
public String toString() {
return node + " => " + parent;
}
}
The graph does define multiple methods that are used by the constraints to calculate the score for each solution, like the following:
rule "MinimizeTotalTime" // Minimize the total process time
when
eval(true)
then
scoreHolder.addSoftConstraintMatch(kcontext, 1, -Graph.getInstance().totalTime());
end
In other environment modes the error does not appear, but the best score calculated is not equal to the actual score.
I don't have any clue as to where the problem could come from. Note that I already checked all my equals and hashCode methods.
EDIT: Following ge0ffrey's suggestion, I used the collect CE in the "MinimizeTotalTime" rule to check if the error comes back:
rule "MinimizeTotalTime" // Minimize the total process time
when
ArrayList() from collect(Node())
then
scoreHolder.addSoftConstraintMatch(kcontext, 0, -Graph.getInstance().totalTime());
end
At this point, no error appears and everything seems OK. But when I use "terminate early", I get the following error:
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Score corruption: the solution's score (-9133) is not the uncorruptedScore (-9765).
Also, I have a rule that doesn't use any method from the Graph class and seems to respect the incremental score calculation but returns another score corruption error.
The purpose of the rule is to make sure that we don't use more resources than are available:
rule "addMarks" //insert a Mark each time a task starts or ends
when
Node($startTime : getStartTime(), $endTime : getEndTime())
then
insertLogical(new Mark($startTime));
insertLogical(new Mark($endTime));
end
rule "resourcesLimit" // At any time, The number of resources used must not exceed the total number of resources available
when
Mark($startTime: time)
Mark(time > $startTime, $endTime : time)
not Mark(time > $startTime, time < $endTime)
$total : Number(intValue > Global.getInstance().getAvailableResources() ) from
accumulate(Node(getEndTime() >=$endTime, getStartTime()<= $startTime, $res : resources), sum($res))
then
scoreHolder.addHardConstraintMatch(kcontext, 0, (Global.getInstance().getAvailableResources() - $total.intValue()) * ($endTime - $startTime) );
end
Following is the error:
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Score corruption: the workingScore (-193595) is not the uncorruptedScore (-193574) after completedAction (DWL_CM_XX_101#DWL_PA_XX_180 => DWL_PA_XX_180):
The corrupted scoreDirector has 4 ConstraintMatch(s) which are in excess (and should not be there):
com.abcdl.be.solver/resourcesLimit/level0/[43.0, 2012, 1891]=-2783
com.abcdl.be.solver/resourcesLimit/level0/[45.0, 1870, 1805]=-1625
com.abcdl.be.solver/resourcesLimit/level0/[46.0, 1805, 1774]=-806
com.abcdl.be.solver/resourcesLimit/level0/[45.0, 1774, 1762]=-300
The corrupted scoreDirector has 3 ConstraintMatch(s) which are missing:
com.abcdl.be.solver/resourcesLimit/level0/[43.0, 2012, 1901]=-2553
com.abcdl.be.solver/resourcesLimit/level0/[45.0, 1870, 1762]=-2700
com.abcdl.be.solver/resourcesLimit/level0/[44.0, 1901, 1891]=-240
Check your score constraints.
A score rule that has a LHS of just "eval(true)" is inherently broken. Either that constraint is always broken, for the exact same weight, and there really is no reason to evaluate it. Or it is sometimes broken (or always broken but for different weights) and then the rule needs to refire accordingly.
Problem: the return value of Graph.getInstance().totalTime() changes as the planning variables change value. But Drools just looks at the LHS as planning variables change and it sees that nothing in the LHS has changed so there's no need to re-evaluate that score rule, when the planning variables change. Note: this is called incremental score calculation (see docs), which is a huge performance speedup.
Subproblem: The method Graph.getInstance().totalTime() is inherently not incremental.
Fix: translate that totalTime() function into a DRL function based on Node selections. You'll probably need to use accumulate. If that's too hard (because it's a complex calculation of the critical path or so), try it anyway (for incremental score calculation's sake) or try a LHS that does a collect over all Nodes (which is like eval(true) but it will be refired every time).
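For example, if totalTime() is simply the largest end time over all nodes (an assumption; adjust the pattern to whatever totalTime() really computes), an accumulate-based variant could look roughly like this. Note that it only stays incremental if the engine is notified whenever a node's start/end times change:
rule "MinimizeTotalTime" // Minimize the total process time
when
    $makespan : Number() from accumulate( Node( $endTime : getEndTime() ), max($endTime) )
then
    scoreHolder.addSoftConstraintMatch(kcontext, 1, - $makespan.intValue());
end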

Load external properties files into EJB 3 app running on WebLogic 11

I am researching the best way to load external properties files from an EJB 3 app whose EAR file is deployed to WebLogic.
I was thinking about using an init servlet but I read somewhere that it would be too slow (e.g. my message handler might receive a message from my JMS queue before the init servlet runs).
Suppose I have multiple property files or one file here:
~/opt/conf/
So far, I feel that the best possible solution is using a WebLogic application lifecycle event, where the code reads the properties files during pre-start:
import weblogic.application.ApplicationLifecycleListener;
import weblogic.application.ApplicationLifecycleEvent;
public class MyListener extends ApplicationLifecycleListener {
public void preStart(ApplicationLifecycleEvent evt) {
// Load properties files
}
}
See: http://download.oracle.com/docs/cd/E13222_01/wls/docs90/programming/lifecycle.html
What would happen if the server is already running? Would postStart be a viable solution?
Can anyone think of any alternative ways that are better?
It really depends on how often you want the properties to be reloaded. One approach I have taken is to have a properties file wrapper (singleton) that has a configurable parameter that defines how often the files should be reloaded. I would then always read properties through that wrapper and it would reload the properties every 15 minutes (similar to Log4J's ConfigureAndWatch). That way, if I wanted to, I can change properties without changing the state of a deployed application.
This also allows you to load properties from a database, instead of a file. That way you can have a level of confidence that properties are consistent across the nodes in a cluster and it reduces complexity associated with managing a config file for each node.
I prefer that over tying it to a lifecycle event. If you weren't ever going to change them, then make them static constants somewhere :)
Here is an example implementation to give you an idea:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.*;
/**
* User: jeffrey.a.west
* Date: Jul 1, 2011
* Time: 8:43:55 AM
*/
public class ReloadingProperties
{
private final String lockObject = "LockMe";
private long lastLoadTime = 0;
private long reloadInterval;
private String filePath;
private Properties properties;
private static final Map<String, ReloadingProperties> instanceMap;
private static final long DEFAULT_RELOAD_INTERVAL = 1000 * 60 * 5;
public static void main(String[] args)
{
ReloadingProperties props = ReloadingProperties.getInstance("myProperties.properties");
System.out.println(props.getProperty("example"));
try
{
Thread.sleep(6000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
System.out.println(props.getProperty("example"));
}
static
{
instanceMap = new HashMap(31);
}
public static ReloadingProperties getInstance(String filePath)
{
ReloadingProperties instance = instanceMap.get(filePath);
if (instance == null)
{
instance = new ReloadingProperties(filePath, DEFAULT_RELOAD_INTERVAL);
synchronized (instanceMap)
{
instanceMap.put(filePath, instance);
}
}
return instance;
}
private ReloadingProperties(String filePath, long reloadInterval)
{
this.reloadInterval = reloadInterval;
this.filePath = filePath;
}
private void checkRefresh()
{
long currentTime = System.currentTimeMillis();
long sinceLastLoad = currentTime - lastLoadTime;
if (properties == null || sinceLastLoad > reloadInterval)
{
System.out.println("Reloading!");
lastLoadTime = System.currentTimeMillis();
Properties newProperties = new Properties();
FileInputStream fileIn = null;
synchronized (lockObject)
{
try
{
fileIn = new FileInputStream(filePath);
newProperties.load(fileIn);
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
if (fileIn != null)
{
try
{
fileIn.close();
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
properties = newProperties;
}
}
}
public String getProperty(String key, String defaultValue)
{
checkRefresh();
return properties.getProperty(key, defaultValue);
}
public String getProperty(String key)
{
checkRefresh();
return properties.getProperty(key);
}
}
Figured it out...
See the corresponding / related post on Stack Overflow.

why does hibernate hql distinct cause an sql distinct on left join?

I've got this test HQL:
select distinct o from Order o left join fetch o.lineItems
and it does generate an SQL distinct without an obvious reason:
select distinct order0_.id as id61_0_, orderline1_.order_id as order1_62_1_...
The SQL resultset is always the same (with and without an SQL distinct):
order id | order name | orderline id | orderline name
---------+------------+--------------+---------------
1 | foo | 1 | foo item
1 | foo | 2 | bar item
1 | foo | 3 | test item
2 | empty | NULL | NULL
3 | bar | 4 | qwerty item
3 | bar | 5 | asdfgh item
Why does hibernate generate the SQL distinct? The SQL distinct doesn't make any sense and makes the query slower than needed.
This is contrary to the FAQ which mentions that hql distinct in this case is just a shortcut for the result transformer:
session.createQuery("select distinct o
from Order o left join fetch
o.lineItems").list();
It looks like you are using the SQL DISTINCT keyword here. Of course, this is not SQL, this is HQL. This distinct is just a shortcut for the result transformer, in this case. Yes, in other cases an HQL distinct will translate straight into a SQL DISTINCT. Not in this case: you can not filter out duplicates at the SQL level, the very nature of a product/join forbids this - you want the duplicates or you don't get all the data you need.
thanks
Have a closer look at the sql statement that hibernate generates - yes it does use the "distinct" keyword but not in the way I think you are expecting it to (or the way that the Hibernate FAQ is implying) i.e. to return a set of "distinct" or "unique" orders.
It doesn't use the distinct keyword to return distinct orders, as that wouldn't make sense in that SQL query, considering the join that you have also specified.
The resulting sql set still needs processing by the ResultTransformer, as clearly the sql set contains duplicate orders. That's why they say that the HQL distinct keyword doesn't directly map to the SQL distinct keyword.
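If the extra SQL-level DISTINCT bothers you, one option is to drop the HQL distinct and deduplicate with a result transformer instead; a sketch using the classic API (the constant lives on org.hibernate.Criteria, and setResultTransformer is deprecated in newer Hibernate versions):
List<Order> orders = session.createQuery(
        "from Order o left join fetch o.lineItems")
    .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
    .list();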
I had the exact same problem and I think this is a Hibernate issue (not a bug, because the code doesn't fail). However, I have to dig deeper to make sure it's an issue.
Hibernate (at least in version 4, which is the version I'm using in my project, specifically 4.3.11) uses the concept of SPI; long story short: it's like an API to extend or modify the framework.
I took advantage of this feature to replace the classes org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory (This class is called by Hibernate and delegates the job of generating the SQL query) and org.hibernate.hql.internal.ast.QueryTranslatorImpl (This is sort of an internal class which is called by org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory and generates the actual SQL query). I did it as follows:
Replacement for org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory:
package org.hibernate.hql.internal.ast;
import java.util.Map;
import org.hibernate.engine.query.spi.EntityGraphQueryHint;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.hql.spi.QueryTranslator;
public class NoDistinctInSQLASTQueryTranslatorFactory extends ASTQueryTranslatorFactory {
@Override
public QueryTranslator createQueryTranslator(String queryIdentifier, String queryString, Map filters, SessionFactoryImplementor factory, EntityGraphQueryHint entityGraphQueryHint) {
return new NoDistinctInSQLQueryTranslatorImpl(queryIdentifier, queryString, filters, factory, entityGraphQueryHint);
}
}
Replacement for org.hibernate.hql.internal.ast.QueryTranslatorImpl:
package org.hibernate.hql.internal.ast;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.hibernate.HibernateException;
import org.hibernate.MappingException;
import org.hibernate.QueryException;
import org.hibernate.ScrollableResults;
import org.hibernate.engine.query.spi.EntityGraphQueryHint;
import org.hibernate.engine.spi.QueryParameters;
import org.hibernate.engine.spi.RowSelection;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.event.spi.EventSource;
import org.hibernate.hql.internal.QueryExecutionRequestException;
import org.hibernate.hql.internal.antlr.HqlSqlTokenTypes;
import org.hibernate.hql.internal.antlr.HqlTokenTypes;
import org.hibernate.hql.internal.antlr.SqlTokenTypes;
import org.hibernate.hql.internal.ast.exec.BasicExecutor;
import org.hibernate.hql.internal.ast.exec.DeleteExecutor;
import org.hibernate.hql.internal.ast.exec.MultiTableDeleteExecutor;
import org.hibernate.hql.internal.ast.exec.MultiTableUpdateExecutor;
import org.hibernate.hql.internal.ast.exec.StatementExecutor;
import org.hibernate.hql.internal.ast.tree.AggregatedSelectExpression;
import org.hibernate.hql.internal.ast.tree.FromElement;
import org.hibernate.hql.internal.ast.tree.InsertStatement;
import org.hibernate.hql.internal.ast.tree.QueryNode;
import org.hibernate.hql.internal.ast.tree.Statement;
import org.hibernate.hql.internal.ast.util.ASTPrinter;
import org.hibernate.hql.internal.ast.util.ASTUtil;
import org.hibernate.hql.internal.ast.util.NodeTraverser;
import org.hibernate.hql.spi.FilterTranslator;
import org.hibernate.hql.spi.ParameterTranslations;
import org.hibernate.internal.CoreMessageLogger;
import org.hibernate.internal.util.ReflectHelper;
import org.hibernate.internal.util.StringHelper;
import org.hibernate.internal.util.collections.IdentitySet;
import org.hibernate.loader.hql.QueryLoader;
import org.hibernate.param.ParameterSpecification;
import org.hibernate.persister.entity.Queryable;
import org.hibernate.type.Type;
import org.jboss.logging.Logger;
import antlr.ANTLRException;
import antlr.RecognitionException;
import antlr.TokenStreamException;
import antlr.collections.AST;
/**
* A QueryTranslator that uses an Antlr-based parser.
*
* @author Joshua Davis (pgmjsd@sourceforge.net)
*/
public class NoDistinctInSQLQueryTranslatorImpl extends QueryTranslatorImpl implements FilterTranslator {
private static final CoreMessageLogger LOG = Logger.getMessageLogger(
CoreMessageLogger.class,
QueryTranslatorImpl.class.getName()
);
private SessionFactoryImplementor factory;
private final String queryIdentifier;
private String hql;
private boolean shallowQuery;
private Map tokenReplacements;
//TODO:this is only needed during compilation .. can we eliminate the instvar?
private Map enabledFilters;
private boolean compiled;
private QueryLoader queryLoader;
private StatementExecutor statementExecutor;
private Statement sqlAst;
private String sql;
private ParameterTranslations paramTranslations;
private List<ParameterSpecification> collectedParameterSpecifications;
private EntityGraphQueryHint entityGraphQueryHint;
/**
* Creates a new AST-based query translator.
*
* @param queryIdentifier The query-identifier (used in stats collection)
* @param query The hql query to translate
* @param enabledFilters Currently enabled filters
* @param factory The session factory constructing this translator instance.
*/
public NoDistinctInSQLQueryTranslatorImpl(
String queryIdentifier,
String query,
Map enabledFilters,
SessionFactoryImplementor factory) {
super(queryIdentifier, query, enabledFilters, factory);
this.queryIdentifier = queryIdentifier;
this.hql = query;
this.compiled = false;
this.shallowQuery = false;
this.enabledFilters = enabledFilters;
this.factory = factory;
}
public NoDistinctInSQLQueryTranslatorImpl(
String queryIdentifier,
String query,
Map enabledFilters,
SessionFactoryImplementor factory,
EntityGraphQueryHint entityGraphQueryHint) {
this(queryIdentifier, query, enabledFilters, factory);
this.entityGraphQueryHint = entityGraphQueryHint;
}
/**
* Compile a "normal" query. This method may be called multiple times.
* Subsequent invocations are no-ops.
*
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @throws QueryException There was a problem parsing the query string.
* @throws MappingException There was a problem querying defined mappings.
*/
@Override
public void compile(
Map replacements,
boolean shallow) throws QueryException, MappingException {
doCompile(replacements, shallow, null);
}
/**
* Compile a filter. This method may be called multiple times. Subsequent
* invocations are no-ops.
*
* @param collectionRole the role name of the collection used as the basis
* for the filter.
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @throws QueryException There was a problem parsing the query string.
* @throws MappingException There was a problem querying defined mappings.
*/
@Override
public void compile(
String collectionRole,
Map replacements,
boolean shallow) throws QueryException, MappingException {
doCompile(replacements, shallow, collectionRole);
}
/**
* Performs both filter and non-filter compiling.
*
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @param collectionRole the role name of the collection used as the basis
* for the filter, NULL if this is not a filter.
*/
private synchronized void doCompile(Map replacements, boolean shallow, String collectionRole) {
// If the query is already compiled, skip the compilation.
if (compiled) {
LOG.debug("compile() : The query is already compiled, skipping...");
return;
}
// Remember the parameters for the compilation.
this.tokenReplacements = replacements;
if (tokenReplacements == null) {
tokenReplacements = new HashMap();
}
this.shallowQuery = shallow;
try {
// PHASE 1 : Parse the HQL into an AST.
final HqlParser parser = parse(true);
// PHASE 2 : Analyze the HQL AST, and produce an SQL AST.
final HqlSqlWalker w = analyze(parser, collectionRole);
sqlAst = (Statement) w.getAST();
// at some point the generate phase needs to be moved out of here,
// because a single object-level DML might spawn multiple SQL DML
// command executions.
//
// Possible to just move the sql generation for dml stuff, but for
// consistency-sake probably best to just move responsibility for
// the generation phase completely into the delegates
// (QueryLoader/StatementExecutor) themselves. Also, not sure why
// QueryLoader currently even has a dependency on this at all; does
// it need it? Ideally like to see the walker itself given to the delegates directly...
if (sqlAst.needsExecutor()) {
statementExecutor = buildAppropriateStatementExecutor(w);
} else {
// PHASE 3 : Generate the SQL.
generate((QueryNode) sqlAst);
queryLoader = new QueryLoader(this, factory, w.getSelectClause());
}
compiled = true;
} catch (QueryException qe) {
if (qe.getQueryString() == null) {
throw qe.wrapWithQueryString(hql);
} else {
throw qe;
}
} catch (RecognitionException e) {
// we do not actually propagate ANTLRExceptions as a cause, so
// log it here for diagnostic purposes
LOG.trace("Converted antlr.RecognitionException", e);
throw QuerySyntaxException.convert(e, hql);
} catch (ANTLRException e) {
// we do not actually propagate ANTLRExceptions as a cause, so
// log it here for diagnostic purposes
LOG.trace("Converted antlr.ANTLRException", e);
throw new QueryException(e.getMessage(), hql);
}
//only needed during compilation phase...
this.enabledFilters = null;
}
private void generate(AST sqlAst) throws QueryException, RecognitionException {
if (sql == null) {
final SqlGenerator gen = new SqlGenerator(factory);
gen.statement(sqlAst);
sql = gen.getSQL();
//Hack: The distinct operator is removed from the sql
//string to avoid executing a distinct query in the db server when
//the distinct is used in hql.
sql = sql.replace("distinct", "");
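//Note: this plain string replace also strips the token "distinct" if it ever appears
//inside an identifier, an alias or a subquery that legitimately needs it, so the hack
//assumes the generated SQL only contains that word because of the HQL distinct.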
if (LOG.isDebugEnabled()) {
LOG.debugf("HQL: %s", hql);
LOG.debugf("SQL: %s", sql);
}
gen.getParseErrorHandler().throwQueryException();
collectedParameterSpecifications = gen.getCollectedParameters();
}
}
private static final ASTPrinter SQL_TOKEN_PRINTER = new ASTPrinter(SqlTokenTypes.class);
private HqlSqlWalker analyze(HqlParser parser, String collectionRole) throws QueryException, RecognitionException {
final HqlSqlWalker w = new HqlSqlWalker(this, factory, parser, tokenReplacements, collectionRole);
final AST hqlAst = parser.getAST();
// Transform the tree.
w.statement(hqlAst);
if (LOG.isDebugEnabled()) {
LOG.debug(SQL_TOKEN_PRINTER.showAsString(w.getAST(), "--- SQL AST ---"));
}
w.getParseErrorHandler().throwQueryException();
return w;
}
private HqlParser parse(boolean filter) throws TokenStreamException, RecognitionException {
// Parse the query string into an HQL AST.
final HqlParser parser = HqlParser.getInstance(hql);
parser.setFilter(filter);
LOG.debugf("parse() - HQL: %s", hql);
parser.statement();
final AST hqlAst = parser.getAST();
final NodeTraverser walker = new NodeTraverser(new JavaConstantConverter());
walker.traverseDepthFirst(hqlAst);
showHqlAst(hqlAst);
parser.getParseErrorHandler().throwQueryException();
return parser;
}
private static final ASTPrinter HQL_TOKEN_PRINTER = new ASTPrinter(HqlTokenTypes.class);
@Override
void showHqlAst(AST hqlAst) {
if (LOG.isDebugEnabled()) {
LOG.debug(HQL_TOKEN_PRINTER.showAsString(hqlAst, "--- HQL AST ---"));
}
}
private void errorIfDML() throws HibernateException {
if (sqlAst.needsExecutor()) {
throw new QueryExecutionRequestException("Not supported for DML operations", hql);
}
}
private void errorIfSelect() throws HibernateException {
if (!sqlAst.needsExecutor()) {
throw new QueryExecutionRequestException("Not supported for select queries", hql);
}
}
@Override
public String getQueryIdentifier() {
return queryIdentifier;
}
@Override
public Statement getSqlAST() {
return sqlAst;
}
private HqlSqlWalker getWalker() {
return sqlAst.getWalker();
}
/**
* Types of the return values of an <tt>iterate()</tt> style query.
*
* @return an array of <tt>Type</tt>s.
*/
@Override
public Type[] getReturnTypes() {
errorIfDML();
return getWalker().getReturnTypes();
}
@Override
public String[] getReturnAliases() {
errorIfDML();
return getWalker().getReturnAliases();
}
@Override
public String[][] getColumnNames() {
errorIfDML();
return getWalker().getSelectClause().getColumnNames();
}
@Override
public Set<Serializable> getQuerySpaces() {
return getWalker().getQuerySpaces();
}
@Override
public List list(SessionImplementor session, QueryParameters queryParameters)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
final QueryNode query = (QueryNode) sqlAst;
final boolean hasLimit = queryParameters.getRowSelection() != null && queryParameters.getRowSelection().definesLimits();
final boolean needsDistincting = (query.getSelectClause().isDistinct() || hasLimit) && containsCollectionFetches();
QueryParameters queryParametersToUse;
if (hasLimit && containsCollectionFetches()) {
LOG.firstOrMaxResultsSpecifiedWithCollectionFetch();
RowSelection selection = new RowSelection();
selection.setFetchSize(queryParameters.getRowSelection().getFetchSize());
selection.setTimeout(queryParameters.getRowSelection().getTimeout());
queryParametersToUse = queryParameters.createCopyUsing(selection);
} else {
queryParametersToUse = queryParameters;
}
List results = queryLoader.list(session, queryParametersToUse);
if (needsDistincting) {
int includedCount = -1;
// NOTE : firstRow is zero-based
int first = !hasLimit || queryParameters.getRowSelection().getFirstRow() == null
? 0
: queryParameters.getRowSelection().getFirstRow();
int max = !hasLimit || queryParameters.getRowSelection().getMaxRows() == null
? -1
: queryParameters.getRowSelection().getMaxRows();
List tmp = new ArrayList();
IdentitySet distinction = new IdentitySet();
for (final Object result : results) {
if (!distinction.add(result)) {
continue;
}
includedCount++;
if (includedCount < first) {
continue;
}
tmp.add(result);
// NOTE : ( max - 1 ) because first is zero-based while max is not...
if (max >= 0 && (includedCount - first) >= (max - 1)) {
break;
}
}
results = tmp;
}
return results;
}
/**
* Return the query results as an iterator
*/
@Override
public Iterator iterate(QueryParameters queryParameters, EventSource session)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
return queryLoader.iterate(queryParameters, session);
}
/**
* Return the query results, as an instance of <tt>ScrollableResults</tt>
*/
@Override
public ScrollableResults scroll(QueryParameters queryParameters, SessionImplementor session)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
return queryLoader.scroll(queryParameters, session);
}
@Override
public int executeUpdate(QueryParameters queryParameters, SessionImplementor session)
throws HibernateException {
errorIfSelect();
return statementExecutor.execute(queryParameters, session);
}
/**
* The SQL query string to be called; implemented by all subclasses
*/
@Override
public String getSQLString() {
return sql;
}
@Override
public List<String> collectSqlStrings() {
ArrayList<String> list = new ArrayList<>();
if (isManipulationStatement()) {
String[] sqlStatements = statementExecutor.getSqlStatements();
Collections.addAll(list, sqlStatements);
} else {
list.add(sql);
}
return list;
}
// -- Package local methods for the QueryLoader delegate --
@Override
public boolean isShallowQuery() {
return shallowQuery;
}
@Override
public String getQueryString() {
return hql;
}
@Override
public Map getEnabledFilters() {
return enabledFilters;
}
@Override
public int[] getNamedParameterLocs(String name) {
return getWalker().getNamedParameterLocations(name);
}
@Override
public boolean containsCollectionFetches() {
errorIfDML();
List collectionFetches = ((QueryNode) sqlAst).getFromClause().getCollectionFetches();
return collectionFetches != null && collectionFetches.size() > 0;
}
@Override
public boolean isManipulationStatement() {
return sqlAst.needsExecutor();
}
@Override
public void validateScrollability() throws HibernateException {
// Impl Note: allows multiple collection fetches as long as the
// entire fetched graph still "points back" to a single
// root entity for return
errorIfDML();
final QueryNode query = (QueryNode) sqlAst;
// If there are no collection fetches, then no further checks are needed
List collectionFetches = query.getFromClause().getCollectionFetches();
if (collectionFetches.isEmpty()) {
return;
}
// A shallow query is ok (although technically there should be no fetching here...)
if (isShallowQuery()) {
return;
}
// Otherwise, we have a non-scalar select with defined collection fetch(es).
// Make sure that there is only a single root entity in the return (no tuples)
if (getReturnTypes().length > 1) {
throw new HibernateException("cannot scroll with collection fetches and returned tuples");
}
FromElement owner = null;
for (Object o : query.getSelectClause().getFromElementsForLoad()) {
// should be the first, but just to be safe...
final FromElement fromElement = (FromElement) o;
if (fromElement.getOrigin() == null) {
owner = fromElement;
break;
}
}
if (owner == null) {
throw new HibernateException("unable to locate collection fetch(es) owner for scrollability checks");
}
// This is not strictly true. We actually just need to make sure that
// it is ordered by root-entity PK and that that order-by comes before
// any non-root-entity ordering...
AST primaryOrdering = query.getOrderByClause().getFirstChild();
if (primaryOrdering != null) {
// TODO : this is a bit dodgy, come up with a better way to check this (plus see above comment)
String[] idColNames = owner.getQueryable().getIdentifierColumnNames();
String expectedPrimaryOrderSeq = StringHelper.join(
", ",
StringHelper.qualify(owner.getTableAlias(), idColNames)
);
if (!primaryOrdering.getText().startsWith(expectedPrimaryOrderSeq)) {
throw new HibernateException("cannot scroll results with collection fetches which are not ordered primarily by the root entity's PK");
}
}
}
private StatementExecutor buildAppropriateStatementExecutor(HqlSqlWalker walker) {
final Statement statement = (Statement) walker.getAST();
switch (walker.getStatementType()) {
case HqlSqlTokenTypes.DELETE: {
final FromElement fromElement = walker.getFinalFromClause().getFromElement();
final Queryable persister = fromElement.getQueryable();
if (persister.isMultiTable()) {
return new MultiTableDeleteExecutor(walker);
} else {
return new DeleteExecutor(walker, persister);
}
}
case HqlSqlTokenTypes.UPDATE: {
final FromElement fromElement = walker.getFinalFromClause().getFromElement();
final Queryable persister = fromElement.getQueryable();
if (persister.isMultiTable()) {
// even here, if only properties mapped to the "base table" are referenced
// in the set and where clauses, this could be handled by the BasicDelegate.
// TODO : decide if it is better performance-wise to doAfterTransactionCompletion that check, or to simply use the MultiTableUpdateDelegate
return new MultiTableUpdateExecutor(walker);
} else {
return new BasicExecutor(walker, persister);
}
}
case HqlSqlTokenTypes.INSERT:
return new BasicExecutor(walker, ((InsertStatement) statement).getIntoClause().getQueryable());
default:
throw new QueryException("Unexpected statement type");
}
}
@Override
public ParameterTranslations getParameterTranslations() {
if (paramTranslations == null) {
paramTranslations = new ParameterTranslationsImpl(getWalker().getParameters());
}
return paramTranslations;
}
@Override
public List<ParameterSpecification> getCollectedParameterSpecifications() {
return collectedParameterSpecifications;
}
@Override
public Class getDynamicInstantiationResultType() {
AggregatedSelectExpression aggregation = queryLoader.getAggregatedSelectExpression();
return aggregation == null ? null : aggregation.getAggregationResultType();
}
public static class JavaConstantConverter implements NodeTraverser.VisitationStrategy {
private AST dotRoot;
@Override
public void visit(AST node) {
if (dotRoot != null) {
// we are already processing a dot-structure
if (ASTUtil.isSubtreeChild(dotRoot, node)) {
return;
}
// we are now at a new tree level
dotRoot = null;
}
if (node.getType() == HqlTokenTypes.DOT) {
dotRoot = node;
handleDotStructure(dotRoot);
}
}
private void handleDotStructure(AST dotStructureRoot) {
final String expression = ASTUtil.getPathText(dotStructureRoot);
final Object constant = ReflectHelper.getConstantValue(expression);
if (constant != null) {
dotStructureRoot.setFirstChild(null);
dotStructureRoot.setType(HqlTokenTypes.JAVA_CONSTANT);
dotStructureRoot.setText(expression);
}
}
}
@Override
public EntityGraphQueryHint getEntityGraphQueryHint() {
return entityGraphQueryHint;
}
@Override
public void setEntityGraphQueryHint(EntityGraphQueryHint entityGraphQueryHint) {
this.entityGraphQueryHint = entityGraphQueryHint;
}
}
If you follow the code flow, you will notice that I just modified the method private void generate(AST sqlAst) throws QueryException, RecognitionException and added the following lines:
//Hack: The distinct keyword is removed from the sql string to
//avoid executing a distinct query in the DBMS when the distinct
//is used in hql.
sql = sql.replace("distinct", "");
What I do with this code is to remove the distinct keyword from the generated SQL query.
After creating the classes above, I added the following line in the hibernate configuration file:
<property name="hibernate.query.factory_class">org.hibernate.hql.internal.ast.NoDistinctInSQLASTQueryTranslatorFactory</property>
This line tells Hibernate to use my custom class to parse HQL queries and generate SQL queries without the distinct keyword. Notice that I created my custom classes in the same package where the original HQL parser resides; that is what lets them override package-private members such as showHqlAst.
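For reference, the same switch can be applied without touching the XML file by setting the property while building the configuration programmatically. A minimal sketch follows; the property key is the one from the line above, the rest is ordinary Hibernate bootstrap boilerplate (the no-arg buildSessionFactory is deprecated in 4.3 but still works), so adjust it to your own setup:
// Sketch: register the custom translator factory programmatically instead of via hibernate.cfg.xml.
Configuration cfg = new Configuration()
.configure() // load mappings/settings as usual
.setProperty("hibernate.query.factory_class",
"org.hibernate.hql.internal.ast.NoDistinctInSQLASTQueryTranslatorFactory");
SessionFactory sessionFactory = cfg.buildSessionFactory();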