NHibernate, get rowcount when criteria have a group by - nhibernate

I need to get the row count from a criteria query, and the criteria has group-by projections (I need this to get paging to work).
E.g.
projectionList.Add(Projections.GroupProperty("Col1"), "col1")
.Add(Projections.CountDistinct("Col2"), "Count");
I need to avoid CreateSQLQuery, since I have a lot of criteria and the restrictions etc. are complex.
Can I build a subcriteria (detached) and then select count(*) from it? I can't figure out how.
EDIT: I solved it by getting the SQL from the criteria and then modifying it, and it now works! See: GetSql from criteria

Not entirely sure what you want, but something like this should work (if I understand your question properly):
var subQuery = DetachedCriteria.For<SomeClass>();
// .Add(... add your restrictions here ...)
var count = Session.CreateCriteria<SomeClass>()
.Add(Subqueries.PropertyIn("Col1",
CriteriaTransformer.Clone(subQuery).SetProjection(Projections.Property("Col1"))))
.SetProjection(Projections.CountDistinct("Col1"))
.FutureValue<int>();
var results = subQuery.GetExecutableCriteria(Session)
.SetProjection(Projections.ProjectionList()
.Add(Projections.GroupProperty("Col1"), "col1")
.Add(Projections.CountDistinct("Col2"), "Count"))
.List<object[]>();

Just to think a bit outside the box and remove the query complexity from NHibernate: you can create a view for the query in the database and then create a mapping for the view.

I think this can be done by using NH Multi Queries.
Here is some material about it: http://ayende.com/blog/3979/nhibernate-futures. The example shows how to run a query and get that query's result count in one roundtrip to the database.
And here is a good answer that sounds similar to what you want to achieve: nhibernate 2.0 Efficient Data Paging DataList Control and ObjectDataSource, in which they get the result page AND the total record count in one roundtrip to the database.
Also, I doubt that it is possible to read a raw @@rowcount value with NH without changing the SQL query, as @@rowcount is a database-specific thing.
My assumption would be that in your case it is not possible to avoid the "GetSql from criteria" solution, unless you simplify your query or approach. Maybe it is worth trying this as well.
If you can post a bigger chunk of your code, someone will probably be able to figure this out.

I've resolved this problem in the Java version (Hibernate). The problem is that the row-count projection is a function like:
count(*)
That is an aggregate function, so if you add a 'group by' property, your result is a list of the grouped rows, and for each row you get the total count of that group.
For me, with an Oracle database, to make it work I created a custom projection that, instead of generating count(*), generates
count(count(*))
and the properties in the group by clause are not written into the select ... from part of the statement. Doing that is not that simple; the problem is that you have to provide the whole stack needed to build the right SQL, so with the Java version I had to subclass 2 classes:
SimpleProjection
ProjectionList
After that, my query, which used to be generated as:
select count(*), col1, col2 from table1 group by col1, col2
becomes
select count(count(*)) from table1 group by col1, col2
and the result is the total number of rows returned by
select col1, col2 from table1 group by col1, col2
(usable with a pagination system)
I post the Java version of the classes here, in case they are useful to you:
public class CustomProjectionList extends ProjectionList {
private static final long serialVersionUID = 5762155180392132152L;
@Override
public ProjectionList create() {
return new CustomProjectionList();
}
public static ProjectionList getNewCustomProjectionList() {
return new CustomProjectionList();
}
@Override
public String toSqlString(Criteria criteria, int loc, CriteriaQuery criteriaQuery) throws HibernateException {
StringBuffer buf = new StringBuffer();
for (int i = 0; i < getLength(); i++) {
Projection proj = getProjection(i);
String sqlString = proj.toSqlString(criteria, loc, criteriaQuery);
buf.append(sqlString);
loc += getColumnAliases(loc, criteria, criteriaQuery, proj).length;
if (i < (getLength() - 1) && sqlString != null && sqlString.length() > 0)
buf.append(", ");
}
return buf.toString();
}
private static String[] getColumnAliases(int loc, Criteria criteria, CriteriaQuery criteriaQuery, Projection projection) {
return projection instanceof EnhancedProjection ?
( ( EnhancedProjection ) projection ).getColumnAliases( loc, criteria, criteriaQuery ) :
projection.getColumnAliases( loc );
}
}
public class CustomPropertyProjection extends SimpleProjection {
private static final long serialVersionUID = -5206671448535977079L;
private String propertyName;
private boolean grouped;
@Override
public String[] getColumnAliases(int loc, Criteria criteria, CriteriaQuery criteriaQuery) {
return new String[0];
}
@Override
public String[] getColumnAliases(int loc) {
return new String[0];
}
@Override
public int getColumnCount(Criteria criteria, CriteriaQuery criteriaQuery) {
return 0;
}
@Override
public String[] getAliases() {
return new String[0];
}
public CustomPropertyProjection(String prop, boolean grouped) {
this.propertyName = prop;
this.grouped = grouped;
}
protected CustomPropertyProjection(String prop) {
this(prop, false);
}
public String getPropertyName() {
return propertyName;
}
public String toString() {
return propertyName;
}
public Type[] getTypes(Criteria criteria, CriteriaQuery criteriaQuery)
throws HibernateException {
return new Type[0];
}
public String toSqlString(Criteria criteria, int position, CriteriaQuery criteriaQuery)
throws HibernateException {
return "";
}
public boolean isGrouped() {
return grouped;
}
public String toGroupSqlString(Criteria criteria, CriteriaQuery criteriaQuery)
throws HibernateException {
if (!grouped) {
return super.toGroupSqlString(criteria, criteriaQuery);
}
else {
return StringHelper.join( ", ", criteriaQuery.getColumns( propertyName, criteria ) );
}
}
}
public class CustomRowCountProjection extends SimpleProjection {
/**
*
*/
private static final long serialVersionUID = -7886296860233977609L;
@SuppressWarnings("rawtypes")
private static List ARGS = java.util.Collections.singletonList( "*" );
public CustomRowCountProjection() {
super();
}
public String toString() {
return "count(count(*))";
}
public Type[] getTypes(Criteria criteria, CriteriaQuery criteriaQuery) throws HibernateException {
return new Type[] {
getFunction( criteriaQuery ).getReturnType( null, criteriaQuery.getFactory() )
};
}
public String toSqlString(Criteria criteria, int position, CriteriaQuery criteriaQuery) throws HibernateException {
SQLFunction countSql = getFunction( criteriaQuery );
String sqlString = countSql.toString() + "(" + countSql.render( null, ARGS, criteriaQuery.getFactory() ) + ") as y" + position + '_';
return sqlString;
}
protected SQLFunction getFunction(CriteriaQuery criteriaQuery) {
SQLFunction function = criteriaQuery.getFactory()
.getSqlFunctionRegistry()
.findSQLFunction( "count" );
if ( function == null ) {
throw new HibernateException( "Unable to locate count function mapping" );
}
return function;
}
}
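As a rough usage sketch (my addition, not part of the original classes; it assumes a mapped entity Table1 with properties col1 and col2 and an open Session), the custom projections above could be combined like this to get the number of groups for paging:
// Assumed entity and property names; adjust them to your own mapping.
ProjectionList projections = CustomProjectionList.getNewCustomProjectionList()
    .add(new CustomRowCountProjection())              // renders count(count(*))
    .add(new CustomPropertyProjection("col1", true))  // contributes to the group by only
    .add(new CustomPropertyProjection("col2", true)); // contributes to the group by only
Long totalGroups = (Long) session.createCriteria(Table1.class)
    .setProjection(projections)
    .uniqueResult();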
Hope this helps.

Related

What happens if no records exist when I use Room to query?

I use Room in my Android Studio app; Code A will crash when no record exists for the query.
What happens if no records exist when I use Room to query with Code B?
@Dao
interface RecordDao {
// Code A
@Query("SELECT * FROM record_table where id=:id")
fun getByID(id:Int): Flow<RecordEntity>
// Code B
@Query("SELECT * FROM record_table")
fun getAll(): Flow<List<RecordEntity>>
}
If you look at the code generated by Room for the RecordDao_Impl implementation of RecordDao, you'll notice a couple of things:
The getByID function returns null when no matching record exists in the table, and since your 'Code A' function's return type is not nullable, Kotlin throws a NullPointerException for the first function.
The getAll function returns a Flow object carrying an ArrayList. If Room finds any records it adds them to the list; otherwise it just emits the empty ArrayList to the Flow object. Therefore, you'll always get a Flow object with a list inside it, regardless of whether Room found matching records or not, so no exception is thrown.
You can understand this a bit more if you look at the code generated for the two functions here:
@Override
public Flow<RecordEntity> getByID(final int id) {
final String _sql = "SELECT * FROM record_table where id=?";
final RoomSQLiteQuery _statement = RoomSQLiteQuery.acquire(_sql, 1);
int _argIndex = 1;
_statement.bindLong(_argIndex, id);
return CoroutinesRoom.createFlow(__db, false, new String[]{"record_table"}, new Callable<RecordEntity>() {
@Override
public RecordEntity call() throws Exception {
final Cursor _cursor = DBUtil.query(__db, _statement, false, null);
try {
final int _cursorIndexOfId = CursorUtil.getColumnIndexOrThrow(_cursor, "id");
final RecordEntity _result;
if (_cursor.moveToFirst()) {
final int _tmpId;
_tmpId = _cursor.getInt(_cursorIndexOfId);
_result = new RecordEntity(_tmpId);
} else {
_result = null;
}
return _result;
} finally {
_cursor.close();
}
}
@Override
protected void finalize() {
_statement.release();
}
});
}
@Override
public Flow<List<RecordEntity>> getAll() {
final String _sql = "SELECT * FROM record_table";
final RoomSQLiteQuery _statement = RoomSQLiteQuery.acquire(_sql, 0);
return CoroutinesRoom.createFlow(__db, false, new String[]{"record_table"}, new Callable<List<RecordEntity>>() {
@Override
public List<RecordEntity> call() throws Exception {
final Cursor _cursor = DBUtil.query(__db, _statement, false, null);
try {
final int _cursorIndexOfId = CursorUtil.getColumnIndexOrThrow(_cursor, "id");
final List<RecordEntity> _result = new ArrayList<RecordEntity>(_cursor.getCount());
while (_cursor.moveToNext()) {
final RecordEntity _item;
final int _tmpId;
_tmpId = _cursor.getInt(_cursorIndexOfId);
_item = new RecordEntity(_tmpId);
_result.add(_item);
}
return _result;
} finally {
_cursor.close();
}
}
@Override
protected void finalize() {
_statement.release();
}
});
}

JPA - AttributeConverter not working with native query

I have created an AttributeConverter class which converts an enum to its DB value, and overridden the necessary methods.
If I use a JPA query like the one below, the converter is called and I get the correct result.
public List<Driver> findByStatus(DriverStatus status);
BUT if I use the Query annotation, the AttributeConverter is not called. I have a more complex query where I need to use a native query together with the AttributeConverter, but it is not working for me.
@Query(value = "select * from driver where status=:status", nativeQuery = true)
public List<Driver> findByStatus1(DriverStatus status);
Is there any way to handle this requirement?
Update 1 - below is the converter code:
@Converter(autoApply = true)
public class DriverStatusConverter implements AttributeConverter<DriverStatus, String> {
@Override
public String convertToDatabaseColumn(DriverStatus driverStatus) {
if (driverStatus == null) {
return null;
}
System.err.println("from converter" +driverStatus.getCode());
return driverStatus.getCode();
}
@Override
public DriverStatus convertToEntityAttribute(String code) {
if (code == null) {
return null;
}
return Stream.of(DriverStatus.values()).filter(c -> c.getCode().equals(code)).findFirst()
.orElseThrow(IllegalArgumentException::new);
}
}
I faced a similar issue and wasn't able to find a satisfying solution, so I had to resort to an ugly hack... The repository method signature was changed like so:
public List<Driver> findByStatus1(Integer status);
...and was called like so:
repository.findByStatus1(DriverStatus.YOUR_DESIRED_STATUS.getCode());
Yes, it's ugly and kind of defeats the purpose of the AttributeConverter, but at least it works. In my particular case I had to resort to such measures with just one native query; the AttributeConverter still works for all the other JPA queries.
I just came across the same problem and used the following to work around it.
In DriverStatusConverter, move the logic in convertToEntityAttribute to a public static method:
@Override
public DriverStatus convertToEntityAttribute(String code) {
return convertStringToEnum(code);
}
public static DriverStatus convertStringToEnum(String code) {
if (code == null) {
return null;
}
return Stream.of(DriverStatus.values()).filter(c -> c.getCode().equals(code)).findFirst()
.orElseThrow(IllegalArgumentException::new);
}
Create a new DriverDTO interface for the native query result:
public interface DriverDTO {
public Integer getId();
public String getName();
public String getStatusCode();
default DriverStatus getStatus() {
return DriverStatusConverter.convertStringToEnum(getStatusCode());
}
}
In DriverRepository, make findByStatusCode() select the needed columns in the query and return the DriverDTO interface:
@Query(value = "select id, name, status as statusCode from driver WHERE status=:statusCode", nativeQuery = true)
public List<DriverDTO> findByStatusCode(String statusCode);
Remove the nativeQuery = true and try.
@Query(value = "select * from driver where status=:status")
public List<Driver> findByStatus1(DriverStatus status);
Alternatively you can try using the EntityManager like so (this is just sample code):
TypedQuery<Trip> q = entityManager.createQuery("SELECT t FROM Trip t WHERE t.vehicle = :v", Trip.class);
q.setParameter("v", Vehicle.PLANE);
List<Trip> trips = q.getResultList();
Reference -
https://thorben-janssen.com/jpa-21-type-converter-better-way-to/

Search where A or B with querydsl and spring data rest

http://localhost:8080/users?firstName=a&lastName=b ---> where firstName=a and lastName=b
How do I make it use OR ---> where firstName=a or lastName=b
But when I set up the QuerydslBinderCustomizer customize method:
@Override
default public void customize(QuerydslBindings bindings, QUser user) {
bindings.bind(String.class).all((StringPath path, Collection<? extends String> values) -> {
BooleanBuilder predicate = new BooleanBuilder();
values.forEach(value -> predicate.or(path.containsIgnoreCase(value)));
return Optional.of(predicate); // Spring Data 2.x bindings return Optional<Predicate>
});
}
http://localhost:8080/users?firstName=a&firstName=b&lastName=b ---> where (firstName=a or firstName = b) and lastName=b
It seems different parameters are combined with AND, while repeated parameters are combined with what I set (predicate.or/predicate.and).
How do I make different parameters combine with OR as well, like this ---> where firstName=a or firstName=b or lastName=b ??
thx.
Your current request params are grouped as a List firstName and a String lastName. I see that you want to keep your request parameters without a binding, but in this case a binding would make your life easier.
My suggestion is to make a new class for the request params:
public class UserRequest {
private String lastName;
private List<String> firstName;
// getters and setters
}
For QueryDSL, you can create a builder object:
public class UserPredicateBuilder{
private List<BooleanExpression> expressions = new ArrayList<>();
public UserPredicateBuilder withFirstName(List<String> firstNameList){
QUser user = QUser.user;
expressions.add(user.firstName.in(firstNameList));
return this;
}
//.. same for other fields
public BooleanExpression build(){
if(expressions.isEmpty()){
return Expressions.asBoolean(true).isTrue();
}
BooleanExpression result = expressions.get(0);
for (int i = 1; i < expressions.size(); i++) {
result = result.and(expressions.get(i));
}
return result;
}
}
And after that you can just use the builder as:
public List<User> getUsers(UserRequest userRequest){
BooleanExpression expression = new UserPredicateBuilder()
.withFirstName(userRequest.getFirstName())
// other fields
.build();
return userRepository.findAll(expression).getContent();
}
This is the recommended solution.
If you really want to keep the current params without a binding (they still need some kind of validation, otherwise the query dsl binding can throw an Exception),
you can group them by path:
Map<StringPath,List<String>> values // example firstName => a,b
and after that create your boolean expression based on the map:
//initial value
BooleanExpression result = Expressions.asBoolean(true).isTrue();
for (Map.Entry<StringPath, List<String>> entry : values.entrySet()) {
result = result.and(entry.getKey().in(entry.getValue()));
}
return userRepository.findAll(result);

why does hibernate hql distinct cause an sql distinct on left join?

I've got this test HQL:
select distinct o from Order o left join fetch o.lineItems
and it does generate an SQL distinct without an obvious reason:
select distinct order0_.id as id61_0_, orderline1_.order_id as order1_62_1_...
The SQL resultset is always the same (with and without an SQL distinct):
order id | order name | orderline id | orderline name
---------+------------+--------------+---------------
1 | foo | 1 | foo item
1 | foo | 2 | bar item
1 | foo | 3 | test item
2 | empty | NULL | NULL
3 | bar | 4 | qwerty item
3 | bar | 5 | asdfgh item
Why does hibernate generate the SQL distinct? The SQL distinct doesn't make any sense and makes the query slower than needed.
This is contrary to the FAQ which mentions that hql distinct in this case is just a shortcut for the result transformer:
session.createQuery("select distinct o
from Order o left join fetch
o.lineItems").list();
It looks like you are using the SQL DISTINCT keyword here. Of course, this is not SQL, this is HQL. This distinct is just a shortcut for the result transformer, in this case. Yes, in other cases an HQL distinct will translate straight into a SQL DISTINCT. Not in this case: you can not filter out duplicates at the SQL level, the very nature of a product/join forbids this - you want the duplicates or you don't get all the data you need.
thanks
Have a closer look at the sql statement that hibernate generates - yes it does use the "distinct" keyword but not in the way I think you are expecting it to (or the way that the Hibernate FAQ is implying) i.e. to return a set of "distinct" or "unique" orders.
It doesn't use the distinct keyword to return distinct orders, as that wouldn't make sense in that SQL query, considering the join that you have also specified.
The resulting sql set still needs processing by the ResultTransformer, as clearly the sql set contains duplicate orders. That's why they say that the HQL distinct keyword doesn't directly map to the SQL distinct keyword.
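As an aside (a minimal sketch of my own, not from the original answer; it assumes the plain Hibernate 3.x/4.x Session and Query APIs and the Order/lineItems mapping from the question), the in-memory deduplication can also be requested explicitly instead of via the HQL distinct shortcut:
// Deduplicates the Order roots in memory after the join fetch; no DISTINCT reaches the SQL.
List<Order> orders = session
    .createQuery("select o from Order o left join fetch o.lineItems")
    .setResultTransformer(org.hibernate.Criteria.DISTINCT_ROOT_ENTITY)
    .list();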
I had the exact same problem, and I think this is a Hibernate issue (not a bug, because the code doesn't fail). However, I would have to dig deeper to make sure it's an issue.
Hibernate (at least version 4, which is the version I'm working with on my project, specifically 4.3.11) uses the concept of an SPI; long story short, it's like an API to extend or modify the framework.
I took advantage of this feature to replace the classes org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory (this class is called by Hibernate and delegates the job of generating the SQL query) and org.hibernate.hql.internal.ast.QueryTranslatorImpl (this is sort of an internal class which is called by org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory and generates the actual SQL query). I did it as follows:
Replacement for org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory:
package org.hibernate.hql.internal.ast;
import java.util.Map;
import org.hibernate.engine.query.spi.EntityGraphQueryHint;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.hql.spi.QueryTranslator;
public class NoDistinctInSQLASTQueryTranslatorFactory extends ASTQueryTranslatorFactory {
@Override
public QueryTranslator createQueryTranslator(String queryIdentifier, String queryString, Map filters, SessionFactoryImplementor factory, EntityGraphQueryHint entityGraphQueryHint) {
return new NoDistinctInSQLQueryTranslatorImpl(queryIdentifier, queryString, filters, factory, entityGraphQueryHint);
}
}
Replacement for org.hibernate.hql.internal.ast.QueryTranslatorImpl:
package org.hibernate.hql.internal.ast;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.hibernate.HibernateException;
import org.hibernate.MappingException;
import org.hibernate.QueryException;
import org.hibernate.ScrollableResults;
import org.hibernate.engine.query.spi.EntityGraphQueryHint;
import org.hibernate.engine.spi.QueryParameters;
import org.hibernate.engine.spi.RowSelection;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.event.spi.EventSource;
import org.hibernate.hql.internal.QueryExecutionRequestException;
import org.hibernate.hql.internal.antlr.HqlSqlTokenTypes;
import org.hibernate.hql.internal.antlr.HqlTokenTypes;
import org.hibernate.hql.internal.antlr.SqlTokenTypes;
import org.hibernate.hql.internal.ast.exec.BasicExecutor;
import org.hibernate.hql.internal.ast.exec.DeleteExecutor;
import org.hibernate.hql.internal.ast.exec.MultiTableDeleteExecutor;
import org.hibernate.hql.internal.ast.exec.MultiTableUpdateExecutor;
import org.hibernate.hql.internal.ast.exec.StatementExecutor;
import org.hibernate.hql.internal.ast.tree.AggregatedSelectExpression;
import org.hibernate.hql.internal.ast.tree.FromElement;
import org.hibernate.hql.internal.ast.tree.InsertStatement;
import org.hibernate.hql.internal.ast.tree.QueryNode;
import org.hibernate.hql.internal.ast.tree.Statement;
import org.hibernate.hql.internal.ast.util.ASTPrinter;
import org.hibernate.hql.internal.ast.util.ASTUtil;
import org.hibernate.hql.internal.ast.util.NodeTraverser;
import org.hibernate.hql.spi.FilterTranslator;
import org.hibernate.hql.spi.ParameterTranslations;
import org.hibernate.internal.CoreMessageLogger;
import org.hibernate.internal.util.ReflectHelper;
import org.hibernate.internal.util.StringHelper;
import org.hibernate.internal.util.collections.IdentitySet;
import org.hibernate.loader.hql.QueryLoader;
import org.hibernate.param.ParameterSpecification;
import org.hibernate.persister.entity.Queryable;
import org.hibernate.type.Type;
import org.jboss.logging.Logger;
import antlr.ANTLRException;
import antlr.RecognitionException;
import antlr.TokenStreamException;
import antlr.collections.AST;
/**
* A QueryTranslator that uses an Antlr-based parser.
*
* @author Joshua Davis (pgmjsd@sourceforge.net)
*/
public class NoDistinctInSQLQueryTranslatorImpl extends QueryTranslatorImpl implements FilterTranslator {
private static final CoreMessageLogger LOG = Logger.getMessageLogger(
CoreMessageLogger.class,
QueryTranslatorImpl.class.getName()
);
private SessionFactoryImplementor factory;
private final String queryIdentifier;
private String hql;
private boolean shallowQuery;
private Map tokenReplacements;
//TODO:this is only needed during compilation .. can we eliminate the instvar?
private Map enabledFilters;
private boolean compiled;
private QueryLoader queryLoader;
private StatementExecutor statementExecutor;
private Statement sqlAst;
private String sql;
private ParameterTranslations paramTranslations;
private List<ParameterSpecification> collectedParameterSpecifications;
private EntityGraphQueryHint entityGraphQueryHint;
/**
* Creates a new AST-based query translator.
*
* @param queryIdentifier The query-identifier (used in stats collection)
* @param query The hql query to translate
* @param enabledFilters Currently enabled filters
* @param factory The session factory constructing this translator instance.
*/
public NoDistinctInSQLQueryTranslatorImpl(
String queryIdentifier,
String query,
Map enabledFilters,
SessionFactoryImplementor factory) {
super(queryIdentifier, query, enabledFilters, factory);
this.queryIdentifier = queryIdentifier;
this.hql = query;
this.compiled = false;
this.shallowQuery = false;
this.enabledFilters = enabledFilters;
this.factory = factory;
}
public NoDistinctInSQLQueryTranslatorImpl(
String queryIdentifier,
String query,
Map enabledFilters,
SessionFactoryImplementor factory,
EntityGraphQueryHint entityGraphQueryHint) {
this(queryIdentifier, query, enabledFilters, factory);
this.entityGraphQueryHint = entityGraphQueryHint;
}
/**
* Compile a "normal" query. This method may be called multiple times.
* Subsequent invocations are no-ops.
*
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @throws QueryException There was a problem parsing the query string.
* @throws MappingException There was a problem querying defined mappings.
*/
@Override
public void compile(
Map replacements,
boolean shallow) throws QueryException, MappingException {
doCompile(replacements, shallow, null);
}
/**
* Compile a filter. This method may be called multiple times. Subsequent
* invocations are no-ops.
*
* @param collectionRole the role name of the collection used as the basis
* for the filter.
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @throws QueryException There was a problem parsing the query string.
* @throws MappingException There was a problem querying defined mappings.
*/
@Override
public void compile(
String collectionRole,
Map replacements,
boolean shallow) throws QueryException, MappingException {
doCompile(replacements, shallow, collectionRole);
}
/**
* Performs both filter and non-filter compiling.
*
* @param replacements Defined query substitutions.
* @param shallow Does this represent a shallow (scalar or entity-id)
* select?
* @param collectionRole the role name of the collection used as the basis
* for the filter, NULL if this is not a filter.
*/
private synchronized void doCompile(Map replacements, boolean shallow, String collectionRole) {
// If the query is already compiled, skip the compilation.
if (compiled) {
LOG.debug("compile() : The query is already compiled, skipping...");
return;
}
// Remember the parameters for the compilation.
this.tokenReplacements = replacements;
if (tokenReplacements == null) {
tokenReplacements = new HashMap();
}
this.shallowQuery = shallow;
try {
// PHASE 1 : Parse the HQL into an AST.
final HqlParser parser = parse(true);
// PHASE 2 : Analyze the HQL AST, and produce an SQL AST.
final HqlSqlWalker w = analyze(parser, collectionRole);
sqlAst = (Statement) w.getAST();
// at some point the generate phase needs to be moved out of here,
// because a single object-level DML might spawn multiple SQL DML
// command executions.
//
// Possible to just move the sql generation for dml stuff, but for
// consistency-sake probably best to just move responsiblity for
// the generation phase completely into the delegates
// (QueryLoader/StatementExecutor) themselves. Also, not sure why
// QueryLoader currently even has a dependency on this at all; does
// it need it? Ideally like to see the walker itself given to the delegates directly...
if (sqlAst.needsExecutor()) {
statementExecutor = buildAppropriateStatementExecutor(w);
} else {
// PHASE 3 : Generate the SQL.
generate((QueryNode) sqlAst);
queryLoader = new QueryLoader(this, factory, w.getSelectClause());
}
compiled = true;
} catch (QueryException qe) {
if (qe.getQueryString() == null) {
throw qe.wrapWithQueryString(hql);
} else {
throw qe;
}
} catch (RecognitionException e) {
// we do not actually propagate ANTLRExceptions as a cause, so
// log it here for diagnostic purposes
LOG.trace("Converted antlr.RecognitionException", e);
throw QuerySyntaxException.convert(e, hql);
} catch (ANTLRException e) {
// we do not actually propagate ANTLRExceptions as a cause, so
// log it here for diagnostic purposes
LOG.trace("Converted antlr.ANTLRException", e);
throw new QueryException(e.getMessage(), hql);
}
//only needed during compilation phase...
this.enabledFilters = null;
}
private void generate(AST sqlAst) throws QueryException, RecognitionException {
if (sql == null) {
final SqlGenerator gen = new SqlGenerator(factory);
gen.statement(sqlAst);
sql = gen.getSQL();
//Hack: The distinct operator is removed from the sql
//string to avoid executing a distinct query in the db server when
//the distinct is used in hql.
sql = sql.replace("distinct", "");
if (LOG.isDebugEnabled()) {
LOG.debugf("HQL: %s", hql);
LOG.debugf("SQL: %s", sql);
}
gen.getParseErrorHandler().throwQueryException();
collectedParameterSpecifications = gen.getCollectedParameters();
}
}
private static final ASTPrinter SQL_TOKEN_PRINTER = new ASTPrinter(SqlTokenTypes.class);
private HqlSqlWalker analyze(HqlParser parser, String collectionRole) throws QueryException, RecognitionException {
final HqlSqlWalker w = new HqlSqlWalker(this, factory, parser, tokenReplacements, collectionRole);
final AST hqlAst = parser.getAST();
// Transform the tree.
w.statement(hqlAst);
if (LOG.isDebugEnabled()) {
LOG.debug(SQL_TOKEN_PRINTER.showAsString(w.getAST(), "--- SQL AST ---"));
}
w.getParseErrorHandler().throwQueryException();
return w;
}
private HqlParser parse(boolean filter) throws TokenStreamException, RecognitionException {
// Parse the query string into an HQL AST.
final HqlParser parser = HqlParser.getInstance(hql);
parser.setFilter(filter);
LOG.debugf("parse() - HQL: %s", hql);
parser.statement();
final AST hqlAst = parser.getAST();
final NodeTraverser walker = new NodeTraverser(new JavaConstantConverter());
walker.traverseDepthFirst(hqlAst);
showHqlAst(hqlAst);
parser.getParseErrorHandler().throwQueryException();
return parser;
}
private static final ASTPrinter HQL_TOKEN_PRINTER = new ASTPrinter(HqlTokenTypes.class);
@Override
void showHqlAst(AST hqlAst) {
if (LOG.isDebugEnabled()) {
LOG.debug(HQL_TOKEN_PRINTER.showAsString(hqlAst, "--- HQL AST ---"));
}
}
private void errorIfDML() throws HibernateException {
if (sqlAst.needsExecutor()) {
throw new QueryExecutionRequestException("Not supported for DML operations", hql);
}
}
private void errorIfSelect() throws HibernateException {
if (!sqlAst.needsExecutor()) {
throw new QueryExecutionRequestException("Not supported for select queries", hql);
}
}
@Override
public String getQueryIdentifier() {
return queryIdentifier;
}
@Override
public Statement getSqlAST() {
return sqlAst;
}
private HqlSqlWalker getWalker() {
return sqlAst.getWalker();
}
/**
* Types of the return values of an <tt>iterate()</tt> style query.
*
* @return an array of <tt>Type</tt>s.
*/
@Override
public Type[] getReturnTypes() {
errorIfDML();
return getWalker().getReturnTypes();
}
@Override
public String[] getReturnAliases() {
errorIfDML();
return getWalker().getReturnAliases();
}
@Override
public String[][] getColumnNames() {
errorIfDML();
return getWalker().getSelectClause().getColumnNames();
}
@Override
public Set<Serializable> getQuerySpaces() {
return getWalker().getQuerySpaces();
}
@Override
public List list(SessionImplementor session, QueryParameters queryParameters)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
final QueryNode query = (QueryNode) sqlAst;
final boolean hasLimit = queryParameters.getRowSelection() != null && queryParameters.getRowSelection().definesLimits();
final boolean needsDistincting = (query.getSelectClause().isDistinct() || hasLimit) && containsCollectionFetches();
QueryParameters queryParametersToUse;
if (hasLimit && containsCollectionFetches()) {
LOG.firstOrMaxResultsSpecifiedWithCollectionFetch();
RowSelection selection = new RowSelection();
selection.setFetchSize(queryParameters.getRowSelection().getFetchSize());
selection.setTimeout(queryParameters.getRowSelection().getTimeout());
queryParametersToUse = queryParameters.createCopyUsing(selection);
} else {
queryParametersToUse = queryParameters;
}
List results = queryLoader.list(session, queryParametersToUse);
if (needsDistincting) {
int includedCount = -1;
// NOTE : firstRow is zero-based
int first = !hasLimit || queryParameters.getRowSelection().getFirstRow() == null
? 0
: queryParameters.getRowSelection().getFirstRow();
int max = !hasLimit || queryParameters.getRowSelection().getMaxRows() == null
? -1
: queryParameters.getRowSelection().getMaxRows();
List tmp = new ArrayList();
IdentitySet distinction = new IdentitySet();
for (final Object result : results) {
if (!distinction.add(result)) {
continue;
}
includedCount++;
if (includedCount < first) {
continue;
}
tmp.add(result);
// NOTE : ( max - 1 ) because first is zero-based while max is not...
if (max >= 0 && (includedCount - first) >= (max - 1)) {
break;
}
}
results = tmp;
}
return results;
}
/**
* Return the query results as an iterator
*/
@Override
public Iterator iterate(QueryParameters queryParameters, EventSource session)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
return queryLoader.iterate(queryParameters, session);
}
/**
* Return the query results, as an instance of <tt>ScrollableResults</tt>
*/
@Override
public ScrollableResults scroll(QueryParameters queryParameters, SessionImplementor session)
throws HibernateException {
// Delegate to the QueryLoader...
errorIfDML();
return queryLoader.scroll(queryParameters, session);
}
@Override
public int executeUpdate(QueryParameters queryParameters, SessionImplementor session)
throws HibernateException {
errorIfSelect();
return statementExecutor.execute(queryParameters, session);
}
/**
* The SQL query string to be called; implemented by all subclasses
*/
@Override
public String getSQLString() {
return sql;
}
@Override
public List<String> collectSqlStrings() {
ArrayList<String> list = new ArrayList<>();
if (isManipulationStatement()) {
String[] sqlStatements = statementExecutor.getSqlStatements();
Collections.addAll(list, sqlStatements);
} else {
list.add(sql);
}
return list;
}
// -- Package local methods for the QueryLoader delegate --
@Override
public boolean isShallowQuery() {
return shallowQuery;
}
@Override
public String getQueryString() {
return hql;
}
@Override
public Map getEnabledFilters() {
return enabledFilters;
}
@Override
public int[] getNamedParameterLocs(String name) {
return getWalker().getNamedParameterLocations(name);
}
@Override
public boolean containsCollectionFetches() {
errorIfDML();
List collectionFetches = ((QueryNode) sqlAst).getFromClause().getCollectionFetches();
return collectionFetches != null && collectionFetches.size() > 0;
}
@Override
public boolean isManipulationStatement() {
return sqlAst.needsExecutor();
}
@Override
public void validateScrollability() throws HibernateException {
// Impl Note: allows multiple collection fetches as long as the
// entire fecthed graph still "points back" to a single
// root entity for return
errorIfDML();
final QueryNode query = (QueryNode) sqlAst;
// If there are no collection fetches, then no further checks are needed
List collectionFetches = query.getFromClause().getCollectionFetches();
if (collectionFetches.isEmpty()) {
return;
}
// A shallow query is ok (although technically there should be no fetching here...)
if (isShallowQuery()) {
return;
}
// Otherwise, we have a non-scalar select with defined collection fetch(es).
// Make sure that there is only a single root entity in the return (no tuples)
if (getReturnTypes().length > 1) {
throw new HibernateException("cannot scroll with collection fetches and returned tuples");
}
FromElement owner = null;
for (Object o : query.getSelectClause().getFromElementsForLoad()) {
// should be the first, but just to be safe...
final FromElement fromElement = (FromElement) o;
if (fromElement.getOrigin() == null) {
owner = fromElement;
break;
}
}
if (owner == null) {
throw new HibernateException("unable to locate collection fetch(es) owner for scrollability checks");
}
// This is not strictly true. We actually just need to make sure that
// it is ordered by root-entity PK and that that order-by comes before
// any non-root-entity ordering...
AST primaryOrdering = query.getOrderByClause().getFirstChild();
if (primaryOrdering != null) {
// TODO : this is a bit dodgy, come up with a better way to check this (plus see above comment)
String[] idColNames = owner.getQueryable().getIdentifierColumnNames();
String expectedPrimaryOrderSeq = StringHelper.join(
", ",
StringHelper.qualify(owner.getTableAlias(), idColNames)
);
if (!primaryOrdering.getText().startsWith(expectedPrimaryOrderSeq)) {
throw new HibernateException("cannot scroll results with collection fetches which are not ordered primarily by the root entity's PK");
}
}
}
private StatementExecutor buildAppropriateStatementExecutor(HqlSqlWalker walker) {
final Statement statement = (Statement) walker.getAST();
switch (walker.getStatementType()) {
case HqlSqlTokenTypes.DELETE: {
final FromElement fromElement = walker.getFinalFromClause().getFromElement();
final Queryable persister = fromElement.getQueryable();
if (persister.isMultiTable()) {
return new MultiTableDeleteExecutor(walker);
} else {
return new DeleteExecutor(walker, persister);
}
}
case HqlSqlTokenTypes.UPDATE: {
final FromElement fromElement = walker.getFinalFromClause().getFromElement();
final Queryable persister = fromElement.getQueryable();
if (persister.isMultiTable()) {
// even here, if only properties mapped to the "base table" are referenced
// in the set and where clauses, this could be handled by the BasicDelegate.
// TODO : decide if it is better performance-wise to doAfterTransactionCompletion that check, or to simply use the MultiTableUpdateDelegate
return new MultiTableUpdateExecutor(walker);
} else {
return new BasicExecutor(walker, persister);
}
}
case HqlSqlTokenTypes.INSERT:
return new BasicExecutor(walker, ((InsertStatement) statement).getIntoClause().getQueryable());
default:
throw new QueryException("Unexpected statement type");
}
}
@Override
public ParameterTranslations getParameterTranslations() {
if (paramTranslations == null) {
paramTranslations = new ParameterTranslationsImpl(getWalker().getParameters());
}
return paramTranslations;
}
@Override
public List<ParameterSpecification> getCollectedParameterSpecifications() {
return collectedParameterSpecifications;
}
@Override
public Class getDynamicInstantiationResultType() {
AggregatedSelectExpression aggregation = queryLoader.getAggregatedSelectExpression();
return aggregation == null ? null : aggregation.getAggregationResultType();
}
public static class JavaConstantConverter implements NodeTraverser.VisitationStrategy {
private AST dotRoot;
@Override
public void visit(AST node) {
if (dotRoot != null) {
// we are already processing a dot-structure
if (ASTUtil.isSubtreeChild(dotRoot, node)) {
return;
}
// we are now at a new tree level
dotRoot = null;
}
if (node.getType() == HqlTokenTypes.DOT) {
dotRoot = node;
handleDotStructure(dotRoot);
}
}
private void handleDotStructure(AST dotStructureRoot) {
final String expression = ASTUtil.getPathText(dotStructureRoot);
final Object constant = ReflectHelper.getConstantValue(expression);
if (constant != null) {
dotStructureRoot.setFirstChild(null);
dotStructureRoot.setType(HqlTokenTypes.JAVA_CONSTANT);
dotStructureRoot.setText(expression);
}
}
}
@Override
public EntityGraphQueryHint getEntityGraphQueryHint() {
return entityGraphQueryHint;
}
@Override
public void setEntityGraphQueryHint(EntityGraphQueryHint entityGraphQueryHint) {
this.entityGraphQueryHint = entityGraphQueryHint;
}
}
If you follow the code flow, you will notice that I just modified the method private void generate(AST sqlAst) throws QueryException, RecognitionException and added the following lines:
//Hack: The distinct keyword is removed from the sql string to
//avoid executing a distinct query in the DBMS when the distinct
//is used in hql.
sql = sql.replace("distinct", "");
What I do with this code is to remove the distinct keyword from the generated SQL query.
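One caveat worth adding (my own note, not part of the original answer): replace("distinct", "") strips the word everywhere in the SQL string, including any table, column, or alias name that happens to contain it. A slightly more defensive variant would only drop the DISTINCT that immediately follows the leading SELECT, for example:
// Assumption: only remove the DISTINCT emitted right after SELECT,
// leaving identifiers or literals that contain the word untouched.
sql = sql.replaceFirst("(?i)^(\\s*select)\\s+distinct\\b", "$1");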
After creating the classes above, I added the following line in the hibernate configuration file:
<property name="hibernate.query.factory_class">org.hibernate.hql.internal.ast.NoDistinctInSQLASTQueryTranslatorFactory</property>
This line tells hibernate to use my custom class to parse HQL queries and generate SQL queries without the distinct keyword. Notice I created my custom classes in the same package where the original HQL parser resides.

ROW_NUMBER() and nhibernate - finding an item's page

given a query in the form of an ICriteria object, I would like to use NHibernate (by means of a projection?) to find an element's order,
in a manner equivalent to using
SELECT ROW_NUMBER() OVER (...)
to find a specific item's index in the query.
(I need this for a "jump to page" functionality in paging)
any suggestions?
NOTE: I don't want to go to a page given its number yet - I know how to do that - I want to get the item's INDEX so I can divide it by page size and get the page index.
After looking at the sources for NHibernate, I'm fairly sure that there exists no such functionality.
I wouldn't mind, however, if someone proved me wrong.
In my specific setting, I did solve this problem by writing a method that takes a couple of lambdas (representing the key column, and an optional column to filter by - all properties of a specific domain entity). This method then builds the SQL and calls session.CreateSQLQuery(...).UniqueResult(). I'm not claiming that this is a general-purpose solution.
To avoid the use of magic strings, I borrowed a copy of PropertyHelper<T> from this answer.
Here's the code:
public abstract class RepositoryBase<T> where T : DomainEntityBase
{
public long GetIndexOf<TUnique, TWhere>(T entity, Expression<Func<T, TUnique>> uniqueSelector, Expression<Func<T, TWhere>> whereSelector, TWhere whereValue) where TWhere : DomainEntityBase
{
if (entity == null || entity.Id == Guid.Empty)
{
return -1;
}
var entityType = typeof(T).Name;
var keyField = PropertyHelper<T>.GetProperty(uniqueSelector).Name;
var keyValue = uniqueSelector.Compile()(entity);
var innerWhere = string.Empty;
if (whereSelector != null)
{
// Builds a column name that adheres to our naming conventions!
var filterField = PropertyHelper<T>.GetProperty(whereSelector).Name + "Id";
if (whereValue == null)
{
innerWhere = string.Format(" where [{0}] is null", filterField);
}
else
{
innerWhere = string.Format(" where [{0}] = :filterValue", filterField);
}
}
var innerQuery = string.Format("(select [{0}], row_number() over (order by {0}) as RowNum from [{1}]{2}) X", keyField, entityType, innerWhere);
var outerQuery = string.Format("select RowNum from {0} where {1} = :keyValue", innerQuery, keyField);
var query = _session
.CreateSQLQuery(outerQuery)
.SetParameter("keyValue", keyValue);
if (whereValue != null)
{
query = query.SetParameter("filterValue", whereValue.Id);
}
var sqlRowNumber = query.UniqueResult<long>();
// The row_number() function is one-based. Our index should be zero-based.
sqlRowNumber -= 1;
return sqlRowNumber;
}
public long GetIndexOf<TUnique>(T entity, Expression<Func<T, TUnique>> uniqueSelector)
{
return GetIndexOf(entity, uniqueSelector, null, (DomainEntityBase)null);
}
public long GetIndexOf<TUnique, TWhere>(T entity, Expression<Func<T, TUnique>> uniqueSelector, Expression<Func<T, TWhere>> whereSelector) where TWhere : DomainEntityBase
{
return GetIndexOf(entity, uniqueSelector, whereSelector, whereSelector.Compile()(entity));
}
}
public abstract class DomainEntityBase
{
public virtual Guid Id { get; protected set; }
}
And you use it like so:
...
public class Book : DomainEntityBase
{
public virtual string Title { get; set; }
public virtual Category Category { get; set; }
...
}
public class Category : DomainEntityBase { ... }
public class BookRepository : RepositoryBase<Book> { ... }
...
var repository = new BookRepository();
var book = ... a persisted book ...
// Get the index of the book, sorted by title.
var index = repository.GetIndexOf(book, b => b.Title);
// Get the index of the book, sorted by title and filtered by that book's category.
var indexInCategory = repository.GetIndexOf(book, b => b.Title, b => b.Category);
As I said, this works for me. I'll definitely tweak it as I move forward. YMMV.
Now, if the OP has solved this himself, then I would love to see his solution! :-)
ICriteria has these 2 functions:
SetFirstResult()
and
SetMaxResults()
which transform your SQL statement to use ROW_NUMBER (in SQL Server) or LIMIT (in MySQL).
So if you want 25 records on the third page you could use:
.SetFirstResult(2*25)
.SetMaxResults(25)
After trying to find an NHibernate based solution for this myself, I ultimately just added a column to the view I happened to be using:
CREATE VIEW vw_paged AS
SELECT ROW_NUMBER() OVER (ORDER BY Id) AS [Row], p.column1, p.column2
FROM paged_table p
This doesn't really help if you need complex sorting options, but it does work for simple cases.
A Criteria query, of course, would look something like this:
public static IList<Paged> GetRange(string search, int rows)
{
var match = DbSession.Current.CreateCriteria<Job>()
.Add(Restrictions.Like("Id", search + '%'))
.AddOrder(Order.Asc("Id"))
.SetMaxResults(1)
.UniqueResult<Paged>();
if (match == null)
return new List<Paged>();
if (rows == 1)
return new List<Paged> {match};
return DbSession.Current.CreateCriteria<Paged>()
.Add(Restrictions.Like("Id", search + '%'))
.Add(Restrictions.Ge("Row", match.Row))
.AddOrder(Order.Asc("Id"))
.SetMaxResults(rows)
.List<Paged>();
}