HibernateException: Errors in named query - sql

When running a particular unit-test, I am getting the exception:
Caused by: org.hibernate.HibernateException: Errors in named queries: UPDATE_NEXT_FIRE_TIME
at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:437)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1385)
at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:954)
at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:891)
... 44 more
for the named query defined here:
@Entity(name="fireTime")
@Table(name="qrtz_triggers")
@NamedQueries({
    @NamedQuery(
        name="UPDATE_NEXT_FIRE_TIME",
        query="update fireTime t set t.next_fire_time = :epochTime where t.trigger_name = 'CalculationTrigger'")
})
public class JpaFireTimeUpdaterImpl implements FireTimeUpdater {

    @Id
    @Column(name="next_fire_time", insertable=true, updatable=true)
    private long epochTime;

    public JpaFireTimeUpdaterImpl() {}

    public JpaFireTimeUpdaterImpl(final long epochTime) {
        this.epochTime = epochTime;
    }

    @Override
    public long getEpochTime() {
        return this.epochTime;
    }

    public void setEpochTime(final long epochTime) {
        this.epochTime = epochTime;
    }
}
After debugging as deeply as I could, I've found that the exception occurs in w.statement(hqlAst) in QueryTranslatorImpl:
private HqlSqlWalker analyze(HqlParser parser, String collectionRole) throws QueryException, RecognitionException {
    HqlSqlWalker w = new HqlSqlWalker( this, factory, parser, tokenReplacements, collectionRole );
    AST hqlAst = parser.getAST();

    // Transform the tree.
    w.statement( hqlAst );

    if ( AST_LOG.isDebugEnabled() ) {
        ASTPrinter printer = new ASTPrinter( SqlTokenTypes.class );
        AST_LOG.debug( printer.showAsString( w.getAST(), "--- SQL AST ---" ) );
    }

    w.getParseErrorHandler().throwQueryException();

    return w;
}
Is there something wrong with my query or annotations?

A NamedQuery must be written in JPQL, but your query mixes names of persistent attributes with names of database columns. Database column names cannot be used in JPQL.
In this case the persistent attribute epochTime should be used instead of the column name next_fire_time. Likewise, trigger_name looks more like a database column name than a persistent attribute, and it does not appear to be mapped in your current class at all. After it is mapped, the query is as follows:
update fireTime t set t.epochTime = :epochTime
where t.triggerName = 'CalculationTrigger'
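For illustration, a minimal sketch of the additional mapping that query would need (the field name triggerName is my assumption, since that column is not mapped in your class):

@Column(name = "trigger_name")
private String triggerName;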
If a SQL query is preferred, then @NamedNativeQuery should be used instead.
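A hedged sketch of that alternative, keeping the original SQL and column names (placed on the same entity; Hibernate also accepts named parameters in native queries):

@NamedNativeQuery(
    name = "UPDATE_NEXT_FIRE_TIME",
    query = "update qrtz_triggers set next_fire_time = :epochTime where trigger_name = 'CalculationTrigger'")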
As a side note, the JPA 2.0 specification does not allow changing the primary key:
The application must not change the value of the primary key[10]. The
behavior is undefined if this occurs.[11]
In general, entities are not aware of changes made via JPQL queries. That gets especially interesting when trying to refresh an entity that does not exist anymore (because its primary key was changed).
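To illustrate the point, a minimal sketch (assuming a plain EntityManager named em; newFireTime is a placeholder value) of running the bulk update and then clearing the persistence context so already-loaded entities are not reused with stale state:

em.createNamedQuery("UPDATE_NEXT_FIRE_TIME")
    .setParameter("epochTime", newFireTime)
    .executeUpdate();
// entities already managed by this persistence context still hold the old values,
// so detach them before they are touched again
em.clear();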
Additionally, the naming is a little bit confusing:

The name of the class looks more like the name of a service class than the name of an entity.
Starting the name of an entity with a lower-case letter is a rather rare style.
The name of the entity, the name of the table, and the name of the class do not match too well.


How to check collection for null in Spring Data JPA @Query with IN predicate

I have this query in my Spring Data JPA repository:
#Query("SELECT table1 FROM Table1 table1 "
+ "INNER JOIN FETCH table1.error error"
+ "WHERE table1.date = ?1 "
+ "AND (COALESCE(?2) IS NULL OR (table1.code IN ?2)) "
+ "AND (COALESCE(?3) IS NULL OR (error.errorCode IN ?3)) ")
List<Table1> findByFilter(Date date, List<String> codes, List<String> errorCodes);
When I run this query, it shows me this error by console:
org.postgresql.util.PSQLException: ERROR: operator does not exist: character varying = bytea
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Position: 1642
However, if I run the query without the COALESCE(?2) IS NULL OR part, keeping just table1.code IN ?2, it does work.
Does anyone know what this error could be due to?
COALESCE with one parameter does not make sense: it is an abbreviated CASE expression that returns the first non-null operand.
I would suggest using named parameters instead of position-based parameters. As stated in the documentation, position-based binding makes query methods a little error-prone when refactoring regarding the parameter position.
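For example, a minimal sketch of the named-parameter style on a simplified condition (the method name and trimmed-down query are mine for illustration; the empty-list handling is addressed below with a Specification):

@Query("SELECT t FROM Table1 t WHERE t.date = :date AND t.code IN :codes")
List<Table1> findByDateAndCodes(@Param("date") LocalDateTime date,
                                @Param("codes") List<String> codes);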
As stated in the documentation related to the IN predicate:
The list of values can come from a number of different sources. In the constructor_expression and collection_valued_input_parameter, the list of values must not be empty; it must contain at least one value.
I would also suggest avoiding the outdated Date class and using the Java 8 Date/Time API instead.
So, taking all of the above into account, you should use a dynamic query, as was also suggested in the comments by @SimonMartinelli. In particular, you can have a look at Specifications.
Assuming that you have the following mapping:
@Entity
public class Error
{
    @Id
    private Long id;

    private String errorCode;

    // ...
}

@Entity
public class Table1
{
    @Id
    private Long id;

    private LocalDateTime date;

    private String code;

    @ManyToOne
    private Error error;

    // ...
}
you can write the following specification:
import javax.persistence.criteria.JoinType;
import javax.persistence.criteria.Predicate;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.util.CollectionUtils;

public class TableSpecs
{
    public static Specification<Table1> findByFilter(LocalDateTime date, List<String> codes, List<String> errorCodes)
    {
        return (root, query, builder) -> {
            root.fetch("error", JoinType.LEFT);
            Predicate result = builder.equal(root.get("date"), date);
            if (!CollectionUtils.isEmpty(codes)) {
                result = builder.and(result, root.get("code").in(codes));
            }
            if (!CollectionUtils.isEmpty(errorCodes)) {
                result = builder.and(result, root.get("error").get("errorCode").in(errorCodes));
            }
            return result;
        };
    }
}

public interface TableRepository extends CrudRepository<Table1, Long>, JpaSpecificationExecutor<Table1>
{
    default List<Table1> findByFilter(LocalDateTime date, List<String> codes, List<String> errorCodes)
    {
        return findAll(TableSpecs.findByFilter(date, codes, errorCodes));
    }
}
and then use it:
List<Table1> results = tableRepository.findByFilter(date, Arrays.asList("TBL1"), Arrays.asList("ERCODE2"));

Elm: accessing common fields in union

I am trying to model a type as a union where each member of that union has properties in common with all other members.
I am currently achieving this like so:
type alias File = {
    name : String
    }

type CommonFileState extra = CommonFileState {
    id : String
    , file : File
    } extra

type alias ValidFileState = CommonFileState {
    validatedAt : Int
    }

type alias InvalidFileState = CommonFileState {
    reason : String
    }

type alias LoadingFileState = CommonFileState {}

type FileState = Valid ValidFileState | Invalid InvalidFileState | Loading LoadingFileState
Now if I want to read one of those common properties on any given FileState, I must match against each member of the union:
getId : FileState -> String
getId fileState =
    case fileState of
        Valid (CommonFileState {id} extra) -> id
        Invalid (CommonFileState {id} extra) -> id
        Loading (CommonFileState {id} extra) -> id
This feels wrong to me, because I have to duplicate the property access for each member. If I needed to manipulate this property somehow (e.g. concatenating something onto the string), I would also have to duplicate this.
I want to be able to easily access common properties of my union, and operate on those common properties.
When I started searching for other ways to do this, I found one alternative was to nest the union inside a record, which also holds the common properties:
type alias ValidCurrentFileState = {
    validatedAt : Int
    }

type alias InvalidCurrentFileState = {
    reason : String
    }

type alias LoadingCurrentFileState = {}

type CurrentFileState = Valid ValidCurrentFileState | Invalid InvalidCurrentFileState | Loading LoadingCurrentFileState

type alias File = {
    name : String
    }

type alias FileState = {
    id : String
    , file : File
    , currentState : CurrentFileState
    }
getId : FileState -> String
getId {id} = id
However this is awkward because I have to name the nested union, which adds a level of unnecessary indirection: "file state" and "current file state" are conceptually the same.
Are there any other ways of doing this which don't have the problems I mentioned?
I think you are thinking about this the wrong way around.
The purpose of modelling (in Elm) is to capture the possible states of your data, and to exclude, in your model, 'impossible' states, so that the compiler can statically prevent the code from ever creating such states.
Once you're happy with your model, you write the helpers you need to make your core logic easy to express and to maintain.
I suspect I would normally go with your second approach, but I don't know all the issues you need to account for.

Ignite SqlQuery for complex java objects

In my cache I have a complex Java object as below:
class Person {
    private Department d;
    ....
}

class Department {
    private Department code;
    ....
}
I am using the below SqlQuery to read it:
SqlQuery<Short, BinaryObject> query = new SqlQuery<>(Person.class, "d.code = ?");
String args = "101"; // department code
QueryCursor<Cache.Entry<Short, BinaryObject>> resultSet = personCache.query(query.setArgs(args));
I am getting the below error:
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to parse query: SELECT "PERSON_CACHE"."PERSONENTITY"._KEY, "TPERSON_CACHE"."PERSONENTITY"._VAL FROM "PERSON_CACHE"."PERSONENTITY" WHERE id.code = ?
Am I doing anything wrong here?
You can access nested fields, but only if they were configured with the @QuerySqlField annotation in advance:
class Person {
    private Department d;
    ...
}

class Department {
    @QuerySqlField
    private Department code;
    ....
}
SqlQuery<Short, BinaryObject> query = new SqlQuery<>(Person.class, "code = ?");
Destructuring is not supported by Ignite SQL and there are no solid plans to implement it.
This means you can't peek into fields that are rich objects, maps, lists, etc. You should introduce a departmentId numeric field here.
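A sketch of that suggestion (the field name, type, and index setting are my assumptions):

class Person {
    // flattened reference to the department, queryable via SQL
    @QuerySqlField(index = true)
    private int departmentId;
    // ...
}

SqlQuery<Short, BinaryObject> query =
    new SqlQuery<>(Person.class, "departmentId = ?");
QueryCursor<Cache.Entry<Short, BinaryObject>> cursor =
    personCache.query(query.setArgs(101));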
Theoretically you could also try putting the @QuerySqlField annotation on Department's field code, and then access it as CODE = ?. Your mileage may vary. I for one would like to hear about the result of such experiment.
I resolved it by using a predicate.
IgniteBiPredicate<Long, BinaryObject> predicate = new IgniteBiPredicate<Long, BinaryObject>() {
    @Override
    public boolean apply(Long e1, BinaryObject e2) {
        Person p = e2.deserialize();
        short s = (short) args[0];
        return p.getId().getCode() == s;
    }
};
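For reference, a sketch of how such a predicate is typically executed, assuming a ScanQuery over a cache named cache of type IgniteCache<Long, BinaryObject> obtained with withKeepBinary() (names and key types are illustrative):

try (QueryCursor<Cache.Entry<Long, BinaryObject>> cursor =
         cache.query(new ScanQuery<>(predicate))) {
    for (Cache.Entry<Long, BinaryObject> entry : cursor) {
        // each entry already passed the predicate; deserialize if needed
        Person p = entry.getValue().deserialize();
    }
}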

Reading Hadoop SequenceFiles with Hive

I have some mapred data from the Common Crawl that I have stored in a SequenceFile format. I have tried repeatedly to use this data "as is" with Hive so I can query and sample it at various stages. But I always get the following error in my job output:
LazySimpleSerDe: expects either BytesWritable or Text object!
I have even constructed a simpler (and smaller) dataset of [Text, LongWritable] records, but that fails as well. If I output the data to text format and then create a table on that, it works fine:
hive> create external table page_urls_1346823845675
> (pageurl string, xcount bigint)
> location 's3://mybucket/text-parse/1346823845675/';
OK
Time taken: 0.434 seconds
hive> select * from page_urls_1346823845675 limit 10;
OK
http://0-italy.com/tag/package-deals 643 NULL
http://011.hebiichigo.com/d63e83abff92df5f5913827798251276/d1ca3aaf52b41acd68ebb3bf69079bd1.html 9 NULL
http://01fishing.com/fly-fishing-knots/ 3437 NULL
http://01fishing.com/flyin-slab-creek/ 1005 NULL
...
I tried using a custom inputformat:
// My custom input class--very simple
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
public class UrlXCountDataInputFormat extends
SequenceFileInputFormat<Text, LongWritable> { }
I create the table then with:
create external table page_urls_1346823845675_seq
(pageurl string, xcount bigint)
stored as inputformat 'my.package.io.UrlXCountDataInputFormat'
outputformat 'org.apache.hadoop.mapred.SequenceFileOutputFormat'
location 's3://mybucket/seq-parse/1346823845675/';
But I still get the same SerDe error.
I'm sure there's something really basic I'm missing here, but I can't seem to get it right. Additionally, I have to be able to parse the SequenceFiles in place (i.e. I can't convert my data to text). So I need to figure out the SequenceFile approach for future portions of my project.
Solution:
As @mark-grover pointed out below, the issue is that Hive ignores the key by default. With only one column (i.e. just the value), the SerDe was unable to map my second column.
The solution was to use a custom InputFormat that was a great deal more complex than what I had used originally. I tracked down an answer linking to a Git repository about using the keys instead of the values, and then I modified it to suit my needs: take the key and value from an internal SequenceFile.Reader and then combine them into the final BytesWritable. I.e. something like this (from the custom Reader, as that's where all the hard work happens):
// I used generics so I can use this all with
// other output files with just a small amount
// of additional code ...
public abstract class HiveKeyValueSequenceFileReader<K,V> implements RecordReader<K, BytesWritable> {

    public synchronized boolean next(K key, BytesWritable value) throws IOException {
        if (!more) return false;
        long pos = in.getPosition();
        V trueValue = (V) ReflectionUtils.newInstance(in.getValueClass(), conf);
        boolean remaining = in.next((Writable) key, (Writable) trueValue);
        if (remaining) combineKeyValue(key, trueValue, value);
        if (pos >= end && in.syncSeen()) {
            more = false;
        } else {
            more = remaining;
        }
        return more;
    }

    protected abstract void combineKeyValue(K key, V trueValue, BytesWritable newValue);
}

// from my final implementation
public class UrlXCountDataReader extends HiveKeyValueSequenceFileReader<Text, LongWritable> {

    @Override
    protected void combineKeyValue(Text key, LongWritable trueValue, BytesWritable newValue) {
        // TODO I think we need to use straight bytes--I'm not sure this works?
        StringBuilder builder = new StringBuilder();
        builder.append(key);
        builder.append('\001');
        builder.append(trueValue.get());
        newValue.set(new BytesWritable(builder.toString().getBytes()));
    }
}
With that, I get all my columns!
http://0-italy.com/tag/package-deals 643
http://011.hebiichigo.com/d63e83abff92df5f5913827798251276/d1ca3aaf52b41acd68ebb3bf69079bd1.html 9
http://01fishing.com/fly-fishing-knots/ 3437
http://01fishing.com/flyin-slab-creek/ 1005
http://01fishing.com/pflueger-1195x-automatic-fly-reels/ 1999
Not sure if this is impacting you, but Hive ignores keys when reading SequenceFiles. You may need to create a custom InputFormat (unless you can find one online :-))
Reference: http://mail-archives.apache.org/mod_mbox/hive-user/200910.mbox/%3C5573211B-634D-4BB0-9123-E389D90A786C#metaweb.com%3E

LinqToSQL Not Updating the Database

// goal: update Address record identified by "id", with new data in "colVal"
string cstr = ConnectionApi.GetSqlConnectionString("SwDb"); // get connection str
using (DataContext db = new DataContext(cstr)) {
    Address addr = (from a in db.GetTable<Address>()
                    where a.Id == id
                    select a).Single<Address>();
    addr.AddressLine1 = colValue.Trim();
    db.SubmitChanges(); // this seems to have no effect!!!
}
In the debugger, addr has all the current values from the db table, and I can verify that AddressLine1 is changed just before I call db.SubmitChanges()... SQL Profiler shows only a "reset connection" when the SubmitChanges line executes. Anyone got a clue why this isn't working? THANKS!
You can get a quick view of the changes to be submitted using the GetChangeSet method.
Also make sure that your table has a primary key defined and that the mapping knows about this primary key. Otherwise you won't be able to perform updates.
Funny that you use GetTable and Single; I would have expected the code to look like this:
string cstr = ConnectionApi.GetSqlConnectionString("SwDb"); // get connection str
using (DataContext db = new DataContext(cstr))
{
    Address addr = (from a in db.Address where a.Id == id select a).Single();
    addr.AddressLine1 = colValue.Trim();
    db.SubmitChanges(); // this seems to have no effect!!!
}
I have no idea what GetTable will do to you.
Another thing: for debugging Linq2SQL, try adding
db.Log = Console.Out;
before SubmitChanges(); this will show you the executed SQL.
Thanks -- your comments will help me sort this out I'm sure! I didn't have the "Id" column defined as the PrimaryKey so that's an obvious non-starter. I would have expected that LinqToSQL would have thrown an error when the update fails. -- S.
Ok, here's the result. I can't use the form db.Address, because I didn't use the designer to create my database objects; instead I defined them as classes like this:
[Table(Name = "Addresses")]
public class Address
{
[Column(Name = "Id",IsPrimaryKey=true)]
public int Id { get; set; }
[Column(Name = "AddressLine1")]
public string AddressLine1 { get; set; }
...
Originally, I didn't have the "Id" column set as PK in the database, nor did I have it identified using IsPrimaryKey=true in the [Column...] specifier above. BOTH are required! Once I made that change, the ChangeSet found the update I wanted to do, and did it, but before that it told me that 0 rows needed to be updated and refused to commit the changes.
Thanks for your help! -- S.