Autoincrement id is not reflected in composite key using JPA - ORM

I have the below mapping:
@Entity
@Table(name = "auctions")
public class Auction {
.
.
@OneToMany(cascade = CascadeType.ALL, mappedBy = "auction")
private List<AuctionParamValue> auctionParamValueList;
.
.
}
@Entity
@Table(name = "auction_param_values")
public class AuctionParamValue {
@EmbeddedId
protected AuctionParamValuePK auctionParamValuePK;
@JoinColumn(name = "auction_param_id", referencedColumnName = "auction_param_id", updatable = false, insertable = false)
@ManyToOne @MapsId("auctionParamId")
private AuctionParam auctionParam;
@JoinColumn(name = "auction_id", referencedColumnName = "auction_id", updatable = false, insertable = false)
@ManyToOne @MapsId("auctionId")
private Auction auction;
}
@Embeddable
public class AuctionParamValuePK {
@Id
@Basic(optional = false)
@Column(name = "auction_id")
@Nullable
private Long auctionId = null;
@Id
@Basic(optional = false)
@Column(name = "auction_param_id")
@Nullable
private Long auctionParamId = null;
}
@Entity
@Table(name = "auction_params")
public class AuctionParam {
@OneToMany(cascade = CascadeType.ALL, mappedBy = "auctionParam")
private List<AuctionTypeParam> auctionTypeParamList;
@OneToMany(cascade = CascadeType.ALL, mappedBy = "auctionParam")
private List<AuctionParamValue> auctionParamValueList;
}
When I try to persist an Auction, I am manually setting the auctionParamId and expecting the auctionId to be set automatically (perhaps to the last inserted id), but I am getting the error below. I am not sure why the auctionId in the query goes in as 0 instead of the latest id of the auction. (I am using the EclipseLink JPA provider.)
Internal Exception: com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (`portaldemo`.`auction_param_values`, CONSTRAINT `auction_param_values_auction_id_fk` FOREIGN KEY (`auction_id`) REFERENCES `auctions` (`auction_id`))
Error Code: 1452
Call: INSERT INTO auction_param_values (auction_param_val, create_ts, last_updt_ts, auction_param_id, auction_id) VALUES (?, ?, ?, ?, ?)
bind => [2011-02-12 04:00:00, 2011-01-27 12:02:00.28, 2011-01-27 12:17:43.25, 2, 0]
Query: InsertObjectQuery(com.eaportal.domain.AuctionParamValue[auctionParamValuePK=com.eaportal.domain.AuctionParamValuePK[auctionId=0, auctionParamId=2]])
Here the auctionId is always coming through as 0 and not the last inserted id :(
What is the problem with this mapping?

A @GeneratedValue will only set the value of the attribute it is annotated on; if you have other attributes in other classes that reference the id, you are responsible for setting those yourself.
That is, you would need to first persist and flush the Auction, and then create the AuctionParamValue using its generated id.
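A rough sketch of that ordering, using the question's entities (em is an EntityManager; the accessor names on the entities and the PK class are assumed):
em.persist(auction);
em.flush(); // the IDENTITY-generated auction_id is assigned here

AuctionParamValue value = new AuctionParamValue();
AuctionParamValuePK pk = new AuctionParamValuePK();
pk.setAuctionId(auction.getAuctionId());                 // assumed accessors
pk.setAuctionParamId(auctionParam.getAuctionParamId());  // assumed accessors
value.setAuctionParamValuePK(pk);
value.setAuction(auction);
value.setAuctionParam(auctionParam);
em.persist(value);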
Or, if you used TABLE or SEQUENCE id generation, then you would just need to call persist, not flush. In general I would never recommend IDENTITY sequencing, as its values cannot be preallocated.
But really you should not have the duplicate fields at all. Remove the @EmbeddedId auctionParamValuePK entirely, just add @Id to the two @ManyToOnes, and use an @IdClass instead. This makes things much simpler and will just work, even with IDENTITY id generation.
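A minimal sketch of that @IdClass mapping (the id class name AuctionParamValueId is made up; types follow the question):
@Entity
@Table(name = "auction_param_values")
@IdClass(AuctionParamValueId.class)
public class AuctionParamValue {
    @Id
    @ManyToOne
    @JoinColumn(name = "auction_id", referencedColumnName = "auction_id")
    private Auction auction;

    @Id
    @ManyToOne
    @JoinColumn(name = "auction_param_id", referencedColumnName = "auction_param_id")
    private AuctionParam auctionParam;
    // ... value columns ...
}

// The id class mirrors the two relationship attributes by name and uses the
// simple type of each referenced entity's primary key.
public class AuctionParamValueId implements Serializable {
    private Long auction;
    private Long auctionParam;
    // equals() and hashCode() over both fields are required
}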
You could also remove insertable/updatable = false from the two @ManyToOne mappings and put them on the @EmbeddedId attributes instead; this will have the foreign key written from the relationships, but your object will still be corrupt in memory.
See,
http://en.wikibooks.org/wiki/Java_Persistence/Identity_and_Sequencing#Primary_Keys_through_OneToOne_and_ManyToOne_Relationships

You could try two things:
Make the two ids nullable: use wrapper types (Integer, Long) instead of primitives and set them to null before saving.
Leave the combined primary key field (auctionParamValuePK) empty (null) when you save.
I don't know if this fixes the problem, but I am sure that you need to do at least one of these to get it working.

Related

JPA EntityGraph making a Cartesian Product with subgraph

I am working with Spring Data JPA and Entity Graphs.
I have the following Entity structure:
Result entity has a list of SingleQuestionResponse entities, and the SingleQuestionResponse entity has a set of Answer entities (markedAnswers).
public class Result {
...
@OneToMany(cascade = CascadeType.PERSIST)
@JoinColumn(name = "result_id", nullable = false)
private List<SingleQuestionResponse> responses;
...
}
public class SingleQuestionResponse {
...
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(
name = "singlequestionresponses_answers",
joinColumns = @JoinColumn(name = "single_question_response_id"),
inverseJoinColumns = @JoinColumn(name = "answer_id")
)
private Set<Answer> markedAnswers;
...
}
and Answer just has simple-type fields.
Now, I would like to be able to fetch Result, along with all responses, and the markedAnswers in one query. For that I annotated the Result class with:
@NamedEntityGraph(name = "graph.Result.responsesWithQuestionsAndAnswersEager",
attributeNodes = @NamedAttributeNode(value = "responses", subgraph = "responsesWithMarkedAnswersAndQuestion"),
subgraphs = {
@NamedSubgraph(name = "responsesWithMarkedAnswersAndQuestion", attributeNodes = {
@NamedAttributeNode("markedAnswers"),
@NamedAttributeNode("question")
})
}
)
an example of usage is:
@EntityGraph("graph.Result.responsesWithQuestionsAndAnswersEager")
List<Result> findResultsByResultSetId(Long resultSetId);
I noticed that calling the findResultsByResultSetId method (and other methods using this entity graph) results in the responses (SingleQuestionResponse entities) being multiplied by the number of markedAnswers. What I mean is that result.getResponses() returns more SingleQuestionResponse objects than it should (one response object per markedAnswer).
I realize this is due to Hibernate producing a Cartesian product with the join, but I have no idea how to fix it.
Can you help please? Thanks
You have to use the DISTINCT operator. With Spring Data JPA, this can be done by naming the method findDistinctResultsByResultSetId
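A short sketch of what that repository method could look like (the repository interface name is assumed; Result is assumed to have a Long id):
import java.util.List;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;

public interface ResultRepository extends JpaRepository<Result, Long> {

    // "Distinct" in the method name makes Spring Data add DISTINCT to the query
    @EntityGraph("graph.Result.responsesWithQuestionsAndAnswersEager")
    List<Result> findDistinctResultsByResultSetId(Long resultSetId);
}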

CriteriaBuilder.size() and Hibernate's @Where annotation

I have the following setup:
@Entity
public class Function {
private String name;
@OneToMany(mappedBy = "function", cascade = CascadeType.ALL, orphanRemoval = true)
@Where(clause = "type = 'In'") // <=== seems to cause problems for CriteriaBuilder::size
private Set<Parameter> inParameters = new HashSet<>();
@OneToMany(mappedBy = "function", cascade = CascadeType.ALL, orphanRemoval = true)
@Where(clause = "type = 'Out'") // <=== seems to cause problems for CriteriaBuilder::size
private Set<Parameter> outParameters = new HashSet<>();
}
@Entity
public class Parameter {
private String name;
@Enumerated(EnumType.STRING)
private ParameterType type;
@ManyToOne(fetch = FetchType.LAZY, optional = false)
@JoinColumn(name = "function_id")
private Function function;
}
The overall problem I am trying to solve: find all functions that have outParameters with an exact, dynamic set of names, e.g. find all functions whose outParameters' names are exactly ('outParam1', 'outParam2').
This seems to be an "exact relational division" problem in SQL, so there might be better solutions out there, but the way I've gone about it is like this:
List<String> paramNames = ...
Root<Function> func = criteria.from(Function.class);
Path outParams = func.get("outParameters");
Path paramName = func.join("outParameters").get("name");
...
// CriteriaBuilder Code
builder.and(
builder.or(paramNames.stream().map(name -> builder.like(builder.lower(paramName), builder.literal(name))).toArray(Predicate[]::new)),
builder.equal(builder.size(outParams), paramNames.size()));
The problem is that builder.size() does not seem to take the @Where annotation into account. Because the "CriteriaBuilder code" is nested in a generic Specification that should work for any type of entity, I cannot simply add a query.where() clause.
The code works when a function has 0 input parameters, but it does not work when it has more. I have taken a look at the generated SQL and I can see where the clause is missing:
SELECT DISTINCT
function0_.id AS id1_37_,
function0_.name AS name4_37_,
FROM
functions function0_
LEFT OUTER JOIN parameters outparamet2_ ON function0_.id = outparamet2_.function_id
AND (outparamet2_.type = 'Out') -- <== where clause added here
WHERE (lower(outparamet2_.name)
LIKE lower(?)
OR lower(outparamet2_.name)
LIKE lower(?))
AND (
SELECT
count(outparamet4_.function_id)
FROM
parameters outparamet4_
WHERE
function0_.id = outparamet4_.function_id) = 2 -- <== where clause NOT added here
Any help appreciated (either with a different approach to the problem, or with a workaround to builder.size() not working).
The @Where annotation is on the collection in the Function entity; the subquery does not go through that mapping, so the generated SQL is technically correct. Try using the Function entity as the root of the subquery, or add the where condition manually.
For next time, it would help to include the complete Criteria API code, so the answers can be more precise.
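A rough sketch of the second option, re-applying the 'Out' filter by hand in a correlated subquery (query is the CriteriaQuery; builder, func and paramNames are as in the question; the enum constant name is assumed to match the @Where clause):
Subquery<Long> outCount = query.subquery(Long.class);
Root<Parameter> p = outCount.from(Parameter.class);
outCount.select(builder.count(p))
        .where(builder.equal(p.get("function"), func),
               builder.equal(p.get("type"), ParameterType.Out)); // re-apply the @Where condition manually

// use this instead of builder.equal(builder.size(outParams), paramNames.size())
Predicate sizeMatches = builder.equal(outCount, (long) paramNames.size());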

How to reduce the number of update queries hibernate makes to a cascaded table?

I'm trying to improve the update performance of an application using Hibernate 4.3.
I have a list of people and towns, like so
@Table(name = "PERSON", schema = "COUNTY", catalog = "")
@Entity
public class PersonBean {
@MapsId("town")
@ManyToOne
@JoinColumn(name = "TOWN_ID")
private Long townID;
@Column(name = "NAME")
@Basic
private String personName;
@Column(name = "UPDATE_DATE")
@Basic
private Date updateDate;
...
}
...
@Table(name = "TOWNS", schema = "COUNTY")
@Entity
public class TownBean {
@Column(name = "ID")
@Id
@GeneratedValue(strategy = GenerationType.AUTO, generator = "town_id_gen")
@SequenceGenerator(name = "town_id_gen", sequenceName = "TOWN_SEQ")
private Long id;
@Column(name = "NAME")
@Basic
private String name;
... (more simple properties) ...
@OneToMany(fetch = FetchType.LAZY, mappedBy = "people", cascade = CascadeType.ALL, orphanRemoval = true)
@BatchSize(size = 50)
private List<PersonBean> people;
When the update of these items is done simply by calling
repository.saveAndFlush(town);
Some of these towns are pretty big, 20,000 people in a town is not unusual.
In the DAO and resource, all updates occur at a town level... so for example, to change the name of one person in the town, you would PUT a new town resource with the complete new list of names.
In the database, this requires the update of all the person rows referencing that town to change the updateDate, and Hibernate does this by issuing one update query for each person, which can mean 20,000 queries. (The @BatchSize annotation applies only to reading, sadly.) This kills system performance.
I think I can optimize this by replacing the repository.saveAndFlush(town) operation with a custom transaction logic that opens a transaction, flushes the list of people in groups using batching, and then writes the town object...
Is there a smarter way I can reduce the number of update queries Hibernate sends (by batching or otherwise) without changing the system behavior? Maybe there's some cleverer way using custom SQL, Named Entity Graphs or something else?
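One common way to cut the number of round trips without changing behaviour is Hibernate's JDBC update batching, along the lines of the batching idea mentioned above. A sketch of the relevant settings (standard Hibernate properties; the values and persistence unit name are just examples):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

Map<String, Object> props = new HashMap<>();
props.put("hibernate.jdbc.batch_size", "50");              // group updates into JDBC batches of 50
props.put("hibernate.order_updates", "true");              // order updates so rows of the same table batch together
props.put("hibernate.jdbc.batch_versioned_data", "true");  // allow batching for versioned entities
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props); // unit name assumed
One UPDATE statement is still issued per row, but they are sent in far fewer round trips.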

JPA 2, understanding CascadeType.ALL and GenerationType.AUTO (EclipseLink 2.5)

I am having trouble understanding how to properly persist entities with sub-entities when the JVM has been restarted and the database already contains data from previous sessions.
I have roughly the following entities:
@Entity
public class Organization {
...
@OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER, orphanRemoval = true)
@JoinColumn(name = "\"ADDRESS_ID\"", nullable = false)
private Address address;
}
@Entity
public class Address {
...
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "\"ADDRESS_ID\"")
private int addressId;
@ManyToOne(fetch = FetchType.EAGER, cascade = CascadeType.MERGE, optional = false)
@JoinColumn(name = "\"ADDRESS_TYPE_ID\"", nullable = false)
private AddressType addressType;
}
@Entity
public class AddressType {
...
// Not bi-directional, so nothing special here
}
It is expected that the address types are already present in the database (CascadeType.MERGE) before creating an address. A new organization is created with a new address, and the address has a type set from the given selection. => This works fine with a clean database (only address types present).
I am still developing, so every now and then I shut down the server (JVM) and restart the application. When I then want to add a new organization to a database which already contains data persisted in previous sessions, I get the following error:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL151120084237691' defined on 'ADDRESS'.
Error Code: -20001
Call: INSERT INTO "ADDRESS" ("ADDRESS_ID", "STREET_ADDRESS", "COUNTRY", "ZIP_CODE", "CITY", "ADDRESS_TYPE_ID") VALUES (?, ?, ?, ?, ?, ?)
bind => [2, testroad 1, Country, 99999, testcity, ABCDEF-123456]
It tries to use the same ID that already exists in the database. How do I make it realize that the id is already used and that it should continue from the last one?
Notes:
- The address is persisted as part of the organization (CascadeType.ALL), not separately.
- In tests, I am loading all the existing organizations into the same EntityManager that does the persisting operation => The organization has its addresses accessed eagerly, so they should be available in the em-cache. The duplicate address_id it complains about in unit tests seems to be an orphan entity (maybe this is actually the reason for the error?).
- I can get this error in unit tests using Derby, but a test server using Oracle DB has these same errors in its log.
- I also tried adding a 'find all' query to load all address entities into the cache of the same EntityManager that does the persisting of the organization. The 'find all' is executed before the persisting is done => it still failed.
// UPDATE
The same thing happens even when I use a TableGenerator to get the id values.
@Entity
public class Address {
...
@Id
@GeneratedValue(strategy = GenerationType.TABLE, generator = "addr_gen")
@TableGenerator(name = "addr_gen", allocationSize = 1, initialValue = 100, table = "\"ADDRESS_GEN\"")
@Column(name = "\"ADDRESS_ID\"")
private int osoiteId;
...
}
The generator table gets created, but it remains empty. The ids, however, start running from the initial value of 100.
Some more notes:
- When using a self-defined table and inserting a value there for the sequence, the ids for address entities continue correctly from that value. When the test is finished, the generator table gets emptied while there still remains data in the other tables => it will fail next time.
- When using GenerationType.AUTO, the sequence table gets a default sequence, but after the tests it is cleared (same thing as with the self-defined table).
^ I guess this is what has happened on the test servers, and it can be reproduced by not emptying the database after a test; the sequence table gets emptied anyway. So the question would be: how do I synchronize the sequence table after a JVM boot (or prevent it from emptying itself)?
I do not know if this is a good solution, or even right in general for the original topic, but I managed to work around it by defining the sequences separately for all auto-generated id fields.
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "addrSeq")
@SequenceGenerator(name = "addrSeq", sequenceName = "addr_seq", allocationSize = 10)
@Column(name = "\"ADDRESS_ID\"")
private int addressId;
It seems to work, though I do not know why this behaves essentially differently from using AUTO.
Is it normal that the default sequence is reset when the server is restarted?

Hibernate Search does not work with composite primary key using @IdClass

I've configured my class Intervento with Hibernate Search annotations (library version 4.1.1). I'm using JPA, so in my case I can omit @DocumentId, but I have a composite primary key...
@IdClass(it.domain.InterventoPK.class)
@Entity
@Indexed
@AnalyzerDef(name = "interventongram", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = StopFilterFactory.class, params = {
@Parameter(name = "words", value = "lucene/dictionary/stopwords.txt"),
@Parameter(name = "ignoreCase", value = "true"),
@Parameter(name = "enablePositionIncrements", value = "true")
}),
@TokenFilterDef(factory = ItalianLightStemFilterFactory.class),
@TokenFilterDef(factory = SynonymFilterFactory.class, params = {
@Parameter(name = "synonyms", value = "lucene/dictionary/synonyms.txt"),
@Parameter(name = "expand", value = "true")
}),
@TokenFilterDef(factory = SnowballPorterFilterFactory.class, params = {
@Parameter(name = "language", value = "Italian")
})
})
@Table(name = "intervento", catalog = "gestionale")
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(namespace = "Clinigo/it/domain", name = "Intervento")
@XmlRootElement(namespace = "Clinigo/it/domain")
public class Intervento implements Serializable {
private static final long serialVersionUID = 1L;
/**
*/
@Column(name = "idintervento", nullable = false)
@Basic(fetch = FetchType.EAGER)
@Id
@XmlElement
Integer idintervento;
/**
*/
@Column(name = "lingua_idlingua", nullable = false)
@Basic(fetch = FetchType.EAGER)
@Id
@XmlElement
Integer linguaIdlingua;
/**
*/
@Temporal(TemporalType.TIMESTAMP)
@Column(name = "version", nullable = false)
@Basic(fetch = FetchType.EAGER)
@XmlElement
Calendar version;
...
I'm getting the error below... can you help me?
ERROR: HSEARCH000058: HSEARCH000116: Unexpected error during MassIndexer operation
java.lang.ClassCastException: it.domain.InterventoPK cannot be cast to java.lang.Integer
at org.hibernate.type.descriptor.java.IntegerTypeDescriptor.unwrap(IntegerTypeDescriptor.java:36)
at org.hibernate.type.descriptor.sql.IntegerTypeDescriptor$1.doBind(IntegerTypeDescriptor.java:57)
at org.hibernate.type.descriptor.sql.BasicBinder.bind(BasicBinder.java:92)
at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:305)
at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:300)
at org.hibernate.loader.Loader.bindPositionalParameters(Loader.java:1891)
at org.hibernate.loader.Loader.bindParameterValues(Loader.java:1862)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1737)
at org.hibernate.loader.Loader.doQuery(Loader.java:828)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:289)
at org.hibernate.loader.Loader.doList(Loader.java:2447)
at org.hibernate.loader.Loader.doList(Loader.java:2433)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2263)
at org.hibernate.loader.Loader.list(Loader.java:2258)
at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:122)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1535)
at org.hibernate.internal.CriteriaImpl.list(CriteriaImpl.java:374)
at org.hibernate.search.batchindexing.impl.IdentifierConsumerEntityProducer.loadList(IdentifierConsumerEntityProducer.java:150)
at org.hibernate.search.batchindexing.impl.IdentifierConsumerEntityProducer.loadAllFromQueue(IdentifierConsumerEntityProducer.java:117)
at org.hibernate.search.batchindexing.impl.IdentifierConsumerEntityProducer.run(IdentifierConsumerEntityProducer.java:94)
at org.hibernate.search.batchindexing.impl.OptionallyWrapInJTATransaction.run(OptionallyWrapInJTATransaction.java:84)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Hibernate Search does not handle composite id classes used with @IdClass. A workaround would be to use @EmbeddedId and place idintervento and linguaIdlingua into InterventoPK.
It seems you also asked the same question on the Hibernate Search forum - https://forum.hibernate.org/viewtopic.php?f=9&t=1024512
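A minimal sketch of that @EmbeddedId variant (trimmed to the id mapping; the analyzer and XML annotations are omitted, and a two-way bridge for the composite @DocumentId may still be needed, as the next answer describes):
@Embeddable
public class InterventoPK implements Serializable {
    @Column(name = "idintervento", nullable = false)
    private Integer idintervento;

    @Column(name = "lingua_idlingua", nullable = false)
    private Integer linguaIdlingua;
    // equals() and hashCode() over both fields are required
}

@Entity
@Indexed
@Table(name = "intervento", catalog = "gestionale")
public class Intervento implements Serializable {
    @EmbeddedId
    @DocumentId
    private InterventoPK id;
    ...
}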
You can convert your custom object / composite key to a Lucene-understandable format by using a bridge. For example, for a class like this:
@Entity
@Indexed
public class Person {
@EmbeddedId
@DocumentId
@FieldBridge(impl = PersonPkBridge.class)
private PersonPK id;
...
}
you can write the bridge along the lines of the example in the book 'Hibernate Search in Action', which I found very helpful.
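A rough, hand-written sketch of such a two-way bridge (not the book's code; the PersonPK property names and types are assumed):
import org.hibernate.search.bridge.TwoWayStringBridge;

public class PersonPkBridge implements TwoWayStringBridge {

    // Flatten the composite key into a single indexable string.
    public String objectToString(Object object) {
        PersonPK pk = (PersonPK) object;
        return pk.getCompanyId() + "/" + pk.getPersonId(); // assumed properties
    }

    // Rebuild the composite key from the stored string.
    public Object stringToObject(String stringValue) {
        String[] parts = stringValue.split("/");
        PersonPK pk = new PersonPK();
        pk.setCompanyId(Long.valueOf(parts[0])); // assumed setters and types
        pk.setPersonId(Long.valueOf(parts[1]));
        return pk;
    }
}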
Does the class you declared as the composite key (it.domain.InterventoPK, declared via the class-level @IdClass annotation) contain only those two Integer fields? Since you've annotated two Integer fields with @Id on your Intervento class, the composite key class must contain exactly those fields, with the same names, and it needs to be Serializable. From the docs:
"map multiple properties as @Id properties and declare an external class to be the identifier type. This class, which needs to be Serializable, is declared on the entity via the @IdClass annotation. The identifier type must contain the same properties as the identifier properties of the entity: each property name must be the same, its type must be the same as well if the entity property is of a basic type, its type must be the type of the primary key of the associated entity if the entity property is an association (either a @OneToOne or a @ManyToOne)."
http://docs.jboss.org/hibernate/annotations/3.5/reference/en/html_single/
(search for "Composite identifier" on the page)
I had already replied to your question on the Hibernate forums, but to complete my suggestion:
An alternative to changing your mapping is to add a @DocumentId on a new getter and return any object - maybe even a string - which is a unique composite of the two id components.
(This requires defining the mapping on getters and setters, however.)
When using JPA you can avoid specifying the @DocumentId, but you don't have to: you can still use the annotation to override the definition of the identity you want to apply to the index mapping.
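A rough sketch of that getter-based idea (the getter name is made up; as noted above, the index mapping then has to be defined on getters):
@DocumentId
public String getSearchId() {
    // a unique composite of the two id components
    return idintervento + "_" + linguaIdlingua;
}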