Imagine you have:
@EqualsAndHashCode(of = "id")
class AnAggregate {
    private int id;
    private Map<String, AValueObject> childrenByValueA;

    void replaceChild(AValueObject childToReplace) {
        childrenByValueA.put(childToReplace.getValueA(), childToReplace);
    }
}
AValueObject should be a value object (VO), but because of the nature of relational databases it has a dummy ID:
@EqualsAndHashCode(exclude = "aDummyID")
class AValueObject {
    private int aDummyID;
    private String valueA;
    private String valueB;
}
Now take a look at the relevant part of the XML mapping for the AnAggregate class:
<map name="childrenByValueA" fetch="select" batch-size="666">
    <key column="aggregate_id"/>
    <map-key type="string" column="value_a"/>
    <one-to-many class="AValueObject"/>
</map>
Now, when I create the tables as follows:
create table an_aggregate (
    aggregate_id bigint,
    primary key (aggregate_id)
);

create table a_value_object (
    vo_id bigint,
    aggregate_id bigint,
    value_a varchar(255),
    value_b varchar(255),
    primary key (vo_id)
);

alter table a_value_object add constraint a_fk foreign key (aggregate_id) references an_aggregate;
everything seems to work.
But if I declare:
value_a varchar(255) not null
then I get an integrity violation during the update operation.
Let's assume we have an_aggregate row (1)
and a_value_object row (1, 1, value_a, value_b),
and we want to replace that row with (_, 1, modified_value_a, modified_value_b).
It turns out that Hibernate tries to execute statements in the following order:
insert into a_value_object values (2, 1, modified_value_a, modified_value_b)
update a_value_object set aggregate_id=null, value_a=null where aggregate_id=1 and vo_id=1
delete from a_value_object where vo_id=1
and fails during the second step, because the update violates the not null constraint on value_a.
Questions:
How can I overcome this? Why does Hibernate execute statements in such a strange order, and why does it try to null out fields it shouldn't?
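For what it's worth, the order is not arbitrary: the <map> above is a non-inverse collection, so Hibernate owns the foreign-key and map-key columns from the collection side. When an element is replaced, Hibernate first inserts the new row, then dereferences the old one by nulling out the columns it owns, and only deletes the orphan afterwards. Two mapping changes are commonly suggested for this; both are sketches, untested against this exact schema. The first declares the <key> as not-null="true" (the Hibernate reference docs prescribe this when the foreign-key column is NOT NULL), so Hibernate deletes rows instead of dereferencing them first. The second maps AValueObject as a <composite-element>, which fits a value object anyway: rows are then always deleted and re-inserted rather than updated, but the dummy vo_id is no longer mapped and would need a database-side default or sequence.
<!-- Option 1 (sketch): forbid nulling out the FK column -->
<map name="childrenByValueA" fetch="select" batch-size="666">
    <key column="aggregate_id" not-null="true"/>
    <map-key type="string" column="value_a"/>
    <one-to-many class="AValueObject"/>
</map>

<!-- Option 2 (sketch): map AValueObject as a true value object -->
<map name="childrenByValueA" table="a_value_object" fetch="select" batch-size="666">
    <key column="aggregate_id"/>
    <map-key type="string" column="value_a"/>
    <composite-element class="AValueObject">
        <property name="valueB" column="value_b"/>
    </composite-element>
</map>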
I am using Testcontainers 1.15.2 (with the postgres:13.2-alpine image) and Spring Boot 2.4.3. The Testcontainer is started with an init script that defines a couple of types, creates tables, and inserts values into them. I even perform a COMMIT; at the end, but I did not define a schema or anything like that.
When I start the Spring Boot app, the console output shows me that the init script was executed successfully.
When I execute a SELECT * FROM one of the tables, the result is empty. So why are the PostgreSQL tables empty although I inserted rows before?
CREATE TYPE Erklaerungstyp AS ENUM ('AAAAA', 'BBBBB', 'CCCCC', 'DDDDD');
CREATE TYPE Geschlecht AS ENUM ('D', 'F', 'M');
DROP TABLE IF EXISTS Anschrift;
CREATE TABLE Anschrift (
    a_id SERIAL PRIMARY KEY,
    Zusatz VARCHAR(255),
    Strasse VARCHAR(30) NOT NULL,
    Hausnummer VARCHAR(30) NOT NULL,
    plz VARCHAR(5) NOT NULL,
    Ort VARCHAR(80) NOT NULL,
    Bundesland VARCHAR(20),
    Land VARCHAR(20) NOT NULL,
    create_Date DATE NOT NULL,
    modify_Date DATE
);
INSERT INTO Anschrift VALUES (1, null, 'Musterstrasse', '13M', '12345', 'Berlin', 'Berlin', 'Deutschland', '2001-09-28');
INSERT INTO Anschrift VALUES (2, 'bei Müller', 'Musterweg', '1-3', '54321', 'Musterhausen', 'Muster-Hausen', 'Deutschland', '2002-03-11');
DROP TABLE IF EXISTS Person;
CREATE TABLE Person (
    ep_id SERIAL PRIMARY KEY,
    Geschlecht Geschlecht,
    Vorname VARCHAR(30) NOT NULL,
    Familienname VARCHAR(30) NOT NULL,
    Geburtsname VARCHAR(30) NOT NULL,
    Titel VARCHAR(10),
    Geburtsdatum DATE NOT NULL,
    Geburtsort VARCHAR(30),
    Anschrift INTEGER REFERENCES Anschrift(a_id),
    Email VARCHAR(80),
    Telefon VARCHAR(20),
    Versichertennummer VARCHAR(15) NOT NULL,
    create_Date DATE NOT NULL,
    modify_Date DATE
);
INSERT INTO Person VALUES (1, 'M', 'Max', 'Mustermann', 'Mustermann', 'Dipl.-Inf.', '1901-01-01', 'Berlin', 1, 'Max.Mustermann@max.de',
'0111 12 34 56 789', 'X000Y111Z999', '2001-09-28');
COMMIT;
I instantiate the Testcontainer in an abstract superclass that all inheriting test classes extend:
@ActiveProfiles("test")
@Testcontainers
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public abstract class AbstractApplicationIT {

    static final DockerImageName POSTGRES_IMAGE = DockerImageName.parse("postgres:13.2-alpine");

    @Container
    public static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>(POSTGRES_IMAGE);

    @Test
    public void contextLoads() {
    }
}
In a subclass I do:
@Transactional
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class XxxIT extends AbstractApplicationIT {

    @Value("${spring.datasource.password}")
    private String password;

    @Value("${spring.datasource.username}")
    private String username;

    @Value("${spring.datasource.dbname}")
    private String dbName;

    @Value("${spring.datasource.initScript}")
    private String initScript;

    @Autowired
    private AnschriftJpaDao dao;

    @Autowired
    private XxxService xxxService;

    @BeforeAll
    public void setup() {
        postgreSQLContainer = new PostgreSQLContainer<>(POSTGRES_IMAGE)
                .withDatabaseName(dbName)
                .withUsername(username)
                .withPassword(password)
                .withInitScript(initScript);
        postgreSQLContainer.start();
    }

    @Test
    public void checkDbContainerIsAlive() {
        assertThat(this.dao.findAll()).isNotNull();
    }
}
...and the test is green but when I do
@Test
public void anschrift_can_be_found() {
    assertThat(this.dao.findAll().size() == 1);
    List<Anschrift> anschriftList = this.dao.findAll();
    System.out.println(anschriftList.size());
}
...the test is green but anschriftList is empty. Why?
And if I use Anschrift's PK as an FK in the Person entity, I get a LazyInitializationException although I specify fetch = FetchType.EAGER in the relationship definition. Why?
In my application-test.yaml I defined
jpa:
  hibernate:
    ddl-auto: create-drop
as I found this on the internet.
-> Commenting out this line leads to filled tables.
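That is expected behaviour: ddl-auto: create-drop tells Hibernate to drop and re-create the whole schema from the entity mappings when the application context starts, which happens after the Testcontainers init script has already run, so the script's tables and rows are wiped and replaced with freshly generated, empty tables. A minimal application-test.yaml sketch that leaves the init-script data in place (validate assumes the entity mappings match the script's DDL; none skips schema handling entirely):
jpa:
  hibernate:
    ddl-auto: validate  # or "none": never let Hibernate recreate the schema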
We're using Postgres and JPA/Hibernate to import a lot of data on a biweekly basis (~50-100M rows per import). We're trying to partition our tables per import, which has us running into some Hibernate PK/FK column-mapping problems. The setup is essentially this on the SQL side:
CREATE TABLE row (
    import_timestamp timestamp,
    id uuid,
    PRIMARY KEY (import_timestamp, id)
) PARTITION BY LIST (import_timestamp);

CREATE TABLE row_detail (
    import_timestamp timestamp,
    id uuid,
    row_id uuid,
    PRIMARY KEY (import_timestamp, id),
    CONSTRAINT row_detail_row_fk FOREIGN KEY (row_id, import_timestamp) REFERENCES row (id, import_timestamp)
) PARTITION BY LIST (import_timestamp);
and this on the Java side:
@Entity(name = "row")
public class RowEntity {
    @EmbeddedId
    private PartitionedId id;

    @OneToMany(cascade = ALL, mappedBy = "row")
    private List<RowDetailEntity> details;
}

@Entity(name = "row_detail")
public class RowDetailEntity {
    @EmbeddedId
    private PartitionedId id;

    @ManyToOne
    @JoinColumns({
        @JoinColumn(name = "row_id", referencedColumnName = "id"),
        @JoinColumn(name = "importTimestamp", referencedColumnName = "importTimestamp")
    })
    private RowEntity row;
}

@Embeddable
public class PartitionedId implements Serializable {
    private Instant importTimestamp;
    private UUID id;
}
Hibernate then complains on boot that:
column: import_timestamp (should be mapped with insert="false" update="false")
I can silence that error by doing as it says, but that makes little sense, because I am forced to set insertable=false and updatable=false on both @JoinColumns, which would mean row_id isn't populated on insert.
I could go the @MapsId route, but only if I give the row_detail table a PK that includes all 3 properties (import_timestamp, id, row_id), and I don't really want or need that.
So the question is: how do I get Hibernate to understand my overlapping, but not entirely nested, PK/FK?
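One workaround pattern that is often suggested for overlapping FK/PK columns (a sketch, not verified against this exact setup): make the association itself read-only on both columns, and add a plain writable field for the one column that is not part of the primary key. Hibernate then writes import_timestamp through the @EmbeddedId and row_id through the scalar field, while the association is used purely for reads. The rowId field below is hypothetical and not in the original mapping:
import java.util.UUID;
import javax.persistence.*;

@Entity(name = "row_detail")
public class RowDetailEntity {

    @EmbeddedId
    private PartitionedId id; // owns the writable import_timestamp column

    // Writable scalar copy of the FK column (hypothetical field, not in the original)
    @Column(name = "row_id")
    private UUID rowId;

    // Read-only view of the association: both columns are written elsewhere
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumns({
        @JoinColumn(name = "row_id", referencedColumnName = "id",
                    insertable = false, updatable = false),
        @JoinColumn(name = "import_timestamp", referencedColumnName = "import_timestamp",
                    insertable = false, updatable = false)
    })
    private RowEntity row;
}
The cost is that rowId must be kept in sync by hand, for example in a setter that assigns both row and rowId together.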
I am creating a new project and using Spring Data JPA to create some REST endpoints.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.6.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
I am able to PUT and persist my primary entity (Customer), which works as long as the JSON payload does not contain any oneToMany data. However, when posting to Customer with oneToMany data, I am getting errors.
The errors relate to the foreign key being null when trying to persist. I am not sure how Spring Data JPA should be using the annotations to let Hibernate know what the value of the foreign key should be.
I have looked at numerous bi-directional OneToMany examples, as well as examples for creating foreign keys and have tried a number of modifications without success.
I also tried using spring.jpa.hibernate.ddl-auto=update to help create and update the database schema, without any luck.
The customer
@Entity
@Table(name="customer")
@EntityListeners(AuditingEntityListener.class)
public class Customer extends Auditable<String> {

    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Column(name="id")
    private int id;

    @Column(name="first_name")
    private String firstName;

    @Column(name="last_name")
    private String lastName;

    @OneToMany(fetch=FetchType.LAZY, mappedBy="customer", cascade={CascadeType.ALL})
    private List<EmailAddress> emailAddresses;
    ...
The emails
#Table(name="email_address")
#EntityListeners(AuditingEntityListener.class)
public class EmailAddress extends Auditable<String> {
#Id
#GeneratedValue(strategy=GenerationType.IDENTITY)
#Column(name="id")
private int id;
#Column(name="email_type")
private byte emailType;
#Column(name="email")
private String email;
#ManyToOne(fetch=FetchType.LAZY, cascade={CascadeType.ALL})
#JoinColumn(name="customer_id")
#JsonIgnore
private Customer customer;
.....
The postman json test
{
    "id": 1,
    "firstName": "Bobby",
    "lastName": "Smith",
    "emailAddresses": [
        {
            "id": 1,
            "emailType": 1,
            "email": "bobby@bobby.com"
        },
        {
            "id": 2,
            "emailType": 1,
            "email": "bobby@gmail.com"
        }
    ]
}
BTW, I have confirmed within the customer controller that the emails are included in the request body of the customer.
The customer controller
@PutMapping("/customers")
public Customer updateCustomer(@RequestBody Customer theCustomer) {
    System.out.println("****email count " + theCustomer.getEmailAddresses().size());
    for (EmailAddress index : theCustomer.getEmailAddresses()) {
        System.out.println(index.toString());
    }
    customerService.save(theCustomer);
    return theCustomer;
}
The customer service
@Override
public void save(Customer theCustomer) {
    // Validate the input
    if (theCustomer == null) {
        throw new CustomerNotFoundException("Did not find the Customer, was null...");
    }
    customerRepository.save(theCustomer);
}
MySQL Script
--
-- Table structure for table `customer`
--
DROP TABLE IF EXISTS `customer`;
CREATE TABLE `customer` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `first_name` varchar(24) COLLATE utf8_bin NOT NULL,
    `last_name` varchar(24) COLLATE utf8_bin NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='Primary Customer Table';
--
-- Table structure for table `email_address`
--
DROP TABLE IF EXISTS `email_address`;
CREATE TABLE `email_address` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `email_type` tinyint(4) unsigned NOT NULL COMMENT 'email type',
    `email` varchar(128) COLLATE utf8_bin NOT NULL COMMENT 'email address',
    `customer_id` int(11) NOT NULL COMMENT 'foreign key',
    INDEX par_ind (customer_id),
    PRIMARY KEY (`id`),
    UNIQUE KEY `email` (`email`),
    KEY FK_EMAIL_CUSTOMER_idx (customer_id),
    CONSTRAINT FK_EMAIL_CUSTOMER FOREIGN KEY (customer_id) REFERENCES customer (id) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='email addresses';
Postman Complaint
{
    "status": 400,
    "message": "could not execute statement; SQL [n/a]; constraint [null]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement",
    "timeStamp": 1566840491483
}
Console Complaint
****email count 2
EmailAddress [id=1, type=1, email=bobby@bobby.com]
EmailAddress [id=2, type=1, email=bobby@gmail.com]
2019-08-28 17:33:07.625 WARN 8669 --- [nio-8080-exec-2] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1048, SQLState: 23000
2019-08-28 17:33:07.626 ERROR 8669 --- [nio-8080-exec-2] o.h.engine.jdbc.spi.SqlExceptionHelper : Column 'customer_id' cannot be null
2019-08-28 17:33:07.629 ERROR 8669 --- [nio-8080-exec-2] o.h.i.ExceptionMapperStandardImpl : HHH000346: Error during managed flush [org.hibernate.exception.ConstraintViolationException: could not execute statement]
2019-08-28 17:33:07.735 WARN 8669 --- [nio-8080-exec-2] .m.m.a.ExceptionHandlerExceptionResolver : Resolved [org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint [null]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement]
Therefore, with a POST or PUT, I am not sure why the Spring Data JPA save does not satisfy the foreign key constraint for entities with oneToMany relationships. I am guessing it is either some missing annotations or something wrong with my SQL script. I am also not sure why the updated data does not persist to the email_address table. Does the EmailAddress entity require some type of getter/setter for customer_id?
public class Customer extends Auditable<String> {
    @OneToMany(fetch=FetchType.LAZY, mappedBy="customer", cascade={CascadeType.ALL})
    private List<EmailAddress> emailAddresses;
}

public class EmailAddress extends Auditable<String> {
    @ManyToOne(fetch=FetchType.LAZY, cascade={CascadeType.ALL})
    @JoinColumn(name="customer_id")
    private Customer customer;
}
The mappedBy here means that the relationship between Customer and EmailAddress (i.e. the value of customer_id in the email_address table) is determined by EmailAddress#customer, not by Customer#emailAddresses.
What you are sending is just the content of Customer#emailAddresses, which Hibernate ignores when deciding which DB values to update/insert for this relationship. So you have to make sure EmailAddress#customer is set correctly.
For example, you can have the following method to add an email address to a Customer:
public class Customer {

    @OneToMany(fetch=FetchType.LAZY, mappedBy="customer", cascade={CascadeType.ALL})
    private List<EmailAddress> emailAddresses;

    public void addEmailAddress(EmailAddress email) {
        // As said, Hibernate ignores this side when persisting the relationship;
        // it is added mainly to keep both sides consistent in the Java objects.
        this.emailAddresses.add(email);
        email.setCustomer(this);
    }
}
And always call addEmailAddress() to add an email for a customer. You can apply the same idea when updating an email address for a customer.
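In this question's setup the Customer graph comes straight from Jackson, and EmailAddress#customer is @JsonIgnore'd, so it is never populated during deserialization. A minimal sketch of the controller applying the advice above before saving, assuming the usual getters/setters exist on both entities:
@PutMapping("/customers")
public Customer updateCustomer(@RequestBody Customer theCustomer) {
    // Jackson cannot set EmailAddress#customer (the field is @JsonIgnore'd),
    // so wire up the owning side by hand before handing the graph to JPA.
    if (theCustomer.getEmailAddresses() != null) {
        for (EmailAddress email : theCustomer.getEmailAddresses()) {
            email.setCustomer(theCustomer);
        }
    }
    customerService.save(theCustomer);
    return theCustomer;
}
With that in place, Hibernate sees a non-null customer on each EmailAddress and fills in customer_id on insert.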
I have a many-to-one relationship as below (I have removed columns that do not contribute to this discussion):
@Entity
@SecondaryTable(name = "RecordValue", pkJoinColumns = {
        @PrimaryKeyJoinColumn(name = "RECORD_ID", referencedColumnName = "RECORD_ID") })
class Record {
    @Id
    @Column(name = "RECORD_ID")
    long recordId;

    @OneToMany(mappedBy = "key")
    Set<RecordValue> values;
}
@Entity
class RecordValue {
    @EmbeddedId
    RecordValuePK pk;

    @Column
    long value;

    @ManyToOne
    @MapsId("recordId")
    private Record key;
}

@Embeddable
class RecordValuePK {
    @Column(name = "RECORD_ID")
    @JoinColumn(referencedColumnName = "RECORD_ID", foreignKey = @ForeignKey(name = "FK_RECORD"))
    long recordId;

    @Column(name = "COLLECTION_DATE")
    LocalDate collectionDate;
}
When Hibernate creates the tables, the RecordValue table gets a primary key consisting of only RECORD_ID and NOT COLLECTION_DATE.
What could be the problem?
The Hibernate debug log shows the following:
DEBUG - Forcing column [collection_date] to be non-null as it is part of the primary key for table [recordvalue]
DEBUG - Forcing column [key_record_id] to be non-null as it is part of the primary key for table [recordvalue]
DEBUG - Forcing column [record_id] to be non-null as it is part of the primary key for table [recordvalue]
...
Hibernate:
create table Record (
    RECORD_ID bigint not null,
    primary key (RECORD_ID)
)
Hibernate:
create table RecordValue (
    COLLECTION_DATE date not null,
    VALUE bigint not null,
    key_RECORD_ID bigint not null,
    RECORD_ID bigint not null,
    primary key (RECORD_ID)
)
Removing the @SecondaryTable specification resolved this issue. The @SecondaryTable specification was forcing both tables to have the same primary key. I found this solution after reading this blog:
https://antoniogoncalves.org/2008/05/20/primary-and-secondary-table-with-jpa.
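For completeness, a sketch of the mapping after that fix; it is simply the original Record with the @SecondaryTable/@PrimaryKeyJoinColumn pair removed, which lets RecordValue keep its composite (RECORD_ID, COLLECTION_DATE) primary key during schema generation:
import java.util.Set;
import javax.persistence.*;

@Entity
class Record {
    @Id
    @Column(name = "RECORD_ID")
    long recordId;

    // No @SecondaryTable forcing RecordValue to share this PK anymore,
    // so RecordValue's @EmbeddedId (RECORD_ID, COLLECTION_DATE) survives.
    @OneToMany(mappedBy = "key")
    Set<RecordValue> values;
}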
I am trying to save an entity that has a many-to-many association to another entity, cascade the persist to the associated entity, and create the association, using a Spring Data JPA repository.
I can insert the parent entity_a, which contains a set of entity_b, using entityARepository.save(entityA). Spring Data JPA takes care of all the inserts needed in the transaction: all the entity_b rows get inserted, the entity_a row gets inserted, and the association rows in the join table in the middle get inserted as well. If I update the same entity_a with a new value in, say, the timestamp column, the same entityARepository.save(entityA) handles this and does a corresponding update.
The problem happens when an entity_b already exists (in an association with some other entity_a) and I try to insert a new entity_a with that same entity_b. It is many-to-many, so this is exactly how the data model is supposed to be used. But instead of updating the existing entity_b during this entityA save() transaction, it tries to insert entity_b again, and a constraint violation exception on the primary key is thrown.
org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.0.v20150309-bf26070): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (USER1.SYS_C0013494) violated
Error Code: 1
Call: INSERT INTO ENTITY_B (ID, NAME, VALUE, TIME_STAMP) VALUES (?, ?, ?, ?)
bind => [4 parameters bound]
Query: InsertObjectQuery(EntityB [name=shape, value=circle])
The problem is that Spring Data doesn't have an update(); it only has save(), which is supposed to handle the update if it receives an existing primary key. It is not doing that here: when a new entity_a with a collection of entity_b is saved and any of those entity_b's already exist, the whole transaction fails due to the primary key constraint violation on entity_b.
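For context, save() does not look at the database row or the business key at all when deciding between insert and update; Spring Data JPA's SimpleJpaRepository.save() only asks whether the entity is new, which for a generated id effectively means "is the id null". Paraphrased:
// Paraphrased from Spring Data JPA's SimpleJpaRepository
@Transactional
public <S extends T> S save(S entity) {
    if (entityInformation.isNew(entity)) {
        em.persist(entity); // id is null: always an INSERT
        return entity;
    }
    return em.merge(entity); // id present: load then UPDATE
}
So an entity_b that arrives with a null id is always persisted (inserted), regardless of the unique constraint on (name, value); the business key never enters into the decision.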
@Entity
public class EntityA {

    @Id
    @SequenceGenerator(name = "EntityASeq", sequenceName = "SQ_ENTITY_A", allocationSize = 1, initialValue = 1)
    @GeneratedValue(strategy = GenerationType.IDENTITY, generator = "EntityASeq")
    @Column(name = "ID")
    private Integer id;

    @ManyToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE}, fetch = FetchType.LAZY)
    @JoinTable(name = "MY_JOINED_TABLE",
            joinColumns = {
                    @JoinColumn(name = "a_id", referencedColumnName = "ID")},
            inverseJoinColumns = {
                    @JoinColumn(name = "b_id", referencedColumnName = "ID")})
    private Set<EntityB> attributes;

    // These three columns below have a unique constraint together.
    @Column(name = "name")
    private String name;

    @Column(name = "tenant")
    private String tenant;

    @Column(name = "type")
    private String type;

    @Column(name = "timestamp")
    private Timestamp timestamp;
}
@Entity
public class EntityB {

    @Id
    @SequenceGenerator(name = "EntityBSeq", sequenceName = "SQ_ENTITY_B", allocationSize = 1, initialValue = 1)
    @GeneratedValue(strategy = GenerationType.IDENTITY, generator = "EntityBSeq")
    @Column(name = "ID")
    private Integer id;

    @ManyToMany(mappedBy = "attributes")
    private Set<EntityA> aSet;

    // These two columns below have a unique constraint together.
    @Column(name = "name")
    private String name;

    @Column(name = "value")
    private String value;

    @Column(name = "timestamp")
    private Timestamp timestamp;
}
The id for each is generated by default. I also have a unique constraint on a few columns, which means that if an EntityB has the same name/value as an existing one in the database, I want to just update the timestamp. That works if entity_a is already in the table and has the same entity_b's: A's and B's timestamps are updated and there is no error when I persist with entityARepository.save(entityA). (I do some checking on the DB with findOne, because the id is auto-generated and not known. So if a name/value pair already exists, I don't try to insert with a new id; I reuse the one in the DB, and it works. The same goes for entity_a's tenant/name/type.)
It also works when I persist an existing entity_a with updated entity_b's. So if a new entity_b is associated with that entity_a, or an entity_b that already exists in an association with a different entity_a is added, that works and the persistence succeeds.
The issue, again, is just on INSERT of entityA via repo.save() when some entity_b's already exist from other associations. It should be doing:
INSERT INTO entity_a ...
UPDATE entity_b ...
INSERT INTO MY_JOINED_TABLE ...
But it seems like it's doing:
INSERT INTO entity_a ...
INSERT INTO entity_b ... -- fails because primary key constraint fails
INSERT INTO MY_JOINED_TABLE ...
EDIT: I tried removing CascadeType.PERSIST, but I get an error saying:
During synchronization a new object was found through a relationship that was not marked cascade PERSIST: EntityB [name=color, value=blue].
I wanted to try to insert/update manually, but I couldn't: it wants EntityA to specify PERSIST because it has associations to EntityB.
I also tried inserting in the reverse direction, and now I have issues inserting via entityB.save() when some entityA already exists and I am adding a new entityA to an entityB.
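A sketch of one common way out, assuming an EntityBRepository exists and that (name, value) really is EntityB's business key; the findByNameAndValue finder below is a hypothetical Spring Data derived query, and the getters/setters are assumed. Before saving a new EntityA, swap each incoming EntityB for the managed row that already carries that business key, so the cascade only persists genuinely new B's:
import java.util.HashSet;
import java.util.Set;
import org.springframework.transaction.annotation.Transactional;

public class EntityAService {

    private final EntityARepository entityARepository;
    private final EntityBRepository entityBRepository; // assumed to exist

    public EntityAService(EntityARepository entityARepository,
                          EntityBRepository entityBRepository) {
        this.entityARepository = entityARepository;
        this.entityBRepository = entityBRepository;
    }

    @Transactional
    public EntityA saveResolvingAttributes(EntityA entityA) {
        Set<EntityB> resolved = new HashSet<>();
        for (EntityB b : entityA.getAttributes()) {
            // Hypothetical derived finder on the (name, value) unique key
            EntityB existing = entityBRepository.findByNameAndValue(b.getName(), b.getValue());
            if (existing != null) {
                existing.setTimestamp(b.getTimestamp()); // just refresh the timestamp
                resolved.add(existing);                  // reuse the managed row
            } else {
                resolved.add(b);                         // genuinely new: let the cascade insert it
            }
        }
        entityA.setAttributes(resolved);
        return entityARepository.save(entityA);
    }
}
With the managed instances in the set, the merge sees existing ids and issues UPDATEs for those B's, while the join-table rows are still inserted for the new association.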