Testing a unique constraint in @DataJpaTest - Kotlin

I wrote this test to verify the unique constraint on Domain.name in the database, but it doesn't work: I expect an exception to be thrown by the domainRepository.saveAndFlush(domainDuplicate) call, yet the test passes.
@RunWith(SpringRunner::class)
@DataJpaTest
class DomainRepositoryTest {

    @Autowired
    private lateinit var util: TestEntityManager

    @Autowired
    private lateinit var domainRepository: DomainRepository

    @Test
    fun testNonUniqueDomainSave() {
        // Arrange
        val domain = Domain(name = "name")
        util.persist(domain)
        util.flush()
        util.clear()
        val domainDuplicate = domain.copy(id = 0L)

        // Act
        domainRepository.saveAndFlush(domainDuplicate)

        // Exception is expected
    }
}
Test log (shortened):
INFO 13522 --- [ main] o.s.t.c.transaction.TransactionContext : Began transaction (1) for test context [DefaultTestContext#8f8717b testClass = DomainRepositoryTest,...]; transaction manager [org.springframework.orm.jpa.JpaTransactionManager#65f36591]; rollback [true]
Hibernate: insert into domains (name, id) values (?, ?)
Hibernate: insert into domains (name, id) values (?, ?)
Hibernate: insert into domains (name, id) values (?, ?)
INFO 13522 --- [ main] o.s.t.c.transaction.TransactionContext : Rolled back transaction for test: [DefaultTestContext#8f8717b testClass = DomainRepositoryTest, ...], attributes = map[[empty]]]
Question: How to fix this test?
Additional question: Why are there three insert operations in the log?
Database: H2

It was a problem with database initialization in tests: there was no unique constraint! I assumed that Liquibase would run migrations before any tests, but in fact it was not configured to do so. By default, Hibernate DDL auto-update is used to create the DB schema in tests.
I can think of two possible solutions:
add the liquibase-core jar to the test classpath and configure it to run migrations, or
declare a @UniqueConstraint on the Domain entity and rely on Hibernate DDL generation.
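For the second option, the constraint can be declared on the entity so that Hibernate's DDL generation also creates it in the test schema. A minimal sketch (assuming a javax.persistence setup; the table name, id mapping, and generation strategy here are illustrative, not taken from the original project):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;

// Sketch only: @UniqueConstraint makes Hibernate DDL generation emit the
// unique index, so the test database enforces it as well.
@Entity
@Table(name = "domains",
       uniqueConstraints = @UniqueConstraint(columnNames = "name"))
public class Domain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // assumed strategy
    private Long id;

    @Column(name = "name", nullable = false)
    private String name;
}
```

With the constraint in place, flushing a duplicate name from the test should raise a constraint-violation exception.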

The reason is that saveAndFlush() performs an update if the entity already exists (yes, the name of the method is confusing).
If you want to test your case, you need to override save() and use the EntityManager's persist method directly.
Here is an example of overriding the save() method of a Spring Data JPA repository:
@PersistenceContext
private EntityManager em;

@Override
@Transactional
public Domain save(Domain domain) {
    if (domain.getId() == null) {
        em.persist(domain);
        return domain;
    } else {
        return em.merge(domain);
    }
}
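The id-based branch above can be illustrated with a toy in-memory stand-in (plain Java, no JPA; all names here are made up): once the incoming object carries an id that is already in the store, "saving" overwrites the existing row instead of inserting a second one, which is why a unique constraint is never triggered on that path.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of save(): insert when the id is unset, update otherwise.
// Purely illustrative; this is not JPA.
class ToyRepository {
    private final Map<Long, String> rows = new HashMap<>();
    private long nextId = 1;

    // Returns the id the entity ends up with.
    long save(Long id, String name) {
        if (id == null) {
            long newId = nextId++;
            rows.put(newId, name);   // behaves like EntityManager.persist
            return newId;
        }
        rows.put(id, name);          // behaves like EntityManager.merge: overwrite
        return id;
    }

    int size() { return rows.size(); }
}
```

In real JPA terms, persist corresponds to the INSERT path and merge to the UPDATE path.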


Kotlin Exposed fails (insert is successful, but select fails)

Thanks for reading this question.
I created a simple Kotlin project because I want to learn Kotlin Exposed.
I use an H2 database.
I wrote the code below.
package learn.exposed.tables

import org.jetbrains.exposed.sql.Database
import org.jetbrains.exposed.sql.Table
import org.jetbrains.exposed.sql.insert
import org.jetbrains.exposed.sql.selectAll
import org.jetbrains.exposed.sql.transactions.transaction

object AuthorTable : Table("author") {
    val name = varchar("name", 30)
}

fun main() {
    // this url based on http://www.h2database.com/html/features.html#execute_sql_on_connection
    val url = "jdbc:h2:mem:test;INIT=runscript from 'classpath:/create.sql'\\;runscript from 'classpath:/init.sql'"
    Database.connect(url, driver = "org.h2.Driver", user = "root", password = "")
    transaction {
        AuthorTable.insert {
            it[name] = "hoge"
        }
        println("insert done.") // this message shows on the console, so I think the insert is successful
    }
    transaction {
        AuthorTable.selectAll().firstOrNull()
    }
}
and the SQL files below:
create table author (name varchar(30));
insert into author values ('author1');
When I execute main(), the console shows insert done., so I think the insert is working, but when AuthorTable.selectAll().firstOrNull() executes, an exception like the one below occurs:
Exception in thread "main" org.jetbrains.exposed.exceptions.ExposedSQLException: org.h2.jdbc.JdbcSQLNonTransientException: General error: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-200]
SQL: [Failed on expanding args for SELECT: org.jetbrains.exposed.sql.Query#27406a17]
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:62)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:135)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:121)
at org.jetbrains.exposed.sql.AbstractQuery.iterator(AbstractQuery.kt:65)
at kotlin.collections.CollectionsKt___CollectionsKt.firstOrNull(_Collections.kt:267)
at learn.exposed.MainKt$main$2.invoke(Main.kt:22)
at learn.exposed.MainKt$main$2.invoke(Main.kt)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.inTopLevelTransaction$run(ThreadLocalTransactionManager.kt:179)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.access$inTopLevelTransaction$run(ThreadLocalTransactionManager.kt:1)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$1.invoke(ThreadLocalTransactionManager.kt:205)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:213)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.inTopLevelTransaction(ThreadLocalTransactionManager.kt:204)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$transaction$1.invoke(ThreadLocalTransactionManager.kt:156)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:213)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:126)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:123)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction$default(ThreadLocalTransactionManager.kt:122)
at learn.exposed.MainKt.main(Main.kt:21)
at learn.exposed.MainKt.main(Main.kt)
Can I solve this? Do you know how to solve it?
Thanks.
It seems you need at least one primary key (PK) or constraint, because of an H2 bug. For example, declaring the column as a primary key in create.sql (create table author (name varchar(30) primary key);) should make the select work.
https://github.com/h2database/h2database/issues/2191
https://github.com/JetBrains/Exposed/issues/801

Hibernate JPQL query is very slow compared to SQL

In my project I have a problem with one JPQL query, which takes about 1.5 s. When I execute the SQL copied from the debug log (the same statement Hibernate executes) directly on the PostgreSQL DB, it takes about 15 ms.
@Service
@Transactional
@Slf4j
public class PersonSyncServiceImpl
        extends HotelApiCommunicationService implements PersonSyncService {

    [...]

    private PersonLinked getLinkedPerson(CNPerson cnPerson, Obiekt obiekt) {
        PersonLinked person = cnPerson.getPersonLinked();
        if (person == null) {
            var o = cnPerson;
            List<PersonLinked> personList = personLinkedDao.findPersonLinkedByRecordData(
                    o.getPersonImiona(), o.getPersonNazwisko(), o.getPersonPesel(), o.getPersonEmail(), o.getPersonTelefon1());
            person = personList.stream()
                    .findFirst().orElse(null);
            if (person == null) {
                person = createPersonLinkedFromCnPerson(cnPerson);
                personLinkedDao.save(person);
            }
            cnPerson.setPersonLinked(person);
        }
        return person;
    }

    [...]
}
The problem is with this line:
List<PersonLinked> personList = personLinkedDao.findPersonLinkedByRecordData(
o.getPersonImiona(), o.getPersonNazwisko(), o.getPersonPesel(), o.getPersonEmail(), o.getPersonTelefon1());
The DAO with the defined query:
@Repository
@Transactional
public interface PersonLinkedDao extends JpaRepository<PersonLinked, Long> {

    @Query("select o from PersonLinked o \n" +
           "where o.personImiona = :imie and o.personNazwisko = :nazwisko \n" +
           " and (o.personPesel = :pesel or o.personEmail = :email or o.personTelefon1 = :telefon)")
    List<PersonLinked> findPersonLinkedByRecordData(
            @Param("imie") String personImiona,
            @Param("nazwisko") String personNazwisko,
            @Param("pesel") String personPesel,
            @Param("email") String personEmail,
            @Param("telefon") String personTelefon);
}
SQL from Hibernate debug log:
select [..]
from
person personlinke0_
where
personlinke0_.person_imiona=?
and personlinke0_.person_nazwisko=?
and (
personlinke0_.person_pesel=?
or personlinke0_.person_email=?
or personlinke0_.person_telefon1=?
)
When I execute this query on the database it takes about 15 ms; executed from code it takes about 1.5 s. I commented this line out and the lag disappeared, so the problem is definitely this JPQL select.
Database connection configuration:
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQL9Dialect
spring.datasource.url=jdbc:postgresql://192.168.1.200:5433/XXXXXXX
spring.datasource.username=XXXXX
spring.datasource.password=XXXXX
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.format_sql=true
spring.jpa.properties.hibernate.jdbc.batch_size=50
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.generate_statistics=true
UPDATE 1:
debug.log:
26-09-2020 16:06:36.130 [http-nio-8091-exec-2] DEBUG org.hibernate.SQL.logStatement -
select [...]
from
person personlinke0_
where
personlinke0_.person_imiona=?
and personlinke0_.person_nazwisko=?
and (
personlinke0_.person_pesel=?
or personlinke0_.person_email=?
or personlinke0_.person_telefon1=?
)
26-09-2020 16:06:36.130 [http-nio-8091-exec-2] DEBUG o.s.orm.jpa.JpaTransactionManager.doGetTransaction - Found thread-bound EntityManager [SessionImpl(1971671100<open>)] for JPA transaction
26-09-2020 16:06:36.130 [http-nio-8091-exec-2] DEBUG o.s.orm.jpa.JpaTransactionManager.handleExistingTransaction - Participating in existing transaction
26-09-2020 16:06:36.146 [http-nio-8091-exec-2] DEBUG o.s.orm.jpa.JpaTransactionManager.doGetTransaction - Found thread-bound EntityManager [SessionImpl(1971671100<open>)] for JPA transaction
26-09-2020 16:06:36.146 [http-nio-8091-exec-2] DEBUG o.s.orm.jpa.JpaTransactionManager.handleExistingTransaction - Participating in existing transaction
26-09-2020 16:06:36.146 [http-nio-8091-exec-2] DEBUG o.s.orm.jpa.JpaTransactionManager.doGetTransaction - Found thread-bound EntityManager [SessionImpl(1971671100<open>)] for JPA transaction
26-09-2020 16:06:36.146 [http-nio-8091-exec-2] DEBUG o.s.orm.jpa.JpaTransactionManager.handleExistingTransaction - Participating in existing transaction
26-09-2020 16:06:37.521 [http-nio-8091-exec-2] DEBUG org.hibernate.SQL.logStatement -
UPDATE 2:
PersonLinked entity class:
@Entity
@Table(name = "person")
@Getter
@Setter
@SuperBuilder
@EqualsAndHashCode(of = "personId")
public class PersonLinked extends SCPerson {

    @Id
    @GeneratedValue(generator = "seq_person", strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(name = "seq_person", sequenceName = "seq_person", allocationSize = 30)
    @Column(name = "OSOBA_ID", nullable = false)
    private Long personId;

    @OneToMany(mappedBy = "personLinked", fetch = FetchType.LAZY)
    private List<CNPerson> cnPersonList;

    @Tolerate
    public PersonLinked() {
        super();
    }

    @PrePersist
    @Override
    protected void preInsert() {
        super.preInsert();
    }
}
SCPerson class:
@MappedSuperclass
@Getter
@Setter
@SuperBuilder
public class SCPerson {
    [...]
}
Finally I found a solution; the problem was in another part of the code.
Before calling the method getLinkedPerson() I had this line of code:
List<CNPerson> cnPersonList = cnPersonDao.findCnPersonNotLinkedWithPerson(obiekt.getLoid());
cnPersonList contained about 70,000 objects here.
I changed it to:
List<Integer> ids = cnPersonDao.findCnPersonIdsNotLinkedWithPerson(obiekt.getLoid());
The problem is described here: https://stackoverflow.com/a/46258045/9678458
Slowdown during Hibernate context refresh. When you update too many objects, the ORM engine (let's say Hibernate) has to sync them and keep them in memory. Literally, Hibernate must hold all the old states and all the new states of the updated objects, and sometimes it does this quite unoptimally.
You can confirm this with a debugger: find the slowest place and check what exactly is being invoked there. My guess is that it slows down when Hibernate updates the state of the cache.
I think it is because the CNPerson and PersonLinked entities are linked, but I am not sure:
@ManyToOne(fetch = FetchType.LAZY,
        cascade = {CascadeType.MERGE, CascadeType.PERSIST})
@JoinTable(name = "cnperson_links",
        joinColumns = {@JoinColumn(name = "cnperson_loid")},
        inverseJoinColumns = {@JoinColumn(name = "person_id")})
private PersonLinked personLinked;
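The effect described in the quoted answer can be sketched with a toy model (plain Java, not Hibernate; the names are invented): with auto-flush semantics, every query first dirty-checks all managed entities, so holding ~70,000 objects in the persistence context makes each query pay a cost proportional to the context size rather than to the query itself.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: before running a query, an ORM with auto-flush must
// dirty-check every managed entity, so per-query cost grows with the
// size of the persistence context, not with the query.
class ToyPersistenceContext {
    private final List<String> managedSnapshots = new ArrayList<>();
    int dirtyChecks = 0;

    void manage(String entitySnapshot) {
        managedSnapshots.add(entitySnapshot);
    }

    List<String> query(List<String> table) {
        // auto-flush: compare current state against every stored snapshot
        for (String snapshot : managedSnapshots) {
            dirtyChecks++; // stand-in for the real comparison work
        }
        return table;
    }
}
```

Loading only the ids (as in the fix above) keeps the context small, so the dirty-check loop stays cheap.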

JPA 2.1 Timestamp type field for versioning and optimistic locking always throwing OptimisticLockException

Environment: JPA 2.1, EclipseLink 2.6.3, SQL Server 2016
I want to use a field of type Timestamp for versioning and optimistic locking. I do not have the option to use a numeric column for versioning. My understanding is that I just need to annotate the field with @Version and that's all.
Database Table: token_t
token_id int PK
token_name varchar(100)
last_updt_dtm datetime
Entity Class
@Entity
@Table(name = "token_t")
public class TokenAE {

    @Id
    @Column(name = "token_id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private int tokenId;

    @Column(name = "token_name")
    private String tokenName;

    @Version
    @Column(name = "last_updt_dtm")
    private Timestamp lastUpdtDtm;

    // getters/setters omitted to avoid cluttering
}
Test Method
@Test
public void optimisticLockingTest1() throws Exception {
    PersistenceHelper.getEntityManager().getTransaction().begin();
    TokenAE tokenAE = tokenDAO.getToken(616);
    assertNotNull("tokenAE is null", tokenAE);
    tokenAE.setTokenName("new token name");
    PersistenceHelper.getEntityManager().merge(tokenAE);
    PersistenceHelper.getEntityManager().getTransaction().commit();
}
Note: PersistenceHelper is just a helper class that instantiates the entity manager.
As you can see, I am loading TokenAE, updating the name, and calling merge. I made sure that the underlying database record has not changed, so I expect the merge/update to succeed, but it always throws an OptimisticLockException.
See the stacktrace below. I enabled JPA query/param logging and I can see the UPDATE query and its bind parameters. The value of last_updt_dtm in the WHERE clause [2018-07-17 22:59:48.847] exactly matches the value in the database record, so this UPDATE query should return a row count of 1 and succeed.
I have no idea what is going on here. Any help is greatly appreciated.
Exception Stacktrace
[EL Fine]: sql: 2018-07-18 23:54:13.137--ClientSession(1451516720)--Connection(1323996324)--Thread(Thread[main,5,main])--
UPDATE token_t SET token_name = ?, last_updt_dtm = ? WHERE ((token_id = ?) AND (last_updt_dtm = ?))
bind => [new token name, 2018-07-18 23:54:13.35, 616, 2018-07-17 22:59:48.847]
[EL Warning]: 2018-07-18 23:54:13.286--UnitOfWork(998015174)--Thread(Thread[main,5,main])--Local Exception Stack:
Exception [EclipseLink-5006] (Eclipse Persistence Services - 2.6.3.v20160428-59c81c5): org.eclipse.persistence.exceptions.OptimisticLockException
Exception Description: The object [TokenAE [tokenId=616, tokenName=new token name, lastUpdtDtm=2018-07-18 23:54:13.35]] cannot be updated because it has changed or been deleted since it was last read.
Class> com.test.TokenAE Primary Key> 616
at org.eclipse.persistence.exceptions.OptimisticLockException.objectChangedSinceLastReadWhenUpdating(OptimisticLockException.java:144)
at org.eclipse.persistence.descriptors.VersionLockingPolicy.validateUpdate(VersionLockingPolicy.java:790)
at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.updateObjectForWriteWithChangeSet(DatabaseQueryMechanism.java:1086)
at org.eclipse.persistence.queries.UpdateObjectQuery.executeCommitWithChangeSet(UpdateObjectQuery.java:84)
at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
at org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
at org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1857)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1839)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1790)
at org.eclipse.persistence.internal.sessions.CommitManager.commitChangedObjectsForClassWithChangeSet(CommitManager.java:273)
at org.eclipse.persistence.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:131)
at org.eclipse.persistence.internal.sessions.AbstractSession.writeAllObjectsWithChangeSet(AbstractSession.java:4264)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabase(UnitOfWorkImpl.java:1441)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1531)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:278)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commit(UnitOfWorkImpl.java:1113)
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:137)
at sunlife.us.dc.bds.token.domain.TokenDAOTest.optimisticLockingTest1(TokenDAOTest.java:39)

JPA 2, understanding CascadeType.ALL and GenerationType.AUTO (EclipseLink 2.5)

I am having trouble understanding how to properly persist entities with sub-entities when the JVM has been restarted and the database already contains data from previous sessions.
I have roughly the following entities:
@Entity
public class Organization {
    ...
    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER, orphanRemoval = true)
    @JoinColumn(name = "\"ADDRESS_ID\"", nullable = false)
    private Address address;
}

@Entity
public class Address {
    ...
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "\"ADDRESS_ID\"")
    private int addressId;

    @ManyToOne(fetch = FetchType.EAGER, cascade = CascadeType.MERGE, optional = false)
    @JoinColumn(name = "\"ADDRESS_TYPE_ID\"", nullable = false)
    private AddressType addressType;
}

@Entity
public class AddressType {
    ...
    // Not bi-directional, so nothing special here
}
It is expected that the address types are already present in the database (CascadeType.MERGE) before an address is created. A new organization is created with a new address, and the address gets a type set from the given selection. => This works fine against a clean database (only address types present).
I am still developing, so every now and then I shut down the server (JVM) and restart the application. When I then try to add a new organization to a database that already contains data persisted in previous sessions, I get the following error:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL151120084237691' defined on 'ADDRESS'.
Error Code: -20001
Call: INSERT INTO "ADDRESS" ("ADDRESS_ID", "STREET_ADDRESS", "COUNTRY", "ZIP_CODE", "CITY", "ADDRESS_TYPE_ID") VALUES (?, ?, ?, ?, ?, ?)
bind => [2, testroad 1, Country, 99999, testcity, ABCDEF-123456]
It tries to use an ID that already exists in the database. How do I make it realize that the ID is already used and that it should continue from the last one?
Notes:
- The address is persisted as part of the organization (CascadeType.ALL), not separately.
- In tests, I am loading all the existing organizations into the same EntityManager that does the persisting operation => The organizations have their addresses accessed eagerly, so they should be available in the EM cache. The duplicate address_id it complains about in unit tests seems to be an orphan entity (maybe this is actually the reason for the error?).
- I can get this error in unit tests using Derby, but a test server using an Oracle DB has the same errors in its log.
- I also tried adding a 'find all' query to load all Address entities into the cache of the same EntityManager that persists the organization. The 'find all' is executed before the persist => it still failed.
// UPDATE
The same thing happens even when I use a TableGenerator for the id values:
@Entity
public class Address {
    ...
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "addr_gen")
    @TableGenerator(name = "addr_gen", allocationSize = 1, initialValue = 100, table = "\"ADDRESS_GEN\"")
    @Column(name = "\"ADDRESS_ID\"")
    private int osoiteId;
    ...
}
The generator table gets created, but it remains empty. The ids, however, start running from the initial value of 100.
Some more notes:
- When using a self-defined table and inserting a value there for the sequence, the ids for Address entities continue correctly from that value. When the test is finished, the table gets emptied while data still remains in the other tables => it will fail the next time.
- When using GenerationType.AUTO, the sequence table gets a default sequence, but after the tests it is cleared (the same thing as with the self-defined table).
I guess this is what has happened on the test servers, and it can be reproduced by not emptying the database after a test; the sequence table still gets emptied. So the question becomes: how do I synchronize the sequence table after a JVM boot (or prevent it from emptying itself)?
I do not know if this is a good solution, or even right in general for the original topic, but I managed to work around it by defining the sequences separately for all auto-generated id fields:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "addrSeq")
@SequenceGenerator(name = "addrSeq", sequenceName = "addr_seq", allocationSize = 10)
@Column(name = "\"ADDRESS_ID\"")
private int addressId;
It seems to work, though I do not know why this behaves essentially differently from using AUTO.
Is it normal that the default sequence is reset when the server is restarted?
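For what it's worth, the interaction between allocationSize and restarts can be sketched as a toy hi/lo-style allocator (plain Java, illustrative only, not EclipseLink's actual implementation): each database round trip reserves a block of allocationSize ids, and as long as the database-side sequence itself survives restarts, a freshly started allocator continues past every id handed out before, so no collisions occur.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch of block allocation as used by @SequenceGenerator(allocationSize = N).
// The "database sequence" is durable across restarts; the in-memory block is not.
class ToyIdAllocator {
    private final AtomicLong dbSequence;  // stands in for the durable DB sequence
    private final int allocationSize;
    private long next = 0, max = 0;       // current in-memory block, empty at start

    ToyIdAllocator(AtomicLong dbSequence, int allocationSize) {
        this.dbSequence = dbSequence;
        this.allocationSize = allocationSize;
    }

    long nextId() {
        if (next >= max) {                // block exhausted: one DB round trip
            long hi = dbSequence.addAndGet(allocationSize);
            next = hi - allocationSize;
            max = hi;
        }
        return ++next;
    }
}
```

If the sequence state is wiped between runs (as with the emptied generator table described above), a restarted allocator would start reusing already-assigned ids, which matches the duplicate-key errors seen here.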

Using ActiveRecord/NHibernate, can I Delete and Refresh without a Flush?

I have the following Unit Test method:
void TestOrderItemDelete()
{
    using (new SessionScope())
    {
        var order = Order.FindById(1234);
        var originalItemCount = order.OrderItems.Count;
        Assert.IsTrue(originalItemCount > 0);

        var itemToDelete = order.OrderItems[0];
        itemToDelete.DeleteAndFlush(); // itemToDelete.Delete();

        order.Refresh();
        Assert.AreEqual(originalItemCount - 1, order.OrderItems.Count);
    }
}
As you can see from the comment after the DeleteAndFlush call, I had to change it from a simple Delete to get the unit test to pass. Why is this? The same is not true of my other unit test for adding an OrderItem, which works just fine:
void TestOrderItemAdd()
{
    using (new SessionScope())
    {
        var order = Order.FindById(1234);
        var originalItemCount = order.OrderItems.Count;

        var itemToAdd = new OrderItem();
        itemToAdd.Order = order;
        itemToAdd.Create(); // Notice, this is not CreateAndFlush

        order.Refresh();
        Assert.AreEqual(originalItemCount + 1, order.OrderItems.Count);
    }
}
All of this came up when I started using lazy instantiation of the Order.OrderItems relationship mapping and had to add the using (new SessionScope) block around the test.
Any ideas?
This is difficult to troubleshoot without knowing the contents of your mappings, but one possibility is that you have the ID property of OrderItem mapped to an identity field (or sequence, etc.) in the DB. If so, NHibernate must make a round trip to the database in order to generate the ID, so the OrderItem is inserted immediately. That is not true of a delete, so the SQL DELETE statement is not executed until the session flushes.
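The asymmetry described above can be sketched as a toy model (plain Java rather than C#/NHibernate; the names are invented): obtaining an identity-generated id requires executing the INSERT immediately, while a DELETE can simply be queued until the session flushes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy session: identity-style ids force an immediate INSERT (the id can
// only come from the database), while deletes are queued until flush().
class ToySession {
    final List<String> executedSql = new ArrayList<>();
    private final List<String> pendingSql = new ArrayList<>();
    private long identity = 0;

    long create(String row) {
        executedSql.add("INSERT " + row);  // must run now to obtain the id
        return ++identity;
    }

    void delete(String row) {
        pendingSql.add("DELETE " + row);   // deferred until flush
    }

    void flush() {
        executedSql.addAll(pendingSql);
        pendingSql.clear();
    }
}
```

Under this model, a Refresh after Create sees the new row, but a Refresh after a plain Delete does not, which is consistent with the DeleteAndFlush workaround in the question.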