How to use the optimistic lock version for a cascaded entity in Micronaut (Kotlin)

I have already updated Micronaut to the latest version, and micronaut-data is 3.9.1. I don't know whether this is a bug in micronaut-data.
There is a version field for optimistic locking in Entity1 and Entity2:
data class Entity1(
    @Relation(Relation.Kind.MANY_TO_ONE, cascade = [Cascade.ALL])
    val verification: Entity2,
    @field:Version val version: Long? = 0L
)

data class Entity2(
    @DateCreated(truncatedTo = ChronoUnit.MILLIS) val createdAt: Instant? = null,
    @DateUpdated(truncatedTo = ChronoUnit.MILLIS) val updatedAt: Instant? = null,
    @field:Version val version: Long? = 0L
)
When I use repository.update(entity1), I found that the generated SQL for entity2 has no version check, like below:
10:10:34.134 [Test worker] DEBUG io.micronaut.data.query - Executing SQL query: UPDATE "entity2" SET "deleted_at"=?,"updated_at"=? WHERE ("id" = ?)
The update SQL for entity1 is correct, like below:
10:10:34.139 [Test worker] DEBUG io.micronaut.data.query - Executing SQL query: UPDATE "entity1" SET "entity1_id"=?,"version"=? WHERE ("id" = ? AND "version" = ?)
I want the optimistic lock of Entity2 to be applied when I update Entity1 through the cascade.
How can I do this? Has anyone had the same case?
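The only workaround I can think of so far is to update Entity2 through its own repository, so the version check is applied explicitly, and then update Entity1. A minimal sketch of what I mean, assuming a JDBC repository for Entity2 with a Long id and a PostgreSQL dialect (all of that is my assumption, not from the original mapping):

import io.micronaut.data.jdbc.annotation.JdbcRepository
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.repository.CrudRepository
import jakarta.inject.Singleton

@JdbcRepository(dialect = Dialect.POSTGRES) // dialect is a guess; use the one matching your database
interface Entity2Repository : CrudRepository<Entity2, Long>

@JdbcRepository(dialect = Dialect.POSTGRES)
interface Entity1Repository : CrudRepository<Entity1, Long>

@Singleton
class Entity1Updater(
    private val entity1Repository: Entity1Repository,
    private val entity2Repository: Entity2Repository
) {
    fun updateBoth(entity1: Entity1) {
        // Updating the child through its own repository makes Micronaut Data
        // generate "... WHERE id = ? AND version = ?" and bump the version.
        entity2Repository.update(entity1.verification)
        // Then update the parent as before.
        entity1Repository.update(entity1)
    }
}

I'm not sure whether this is the intended approach or whether the cascaded update is supposed to carry the version check itself, which is really my question.
Thanks.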

Related

JPA using @ElementCollection with @OrderColumn, but it throws a 'duplicate key value violates unique constraint' exception

First of all, I'm new to Spring and JPA, so sorry for the rudimentary question.
These days I've been trying to build a server that stores location points, using Spring Boot + JPA + Docker + PostgreSQL / Kotlin.
The idea is that the server receives client calls and stores locations periodically,
so I'm using @ElementCollection to store location items with @Embeddable,
but I got an exception from the Spring test code:
Hibernate:
insert
into
pos_info_pos_list
(pos_info_id, pos_list_order, accuracy, event_time, geo_lati, geo_long)
values
(?, ?, ?, ?, ?, ?)
2022-11-12 22:07:34.963 WARN 25880 --- [ main] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 23505
2022-11-12 22:07:34.963 ERROR 25880 --- [ main] o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: duplicate key value violates unique constraint "pos_info_pos_list_pkey"
Detail: Key (pos_info_id, pos_list_order)=(1, 0) already exists.
I'll explain the table structure below:
PosInfo (one), PosData (many), in a one-to-many relation.
I want to use an order column for performance and to limit the posList size (MAX_POS_DATA_SIZE = 200).
@Entity
data class PosInfo(
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    var id: Long? = null
) {
    @ElementCollection(fetch = FetchType.EAGER, targetClass = PosData::class)
    @OrderColumn
    val posList: MutableList<PosData> = mutableListOf()

    fun addPosData(posData: PosDataDto) {
        while (posList.size >= MAX_POS_DATA_SIZE) {
            posList.removeFirst()
        }
        val newData = PosData(posData.geoLati, posData.geoLong, posData.eventTime, posData.accuracy)
        posList.add(newData)
    }
}
PosData table
@Embeddable
data class PosData(
    @Column
    val geoLati: String,
    @Column
    val geoLong: String,
    @Column
    val eventTime: Long,
    @Column
    val accuracy: Int,
)
The Spring test code is below: first insert MAX_POS_DATA_SIZE pos data entries, then add one more.
@Test
fun addPathMax() {
    val dummyPosData = PosDataDto("", "", System.currentTimeMillis(), 0)
    val dummyPosData2 = PosDataDto("yyyy", "eeeee", System.currentTimeMillis(), 0)
    val id = "KSH"
    service.tryAddUser(id, "")
    val userInfo = service.getUserInfo(id)
    assertThat(userInfo).isNotNull
    val posIndex = userInfo!!.posIndex
    val posInfo = service.getPosInfo(posIndex)
    assertThat(posInfo).isNotNull
    for (i in 0 until MAX_POS_DATA_SIZE) {
        posInfo!!.addPosData(dummyPosData)
    }
    service.updatePosInfo(posInfo!!)
    println("Next Input Check KSH_TEST")
    val posInfo2 = service.getPosInfo(posIndex)
    posInfo2!!.addPosData(dummyPosData2)
    service.updatePosInfo(posInfo2!!)
}
service.updatePosInfo is annotated with @Transactional and just calls the CrudRepository save method,
but I get the duplicate key error again and again.
Q1. Shouldn't pos_list_order be 'existing last + 1', since the first element of the previous data was removed and the new data was appended? Why is it '0'?
// Key (pos_info_id, pos_list_order)=(1, 0) already exists.
Q2. Is this structure not good for updating and storing location data periodically? (Using @ElementCollection; should I use @OneToMany instead?)
To be honest, I tried @OneToMany before, but I gave up because I was tired of fixing strange build errors, so I came back to @ElementCollection, which I thought was easier.
Thank you in advance for all the helpful comments.
===========================
What I already tried before:
@OneToMany with mappedBy, but it caused many errors, and when I tried to insert more values it deleted all the existing rows and re-inserted everything plus the new one.
@ElementCollection looks simpler, but it produced the duplicate-key exception again and again.
I also already checked with the following:
@CollectionTable(
    name = "pos_data",
    joinColumns = [JoinColumn(name = "pos_info_id")]
)
Calling JpaRepository.save and then flush doesn't work either;
same result, and I don't know why... really sad.
I got a solution.
This problem was caused by my poor understanding of @Transactional.
It's fixed with the annotation below:
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Rollback(false)
@Test
fun addPathMax() {
    val dummyPosData = PosDataDto("", "", System.currentTimeMillis(), 0)
    val dummyPosData2 = PosDataDto("yyyy", "eeeee", System.currentTimeMillis(), 0)
    val id = "KSH"
    service.tryAddUser(id, "")
    val userInfo = service.getUserInfo(id)
    assertThat(userInfo).isNotNull
    val posIndex = userInfo!!.posIndex
    val posInfo = service.getPosInfo(posIndex)
    assertThat(posInfo).isNotNull
    for (i in 0 until Constants.MAX_POS_DATA_SIZE) {
        posInfo!!.addPosData(dummyPosData)
    }
    service.updatePosInfo(posInfo!!)
    println("Next Input Check KSH_TEST")
    val posInfo2 = service.getPosInfo(posIndex)
    posInfo2!!.addPosData(dummyPosData2)
    service.updatePosInfo(posInfo2!!)
}
I thought the service already had the @Transactional annotation, so the persistence context would be flushed to the database, but it was not.
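For context, this is roughly what I assumed the service side looked like; a minimal sketch with hypothetical names and types (PosService, PosInfoRepository, a Long index), not my actual code:

import org.springframework.data.jpa.repository.JpaRepository
import org.springframework.data.repository.findByIdOrNull
import org.springframework.stereotype.Service
import org.springframework.transaction.annotation.Transactional

// Hypothetical repository; the real one may differ.
interface PosInfoRepository : JpaRepository<PosInfo, Long>

@Service
class PosService(private val posInfoRepository: PosInfoRepository) {

    // Runs inside a transaction, so the modified element collection is
    // flushed to the database when the transaction commits.
    @Transactional
    fun updatePosInfo(posInfo: PosInfo) {
        posInfoRepository.save(posInfo)
    }

    @Transactional(readOnly = true)
    fun getPosInfo(posIndex: Long): PosInfo? = posInfoRepository.findByIdOrNull(posIndex)
}

At least that's my understanding of why REQUIRES_NEW on the test (a transaction that actually commits) changed the behaviour.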

Kotlin SpringMVC - JpaRepository generating invalid update query

I'm quite new to Spring MVC, and I'm having problems getting a simple entity update to work.
My data class looks like this...
@Entity
@Table(uniqueConstraints = [UniqueConstraint(columnNames = ["name_search"])])
data class ArticleType(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Long? = null,
    val name: String = "",
    val order: Int? = null,
    var name_search: String = ""
)
The repository looks like so...
interface ArticleTypeRepository : JpaRepository<ArticleType, Long> {
    fun findFirstById(id: Long): ArticleType?
    fun findAllByOrderByOrderAsc(): List<ArticleType>
    fun findByName(name: String): ArticleType?
}
I'm trying to update the name_search column like so...
val article_type: ArticleType? = articleTypeRepository.findFirstById(1234)
if (article_type !== null) {
    article_type.name_search = "abc"
    articleTypeRepository.save(article_type)
}
This results in the following error...
java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'order=99 where id=1234' at line 1
I'm assuming this means the binding isn't working correctly, and it's missing the "name_search" binding, or missing the quotes or something. I've turned on logging, and I can see the following...
org.hibernate.SQL : update article_type set name=?, name_search=?, order=? where id=?
Then it lists the binding parameters "o.h.type.descriptor.sql.BasicBinder", which all appear to be correct.
I'm not sure what's going wrong, or where I need to start to try to fix it.
This is a legacy system I've inherited, and I don't fully understand it. If there is some extra information I need to provide here, please let me know.
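One thing I noticed while writing this up: order is a reserved word in MySQL, so I suspect the unquoted order=? in the generated update statement is what the parser rejects. A minimal sketch of what I'm planning to try, quoting that column with Hibernate's backtick syntax (everything else unchanged from the entity above):

@Entity
@Table(uniqueConstraints = [UniqueConstraint(columnNames = ["name_search"])])
data class ArticleType(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Long? = null,
    val name: String = "",
    // "order" is a reserved word in MySQL; the backticks tell Hibernate to
    // quote the identifier in the SQL it generates.
    @Column(name = "`order`")
    val order: Int? = null,
    var name_search: String = ""
)

I'd still appreciate confirmation that this is actually the cause.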

JPA 2.1 Timestamp type field for versioning and optimistic locking always throwing OptimisticLockException

Environment: JPA 2.1, EclipseLink 2.6.3, SQL Server 2016
I want to use a field of type Timestamp for versioning and optimistic locking. I do not have the option to use a numeric column for versioning. My understanding is that I just need to annotate the field with @Version and that's all.
Database Table: token_t
token_id int PK
token_name varchar(100)
last_updt_dtm datetime
Entity Class
@Entity
@Table(name = "token_t")
public class TokenAE {

    @Id
    @Column(name = "token_id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private int tokenId;

    @Column(name = "token_name")
    private String tokenName;

    @Version
    @Column(name = "last_updt_dtm")
    private Timestamp lastUpdtDtm;

    // getter/setter omitted to avoid cluttering
}
Test Method
@Test
public void optimisticLockingTest1() throws Exception {
    PersistenceHelper.getEntityManager().getTransaction().begin();
    TokenAE tokenAE = tokenDAO.getToken(616);
    assertNotNull("tokenAE is null", tokenAE);
    tokenAE.setTokenName("new token name");
    PersistenceHelper.getEntityManager().merge(tokenAE);
    PersistenceHelper.getEntityManager().getTransaction().commit();
}
Note: PersistenceHelper is just a helper class that instantiates the entity manager.
As you can see, I am loading TokenAE, updating the name, and doing a merge. I made sure that the underlying database record has not changed, so I am expecting the merge/update to succeed, but it always throws OptimisticLockException.
See the stacktrace below. I enabled JPA query/param logging and I can see the UPDATE query and bind parameters. The value of last_updt_dtm in the WHERE clause [2018-07-17 22:59:48.847] matches exactly the value in the database record, so this UPDATE query should return rowCount 1 and succeed.
I have no idea what is going on here. Any help is greatly appreciated.
Exception Stacktrace
[EL Fine]: sql: 2018-07-18 23:54:13.137--ClientSession(1451516720)--Connection(1323996324)--Thread(Thread[main,5,main])--
UPDATE token_t SET token_name = ?, last_updt_dtm = ? WHERE ((token_id = ?) AND (last_updt_dtm = ?))
bind => [new token name, 2018-07-18 23:54:13.35, 616, 2018-07-17 22:59:48.847]
[EL Warning]: 2018-07-18 23:54:13.286--UnitOfWork(998015174)--Thread(Thread[main,5,main])--Local Exception Stack:
Exception [EclipseLink-5006] (Eclipse Persistence Services - 2.6.3.v20160428-59c81c5): org.eclipse.persistence.exceptions.OptimisticLockException
Exception Description: The object [TokenAE [tokenId=616, tokenName=new token name, lastUpdtDtm=2018-07-18 23:54:13.35]] cannot be updated because it has changed or been deleted since it was last read.
Class> com.test.TokenAE Primary Key> 616
at org.eclipse.persistence.exceptions.OptimisticLockException.objectChangedSinceLastReadWhenUpdating(OptimisticLockException.java:144)
at org.eclipse.persistence.descriptors.VersionLockingPolicy.validateUpdate(VersionLockingPolicy.java:790)
at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.updateObjectForWriteWithChangeSet(DatabaseQueryMechanism.java:1086)
at org.eclipse.persistence.queries.UpdateObjectQuery.executeCommitWithChangeSet(UpdateObjectQuery.java:84)
at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:301)
at org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904)
at org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:803)
at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1857)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1839)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1790)
at org.eclipse.persistence.internal.sessions.CommitManager.commitChangedObjectsForClassWithChangeSet(CommitManager.java:273)
at org.eclipse.persistence.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:131)
at org.eclipse.persistence.internal.sessions.AbstractSession.writeAllObjectsWithChangeSet(AbstractSession.java:4264)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabase(UnitOfWorkImpl.java:1441)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1531)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:278)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commit(UnitOfWorkImpl.java:1113)
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:137)
at sunlife.us.dc.bds.token.domain.TokenDAOTest.optimisticLockingTest1(TokenDAOTest.java:39)

hibernate.hbm2ddl.auto does not link sequence to id column

Question
Why do I get a NULL not allowed for column "ID" exception when I execute INSERT INTO PUBLIC.MY_ENTITY (name) VALUES ('test name');?
Setup
I'm using Spring Boot and Hibernate. Spring Boot is launched with properties:
hibernate.hbm2ddl.auto=update
spring.jpa.hibernate.ddl-auto=update
I have entity:
@Entity
@Table(name = "MY_ENTITY")
public class MyEntity {

    @Id
    @SequenceGenerator(sequenceName = "MY_ENTITY_SEQ", name = "MyEntitySeq")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "MyEntitySeq")
    private Long id;

    @Column(unique = true, nullable = false)
    private String name;

    // getters & setters
    // ...
}
The table is generated on application start.
I can prove that the sequence has been created with the following query:
SELECT * FROM INFORMATION_SCHEMA.SEQUENCES WHERE SEQUENCE_NAME = 'MY_ENTITY_SEQ'
P.S.
For some reason Hibernate does not link the sequence to the id auto-generation. I can solve the problem with the statement below, but how do I make Hibernate generate it?
ALTER TABLE PUBLIC.MY_ENTITY ALTER COLUMN ID BIGINT DEFAULT (NEXT VALUE FOR PUBLIC.MY_ENTITY_SEQ) NOT NULL NULL_TO_DEFAULT SEQUENCE PUBLIC.MY_ENTITY_SEQ;
INSERT INTO PUBLIC.MY_ENTITY (name) VALUES ('test name');
Give the @SequenceGenerator an allocationSize: @SequenceGenerator(sequenceName = "MY_ENTITY_SEQ", name = "MyEntitySeq", allocationSize = 1)
Check the dialect you are using.
Set "hibernate.id.new_generator_mappings" to "true".

How to create a database schema using Slick?

I have tried
val schemas = addresses.schema
val setup = schemas.create
val db = Database.forConfig("h2disk")
Await.result(db.run(setup), Duration.Inf)
but, apparently, it is not working. Here are some logs
[error] Caused by: org.h2.jdbc.JdbcSQLException: Schema "apps" not found; SQL statement:
[error] create table "apps"."t_address" ("name" VARCHAR,"domain" VARCHAR,"t_address_id" VARCHAR NOT NULL PRIMARY KEY) [90079-196]
[error] at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
[error] at org.h2.message.DbException.get(DbException.java:179)
[error] at org.h2.message.DbException.get(DbException.java:155)
[error] at org.h2.command.Parser.getSchema(Parser.java:688)
[error] at org.h2.command.Parser.getSchema(Parser.java:694)
We can try
val schemas = addresses.schema
val setup = DBIO.seq(sqlu"""create schema apps;""", schemas.create)
val db = Database.forConfig("h2disk")
Await.result(db.run(setup), Duration.Inf)
Note: the schema name is case-sensitive for some DBMSs; e.g. H2 will automatically convert apps to APPS.
I had to splice the schema name into the statement as a literal to make it work, prefixing the variable with # (see https://scala-slick.org/doc/3.3.3/sql.html#splicing-literal-values):
val schemaName = "something"
val schemas = Cases(schemaName).schema
val setup = DBIO.seq(
    sqlu"""create schema #${schemaName} AUTHORIZATION postgres""",
    // create the table schemas
    schemas.createIfNotExists
    // add default data
    ...
    // add rights
    ...
)
All the tables are defined like:
class Cases(_tableTag: Tag, schemaName: String) extends profile.api.Table[CasesRow](_tableTag, Some(schemaName), "cases") {
    ....
}
def Cases(schema: String) = new TableQuery(tag => new Cases(tag, schemaName = schema))