MongoDB: updateOne() duplicate key exception

I am trying to save a record in Cosmos DB using com.mongodb.client.MongoCollection.updateOne() with the upsert flag set to "true", but I am getting a duplicate key _id error, and on retry the same object saves into the DB. I am unable to figure out the root cause of this error.
Below are the environment details:
Azure Cosmos DB version 3.6
MongoDB driver version 2.1.6
Unique constraint on all index fields is set to false
Code
mongoCollection.updateOne(filter, new Document("$set", doc), new UpdateOptions().upsert(true));
Exception
E11000 duplicate key error collection: my-db.myCollection. Failed _id or unique index constraint.
com.mongodb.MongoWriteException:
at com.mongodb.client.internal.MongoCollectionImpl.executeSingleWriteRequest(MongoCollectionImpl.java:967)
at com.mongodb.client.internal.MongoCollectionImpl.executeUpdate(MongoCollectionImpl.java:951)
at com.mongodb.client.internal.MongoCollectionImpl.updateOne(MongoCollectionImpl.java:613)
at com.xyz.util.myclass.myMethod(myClass.java:162)
at com.xyz.util.myclass.myMethod(myClass.java:73)
at com.xyz.process.myclass.myMethod(myClass.java:135)
at com.xyz.process.myclass.myMethod(myClass.java:87)
at com.xyz.process.myclass.myMethod(myClass.java:51)
at com.xyz.springcloudflow.myclass.myMethod(myClass.java:34)
at sun.reflect.GeneratedMethodAccessor129.invoke
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:171)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:120)
at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:55)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:123)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:169)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:132)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:105)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:401)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:205)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:369)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$400(KafkaMessageDrivenChannelAdapter.java:74)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:431)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:402)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:120)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:287)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:211)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:114)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:40)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:1275)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:1258)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1219)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1200)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1120)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:935)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:751)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:700)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
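One possible cause (an assumption on my part, not something confirmed in the post) is a race between concurrent upserts: two updateOne() calls with upsert enabled and the same filter can both find no matching document, both attempt the insert, and the loser fails with E11000; retrying then matches the freshly inserted document, which would explain why the retry saves the object. Below is a minimal sketch of catching the duplicate-key case and retrying with the MongoDB Java driver; the class and method names are illustrative, not taken from the original code.

import com.mongodb.ErrorCategory;
import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.UpdateOptions;
import org.bson.Document;
import org.bson.conversions.Bson;

public class UpsertRetryExample {

    // Upsert the document; if a concurrent upsert wins the race and the server
    // reports a duplicate key, retry once - by then the document exists and the
    // second call is a plain update.
    public static void upsertWithRetry(MongoCollection<Document> collection,
                                       Bson filter,
                                       Document doc) {
        UpdateOptions options = new UpdateOptions().upsert(true);
        try {
            collection.updateOne(filter, new Document("$set", doc), options);
        } catch (MongoWriteException e) {
            if (e.getError().getCategory() == ErrorCategory.DUPLICATE_KEY) {
                collection.updateOne(filter, new Document("$set", doc), options);
            } else {
                throw e;
            }
        }
    }
}

Retrying on the duplicate key error is the usual way to handle this race in MongoDB; whether the Cosmos DB MongoDB API (wire protocol 3.6) behaves identically here is an assumption worth verifying.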

Related

Mapping Data Flows - Cannot retrieve value from cached sink

I am trying to look up a value from a cached sink. The data flow looks like the following.
I have created a hash value in my cached sink and want to reference that in my main pipeline.
My key for the cached sink is an array of columns. When I preview the data I get results.
My derived column is then trying to do a lookup against the cached data and is running into an error.
When debugging I get the following error. What am I missing or getting wrong in this statement?
Spark job failed: {
"text/plain": "{"runId":"98c9bae9-210e-4791-9b0d-60bc557ff416","sessionId":"02bc59a8-ac6f-4eeb-952c-2e9bdda49691","status":"Failed","payload":{"statusCode":400,"shortMessage":"DF-SYS-01 at Derive 'GenerateHashKey': java.util.NoSuchElementException: key not found: Id","detailedMessage":"Failure 2022-04-26 04:07:47.375 failed DebugManager.processJob, run=98c9bae9-210e-4791-9b0d-60bc557ff416, errorMessage=DF-SYS-01 at Derive 'GenerateHashKey': java.util.NoSuchElementException: key not found: Id"}}\n"
} - RunId: 98c9bae9-210e-4791-9b0d-60bc557ff416
Thanks

Could not restore Flink keyed state backend from savepoint when upgrading from 1.10 to 1.11

We tried to migrate to Flink 1.11, recovering the job from a savepoint taken in 1.10. The job code was not changed; we only updated the Flink version of the dependencies to 1.11 (in SBT, we use Scala) and re-built the jar.
All operators have uids, and the job correctly recovers from that savepoint when run on a 1.10 cluster, but on a 1.11 cluster we get the following exception and have no clue:
java.lang.Exception: Exception while creating StreamOperatorStateContext.
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:204)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:247)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:290)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:473)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:469)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkException: Could not restore keyed state backend for CoStreamFlatMap_8a6da66867c6cf8469bae55e9f834297_(1/1) from any of the 1 provided restore options.
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:317)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:144)
... 9 more
Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed when trying to restore heap backend
at org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:116)
at org.apache.flink.runtime.state.filesystem.FsStateBackend.createKeyedStateBackend(FsStateBackend.java:540)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:301)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:142)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:121)
... 11 more
Caused by: java.lang.IllegalStateException: Missing value for the key 'org.apache.flink.runtime.checkpoint.savepoint.Savepoint'
at org.apache.flink.util.LinkedOptionalMap.unwrapOptionals(LinkedOptionalMap.java:190)
at org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializerSnapshot.restoreSerializer(KryoSerializerSnapshot.java:86)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:546)
at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:505)
at org.apache.flink.api.common.typeutils.NestedSerializersSnapshotDelegate.snapshotsToRestoreSerializers(NestedSerializersSnapshotDelegate.java:225)
at org.apache.flink.api.common.typeutils.NestedSerializersSnapshotDelegate.getRestoredNestedSerializers(NestedSerializersSnapshotDelegate.java:83)
at org.apache.flink.api.common.typeutils.CompositeTypeSerializerSnapshot.restoreSerializer(CompositeTypeSerializerSnapshot.java:204)
at org.apache.flink.runtime.state.StateSerializerProvider.previousSchemaSerializer(StateSerializerProvider.java:189)
at org.apache.flink.runtime.state.StateSerializerProvider.currentSchemaSerializer(StateSerializerProvider.java:164)
at org.apache.flink.runtime.state.RegisteredKeyValueStateBackendMetaInfo.getStateSerializer(RegisteredKeyValueStateBackendMetaInfo.java:136)
at org.apache.flink.runtime.state.heap.StateTable.getStateSerializer(StateTable.java:315)
at org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.createStateMap(CopyOnWriteStateTable.java:54)
at org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.createStateMap(CopyOnWriteStateTable.java:36)
at org.apache.flink.runtime.state.heap.StateTable.<init>(StateTable.java:98)
at org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.<init>(CopyOnWriteStateTable.java:49)
at org.apache.flink.runtime.state.heap.AsyncSnapshotStrategySynchronicityBehavior.newStateTable(AsyncSnapshotStrategySynchronicityBehavior.java:41)
at org.apache.flink.runtime.state.heap.HeapSnapshotStrategy.newStateTable(HeapSnapshotStrategy.java:243)
at org.apache.flink.runtime.state.heap.HeapRestoreOperation.createOrCheckStateForMetaInfo(HeapRestoreOperation.java:185)
at org.apache.flink.runtime.state.heap.HeapRestoreOperation.restore(HeapRestoreOperation.java:152)
at org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:114)
Can anyone help?
Thanks
UPDATE
The savepoint was produced with the State Processor API, and the state in the KeyedStateBootstrapFunction is made of:
var mapToDetector: MapState[String, Map[String, Detector]] = null
var detectorsConfigs: MapState[String, AnomalyStepConfiguration] = null
var outputTopic: ValueState[String] = null
var pipeStatus: MapState[String, String] = null
var debounceMap: MapState[String, Map[String, DebounceStats]] = null
org.apache.flink.runtime.checkpoint.savepoint.Savepoint was renamed in FLINK-16247. However, this class is used in savepoint metadata and should not appear in a keyed state serializer on the task side. In other words, did you use something related to checkpoints or savepoints on the task side in state access?
I also tried using StateMachineExample to create a savepoint in Flink 1.10.2, and it resumed successfully on a Flink 1.11.1 cluster. That program also uses CopyOnWriteStateTable by default, which is what you see in your exception stack trace.

Exception twisted._threads._ithreads.AlreadyQuit: AlreadyQuit()

I'm running Scrapy and inserting the results into a MySQL database. The spider doesn't finish successfully and gives me this error:
Exception twisted._threads._ithreads.AlreadyQuit: AlreadyQuit()
I'm not sure why workers die/quit.
Edit:
Basically, I used this code to insert into a table that has one field with a unique index on it.
Here's the whole error that I got:
mysql_exceptions.IntegrityError: (1062, "Duplicate entry 'www.example.com' for key 'idx_url'")
2016-02-01 03:22:07 [twisted] CRITICAL:
Exception twisted._threads._ithreads.AlreadyQuit: AlreadyQuit() in > ignored
But I got this error after running for a while (sometimes close to the end).

Rails: repeated ActiveRecord::RecordNotUnique when creating objects with Postgres?

I'm working with a Rails 4 app that needs to create a large number of objects in response to events from another system. I am getting very frequent ActiveRecord::RecordNotUnique errors (caused by PG::UniqueViolation) on the primary key column when I call create! on one of my models.
I found other answers on SO that suggest rescuing the exception and calling retry:
begin
  TableName.create!(data: 'here')
rescue ActiveRecord::RecordNotUnique => e
  if e.message.include? '_pkey' # Only retry primary key violations
    log.warn "Retrying creation: #{e}"
    retry
  else
    raise
  end
end
While this seems to help, I am still getting tons of ActiveRecord::RecordNotUnique errors, for sequential IDs that already exist in the database (log entries abbreviated):
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3067) already exists.
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3068) already exists.
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3069) already exists.
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3070) already exists.
The IDs it's trying are in the 3000-4000 range, even though there are over 90000 records in the table in question.
Why is ActiveRecord or PostgreSQL wasting so much time sequentially trying existing IDs?
The original exception (simplified/removed query string):
{
"exception": "ActiveRecord::RecordNotUnique",
"message": "PG::UniqueViolation: ERROR: duplicate key value violates unique constraint \"table_name_pkey\"\nDETAIL: Key (id)=(3023) already exists."
}
I'm not sure how it happened, but it turned out that the PostgreSQL sequence for the table's primary key was somehow reset or got out of sync with the table:
SELECT nextval('table_name_id_seq');
-- 3456
SELECT max(id) FROM table_name;
-- 95123
I had to restart the primary key sequence at the table's last ID:
ALTER SEQUENCE table_name_id_seq RESTART 95124;
Update: here's a Rake task to reset the ID sequence for most models on a Rails 4 project with PostgreSQL:
desc 'Resets Postgres auto-increment ID column sequences to fix duplicate ID errors'
task :reset_sequences => :environment do
  Rails.application.eager_load!

  ActiveRecord::Base.descendants.each do |model|
    unless model.attribute_names.include?('id')
      Rails.logger.debug "Not resetting #{model}, which lacks an ID column"
      next
    end

    begin
      max_id = model.maximum(:id).to_i + 1
      result = ActiveRecord::Base.connection.execute(
        "ALTER SEQUENCE #{model.table_name}_id_seq RESTART #{max_id};"
      )
      Rails.logger.info "Reset #{model} sequence to #{max_id}"
    rescue => e
      Rails.logger.error "Error resetting #{model} sequence: #{e.class.name}/#{e.message}"
    end
  end
end
The following references proved useful:
https://stackoverflow.com/a/1427188
http://apidock.com/rails/ActiveRecord/Relation/find_or_create_by
https://stackoverflow.com/a/10712838
https://stackoverflow.com/a/16533829
You can also reset the sequence for a table 'table_name' using the Rails console:
> ActiveRecord::Base.connection.reset_pk_sequence!('table_name')
(tested in Rails 3.2 and Rails 5.0.1)

Liquibase - Error on SYSTEM.DATABASECHANGELOGLOCK while re-executing a migration

I'm using Liquibase 3.0.2, the Ant task updateDatabase, and change sets defined directly inside SQL scripts using comments like:
--liquibase formatted sql
--changeset com.noemalife:1 dbms:oracle
etc.
The first run works fine: all change sets are executed and the DB objects (Oracle) are deployed. I can see the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables filled up.
Then I try to re-run the Ant task with the exact same configuration, expecting Liquibase to say something like "OK, everything is already deployed, nothing to do here."
But I get this instead:
C:\Users\dmusiani\Desktop\liquibase-test>ant migrate
Buildfile: build.xml
migrate:
[copy] Copying 1 file to C:\Users\dmusiani\Desktop\liquibase-test
BUILD FAILED
liquibase.exception.LockException: liquibase.exception.DatabaseException: Error executing SQL CREATE TABLE SYSTEM.DATABASECHANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR2(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID)); on jdbc:oracle:thin:#localhost:1521:WBMD
INSERT INTO SYSTEM.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0): ORA-00955: nome già utilizzato da un oggetto esistente
at liquibase.lockservice.LockServiceImpl.acquireLock(LockServiceImpl.java:122)
at liquibase.lockservice.LockServiceImpl.waitForLock(LockServiceImpl.java:62)
at liquibase.Liquibase.update(Liquibase.java:123)
at liquibase.integration.ant.DatabaseUpdateTask.executeWithLiquibaseClassloader(DatabaseUpdateTask.java:45)
at liquibase.integration.ant.BaseLiquibaseTask.execute(BaseLiquibaseTask.java:70)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:357)
at org.apache.tools.ant.Target.performTasks(Target.java:385)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1189)
at org.apache.tools.ant.Main.runBuild(Main.java:758)
at org.apache.tools.ant.Main.startAnt(Main.java:217)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)
Caused by: liquibase.exception.DatabaseException: Error executing SQL CREATE TABLE SYSTEM.DATABASECHANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR2(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID)); on jdbc:oracle:thin:#localhost:1521:WBMD
INSERT INTO SYSTEM.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0): ORA-00955: nome già utilizzato da un oggetto esistente
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:56)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:98)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:64)
at liquibase.database.AbstractJdbcDatabase.checkDatabaseChangeLogLockTable(AbstractJdbcDatabase.java:771)
at liquibase.lockservice.LockServiceImpl.acquireLock(LockServiceImpl.java:95)
... 21 more
Caused by: java.sql.SQLException: ORA-00955: nome già utilizzato da un oggetto esistente
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:113)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:754)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:210)
at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:963)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1192)
at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1731)
at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1701)
at liquibase.executor.jvm.JdbcExecutor$1ExecuteStatementCallback.doInStatement(JdbcExecutor.java:86)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:49)
... 25 more
Total time: 1 second
C:\Users\dmusiani\Desktop\liquibase-test>
It seems to me that Liquibase is trying to re-create the DATABASECHANGELOGLOCK table (ORA-00955 means "name is already used by an existing object").
I have this problem when I run Liquibase as the Oracle "system" user (my patch takes care of creating a couple of other users, so for testing purposes I use SYSTEM directly to do that).
The other strange thing is that after the SYSTEM patch runs successfully, I can still see the lock marked as active in the lock table.
When I run other patches in other schemas (e.g. the ones created by the SYSTEM patch), the patch completes successfully and the lock is released in the lock table; relaunching such a patch behaves as expected: Liquibase detects the patch is already in place and does nothing.
That said, my doubt now is whether Liquibase has problems detecting, in the SYSTEM schema, that the lock table already exists (and fails trying to create it), or whether there is some kind of locking/commit problem.
Any suggestion is welcome.
Thanks
Davide
I'm facing the same issue as you.
From what I can see in the sources, when running as SYSTEM the following condition (in DatabaseSnapshot#include) evaluates to true:
if (database.isSystemObject(example)) {
    return null;
}
Because of that, the creation will always be attempted.
I'm gonna work further on a patch and keep you updated.
And here is a patch proposal.