Rails: repeated ActiveRecord::RecordNotUnique when creating objects with Postgres?

I'm working with a Rails 4 app that needs to create a large number of objects in response to events from another system. I am getting very frequent ActiveRecord::RecordNotUnique errors (caused by PG::UniqueViolation) on the primary key column when I call create! on one of my models.
I found other answers on SO that suggest rescuing the exception and calling retry:
begin
  TableName.create!(data: 'here')
rescue ActiveRecord::RecordNotUnique => e
  if e.message.include? '_pkey' # Only retry primary key violations
    log.warn "Retrying creation: #{e}"
    retry
  else
    raise
  end
end
While this seems to help, I am still getting tons of ActiveRecord::RecordNotUnique errors, for sequential IDs that already exist in the database (log entries abbreviated):
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3067) already exists.
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3068) already exists.
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3069) already exists.
WARN -- Retrying creation: PG::UniqueViolation: DETAIL: Key (id)=(3070) already exists.
The IDs it's trying are in the 3000-4000 range, even though there are over 90000 records in the table in question.
Why is ActiveRecord or PostgreSQL wasting so much time sequentially trying existing IDs?
The original exception (simplified/removed query string):
{
  "exception": "ActiveRecord::RecordNotUnique",
  "message": "PG::UniqueViolation: ERROR: duplicate key value violates unique constraint \"table_name_pkey\"\nDETAIL: Key (id)=(3023) already exists."
}

I'm not sure how it happened, but it turned out that the PostgreSQL sequence for the table's primary key was somehow reset or got out of sync with the table:
SELECT nextval('table_name_id_seq');
-- 3456
SELECT max(id) FROM table_name;
-- 95123
I had to restart the primary key sequence one past the table's highest ID:
ALTER SEQUENCE table_name_id_seq RESTART 95124;
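This also explains why the retry loop grinds through existing IDs one at a time: Postgres's nextval() is non-transactional, so every failed INSERT still consumes one sequence value, and the retry only succeeds once the sequence climbs past the table's max(id). A toy Python simulation (not the Rails code, just the arithmetic) of that behaviour:

```python
def insert_with_retry(existing_ids, seq_start):
    """Simulate create!-with-retry against an out-of-sync sequence.

    nextval() advances even when the INSERT fails, so each retry
    burns exactly one ID until the sequence clears max(existing_ids).
    """
    seq = seq_start
    attempts = 0
    while True:
        attempts += 1
        candidate = seq
        seq += 1  # nextval is consumed whether or not the insert succeeds
        if candidate not in existing_ids:
            existing_ids.add(candidate)
            return candidate, attempts

existing = set(range(1, 95124))   # ~95k rows already in the table
new_id, attempts = insert_with_retry(existing, seq_start=3456)
# new_id == 95124; roughly 91k attempts were wasted, one existing ID at a time
```

This is why the retry pattern alone "works" eventually but is hopeless in practice when the sequence is tens of thousands of values behind: fixing the sequence is the real cure.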
Update: here's a Rake task to reset the ID sequence for most models in a Rails 4 + PostgreSQL project:
desc 'Resets Postgres auto-increment ID column sequences to fix duplicate ID errors'
task :reset_sequences => :environment do
  Rails.application.eager_load!

  ActiveRecord::Base.descendants.each do |model|
    unless model.attribute_names.include?('id')
      Rails.logger.debug "Not resetting #{model}, which lacks an ID column"
      next
    end

    begin
      max_id = model.maximum(:id).to_i + 1
      ActiveRecord::Base.connection.execute(
        "ALTER SEQUENCE #{model.table_name}_id_seq RESTART #{max_id};"
      )
      Rails.logger.info "Reset #{model} sequence to #{max_id}"
    rescue => e
      Rails.logger.error "Error resetting #{model} sequence: #{e.class.name}/#{e.message}"
    end
  end
end
The following references proved useful:
https://stackoverflow.com/a/1427188
http://apidock.com/rails/ActiveRecord/Relation/find_or_create_by
https://stackoverflow.com/a/10712838
https://stackoverflow.com/a/16533829

You can also reset the sequence for a table 'table_name' from the Rails console:
> ActiveRecord::Base.connection.reset_pk_sequence!('table_name')
(tested in Rails 3.2 and Rails 5.0.1)

Related

Odoo create stock.move.line

When I try to create a stock move line in a transfer with an automation using the following code, an error pops up saying "psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block".
The code:
result = []
result.append({
    'company_id': record.partner_id.id,
    'date': record.date,
    'location_dest_id': 5,
    'location_id': 8,
    'product_uom_qty': 1,
    'product_uom_id': 32,
    'product_id': 465,
})
env['stock.move.line'].create(result)
May I ask, any idea what the problem with my code is, or how I can programmatically create a stock move line? Thanks.
You set the company_id to the partner_id, which may not be present in the res.company table; if that happens you should see the following error message:
DETAIL: Key (company_id)=(...) is not present in table "res_company".
This will prevent Odoo from creating a stock move line. Try setting the company_id to self.env.user.company_id.id instead.
Note that since v12, Odoo supports passing a list of values to the create function.
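As a minimal sketch, the corrected values could be built like this. This is plain Python, not run inside Odoo: build_move_line_vals and user_company_id are hypothetical stand-ins, with user_company_id playing the role of self.env.user.company_id.id, and the hard-coded IDs taken from the question.

```python
def build_move_line_vals(user_company_id, date):
    # Use the current user's company, not the partner id, so the
    # res_company foreign key constraint is satisfied.
    return [{
        'company_id': user_company_id,      # was record.partner_id.id
        'date': date,
        'location_dest_id': 5,
        'location_id': 8,
        'product_uom_qty': 1,
        'product_uom_id': 32,
        'product_id': 465,
    }]

vals = build_move_line_vals(1, '2023-01-01')
# In Odoo >= 12, create() accepts a list of dicts:
# env['stock.move.line'].create(vals)
```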

SeaORM column "owner_id" referenced in foreign key constraint does not exist

I have an error with SeaORM, Rust and Postgres. I'm new to the world of databases, so sorry if this is a newbie question. I wanted to refresh the migrations with sea-orm-cli, but I got this error: Execution Error: error returned from database: column "owner_id" referenced in foreign key constraint does not exist.
The full output of the command is:
Running `cargo run --manifest-path ./migration/Cargo.toml -- fresh -u postgres://postgres@localhost:5432/task_manager`
    Finished dev [unoptimized + debuginfo] target(s) in 0.54s
     Running `migration/target/debug/migration fresh -u 'postgres://postgres@localhost:5432/task_manager'`
Dropping table 'seaql_migrations'
Table 'seaql_migrations' has been dropped
Dropping table 'owner'
Table 'owner' has been dropped
Dropping all types
Applying all pending migrations
Applying migration 'm20221106_182043_create_owner_table'
Migration 'm20221106_182043_create_owner_table' has been applied
Applying migration 'm20221106_182552_create_comment_table'
Execution Error: error returned from database: column "owner_id" referenced in foreign key constraint does not exist
I thought it could be some problem with my code in the file called m20221106_182552_create_comment_table.rs, but I took a look at it and it doesn't seem to be the problem.
use sea_orm_migration::prelude::*;

use super::m20221106_182043_create_owner_table::Owner;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .create_table(
                Table::create()
                    .table(Comment::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(Comment::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(ColumnDef::new(Comment::Url).string().not_null())
                    .col(ColumnDef::new(Comment::Body).string().not_null())
                    .col(ColumnDef::new(Comment::IssueUrl).string().not_null())
                    .col(ColumnDef::new(Comment::CreatedAt).string())
                    .col(ColumnDef::new(Comment::UpdatedAt).string())
                    .foreign_key(
                        ForeignKey::create()
                            .name("fk-comment-owner_id")
                            .from(Comment::Table, Comment::OwnerId)
                            .to(Owner::Table, Owner::Id),
                    )
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .drop_table(Table::drop().table(Comment::Table).to_owned())
            .await
    }
}
#[derive(Iden)]
pub enum Comment {
Table,
Id,
Url,
Body,
IssueUrl,
CreatedAt,
UpdatedAt,
OwnerId,
}
I was following the official tutorial from the SeaQL team, but I don't know if something there is outdated. Hope you can help me with this.
You are missing the OwnerId column in the comment table: you declare it in the Iden enum and reference it in the foreign key, but never create the column itself. You have to create the column first (e.g. .col(ColumnDef::new(Comment::OwnerId).integer().not_null()) before the .foreign_key(...) call) and then create the FK constraint on it.
See in the tutorial there is:
.col(ColumnDef::new(Chef::BakeryId).integer().not_null())
.foreign_key(
    ForeignKey::create()
        .name("fk-chef-bakery_id")
        .from(Chef::Table, Chef::BakeryId)
        .to(Bakery::Table, Bakery::Id),
)

How to reset the auto incremented id column to 1 whenever I seed the database in postgreSQL?

I am using Supabase as a backend, which provides Postgres as its database, and Prisma for my Next.js app. I wanted to seed some dummy data into the database. The first time I run the seed script everything's fine, but when I ran the script multiple times, even though I deleted all rows before seeding, the primary id of the table (which auto-increments by default) did not reset to 1; instead, it kept incrementing from the previous value. I tried something like this:
await prisma.user.deleteMany();
await prisma.post.deleteMany();
await prisma.$queryRaw`ALTER SEQUENCE user_id_seq RESTART WITH 1`;
await prisma.$queryRaw`ALTER SEQUENCE post_id_seq RESTART WITH 1`;
This is the error that occurred when I ran the raw SQL in Prisma:
PrismaClientKnownRequestError:
Invalid `prisma.$queryRaw()` invocation:

Raw query failed. Code: `42P01`. Message: `relation "user_id_seq" does not exist`
    at RequestHandler.handleRequestError (/home/surya/projects/social-media-demo/node_modules/@prisma/client/runtime/index.js:29909:13)
    at RequestHandler.request (/home/surya/projects/social-media-demo/node_modules/@prisma/client/runtime/index.js:29892:12)
    at async Proxy._request (/home/surya/projects/social-media-demo/node_modules/@prisma/client/runtime/index.js:30864:16) {
  code: 'P2010',
  clientVersion: '4.3.1',
  meta: { code: '42P01', message: 'relation "user_id_seq" does not exist' }
}
Is deleting all the tables and running migrations again as a fresh start the only way?
Try it like this; it deletes the data and restarts the sequence in a single statement:
TRUNCATE TABLE <tableName> RESTART IDENTITY;
(The 42P01 error most likely occurs because Prisma's default table names keep the model's capitalization, so the sequence is the quoted identifier "User_id_seq" rather than user_id_seq.)
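TRUNCATE ... RESTART IDENTITY is Postgres-specific, but the underlying idea (wipe the rows and reset the auto-increment counter in the same step) can be illustrated with a runnable analogue in Python's sqlite3, where the counter lives in the sqlite_sequence table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",), ("c",)])

# Analogue of TRUNCATE TABLE users RESTART IDENTITY:
conn.execute("DELETE FROM users")
conn.execute("DELETE FROM sqlite_sequence WHERE name = 'users'")

conn.execute("INSERT INTO users (name) VALUES ('fresh')")
new_id = conn.execute("SELECT id FROM users").fetchone()[0]
# new_id is now 1, not 4
```

The point in either database is the same: deleting rows alone never touches the counter; you have to reset it explicitly (or use a statement that does both at once).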

MongoDB: updateOne() duplicate key exception

I am trying to save a record in Cosmos DB using com.mongodb.client.MongoCollection.updateOne() with the upsert flag set to true, but I am getting a duplicate key _id error, and on retry the same object saves into the db. I am unable to figure out the root cause of this error.
Below are the environment details:
Azure cosmos version 3.6
mongo driver version 2.1.6
Unique constraint on all index fields is set to false
Code
mongoCollection.updateOne(filter, new Document("$set", doc), updateOptions.upsert(true));
Exception
E11000 duplicate key error collection: my-db.myCollection. Failed _id or unique index constraint.
com.mongodb.MongoWriteException:
    at com.mongodb.client.internal.MongoCollectionImpl.executeSingleWriteRequest (MongoCollectionImpl.java:967)
    at com.mongodb.client.internal.MongoCollectionImpl.executeUpdate (MongoCollectionImpl.java:951)
    at com.mongodb.client.internal.MongoCollectionImpl.updateOne (MongoCollectionImpl.java:613)
    at com.xyz.util.myclass.myMethod (myClass.java:162)
    at com.xyz.util.myclass.myMethod (myClass.java:73)
    at com.xyz.process.myclass.myMethod (myClass.java:135)
    at com.xyz.process.myclass.myMethod (myClass.java:87)
    at com.xyz.process.myclass.myMethod (myClass.java:51)
    at com.xyz.springcloudflow.myclass.myMethod (myClass.java:34)
    at sun.reflect.GeneratedMethodAccessor129.invoke
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke (InvocableHandlerMethod.java:171)
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke (InvocableHandlerMethod.java:120)
    at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage (StreamListenerMessageHandler.java:55)
    at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal (AbstractReplyProducingMessageHandler.java:123)
    at org.springframework.integration.handler.AbstractMessageHandler.handleMessage (AbstractMessageHandler.java:169)
    at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch (AbstractDispatcher.java:115)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch (UnicastingDispatcher.java:132)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch (UnicastingDispatcher.java:105)
    at org.springframework.integration.channel.AbstractSubscribableChannel.doSend (AbstractSubscribableChannel.java:73)
    at org.springframework.integration.channel.AbstractMessageChannel.send (AbstractMessageChannel.java:453)
    at org.springframework.integration.channel.AbstractMessageChannel.send (AbstractMessageChannel.java:401)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend (GenericMessagingTemplate.java:187)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend (GenericMessagingTemplate.java:166)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend (GenericMessagingTemplate.java:47)
    at org.springframework.messaging.core.AbstractMessageSendingTemplate.send (AbstractMessageSendingTemplate.java:109)
    at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage (MessageProducerSupport.java:205)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny (KafkaMessageDrivenChannelAdapter.java:369)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$400 (KafkaMessageDrivenChannelAdapter.java:74)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage (KafkaMessageDrivenChannelAdapter.java:431)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage (KafkaMessageDrivenChannelAdapter.java:402)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0 (RetryingMessageListenerAdapter.java:120)
    at org.springframework.retry.support.RetryTemplate.doExecute (RetryTemplate.java:287)
    at org.springframework.retry.support.RetryTemplate.execute (RetryTemplate.java:211)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage (RetryingMessageListenerAdapter.java:114)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage (RetryingMessageListenerAdapter.java:40)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage (KafkaMessageListenerContainer.java:1275)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage (KafkaMessageListenerContainer.java:1258)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener (KafkaMessageListenerContainer.java:1219)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords (KafkaMessageListenerContainer.java:1200)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener (KafkaMessageListenerContainer.java:1120)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener (KafkaMessageListenerContainer.java:935)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke (KafkaMessageListenerContainer.java:751)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run (KafkaMessageListenerContainer.java:700)
    at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511)
    at java.util.concurrent.FutureTask.run (FutureTask.java:266)
    at java.lang.Thread.run (Thread.java:748)

Exception twisted._threads._ithreads.AlreadyQuit: AlreadyQuit()

I'm running Scrapy and inserting the results into a MySQL database. The spider doesn't finish successfully and gives me this error:
Exception twisted._threads._ithreads.AlreadyQuit: AlreadyQuit()
I'm not sure why workers die/quit.
Edit:
Basically, I used this code to insert into a table that has one field with a unique index on it.
Here's the whole error that I got:
mysql_exceptions.IntegrityError: (1062, "Duplicate entry 'www.example.com' for key 'idx_url'")
2016-02-01 03:22:07 [twisted] CRITICAL:
Exception twisted._threads._ithreads.AlreadyQuit: AlreadyQuit() in > ignored
But I got this error after running for a while (sometimes close to the end).