PostgreSQL Locking Rows on a Key/Foreign Key

I'm using postgresql. I have 3 tables.
Table A has an ID column that's a Primary Key
Table B and Table C have ID columns that are foreign key references to A's ID.
In a single process, I would like to lock any rows that have a particular ID and then possibly delete and insert rows with that ID in B and C.
My current approach is
SELECT FOR UPDATE on A on the ID.
Then I try to delete and insert rows in B and C.
commit/end
Unfortunately, my code deadlocks trying to do the insert.
What am I doing wrong? What is the proper way to prevent other processes from adding, removing, or updating rows with a given ID in B and C (until I am done with my transaction)?
Thanks in advance!

It looks like I was doing things correctly from the start. My issue was that I was accidentally creating two different database connections in my code. So, from PostgreSQL's perspective, there were two different transactions, hence the deadlocking.
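For reference, a minimal sketch of the intended single-transaction pattern (the payload columns are made up; the important part is that every statement runs on the same connection):

BEGIN;
-- lock the parent row; other transactions that also SELECT ... FOR UPDATE this id will wait here
SELECT id FROM a WHERE id = 42 FOR UPDATE;
-- rework the child rows while holding the lock
DELETE FROM b WHERE id = 42;
DELETE FROM c WHERE id = 42;
INSERT INTO b (id, some_b_column) VALUES (42, 'new value');
INSERT INTO c (id, some_c_column) VALUES (42, 'new value');
COMMIT;

Because both processes try to lock the same row in A first, the second one simply waits instead of deadlocking.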

Related

How to implement cascading deletes in one -> many *from the many side*

I have a use case where multiple rows in table A are aggregated down to a single row in table B. We represent the origin of rows in table B with a foreign key column in table A, saying "as a row, I contributed to X row in table B".
We want to find the best solution so that, once every row from table A which contributed to a row in table B has been deleted, the now-orphaned row in table B is deleted as well.
I'm not sure if there's some way to use ON DELETE CASCADE to handle this. But I'm guessing not and that maybe triggers are the best option.
I can't just purge all orphans on a schedule because the changes need to be persisted very soon after occurring.
Using the given schema, what is our best option? Alternatively, is there some other schema that better sets us up for the scenario I gave?
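If triggers do turn out to be the answer, a rough sketch of the idea (assuming PostgreSQL and made-up names: table_a with a b_id foreign key column pointing at table_b) might look like this:

CREATE OR REPLACE FUNCTION delete_orphaned_b() RETURNS trigger AS $$
BEGIN
    -- after a contributing row disappears, drop its target in table_b if nothing points at it anymore
    DELETE FROM table_b tb
    WHERE tb.id = OLD.b_id
      AND NOT EXISTS (SELECT 1 FROM table_a ta WHERE ta.b_id = tb.id);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_a_after_delete
AFTER DELETE ON table_a
FOR EACH ROW EXECUTE FUNCTION delete_orphaned_b();

(EXECUTE FUNCTION needs PostgreSQL 11 or later; older versions spell it EXECUTE PROCEDURE.)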

Trigger delete on many-to-many middle table deletion

There are two tables A and B. As they have a many-to-many relation, there's also table C.
A
------
id PK
B
------
id PK
C
------
id_A PK
id_B PK
Now, a row in B only exists when at least one row of A has a relation to it, and one B row may have a relation to two or more different rows of A.
My question is, how do I automatically delete a row from B when there isn't any foreign key reference to it left in C? My initial thought was to set up a trigger, but I'm not too sure about this and I'd like a second opinion on how to proceed. Thank you.
First, one assumes that the data is initially set up correctly. That is, the only b records are the ones that meet your condition.
Then, the solution involves triggers on table c. When a row is deleted, it would check:
Does id_b have any other rows in the table?
If not, then delete the row.
This can actually be a bit tricky. In general, you don't want to query the table the trigger is defined on. So, I might suggest an alternative approach (sketched after these steps):
Add a counter on b.
Add insert/update/delete triggers on c that increment or decrement the count in b.
If the counter is 0 (or 1 before decrementing), then delete the row.
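A sketch of those steps in PostgreSQL, assuming b has gained an integer ab_counter column that defaults to 0:

CREATE OR REPLACE FUNCTION maintain_ab_counter() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE b SET ab_counter = ab_counter + 1 WHERE id = NEW.id_b;
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        -- a re-pointed c row moves one unit of the count from the old b row to the new one
        UPDATE b SET ab_counter = ab_counter - 1 WHERE id = OLD.id_b;
        UPDATE b SET ab_counter = ab_counter + 1 WHERE id = NEW.id_b;
        DELETE FROM b WHERE id = OLD.id_b AND ab_counter = 0;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE b SET ab_counter = ab_counter - 1 WHERE id = OLD.id_b;
        DELETE FROM b WHERE id = OLD.id_b AND ab_counter = 0;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER c_maintain_ab_counter
AFTER INSERT OR UPDATE OR DELETE ON c
FOR EACH ROW EXECUTE FUNCTION maintain_ab_counter();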
Gosh, you might find that the counter itself is sufficient, and there is no need to actually delete the row. You can get that effect if you use a view:
create view v_b as
select b.*
from b
where ab_counter > 0;
You could also create a view on b and not have to deal with triggers at all:
create view v_b as
select b.*
from b
where exists (select 1 from c where c.b_id = b.id);
Gordon's solution above is great; however, a slight modification might help.
First, one assumes that the data is initially set up correctly. That is, the only b records are the ones that meet your condition.
Then, the solution involves triggers on table c. When a row is deleted, it would check:
Does id_b have any other rows in the table?
If not, then delete the row.
This is a bit tricky because you have to check whether other rows exist. This check can be automated by using
FOREIGN KEY (id_B) REFERENCES B (id) ON DELETE RESTRICT
on table C. Now you only need to delete the row from B in the trigger, without any checks: the RESTRICT constraint automatically checks whether the row is still referenced from table C and blocks the delete of a referenced row in table B; otherwise the delete succeeds.
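In PostgreSQL, that modified version might look roughly like this (names are assumed; the trigger swallows the RESTRICT error that occurs while other c rows still reference the b row):

ALTER TABLE c
    ADD CONSTRAINT c_id_b_fkey FOREIGN KEY (id_b) REFERENCES b (id) ON DELETE RESTRICT;

CREATE OR REPLACE FUNCTION delete_unreferenced_b() RETURNS trigger AS $$
BEGIN
    BEGIN
        DELETE FROM b WHERE id = OLD.id_b;
    EXCEPTION WHEN foreign_key_violation THEN
        NULL;  -- other rows in c still reference this b row, so keep it
    END;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER c_after_delete_clean_b
AFTER DELETE ON c
FOR EACH ROW EXECUTE FUNCTION delete_unreferenced_b();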

Inserting Data into Tables and Integrity

This is my first post, so please excuse me for any obvious or simple questions as I am very new to programming and all my projects are a first to me.
I am currently working on my first database project: a relational database using Oracle SQL. I'm new to my course, so I am not sure of all the concepts yet, but I'm working at it.
I have used some modelling software to help me construct a 13 table database. I have set up all my columns and assigned primary and foreign keys to all 13 tables. What I am looking to do now is insert 10 rows of test data into each table. I have done the parent tables but am confused about the child tables. When I assign ID numbers to all the parent tables' primary keys, will the child tables' foreign keys be populated at the same time?
I have not used sequences yet, as I'm not 100% sure how to make them work, and have instead inputted my own values like 100, 101, 102 etc. I know those values need to be in the foreign key, but wouldn't manually inserting them into many tables get confusing?
Is there an easier approach to this or am I over complicating the process?
I will need to use some queries later but I just want to be happy that the data is sound.
Thanks for your help
Rob
No, the child table data won't be populated automatically. If there is a child table, that implies a one-to-zero-or-many relationship between the two: one row in the parent table may have 0 rows in the child table, or it may have dozens, so nothing could possibly be populated automatically.
If you are manually assigning primary key values, you'd need to hard-code those same values as the foreign key values when you insert data into the child tables. In the real world, you wouldn't manually insert data into many tables at once; you'd have an application that did so and that knew which keys to use, either from parameters passed in or by getting the CURRVAL of the sequence used to populate the primary key after inserting into the parent table.
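As an illustration of that last point, a hedged Oracle sketch using a sequence and CURRVAL (the table, column, and sequence names here are invented):

-- NEXTVAL generates the parent key; CURRVAL reuses it for the child row in the same session
INSERT INTO parent_table (parent_id, parent_name)
VALUES (parent_seq.NEXTVAL, 'Test parent 1');

INSERT INTO child_table (child_id, parent_id, child_name)
VALUES (child_seq.NEXTVAL, parent_seq.CURRVAL, 'Test child 1');

COMMIT;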
It's necessary that the data for a foreign key value be present in the parent table, but not the other way around.
If you want to create test data, I suggest you use something like the query below.
insert into child_table (fk_column, column1, column2, ...)
select pk_column, '#dummy_value1#', '#dummy_value2#', ...
from parent_table
If you have 10 rows in the parent, this will add 10 rows in the child.
If you want more rows, e.g. 10 child rows for each parent row (100 in total), you need to duplicate the parent data. For that, use the query below.
insert into child_table (fk_column, column1, column2, ...)
select pk_column, '#dummy_value1#', '#dummy_value2#', ...
from parent_table
cross join (select level from dual connect by level <= 10)
This will add 100 child rows for the 10 parent rows.

Trying to copy a table from one database to another in SQL Server 2008 R2

I am trying to copy table information from a backup dummy database to our live SQL database (an accident happened in our program, Visma Business, where someone managed to overwrite 1300 customer names), but I am having a hard time figuring out the right code for this. I've looked around, and yes, there are several similar problems, but I just can't get this to work even though I've tried different solutions.
Here is the simple code I used last time. In theory all I need is the equivalent of MySQL's ON DUPLICATE KEY UPDATE, which would be MERGE in SQL Server? I just didn't quite know what to write to get that MERGE to work.
INSERT [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]
The error message I get with this is:
Violation of PRIMARY KEY constraint 'PK__Actor'. Cannot insert duplicate key in object 'dbo.Actor'.
What the error message says is simply "You can't add the same value if an attribute has a PK constraint". If you already have all the information in your backup table, what you should do is TRUNCATE TABLE, which removes all rows from a table while the table structure and its columns, constraints, indexes, and so on remain.
After that step you should follow this answer. Alternatively, I recommend a tool called Kettle, which is open source and easy to use for these kinds of data movements. That will save you a lot of work.
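A hedged sketch of that truncate-and-reload idea (it assumes dbo.Actor is not referenced by foreign keys and has no IDENTITY column; otherwise you would need DELETE and SET IDENTITY_INSERT instead):

TRUNCATE TABLE [F0001].[dbo].[Actor];

INSERT INTO [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor];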
Here are the things which can be the reason:
You have multiple rows in [FDummy].[dbo].[Actor] with the same data in the column which is going to be inserted into the primary key column of [F0001].[dbo].[Actor].
You have rows in [FDummy].[dbo].[Actor] with some value x in that column, and there are already rows in [F0001].[dbo].[Actor] with the same value x in the primary key column.
-- to check first point. if it returns row then you have some problem
SELECT ColumnGoingToBeMappedWithPK,
Count(*)
FROM [FDummy].[dbo].[Actor]
GROUP BY ColumnGoingToBeMappedWithPK
HAVING Count(*) > 1
-- to check second point. if count is greater than 0 then you have some problem
SELECT Count(*)
FROM [FDummy].[dbo].[Actor] a
JOIN [F0001].[dbo].[Actor] b
ON a.ColumnGoingToBeMappedWithPK = b.PrimaryKeyColumn
The MERGE statement will possibly be the best option for you here, unless the primary key of the Actor table is reused after a previous record is deleted (i.e. it is not auto-incremented) and, say, the record with id 13 in F0001.dbo.Actor is not the same "actor" information as the one in FDummy.dbo.Actor.
To use the statement with your code, it will look something like this:
begin transaction
merge [F0001].[dbo].[Actor] as t -- the destination
using [FDummy].[dbo].[Actor] as s -- the source
on (t.[PRIMARYKEY] = s.[PRIMARYKEY]) -- update with your primary keys
when matched then
update set t.columnname1 = s.columnname1,
t.columnname2 = s.columnname2,
t.columnname3 = s.columnname3
-- repeat for all your columns that you want to update
output $action,
Inserted.*,
Deleted.*;
rollback transaction -- change to commit after testing
Further reading can be done at the sources below:
MERGE (Transact-SQL)
Inserting, Updating, and Deleting Data by Using MERGE
Using MERGE in SQL Server to insert, update and delete at the same time

Deleting record from sql table and update sql table

I am trying to delete a record in SQLite. I have four records: record1, record2, record3, record4,
with id as the primary key.
So it auto-increments for each record that I insert. Now when I delete record3, the primary key is not decremented. What can I do to decrement the ids based on the records that I delete?
I want the ids to be 1, 2, 3 when I delete record3 from the database; right now they are 1, 2, 4. Is there any SQL query to change this? I tried this one:
DELETE FROM TABLE_NAME WHERE name = ?
Note: I am implementing this in Xcode.
I don't know why you want this but I would recommend leaving these IDs as is.
What is wrong with having IDs as 1,2,4?
Also you can potentially break things (referential integrity) if you use these ID values as foreign keys somewhere else.
Also, please refer to this page to get a better understanding of how autoincrement fields work:
http://sqlite.org/autoinc.html
The point of auto-increment is always to create a new unique ID, not to fill the gaps created by deleting records.
EDIT
You can achieve this with a special table design: records are never really deleted; instead a field "del" marks whether a record has been deleted.
For example, a "select ... where del > 0" will find all active records.
Or select all records without the "where" clause; then the IDs remain unaffected. When looping through the array, skip deleted records with "if del = 0 continue". Thus, the array is always in consecutive order.
It's very flexible. Depending on the select you use, you get the following (there's a small sketch after this list):
all active records
all the deleted records
all records
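A small sketch of that flag-based design in SQLite, keeping this answer's convention that del > 0 means active (the table name my_table is made up):

-- "delete" record 3 by clearing its flag instead of removing the row
UPDATE my_table SET del = 0 WHERE id = 3;

-- all active records; ids 1, 2 and 4 keep their values
SELECT * FROM my_table WHERE del > 0;

-- all records, deleted ones included
SELECT * FROM my_table;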