Restoring a table from a duplicate in SQL Server - sql

In SQL Server, I'm creating a backup table using the query below; this gives me a copy of the table Original, named Backup:
select *
into Backup
from Original
How can I use my Backup table to restore all data to Original (including IDs)?
The original table contains ~20 columns and has one or more foreign keys pointing at its id.
Some entries from Original may have been deleted; no new entries have been added.
I was thinking about doing something like:
TRUNCATE TABLE Original
SET IDENTITY_INSERT Original ON;
INSERT INTO Original (id, val1, val2,....)
SELECT id, val1, val2,....
FROM Backup;
SET IDENTITY_INSERT Original OFF;
but I can't truncate a table that is referenced by a foreign key, and without emptying the table the INSERT can run into duplicate IDs.
Can someone help me with this?

As I mentioned, this is an XY Problem. What you think you want to do and what you really need to do are completely different. You have described Y to us, but what you actually want is X.
This answer is based on your comment below:
From a program I need to delete a subset of them (around 100k), then perform some other operations, which can fail. If those operations fail I need to restore the deleted rows. I don't necessarily want to delete all of them; I just need to put the rows that were deleted back into the Original table, from the Backup table.
This is a trivial thing to do in SQL Server; you just need to use transactions. In very simple pseudo-SQL terms:
BEGIN TRY
    BEGIN TRANSACTION DoWork;

    --Do your logic here

    --Do you need to also ROLLBACK under some scenario?
    IF {Some Boolean Logic}
        ROLLBACK TRANSACTION DoWork;
    ELSE
        COMMIT TRANSACTION DoWork;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION DoWork;
    THROW;
END CATCH;
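If you do still want to keep the Backup table as a safety net, a minimal sketch of putting back only the rows that are now missing from Original (assuming id is an identity column and Backup is still current) could look like this:
SET IDENTITY_INSERT Original ON;

INSERT INTO Original (id, val1, val2) -- list all ~20 columns explicitly
SELECT b.id, b.val1, b.val2
FROM Backup b
WHERE NOT EXISTS (SELECT 1 FROM Original o WHERE o.id = b.id); -- only rows that were deleted

SET IDENTITY_INSERT Original OFF;
Because only the missing rows are re-inserted, there is no need to truncate Original and no risk of duplicate keys.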

Related

Locking a specific row in postgres

I'm new enough to Postgres, and I'm trying to figure out how to lock a specific row of a table.
As an example, I have a table of users:
Name: John, Money: 1
Name: Jack, Money: 2
In my backend, I want to select John and make sure that no other calls can update (or even select possibly) John's row until my transaction is complete.
From what I've read online, I think I need an exclusive lock? I can't seem to find a good example of locking just one row of a table; any ideas?
Edit - Should I be doing it at method level, like @SqlUpdate (or some form of that - using org.skife.jdbi.v2), or in the query itself?
If you want to lock a specific selected row of the table, you need to LOCK the table first and then use the FOR UPDATE / FOR SHARE clause.
For example, in your case if you need to lock the first row you do this:
BEGIN;
LOCK TABLE person IN ROW EXCLUSIVE MODE;
-- BLOCK 1
SELECT * FROM person WHERE name = 'John' and money = 1 FOR UPDATE;
-- BLOCK 2
UPDATE person set name = 'John 2' WHERE name = 'John' and money = 1;
END;
The LOCK TABLE statement before BLOCK 1 does not lock any rows; it only tells the database "Hey, I will do something in this table, so when I do, lock it in this mode". You can still select / update / delete any row.
In BLOCK 1, the FOR UPDATE clause locks that row against other transactions in specific modes (read the docs for more details), and the UPDATE in BLOCK 2 then modifies the locked row. The row stays locked until the transaction ends.
If you want to see it, run a test: issue another SELECT ... FOR UPDATE on the same row from a second session before ending the first transaction. It will wait for the first transaction to end and will return the row right after it.
Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR UPDATE/SHARE) statement.
I am using it in a function to control subsequences and it is great. Hope you enjoy.
As soon as you update (and not commit) the row, no other transaction will be able to update that row.
If you want to lock the row before doing the update (which seems useless), you can do so using select ... for update.
You can not prevent other sessions from reading that row, and frankly that doesn't make sense either.
Even if your transaction hasn't finished (=committed) other sessions will not see any intermediate (inconsistent) values - they will see the state of the database as it was before your transaction started. That's the whole point of having a relational database that supports transactions.
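As an illustration, here is a minimal two-session sketch (assuming a users table with name and money columns, as in the question):
-- Session 1
BEGIN;
SELECT * FROM users WHERE name = 'John' FOR UPDATE; -- John's row is now locked
UPDATE users SET money = money - 1 WHERE name = 'John';
-- ...transaction left open for a moment...
COMMIT; -- lock released here

-- Session 2, run while session 1 is still open
UPDATE users SET money = money + 1 WHERE name = 'John'; -- blocks until session 1 commits
SELECT * FROM users WHERE name = 'John'; -- does not block; sees the last committed state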
You can use
LOCK TABLE table IN ACCESS EXCLUSIVE MODE;
when you are ready to read from your table. "SELECT" and all other operations will be queued until the end of the transaction (commit changes or rollback).
Note that this will lock the entire table; according to the PostgreSQL documentation there is no table-level lock that exclusively locks only a specific row.
So you can use the
FOR UPDATE
row-level lock in every SELECT that will update the row; this will prevent all other SELECTs that intend to update that row from reading it!
PostgreSQL documentation:
FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of these rows will be blocked until the current transaction ends; conversely, SELECT FOR UPDATE will wait for a concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the row was deleted). Within a REPEATABLE READ or SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see Section 13.4.
The FOR UPDATE lock mode is also acquired by any DELETE on a row, and also by an UPDATE that modifies the values on certain columns. Currently, the set of columns considered for the UPDATE case are those that have a unique index on them that can be used in a foreign key (so partial indexes and expressional indexes are not considered), but this may change in the future.
I'm using my own table here; it is named paid_properties and has two columns, user_id and counter.
Since you want one transaction at a time, you can use one of the following locks:
FOR UPDATE mode assumes a total change (or delete) of a row.
FOR NO KEY UPDATE mode assumes a change only to the fields that are not involved in unique indexes (in other words, this change does not affect foreign keys).
The UPDATE command itself selects the minimum appropriate locking mode; rows are usually locked in the FOR NO KEY UPDATE mode.
To test it, run the following query in one tab (I'm using pgadmin4):
BEGIN;
SELECT * FROM paid_properties WHERE user_id = 37 LIMIT 1 FOR NO KEY UPDATE;
SELECT pg_sleep(60);
UPDATE paid_properties set counter = 4 where user_id = 37;
-- ROLLBACK; -- If you want to discard the operations you did above
END;
And the following query in another tab:
UPDATE paid_properties set counter = counter + 90 where user_id = 37;
You'll see that the second query will not be executed until the first one finishes, and you'll end up with a counter of 94, which is correct in my case.
For more information:
https://postgrespro.com/blog/pgsql/5968005
https://www.postgresql.org/docs/current/explicit-locking.html
Hope this is helpful

Trying to copy one table from one database to another in SQL Server 2008 R2

I am trying to copy table information from a backup dummy database to our live SQL database (an accident happened in our program, Visma Business, where someone managed to overwrite 1300 customer names), but I am having a hard time figuring out the right code for this. I've looked around and yes, there are several similar problems, but I just can't get this to work even though I've tried different solutions.
Here is the simple code I used last time; in theory all I need is the equivalent of MySQL's ON DUPLICATE KEY UPDATE, which would be MERGE on SQL Server? I just didn't quite know what to write to get that MERGE to work.
INSERT [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]
The error message I get with this is:
Violation of PRIMARY KEY constraint 'PK__Actor'. Cannot insert duplicate key in object 'dbo.Actor'.
What the error message says is simply "you can't insert the same value again into a column that has a PRIMARY KEY constraint". If you already have all the information in your backup table, what you should do is TRUNCATE TABLE, which removes all rows from a table while the table structure and its columns, constraints, indexes, and so on remain.
After that step you should follow this answer. Alternatively, I recommend a tool called Kettle, which is open source and easy to use for these kinds of data movements. That will save you a lot of work.
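A minimal sketch of that truncate-and-reload approach (assuming dbo.Actor has no identity column and no foreign keys referencing it):
TRUNCATE TABLE [F0001].[dbo].[Actor];

INSERT INTO [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor];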
Here are the things which can be the reason:
You have multiple rows in [FDummy].[dbo].[Actor] with the same value in the column that is going to be inserted into the primary key column of [F0001].[dbo].[Actor].
There are rows in [FDummy].[dbo].[Actor] whose value x in that column already exists in the primary key column of [F0001].[dbo].[Actor].
-- to check first point. if it returns row then you have some problem
SELECT ColumnGoingToBeMappedWithPK,
Count(*)
FROM [FDummy].[dbo].[Actor]
GROUP BY ColumnGoingToBeMappedWithPK
HAVING Count(*) > 1
-- to check second point. if count is greater than 0 then you have some problem
SELECT Count(*)
FROM [FDummy].[dbo].[Actor] a
JOIN [F0001].[dbo].[Actor] b
ON a.ColumnGoingToBeMappedWithPK = b.PrimaryKeyColumn
The MERGE statement will possibly be the best option for you here, unless the primary key of the Actor table is reused after a previous record is deleted (i.e., it is not auto-incremented), so that, say, the record with id 13 in F0001.dbo.Actor is not the same "actor" information as in FDummy.dbo.Actor.
To use the statement with your code, it will look something like this:
begin transaction
merge [F0001].[dbo].[Actor] as t -- the destination
using [FDummy].[dbo].[Actor] as s -- the source
on (t.[PRIMARYKEY] = s.[PRIMARYKEY]) -- update with your primary keys
when matched then
update set t.columnname1 = s.columnname1,
t.columnname2 = s.columnname2,
t.columnname3 = s.columnname3
-- repeat for all your columns that you want to update
output $action,
Inserted.*,
Deleted.*;
rollback transaction -- change to commit after testing
Further reading can be done at the sources below:
MERGE (Transact-SQL)
Inserting, Updating, and Deleting Data by Using MERGE
Using MERGE in SQL Server to insert, update and delete at the same time

Using SQL*Plus with Oracle to DELETE data

I have over 60M rows to delete from 2 separate tables (38M and 19M). I have never deleted this many rows before and I'm aware that it'll cause things like rollback errors etc. and probably won't complete.
What's the best way to delete this amount of rows?
You can delete some number of rows at a time and do it repeatedly.
delete from *your_table*
where *conditions*
and rownum <= 1000000
The above SQL statement will remove 1M rows at a time, and you can execute it 38 times, either by hand or using a PL/SQL block.
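For example, a minimal PL/SQL sketch of that batched delete (the table name and the date condition are hypothetical placeholders):
BEGIN
  LOOP
    DELETE FROM your_table                      -- placeholder table name
     WHERE created_date < DATE '2020-01-01'     -- placeholder condition
       AND ROWNUM <= 1000000;                   -- one batch of at most 1M rows
    EXIT WHEN SQL%ROWCOUNT = 0;                 -- stop when nothing is left to delete
    COMMIT;                                     -- commit between batches to limit undo usage
  END LOOP;
  COMMIT;
END;
/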
The other way I can think of is: if a large portion of the data should be removed, you can negate the condition and insert the data that should remain into a new table; after inserting, drop the original table and rename the new one.
create table *new_table* as
select * from *your_table*
where *conditions_of_remaining_data*
After the above, you can drop the old table and rename the new table:
drop table *your_table*;
alter table *new_table* rename to *your_table*;

Oracle delete query taking long time to delete the records from the table

Query:
Delete from test_table; (I am using this delete command in a procedure that runs daily)
This table contains 2 columns, scenario_id and item_id, which together form a composite primary key. Each scenario_id has around 2 million item_id rows. How can I delete the contents of this table quickly?
The fastest way to delete the table might be the DROP TABLE test_table command, after which you recreate the table using a CREATE TABLE ... command. DROP TABLE ... will drop the table immediately. Well, actually it will move the table into the recycle bin; you should PURGE RECYCLEBIN if you want to completely remove it.
The other way to delete the data in the table is to use TRUNCATE TABLE .... This is a little slower than DROP TABLE ..., but much faster than DELETE FROM .... Since there is no way to roll back the data after you truncate the table, you should be careful when you use TRUNCATE TABLE.
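A minimal sketch of both options (the NUMBER column types are an assumption based on the question):
TRUNCATE TABLE test_table;      -- keeps the table definition; cannot be rolled back

-- or drop and recreate, bypassing the recycle bin:
DROP TABLE test_table PURGE;
CREATE TABLE test_table (
  scenario_id NUMBER,
  item_id     NUMBER,
  CONSTRAINT test_table_pk PRIMARY KEY (scenario_id, item_id)
);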
Check if there are any foreign keys pointing at this table. If there are, there MUST be indexes on the referencing columns; otherwise maintenance of the referential integrity will take ages.
(Transactional) DELETE on Oracle is slow by nature, simply because the deleted data must be copied into the UNDO tablespace, the REDO logs, and also the archived REDO logs. This is the way Oracle protects your data.
If you are deleting more than 50% of the data, it is much faster to simply create a new table as select * from old_table where ..., then drop the old one and rename the new table to the old name.
You can achieve a similar goal by exchanging partitions in a partitioned table.
Or you can simply drop the table's partitions if your table is partitioned wisely, as sketched below.
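A minimal sketch, assuming test_table were list-partitioned by scenario_id and had a partition named p_scenario_1 (both hypothetical):
ALTER TABLE test_table DROP PARTITION p_scenario_1;     -- removes that scenario's rows instantly
-- or keep the partition but empty it:
ALTER TABLE test_table TRUNCATE PARTITION p_scenario_1;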

trigger and transactions on temporary tables

Can we create triggers and transactions on temporary tables?
When a user inserts data, if it is committed, the trigger would fire and that data would go from the temporary table into the actual tables.
And when the SQL Server service stops, or the server is shut down, the temporary tables would be deleted automatically.
Or shall I use another actual table, into which the data would first be inserted; then, if it is committed, the trigger would fire and the data would be sent to the main tables, and then I would execute a truncate query to remove the data from the interface table, hence removing the duplicate data?
I don't think you understand triggers - a trigger fires as part of the statement it is attached to, rather than when the transaction commits. Two scripts:
Script 1:
create table T1 (
ID int not null,
Val1 varchar(10) not null
)
go
create table T2 (
ID int not null,
Val2 varchar(10) not null
)
go
create trigger T_T1_I
on T1
after insert
as
insert into T2 (ID,Val2) select ID,Val1 from inserted
go
begin transaction
insert into T1 (ID,Val1)
select 10,'abc'
go
RAISERROR('Run script 2 now',10,1) WITH NOWAIT
WAITFOR DELAY '00:01:00'
go
commit
Script 2:
select * from T2 with (nolock)
Open two connections to the same DB, put one script in each connection. Run script 1. When it displays the message "Run script 2 now", switch to the other connection. You'll see that you're able to select uncommitted data from T2, even though that data is inserted by the trigger. (This also implies that appropriate locks are being held on T2 by script 1 until the trigger commits).
Since this implies that the equivalent of what you're asking for is to just insert into the base table and hold your transaction open, you can do that.
If you want to hide the actual shape of the table from users, create a view and write triggers on that to update the base tables. As stated above though, as soon as you've performed a DML operation against the view, the triggers will have fired, and you'll be holding locks against the base table. Depending on the transaction isolation level of other connections, they may see your changes, or be blocked until the transaction commits.
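For example, a minimal sketch of that view approach, reusing the T1 table from Script 1 (the view and trigger names here are made up):
create view V_T1
as
select ID, Val1 from T1
go
create trigger T_V_T1_I
on V_T1
instead of insert
as
insert into T1 (ID, Val1) select ID, Val1 from inserted
go
Users insert into V_T1; the INSTEAD OF trigger performs the actual write against the base table, so the table's real shape stays hidden from them.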
Triggers cannot be created on temp tables. But it is an unusual requirement to do so.
Temp tables can be part of a transaction, BUT table variables cannot.
As @Damien points out, triggers do NOT fire when a transaction is committed; rather, they fire when an action on the table (INSERT, UPDATE, DELETE) with a corresponding trigger occurs.
Or create a view that you can insert data into. It will write back to the table and then the triggers will fire.